Introducing Linch-Pin: Hybrid cloud provisioning using Ansible

Background

Over the past six months or so, I've been working at Red Hat, mostly on Continuous Integration projects and the like. Recently, I joined a new team, called Continuous Infrastructure, and started working on automating use cases around Project Atomic and OpenShift.

As part of that project, we have an internal tool called ci-factory. Among its components is a provisioner that works with OpenStack, Beaker, and the like. However, it doesn't support a broader set of clouds and infrastructure, such as Amazon AWS, Google Compute Engine, and Libvirt. ci-factory also provided some tooling for Ansible dynamic inventories, but the configurations that generated them were not very flexible, and sometimes outright incorrect.

Enter Provisioner 2.0 (Linch-Pin)

Beginning in June of this year, our team started creating a new tool. Led by developer extraordinaire Samvaran Rallabandi (we call him SK), we now have Linch-Pin.

The concept of Linch-Pin was to retool the basic ci-factory provisioner into something much more flexible. Whereas ci-factory was written in a mix of Python and bash scripts, Provisioner 2.0 is written completely in Ansible. Because Ansible is excellent at both configuration management and orchestration, it can be used to both provision and configure systems. Ansible handles complex cluster configurations (e.g. openshift-ansible) as well as much simpler tasks, like adding users to a system idempotently.
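As a trivial illustration of that idempotency (a generic Ansible play, not Linch-Pin code; the user and group names are made up), the following can be run repeatedly and only changes the system on the first pass:

---
# add-user.yml -- generic Ansible example, unrelated to Linch-Pin itself
- hosts: all
  become: true
  tasks:
    - name: Ensure the 'deploy' user exists (no change on repeat runs)
      user:
        name: deploy
        state: present
        groups: wheel
        append: yes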

Additionally, Provisioner 2.0 was designed to leverage existing Ansible cloud modules, reducing the amount of code that needs to be written. So far, this has proven very valuable and has made development time much shorter overall. There are, however, certain modules, Libvirt for example, that seem poorly implemented in Ansible; for those, an updated module will need to be written.
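For instance, Ansible already ships an os_server module for OpenStack, so booting an instance takes only a handful of lines. The play below is a generic sketch of that module rather than Linch-Pin's actual role code; the key pair name is a placeholder, and authentication is expected to come from the environment (more on credentials later):

---
# boot-one-node.yml -- sketch using the stock os_server module, not Linch-Pin code
- hosts: localhost
  connection: local
  tasks:
    - name: Boot a single OpenStack instance
      os_server:
        state: present
        name: demo_inst_0
        image: rhel-7.3-server-x86_64-latest
        flavor: m1.small
        network: atomic-e2e-jenkins
        key_name: my-keypair        # placeholder key pair name
        wait: yes
      register: os_server_out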

Lastly, Provisioner 2.0 should exist upstream. Linch-Pin has been an upstream project since the first working code was created. This was done to encourage contribution from inside and outside of Red Hat. We believe that many projects will be able to take advantage of Linch-Pin and contribute back to the project as a whole. Many other upstream and downstream projects have expressed interest in Linch-Pin just from a basic demonstration.

Linch-Pin Architecture

Linch-Pin has some basic components that make it work. First we'll cover the file structure, then dive into the core bits:

provision/
├── group_vars/
├── hosts
├── roles/
└── site.yml

At one point in time, Linch-Pin was going to have both provision and configure playbooks. The provision components became Linch-Pin, while the configure components became external repositories of useful components which leverage Linch-Pin in some way. A couple of examples are the CentOS PaaS SIG's paas-sig-ci project and the cinch project from Red Hat Quality Engineering.

Both group_vars and hosts paths are related to inventory.
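For readers less familiar with Ansible, here is a hedged example of how those two pieces typically fit together (the group, hosts, and variable shown are made up, not Linch-Pin's shipped contents):

# hosts -- a minimal static Ansible inventory
[openstack_nodes]
node1.example.com
node2.example.com

# group_vars/openstack_nodes.yml -- variables applied to every host in that group
---
ansible_user: cloud-user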

The roles path is the meat of Linch-Pin. All of the power that makes hybrid cloud provisioning possible lives here. Ansible uses playbooks to drive these roles; site.yml is that playbook, and it starts the execution.
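To make that relationship concrete, here is a minimal sketch of what a playbook like site.yml might look like; it is illustrative only, not the real Linch-Pin playbook, and the role name is assumed:

# site.yml -- illustrative sketch, not the actual Linch-Pin playbook
---
- name: Provision resources described by a topology
  hosts: localhost
  connection: local
  roles:
    - common        # role name assumed for illustration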

Other paths exist:

├── outputs/
├── filter_plugins/
├── InventoryFilters/
└── library/

We will cover these components later on in this post, or later in the series.

Topologies

When describing complex cloud infrastructure, a topology definition helps. The topology definition is written in YAML. Before Linch-Pin provisions anything, the topology must be validated against a predefined schema. Schemas can be created to change the way the topology works, if desired; however, a default schema is already defined for simplicity.

After validation, the topology is then used to provision resources. A topology is broken into its resource components, and each is delegated to the appropriate resource provisioner. This is generally done asynchronously, meaning that nodes on different cloud providers can be provisioned at the same time using appropriate credentials. Assuming a successful provisioning event per provider, the resource provisioner(s) will return appropriate response data to the topology provisioner.

As mentioned above, credentials may be required for some cloud providers. This is handled by the credentials manager. The credentials details are stored in the topology definition as a reference to the vault/location of said credentials. The resource provisioner uses these to authenticate to the appropriate cloud provider as necessary.

A very simple example topology definition is provided here.

# openstack-3node-cluster.yml
---
topology_name: "openstack_3node_cluster"
site: "ci-osp"
resource_groups:
  -
    resource_group_name: "os"
    res_group_type: "openstack"
    res_defs:
      -
        res_name: "3node_inst"
        flavor: "m1.small"
        res_type: "os_server"
        image: "rhel-7.3-server-x86_64-latest"
        count: 3
        networks:
          - "atomic-e2e-jenkins"
    assoc_creds: "openstack_creds"

This topology describes a single set of instances, which will run on the local ci-osp site. There will be 3 nodes, each given a network device on the 'atomic-e2e-jenkins' network, and each instance will use the 'm1.small' flavor. Keep in mind that the OpenStack environment must actually be configured to accept the above options; that is out of scope for this post, however.

Topology terminology explained:

  • topology_name: a reference point which Linch-Pin uses to track the nodes, networks, and other resources throughout provisioning and teardown
  • resource_groups: a set of resource definitions. One or many groups of nodes, storage, etc.
  • res_group_type: a predefined set of group types (eg. openstack, aws, gcloud, libvirt, etc.)
  • res_defs: A definition of a resource with its component attributes (flavor, image, count, region, etc.)
  • res_type: A predefined cloud resource name (eg. os_server, gcloud_gce, aws_ec2, os_heat)

As mentioned above, the topology describes the resources needed. When Linch-Pin is invoked, this file is read and the described systems are created. More information about topologies and their structure can be found in the Linch-Pin documentation, and more complex examples live in the Linch-Pin github repository.

Provisioning

To provision the openstack-3node-cluster.yml topology, Linch-Pin currently uses the ansible-playbook command. There are many options available that can be passed as --extra-vars, but here we only show two: state and topology. Simply calling the provision playbook will provision resources:

$ ansible-playbook /path/to/linch-pin/provision/site.yml \
  --extra-vars "topology='/path/to/openstack-3node-cluster.yml' state=present"

Note

There are other --extra-vars options documented here.

The diagram shows this process: the topology definition is provided to the provisioner, which then provisions the requested resources. Once provisioned, all cloud data is gathered and stored.


A great many things happen when this playbook is run. Let's have a look at the process in a bit more detail.

Determining Defaults

Linch-Pin first discovers the needed configuration, either from the --extra-vars shown above or from linchpin_config.yml. This determines the schema, the topology, and some paths for outputs and the like (covered later in this post).

Schema Check

Once everything is configured properly, a schema check is performed. This process is used to ensure the topology file matches up with the defined constraints. For example, there are specific resource types (res_type), like os_server, aws_ec2, and gcloud_gce. Others, like libvirt, beaker, and docker are not yet implemented. The schema check ensures that further processing only occurs for resource types that are currently supported.
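Since the defaults step above lists the schema as one of the configurable items, a custom schema could presumably be supplied the same way as the topology. Treat the schema variable name below as an assumption rather than documented syntax, and check the Linch-Pin documentation for the supported --extra-vars:

$ ansible-playbook /path/to/linch-pin/provision/site.yml \
  --extra-vars "schema='/path/to/custom_schema.json' topology='/path/to/openstack-3node-cluster.yml' state=present"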

Provisioning Nodes

Once the schema check passes, the topology is provisioned with the cloud provider(s). In the example there is only one, openstack, but several clouds could be provisioned at once. The provisioner plugin is called for each cloud provider, and credentials are passed along as needed. If all is successful, nodes will be provisioned according to the topology definition.

Note

Additional example topologies available at the Linch-Pin github.

Determining Credentials

It may not have been clear above, but when provisioning nodes from certain cloud providers, credentials are required. In the topology definition file, there is one line that indicates credentials:

assoc_creds: "openstack_creds"

However, this doesn't really tell us anything on its own. It turns out each provisioner plugin has an Ansible role, and each role contains some variables for determining how to connect to the cloud provider. For instance, with openstack, roles/openstack/vars/openstack_creds.yml corresponds to the assoc_creds value in our topology definition:

--- # openstack credentials
endpoint: "{{ lookup('env','OS_AUTH_URL') }}"
project: "{{ lookup('env','OS_TENANT_NAME') }}"
username: "{{ lookup('env','OS_USERNAME') }}"
password: "{{ lookup('env','OS_PASSWORD') }}"

The OpenStack credentials currently come from environment variables. Simply export these variables in the shell and they will be picked up properly. From that point, OpenStack will grant access according to its policies. Environment variables are used this way, where appropriate, for all cloud providers.
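A typical way to provide them is to export the variables in the shell (or source an OpenStack openrc/keystonerc file) before running the playbook; the values below are placeholders:

$ export OS_AUTH_URL=https://keystone.example.com:5000/v2.0
$ export OS_TENANT_NAME=my-project
$ export OS_USERNAME=my-user
$ export OS_PASSWORD=my-password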

In the future, there will be a way to manage credentials a bit more succinctly. See https://github.com/CentOS-PaaS-SIG/linch-pin/issues/76 for updates.

Outputs and Teardown

Teardown is fairly straightforward. The command is similar to provisioning. The main difference is state=absent, telling the provisioner to perform a teardown instead of provisioning a new resource:

$ ansible-playbook /path/to/linch-pin/provision/site.yml \
  --extra-vars "topology='/path/to/openstack-3node-cluster.yml' state=absent"

This process requires knowledge of the outputs, as mentioned previously. Outputs are tracked by a few variables in the linchpin_config.yml. Specifically, the outputfolder_path provides the location of the output, along with the filename, which is based upon the topology_name.

Consider the following: outputfolder_path=/tmp/outputs and topology_name=openstack_3node_cluster. From this, the output of provisioning would reside at /tmp/outputs/openstack_3node_cluster.yml.
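As a rough sketch of the relevant settings (only outputfolder_path is referenced in this post; everything else in linchpin_config.yml is omitted here and may differ from what actually ships):

# linchpin_config.yml -- illustrative sketch only
---
outputfolder_path: /tmp/outputs
# other settings (schema, inventory paths, etc.) are omitted here;
# see the file shipped with Linch-Pin for the full set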

The contents of this output file may look something like this:

aws_ec2_res: []
duffy_res: []
gcloud_gce_res: []
os_server_res:
-   _ansible_item_result: true
    _ansible_no_log: false
    changed: true
    id: f5832fcc-4e1b-442d-8be8-ba5e3783e7f2
    instance:
    - <endpoint>
    - <username>
    - <password>
    - <username>
    - present
    - rhel-6.5_jeos
    - <ssh-key-reference>
    - m1.small
    - <network>
    - testgroup2
    - ano_inst
    - 0
    invocation:
        module_args:
            api_timeout: 99999
            auth:
                auth_url: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
                password: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
                project_name: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
                username: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
            auth_type: null
            auto_ip: true
            availability_zone: null
            boot_from_volume: false
            boot_volume: null
            cacert: null
            cert: null
            cloud: null
            config_drive: false
            endpoint_type: public
            flavor: m1.small
            flavor_include: null
            flavor_ram: null
            floating_ip_pools: null
            floating_ips: null
            image: rhel-6.5_jeos
            image_exclude: (deprecated)
            key: null
            key_name: <ssh-key-reference>
            meta: null
            name: testgroup2_ano_inst_0
            network: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
            nics: []
            region_name: null
            scheduler_hints: null
            security_groups:
            - default
            state: present
            terminate_volume: false
            timeout: 180
            userdata: null
            verify: true
            volume_size: false
            volumes: []
            wait: true
        module_name: os_server
    openstack:
        HUMAN_ID: true
        NAME_ATTR: name
        OS-DCF:diskConfig: MANUAL
        OS-EXT-AZ:availability_zone: nova
        OS-EXT-STS:power_state: 1
        OS-EXT-STS:task_state: null
        OS-EXT-STS:vm_state: active
        OS-SRV-USG:launched_at: '2016-07-26T16:56:40.000000'
        OS-SRV-USG:terminated_at: null
        accessIPv4: 10.8.183.233
        accessIPv6: ''
        addresses:
            <network>:
            -   OS-EXT-IPS-MAC:mac_addr: fa:16:3e:48:cf:fb
                OS-EXT-IPS:type: fixed
                addr: 172.16.100.26
                version: 4
            -   OS-EXT-IPS-MAC:mac_addr: fa:16:3e:48:cf:fb
                OS-EXT-IPS:type: floating
                addr: 10.8.183.233
                version: 4
        adminPass: <redacted>
        az: nova
        cloud: defaults
        config_drive: ''
        created: '2016-07-26T16:56:10Z'
        flavor:
            id: '2'
            name: m1.small
        hostId: 6363c13fc31066a15f2ff4c81bf054290d4d56d4d08e568570354580
        human_id: testgroup2_ano_inst_0
        id: f5832fcc-4e1b-442d-8be8-ba5e3783e7f2
        image:
            id: 3bcfd17c-6bf0-4134-ae7f-80bded8b46fd
            name: rhel-6.5_jeos
        interface_ip: 10.8.183.233
        key_name: <ssh-key-reference>
        metadata: {}
        name: testgroup2_ano_inst_0
        networks:
            <network>:
            - 172.16.100.26
            - 10.8.183.233
        os-extended-volumes:volumes_attached: []
        private_v4: 172.16.100.26
        progress: 0
        public_v4: 10.8.183.233
        public_v6: ''
        region: ''
        security_groups:
        -   description: Default security group
            id: df1a797b-009c-4685-a7c9-43863c36d653
            name: default
            security_group_rules:
            -   direction: ingress
                ethertype: IPv4
                id: ade9fcb9-14c1-4975-a04d-6007f80005c1
                port_range_max: null
                port_range_min: null
                protocol: null
                remote_ip_prefix: null
                security_group_id: df1a797b-009c-4685-a7c9-43863c36d653
            -   direction: ingress
                ethertype: IPv4
                id: d03e4bae-24b6-415a-a30c-ee0d060f566f
                port_range_max: null
                port_range_min: null
                protocol: null
                remote_ip_prefix: null
                security_group_id: df1a797b-009c-4685-a7c9-43863c36d653
        status: ACTIVE
        tenant_id: f1dda47890754241a3e111f9b7394707
        updated: '2016-07-26T16:56:40Z'
        user_id: 9c770dbddda444799e627004fee26e0a
        volumes: []
    server:
        HUMAN_ID: true
        NAME_ATTR: name
        OS-DCF:diskConfig: MANUAL
        OS-EXT-AZ:availability_zone: nova
        OS-EXT-STS:power_state: 1
        OS-EXT-STS:task_state: null
        OS-EXT-STS:vm_state: active
        OS-SRV-USG:launched_at: '2016-07-26T16:56:40.000000'
        OS-SRV-USG:terminated_at: null
        accessIPv4: 10.8.183.233
        accessIPv6: ''
        addresses:
            <network>:
            -   OS-EXT-IPS-MAC:mac_addr: fa:16:3e:48:cf:fb
                OS-EXT-IPS:type: fixed
                addr: 172.16.100.26
                version: 4
            -   OS-EXT-IPS-MAC:mac_addr: fa:16:3e:48:cf:fb
                OS-EXT-IPS:type: floating
                addr: 10.8.183.233
                version: 4
        adminPass: <redacted>
        az: nova
        cloud: defaults
        config_drive: ''
        created: '2016-07-26T16:56:10Z'
        flavor:
            id: '2'
            name: m1.small
        hostId: 6363c13fc31066a15f2ff4c81bf054290d4d56d4d08e568570354580
        human_id: testgroup2_ano_inst_0
        id: f5832fcc-4e1b-442d-8be8-ba5e3783e7f2
        image:
            id: 3bcfd17c-6bf0-4134-ae7f-80bded8b46fd
            name: rhel-6.5_jeos
        interface_ip: 10.8.183.233
        key_name: <ssh-key-reference>
        metadata: {}
        name: testgroup2_ano_inst_0
        networks:
            <network>:
            - 172.16.100.26
            - 10.8.183.233
        os-extended-volumes:volumes_attached: []
        private_v4: 172.16.100.26
        progress: 0
        public_v4: 10.8.183.233
        public_v6: ''
        region: ''
        security_groups:
        -   description: Default security group
            id: df1a797b-009c-4685-a7c9-43863c36d653
            name: default
            security_group_rules:
            -   direction: ingress
                ethertype: IPv4
                id: ade9fcb9-14c1-4975-a04d-6007f80005c1
                port_range_max: null
                port_range_min: null
                protocol: null
                remote_ip_prefix: null
                security_group_id: df1a797b-009c-4685-a7c9-43863c36d653
            -   direction: ingress
                ethertype: IPv4
                id: d03e4bae-24b6-415a-a30c-ee0d060f566f
                port_range_max: null
                port_range_min: null
                protocol: null
                remote_ip_prefix: null
                security_group_id: df1a797b-009c-4685-a7c9-43863c36d653
        status: ACTIVE
        tenant_id: f1dda47890754241a3e111f9b7394707
        updated: '2016-07-26T16:56:40Z'
        user_id: 9c770dbddda444799e627004fee26e0a
        volumes: []

Because the linchpin_config.yml contains the path to the output file, it is then parsed and used to teardown the resources. In this case, the single openstack node listed and its networking resources are torn down. If there were more nodes, more data would exist. Similarly, if there were additional clouds, the data would be populated for the appropriate output fields.

Conclusion

Finally! We've made it through the introduction to Linch-Pin. As you can see, running Linch-Pin is pretty easy, but there's a lot to it internally. Now it's time to use the provisioner, so go ahead and try it out.

Cheers,

herlo

Adjusting your git email address per clone

I’ve been working a lot with git over the years. I commonly use my personal email address for projects I post on my github. Additionally, I have a project or two in which I participate where the email address I use is just an alias to my personal one, but I want git to use that alias there instead of my personal address.

Today, I came across an article that makes changing this value much, much simpler.  Try this:

$ git config --global alias.personal 'config user.email "herlo@personal"'
$ git personal
$ git config user.email
herlo@personal
$ git config --global alias.work 'config user.email "herlo@work"'
$ git work
$ git config user.email
herlo@work

As you can see, it’s now trivial to switch between certain email addresses. Man, git aliases are very nice!

Cheers,

herlo

Setting up a FreeIPA Replica

In my last post, I mentioned that I would show how to configure a client with sudo access. Well, I lied! Hopefully that will be the next post. Instead, I'm going to cover how to set up a FreeIPA replica.

Replicating FreeIPA services is useful in many ways, redundancy for the DNS, LDAP, and Kerberos services being one. Additionally, it makes sense to have replicas in different VLANs or network zones; opening up only the ports needed for replication between the FreeIPA servers, instead of to all hosts, keeps things more secure.
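As a hedged sketch of what that looks like on a CentOS 7 host running firewalld (the port list comes from the connection check and install output shown later in these posts; in a real deployment, zones or rich rules would restrict these to the peer IPA servers rather than opening them broadly):

# firewall-cmd --permanent --add-port={80/tcp,443/tcp,389/tcp,636/tcp,88/tcp,464/tcp,53/tcp,7389/tcp}
# firewall-cmd --permanent --add-port={88/udp,464/udp,53/udp,123/udp}
# firewall-cmd --reload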

All Replicas are Masters

Save for optional services like the Certificate Authority or DNS, all of the hosts which run FreeIPA are masters. They replicate using agreements, meaning that when a new replica is set up, it will replicate to/from only the FreeIPA servers it has an agreement with. Further reading is available here.
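Later, once more than one server exists, the agreements can be listed from any master with ipa-replica-manage (depending on the version, it may prompt for the Directory Manager password):

# kinit admin
# ipa-replica-manage list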

Preparing for the FreeIPA Replica

Preparing the replica requires setting up the agreement documents on one of the IPA master servers.

$ ssh ipa7.example.com
..snip..

$ su -
Password: centos

# ipa-replica-prepare replica.example.com --ip-address 192.168.122.210
Directory Manager (existing master) password: manager72

Preparing replica for replica.example.com from ipa.example.com
Creating SSL certificate for the Directory Server
Creating SSL certificate for the dogtag Directory Server
Creating SSL certificate for the Web Server
Exporting RA certificate
Copying additional files
Finalizing configuration
Packaging replica information into /var/lib/ipa/replica-info-replica.example.com.gpg
Adding DNS records for replica.example.com
Using reverse zone 122.168.192.in-addr.arpa.

Copy the Replica gpg Data to the New Replica

The data is now packaged and needs to be put on the replica. The simplest and most secure process is to scp the file directly to the replica. In some cases, this is not possible, and other methods may need to be employed.

# scp /var/lib/ipa/replica-info-replica.example.com.gpg root@192.168.122.210:
..snip..

Install the Replica

Once the agreements are created and moved to the new replica, it's time to perform the install.

$ ssh root@192.168.122.210
root@192.168.122.210's password:

Ensure the replica can contact the master server. Additionally, make sure the replica's IP address resolves locally. Modifying /etc/hosts is the simplest solution.

# cat /etc/hosts
..snip..
192.168.122.210 replica.example.com
192.168.122.200 ipa7.example.com
..snip..

Set the hostname properly. If it isn't, correct it by modifying /etc/sysconfig/network.

# hostname
replica.example.com
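If the output doesn't match the replica's fully qualified name, one way to correct it on a CentOS 6-style system is shown below (on CentOS 7, hostnamectl set-hostname replica.example.com achieves the same):

# sed -i 's/^HOSTNAME=.*/HOSTNAME=replica.example.com/' /etc/sysconfig/network
# hostname replica.example.com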

Once all of the above is correct, it's time to perform the installation itself. Since a replica is just a clone of another master, installing the same packages makes sense.

# yum install ipa-server bind-dyndb-ldap -y
..snip..

Next, perform the install. Consider options like --setup-ca and --setup-dns as optional, though very useful if those processes are going to be needed on the new replica.

# ipa-replica-install --setup-dns /root/replica-info-replica.example.com.gpg \
--no-forwarders --ip-address=192.168.122.210
Directory Manager (existing master) password: manager72

Run connection check to master
Check connection from replica to remote master 'ipa7.example.com':
 Directory Service: Unsecure port (389): OK
 Directory Service: Secure port (636): OK
 Kerberos KDC: TCP (88): OK
 Kerberos Kpasswd: TCP (464): OK
 HTTP Server: Unsecure port (80): OK
 HTTP Server: Secure port (443): OK
 PKI-CA: Directory Service port (7389): OK

The following list of ports use UDP protocol and would need to be
checked manually:
 Kerberos KDC: UDP (88): SKIPPED
 Kerberos Kpasswd: UDP (464): SKIPPED

Connection from replica to master is OK.
Start listening on required ports for remote master check
Get credentials to log in to remote master
admin@EXAMPLE.COM password: ipaadmin72

Configuring NTP daemon (ntpd)
..snip..
Configuring directory server for the CA (pkids): Estimated time 30 seconds
..snip..
Done configuring directory server for the CA (pkids).
Configuring certificate server (pki-cad): Estimated time 3 minutes 30 seconds
..snip..
Done configuring certificate server (pki-cad).
Restarting the directory and certificate servers
Configuring directory server (dirsrv): Estimated time 1 minute
..snip..
Starting replication, please wait until this has completed.
Update in progress
..snip..
Update succeeded
..snip..
Done configuring directory server (dirsrv).
Configuring Kerberos KDC (krb5kdc): Estimated time 30 seconds
..snip..
Done configuring Kerberos KDC (krb5kdc).
Configuring kadmin
..snip..
Done configuring kadmin.
Configuring ipa_memcached
..snip..
Done configuring ipa_memcached.
Configuring the web interface (httpd): Estimated time 1 minute
..snip..
Done configuring the web interface (httpd).
Applying LDAP updates
Restarting the directory server
Restarting the KDC
Using reverse zone 122.168.192.in-addr.arpa.
Configuring DNS (named)
..snip..
Done configuring DNS (named).

Global DNS configuration in LDAP server is empty
You can use 'dnsconfig-mod' command to set global DNS options that
would override settings in local named.conf files

Restarting the web server

At this point, the replica should start functioning. Since this is a FreeIPA server, make sure logins will work by enabling home directory creation and sssd. A reboot is a very good idea.

# authconfig --enablemkhomedir --update
..snip..
# chkconfig sssd on
..snip..
# init 6

Once rebooted, login with an existing user.

$ ssh replica.example.com
Warning: Permanently added 'replica.example.com' (ECDSA) to the 
list of known hosts.
Creating home directory for herlo.
Last login: Fri Oct 22 12:27:44 2014 from 192.168.122.1
[herlo@replica ~]$ id
uid=151600001(herlo) gid=151600001(herlo) groups=151600001(herlo)...

Assuming all of that works, replication should be functioning properly.

If all goes well, I'll show how to install a client and enable sudo access in the next post.

Cheers,

herlo

Introducing FreeIPA – Identity Management (IdM) Done Right!

It has been a while since I've posted anything on this blog, and it is high time I get something useful up here. Lucky for you, dear reader, I have a series of posts to share, each of them about a new technical passion of mine: Identity Management.

Many of you probably know of Active Directory, the all-encompassing Identity Management solution from Microsoft. It's the most popular solution out there, and it's got a good hold on the market, for sure. But with almost all things Microsoft comes closed source, GUI-only management, and resentment among many.

I'm not saying that Active Directory does not do its job as an IdM solution; purely in terms of Identity Management, it is a pretty good system overall. It's a fine solution, that is, if you are willing to run something proprietary, pay a ton of money yearly, and not follow standards.

The thing is, closed source systems historically have more issues and longer delays for fixes. Until recently, however, there hasn't been a reasonable open source alternative for IdM. Enter FreeIPA, Identity Management done right.

What is FreeIPA?

FreeIPA is a solution for managing users, groups, hosts, services, and much, much more. It uses open source solutions with some Python glue to make things work. Identity Management made easy for the Linux administrator.

Inside FreeIPA are some common pieces: the Apache Web Server, BIND, 389DS, and MIT Kerberos. Additionally, Dogtag is used for certificate management, and sssd handles client-side configuration. Put that all together with some Python glue, and you have FreeIPA.

As you can see from the diagram, there is also an API which provides programmatic management via web and command line interfaces. Many plugins exist as well; for example, one sets up trust agreements with Active Directory, and other functionality exists for managing Samba shares via users and groups in FreeIPA.
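As a small taste of that command line interface, on an already-installed server the ipa command drives the same API as the web UI; the user shown here is just an example:

$ kinit admin
$ ipa user-add jdoe --first=Jane --last=Doe
$ ipa user-show jdoe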

It's probably a good time to set up a FreeIPA server and show its power.

Installation

Installing FreeIPA on a Linux system is simple; however, there are a few things needed first. This installation is being performed on a fully updated CentOS 7.0 system. An entry in /etc/hosts matching the server IP and hostname is useful. Additionally, make sure to set the hostname properly.

# echo 192.168.122.200 ipa7.example.com ipa7 >> /etc/hosts
# echo ipa7.example.com > /etc/hostname

We’ll be installing the server at this time, but there is a client install, which we’ll show in later posts. It’s recommended to use RHEL/CentOS >= 6.x or Fedora >= 14. Simply perform a yum install.

(rhel/centos) # yum install ipa-server
(fedora) # yum install freeipa-server
..snip..

Once the {free,}ipa-server package is installed, run the installer itself. Since FreeIPA can manage a DNS server, a decision must be made. Here, we are going to choose to manage our internal DNS with FreeIPA, which stores the records in LDAP via 389DS.

# yum install bind-dyndb-ldap
..snip..
# ipa-server-install --setup-dns
The log file for this installation can be found in /var/log/ipaserver-install.log
==============================================================================
This program will set up the IPA Server.

This includes:
 * Configure a stand-alone CA (dogtag) for certificate management
 * Configure the Network Time Daemon (ntpd)
 * Create and configure an instance of Directory Server
 * Create and configure a Kerberos Key Distribution Center (KDC)
 * Configure Apache (httpd)
 * Configure DNS (bind)

To accept the default shown in brackets, press the Enter key.

WARNING: conflicting time&date synchronization service 'chronyd' will be disabled
in favor of ntpd

Existing BIND configuration detected, overwrite? [no]: yes

The installer explains the process and the services which will be installed. Because we're setting up DNS, an existing skeleton BIND configuration is detected; it's safe to overwrite it and move forward.

Next, define the server hostname, and the domain name (for DNS).

Enter the fully qualified domain name of the computer
on which you're setting up server software. Using the form
<hostname>.<domainname>
Example: master.example.com.


Server host name [ipa7.example.com]: ipa7.example.com

Warning: skipping DNS resolution of host ipa7.example.com
The domain name has been determined based on the host name.

Please confirm the domain name [example.com]: example.com

The next section covers the Kerberos realm. This may seem confusing, but Kerberos is one of the big powerhouses behind FreeIPA, and it makes registering client systems very simple. Kerberos realm names are always in upper case and usually mirror the domain name.

The kerberos protocol requires a Realm name to be defined.
This is typically the domain name converted to uppercase.

Please provide a realm name [EXAMPLE.COM]: EXAMPLE.COM

Next, configure the passwords for the Directory Manager (for ldap administration) and the IPA admin user.

Certain directory server operations require an administrative user.
This user is referred to as the Directory Manager and has full access
to the Directory for system management tasks and will be added to the
instance of directory server created for IPA.
The password must be at least 8 characters long.

Directory Manager password: manager72
Password (confirm): manager72

The IPA server requires an administrative user, named 'admin'.
This user is a regular system account used for IPA server administration.

IPA admin password: ipaadmin72
Password (confirm): ipaadmin72

Finally, the installer follows up with a request for more DNS info.

Do you want to configure DNS forwarders? [yes]: yes
Enter the IP address of DNS forwarder to use, or press Enter to finish.
Enter IP address for a DNS forwarder: 8.8.8.8
DNS forwarder 8.8.8.8 added
Enter IP address for a DNS forwarder: 8.8.4.4
DNS forwarder 8.8.4.4 added
Enter IP address for a DNS forwarder: <enter>
Do you want to configure the reverse zone? [yes]: yes
Please specify the reverse zone name [122.168.192.in-addr.arpa.]: 122.168.192.in-addr.arpa
Using reverse zone 122.168.192.in-addr.arpa.

Now that all of the questions have been asked and answered, it's time to let the installer do its thing. A verification step prints out all of the values entered; make sure to review them carefully.

The IPA Master Server will be configured with:
Hostname: ipa7.example.com
IP address: 192.168.122.200
Domain name: example.com
Realm name: EXAMPLE.COM

BIND DNS server will be configured to serve IPA domain with:
Forwarders: 8.8.8.8, 8.8.4.4
Reverse zone: 122.168.192.in-addr.arpa.

Then, when ready, confirm the install and go grab a cup of joe. Installation takes anywhere from 10-30 minutes.

Continue to configure the system with these values? [no]: yes

The following operations may take some minutes to complete.
Please wait until the prompt is returned.

Configuring NTP daemon (ntpd)
 [1/4]: stopping ntpd
 [2/4]: writing configuration
 [3/4]: configuring ntpd to start on boot
 [4/4]: starting ntpd
..snip..

When complete, the installation gives a bit of useful information. Make sure to open the ports within the firewall. This is beyond the scope here, and is left as an exercise for the reader.

Setup complete

Next steps:
    1. You must make sure these network ports are open:
        TCP Ports:
          * 80, 443: HTTP/HTTPS
          * 389, 636: LDAP/LDAPS
          * 88, 464: kerberos
          * 53: bind
        UDP Ports:
          * 88, 464: kerberos
          * 53: bind
          * 123: ntp

    2. You can now obtain a kerberos ticket using the command: 'kinit admin'
       This ticket will allow you to use the IPA tools (e.g., ipa user-add)
       and the web user interface.

Be sure to back up the CA certificate stored in /root/cacert.p12
This file is required to create replicas. The password for this
file is the Directory Manager password

Now that everything is installed, one last simple configuration will help. To ensure users can log in correctly, use authconfig so that home directories are created automatically, followed by a quick reboot.

# authconfig --enablemkhomedir --update
# chkconfig sssd on
Note: Forwarding request to 'systemctl enable sssd.service'.
# init 6

Once the system has rebooted, point the browser to the newly installed FreeIPA service. Logging into FreeIPA can be done in two different ways: from the browser, or via Kerberos. For now, log in via the web browser as the admin user.

 

Once logged in, one useful configuration change is worth making before adding users: change the default shell for all users to /bin/bash. This is done by choosing IPA Server -> Configuration. Once modified, click Update.

ipa-config-shell
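For those who prefer the command line, the same default shell change can likely be made with the ipa CLI; a quick sketch, assuming the config plugin in this FreeIPA version supports the option:

$ kinit admin
$ ipa config-mod --defaultshell=/bin/bash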

Now it’s time to add a user. Choose the Identity tab. Then click Add.

ipa-user-add

Clicking Add and Edit presents the user’s data. This is useful for adding an ssh key.

ipa-user-sshkey

NOTE: Don’t forget to click Update after setting the key.

It should now be possible to ssh into the FreeIPA server as the new user. For that to work, make sure the new FreeIPA server is configured as a resolver on the client; the simplest way is to update the /etc/resolv.conf file.

# cat /etc/resolv.conf
search example.com
nameserver 192.168.122.200
..snip..

# host ipa7.example.com
ipa7.example.com has address 192.168.122.200

Once the FreeIPA server is resolvable, ssh should now work.

[herlo@x220 ~]$ ssh ipa7.example.com
The authenticity of host 'ipa7.example.com (192.168.122.200)' can't be established.
ECDSA key fingerprint is 42:96:09:a7:1b:ac:df:dd:1c:de:73:2b:86:51:19:b1.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ipa7.example.com' (ECDSA) to the list of known hosts.
Creating home directory for herlo.
Last login: Fri Oct 10 02:27:44 2014 from 192.168.122.1
[herlo@ipa7 ~]$ id
uid=151600001(herlo) gid=151600001(herlo) groups=151600001(herlo) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c102

Congratulations! The FreeIPA server is now configured. In the next post, I will cover how to configure a client system and setup centralized sudo.

Cheers,

herlo

Changing the IP range for docker0

Lately, I’ve been tinkering a lot with docker. Mostly, I’ve been doing it for work at The Linux Foundation. But I do have a desire to have docker instances on my local box for distros which I do not run.

While doing some testing for work on my personal laptop, I noticed that the network which docker uses for its bridge, aptly named docker0, overlapped with the network of one of our VPNs.

# ip a s docker0
7: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
 link/ether fe:54:00:18:1a:fd brd ff:ff:ff:ff:ff:ff
 inet 172.17.41.1/16 brd 172.17.255.255 scope global docker0
 ..snip..

# ip a s tun0
139: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1412 ...
    link/none 
    inet 172.17.123.32/24 brd 172.17.224.255 scope global tun0
       valid_lft forever preferred_lft forever

As you can tell, the docker0 network bridge covers all of the tun0 network. Any time I would attempt to ssh into one of the systems inside the VPN, it would time out. I was left wondering why for a few moments.

Luckily, it’s very easy to fix this problem. All that is needed is to define the docker0 bridge with a different range and restart the docker service. Here’s what to do:

First, stop docker:

# service docker stop
Redirecting to /bin/systemctl stop  docker.service

Next, create the network bridge file. You can choose any IP range you like. On Fedora 19, it looks like this:

# cat /etc/sysconfig/network-scripts/ifcfg-docker0 
DEVICE="docker0"
TYPE="Bridge"
ONBOOT="yes"
NM_CONTROLLED="no"
BOOTPROTO="static"
IPADDR=10.100.72.254
NETMASK=255.255.255.0

Restart your network services.  NOTE: service network restart may be needed.

# service NetworkManager restart
Redirecting to /bin/systemctl restart  NetworkManager.service

The docker0 bridge should now be in a range outside the VPN.

# ip a s docker0
7: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
    link/ether fe:54:00:18:1a:fd brd ff:ff:ff:ff:ff:ff
    inet 10.100.72.254/24 brd 10.100.72.255 scope global docker0

Starting new containers with docker should now yield IP addresses in the above range:

# service docker start
Redirecting to /bin/systemctl start  docker.service

# docker run -i -t herlo/fedora:20 /bin/bash
bash-4.2# ip a s eth0
141: eth0: <BROADCAST,UP,LOWER_UP> mtu 1412 ...
    link/ether fa:5f:e3:8d:61:f2 brd ff:ff:ff:ff:ff:ff
    inet 10.100.72.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f85f:e3ff:fe8d:61f2/64 scope link 
       valid_lft forever preferred_lft forever

SUCCESS!

Cheers,

herlo

Fedora Ambassadors Update: February 2013

Welcome New Ambassadors

We are happy to welcome our new sponsored Fedora Ambassador in February:

Mon Mar 11 02:53:08 2013 | Accounts: 624 | Inactive: 81 (12.98%)

Cheers,

herlo

Fedora Ambassadors Update: January 2013

Welcome New Ambassadors

We are happy to welcome our new sponsored Fedora Ambassador in January:

Fedora Ambassador Statistics

Thu Feb 7 12:29:06 2013 | Accounts: 619 | Inactive: 81 | % inactive: 13.09

Cheers,

herlo

Yes, the announcement is behind this month. Moving into a new home will do that to you!

FUDCon Lawrence: Day 1 Session Videos and More

While you might have seen the barrage of posts from Fedora’s FUDCon Lawrence this weekend, you might not know that many sessions were streamed for your joy and enrichment.

Here’s a list with links to the videos (all of them are on youtube):

Enjoy,

herlo

FUDCon Lawrence Hackfest: GPG SmartCard Configuration

As I’ve been working at my new job at the Linux Foundation, we have been implementing quite a bit of two-factor authentication. In fact, back in November, Fedora implemented two-factor authentication for sudo at the Security FAD. I was there, helped set up the clients, and did some testing.

While I was there, I had another agenda item, creating a HOWTO for enabling a GPG SmartCard for use with SSH. Of course, the SmartCard can be used for both encryption and signing as well.
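To give a flavour of the SSH piece, the usual approach is to let gpg-agent stand in for ssh-agent. A minimal sketch, assuming a GnuPG 2.0-era setup (exact socket and environment handling varies by GnuPG version and distribution):

$ cat ~/.gnupg/gpg-agent.conf
enable-ssh-support

$ eval $(gpg-agent --daemon --enable-ssh-support)
$ ssh-add -l    # should list the card's authentication key once the token is plugged in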

After finishing that HOWTO, I was talking about it with a few people within Fedora, and it turns out there was quite a bit of interest, so I’ve decided to do a hackfest on Saturday at FUDCon to help move people over to GPG SmartCards. This works out quite nicely, since there will be a GPG key signing event right after the hackfest.

Come Prepared – Equipment Required

It’s important you come prepared! If you have ever had interest in this sort of thing, there’s time to get equipment for the hackfest. Here’s what you need:

Both pieces are required. Order ASAP, it takes about 10 days to ship. Consider even shipping to the FUDCon hotel if you are concerned or late. I know Petra will work hard to deliver them as fast as possible.

Doing a Little Prep Work

If you get your Token and SmartCard before FUDCon Lawrence and have a few spare minutes, feel free to read through my HOWTO. If you get through it, come to the hackfest and help others who might not have had time.

Cheers,

herlo

GoOSe Linux 6.0 Beta Release Candidate 5 (RC5) Now Available!

The hope was to make a Golden GoOSe available for Christmas, but it didn’t work. Oh well, here’s another release candidate!

GoOSe Linux 6.0 Beta Release Candidate 5 (RC5) is now available for testing. Visit http://get.gooseproject.org/ to obtain the download.

Once you arrive at the above link, the downloads are under /releases/6.0/Beta-RC5/GoOSe//.

The GoOSe Project is always interested in feedback around its project. Please feel free to drop us a line about this release.

Comments/Questions:

Issues with GoOSe 6.0

If you find yourself having trouble installing, using or managing GoOSe Linux, please let us know. The best way is to file issues at our main project site on github, but we’re happy to have the information in any of the ways listed above.

Testing can be done by anyone. Please feel free to check our new testing wiki page for information on what tests need to be run and which have already passed. If you have interest in helping us test, thank you, thank you, thank you for the help!

GoOSe Beta-RC5 will be available for one week from today, at which time the GoOSe team will decide whether it will become the first Golden GoOSe release. This will be based upon the feedback provided, so please do tell us what you think!

Cheers,

herlo
