My First Flock: Hyannis, Massachusetts

In the past I've been a big part of the Fedora Project, attending many FUDCons and Activity Days. The last FUDCon, however, was in 2013, in Lawrence, Kansas. In other words, I hadn't been to a Fedora-run event in about 4-1/2 years. That changed this weekend when I arrived in Hyannis, Massachusetts for Flock.

Getting to Flock

My trip down from Boston was very comfortable thanks to Mohan, whom I had not met prior to riding down. I also talked with a nice fellow, Robert, who rode along with us. I arrived around 6pm, checked in, and was able to find dinner in town. I walked around town for a bit, then headed back for an early evening.

My Purpose at Flock

As part of my charge, I've started investigating where things might fall down in the future with regard to scaling resources. Considering the number of builds, repository creations, composes, and tools that help produce all of those things in an automated way, it's likely they will need a way to scale up.

I believe the scale of the infrastructure needed will not only increase by orders of magnitude within Fedora, but will also need to scale on-demand. Because I'm helping to write and maintain a cloudy provisioner, this is a place where we should start having discussions.

Day One: Factory 2.0, Containers, and Scaling

The first part of every FUDCon was the State of Fedora by the Fedora Project Leader. As a transition to my first Flock, I appreciated that Matthew Miller started it all off in a very similar way. He talked about how releases keep improving and downloads continue to go up, giving the project a direction.

He also discussed a new charge for Ambassadors: a direction for events and such, focusing on specific community objectives. The details are posted on his blog here.

Lastly, he talked about what I saw as the biggest goal for the next year or two in Fedora: Modularity. If you've not heard about Modularity, Factory 2.0, or the Fedora.Next initiative, I highly suggest you perform a quick Google search and do some reading.

Sessions I Attended

I attended two sessions on day one. One of the changes made for Flock this year was that it was to be more of a 'DO' type conference: most presentations were scheduled for the first day, with workshops the focus for the rest of the week.

The first was on Factory 2.0 by Mike Bonnet. He covered all of the new tools that help build the Fedora modular components, including new pieces like Freshmaker, ODCS (On-Demand Compose Service), and WaiverDB. Again, a lot to search on the Googles.

The second session I attended was the Become a Container Maintainer workshop. Adam Miller and Josh Berkus guided us through the updated process for building a container for the Fedora Docker registry.

Evening Fun

The first evening of Flock was Game Night. In addition, Justin Flory organized a candy exchange. I enjoyed trying candy and treats from around the world while playing a game I brought. Afterward, I ended up walking along Main Street in Hyannis again, doing a little bar hopping and making some new friends.

Day Two: Packaging, Tooling in-depth, Testing with dist-git, and more

Wow, a long heading! I guess that is justified, given this was probably my busiest day of sessions.

Sessions I Attended

As usual, Stef Walter did an excellent job explaining CI tooling in simple terms. His talk focused on how dist-git tests might work using Ansible. This session got me thinking about the difference between provisioning nodes to run all of the tests vs. tests doing their own provisioning as part of the test. In both cases, LinchPin can help. I pulled him aside after lunch to ask how these tests might scale, and the direction became even clearer. Essentially, it gave me a lot more to think about with regard to provisioning.

I spent the next few hours in the same room, learning about Freshmaker, Greenwave, and Delorean (DLRN). Freshmaker and Greenwave are Factory 2.0 tools, used to help build Fedora in a more flexible way.

Freshmaker tooling helps to ensure that when an RPM, container, module, or something else changes, all of the affected components in the chain are also scheduled to be updated. Freshmaker has some intelligence built-in to determine what components need to be updated, and performs the actions automatically.

Greenwave is a policy engine that decides whether builds may proceed. The policies help determine whether an artifact is 'good enough' to continue down the build and compose chain. Essentially, Greenwave allows gating on automated tests with built-in policies. Humans would obviously control the policies, but some automated policies could be in place to start for each artifact, just to ensure that a policy exists. Greenwave also consults WaiverDB, which records waivers for failed results, when deciding how each artifact may proceed down the chain.

I had also intended to spend time in the Future of fedmsg session, but instead I ended up talking for over an hour with the ever-intelligent Dennis Gilmore about scaling Koji. The challenges here are not small. After some initial terminology clarifications, we got down to the difficult bits of how Koji might be able to scale. This will be quite the undertaking for someone, for sure. I really appreciated him sitting down and talking through it with me. It was among the best sessions I had, probably because it was so in-depth.

Evening Fun

Wednesday evening was a blast, as we attended Wackenhammer's Clockwork Arcade and Carousel. We were provided with food from a Vietnamese truck; the noodle bowl I had was quite good. I quite enjoyed the arcade, because we basically got unlimited tokens, which led to unlimited tickets. I spent a good amount of time racking up tickets, and ended up with 1425. I started looking at the prizes, but couldn't find anything I really liked. Then the lady behind the counter showed me a section I hadn't seen previously.

And there they were, my new specs!


I got a bunch of fun comments on them, ranging from Ozzy to Elton to John Lennon. If you have another look for me, let me know, as I might have a plan for Halloween now. :)

At the end of Day Two, I stayed up talking with my roommate, Adam Williamson. We discussed the day's events, specifically testing, QA, and of course my Koji conversation. That conversation lasted about two hours. Then, because my brain was still going, I spent the next hour or so thinking and trying to sleep. It was very late when I finally fell asleep.

Day Three: Better late than never!

Because I was up so late the night before, I slept in, got a decent breakfast in town, and ran a couple of errands. I ended up getting back just in time for lunch. My new friend Marie had promised to cornrow my hair at lunchtime, but I could not find her. It turns out she had forgotten and gone to lunch at Spanky's with some of her friends.

Sessions I Attended

I finally caught up with Marie just after lunch at Mo's talk on the Design Pattern Library. It's clearly been a while since I've been around the design folks in Fedora, but it was a refreshing change. I learned about the atomic structure inside Fedora's design library and how Mustache (or other frameworks whose names I forget) is used to put it together.

After Mo's session, Marie did up my hair in a less-cornrow style, but it was very, very well done. I thanked her and left. They continued on with their design hackfest, which I heard later was quite successful. Here's her handiwork.


I then attended Tomas Tomecek's Let's Create a Module workshop. I think this was the most in-depth I've gotten with the modulemd file. The file is a YAML descriptor listing the dependencies (both modules and RPMs) that a module depends upon, and it also contains things similar to a SPEC file for RPMs. Essentially, the goal here was to educate as many packagers as possible on how to build modules. Hopefully, this will jumpstart getting modules into Fedora.

The next session I attended was Stef's talk on how to deliver CI and CD for Fedora. Because I had spent the better part of an hour at lunch talking to him the day before, a lot of this was already clear. One of those 'aha' moments happened during the lunch chat, and was further solidified at the talk.

Centered around provisioning, there are really two big points where tests might need to provision. The first, and more obvious, is provisioning a system in order to configure it and run the tests. The second, and less obvious, is that tests themselves might want to provision nodes, containers, VMs, etc. as part of running. This got me thinking about whether those who write tests would use what they already know, or consider a tool like LinchPin.

The final session I attended this day was the GPG Keysigning Party. Nick Bebout was a little disorganized, but in the end I was able to verify about 20 others' IDs and key fingerprints. I expect to sign these in the days to come. To the many folks who have already signed my new key, thank you.

Evening Fun

Because I hadn't spent any time with the modularity guys since DevConf in January, I found them and joined them for dinner at Tap City Grill. The food and beer were yummy, but the conversations were excellent. I look forward to seeing them again soon. I hope to make DevConf again next year. If not, I am planning to come to Europe next June through August, so hopefully we can have a few beers then.

After dinner, I joined my friends, Masha, Amita, Aurelian, Xavier, Jenn, and Marie at Embargo for what ended up being one drink. I ended up playing silly teenage games, like truth or dare (somehow mixed with spin the bottle using a fork). My dare was to go up to a gentleman at the bar, tell him that I thought he was handsome and wanted to dance. A few minutes later, Mike was hanging out with us after a nice little jig. That was very fun and very funny.

I left, headed back to the hotel, and joined some other Fedorans to drink and chat. This ended up moving a couple of times because drinking wasn't allowed there. Eventually, we ended up in our room for a couple of hours until everyone was ready to crash. Long, but fun night. This is the stuff I really enjoy at Fedora conferences.

Day Four: Let's wrap it up!

I'm not sure I loved the wrap-up session on day four, but it did allow me to hear perspectives from others on how their Flock experience went. For me, it felt like a long session regurgitating what had happened. From an administrative perspective, I'm sure it was quite useful. Several folks got up multiple times, and it felt a bit forced at times to get people to come up and talk about their take on the conference.

The one thing I got out of it was that I could spend a bit of time writing the beginnings of this post, so I took advantage and wrote down the things I wanted to remember. You can thank the wrap-up session for my inspiration.

Afternoon/Evening Fun

Since the day was short, a few of us headed out to Nauset Beach, the easternmost point on the Cape. First, however, we dropped Masha at the Hyannis transport hub so she could catch her bus to the airport. After some delay, and a nice goodbye, we headed out.

The beach was gorgeous, and they had one of those planes that fly with banners behind them. That was very interesting. We got a little wet in the ocean, talked some, and napped. The sun was soft and the wind was cool, but it wasn't cold until the sun started setting. At that point, we got up and had fun with our long, maybe 50-foot shadows. We then went north up the Cape and got dinner at an amazing place called Karoo. Unfortunately, I felt a bit ill, so I couldn't finish my lamb shank, but it was very, very good. We stopped for ice cream to help settle my belly, and it was quite yummy as well. Heading back seemed to take less time, as it does. Early to bed for me, though.

Day Five: Departure

We had a nice breakfast on Main Street in Hyannis, then headed to the airport via bus from the same transport hub. As with all good things, this one had to come to an end. Saturday had come, and most of my friends had already left. I will again miss them. I hope they will miss me, our talks, the fun. But of course the memories will remain. I look forward to the next time I see them.

All in all, Flock was a very good conference. The Hyannis Conference Center was second rate, but still useful. The cape will likely still be there for many years to come. I look forward to my contributions, as well as the contributions of others in the Fedora Project. It has been a big part of my life for such a long time now, and I'm glad to be a part of it.

Cheers,

herlo

LinchPin v1.0: Simplified Cloud Orchestration with Ansible

Late last year, we announced LinchPin, a hybrid cloud orchestration tool using Ansible. Provisioning cloud resources has never been easier or faster. With the power of Ansible behind LinchPin, many cloud resources are available at users' fingertips. In this article, I'll introduce LinchPin and look at how the project has matured in the past 10 months.

Back when LinchPin was introduced, using the ansible-playbook command to run LinchPin was complex. Although that can still be done, LinchPin now has a new front-end command-line interface (CLI), written using Click, which makes LinchPin even simpler to use than before.

Not to be outdone by the CLI, LinchPin now also has a Python API, which can be used to manage resources, such as Amazon EC2 and OpenStack instances, networks, storage, security groups, and more. The API documentation can be helpful when you're trying out LinchPin's Python API.

Playbooks as a Library

Because the core bits of LinchPin are Ansible playbooks, the roles, modules, filters, and anything else to do with calling Ansible modules have been moved into the LinchPin library. This means that while one can still call the playbooks directly, it's not the preferred mechanism for managing resources. The linchpin executable has become the de facto command-line front end.

Command-Line In Depth

Let's have a look at the linchpin command in depth.

$ linchpin
Usage: linchpin [OPTIONS] COMMAND [ARGS]...

  linchpin: hybrid cloud orchestration

Options:
  -c, --config PATH       Path to config file
  -w, --workspace PATH    Use the specified workspace if the familiar Jenkins
                          $WORKSPACE environment variable is not set
  -v, --verbose           Enable verbose output
  --version               Prints the version and exits
  --creds-path PATH       Use the specified credentials path if WORKSPACE
                          environment variable is not set
  -h, --help              Show this message and exit.

Commands:
  init     Initializes a linchpin project.
  up       Provisions nodes from the given target(s) in...
  destroy  Destroys nodes from the given target(s) in...

What can be seen immediately is a simple description, along with options and arguments that can be passed to the command. The three commands found near the bottom of this help are where the focus will be for this document.

Configuration

In the past, there was linchpin_config.yml. That file is gone, replaced with an ini-style configuration file called linchpin.conf. While this file can be modified or placed elsewhere, its placement in the library path allows easy lookup of configurations. The linchpin.conf file should not need to be modified in most cases.

Workspaces

The workspace is a defined filesystem path, which allows grouping of resources in a logical way. A workspace can be considered a single point for a particular environment, set of services, or some other logical grouping. It can also be just one big storage bin of all managed resources.

The workspace can be specified on the command line with the --workspace (-w) option, followed by the workspace path. It can also be specified with an environment variable (e.g. $WORKSPACE in bash). The default workspace is the current directory.
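For example, either of these invocations points LinchPin at the same workspace (the path here is just a placeholder):

$ linchpin -w /tmp/workspace up

$ export WORKSPACE=/tmp/workspace
$ linchpin up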

Initialization (init)

Running linchpin init will generate the directory structure needed, along with an example PinFile, topology, and layout files.

$ export WORKSPACE=/tmp/workspace
$ linchpin init
PinFile and file structure created at /tmp/workspace
$ cd /tmp/workspace/
$ tree
.
├── credentials
├── hooks
├── inventories
├── layouts
│   └── example-layout.yml
├── PinFile
├── resources
└── topologies
    └── example-topology.yml

At this point, one could execute linchpin up and provision a single libvirt virtual machine, with a network named linchpin-centos71. An inventory would be generated and placed in inventories/libvirt.inventory. You can tell this by reading topologies/example-topology.yml and gleaning the topology_name value.

Provisioning (linchpin up)

Once a PinFile, topology, and optionally a layout are in place, provisioning can happen.

Note

We use the dummy tooling as it is much simpler to configure. It doesn't require anything extra (authentication, network, etc.). The dummy provider creates a temporary file, which represents provisioned hosts. If the temporary file does not have any data, hosts have not been provisioned, or they have been recently destroyed.

The tree for the dummy provider is very simple.

$ tree
.
├── hooks
├── inventories
├── layouts
│   └── dummy-layout.yml
├── PinFile
├── resources
└── topologies
    └── dummy-cluster.yml

The PinFile is also very simple. It specifies which topology, and optionally which layout, to use for the dummy1 target.

---
dummy1:
  topology: dummy-cluster.yml
  layout: dummy-layout.yml

The dummy-cluster.yml topology file describes provisioning three (3) resources of type dummy_node.

---
topology_name: "dummy_cluster" # topology name
resource_groups:
  -
    resource_group_name: "dummy"
    resource_group_type: "dummy"
    resource_definitions:
      -
        name: "web"
        type: "dummy_node"
        count: 3

Running linchpin up should generate resources and inventory files based upon the topology_name, in this case dummy_cluster.

$ linchpin up
target: dummy1, action: up

$ ls {resources,inventories}/dummy*
inventories/dummy_cluster.inventory  resources/dummy_cluster.output

To verify resources with the dummy cluster, check /tmp/dummy.hosts:

$ cat /tmp/dummy.hosts
web-0.example.net
web-1.example.net
web-2.example.net

Note

The dummy module provides very basic tooling for pretend (or dummy) provisioning. Check out the details for OpenStack, AWS EC2, Google Cloud, and more in the LinchPin examples.

Inventory Generation

As part of the PinFile mentioned above, a layout can be specified. If this file is specified and exists in the correct location, an Ansible static inventory file will be generated automatically for the resources provisioned.

---
inventory_layout:
  vars:
    hostname: __IP__
  hosts:
    example-node:
      count: 3
      host_groups:
        - example

When the linchpin up execution is complete, the resources file provides useful details. Specifically, the IP address(es) or host name(s) are interpolated into the static inventory.

[example]
web-2.example.net hostname=web-2.example.net
web-1.example.net hostname=web-1.example.net
web-0.example.net hostname=web-0.example.net

[all]
web-2.example.net hostname=web-2.example.net
web-1.example.net hostname=web-1.example.net
web-0.example.net hostname=web-0.example.net

Teardown (linchpin destroy)

LinchPin can also perform a teardown of resources. A teardown action generally expects that resources have been provisioned; however, because Ansible is idempotent, linchpin destroy will only check to make sure the resources are up. Only if the resources are already up will the teardown happen.

The linchpin destroy command will use the resources and/or topology files to determine the proper teardown procedure.

Note

The dummy Ansible role does not use the resources, only the topology during teardown.

$ linchpin destroy
target: dummy1, action: destroy

$ cat /tmp/dummy.hosts
-- EMPTY FILE --

Note

The teardown functionality is slightly more limited around ephemeral resources, like networking, storage, etc. It is possible that a network resource could be used with multiple cloud instances. In this way, performing a linchpin destroy does not tear down certain resources. This is dependent on each provider's implementation; see the specific implementation for each of the providers.

The LinchPin Python API

Much of what is implemented in the linchpin command-line tool has been written using the Python API. The API, while not complete, has become a vital component of the LinchPin tooling.

The API consists of three packages:

  • linchpin
  • linchpin.cli
  • linchpin.api

The command-line tool is managed in the base linchpin package. It imports the linchpin.cli modules and classes, which subclass linchpin.api. The purpose of this is to allow other possible implementations of LinchPin built on linchpin.api, like a planned RESTful API.

For more information see the Python API library documentation on readthedocs.

Hooks

One of the big improvements in LinchPin 1.0 going forward is hooks. The goal with hooks is to allow additional configuration using external resources in certain specific states during linchpin execution. The states currently are as follows:

  • preup - executed before provisioning the topology resources
  • postup - executed after provisioning the topology resources, and generating the optional inventory
  • predestroy - executed before teardown of the topology resources
  • postdestroy - executed after teardown of the topology resources

In each case, these hooks allow external scripts to run. Several types of hooks exist, including custom ones. These are called Action Managers. Here's a list of built-in Action Managers.

  • shell - Allows either inline shell commands, or an executable shell script
  • python - Executes a python script
  • ansible - Executes an Ansible playbook, allowing passing of a vars_file and extra_vars represented as a python dict
  • nodejs - Executes a nodejs script
  • ruby - Executes a ruby script

Note

A hook is bound to a specific target, and must be restated for each target that uses it. In the future, hooks will be able to be defined globally and then simply named in the hooks section for each target.

Using Hooks

While hooks are simple enough to describe, it might not be as simple to understand their power. This feature exists to give users flexibility for things the LinchPin developers might not have considered. For instance, it could provide a simple way to ping a set of systems before running another hook.

Looking into the workspace more closely, one might have noticed the hooks directory. Let's have a look inside this directory to see the structure.

$ tree hooks/
hooks/
├── ansible
│   ├── ping
│   │   └── dummy_ping.yaml
└── shell
    └── database
        ├── init_db.sh
        └── setup_db.sh

In every case, hooks can be used in the PinFile, shown here.

---
dummy1:
  topology: dummy-cluster.yml
  layout: dummy-layout.yml
  hooks:
    postup:
      - name: ping
        type: ansible
        actions:
          - dummy_ping.yaml
      - name: database
        type: shell
        actions:
          - setup_db.sh
          - init_db.sh

The basic concept here is that there are three postup actions to complete. Hooks are executed in top-down order. Thus, the Ansible ping task would run first, followed by the two shell tasks, setup_db.sh and then init_db.sh. Assuming the hooks execute successfully, a ping of the systems would occur, then a database would be set up and initialized.
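To make the ansible hook above concrete, dummy_ping.yaml could be nothing more than an ordinary Ansible playbook using the stock ping module. This is a hypothetical sketch of what such a hook file might contain, not the file shipped with LinchPin:

---
# dummy_ping.yaml (hypothetical hook content)
- hosts: all
  gather_facts: false
  tasks:
    - name: verify the provisioned hosts respond
      ping: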

Authentication Driver

In the initial design of LinchPin, authentication was managed within the Ansible playbooks. However, moving to a more API- and command-line-driven tool meant that authentication should really live outside of the library where the playbooks now reside, while still passing authentication values along as needed.

Configuration

To accomplish this, it was determined that the easiest approach was to let users use the authentication method provided by the driver in question. For instance, if the topology calls for OpenStack, the standard method is to use either a YAML file or the similar OS_-prefixed environment variables. A clouds.yaml file consists of a profile with an auth section.

clouds:
  default:
    auth:
      auth_url: http://stack.example.com:5000/v2.0/
      project_name: factory2
      username: factory-user
      password: password-is-not-a-good-password

Note

More detail in the openstack documentation

This clouds.yaml, or any other authentication file, is located in the default_credentials_path (eg. ~/.config/linchpin) and referenced in the topology.

---
topology_name: openstack-test
resource_groups:
  -
    resource_group_name: linchpin
    resource_group_type: openstack
    resource_definitions:
      - name: resource
        type: os_server
        flavor: m1.small
        image: rhel-7.2-server-x86_64-released
        count: 1
        keypair: test-key
        networks:
          - test-net2
        fip_pool: 10.0.72.0/24
    credentials:
      filename: clouds.yaml
      profile: default

Note

The default_credentials_path can be changed by modifying the linchpin.conf

The topology includes a new credentials section at the bottom. With the openstack, ec2, and gcloud modules, credentials can be specified similarly. The authentication driver will then look in the given filename (clouds.yaml) and search for the profile named default.

Assuming authentication is found and loaded, provisioning will continue as normal.

Simplicity

Although LinchPin can be complex around topologies, inventory layouts, hooks, and authentication management, the ultimate goal is simplicity. By simplifying the command-line interface, along with goals to improve the developer experience post-1.0, LinchPin continues to show that complex configurations can be managed simply.

Community Growth

Over the past year, LinchPin's community has grown to the point that we now have a mailing list, an IRC channel (#linchpin on chat.freenode.net), and even manage our sprints in the open on GitHub.

The community membership has grown immensely, from 2 core developers to about 10 contributors over the past year, and more people continue to work with the project. The future is bright for sure! If you've got an interest, drop us a line, file an issue on GitHub, join us on IRC, or send us an email.

Cheers,

herlo

PS - This article is crossposted on Opensource.com. Special thanks to Rikki Endsley and Jen Wike Huger for helping it get published!

Introducing Linch-Pin: Hybrid cloud provisioning using Ansible

Background

Over the past 6+ months, I've been working at Red Hat. During this time, I've been working mostly with Continuous Integration projects and the like. Recently, I joined a new team, called Continuous Infrastructure, and started working on automating use cases around Project Atomic and OpenShift.

As part of that project, we have an internal tool called ci-factory. Among its components is a provisioner that works with OpenStack, Beaker, etc. However, it doesn't support a broader set of clouds/infrastructure, including Amazon AWS, Google Compute Engine, libvirt, etc. Additionally, ci-factory provided some tooling for Ansible dynamic inventories, but the configurations that generated them were not very flexible, and sometimes outright incorrect.

Enter Provisioner 2.0 (Linch-Pin)

Beginning in June this year, our team started creating a new tool. Led by developer extraordinaire Samvaran Rallabandi (we call him SK), we now have Linch-Pin.

The concept of Linch-Pin was to retool the basic ci-factory provisioner into something much more flexible. While provisioning was important, ci-factory was written in a mix of Python and bash scripts. Provisioner 2.0 is written completely in Ansible. Because Ansible is excellent at both configuration management and orchestration, it can be used to both provision and configure systems. Ansible can handle complex cluster configurations (e.g. openshift-ansible) as well as much simpler tasks, like adding users to a system idempotently. A small example of the latter follows.
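Here is a minimal, hypothetical playbook using the stock Ansible user module, just to illustrate the kind of simple, idempotent task mentioned above:

---
# add-user.yml (hypothetical example): idempotently ensure a user exists
- hosts: all
  become: true
  tasks:
    - name: ensure the 'herlo' user exists
      user:
        name: herlo
        state: present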

Additionally, Provisioner 2.0 was designed to leverage existing Ansible cloud modules, reducing the amount of code needing to be written. So far, this has proven very valuable and made development time much shorter overall. There are, however, certain modules, libvirt for example, that seem poorly implemented in Ansible. For those, an updated module will need to be written.

Lastly, Provisioner 2.0 should exist upstream. Linch-Pin has been an upstream project since the first working code was created. This was done to encourage contribution from inside and outside of Red Hat. We believe that many projects will be able to take advantage of Linch-Pin and contribute back to the project as a whole. Many other upstream and downstream projects have expressed interest in Linch-Pin just from a basic demonstration.

Linch-Pin Architecture

Linch-Pin has some basic components that make it work. First we'll cover the file structure, then dive into the core bits:

provision/
├── group_vars/
├── hosts
├── roles/
└── site.yml

At one point in time, Linch-Pin was going to have both provision and configure playbooks. The provision components became Linch-Pin, while the configure components became external repositories of useful components which leverage Linch-Pin in some way. A couple of examples are the CentOS PaaS SIG's paas-sig-ci project and the cinch project from Red Hat Quality Engineering.

Both group_vars and hosts paths are related to inventory.

The roles path is the meat of Linch-Pin. All of the power that makes hybrid cloud provisioning possible exists here. Ansible defines playbooks to drive these roles; site.yml is the playbook itself, which starts the execution.

Other paths exist:

├── outputs/
├── filter_plugins/
├── InventoryFilters/
└── library/

We will cover these components later on in this post, or later in the series.

Topologies

To consider complex cloud infrastructure, a topology definition can help. The topology definition is created using YAML. Before Linch-Pin provisions anything, the topology must be validated using a predefined schema. Schemas can be created to change the way the topology works if desired. However, there is a default schema already defined for simplicity.

After validation, the topology is used to provision resources. A topology is broken into its resource components, and each is delegated to the appropriate resource provisioner. This is generally done asynchronously, meaning that nodes on different cloud providers can be provisioned at the same time using the appropriate credentials. Assuming a successful provisioning event per provider, the resource provisioner(s) will return appropriate response data to the topology provisioner.

As mentioned above, credentials may be required for some cloud providers. This is handled by the credentials manager. The credentials details are stored in the topology definition as a reference to the vault/location of said credentials. The resource provisioner uses these to authenticate to the appropriate cloud provider as necessary.

A very simple example topology definition is provided here.

# openstack-3node-cluster.yml
---
topology_name: "openstack_3node_cluster"
site: "ci-osp"
resource_groups:
  -
    resource_group_name: "os"
    res_group_type: "openstack"
    res_defs:
      -
        res_name: "3node_inst"
        flavor: "m1.small"
        res_type: "os_server"
        image: "rhel-7.3-server-x86_64-latest"
        count: 3
        networks:
          - "atomic-e2e-jenkins"
    assoc_creds: "openstack_creds"

This topology describes a single set of instances, which will run on the local ci-osp site. There will be 3 nodes, given network devices on the 'atomic-e2e-jenkins' network, and each instance will be of flavor 'm1.small'. Keep in mind that the OpenStack server must actually be configured to accept the above options; that is out of scope for this post, however.

Topology terminology explained:

  • topology_name: a reference point which Linch-Pin uses to track the nodes, networks, and other resources throughout provisioning and teardown
  • resource_groups: a set of resource definitions. One or many groups of nodes, storage, etc.
  • res_group_type: a predefined set of group types (eg. openstack, aws, gcloud, libvirt, etc.)
  • res_defs: A definition of a resource with its component attributes (flavor, image, count, region, etc.)
  • res_type: A predefined cloud resource name (eg. os_server, gcloud_gce, aws_ec2, os_heat)

As mentioned above, the topology describes the resources needed. When Linch-Pin is invoked, this file is read and the described systems are created. More information about topologies and structures is described in the Linch-Pin documentation. More complex examples can be found in the Linch-Pin GitHub repository.

Provisioning

To provision the openstack-3node-cluster.yml, Linch-Pin currently uses the ansible-playbook command. There are many options available that can be passed as --extra-vars, but here, we only show two: state and topology. Simply calling the provision playbook will provision resources:

$ ansible-playbook /path/to/linch-pin/provision/site.yml \
  -e "topology='/path/to/openstack-3node-cluster.yml' state=present"

Note

There are other --extra-vars options documented here.

The diagram shows this process: the topology definition is provided to the provisioner, which then provisions the requested resources. Once provisioned, all cloud data is gathered and stored.


A great many things happen when this playbook is run. Let's have a look at the process in a bit more detail.

Determining Defaults

Linch-Pin first discovers the needed configuration, either from the --extra-vars shown above or from linchpin_config.yml. This determines the schema, topology, and some paths for outputs and the like (covered later in this post).

Schema Check

Once everything is configured properly, a schema check is performed. This process is used to ensure the topology file matches up with the defined constraints. For example, there are specific resource types (res_type), like os_server, aws_ec2, and gcloud_gce. Others, like libvirt, beaker, and docker are not yet implemented. The schema check ensures that further processing only occurs for resource types that are currently supported.

Provisioning Nodes

Once the schema check passes, the topology is provisioned with the cloud provider(s). In the example there is only one, OpenStack, but several clouds could be provisioned at once. The provisioner plugin is called for each cloud provider, and credentials are passed along as needed. If all is successful, nodes will be provisioned according to the topology definition.

Note

Additional example topologies are available on the Linch-Pin GitHub.

Determining Credentials

It may not have been clear above, but when provisioning nodes from certain cloud providers, credentials are required. In the topology definition file, there is one line that indicates credentials:

assoc_creds: "openstack_creds"

However, this doesn't really tell us much by itself. It turns out each provisioner plugin has an Ansible role, and each role contains some variables for determining how to connect to the cloud provider. For instance, with OpenStack, roles/openstack/vars/openstack_creds.yml relates to our topology definition:

--- # openstack credentials
endpoint: "{{ lookup('env','OS_AUTH_URL') }}"
project: "{{ lookup('env','OS_TENANT_NAME') }}"
username: "{{ lookup('env','OS_USERNAME') }}"
password: "{{ lookup('env','OS_PASSWORD') }}"

The OpenStack credentials currently come from environment variables. Simply export these variables to the shell and they will be picked up properly. From this point, OpenStack will grant access according to its policies. Environment variables are used this way, where appropriate, for all cloud providers.
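For example, the four variables referenced in openstack_creds.yml could be exported like this (the values are placeholders):

$ export OS_AUTH_URL=http://stack.example.com:5000/v2.0/
$ export OS_TENANT_NAME=factory2
$ export OS_USERNAME=factory-user
$ export OS_PASSWORD=password-is-not-a-good-password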

In the future, there will be a way to manage credentials a bit more succinctly. See https://github.com/CentOS-PaaS-SIG/linch-pin/issues/76 for updates.

Outputs and Teardown

Teardown is fairly straightforward. The command is similar to provisioning. The main difference is state=absent, telling the provisioner to perform a teardown instead of provisioning a new resource:

$ ansible-playbook /path/to/linch-pin/provision/site.yml \
  -e "topology='/path/to/openstack-3node-cluster.yml' state=absent"

This process requires knowledge of the outputs, as mentioned previously. Outputs are tracked by a few variables in the linchpin_config.yml. Specifically, the outputfolder_path provides the location of the output, along with the filename, which is based upon the topology_name.

Consider the following: outputfolder_path=/tmp/outputs and topology_name=openstack_3node_cluster. From this, the output of provisioning would reside at /tmp/outputs/openstack_3node_cluster.yml.

The contents of this file may look something like this:

aws_ec2_res: []
duffy_res: []
gcloud_gce_res: []
os_server_res:
-   _ansible_item_result: true
    _ansible_no_log: false
    changed: true
    id: f5832fcc-4e1b-442d-8be8-ba5e3783e7f2
    instance:
    - <endpoint>
    - <username>
    - <password>
    - <username>
    - present
    - rhel-6.5_jeos
    - <ssh-key-reference>
    - m1.small
    - <network>
    - testgroup2
    - ano_inst
    - 0
    invocation:
        module_args:
            api_timeout: 99999
            auth:
                auth_url: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
                password: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
                project_name: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
                username: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
            auth_type: null
            auto_ip: true
            availability_zone: null
            boot_from_volume: false
            boot_volume: null
            cacert: null
            cert: null
            cloud: null
            config_drive: false
            endpoint_type: public
            flavor: m1.small
            flavor_include: null
            flavor_ram: null
            floating_ip_pools: null
            floating_ips: null
            image: rhel-6.5_jeos
            image_exclude: (deprecated)
            key: null
            key_name: <ssh-key-reference>
            meta: null
            name: testgroup2_ano_inst_0
            network: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
            nics: []
            region_name: null
            scheduler_hints: null
            security_groups:
            - default
            state: present
            terminate_volume: false
            timeout: 180
            userdata: null
            verify: true
            volume_size: false
            volumes: []
            wait: true
        module_name: os_server
    openstack:
        HUMAN_ID: true
        NAME_ATTR: name
        OS-DCF:diskConfig: MANUAL
        OS-EXT-AZ:availability_zone: nova
        OS-EXT-STS:power_state: 1
        OS-EXT-STS:task_state: null
        OS-EXT-STS:vm_state: active
        OS-SRV-USG:launched_at: '2016-07-26T16:56:40.000000'
        OS-SRV-USG:terminated_at: null
        accessIPv4: 10.8.183.233
        accessIPv6: ''
        addresses:
            <network>:
            -   OS-EXT-IPS-MAC:mac_addr: fa:16:3e:48:cf:fb
                OS-EXT-IPS:type: fixed
                addr: 172.16.100.26
                version: 4
            -   OS-EXT-IPS-MAC:mac_addr: fa:16:3e:48:cf:fb
                OS-EXT-IPS:type: floating
                addr: 10.8.183.233
                version: 4
        adminPass: <redacted>
        az: nova
        cloud: defaults
        config_drive: ''
        created: '2016-07-26T16:56:10Z'
        flavor:
            id: '2'
            name: m1.small
        hostId: 6363c13fc31066a15f2ff4c81bf054290d4d56d4d08e568570354580
        human_id: testgroup2_ano_inst_0
        id: f5832fcc-4e1b-442d-8be8-ba5e3783e7f2
        image:
            id: 3bcfd17c-6bf0-4134-ae7f-80bded8b46fd
            name: rhel-6.5_jeos
        interface_ip: 10.8.183.233
        key_name: <ssh-key-reference>
        metadata: {}
        name: testgroup2_ano_inst_0
        networks:
            <network>:
            - 172.16.100.26
            - 10.8.183.233
        os-extended-volumes:volumes_attached: []
        private_v4: 172.16.100.26
        progress: 0
        public_v4: 10.8.183.233
        public_v6: ''
        region: ''
        security_groups:
        -   description: Default security group
            id: df1a797b-009c-4685-a7c9-43863c36d653
            name: default
            security_group_rules:
            -   direction: ingress
                ethertype: IPv4
                id: ade9fcb9-14c1-4975-a04d-6007f80005c1
                port_range_max: null
                port_range_min: null
                protocol: null
                remote_ip_prefix: null
                security_group_id: df1a797b-009c-4685-a7c9-43863c36d653
            -   direction: ingress
                ethertype: IPv4
                id: d03e4bae-24b6-415a-a30c-ee0d060f566f
                port_range_max: null
                port_range_min: null
                protocol: null
                remote_ip_prefix: null
                security_group_id: df1a797b-009c-4685-a7c9-43863c36d653
        status: ACTIVE
        tenant_id: f1dda47890754241a3e111f9b7394707
        updated: '2016-07-26T16:56:40Z'
        user_id: 9c770dbddda444799e627004fee26e0a
        volumes: []
    server:
        HUMAN_ID: true
        NAME_ATTR: name
        OS-DCF:diskConfig: MANUAL
        OS-EXT-AZ:availability_zone: nova
        OS-EXT-STS:power_state: 1
        OS-EXT-STS:task_state: null
        OS-EXT-STS:vm_state: active
        OS-SRV-USG:launched_at: '2016-07-26T16:56:40.000000'
        OS-SRV-USG:terminated_at: null
        accessIPv4: 10.8.183.233
        accessIPv6: ''
        addresses:
            <network>:
            -   OS-EXT-IPS-MAC:mac_addr: fa:16:3e:48:cf:fb
                OS-EXT-IPS:type: fixed
                addr: 172.16.100.26
                version: 4
            -   OS-EXT-IPS-MAC:mac_addr: fa:16:3e:48:cf:fb
                OS-EXT-IPS:type: floating
                addr: 10.8.183.233
                version: 4
        adminPass: <redacted>
        az: nova
        cloud: defaults
        config_drive: ''
        created: '2016-07-26T16:56:10Z'
        flavor:
            id: '2'
            name: m1.small
        hostId: 6363c13fc31066a15f2ff4c81bf054290d4d56d4d08e568570354580
        human_id: testgroup2_ano_inst_0
        id: f5832fcc-4e1b-442d-8be8-ba5e3783e7f2
        image:
            id: 3bcfd17c-6bf0-4134-ae7f-80bded8b46fd
            name: rhel-6.5_jeos
        interface_ip: 10.8.183.233
        key_name: <ssh-key-reference>
        metadata: {}
        name: testgroup2_ano_inst_0
        networks:
            <network>:
            - 172.16.100.26
            - 10.8.183.233
        os-extended-volumes:volumes_attached: []
        private_v4: 172.16.100.26
        progress: 0
        public_v4: 10.8.183.233
        public_v6: ''
        region: ''
        security_groups:
        -   description: Default security group
            id: df1a797b-009c-4685-a7c9-43863c36d653
            name: default
            security_group_rules:
            -   direction: ingress
                ethertype: IPv4
                id: ade9fcb9-14c1-4975-a04d-6007f80005c1
                port_range_max: null
                port_range_min: null
                protocol: null
                remote_ip_prefix: null
                security_group_id: df1a797b-009c-4685-a7c9-43863c36d653
            -   direction: ingress
                ethertype: IPv4
                id: d03e4bae-24b6-415a-a30c-ee0d060f566f
                port_range_max: null
                port_range_min: null
                protocol: null
                remote_ip_prefix: null
                security_group_id: df1a797b-009c-4685-a7c9-43863c36d653
        status: ACTIVE
        tenant_id: f1dda47890754241a3e111f9b7394707
        updated: '2016-07-26T16:56:40Z'
        user_id: 9c770dbddda444799e627004fee26e0a
        volumes: []

Because linchpin_config.yml contains the path to the output file, it can be parsed and used to tear down the resources. In this case, the single OpenStack node listed and its networking resources are torn down. If there were more nodes, more data would exist. Similarly, if there were additional clouds, the data would be populated in the appropriate output fields.

Conclusion

Finally! We've made it through the introduction to Linch-Pin. As you can see, running Linch-Pin is pretty easy, but there's a lot to it internally. Now it's time to use the provisioner, so go ahead and try it out.

Cheers,

herlo

Adjusting your git email address per clone

I've been working a lot with git over the years. I commonly use my personal email address for the projects I post on my GitHub. Additionally, there are a project or two in which I participate where the email address I use is just an alias to my personal email, but I want to use that alias there instead of the address above.

Today, I came across an article that makes changing this value much, much simpler.  Try this:

$ git config --global alias.personal 'config user.email "herlo@personal"'
$ git personal
$ git config user.email
herlo@personal
$ git config --global alias.work 'config user.email "herlo@work"'
$ git work
$ git config user.email
herlo@work

As you can see, it’s now trivial to switch between certain email addresses. Man, git aliases are very nice!
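One thing worth noting: because the aliased command runs a plain git config (no --global), the address is written only to the current clone's .git/config, so you run the alias once per clone and each repository keeps its own setting. You can check what a given clone will use with:

$ git config --local user.email
herlo@personal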

Cheers,

herlo

Setting up a FreeIPA Replica

In my last post, I mentioned that I would show how to configure a client with sudo access. Well, I lied! Hopefully that will be the next post. Instead, I'm going to cover how to set up a FreeIPA replica.

Replicating FreeIPA services is useful in many ways: managing DNS, LDAP, and Kerberos services, for one. Additionally, it makes sense to have replicas in different VLANs or network zones. Opening up only the ports needed for replication between the FreeIPA servers, instead of for all hosts, makes things more secure.

All Replicas are Masters

Save for things like running a Certificate Authority or DNS, all of the hosts which run FreeIPA are masters. They replicate using agreements. This means that when a new replica is set up, it will communicate with other FreeIPA servers and replicate to/from only the ones it is allowed to communicate with. Further reading is available here.

Preparing for the FreeIPA Replica

Preparing the replica requires setting up the agreement documents on one of the IPA master servers.

$ ssh ipa7.example.com
..snip..

$ su -
Password: centos

# ipa-replica-prepare replica.example.com --ip-address 192.168.122.210
Directory Manager (existing master) password: manager72

Preparing replica for replica.example.com from ipa.example.com
Creating SSL certificate for the Directory Server
Creating SSL certificate for the dogtag Directory Server
Creating SSL certificate for the Web Server
Exporting RA certificate
Copying additional files
Finalizing configuration
Packaging replica information into /var/lib/ipa/replica-info-replica.example.com.gpg
Adding DNS records for replica.example.com
Using reverse zone 122.168.192.in-addr.arpa.

Copy the Replica gpg Data to the New Replica

The data is now packaged and needs to be put on the replica. The simplest and most secure process is to scp the file directly to the replica. In some cases, this is not possible, and other methods may need to be employed.

# scp /var/lib/ipa/replica-info-replica.example.com.gpg root@192.168.122.210:
..snip..

Install the Replica

Once the agreements are created and moved to the new replica, it's time to perform the install.

$ ssh root@192.168.122.210
root@192.168.122.210's password:

Ensure the replica can contact the master server. Additionally, make sure the replica's IP address resolves locally. Modifying /etc/hosts is the simplest solution.

# cat /etc/hosts
..snip..
192.168.122.210 replica.example.com
192.168.122.200 ipa7.example.com
..snip..

Make sure the hostname is set properly. If it isn't, correct it by modifying /etc/sysconfig/network.

# hostname
replica.example.com

Once all of the above is correct, it's time to perform the installation itself. Since a replica is just a clone of another master, installing the same packages makes sense.

# yum install ipa-server bind-dyndb-ldap -y
..snip..

Next, perform the install. Consider options like --setup-ca and --setup-dns as optional, though very useful if those processes are going to be needed on the new replica.

# ipa-replica-install --setup-dns /root/replica-info-replica.example.com.gpg \
--no-forwarders --ip-address=192.168.122.210
Directory Manager (existing master) password: manager72

Run connection check to master
Check connection from replica to remote master 'ipa7.example.com':
 Directory Service: Unsecure port (389): OK
 Directory Service: Secure port (636): OK
 Kerberos KDC: TCP (88): OK
 Kerberos Kpasswd: TCP (464): OK
 HTTP Server: Unsecure port (80): OK
 HTTP Server: Secure port (443): OK
 PKI-CA: Directory Service port (7389): OK

The following list of ports use UDP protocol and would need to be
checked manually:
 Kerberos KDC: UDP (88): SKIPPED
 Kerberos Kpasswd: UDP (464): SKIPPED

Connection from replica to master is OK.
Start listening on required ports for remote master check
Get credentials to log in to remote master
admin@EXAMPLE.COM password: ipaadmin72

Configuring NTP daemon (ntpd)
..snip..
Configuring directory server for the CA (pkids): Estimated time 30 seconds
..snip..
Done configuring directory server for the CA (pkids).
Configuring certificate server (pki-cad): Estimated time 3 minutes 30 seconds
..snip..
Done configuring certificate server (pki-cad).
Restarting the directory and certificate servers
Configuring directory server (dirsrv): Estimated time 1 minute
..snip..
Starting replication, please wait until this has completed.
Update in progress
..snip..
Update succeeded
..snip..
Done configuring directory server (dirsrv).
Configuring Kerberos KDC (krb5kdc): Estimated time 30 seconds
..snip..
Done configuring Kerberos KDC (krb5kdc).
Configuring kadmin
..snip..
Done configuring kadmin.
Configuring ipa_memcached
..snip..
Done configuring ipa_memcached.
Configuring the web interface (httpd): Estimated time 1 minute
..snip..
Done configuring the web interface (httpd).
Applying LDAP updates
Restarting the directory server
Restarting the KDC
Using reverse zone 122.168.192.in-addr.arpa.
Configuring DNS (named)
..snip..
Done configuring DNS (named).

Global DNS configuration in LDAP server is empty
You can use 'dnsconfig-mod' command to set global DNS options that
would override settings in local named.conf files

Restarting the web server

At this point, the replica should start functioning. Since this is a FreeIPA server, make sure logins will succeed by enabling home directory creation and sssd, as shown below. A reboot is a very good idea.

# authconfig --enablemkhomedir --update
..snip..
# chkconfig sssd on
..snip..
# init 6

Once rebooted, login with an existing user.

$ ssh replica.example.com
Warning: Permanently added 'replica.example.com' (ECDSA) to the 
list of known hosts.
Creating home directory for herlo.
Last login: Fri Oct 22 12:27:44 2014 from 192.168.122.1
[herlo@replica ~]$ id
uid=151600001(herlo) gid=151600001(herlo) groups=151600001(herlo)...

Assuming all works well, replication should be working.

If all goes well, I'll show how to install a client and enable sudo access in the next post.

Cheers,

herlo

Introducing FreeIPA – Identity Management (IdM) Done Right!

It has been a while since I've posted anything on this blog, so it is high time I get something useful up here. Lucky for you, dear reader, I have a series of posts to share, each about a new technical passion of mine: Identity Management.

Many of you probably know of Active Directory, the all-encompassing Identity Management solution from Microsoft. It is the most popular solution out there, and it's got a good hold on the market, for sure. But with almost all things Microsoft come closed source, GUI-only management, and resentment among many.

I'm not saying that Active Directory does not do its job as an IdM solution. In fact, I think it's a fine solution, if you want a proprietary product, want to pay a ton of money yearly, and don't mind not following standards. In terms of Identity Management itself, it is a pretty good system overall.

The thing is, closed source systems historically have more issues and delays for fixes long term. Until recently, however, there hasn't been a reasonable open source solution for IdM. Enter FreeIPA, Identity Management done right.

What is FreeIPA?

FreeIPA is a solution for managing users, groups, hosts, services, and much, much more. It uses open source solutions with some Python glue to make things work. Identity Management made easy for the Linux administrator.

Inside FreeIPA are some common pieces: the Apache Web Server, BIND, 389DS, and MIT Kerberos. Additionally, Dogtag is used for certificate management, and sssd for client-side configurations. Put that all together with some Python glue, and you have FreeIPA.

As you can see from the diagram, there is also an API which provides programmatic management via Web and Command Line interfaces. Additionally, many plugins exist. For example, one exists to set up trust agreements for replication to Active Directory. Additional functionality exists for managing Samba shares via users and groups in FreeIPA.

It's probably a good time to set up a FreeIPA server and show its power.

Installation

Installing FreeIPA is simple on a Linux system; however, there are a few things needed first. This installation is being performed on a fully updated CentOS 7.0 system. An entry in /etc/hosts matching the server IP and hostname is useful. Additionally, make sure to set the hostname properly.

# echo 192.168.122.200 ipa7.example.com ipa7 >> /etc/hosts
# echo ipa7.example.com > /etc/hostname

We’ll be installing the server at this time, but there is a client install, which we’ll show in later posts. It’s recommended to use RHEL/CentOS >= 6.x or Fedora >= 14. Simply perform a yum install.

(rhel/centos) # yum install ipa-server
(fedora) # yum install freeipa-server
..snip..

Once the {free,}ipa-server package is installed, run the install itself. Since FreeIPA can manage a DNS server, a decision must be made. Here, we are going to choose to manage our internal DNS with FreeIPA, which uses LDAP via 389DS to store the records.

# yum install bind-dyndb-ldap
..snip..
# ipa-server-install --setup-dns
The log file for this installation can be found in /var/log/ipaserver-install.log
==============================================================================
This program will set up the IPA Server.

This includes:
 * Configure a stand-alone CA (dogtag) for certificate management
 * Configure the Network Time Daemon (ntpd)
 * Create and configure an instance of Directory Server
 * Create and configure a Kerberos Key Distribution Center (KDC)
 * Configure Apache (httpd)
 * Configure DNS (bind)

To accept the default shown in brackets, press the Enter key.

WARNING: conflicting time&date synchronization service 'chronyd' will be disabled
in favor of ntpd

Existing BIND configuration detected, overwrite? [no]: yes

The installation explains the process and services which will be installed. Because we’re installing using DNS, some skeleton files exist. It’s safe to overwrite them and move forward.

Next, define the server hostname, and the domain name (for DNS).

Enter the fully qualified domain name of the computer
on which you're setting up server software. Using the form
<hostname>.<domainname>
Example: master.example.com.


Server host name [ipa7.example.com]: ipa7.example.com

Warning: skipping DNS resolution of host ipa7.example.com
The domain name has been determined based on the host name.

Please confirm the domain name [example.com]: example.com

The next section covers the Kerberos realm. This may seem confusing, but Kerberos is one of the big powerhouses behind FreeIPA; it makes registering client systems very simple. Kerberos realm names are always in upper case and usually emulate the domain name.

The kerberos protocol requires a Realm name to be defined.
This is typically the domain name converted to uppercase.

Please provide a realm name [EXAMPLE.COM]: EXAMPLE.COM

Next, configure the passwords for the Directory Manager (for ldap administration) and the IPA admin user.

Certain directory server operations require an administrative user.
This user is referred to as the Directory Manager and has full access
to the Directory for system management tasks and will be added to the
instance of directory server created for IPA.
The password must be at least 8 characters long.

Directory Manager password: manager72
Password (confirm): manager72

The IPA server requires an administrative user, named 'admin'.
This user is a regular system account used for IPA server administration.

IPA admin password: ipaadmin72
Password (confirm): ipaadmin72

Finally, the installer follows up with a request for more DNS info.

Do you want to configure DNS forwarders? [yes]: yes
Enter the IP address of DNS forwarder to use, or press Enter to finish.
Enter IP address for a DNS forwarder: 8.8.8.8
DNS forwarder 8.8.8.8 added
Enter IP address for a DNS forwarder: 8.8.4.4
DNS forwarder 8.8.4.4 added
Enter IP address for a DNS forwarder: <enter>
Do you want to configure the reverse zone? [yes]: yes
Please specify the reverse zone name [122.168.192.in-addr.arpa.]: 122.168.192.in-addr.arpa
Using reverse zone 122.168.192.in-addr.arpa.

Now that all of the questions have been asked and answered, it’s time to let the installer do its thing. A verification step prints out all of the values entered; make sure to review them carefully.

The IPA Master Server will be configured with:
Hostname: ipa7.example.com
IP address: 192.168.122.200
Domain name: example.com
Realm name: EXAMPLE.COM

BIND DNS server will be configured to serve IPA domain with:
Forwarders: 8.8.8.8, 8.8.4.4
Reverse zone: 122.168.192.in-addr.arpa.

Then, when ready, confirm the install and go grab a cup of joe. Installation takes anywhere from 10 to 30 minutes.

Continue to configure the system with these values? [no]: yes

The following operations may take some minutes to complete.
Please wait until the prompt is returned.

Configuring NTP daemon (ntpd)
 [1/4]: stopping ntpd
 [2/4]: writing configuration
 [3/4]: configuring ntpd to start on boot
 [4/4]: starting ntpd
..snip..

When complete, the installation prints a bit of useful information. Make sure to open the listed ports in the firewall; that’s beyond the scope of this post and is left as an exercise for the reader, though a quick firewalld sketch follows the output below.

Setup complete

Next steps:
    1. You must make sure these network ports are open:
        TCP Ports:
          * 80, 443: HTTP/HTTPS
          * 389, 636: LDAP/LDAPS
          * 88, 464: kerberos
          * 53: bind
        UDP Ports:
          * 88, 464: kerberos
          * 53: bind
          * 123: ntp

    2. You can now obtain a kerberos ticket using the command: 'kinit admin'
       This ticket will allow you to use the IPA tools (e.g., ipa user-add)
       and the web user interface.

Be sure to back up the CA certificate stored in /root/cacert.p12
This file is required to create replicas. The password for this
file is the Directory Manager password
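
As a minimal sketch of that exercise, assuming the stock firewalld setup on CentOS 7 (adapt if iptables is managed directly), the ports listed above can be opened like this:

# firewall-cmd --permanent --add-port={80,443,389,636,88,464,53}/tcp
# firewall-cmd --permanent --add-port={88,464,53,123}/udp
# firewall-cmd --reload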

Now that everything is installed, one last simple configuration will help. To ensure users can log in correctly, use authconfig so home directories are created automatically, followed by a quick reboot.

# authconfig --enablemkhomedir --update
# chkconfig sssd on
Note: Forwarding request to 'systemctl enable sssd.service'.
# init 6

Once the system has rebooted, point a browser at the newly installed FreeIPA service. Logging into FreeIPA can be done in two ways: from the browser, or via Kerberos. For now, log in via the web browser as the admin user.
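
Before opening the browser, it can be worth confirming the admin credentials from the shell with Kerberos; the web UI itself is typically reachable at https://ipa7.example.com/ipa/ui. The klist output below is trimmed and only illustrative:

# kinit admin
Password for admin@EXAMPLE.COM:
# klist
Ticket cache: KEYRING:persistent:0:0
Default principal: admin@EXAMPLE.COM
..snip..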


Once logged in, one useful configuration change is worth making before adding users: change the default shell for all users to /bin/bash. This is done by choosing IPA Server -> Configuration. Once modified, click Update.

ipa-config-shell

Now it’s time to add a user. Choose the Identity tab. Then click Add.

ipa-user-add

Clicking Add and Edit presents the user’s details, which is useful for adding an SSH public key.

ipa-user-sshkey

NOTE: Don’t forget to click Update after setting the key.
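
The same user creation can also be done from the command line with the ipa tool once a ticket is held (kinit admin). The login, names, and key below are placeholders rather than values from this install:

# ipa user-add juser --first=Jane --last=User \
    --sshpubkey="ssh-rsa AAAA... juser@laptop"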

It should now be possible to ssh into the FreeIPA server as the new user. For this to work from another machine, make sure the new FreeIPA server is used as that machine’s resolver. The simplest way is to update its /etc/resolv.conf file.

# cat /etc/resolv.conf
search example.com
nameserver 192.168.122.200
..snip..

# host ipa7.example.com
ipa7.example.com has address 192.168.122.200

Once the FreeIPA server is resolvable, ssh should now work.

[herlo@x220 ~]$ ssh ipa7.example.com
The authenticity of host 'ipa7.example.com (192.168.122.200)' can't be established.
ECDSA key fingerprint is 42:96:09:a7:1b:ac:df:dd:1c:de:73:2b:86:51:19:b1.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ipa7.example.com' (ECDSA) to the list of known hosts.
Creating home directory for herlo.
Last login: Fri Oct 10 02:27:44 2014 from 192.168.122.1
[herlo@ipa7 ~]$ id
uid=151600001(herlo) gid=151600001(herlo) groups=151600001(herlo) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c102

Congratulations! The FreeIPA server is now configured. In the next post, I will cover how to configure a client system and set up centralized sudo.

Cheers,

herlo

Changing the IP range for docker0

Lately, I’ve been tinkering a lot with docker, mostly for work at The Linux Foundation. But I also want docker instances on my local box for distros which I do not run.

While doing some testing for work on my personal laptop, I noticed that the network docker uses for its bridge, aptly named docker0, was in the same range as one of our VPNs.

# ip a s docker0
7: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
 link/ether fe:54:00:18:1a:fd brd ff:ff:ff:ff:ff:ff
 inet 172.17.41.1/16 brd 172.17.255.255 scope global docker0
 ..snip..

# ip a s tun0
139: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1412 ...
    link/none 
    inet 172.17.123.32/24 brd 172.17.123.255 scope global tun0
       valid_lft forever preferred_lft forever

As you can tell, the docker0 network bridge covers all of the tun0 network. Any time I would attempt to ssh into one of the systems inside the VPN, it would time out. I was left wondering why for a few moments.

Luckily, it’s very easy to fix this problem. All that’s needed is an interface configuration file defining the docker0 bridge, followed by a restart of the docker service. Here’s what to do:

First, stop docker:

# service docker stop
Redirecting to /bin/systemctl stop  docker.service

Next, create the network bridge file. You can choose any IP range you like. On Fedora 19, it looks like this:

# cat /etc/sysconfig/network-scripts/ifcfg-docker0 
DEVICE="docker0"
TYPE="Bridge"
ONBOOT="yes"
NM_CONTROLLED="no"
BOOTPROTO="static"
IPADDR=10.100.72.254
NETMASK=255.255.255.0

Restart your network services. NOTE: on systems not using NetworkManager, a service network restart may be needed instead.

# service NetworkManager restart
Redirecting to /bin/systemctl restart  NetworkManager.service

The docker0 bridge should now be in a range outside the VPN.

# ip a s docker0
7: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
    link/ether fe:54:00:18:1a:fd brd ff:ff:ff:ff:ff:ff
    inet 10.100.72.254/24 brd 10.100.72.255 scope global docker0

New containers started with docker should now get IP addresses in the above range:

# service docker start
Redirecting to /bin/systemctl start  docker.service

# docker run -i -t herlo/fedora:20 /bin/bash
bash-4.2# ip a s eth0
141: eth0: <BROADCAST,UP,LOWER_UP> mtu 1412 ...
    link/ether fa:5f:e3:8d:61:f2 brd ff:ff:ff:ff:ff:ff
    inet 10.100.72.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f85f:e3ff:fe8d:61f2/64 scope link 
       valid_lft forever preferred_lft forever

SUCCESS!
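
As an aside, newer docker daemons also accept a --bip option that sets the bridge address directly; on Fedora that would presumably go into the OPTIONS line of /etc/sysconfig/docker instead of an ifcfg file. A hedged sketch, untested here, and older packages may not support the flag:

# cat /etc/sysconfig/docker
OPTIONS='--bip=10.100.72.254/24'
..snip..
# service docker restart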

Cheers,

herlo

Fedora Ambassadors Update: February 2013

Welcome New Ambassadors

We are happy to welcome our new sponsored Fedora Ambassador in February:

Fedora Ambassador Statistics

Mon Mar 11 02:53:08 2013 | Accounts: 624 | Inactive: 81 (12.98%)

Cheers,

herlo

Fedora Ambassadors Update: January 2013

Welcome New Ambassadors

We are happy to welcome our new sponsored Fedora Ambassador in January:

Fedora Ambassador Statistics

Thu Feb 7 12:29:06 2013 | Accounts: 619 | Inactive: 81 | % inactive: 13.09

Cheers,

herlo

Yes, the announcement is behind this month. Moving into a new home will do that to you!

FUDCon Lawrence: Day 1 Session Videos and More

While you might have seen the barrage of posts from Fedora’s FUDCon Lawrence this weekend, you might not know that many sessions were streamed for your joy and enrichment.

Here’s a list with links to the videos (all of them are on YouTube):

Enjoy,

herlo
