

by James on July 23, 2015

Git archive with submodules and tar magic

Git submodules are actually a very beautiful thing. You might prefer the word powerful or elegant, but that’s not the point. The downside is that they are sometimes misused, so as always, use with care. I’ve used them in projects like puppet-gluster, oh-my-vagrant, and others. If you’re not familiar with them, do a bit of reading and come back later, I’ll wait.

I recently did some work packaging Oh-My-Vagrant as RPMs. My primary goal was to make sure the entire process was automatic, as I have no patience for manually building RPMs. Any good packager knows that the prerequisite for building an SRPM is a source tarball, and I wanted to build those automatically too.

Simply running tar -cf on my source directory wouldn’t work, because I only want to include files that are stored in git. Thankfully, git comes with a tool called git archive, which does exactly that! No scary tar commands required:

Nobody likes tar

Here’s how you might run it:

$ git archive --prefix=some-project/ -o output.tar.bz2 HEAD

Let’s decompose:

The --prefix argument prepends a string prefix onto every file in the archive. Therefore, if you’d like the root directory to be named some-project, pass in that string with a trailing slash, and everything will be nested inside that directory!

The -o flag predictably picks the output file and format. Using .tar.bz2 is quite common.

Lastly, the HEAD portion at the end specifies which git tree to pull the files from. I usually specify a git tag here, but you can specify a commit id if you prefer.
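For example, to archive the tree at a hypothetical 0.0.1 release tag instead of HEAD:

$ git archive --prefix=some-project/ -o output.tar.bz2 0.0.1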

Obligatory, “make this article more interesting” meme image.

This is all well and good, but unfortunately, when I open my newly created archive, it is notably missing my git submodules! It would probably make sense for there to be an upstream option so that a --recursive flag would do this magic for you, but unfortunately it doesn’t exist yet.
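It’s easy to check this for yourself by listing the archive contents: each submodule shows up as an empty directory (or is missing entirely, depending on your git version). A hypothetical check, using one of the oh-my-vagrant submodule paths:

$ tar -tjf output.tar.bz2 | grep p4h/
some-project/vagrant/p4h/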

There are a few scripts floating around that can do this, but I wanted something small, and without any real dependencies, that I can embed in my project Makefile, so that it’s all self-contained.

Here’s what that looks like:

sometarget:
    @echo Running git archive...
    # use HEAD if tag doesn't exist yet, so that development is easier...
    git archive --prefix=oh-my-vagrant-$(VERSION)/ -o $(SOURCE) $(VERSION) 2> /dev/null || (echo 'Warning: $(VERSION) does not exist.' && git archive --prefix=oh-my-vagrant-$(VERSION)/ -o $(SOURCE) HEAD)
    # TODO: if git archive had a --submodules flag this would be easier!
    @echo Running git archive submodules...
    # i thought i would need --ignore-zeros, but it doesn't seem necessary!
    p=`pwd` && (echo .; git submodule foreach) | while read entering path; do \
        temp="$${path%\'}"; \
        temp="$${temp#\'}"; \
        path=$$temp; \
        [ "$$path" = "" ] && continue; \
        (cd $$path && git archive --prefix=oh-my-vagrant-$(VERSION)/$$path/ HEAD > $$p/rpmbuild/tmp.tar && tar --concatenate --file=$$p/$(SOURCE) $$p/rpmbuild/tmp.tar && rm $$p/rpmbuild/tmp.tar); \
    done

This is a bit tricky to read, so I’ll try to break it down. Remember, double dollar signs are used in Make syntax for embedded bash code since a single dollar sign is a special Make identifier. The $(VERSION) variable corresponds to the version of the project I’m building, which matches a git tag that I’ve previously created. $(SOURCE) corresponds to an output file name, ending in the .tar.bz2 suffix.

    p=`pwd` && (echo .; git submodule foreach) | while read entering path; do \

In this first line, we store the current working directory for use later, and then loop through the output of the git submodule foreach command. That output normally looks something like this:

james@computer:~/code/oh-my-vagrant$ git submodule foreach 
Entering 'vagrant/gems/xdg'
Entering 'vagrant/kubernetes/templates/default'
Entering 'vagrant/p4h'
Entering 'vagrant/puppet/modules/module-data'
Entering 'vagrant/puppet/modules/puppet'
Entering 'vagrant/puppet/modules/stdlib'
Entering 'vagrant/puppet/modules/yum'

As you can see, the above read command eats up the Entering string, and pulls the quoted path into the second variable, named path. The next part of the code:

        temp="$${path%\'}"; \
        temp="$${temp#\'}"; \
        path=$$temp; \
        [ "$$path" = "" ] && continue; \

uses bash idioms to remove the two single quotes that wrap our string, and then skips over any empty versions of the path variable in our loop. (There’s a standalone demo of these idioms at the end of this walkthrough.) Lastly, for each submodule found, we first switch into that directory:

        (cd $$path &&

Run a normal git archive command and create a plain uncompressed tar archive in a temporary directory:

git archive --prefix=oh-my-vagrant-$(VERSION)/$$path/ HEAD > $$p/rpmbuild/tmp.tar &&

Then we use the magic of tar to overlay this new tar file on top of the source file that we’re building up with each iteration of this loop, and then remove the temporary file.

tar --concatenate --file=$$p/$(SOURCE) $$p/rpmbuild/tmp.tar && rm $$p/rpmbuild/tmp.tar); 

Finally, we end the loop:

    done

Boom, magic! Short, concise, and with no dependencies other than bash, tar, and git.
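As promised, here’s a standalone demo of those quote-stripping idioms in plain bash. Note the single dollar signs, since we’re outside of Make here; the sample path is one of the oh-my-vagrant submodules:

$ path="'vagrant/p4h'"
$ temp=${path%\'}    # strip the trailing single quote
$ temp=${temp#\'}    # strip the leading single quote
$ echo $temp
vagrant/p4h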

Nobody should have to figure that out by themselves, and I wish it were built into git, but until then, here’s how it’s done! Many thanks to #git on IRC for pointing me in the right direction.

This is the commit where I landed this patch for oh-my-vagrant, if you’re curious to see this in the wild. Now that this is done, I can definitely say that it was worth the time:

Is it worth the time? In this case, it was.

With this feature merged, along with my automatic COPR builds, a simple ‘make rpm‘ causes all of this automation to happen, and delivers a fresh build from git in a few minutes.

I hope you enjoyed this technique, and if you’ve got the coding skills, I hope you’ll help get this feature upstream into git.

Happy Hacking,

James


by James on October 18, 2014

Hacking out an Openshift app

I had an itch to scratch, and I wanted to get a bit more familiar with Openshift. I had used it in the past, but it was time to have another go. The app and the code are now available. Feel free to check out:

https://pdfdoc-purpleidea.rhcloud.com/

This is a simple app that takes the URL of a markdown file on GitHub, and outputs a pandoc-converted PDF. I wanted to use pandoc specifically, because it produces beautiful PDFs, typeset with LaTeX. To embed a link in your upstream documentation that points to a PDF, just append the file’s URL to this app’s URL, under a /pdf/ path. For example:

https://pdfdoc-purpleidea.rhcloud.com/pdf/https://github.com/purpleidea/puppet-gluster/blob/master/DOCUMENTATION.md

will send you to a PDF of the puppet-gluster documentation. This will make it easier to accept questions as FAQ patches, without needing to constantly update a binary PDF that is embedded in git.
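Said another way: grabbing a rendered PDF from the command line is a simple curl away (output filename to taste):

$ curl -o documentation.pdf https://pdfdoc-purpleidea.rhcloud.com/pdf/https://github.com/purpleidea/puppet-gluster/blob/master/DOCUMENTATION.md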

If you want to hear more about what I did, read on…

The setup:

Start by getting a free Openshift account. You’ll also want to install the client tools. Nothing is worse than having to interact with your app via a web interface. Hackers use terminals. Luckily, the Openshift team knows this, and they’ve created a great command line tool called rhc to make it all possible.

I started by following their instructions:

$ sudo yum install rubygem-rhc
$ sudo gem update rhc

Unfortunately, this left me with a problem:

$ rhc
/usr/share/rubygems/rubygems/dependency.rb:298:in `to_specs': Could not find 'rhc' (>= 0) among 37 total gem(s) (Gem::LoadError)
    from /usr/share/rubygems/rubygems/dependency.rb:309:in `to_spec'
    from /usr/share/rubygems/rubygems/core_ext/kernel_gem.rb:47:in `gem'
    from /usr/local/bin/rhc:22:in `'

I solved this by running:

$ gem install rhc

This makes my user-installed rhc take precedence over the system one. Then run:

$ rhc setup

and the rhc client will take you through some setup steps such as uploading your public ssh key to the Openshift infrastructure. The beauty of this tool is that it will work with the Red Hat hosted infrastructure, or you can use it with your own infrastructure if you want to host your own Openshift servers. This alone means you’ll never get locked in to a third-party provider’s terms or pricing.

Create a new app:

To get a fresh python 3.3 app going, you can run:

$ rhc create-app <appname> python-3.3

From this point on, it’s fairly straightforward, and you can now hack your way through the app in python. To push a new version of your app into production, it’s just a git commit away:

$ git add -p && git commit -m 'Awesome new commit...' && git push && rhc tail

Creating a new app from existing code:

If you want to push a new app from an existing code base, it’s as easy as:

$ rhc create-app awesomesauce python-3.3 --from-code https://github.com/purpleidea/pdfdoc
Application Options
-------------------
Domain:      purpleidea
Cartridges:  python-3.3
Source Code: https://github.com/purpleidea/pdfdoc
Gear Size:   default
Scaling:     no

Creating application 'awesomesauce' ... done


Waiting for your DNS name to be available ... done

Cloning into 'awesomesauce'...
The authenticity of host 'awesomesauce-purpleidea.rhcloud.com (203.0.113.13)' can't be established.
RSA key fingerprint is 00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'awesomesauce-purpleidea.rhcloud.com,203.0.113.13' (RSA) to the list of known hosts.

Your application 'awesomesauce' is now available.

  URL:        http://awesomesauce-purpleidea.rhcloud.com/
  SSH to:     00112233445566778899aabb@awesomesauce-purpleidea.rhcloud.com
  Git remote: ssh://00112233445566778899aabb@awesomesauce-purpleidea.rhcloud.com/~/git/awesomesauce.git/
  Cloned to:  /home/james/code/awesomesauce

Run 'rhc show-app awesomesauce' for more details about your app.

In my case, my app also needs some binaries installed. I haven’t yet automated this process, but I think it can be done by creating a custom cartridge. Help to do this would be appreciated!

Updating your app:

In the case of an app that I already deployed with this method, updating it from the upstream source is quite easy. You just pull down the relevant commits, and then push them up to your app’s git repo:

$ git pull upstream master 
From https://github.com/purpleidea/pdfdoc
 * branch            master     -> FETCH_HEAD
Updating 5ac5577..bdf9601
Fast-forward
 wsgi.py | 2 --
 1 file changed, 2 deletions(-)
$ git push origin master 
Counting objects: 7, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 312 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
remote: Stopping Python 3.3 cartridge
remote: Waiting for stop to finish
remote: Waiting for stop to finish
remote: Building git ref 'master', commit bdf9601
remote: Activating virtenv
remote: Checking for pip dependency listed in requirements.txt file..
remote: You must give at least one requirement to install (see "pip help install")
remote: Running setup.py script..
remote: running develop
remote: running egg_info
remote: creating pdfdoc.egg-info
remote: writing pdfdoc.egg-info/PKG-INFO
remote: writing dependency_links to pdfdoc.egg-info/dependency_links.txt
remote: writing top-level names to pdfdoc.egg-info/top_level.txt
remote: writing manifest file 'pdfdoc.egg-info/SOURCES.txt'
remote: reading manifest file 'pdfdoc.egg-info/SOURCES.txt'
remote: writing manifest file 'pdfdoc.egg-info/SOURCES.txt'
remote: running build_ext
remote: Creating /var/lib/openshift/00112233445566778899aabb/app-root/runtime/dependencies/python/virtenv/venv/lib/python3.3/site-packages/pdfdoc.egg-link (link to .)
remote: pdfdoc 0.0.1 is already the active version in easy-install.pth
remote: 
remote: Installed /var/lib/openshift/00112233445566778899aabb/app-root/runtime/repo
remote: Processing dependencies for pdfdoc==0.0.1
remote: Finished processing dependencies for pdfdoc==0.0.1
remote: Preparing build for deployment
remote: Deployment id is 9c2ee03c
remote: Activating deployment
remote: Starting Python 3.3 cartridge (Apache+mod_wsgi)
remote: Application directory "/" selected as DocumentRoot
remote: Application "wsgi.py" selected as default WSGI entry point
remote: -------------------------
remote: Git Post-Receive Result: success
remote: Activation status: success
remote: Deployment completed with status: success
To ssh://00112233445566778899aabb@awesomesauce-purpleidea.rhcloud.com/~/git/awesomesauce.git/
   5ac5577..bdf9601  master -> master
$

Final thoughts:

I hope this helped you getting going with Openshift. Feel free to send me patches!

Happy hacking!

James


by James on October 10, 2014

Continuous integration for Puppet modules

I just patched puppet-gluster and puppet-ipa to bring their infrastructure up to date with the current state of affairs…

What’s new?

  • Better README’s
  • Rake syntax checking (fewer oopsies)
  • CI (testing) with travis on git push (automatic testing for everyone)
  • Use of .pmtignore to ignore files from puppet module packages (finally)
  • Pushing modules to the forge with blacksmith (sweet!)

This last point deserves another mention. Puppetlabs created the “forge” to try to provide some sort of added value to their stewardship. Personally, I like to look for code on github instead, but nevertheless, some do use the forge. The problem is that to upload new releases, you need to click your mouse like a windows user! Someone has finally solved that problem! If you use blacksmith, a new build is just a rake push away!
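I won’t reproduce the full plumbing here (see the example commit below for that), but the workflow boils down to something like this sketch; the exact rake task names can differ between blacksmith versions, so double check against the blacksmith docs:

$ gem install puppet-blacksmith
$ echo "require 'puppet_blacksmith/rake_tasks'" >> Rakefile
$ rake module:push    # push the built module up to the forge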

Have a look at this example commit if you’re interested in seeing the plumbing.

Better documentation and FAQ answering:

I’ve answered a lot of questions by email, but this only helps out individuals. From now on, I’d appreciate if you asked your question in the form of a patch to my FAQ. (puppet-gluster, puppet-ipa)

I’ll review and merge your patch, including a follow-up patch with the answer! This way you’ll get more familiar with git and sending small patches, everyone will benefit from the response, and I’ll be able to point you to the docs (and even a specific commit) to avoid responding to already answered questions. You’ll also have the commit information of someone else who already had this problem. Cool, right?

Happy hacking,

James


by James on August 27, 2014

Rough data density calculations

Seagate has just publicly announced 8TB HDDs in a 3.5″ form factor. I decided to do some rough calculations to understand the density a bit better…

Note: I have decided to ignore the distinction between Terabytes (TB) and Tebibytes (TiB), since I always work in base 2, but I hate the -bi naming conventions. Seagate is most likely announcing an 8TB HDD, which is actually smaller than a true 8TiB drive. If you don’t know the difference it’s worth learning.

Rack Unit Density:

Supermicro sells a high density, double-sided 4U server, which can hold 90 x 3.5″ drives. This means you can easily store:

90 * 8TB = 720TB in 4U,

or:

720TB/4U = 180TB per U.

To store a petabyte of data, since:

1PB = 1024TB,

we need:

1024TB/180TB/U = 5.68 U.

Rounding up we realize that we can easily store one petabyte of raw data in 6U.

Since an average rack is usually 42U (tall racks can be 48U) that means we can store between seven and eight PB per rack:

42U/rack / 6U/PB = 7PB/rack

48U/rack / 6U/PB = 8PB/rack

If you can provide the power and cooling, you can quickly see that small data centers can easily get into exabyte scale if needed. One raw exabyte would only require:

1EB = 1024PB

1024PB/7PB/rack = 146 racks =~ 150 racks.

Raid and Redundancy:

Since you’ll most likely have lots of failures, I would recommend having some number of RAID sets per server, and perhaps a distributed file system like GlusterFS to replicate the data across different servers. Suppose you broke each 90 drive server into five separate RAID 6 bricks for GlusterFS:

90/5 = 18 drives per brick.

In RAID 6, you lose two drives to parity, so that means:

18 drives – 2 drives = 16 drives per brick of usable storage.

16 drives * 5 bricks * 8 TB = 640 TB after RAID 6 in 4U.

640TB/4U = 160TB/U

1024TB / 160TB/U = 6.4U per PB, which still works out to roughly 7PB per rack.

Since I rounded a lot, the result is similar. With a replica count of 2 in a standard GlusterFS configuration, you average a total of about 3-4PB of usable storage per rack. Need a petabyte scale filesystem? One rack should do it!
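If you’d like to play with these numbers yourself, here is the same arithmetic as a tiny shell script (same assumptions: a 90 drive 4U chassis, 8TB drives, five RAID 6 bricks, two parity drives per brick):

#!/bin/bash
drives=90; size=8; bricks=5; parity=2
per_brick=$((drives / bricks))                      # 18 drives per brick
usable=$(( (per_brick - parity) * bricks * size ))  # 640TB after RAID 6 in 4U
echo "${usable}TB per 4U chassis, or $((usable / 4))TB/U"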

Other considerations:

  • Remember that you need to take into account space for power, cooling and networking.
  • Keep in mind that SMR might be used to increase density even further (if it isn’t already being used on these drives).
  • Remember that these calculations were done to understand the order of magnitude, and not to get a precise measurement on the size of a planned cluster.
  • Petabyte scale is starting to feel small…

Conclusions:

Storage is getting very inexpensive. After the above analysis, I feel safe in concluding that:

  1. Puppet-Gluster could easily automate a petabyte scale filesystem.
  2. I have an embarrassingly small amount of personal storage.

Hope this was fun,

Happy hacking,

James

 

Disclaimer: I have not tried the 8TB Seagate HDD’s, or the Supermicro 90 x 3.5″ servers, but if you are building a petabyte scale cluster with GlusterFS/Puppet-Gluster, I’d like to hear about it!

 


by James on June 4, 2014

Hiera data in modules and OS independent puppet

Earlier this year, R.I. Pienaar released his brilliant data in modules hack. A few months ago, I got the chance to start implementing it in Puppet-Gluster, and today I’ve found the time to blog about it.

What is it?

R.I.’s hack lets you store hiera data inside a puppet module. This has many uses, including letting you throw out the nested mess that params.pp commonly becomes, and replacing it with something file based that is elegant and hierarchical. For my use case, I’m using it to build OS independent puppet modules, without storing this data as code. The secondary win is that porting your module to a new GNU/Linux distribution or version could be as simple as adding a YAML file.

How does it work?

(For the specifics on the hack in general, please read R.I. Pienaar’s blog post. After you’re comfortable with that, please continue…)

In the module’s hiera.yaml, I define an OS / version hierarchy over the data/ directory that should probably cover all use cases. It looks like this:

---
:hierarchy:
- tree/%{::osfamily}/%{::operatingsystem}/%{::operatingsystemrelease}
- tree/%{::osfamily}/%{::operatingsystem}
- tree/%{::osfamily}
- common

At the bottom, you can specify common data, which can be overridden by OS family specific data (think RedHat “like” vs. Debian “like”), which can be overridden with operating system specific data (think CentOS vs. Fedora), which can finally be overridden with operating system version specific data (think RHEL6 vs. RHEL7).

Grouping the commonalities near the bottom of the tree avoids duplication, and makes it possible to support new OS versions with fewer changes. It would be especially cool if someone could write a script to refactor commonalities downwards, and to refactor new uniqueness upwards.
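To make the hierarchy concrete, here is roughly how a module’s data/ directory maps onto the filesystem for a Fedora host in the RedHat family (the layout is illustrative):

data/
├── common.yaml
└── tree/
    ├── RedHat.yaml
    └── RedHat/
        ├── Fedora.yaml
        └── Fedora/
            └── 20.yaml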

This is an excerpt of the Fedora specific YAML file:

gluster::params::package_glusterfs_server: 'glusterfs-server'
gluster::params::program_mkfs_xfs: '/usr/sbin/mkfs.xfs'
gluster::params::program_mkfs_ext4: '/usr/sbin/mkfs.ext4'
gluster::params::program_findmnt: '/usr/bin/findmnt'
gluster::params::service_glusterd: 'glusterd'
gluster::params::misc_gluster_reload: '/usr/bin/systemctl reload glusterd'

Since we use full paths in Puppet-Gluster, and since they are uniquely different in Fedora (no more /bin!), it’s nice to specify them all here. The added advantage is that you can easily drop in different versions of these utilities if you want to test a patched release without having to edit your system utilities. In addition, you’ll see that the OS specific RPM package name and service names are in here too. On a Debian system, they are usually different.

Dependencies:

This depends on Puppet >= 3.x, and on having the puppet-module-data module included. I include it as part of my vagrant integration.

Should I still use params.pp?

I think that this answer is yes. I use a params.pp file with a single class specifying all the defaults:

class gluster::params(
    # packages...
    $package_glusterfs_server = 'glusterfs-server',

    $program_mkfs_xfs = '/sbin/mkfs.xfs',
    $program_mkfs_ext4 = '/sbin/mkfs.ext4',

    # services...
    $service_glusterd = 'glusterd',

    # misc...
    $misc_gluster_reload = '/sbin/service glusterd reload',

    # comment...
    $comment = ''
) {
    if "${comment}" == '' {
        warning('Unable to load yaml data/ directory!')
    }

    # ...

}

In my data/common.yaml I include a bogus comment canary so that I can trigger a warning if the data in modules technique isn’t working. This should stay a warning as long as you want to allow backwards compatibility; otherwise it should be a fail! The defaults I use correspond to the primary OS I hack on and use this module with, which in this case is CentOS 6.x.
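For reference, the canary is just one more key in data/common.yaml, something along these lines (the exact value is arbitrary, it only needs to be non-empty):

$ grep comment data/common.yaml
gluster::params::comment: 'data in modules is working!'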

To use this data in your module, include the params.pp file, and start using it. Example:

include gluster::params
package { "${::gluster::params::package_glusterfs_server}":
    ensure => present,
}

Unfortunately the readability isn’t nearly as nice as it would be without this; however, it’s a necessary evil due to the limitations of the puppet language.

Common patterns:

There are a few common code patterns, which you might need for this technique. The first few, I’ve already mentioned above. These are the tree layout in hiera.yaml, the comment canary, and the params.pp defaults. There’s one more that you might find helpful…

The split package pattern:

Certain packages are split into multiple pieces on some operating systems, and grouped together on others. This means there isn’t always a one-to-one mapping between the data and the package type. For simple cases you can use a hiera array:

# this hiera value could be an array of strings...
package { $::some_module::params::package::some_package_list:
    ensure => present,
    alias => 'some_package',
}
service { 'foo':
    require => Package['some_package'],
}

For this to work you must always define at least one element in the array. For more complex cases you might need to test for the secondary package in the split:

if "${::some_module::params::package::some_package}" != '' {
    package { "${::some_module::params::package::some_package}":
        ensure => present,
        alias => 'some_package', # or use the $name and skip this
    }
}

service { 'foo':
    require => "${::some_module::params::package::some_package}" ? {
        '' => undef,
        default => Package['some_package'],
    },
}

This pattern is used in Puppet-Gluster in more than one place. It turns out that it’s also useful when optional python packages get pulled into the system python. (example)

Hopefully you found this useful. Please help increase the multi-os aspect of Puppet-Gluster by submitting patches to the YAML files, and by testing it on your favourite GNU/Linux distro!

Happy hacking!

James

 

by James on May 13, 2014

Vagrant on Fedora with libvirt (reprise)

Vagrant has become the de facto tool for devops. Faster iterations, clean environments, and less overhead. This isn’t an article about why you should use Vagrant. This is an article about how to get up and running with Vagrant on Fedora with libvirt easily!

Background:

This article is an update of my original Vagrant on Fedora with libvirt article. There is still lots of good information in that article, but this one should be easier to follow and uses updated versions of Vagrant and vagrant-libvirt.

Why vagrant-libvirt?

Vagrant ships by default with support for virtualbox. This makes sense as a default since it is available on Windows, Mac, and GNU/Linux. Real hackers use GNU/Linux, and in my opinion the best tool for GNU/Linux is vagrant-libvirt. Proprietary, closed source platforms aren’t hackable and therefore aren’t cool!

Another advantage to using the vagrant-libvirt plugin is that it plays nicely with the existing ecosystem of libvirt tools. You can use virsh, virt-manager, and guestfish alongside Vagrant, and if your development work needs to go into production, you can be confident in knowing that it was already tested on the same awesome KVM virtualization platform that your servers run.

Prerequisites:

Let’s get going. What do you need?

  • A Fedora 20 machine

I recommend hardware that supports VT extensions (most does these days). This should also work with other GNU/Linux distros, but I haven’t tested them.

Installation:

I’m going to go through this in a logical hacking order. This means you could group all the yum install commands into a single execution at the beginning, but you would learn much less by doing so.

First install some of your favourite hacking dependencies. I did this on a minimal, headless F20 installation. You might want to add some of these too:

# yum install -y wget tree vim screen mtr nmap telnet tar git

Update the system to make sure it’s fresh:

# yum update -y

Download Vagrant version 1.5.4. No, don’t use the latest version, it probably won’t work! Vagrant has new releases practically as often as there are sunsets, and they typically cause lots of breakages.

$ wget https://dl.bintray.com/mitchellh/vagrant/vagrant_1.5.4_x86_64.rpm

and install it:

# yum install -y vagrant_1.5.4_x86_64.rpm

RVM installation:

In order to get vagrant-libvirt working, you’ll need some ruby dependencies. It turns out that RVM seems to be the best way to get exactly what you need. Use the sketchy RVM installer:

# \curl -sSL https://get.rvm.io | bash -s stable

If you don’t know why that’s sketchy, then you probably shouldn’t be hacking! I did that as root, but it probably works when you run it as a normal user. At this point rvm should be installed. The last important thing you’ll need to do is to add yourself to the rvm group. This is only needed if you installed rvm as root:

# usermod -aG rvm <username>

You’ll probably need to logout and log back in for this to take effect. Run:

$ groups

to make sure you can see rvm in the list. If you ran rvm as root, you’ll want to source the rvm.sh file:

$ source /etc/profile.d/rvm.sh

or simply use a new terminal. If you ran it as a normal user, I think RVM adds something to your ~/.bashrc. You might want to reload it:

$ source ~/.bashrc

At this point RVM should be working. Let’s see which ruby’s it can install:

$ rvm list known

Ruby version ruby-2.0.0-p353 seems closest to what is available on my Fedora 20 machine, so I’ll use that:

$ rvm install ruby-2.0.0-p353

If the exact patch number isn’t available, choose what’s closest. Installing ruby requires a bunch of dependencies. The rvm install command will ask yum for a bunch of dependencies, but if you’d rather install them yourself, you can run:

# yum install -y patch libyaml-devel libffi-devel glibc-headers autoconf gcc-c++ glibc-devel patch readline-devel zlib-devel openssl-devel bzip2 automake libtool bison

GEM installation:

Now we need the GEM dependencies for the vagrant-libvirt plugin. These GEMs happen to have their own build dependencies, but thankfully I’ve already figured those out for you:

# yum install -y libvirt-devel libxslt-devel libxml2-devel

Now, install the nokogiri gem that vagrant-libvirt needs:

$ gem install nokogiri -v '1.5.11'

and finally we can install the actual vagrant-libvirt plugin:

$ vagrant plugin install --plugin-version 0.0.16 vagrant-libvirt

You don’t have to specify the --plugin-version 0.0.16 part, but doing so will make sure that you get a version that I have tested to be compatible with Vagrant 1.5.4, should a newer vagrant-libvirt release not be compatible with the Vagrant version you’re using. If you’re feeling brave, please test newer versions, report bugs, and write patches!
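A quick sanity check that the plugin landed (the version shown should match what you asked for):

$ vagrant plugin list
vagrant-libvirt (0.0.16)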

Making Vagrant more useful:

Vagrant should basically work at this point, but it’s missing some awesome. I’m proud to say that I wrote this awesome. I recommend my bash function and alias additions. If you’d like to include them, you can run:

$ wget https://gist.githubusercontent.com/purpleidea/8071962/raw/ee27c56e66aafdcb9fd9760f123e7eda51a6a51e/.bashrc_vagrant.sh
$ echo '. ~/.bashrc_vagrant.sh' >> ~/.bashrc
$ . ~/.bashrc    # reload

to pull in my most used Vagrant aliases and functions. I’ve written about them before; if you’re interested, have a look at my earlier Vagrant articles.

KVM/QEMU installation:

As I mentioned earlier, I’m assuming you have a minimal Fedora 20 installation, so you might not have all the libvirt pieces installed! Here’s how to install any potentially missing pieces:

# yum install -y libvirt{,-daemon-kvm}

This should pull in a whole bunch of dependencies too. You will need to start and (optionally) enable the libvirtd service:

# systemctl start libvirtd.service
# systemctl enable libvirtd.service

You’ll notice that I’m using the systemd commands instead of the deprecated service command. My biggest (only?) gripe with systemd is that the command line tools aren’t as friendly as they could be! The systemctl equivalent requires more typing, and makes it harder to start or stop the same service in quick succession, because it buries the action in the middle of the command instead of leaving it at the end!

The libvirtd service should finally be running. On my machine, it comes with a default network which got in the way of my vagrant-libvirt networking. If you want to get rid of it, you can run:

# virsh net-destroy default
# virsh net-undefine default

and it shouldn’t bother you anymore. One last hiccup. If it’s your first time installing KVM, you might run into bz#950436. To workaround this issue, I had to run:

# rmmod kvm_intel
# rmmod kvm
# modprobe kvm
# modprobe kvm_intel

Without this “module re-loading” you might see this error:

Call to virDomainCreateWithFlags failed: internal error: Process exited while reading console log output: char device redirected to /dev/pts/2 (label charserial0)
Could not access KVM kernel module: Permission denied
failed to initialize KVM: Permission denied

Additional installations:

To make your machine somewhat more palatable, you might want to consider installing bash-completion:

# yum install -y bash-completion

You’ll also probably want to add the PolicyKit (polkit) .pkla file that I recommend in my earlier article. Typically that means adding something like:

[Allow james libvirt management permissions]
Identity=unix-user:james
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes

as root to somewhere like:

/etc/polkit-1/localauthority/50-local.d/vagrant.pkla

Your machine should now be setup perfectly! The last thing you’ll need to do is to make sure that you get a Vagrantfile that does things properly! Here are some recommendations.

Shared folders:

Shared folders are a mechanism that Vagrant uses to pass data into (and sometimes out of) the virtual machines that it is managing. Typically you can use NFS, rsync, and some provider specific folder sharing like 9p. Using rsync is the simplest to set up, and works exceptionally well. Make sure you include the following line in your Vagrantfile:

config.vm.synced_folder './', '/vagrant', type: 'rsync'

If you want to see an example of this in action, you can have a look at my puppet-gluster Vagrantfile. If you are using the puppet apply provisioner, you will have to set it to use rsync as well:

puppet.synced_folder_type = 'rsync'

KVM performance:

Due to a regression in vagrant-libvirt, the default driver used for virtual machines is qemu. If you want to use the accelerated KVM domain type, you’ll have to set it:

libvirt.driver = 'kvm'

This typically gives me a 5x performance increase over plain qemu. This fix is available in the latest vagrant-libvirt version. The default has been set to KVM in the latest git master.

Dear internets!

I think this was fairly straightforward. You could probably even put all of these commands in a shell script and just run it to get it all going. What we really need is proper RPM packaging. If you can help out, that would be excellent!

If we had a version of vagrant-libvirt alongside a matching Vagrant version in Fedora, then developers and hackers could target that, and we could easily exchange dev environments, hackers could distribute product demos as full vagrant-libvirt clusters, and I could stop having to write these types of articles 😉

I hope this was helpful to you. Please let me know in the comments.

Happy hacking,

James

 

by James on May 6, 2014

Keeping git submodules in sync with your branches

This is a quick trick for making working with git submodules more magic.

One day you might find that using git submodules is needed for your project. It’s probably not necessary for everyday hacking, but if you’re gluing things together, it can be quite useful. Puppet-Gluster uses this technique to easily include all the dependencies needed for a Puppet-Gluster+Vagrant automatic deployment.

If you’re a good hacker, you develop things in separate feature branches. Example:

cd code/projectdir/
git checkout -b feat/my-cool-feature
# hack hack hack
git add -p
# add stuff
git commit -m 'my cool new feature'
git push
# yay!

The problem arises if you git pull inside of a git submodule to update it to a particular commit. When you switch branches, the git submodule‘s branch doesn’t move along with you! Personally, I think this is a bug, but perhaps it’s not. In any case, here’s the fix:

add:

#!/bin/bash
exec git submodule update

to your:

<projectdir>/.git/hooks/post-checkout

and then run:

chmod u+x <projectdir>/.git/hooks/post-checkout

and you’re good to go! Here’s an example:

james@computer:~/code/puppet/puppet-gluster$ git checkout feat/yamldata
M vagrant/gluster/puppet/modules/puppet
Switched to branch 'feat/yamldata'
Submodule path 'vagrant/gluster/puppet/modules/puppet': checked out 'f139d0b7cfe6d55c0848d0d338e19fe640a961f2'
james@computer:~/code/puppet/puppet-gluster (feat/yamldata)$ git checkout master
M vagrant/gluster/puppet/modules/puppet
Switched to branch 'master'
Your branch is up-to-date with 'glusterforge/master'.
Submodule path 'vagrant/gluster/puppet/modules/puppet': checked out '07ec49d1f67a498b31b4f164678a76c464e129c4'
james@computer:~/code/puppet/puppet-gluster$ cat .git/hooks/post-checkout
#!/bin/bash
exec git submodule update
james@computer:~/code/puppet/puppet-gluster$
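If you’d rather set up the hook in one shot, something like this should work from the root of your project:

$ cat > .git/hooks/post-checkout <<'EOF'
#!/bin/bash
exec git submodule update
EOF
$ chmod u+x .git/hooks/post-checkout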

Hope that helps you out too! If someone knows of a use-case when you don’t want this functionality, please let me know! Many thanks to #git for helping me solve this issue!

Happy hacking,

James

 

by James on March 27, 2014

Puppet-Gluster now available as RPM

I’ve been afraid of RPM and package maintaining [1] for years, but thanks to Kaleb Keithley, I have finally made some RPMs that weren’t generated from a high level tool. Now that I have the boilerplate done, it’s a relatively painless process!

In case you don’t know kkeithley, he is a wizard [2] who happens to also be especially cool and hardworking. If you meet him, be sure to buy him a $BEVERAGE. </plug>

A photo of kkeithley after he (temporarily) transformed himself into a wizard penguin.

The full source of my changes is available in git.

If you want to make the RPMs yourself, simply clone the puppet-gluster source, and run: make rpm. If you'd rather download pre-built RPMs, SRPMs, or source tarballs, they are all being graciously hosted on download.gluster.org, thanks to John Mark Walker and the gluster.org community.

These RPMs will install their contents into /usr/share/puppet/modules/. They should work on Fedora or CentOS, but they do require a puppet package to be installed. I hope to offer them in the future as part of a repository for easier consumption.

There are also RPMs available for puppet-common, puppet-keepalived, puppet-puppet, puppet-shorewall, puppet-yum, and even puppetlabs-stdlib. These are the dependencies required to install the puppet-gluster module.

Please let me know if you find any issues with any of the packages, or if you have any recommendations for improvement! I'm new to packaging, so I probably made some mistakes.

Happy Hacking,

James

[1] package maintainer, aka: "paintainer" - according to semiosis, who is right!

[2] wizard as in an awesome, talented, hacker.


by James on March 24, 2014

Introducing Puppet Exec[‘again’]

Puppet is missing a number of much-needed features. That’s the bad news. The good news is that I’ve been able to write some of these as modules that don’t need to change the Puppet core! This is an article about one of these features.

Posit: It’s not possible to apply all of your Puppet manifests in a single run.

I believe that this holds true for the current implementation of Puppet. Most manifests can, do, and should apply completely in a single run. If your configuration takes more than one run to converge, then chances are that you’re doing something wrong.

(For the sake of this article, convergence means that everything has been applied cleanly, and that a subsequent Puppet run wouldn’t have any work to do.)

There are some advanced corner cases, where this is not possible. In these situations, you will either have to wait for the next Puppet run (by default it will run every 30 minutes) or keep running Puppet manually until your configuration has converged. Neither of these situations are acceptable because:

  • Waiting 30 minutes while your machines are idle is (mostly) a waste of time.
  • Doing manual work to set up your automation kind of defeats the purpose.
'Are you stealing those LCDs?' 'Yeah, but I'm doing it while my code compiles.'

Waiting 30 minutes while your machines are idle is (mostly) a waste of time. Okay, maybe it’s not entirely a waste of time :)

So what’s the solution?

Introducing: Puppet Exec[‘again’] !

Exec[‘again’] is a feature which I’ve added to my Puppet-Common module.

What does it do?

Each Puppet run, your code can decide if it thinks there is more work to do, or if the host is not in a converged state. If so, it will tell Exec[‘again’].

What does Exec[‘again’] do?

Exec[‘again’] will fork a process off from the running puppet process. It will wait until that parent process has finished, and then it will spawn (technically: execvpe) a new puppet process to run puppet again. The module is smart enough to inspect the parent puppet process, and it knows how to run the child puppet. Once the new child puppet process is running, you won’t see any leftover process id from the parent Exec[‘again’] tool.

How do I tell it to run?

It’s quite simple, all you have to do is import my puppet module, and then notify the magic Exec[‘again’] type that my class defines. Example:

include common::again

$some_str = 'ttboj is awesome'
# you can notify from any type that can generate a notification!
# typically, using exec is the most common, but is not required!
file { '/tmp/foo':
    content => "${some_str}\n",
    notify => Exec['again'], # notify puppet!
}

How do I decide if I need to run again?

This depends on your module, and isn’t always a trivial thing to figure out. In one case, I had to build a finite state machine in puppet to help decide whether this was necessary or not. In some cases, the solution might be simpler. In all cases, this is an advanced technique, so you’ll probably already have a good idea about how to figure this out if you need this type of technique.

Can I introduce a minimum delay before the next run happens?

Yes, absolutely. This is particularly useful if you are building a distributed system, and you want to give other hosts a chance to export resources before each successive run. Example:

include common::again

# when notified, this will run puppet again, delta sec after it ends!
common::again::delta { 'some-name':
    delta => 120, # 2 minutes (pick your own value)
}

# to run the above Exec['again'] you can use:
exec { '/bin/true':
    onlyif => '/bin/false', # TODO: some condition
    notify => Common::Again::Delta['some-name'],
}

Can you show me a real-world example of this module?

Have a look at the Puppet-Gluster module. This module was one of the reasons that I wrote the Exec[‘again’] functionality.

Are there any caveats?

Maybe! It’s possible to cause a fast “infinite loop”, where Puppet gets run unnecessarily. This could effectively DDOS your puppetmaster if left unchecked, so please use with caution! Keep in mind that puppet typically runs in an infinite loop already, except with a 30 minute interval.

Help, it won’t stop!

Either your code has become sentient and has decided it wants to enable kerberos, or you’ve got a bug in your Puppet manifests. If you fix the bug, things should eventually go back to normal. To kill the process that’s re-spawning puppet, look for it in your process tree. Example:

[root@server ~]# ps auxww | grep again[.py]
root 4079 0.0 0.7 132700 3768 ? S 18:26 0:00 /usr/bin/python /var/lib/puppet/tmp/common/again/again.py --delta 120
[root@server ~]# killall again.py
[root@server ~]# echo $?
0
[root@server ~]# ps auxww | grep again[.py]
[root@server ~]# killall again.py
again.py: no process killed
[root@server ~]#

Does this work with puppet running as a service or with puppet agent --test?

Yes.

How was the spawn/exec logic implemented?

The spawn/exec logic was implemented as a standalone python program that gets copied to your local system, and does all the heavy lifting. Please have a look and let me know if you can find any bugs!

Conclusion

I hope you enjoyed this addition to your toolbox. Please remember to use it with care. If you have a legitimate use for it, please let me know so that I can better understand your use case!

Happy hacking,

James

 

by James on February 21, 2014

Speaking at SCALE today!

I’ll be giving a talk at SCALE today about automatically deploying GlusterFS with Puppet-Gluster and Vagrant. I’ll be giving some live demos, and this will cover some of the material from:

Automatically deploying GlusterFS with Puppet-Gluster + Vagrant!

and it will contain excerpts from:

Screencasts of Puppet-Gluster + Vagrant

I’ll also be talking about some new upcoming features, and am happy to answer all of your questions!

The talk will be part of Infrastructure.next and is starting around 1:30 or 2pm in the Century AB room.

If you’re not able to attend, or you’d like a more personalized demo, send me a note! I’ll be around for the next few days.

Thanks to Joe Brockmeier and John Mark Walker for hosting the event and sending me here.

Happy Hacking,

James