All posts tagged vagrant-libvirt


by James Shubin on August 18, 2015

Thanking Oh-My-Vagrant contributors for version 1.0.0

The Oh-My-Vagrant project became public about one year ago. At the time it was more of a fancy template than a robust project, but 188 commits (and counting) later, it has become surprisingly useful and mature.

james@computer:~/code/oh-my-vagrant$ git rev-list HEAD --count
188
james@computer:~/code/oh-my-vagrant$ git log $(git log --pretty=format:%H|tail -1)
commit 4faa6c89cce01c62130ef5a6d5fa0fff833da371
Author: James Shubin <james@shubin.ca>
Date:   Thu Aug 28 01:08:03 2014 -0400

    Initial commit of vagrant-puppet-docker-template...
    
    This is an attempt to prototype a default environment for
    vagrant+puppet+docker hacking. More improvements are needed for it to be
    useful, but it's probably already useful as a reference for now.

It would be easy to take most of the credit for getting the project this far, as I’ve been responsible for about 87% of the commits, but as is common, the numbers don’t tell the whole story. It is also a bug (though hopefully just an artifact) that I’ve had such a large percentage of the commits. It’s quite common for a new project to start this way, but for Free Software to succeed long-term, it’s essential that the users become the contributors. Let’s try to change that going forward.

james@computer:~/code/oh-my-vagrant$ git shortlog -s | sort -rn
   165    James Shubin
     5    Vasyl Kaigorodov
     4    Randy Barlow
     2    Scott Collier
     2    Milan Zink
     2    Christoph Görn
     2    aweiteka
     1    scollier
     1    Russell Tweed
     1    ncoghlan
     1    John Browning
     1    Flavio Fernandes
     1    Carsten Clasohm
james@computer:~/code/oh-my-vagrant$ echo '165/188*100' | bc -l
87.76595744680851063800

The true story behind these 188 commits is the living history of the past year. Countless hours testing the code, using the project, suggesting features, getting hit by bugs, debugging issues, patching those bugs, and so on… If you could see an accurate graph of the number of hours put into the project, you’d see a much longer list of individuals, and I would have nowhere close to 87% of that breakdown.

Contributions are important!

Contributions are important, and patches especially help. Patches from your users are what make something a community project as opposed to two separate camps of consumers and producers. It’s about time we singled out some of those contributors!

Vasyl Kaigorodov

Vasyl is a great hacker who first fixed the broken networking in OMV. Before his work was merged, it was not possible to run two different OMV environments at the same time. Now networking makes a lot more sense. Unfortunately the GitHub contributors graph doesn’t acknowledge his work because he doesn’t have a GitHub account. Shame on them!

Randy Barlow (bowlofeggs)

Randy came up with the idea for “mainstream mode”, and while his initial proof of concept didn’t quite work, the idea was good. His time budget didn’t afford the project this new feature, but he has sent in some other patches, including some tweaks used by the Pulp Vagrantfile. He’s got a patch or two pending on his TODO list which we’re looking forward to, as he finishes the work to port Pulp to OMV.

Scott Collier

Scott is a great model user. He gets very enthusiastic, he’s great at testing things out and complaining if they don’t behave as he’d like, and if you’re lucky, you can browbeat him into writing a quick patch or two. He actually has three commits in the project so far, which would show up correctly above if he had set his git user variables correctly 😉 Thanks for spending the time to deal with OMV when there was a lot more cruft, and fewer features. I look forward to your next patch!

Milan Zink

Milan is a ruby expert who fixed the ruby xdg bugs we had in an earlier version of the project. Because of his work new users don’t even realize that there was ever an issue!

Christoph Görn

Christoph has been an invaluable promoter and dedicated user of the project. His work pushing OMV to the limit has generated real world requirements and feature requests, which have made the project useful for real users! It’s been hard to say no when he opens an issue ticket, but I’ve been able to force him to write a patch or two as well.

Russell Tweed

Russell is a new OMV user who jumped right into the code and sent in a patch for adding an arbitrary number of disks to OMV machines. As a first time contributor, I thank him for his patch and for withstanding the number of reviews it had to go through. It’s finally merged, even though we might have let one bug (now fixed) slip in too. I particularly like his patch, because I actually wrote the initial patch to add extra disks support to vagrant-libvirt, and I’m pleased to see it get used one level up!

John Browning

John actually found an edge case in the subscription manager code and after an interesting discussion, patched the issue. More users means more edge cases will fall out! Thanks John!

Flavio Fernandes

Even though Flavio is an OSX user, we’re thankful that he wrote and tested the virtualbox patch for OMV. OMV still needs an installer for OSX + mainstream mode, but once that’s done, we know the rest will work great!

Carsten Clasohm

Carsten actually wrote a lovely patch for a subtle OMV issue that is very hard to reproduce. I was able to merge his patch on the first review, and in fact it looked nicer than how I would have written it!

Nick Coghlan

Nick is actually a python hacker, so getting a ruby contribution proved a bit tricky! Fortunately, he is also a master of words, and helped clean up the documentation a bit. We’d love to get a few more doc patches if you have the time and some love!

Aaron Weitekamp

Even though aweiteka (as we call him) has only added five lines of source (two of which were comments), he was an early user and tester, and we thank him for his contributions! Hopefully we’ll see him in our commit logs in the future!

Máirín Duffy

Máirín is a talented artist who does great work using free tools. I asked her if she’d be kind enough to make us a logo, and I’ll hopefully be able to show it to you soon!

Everyone else

To everyone else who isn’t in the commit log yet, thank you for using and testing OMV, finding bugs, opening issues and even for your social media love in getting the word out! I hope to get a patch from you soon!

The power of the unknown user

Unknown users are sometimes hard to measure, but a recently introduced bug was reported to me independently by two different (and previously unknown) users very soon after it appeared! I’m sorry for letting the bug in, but I am glad that people picked up on it so quickly! I’d love to have your help in improving our automated test infrastructure!

The AUTHORS file

Every good project needs a “hall of fame” for its contributors. That’s why, starting today, there is an AUTHORS file, and if you’re a contributor, we urge you to send a one-line patch with your name, so it can be immortalized in the project forever. We could try to generate this file with git log, but that would remove the prestige behind getting your first and second patches in. If you’re not in the AUTHORS file, and you should be, send me your patch already!
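If you’ve never sent a patch before, this is roughly all it takes (a sketch using plain git; the name, email address and commit message are placeholders):

$ echo 'Jane Hacker <jane@example.com>' >> AUTHORS
$ git add AUTHORS && git commit -m 'Add myself to the AUTHORS file'
$ git format-patch -1    # attach the result to an email, or open a pull request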

Version 1.0.0

I think it’s time. The project deserves a 1.0.0 release, and I’ve now made it so. Please share and enjoy!

I hope you enjoy this project, and I look forward to receiving your patch.

Happy Hacking!

James

PS: Thanks to Brian Bouterse for encouraging me to focus on community, and for inspiring me to write this post!

by James Shubin on September 3, 2014

Introducing: Oh My Vagrant!

If you’re a reader of my code or of this blog, it’s no secret that I hack on a lot of puppet and vagrant. Recently I’ve fooled around with a bit of docker, too. I realized that the vagrant environments I built for puppet-gluster and puppet-ipa needed to be generalized, and they needed new features too. Therefore…

Introducing: Oh My Vagrant!

Oh My Vagrant is an attempt to provide an easy-to-use development environment so that you can be up and hacking quickly, and focusing on the real devops problems. The README explains my choice of project name.

Prerequisites:

I use a Fedora 20 laptop with vagrant-libvirt. Efforts are underway to create an RPM of vagrant-libvirt, but in the meantime you’ll have to read: Vagrant on Fedora with libvirt (reprise). This should work with other distributions too, but I don’t test them very often. Please step up and help test :)

The bits:

First clone the oh-my-vagrant repository and look inside:

git clone --recursive https://github.com/purpleidea/oh-my-vagrant
cd oh-my-vagrant/vagrant/

The included Vagrantfile is the current heart of this project. You’re welcome to use it as a template and edit it directly, or you can use the facilities it provides. I’d recommend starting with the latter, which I’ll walk you through now.

Getting started:

Start by running vagrant status (vs) and taking a look at the vagrant.yaml file that appears.

james@computer:/oh-my-vagrant/vagrant$ ls
Dockerfile  puppet/  Vagrantfile
james@computer:/oh-my-vagrant/vagrant$ vs
Current machine states:

template1                 not created (libvirt)

The Libvirt domain is not created. Run `vagrant up` to create it.
james@computer:/oh-my-vagrant/vagrant$ cat vagrant.yaml 
---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.0
:sync: rsync
:puppet: false
:docker: false
:cachier: false
:vms: []
:namespace: template
:count: 1
:username: ''
:password: ''
:poolid: []
:repos: []
james@computer:/oh-my-vagrant/vagrant$

Here you’ll see the list of resultant machines that vagrant thinks are defined (currently just template1), and a bunch of different settings in YAML format. The values of these settings help define the vagrant environment that you’ll be hacking in.

Changing settings:

The settings exist so that your vagrant environment is dynamic and can be changed quickly. You can change the settings by editing the vagrant.yaml file. They will be used by vagrant when it runs. You can also change them at runtime with --vagrant-foo flags. Running a vagrant status will show you how vagrant currently sees the environment. Let’s change the number of machines that are defined. Note the location of the --vagrant-count flag and how it doesn’t work when positioned incorrectly.

james@computer:/oh-my-vagrant/vagrant$ vagrant status --vagrant-count=4
An invalid option was specified. The help for this command
is available below.

Usage: vagrant status [name]
    -h, --help                       Print this help
james@computer:/oh-my-vagrant/vagrant$ vagrant --vagrant-count=4 status
Current machine states:

template1                 not created (libvirt)
template2                 not created (libvirt)
template3                 not created (libvirt)
template4                 not created (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
james@computer:/oh-my-vagrant/vagrant$ cat vagrant.yaml 
---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.0
:sync: rsync
:puppet: false
:docker: false
:cachier: false
:vms: []
:namespace: template
:count: 4
:username: ''
:password: ''
:poolid: []
:repos: []
james@computer:/oh-my-vagrant/vagrant$

As you can see in the above example, changing the count variable to 4 causes vagrant to see four possible machines in the vagrant environment. You can change as many of these parameters at a time as you like by using the --vagrant- flags, or you can edit the vagrant.yaml file. The latter is much easier and more expressive, in particular for expressing complex data types. The former is much more powerful when building one-liners, such as:

vagrant --vagrant-count=8 --vagrant-namespace=gluster up gluster{1..8}

which should bring up eight hosts in parallel, named gluster1 to gluster8.

Other VMs:

Since one often wants to be more expressive in machine naming and heterogeneity of machine type, you can specify a list of machines to define in the vms array of the vagrant.yaml file. If you’d rather define these machines in the Vagrantfile itself, you can also set them up in the vms array defined there. It is empty by default, but it is easy to uncomment one of the many examples. These will be used as the defaults if nothing else overrides the selection in the vagrant.yaml file. I’ve uncommented a few to show you this functionality:

james@computer:/oh-my-vagrant/vagrant$ grep example[124] Vagrantfile 
    {:name => 'example1', :docker => true, :puppet => true, },    # example1
    {:name => 'example2', :docker => ['centos', 'fedora'], },    # example2
    {:name => 'example4', :image => 'centos-6', :puppet => true, },    # example4
james@computer:/oh-my-vagrant/vagrant$ rm vagrant.yaml # note that I remove the old settings
james@computer:/oh-my-vagrant/vagrant$ vs
Current machine states:

template1                 not created (libvirt)
example1                  not created (libvirt)
example2                  not created (libvirt)
example4                  not created (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
james@computer:/oh-my-vagrant/vagrant$ cat vagrant.yaml 
---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.0
:sync: rsync
:puppet: false
:docker: false
:cachier: false
:vms:
- :name: example1
  :docker: true
  :puppet: true
- :name: example2
  :docker:
  - centos
  - fedora
- :name: example4
  :image: centos-6
  :puppet: true
:namespace: template
:count: 1
:username: ''
:password: ''
:poolid: []
:repos: []
james@computer:/oh-my-vagrant/vagrant$ vim vagrant.yaml # edit vagrant.yaml file...
james@computer:/oh-my-vagrant/vagrant$ cat vagrant.yaml 
---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.0
:sync: rsync
:puppet: false
:docker: false
:cachier: false
:vms:
- :name: example1
  :docker: true
  :puppet: true
- :name: example4
  :image: centos-7.0
  :puppet: true
:namespace: template
:count: 1
:username: ''
:password: ''
:poolid: []
:repos: []
james@computer:/oh-my-vagrant/vagrant$ vs
Current machine states:

template1                 not created (libvirt)
example1                  not created (libvirt)
example4                  not created (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
james@computer:/oh-my-vagrant/vagrant$

The above output might seem a little long, but if you try these steps out in your terminal, you should get the hang of it fairly quickly. If you poke around in the Vagrantfile, you should see the format of the vms array. Each element in the array should be a dictionary, where the keys correspond to the flags you wish to set. Look at the examples if you need help with the formatting.

Other settings:

As you saw, other settings are available. There are a few notable ones that are worth mentioning, and this will also help explain some of the other features that this Vagrantfile provides. A combined command-line example follows the list.

  • domain: This sets the domain part of each VM’s FQDN. The default is example.com, which should work for most environments, but you’re welcome to change this as you see fit.
  • network: This sets the network that is used for the VMs. You should pick a network/CIDR that doesn’t conflict with any other networks on your machine. This is particularly useful when you have multiple vagrant environments hosted off of the same laptop.
  • image: This is the default base image to use for each machine. It can be overridden per machine in the vms list of dictionaries.
  • sync: This is the sync type used for vagrant. rsync is the default and works in all environments. If you’d prefer to fight with the nfs mounts, or try out 9p, both those options are available too.
  • puppet: This option enables or disables integration with puppet. It is possible to override this per machine. This functionality will be expanded in a future version of Oh My Vagrant.
  • docker: This option enables and lists the docker images to set up per vm. It is possible to override this per machine. This functionality will be expanded in a future version of Oh My Vagrant.
  • namespace: This sets the namespace that your Vagrantfile operates in. This value is used as a prefix for the numbered VMs, as the libvirt network name, and as the primary puppet module to execute.
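Here’s the combined example, pulling several of these settings together on one command line. This is just a sketch: --vagrant-count and --vagrant-namespace appear elsewhere in this post, while --vagrant-image and --vagrant-domain are assumptions that simply follow the same --vagrant-<setting> naming pattern:

vagrant --vagrant-image=centos-7.0 --vagrant-domain=lab.example.com --vagrant-namespace=lab --vagrant-count=2 status

As before, the resulting values get written into vagrant.yaml, and vagrant status shows you how vagrant now sees the environment.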

More on the docker option:

For now, if you specify a list of docker images, they will be automatically pulled into your vm environment. It is recommended that you pre-cache them in an existing base image to save bandwidth. Custom base vagrant images can easily be built with vagrant-builder, but this process is currently undocumented.

I’ll try to write up a post on this process if there are enough requests. To keep you busy in the meantime, I’ve published a CentOS 7 vagrant base image that includes docker images for CentOS and Fedora. It is being graciously hosted by the GlusterFS community.

What other magic does this all do?

There is a certain amount of magic glue that happens behind the scenes. Here’s a list of some of it:

  • Idempotent /etc/hosts based DNS
  • Easy docker base image installation
  • IP address calculations and assignment with ipaddr
  • Clever cleanup on ‘vagrant destroy’
  • Vagrant docker base image detection
  • Integration with Puppet

If you don’t understand what all of those mean, and you don’t want to go source diving, don’t worry about it! I will explain them in greater detail when it’s important, and hopefully for now everything “just works” and stays out of your way.

Future work:

There’s still a lot more that I have planned, and some parts of the Vagrantfile need cleanup, but I figured I’d try and release this early so that you can get hacking right away. If it’s useful to you, please leave a comment and let me know.

Happy hacking,

James

 

by James Shubin on May 13, 2014

Vagrant on Fedora with libvirt (reprise)

Vagrant has become the de facto tool for devops. Faster iterations, clean environments, and less overhead. This isn’t an article about why you should use Vagrant. This is an article about how to get up and running with Vagrant on Fedora with libvirt easily!

Background:

This article is an update of my original Vagrant on Fedora with libvirt article. There is still lots of good information in that article, but this one should be easier to follow and uses updated versions of Vagrant and vagrant-libvirt.

Why vagrant-libvirt?

Vagrant ships by default with support for virtualbox. This makes sense as a default since it is available on Windows, Mac, and GNU/Linux. Real hackers use GNU/Linux, and in my opinion the best tool for GNU/Linux is vagrant-libvirt. Proprietary, closed source platforms aren’t hackable and therefore aren’t cool!

Another advantage to using the vagrant-libvirt plugin is that it plays nicely with the existing ecosystem of libvirt tools. You can use virsh, virt-manager, and guestfish alongside Vagrant, and if your development work needs to go into production, you can be confident in knowing that it was already tested on the same awesome KVM virtualization platform that your servers run.

Prerequisites:

Let’s get going. What do you need?

  • A Fedora 20 machine

I recommend hardware that supports VT extensions. Most hardware does these days. This should also work with other GNU/Linux distros, but I haven’t tested them.

Installation:

I’m going to go through this in a logical hacking order. This means you could group all the yum install commands into a single execution at the beginning, but you would learn much less by doing so.

First install some of your favourite hacking dependencies. I did this on a minimal, headless F20 installation. You might want to add some of these too:

# yum install -y wget tree vim screen mtr nmap telnet tar git

Update the system to make sure it’s fresh:

# yum update -y

Download Vagrant version 1.5.4. No, don’t use the latest version, it probably won’t work! Vagrant has new releases practically as often as there are sunsets, and they typically cause lots of breakages.

$ wget https://dl.bintray.com/mitchellh/vagrant/vagrant_1.5.4_x86_64.rpm

and install it:

# yum install -y vagrant_1.5.4_x86_64.rpm

RVM installation:

In order to get vagrant-libvirt working, you’ll need some ruby dependencies. It turns out that RVM seems to be the best way to get exactly what you need. Use the sketchy RVM installer:

# \curl -sSL https://get.rvm.io | bash -s stable

If you don’t know why that’s sketchy, then you probably shouldn’t be hacking! I did that as root, but it probably works when you run it as a normal user. At this point rvm should be installed. The last important thing you’ll need to do is to add yourself to the rvm group. This is only needed if you installed rvm as root:

# usermod -aG rvm <username>

You’ll probably need to logout and log back in for this to take effect. Run:

$ groups

to make sure you can see rvm in the list. If you ran rvm as root, you’ll want to source the rvm.sh file:

$ source /etc/profile.d/rvm.sh

or simply use a new terminal. If you ran it as a normal user, I think RVM adds something to your ~/.bashrc. You might want to reload it:

$ source ~/.bashrc

At this point RVM should be working. Let’s see which rubies it can install:

$ rvm list known

Ruby version ruby-2.0.0-p353 seems closest to what is available on my Fedora 20 machine, so I’ll use that:

$ rvm install ruby-2.0.0-p353

If the exact patch number isn’t available, choose what’s closest. Installing ruby requires a bunch of dependencies. The rvm install command will ask yum for a bunch of dependencies, but if you’d rather install them yourself, you can run:

# yum install -y patch libyaml-devel libffi-devel glibc-headers autoconf gcc-c++ glibc-devel patch readline-devel zlib-devel openssl-devel bzip2 automake libtool bison

GEM installation:

Now we need the gem dependencies for the vagrant-libvirt plugin. These gems happen to have their own build dependencies, but thankfully I’ve already figured those out for you:

# yum install -y libvirt-devel libxslt-devel libxml2-devel

Now, install the nokogiri gem that vagrant-libvirt needs:

$ gem install nokogiri -v '1.5.11'

and finally we can install the actual vagrant-libvirt plugin:

$ vagrant plugin install --plugin-version 0.0.16 vagrant-libvirt

You don’t have to specify the --plugin-version 0.0.16 part, but doing so will make sure that you get a version that I have tested to be compatible with Vagrant 1.5.4, in case a newer vagrant-libvirt release isn’t compatible with the Vagrant version you’re using. If you’re feeling brave, please test newer versions, report bugs, and write patches!
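If you want to double check what ended up installed, vagrant can list its plugins and their versions (the exact output formatting may differ between Vagrant releases, and you may see other plugins listed too):

$ vagrant plugin list
vagrant-libvirt (0.0.16)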

Making Vagrant more useful:

Vagrant should basically work at this point, but it’s missing some awesome. I’m proud to say that I wrote this awesome. I recommend my bash function and alias additions. If you’d like to include them, you can run:

$ wget https://gist.githubusercontent.com/purpleidea/8071962/raw/ee27c56e66aafdcb9fd9760f123e7eda51a6a51e/.bashrc_vagrant.sh
$ echo '. ~/.bashrc_vagrant.sh' >> ~/.bashrc
$ . ~/.bashrc    # reload

to pull in my most used Vagrant aliases and functions. I’ve written about them before; if you’re interested, please have a look at my earlier articles.

KVM/QEMU installation:

As I mentioned earlier, I’m assuming you have a minimal Fedora 20 installation, so you might not have all the libvirt pieces installed! Here’s how to install any potentially missing pieces:

# yum install -y libvirt{,-daemon-kvm}

This should pull in a whole bunch of dependencies too. You will need to start and (optionally) enable the libvirtd service:

# systemctl start libvirtd.service
# systemctl enable libvirtd.service

You’ll notice that I’m using the systemd commands instead of the deprecated service command. My biggest (only?) gripe with systemd is that the command line tools aren’t as friendly as they could be! The systemctl equivalent requires more typing, and makes it harder to start or stop the same service in quick succession, because it buries the action in the middle of the command instead of leaving it at the end!

The libvirtd service should finally be running. On my machine, it comes with a default network which got in the way of my vagrant-libvirt networking. If you want to get rid of it, you can run:

# virsh net-destroy default
# virsh net-undefine default

and it shouldn’t bother you anymore. One last hiccup. If it’s your first time installing KVM, you might run into bz#950436. To workaround this issue, I had to run:

# rmmod kvm_intel
# rmmod kvm
# modprobe kvm
# modprobe kvm_intel

Without this “module re-loading” you might see this error:

Call to virDomainCreateWithFlags failed: internal error: Process exited while reading console log output: char device redirected to /dev/pts/2 (label charserial0)
Could not access KVM kernel module: Permission denied
failed to initialize KVM: Permission denied
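If the module reload worked, both kvm modules should show up as loaded again; a quick way to check:

$ lsmod | grep kvm    # you should see both kvm and kvm_intel listed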

Additional installations:

To make your machine somewhat more palatable, you might want to consider installing bash-completion:

# yum install -y bash-completion

You’ll also probably want to add the PolicyKit (polkit) .pkla file that I recommend in my earlier article. Typically that means adding something like:

[Allow james libvirt management permissions]
Identity=unix-user:james
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes

as root to somewhere like:

/etc/polkit-1/localauthority/50-local.d/vagrant.pkla

Your machine should now be set up perfectly! The last thing you’ll need to do is to make sure that you get a Vagrantfile that does things properly! Here are some recommendations.

Shared folders:

Shared folders are a mechanism that Vagrant uses to pass data into (and sometimes out of) the virtual machines that it is managing. Typically you can use NFS, rsync, and some provider-specific folder sharing like 9p. Using rsync is the simplest to set up, and works exceptionally well. Make sure you include the following line in your Vagrantfile:

config.vm.synced_folder './', '/vagrant', type: 'rsync'

If you want to see an example of this in action, you can have a look at my puppet-gluster Vagrantfile. If you are using the puppet apply provisioner, you will have to set it to use rsync as well:

puppet.synced_folder_type = 'rsync'

KVM performance:

Due to a regression in vagrant-libvirt, the default driver used for virtual machines is qemu. If you want to use the accelerated KVM domain type, you’ll have to set it:

libvirt.driver = 'kvm'

This typically gives me a 5x performance increase over plain qemu. This fix is available in the latest vagrant-libvirt version. The default has been set to KVM in the latest git master.
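To verify which driver a machine actually ended up with, you can inspect the libvirt domain after a vagrant up. This is just a sketch; the domain name is a placeholder, since vagrant-libvirt derives it from your project and machine names:

$ virsh list --all                         # find the domain name vagrant-libvirt created
$ virsh dumpxml <domain-name> | grep '<domain type'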

Dear internets!

I think this was fairly straightforward. You could probably even put all of these commands in a shell script and just run it to get it all going. What we really need is proper RPM packaging. If you can help out, that would be excellent!

If we had a version of vagrant-libvirt alongside a matching Vagrant version in Fedora, then developers and hackers could target that, and we could easily exchange dev environments, hackers could distribute product demos as full vagrant-libvirt clusters, and I could stop having to write these types of articles 😉

I hope this was helpful to you. Please let me know in the comments.

Happy hacking,

James

 

by James Shubin on January 27, 2014

Screencasts of Puppet-Gluster + Vagrant

I decided to record some screencasts to show how easy it is to deploy GlusterFS using Puppet-Gluster+Vagrant. You can follow along even if you don’t know anything about Puppet or Vagrant. The hardest part of this process was producing the actual videos!

I recommend first reading my earlier articles if you’re planning on following along.

Without any further delay, here are the screencasts:

Part 1: Intro, and provisioning of the Puppet server.

Part 2: Initial building of the Gluster hosts.

Part 3: Finishing the Gluster builds.

Part 4: GlusterFS client mounting and tests.

Part 5: Mixed bag of code, infrastructure tours, examples and other details.

I hope you enjoyed these videos. Thank you to the Gluster.org community for hosting them. If you liked these videos, please consider sponsoring some of my work, or making a donation!

As a side note, the only screencast tool that worked was gtk-recordmydesktop; however, it deleted my second recording (which had to be re-recorded), and the audio stopped working one minute into my third recording (which then had to be separately recorded and mixed in). Amazingly, pitivi was the only tool which worked to properly mix them together!

Happy Hacking,

James

PS: Please note, you may not sell, edit, redistribute, perform, or host these videos elsewhere without my permission. I especially don’t want to see them on YouTube until Google lets me unlink my YouTube account! If you do want my permission to use these videos for something, contact me, and we can work something out. I’ll surely allow it if it’s not for something evil. If you’d rather have an interactive, live demo, let me know!

by James Shubin on January 20, 2014

Building base images for Vagrant with a Makefile

I needed a base image “box” for my Puppet-Gluster+Vagrant work. It would have been great if good boxes already existed, and even better if it were easy to build my own. As it turns out, I wasn’t able to satisfy either of these conditions, so I’ve had to build one myself! I’ve published all of my code, so that you can use these techniques and tools too!

Status quo:

Having an NIH problem is bad for your vision, and it’s best to benefit from existing tools before creating your own. I first tried using vagrant-cachier, and then veewee, and packer. Vagrant-cachier is a great tool, but it turned out not to be very useful because there weren’t any base images available for download that met my needs. Veewee and packer can build those images, but they both failed in doing so for different reasons. Hopefully this situation will improve in the future.

Writing a script:

I started by hacking together a short shell script of commands for building base images. There wasn’t much programming involved as the process was fairly linear, but it was useful for figuring out what needed to get done.

I decided to use the excellent virt-builder command to put together the base image. This is exactly what it’s good at doing! To install it on Fedora 20, you can run:

$ sudo yum install libguestfs-tools

It wasn’t available in Fedora 19, but after a lot of pain, I managed to build (mostly correct?) packages. I have posted them online if you are brave (or crazy?) enough to want them.
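To give you an idea of what the tool does, here’s a minimal hand-rolled invocation. This is only a sketch: the package list, root password and disk size are illustrative, and my actual build scripts do quite a bit more:

$ virt-builder centos-6 --size 40G --format qcow2 -o /tmp/builder.img \
    --install puppet,rsync,screen \
    --root-password password:vagrant \
    --run-command 'yum update -y'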

Using the right tool:

After building a few images, I realized that a shell script was the wrong tool, and that it was time for an upgrade. What was the right tool? GNU Make! After working on this for more hours than I’m ready to admit, I present to you a lovingly crafted virtual machine base image (“box”) builder:

Makefile

The Makefile itself is quite compact. It uses a few shell scripts to do some of the customization, and builds a clean image in about ten minutes. To use it, just run make.

Customization:

At the moment, it builds x86_64, CentOS 6.5+ machines for vagrant-libvirt, but you can edit the Makefile to build a custom image of your choosing. I’ve gone out of my way to add an $(OUTPUT) variable to the Makefile so that your generated files get saved in /tmp/ or somewhere outside of your source tree.
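For example, assuming $(OUTPUT) is an ordinary Make variable (not marked override), you should be able to redirect the build somewhere else straight from the command line:

$ make OUTPUT=/tmp/builder    # put the generated files under /tmp/builder instead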

Download the image:

If you’d like to download the image that I generated, it is being generously hosted by the Gluster community here. If you’re using the Vagrantfile from my Puppet-Gluster+Vagrant setup, then you don’t have to download it manually; this will happen automatically.

Open issues:

The biggest issue with the images is that SELinux gets disabled! You might be okay with this, but it’s actually quite unfortunate. It is disabled to avoid the SELinux relabelling that happens on first boot, as this overhead defeats the usefulness of a fast vagrant deployment. If you know of a way to fix this problem, please let me know!

Example output:

If you’d like to see this in action, but don’t want to run it yourself, here’s an example run:

$ date && time make && date
Mon Jan 20 10:57:35 EST 2014
Running templater...
Running virt-builder...
[   1.0] Downloading: http://libguestfs.org/download/builder/centos-6.xz
[   4.0] Planning how to build this image
[   4.0] Uncompressing
[  19.0] Resizing (using virt-resize) to expand the disk to 40.0G
[ 173.0] Opening the new disk
[ 181.0] Setting a random seed
[ 181.0] Setting root password
[ 181.0] Installing packages: screen vim-enhanced git wget file man tree nmap tcpdump htop lsof telnet mlocate bind-utils koan iftop yum-utils nc rsync nfs-utils sudo openssh-server openssh-clients
[ 212.0] Uploading: files/epel-release-6-8.noarch.rpm to /root/epel-release-6-8.noarch.rpm
[ 212.0] Uploading: files/puppetlabs-release-el-6.noarch.rpm to /root/puppetlabs-release-el-6.noarch.rpm
[ 212.0] Uploading: files/selinux to /etc/selinux/config
[ 212.0] Deleting: /.autorelabel
[ 212.0] Running: yum install -y /root/epel-release-6-8.noarch.rpm && rm -f /root/epel-release-6-8.noarch.rpm
[ 214.0] Running: yum install -y bash-completion moreutils
[ 235.0] Running: yum install -y /root/puppetlabs-release-el-6.noarch.rpm && rm -f /root/puppetlabs-release-el-6.noarch.rpm
[ 239.0] Running: yum install -y puppet
[ 254.0] Running: yum update -y
[ 375.0] Running: files/user.sh
[ 376.0] Running: files/ssh.sh
[ 376.0] Running: files/network.sh
[ 376.0] Running: files/cleanup.sh
[ 377.0] Finishing off
Output: /home/james/tmp/builder/gluster/builder.img
Output size: 40.0G
Output format: qcow2
Total usable space: 38.2G
Free space: 37.3G (97%)
Running convert...
Running tar...
./Vagrantfile
./metadata.json
./box.img

real	9m10.523s
user	2m23.282s
sys	0m37.109s
Mon Jan 20 11:06:46 EST 2014
$

If you have any other questions, please let me know!

Happy hacking,

James

PS: Be careful when writing Makefiles. They can be dangerous if used improperly, and in fact I once took out part of my lib/ directory by running one. Whoops!

 

by James Shubin on January 8, 2014

Automatically deploying GlusterFS with Puppet-Gluster + Vagrant!

Puppet-Gluster was always about automating the deployment of GlusterFS. Getting your own Puppet server and the associated infrastructure running was never included “out of the box“. Today, it is! (This is big news!)

I’ve used Vagrant to automatically build these GlusterFS clusters. I’ve tested this with Fedora 20, and vagrant-libvirt. This won’t work with Fedora 19 because of bz#876541. I recommend first reading my earlier articles about Vagrant and Fedora.

Once you’re comfortable with the material in the above articles, we can continue…

The short answer:

$ sudo service nfs start
$ git clone --recursive https://github.com/purpleidea/puppet-gluster.git
$ cd puppet-gluster/vagrant/gluster/
$ vagrant up puppet && sudo -v && vagrant up

Once those commands finish, you should have four running gluster hosts, and a puppet server. The gluster hosts will still be building. You can log in and tail -F log files, or watch -n 1 gluster status commands.

The whole process including the one-time downloads took about 30 minutes. If you’ve got faster internet than I do, I’m sure you can cut that down to under 20. Building the gluster hosts themselves probably takes about 15 minutes.

Enjoy your new Gluster cluster!

Screenshots!

I took a few screenshots to make this more visual for you. I like to have virt-manager open so that I can visually see what’s going on:

The annex{1..4} machines are building in parallel. The valleys happened when the machines were waiting for the vagrant DHCP server (dnsmasq).

Here we can see two puppet runs happening on annex1 and annex4.

Notice the two peaks on the puppet server which correspond to the valleys on annex{1,4}.

Here’s another example, with four hosts working in parallel:

Can you answer why the annex machines have two peaks? Why are the second peaks bigger?

Tell me more!

Okay, let’s start with the command sequence shown above.

$ sudo service nfs start

This needs to be run once if the NFS server on your host is not already running. It is used to provide folder synchronization for the Vagrant guest machines. I offer more information about the NFS synchronization in an earlier article.

$ git clone --recursive https://github.com/purpleidea/puppet-gluster.git

This will pull down the Puppet-Gluster source, and all the necessary submodules.

$ cd puppet-gluster/vagrant/gluster/

The Puppet-Gluster source contains a vagrant subdirectory. I’ve included a gluster subdirectory inside it, so that your machines get a sensible prefix. In the future, this might not be necessary.

$ vagrant up puppet && sudo -v && vagrant up

This is where the fun stuff happens. You’ll need a base box image to run machines with Vagrant. Luckily, I’ve already built one for you, and it is generously hosted by the Gluster community.

The very first time you run this Vagrant command, it will download this image automatically, install it and then continue the build process. This initial box download and installation only happens once. Subsequent Puppet-Gluster+Vagrant deploys and re-deploys won’t need to re-download the base image!

This command starts by building the puppet server. Vagrant might use sudo to prompt you for root access. This is used to manage your /etc/exports file. After the puppet server is finished building, we refresh the sudo cache to avoid bug #2680.

The last vagrant up command starts up the remaining gluster hosts in parallel, and kicks off the initial puppet runs. As I mentioned above, the gluster hosts will still be building. Puppet automatically waits for the cluster to “settle” and enter a steady state (no host/brick changes) before it creates the first volume. You can log in and tail -F log files, or watch -n 1 gluster status commands.

At this point, your cluster is running and you can do whatever you want with it! Puppet-Gluster+Vagrant is meant to be easy. If this wasn’t easy, or you can think of a way to make this better, let me know!

I want N hosts, not 4:

By default, this will build four (4) gluster hosts. I’ve spent a lot of time writing a fancy Vagrantfile, to give you speed and configurability. If you’d like to set a different number of hosts, you’ll first need to destroy the hosts that you’ve built already:

$ vagrant destroy annex{1..4}

You don’t have to rebuild the puppet server because this command is clever and automatically cleans the old host entries from it! This makes re-deploying even faster!

Then, run the vagrant up command with the --gluster-count=<N> argument. Example:

$ vagrant up --gluster-count=8

This is also configurable in the puppet-gluster.yaml file which will appear in your vagrant working directory. Remember that before you change any configuration option, you should destroy the affected hosts first; otherwise vagrant can get confused about the current machine state.
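Putting the two steps together, a re-deploy at a different size is a one-liner (the -f flag simply skips vagrant’s per-machine confirmation prompts):

$ vagrant destroy -f annex{1..4} && vagrant up --gluster-count=8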

I want to test a different GlusterFS version:

By default, this will use the packages from:

https://download.gluster.org/pub/gluster/glusterfs/LATEST/

but if you’d like to pick a specific GlusterFS version you can do so with the --gluster-version=<version> argument. Example:

$ vagrant up --gluster-version='3.4.2-1.el6'

This is also stored, and configurable in the puppet-gluster.yaml file.

Does (repeating) this consume a lot of bandwidth?

Not really, no. There is an initial download of about 450MB for the base image. You’ll only ever need to download this again if I publish an updated version.

Each deployment will hit a public mirror to download the necessary puppet, GlusterFS and keepalived packages. The puppet server caused about 115MB of package downloads, and each gluster host needed about 58MB.

The great thing about this setup is that it is integrated with vagrant-cachier, so that you don’t have to re-download packages. When building the gluster hosts in parallel (the default), each host will have to download the necessary packages into a separate directory. If you’d like each host to share the same cache folder, and save yourself 58MB or so per machine, you’ll need to build in series. You can do this with:

$ vagrant up --no-parallel

I chose speed over preserving bandwidth, so I do not recommend this option. If your ISP has a bandwidth cap, you should find one that isn’t crippled.

Subsequent re-builds won’t download any packages that haven’t already been downloaded.

What should I look for?

Once the vagrant commands are done running, you’ll want to look at something to see how your machines are doing. I like to log in and run different commands. I usually log in like this:

$ vcssh --screen root@annex{1..4}

I explain how to do this kind of magic in an earlier post. I then run some of the following commands:

# tail -F /var/log/messages

This lets me see what the machine is doing. I can see puppet runs and other useful information fly by.

# ip a s eth2

This lets me check that the VIP is working correctly, and which machine it’s on. This should usually be the first machine.

# ps auxww | grep again.[py]

This lets me see if puppet has scheduled another puppet run. My scripts automatically do this when they decide that there is still building (deployment) left to do. If you see a python again.py process, this means it is sleeping and will wake up to continue shortly.

# gluster peer status

This lets me see which hosts have connected, and what state they’re in.

# gluster volume info

This lets me see if Puppet-Gluster has built me a volume yet. By default it builds one distributed volume named puppet.

I want to configure this differently!

Okay, you’re more than welcome to! All of the scripts can be customized. If you want to configure the volume(s) differently, you’ll want to look in the:

puppet-gluster/vagrant/gluster/puppet/manifests/site.pp

file. The gluster::simple class is well-documented, and can be configured however you like. If you want to do more serious hacking, have a look at the Vagrantfile, the source, and the submodules. Of course the GlusterFS source is a great place to hack too!

The network block, domain, and other parameters are all configurable inside of the Vagrantfile. I’ve tried to use sensible defaults where possible. I’m using example.com as the default domain. Yes, this will work fine on your private network. DNS is currently configured with the /etc/hosts file. I wrote some magic into the Vagrantfile so that the slow /etc/hosts shell provisioning only has to happen once per machine! If you have a better, functioning, alternative, please let me know!

What’s the greatest number of machines this will scale to?

Good question! I’d like to know too! I know that GlusterFS probably can’t scale to 1000 hosts yet. Keepalived can’t support more than 256 priorities, therefore Puppet-Gluster can’t scale beyond that count until a suitable fix can be found. There are likely some earlier limits inside of Puppet-Gluster due to maximum command line length constraints that you’ll hit. If you find any, let me know and I’ll patch them. Patches now cost around seven karma points. Other than those limits, there’s the limit of my hardware. Since this is all being virtualized on a lowly X201, my tests are limited. An upgrade would be awesome!

Can I use this to test QA releases, point releases and new GlusterFS versions?

Absolutely! That’s the idea. There are two caveats:

  1. Automatically testing QA releases isn’t supported until the QA packages have a sensible home on download.gluster.org or similar. This will need a change to:
    https://github.com/purpleidea/puppet-gluster/blob/master/manifests/repo.pp#L18

    The gluster community is working on this, and as soon as a solution is found, I’ll patch Puppet-Gluster to support it. If you want to disable automatic repository management (in gluster::simple) and manage this yourself with the vagrant shell provisioner, you’re able to do so now.

  2. It’s possible that new releases introduce bugs, or change things in a backwards incompatible way that breaks Puppet-Gluster. If this happens, please let me know so that something can get patched. That’s what testing is for!

You could probably use this infrastructure to test GlusterFS builds automatically, but that’s a project that would need real funding.

Updating the puppet source:

If you make a change to the puppet source, but you don’t want to rebuild the puppet virtual machine, you don’t have to. All you have to do is run:

$ vagrant provision puppet

This will update the puppet server with any changes made to the source tree at:

puppet-gluster/vagrant/gluster/puppet/

Keep in mind that the modules subdirectory contains all the necessary puppet submodules, and a clone of puppet-gluster itself. You’ll first need to run make inside of:

puppet-gluster/vagrant/gluster/puppet/modules/

to refresh the local clone. To see what’s going on, or to customize the process, you can look at the Makefile.

Client machines:

At the moment, this doesn’t build separate machines for gluster client use. You can either mount your gluster pool from the puppet server, another gluster server, or if you add the right DNS entries to your /etc/hosts, you can mount the volume on your host machine. If you really want Vagrant to build client machines, you’ll have to persuade me.
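For reference, mounting the default volume from your host (or any other client) looks roughly like this, assuming the glusterfs-fuse client is installed, the default volume name of puppet, and that annex1 resolves via your /etc/hosts:

# mkdir -p /mnt/gluster
# mount -t glusterfs annex1.example.com:/puppet /mnt/gluster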

What about firewalls?

Normally I use shorewall to manage the firewall. It integrates well with Puppet-Gluster, and does a great job. For an unexplained reason, it seems to be blocking my VRRP (keepalived) traffic, and I had to disable it. I think this is due to a libvirt networking bug, but I can’t prove it yet. If you can help debug this issue, please let me know! To reproduce it, enable the firewall and shorewall directives in:

puppet-gluster/vagrant/gluster/puppet/manifests/site.pp

and then get keepalived to work.

Re-provisioning a machine after a long wait throws an error:

You might be hitting: vagrant-cachier #74. If you do, there is an available workaround.

Keepalived shows “invalid passwd!” messages in the logs:

This is expected. This happens because we build a distributed password for use with keepalived. Before the cluster state has settled, the password will be different from host to host. Only when the cluster is coherent will the password be identical everywhere, which incidentally is the only time when the VIP matters.

How did you build that awesome base image?

I used virt-builder, some scripts, and a clever Makefile. I’ll be publishing this code shortly. Hasn’t this been enough to keep you busy for a while?

Are you still here?

If you’ve read this far, then good for you! I’m sorry that it has been a long read, but I figured I would try to answer everyone’s questions in advance. I’d like to hear your comments! I get very little feedback, and I’ve never gotten a single tip! If you find this useful, please let me know.

Until then,

Happy hacking,

James

by James Shubin on December 21, 2013

Vagrant vsftp and other tricks

As I previously wrote, I’ve been busy with Vagrant on Fedora with libvirt, and have even been submitting patches and issues! (This “closed” issue needs solving!) Here are some of the tricks that I’ve used while hacking away.

Default provider:

I should have mentioned this in my earlier article but I forgot: If you’re always using the same provider, you might want to set it as the default. In my case I’m using vagrant-libvirt. To do so, add the following to your .bashrc:

export VAGRANT_DEFAULT_PROVIDER=libvirt

This helps you avoid having to add --provider=libvirt to all of your vagrant commands.

Aliases:

All good hackers use shell (in my case bash) aliases to make their lives easier. I normally keep all my aliases in .bash_aliases, but I’ve grouped them together with some other vagrant related items in my .bashrc:

alias vp='vagrant provision'
alias vup='vagrant up'
alias vssh='vagrant ssh'
alias vdestroy='vagrant destroy'

Bash functions:

There are some things aliases just can’t do. For those situations, a bash function might be most appropriate. These go inside your .bashrc file. I’d like to share two with you:

Vagrant logging:

Sometimes it’s useful to get more information from the vagrant command that is running. To do this, you can set the VAGRANT_LOG environment variable to info, or debug.

function vlog {
	VAGRANT_LOG=info vagrant "$@" 2> vagrant.log
}

This is usually helpful for debugging a vagrant issue. When you use the vlog command, it works exactly as the normal vagrant command, but it spits out a vagrant.log logfile into the working directory.
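Usage is identical to plain vagrant; for example:

$ vlog up          # behaves exactly like 'vagrant up', but logs to ./vagrant.log
$ less vagrant.log # inspect the log afterwards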

Vagrant sftp:

Vagrant provides an ssh command, but it doesn’t offer an sftp command. The sftp tool is fantastically useful when you want to quickly push or pull a file over ssh. First I’ll show you the code so that you can practice reading bash:

# vagrant sftp
function vsftp {
	[ "$1" = '' ] || [ "$2" != '' ] && echo "Usage: vsftp  - vagrant sftp" 1>&2 && return 1
	wd=`pwd`		# save wd, then find the Vagrant project
	while [ "`pwd`" != '/' ] && [ ! -e "`pwd`/Vagrantfile" ] && [ ! -d "`pwd`/.vagrant/" ]; do
		#echo "pwd is `pwd`"
		cd ..
	done
	pwd=`pwd`
	cd $wd
	if [ ! -e "$pwd/Vagrantfile" ] || [ ! -d "$pwd/.vagrant/" ]; then
		echo 'Vagrant project not found!' 1>&2 && return 2
	fi

	d="$pwd/.ssh"
	f="$d/$1.config"

	# if mtime of $f is > than 5 minutes (5 * 60 seconds), re-generate...
	if [ `date -d "now - $(stat -c '%Y' "$f" 2> /dev/null) seconds" +%s` -gt 300 ]; then
		mkdir -p "$d"
		# we cache the lookup because this command is slow...
		vagrant ssh-config "$1" > "$f" || rm "$f"
	fi
	[ -e "$f" ] && sftp -F "$f" "$1"
}

This vsftp command will work from anywhere in the vagrant project root directory or deeper. It does this by first searching upwards until it gets to that directory. When it finds it, it creates a .ssh/ directory there, and stores the proper ssh_config stubs needed for sftp to find the machines. Then the sftp -F command runs, connecting you to your guest machine.

You’ll notice that the first time you connect takes longer than the second time. This is because the vsftp function caches the ssh_config file store operation. You might want to set a different timeout, or disable it altogether. I should also mention that this script searches for a Vagrantfile and .vagrant/ directory to identify the project root. If there is a more “correct” method, please let me know.
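Here’s roughly what using it looks like (template1 stands in for whatever machine name your Vagrantfile defines):

$ vsftp template1
sftp> put some-local-file.txt /tmp/
sftp> quit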

I’m particularly proud of this because it was a fun (and useful) hack. This function, and all the above .bashrc material is available here as an easy download.

NFS folder syncing/mounting:

The vagrant-libvirt provider supports one-way folder “synchronization” with rsync, and NFS mounting if you want two-way synchronization. I would love it if libvirt/vagrant-libvirt had support for real folder sharing via QEMU guest agent and/or 9p. Getting it to work wasn’t trivial. Here’s what I had to do:

Vagrantfile:

Define your “synced_folder” so that you add the :mount_options array:

config.vm.synced_folder "nfs/", "/tmp/nfs", :nfs => true, :mount_options => ['rw', 'vers=3', 'tcp']

When you specify anything in the :mount_options array, it erases all the defaults. The defaults are: vers=3,udp,rw. Obviously, choose your own directory paths! I’d rather see this use NFSv4, but it would have taken slightly more fighting. Please become a warrior today!

Firewall:

Firewalling is not automatic. To work around this, you’ll need to run the following commands before you use your NFS mount:

# systemctl restart nfs-server # this as root, the rest will prompt
$ firewall-cmd --permanent --zone public --add-service mountd
$ firewall-cmd --permanent --zone public --add-service rpc-bind
$ firewall-cmd --permanent --zone public --add-service nfs
$ firewall-cmd --reload

You should only have to do this once, but possibly each reboot. If you like, you can patch those commands to run at the top of your Vagrantfile. If you can help fix this issue permanently, the bug is here.
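To confirm the services were actually added after the reload, you can ask firewalld to list them (the output will vary with whatever else your firewall allows):

$ firewall-cmd --zone public --list-services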

Sudo:

The use of sudo to automatically edit /etc/exports is a neat trick, but it can cause some problems:

  1. It is often not needed
  2. It annoyingly prompts you
  3. It can bork your stdin

You get internet karma from me if you can help permanently solve any of those three problems.

Package caching:

As part of my deployments, Puppet installs a lot of packages. Repeatedly hitting a public mirror isn’t a very friendly thing to do, it wastes bandwidth, and it’s slower than having the packages locally! Setting up a proper mirror is a nice solution, but it comes with a management overhead. There is really only one piece of software that manages repositories properly, but using pulp has its own overhead, and is beyond the scope of this article. Because we’re not using our own local mirror, this won’t allow you to work entirely offline, as the metadata still needs to get downloaded from the net.

Installation:

As an interim solution, we can use vagrant-cachier. This plugin adds synced_folder entries for the apt/yum cache directories. Sneaky! Since this requires two-way sync, you’ll need to get NFS folder syncing/mounting working first. Luckily, I’ve already taught you that!

To pass the right options through to vagrant-cachier, you’ll need this patch. I’d recommend first installing the plugin with:

$ vagrant plugin install vagrant-cachier

and then patching your vagrant-cachier manually. On my machine, the file to patch is found at:

~/.vagrant.d/gems/gems/vagrant-cachier-0.5.0/lib/vagrant-cachier/provision_ext.rb

Vagrantfile:

Here’s what I put in my Vagrantfile:

# NOTE: this doesn't cache metadata, full offline operation not possible
config.cache.auto_detect = true
config.cache.enable :yum		# choose :yum or :apt
if not ARGV.include?('--no-parallel')	# when running in parallel,
	config.cache.scope = :machine	# use the per machine cache
end
config.cache.enable_nfs = true	# sets nfs => true on the synced_folder
# the nolock option is required, otherwise the NFSv3 client will try to
# access the NLM sideband protocol to lock files needed for /var/cache/
# all of this can be avoided by using NFSv4 everywhere. die NFSv3, die!
config.cache.mount_options = ['rw', 'vers=3', 'tcp', 'nolock']

Since multiple package managers operating on the same cache can cause locking issues, this plugin lets you decide if you want a shared cache for all machines, or if you want separate caches. I know that when I use the --no-parallel option it’s safe (enough) to use a common cache because Puppet does all the package installations, and they all happen during each of their first provisioning runs which are themselves sequential.

This isn’t perfect, it’s a hack, but hacks are fun, and in this case, very useful. If things really go wrong, it’s easy to rebuild everything! I still think a few sneaky bugs remain in vagrant-cachier with libvirt, so please find, report, and patch them if you can, although it might be NFS that’s responsible.

Puppet integration:

Integration with Puppet makes this all worthwhile. There are two scenarios:

  1. You’re using a puppetmaster, and you care about things like exported resources and multi-machine systems.
  2. You don’t have a puppetmaster, and you’re running standalone “single machine” code.

The correct answer is #1. Standalone Puppet code is bad for a few reasons:

  1. It neglects the multi-machine nature of servers, and software.
  2. You’re not able to use useful features like exported resources.
  3. It’s easy to (unknowingly) write code that won’t work with a puppetmaster.

If you’d like to see some code which was intentionally written to only work without a puppetmaster (and the puppetmaster friendly equivalent) have a look at my Pushing Puppet material.

There is one good reason for standalone Puppet code, and that is: bootstrapping. In particular, bootstrapping the Puppet server.

DNS:

You need some sort of working DNS setup for this to function properly. Whether that involves hacking up your /etc/hosts files, using one of the vagrant DNS plugins, or running dnsmasq/bind is up to you. I haven’t found an elegant solution yet; hopefully not telling you what I’ve done will push you to come up with something awesome!
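If you go the quick-and-dirty /etc/hosts route, the idea is simply to make the name ‘puppet’ (and friends) resolvable from the guests. The addresses and hostnames below are purely illustrative; use whatever your libvirt network actually hands out:

# example /etc/hosts entries (addresses are illustrative only)
192.168.123.100   puppet.example.com    puppet
192.168.123.101   vm1.example.com       vm1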

The :puppet_server provisioner:

This is the vagrant (puppet agent) provisioner that lets you talk to a puppet server. Here’s my config:

vm.vm.provision :puppet_server do |puppet|
	puppet.puppet_server = 'puppet'
	puppet.options = '--test'    # see the output
end

I’ve added the --test option so that I can see the output go by in my console. I’ve also disabled the puppet service on boot in my base machine image, so that it doesn’t run before I get the chance to watch it. If your base image already has the puppet service running on boot, you can disable it with a shell provisioner. That’s it!
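The shell provisioner only needs to run something like the following inside the guest. This is a sketch assuming a CentOS 6 style guest using SysV init; adjust the commands for systemd guests:

# stop puppet now, and keep it from starting on boot
service puppet stop
chkconfig puppet off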

The :puppet provisioner:

This is the standalone vagrant (puppet apply) provisioner. It seems to get misused quite a lot! As I mentioned, I’m only using it to bootstrap my puppet server. That’s why it’s a useful provisioner. Here’s my config:

vm.vm.provision :puppet do |puppet|
	puppet.module_path = 'puppet/modules'
	puppet.manifests_path = 'puppet/manifests'
	puppet.manifest_file = 'site.pp'
	# custom fact
	puppet.facter = {
		'vagrant' => '1',
	}
end

The puppet/{modules,manifests} directories should exist in your project root. I keep the puppet/modules/ directory fresh with a make script that rsyncs code in from my git directories, but that’s purely a personal choice. I added the custom fact as an example. All that’s left is for this code to build a puppetmaster!

Building a puppetmaster:

Here’s my site.pp:

node puppet {	# puppetmaster

	class { '::puppet::server':
		pluginsync => true,	# do we want to enable pluginsync?
		storeconfigs => true,	# do we want to enable storeconfigs?
		autosign => ['*'],	# NOTE: this is a temporary solution
		#repo => true,		# automatic repos (DIY for now)
		start => true,
	}

	class { '::puppet::deploy':
		path => '/vagrant/puppet/',	# puppet folder is put here...
		backup => false,		# don't use puppet to backup...
	}
}

As you can see, I include two classes. The first one installs and configures the puppetmaster, puppetdb, and so on. The second class runs an rsync from the vagrant deployed /vagrant/puppet/ directory to the proper directories inside of /etc/puppet/ and wherever else is appropriate. Each time I want to push new puppet code, I run vp puppet and then vp <host> of whichever host I want to test. You can even combine the two commands into a one-liner with && if you’d like.
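For example (vp being the ‘vagrant provision’ alias from above, and annex1 standing in for whichever host you want to test):

$ vp puppet && vp annex1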

The puppet-puppet module:

This is the hairy part. I’ve always had an issue with this module. I recently ripped out support for a lot of the old puppet 0.24 era features, and I’ve only since tested it on the recent 3.x series. A lot has changed from version to version, so it has been hard to follow this and keep it sane. There are serious improvements that could be made to the code. In fact, I wouldn’t recommend it unless you’re okay hacking on Puppet code:

https://github.com/purpleidea/puppet-puppet

Most importantly:

This was a long article! I could have split it up into multiple posts, gotten more internet traffic karma, and would have seemed like a much more prolific blogger as a result, but I figured you’d prefer it all in one big lot, and without any suspense. Prove me right by leaving me a tip, and giving me your feedback.

Happy hacking!

James

PS: You’ll want to be able to follow along with everything in this article if you want to use the upcoming Puppet-Gluster+Vagrant infrastructure that I’ll be releasing.

PPS: Heh heh heh…

PPPS: Get it? Vagrant -> Oscar the Grouch -> Muppets -> Puppet