all posts tagged bash


by James on July 23, 2015

Git archive with submodules and tar magic

Git submodules are actually a very beautiful thing. You might prefer the word powerful or elegant, but that’s not the point. The downside is that they are sometimes misused, so as always, use with care. I’ve used them in projects like puppet-gluster, oh-my-vagrant, and others. If you’re not familiar with them, do a bit of reading and come back later, I’ll wait.

I recently did some work packaging Oh-My-Vagrant as RPMs. My primary goal was to make sure the entire process was automatic, as I have no patience for manually building RPMs. Any good packager knows that the prerequisite for building an SRPM is a source tarball, and I wanted to build those automatically too.

Simply running a tar -cf on my source directory wouldn’t work, because I only want to include files that are stored in git. Thankfully, git comes with a tool called git archive, which does exactly that! No scary tar commands required:

Nobody likes tar

Here’s how you might run it:

$ git archive --prefix=some-project/ -o output.tar.bz2 HEAD

Let’s decompose:

The --prefix argument prepends a string prefix onto every file in the archive. Therefore, if you’d like the root directory to be named some-project, you specify that string with a trailing slash, and everything ends up nested inside that directory!

The -o flag predictably picks the output file and format. Using .tar.bz2 is quite common.

Lastly, the HEAD portion at the end specifies which git tree to pull the files from. I usually specify a git tag here, but you can specify a commit id if you prefer.
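
For example, to build an archive from a hypothetical v1.0 tag instead of HEAD:

$ git archive --prefix=some-project-1.0/ -o some-project-1.0.tar.bz2 v1.0    # v1.0 is an illustrative tag name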

Obligatory, “make this article more interesting” meme image.

This is all well and good, but unfortunately, when I open my newly created archive, it is notably missing my git submodules! It would make sense for git archive to grow an upstream --recursive flag that does this magic for you, but unfortunately it doesn’t exist yet.

There are a few scripts floating around that can do this, but I wanted something small, and without any real dependencies, that I can embed in my project Makefile, so that it’s all self-contained.

Here’s what that looks like:

sometarget:
    @echo Running git archive...
    # use HEAD if tag doesn't exist yet, so that development is easier...
    git archive --prefix=oh-my-vagrant-$(VERSION)/ -o $(SOURCE) $(VERSION) 2> /dev/null || (echo 'Warning: $(VERSION) does not exist.' && git archive --prefix=oh-my-vagrant-$(VERSION)/ -o $(SOURCE) HEAD)
    # TODO: if git archive had a --submodules flag this would be easier!
    @echo Running git archive submodules...
    # i thought i would need --ignore-zeros, but it doesn't seem necessary!
    p=`pwd` && (echo .; git submodule foreach) | while read entering path; do \
        temp="$${path%'}"; \
        temp="$${temp#'}"; \
        path=$$temp; \
        [ "$$path" = "" ] && continue; \
        (cd $$path && git archive --prefix=oh-my-vagrant-$(VERSION)/$$path/ HEAD > $$p/rpmbuild/tmp.tar && tar --concatenate --file=$$p/$(SOURCE) $$p/rpmbuild/tmp.tar && rm $$p/rpmbuild/tmp.tar); \
    done

This is a bit tricky to read, so I’ll try to break it down. Remember, double dollar signs are used in Make syntax for embedded bash code since a single dollar sign is a special Make identifier. The $(VERSION) variable corresponds to the version of the project I’m building, which matches a git tag that I’ve previously created. $(SOURCE) corresponds to an output file name, ending in the .tar.bz2 suffix.
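
For reference, here’s a minimal sketch of how those two variables might be defined near the top of such a Makefile (the real definitions in oh-my-vagrant may differ, so treat these as illustrative assumptions):

VERSION := $(shell git describe --tags --abbrev=0)    # e.g. the most recent git tag
SOURCE := rpmbuild/SOURCES/oh-my-vagrant-$(VERSION).tar.bz2    # illustrative output path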

    p=`pwd` && (echo .; git submodule foreach) | while read entering path; do 

In this first line, we store the current working directory for use later, and then loop through the output of the git submodule foreach command. That output normally looks something like this:

james@computer:~/code/oh-my-vagrant$ git submodule foreach 
Entering 'vagrant/gems/xdg'
Entering 'vagrant/kubernetes/templates/default'
Entering 'vagrant/p4h'
Entering 'vagrant/puppet/modules/module-data'
Entering 'vagrant/puppet/modules/puppet'
Entering 'vagrant/puppet/modules/stdlib'
Entering 'vagrant/puppet/modules/yum'

As you can see, the above read command eats up the Entering string, and pulls the quoted path into the second variable, path. The next part of the code:

        temp="$${path%'}"; 
        temp="$${temp#'}"; 
        path=$$temp; 
        [ "$$path" = "" ] && continue; 

uses bash parameter expansion to strip the two single quotes that wrap our string, and then skips over any empty path values in our loop.
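
Outside of Make (so with single, unescaped dollar signs), the same idiom looks like this in plain bash:

path="'vagrant/p4h'"    # one of the quoted paths from the output above
temp="${path%'}"        # strip the trailing single quote
temp="${temp#'}"        # strip the leading single quote
echo "$temp"            # prints: vagrant/p4h

Lastly, for each submodule found, we first switch into that directory: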

        (cd $$path &&

Then we run a normal git archive command, writing a plain, uncompressed tar archive to a temporary file:

git archive --prefix=oh-my-vagrant-$(VERSION)/$$path/ HEAD > $$p/rpmbuild/tmp.tar &&

Next, we use the magic of tar to concatenate this new tar file onto the source archive that we’re building up with each iteration of the loop, and then remove the temporary file.

tar --concatenate --file=$$p/$(SOURCE) $$p/rpmbuild/tmp.tar && rm $$p/rpmbuild/tmp.tar); 

Finally, we end the loop:

    done
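
If you haven’t seen tar --concatenate before, here’s the trick in isolation, with hypothetical directories and file names:

$ mkdir -p dir1 dir2 && touch dir1/a dir2/b    # example inputs only
$ tar -cf base.tar dir1/
$ tar -cf extra.tar dir2/
$ tar --concatenate --file=base.tar extra.tar
$ tar -tf base.tar    # now lists the contents of both dir1/ and dir2/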

Boom, magic! Short, concise, and with no dependencies beyond bash, tar, and git.

Nobody should have to figure that out by themselves, and I wish it were built into git, but until then, here’s how it’s done! Many thanks to #git on IRC for pointing me in the right direction.

This is the commit where I landed this patch for oh-my-vagrant, if you’re curious to see this in the wild. Now that this is done, I can definitely say that it was worth the time:

Is it worth the time? In this case, it was.

With this feature merged, along with my automatic COPR builds, a simple ‘make rpm’ causes all of this automation to happen, and delivers a fresh build from git in a few minutes.

I hope you enjoyed this technique, and if you have the coding skills, I hope you’ll help get this feature upstream into git.

Happy Hacking,

James


by James on September 3, 2014

Introducing: Oh My Vagrant!

If you’re a reader of my code or of this blog, it’s no secret that I hack on a lot of puppet and vagrant. Recently I’ve fooled around with a bit of docker, too. I realized that the vagrant environments I built for puppet-gluster and puppet-ipa needed to be generalized, and they needed new features too. Therefore…

Introducing: Oh My Vagrant!

Oh My Vagrant is an attempt to provide an easy to use development environment so that you can be up and hacking quickly, and focusing on the real devops problems. The README explains my choice of project name.

Prerequisites:

I use a Fedora 20 laptop with vagrant-libvirt. Efforts are underway to create an RPM of vagrant-libvirt, but in the meantime you’ll have to read: Vagrant on Fedora with libvirt (reprise). This should work with other distributions too, but I don’t test them very often. Please step up and help test :)

The bits:

First clone the oh-my-vagrant repository and look inside:

git clone --recursive https://github.com/purpleidea/oh-my-vagrant
cd oh-my-vagrant/vagrant/

The included Vagrantfile is the current heart of this project. You’re welcome to use it as a template and edit it directly, or you can use the facilities it provides. I’d recommend starting with the latter, which I’ll walk you through now.

Getting started:

Start by running vagrant status (vs) and taking a look at the vagrant.yaml file that appears.

james@computer:/oh-my-vagrant/vagrant$ ls
Dockerfile  puppet/  Vagrantfile
james@computer:/oh-my-vagrant/vagrant$ vs
Current machine states:

template1                 not created (libvirt)

The Libvirt domain is not created. Run `vagrant up` to create it.
james@computer:/oh-my-vagrant/vagrant$ cat vagrant.yaml 
---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.0
:sync: rsync
:puppet: false
:docker: false
:cachier: false
:vms: []
:namespace: template
:count: 1
:username: ''
:password: ''
:poolid: []
:repos: []
james@computer:/oh-my-vagrant/vagrant$

Here you’ll see the list of resultant machines that vagrant thinks are defined (currently just template1), and a bunch of different settings in YAML format. The values of these settings help define the vagrant environment that you’ll be hacking in.

Changing settings:

The settings exist so that your vagrant environment is dynamic and can be changed quickly. You can change the settings by editing the vagrant.yaml file. They will be used by vagrant when it runs. You can also change them at runtime with --vagrant-foo flags. Running a vagrant status will show you how vagrant currently sees the environment. Let’s change the number of machines that are defined. Note the location of the --vagrant-count flag and how it doesn’t work when positioned incorrectly.

james@computer:/oh-my-vagrant/vagrant$ vagrant status --vagrant-count=4
An invalid option was specified. The help for this command
is available below.

Usage: vagrant status [name]
    -h, --help                       Print this help
james@computer:/oh-my-vagrant/vagrant$ vagrant --vagrant-count=4 status
Current machine states:

template1                 not created (libvirt)
template2                 not created (libvirt)
template3                 not created (libvirt)
template4                 not created (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
james@computer:/oh-my-vagrant/vagrant$ cat vagrant.yaml 
---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.0
:sync: rsync
:puppet: false
:docker: false
:cachier: false
:vms: []
:namespace: template
:count: 4
:username: ''
:password: ''
:poolid: []
:repos: []
james@computer:/oh-my-vagrant/vagrant$

As you can see in the above example, changing the count variable to 4 causes vagrant to see a possible four machines in the vagrant environment. You can change as many of these parameters at a time as you like by using the --vagrant- flags, or you can edit the vagrant.yaml file. The latter is much easier and more expressive, in particular for expressing complex data types. The former is much more powerful when building one-liners, such as:

vagrant --vagrant-count=8 --vagrant-namespace=gluster up gluster{1..8}

which should bring up eight hosts in parallel, named gluster1 to gluster8.

Other VM’s:

Since one often wants more expressive machine naming and heterogeneity of machine type, you can specify a list of machines to define in the vms array of the vagrant.yaml file. If you’d rather define these machines in the Vagrantfile itself, you can also set them up in the vms array defined there. It is empty by default, but it is easy to uncomment one of the many examples. These will be used as the defaults if nothing else overrides the selection in the vagrant.yaml file. I’ve uncommented a few to show you this functionality:

james@computer:/oh-my-vagrant/vagrant$ grep example[124] Vagrantfile 
    {:name => 'example1', :docker => true, :puppet => true, },    # example1
    {:name => 'example2', :docker => ['centos', 'fedora'], },    # example2
    {:name => 'example4', :image => 'centos-6', :puppet => true, },    # example4
james@computer:/oh-my-vagrant/vagrant$ rm vagrant.yaml # note that I remove the old settings
james@computer:/oh-my-vagrant/vagrant$ vs
Current machine states:

template1                 not created (libvirt)
example1                  not created (libvirt)
example2                  not created (libvirt)
example4                  not created (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
james@computer:/oh-my-vagrant/vagrant$ cat vagrant.yaml 
---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.0
:sync: rsync
:puppet: false
:docker: false
:cachier: false
:vms:
- :name: example1
  :docker: true
  :puppet: true
- :name: example2
  :docker:
  - centos
  - fedora
- :name: example4
  :image: centos-6
  :puppet: true
:namespace: template
:count: 1
:username: ''
:password: ''
:poolid: []
:repos: []
james@computer:/oh-my-vagrant/vagrant$ vim vagrant.yaml # edit vagrant.yaml file...
james@computer:/oh-my-vagrant/vagrant$ cat vagrant.yaml 
---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.0
:sync: rsync
:puppet: false
:docker: false
:cachier: false
:vms:
- :name: example1
  :docker: true
  :puppet: true
- :name: example4
  :image: centos-7.0
  :puppet: true
:namespace: template
:count: 1
:username: ''
:password: ''
:poolid: []
:repos: []
james@computer:/oh-my-vagrant/vagrant$ vs
Current machine states:

template1                 not created (libvirt)
example1                  not created (libvirt)
example4                  not created (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
james@computer:/oh-my-vagrant/vagrant$

The above output might seem a little long, but if you try these steps out in your terminal, you should get the hang of it fairly quickly. If you poke around in the Vagrantfile, you should see the format of the vms array. Each element in the array should be a dictionary, where the keys correspond to the flags you wish to set. Look at the examples if you need help with the formatting.

Other settings:

As you saw, other settings are available. There are a few notable ones that are worth mentioning. This will also help explain some of the other features that this Vagrantfile provides.

  • domain: This sets the domain part of each vm’s FQDN. The default is example.com, which should work for most environments, but you’re welcome to change this as you see fit.
  • network: This sets the network that is used for the vm’s. You should pick a network/cidr that doesn’t conflict with any other networks on your machine. This is particularly useful when you have multiple vagrant environments hosted off of the same laptop.
  • image: This is the default base image to use for each machine. It can be overridden per-machine in the vm’s list of dictionaries.
  • sync: This is the sync type used for vagrant. rsync is the default and works in all environments. If you’d prefer to fight with the nfs mounts, or try out 9p, both those options are available too.
  • puppet: This option enables or disables integration with puppet. It is possible to override this per machine. This functionality will be expanded in a future version of Oh My Vagrant.
  • docker: This option enables and lists the docker images to set up per vm. It is possible to override this per machine. This functionality will be expanded in a future version of Oh My Vagrant.
  • namespace: This sets the namespace that your Vagrantfile operates in. This value is used as a prefix for the numbered vm’s, as the libvirt network name, and as the primary puppet module to execute.
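
Most of these can also be tweaked on the fly; assuming the flags follow the same --vagrant-<setting> naming pattern as the --vagrant-count example above, a one-off override might look like:

vagrant --vagrant-image=centos-6 --vagrant-domain=test.example.com status    # flag names assume the --vagrant-<setting> pattern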

More on the docker option:

For now, if you specify a list of docker images, they will be automatically pulled into your vm environment. It is recommended that you pre-cache them in an existing base image to save bandwidth. Custom base vagrant images can easily be built with vagrant-builder, but this process is currently undocumented.

I’ll try to write up a post on this process if there are enough requests. To keep you busy in the meantime, I’ve published a CentOS 7 vagrant base image that includes docker images for CentOS and Fedora. It is being graciously hosted by the GlusterFS community.

What other magic does this all do?

There is a certain amount of magic glue that happens behind the scenes. Here’s a list of some of it:

  • Idempotent /etc/hosts based DNS
  • Easy docker base image installation
  • IP address calculations and assignment with ipaddr
  • Clever cleanup on ‘vagrant destroy’
  • Vagrant docker base image detection
  • Integration with Puppet

If you don’t understand what all of those mean, and you don’t want to go source diving, don’t worry about it! I will explain them in greater detail when it’s important, and hopefully for now everything “just works” and stays out of your way.

Future work:

There’s still a lot more that I have planned, and some parts of the Vagrantfile need clean up, but I figured I’d try and release this early so that you can get hacking right away. If it’s useful to you, please leave a comment and let me know.

Happy hacking,

James

 

by James on May 6, 2014

Keeping git submodules in sync with your branches

This is a quick trick for making working with git submodules more magic.

One day you might find that your project needs git submodules. They’re probably not necessary for everyday hacking, but if you’re glue-ing things together, they can be quite useful. Puppet-Gluster uses this technique to easily include all the dependencies needed for a Puppet-Gluster+Vagrant automatic deployment.

If you’re a good hacker, you develop things in separate feature branches. Example:

cd code/projectdir/
git checkout -b feat/my-cool-feature
# hack hack hack
git add -p
# add stuff
git commit -m 'my cool new feature'
git push
# yay!

The problem arises if you git pull inside of a git submodule to update it to a particular commit. When you switch branches, the git submodule‘s branch doesn’t move along with you! Personally, I think this is a bug, but perhaps it’s not. In any case, here’s the fix:

add:

#!/bin/bash
exec git submodule update

to your:

<projectdir>/.git/hooks/post-checkout

and then run:

chmod u+x <projectdir>/.git/hooks/post-checkout

and you’re good to go! Here’s an example:

james@computer:~/code/puppet/puppet-gluster$ git checkout feat/yamldata
M vagrant/gluster/puppet/modules/puppet
Switched to branch 'feat/yamldata'
Submodule path 'vagrant/gluster/puppet/modules/puppet': checked out 'f139d0b7cfe6d55c0848d0d338e19fe640a961f2'
james@computer:~/code/puppet/puppet-gluster (feat/yamldata)$ git checkout master
M vagrant/gluster/puppet/modules/puppet
Switched to branch 'master'
Your branch is up-to-date with 'glusterforge/master'.
Submodule path 'vagrant/gluster/puppet/modules/puppet': checked out '07ec49d1f67a498b31b4f164678a76c464e129c4'
james@computer:~/code/puppet/puppet-gluster$ cat .git/hooks/post-checkout
#!/bin/bash
exec git submodule update
james@computer:~/code/puppet/puppet-gluster$

Hope that helps you out too! If someone knows of a use-case where you don’t want this functionality, please let me know! Many thanks to #git for helping me solve this issue!

Happy hacking,

James

 

by James on January 20, 2014

Building base images for Vagrant with a Makefile

I needed a base image “box” for my Puppet-Gluster+Vagrant work. It would have been great if good boxes already existed, and even better if it were easy to build my own. As it turns out, I wasn’t able to satisfy either of these conditions, so I’ve had to build one myself! I’ve published all of my code, so that you can use these techniques and tools too!

Status quo:

Having an NIH problem is bad for your vision, and it’s best to benefit from existing tools before creating your own. I first tried vagrant-cachier, then veewee, and then packer. Vagrant-cachier is a great tool, but it turned out not to be very useful, because there weren’t any base images available for download that met my needs. Veewee and packer can build those images, but they both failed to do so, for different reasons. Hopefully this situation will improve in the future.

Writing a script:

I started by hacking together a short shell script of commands for building base images. There wasn’t much programming involved, as the process was fairly linear, but it was useful for figuring out what needed to get done.

I decided to use the excellent virt-builder command to put together the base image. This is exactly what it’s good at doing! To install it on Fedora 20, you can run:

$ sudo yum install libguestfs-tools

It wasn’t available in Fedora 19, but after a lot of pain, I managed to build (mostly correct?) packages. I have posted them online if you are brave (or crazy?) enough to want them.

Using the right tool:

After building a few images, I realized that a shell script was the wrong tool, and that it was time for an upgrade. What was the right tool? GNU Make! After working on this for more hours than I’m ready to admit, I present to you, a lovingly crafted virtual machine base image (“box”) builder:

Makefile

The Makefile itself is quite compact. It uses a few shell scripts to do some of the customization, and builds a clean image in about ten minutes. To use it, just run make.

Customization:

At the moment, it builds x86_64 CentOS 6.5+ machines for vagrant-libvirt, but you can edit the Makefile to build a custom image of your choosing. I’ve gone out of my way to add an $(OUTPUT) variable to the Makefile so that your generated files get saved in /tmp/ or somewhere else outside of your source tree.
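
Assuming the $(OUTPUT) variable can be overridden from the command line in the usual Make fashion (check the Makefile to be sure), that might look like:

$ make OUTPUT=/tmp/builder/    # hypothetical path; adjust to taste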

Download the image:

If you’d like to download the image that I generated, it is being generously hosted by the Gluster community here. If you’re using the Vagrantfile from my Puppet-Gluster+Vagrant setup, then you don’t have to download it manually; this will happen automatically.

Open issues:

The biggest issue with the images is that SELinux gets disabled! You might be okay with this, but it’s actually quite unfortunate. It is disabled to avoid the SELinux relabelling that happens on first boot, as this overhead defeats the usefulness of a fast vagrant deployment. If you know of a way to fix this problem, please let me know!

Example output:

If you’d like to see this in action, but don’t want to run it yourself, here’s an example run:

$ date && time make && date
Mon Jan 20 10:57:35 EST 2014
Running templater...
Running virt-builder...
[   1.0] Downloading: http://libguestfs.org/download/builder/centos-6.xz
[   4.0] Planning how to build this image
[   4.0] Uncompressing
[  19.0] Resizing (using virt-resize) to expand the disk to 40.0G
[ 173.0] Opening the new disk
[ 181.0] Setting a random seed
[ 181.0] Setting root password
[ 181.0] Installing packages: screen vim-enhanced git wget file man tree nmap tcpdump htop lsof telnet mlocate bind-utils koan iftop yum-utils nc rsync nfs-utils sudo openssh-server openssh-clients
[ 212.0] Uploading: files/epel-release-6-8.noarch.rpm to /root/epel-release-6-8.noarch.rpm
[ 212.0] Uploading: files/puppetlabs-release-el-6.noarch.rpm to /root/puppetlabs-release-el-6.noarch.rpm
[ 212.0] Uploading: files/selinux to /etc/selinux/config
[ 212.0] Deleting: /.autorelabel
[ 212.0] Running: yum install -y /root/epel-release-6-8.noarch.rpm && rm -f /root/epel-release-6-8.noarch.rpm
[ 214.0] Running: yum install -y bash-completion moreutils
[ 235.0] Running: yum install -y /root/puppetlabs-release-el-6.noarch.rpm && rm -f /root/puppetlabs-release-el-6.noarch.rpm
[ 239.0] Running: yum install -y puppet
[ 254.0] Running: yum update -y
[ 375.0] Running: files/user.sh
[ 376.0] Running: files/ssh.sh
[ 376.0] Running: files/network.sh
[ 376.0] Running: files/cleanup.sh
[ 377.0] Finishing off
Output: /home/james/tmp/builder/gluster/builder.img
Output size: 40.0G
Output format: qcow2
Total usable space: 38.2G
Free space: 37.3G (97%)
Running convert...
Running tar...
./Vagrantfile
./metadata.json
./box.img

real	9m10.523s
user	2m23.282s
sys	0m37.109s
Mon Jan 20 11:06:46 EST 2014
$

If you have any other questions, please let me know!

Happy hacking,

James

PS: Be careful when writing Makefile‘s. They can be dangerous if used improperly, and in fact I once took out part of my lib/ directory by running one. Woops!

 

by James on January 1, 2014

Vagrant clustered SSH and ‘screen’

Some fun updates for vagrant hackers… I wanted to use the venerable clustered SSH (cssh) and screen with vagrant. I decided to expand on my vsftp script. First read:

Vagrant on Fedora with libvirt

and

Vagrant vsftp and other tricks

to get up to speed on the background information.

Vagrant screen:

First, a simple screen hack… I often use my vssh alias to quickly ssh into a machine, but I don’t want to have to waste time with sudo-ing to root and then running screen each time. Enter vscreen:

# vagrant screen
function vscreen {
	[ "$1" = '' ] || [ "$2" != '' ] && echo "Usage: vscreen <vm-name> - vagrant screen" 1>&2 && return 1
	wd=`pwd`		# save wd, then find the Vagrant project
	while [ "`pwd`" != '/' ] && [ ! -e "`pwd`/Vagrantfile" ] && [ ! -d "`pwd`/.vagrant/" ]; do
		#echo "pwd is `pwd`"
		cd ..
	done
	pwd=`pwd`
	cd $wd
	if [ ! -e "$pwd/Vagrantfile" ] || [ ! -d "$pwd/.vagrant/" ]; then
		echo 'Vagrant project not found!' 1>&2 && return 2
	fi

	d="$pwd/.ssh"
	f="$d/$1.config"
	h="$1"
	# hostname extraction from user@host pattern
	p=`expr index "$1" '@'`
	if [ $p -gt 0 ]; then
		let "l = ${#h} - $p"
		h=${h:$p:$l}
	fi

	# if mtime of $f is > than 5 minutes (5 * 60 seconds), re-generate...
	if [ `date -d "now - $(stat -c '%Y' "$f" 2> /dev/null) seconds" +%s` -gt 300 ]; then
		mkdir -p "$d"
		# we cache the lookup because this command is slow...
		vagrant ssh-config "$h" > "$f" || rm "$f"
	fi
	[ -e "$f" ] && ssh -t -F "$f" "$1" 'screen -xRR'
}

I usually run it this way:

$ vscreen root@machine

which logs in as root to machine, and gets me (back) into screen. This is almost identical to the vsftp script which I explained in an earlier blog post.

Vagrant cssh:

First you’ll need to install cssh. On my Fedora machine it’s as easy as:

# yum install -y clusterssh

I’ve been hacking a lot on Puppet-Gluster lately, and occasionally multi-machine hacking demands multi-machine key punching. Enter vcssh:


# vagrant cssh
function vcssh {
	[ "$1" = '' ] && echo "Usage: vcssh [options] [user@]<vm1>[ [user@]vm2[ [user@]vmN...]] - vagrant cssh" 1>&2 && return 1
	wd=`pwd`		# save wd, then find the Vagrant project
	while [ "`pwd`" != '/' ] && [ ! -e "`pwd`/Vagrantfile" ] && [ ! -d "`pwd`/.vagrant/" ]; do
		#echo "pwd is `pwd`"
		cd ..
	done
	pwd=`pwd`
	cd $wd
	if [ ! -e "$pwd/Vagrantfile" ] || [ ! -d "$pwd/.vagrant/" ]; then
		echo 'Vagrant project not found!' 1>&2 && return 2
	fi

	d="$pwd/.ssh"
	cssh="$d/cssh"
	cmd=''
	cat='cat '
	screen=''
	options=''

	multi='f'
	special=''
	for i in "$@"; do	# loop through the list of hosts and arguments!
		#echo $i

		if [ "$special" = 'debug' ]; then	# optional arg value...
			special=''
			if [ "$1" -ge 0 -o "$1" -le 4 ]; then
				cmd="$cmd $i"
				continue
			fi
		fi

		if [ "$multi" = 'y' ]; then	# get the value of the argument
			multi='n'
			cmd="$cmd '$i'"
			continue
		fi

		if [ "${i:0:1}" = '-' ]; then	# does argument start with: - ?

			# build a --screen option
			if [ "$i" = '--screen' ]; then
				screen=' -o RequestTTY=yes'
				cmd="$cmd --action 'screen -xRR'"
				continue
			fi

			if [ "$i" = '--debug' ]; then
				special='debug'
				cmd="$cmd $i"
				continue
			fi

			if [ "$i" = '--options' ]; then
				options=" $i"
				continue
			fi

			# NOTE: commented-out options are probably not useful...
			# match for key => value argument pairs
			if [ "$i" = '--action' -o "$i" = '-a' ] || \
			[ "$i" = '--autoclose' -o "$i" = '-A' ] || \
			#[ "$i" = '--cluster-file' -o "$i" = '-c' ] || \
			#[ "$i" = '--config-file' -o "$i" = '-C' ] || \
			#[ "$i" = '--evaluate' -o "$i" = '-e' ] || \
			[ "$i" = '--font' -o "$i" = '-f' ] || \
			#[ "$i" = '--master' -o "$i" = '-M' ] || \
			#[ "$i" = '--port' -o "$i" = '-p' ] || \
			#[ "$i" = '--tag-file' -o "$i" = '-c' ] || \
			[ "$i" = '--term-args' -o "$i" = '-t' ] || \
			[ "$i" = '--title' -o "$i" = '-T' ] || \
			[ "$i" = '--username' -o "$i" = '-l' ] ; then
				multi='y'	# loop around to get second part
				cmd="$cmd $i"
				continue
			else			# match single argument flags...
				cmd="$cmd $i"
				continue
			fi
		fi

		f="$d/$i.config"
		h="$i"
		# hostname extraction from user@host pattern
		p=`expr index "$i" '@'`
		if [ $p -gt 0 ]; then
			let "l = ${#h} - $p"
			h=${h:$p:$l}
		fi

		# if mtime of $f is > than 5 minutes (5 * 60 seconds), re-generate...
		if [ `date -d "now - $(stat -c '%Y' "$f" 2> /dev/null) seconds" +%s` -gt 300 ]; then
			mkdir -p "$d"
			# we cache the lookup because this command is slow...
			vagrant ssh-config "$h" > "$f" || rm "$f"
		fi

		if [ -e "$f" ]; then
			cmd="$cmd $i"
			cat="$cat $f"	# append config file to list
		fi
	done

	cat="$cat > $cssh"
	#echo $cat
	eval "$cat"			# generate combined config file

	#echo $cmd && return 1
	#[ -e "$cssh" ] && cssh --options "-F ${cssh}$options" $cmd
	# running: bash -c glues together --action 'foo --bar' type commands...
	[ -e "$cssh" ] && bash -c "cssh --options '-F ${cssh}${screen}$options' $cmd"
}

This can be called like this:

$ vcssh annex{1..4} -l root

or like this:

$ vcssh root@hostname foo user@bar james@machine --action 'pwd'

which, as you can see, passes cssh arguments through! Can you see any other special surprises in the code? Well, you can run vcssh like this too:

$ vcssh root@foo james@bar --screen

which will perform exactly as vscreen did above, but in cssh!

You’ll see that the vagrant ssh-config lookups are cached, so this will be speedy when it’s running hot, but expect a few seconds delay when you first run it. If you want a longer cache timeout, it’s easy to change yourself in the function.

I’ve uploaded the code here, so that you don’t have to copy+paste it from my blog!

Happy hacking,

James

by James on December 21, 2013

Vagrant vsftp and other tricks

As I previously wrote, I’ve been busy with Vagrant on Fedora with libvirt, and have even been submitting patches and issues! (This “closed” issue needs solving!) Here are some of the tricks that I’ve used while hacking away.

Default provider:

I should have mentioned this in my earlier article but I forgot: If you’re always using the same provider, you might want to set it as the default. In my case I’m using vagrant-libvirt. To do so, add the following to your .bashrc:

export VAGRANT_DEFAULT_PROVIDER=libvirt

This helps you avoid having to add --provider=libvirt to all of your vagrant commands.

Aliases:

All good hackers use shell (in my case bash) aliases to make their lives easier. I normally keep all my aliases in .bash_aliases, but I’ve grouped them together with some other vagrant related items in my .bashrc:

alias vp='vagrant provision'
alias vup='vagrant up'
alias vssh='vagrant ssh'
alias vdestroy='vagrant destroy'

Bash functions:

There are some things aliases just can’t do. For those situations, a bash function might be most appropriate. These go inside your .bashrc file. I’d like to share two with you:

Vagrant logging:

Sometimes it’s useful to get more information from the vagrant command that is running. To do this, you can set the VAGRANT_LOG environment variable to info or debug.

function vlog {
	VAGRANT_LOG=info vagrant "$@" 2> vagrant.log
}

This is usually helpful for debugging a vagrant issue. When you use the vlog command, it works exactly as the normal vagrant command, but it spits out a vagrant.log logfile into the working directory.
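
For example, to capture a full log of a vagrant up:

$ vlog up
$ less vagrant.log    # inspect what vagrant actually did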

Vagrant sftp:

Vagrant provides an ssh command, but it doesn’t offer an sftp command. The sftp tool is fantastically useful when you want to quickly push or pull a file over ssh. First I’ll show you the code so that you can practice reading bash:

# vagrant sftp
function vsftp {
	[ "$1" = '' ] || [ "$2" != '' ] && echo "Usage: vsftp <vm-name> - vagrant sftp" 1>&2 && return 1
	wd=`pwd`		# save wd, then find the Vagrant project
	while [ "`pwd`" != '/' ] && [ ! -e "`pwd`/Vagrantfile" ] && [ ! -d "`pwd`/.vagrant/" ]; do
		#echo "pwd is `pwd`"
		cd ..
	done
	pwd=`pwd`
	cd $wd
	if [ ! -e "$pwd/Vagrantfile" ] || [ ! -d "$pwd/.vagrant/" ]; then
		echo 'Vagrant project not found!' 1>&2 && return 2
	fi

	d="$pwd/.ssh"
	f="$d/$1.config"

	# if mtime of $f is > than 5 minutes (5 * 60 seconds), re-generate...
	if [ `date -d "now - $(stat -c '%Y' "$f" 2> /dev/null) seconds" +%s` -gt 300 ]; then
		mkdir -p "$d"
		# we cache the lookup because this command is slow...
		vagrant ssh-config "$1" > "$f" || rm "$f"
	fi
	[ -e "$f" ] && sftp -F "$f" "$1"
}

This vsftp command will work from anywhere in the vagrant project root directory or deeper. It does this by first searching upwards until it gets to that directory. When it finds it, it creates a .ssh/ directory there, and stores the proper ssh_config stubs needed for sftp to find the machines. Then the sftp -F command runs, connecting you to your guest machine.

You’ll notice that the first time you connect takes longer than the second time. This is because the vsftp function caches the ssh_config file store operation. You might want to set a different timeout, or disable it altogether. I should also mention that this script searches for a Vagrantfile and .vagrant/ directory to identify the project root. If there is a more “correct” method, please let me know.
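
Usage is simple; for a hypothetical machine named template1 in your vagrant project:

$ vsftp template1    # template1 is an illustrative vm name
sftp> put local-file.txt
sftp> quit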

I’m particularly proud of this because it was a fun (and useful) hack. This function, and all of the above .bashrc material, is available here as an easy download.

NFS folder syncing/mounting:

The vagrant-libvirt provider supports one-way folder “synchronization” with rsync, and NFS mounting if you want two-way synchronization. I would love it if libvirt/vagrant-libvirt had support for real folder sharing via the QEMU guest agent and/or 9p. Getting it to work wasn’t trivial. Here’s what I had to do:

Vagrantfile:

Define your “synced_folder” so that you add the :mount_options array:

config.vm.synced_folder "nfs/", "/tmp/nfs", :nfs => true, :mount_options => ['rw', 'vers=3', 'tcp']

When you specify anything in the :mount_options array, it erases all the defaults. The defaults are: vers=3,udp,rw. Obviously, choose your own directory paths! I’d rather see this use NFSv4, but it would have taken slightly more fighting. Please become a warrior today!

Firewall:

Firewalling is not automatic. To work around this, you’ll need to run the following commands before you use your NFS mount:

# systemctl restart nfs-server # this as root, the rest will prompt
$ firewall-cmd --permanent --zone public --add-service mountd
$ firewall-cmd --permanent --zone public --add-service rpc-bind
$ firewall-cmd --permanent --zone public --add-service nfs
$ firewall-cmd --reload

You should only have to do this once, but possibly each reboot. If you like, you can patch those commands to run at the top of your Vagrantfile. If you can help fix this issue permanently, the bug is here.

Sudo:

The use of sudo to automatically edit /etc/exports is a neat trick, but it can cause some problems:

  1. It is often not needed
  2. It annoyingly prompts you
  3. It can bork your stdin

You get internet karma from me if you can help permanently solve any of those three problems.

Package caching:

As part of my deployments, Puppet installs a lot of packages. Repeatedly hitting a public mirror isn’t a very friendly thing to do, it wastes bandwidth, and it’s slower than having the packages locally! Setting up a proper mirror is a nice solution, but it comes with a management overhead. There is really only one piece of software that manages repositories properly, but using pulp has its own overhead, and is beyond the scope of this article. Because we’re not using our own local mirror, this won’t allow you to work entirely offline, as the metadata still needs to get downloaded from the net.

Installation:

As an interim solution, we can use vagrant-cachier. This plugin adds synced_folder‘s to the apt/yum cache directories. Sneaky! Since this requires two-way sync, you’ll need to get NFS folder syncing/mounting working first. Luckily, I’ve already taught you that!

To pass the right options through to vagrant-cachier, you’ll need this patch. I’d recommend first installing the plugin with:

$ vagrant plugin install vagrant-cachier

and then patching your vagrant-cachier manually. On my machine, the file to patch is found at:

~/.vagrant.d/gems/gems/vagrant-cachier-0.5.0/lib/vagrant-cachier/provision_ext.rb

Vagrantfile:

Here’s what I put in my Vagrantfile:

# NOTE: this doesn't cache metadata, full offline operation not possible
config.cache.auto_detect = true
config.cache.enable :yum		# choose :yum or :apt
if not ARGV.include?('--no-parallel')	# when running in parallel,
	config.cache.scope = :machine	# use the per machine cache
end
config.cache.enable_nfs = true	# sets nfs => true on the synced_folder
# the nolock option is required, otherwise the NFSv3 client will try to
# access the NLM sideband protocol to lock files needed for /var/cache/
# all of this can be avoided by using NFSv4 everywhere. die NFSv3, die!
config.cache.mount_options = ['rw', 'vers=3', 'tcp', 'nolock']

Since multiple package managers operating on the same cache can cause locking issues, this plugin lets you decide if you want a shared cache for all machines, or if you want separate caches. I know that when I use the --no-parallel option it’s safe (enough) to use a common cache because Puppet does all the package installations, and they all happen during each of their first provisioning runs which are themselves sequential.

This isn’t perfect, it’s a hack, but hacks are fun, and in this case, very useful. If things really go wrong, it’s easy to rebuild everything! I still think a few sneaky bugs remain in vagrant-cachier with libvirt, so please find, report, and patch them if you can, although it might be NFS that’s responsible.

Puppet integration:

Integration with Puppet makes this all worthwhile. There are two scenarios:

  1. You’re using a puppetmaster, and you care about things like exported resources and multi-machine systems.
  2. You don’t have a puppetmaster, and you’re running standalone “single machine” code.

The correct answer is #1. Standalone Puppet code is bad for a few reasons:

  1. It neglects the multi-machine nature of servers, and software.
  2. You’re not able to use useful features like exported resources.
  3. It’s easy to (unknowingly) write code that won’t work with a puppetmaster.

If you’d like to see some code which was intentionally written to only work without a puppetmaster (and the puppetmaster friendly equivalent) have a look at my Pushing Puppet material.

There is one good reason for standalone Puppet code, and that is: bootstrapping. In particular, bootstrapping the Puppet server.

DNS:

You need some sort of working DNS setup for this to function properly. Whether that involves hacking up your /etc/hosts files, using one of the vagrant DNS plugins, or running dnsmasq/bind is up to you. I haven’t found an elegant solution yet, hopefully not telling you what I’ve done will push you to come up with something awesome!

The :puppet_server provisioner:

This is the vagrant (puppet agent) provisioner that lets you talk to a puppet server. Here’s my config:

vm.vm.provision :puppet_server do |puppet|
	puppet.puppet_server = 'puppet'
	puppet.options = '--test'    # see the output
end

I’ve added the --test option so that I can see the output go by in my console. I’ve also disabled the puppet service on boot in my base machine image, so that it doesn’t run before I get the chance to watch it. If your base image already has the puppet service running on boot, you can disable it with a shell provisioner. That’s it!

The :puppet provisioner:

This is the standalone vagrant (puppet apply) provisioner. It seems to get misused quite a lot! As I mentioned, I’m only using it to bootstrap my puppet server. That’s why it’s a useful provisioner. Here’s my config:

vm.vm.provision :puppet do |puppet|
	puppet.module_path = 'puppet/modules'
	puppet.manifests_path = 'puppet/manifests'
	puppet.manifest_file = 'site.pp'
	# custom fact
	puppet.facter = {
		'vagrant' => '1',
	}
end

The puppet/{modules,manifests} directories should exist in your project root. I keep the puppet/modules/ directory fresh with a make script that rsync’s code in from my git directories, but that’s purely a personal choice. I added the custom fact as an example. All that’s left is for this code to build a puppetmaster!

Building a puppetmaster:

Here’s my site.pp:

node puppet {	# puppetmaster

	class { '::puppet::server':
		pluginsync => true,	# do we want to enable pluginsync?
		storeconfigs => true,	# do we want to enable storeconfigs?
		autosign => ['*'],	# NOTE: this is a temporary solution
		#repo => true,		# automatic repos (DIY for now)
		start => true,
	}

	class { '::puppet::deploy':
		path => '/vagrant/puppet/',	# puppet folder is put here...
		backup => false,		# don't use puppet to backup...
	}
}

As you can see, I include two classes. The first one installs and configures the puppetmaster, puppetdb, and so on. The second class runs an rsync from the vagrant deployed /vagrant/puppet/ directory to the proper directories inside of /etc/puppet/ and wherever else is appropriate. Each time I want to push new puppet code, I run vp puppet and then vp <host> of whichever host I want to test. You can even combine the two commands into a one-liner with && if you’d like.
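
For example, with a hypothetical test host named annex1, that one-liner would be:

$ vp puppet && vp annex1    # annex1 is an illustrative host name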

The puppet-puppet module:

This is the hairy part. I’ve always had an issue with this module. I recently ripped out support for a lot of the old puppet 0.24 era features, and I’ve only since tested it on the recent 3.x series. A lot has changed from version to version, so it has been hard to follow this and keep it sane. There are serious improvements that could be made to the code. In fact, I wouldn’t recommend it unless you’re okay hacking on Puppet code:

https://github.com/purpleidea/puppet-puppet

Most importantly:

This was a long article! I could have split it up into multiple posts, gotten more internet traffic karma, and would have seemed like a much more prolific blogger as a result, but I figured you’d prefer it all in one big lot, and without any suspense. Prove me right by leaving me a tip, and giving me your feedback.

Happy hacking!

James

PS: You’ll want to be able to follow along with everything in this article if you want to use the upcoming Puppet-Gluster+Vagrant infrastructure that I’ll be releasing.

PPS: Heh heh heh…

PPPS: Get it? Vagrant -> Oscar the Grouch -> Muppets -> Puppet

 

by James on October 10, 2013

Show current git branch in PS1 when branch is not master

Short post, long command…

I’ve decided to start showing the current git branch in my PS1. However, since I don’t want to know when I’m on master, I had to write a new PS1 that I haven’t yet seen anywhere. Add the following to your .bashrc:

PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
if [ -e /usr/share/git-core/contrib/completion/git-prompt.sh ]; then
    . /usr/share/git-core/contrib/completion/git-prompt.sh
    PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w$([ "$(__git_ps1 %s)" != "" -a "$(__git_ps1 %s)" != "master" ] && (echo -e " (\[\033[32m\]"$(__git_ps1 "%s")"\[\033[0m\])") || echo "")\$ '
fi

This keeps my PS1 short for when I’m hacking on personal repositories that only have a single branch. Keep in mind that you might have to change the path to git-prompt.sh depending on what OS you’re using.

Example:

james@computer:~/code/puppet$ cd puppet-gluster
james@computer:~/code/puppet/puppet-gluster$ git checkout -b cool-new-feature
Switched to a new branch 'cool-new-feature'
james@computer:~/code/puppet/puppet-gluster (cool-new-feature)$ # tada !

The branch name is coloured to match the default colours that git uses to colour branches.

Happy hacking,

James

 

by James on April 8, 2013

Knowing when to release and deploy your code (…and a mini script)

Knowing when to release and deploy your code can turn into a complicated discussion. In general, I tend to support releasing early and often, for some value of $early and $often. I’ve decided to keep this simple and introduce you to one metric that I use…

I think that I am fairly diligent in adding plenty of comments to my source code. I might even sometimes add too many. I create plenty of XXX, FIXME, or TODO tagged comments as reminders of things to work on.

To me, XXX represents an important problem that should get looked at or fixed; FIXME reminds me that I should definitely look into something; and finally, TODO gives me homework or things to pursue when I’m in need of a new project.

I try to resolve most if not all XXX-tagged comments before making a 0.1 release; all FIXME’s before I consider something very stable; and a lack of TODO’s means something is completely done for now.

To count all these, I wrote a little tool that greps through the top-level directories in my ~/code/ folder, and displays the results in a table. Feel free to give it a try, and use it for your own projects.
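
My version has a few extra bells and whistles, but a minimal sketch of the idea (assuming one project per top-level directory under ~/code/) could look like this:

#!/bin/bash
# count XXX, FIXME and TODO comments per project under ~/code/
printf '%-24s %6s %6s %6s\n' 'project' 'XXX' 'FIXME' 'TODO'
for d in ~/code/*/; do
	x=$(grep -r 'XXX' "$d" 2> /dev/null | wc -l)
	f=$(grep -r 'FIXME' "$d" 2> /dev/null | wc -l)
	t=$(grep -r 'TODO' "$d" 2> /dev/null | wc -l)
	printf '%-24s %6d %6d %6d\n' "$(basename "$d")" "$x" "$f" "$t"
done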

While I don’t see this as a particularly game changing utility, it scratches my itch, and helps me keep up my bash skills. The code is available here. Let me know if you have any improvements, or if the source isn’t enough documentation for you.

Happy hacking,

James