
January 16, 2014

Testing GlusterFS during “Glusterfest”

The GlusterFS community is having a “test day”. Puppet-Gluster+Vagrant is a great tool to help with this, and it has now been patched to support alpha, beta, qa, and rc releases! Because it was built so well (*cough*, shameless plug), it only took one patch.

Okay, first make sure that your Puppet-Gluster+Vagrant setup is working properly. I have only tested this on Fedora 20. Please read:

Automatically deploying GlusterFS with Puppet-Gluster+Vagrant!

to make sure you’re comfortable with the tools and infrastructure.

This weekend we’re testing 3.5.0 beta1. It turns out that the full rpm version for this is:

3.5.0-0.1.beta1.el6
You can figure out these strings yourself by browsing the folders in:

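These rpm version strings have a regular shape: the package version, then a release tag, then a dist tag. A quick shell sketch, using the beta1 string from this post, to pull one apart:

```shell
# Split a gluster rpm version string into its parts.
# Shape (as seen in the build folders): <version>-<release>.<dist>
ver='3.5.0-0.1.beta1.el6'
version="${ver%%-*}"   # everything before the first dash -> 3.5.0
rest="${ver#*-}"       # everything after it -> 0.1.beta1.el6
dist="${rest##*.}"     # last dot-separated field -> el6
release="${rest%.*}"   # what remains -> 0.1.beta1
echo "$version $release $dist"
```

This is just plain POSIX parameter expansion, so you can sanity-check any version string you find before passing it to --gluster-version.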
To test a specific version, use the --gluster-version argument that I added to the vagrant command. For this deployment, here is the list of commands that I used:

$ mkdir /tmp/vagrant/
$ cd /tmp/vagrant/
$ git clone --recursive
$ cd vagrant/gluster/
$ vagrant up puppet
$ sudo -v && vagrant up --gluster-version='3.5.0-0.1.beta1.el6' --gluster-count=2 --no-parallel

As you can see, this is a standard vagrant deploy. I’ve decided to build two gluster hosts (--gluster-count=2) and I’m specifying the version string shown above. I’ve also decided to build in series (--no-parallel) because I think there might be some hidden race conditions, possibly in the vagrant-libvirt stack.

After about five minutes, the two hosts were built, and about six minutes after that, Puppet-Gluster had finished doing its magic. I had logged in to watch the progress, but if you were out getting a coffee, when you came back you could run:

$ gluster volume info

to see your newly created volume!

If you want to try a different version or host count, you don’t need to destroy the entire infrastructure. You can destroy just the gluster annex hosts:

$ vagrant destroy annex{1..2}

and then run a new vagrant up command.
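For example, a second run with three hosts might look like the sketch below. The host list is built with seq so it matches the annexN naming; the vagrant lines are commented out since they assume the full Vagrant setup is in place, and the version string is the beta1 one from above.

```shell
# Build the list of annex hosts for a given count (matches the annexN naming).
count=3
hosts=$(printf 'annex%s ' $(seq 1 "$count"))
echo "$hosts"

# Then destroy the old hosts and rebuild with the new parameters:
# vagrant destroy ${hosts}
# sudo -v && vagrant up --gluster-version='3.5.0-0.1.beta1.el6' --gluster-count="$count" --no-parallel
```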

In addition, I’ve added a --gluster-firewall option. Currently it defaults to false because there’s a strange firewall bug blocking my VRRP (keepalived) setup. If you’d like to enable it and help me fix this bug, you can use:

$ vagrant up --gluster-firewall=true
To make sure the firewall is off, you can use:

$ vagrant up --gluster-firewall=false
In the future, I will change the default value to true, so specify it explicitly if you need a certain behaviour.

Happy hacking,


February 11, 2013

New Release: GlusterFS 3.4alpha

It’s that time again! Time to start prepping for a new release of GlusterFS, in this case, 3.4. If you haven’t checked it out yet, grab a source tarball and tell us how it goes. There are also community builds showing up for Ubuntu, Fedora and EPEL. Additionally, the Git repo has now been tagged with 3.4.

First, take a look at the 3.4 feature page to see the highlights.

One thing should jump out at you immediately: QEMU integration and the block device translator. This will significantly increase the scope of possible use cases for GlusterFS. Previously, provisioning VMs on GlusterFS meant going through the FUSE mount with the GlusterFS client. The native client mount via the FUSE module is great for the scale-out NAS use case – it’s pretty mature and reliable for sharing files and folders and presenting a global namespace, whether deployed in the cloud, on bare metal, or providing storage services in a virtualized environment. However, for the use case of hosting and managing VMs, it simply didn’t perform at the level needed when hosting hundreds of VMs on multiple servers. Now, with the QEMU integration, we’re bypassing FUSE entirely and going through a new client library, libgfapi. Early reports suggest that for sequential reads and writes, performance improves by between 2x and 3x. This is a significant increase in performance and we’re very excited about it. With enough testing from our user community (hint, hint), we hope this new feature can really expand how GlusterFS is used.
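As a rough sketch of what this enables: with the QEMU integration, an image can be addressed directly over libgfapi using a gluster:// URL instead of a path on a FUSE mount. The server, volume, and image names below are hypothetical, and the qemu-img line is commented out since it needs a QEMU build with gluster support.

```shell
# Hypothetical server/volume/image names -- substitute your own.
host='server1'; volume='vmvol'; image='vm1.qcow2'
url="gluster://${host}/${volume}/${image}"
echo "$url"

# Create the image directly on the Gluster volume, bypassing FUSE entirely:
# qemu-img create -f qcow2 "$url" 10G
```

The same URL form can then be handed to QEMU (or to Libvirt in a disk definition) so the running VM does its I/O through libgfapi rather than the FUSE mount.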

What’s even more noteworthy about this feature is that it wasn’t developed by Red Hat engineers. Engineers working out of the IBM Linux Technology Center approached us last summer about doing this work. IBM engineers developed the necessary pieces in QEMU and Libvirt, as well as the block device translator in GlusterFS. We had long desired to create a client library, and this was just the impetus we needed to move that particular feature up on our priority list.

This marks the first time that a major feature in a new release was contributed from outside our immediate engineering group. This shows how broad our community has grown and demonstrates the global reach of the Gluster community. It is indeed the hallmark of a healthy project.

For a rundown of other features coming in 3.4, see the list below: