Posted on November 5, 2013

Recap of Gluster Community Day at USENIX LISA

Yesterday we had the opportunity to run a Gluster Community Day at USENIX LISA in Washington, D.C. It turned out to be well worth the time, as we had a fantastic group turn up for some really excellent talks.

The crowd wasn’t large, but it was clearly a group that was interested in GlusterFS and associated projects, like puppet-gluster. Having the right crowd is almost always better than having a big, but disinterested, crowd.

I had planned to write up a post to summarize the event, but it looks like James Shubin has beaten me to the punch with a fantastic post on The Technical Blog of James. Be sure to check out James’ post, and if you’re interested in Gluster, Puppet, and other technical goodness, put his blog’s feed in your favorite reader. (He’s also on Twitter as @purpleidea.)

Many thanks to the attendees for turning up, and to Wesley Duffee-Braun, Eco Willson, and James for giving some fantastic talks and Gluster wisdom.

Don’t forget, we also have a Gluster Birds of a Feather (BoF) here at LISA on Thursday. Join us at 8:00 p.m. in the “Hoover” room.

Posted on October 17, 2013

Red Hat Related Talks at LinuxCon + CloudOpen Europe

LinuxCon and CloudOpen Europe are just a few days away, and the line-up for talks looks really good. If you’re putting together your schedule, we have a couple of suggestions for talks that you’d probably find interesting.

The full schedule is on Sched.org, which makes it really easy to keep track of the talks you don’t want to miss. Also, don’t miss the Gluster Workshop on Thursday.

Monday, October 21

Tuesday, October 22

Wednesday, October 23

Posted on October 11, 2013

Gluster Workshop in Edinburgh at LinuxCon Europe

If you’re attending LinuxCon Europe, you’ll want to get signed up for the Gluster Workshop on Thursday, October 24th.

The program starts at a very reasonable 10 a.m. This full-day, free workshop includes talks on using Gluster with OpenStack and KVM/QEMU, and on developing apps that integrate with GlusterFS. This is a chance for developers and admins to learn first-hand what GlusterFS and related open software-defined storage projects in the Gluster Community can accomplish in cloud and virtualized environments.

  • State of Gluster (John Mark Walker)
  • Gluster for SysAdmins, an In-depth Look (Dustin Black)
  • Gluster and OpenStack, a Case Study (Udo Seidel)
  • Gluster, QEMU and KVM (Vijay Bellur)
  • Developing Apps and Integrating with GlusterFS (Justin Clift)

Please join us on October 24th at the Sheraton next to the Edinburgh International Conference Centre. Registration for the Gluster Workshop is free; sign up today on the LinuxCon/CloudOpen sign-up page. (Note: Select “attendee” rather than “speaker.”)

Posted on September 16, 2013

oVirt 3.3, Glusterized

The All-in-One install I detailed in Up and Running with oVirt 3.3 includes everything you need to run virtual machines and get a feel for what oVirt can do, but the downside of the local storage domain type is that it limits you to that single All in One (AIO) node.

You can shift your AIO install to a shared storage configuration to invite additional nodes to the party, and oVirt has supported the usual shared storage suspects such as NFS and iSCSI since the beginning.

New in oVirt 3.3, however, is a storage domain type for GlusterFS that takes advantage of Gluster’s new libgfapi feature to boost performance compared to FUSE or NFS-based methods of accessing Gluster storage with oVirt.

With a GlusterFS data center in oVirt, you can distribute your storage resources right alongside your compute resources. As a new feature, GlusterFS domain support is rougher around the edges than more established parts of oVirt, but once you get it up and running, it’s worth the trouble.

In oVirt, each host can be part of only one data center at a time. Before we decommission our local storage domain, we have to shut down any VMs running on our host, and, if we’re interested in moving them to our new Gluster storage domain, we need to ferry those machines over to our export domain.

GlusterFS Domain & RHEL/CentOS:

The new, libgfapi-based GlusterFS storage type has a couple of software prerequisites that aren’t currently available for RHEL/CentOS — the feature requires qemu 1.3 or better and libvirt 1.0.1 or better. Earlier versions of those components don’t know about the GlusterFS block device support, so while you’ll be able to configure a GlusterFS domain on one of those distros today, any attempts to launch VMs will fail.

Versions of qemu and libvirt with the needed functionality backported are in the works, and should be available soon, but for now, you’ll need Fedora 19 to use the GlusterFS domain type. For RHEL or CentOS hosts, you can still use Gluster-based storage, but you’ll need to do so with the POSIXFS storage type.
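
If you want to double-check whether a host already meets those minimums, a quick package query like the following should tell you (assuming an RPM-based system; exact package names can differ between Fedora and RHEL/CentOS):

rpm -q qemu-kvm libvirt   # looking for qemu-kvm >= 1.3 and libvirt >= 1.0.1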

The setup procedures are very similar, so I’ll include the POSIXFS instructions below as well in case you want to pursue that route in the meantime. Once the updated packages become available, I’ll modify this howto accordingly.

SELinux, Permissive

Currently, the GlusterFS storage scenario described in this howto requires that SELinux be put in permissive mode. You can put SELinux in permissive mode with the command:

sudo setenforce 0

To make the shift to permissive mode persist between reboots, edit “/etc/sysconfig/selinux” and change SELINUX=enforcing to SELINUX=permissive.
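
If you’d rather make that change from the command line, a one-liner along these lines should do it (on Fedora/RHEL systems, /etc/sysconfig/selinux is normally a symlink to /etc/selinux/config, hence the --follow-symlinks flag):

sudo sed -i --follow-symlinks 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/sysconfig/selinux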

Glusterizing Your AIO Install in n Easy Steps

  1. Evacuate Your VMs

    Visit the “Virtual Machines” tab in your Administrator Portal, shut down any running VMs, and click “Export,” then “OK,” to copy them over to your export domain.

    While an export is in progress, there’ll be an hourglass icon next to your VM name. Once any VMs you wish to save have moved over, you can reclaim some space by right-clicking the VMs and hitting “Remove,” and then “OK.”

  2. Detach Your Domains

    Next, detach your ISO_DOMAIN from the local storage data center by visiting the “Storage” tab, clicking on the ISO_DOMAIN, visiting the “Data Center” tab in the bottom pane, clicking “local_datacenter,” then “Maintenance,” then “Detach,” and “OK” in the following dialog. Follow these same steps to detach your EXPORT_DOMAIN as well.

  3. Modify Your Data Center, Cluster & Host

    Now, click the “Data Centers” tab, select the “Default” data center, and click “Edit.” In the resulting dialog box, choose “GlusterFS” in the “Type” drop down menu and click “OK.”

    If you’re using RHEL/CentOS and taking the Gluster via POSIXFS storage route I referenced above, choose “POSIXFS” in the “Type” drop down menu instead.

    Next, click the “Clusters” tab, select the “Default” cluster, and click “Edit.” In the resulting dialog box, click the check box next to “Enable Gluster Service” and click “OK.”

    Then, visit the “Hosts” tab, select your “local_host” host, and click “Maintenance.” When the host is in maintenance mode, click “Edit,” select “Default” from the “Data Center” drop down menu, hit “OK,” and then “OK” again in the following dialog.

  4. Next, hit the command-line for a few tweaks that ought to be handled automatically, but aren’t (yet).

    Install the vdsm-gluster package, start gluster, and restart vdsm:

    sudo yum install vdsm-gluster

    Now, edit the file “/etc/glusterfs/glusterd.vol” [bz#] to add “option rpc-auth-allow-insecure on” to the list of options under “volume management” (the resulting stanza is sketched just after this list of steps).

    As part of the virt store optimizations that oVirt applies to Gluster volumes, there’s a Gluster virt group in which oVirt places optimized volumes. The file that describes this group isn’t currently provided in a package, so we have to fetch it from Gluster’s source repository:

    sudo curl https://raw.github.com/gluster/glusterfs/master/extras/group-virt.example -o /var/lib/glusterd/groups/virt # [bz#]

    Now, we’ll start the Gluster service and restart the vdsm service:

    sudo service glusterd start
    sudo service vdsmd restart
  5. Next, we’ll create a mount point for our Gluster brick and set its permissions appropriately. To keep this howto short, I’m going to use a regular directory on our test machine’s file system for the Gluster brick. In a production setup, you’d want your Gluster brick to live on a separate XFS partition.
    sudo mkdir /var/lib/exports/data
    sudo chown 36:36 /var/lib/exports/data # [bz#]
  6. Now, we’re ready to re-activate our host, and use it to create the Gluster volume we’ll be using for VM storage. Return to the Administrator Portal, visit the “Hosts” tab, and click “Activate.”

    Then, visit the “Volumes” tab, click “Create Volume,” and give your new volume a name. I’m calling mine “data.” Check the “Optimize for Virt Store” check box, and click the “Add Bricks” button.

    In the resulting dialog box, populate “Brick Directory” with the path we created earlier, “/var/lib/exports/data” and click “Add” to add it to the bricks list. Then, click “OK” to exit the dialog, and “OK” again to return to the “Volumes” tab.

  7. Before we start up our new volume, we need to head back to the command line to apply the “server.allow-insecure” option we added earlier to our volume:
    sudo gluster volume set data server.allow-insecure on
  8. Now, back to the Administrator Portal to start our volume and create a new data domain. Visit the “Volumes” tab, select your newly-created volume, and click “Start.”

    Then, visit the “Storage” tab, hit “New Domain,” give your domain a name, and populate the “Path” field with your machine’s hostname colon volume name:

    mylittlepony.lab:data

    If you’re using RHEL/CentOS and taking the Gluster via POSIXFS storage route I referenced above, you need to populate the “Path” field with your machine’s hostname colon slash volume name instead (for example, mylittlepony.lab:/data). Again, this is only if you’re taking the POSIXFS route. With the GlusterFS storage type, that pesky slash [BZ] won’t prevent the domain from being created, but it will cause VM startup to fail mysteriously! Also, in the “VFS Type” field, you’ll need to enter “glusterfs”.

    Click “OK” and wait a few moments for the new storage domain to initialize. Next, click on your detached export domain, choose the “Data Center” tab in the bottom pane, click “Attach,” select “Default” data center, and click “OK.” Perform the same steps with your iso domain.

  9. All Right. You’re back up and running, this time with a GlusterFS Storage Domain. If you ferried any of the VMs you created on the original local storage domain out to your export domain, you can now ferry them back:

    Visit the “Storage” tab, select your export domain, click “VM Import” in the lower pane, select the VM you wish to import, and click “Import.” Click “OK” on the dialog that appears next. If you didn’t remove the VM you’re importing from your local storage domain earlier, you may have to “Import as cloned.”
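
For reference, here’s roughly what the “volume management” stanza in /etc/glusterfs/glusterd.vol ends up looking like after the edit in step 4 (a sketch; the surrounding options and their defaults vary by GlusterFS version):

volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option rpc-auth-allow-insecure on
end-volume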

Next Steps

From here, you can experiment with different types of Gluster volumes for your data domains. For instance, if, after adding a second host to your data center, you want to replicate storage between the two hosts, you’d create a storage brick on both of your hosts, choose the replicated volume type when creating your Gluster volume, create a data domain backed by that volume, and start storing your VMs there.
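
If you’d rather do that from the command line, the Gluster CLI steps look roughly like this (a sketch; the host names and brick paths are placeholders, and “group virt” applies the same virt-group options as the “Optimize for Virt Store” check box, using the group file fetched in step 4 above):

sudo gluster volume create data replica 2 host1.lab:/bricks/data host2.lab:/bricks/data
sudo gluster volume set data group virt
sudo gluster volume set data server.allow-insecure on
sudo gluster volume start data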

You can also disable the NFS ISO and Export shares hosted from your AIO machine and re-create them on new Gluster volumes, accessed via Gluster’s built-in NFS server. If you do, make sure to disable your system’s own NFS service, as kernel NFS and Gluster NFS conflict with each other.
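
A minimal sketch of disabling the kernel NFS server (the first line is for Fedora 19’s systemd, the second for RHEL/CentOS 6 init scripts; adjust to your distro):

sudo systemctl stop nfs-server.service && sudo systemctl disable nfs-server.service
sudo service nfs stop && sudo chkconfig nfs off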


oVirt 3.3 Spices Up the Software Defined Datacenter with OpenStack and Gluster Integration

The oVirt 3.3 release may not quite let you manage all the things in the data center, but it’s getting awfully close. Just shy of six months after the oVirt 3.2 release, the team has delivered an update with groundbreaking integration with OpenStack components, GlusterFS, and a number of ways to custom tailor oVirt to your data center’s needs.

What is oVirt?

oVirt is an entirely open source approach to the software defined datacenter. oVirt builds on the industry-standard open source hypervisor, KVM, and delivers a platform that can scale from one system to hundreds of nodes running thousands of instances.

The oVirt project comprises two main components:

  • oVirt Node: A minimal Linux install that includes the KVM hypervisor and is tuned for running massive workloads.
  • oVirt Engine: A full-featured, centralized management portal for managing oVirt Nodes. oVirt Engine gives admins, developers, and users the tools needed to orchestrate their virtual machines across many oVirt Nodes.

See the oVirt Feature Guide for a comprehensive list of oVirt’s features.

What’s New in 3.3?

In just under six months of development, the oVirt team has made some impressive improvements and additions to the platform.

Integration with OpenStack Components

Evaluating or deploying OpenStack in your datacenter? The oVirt team has added integration with Glance and Neutron in 3.3 to enable sharing components between oVirt and OpenStack.

By integrating with Glance, OpenStack’s service for managing disk and server images and snapshots, you’ll be able to share your KVM-based disk images between oVirt and OpenStack.

OpenStack Neutron integration allows oVirt to use Neutron as an external network provider. This means you can tap Neutron from oVirt to provide networking capabilities (such as network discovery, provisioning, security groups, etc.) for your oVirt-managed VMs.

oVirt 3.3 also provides integration with Cloud-Init, so oVirt can simplify provisioning of virtual machines with SSH keys, user data, timezone information, and much more.

Gluster Improvements

With the 3.3 release, oVirt gains support for using GlusterFS as a storage domain. This means oVirt can take full advantage of Gluster’s integration with QEMU, providing a performance boost over the previous method of using Gluster’s POSIX exports. Using the native QEMU-GlusterFS integration allows oVirt to bypass the FUSE overhead and access images stored in Gluster as a network block device.
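
Under the hood, this is QEMU’s GlusterFS block driver, which addresses images with gluster:// URLs rather than going through a local mount. As a rough illustration (the host, volume, and image names here are made up):

# create a 10GB image directly on the Gluster volume "data"
qemu-img create gluster://gluster.example.com/data/vm1.img 10G
# boot a guest from that image, bypassing FUSE entirely
qemu-system-x86_64 -m 1024 -drive file=gluster://gluster.example.com/data/vm1.img,if=virtio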

The latest oVirt release also allows admins to use oVirt to manage their Gluster clusters, and oVirt will recognize changes made via Gluster’s command line tools. In short, oVirt has gained tight integration with network-distributed storage, and Gluster users have easy management of their domains with a simple user interface.

Extending oVirt

Out of the proverbial box, oVirt is already a fantastic platform for managing your virtualized data center. However, oVirt can be extended to fit your computing needs precisely.

  • External Tasks give external applications the ability to inject tasks into the oVirt engine via the REST API and track changes in the oVirt UI (a minimal curl sketch against the REST API appears after this list).
  • Custom Device Properties allow you to specify custom properties for virtual devices, such as vNICs, when devices may need non-standard settings.
  • Java-SDK is a full SDK for interacting with the oVirt API from external applications.
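
For a sense of what talking to the engine’s REST API looks like, here’s a minimal read-only sketch using curl (the engine address and credentials are placeholders, and the /api base path is an assumption; some releases serve the API at /ovirt-engine/api instead):

curl -k -u admin@internal:password -H "Accept: application/xml" https://engine.example.com/api/vms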

Getting oVirt 3.3

Ready to take oVirt for a test drive? Head over to the oVirt download page and check out Jason Brooks’ Getting Started with oVirt 3.3 Guide. Have questions? You can find us on IRC or subscribe to the users mailing list to get help from others using oVirt.

Posted on September 11, 2013

Up and Running with oVirt 3.3

The oVirt Project is now putting the finishing touches on version 3.3 of its KVM-based virtualization management platform. The release will be feature-packed, including expanded support for Gluster storage, new integration points for OpenStack’s Neutron networking and Glance image services, and a raft of new extensibility and usability upgrades.

oVirt 3.3 also sports an overhauled All-in-One (AIO) setup plugin, which makes it easy to get up and running with oVirt on a single machine to see what oVirt can do for you.

Prerequisites

  • Hardware: You’ll need a machine with at least 4GB RAM and processors with hardware virtualization extensions. A physical machine is best, but you can test oVirt effectively using nested KVM as well.
  • Software: oVirt 3.3 runs on the 64-bit editions of Fedora 19 or Red Hat Enterprise Linux 6.4 (or on the equivalent version of one of the RHEL-based Linux distributions such as CentOS or Scientific Linux).
  • Network: Your test machine’s domain name must resolve properly, either through your network’s DNS, or through the /etc/hosts files of your test machine itself and through those of whatever other nodes or clients you intend to use in your installation. On Fedora 19 machines with a static IP address (DHCP configurations appear not to be affected), you must disable NetworkManager for the AIO installer to run properly [BZ]:
    $> sudo systemctl stop NetworkManager.service
    $> sudo systemctl mask NetworkManager.service
    $> sudo service network start
    $> sudo chkconfig network on

    Also, check the configuration file for your interface (for instance, /etc/sysconfig/network-scripts/ifcfg-eth0) and remove the trailing zero from “GATEWAY0,” “IPADDR0,” and “NETMASK0,” as this syntax appears only to work while NetworkManager is enabled. [BZ] (An example ifcfg snippet appears after this list.)

  • SELinux: All parts of oVirt should operate with SELinux in enforcing mode, but SELinux bugs do surface. At the time that I’m writing this, the Glusterization portion of this howto requires that SELinux be put in permissive mode. Also, the All in One install on CentOS needs SELinux to be in permissive mode to complete. You can put SELinux in permissive mode with the command:
    sudo setenforce 0

    To make the shift to permissive mode persist between reboots, edit “/etc/sysconfig/selinux” and change SELINUX=enforcing to SELINUX=permissive.
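
For the network prerequisite above, here’s roughly what a static configuration in ifcfg-eth0 looks like once the trailing zeros are dropped (the addresses are made-up placeholders):

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.50
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1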


Install & Configure oVirt All in One

  1. Run one of the following commands to install the oVirt repository on your test machine.
    1. For Fedora 19:
      $> sudo yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm -y
    2. For RHEL/CentOS 6.4 (also requires EPEL):
      $> sudo yum localinstall http://resources.ovirt.org/releases/ovirt-release-el6-8-1.noarch.rpm -y
      $> sudo yum localinstall http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm -y
  2. Next, install the oVirt All-in-One setup plugin:
    $> sudo yum install ovirt-engine-setup-plugin-allinone -y
  3. Run the engine-setup installer. When asked whether to configure VDSM on the host, answer yes. You should be fine accepting the other default values.
    $> sudo engine-setup

    Once the engine-setup script completes, you’ll have a working management server that doubles as a virtualization host. The script sets up a local storage domain for hosting VM images, and an iso domain for storing iso images for installing operating systems on the VMs you create.

  4. Before we leave the command line and fire up the oVirt Administrator Portal, we’re going to create one more storage domain: an export domain, which oVirt uses for ferrying VM images and templates between data centers. We can do this by creating the export domain mount point, setting the permissions properly, copying and tweaking the configuration files that engine-setup created for the iso domain, and reloading nfs-server:
    $> sudo mkdir /var/lib/exports/export
    $> sudo chown 36:36 /var/lib/exports/export
    1. For Fedora:
      $> sudo cp /etc/exports.d/ovirt-engine-iso-domain.exports /etc/exports.d/ovirt-engine-export-domain.exports

      In ovirt-engine-export-domain.exports, change “iso” to “export”:

      $> sudo vi /etc/exports.d/ovirt-engine-export-domain.exports
      $> sudo service nfs-server reload
    2. For RHEL/CentOS:
      $> sudo vi /etc/exports

      In /etc/exports append the line:

      /var/lib/exports/export    0.0.0.0/0.0.0.0(rw)
      $> sudo service nfs reload
  5. Now, fire up your Web browser, visit the address of your oVirt engine machine, and click the “Administrator Portal” link. Log in with the user name “admin” and the password you entered during engine-setup.

    Once logged into the Administrator Portal, click the “Storage” tab, select your ISO_DOMAIN, and visit the “Data Center” tab in the bottom half of the screen. Next, click the “Attach” link, check the check box next to “local_datacenter,” and hit “OK.” This will attach the storage domain that houses your ISO images to your local datacenter.


    Next, we’ll create and activate our export domain. From the “Storage” tab, click “New Domain,” give the export domain a name (I’m using EXPORT_DOMAIN), choose “local_datacenter” in the “Data Center” drop down menu, choose “Export / NFS” from the “Domain Function / Storage Type” drop down menu, enter your oVirt machine’s IP or FQDN followed by :/var/lib/exports/export in the “Export Path” field, and click OK.

  6. Before we create a VM, let’s head back to the command line and upload an iso image that we can use to install an OS on the VM we create. Download an iso image:
    $> curl -O http://mirrors.kernel.org/fedora/releases/19/Fedora/x86_64/iso/Fedora-19-x86_64-netinst.iso

    Upload the image into your iso domain (the password is the same as for the Administrator Portal):

    $> engine-iso-uploader upload -i ISO_DOMAIN Fedora-19-x86_64-netinst.iso
  7. Now we’re ready to create and run a VM. Head back to the oVirt Administrator Portal, visit the “Virtual Machines” tab, and click “New VM.” In the resulting dialog box, give your new instance a name and click “OK.”

    In the “New Virtual Machine – Guide Me” dialog that pops up next, click “Configure Virtual Disks,” enter a disk size, and click “OK.” Hit “Configure Later” to dismiss the Guide Me dialog.


    Next, select your newly-created VM, and click “Run Once.” In the dialog box that appears, expand “Boot Options,” check the “Attach CD” check box, choose your install iso from the drop down, and hit “OK” to proceed.


    After a few moments, the status of your new VM will switch from red to green, and you can click on the green monitor icon next to “Migrate” to open a console window.


    oVirt defaults to the SPICE protocol for new VMs, which means you’ll need the virt-viewer package installed on your client machine. If a SPICE client isn’t available to you, you can opt for VNC by stopping your VM, clicking “Edit,” “Console,” “Show Advanced Options,” and choosing VNC from the “Protocol” drop down menu.

That’s enough for this blog post, but stay tuned for more oVirt 3.3 how-to posts. In particular, I have walkthroughs in the works for making use of oVirt’s new and improved Gluster storage support, and for making oVirt and OpenStack play nicely together.

If you’re interested in getting involved with the project, you can find all the mailing list, issue tracker, source repository, and wiki information you need on the oVirt website.

On IRC, I’m jbrooks, ping me in the #ovirt room on OFTC or write a comment below and I’ll be happy to help you get up and running or get pointed in the right direction.

Finally, be sure to follow us on Twitter at @redhatopen for news on oVirt and other open source projects in the Red Hat world.

Posted on August 19, 2013

Gluster World Tour – Coming to a City Near You

If you’ve been watching the Gluster Community Day Meetup.com page, you’ve noticed lots of activity lately. That’s because we are planning several of these around the world, in addition to a few others we’ve already run this year.

What is a ‘Gluster Community Day?’ It’s a day for in-depth sessions, use cases, demos, and developer content presented by Gluster Community experts representing many layers of today’s cloud and data center infrastructure. A Gluster Community Day is where you learn best practices for deploying, managing and developing with GlusterFS as well as many of the adjunct projects that make up the Gluster Community.

We have several upcoming, and many more that we’re planning. Below is a list of those that we have locked in – check back at gluster.org/meetups/ or meetup.com/Gluster to always see the latest list:

And we’re actively seeking venues in Germany, France, Netherlands, Hong Kong, Singapore, Seoul, Bangalore, Chennai and Taipei. If you’d like to submit a venue for consideration, please send it to cfp (at) gluster.org

Would you like to speak at one of the events above? Send a brief note to cfp (at) gluster.org with a title and brief description of what you would like to speak about. Also include your personal bio, including talks you’ve given at other events.

Look forward to seeing you there!

-John Mark

Posted on July 12, 2013

The Summer of Gluster is Here!

I wanted to take a moment and share all the things that are going on in the Gluster Community. It really has been an amazing year, and we’re only halfway through. Here’s a recap for those of you watching from home:
  • Launched the Gluster Community Forge in early May – http://forge.gluster.org/
    • as of now, there are over 20 incubating projects on the Forge, 100 developers and over 1,000 commits to git repositories on the site.
    • future plans include upgrading to gitorious 3, adding integrated bug tracking capability and merging with the global look-and-feel under development for gluster.org

As we looked at the growth of the Gluster Community over the last year, it became clear that the community has evolved to be more than Red Hat, and that we needed a governance model that recognized this growth. For example, there are countless projects scattered across the internet that utilize GlusterFS, but there was no “one-stop shop” to find them. We also knew there are many organizations that contribute to the success of the Gluster Community, but there was no way to formalize their involvement. And finally, we understood that this movement of which we are a part, the movement away from traditional, proprietary storage vendors, needed a name: Open Software-defined Storage.

In response, we have plotted out a series of steps to make the Gluster Community vision grander, more ubiquitous, and more integral to open source cloud and big data communities than ever before. Here are just some of the things that you can expect to see:
  • Graduation of incubating projects. Leading candidates thus far include gluster-swift, pmux and gflocator. The former cements our standing in the OpenStack object storage camp, and the latter two form a very interesting project that allows users to conduct file-based Map/Reduce jobs on distributed Gluster volumes.
  • GlusterFS 3.4 – we are very very close to GA. Hang tight :) This is the release that includes QEMU integration and libgfapi, a new client library for developers
  • Much better performance for the vast majority of workloads. This will become more apparent when you try the imminent releases of 3.3.2 or 3.4.0.
  • Gluster Community Software Distribution. As the Gluster Community forms a software ecosystem around GlusterFS, we will formalize a timely release schedule that allows multiple projects to participate.
  • Higher frequency of point releases. This has been a big deal the past year. We have worked hard to fix this, and you’ll notice it very shortly.
  • More and better integration with multiple projects that make up OpenStack, CloudStack and Hadoop distributions
  • New Gluster.org site with complete redesign from the ground up and new branding
  • More Gluster Community Workshops, including at OSCON, LinuxCon North America & Europe, Stockholm, London and more. If you would like to run a Gluster Community Workshop in your area, contact us – cfp@gluster.org
  • More presence at OpenStack Summit , Hadoop Summit, Apache CloudStack Collaboration Conference and other related events. Gluster engineers will be more visible than ever at open source cloud and big data events

These steps are essential for building on our momentum and making a successful community that will, in turn, make all participants and collaborators more successful.

Want to be part of a winning team? Get involved – host a meetup, present at a workshop or conference, help out new users on gluster-users and #gluster.

We’re deeply committed to making the Gluster community a wide tent for innovation in cloud storage, and we want to know how we can serve you in this mission. Let us know what you’d like to see from us and how we can best meet your needs.

Posted on June 21, 2013

Rev Your (RDMA) Engines for the RDMA GlusterFest

UPDATE: We’re extending testing until 00:00 UTC on Tuesday, June 25. We want to give everyone a chance to get their RDMA clusters set up.

It’s that time again – we want to test the GlusterFS 3.4 beta before we unleash it on the world. Like our last test fest, we want you to put the latest GlusterFS beta through real-world usage scenarios that will show you how it compares to previous releases.

This time around, we want to focus on InfiniBand and RDMA hardware.

For a description of how to do this, see the GlusterFest page on the community wiki. Run the tests, report the results, and report any bugs you find along the way.

As an added bonus, use the gluster-users mailing list as another outlet for your testing. After reporting the results on the GlusterFest page, report them on the list, too, and other users can confirm – or counter – your results.

Find a new bug that is confirmed by the Gluster QE team, and I’ll send you a free t-shirt (see image below).

[Image: Gluster t-shirt]

Testing is underway now, and wraps up at 00:00 UTC on Saturday, June 22 (aka 5pm PT/8pm ET on Friday, June 21).

Posted on June 19, 2013

Event Recap: oVirt Shanghai Workshop

Last month, over 80 users and developers gathered at Intel’s Shanghai China Campus for a two-day workshop centered on oVirt, the Open Virtualization management platform.

Jackson He, General Manager of Intel Asia and Pacific R&D Ltd. and Intel Software and Services Group PRC, provided the opening keynote, in which he spoke to a mostly local audience about Intel’s growth in China and continued commitment to open source software including such projects as oVirt, OpenStack, KVM and Hadoop. Intel’s continued commitment to Open Source virtualization was further demonstrated throughout the Workshop with great presentations by Gang Wei and Dongxiao Xu.

With three tracks spread across two days, this was the first workshop that also featured a day-long Gluster Operations Track. This track, led by John Mark Walker, Community Lead for Gluster, allowed for not only introductions and examples of leveraging GlusterFS storage solutions with oVirt, but also more advanced discussions, including a talk by Vijay Bellur, a Senior Principal Software Engineer at Red Hat, on developing with libgfapi and the GlusterFS translator framework.

In conjunction with the Gluster track on the first day of the workshop was the primary oVirt Operations track. With Red Hat presentations ranging from an introduction to oVirt to getting into the weeds of Troubleshooting, oVirt attendees were exposed to all levels of operational use cases and deployment tips. Presentations from IBM engineers Shu Ming and Mark Wu provided solid operational discussions covering oVirt testing in a nested virtualization configuration and outlining IBM’s commitment to and planned development objectives for oVirt.

The second day was all about oVirt developers. Of particular interest to attendees was a presentation by Zhengsheng Zhou of IBM discussing work done to support oVirt on Ubuntu. A key highlight of this event was the continued growth and interest around open virtualization solutions, with oVirt serving a foundational role. The interest in making oVirt available to other platforms is greatly encouraging, and we’re excited to see the community grow to include new platforms.

Also on day two, Doron Fediuck of Red Hat presented on oVirt SLAs, enforced by MoM, the Memory Overcommitment Manager. This presentation also provided a roadmap moving forward on this and other key features. Great discussions on this and most of the presentations allowed for developers to get engaged and focus in on where to help moving forward.

Presentations from the event are now available on the oVirt website.

With over 80 attendees representing Intel, IBM, Red Hat as well as the greater oVirt and Gluster communities, we’re pleased that this workshop was a success.