If you’ve been watching the Gluster Community Day Meetup.com page, you’ve noticed lots of activity lately. That’s because we are planning several of these around the world, in addition to a few others we’ve already run this year.
What is a ‘Gluster Community Day?’ It’s a day for in-depth sessions, use cases, demos, and developer content presented by Gluster Community experts representing many layers of today’s cloud and data center infrastructure. A Gluster Community Day is where you learn best practices for deploying, managing and developing with GlusterFS as well as many of the adjunct projects that make up the Gluster Community.
We have several coming up and many more in the works. Below is a list of those we have locked in – check back at gluster.org/meetups/ or meetup.com/Gluster for the latest list:
And we’re actively seeking venues in Germany, France, the Netherlands, Hong Kong, Singapore, Seoul, Bangalore, Chennai and Taipei. If you’d like to suggest a venue for consideration, please send it to cfp (at) gluster.org.
Would you like to speak at one of the events above? Send a brief note to cfp (at) gluster.org with a title and brief description of what you would like to speak about. Also include your personal bio, including talks you’ve given at other events.
Look forward to seeing you there!
Several talks related to the Gluster Community have been proposed for the OpenStack Summit in Hong Kong in November. Please vote for your favorites so that we can be sure to get on the program.
Remember to vote early and often!
Today at Red Hat Summit, Jon Masters, Red Hat’s chief ARM architect, demonstrated GlusterFS replicated on two ARM 64 servers, streaming a video. This marks the first successful demo of a distributed filesystem running on ARM 64.
Video and podcast to come soon.
It’s that time again! Time to start prepping for a new release of GlusterFS, in this case, 3.4. If you haven’t checked it out yet, grab a source tarball and tell us how it goes. There are also community builds showing up on download.gluster.org for Ubuntu, Fedora and EPEL. Additionally, the Git repo has now been tagged with 3.4.
First, take a look at the 3.4 feature page to see the highlights.
One thing should jump out at you immediately: QEMU integration and the block device translator. This will significantly increase the scope of possible use cases for GlusterFS. Previously, provisioning VMs on GlusterFS meant going through the FUSE mount with the GlusterFS client. The native client mount via the FUSE module is great for the scale-out NAS use case – it’s mature and reliable for sharing files and folders and presenting a global namespace, whether deployed in the cloud, on bare metal, or providing storage services in a virtualized environment. For hosting and managing VMs, however, it simply didn’t perform at the level needed when running hundreds of VMs across multiple servers. Now, with the QEMU integration, we bypass FUSE entirely and go through a new client library, libgfapi. Early reports suggest that sequential read and write performance improves by 2x to 3x. This is a significant gain, and we’re very excited about it. With enough testing from our user community (hint, hint), we hope this new feature can really expand how GlusterFS is used.
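To give a feel for what this looks like in practice, here is a rough sketch of creating and booting a VM image directly over libgfapi. It assumes a running Gluster volume named `testvol` on host `server1` (both names are placeholders) and a QEMU built with GlusterFS support:

```shell
# Create a disk image directly on the Gluster volume via libgfapi;
# no FUSE mount involved (QEMU uses the gluster://HOST/VOLNAME/PATH URI form)
qemu-img create -f qcow2 gluster://server1/testvol/vm1.qcow2 10G

# Boot a VM from that image; QEMU talks to the bricks through libgfapi
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=gluster://server1/testvol/vm1.qcow2,if=virtio
```

The same image remains visible as an ordinary file through any FUSE mount of the volume, so existing tooling keeps working.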
What’s even more noteworthy about this feature is that it wasn’t developed by Red Hat engineers. Engineers from the IBM Linux Technology Center approached us last summer about doing this work, and they developed the necessary pieces in QEMU and Libvirt, as well as the block device translator in GlusterFS. We had long wanted to create a client library, and this was just the impetus we needed to move that feature up our priority list.
This marks the first time a major feature in a new release has been contributed from outside our immediate engineering group. It demonstrates how broad and global the Gluster community has become, and it is the hallmark of a healthy project.
For a rundown of other features coming in 3.4, see the list below:
I forgot to post this at the time, but I had a lovely conversation with Richard Morrell, aka the “Cloud Evangelist” at Red Hat’s UK office. Richard is a jolly bloke with a fair bit to say on all things cloud. We talked about GlusterFS, the Gluster community, and also about Red Hat’s upcoming Developer Day in London on November 1.
Richard definitely gets the idea that the cloud is really about the data – and storage.
Linky to the podcast here.
Direct link to audio: MP3 and OGG
Today, we’re announcing the next generation of GlusterFS, version 3.3. The release has been a year in the making and marks several firsts: the first post-acquisition release under Red Hat, our first major act as an openly-governed project and our first foray beyond NAS. We’ve also taken our first steps towards merging big data and unstructured data storage, giving users and developers new ways of managing their data scalability challenges.
GlusterFS is an open source, fully distributed storage solution for the world’s ever-increasing volume of unstructured data. It is a software-only, highly available, scale-out, centrally managed storage pool that can work with POSIX filesystems that support extended attributes, such as Ext3/4, XFS, BTRFS and many more.
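As a concrete illustration, here is roughly what standing up a small replicated volume looks like. Hostnames (`server1`, `server2`), the volume name and the brick paths are placeholders:

```shell
# On each server, a brick is just a directory on an xattr-capable filesystem (e.g. XFS)
mkdir -p /export/brick1

# From one server in the trusted pool: add the peer, then create and start
# a two-way replicated volume spanning both servers
gluster peer probe server2
gluster volume create myvol replica 2 server1:/export/brick1 server2:/export/brick1
gluster volume start myvol

# Mount the volume from any client using the native FUSE client
mount -t glusterfs server1:/myvol /mnt/gluster
```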
This release provides many of the most commonly requested features including proactive self-healing, quorum enforcement, and granular locking for self-healing, as well as many additional bug fixes and enhancements.
Some of the more noteworthy features include:
- Unified File and Object storage – Blending OpenStack’s Object Storage API with GlusterFS provides simultaneous read and write access to data as files or as objects.
- HDFS compatibility – Gives Hadoop administrators the ability to run MapReduce jobs on unstructured data on GlusterFS and access the data with well-known tools and shell scripts.
- Proactive self-healing – GlusterFS volumes will now automatically restore file integrity after a replica recovers from failure.
- Granular locking – Allows large files to be accessed even during self-healing, a feature that is particularly important for VM images.
- Replication improvements – With quorum enforcement, you can be confident that your data has been written to at least the configured number of replicas before the file operation returns, allowing a user-configurable trade-off between fault tolerance and performance.
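Quorum enforcement, for example, is controlled per volume through the standard volume options. A sketch, assuming a replicated volume named `myvol`:

```shell
# Enforce quorum automatically: writes are allowed only while more than half
# of the replicas (or exactly half, including the first brick) are reachable
gluster volume set myvol cluster.quorum-type auto

# Alternatively, require a fixed number of live replicas before writes proceed
gluster volume set myvol cluster.quorum-type fixed
gluster volume set myvol cluster.quorum-count 2
```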
Visit http://www.gluster.org to download. Packages are available for most distributions, including Fedora, Debian, RHEL, Ubuntu and CentOS.
Get involved! Join us on #gluster on freenode, join our mailing list, ‘like’ our Facebook page, follow us on Twitter, or check out our LinkedIn group.
GlusterFS is an open source project sponsored by Red Hat®, who uses it in its line of Red Hat Storage products.
In 1814, Thomas Jefferson donated the contents of his vast personal library of books and correspondence to form the foundation of the Library of Congress. Some 200 years later, that library is one of the largest in the world. Yet, the text of all of it…
Our 3rd and final community profile features Louis ‘Semiosis’ Zuckerman. Semiosis maintains a repository of GlusterFS binaries for Ubuntu on Launchpad.net. While he came in 2nd in the contest based on his contributions on our Community Q&A forums, many of you may know him from his participation on #gluster on Freenode. The following is an […]
A big part of the value proposition of cloud is to ensure that you have continuous access to your data, and that you’ve moved beyond the physical limitations of a single box or a single data center or a single geography. While the move to the cloud can…
Now that we’ve learned what a translator looks like and how to build one, it’s time to run one and actually watch it work. The best way to do this is good old-fashioned gdb, as follows (using some of the examples from last time): [root@gfs-i8c-01 xlator_example]# gdb glusterfs […]