all posts tagged gluster


January 18, 2016

Gluster Community Survey Report, 2015

In November 2015, we did our annual Gluster Community Survey, and we had some great responses and turnout!
We’ve taken some of the highlights and distilled them down for our overall community to review.

Some interesting things:

  • 68% of respondents have been using Gluster for less than 2 years.
  • 3 shall be the number: the most popular cluster size is 3 nodes.
  • Top uses for Gluster: 44% Virtual Infrastructure, 42% File Sync and Share, 8% On-demand media backup, 6% Backup
  • 70% of installations are managing less than 50 terabytes, but 3% are managing over one petabyte!

View our visual follow-up highlights: Gluster Community Survey Highlights – 2015

October 2, 2015

Gluster News: September 2015

Since no weekly Gluster news went out in September, this post captures and summarizes activity from the entire month of September 2015.

== General News ==

GlusterFS won yet another Bossie, in the open source platforms, infrastructure, management, and orchestration software category. Long-time users of the project might remember the first Bossie win in 2011.

GlusterFS had three bug-fix releases in September: 3.7.4, 3.6.6, and 3.5.6.

GlusterFS 3.7.5 is expected to be released in the first week of October.

GlusterFS 3.7.x – Samba libgfapi support is currently broken. Stay on GlusterFS 3.6.x if you use it.

gdeploy, an Ansible-based deployment tool for Gluster, was released this month. More details about gdeploy can be found in the announcement email.

Introducing georepsetup – Gluster Geo-replication Setup Tool.

If you are interested in Containers & Gluster, learn more about Running GlusterFS inside Kubernetes.

== Technical News ==

Gluster.next design discussions were held virtually during the week of September 28.

Several new features have been proposed for Gluster 3.8 and can be found here. A release planning page on gluster.org for 3.8 is expected in the next few weeks.

Getting Started with Code contributions to GlusterFS –  https://www.mail-archive.com/gluster-users@gluster.org/msg21726.html

== Community News ==

Introducing Amye Scavarda, the new community lead for Gluster.

The GlusterFS Silicon Valley Meetup group met at the Facebook campus in Menlo Park, CA. More details about the meetup can be found here.

GlusterFS India Community had a meetup in Bangalore on September 12.

Soumya Koduri & Poornima Gurusiddaiah presented the following topics at the SNIA Software Developers Conference in Santa Clara, CA:

  • Achieving Coherent and Aggressive Client Caching in Gluster, a Distributed System
  • Introduction to Highly Available NFS Server on Scale-Out Storage Systems Based on GlusterFS

Niels de Vos gave a presentation and demo about writing backups with Bareos to Gluster at the Open Source Backup Conference.

Several talks related to Gluster are planned at the upcoming LinuxCon EU Conference in Dublin, Ireland:

  • NFS-Ganesha and Clustered NAS on Distributed Storage Systems – Soumya Koduri, Meghana Madhusudhan (http://sched.co/3xWp)
  • Advancements in Automatic File Replication in Gluster – Ravishankar N (http://sched.co/3xWx)
  • BitRot Detection in GlusterFS – Gaurav Garg, Venky Shankar (http://sched.co/3yVR)
  • Open Storage in the Enterprise with Gluster and Ceph – Dustin Black (http://sched.co/3xTV)

Starting with the next edition, we plan to publish these updates bi-weekly. Stay tuned to learn about happenings in the Gluster world!

August 4, 2015

Gluster Community Packages

The Gluster Community currently provides GlusterFS packages for the following distributions:

                            3.5 3.6 3.7
Fedora 21                    ¹   ×   ×
Fedora 22                    ×   ¹   ×
Fedora 23                    ×   ×   ¹
Fedora 24                    ×   ×   ¹
RHEL/CentOS 5                ×   ×
RHEL/CentOS 6                ×   ×   ×
RHEL/CentOS 7                ×   ×   ×
Ubuntu 12.04 LTS (precise)   ×   ×
Ubuntu 14.04 LTS (trusty)    ×   ×   ×
Ubuntu 15.04 (vivid)             ×   ×
Ubuntu 15.10 (wily)
Debian 7 (wheezy)            ×   ×
Debian 8 (jessie)            ×   ×   ×
Debian 9 (stretch)           ×   ×   ×
SLES 11                      ×   ×
SLES 12                          ×   ×
OpenSuSE 13                  ×   ×   ×
RHELSA 7                             ×

(Packages are also available for NetBSD and maybe FreeBSD.)

Most packages are available from download.gluster.org

Ubuntu packages are available from Launchpad

As the table shows, older distributions don’t have packages of the latest GlusterFS, usually due to dependencies that are too old or missing. Similarly, newer distributions don’t have packages of the older versions, for the same reason.

¹ In Fedora, Fedora Updates, or Fedora Updates-Testing for Primary architectures. Secondary architectures seem to be slow to sync with Primary; RPMs for aarch64 are often available from download.gluster.org.

April 8, 2015

GlusterFS-3.4.7 Released

The Gluster community is pleased to announce the release of GlusterFS-3.4.7.

The GlusterFS 3.4.7 release is focused on bug fixes:

  • 33608f5 cluster/dht: Changed log level to DEBUG
  • 076143f protocol: Log ENODATA & ENOATTR at DEBUG in removexattr_cbk
  • a0aa6fb build: argp-standalone, conditional build and build with gcc-5
  • 35fdb73 api: versioned symbols in libgfapi.so for compatibility
  • 8bc612d cluster/dht: set op_errno correctly during migration.
  • 8635805 cluster/dht: Fixed double UNWIND in lookup everywhere code

GlusterFS 3.4.7 packages for a variety of popular distributions are available from http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.7/

March 27, 2015

GlusterFS 3.4.7beta4 is now available for testing

The 4th beta for GlusterFS 3.4.7 is now available for testing. A handful of bugs have been fixed since the 3.4.6 release; check the references below for details.

Bug reporters are encouraged to verify the fixes, and we invite others to test this beta to check for regressions. The ETA for 3.4.7 GA is tentatively set for April 6, so time is short for testing. Please note that the 3.4 release will reach EOL when 3.7 is released.

Packages for different distributions can be found on the main download site.

Release Notes for GlusterFS 3.4.7

GlusterFS 3.4.7 consists entirely of bug fixes. The Release Notes for 3.4.0, 3.4.1, 3.4.2, 3.4.3, 3.4.4, 3.4.5, and 3.4.6 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.4 stable release.

The following changes are included in 3.4.7:

  • 33608f5 cluster/dht: Changed log level to DEBUG
  • 076143f protocol: Log ENODATA & ENOATTR at DEBUG in removexattr_cbk
  • a0aa6fb build: argp-standalone, conditional build and build with gcc-5
  • 35fdb73 api: versioned symbols in libgfapi.so for compatibility
  • 8bc612d cluster/dht: set op_errno correctly during migration.
  • 8635805 cluster/dht: Fixed double UNWIND in lookup everywhere code

Known Issues:

  • memory leak in glusterfs fuse bridge
  • File replicas differ in content even as heal info lists 0 entries in replica 2 setup

March 17, 2015

GlusterFS 3.4.7beta2 is now available for testing

The 2nd beta for GlusterFS 3.4.7 is now available for testing. A handful of bugs have been fixed since the 3.4.6 release; check the references below for details.

Bug reporters are encouraged to verify the fixes, and we invite others to test this beta to check for regressions. The ETA for 3.4.7 GA is not set, but it will likely be before the end of March. Please note that the 3.4 release will reach EOL when 3.7 is released.

Packages for different distributions can be found on the main download site.

Release Notes for GlusterFS 3.4.7

GlusterFS 3.4.7 consists entirely of bug fixes. The Release Notes for 3.4.0, 3.4.1, 3.4.2, 3.4.3, 3.4.4, 3.4.5, and 3.4.6 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.4 stable release.

The following changes are included in 3.4.7:

  • a0aa6fb build: argp-standalone, conditional build and build with gcc-5
  • 35fdb73 api: versioned symbols in libgfapi.so for compatibility
  • 8bc612d cluster/dht: set op_errno correctly during migration.
  • 8635805 cluster/dht: Fixed double UNWIND in lookup everywhere code

Known Issues:

  • memory leak in glusterfs fuse bridge
  • File replicas differ in content even as heal info lists 0 entries in replica 2 setup

And if you’re wondering what happened to the first beta, it was made, but did not build with gcc-5.

January 27, 2015

GlusterFS 3.6.2 GA released

The release source tar file and packages for Fedora {20,21,rawhide}, RHEL/CentOS {5,6,7}, Debian {wheezy,jessie}, Pidora2014, and Raspbian wheezy are available at http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.2/

(Ubuntu packages will be available soon.)

This release fixes the following bugs. Thanks to all who submitted bugs and patches and reviewed the changes.

  • 1184191 – Cluster/DHT: Fixed crash due to null deref
  • 1180404 – nfs server restarts when a snapshot is deactivated
  • 1180411 – CIFS: [USS]: glusterfsd OOM killed when 255 snapshots were browsed at CIFS mount and Control+C is issued
  • 1180070 – [AFR] getfattr on fuse mount gives error: Software caused connection abort
  • 1175753 – [readdir-ahead]: indicate EOF for readdirp
  • 1175752 – [USS]: On a successful lookup, snapd logs are filled with warnings “dict OR key (entry-point) is NULL”
  • 1175749 – glusterfs client crashed while migrating the fds
  • 1179658 – Add brick fails if parent dir of new brick and existing brick is same and volume was accessed using libgfapi and smb
  • 1146524 – glusterfs.spec.in – sync minor diffs with fedora dist-git glusterfs.spec
  • 1175744 – [USS]: Unable to access .snaps after snapshot restore after directories were deleted and recreated
  • 1175742 – [USS]: browsing .snaps directory with CIFS fails with “Invalid argument”
  • 1175739 – [USS]: Non-root user who has no access to a directory, from NFS mount, is able to access the files under .snaps under that directory
  • 1175758 – [USS]: Rebalance process tries to connect to snapd, and if snapd crashes it might affect the rebalance process
  • 1175765 – [USS]: When snapd has crashed, gluster volume stop/delete operations fail, leaving the cluster in an inconsistent state
  • 1173528 – Change in volume heal info command output
  • 1166515 – [Tracker] RDMA support in glusterfs
  • 1166505 – mount fails for nfs protocol in rdma volumes
  • 1138385 – [DHT:REBALANCE]: Rebalance failures are seen with error message “remote operation failed: File exists”
  • 1177418 – entry self-heal in 3.5 and 3.6 are not compatible
  • 1170954 – Fix mutex problems reported by coverity scan
  • 1177899 – nfs: ls shows “Permission denied” with root-squash
  • 1175738 – [USS]: data unavailability for a period of time when USS is enabled/disabled
  • 1175736 – [USS]: After deactivating a snapshot, trying to access the remaining activated snapshots from an NFS mount gives an “Invalid argument” error
  • 1175735 – [USS]: snapd process is not killed once glusterd comes back
  • 1175733 – [USS]: If the snap name is the same as the snap-directory, cd to the virtual snap directory fails
  • 1175756 – [USS]: Snapd crashed while trying to access the snapshots under the .snaps directory
  • 1175755 – SNAPSHOT[USS]: gluster volume set for uss does not check any boundaries
  • 1175732 – [SNAPSHOT]: nouuid is appended for every snapshotted brick, which causes duplication if the original brick already has nouuid
  • 1175730 – [USS]: creating files/directories under .snaps shows the wrong error message
  • 1175754 – [SNAPSHOT]: if the node goes down before the snap is marked to be deleted, the snaps are propagated to other nodes and glusterd hangs
  • 1159484 – ls -alR can not heal the disperse volume
  • 1138897 – NetBSD port
  • 1175728 – [USS]: All uss-related logs are reported under /var/log/glusterfs; it makes sense to move them into a subfolder
  • 1170548 – [USS]: don’t display the snapshots which are not activated
  • 1170921 – [SNAPSHOT]: snapshot should be deactivated by default when created
  • 1175694 – [SNAPSHOT]: snapshotted volume is read-only but shows rw attributes in mount
  • 1161885 – Possible file corruption on dispersed volumes
  • 1170959 – EC_MAX_NODES is defined incorrectly
  • 1175645 – [USS]: Typo in the description for USS under “gluster volume set help”
  • 1171259 – mount.glusterfs does not understand -n option

Regards,
Kaleb, on behalf of Raghavendra Bhat, who did all the work.

November 7, 2014

Some notes on libgfapi.so symbol versions in GlusterFS 3.6.1

A little bit of background:

We started to track API/ABI changes to libgfapi.so by incrementing the SO_NAME, e.g. libgfapi.so.0(.0.0). In the master branch it was incremented to ‘7’, or libgfapi.so.7(.0.0), for the eventual glusterfs-3.7.

I believe, but I’m not entirely certain¹, that we were supposed to reset this (to either ‘6’ or ‘0’) when we branched for release-3.6, but we didn’t; apparently we forgot about it. In the 3.6.0 betas, and if you build the GA release of 3.6.0 from source, you get a libgfapi.so.7(.0.0).

We didn’t hear much, if anything, about this until a few days before 3.6.0 was scheduled to GA, when we were ‘reminded’ that older versions of applications like qemu, Samba, and more (linked against previous versions of libgfapi.so) no longer worked after upgrading to the new version of glusterfs.

We briefly experimented with adding a -compat package that installed a symlink: libgfapi.so.0 -> libgfapi.so.7; but some thought this was too much of a hack, and we abandoned that idea.

As a result we now have symbol versions in libgfapi.so. I’ve posted a public spreadsheet with a table of the symbols and the versions of glusterfs that they appear in at

https://docs.google.com/spreadsheets/d/1SKtzgEeVZbKFJgGGMdftf0p-AB5U7yyhVq1n2b6hBeQ/edit?usp=sharing

and also at

https://ethercalc.org/lrjvqrapzu
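
For readers who haven’t seen the mechanism before, here is a minimal, hypothetical sketch of how GNU symbol versioning works. The library, symbols, and version nodes below are invented for illustration; they are not GlusterFS’s actual version script:

    /* mylib.c -- toy versioned library, illustration only.
     * Build with a version script, e.g.:
     *   gcc -shared -fPIC -Wl,--version-script=mylib.map \
     *       -Wl,-soname,libmylib.so.0 -o libmylib.so.0.0.0 mylib.c
     *
     * mylib.map declares one version node per release and binds each
     * exported symbol to the node of the release it first appeared in:
     *
     *   MYLIB_1.0 { global: my_open; my_close; local: *; };
     *   MYLIB_2.0 { global: my_stat; } MYLIB_1.0;
     */
    int my_open(const char *path) { (void)path; return 0; }
    int my_close(int fd)          { (void)fd;   return 0; }
    int my_stat(const char *path) { (void)path; return 0; }

A consumer linked against this library records versioned references such as my_stat@MYLIB_2.0, so the dynamic linker can detect a too-old copy of the library at load time, all without ever changing the SO_NAME.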

There are a few things to note about the symbol versions:

  1. so far, all the function signatures, i.e. the number of parameters and their types, have not changed since libgfapi was introduced in 3.4.0. That’s a Good Thing.
  2. the symbol versions are taken from the (community) glusterfs release that they first appeared in.
  3. there are two functions declared in glfs.h that do not have an associated definition. So far it’s not clear why.
  4. there are two functions defined (in glfs-fops.c) that look like they ought to have declarations in glfs.h. Perhaps this was an oversight in the original implementation.
  5. there are several (private?) functions in libgfapi that are not declared in a public header but are used/referenced outside the library. That’s not a Bad Thing, per se, but it’s also not a Good Thing. It seems a bit strange for, e.g., glfs-heal and the nfs server xlator to have to be linked against libgfapi.so. These functions should either be made public or moved to another library, e.g. libglusterfs.so.

N.B. that 3, 4, and 5 are also noted in comments in the spreadsheets.
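
To make point 1 concrete, here is a minimal sketch of a libgfapi client using only public calls whose signatures have been stable since 3.4.0. The volume name, server host, and file path are hypothetical:

    /* gfclient.c -- minimal libgfapi example, illustration only.
     * Compile (library and header locations may vary by distribution):
     *   gcc gfclient.c -o gfclient -lgfapi
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        glfs_t *fs = glfs_new("myvol");          /* hypothetical volume name */
        if (!fs)
            return 1;

        /* Point the client at any server in the trusted pool. */
        glfs_set_volfile_server(fs, "tcp", "gluster1.example.com", 24007);

        if (glfs_init(fs) != 0) {                /* fetch volfile and connect */
            fprintf(stderr, "glfs_init failed\n");
            glfs_fini(fs);
            return 1;
        }

        glfs_fd_t *fd = glfs_creat(fs, "/hello.txt", O_RDWR, 0644);
        if (fd) {
            glfs_write(fd, "hello gluster\n", 14, 0);
            glfs_close(fd);
        }

        glfs_fini(fs);
        return 0;
    }

A binary like this records versioned references to the symbols it uses (taken, as noted above, from the release each symbol first appeared in), which is why 3.6.1 can revert the SO_NAME without applications having to be recompiled or relinked.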

In parallel to all this, RHEL 6.6, RHEL 7.0, and the related CentOS releases shipped updated RHS-GlusterFS client-side packages with version 3.6.0.X. This has resulted in confusion for many users of Community GlusterFS who are having issues upgrading their systems. On top of that, the libgfapi.so in these releases is libgfapi.so.0(.0.0), and there are applications included in those releases that are linked with it.

Earlier today (7 Nov, 2014) we released GlusterFS 3.6.1 to address and hopefully mitigate the 3.6.0 upgrade issue and fix the libgfapi.so compatibility issue by reverting to the original SO_NAME (libgfapi.so.0.0.0), now with symbol versions. The applications will not need to be recompiled or relinked.

Knowing that we were going to quickly release GlusterFS 3.6.1 to address these issues, to save our community packagers some work and to try to minimize confusion² we agreed in the community to not package GlusterFS 3.6.0 for any of the Linux distributions. We expect packages for 3.6.1 to be available soon on download.gluster.org.

And if anyone is looking for a nice Google Summer of Code project, linking libglusterfs and the xlators with link maps — with or without symbol versions — is an idea that I think has some merit.

HTH.

¹Not without slogging through a lot of old emails to reconstruct what we originally intended.

²Then we don’t have to explain why some Linux distributions have community 3.6.0 packages and others have (only) 3.6.1.

November 6, 2014

GlusterFS 3.4.6beta2 is now available for testing

Even though GlusterFS-3.6.0 was released last week, maintenance continues on the 3.4 stable series!

The 2nd beta for GlusterFS 3.4.6 is now available for testing. Many bugs have been fixed since the 3.4.5 release; check the references below for details.

Bug reporters are encouraged to verify the fixes, and we invite others to test this beta to check for regressions. The ETA for 3.4.6 GA is tentatively set for the week of 10 November, after testing shows the release to be stable.

Packages for different distributions can be found on the main download site.

Release Notes for GlusterFS 3.4.6

GlusterFS 3.4.6 consists entirely of bug fixes. The Release Notes for 3.4.0, 3.4.1, 3.4.2, 3.4.3, 3.4.4, and 3.4.5 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.4 stable release.

The following changes are included in 3.4.6:

  • 049580f logrotate: gluster logrotate config should not be global
  • dc8a349 cluster/afr: Handle EAGAIN properly in inodelk
  • 4dc4325 socket: disallow CBC cipher modes
  • fe3e541 Cluster/DHT: Changing rename log severity
  • 3ade635 cluster/dht: Rename should not fail post hardlink creation
  • 49c4106 cluster/dht: Treat linkto file rename failure as non-critical error
  • c527449 cluster/dht: synchronize rename and file-migration
  • 0ebe35e libglusterfs/syncop: implement inodelk
  • fcd256f cluster/dht: introduce locking api.
  • 7a1e42e cluster/dht: Fix dht_access treating directory like files
  • 91175b3 cluster/dht: Prevent dht_access from going into a loop.
  • ebdb73d dht: fix rename race
  • fe5cf30 storage/posix: removing deleting entries in case of creation failures
  • b3387c8 cluster/dht: Fix races to avoid deletion of linkto file
  • 3042613 DHT/Create: Failing to identify a linkto file in lookup_everywhere_cbk
  • a735680 dht: fix rename race
  • bc75418 DHT/readdirp: Directory not shown/healed on mount point if exists on sin
  • 67ccd15 dht/rebalance: Do not allow rebalance when gfid mismatch found
  • f8b5bfd glusterfs.spec.in: add psmisc to -server subpackage
  • ab0547e socket: Fixed parsing RPC records containing multi fragments
  • e2a76e7 cluster/dht: Fix dict_t leaks in rebalance process’ execution path
  • c0b40b5 gNFS: Fix memory leak in setacl code path
  • 1d4ef0b mount/fuse: Handle fd resolution failures
  • f0ddba7 cluster/afr: Fix memory leak of file-path in self-heal-daemon
  • 1679b72 NFS: stripe-xlator should pass EOF at end of READDIR

Known Issues:

  • memory leak in glusterfs fuse bridge
  • data loss when rebalance + renames are in progress and bricks from replica pairs go down and come back
  • rebalance process crash after add-brick and ‘rebalance start’ operation

December 10, 2013

GlusterFS Keeps VFX Studio on the Cutting Edge

Cutting Edge, a visual effects company that’s worked on films such as The Great Gatsby and I, Frankenstein, had outgrown its NAS storage system and was in search of a way to boost its storage capacity and performance in the face of several large upcoming projects. The Australia-based firm turned to GlusterFS as an alternative to making a massive investment in an enterprise SAN.

I spoke to Dan Mons, R&D SysAdmin at Cutting Edge and architect of the company’s GlusterFS deployment, about how he tapped Gluster to meet Cutting Edge’s growing storage needs.

“We’ve had three feature films roll through our Gluster storage since it went in, and to be 100% honest we couldn’t have done them without Gluster,” Mons said. “The flexibility it offers us for storage is amazing.”

The GlusterFS storage solution that Mons assembled consists of 24 total GlusterFS 3.4.1 nodes, each running CentOS 6.4 and outfitted with 34TB of RAID6 storage. These nodes are assembled into four six-node clusters, which provide the company’s Brisbane and Sydney offices each with its own production and backup cluster pair.

Each cluster hosts a distributed-replicated GlusterFS volume, which keeps data accessible in the event of node failure. Nightly rsync operations between the production and backup clusters at each location provide an additional layer of data protection.

Users in Cutting Edge’s Sydney and Brisbane offices have access to 107TB of production storage, and read-only access to another 107TB on each location’s backup cluster.

Mons explained that given data volume, time, and bandwidth constraints, it isn’t feasible to fully synchronize the data generated at the two offices, but the company’s artists have scripts to sync particular folders between the locations when they need to collaborate with co-workers in the other office.

Client Access

With a client pool that runs the gamut from Linux-powered render machines and individual workstations to machines running OS X, Windows, and a handful of specialty OSes, ensuring access to their data across multiple platforms and protocols has been one of the trickier parts of the Cutting Edge deployment.

The Linux machines that make up the majority of the company’s client mix access the cluster via the GlusterFS FUSE client, which provides direct access to all six nodes in the production cluster for maximum bandwidth distribution. Older Linux machines and those running specialty OSes tap the cluster via Gluster’s NFS support, with DNS round robin distributing the load.

Mons explained that while the OS X-based machines in his company’s environment are able to access the GlusterFS cluster normally via NFS or CIFS mounts using command-line tools, he’s run into various issues with the OS X Finder application and with Carbon- or Cocoa-based OS X applications.

To work around these issues, the team at Cutting Edge set up a separate Linux server that mounts the GlusterFS volume with the FUSE client, and then re-exports that as AFP via Netatalk3. This method works, but at the cost of performance and of compatibility with some of the firm’s pipeline processes. Ideally, Mons would like to see a FUSE client become available for OS X.

The company’s Windows-based machines access the cluster via Samba, installed on each node in the cluster, with DNS round robin for distributing the load and Active Directory for authentication. Mons said that his team encountered file locking issues with certain applications, most of which they were able to resolve, although they’ve continued to experience issues with Photoshop and Microsoft Office on Windows.

Looking Ahead

Since its March 2013 deployment, Cutting Edge’s storage solution has been updated from GlusterFS 3.3.1 to 3.4.0 and, most recently, to 3.4.1, all of which went smoothly. Mons noted that the latest GlusterFS updates have brought noticeable speed and NFS stability improvements, benefiting legacy and turnkey systems for which the FUSE client is not an option.

Looking ahead, Cutting Edge plans to add new node pairs to their production and backup clusters in early 2014, as their production clusters are nearing 90% capacity, with more project data on the way.

Mons told me that he’s begun testing Samba with Gluster’s recent libgfapi enhancements, which appear to boost file-browsing performance in his environment. Along similar lines, Mons is looking forward to seeing support for storing directory and file information in extended attributes make its way into GlusterFS, which promises to speed up directory listing and disk-usage operations.