

Posted on February 16, 2017

GlusterFS 3.8.9 is another Long-Term-Maintenance update

We are proud to announce the General Availability of the next update to the Long-Term-Stable release GlusterFS 3.8. Packages are being prepared and are expected to hit the repositories of distributions and the Gluster download server over the next few days. Details on which versions are part of which distributions can be found under Community Packages in the documentation. The release notes are part of the git repository and the downloadable tarball, and are included in this post for easy access.

Release notes for Gluster 3.8.9

This is a bugfix release. The Release Notes for 3.8.0, 3.8.1, 3.8.2, 3.8.3, 3.8.4, 3.8.5, 3.8.6, 3.8.7 and 3.8.8 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.8 stable release.

Bugs addressed

A total of 16 patches have been merged, addressing 14 bugs:
  • #1410852: glusterfs-server should depend on firewalld-filesystem
  • #1411899: DHT doesn't evenly balance files on FreeBSD with ZFS
  • #1412119: ganesha service crashed on all nodes of ganesha cluster on disperse volume when doing lookup while copying files remotely using scp
  • #1412888: Extra lookup/fstats are sent over the network when a brick is down.
  • #1412913: [ganesha + EC]posix compliance rename tests failed on EC volume with nfs-ganesha mount.
  • #1412915: Spurious split-brain error messages are seen in rebalance logs
  • #1412916: [ganesha+ec]: Contents of original file are not seen when hardlink is created
  • #1412922: ls and move hung on disperse volume
  • #1412941: Regression caused by enabling client-io-threads by default
  • #1414655: Upcall: Possible memleak if inode_ctx_set fails
  • #1415053: geo-rep session faulty with ChangelogException "No such file or directory"
  • #1415132: Improve output of "gluster volume status detail"
  • #1417802: debug/trace: Print iatts of individual entries in readdirp callback for better debugging experience
  • #1420184: [Remove-brick] Hardlink migration fails with "lookup failed (No such file or directory)" error messages in rebalance logs
Posted on January 15, 2017

Another Gluster 3.8 Long-Term-Maintenance update with the 3.8.8 release

The Gluster team has been busy over the end-of-year holidays, and this latest update to the 3.8 Long-Term-Maintenance release fixes quite a number of bugs. Packages have been built for many different distributions and are available from the download server. The release notes for 3.8.8 are included below for ease of reference. All users of the 3.8 version are recommended to update to this release.

Release notes for Gluster 3.8.8

This is a bugfix release. The Release Notes for 3.8.0, 3.8.1, 3.8.2, 3.8.3, 3.8.4, 3.8.5, 3.8.6 and 3.8.7 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.8 stable release.

Bugs addressed

A total of 38 patches have been merged, addressing 35 bugs:
  • #1375849: [RFE] enable sharding with virt profile - /var/lib/glusterd/groups/virt
  • #1378384: log level set in glfs_set_logging() does not work
  • #1378547: Asynchronous Unsplit-brain still causes Input/Output Error on system calls
  • #1389781: build: python on Debian-based dists use .../lib/python2.7/dist-packages instead of .../site-packages
  • #1394635: errors appear in brick and nfs logs and getting stale files on NFS clients
  • #1395510: Seeing error messages [snapview-client.c:283:gf_svc_lookup_cbk] and [dht-helper.c:1666ht_inode_ctx_time_update] (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x5d75c)
  • #1399423: GlusterFS client crashes during remove-brick operation
  • #1399432: A hard link is lost during rebalance+lookup
  • #1399468: Wrong value in Last Synced column during Hybrid Crawl
  • #1399915: [SAMBA-CIFS] : IO hungs in cifs mount while graph switch on & off
  • #1401029: OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
  • #1401534: fuse mount point not accessible
  • #1402697: glusterfsd crashed while taking snapshot using scheduler
  • #1402728: Worker restarts on log-rsync-performance config update
  • #1403109: Crash of glusterd when using long username with geo-replication
  • #1404105: Incorrect incrementation of volinfo refcnt during volume start
  • #1404583: Upcall: Possible use after free when log level set to TRACE
  • #1405004: [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
  • #1405130: `gluster volume heal split-brain' does not heal if data/metadata/entry self-heal options are turned off
  • #1405450: tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
  • #1405577: [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
  • #1405886: Fix potential leaks in INODELK cbk in protocol/client
  • #1405890: Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
  • #1405951: NFS-Ganesha:Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
  • #1406740: Fix spurious failure in tests/bugs/replicate/bug-1402730.t
  • #1408414: Remove-brick rebalance failed while rm -rf is in progress
  • #1408772: [Arbiter] After Killing a brick writes drastically slow down
  • #1408786: with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
  • #1410073: Fix failure of split-brain-favorite-child-policy.t in CentOS7
  • #1410369: Dict_t leak in dht_migration_complete_check_task and dht_rebalance_inprogress_task
  • #1410699: [geo-rep]: Config commands fail when the status is 'Created'
  • #1410708: glusterd/geo-rep: geo-rep config command leaks fd
  • #1410764: Remove-brick rebalance failed while rm -rf is in progress
  • #1411011: atime becomes zero when truncating file via ganesha (or gluster-NFS)
  • #1411613: Fix the place where graph switch event is logged
Posted on September 15, 2016

GlusterFS 3.8.4 is available, Gluster users are advised to update

Even though the last 3.8 release was just two weeks ago, we're sticking to the release schedule and have 3.8.4 ready for all our current and future users. As with all updates, we advise users of previous versions to upgrade to the latest and greatest. Several bugs have been fixed, and upgrading is one way to prevent hitting known problems in the future.

Release notes for Gluster 3.8.4

This is a bugfix release. The Release Notes for 3.8.0, 3.8.1, 3.8.2 and 3.8.3 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.8 stable release.

Bugs addressed

A total of 23 patches have been merged, addressing 22 bugs:
  • #1332424: geo-rep: address potential leak of memory
  • #1357760: Geo-rep silently ignores config parser errors
  • #1366496: 1 mkdir generates tons of log messages from dht xlator
  • #1366746: EINVAL errors while aggregating the directory size by quotad
  • #1368841: Applications not calling glfs_h_poll_upcall() have upcall events cached for no use
  • #1368918: tests/bugs/cli/bug-1320388.t: Infrequent failures
  • #1368927: Error: quota context not set inode (gfid:nnn) [Invalid argument]
  • #1369042: thread CPU saturation limiting throughput on write workloads
  • #1369187: fix bug in protocol/client lookup callback
  • #1369328: [RFE] Add a count of snapshots associated with a volume to the output of the vol info command
  • #1369372: gluster snap status xml output shows incorrect details when the snapshots are in deactivated state
  • #1369517: rotated FUSE mount log is using to populate the information after log rotate.
  • #1369748: Memory leak with a replica 3 arbiter 1 configuration
  • #1370172: protocol/server: readlink rsp xdr failed while readlink got an error
  • #1370390: Locks xlators is leaking fdctx in pl_release()
  • #1371194: segment fault while join thread reaper_thr in fini()
  • #1371650: [Open SSL] : Unable to mount an SSL enabled volume via SMB v3/Ganesha v4
  • #1371912: gluster system:: uuid get hangs
  • #1372728: Node remains in stopped state in pcs status with "/usr/lib/ocf/resource.d/heartbeat/ganesha_mon: line 137: [: too many arguments ]" messages in logs.
  • #1373530: Minor improvements and cleanup for the build system
  • #1374290: "gluster vol status all clients --xml" doesn't generate xml if there is a failure in between
  • #1374565: [Bitrot]: Recovery fails of a corrupted hardlink (and the corresponding parent file) in a disperse volume
Posted on August 12, 2016

The GlusterFS 3.8.2 bugfix release is available

Pretty much according to the release schedule, GlusterFS 3.8.2 has been released this week. Packages are available in the standard repositories, and are moving from testing status to normal updates in the different distributions.

Release notes for Gluster 3.8.2

This is a bugfix release. The Release Notes for 3.8.0 and 3.8.1 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.8 stable release.

Bugs addressed

A total of 54 patches have been merged, addressing 50 bugs:
  • #1339928: Misleading error message on rebalance start when one of the glusterd instance is down
  • #1346133: tiering : Multiple brick processes crashed on tiered volume while taking snapshots
  • #1351878: client ID should logged when SSL connection fails
  • #1352771: [DHT]: Rebalance info for remove brick operation is not showing after glusterd restart
  • #1352926: "gluster volume status client" isn't showing any information when one of the nodes in a 3-way Distributed-Replicate volume is shut down
  • #1353814: Bricks are starting when server quorum not met.
  • #1354250: Gluster fuse client crashed generating core dump
  • #1354395: rpc-transport: compiler warning format string
  • #1354405: process glusterd set TCP_USER_TIMEOUT failed
  • #1354429: [Bitrot] Need a way to set scrub interval to a minute, for ease of testing
  • #1354499: service file is executable
  • #1355609: [granular entry sh] - Clean up (stale) directory indices in the event of an rm -rf and also in the normal flow while a brick is down
  • #1355610: Fix timing issue in tests/bugs/glusterd/bug-963541.t
  • #1355639: [Bitrot]: Scrub status- Certain fields continue to show previous run's details, even if the current run is in progress
  • #1356439: Upgrade from 3.7.8 to 3.8.1 doesn't regenerate the volfiles
  • #1357257: observing " Too many levels of symbolic links" after adding bricks and then issuing a replace brick
  • #1357773: [georep]: If a georep session is recreated the existing files which are deleted from slave doesn't get sync again from master
  • #1357834: Gluster/NFS does not accept dashes in hostnames in exports/netgroups files
  • #1357975: [Bitrot+Sharding] Scrub status shows incorrect values for 'files scrubbed' and 'files skipped'
  • #1358262: Trash translator fails to create 'internal_op' directory under already existing trash directory
  • #1358591: Fix spurious failure of tests/bugs/glusterd/bug-1111041.t
  • #1359020: [Bitrot]: Sticky bit files considered and skipped by the scrubber, instead of getting ignored.
  • #1359364: changelog/rpc: Memory leak- rpc_clnt_t object is never freed
  • #1359625: remove hardcoding in get_aux function
  • #1359654: Polling failure errors getting when volume is started&stopped with SSL enabled setup.
  • #1360122: Tiering related core observed with "uuid_is_null () message".
  • #1360138: [Stress/Scale] : I/O errors out from gNFS mount points during high load on an erasure coded volume,Logs flooded with Error messages.
  • #1360174: IO error seen with Rolling or non-disruptive upgrade of an distribute-disperse(EC) volume from 3.7.5 to 3.7.9
  • #1360556: afr coverity fixes
  • #1360573: Fix spurious failures in split-brain-favorite-child-policy.t
  • #1360574: multiple failures of tests/bugs/disperse/bug-1236065.t
  • #1360575: Fix spurious failures in ec.t
  • #1360576: [Disperse volume]: IO hang seen on mount with file ops
  • #1360579: tests: ./tests/bitrot/br-stub.t fails intermittently
  • #1360985: [SNAPSHOT]: The PID for snapd is displayed even after snapd process is killed.
  • #1361449: Direct io to sharded files fails when on zfs backend
  • #1361483: posix: leverage FALLOC_FL_ZERO_RANGE in zerofill fop
  • #1361665: Memory leak observed with upcall polling
  • #1362025: Add output option --xml to man page of gluster
  • #1362065: tests: ./tests/bitrot/bug-1244613.t fails intermittently
  • #1362069: [GSS] Rebalance crashed
  • #1362198: [tiering]: Files of size greater than that of high watermark level should not be promoted
  • #1363598: File not found errors during rpmbuild: /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py{c,o}
  • #1364326: Spurious failure in tests/bugs/glusterd/bug-1089668.t
  • #1364329: Glusterd crashes upon receiving SIGUSR1
  • #1364365: Bricks doesn't come online after reboot [ Brick Full ]
  • #1364497: posix: honour fsync flags in posix_do_zerofill
  • #1365265: Glusterd not operational due to snapshot conflicting with nfs-ganesha export file in "/var/lib/glusterd/snaps"
  • #1365742: inode leak in brick process
  • #1365743: GlusterFS - Memory Leak - High Memory Utilization
Posted on April 13, 2016

GlusterFS 3.5.9 is available, would it be the last 3.5 release?

There has been a delay in announcing the most recent 3.5 release; I'm sorry about that! Packages for most distributions are available by now, either from the standard distribution repositories (NetBSD) or from download.gluster.org. We are working hard to release the next major version of Gluster. The roadmap for Gluster 3.8 is getting more complete every day, but there is still some work to do. A reminder for all users of the 3.5 stable series: when GlusterFS 3.8 is released, the 3.5 version will become unmaintained. We do our best to maintain three versions of Gluster, so with the 3.8 release those will be 3.8, 3.7 and 3.6. Users still running version 3.5 are highly encouraged to start planning their upgrade. If no critical problems are reported against the 3.5 version, and no patches get sent, 3.5.9 may well be the last 3.5 release.

Release Notes for GlusterFS 3.5.9

This is a bugfix release. The Release Notes for 3.5.0, 3.5.1, 3.5.2, 3.5.3, 3.5.4, 3.5.5, 3.5.6, 3.5.7 and 3.5.8 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.5 stable release.

Bugs Fixed:

  • 1313968: Request for XML output ignored when stdin is not a tty
  • 1315559: SEEK_HOLE and SEEK_DATA should return EINVAL when protocol support is missing

Known Issues:

  • The following configuration changes are necessary for 'qemu' and 'samba vfs plugin' integration with libgfapi to work seamlessly:
    1. gluster volume set <volname> server.allow-insecure on
    2. restarting the volume is necessary
      gluster volume stop <volname>
      gluster volume start <volname>
    3. Edit /etc/glusterfs/glusterd.vol to contain this line:
      option rpc-auth-allow-insecure on
    4. restarting glusterd is necessary
      service glusterd restart
    More details are also documented in the Gluster Wiki on the Libgfapi with qemu libvirt page.
  • For Block Device translator based volumes open-behind translator at the client side needs to be disabled.
    gluster volume set <volname> performance.open-behind disabled
  • libgfapi clients calling glfs_fini before a successful glfs_init will cause the client to hang as reported here. The workaround is NOT to call glfs_fini for error cases encountered before a successful glfs_init. This is being tracked in Bug 1134050 for glusterfs-3.5 and Bug 1093594 for mainline.
  • If the /var/run/gluster directory does not exist enabling quota will likely fail (Bug 1117888).
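For reference, the four libgfapi integration steps above can be collected into a single shell script. This is a minimal sketch: the volume name myvol is a placeholder for your own volume, and the sed line is merely one way of adding the option to glusterd.vol (editing the file by hand works equally well).

```shell
#!/bin/sh
# Sketch of the qemu/samba libgfapi integration steps described above.
# "myvol" is a placeholder volume name -- substitute your own.
VOLNAME=myvol

# 1. Allow clients connecting from unprivileged ("insecure") ports.
gluster volume set "$VOLNAME" server.allow-insecure on

# 2. Restart the volume so the new setting takes effect.
gluster volume stop "$VOLNAME"
gluster volume start "$VOLNAME"

# 3. Permit insecure connections to glusterd itself, by adding the
#    option to /etc/glusterfs/glusterd.vol (skipped if already present).
grep -q 'rpc-auth-allow-insecure on' /etc/glusterfs/glusterd.vol ||
    sed -i '/end-volume/i\    option rpc-auth-allow-insecure on' \
        /etc/glusterfs/glusterd.vol

# 4. Restart glusterd to pick up the changed glusterd.vol.
service glusterd restart
```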
Posted on December 15, 2015

GlusterFS 3.5.7 has been released

Around the 10th of each month the release schedule allows for a 3.5 stable update. This one got delayed a few days due to the unfriendly weather in The Netherlands, which made me take some holidays in a sunnier place in Europe.

This release fixes two bugs: one is only a minor improvement for distributions using systemd, the other fixes a potential client-side segfault when the server.manage-gids option is used. Packages for different distributions are available on the main download server; distributions that still provide glusterfs-3.5 packages should get updates out shortly too.
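Since only clients of volumes that have server.manage-gids enabled could hit that segfault, it may help to check which of your volumes use the option before deciding how urgently to update. A minimal sketch, assuming the gluster CLI is available and relying on the fact that explicitly set options appear in the volume info output:

```shell
#!/bin/sh
# List volumes that have server.manage-gids explicitly enabled;
# clients of these volumes are the ones affected by the segfault.
for vol in $(gluster volume list); do
    gluster volume info "$vol" | grep -q 'server.manage-gids: on' &&
        echo "$vol: manage-gids enabled, update clients to 3.5.7"
done
```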

To keep informed about Gluster, you can follow the project on Twitter, read articles on Planet Gluster or check other sources like mailinglists on our home page.

Release Notes for GlusterFS 3.5.7

This is a bugfix release. The Release Notes for 3.5.0, 3.5.1, 3.5.2, 3.5.3, 3.5.4, 3.5.5 and 3.5.6 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.5 stable release.

Bugs Fixed:

  • 1283542: glusterfs does not register with rpcbind on restart
  • 1283691: core dump in protocol/client:client_submit_request

Known Issues:

  • The following configuration changes are necessary for 'qemu' and 'samba vfs plugin' integration with libgfapi to work seamlessly:
    1. gluster volume set <volname> server.allow-insecure on
    2. restarting the volume is necessary
      gluster volume stop <volname>
      gluster volume start <volname>
    3. Edit /etc/glusterfs/glusterd.vol to contain this line:
      option rpc-auth-allow-insecure on
    4. restarting glusterd is necessary
      service glusterd restart
    More details are also documented in the Gluster Wiki on the Libgfapi with qemu libvirt page.
  • For Block Device translator based volumes open-behind translator at the client side needs to be disabled.
    gluster volume set <volname> performance.open-behind disabled
  • libgfapi clients calling glfs_fini before a successful glfs_init will cause the client to hang as reported here. The workaround is NOT to call glfs_fini for error cases encountered before a successful glfs_init. This is being tracked in Bug 1134050 for glusterfs-3.5 and Bug 1093594 for mainline.
  • If the /var/run/gluster directory does not exist enabling quota will likely fail (Bug 1117888).
Posted on November 27, 2015

Hurry up, only a few days left to do the 2015 Gluster Community Survey

The Gluster Community provides packages for Fedora, CentOS, Debian, Ubuntu, NetBSD and other distributions. All users are important to us, and we really like to hear how Gluster is (not?) working out for you, or what improvements are most wanted. It is easy to pass this information along (anonymously) through this year's survey (it's a Google form).

If you would like to comment on the survey itself, please get in touch with Amye.

Posted on September 21, 2015

Monthly GlusterFS 3.5 release, fixing two bugs

Packages for Fedora 21 are available in updates-testing, RPMs and .debs can be found on the main Gluster download site.

This is a bugfix release. The Release Notes for 3.5.0, 3.5.1, 3.5.2, 3.5.3, 3.5.4 and 3.5.5 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.5 stable release.

Bugs Fixed:

  • 1244118: unix domain sockets on Gluster/NFS are created as fifo/pipe
  • 1244147: Many build warnings when compiling glusterfs-3.5 with recent gcc

Known Issues:

  • The following configuration changes are necessary for 'qemu' and 'samba vfs plugin' integration with libgfapi to work seamlessly:
    1. gluster volume set <volname> server.allow-insecure on
    2. restarting the volume is necessary
       gluster volume stop <volname>
      gluster volume start <volname>
    3. Edit /etc/glusterfs/glusterd.vol to contain this line:
       option rpc-auth-allow-insecure on
    4. restarting glusterd is necessary
       service glusterd restart
      More details are also documented in the Gluster Wiki on the Libgfapi with qemu libvirt page.
  • For Block Device translator based volumes open-behind translator at the client side needs to be disabled.
    gluster volume set <volname> performance.open-behind disabled
  • libgfapi clients calling glfs_fini before a successful glfs_init will cause the client to hang as reported here. The workaround is NOT to call glfs_fini for error cases encountered before a successful glfs_init. This is being tracked in Bug 1134050 for glusterfs-3.5 and Bug 1093594 for mainline.
  • If the /var/run/gluster directory does not exist enabling quota will likely fail (Bug 1117888).
Posted on July 26, 2015

Gluster News of week #29/2015

Another week has passed, and here is another “Gluster Weekly News” post. Please add topics for the next post to the etherpad. Anything that is worth noting can be added; contributions from anyone are very much appreciated.
GlusterFS 3.5.5 landed in the Fedora 21 updates repository (moved out of updates-testing).

Fedora 23 has been branched from Rawhide and will contain GlusterFS 3.7. Previous Fedora releases will stick with the stable branches, meaning F22 keeps glusterfs-3.6 and F21 will stay with glusterfs-3.5.

Shared Storage for Containers in Cloud66 using Gluster.
Real Internet Solutions from Belgium (Dutch only website) started to deploy a Gluster solution for their multi-datacenter Cloud Storage products.

Wednesday the regular community meeting took place under the guidance of Atin. He posted the minutes so that everyone else can follow what was discussed.

Several Gluster talks have been accepted for LinuxCon/CloudOpen Europe in Dublin. The accepted talks have been added (with links) to our event etherpad. Attendees interested in meeting other Gluster community people should add their names to the list on the etherpad, maybe we can setup a Gluster meetup or something.

More Gluster topics have been proposed for the OpenStack Summit in Tokyo. Go to https://www.openstack.org/summit/tokyo-2015/vote-for-speakers/SearchForm and search for “gluster” to see them all. You can vote for talks you would like to attend.

GlusterFS 3.7.3 is going to be released by Kaushal early next week.

Documentation updates describing the different projects for users, developers, the website and feature planning. More feedback on these suggestions is very welcome.
Posted on July 19, 2015

Gluster News of week #28/2015

Thanks to André Bauer for suggesting a "This week in Gluster" blog post series. This post is the first of its kind, and hopefully we manage to write something every week. Future blog posts are edited on a public etherpad where everyone can contribute snippets. Suggestions for improvement can be shared on the mailinglists or on the etherpad.

As every week on Wednesday, there was a Gluster Community Meeting. The minutes have been posted to the list. The next meeting happens on Wednesday, at 12:00 UTC in #gluster-meeting on Freenode IRC.

Proxmox installations of Debian 8 fail when the VM image is stored on a Gluster volume. After many troubleshooting steps and trying different ideas, it was found that there is an issue with the version of Qemu delivered by Proxmox: Qemu on Proxmox 3.4, the Debian 8 kernel with virtio disks, and storage on Gluster do not work well together. Initially thought to be a Gluster issue, it was identified as a Proxmox problem. In order to install Debian 8 on Proxmox 3.4, a workaround is to configure IDE/SATA disks instead of virtio, or to use NFS instead of Qemu+libgfapi. More details and alternative workarounds can be found in the email thread.

On IRC, Jampy asked about a problem with Proxmox containers which have their root filesystem on Gluster/NFS. Gluster/NFS has a bug where unix domain sockets are created as pipes/fifos. This unsurprisingly causes applications using unix domain sockets to behave incorrectly. Bug 1235231 was already filed and fixed in the master branch; on Friday, backports were posted to the release-3.7, 3.6 and 3.5 branches. The next releases are expected to have the fix merged.

Atin sent out a call for "Gluster Office Hours": manning the #gluster IRC channel and announcing who will (try to) be available on certain days/times. Anyone who is willing to man the IRC channel and help answer (or redirect) questions from users can sign up.

From Douglas Landgraf's report of FISL16:
We also had a talk about Gluster and oVirt by Marcelo Barbosa, who showed how oVirt + Gluster runs in the development company where he works. At the end, people asked how he integrated FreeIPA with oVirt, and how well Jenkins and Gerrit servers run on top of oVirt. Next year Marcelo should take two slots for similar talks; people are very interested in Gluster with oVirt and real use cases as demonstrated.
GlusterFS 3.5.5 and 3.6.4 have been released and packages for different distributions have been made available.

Our Jenkins instance now supports connecting over https; before, only http was available. A temporary self-signed certificate is used; an official one has been requested from the certificate authority.

Manu has updated the NetBSD slaves that are used for regression testing with Jenkins. The slaves are now running NetBSD 7.0 RC1.