all posts tagged debian

on August 4, 2015

Gluster Community Packages

The Gluster Community currently provides GlusterFS packages for the following distributions:

                            3.5 3.6 3.7
Fedora 21                    ¹   ×   ×
Fedora 22                    ×   ¹   ×
Fedora 23                    ×   ×   ¹
Fedora 24                    ×   ×   ¹
RHEL/CentOS 5                ×   ×
RHEL/CentOS 6                ×   ×   ×
RHEL/CentOS 7                ×   ×   ×
Ubuntu 12.04 LTS (precise)   ×   ×
Ubuntu 14.04 LTS (trusty)    ×   ×   ×
Ubuntu 15.04 (vivid)             ×   ×
Ubuntu 15.10 (wily)
Debian 7 (wheezy)            ×   ×
Debian 8 (jessie)            ×   ×   ×
Debian 9 (stretch)           ×   ×   ×
SLES 11                      ×   ×
SLES 12                          ×   ×
OpenSuSE 13                  ×   ×   ×
RHELSA 7                             ×

(Packages are also available in NetBSD and maybe FreeBSD.)

Most packages are available from

Ubuntu packages are available from Launchpad

As can be seen, the older distributions don’t have packages of the latest GlusterFS, usually because their dependencies are too old or missing. Similarly, newer distributions don’t have packages of the older versions, for the same reason.

¹ In Fedora, Fedora Updates, or Fedora Updates-Testing for Primary architectures. Secondary architectures seem to be slow to sync with Primary; RPMs for aarch64 are often available from

on April 8, 2015

GlusterFS-3.4.7 Released

The Gluster community is pleased to announce the release of GlusterFS-3.4.7.

The GlusterFS 3.4.7 release is focused on bug fixes:

  • 33608f5 cluster/dht: Changed log level to DEBUG
  • 076143f protocol: Log ENODATA & ENOATTR at DEBUG in removexattr_cbk
  • a0aa6fb build: argp-standalone, conditional build and build with gcc-5
  • 35fdb73 api: versioned symbols in for compatibility
  • 8bc612d cluster/dht: set op_errno correctly during migration.
  • 8635805 cluster/dht: Fixed double UNWIND in lookup everywhere code

GlusterFS 3.4.7 packages for a variety of popular distributions are available from

on January 27, 2015

GlusterFS 3.6.2 GA released

The release source tar file and packages for Fedora {20,21,rawhide},
RHEL/CentOS {5,6,7}, Debian {wheezy,jessie}, Pidora2014, and Raspbian
wheezy are available at

(Ubuntu packages will be available soon.)

This release fixes the following bugs. Thanks to all who submitted bugs
and patches and reviewed the changes.

1184191 – Cluster/DHT: Fixed crash due to null deref
1180404 – nfs server restarts when a snapshot is deactivated
1180411 – CIFS:[USS]: glusterfsd OOM killed when 255 snapshots were browsed at CIFS mount and Control+C is issued
1180070 – [AFR] getfattr on fuse mount gives error: Software caused connection abort
1175753 – [readdir-ahead]: indicate EOF for readdirp
1175752 – [USS]: On a successful lookup, snapd logs are filled with warnings “dict OR key (entry-point) is NULL”
1175749 – glusterfs client crashed while migrating the fds
1179658 – Add brick fails if parent dir of new brick and existing brick is same and volume was accessed using libgfapi and smb.
1146524 – synch minor diffs with fedora dist-git
1175744 – [USS]: Unable to access .snaps after snapshot restore after directories were deleted and recreated
1175742 – [USS]: browsing .snaps directory with CIFS fails with “Invalid argument”
1175739 – [USS]: Non-root user who has no access to a directory, from NFS mount, is able to access the files under .snaps under that directory
1175758 – [USS]: Rebalance process tries to connect to snapd and in case when snapd crashes it might affect rebalance process
1175765 – [USS]: When snapd is crashed gluster volume stop/delete operation fails making the cluster in inconsistent state
1173528 – Change in volume heal info command output
1166515 – [Tracker] RDMA support in glusterfs
1166505 – mount fails for nfs protocol in rdma volumes
1138385 – [DHT:REBALANCE]: Rebalance failures are seen with error message “remote operation failed: File exists”
1177418 – entry self-heal in 3.5 and 3.6 are not compatible
1170954 – Fix mutex problems reported by coverity scan
1177899 – nfs: ls shows “Permission denied” with root-squash
1175738 – [USS]: data unavailability for a period of time when USS is
1175736 – [USS]: After deactivating a snapshot trying to access the remaining activated snapshots from NFS mount gives ‘Invalid argument’ error
1175735 – [USS]: snapd process is not killed once the glusterd comes back
1175733 – [USS]: If the snap name is same as snap-directory then cd to virtual snap directory fails
1175756 – [USS]: Snapd crashed while trying to access the snapshots under .snaps directory
1175755 – SNAPSHOT[USS]: gluster volume set for uss does not check any
1175732 – [SNAPSHOT]: nouuid is appended for every snapshoted brick which causes duplication if the original brick has already nouuid
1175730 – [USS]: creating file/directories under .snaps shows wrong error message
1175754 – [SNAPSHOT]: before the snap is marked to be deleted if the node goes down then the snaps are propagated on other nodes and glusterd
1159484 – ls -alR can not heal the disperse volume
1138897 – NetBSD port
1175728 – [USS]: All uss related logs are reported under /var/log/glusterfs, it makes sense to move it into subfolder
1170548 – [USS]: don’t display the snapshots which are not activated
1170921 – [SNAPSHOT]: snapshot should be deactivated by default when
1175694 – [SNAPSHOT]: snapshoted volume is read only but it shows rw attributes in mount
1161885 – Possible file corruption on dispersed volumes
1170959 – EC_MAX_NODES is defined incorrectly
1175645 – [USS]: Typo error in the description for USS under “gluster volume set help”
1171259 – mount.glusterfs does not understand -n option

Kaleb, on behalf of Raghavendra Bhat, who did all the work.

on January 30, 2013

GlusterFS volumes not mounting in Debian Squeeze at boot time

With mixed results, some users have been reporting problems mounting GlusterFS volumes at boot time. I spun up a VM at Rackspace to see what I could see.

For my volume I used the following fstab entry. The host is defined in /etc/hosts:

server1:testvol /mnt/testvol glusterfs _netdev 0 0

The error listed in the client logs tells me that the fuse module isn't loaded when the volume tries to mount:

[2013-01-30 17:14:05.307253] E [mount.c:598:gf_fuse_mount] 0-glusterfs-fuse: cannot open /dev/fuse (No such file or d
[2013-01-30 17:14:05.307348] E [xlator.c:385:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your
volfile again

There are no logs with usable timestamps. The init scripts in /etc/rcS.d show that networking is started before fuse. networking runs any scripts in /etc/network/if-up.d when the network comes up. Of these, the inaptly named mountnfs mounts all the fstab entries that have _netdev set, using the command

mount -a -O _netdev
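To make the selection step concrete, here is a minimal sketch of the filtering that `mount -a -O _netdev` performs: it only considers fstab entries whose option list contains _netdev. This uses a throwaway fstab copy in /tmp so it is safe to run anywhere; the real mountnfs script does more (NFS-specific setup) than this shows.

```shell
# Sketch: which fstab entries would "mount -a -O _netdev" consider?
# Throwaway fstab with one local and one gluster entry:
cat > /tmp/demo-fstab <<'EOF'
/dev/sda1 / ext4 defaults 0 1
server1:testvol /mnt/testvol glusterfs _netdev 0 0
EOF

# Select lines whose 4th (options) field contains _netdev:
awk '$4 ~ /(^|,)_netdev(,|$)/ {print $1, "on", $2}' /tmp/demo-fstab
# prints: server1:testvol on /mnt/testvol
```

Only the gluster entry qualifies, which is why this one hook is what triggers the mount attempt at network-up time.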

The fuse init script was designed with the expectation that all the remote filesystems should already be mounted (for the case of an NFS-mounted /usr). This means it's scheduled after networking to allow those remote mounts to occur.


Since I don't really care whether remote filesystems are mounted before the fuse module is loaded, I worked around this by editing /etc/init.d/fuse, replacing $remote_fs with $local_fs in the Required-Start line:

# Required-Start:    $local_fs
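For context, that line lives in the LSB header block at the top of /etc/init.d/fuse. The sketch below shows roughly what the header looks like with the change applied; fields other than Required-Start are illustrative and vary between package versions, so check your own script rather than copying this verbatim.

```shell
### BEGIN INIT INFO
# Provides:          fuse
# Required-Start:    $local_fs
# Required-Stop:     $local_fs
# Default-Start:     S
# Default-Stop:      0 6
# Short-Description: Load the fuse kernel module
### END INIT INFO
```

The dependency-based boot (insserv) and the classic sequence numbers both derive ordering from this header, which is why the next step re-registers the script.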

Then re-order the init processes:

update-rc.d fuse start 34 S . stop 41 0 6 .
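update-rc.d encodes the ordering in the symlink names it creates: start 34 S becomes /etc/rcS.d/S34fuse, and stop 41 0 6 becomes K41fuse links in rc0.d and rc6.d. Scripts in rcS.d run in lexical order of their link names, so a quick sanity check that S34fuse sorts ahead of networking's start link (S35networking is a stand-in name here; the actual sequence number on your system may differ):

```shell
# rcS.d entries execute in lexical (sort) order of their names;
# S34fuse must sort before networking's link for fuse to load first.
printf '%s\n' S35networking S34fuse | sort
# prints:
# S34fuse
# S35networking
```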


People often ask us to document troubleshooting steps. Because it's not supposed to fail, there is seldom a fixed troubleshooting procedure. If there were, we'd file bug reports and get the failures fixed.

Here's the process I used:

Check the client log. That's actually one that's documented everywhere. If something goes wrong, check the log.

Fuse isn't loaded. Where's it supposed to get loaded from? I'm out of my expertise with Debian, so I ran grep fuse /etc/init.d/* to see what might have an effect. It looks like /etc/init.d/fuse is it.

fuse's Default-Start is "S" so I looked in /etc/rcS.d and saw the boot order. Thinking that a later script in that sequence was the one supposed to mount the gluster volume, I manually moved fuse earlier in the start order (mv S19fuse S16fuse). Rebooting still didn't mount the volume.

I decided to see for sure where the volume was being started so in /sbin/mount.glusterfs I added "ps axf >>/tmp/mounttimeps". Rebooted.

Looking in my new file I saw:

  103 hvc0     Ss+    0:00 init boot 
  104 hvc0     S+     0:00  \_ /bin/sh /etc/init.d/rc S
  107 hvc0     S+     0:00      \_ startpar -p 4 -t 20 -T 3 -M boot -P N -R S
  399 hvc0     S      0:00          \_ startpar -p 4 -t 20 -T 3 -M boot -P N -R S
  400 hvc0     S      0:00              \_ /bin/sh -e /etc/init.d/networking start
  402 hvc0     S      0:00                  \_ ifup -a
  490 hvc0     S      0:00                      \_ /bin/sh -c run-parts  /etc/network/if-up.d
  491 hvc0     S      0:00                          \_ run-parts /etc/network/if-up.d
  492 hvc0     S      0:00                              \_ /bin/sh /etc/network/if-up.d/mountnfs
  502 hvc0     S      0:00                                  \_ mount -a -O _netdev
  503 hvc0     S      0:00                                      \_ /bin/sh /sbin/mount.glusterfs server1:testvol /mnt/testvol -o rw,_netdev

This pretty clearly showed that "networking" was responsible for the mount attempt. Since networking clearly happens before $remote_fs, I changed the requirement and re-ordered. The new order in /etc/rcS.d showed that fuse would start before networking, and subsequent reboots proved that it works correctly.

I'll be working with the package maintainer for gluster-client to see if a proper solution can be implemented.