GlusterFS 3.6.2 GA released

Gluster
2015-01-27

The release source tar file and packages for Fedora {20,21,rawhide},
RHEL/CentOS {5,6,7}, Debian {wheezy,jessie}, Pidora2014, and Raspbian
wheezy are available at
http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.2/

(Ubuntu packages will be available soon.)
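
For anyone scripting the download, here is a minimal sketch in Python that
fetches the source tarball from the release directory above. The file name
glusterfs-3.6.2.tar.gz is an assumption based on the usual
glusterfs-<version>.tar.gz naming convention; check it against the
directory listing before relying on it.

    # Minimal sketch: download the 3.6.2 source tarball from the release
    # directory linked above. The tarball name is assumed from the usual
    # glusterfs-<version>.tar.gz convention; verify it against the listing.
    import urllib.request

    base = "http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.2/"
    tarball = "glusterfs-3.6.2.tar.gz"  # assumed file name

    urllib.request.urlretrieve(base + tarball, tarball)
    print("Downloaded", tarball)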

This release fixes the following bugs. Thanks to all who submitted bugs
and patches and reviewed the changes.

1184191 – Cluster/DHT : Fixed crash due to null deref
1180404 – nfs server restarts when a snapshot is deactivated
1180411 – CIFS:[USS]: glusterfsd OOM killed when 255 snapshots were
browsed at CIFS mount and Control+C is issued
1180070 – [AFR] getfattr on fuse mount gives error : Software caused
connection abort
1175753 – [readdir-ahead]: indicate EOF for readdirp
1175752 – [USS]: On a successful lookup, snapd logs are filled with
Warnings “dict OR key (entry-point) is NULL”
1175749 – glusterfs client crashed while migrating the fds
1179658 – Add brick fails if the parent dir of the new brick and an
existing brick is the same and the volume was accessed using libgfapi and smb.
1146524 – glusterfs.spec.in – synch minor diffs with fedora dist-git
glusterfs.spec
1175744 – [USS]: Unable to access .snaps after snapshot restore after
directories were deleted and recreated
1175742 – [USS]: browsing .snaps directory with CIFS fails with
“Invalid argument”
1175739 – [USS]: Non root user who has no access to a directory, from
NFS mount, is able to access the files under .snaps under that directory
1175758 – [USS]: Rebalance process tries to connect to snapd, and if
snapd crashes it might affect the rebalance process
1175765 – [USS]: When snapd has crashed, gluster volume stop/delete
operations fail, leaving the cluster in an inconsistent state
1173528 – Change in volume heal info command output
1166515 – [Tracker] RDMA support in glusterfs
1166505 – mount fails for nfs protocol in rdma volumes
1138385 – [DHT:REBALANCE]: Rebalance failures are seen with error
message “remote operation failed: File exists”
1177418 – entry self-heal in 3.5 and 3.6 are not compatible
1170954 – Fix mutex problems reported by coverity scan
1177899 – nfs: ls shows “Permission denied” with root-squash
1175738 – [USS]: data unavailability for a period of time when USS is
enabled/disabled
1175736 – [USS]: After deactivating a snapshot, trying to access the
remaining activated snapshots from an NFS mount gives an ‘Invalid argument’ error
1175735 – [USS]: snapd process is not killed once the glusterd comes back
1175733 – [USS]: If the snap name is the same as the snap-directory, then
cd to the virtual snap directory fails
1175756 – [USS] : Snapd crashed while trying to access the snapshots
under .snaps directory
1175755 – SNAPSHOT[USS]: gluster volume set for uss does not check any
boundaries
1175732 – [SNAPSHOT]: nouuid is appended for every snapshotted brick,
which causes duplication if the original brick already has nouuid
1175730 – [USS]: creating file/directories under .snaps shows wrong
error message
1175754 – [SNAPSHOT]: if the node goes down before the snap is marked to
be deleted, then the snaps are propagated to other nodes and glusterd
hangs
1159484 – ls -alR can not heal the disperse volume
1138897 – NetBSD port
1175728 – [USS]: All USS-related logs are reported under
/var/log/glusterfs; it makes sense to move them into a subfolder
1170548 – [USS] : don’t display the snapshots which are not activated
1170921 – [SNAPSHOT]: snapshot should be deactivated by default when
created
1175694 – [SNAPSHOT]: snapshotted volume is read-only but it shows rw
attributes in mount
1161885 – Possible file corruption on dispersed volumes
1170959 – EC_MAX_NODES is defined incorrectly
1175645 – [USS]: Typo error in the description for USS under “gluster
volume set help”
1171259 – mount.glusterfs does not understand -n option

Regards,
Kaleb, on behalf of Raghavendra Bhat, who did all the work.
