

by on March 31, 2017

Gluster Monthly Newsletter, March 2017

 

3.10 Release: If you didn’t already see this, we’ve released Gluster 3.10. Further details on the blog.

https://blog.gluster.org/2017/02/announcing-gluster-3-10/

 

Our weekly community meeting has changed: we’ll be meeting every other week instead of weekly, moving the time to 15:00 UTC, and our agenda is at: https://bit.ly/gluster-community-meetings

We hope this means that more people can join us. Kaushal outlines the changes on the mailing list: http://lists.gluster.org/pipermail/gluster-devel/2017-January/051918.html

 

New meetup!

Seattle Storage Meetup has its first meeting, April 13!

 

Upcoming Talks:

Red Hat Summit –

Container-Native Storage for Modern Applications with OpenShift and Red Hat Gluster Storage

Architecting and Performance-Tuning Efficient Gluster Storage Pools

 

Noteworthy threads:

Gluster-users:

Gluster RPC Internals – Lecture #1 – recording – Milind Changire

http://lists.gluster.org/pipermail/gluster-users/2017-March/030136.html

Shyam announces release 3.11: Scope, schedule and feature tracking

http://lists.gluster.org/pipermail/gluster-users/2017-March/030251.html

Vijay announces new demos in Community Meeting

http://lists.gluster.org/pipermail/gluster-users/2017-March/030264.html

Prasanna Kalever posts about Elasticsearch with gluster-block

http://lists.gluster.org/pipermail/gluster-users/2017-March/030302.html

Raghavendra Talur has a proposal to deprecate replace-brick for “distribute only” volumes

http://lists.gluster.org/pipermail/gluster-users/2017-March/030304.html

Deepak Naidu asks about Secured mount in GlusterFS using keys

http://lists.gluster.org/pipermail/gluster-users/2017-March/030312.html

Ramesh Nachimuthu has a question for gluster-users: How do you oVirt?

http://lists.gluster.org/pipermail/gluster-users/2017-March/030366.html  

Joe Julian announces a Seattle Storage meetup

http://lists.gluster.org/pipermail/gluster-users/2017-March/030398.html

 

Gluster-devel:

Shyam posts about Back porting guidelines: Change-ID consistency across branches

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052216.html

Niels de Vos asks about a pluggable interface for erasure coding

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052223.html

Niels de Vos has a proposal on Reducing maintenance burden and moving fuse support to an external project

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052238.html

Nigel Babu starts a conversation on defining a good build

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052245.html

Ben Werthmann announces gogfapi improvements

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052274.html

Saravanakumar Arumugam posts about Gluster Volume as object storage with S3 interface

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052263.html

Vijay posts about Maintainers 2.0 proposal

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052321.html

George Lian posts: nodeid changed due to write-behind option changed online will lead to unexpected umount by kernel

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052372.html

Sriram posts a proposal for Gluster volume snapshot – Plugin architecture proposal

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052385.html

Mark Ferrell posts improvements for Gluster volume snapshot

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052396.html

Sonal Arora has a script to identify ref leaks

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052468.html

 

Gluster-infra:

Nigel Babu posts about RPM build failures post-mortem

http://lists.gluster.org/pipermail/gluster-infra/2017-March/003300.html

Nigel Babu posts about Servers in UTC now (mostly)

http://lists.gluster.org/pipermail/gluster-infra/2017-March/003368.html

 

Gluster Top 5 Contributors in the last 30 days:

Krutika Dhananjay, Michael Scherer, Kaleb S. Keithley, Nigel Babu, Xavier Hernandez

 

Upcoming CFPs:

Open Source Summit Los Angeles – http://events.linuxfoundation.org/events/open-source-summit-north-america/program/cfp  – May 6

 

by on March 29, 2017

Enhancing & Optimizing Gluster for Container Storage

Containers are designed to run applications and to be stateless in nature, which requires containerized applications to store their data externally on persistent storage. Since applications can be launched at any point in time in a container cloud, the persistent storage shares also need to be provisioned dynamically, without administrative intervention. Gluster has been taking big strides toward this form of container storage by introducing new features and deepening integration with other projects in the container ecosystem.

We have introduced two deployment models for addressing persistent storage with Gluster:

 

  • Container Native Storage: Containerized Gluster runs hyperconverged with application containers and builds volumes from disks that are available on the container hosts.
  • Container Ready Storage: Non-containerized Gluster running as a traditional trusted storage pool. Volumes are carved out of this pool and shares are made available to containers.

A lot of our integration focus for persistent storage has been with Kubernetes. Kubernetes provides multiple access modes for persistent storage – Read Write Many (RWM), Read Write Once (RWO) and Read Only Many (ROM). Gluster’s native file-based access has proven to be an apt match for RWM and ROM workloads, while block devices in Gluster volumes are suitable for RWO workloads.

For RWM scenarios with both CNS and CRS, we recommend mapping a Kubernetes persistent volume claim to a Gluster volume. This approach provides isolation, reduces the likelihood of noisy neighbors and enables data services like geo-replication and snapshotting to be applied separately for different persistent volumes.

To enable dynamic provisioning of Gluster volumes, REST-based volume management operations have been introduced via Heketi. Heketi can manage multiple trusted storage pools and has the intelligence to carve out a volume in a trusted storage pool with minimal input from users. The glusterfs provisioner in Kubernetes leverages the capabilities exposed by Heketi and creates volumes on the fly to satisfy persistent volume claims made by users. You can find our work to bring all these projects together in the gluster-kubernetes project on GitHub. With support for Storage Classes and Daemon Sets, we have eased storage setup and dynamic provisioning even further.
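
As a rough sketch of how this fits together (the Heketi endpoint, volume size and names below are hypothetical, and exact flags can differ between Heketi versions), a volume can be requested directly from Heketi with heketi-cli, while in Kubernetes a StorageClass pointing at the same Heketi REST endpoint lets the glusterfs provisioner create volumes automatically for each persistent volume claim:

# export HEKETI_CLI_SERVER=http://heketi.example.com:8080
# heketi-cli volume create --size=10

In Kubernetes, the equivalent request is expressed declaratively: a StorageClass using the kubernetes.io/glusterfs provisioner with its resturl parameter set to the Heketi endpoint above, so that every claim referencing that StorageClass results in a Gluster volume being carved out on demand.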

Along with  dynamic provisioning, a key requirement in container storage environments is the ability to scale and address a large number of persistent volume claims. To get to this level of scale, Gluster has evolved significantly in the recent 3.10 release. Key features that enable scale include:

  • Brick Multiplexing

  Brick multiplexing introduces the capability of aggregating bricks belonging to several volumes into a single glusterfsd process. This vastly improves Gluster’s memory footprint when serving multiple brick directories from the same node. Besides consuming less memory, a multiplexed brick also consumes far fewer network ports than the non-multiplexed model. In hyperconverged CNS deployments, where resources need to be shared between compute and storage, brick multiplexing lets Gluster scale to a larger number of volumes.

  • gluster-block

  gluster-block provides a management framework for exposing block devices, backed by files in a volume, through iSCSI. Going forward, we intend to use this block interface for scalable RWO persistent volumes. We already have an external provisioner that integrates Kubernetes, Heketi and gluster-block to dynamically provision RWO persistent volumes (see the example commands after this list).
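
As a quick illustration of both features (the volume names, hosts and size here are hypothetical, and the gluster-block syntax may vary slightly between releases), multiplexing is enabled cluster-wide with a single volume-set option, and gluster-block creates an iSCSI-exported block device backed by a file in a Gluster volume:

# gluster volume set all cluster.brick-multiplex on
# gluster-block create blockvol/sample-block ha 2 192.168.1.11,192.168.1.12 1GiB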

Along with file and block access, we have envisioned the need for an Amazon S3 compatible object store in containerized environments, since several containerized applications look for RESTful access to persist data. To address that, we recently announced the availability of a gluster-object container that enables access to a Gluster volume through S3 APIs.
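
For example, once the gluster-object container is up, any standard S3 client can be pointed at it. The endpoint and bucket below are hypothetical, and the client is assumed to already have credentials configured:

# aws s3 mb s3://appdata --endpoint-url http://gluster-object.example.com:8080
# aws s3 cp backup.tar.gz s3://appdata/ --endpoint-url http://gluster-object.example.com:8080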

We are excited about these innovations in file, block and object access in Gluster to address container storage needs. Do let us know if our vision matches your container storage requirements, and look out for more details here about our onward journey in the container world!

 

by on March 15, 2017

Brick Multiplexing in Gluster 3.10

One of the salient features in Gluster 3.10 goes by the rather boring – and slightly opaque – name of brick multiplexing.  To understand what it is, and why it’s a good thing, read on.

by on March 1, 2017

FOSDEM 2017 Gluster Talks

This year at FOSDEM, we helped run a Software Defined Storage DevRoom on Sunday, February 5th:

For those who weren’t able to make it, we’ve collected the recordings from the event related to Gluster here.

GlusterD-2.0 – the next generation of GlusterFS management – Kaushal Madappa

https://fosdem.org/2017/schedule/event/glusterd2/

Gluster Features Update – Niels de Vos

https://fosdem.org/2017/schedule/event/cephglustercommunity/

SELinux Support over GlusterFS  – Jiffin Tony Thottan

https://fosdem.org/2017/schedule/event/glusterselinux/  

Hyper-converged, persistent storage for containers with GlusterFS – Jose Rivera, Mohamed Ashiq

https://fosdem.org/2017/schedule/event/glustercontainer/

 

Our overall schedule:

GlusterD-2.0 – The next generation of GlusterFS management – Kaushal Madappa
Introduction to Ceph cloud object storage – Orit Wasserman
Storage overloaded to smoke? Legolize with LizardFS! – Michal Bielicki
Gluster Features Overview – Niels de Vos
Ceph Community Update – Patrick McGarry
Evaluating NVMe drives for accelerating HBase, NVM HBase acceleration – Nicolas Poggi
Ceph USB Storage Gateway – David Disseldorp
Ceph and Storage management with openATTIC – Lenz Grimmer
SELinux Support over GlusterFS – Jiffin Tony Thottan
Deploying Ceph Clusters with Salt – Jan Fajerski
Hyper-converged, persistent storage for containers with GlusterFS – Jose Rivera, Mohamed Ashiq
Ceph weather report – Orit Wasserman

https://fosdem.org/2017/schedule/track/software_defined_storage/  

 

by on February 28, 2017

Further notes on Gluster 3.10 and the direction for Gluster

This release of Gluster ushers in improvements for container storage, hyperconverged storage and scale-out Network Attached Storage (NAS) use cases. These use cases have been the primary focus areas for previous releases over the last 12-18 months and will continue to be the primary focus for the next three planned releases.

One of the things we’re really focused on as a project is persistent storage for containerized microservices. Part of this effort has been working with heketi and gluster-kubernetes to enhance our integration with containers. Continuing in the same vein, 3.10 brings about the following key improvements for container storage:

 

  • Brick multiplexing: Provides the ability to scale the number of exports and volumes per node. This is useful in container storage where there is a need for a large number of shared storage (Read Write Many) volumes. Brick Multiplexing also provides the infrastructure needed to implement Quality of Service in Gluster for a multi-tenant container deployment.
  • gluster-block: Along with 3.10, we are also releasing gluster-block v0.1.  gluster-block provides a very intuitive lifecycle management interface for block devices in a Gluster volume. This release of gluster-block configures block devices to be accessed from initiators through iSCSI.  Work on integrating gluster-block with Heketi for supporting Read Write Once volumes in Kubernetes is in progress.
  • S3 access for Gluster: We are also releasing an Amazon S3 compatible object storage container based on Swift3 and gluster-swift in Gluster’s docker hub. S3 access for Gluster will be useful for application developers who leverage S3 API for storage.

 

Deployment of hyperconverged storage for containers and virtualization is also a focus area for 3.10. gdeploy provides an improved Ansible playbook for deploying hyperconvergence with oVirt, and cockpit-gluster provides a wizard to make deployment using this playbook easy with oVirt. gk-deploy makes it easy to deploy Heketi and Gluster in hyperconverged container deployments.
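
As a sketch of what a container deployment looks like (the topology file name is hypothetical, and the available flags may differ across gk-deploy versions), gk-deploy is driven by a topology file describing the nodes and the raw devices to hand over to Heketi, after which a single invocation brings up the GlusterFS pods and Heketi:

# ./gk-deploy -g topology.json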

 

There have been various improvements for scale-out NAS deployments in 3.10. Some of them include:

 

  • Parallelized readdirp: Improves performance of operations that perform a directory crawl. This, in conjunction with the metadata caching introduced in 3.9, provides a nice performance boost for small-file operations.
  • Statedump support for libgfapi:  Debuggability and supportability improvement for projects like NFS Ganesha and Samba that have been integrated with libgfapi.
  • Estimate for rebalance completion: Helps administrators understand when rebalancing would complete.

 

Needless to say, none of this would have been possible without the support of our contributors and maintainers. Thank you to all those who made it happen! We are excited at this juncture to deliver features that enhance the user experience in our key focus areas of container storage, hyperconvergence and scale-out NAS. We intend to build on this momentum for further improvements in these areas. Stay tuned and get involved as we progress along this journey with further Gluster releases!

 

by on February 27, 2017

Announcing Gluster 3.10

Release notes for Gluster 3.10.0

The Gluster community is pleased to announce the release of Gluster 3.10.

This is a major Gluster release that includes some substantial changes. The features revolve around better support in container environments, scaling to a larger number of bricks per node, and a few usability and performance improvements, among other bug fixes. This release marks the completion of maintenance releases for Gluster 3.7 and 3.9. Moving forward, Gluster versions 3.10 and 3.8 are actively maintained.

The most notable features and changes are documented here as well as in our full release notes on GitHub. A full list of bugs that have been addressed is included on that page as well.

Major changes and features

Brick multiplexing

Multiplexing reduces both port and memory usage. It does not improve performance vs. non-multiplexing except when memory is the limiting factor, though there are other related changes that improve performance overall (e.g. compared to 3.9).

Multiplexing is off by default. It can be enabled with

# gluster volume set all cluster.brick-multiplex on

Support to display op-version information from clients

To get information on which op-versions are supported by the clients, users can invoke the gluster volume status command for clients. Along with information on hostname, port, bytes read, bytes written and number of clients connected per brick, we now also get the op-version on which the respective clients operate. Following is the example usage:

# gluster volume status <VOLNAME|all> clients

Support to get maximum op-version in a heterogeneous cluster

A heterogeneous cluster operates on a common op-version that can be supported across all the nodes in the trusted storage pool. Upon upgrade of the nodes in the cluster, the cluster might support a higher op-version. Users can retrieve the maximum op-version to which the cluster could be bumped up by invoking the gluster volume get command on the newly introduced global option, cluster.max-op-version. The usage is as follows:

# gluster volume get all cluster.max-op-version

Support for rebalance time to completion estimation

Users can now see approximately how much time the rebalance operation will take to complete across all nodes.

The estimated time left for rebalance to complete is displayed as part of the rebalance status. Use the command:

# gluster volume rebalance <VOLNAME> status

Separation of tier as its own service

This change moves the management of the tier daemon into the gluster service framework, thereby improving its stability and manageability.

There is no change to any of the tier commands or user-facing interfaces and operations.
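
For instance, assuming an existing tiered volume, the usual status command keeps working exactly as before (volume name hypothetical):

# gluster volume tier <VOLNAME> status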

 

Statedump support for gfapi based applications

gfapi-based applications can now dump state information for better troubleshooting of issues. A statedump can be triggered in two ways:

  1. by executing the following on one of the Gluster servers:
     # gluster volume statedump <VOLNAME> client <HOST>:<PID>
     • <VOLNAME> should be replaced by the name of the volume
     • <HOST> should be replaced by the hostname of the system running the gfapi application
     • <PID> should be replaced by the PID of the gfapi application
  2. through calling glfs_sysrq(<FS>, GLFS_SYSRQ_STATEDUMP) within the application
     • <FS> should be replaced by a pointer to a glfs_t structure

All statedumps (*.dump.* files) will be located at the usual location, on most distributions this would be /var/run/gluster/.
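
As a concrete, hypothetical example for an NFS-Ganesha server consuming a Gluster volume through gfapi: find the PID of the application on the host where it runs, trigger the dump from one of the Gluster servers, then look for the resulting files:

# pidof ganesha.nfsd
# gluster volume statedump <VOLNAME> client ganesha-host.example.com:<PID>
# ls /var/run/gluster/*.dump.*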

Disabled creation of trash directory by default

From now onwards, the trash directory, namely .trashcan, will not be created by default when a new volume is created. It is only created once the feature is turned on, and the associated restrictions apply only as long as features.trash is set for a particular volume.
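
Following the same volume-set pattern as the other options in these release notes, the feature is enabled per volume with:

# gluster volume set <VOLNAME> features.trash on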

Implemented parallel readdirp with distribute xlator

Currently, directory listing gets slower as the number of bricks/nodes in a volume increases, even though the number of files and directories remains unchanged. With this feature, the performance of directory listing is made largely independent of the number of nodes/bricks in the volume, so scale no longer drastically reduces directory listing performance. (On 2, 5, 10 and 25 brick setups we saw roughly 5%, 100%, 400% and 450% improvement respectively.)

To enable this feature:

# gluster volume set <VOLNAME> performance.readdir-ahead on
# gluster volume set <VOLNAME> performance.parallel-readdir on

To disable this feature:

# gluster volume set <VOLNAME> performance.parallel-readdir off

If there are more than 50 bricks in the volume, it is good to increase the cache size beyond its default value of 10MB:

# gluster volume set <VOLNAME> performance.rda-cache-limit <CACHE SIZE>

md-cache can optionally negatively cache the security.ima xattr

From kernel version 3.X or greater, creating a file results in a removexattr call on the security.ima xattr. This xattr is not set on the file unless the IMA feature is active. With this patch, the removexattr call returns ENODATA if the xattr is not found in the cache.

The end benefit is faster create operations where IMA is not enabled.

To cache this xattr use,

# gluster volume set <VOLNAME> performance.cache-ima-xattrs on

The above option is on by default.

Added support for CPU extensions in disperse computations

To improve disperse computations, a new way of generating dynamic code targeting specific CPU extensions, like SSE and AVX on Intel processors, has been implemented. The available extensions are detected at run time. This can roughly double encoding and decoding speeds (or halve CPU usage).

This change is 100% compatible with the old method. No change is needed if an existing volume is upgraded.

You can control which extensions to use or disable them with the following command:

# gluster volume set <VOLNAME> disperse.cpu-extensions <type>

Valid values are:

  • none: Completely disable dynamic code generation
  • auto: Automatically detect available extensions and use the best one
  • x64: Use dynamic code generation using standard 64-bit instructions
  • sse: Use dynamic code generation using SSE extensions (128 bits)
  • avx: Use dynamic code generation using AVX extensions (256 bits)

The default value is ‘auto’. If a specified value is not detected at run time, it will automatically fall back to the next available option.

Bugs addressed

Bugs addressed since release-3.9 are listed in our full release notes.

by on February 21, 2017

Gluster Monthly Newsletter, January/February 2017

3.10 is at RC1 and is tracking towards a February GA release! Read more about the RC1 release:

http://lists.gluster.org/pipermail/gluster-users/2017-February/030031.html

 

Find us at Vault next month!

http://events.linuxfoundation.org/events/vault  

 

Our weekly community meeting has changed: we’ll be meeting every other week instead of weekly, moving the time to 15:00 UTC, and our agenda is at: https://bit.ly/gluster-community-meetings

We hope this means that more people can join us. Kaushal outlines the changes on the mailing list: http://lists.gluster.org/pipermail/gluster-devel/2017-January/051918.html

 

Previous Gluster talks from January/February, now with more recordings!

FOSDEM:

—-

Software Defined Storage DevRoom:

https://fosdem.org/2017/schedule/track/software_defined_storage/  

 

GlusterD-2.0 – the next generation of GlusterFS management – Kaushal Madappa

https://fosdem.org/2017/schedule/event/glusterd2/

 

SELinux Support over GlusterFS  – Jiffin Tony Thottan

https://fosdem.org/2017/schedule/event/glusterselinux/  

 

Hyper-converged, persistent storage for containers with GlusterFS – Jose Rivera, Mohamed Ashiq

https://fosdem.org/2017/schedule/event/glustercontainer/

 

Upcoming talks:

Vault

——

Challenges in Management Services for Distributed Storage – Mrugesh Karnik

https://vault2017.sched.com/event/9WQo/challenges-in-management-services-for-distributed-storage-mrugesh-karnik-red-hat  

 

Improving Performance of Directory Operations in Gluster – Manoj Pillai

https://vault2017.sched.com/event/9WQl/improving-performance-of-directory-operations-in-gluster-manoj-pillai-red-hat  

 

Persistent Storage for Containers with Gluster in Containers – Michael Adam –

https://vault2017.sched.com/event/9WQn/persistent-storage-for-containers-with-gluster-in-containers-michael-adam-red-hat

 

Provisioning NFSv4 Storage Using NFS-Ganesha, Gluster, and Pacemaker HA – Kaleb S. Keithley

https://vault2017.sched.com/event/9WQi/provisioning-nfsv4-storage-using-nfs-ganesha-gluster-and-pacemaker-ha-kaleb-s-keithley-red-hat-gluster-storage

 

Next Generation File Replication System In GlusterFS – Rafi Kavungal Chundattu Parambil, Red Hat

https://vault2017.sched.com/event/9WQr/next-generation-file-replication-system-in-glusterfs-rafi-kavungal-chundattu-parambil-red-hat

 

Noteworthy threads:

gluster-users:

Gustave Dahl asks for guidance on converting to shards:

http://lists.gluster.org/pipermail/gluster-users/2017-January/029745.html

Ziemowit Pierzycki wants to know about high-availability with KVM

http://lists.gluster.org/pipermail/gluster-users/2017-January/029772.html

Alessandro Briosi asks about gluster and multipath http://lists.gluster.org/pipermail/gluster-users/2017-January/029812.html

Kaushal announces Gluster D2 v4.0dev-5 http://lists.gluster.org/pipermail/gluster-users/2017-February/029849.html

Niels de Vos announces 3.8.9 http://lists.gluster.org/pipermail/gluster-users/2017-February/030011.html  

Olivier Lambert asks about removing an artificial limitation of disperse volume

http://lists.gluster.org/pipermail/gluster-users/2017-February/029887.html

Daniele Antolini has questions about heterogeneous bricks

http://lists.gluster.org/pipermail/gluster-users/2017-February/030016.html

 

gluster-devel:

 

Jeff Darcy provides an update on multiplexing status

http://lists.gluster.org/pipermail/gluster-devel/2017-January/051971.html

Dan Lambright requests a new maintainer for Gluster tiering

http://lists.gluster.org/pipermail/gluster-devel/2017-January/051970.html

Xavier Hernandez asks about creating new options for multiple gluster versions

http://lists.gluster.org/pipermail/gluster-devel/2017-January/052000.html

Avra Sengupta posts a Leader Election Xlator Design Document http://lists.gluster.org/pipermail/gluster-devel/2017-February/052015.html

Jeff Darcy posts Acknowledgements for brick multiplexing

http://lists.gluster.org/pipermail/gluster-devel/2017-February/052049.html

Menaka Mohan provides an Outreachy intern update http://lists.gluster.org/pipermail/gluster-devel/2017-February/052067.html

Jeff Darcy starts a discussion around logging in a multi-brick daemon http://lists.gluster.org/pipermail/gluster-devel/2017-February/052086.html

Xavier Hernandez requests reviews on a number of patches

http://lists.gluster.org/pipermail/gluster-devel/2017-February/052093.html

Niels de Vos asks whether glusterfs-3.10 should become the new default with its first release

http://lists.gluster.org/pipermail/gluster-devel/2017-February/052121.html

Michael Scherer asks about C99 requirement in Gluster

http://lists.gluster.org/pipermail/gluster-devel/2017-February/052125.html

 

gluster-infra:

 

From gluster-users, Michael Scherer corrects an erroneous mass unsubscription on gluster-users list http://lists.gluster.org/pipermail/gluster-users/2017-February/029948.html

From gluster-devel, Nigel Babu notes an upcoming outage in March:

http://lists.gluster.org/pipermail/gluster-devel/2017-February/052126.html  

Nigel Babu posts 2017 Infrastructure Plans

http://lists.gluster.org/pipermail/gluster-devel/2017-February/052126.html  

Shyam starts a discussion (and bug) around changing from bugzilla to github:

http://lists.gluster.org/pipermail/gluster-devel/2017-February/052126.html  

 

Gluster Top 5 Contributors in the last 30 days:

Jeff Darcy, Poornima Gurusiddaiah, Atin Mukherjee, Kaleb S. Keithley, Xavier Hernandez

 

 

Upcoming CFPs:

Open Source Summit Japan –  http://events.linuxfoundation.org/events/open-source-summit-japan  – March 4

LinuxCon Beijing – http://events.linuxfoundation.org/events/linuxcon-containercon-cloudopen-china/program/cfp  – March 18

Open Source Summit Los Angeles – http://events.linuxfoundation.org/events/open-source-summit-north-america/program/cfp  – May 6

 

by on February 16, 2017

GlusterFS 3.8.9 is another Long-Term-Maintenance update

We are proud to announce the General Availability of the next update to the Long-Term-Stable releases for GlusterFS 3.8. Packages are being prepared and are expected to hit the repositories of distributions and the Gluster download server over the next few days. Details on which versions are part of which distributions can be found on the Community Packages page in the documentation. The release notes are part of the git repository and the downloadable tarball, and are included in this post for easy access.

Release notes for Gluster 3.8.9

This is a bugfix release. The Release Notes for 3.8.0, 3.8.1, 3.8.2, 3.8.3, 3.8.4, 3.8.5, 3.8.6, 3.8.7 and 3.8.8 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.8 stable release.

Bugs addressed

A total of 16 patches have been merged, addressing 14 bugs:
  • #1410852: glusterfs-server should depend on firewalld-filesystem
  • #1411899: DHT doesn't evenly balance files on FreeBSD with ZFS
  • #1412119: ganesha service crashed on all nodes of ganesha cluster on disperse volume when doing lookup while copying files remotely using scp
  • #1412888: Extra lookup/fstats are sent over the network when a brick is down.
  • #1412913: [ganesha + EC]posix compliance rename tests failed on EC volume with nfs-ganesha mount.
  • #1412915: Spurious split-brain error messages are seen in rebalance logs
  • #1412916: [ganesha+ec]: Contents of original file are not seen when hardlink is created
  • #1412922: ls and move hung on disperse volume
  • #1412941: Regression caused by enabling client-io-threads by default
  • #1414655: Upcall: Possible memleak if inode_ctx_set fails
  • #1415053: geo-rep session faulty with ChangelogException "No such file or directory"
  • #1415132: Improve output of "gluster volume status detail"
  • #1417802: debug/trace: Print iatts of individual entries in readdirp callback for better debugging experience
  • #1420184: [Remove-brick] Hardlink migration fails with "lookup failed (No such file or directory)" error messages in rebalance logs
by on January 19, 2017

Gluster Community Newsletter, December 2016

Important happenings in Gluster:

Come see us at DevConf and FOSDEM!

Gluster has a big presence at both DevConf.CZ (https://devconf.cz/schedule.html) and FOSDEM! We’ll be exhibiting at FOSDEM with a Gluster stand, and we have a Software Defined Storage DevRoom. Our schedule for FOSDEM: https://fosdem.org/2017/schedule/track/software_defined_storage/

 

Our weekly community meeting has changed: we’ll be meeting every other week instead of weekly, moving the time to 15:00 UTC, and our agenda is at: https://bit.ly/gluster-community-meetings

We hope this means that more people can join us. Kaushal outlines the changes on the mailing list: http://lists.gluster.org/pipermail/gluster-devel/2017-January/051918.html  

Our annual community survey has closed, thanks to everyone who participated!

We’ll be posting the results as part of the official January newsletter, along with recordings of the talks at DevConf and FOSDEM.

 

Upcoming talks:

DevConf: Hyper-converged, persistent storage for containers with GlusterFS

FOSDEM: SELinux Support over GlusterFS (https://fosdem.org/2017/schedule/event/glusterselinux/)

Hyper-converged, persistent storage for containers with GlusterFS (https://fosdem.org/2017/schedule/event/glustercontainer/)

 

Noteworthy threads:

gluster-users

A lovely holiday gift from Lindsay Mathieson about stress testing Gluster http://lists.gluster.org/pipermail/gluster-users/2017-January/029569.html

Vladimir asks about GlusterFS best practices

http://www.gluster.org/pipermail/gluster-users/2016-December/029366.html  

Aravinda VK shares glustercli-python project updates

http://www.gluster.org/pipermail/gluster-users/2016-December/029376.html

Alexandr Porunov asks how to properly set ACLs in GlusterFS  http://www.gluster.org/pipermail/gluster-users/2016-December/029388.html

Atin Mukherjee responds to an issue of replica brick not working

http://www.gluster.org/pipermail/gluster-users/2016-December/029391.html

Shyam announces 3.10: Feature list frozen http://www.gluster.org/pipermail/gluster-users/2016-December/029416.html

Yonex has questions on file operation failure on simple distributed volume http://www.gluster.org/pipermail/gluster-users/2016-December/029424.html

Shyam has our 3.10 Features Review  http://www.gluster.org/pipermail/gluster-users/2016-December/029478.html

 

gluster-devel

Kaushal comments that etherpads and archiving will be going away as of Feb 2017 http://www.gluster.org/pipermail/gluster-devel/2016-December/051639.html

Hari Gowtham has a 3.10 feature proposal: Volume expansion on tiered volumes

http://www.gluster.org/pipermail/gluster-devel/2016-December/051647.html   

Samikshan Bairagya has a feature proposal for 3.10 release: Support to retrieve maximum supported op-version

http://www.gluster.org/pipermail/gluster-devel/2016-December/051650.html  

Prasanna Kalever has a 3.10 feature proposal: Gluster Block Storage CLI Integration

http://www.gluster.org/pipermail/gluster-devel/2016-December/051652.html

Kaleb Keithley has a 3.10 feature proposal, switch to storhaug for ganesha and samba HA setup

http://www.gluster.org/pipermail/gluster-devel/2016-December/051653.html

Poornima Gurusiddaiah has a 3.10 feature proposal: Parallel readdirp http://www.gluster.org/pipermail/gluster-devel/2016-December/051655.html

 

gluster-infra

Michael Scherer announces that salt is no longer used in infra: http://lists.gluster.org/pipermail/gluster-infra/2016-December/003039.html   

Gluster Top 5 Contributors in December: 

Niels de Vos, Mohammed Rafi KC,  Kaleb Keithley, Soumya Koduri, Sakshi Bansal

 

Upcoming CFPs:

Open Source Summit Japan (Mar 4)

http://events.linuxfoundation.org/events/open-source-summit-japan/program/cfp

LinuxCon + ContainerCon + CloudOpen China (Mar 18)

http://events.linuxfoundation.org/events/linuxcon-containercon-cloudopen-china/program/cfp  

Open Source Summit North America (LinuxCon + ContainerCon + CloudOpen + Community Leadership Conference) (May 6) http://events.linuxfoundation.org/events/open-source-summit-north-america/program/cfp

 

by on January 15, 2017

Another Gluster 3.8 Long-Term-Maintenance update with the 3.8.8 release

The Gluster team has been busy over the end-of-year holidays, and this latest update to the 3.8 Long-Term-Maintenance release fixes quite a number of bugs. Packages have been built for many different distributions and are available from the download server. The release notes for 3.8.8 are included below for ease of reference. All users of the 3.8 version are recommended to update to this current release.

Release notes for Gluster 3.8.8

This is a bugfix release. The Release Notes for 3.8.0, 3.8.1, 3.8.2, 3.8.3, 3.8.4, 3.8.5, 3.8.6 and 3.8.7 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.8 stable release.

Bugs addressed

A total of 38 patches have been merged, addressing 35 bugs:
  • #1375849: [RFE] enable sharding with virt profile - /var/lib/glusterd/groups/virt
  • #1378384: log level set in glfs_set_logging() does not work
  • #1378547: Asynchronous Unsplit-brain still causes Input/Output Error on system calls
  • #1389781: build: python on Debian-based dists use .../lib/python2.7/dist-packages instead of .../site-packages
  • #1394635: errors appear in brick and nfs logs and getting stale files on NFS clients
  • #1395510: Seeing error messages [snapview-client.c:283:gf_svc_lookup_cbk] and [dht-helper.c:1666:dht_inode_ctx_time_update] (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x5d75c)
  • #1399423: GlusterFS client crashes during remove-brick operation
  • #1399432: A hard link is lost during rebalance+lookup
  • #1399468: Wrong value in Last Synced column during Hybrid Crawl
  • #1399915: [SAMBA-CIFS] : IO hungs in cifs mount while graph switch on & off
  • #1401029: OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
  • #1401534: fuse mount point not accessible
  • #1402697: glusterfsd crashed while taking snapshot using scheduler
  • #1402728: Worker restarts on log-rsync-performance config update
  • #1403109: Crash of glusterd when using long username with geo-replication
  • #1404105: Incorrect incrementation of volinfo refcnt during volume start
  • #1404583: Upcall: Possible use after free when log level set to TRACE
  • #1405004: [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
  • #1405130: `gluster volume heal split-brain' does not heal if data/metadata/entry self-heal options are turned off
  • #1405450: tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
  • #1405577: [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
  • #1405886: Fix potential leaks in INODELK cbk in protocol/client
  • #1405890: Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
  • #1405951: NFS-Ganesha:Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
  • #1406740: Fix spurious failure in tests/bugs/replicate/bug-1402730.t
  • #1408414: Remove-brick rebalance failed while rm -rf is in progress
  • #1408772: [Arbiter] After Killing a brick writes drastically slow down
  • #1408786: with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
  • #1410073: Fix failure of split-brain-favorite-child-policy.t in CentOS7
  • #1410369: Dict_t leak in dht_migration_complete_check_task and dht_rebalance_inprogress_task
  • #1410699: [geo-rep]: Config commands fail when the status is 'Created'
  • #1410708: glusterd/geo-rep: geo-rep config command leaks fd
  • #1410764: Remove-brick rebalance failed while rm -rf is in progress
  • #1411011: atime becomes zero when truncating file via ganesha (or gluster-NFS)
  • #1411613: Fix the place where graph switch event is logged