all posts tagged gluster


June 4, 2017

Gluster Monthly Newsletter, May 2017

Important happenings for Gluster for May:

3.11 Release!

Our 3.11 release is officially out!

https://blog.gluster.org/2017/05/announcing-gluster-3-11/

Note that this is a short-term maintenance (STM) release.

3.12 is underway with a feature freeze date of July 17, 2017.

Gluster Summit 2017!

Gluster Summit 2017 will be held in Prague, Czech Republic on October 27 and 28th. We’ll be opening a call for papers for this instead of having an application process to attend.

https://www.gluster.org/events/summit2017  

Our weekly community meeting has changed: we’ll be meeting every other week instead of weekly, moving the time to 15:00 UTC, and our agenda is at: https://bit.ly/gluster-community-meetings

We hope this means that more people can join us. Kaushal outlines the changes on the mailing list: http://lists.gluster.org/pipermail/gluster-devel/2017-January/051918.html

 

From Red Hat Summit:

Container-Native Storage for Modern Applications with OpenShift and Red Hat Gluster Storage

http://bit.ly/2qpLVP0

Architecting and Performance-Tuning Efficient Gluster Storage Pools

http://bit.ly/2qpMgkK

 

Noteworthy threads from the mailing lists:

Announcing GlusterFS release 3.11.0 (Short Term Maintenance) – Shyam –

http://lists.gluster.org/pipermail/gluster-users/2017-May/031298.html  

GlusterFS and Kafka – Christopher Schmidt –

http://lists.gluster.org/pipermail/gluster-users/2017-May/031185.html

gluster-block v0.2 is alive! – Prasanna Kalever – http://lists.gluster.org/pipermail/gluster-users/2017-May/030933.html

GlusterFS removal from Openstack Cinder – Joe Julian

http://lists.gluster.org/pipermail/gluster-users/2017-May/031223.html

Release 3.12 and 4.0: Thoughts on scope – Shyam  –

http://lists.gluster.org/pipermail/gluster-devel/2017-May/052811.html

Reviews older than 90 days  – Amar Tumballi –

http://lists.gluster.org/pipermail/gluster-devel/2017-May/052844.html  

[Proposal]: Changes to how we test and vote each patch  – Amar Tumballi –  

http://lists.gluster.org/pipermail/gluster-devel/2017-May/052868.html

Volgen support for loading trace and io-stats translators at specific points in the graph – Krutika Dhananjay –

http://lists.gluster.org/pipermail/gluster-devel/2017-May/052881.html  

Backport for “Add back socket for polling of events immediately…” – Shyam

http://lists.gluster.org/pipermail/gluster-devel/2017-May/052887.html  

[Proposal]: New branch (earlier: Changes to how we test and vote each patch) –  Amar Tumballi –   

http://lists.gluster.org/pipermail/gluster-devel/2017-May/052933.html  

 

Gluster Top 5 Contributors in the last 30 days:

Krutika Dhananjay, Michael Scherer, Kaleb S. Keithley, Nigel Babu, Xavier Hernandez

 

Upcoming CFPs:

Open Source Summit Europe – http://events.linuxfoundation.org/events/open-source-summit-europe/program/cfp – July 8

 

May 31, 2017

Announcing Gluster 3.11

Release notes for Gluster 3.11

The Gluster community is pleased to announce the release of Gluster 3.11.

This is a short-term maintenance (STM) Gluster release that includes some substantial changes. The features revolve around improvements to small-file workloads, a Halo replication enhancement from Facebook, and some usability and performance improvements, among other bug fixes.

The most notable features and changes are documented in the full release notes.

Moving forward, Gluster versions 3.11, 3.10 and 3.8 are actively maintained.

With the release of 3.12 in the future, active maintenance of this (3.11) STM release will be terminated.

Major changes and features

  • Switched to storhaug for ganesha and samba high availability
  • Added SELinux support for Gluster Volumes
  • Several memory leaks are fixed in gfapi during graph switches
  • get-state CLI is enhanced to provide client and brick capacity related information
  • Ability to serve negative lookups from cache has been added
  • New xlator to help developers detect resource leaks has been added
  • Feature for metadata-caching/small file performance is production ready
  • “Parallel Readdir” feature introduced in 3.10.0 is production ready (a short example of enabling both of these caching features follows this list)
  • Object versioning is enabled only if bitrot is enabled
  • Distribute layer provides more robust transactions during directory namespace operations
  • gfapi extended readdirplus API has been added
  • Improved adoption of standard refcounting functions across the code
  • Performance improvements to rebalance have been made
  • Halo Replication feature in AFR has been introduced
  • FALLOCATE support with EC
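
The two production-ready performance features noted above are enabled per volume. A minimal, hedged sketch follows (it assumes the group metadata-cache profile is available in your build; the parallel-readdir options are the same ones documented in the 3.10 release notes further down this page):

# gluster volume set <VOLNAME> group metadata-cache
# gluster volume set <VOLNAME> performance.readdir-ahead on
# gluster volume set <VOLNAME> performance.parallel-readdir on
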
May 1, 2017

Gluster Monthly Newsletter, April 2017

 

Release 3.11 has been branched and tagged! More details on the mailing list.

http://lists.gluster.org/pipermail/gluster-users/2017-April/030764.html

 

Our weekly community meeting has changed: we’ll be meeting every other week instead of weekly, moving the time to 15:00 UTC, and our agenda is at: https://bit.ly/gluster-community-meetings

We hope this means that more people can join us. Kaushal outlines the changes on the mailing list: http://lists.gluster.org/pipermail/gluster-devel/2017-January/051918.html

 

New meetup! We’re delighted to welcome the first Seattle Storage meetup, run by our very own Joe Julian.

https://www.meetup.com/Seattle-Storage-Meetup/

 

Coming to Red Hat Summit?

Come find us at the Gluster Community Booth in our Community Central area!

 

Upcoming Talks:

Red Hat Summit:

Container-Native Storage for Modern Applications with OpenShift and Red Hat Gluster Storage

http://bit.ly/2qpLVP0

Architecting and Performance-Tuning Efficient Gluster Storage Pools

http://bit.ly/2qpMgkK

 

Noteworthy threads:

Gluster-users:

Announcing release 3.11 : Scope, schedule and feature tracking

http://lists.gluster.org/pipermail/gluster-users/2017-April/030561.html

Usability Initiative for Gluster: Documentation

http://lists.gluster.org/pipermail/gluster-users/2017-April/030567.html

How do you oVirt? Here the answers!

http://lists.gluster.org/pipermail/gluster-users/2017-April/030592.html

Revisiting Quota functionality in GlusterFS

http://lists.gluster.org/pipermail/gluster-users/2017-April/030676.html

 

Gluster-devel:

Back porting guidelines: Change-ID consistency across branches

http://lists.gluster.org/pipermail/gluster-devel/2017-April/052495.html

GlusterFS+NFS-Ganesha longevity cluster

http://lists.gluster.org/pipermail/gluster-devel/2017-April/052503.html

GFID2 – Proposal to add extra byte to existing GFID

http://lists.gluster.org/pipermail/gluster-devel/2017-April/052520.html

[Gluster-Maintainers] Maintainers 2.0 Proposal

http://lists.gluster.org/pipermail/gluster-devel/2017-April/052551.html

Proposal for an extended READDIRPLUS operation via gfAPI

http://lists.gluster.org/pipermail/gluster-devel/2017-April/052596.html

 

Gluster-infra:

Jenkins Upgrade

http://lists.gluster.org/pipermail/gluster-infra/2017-April/003495.html

 

Gluster Top 5 Contributors in the last 30 days:

Krutika Dhananjay, Michael Scherer, Kaleb S. Keithley, Nigel Babu, Xavier Hernandez

 

Upcoming CFPs:

Open Source Summit North America – http://events.linuxfoundation.org/events/open-source-summit-north-america/program/cfp  – May 6

Open Source Summit Europe – http://events.linuxfoundation.org/events/open-source-summit-europe/program/cfp – July 8

March 31, 2017

Gluster Monthly Newsletter, March 2017

 

3.10 Release: If you didn’t already see this, we’ve released Gluster 3.10. Further details on the blog.

https://blog.gluster.org/2017/02/announcing-gluster-3-10/

 

Our weekly community meeting has changed: we’ll be meeting every other week instead of weekly, moving the time to 15:00 UTC, and our agenda is at: https://bit.ly/gluster-community-meetings

We hope this means that more people can join us. Kaushal outlines the changes on the mailing list: http://lists.gluster.org/pipermail/gluster-devel/2017-January/051918.html

 

New meetup!

Seattle Storage Meetup has its first meeting, April 13!

 

Upcoming Talks:

Red Hat Summit –

Container-Native Storage for Modern Applications with OpenShift and Red Hat Gluster Storage

Architecting and Performance-Tuning Efficient Gluster Storage Pools

 

Noteworthy threads:

Gluster-users:

Gluster RPC Internals – Lecture #1 – recording – Milind Changire

http://lists.gluster.org/pipermail/gluster-users/2017-March/030136.html

Shyam announces release 3.11 : Scope, schedule and feature tracking

http://lists.gluster.org/pipermail/gluster-users/2017-March/030251.html

Vijay announces new demos in Community Meeting

http://lists.gluster.org/pipermail/gluster-users/2017-March/030264.html

Prasanna Kalever posts about Elasticsearch with gluster-block

http://lists.gluster.org/pipermail/gluster-users/2017-March/030302.html

Raghavendra Talur has a proposal to deprecate replace-brick for “distribute only” volumes

http://lists.gluster.org/pipermail/gluster-users/2017-March/030304.html

Deepak Naidu asks about Secured mount in GlusterFS using keys

http://lists.gluster.org/pipermail/gluster-users/2017-March/030312.html

Ramesh Nachimuthu has a question for gluster-users: How do you oVirt?

http://lists.gluster.org/pipermail/gluster-users/2017-March/030366.html  

Joe Julian announces a Seattle Storage meetup

http://lists.gluster.org/pipermail/gluster-users/2017-March/030398.html

 

Gluster-devel:

Shyam posts about Back porting guidelines: Change-ID consistency across branches

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052216.html

Niels de Vos asks about a pluggable interface for erasure coding?

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052223.html

Niels de Vos has a proposal on Reducing maintenance burden and moving fuse support to an external project

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052238.html

Nigel Babu starts a conversation on defining a good build

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052245.html

Ben Werthmann announces gogfapi improvements

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052274.html

Saravanakumar Arumugam posts about Gluster Volume as object storage with S3 interface

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052263.html

Vijay posts about Maintainers 2.0 proposal

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052321.html

George Lian posts: nodeid changed due to write-behind option changed online will lead to unexpected umount by kernel

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052372.html

Sriram posts a proposal for Gluster volume snapshot – Plugin architecture proposal

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052385.html

Mark Ferrell posts improvements for Gluster volume snapshot

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052396.html

Sonal Arora has a script to identify ref leaks

http://lists.gluster.org/pipermail/gluster-devel/2017-March/052468.html

 

Gluster-infra:

Nigel Babu posts about RPM build failures post-mortem

http://lists.gluster.org/pipermail/gluster-infra/2017-March/003300.html

Nigel Babu posts about Servers in UTC now (mostly)

http://lists.gluster.org/pipermail/gluster-infra/2017-March/003368.html

 

Gluster Top 5 Contributors in the last 30 days:

Krutika Dhananjay, Michael Scherer, Kaleb S. Keithley, Nigel Babu, Xavier Hernandez

 

Upcoming CFPs:

Open Source Summit Los Angeles – http://events.linuxfoundation.org/events/open-source-summit-north-america/program/cfp – May 6

 

March 29, 2017

Enhancing & Optimizing Gluster for Container Storage

Containers are designed to run applications and be stateless in nature. This requires containerized applications to store data externally on persistent storage. Since applications can be launched at any point in time in a container cloud, the persistent storage shares also need to be dynamically provisioned without any administrative intervention. Gluster has been taking big strides toward this form of container storage by introducing new features and deepening integration with other projects in the container ecosystem.

We have introduced two deployment models for addressing persistent storage with Gluster:

 

  • Container Native Storage: Containerized Gluster runs hyperconverged with application containers and builds volumes from disks that are available on the container hosts.
  • Container Ready Storage: Non-containerized Gluster running as a traditional trusted storage pool. Volumes are carved out of this pool and shares are made available to containers.

A lot of our integration focus for persistent storage has been with Kubernetes. Kubernetes provides multiple access modes for persistent storage – Read Write Many (RWM), Read Write Once (RWO) and Read Only Many (ROM). Gluster’s native file-based access has been found to be an apt match for RWM and ROM workloads. Block devices in Gluster volumes are suitable for RWO workloads.

For RWM scenarios with both CNS and CRS, we recommend mapping a Kubernetes persistent volume claim to a Gluster volume. This approach provides isolation, reduces the likelihood of noisy neighbors and enables data services like geo-replication and snapshotting to be applied separately to different persistent volumes.

To enable dynamic provisioning of Gluster volumes, REST-based volume management operations have been introduced via Heketi. Heketi can manage multiple trusted storage pools and has the intelligence to carve out a volume in a trusted storage pool with minimal input from users. The provisioner for glusterfs in Kubernetes leverages the capabilities exposed by Heketi and creates volumes on the fly to address persistent volume claims made by users. You can find our work to bring together all these projects in the gluster-kubernetes project on GitHub. With support for Storage Classes and Daemon Sets, we have eased storage setup and dynamic provisioning even further.
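
To make this flow concrete, here is a hedged sketch of the Kubernetes side of dynamic provisioning (the Heketi URL, object names, sizes and API versions below are placeholders that will vary with your deployment): a StorageClass points the in-tree kubernetes.io/glusterfs provisioner at Heketi, and persistent volume claims then simply reference that class.

# kubectl create -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-dynamic
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8080"
EOF
# kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  storageClassName: glusterfs-dynamic
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
EOF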

Along with  dynamic provisioning, a key requirement in container storage environments is the ability to scale and address a large number of persistent volume claims. To get to this level of scale, Gluster has evolved significantly in the recent 3.10 release. Key features that enable scale include:

  • Brick Multiplexing

  Brick multiplexing introduces the capability of aggregating bricks belonging to several volumes into a single glusterfsd process. This vastly reduces the memory footprint of Gluster when serving multiple brick directories from the same node. In addition to consuming less memory, a multiplexed brick also consumes far fewer network ports than the non-multiplexed model. In hyperconverged CNS deployments, where resources need to be shared between compute and storage, brick multiplexing optimizes Gluster to scale to a larger number of volumes.

  • gluster-block

  gluster-block provides a management framework for exposing block devices, backed by files in a Gluster volume, through iSCSI. Going forward, we intend to use this block interface for scalable RWO persistent volumes. We already have an external provisioner that integrates Kubernetes, Heketi and gluster-block to dynamically provision RWO persistent volumes (a short usage sketch follows below).
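
As a hedged illustration of the gluster-block CLI (the volume name, block name, hosts and size below are placeholders, and the exact syntax may differ between gluster-block releases), creating and inspecting an iSCSI-exported block device backed by a Gluster volume looks roughly like:

# gluster-block create blockvol/sample-block ha 2 192.168.1.11,192.168.1.12 1GiB
# gluster-block list blockvol
# gluster-block info blockvol/sample-block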

Along with file and block access, we have envisioned the need for an Amazon S3 compatible object store in containerized environments. Several containerized applications look for RESTful access to persist data. To address this, we recently announced the availability of a gluster-object container that enables accessing a gluster volume through S3 APIs.
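
As a purely illustrative sketch (the endpoint and bucket name below are hypothetical, and the exact options depend on how the gluster-object container is configured), such a volume can then be exercised with any S3-compatible client such as s3cmd; credentials would normally come from the client's configuration (for s3cmd, ~/.s3cfg):

# s3cmd --host=gluster-object.example.com:8080 --host-bucket=gluster-object.example.com:8080 --no-ssl mb s3://demo-bucket
# s3cmd --host=gluster-object.example.com:8080 --host-bucket=gluster-object.example.com:8080 --no-ssl put backup.tar s3://demo-bucket/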

We are excited about these innovations in file, block and object accesses of Gluster to address container storage needs. Do let us know if our vision matches your container storage requirements and look forward to more details about our onward journey in the container world here!

 

March 15, 2017

Brick Multiplexing in Gluster 3.10

One of the salient features in Gluster 3.10 goes by the rather boring – and slightly opaque – name of brick multiplexing.  To understand what it is, and why it’s a good thing, read on.

March 1, 2017

FOSDEM 2017 Gluster Talks

This year at FOSDEM, we helped run a Software Defined Storage DevRoom on Sunday, February 5th.

For those who weren’t able to make it, we’ve collected the recordings from the event related to Gluster here.

GlusterD-2.0 – the next generation of GlusterFS management – Kaushal Madappa

https://fosdem.org/2017/schedule/event/glusterd2/

Gluster Features Update – Niels de Vos

https://fosdem.org/2017/schedule/event/cephglustercommunity/

SELinux Support over GlusterFS  – Jiffin Tony Thottan

https://fosdem.org/2017/schedule/event/glusterselinux/  

Hyper-converged, persistent storage for containers with GlusterFS – Jose Rivera, Mohamed Ashiq

https://fosdem.org/2017/schedule/event/glustercontainer/

 

Our overall schedule:

GlusterD-2.0 – The next generation of GlusterFS management – Kaushal Madappa
Introduction to Ceph cloud object storage – Orit Wasserman
Storage overloaded to smoke? Legolize with LizardFS! – Michal Bielicki
Gluster Features Overview – Niels de Vos
Ceph Community Update – Patrick McGarry
Evaluating NVMe drives for accelerating HBase (NVM HBase acceleration) – Nicolas Poggi
Ceph USB Storage Gateway – David Disseldorp
Ceph and Storage management with openATTIC – Lenz Grimmer
SELinux Support over GlusterFS – Jiffin Tony Thottan
Deploying Ceph Clusters with Salt – Jan Fajerski
Hyper-converged, persistent storage for containers with GlusterFS – Jose Rivera, Mohamed Ashiq
Ceph weather report – Orit Wasserman

https://fosdem.org/2017/schedule/track/software_defined_storage/  

 

February 28, 2017

Further notes on Gluster 3.10 and the direction for Gluster

This release of Gluster ushers in improvements for container storage, hyperconverged storage and scale-out Network Attached Storage (NAS) use cases. These use cases have been the primary focus areas for previous releases over the last 12-18 months and will continue to be the primary focus for the next three planned releases.

One of the things we’re really focused on as a project is persistent storage for containerized microservices. Part of this effort has been working with heketi and gluster-kubernetes to enhance our integration with containers. Continuing in the same vein, 3.10 brings about the following key improvements for container storage:

 

  • Brick multiplexing: Provides the ability to scale the number of exports and volumes per node. This is useful in container storage where there is a need for a large number of shared storage (Read Write Many) volumes. Brick Multiplexing also provides the infrastructure needed to implement Quality of Service in Gluster for a multi-tenant container deployment.
  • gluster-block: Along with 3.10, we are also releasing gluster-block v0.1.  gluster-block provides a very intuitive lifecycle management interface for block devices in a Gluster volume. This release of gluster-block configures block devices to be accessed from initiators through iSCSI.  Work on integrating gluster-block with Heketi for supporting Read Write Once volumes in Kubernetes is in progress.
  • S3 access for Gluster: We are also releasing an Amazon S3 compatible object storage container based on Swift3 and gluster-swift in Gluster’s docker hub. S3 access for Gluster will be useful for application developers who leverage S3 API for storage.

 

Deployment of hyperconverged storage for containers and virtualization is also a focus area for 3.10. gdeploy provides an improved Ansible playbook for deploying hyperconvergence with oVirt, and cockpit-gluster provides a wizard that makes deployment with this playbook from oVirt easy. gk-deploy makes it easy to deploy Heketi and Gluster in hyperconverged container deployments (a brief sketch follows).
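
As a rough, hedged sketch (flag names and the topology file layout depend on the version of the gluster-kubernetes scripts, so treat this as an assumption to verify against the project's README), a hyperconverged deployment boils down to describing the nodes and their raw block devices in a topology file and running the deploy script against the cluster:

# ./gk-deploy -g topology.json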

 

There have been various improvements for scale-out NAS deployments in 3.10. Some of them include:

 

  • Parallelized readdirp: Improves the performance of operations that perform a directory crawl. This, in conjunction with the metadata caching introduced in 3.9, provides a nice performance boost for small-file operations.
  • Statedump support for libgfapi: A debuggability and supportability improvement for projects like NFS-Ganesha and Samba that are integrated with libgfapi.
  • Estimate for rebalance completion: Helps administrators understand when a rebalance operation will complete.

 

Needless to say, none of this would have been possible without the support of our contributors and maintainers. Thank you to all those who made it happen! We are excited at this juncture to deliver features that enhance the user experience in our key focus areas of container storage, hyperconvergence & scale-out NAS. We intend to build on this momentum for further improvements in these focus areas. Stay tuned and get involved as we progress along this journey with further Gluster releases!

 

February 27, 2017

Announcing Gluster 3.10

Release notes for Gluster 3.10.0

The Gluster community is pleased to announce the release of Gluster 3.10.

This is a major Gluster release that includes some substantial changes. The features revolve around better support in container environments, scaling to a larger number of bricks per node, and a few usability and performance improvements, among other bug fixes. This release marks the completion of maintenance for Gluster 3.7 and 3.9. Moving forward, Gluster versions 3.10 and 3.8 are actively maintained.

The most notable features and changes are documented here as well as in our full release notes on GitHub. A full list of bugs that have been addressed is included on that page as well.

Major changes and features

Brick multiplexing

Multiplexing reduces both port and memory usage. It does not improve performance vs. non-multiplexing except when memory is the limiting factor, though there are other related changes that improve performance overall (e.g. compared to 3.9).

Multiplexing is off by default. It can be enabled with

# gluster volume set all cluster.brick-multiplex on
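
To confirm the current cluster-wide setting afterwards (a hedged example; this uses the same volume get all form shown below for cluster.max-op-version):

# gluster volume get all cluster.brick-multiplex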

Support to display op-version information from clients

To get information on which op-versions are supported by the clients, users can invoke the gluster volume status command for clients. Along with information on hostname, port, bytes read, bytes written and the number of clients connected per brick, we now also get the op-version on which the respective clients operate. The following is an example usage:

# gluster volume status <VOLNAME|all> clients

Support to get maximum op-version in a heterogeneous cluster

A heterogeneous cluster operates on a common op-version that can be supported across all the nodes in the trusted storage pool. Upon upgrade of the nodes in the cluster, the cluster might support a higher op-version. Users can retrieve the maximum op-version to which the cluster could be bumped up by invoking the gluster volume get command on the newly introduced global option, cluster.max-op-version. The usage is as follows:

# gluster volume get all cluster.max-op-version

Support for rebalance time to completion estimation

Users can now see approximately how much time the rebalance operation will take to complete across all nodes.

The estimated time left for rebalance to complete is displayed as part of the rebalance status. Use the command:

# gluster volume rebalance <VOLNAME> status
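
For context, a minimal end-to-end example (with <VOLNAME> as a placeholder) is to start a rebalance and then poll its status, which now includes the estimated time left:

# gluster volume rebalance <VOLNAME> start
# gluster volume rebalance <VOLNAME> status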

Separation of tier as its own service

This change moves the management of the tier daemon into the gluster service framework, thereby improving its stability and manageability.

There is no change to any of the tier commands or user-facing interfaces and operations.

 

Statedump support for gfapi based applications

gfapi-based applications can now dump state information for better troubleshooting of issues. A statedump can be triggered in two ways:

  1. By executing the following on one of the Gluster servers:

     # gluster volume statedump <VOLNAME> client <HOST>:<PID>

    • <VOLNAME> should be replaced by the name of the volume
    • <HOST> should be replaced by the hostname of the system running the gfapi application
    • <PID> should be replaced by the PID of the gfapi application

  2. By calling glfs_sysrq(<FS>, GLFS_SYSRQ_STATEDUMP) within the application
    • <FS> should be replaced by a pointer to a glfs_t structure

All statedumps (*.dump.* files) will be located at the usual location; on most distributions this would be /var/run/gluster/.

Disabled creation of trash directory by default

From now on, the trash directory, namely .trashcan, will not be created by default when new volumes are created. It is created only when the feature is turned on, and the associated restrictions apply as long as features.trash is set for a particular volume.
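
For example (a minimal sketch, with <VOLNAME> as a placeholder), the feature can be turned on per volume via the option mentioned above:

# gluster volume set <VOLNAME> features.trash on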

Implemented parallel readdirp with distribute xlator

Currently, directory listing gets slower as the number of bricks/nodes in a volume increases, even though the number of files and directories remains unchanged. With this feature, the performance of directory listing is made mostly independent of the number of nodes/bricks in the volume, so increasing scale does not exponentially reduce directory listing performance. (On a 2, 5, 10 and 25 brick setup we saw ~5%, 100%, 400% and 450% improvement respectively.)

To enable this feature:

# gluster volume set <VOLNAME> performance.readdir-ahead on
# gluster volume set <VOLNAME> performance.parallel-readdir on

To disable this feature:

# gluster volume set <VOLNAME> performance.parallel-readdir off

If there are more than 50 bricks in the volume, it is good to increase the cache size beyond its 10MB default:

# gluster volume set <VOLNAME> performance.rda-cache-limit <CACHE SIZE>

md-cache can optionally negative-cache the security.ima xattr

From kernel version 3.x onwards, creating a file results in a removexattr call on the security.ima xattr. This xattr is not set on the file unless the IMA feature is active. With this change, the removexattr call returns ENODATA if the xattr is not found in the cache.

The end benefit is faster create operations where IMA is not enabled.

To cache this xattr, use:

# gluster volume set <VOLNAME> performance.cache-ima-xattrs on

The above option is on by default.

Added support for CPU extensions in disperse computations

To improve disperse computations, a new way of generating dynamic code targeting specific CPU extensions, like SSE and AVX on Intel processors, has been implemented. The available extensions are detected at run time. This can roughly double encoding and decoding speeds (or halve CPU usage).

This change is 100% compatible with the old method. No change is needed if an existing volume is upgraded.

You can control which extensions to use or disable them with the following command:

# gluster volume set <VOLNAME> disperse.cpu-extensions <type>

Valid values are:

  • none: Completely disable dynamic code generation
  • auto: Automatically detect available extensions and use the best one
  • x64: Use dynamic code generation using standard 64 bits instructions
  • sse: Use dynamic code generation using SSE extensions (128 bits)
  • avx: Use dynamic code generation using AVX extensions (256 bits)

The default value is ‘auto’. If a value is specified that is not detected at run time, it will automatically fall back to the next available option.

Bugs addressed

Bugs addressed since release-3.9 are listed in our full release notes.