
Posted on July 5, 2016

WORM (Write Once Read Many), Retention and Compliance

This feature adds a WORM-based compliance/archiving solution to GlusterFS. It mainly focuses on the following:

  • Compliance: laws and regulations governing how intellectual property and confidential information may be accessed and stored.
  • WORM/Retention: storing data in a tamper-proof and secure way, together with data-accessibility policies.
  • Archive: storing data effectively and efficiently, together with a disaster-recovery solution.


WORM/Retention empowers GlusterFS users to safeguard their data in a tamper-proof manner. It further enables users to maintain and track the state of a file as it moves through its time-bound states (writable, read-only and un-deletable), thereby nullifying any attempt to change the contents, location or properties of a static file on a brick.
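As a rough sketch of how this looks from the CLI (the file-level option names below come from the 3.8-era WORM/Retention feature and should be treated as assumptions; check `gluster volume set help` on your release):

```shell
# Classic volume-level WORM: every file on the volume becomes write-once.
gluster volume set myvol features.worm on

# File-level WORM/Retention (assumed option names): individual files
# transition to a read-only, un-deletable state, with a tunable
# retention period (seconds) and retention mode.
gluster volume set myvol features.worm-file-level on
gluster volume set myvol features.retention-mode relax
gluster volume set myvol features.default-retention-period 120
```

Here `myvol` is a placeholder volume name.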


Posted on June 15, 2016

GlusterFS 3.8 Released

The Gluster community announces the release of GlusterFS 3.8 on June 14, 2016, marking a decade of active development.


The 3.8 release focuses on:

  • containers, with the inclusion of Heketi
  • ecosystem integration
  • protocol improvements with NFS-Ganesha


Contributed features are marked with the supporting organizations.


  • Automatic conflict resolution, self-healing improvements (Facebook)
    • Synchronous replication receives a major boost with features contributed by Facebook. Multi-threaded self-healing makes self-heal perform at a faster rate than before, and automatic conflict resolution ensures that conflicts due to network partitions are handled without the need for administrative intervention.
  • NFSv4.1 (Ganesha) – protocol
    • Gluster’s native NFSv3 server is disabled by default with this release. Gluster’s integration with NFS-Ganesha provides NFSv3, v4 and v4.1 access to data stored in Gluster volumes.
  • BareOS – backup / data protection
    • Gluster 3.8 is ready for integration with BareOS 16.2. BareOS 16.2 leverages glusterfind for intelligently backing up objects stored in a Gluster volume.
  • “Next generation” tiering and sharding – VM images
    • Sharding is now stable for VM image storage. Geo-replication has been enhanced to integrate with sharding for offsite backup/disaster recovery of VM images. Self-healing and data tiering with sharding makes it an excellent candidate for hyperconverged virtual machine image storage.
  • block device & iSCSI with LIO – containers
    • File-backed block devices are usable from Gluster through iSCSI. This release of Gluster integrates with tcmu-runner to access block devices natively through libgfapi.
  • Heketi – containers, dynamic provisioning
    • Heketi provides the ability to dynamically provision Gluster volumes without administrative intervention. Heketi can manage multiple Gluster clusters and will be the cornerstone for integration with Container and Storage as a Service management ecosystems.
  • glusterfs-coreutils (Facebook) – containers
    • Native coreutils for Gluster developed by Facebook that uses libgfapi to interact with gluster volumes. Useful for systems and containers that do not have FUSE.
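The Heketi entry above is worth a concrete illustration. A minimal provisioning flow might look like the following (the server endpoint is hypothetical, and exact flags depend on your heketi-cli version):

```shell
# Point heketi-cli at a running Heketi server (hypothetical endpoint).
export HEKETI_CLI_SERVER=http://heketi.example.com:8080

# Ask Heketi for a 100GB replica-3 volume; Heketi chooses bricks across
# the clusters it manages and creates the Gluster volume for you.
heketi-cli volume create --size=100 --replica=3
```

No `gluster` commands are needed on the requesting side; that is the point of dynamic provisioning.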


For more details, please refer to our release notes.


The release of 3.8 also marks the end of life for GlusterFS 3.5; there will be no further updates for that version.

Posted on March 22, 2016

Gluster Community Newsletter, March 2016

Great things happening this month!

Upcoming next month:

Linux Foundation Vault 

GlusterFS and its Distribution Model – Sakshi Bansal
GlusterFS @ Facebook – Richard Wareing
Arbiter based Replication in Gluster without 3x Storage Cost and Zero Split-Brains! – Ravishankar N.
Tiering in GlusterFS: Hardware Config Considerations – Veda Shankar

Ganesha + Gluster scale out NFSv4 – Kaleb Keithley
Huge Indexes: Algorithms to Track Objects in Cache Tiers – Dan Lambright
GlusterD 2.0 – Managing Distributed File System Using a Centralized Store – Atin Mukherjee
Understanding Client Side Shared Cache with pblcache – Luis Pabon
Deploying pNFS over Distributed File Storage – Jiffin Tony Thottan
Storage as a Service with Gluster – Vijay Bellur, Red Hat
Lessons Learned Containerizing GlusterFS and Ceph with Docker and Kubernetes – Huamin Chen

Noteworthy threads:


GlusterFS FUSE Client Performance Issues – Ravishankar N comments that the FUSE client performance issues will be resolved with the 3.7.9 release
SELinux support in the near future!!! – Manikandan S outlines support for SELinux in upcoming releases
Default quorum for 2 way replication – Pranith kicks off a conversation about quorum in 2 way replication


Quality of Service in Glusterfs – Raghavendra Gowdappa kicks off a discussion on QoS
Updates on GD2 from Kaushal
GD2 ETCD Bootstrapping – Atin provides an update on GlusterD 2.0
On backporting fixes – Raghavendra Talur begins a discussion on backporting patches and tests
Improving subdir export for NFS-Ganesha – Jiffin Tony Thottan starts a discussion on whether this should land in 3.7.9 or 3.8

Fuse Subdirectory mounts, access-control and sub-directory geo-replication, snapshot features – Pranith Kumar Karampuri (and Kaushal) gives a two-part update on design.

Gluster Top 5 Contributors in the last 30 days:

Niels de Vos, Mohammed Rafi KC, Kaleb Keithley, Soumya Koduri, Sakshi Bansal

Upcoming CFPs:

Flock: April 8
LinuxCon Japan: May 6
LinuxCon North America: April 26
LinuxCon Europe: June 17
LISA: April 25

Posted on March 21, 2016

Automated Tiering in Gluster

This post describes how to run automated tiering in Gluster. Tiering is appropriate for stable workloads where frequently used data fits on small, fast storage, such as SSDs, and rarely used data resides on a slower/cheaper volume, such as spinning disks.

On a tiered volume, files are tracked according to frequency of access. Popular files tend to migrate to faster storage, and unpopular ones to slower storage. The behavior can be influenced with tuning parameters.


Basic Operation

To use tiering, take an existing volume and attach a hot tier to it. The existing volume becomes the cold tier. The existing volume may be either erasure coded or distributed-replicated. The hot tier must be distributed-replicated. For example:

gluster volume tier vol1 attach gprfs01:/brick1 gprfs02:/brick2 gprfs03:/brick3

Once the tier is attached, there may be a delay before migration begins while a full scan of the cold tier is performed. This delay should be removed in a future release.

Promotion refers to file migration from the cold tier to the hot tier; demotion is migration in the opposite direction.

Each time a file is migrated, a counter is incremented. The counters can be viewed with:

gluster volume tier vol1 status

You can stop tiering to use the hot bricks for some other purpose. To stop tiering, use the detach operation. The steps resemble removing bricks. You initiate the process, then wait for it to complete by monitoring its status. This may take time depending on how much data must be moved off the hot tier. Once completed, the commit command may be used to remove the hot tier. The cold tier then reverts to the original volume.

gluster volume tier vol1 detach start
gluster volume tier vol1 detach status
gluster volume tier vol1 detach commit


Hot storage is valuable and should be utilized, or the resource is wasted. To this end, the tiering feature aggressively promotes files to the hot tier until it nears capacity. That point is governed by the “cluster.watermark-low” tunable and is expressed as a percentage.

Conversely, the hot tier must not become completely full. If too much data resides on the hot tier, files are aggressively demoted. That threshold is governed by “cluster.watermark-hi”.

The system attempts to stabilize so that the amount of data on the hot tier stays between the low and high watermarks.

gluster volume set vol cluster.watermark-hi 90
gluster volume set vol cluster.watermark-low 75
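The watermark behavior can be sketched as a simplified model (this is illustrative only, not the tiering daemon's actual code, and the function name is hypothetical):

```python
def tier_mode(hot_used_pct: float, low: float = 75, hi: float = 90) -> str:
    """Simplified model of how the tiering daemon reacts to hot-tier usage."""
    if hot_used_pct < low:
        return "promote aggressively"  # hot tier underused: fill it up
    if hot_used_pct > hi:
        return "demote aggressively"   # hot tier too full: drain it
    return "balanced"                  # between watermarks: steady state

print(tier_mode(50))  # promote aggressively
print(tier_mode(95))  # demote aggressively
print(tier_mode(80))  # balanced
```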

The tiering daemon migrates files periodically. The period for promoting files is “cluster.tier-promote-frequency”. Its default value was chosen such that files would be promoted quickly, in reaction to I/O. The period for demoting files is “cluster.tier-demote-frequency”. Its default value was chosen such that files are demoted slowly in the background. These values are expressed in seconds.

gluster volume set vol cluster.tier-promote-frequency 120
gluster volume set vol cluster.tier-demote-frequency 3600

It is possible to limit how much data may be migrated within a period. The limit may be expressed as a number of files or in MB.

gluster volume set vol cluster.tier-max-mb 4000
gluster volume set vol cluster.tier-max-files 10000

By default, files are queued for promotion if they are accessed on the cold tier within a period. This behavior can be changed so that files are promoted only if they are accessed more than a threshold number of times within the period. The threshold may be expressed in terms of reads or writes. This avoids populating the hot tier with files that are accessed only once; the hot tier should store files which are repeatedly accessed.

gluster volume set vol cluster.write-freq-threshold 2
gluster volume set vol cluster.read-freq-threshold 2


As of March 2016, measurements have covered cases where ~95% of the I/Os go to files on the hot tier. Those experiments showed good performance when the cold tier is distributed-replicated. When the cold tier is erasure coded, the feature works well for larger files (greater than 512K) on a typical SSD.

Performance should improve as the code matures, and your mileage may vary. A subsequent post will explore performance.

Posted on December 24, 2015

Sharding – What next?

In my previous post, I talked about the sharding feature – what it does, where it is useful, and so on. When we designed and wrote the sharding feature in GlusterFS, our focus was single-writer-to-large-file use cases, chief among them the virtual machine image store use case.

We are happy to announce that we have reached the stage where the feature is considered stable for the VM store use case, after several rounds of testing (thanks to Lindsay Mathieson from the community, Paul Cuzner and Satheesaran Sundaramoorthi), bug fixing and reviews (thanks to Pranith Kumar Karampuri), and a couple of performance improvements. Patches have also been sent to make sharding work with geo-replication, thanks to Kotresh’s efforts (testing still in progress).

We would love to hear what you think of the feature and where it can be further improved. Specifically, we are seeking feedback on two questions:

  1. Your experience testing sharding with the VM store use case – any bugs you ran into, any performance issues, etc.
  2. Other large-file use cases you know of or use where you think sharding would be useful.

Based on your feedback, we will start work on making sharding work in other workloads and with other existing GlusterFS features.

We look forward to hearing from you.

Posted on December 23, 2015

Introducing shard translator

GlusterFS 3.7.0 saw the release of the sharding feature, among several others. The feature was tagged “experimental” as it was still in the early stages of development at the time. Here is an introduction to the feature:

Why shard translator?

GlusterFS’ answer to very large files (those which can grow beyond a single brick) had never been clear. The stripe translator allows this, but at a cost in flexibility: servers can be added only in multiples of stripe-count × replica-count, and mixing striped and unstriped files is not possible in an elegant way. This is also a big limiting factor for the big data/Hadoop use case, where super-large files are the norm (and where you want to split a file even if it fits within a single server). The proposed solution is to replace the stripe translator with a new “shard” translator.


Unlike stripe, shard is not a cluster translator; it sits on top of DHT. Files are created as normal files up to a certain configurable size: the first block (default 4MB) is stored like a normal file under its parent directory. Further blocks are stored as separate files in a hidden namespace, named by the original file’s GFID and the block index (/.shard/GFID.1, /.shard/GFID.2 … /.shard/GFID.N). I/O to a particular offset is directed to the appropriate “piece file”, which is created on demand. The aggregated file size and block count are stored in the xattrs of the original file.
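The offset-to-piece-file mapping can be sketched in a few lines of Python (an illustrative model with hypothetical names, not the translator's actual code):

```python
def shard_path(gfid: str, parent_path: str, offset: int,
               block_size: int = 4 << 20) -> str:
    """Return the backend file holding the byte at `offset`.

    Block 0 lives as a normal file under its parent directory;
    block N (N >= 1) lives at /.shard/<GFID>.N.
    """
    index = offset // block_size
    if index == 0:
        return parent_path
    return f"/.shard/{gfid}.{index}"

# With a 16MB shard size, byte offset 20MB falls in piece file .1:
print(shard_path("bc19873d", "/testfile", 20 << 20, block_size=16 << 20))
# -> /.shard/bc19873d.1
```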


Here I have a 2×2 distributed-replicated volume.

# gluster volume info
Volume Name: dis-rep
Type: Distributed-Replicate
Volume ID: 96001645-a020-467b-8153-2589e3a0dee3
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Brick1: server1:/bricks/1
Brick2: server2:/bricks/2
Brick3: server3:/bricks/3
Brick4: server4:/bricks/4
Options Reconfigured:
performance.readdir-ahead: on

To enable sharding on it, this is what I do:

# gluster volume set dis-rep features.shard on
volume set: success

Now, to configure the shard block size to 16MB, this is what I do:

# gluster volume set dis-rep features.shard-block-size 16MB
volume set: success

How files are sharded:

Now I write 84MB of data into a file named ‘testfile’.

# dd if=/dev/urandom of=/mnt/glusterfs/testfile bs=1M count=84
84+0 records in
84+0 records out
88080384 bytes (88 MB) copied, 13.2243 s, 6.7 MB/s

Let’s check the backend to see how the file was sharded to pieces and how these pieces got distributed across the bricks:

# ls /bricks/* -lh
/bricks/1:
total 0

/bricks/2:
total 0

/bricks/3:
total 17M
-rw-r--r--. 2 root root 16M Dec 24 12:36 testfile

/bricks/4:
total 17M
-rw-r--r--. 2 root root 16M Dec 24 12:36 testfile

So the base file hashed to the second replica set (brick3 and brick4, which form a replica pair) and is 16M in size. Where did the remaining 68MB of data go? To find out, let’s check the contents of the hidden directory .shard on all the bricks:

# ls /bricks/*/.shard -lh
/bricks/1/.shard:
total 37M
-rw-r--r--. 2 root root  16M Dec 24 12:36 bc19873d-7772-4803-898c-bf14ee1ff2bd.1
-rw-r--r--. 2 root root  16M Dec 24 12:36 bc19873d-7772-4803-898c-bf14ee1ff2bd.3
-rw-r--r--. 2 root root 4.0M Dec 24 12:36 bc19873d-7772-4803-898c-bf14ee1ff2bd.5

/bricks/2/.shard:
total 37M
-rw-r--r--. 2 root root  16M Dec 24 12:36 bc19873d-7772-4803-898c-bf14ee1ff2bd.1
-rw-r--r--. 2 root root  16M Dec 24 12:36 bc19873d-7772-4803-898c-bf14ee1ff2bd.3
-rw-r--r--. 2 root root 4.0M Dec 24 12:36 bc19873d-7772-4803-898c-bf14ee1ff2bd.5

/bricks/3/.shard:
total 33M
-rw-r--r--. 2 root root 16M Dec 24 12:36 bc19873d-7772-4803-898c-bf14ee1ff2bd.2
-rw-r--r--. 2 root root 16M Dec 24 12:36 bc19873d-7772-4803-898c-bf14ee1ff2bd.4

/bricks/4/.shard:
total 33M
-rw-r--r--. 2 root root 16M Dec 24 12:36 bc19873d-7772-4803-898c-bf14ee1ff2bd.2
-rw-r--r--. 2 root root 16M Dec 24 12:36 bc19873d-7772-4803-898c-bf14ee1ff2bd.4

So the file was split into six pieces: five of them residing in the hidden directory /.shard, distributed across replica sets based on disk-space availability and file-name hash, and the first block residing in its native parent directory. Notice how blocks 1 through 4 are each 16M in size while the last block (block 5) is 4M.

Now let’s do some math to see how ‘testfile’ was “sharded”:

The total size of the write was 84MB, and the configured block size in this case is 16MB. So 84MB divided by 16MB = 5, with a remainder of 4MB.

So the file was basically broken into 6 pieces in all, with the last piece having 4MB of data and the rest of them 16MB in size.
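That arithmetic generalizes; as a quick sketch (hypothetical helper, sizes in MB):

```python
def shard_layout(file_size_mb: int, block_size_mb: int) -> list:
    """Sizes of the pieces a file is split into: full blocks plus a remainder."""
    full, rem = divmod(file_size_mb, block_size_mb)
    return [block_size_mb] * full + ([rem] if rem else [])

# 84MB with a 16MB shard block size -> six pieces, the last one 4MB:
print(shard_layout(84, 16))  # [16, 16, 16, 16, 16, 4]
```

The first 16MB piece is the base file in the parent directory; the remaining five land under /.shard.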

Now when we view the file from the mount point, it would appear as one single file:

# ls -lh /mnt/glusterfs/
total 85M
-rw-r--r--. 1 root root 84M Dec 24 12:36 testfile

Notice how the file is shown to be of size 84MB on the mount point. Similarly, when the file is read by an application, the different pieces or ‘shards’ are stitched together and appropriately presented to the application as if there was no chunking done at all.

Advantages of sharding:

The advantages of sharding a file over striping it across a finite set of bricks are:

  • Data blocks are distributed by DHT in a “normal way”.
  • Adding servers can happen in any number (even one at a time) and DHT’s rebalance will spread out the “piece files” evenly.
  • Sharding provides better utilization of disk space. It is no longer necessary to have at least one brick of size X in order to accommodate a file of size X, where X is very large. Consider this example: a distribute volume made up of three bricks of sizes 10GB, 20GB and 30GB cannot store a file greater than 30GB. Sharding eliminates this limitation; a file of up to 60GB can be stored on this volume.
  • Self-healing of a large file is now distributed across smaller files on more servers, leading to better heal performance and lower CPU usage, which is particularly a pain point for large-file workloads.
  • The piece-file naming scheme is immune to renames and hard links.
  • When geo-replicating a large file to a remote volume, only the shards that changed can be synced to the slave, considerably reducing the sync time.
  • When sharding is used in conjunction with tiering, only the shards that change would be promoted/demoted. This reduces the amount of data that needs to be migrated between hot and cold tier.
  • When sharding is used in conjunction with bit-rot detection feature of GlusterFS, the checksum is computed on smaller shards as opposed to one large file.
Posted on October 12, 2015

Linux scale out NFSv4 using NFS-Ganesha and GlusterFS — one step at a time

NFS-Ganesha 2.3 is rapidly winding down to release and it has a bunch of new things in it that make it fairly compelling. A lot of people are also starting to use Red Hat Gluster Storage with the NFS-Ganesha NFS server that is part of that package. Setting up a highly available NFS-Ganesha system using GlusterFS is not exactly trivial. This blog post will “eat the elephant” one bite at a time.

Some people might wonder why use NFS-Ganesha — a user space NFS server — when kernel NFS (knfs) already supports NFSv4? The answer is simple really. NFSv4 in the kernel doesn’t scale. It doesn’t scale out, and it’s a single point of failure. This blog post will show how to set up a resilient, highly available system with no single point of failure.


Let’s start small and simple. We’ll set up a single NFS-Ganesha server on CentOS 7, serving a single disk volume.

Start by setting up a CentOS 7 machine. You may want to create a separate volume for the NFS export; we’ll leave that as an exercise for the reader. Do not install the kernel NFS server.

1. Install EPEL, NFS-Ganesha and GlusterFS using the yum repos. The repo files nfs-ganesha.repo and glusterfs-epel.repo provide the packages; copy them to /etc/yum.repos.d.

    % yum -y install epel-release
    % yum -y install glusterfs-server glusterfs-fuse glusterfs-cli glusterfs-ganesha
    % yum -y install nfs-ganesha-xfs

2. Create a directory to mount the export volume, make a file system on the export volume, and finally mount it:

    % mkdir -p /bricks/demo
    % mkfs.xfs /dev/sdb
    % mount /dev/sdb /bricks/demo

3. Gluster recommends not creating volumes on the root directory of the brick. If something goes wrong, it’s easier to rm -rf the directory than to try to clean it or remake the file system. Create a couple of subdirectories on the brick:

    % mkdir /bricks/demo/vol
    % mkdir /bricks/demo/scratch

4. Edit the Ganesha config file at /etc/ganesha/ganesha.conf. Here’s what mine looks like:

	EXPORT
	{
		# Export Id (mandatory, each EXPORT must have a unique Export_Id)
		Export_Id = 1;

		# Exported path (mandatory)
		Path = /bricks/demo/scratch;

		# Pseudo Path (required for NFS v4)
		Pseudo = /bricks/demo/scratch;

		# Required for access (default is None)
		# Could use CLIENT blocks instead
		Access_Type = RW;

		# Exporting FSAL
		FSAL {
			Name = XFS;
		}
	}

5. Start ganesha:

    % systemctl start nfs-ganesha

6. Wait one minute for NFS grace to end, then mount the volume:

    % mount localhost:/scratch /mnt


7. Now we’ll create a simple Gluster volume and use NFS-Ganesha to serve it. We also need to disable Gluster’s built-in NFS server (gnfs).

    % gluster volume create simple $hostname:/bricks/demo/simple
    % gluster volume set simple nfs.disable on
    % gluster volume start simple

8. Edit the Ganesha config file at /etc/ganesha/ganesha.conf. Here’s what mine looks like:

	EXPORT
	{
		# Export Id (mandatory, each EXPORT must have a unique Export_Id)
		Export_Id = 1;

		# Exported path (mandatory)
		Path = /simple;

		# Pseudo Path (required for NFS v4)
		Pseudo = /simple;

		# Required for access (default is None)
		# Could use CLIENT blocks instead
		Access_Type = RW;

		# Exporting FSAL
		FSAL {
			Name = GLUSTER;
			Hostname = localhost;
			Volume = simple;
		}
	}

9. Restart ganesha:

    % systemctl stop nfs-ganesha
    % systemctl start nfs-ganesha

10. Wait one minute for NFS grace to end, then mount the volume:

    % mount localhost:/simple /mnt

Copy a file to the NFS volume. You’ll see it on the gluster brick in /bricks/demo/simple.


Now for the part you’ve been waiting for. For this we’ll start from scratch. This will be a four node cluster: node0, node1, node2, and node3.

1. Tear down anything left over from the above.

2. Ensure that all nodes are resolvable either in DNS or /etc/hosts:

    node0% cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    <node0-ip>  node0
    <node1-ip>  node1
    <node2-ip>  node2
    <node3-ip>  node3
    <node0-vip> node0v
    <node1-vip> node1v
    <node2-vip> node2v
    <node3-vip> node3v

(Substitute the real addresses of your nodes and virtual IPs.)

3. Set up passwordless ssh among the four nodes. On node0, create a keypair and deploy it to all the nodes:

    node0% ssh-keygen -f /var/lib/glusterd/nfs/secret.pem
    node0% ssh-copy-id -i /var/lib/glusterd/nfs/ root@node0
    node0% ssh-copy-id -i /var/lib/glusterd/nfs/ root@node1
    node0% ssh-copy-id -i /var/lib/glusterd/nfs/ root@node2
    node0% ssh-copy-id -i /var/lib/glusterd/nfs/ root@node3
    node0% scp /var/lib/glusterd/nfs/secret.* node1:/var/lib/glusterd/nfs/
    node0% scp /var/lib/glusterd/nfs/secret.* node2:/var/lib/glusterd/nfs/
    node0% scp /var/lib/glusterd/nfs/secret.* node3:/var/lib/glusterd/nfs/

You can confirm that it works with:

    node0% ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/nfs/secret.pem root@node1

4. Start glusterd on all nodes:

    node0% systemctl enable glusterd && systemctl start glusterd
    node1% systemctl enable glusterd && systemctl start glusterd
    node2% systemctl enable glusterd && systemctl start glusterd
    node3% systemctl enable glusterd && systemctl start glusterd

5. From node0, peer probe the other nodes:

    node0% gluster peer probe node1
    peer probe: success
    node0% gluster peer probe node2
    peer probe: success
    node0% gluster peer probe node3
    peer probe: success

You can confirm their status with:

    node0% gluster peer status
    Number of Peers: 3

    Hostname: node1
    Uuid: ca8e1489-0f1b-4814-964d-563e67eded24
    State: Peer in Cluster (Connected)

    Hostname: node2
    Uuid: 37ea06ff-53c2-42eb-aff5-a1afb7a6bb59
    State: Peer in Cluster (Connected)

    Hostname: node3
    Uuid: e1fb733f-8e4e-40e4-8933-e215a183866f
    State: Peer in Cluster (Connected)

6. Create the /etc/ganesha/ganesha-ha.conf file on node0. Here’s what mine looks like:

# Name of the HA cluster created.
# must be unique within the subnet
HA_NAME="<cluster-name>"
#
# The gluster server from which to mount the shared data volume.
# You may use short names or long names; you may not use IP addresses.
# Once you select one, stay with it, as it will be mildly unpleasant to
# clean up if you switch later on. Ensure that all names - short and/or
# long - are in DNS or /etc/hosts on all machines in the cluster.
HA_VOL_SERVER="node0"
#
# The subset of nodes of the Gluster Trusted Pool that form the ganesha
# HA cluster. Hostname is specified.
HA_CLUSTER_NODES="node0,node1,node2,node3"
#
# Virtual IPs for each of the nodes specified above.
VIP_node0="<node0-vip>"
VIP_node1="<node1-vip>"
VIP_node2="<node2-vip>"
VIP_node3="<node3-vip>"

7. Enable the Gluster shared state volume:

    node0% gluster volume set all cluster.enable-shared-storage enable

Wait a few moments for it to be mounted everywhere. You can check that it’s mounted at /run/gluster/shared_storage (or /var/run/gluster/shared_storage) on all the nodes.

8. Enable and start the Pacemaker pcsd on all nodes:

    node0% systemctl enable pcsd && systemctl start pcsd
    node1% systemctl enable pcsd && systemctl start pcsd
    node2% systemctl enable pcsd && systemctl start pcsd
    node3% systemctl enable pcsd && systemctl start pcsd

9. Set a password for the user ‘hacluster’ on all nodes. Use the same password for all nodes:

    node0% echo demopass | passwd --stdin hacluster
    node1% echo demopass | passwd --stdin hacluster
    node2% echo demopass | passwd --stdin hacluster
    node3% echo demopass | passwd --stdin hacluster

10. Perform cluster auth between the nodes. Username is ‘hacluster’, Password is the one you used in step 9:

    node0% pcs cluster auth node0
    node0% pcs cluster auth node1
    node0% pcs cluster auth node2
    node0% pcs cluster auth node3

11. Create the Gluster volume to export. We’ll create a 2×2 distribute-replicate volume. Start the volume:

    node0% gluster volume create cluster-demo replica 2 node0:/home/bricks/demo node1:/home/bricks/demo node2:/home/bricks/demo node3:/home/bricks/demo
    node0% gluster volume start cluster-demo

12. Enable ganesha, i.e. start the ganesha.nfsd:

    node0% gluster nfs-ganesha enable

13. Export the volume:

    node0% gluster vol set cluster-demo ganesha.enable on

14. And finally mount the NFS volume from a client using one of the virtual IP addresses:

    nfs-client% mount node0v:/cluster-demo /mnt

Posted on October 9, 2015

GlusterFS at LinuxCon Europe 2015

We’ve just wrapped up a great week at LinuxCon Europe 2015 in Dublin, with a strong showing from the Gluster community!

BitRot Detection in GlusterFS – Gaurav Garg, Red Hat & Venky Shankar

Advancements in Automatic File Replication in Gluster – Ravishankar N

Gluster for Sysadmins – Dustin Black

Open Storage in the Enterprise with Gluster and Ceph – Dustin Black

If you missed any of these, we’ll be posting slides and a quick review of each talk over the next week.

Posted on October 2, 2015

Gluster News: September 2015

Since we did not have any weekly Gluster news go out in September, this post captures and summarizes action from the entire month of September 2015.

==  General News ==

GlusterFS won yet another Bossie in the open source platforms, infrastructure, management, and orchestration software category. Long time users of the project might remember the first Bossie win in 2011.

GlusterFS had three bug-fix releases in September: 3.7.4, 3.6.6 and 3.5.6.

GlusterFS 3.7.5 is expected to be released in the first week of October.

Samba libgfapi support is currently broken in GlusterFS 3.7.x; stay on GlusterFS 3.6.x if you use it.

gdeploy, an Ansible-based deployment tool for Gluster, was released this month. More details about gdeploy can be found in the announcement email.

Introducing georepsetup – Gluster Geo-replication  Setup Tool.

If you are interested in Containers & Gluster, learn more about Running GlusterFS inside Kubernetes.

== Technical News ==

Design discussions happened virtually during the week of 09/28.

Several new features have been proposed for Gluster 3.8 and can be found here. A release planning page for 3.8 is expected in the next few weeks.

Getting Started with Code Contributions to GlusterFS

== Community News ==

Introduction of Amye Scavarda – new community lead for Gluster.

GlusterFS Silicon Valley Meetup group had a meetup at Facebook Campus in Menlo Park, CA. More details about the meetup can be found here.

GlusterFS India Community had a meetup in Bangalore on September 12.

Soumya Koduri & Poornima Gurusiddaiah presented on the following topics at the SNIA Software Developers Conference in Santa Clara, CA:

Achieving Coherent and Aggressive Client Caching in Gluster, a Distributed System
Introduction to Highly Available NFS Server on Scale-Out Storage Systems Based on GlusterFS

Niels de Vos gave a presentation and demo about writing backups with Bareos to Gluster at the Open Source Backup Conference.

Several talks related to Gluster are planned at the upcoming LinuxCon EU conference in Dublin, Ireland:

NFS-Ganesha and Clustered NAS on Distributed Storage Systems – Soumya Koduri, Meghana Madhusudhan
Advancements in Automatic File Replication in Gluster – Ravishankar N
BitRot Detection in GlusterFS – Gaurav Garg, Venky Shankar
Open Storage in the Enterprise with Gluster and Ceph – Dustin Black

We plan to have bi-weekly updates starting with the next edition. Stay tuned to learn about happenings in the Gluster world!

Posted on September 18, 2015

Silicon Valley Meetup at Facebook, September 2015

Facebook hosted a great crowd on Monday, September 14, in Silicon Valley with about 30 attendees and a full night of presentations about Gluster.

We started by introducing our new Gluster Community Lead, Amye Scavarda. You’ll see her a lot more in the coming months promoting Gluster and the Gluster community.

The talks started with the future: where Gluster is moving with GlusterNext, led by Jeff Darcy.

Jeff Darcy taking questions at SV Meetup – 9/14

Richard Wareing of Facebook talked about their use case with Gluster and how it helps power their work.

Richard Wareing of Facebook talking about their use case

On the topic of performance, Shyam Ranganathan talked about DHT2 and improvements that have been made over DHT.

Shyam Ranganathan discussing performance

Dan Lambright closed out the evening with a talk on Tiering in GlusterNext.

Dan Lambright discussing Tiering

Special thanks to Jacob Shucart for being our master of ceremonies!

Jacob Shucart closing SV Meetup