The Gluster Blog

Testing GlusterFS with very fast disks on Fedora 20

Gluster
November 17, 2014

In the past I used to test with RAM-disks, provided by /dev/ram*. Gluster uses extended attributes on the filesystem, which makes it impossible to use tmpfs. While thinking about improving some of the GlusterFS regression tests, I noticed that Fedora 20 (and possibly earlier versions too) does not provide the /dev/ram* devices anymore. I could not find the needed kernel module quickly, so I decided to look into the newer zram module instead.

Getting zram working is pretty simple. By default one /dev/zram0 is made available after loading the module. If more devices are needed, the module offers a num_devices parameter to create them. After loading the module with modprobe zram, you can do the following to create your high-performance volatile storage:

# SIZE_2GB=$(expr 1024 \* 1024 \* 1024 \* 2)
# echo ${SIZE_2GB} > /sys/class/block/zram0/disksize
# mkfs -t xfs /dev/zram0
# mkdir /bricks/fast
# mount /dev/zram0 /bricks/fast
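As a side note, the size calculation can also be done with POSIX shell arithmetic, which avoids having to escape the multiplication operator for expr:

```shell
# 2 GiB in bytes via shell arithmetic; unlike expr, the * inside
# $(( )) is not subject to globbing and needs no escaping.
SIZE_2GB=$((1024 * 1024 * 1024 * 2))
echo "${SIZE_2GB}"    # prints 2147483648
```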

With this mountpoint it is now possible to create a Gluster volume:

# gluster volume create fast ${HOSTNAME}:/bricks/fast/data
# gluster volume start fast
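To actually exercise the volume in tests, it can be mounted through the GlusterFS native (FUSE) client. A minimal sketch; the mountpoint /mnt/fast is just an example path, not something from the steps above:

```shell
# Mount the 'fast' volume with the GlusterFS FUSE client.
# /mnt/fast is an arbitrary example mountpoint (assumption).
mkdir -p /mnt/fast
mount -t glusterfs ${HOSTNAME}:/fast /mnt/fast
```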

Once done with testing, stop and delete the Gluster volume, and free the zram like this:

# umount /bricks/fast
# echo 1 > /sys/class/block/zram0/reset
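The stop and delete of the Gluster volume mentioned above is not shown; a sketch of what it could look like, to be run before unmounting the brick (the --mode=script option suppresses the interactive confirmation prompts):

```shell
# Stop and delete the 'fast' test volume; do this before
# unmounting /bricks/fast, since the brick lives there.
gluster --mode=script volume stop fast
gluster --mode=script volume delete fast
```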

Of course, unloading the module with rmmod zram would free the resources too.

It is getting more important for Gluster to be prepared for very fast disks. Hardware like Fusion-io flash drives and, in the future, Persistent Memory/NVM will become more widely available in storage clouds, and of course we would like to see Gluster stay part of that!
