November 26, 2013

A Gluster Block Interface – Performance and Configuration

This post shares some experiences I’ve had in simulating a block device in gluster.

The block device is a file-based image that acts as the backing store for the Linux SCSI target. The file resides in gluster, so it enjoys gluster's feature set, but the client sees only a block device, which the Linux SCSI target presents over iSCSI.

Some more information on how to set this up is at the bottom of this post.

But where to mount gluster?

There are three options to consider. Call them client, server, and gateway.

The configurations are summarized below:

  • Configuring the block device at the server means the client only requires an iSCSI initiator.

  • Configuring the block device at the client allows I/O to fan out to different nodes.

  • Configuring the block device at a gateway allows I/O to fan out to different nodes without changing the client.


These options have their pros and cons. For example, a gateway minimizes client overhead while providing fan-out. On the other hand, the customer must provide an additional node.

In my case, the objective was to minimize customer burden, so server-side configuration is probably the best choice. After chatting with some colleagues here at Red Hat, that's what we settled on.

I ran some simple performance tests using the “fio” tool to generate I/O.

  • Up to 10 fio processes were started.

  • The queue depth for each was 32.

  • Each process sends I/O to its own slice of the volume.

  • Client and server caches were flushed between tests:

    • echo 3 > /proc/sys/vm/drop_caches

  • 64k records

  • 50G volume replicated over two nodes

  • Gluster version:




  • Only two nodes were used in performance testing. Ideally more nodes would be used, but that equipment is not readily available.

  • There are numerous options to tune, including the queue depth, number of streams, number of paths, gluster volume configuration, block size, number of targets etc. Parameters were chosen based on trial and error rather than formal methodology.
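For reference, the parameters above correspond roughly to a fio job file like the sketch below. This is a reconstruction, not the job file actually used (which was not published); the device name and job name are placeholders.

```ini
; Hypothetical fio job approximating the test parameters above.
[global]
ioengine=libaio      ; asynchronous I/O
direct=1             ; bypass the client page cache
bs=64k               ; 64k records
iodepth=32           ; queue depth of 32 per process
numjobs=10           ; up to 10 fio processes
size=5g              ; each process owns a 5g slice of the 50G volume...
offset_increment=5g  ; ...starting at a distinct offset

[blockdev]
filename=/dev/sdb    ; the iSCSI-attached, gluster-backed disk (example name)
rw=write
```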




Test bed:

  • 1 KVM client, 6 VCPU x 20GB (pinned) <===> 2 bare-metal file servers, 12 x 20GB each

  • 4 x 64GB blk virtio file systems (cache=none)

  • 1 x 12-disk RAID6 gluster brick per server


For the interested, here is a cookbook to set it up. For more information, see [1] and [2].


  1. Mount gluster locally on your server.


$ mount -t glusterfs <server>:<volname> /mnt
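If you want the mount to survive reboots, an /etc/fstab entry along these lines works; the server and volume names here are placeholders.

```
# Hypothetical fstab entry -- substitute your own server and volume name.
server1:/blockvol  /mnt  glusterfs  defaults,_netdev  0 0
```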


  2. Create a large file representing your block device within the gluster fs.


$ dd if=/dev/zero of=/mnt/disk bs=2G count=25
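As an aside, writing 50G of zeros takes a while; a sparse file serves the same purpose and is created instantly, at the cost of allocate-on-write later. A sketch (the /tmp path is only for illustration; point it at your gluster mount):

```shell
# Create a 50G sparse backing file instead of writing zeros with dd.
# The path below is an example -- use a file on your gluster mount.
truncate -s 50G /tmp/disk-example

# The apparent size is 50G, but almost no blocks are allocated yet;
# space is consumed only as the initiator writes data.
APPARENT=$(stat -c %s /tmp/disk-example)
ALLOCATED=$(stat -c %b /tmp/disk-example)
echo "apparent bytes: $APPARENT, allocated 512-byte blocks: $ALLOCATED"
```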


  3. Create a target using the file as the backend storage.


If necessary, download the Linux SCSI target. Then start the service.


$ yum install scsi-target-utils

$ service tgtd start


You must give an iSCSI Qualified Name (IQN), in the format:

  iqn.yyyy-mm.<reversed domain name>:<identifier>

where yyyy-mm is the 4-digit year and 2-digit month associated with the naming authority's domain (for example: 2011-07).


$ tgtadm --lld iscsi --op new --mode target --tid 1 -T <IQN>


You can look at the target:


$ tgtadm --lld iscsi --op show --mode conn --tid 1

Session: 11

Connection: 0


    IP Address:


Next, add a logical unit to the target:


$ tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /mnt/disk


Allow any initiator to access the target.


$ tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
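The three tgtadm calls above can be collected into a small script. This is a sketch: the IQN is a hypothetical example, and tgtd must already be running before the commands are executed.

```shell
#!/bin/sh
# Build the target-creation commands from a few variables, then print them.
# Pipe the output to sh to actually run them (requires a running tgtd).
IQN="iqn.2013-11.com.example:gluster-disk"   # hypothetical IQN -- use your own
BACKING=/mnt/disk                            # backing file created in step 2
TID=1

CMD_TARGET="tgtadm --lld iscsi --op new --mode target --tid $TID -T $IQN"
CMD_LUN="tgtadm --lld iscsi --op new --mode logicalunit --tid $TID --lun 1 -b $BACKING"
CMD_ACL="tgtadm --lld iscsi --op bind --mode target --tid $TID -I ALL"

printf '%s\n' "$CMD_TARGET" "$CMD_LUN" "$CMD_ACL"
```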

  4. Now it's time to set up your client.


Discover your targets.


$ iscsiadm --mode discovery --type sendtargets --portal <server-IP>


Log in to your target session.

$ iscsiadm --mode node --targetname <IQN> --portal <server-IP> --login


You should now have a new SCSI disk; its creation is logged in /var/log/messages, and it appears in lsblk.


You can send I/O to it (double-check the device name first; this write is destructive):


$ dd if=/dev/zero of=/dev/sda bs=4K count=100


To tear down the session when you are finished:

$ iscsiadm -m node -T <IQN> -p <server-IP> -u



[1] Linux Journal article on making software-backed iSCSI targets

[2] How to set up Linux iSCSI targets with tgt



  1. Joshua says:

    Thanks for the article. I need reference benchmarks for GlusterFS + fio. Would it be possible to append the fio file used for this benchmark?

  2. james says:

    What could be the advantages of simulating a block device under glusterfs?

    As gluster is already a layer on top of a regular FS, are there any performance penalties to this approach?

    This is more like a gluster-defined block device; just curious who will benefit from this approach.

  3. This post was all about presenting iSCSI (the front-end). The gluster block device is the back-end storage; it does not itself provide iSCSI. But I agree it is advantageous to use it rather than creating a LUN using "dd", because this skips the file system overhead.

  4. james says:

    @Dan, sorry, I could not understand your viewpoint from the comment.
    Correct my understanding here: you are creating an iSCSI block device (file-based) inside a gluster volume, and this is presented to any iSCSI initiator.
    So any writes to the iSCSI block device (target) happen via gluster?
    What advantage does this have over a traditional iSCSI block device?
    Does having the iSCSI block device within gluster mean any perf cost? Added benefit/flexibility?

  5. Correct my understanding here: you are creating an iSCSI block device (file-based) inside a gluster volume, and this is presented to any iSCSI initiator.
    So any writes to the iSCSI block device (target) happen via gluster?


    What advantage does this have over a traditional iSCSI block device?

    One answer is that traditional block storage is not scale-out. The advantage of getting something like this working on gluster is that you can attempt scale-out performance and capacity for the block use case. A second answer is that a set of applications require block access that gluster alone does not provide: VMs on Hyper-V, VMware, databases, tape. I think that would be the benefit/flexibility argument.
    I do not believe performance would be as good as traditional block; however, it's not bad. I have a second blog post on this subject in which I use gfapi, and you can find more information in the HOWTO section.

