
Gluster Volume Snapshot Howto

Gluster
2014-10-22
This article explains how to configure a GlusterFS volume to make use of the Gluster Snapshot feature. As discussed earlier, Gluster Volume Snapshot is based on thinly provisioned logical volumes (LVs). So I will first guide you through creating LVs, bricks and a Gluster volume, and then we will look at the various features of GlusterFS volume snapshot and how to use them.

Gluster Volume Snapshot

The Gluster volume snapshot feature is based on thinly provisioned LVs. To make use of this feature the following guidelines have to be followed:

  • All bricks should be carved out of independent thinly provisioned logical volumes (LVs). In other words, no two bricks should share a common LV. More details about thin provisioning and thinly provisioned snapshots can be found here.
  • This thinly provisioned LV should only be used for forming a brick.
  • The thin pool from which the thin LVs are created should have sufficient space for the data as well as for the pool metadata (see the sketch after this list for how to check both).

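To see how much headroom a thin pool has, check the Data% and Meta% columns reported by lvs. Treat the snippet below as a sketch: the group and pool names (mygroup, mythinpool) match the example built later in this article, and --poolmetadatasize is an optional lvcreate flag for reserving extra metadata space when the pool is created.

lvs mygroup
lvcreate -L 2G -T mygroup/mythinpool --poolmetadatasize 16M
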
Given the above prerequisites, I will take you through an example of how to create bricks in such a setup and how to create a volume from them. Click here to get the details of LVM and its various options.

To give an overview of LVM: a volume group (VG) is formed out of storage devices, and one or more thin pools can be carved out of that volume group. Once you have a thin pool, you can create one or more thinly provisioned logical volumes (LVs) in it. All thin LVs are created with a virtual size, and this virtual size can even be greater than the total pool size itself. This gives the admin the flexibility to procure hardware only when there is demand; e.g. even though I have 1 TB of storage I can create LVs of size 2 TB. The thin pool tracks the storage actually used by each LV as well as the overall pool usage, so you can add more storage as utilization grows.

Let’s start with storage devices and see how we can create volume groups and thin LVs. If you have a storage device attached to your server then you can use it; if you just want to try out how GlusterFS works, you can even make use of loopback devices.

Note: If you already have a storage device then skip this section and go to the pvcreate section. Loopback devices should only be used for testing purposes.

A loopback device points to a file, and the device size depends on the size of that file. So let’s create a file of the required length. You can create the file by any means, but I prefer fallocate as it creates big files really fast.

fallocate -l 2G dev1.img

The above command will create a file of size 2 GB. If you already have a loopback device then associate the newly created file with it, or else create a loopback device first and then associate it. The following command can be used to create a loopback device (loop0).

mknod /dev/loop0 b 7 0

Once we have a loopback device we need to associate it with a file.

losetup /dev/loop0 dev1.img
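
On most modern systems losetup can also find a free loop device and attach the file in a single step, which makes the mknod call unnecessary. This is an alternative to the two commands above, not part of the original walkthrough.

losetup -f --show dev1.img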

Now your test device is ready. Let’s see how we can create a thinly provisioned LV; the first step is to create a physical volume (PV). Use the following command to create a PV.

pvcreate /dev/loop0

The above command will initialize the storage disk or partition so that it can be used by LVM. Once we have the PVs ready we can create the volume group (VG). You can create one VG per PV or you can combine multiple PVs to form a single VG. In this example we are using a single PV to create a VG.

vgcreate mygroup /dev/loop0
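
As a quick sanity check, the standard LVM reporting commands will show the newly created PV and VG.

pvs
vgs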

Now we have a VG named “mygroup”. The next step is to create a thin pool. You can allocate one or more thin pools inside a volume group; in this example we are creating a single thin pool.

lvcreate -L 2G -T mygroup/mythinpool

The above command will create a thin pool named “mythinpool” with 2 GB of storage. Once you have a thin pool you are ready to create thin volumes. Use the following command to create a thin volume.

lvcreate -V 1G -T mygroup/mythinpool -n thinv1

The above command will create a thin volume named “thinv1” with a virtual size of 1 GB. Before creating a brick out of this volume you need to create a file-system on it. Ideally you should use XFS or Ext4 as the file-system. Use mkfs to create a valid file-system; e.g. you can create a simple XFS file-system using the following command.

mkfs.xfs /dev/mygroup/thinv1
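
As an aside, Gluster documentation has traditionally recommended a larger XFS inode size so that Gluster’s extended attributes fit inside the inode; if you want to follow that advice, the variant below does it.

mkfs.xfs -i size=512 /dev/mygroup/thinv1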

Now we need to mount these LVs to form the bricks. Create a mount point and use the mount command to do so.

mkdir -p /bricks/brick1
mount /dev/mygroup/thinv1 /bricks/brick1
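
If you want the brick mount to survive a reboot, an /etc/fstab entry along these lines would do it; treat it as a sketch, since the right mount options are site-specific.

/dev/mygroup/thinv1  /bricks/brick1  xfs  defaults  0 0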

Using the above method you can create one or more bricks. For this demonstration I created one more brick on the “anotherhost” machine. Let’s create a Gluster volume from these bricks; use the following command to do so.

gluster volume create vol1 replica 2 myhost:/bricks/brick1/b1 anotherhost:/bricks/brick1/b1

We have a volume with bricks created out of thin LVs, so we are now ready to test and use the snapshot feature. Ah, forgot one thing: you need to first start the volume.

gluster volume start vol1
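
Before moving on, you can confirm that the volume is up with the standard CLI queries.

gluster volume info vol1
gluster volume status vol1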

Gluster Volume Snapshot

With that we have a running Gluster volume, so let’s create a snapshot of it. A snapshot can be taken only of a started Gluster volume, and taking one creates a read-only Gluster volume that is an exact copy of the original volume at that point in time.

Snapshot Commands

This section provides details about various commands provided by Gluster to manage snapshots.

Snapshot creation

gluster snapshot create <snapname> <volname(s)> [description <description>] [force]

This command will create a snapshot of a Gluster volume. snapname is a mandatory field and the name should be unique in the entire cluster. volname is the name of the origin volume whose snapshot is to be taken. Users can also provide an optional description to be saved along with the snap (maximum 1024 characters).

The following prerequisites need to be met before a snapshot can be taken:

  • The Gluster volume should be in the started state.
  • All the bricks associated with the volume should be up, unless it is an n-way replicated volume with n >= 3, in which case quorum is checked instead.
  • The snapshot name should be unique in the cluster.
  • No other volume operation, such as rebalance or add-brick, should be running on the volume.
  • The total number of snapshots of the volume should not already have reached the effective snap-max-hard-limit.

e.g.

gluster snapshot create snap1 vol1

The above command will create a Gluster snapshot named “snap1” for volume “vol1”. This snapshot is a read-only Gluster volume. The bricks of this read-only volume are mounted under the /var/run/gluster/snaps/ folder as /var/run/gluster/snaps/<snap-volume-name>/brick<bricknumber>, e.g.

/var/run/gluster/snaps/ee1c2c74f70a4043a2bbba94362eaeb6/brick1
/var/run/gluster/snaps/ee1c2c74f70a4043a2bbba94362eaeb6/brick2

Listing of available snaps

gluster snapshot list [volname]

This command lists all the snapshots present in the trusted storage pool, or only those of the specified volume.
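
For example, to list only the snapshots of the volume created earlier:

gluster snapshot list vol1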

Info of snapshots

gluster snapshot info [(snapname | volume <volname>)]

Shows the information of all snapshots or of the specified snapshot. The output includes the snapshot UUID, creation time, the origin volume and the snapshot volume status.

Status of snapshots

gluster snapshot status [(snapname | volume <volname>)]

Shows the running status of all snapshots or of the specified snapshot. The output includes the brick details, LVM details, process details, etc.
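
For example, to check the snapshot taken earlier:

gluster snapshot status snap1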

Deleting snaps

gluster snapshot delete <snapname>

This command will delete the specified snapshot.
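
For example, to delete the snapshot created above (the CLI asks for confirmation before proceeding):

gluster snapshot delete snap1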

Activating a snap volume

Use the following command to activate a snapshot.

gluster snapshot activate <snapname> [force]

If some of the bricks of the snapshot volume are down then use the force option to start them.

Deactivating a snap volume

gluster snapshot deactivate <snapname>

By default a newly created snapshot is in the activated state. The above command will deactivate an active snapshot.
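
For example, taking the snapshot volume offline and bringing it back up:

gluster snapshot deactivate snap1
gluster snapshot activate snap1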

Configuring snapshot

The configurable parameters for snapshot are:

  • snap-max-hard-limit: The hard limit beyond which snapshot creation is not allowed. This limit can be set for the trusted storage pool as a whole or per volume. The effective limit is the lower of the volume limit and the trusted storage pool limit.
  • snap-max-soft-limit: The soft limit beyond which the user gets a warning on snapshot creation. If the auto-delete feature is enabled then snapshot creation beyond this point will lead to deletion of the oldest snapshot. This is a percentage value and the default is 90%.
  • auto-delete: Enables or disables the auto-delete feature. When enabled, the oldest snapshot is deleted when the snapshot count of a volume crosses snap-max-soft-limit. By default this feature is disabled.

The following command displays the existing config values for a volume. If the volume name is not provided then the config values of all volumes are displayed.

gluster snapshot config [volname]

To change the existing configuration values, run the following command. If volname is provided then the config value of that volume is changed; otherwise the trusted storage pool limit is set or changed.

gluster snapshot config [volname] ([snap-max-hard-limit <count>] [snap-max-soft-limit <percent>]) | ([auto-delete <enable|disable>])

A volume-specific limit cannot exceed the trusted storage pool limit. If a volume-specific limit is not set then the trusted storage pool limit applies. The parameters are:

  • snap-max-hard-limit: Maximum hard limit for the system or the specified volume.
  • snap-max-soft-limit: Soft limit mark for the system.
  • auto-delete: Enables or disables the auto-delete feature. By default auto-delete is disabled.
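
For example, to set a per-volume hard limit and enable pool-wide auto-delete (the count of 100 is an arbitrary illustration):

gluster snapshot config vol1 snap-max-hard-limit 100
gluster snapshot config auto-delete enable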

Restoring snaps

gluster snapshot restore <snapname>

This command restores the volume to the state of an already taken snapshot. Snapshot restore is an offline activity: if the volume that is part of the given snap is online, the restore operation will fail.

Once a snapshot is restored it is deleted from the list of snapshots. Therefore, if you want to retain that state, you should take an explicit snapshot after the restore operation.
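
Putting it together with the names used in this article, a typical restore sequence looks like this:

gluster volume stop vol1
gluster snapshot restore snap1
gluster volume start vol1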

Accessing Snapshots

As I mentioned before, a Gluster snapshot creates a read-only volume. This volume can be accessed via a FUSE mount; currently other protocols such as NFS and CIFS are not supported for accessing snapshots. Use the following command to mount the snapshot volume:

mount -t glusterfs <hostname>:/snaps/<snap-name>/<parent-volname> /mount_point

e.g.

mount -t glusterfs myhost:/snaps/snap1/vol1 /mnt/snapvol

Another way of accessing snapshots is via User Serviceable Snapshots, which I will explain later.
