The Gluster volume snapshot feature is based on thinly provisioned LVM logical volumes (thin LVs). Therefore, to make use of this feature the following guidelines have to be followed:
Given the above prerequisites, I will take you through an example of how to create bricks in such a setup and how to create a volume out of them. Refer to the LVM documentation for the details of LVM and its various options.
The above diagram gives an overview of LVM. A volume group (VG) is formed out of storage devices, and one or more thin pools can be carved out of this volume group. Once you have a thin pool, you can create one or more thinly provisioned logical volumes (LVs). All thin LVs are created with a virtual size, and this virtual size can even be greater than the total pool size itself. This gives the admin the flexibility to procure hardware only when there is demand; e.g. even though I have 1 TB of storage, I can create LVs of size 2 TB. The thin pool tracks the storage actually used by each LV as well as the overall pool usage, so you can add more storage as utilization grows.
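As a small sketch of this over-provisioning, the commands below (assuming a volume group named "mygroup" already exists and you have root privileges — the names here are illustrative, not from the example that follows) create a 1 GB thin pool, carve two 1 GB thin LVs out of it (2 GB of virtual capacity backed by 1 GB of real storage), and then monitor the actual pool usage:

```shell
# Assumes a volume group "mygroup" already exists; requires root.
# Create a 1 GB thin pool, then over-provision it with two 1 GB thin LVs.
lvcreate -L 1G -T mygroup/demopool
lvcreate -V 1G -T mygroup/demopool -n thin_a
lvcreate -V 1G -T mygroup/demopool -n thin_b

# Data% shows how much of each LV (and of the pool) is actually consumed,
# which tells you when it is time to grow the pool.
lvs -o lv_name,lv_size,data_percent,pool_lv mygroup
```

When `data_percent` of the pool approaches 100, the pool can be extended with `lvextend` without touching the thin LVs on top of it.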
Note: If you already have a storage device then skip this section and go to the pvcreate section. Loopback devices should only be used for testing purposes.
fallocate -l 2G dev1.img
mknod /dev/loop0 b 7 0
Once we have a loopback device we need to associate it with a file.
losetup /dev/loop0 dev1.img
pvcreate /dev/loop0
The above command initializes the storage disk or partition as a physical volume (PV) so that it can be used by LVM. Once we have the PVs ready, we can create the volume group (VG). You can create one VG per PV, or combine multiple PVs to form a single VG. In this example we are using a single PV to create the VG.
vgcreate mygroup /dev/loop0
Now, we have a VG named “mygroup”. The next step is to create a thin pool. You can allocate one or more thin pool inside a volume group. In this example we are creating a single thin pool inside the volume group.
lvcreate -L 2G -T mygroup/mythinpool
The above command will create a thin pool named “mythinpool” with 2 GB of storage. Once you have a thin pool you are ready to create thin volumes. Use the following command to create a thin volume.
lvcreate -V 1G -T mygroup/mythinpool -n thinv1
The above command will create a thin volume named “thinv1” of size 1 GB. Now, before creating a brick out of this volume, you need to create a file system on it. Ideally you should use XFS or ext4 as the file system. Use mkfs to create a valid file system; e.g. you can create a simple XFS file system using the following command.
mkfs.xfs /dev/mygroup/thinv1
Now, we need to mount these LVs to form the bricks. Use the mount command to do so.
mount /dev/mygroup/thinv1 /bricks/brick1
gluster volume create vol1 replica 2 myhost:/bricks/brick1/b1 anotherhost:/bricks/brick1/b1
gluster volume start vol1
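To confirm that the volume came up as expected, the standard Gluster volume commands can be used (these are general volume commands, not specific to snapshots):

```shell
# Show the configuration and brick layout of the new volume
gluster volume info vol1

# Show whether the brick processes are online
gluster volume status vol1
```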
This section provides details about various commands provided by Gluster to manage snapshots.
gluster snapshot create <snapname> <volname(s)> [description <description>] [force]
The following prerequisites need to be met before a snapshot can be taken:
e.g.
gluster snapshot create snap1 vol1
The above command will create a Gluster snapshot named “snap1” for volume “vol1”. This snapshot is a read-only Gluster volume. The bricks of this read-only volume are mounted under the /var/run/gluster/snaps/
folder as /var/run/gluster/snaps/<snap-volume-name>/brick<bricknumber>, e.g.
/var/run/gluster/snaps/ee1c2c74f70a4043a2bbba94362eaeb6/brick1
/var/run/gluster/snaps/ee1c2c74f70a4043a2bbba94362eaeb6/brick2
gluster snapshot list [volname]
This command is used to list all the snapshots present in the trusted storage pool, or for a specified volume.
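For instance, reusing the names from the earlier example:

```shell
# List every snapshot in the trusted storage pool
gluster snapshot list

# List only the snapshots of volume vol1
gluster snapshot list vol1
```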
gluster snapshot info [(snapname | volume <volname>)]
Shows information about all snapshots, or about the specified snapshot. The output includes the brick details, UUID, and snapshot volume status.
gluster snapshot status [(snapname | volume <volname>)]
Shows the running status of all snapshots, or of the specified snapshot. The output includes the brick details, LVM details, process details, etc.
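Using the snap1/vol1 names from the earlier example, both forms of the syntax look like this:

```shell
# Detailed information about one snapshot
gluster snapshot info snap1

# Running status of all snapshots of a given volume
gluster snapshot status volume vol1
```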
gluster snapshot delete <snapname>
This command will delete the specified snapshot.
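Continuing the earlier example, deleting the snapshot (and with it the underlying thin-LV snapshots on each brick) is a single command:

```shell
# Remove snap1; the backing LVM snapshots are cleaned up as well
gluster snapshot delete snap1
```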
Use the following commands to activate snapshot.
gluster snapshot activate <snapname> [force]
If some of the bricks of the snapshot volume are down, use the force option to start them.
gluster snapshot deactivate <snapname>
By default the created snapshot is in the active state. The above command will deactivate an active snapshot.
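With the snap1 name from the earlier example, a deactivate/activate round trip looks like this:

```shell
# Take the snapshot volume offline (its brick processes stop)
gluster snapshot deactivate snap1

# Bring it back online so it can be mounted again
gluster snapshot activate snap1
```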
The configurable parameters for snapshots can be displayed with:
gluster snapshot config [vol-name]
To change the existing configuration values, run the following command. If vol-name is provided then config value of that volume is changed, else it will set/change the trusted storage pool limit.
gluster snapshot config [volname] ([snap-max-hard-limit <count>] [snap-max-soft-limit <percent>]) | ([auto-delete <enable|disable>])
Volume specific limit cannot cross the trusted storage pool limit. If a volume specific limit is not provided then the trusted storage pool limit will be considered.
snap-max-hard-limit: Maximum hard limit for the system or the specified volume.
snap-max-soft-limit: Soft limit mark for the system.
auto-delete: This enables or disables the auto-delete feature. By default auto-delete is disabled.
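Putting the syntax above together with the vol1 volume from the earlier example (the limit value of 10 is just an illustration):

```shell
# Display the current snapshot configuration for vol1
gluster snapshot config vol1

# Allow at most 10 snapshots of vol1
gluster snapshot config vol1 snap-max-hard-limit 10

# Enable auto-delete for the whole trusted storage pool
gluster snapshot config auto-delete enable
```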
gluster snapshot restore <snapname>
This command restores the volume to the state of an already taken snapshot. Snapshot restore is an offline activity; therefore, if any volume which is part of the given snapshot is online, the restore operation will fail.
Once the snapshot is restored it will be deleted from the list of snapshots. Therefore, if you want to retain the snapshot you should take an explicit snapshot after the restore operation.
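Since restore is offline, a typical restore sequence for the vol1/snap1 example looks like this:

```shell
# Stop the volume first: restore fails while the volume is online
gluster volume stop vol1

# Roll the volume back to the snapshot state (snap1 is consumed by this)
gluster snapshot restore snap1

# Bring the restored volume back online
gluster volume start vol1
```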
As I mentioned before, a Gluster snapshot is a read-only volume. This volume can be accessed via a FUSE mount. Currently, other protocols such as NFS and CIFS are not supported. Use the following command to mount the snapshot volume.
mount -t glusterfs <hostname>:/snaps/<snap-name>/<parent-volname> /mount_point
e.g.
mount -t glusterfs myhost:/snaps/snap1/vol1 /mnt/snapvol
Another way of accessing snapshots is via User Serviceable Snapshots, which I will explain later.