all posts tagged Filesystems


by on August 30, 2016

Run Gluster systemd containers [without privileged mode] in Fedora/CentOS

Today we will discuss how to run Gluster systemd containers without 'privileged' mode!! Awesome, isn't it?

I owe this blog to a few people, the latest being twitter.com/dglushenok/status/740265552258682882.
Here are some details about my docker host setup:
[root@dhcp35-111 ~]# cat /etc/redhat-release
Fedora release 24 (Twenty Four)
[root@dhcp35-111 ~]# docker version
Client:
Version: 1.10.3
API version: 1.22
Package version: docker-1.10.3-21.git19b5791.fc24.x86_64
Go version: go1.6.2
Git commit: 19b5791/1.10.3
Built:
OS/Arch: linux/amd64
Server:
Version: 1.10.3
API version: 1.22
Package version: docker-1.10.3-21.git19b5791.fc24.x86_64
Go version: go1.6.2
Git commit: 19b5791/1.10.3
Built:
OS/Arch: linux/amd64
[root@dhcp35-111 ~]#

I have pulled the gluster/gluster-centos image from Docker Hub and kept it in my local docker image registry.

[root@dhcp35-111 ~]# docker images |grep gluster
docker.io/gluster/gluster-centos latest 759691b0beca 4 days ago 406.1 MB
gluster/gluster-centos experiment fd8cd51f47fb 2 weeks ago 351.2 MB
gluster/gluster-centos latest 9b46174d3366 3 weeks ago 351.1 MB
gluster/gluster-centos gluster_3_7_centos_7 5809addca906 4 weeks ago 351.1 MB

The beauty is that no extra steps need to be performed on the host system.

NOTE: We haven't passed the 'privileged' flag/option to the 'docker run' command below. Volume mounts such as '/etc/glusterfs', '/var/lib/glusterd', '/var/log/glusterfs', etc. keep the glusterfs metadata and logs persistent across container respawns.


[root@dhcp35-111 docker-host]# docker run -d --name gluster3 -v /etc/glusterfs:/etc/glusterfs:z -v /var/lib/glusterd:/var/lib/glusterd:z -v /var/log/glusterfs:/var/log/glusterfs:z -v /sys/fs/cgroup:/sys/fs/cgroup:ro gluster/gluster-centos
8b1dd6f0aa88197bdcd022802f7c0c16d642630a21b2b43accfa5ed8023c197a
[root@dhcp35-111 docker-host]#

As we now have the container id (8b1dd6f0aa88197bdcd022802f7c0c16d642630a21b2b43accfa5ed8023c197a), let's get inside the container and examine the service and its behavior.

[root@dhcp35-111 docker-host]# docker exec -ti 8b1dd6f0aa88197bdcd022802f7c0c16d642630a21b2b43accfa5ed8023c197a /bin/bash
[root@8b1dd6f0aa88 /]# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 122764 4688 ? Ss 13:34 0:00 /usr/sbin/init
root 22 0.0 0.0 36832 6348 ? Ss 13:34 0:00 /usr/lib/systemd/systemd-journald
root 23 0.0 0.0 118492 2744 ? Ss 13:34 0:00 /usr/sbin/lvmetad -f
root 29 0.0 0.0 24336 2884 ? Ss 13:34 0:00 /usr/sbin/crond -n
rpc 42 0.0 0.0 64920 3244 ? Ss 13:34 0:00 /sbin/rpcbind -w
root 44 0.0 0.2 430272 17300 ? Ssl 13:34 0:00 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
root 68 0.0 0.0 82572 6212 ? Ss 13:34 0:00 /usr/sbin/sshd -D
root 197 0.0 0.0 11788 2952 ? Ss 13:35 0:00 /bin/bash
root 219 0.0 0.0 47436 3360 ? R+ 13:44 0:00 ps aux
[root@8b1dd6f0aa88 /]#
[root@8b1dd6f0aa88 /]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2016-06-28 13:34:53 UTC; 27s ago
Process: 43 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 44 (glusterd)
CGroup: /system.slice/docker-8b1dd6f0aa88197bdcd022802f7c0c16d642630a21b2b43accfa5ed8023c197a.scope/system.slice/glusterd.service
└─44 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
Jun 28 13:34:51 8b1dd6f0aa88 systemd[1]: Starting GlusterFS, a clustered file-system server...
Jun 28 13:34:53 8b1dd6f0aa88 systemd[1]: Started GlusterFS, a clustered file-system server.
Jun 28 13:35:15 8b1dd6f0aa88 systemd[1]: Started GlusterFS, a clustered file-system server.
[root@8b1dd6f0aa88 /]#
[root@8b1dd6f0aa88 /]# glusterd --version
glusterfs 3.7.11 built on Apr 18 2016 13:20:46
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
[root@8b1dd6f0aa88 /]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@8b1dd6f0aa88 /]# rpm -qa |grep glusterfs
glusterfs-3.7.11-1.el7.x86_64
glusterfs-fuse-3.7.11-1.el7.x86_64
glusterfs-cli-3.7.11-1.el7.x86_64
glusterfs-libs-3.7.11-1.el7.x86_64
glusterfs-client-xlators-3.7.11-1.el7.x86_64
glusterfs-api-3.7.11-1.el7.x86_64
glusterfs-server-3.7.11-1.el7.x86_64
glusterfs-geo-replication-3.7.11-1.el7.x86_64
[root@8b1dd6f0aa88 /]#

Let’s examine this container from docker host and verify these containers are running without privileged mode.

[root@dhcp35-111 docker-host]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8b1dd6f0aa88 gluster/gluster-centos "/usr/sbin/init" 6 minutes ago Up 6 minutes 111/tcp, 245/tcp, 443/tcp, 2049/tcp, 2222/tcp, 6010-6012/tcp, 8080/tcp, 24007/tcp, 38465-38466/tcp, 38468-38469/tcp, 49152-49154/tcp, 49156-49162/tcp gluster3
[root@dhcp35-111 docker-host]# docker inspect 8b1dd6f0aa88|grep -i privil
"Privileged": false,
[root@dhcp35-111 docker-host]#

All is well, but what will be missing if you run these containers without 'privileged' mode? Not much! However, if you want to create gluster snapshots from inside the container, you may need to export '/dev/' to the container, and operations that create devices from within the container do need privileged mode.
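For that specific case, the run command would look roughly like the sketch below. This is a hypothetical example (the container name 'gluster-snap' is made up); compared to the earlier command, only the --privileged flag and the /dev bind mount are added, and they are only needed if you want to manage LVM devices/snapshots from inside the container:

docker run -d --name gluster-snap --privileged \
  -v /dev:/dev \
  -v /etc/glusterfs:/etc/glusterfs:z \
  -v /var/lib/glusterd:/var/lib/glusterd:z \
  -v /var/log/glusterfs:/var/log/glusterfs:z \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  gluster/gluster-centos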

by on August 26, 2016

Possible configurations of GlusterFS in Kubernetes/OpenShift setup

In previous blog posts we discussed how to use GlusterFS as persistent storage in Kubernetes and Openshift. In a nutshell, GlusterFS can be deployed/used in a Kubernetes/Openshift environment as:

*) Containerized GlusterFS (Pod)
*) GlusterFS as an Openshift Service and Endpoint (Service and Endpoint)
*) GlusterFS volume as a Persistent Volume (PV), using the GlusterFS volume plugin to bind this PV to a Persistent Volume Claim (PVC)
*) GlusterFS template to deploy GlusterFS pods in an Openshift environment

All the configuration files used to deploy GlusterFS can be found at github.com/humblec/glusterfs-kubernetes-openshift/ or github.com/gluster/glusterfs-kubernetes-openshift. Let's see how to use these files to deploy GlusterFS in Kubernetes and Openshift. We will start with deploying GlusterFS pods in an Openshift/Kubernetes environment.

Deploying GlusterFS Pod:
[Update] The pod file is renamed to gluster-pod.yaml in the mentioned repo. More details about Gluster containers can be found at http://www.slideshare.net/HumbleChirammal/gluster-containers
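For reference, a minimal pod definition along these lines might look like the following. This is a hypothetical sketch reconstructed from the `oc describe` output in Step 2 below (the image, label and hostPath brick); it is not the exact content of the pod file in the repo, and the mountPath inside the container is an assumption:

apiVersion: v1
kind: Pod
metadata:
  name: gluster-1
  labels:
    name: gluster-1
spec:
  containers:
  - name: glusterfs
    image: gluster/gluster-centos
    volumeMounts:
    - name: brickpath
      mountPath: /mnt/brick1    # assumed path inside the container
  volumes:
  - name: brickpath
    hostPath:
      path: /mnt/brick1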
GlusterFS pods can be deployed in Kubernetes/Openshift so that Gluster nodes run in containers and can provide persistent storage for the Openshift/Kubernetes setup. The example files in this repo are used for this demo.

Step 1: Create the GlusterFS pod

[root@atomic-node2 gluster_pod]# oc create -f gluster-1.yaml

Step 2: Get details about the GlusterFS pod.

[root@atomic-node2 gluster_pod]# oc describe pod gluster-1
Name: gluster-1
Namespace: default
Image(s): gluster/gluster-centos
Node: atomic-node1/10.70.43.174
Start Time: Tue, 17 May 2016 10:19:17 +0530
Labels: name=gluster-1
Status: Running
Reason:
Message:
IP: 10.70.43.174
Replication Controllers:
Containers:
  glusterfs:
    Container ID: docker://ff8f4af700d725dfe0e08939ec011c34ddf9dedc7204e0ced1cc355a56150742
    Image: gluster/gluster-centos
    Image ID: docker://033de9c44a8aabde55ce8a2b751ccf5bc345fdb534ea30e79a8fa70b82dc7761
    QoS Tier:
      cpu: BestEffort
      memory: BestEffort
    State: Running
      Started: Tue, 17 May 2016 10:20:35 +0530
    Ready: True
    Restart Count: 0
    Environment Variables:
Conditions:
  Type Status
  Ready True
Volumes:
  brickpath:
    Type: HostPath (bare host directory volume)
    Path: /mnt/brick1
  default-token-72d89:
    Type: Secret (a secret that should populate this volume)
    SecretName: default-token-72d89
Events:
  FirstSeen LastSeen Count From SubobjectPath Reason Message
  ───────── ──────── ───── ──── ───────────── ────── ───────
  1m 1m 1 {scheduler } Scheduled Successfully assigned gluster-1 to atomic-node1
  1m 1m 1 {kubelet atomic-node1} implicitly required container POD Pulled Container image "openshift3/ose-pod:v3.1.1.6" already present on machine
  1m 1m 1 {kubelet atomic-node1} implicitly required container POD Created Created with docker id f55ce55e6ea3
  1m 1m 1 {kubelet atomic-node1} implicitly required container POD Started Started with docker id f55ce55e6ea3
  1m 1m 1 {kubelet atomic-node1} spec.containers{glusterfs} Pulling pulling image "gluster/gluster-centos"
  8s 8s 1 {kubelet atomic-node1} spec.containers{glusterfs} Pulled Successfully pulled image "gluster/gluster-centos"
  8s 8s 1 {kubelet atomic-node1} spec.containers{glusterfs} Created Created with docker id ff8f4af700d7
  8s 8s 1 {kubelet atomic-node1} spec.containers{glusterfs} Started Started with docker id ff8f4af700d7

From the above events, you can see it pulled the `gluster/gluster-centos` container image and deployed a container from it.

[root@atomic-node2 gluster_pod]# oc get pods
NAME READY STATUS RESTARTS AGE
gluster-1 1/1 Running 0 1m

Examine the container and make sure it has a running GlusterFS daemon.

[root@atomic-node2 gluster_pod]# oc exec -ti gluster-1 /bin/bash

Examine the processes running in this container and the `glusterd` service information.

[root@atomic-node1 /]# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.4 0.0 40780 2920 ? Ss 04:50 0:00 /usr/sbin/init
root 20 0.3 0.0 36816 4272 ? Ss 04:50 0:00 /usr/lib/syste
root 21 0.0 0.0 118476 1332 ? Ss 04:50 0:00 /usr/sbin/lvme
root 37 0.0 0.0 101344 1228 ? Ssl 04:50 0:00 /usr/sbin/gssp
rpc 44 0.1 0.0 64904 1052 ? Ss 04:50 0:00 /sbin/rpcbind
root 209 0.1 0.1 364716 13444 ? Ssl 04:50 0:00 /usr/sbin/glus
root 341 1.1 0.0 13368 1964 ? Ss 04:51 0:00 /bin/bash
root 354 0.0 0.0 49020 1820 ? R+ 04:51 0:00 ps aux

[root@atomic-node1 /]# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2016-05-17 04:50:41 UTC; 35s ago
Process: 208 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 209 (glusterd)
CGroup: /system.slice/docker-ff8f4af700d725dfe0e08939ec011c34ddf9dedc7204e0ced1cc355a56150742.scope/system.slice/glusterd.service
└─209 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO...
‣ 209 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO...
May 17 04:50:36 atomic-node1 systemd[1]: Starting Gluste...
May 17 04:50:41 atomic-node1 systemd[1]: Started Gluster...
Hint: Some lines were ellipsized, use -l to show in full.

Let's fetch some more details about GlusterFS in this container.

[root@atomic-node1 /]# gluster --version
glusterfs 3.7.9 built on Mar 20 2016 03:19:49
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@atomic-node1 /]#
[root@atomic-node1 /]# mount |grep mnt
/dev/mapper/atomic-node1-root on /mnt/brick1 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

This container is built on top of the CentOS base image, as shown below.

[root@atomic-node1 /]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@atomic-node1 /]#

In this article we discussed how to run GlusterFS as a pod in a Kubernetes or Openshift setup. [Part 2] covers `how to use GlusterFS as a service, Persistent Volume for a Persistent Volume Claim`. [Part 3] covers `how to use GlusterFS template to deploy GlusterFS pods in an Openshift/kubernetes setup`.

by on August 24, 2016

[Coming Soon] Dynamic Provisioning of GlusterFS volumes in Kubernetes/Openshift!!

In this context I am talking about the dynamic provisioning capability of the 'glusterfs' plugin in Kubernetes/Openshift. I have submitted a Pull Request to Kubernetes to add this functionality for GlusterFS. At present, there are no network storage provisioners in Kubernetes, even though there are cloud providers. The idea here is to make the glusterfs plugin capable of provisioning volumes on demand from Kubernetes/Openshift. Cool, isn't it? Indeed, this is a nice feature to have. With it, an OSE user requests some space (for example 20G), and the glusterfs plugin takes this request, creates a 20G volume and binds it to the claim. The plugin can use any REST service, but the example patch is based on 'heketi'. Here is the workflow. Start your kubernetes controller manager with the following options:

 ...kube controller-manager --v=3
 --service-account-private-key-file=/tmp/kube-serviceaccount.key
 --root-ca-file=/var/run/kubernetes/apiserver.crt --enable-hostpath-provisioner=false
 --enable-network-storage-provisioner=true --storage-config=/tmp --net-provider=glusterfs
 --pvclaimbinder-sync-period=15s --cloud-provider= --master=127.0.0.1:8080

Create a file called `gluster.json` in the `/tmp` directory. The important fields in this config file are 'endpoint' and 'resturl'. The endpoint has to be defined and must match your setup. The `resturl` is set to the REST service which takes the request and creates a gluster volume in the backend. As mentioned earlier, I am using `heketi` for this.

 [hchiramm@dhcp35-111 tmp]$ cat gluster.json
 {
   "endpoint": "glusterfs-cluster",
   "resturl": "http://127.0.0.1:8081",
   "restauthenabled": false,
   "restuser": "",
   "restuserkey": ""
 }
 [hchiramm@dhcp35-111 tmp]$

We have to define an ENDPOINT and a SERVICE. Below are the example configuration files. ENDPOINT: "ip" has to be filled in with the IP of a node in your gluster trusted pool.


[hchiramm@dhcp35-111 ]$ cat glusterfs-endpoint.json
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [
        {
          "ip": "10.36.4.112"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    },
    {
      "addresses": [
        {
          "ip": "10.36.4.112"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    }
  ]
}

SERVICE: Please note that the Service name matches the ENDPOINT name.


[hchiramm@dhcp35-111 ]$ cat gluster-service.json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [
      {"port": 1}
    ]
  }
}
[hchiramm@dhcp35-111 ]$

Finally, we have a Persistent Volume Claim file as shown below. NOTE: The requested size of the volume is '20Gi':


[hchiramm@dhcp35-111 ]$ cat gluster-pvc.json
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterc",
    "annotations": {
      "volume.alpha.kubernetes.io/storage-class": "glusterfs"
    }
  },
  "spec": {
    "accessModes": [
      "ReadOnlyMany"
    ],
    "resources": {
      "requests": {
        "storage": "20Gi"
      }
    }
  }
}
[hchiramm@dhcp35-111 ]$

Let's start defining the endpoint, service and PVC.


[hchiramm@dhcp35-111 ]$ ./kubectl create -f glusterfs-endpoint.json
endpoints "glusterfs-cluster" created
[hchiramm@dhcp35-111 ]$ ./kubectl create -f gluster-service.json
service "glusterfs-cluster" created
[hchiramm@dhcp35-111 ]$ ./kubectl get ep,service
NAME ENDPOINTS AGE
ep/glusterfs-cluster 10.36.6.105:1 14s
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/glusterfs-cluster 10.0.0.10 1/TCP 9s
svc/kubernetes 10.0.0.1 443/TCP 13m
[hchiramm@dhcp35-111 ]$ ./kubectl get pv,pvc
[hchiramm@dhcp35-111 ]$

Now, let's request a claim!

[hchiramm@dhcp35-111 ]$ ./kubectl create -f glusterfs-pvc.json
persistentvolumeclaim "glusterc" created
[hchiramm@dhcp35-111 ]$ ./kubectl get pv,pvc
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
pv/pvc-39ebcdc5-442b-11e6-8dfa-54ee7551fd0c  20Gi ROX  Bound  default/glusterc 2s
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pvc/glusterc Bound pvc-39ebcdc5-442b-11e6-8dfa-54ee7551fd0c 0 3s
[hchiramm@dhcp35-111 ]$

Awesome! Based on the request, it created a PV and bound it to the PV Claim!


[hchiramm@dhcp35-111 ]$ ./kubectl describe pv pvc-39ebcdc5-442b-11e6-8dfa-54ee7551fd0c
Name: pvc-39ebcdc5-442b-11e6-8dfa-54ee7551fd0c
Labels:
Status: Bound
Claim: default/glusterc
Reclaim Policy: Delete
Access Modes: ROX
Capacity: 20Gi
Message:
Source:
Type: Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
EndpointsName: glusterfs-cluster
 Path: vol_038b56756f4e3ab4b07a87494097941c
ReadOnly: false
No events.
[hchiramm@dhcp35-111 ]$
 

Verify the volume exists in the backend:

 [root@ ~]# heketi-cli volume list |grep 038b56756f4e3ab4b07a87494097941c
 038b56756f4e3ab4b07a87494097941c
 [root@ ~]#

Let's delete the PV claim:


[hchiramm@dhcp35-111 ]$ ./kubectl delete pvc glusterc
persistentvolumeclaim "glusterc" deleted
[hchiramm@dhcp35-111 ]$ ./kubectl get pv,pvc
[hchiramm@dhcp35-111 ]$

It got deleted! Verify from the backend:


 [root@ ~]# heketi-cli volume list |grep 038b56756f4e3ab4b07a87494097941c
 [root@ ~]# 

We can use the volume in app pods by referring to the claim name. Hope this is a nice feature to have!
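For example, a pod spec could consume the dynamically provisioned volume by referencing the claim in its volumes section. Below is a hypothetical sketch: the pod name, container image and mount path are made up for illustration; only the claimName "glusterc" comes from the claim created above.

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "gluster-app"
  },
  "spec": {
    "containers": [
      {
        "name": "app",
        "image": "nginx",
        "volumeMounts": [
          {
            "name": "glustervol",
            "mountPath": "/usr/share/nginx/html"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "glustervol",
        "persistentVolumeClaim": {
          "claimName": "glusterc"
        }
      }
    ]
  }
}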

Please let me know if you have any comments/suggestions.

Also, as mentioned earlier, the patch (https://github.com/kubernetes/kubernetes/pull/30888) is undergoing review upstream, and hopefully it will make it into a Kubernetes release soon. I will provide an update here as soon as it is available upstream.

by on April 10, 2014

Play with libgfapi and its python bindings..

What is libgfapi ?


User-space library for accessing data in GlusterFS
Filesystem-like API
Runs in application process
no FUSE, no copies, no context switches
…but same volfiles, translators, etc.
Could be used for Apache/nginx modules, MPI I/O (maybe), Ganesha, etc. ad infinitum
BTW it’s usable from Python too :)

Yes, I copied it from rhsummit.files.wordpress.com/2013/06/darcy_th_1040_glusterfs.pdf

libgfapi improves gluster performance by avoiding the 'fuse' layer. It is a different route to access glusterfs data: libgfapi sits in the application layer, and different language bindings are available to access it.

www.gluster.org/community/documentation/index.php/Language_Bindings

In this article, I will introduce the python bindings of libgfapi.

The libgfapi python binding is available on GitHub, with a mirror on the Gluster Forge:

github.com/gluster/libgfapi-python

1) First of all, clone the git repo:

    $ git clone https://github.com/gluster/libgfapi-python.git
    $ cd libgfapi-python
    

2) Then run the setup script:

    $ sudo python setup.py install
    

Once it's done, you are almost done with the dev environment :)
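As a quick (hypothetical) smoke test, you can check that the bindings import cleanly:

    $ python -c "from glusterfs import gfapi"

If that exits without an error, the bindings are installed.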

Now it's really easy to use the functions provided by libgfapi. Let me mention some of the gluster functions available through the python bindings:


glfs_discard(self.fd, offset, len)
glfs_dup(self.fd)
glfs_fallocate(self.fd, mode, offset, len)
glfs_fchown(self.fd, uid, gid)
glfs_fdatasync(self.fd)
glfs_fstat(self.fd, ctypes.byref(s))
glfs_fsync(self.fd)
glfs_read(self.fd, rbuf, buflen, flags)
glfs_write(self.fd, buf, len(buf))
glfs_closedir(self.fd)
glfs_close(self.fd)
glfs_readdir_r(self.fd, ctypes.byref(entry),
glfs_new(volid)
glfs_set_volfile_server(self.fs, proto, host, port)
glfs_fini(self.fs)
glfs_set_logging(self.fs, path, level)
glfs_chown(self.fs, path, uid, gid)
glfs_getxattr(self.fs, path, key, buf, maxlen)
glfs_listxattr(self.fs, path, buf, 512)
glfs_lstat(self.fs, path, ctypes.byref(s))
glfs_mkdir(self.fs, path, mode)
glfs_creat(self.fs, path, flags, mode)
glfs_open(self.fs, path, flags)
glfs_opendir(self.fs, path)
glfs_removexattr(self.fs, path, key)
glfs_rename(self.fs, opath, npath)
glfs_rmdir(self.fs, path)
glfs_setxattr(self.fs, path, key, value, vlen, 0)
glfs_stat(self.fs, path, ctypes.byref(s))
glfs_statvfs(self.fs, path, ctypes.byref(s))  --------------> [1]
glfs_symlink(self.fs, source, link_name)
glfs_unlink(self.fs, path)

The corresponding Python methods available through the bindings are:

    def close(self):
    def discard(self, offset, len):
    def dup(self):
    def fallocate(self, mode, offset, len):
    def fchown(self, uid, gid):
    def fdatasync(self):
    def fstat(self):
    def fsync(self):
    def lseek(self, pos, how):
    def read(self, buflen, flags=0):
    def write(self, data):
    def next(self):
    def set_logging(self, path, level):
    def mount(self):
    def chown(self, path, uid, gid):
    def exists(self, path):
    def getatime(self, path):
    def getctime(self, path):
    def getmtime(self, path):
    def getsize(self, filename):
    def getxattr(self, path, key, maxlen):
    def isdir(self, path):
    def isfile(self, path):
    def islink(self, path):
    def listdir(self, path):
    def listxattr(self, path):
    def lstat(self, path):
    def makedirs(self, name, mode):
    def mkdir(self, path, mode):
    def open(self, path, flags, mode=0777):
    def opendir(self, path):
    def removexattr(self, path, key):
    def rename(self, opath, npath):
    def rmdir(self, path):
    def rmtree(self, path, ignore_errors=False, onerror=None):
    def setxattr(self, path, key, value, vlen):
    def stat(self, path):
    def statvfs(self, path): -------------------------------->[1]
    def symlink(self, source, link_name):
    def unlink(self, path):
    def walk(self, top, topdown=True, onerror=None, followlinks=False):

[1] The patch (review.gluster.org/#/c/7430/) for "statvfs" is not merged as of now. :)

I have a gluster setup where I created a distributed volume called "vol2":


Volume Name: vol2
Type: Distribute
Volume ID: d355c575-d345-4e54-a7f1-d77b1bfebaf9
Status: Stopped
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.70.43.152:/home/rhs2-1
Options Reconfigured:
server.allow-insecure: on

Let's start accessing this volume using Python. To use the gfapi binding, you need to import gfapi as shown below:

>>> from glusterfs import gfapi

Once it’s done, you access the volume with the mount() method like this:


>>> myVol = gfapi.Volume("10.X.X.152","vol2")
>>> myVol_init = myVol.mount()
>>> myVol_init
0
>>> 

"10.X.X.152" is my gluster server and "vol2" is the volume name.

The mount() method basically initialises the connection to the volume.

These are the methods available for the Volume object:

>>> dir(myVol)
['__class__', '__del__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_api', 'chown', 'exists', 'fs', 'getsize', 'getxattr', 'isdir', 'isfile', 'islink', 'listdir', 'listxattr', 'lstat', 'makedirs', 'mkdir', 'mount', 'open', 'opendir', 'removexattr', 'rename', 'rmdir', 'rmtree', 'set_logging', 'setxattr', 'stat', 'statvfs', 'symlink', 'unlink', 'walk']
>>> 

Let's create some entries in this volume and check further:


[root@ ~]# mount -t glusterfs XX.XX.humblec.com:/vol2 /hum
[root@ ~]# cd /hum/
[root@ hum]# dd if=/dev/random of=file1 bs=1M count=5
0+5 records in
0+5 records out
54 bytes (54 B) copied, 11.6826 s, 0.0 kB/s
[root@n hum]# 

This was meant to create a 5M file called "file1" inside the "vol2" volume (as the dd output above shows, /dev/random delivered only 54 bytes, so the file is actually much smaller).

To list the files and directories inside the volume, we use the listdir() method:

>>> myVol.listdir("/")
['file1']

Running a “stat” on the file, from the server, shows this information:


[root@ hum]# stat file1
  File: `file1'
  Size: 54        	Blocks: 1          IO Block: 131072 regular file
Device: 1fh/31d	Inode: 9316719741945628140  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2014-04-09 19:16:10.209995591 +0530
Modify: 2014-04-09 18:05:00.719006150 +0530
Change: 2014-04-09 18:05:00.719006150 +0530
[root@ hum]# 

Let's prove the Python binding works :)

>>> myVol.stat("file1").st_ino
9316719741945628140L
>>> myVol.stat("file1").st_size
54L

If I want to list the extended attributes, I can try something like:

>>> myVol.listxattr("/")
['trusted.glusterfs.volume-id']
>>> 

Let's create a directory called “humble”:

>>> myVol.mkdir("humble/", 0775)
0
>>> 

Checking on the server using ls, it should be there:

[root@ hum]# ls
file1  humble
[root@ hum]# 

Success! For fun, some 'statvfs' information can be displayed using these methods:

>>> myVol.statvfs("/").f_bavail
1060674L

>>> myVol.statvfs("/").f_bfree
1133685L
>>> myVol.statvfs("/").f_files
365760L
>>> 
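The bindings can also create and write files directly, without going through a FUSE mount. Here is a minimal sketch, assuming the Volume.open() call listed earlier returns a File object exposing the write()/close() methods from the method list; the file name "file2" is just an example:

    import os
    from glusterfs import gfapi

    vol = gfapi.Volume("10.X.X.152", "vol2")
    vol.mount()

    # create "file2" on the volume and write a small string through libgfapi
    f = vol.open("file2", os.O_CREAT | os.O_WRONLY, 0644)
    f.write("hello from libgfapi")
    f.close()

    # verify it using the same bindings
    print vol.listdir("/")
    print vol.getsize("file2")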

If you want to mount or access a gluster volume as a non-root user, you need to follow the steps below. By default, gluster allows client connections only from privileged ports; to enable connections from unprivileged ports, do the following.

1. Turn on the allow-insecure option for the volume:

       gluster volume set <volume_name> allow-insecure on

2. Edit /etc/glusterfs/glusterd.vol, adding the line:

       option rpc-auth-allow-insecure on

3. Make sure the non-root user has the appropriate permissions on the directories being accessed

4. Stop and start the volume so the changes take effect (see the sketch below)
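Putting it together for the "vol2" volume used above, the sequence would look roughly like this. This is a sketch, not a verified transcript; it assumes the volume option is set via its full name (server.allow-insecure, as shown in the volume info earlier) and that glusterd is restarted to pick up the glusterd.vol change:

    gluster volume set vol2 server.allow-insecure on
    # add "option rpc-auth-allow-insecure on" to /etc/glusterfs/glusterd.vol, then:
    systemctl restart glusterd
    gluster volume stop vol2
    gluster volume start vol2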