all posts tagged Docker


August 30, 2016

Run Gluster systemd containers [without privileged mode] in Fedora/CentOS

Today we will discuss how to run Gluster systemd containers without ‘privileged’ mode! Awesome, isn’t it?

I owe this blog to a few people, the latest being twitter.com/dglushenok/status/740265552258682882.
Here are some details about my Docker host setup:
[root@dhcp35-111 ~]# cat /etc/redhat-release
Fedora release 24 (Twenty Four)
[root@dhcp35-111 ~]# docker version
Client:
Version: 1.10.3
API version: 1.22
Package version: docker-1.10.3-21.git19b5791.fc24.x86_64
Go version: go1.6.2
Git commit: 19b5791/1.10.3
Built:
OS/Arch: linux/amd64
Server:
Version: 1.10.3
API version: 1.22
Package version: docker-1.10.3-21.git19b5791.fc24.x86_64
Go version: go1.6.2
Git commit: 19b5791/1.10.3
Built:
OS/Arch: linux/amd64
[root@dhcp35-111 ~]#

I have pulled the gluster/gluster-centos image from Docker Hub and kept it in my local Docker image store.

[root@dhcp35-111 ~]# docker images |grep gluster
docker.io/gluster/gluster-centos latest 759691b0beca 4 days ago 406.1 MB
gluster/gluster-centos experiment fd8cd51f47fb 2 weeks ago 351.2 MB
gluster/gluster-centos latest 9b46174d3366 3 weeks ago 351.1 MB
gluster/gluster-centos gluster_3_7_centos_7 5809addca906 4 weeks ago 351.1 MB

The beauty is that no extra steps need to be performed on the host system.

NOTE: We haven’t passed the ‘privileged’ flag/option to the ‘docker run’ command below. The volume mounts such as ‘/etc/glusterfs’, ‘/var/lib/glusterd’, and ‘/var/log/glusterfs’ are there so that the GlusterFS metadata and logs stay persistent across container respawns.


[root@dhcp35-111 docker-host]# docker run -d --name gluster3 -v /etc/glusterfs:/etc/glusterfs:z -v /var/lib/glusterd:/var/lib/glusterd:z -v /var/log/glusterfs:/var/log/glusterfs:z -v /sys/fs/cgroup:/sys/fs/cgroup:ro gluster/gluster-centos
8b1dd6f0aa88197bdcd022802f7c0c16d642630a21b2b43accfa5ed8023c197a
[root@dhcp35-111 docker-host]#

Now that we have the container ID (8b1dd6f0aa88197bdcd022802f7c0c16d642630a21b2b43accfa5ed8023c197a), let’s get inside the container and examine the service and its behavior.

[root@dhcp35-111 docker-host]# docker exec -ti 8b1dd6f0aa88197bdcd022802f7c0c16d642630a21b2b43accfa5ed8023c197a /bin/bash
[root@8b1dd6f0aa88 /]# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 122764 4688 ? Ss 13:34 0:00 /usr/sbin/init
root 22 0.0 0.0 36832 6348 ? Ss 13:34 0:00 /usr/lib/systemd/systemd-journald
root 23 0.0 0.0 118492 2744 ? Ss 13:34 0:00 /usr/sbin/lvmetad -f
root 29 0.0 0.0 24336 2884 ? Ss 13:34 0:00 /usr/sbin/crond -n
rpc 42 0.0 0.0 64920 3244 ? Ss 13:34 0:00 /sbin/rpcbind -w
root 44 0.0 0.2 430272 17300 ? Ssl 13:34 0:00 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
root 68 0.0 0.0 82572 6212 ? Ss 13:34 0:00 /usr/sbin/sshd -D
root 197 0.0 0.0 11788 2952 ? Ss 13:35 0:00 /bin/bash
root 219 0.0 0.0 47436 3360 ? R+ 13:44 0:00 ps aux
[root@8b1dd6f0aa88 /]#
[root@8b1dd6f0aa88 /]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2016-06-28 13:34:53 UTC; 27s ago
Process: 43 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 44 (glusterd)
CGroup: /system.slice/docker-8b1dd6f0aa88197bdcd022802f7c0c16d642630a21b2b43accfa5ed8023c197a.scope/system.slice/glusterd.service
└─44 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
Jun 28 13:34:51 8b1dd6f0aa88 systemd[1]: Starting GlusterFS, a clustered file-system server...
Jun 28 13:34:53 8b1dd6f0aa88 systemd[1]: Started GlusterFS, a clustered file-system server.
Jun 28 13:35:15 8b1dd6f0aa88 systemd[1]: Started GlusterFS, a clustered file-system server.
[root@8b1dd6f0aa88 /]#
[root@8b1dd6f0aa88 /]# glusterd --version
glusterfs 3.7.11 built on Apr 18 2016 13:20:46
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
[root@8b1dd6f0aa88 /]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@8b1dd6f0aa88 /]# rpm -qa |grep glusterfs
glusterfs-3.7.11-1.el7.x86_64
glusterfs-fuse-3.7.11-1.el7.x86_64
glusterfs-cli-3.7.11-1.el7.x86_64
glusterfs-libs-3.7.11-1.el7.x86_64
glusterfs-client-xlators-3.7.11-1.el7.x86_64
glusterfs-api-3.7.11-1.el7.x86_64
glusterfs-server-3.7.11-1.el7.x86_64
glusterfs-geo-replication-3.7.11-1.el7.x86_64
[root@8b1dd6f0aa88 /]#

Let’s examine this container from the Docker host and verify that it is running without privileged mode.

[root@dhcp35-111 docker-host]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8b1dd6f0aa88 gluster/gluster-centos "/usr/sbin/init" 6 minutes ago Up 6 minutes 111/tcp, 245/tcp, 443/tcp, 2049/tcp, 2222/tcp, 6010-6012/tcp, 8080/tcp, 24007/tcp, 38465-38466/tcp, 38468-38469/tcp, 49152-49154/tcp, 49156-49162/tcp gluster3
[root@dhcp35-111 docker-host]# docker inspect 8b1dd6f0aa88|grep -i privil
"Privileged": false,
[root@dhcp35-111 docker-host]#

All is well, but what will you be missing if you run these containers without ‘privileged’ mode? Not much! However, if you want to create Gluster snapshots from the container, you need to export ‘/dev/’ to the container, and the operations that create devices from inside the container do need privileged mode.
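For anyone who does need snapshot support, here is a hedged sketch of what that run command might look like (same bind mounts as before, with /dev exported and privileged mode enabled; this is an assumption, not something exercised in this post):

# Only needed if you plan to create gluster snapshots (device operations) from inside the container:
docker run -d --name gluster-snap --privileged \
    -v /dev:/dev \
    -v /etc/glusterfs:/etc/glusterfs:z \
    -v /var/lib/glusterd:/var/lib/glusterd:z \
    -v /var/log/glusterfs:/var/log/glusterfs:z \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
    gluster/gluster-centos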

March 29, 2016

Persistent Volume and Claim in OpenShift and Kubernetes using GlusterFS Volume Plugin

OpenShift is a platform as a service product from Red Hat. The software that runs the service is open-sourced under the name OpenShift Origin, and is available on GitHub.

OpenShift v3 is a layered system designed to expose underlying Docker and Kubernetes concepts as accurately as possible, with a focus on easy composition of applications by a developer. For example, install Ruby, push code, and add MySQL.

Docker is an open platform for developing, shipping, and running applications. With Docker you can separate your applications from your infrastructure and treat your infrastructure like a managed application. Docker does this by combining kernel containerization features with workflows and tooling that help you manage and deploy your applications. Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. Available on GitHub.

Kubernetes is an open-source system for automating deployment, operations, and scaling of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon a decade and a half of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community. Available on GitHub.

GlusterFS is a scalable network filesystem. Using common off-the-shelf hardware, you can create large, distributed storage solutions for media streaming, data analysis, and other data- and bandwidth-intensive tasks. GlusterFS is free and open source software. Available on GitHub.

Hopefully you know a little bit about all of the above technologies; now we jump right into our topic, which is Persistent Volumes and Persistent Volume Claims in Kubernetes and OpenShift v3 using GlusterFS volumes. So what is a Persistent Volume? Why do we need it? How does it work with the GlusterFS volume plugin?

In Kubernetes, managing storage is a distinct problem from managing compute. The PersistentVolume subsystem provides an API for users and administrators that abstracts the details of how storage is provided from how it is consumed. To do this, Kubernetes introduces two API resources: PersistentVolume and PersistentVolumeClaim.

A PersistentVolume (PV) is a piece of networked storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod: pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes (e.g., mounted once read/write or many times read-only).

In simple words, containers in a Kubernetes cluster need storage that stays persistent even if the container goes down or is no longer needed. So the Kubernetes administrator creates the storage (GlusterFS storage, in this case) and creates a PV for it. When a developer (Kubernetes cluster user) needs a persistent volume in a container, they create a Persistent Volume Claim. The claim describes the storage the developer needs for the pods. From the list of Persistent Volumes, the best match for the claim is selected and bound to it. The developer can then use the claim in the pods.


Prerequisites:

1) Have a Kubernetes or OpenShift cluster. My setup is one master and three nodes.

Note: you can use kubectl in place of oc; oc is the OpenShift client, essentially a wrapper around kubectl.


#oc get nodes
NAME LABELS STATUS AGE
dhcp42-144.example.com kubernetes.io/hostname=dhcp42-144.example.com,name=node3 Ready 15d
dhcp42-235.example.com kubernetes.io/hostname=dhcp42-235.example.com,name=node1 Ready 15d
dhcp43-174.example.com kubernetes.io/hostname=dhcp43-174.example.com,name=node2 Ready 15d
dhcp43-183.example.com kubernetes.io/hostname=dhcp43-183.example.com,name=master Ready,SchedulingDisabled 15d

2) Have a GlusterFS cluster set up; create a GlusterFS volume and start it.
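For reference, a volume like the one in the status output below could be created and started with something along these lines (a sketch only; the brick paths match the status output, but the replica layout is an assumption):

# On one of the GlusterFS servers (two bricks, replica 2 assumed):
gluster volume create gluster_vol replica 2 \
    170.22.42.84:/gluster_brick 170.22.43.77:/gluster_brick force
gluster volume start gluster_vol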

# gluster v status
Status of volume: gluster_vol
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 170.22.42.84:/gluster_brick 49152 0 Y 8771
Brick 170.22.43.77:/gluster_brick 49152 0 Y 7443
NFS Server on localhost 2049 0 Y 7463
NFS Server on 170.22.42.84 2049 0 Y 8792
Task Status of Volume gluster_vol
------------------------------------------------------------------------------
There are no active volume tasks

3) All nodes in the Kubernetes cluster must have the GlusterFS client package installed.
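On Fedora/CentOS nodes that usually means something like the following (a sketch; exact package names can vary by distribution and GlusterFS version):

# Run on every node in the Kubernetes cluster:
yum install -y glusterfs glusterfs-fuse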

Now we have the prerequisites o/ …

On the Kubernetes master, the administrator has to write the required YAML files, which will be given as input to the cluster.

There are three files to be written by the administrator and one by the developer.

Service
The Service keeps the endpoint persistent and reachable.
Endpoint
The Endpoint points to the location of the GlusterFS cluster (its server addresses).
PV
The Persistent Volume, in which the administrator defines the Gluster volume name, the capacity of the volume, and the access mode.
PVC
The Persistent Volume Claim, in which the developer defines the type and size of storage needed.

STEP 1: Create a service for the gluster volume.


# cat gluster_pod/gluster-service.yaml
apiVersion: "v1"
kind: "Service"
metadata:
  name: "glusterfs-cluster"
spec:
  ports:
  - port: 1
# oc create -f gluster_pod/gluster-service.yaml
service "glusterfs-cluster" created

Verify:

# oc get service
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
glusterfs-cluster 172.30.251.13 1/TCP 9m
kubernetes 172.30.0.1 443/TCP,53/UDP,53/TCP 16d

STEP 2: Create an Endpoint for the gluster service

# cat gluster_pod/gluster-endpoints.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 170.22.43.77
  ports:
  - port: 1

The IP here is the address of the GlusterFS server.


# oc create -f gluster_pod/gluster-endpoints.yaml
endpoints "glusterfs-cluster" created
# oc get endpoints
NAME ENDPOINTS AGE
glusterfs-cluster 170.22.43.77:1 3m
kubernetes 170.22.43.183:8053,170.22.43.183:8443,170.22.43.183:8053 16d

STEP 3: Create a PV for the gluster volume.

# cat gluster_pod/gluster-pv.yaml
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "gluster-default-volume"
spec:
  capacity:
    storage: "8Gi"
  accessModes:
  - "ReadWriteMany"
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "gluster_vol"
    readOnly: false
  persistentVolumeReclaimPolicy: "Recycle"

Note: ‘path’ here is the Gluster volume name, ‘accessModes’ specifies the way the volume can be accessed, and ‘capacity’ is the storage size of the GlusterFS volume.


# oc create -f gluster_pod/gluster-pv.yaml
persistentvolume "gluster-default-volume" created
# oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
gluster-default-volume 8Gi RWX Available 36s

STEP 4: Create a PVC for the gluster PV.


# cat gluster_pod/gluster-pvc.yaml
apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
  name: "glusterfs-claim"
spec:
  accessModes:
  - "ReadWriteMany"
  resources:
    requests:
      storage: "8Gi"

Note: the developer requests 8 GiB of storage with access mode RWX (ReadWriteMany).


# oc create -f gluster_pod/gluster-pvc.yaml
persistentvolumeclaim "glusterfs-claim" created
# oc get pvc
NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE
glusterfs-claim Bound gluster-default-volume 8Gi RWX 14s

Here the PVC is bound as soon as it is created, because a PV that satisfies the requirement was found. Now let’s go and check the PV status.


# oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
gluster-default-volume 8Gi RWX Bound default/glusterfs-claim 5m

Now the PV has been bound to “default/glusterfs-claim”. At this point the developer has a successfully bound Persistent Volume Claim and can use it in a pod, as shown below.

STEP 5: Use the persistent Volume Claim in a Pod defined by the Developer.


# cat gluster_pod/gluster_pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: mygluster
    image: ashiq/gluster-client
    command: ["/usr/sbin/init"]
    volumeMounts:
    - mountPath: "/home"
      name: gluster-default-volume
  volumes:
  - name: gluster-default-volume
    persistentVolumeClaim:
      claimName: glusterfs-claim

The above pod definition will pull the ashiq/gluster-client image (a private image) and start its init script. The Gluster volume is mounted on the host machine by the GlusterFS volume plugin available in Kubernetes and is then bind-mounted to the container’s /home. This is why all the Kubernetes cluster nodes must have the GlusterFS client packages installed.

Let’s try running it.


# oc create -f gluster_pod/fedora_pod.yaml
pod "mypod" created
# oc get pods
NAME READY STATUS RESTARTS AGE
mypod 1/1 Running 0 1m

Wow, it’s running… let’s go and check where it is running.

# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ec57d62e3837 ashiq/gluster-client "/usr/sbin/init" 4 minutes ago Up 4 minutes k8s_myfedora.dc1f7d7a_mypod_default_5d301443-ec20-11e5-9076-5254002e937b_ed2eb8e5
1439dd72fb1d openshift3/ose-pod:v3.1.1.6 "/pod" 4 minutes ago Up 4 minutes k8s_POD.e071dbf6_mypod_default_5d301443-ec20-11e5-9076-5254002e937b_4d6a7afb

Found the Pod running successfully on one of the Kubernetes node.

On the host:


# df -h | grep gluster_vol
170.22.43.77:gluster_vol 35G 4.0G 31G 12% /var/lib/origin/openshift.local.volumes/pods/5d301443-ec20-11e5-9076-5254002e937b/volumes/kubernetes.io~glusterfs/gluster-default-volume

I can see the Gluster volume mounted on the host o/. Let’s check inside the container. Note that the random number is the container ID from the docker ps command.


# docker exec -it ec57d62e3837 /bin/bash
[root@mypod /]# df -h | grep gluster_vol
170.22.43.77:gluster_vol 35G 4.0G 31G 12% /home

Yippee, the GlusterFS volume has been mounted inside the container on /home, as specified in the pod definition. Let’s try writing something to it.


[root@mypod /]# mkdir /home/ashiq
[root@mypod /]# ls /home/
ashiq

Since the access mode is RWX, I am able to write to the mount point.
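As a quick sanity check, the directory created from the pod should also be visible on the Gluster brick itself (the brick path below is taken from the earlier gluster v status output; adjust if yours differs):

# On one of the GlusterFS servers:
ls /gluster_brick/
# the 'ashiq' directory created inside the pod should be listed here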

That’s all Folks.

Author: Mohamed Ashiq

September 22, 2015

Docker Global Hack Day #3, Bangalore Edition

We organized Docker Global Hack Day at the Red Hat office on 19th Sep ’15. Though there were lots of RSVPs, the turnout for the event was less than expected. We started the day by showing the recording of the kick-off event.

The teams here worked on four different ideas, out of which two were submitted to the Global Hack Day GitHub page. The four ideas the teams worked on are:

Alan and Fayiz worked on a PaaS idea, which can be used for setting up dev and QA environments.

Archit was the winner of Docker Global Hack Day #2 as well, for the same project. He updated the same project in this hackathon. His project is about crowd-sourced analysis using distributed computing through Docker.

  • Visualizing Docker Networking – Himanshu Roy

Himanshu was exploring the idea of visualizing multi-host Docker networking.

  • Spreading and collating containers on GlusterFS with runC – Mohamed Ashiq Liyazudeen, Hari Gowtham

Looking at the runC demo in the kick-off video, we thought it would be good if we could run containers on GlusterFS and use it to move containers around by saving and restoring them on a shared volume.

I did not work on a specific idea, but I was helping teams and other attendees with their questions. I also worked on my upcoming tutorial at LinuxCon Europe on data and networking management with containers.

Maybe because of the long weekend and other events we got less participation. Hopefully we will do better next time.

November 3, 2014

8th Bangalore Docker meetup and Global Hackathon#2

On 1st Nov ’14, the Red Hat offices in Bangalore and Pune hosted Docker meetups and the hackathon.

~40 people attended the Bangalore meetup. Before the hackathon we had the following presentations:

  • Docker Global Hackday opening by Avi Cavale, Co-founder and CEO, Shippable.
  • Introduction to Docker – Pranay Pareek, Shippable
  • Introduction to Project Atomic – Neependra Khare, Red Hat.

In the last meetup we looked at CoreOS, and we decided to look at Project Atomic in this one. I used Colin Walters’ slides to give the Atomic presentation and then used the Fedora 20 Atomic image to give the demo.

After the presentations and lunch we had the hackathon. Four teams participated. I could not participate as I was hosting, but I helped first-timers with Docker hands-on. The four hacks were:

1. Dockit – Docker GlusterFS integration @HumbleDevassy, @swordphilic, @hiSaifi

 

2. dockerComp – @arcolife,@krishnakalyan3 – Bangalore Hackathon Winner

 

3. Docker as a Load Balancer. @anandrm

Dockerize a load balancer to replace standalone load balancers.

4. CI with Docker @srikrishnaholla, @harishk8591106, @dilipkuki

We did not have anyone to judge, so we did the presentations and cast the votes ourselves. The local winner was dockerComp.

In Pune we did not have a hackathon but had a great workshop. Look at the meetup page for the awesome feedback. The workshop labs are available at the following location for anyone to use.

http://people.redhat.com/rrajaram/dockermeetup/

Overall it was a great experience. There were requests to do more hands-on with Docker. We’ll try to do that in the next meetup.

 

September 3, 2014

Introducing: Oh My Vagrant!

If you’re a reader of my code or of this blog, it’s no secret that I hack on a lot of puppet and vagrant. Recently I’ve fooled around with a bit of docker, too. I realized that the vagrant environments I built for puppet-gluster and puppet-ipa needed to be generalized, and they needed new features too. Therefore…

Introducing: Oh My Vagrant!

Oh My Vagrant is an attempt to provide an easy to use development environment so that you can be up and hacking quickly, and focusing on the real devops problems. The README explains my choice of project name.

Prerequisites:

I use a Fedora 20 laptop with vagrant-libvirt. Efforts are underway to create an RPM of vagrant-libvirt, but in the meantime you’ll have to read: Vagrant on Fedora with libvirt (reprise). This should work with other distributions too, but I don’t test them very often. Please step up and help test :)

The bits:

First clone the oh-my-vagrant repository and look inside:

git clone --recursive https://github.com/purpleidea/oh-my-vagrant
cd oh-my-vagrant/vagrant/

The included Vagrantfile is the current heart of this project. You’re welcome to use it as a template and edit it directly, or you can use the facilities it provides. I’d recommend starting with the latter, which I’ll walk you through now.

Getting started:

Start by running vagrant status (vs) and taking a look at the vagrant.yaml file that appears.

james@computer:/oh-my-vagrant/vagrant$ ls
Dockerfile  puppet/  Vagrantfile
james@computer:/oh-my-vagrant/vagrant$ vs
Current machine states:

template1                 not created (libvirt)

The Libvirt domain is not created. Run `vagrant up` to create it.
james@computer:/oh-my-vagrant/vagrant$ cat vagrant.yaml 
---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.0
:sync: rsync
:puppet: false
:docker: false
:cachier: false
:vms: []
:namespace: template
:count: 1
:username: ''
:password: ''
:poolid: []
:repos: []
james@computer:/oh-my-vagrant/vagrant$

Here you’ll see the list of resultant machines that vagrant thinks are defined (currently just template1), and a bunch of different settings in YAML format. The values of these settings help define the vagrant environment that you’ll be hacking in.

Changing settings:

The settings exist so that your vagrant environment is dynamic and can be changed quickly. You can change the settings by editing the vagrant.yaml file. They will be used by vagrant when it runs. You can also change them at runtime with --vagrant-foo flags. Running a vagrant status will show you how vagrant currently sees the environment. Let’s change the number of machines that are defined. Note the location of the --vagrant-count flag and how it doesn’t work when positioned incorrectly.

james@computer:/oh-my-vagrant/vagrant$ vagrant status --vagrant-count=4
An invalid option was specified. The help for this command
is available below.

Usage: vagrant status [name]
    -h, --help                       Print this help
james@computer:/oh-my-vagrant/vagrant$ vagrant --vagrant-count=4 status
Current machine states:

template1                 not created (libvirt)
template2                 not created (libvirt)
template3                 not created (libvirt)
template4                 not created (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
james@computer:/oh-my-vagrant/vagrant$ cat vagrant.yaml 
---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.0
:sync: rsync
:puppet: false
:docker: false
:cachier: false
:vms: []
:namespace: template
:count: 4
:username: ''
:password: ''
:poolid: []
:repos: []
james@computer:/oh-my-vagrant/vagrant$

As you can see in the above example, changing the count variable to 4 causes vagrant to see four possible machines in the vagrant environment. You can change as many of these parameters at a time as you like by using the --vagrant- flags, or you can edit the vagrant.yaml file. The latter is much easier and more expressive, in particular for expressing complex data types. The former is much more powerful when building one-liners, such as:

vagrant --vagrant-count=8 --vagrant-namespace=gluster up gluster{1..8}

which should bring up eight hosts in parallel, named gluster1 to gluster8.

Other VM’s:

Since one often wants to be more expressive in machine naming and heterogeneity of machine type, you can specify a list of machines to define in the vagrant.yaml file vms array. If you’d rather define these machines in the Vagrantfile itself, you can also set them up in the vms array defined there. It is empty by default, but it is easy to uncomment one of the many examples. These will be used as the defaults if nothing else overrides the selection in the vagrant.yaml file. I’ve uncommented a few to show you this functionality:

james@computer:/oh-my-vagrant/vagrant$ grep example[124] Vagrantfile 
    {:name => 'example1', :docker => true, :puppet => true, },    # example1
    {:name => 'example2', :docker => ['centos', 'fedora'], },    # example2
    {:name => 'example4', :image => 'centos-6', :puppet => true, },    # example4
james@computer:/oh-my-vagrant/vagrant$ rm vagrant.yaml # note that I remove the old settings
james@computer:/oh-my-vagrant/vagrant$ vs
Current machine states:

template1                 not created (libvirt)
example1                  not created (libvirt)
example2                  not created (libvirt)
example4                  not created (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
james@computer:/oh-my-vagrant/vagrant$ cat vagrant.yaml 
---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.0
:sync: rsync
:puppet: false
:docker: false
:cachier: false
:vms:
- :name: example1
  :docker: true
  :puppet: true
- :name: example2
  :docker:
  - centos
  - fedora
- :name: example4
  :image: centos-6
  :puppet: true
:namespace: template
:count: 1
:username: ''
:password: ''
:poolid: []
:repos: []
james@computer:/oh-my-vagrant/vagrant$ vim vagrant.yaml # edit vagrant.yaml file...
james@computer:/oh-my-vagrant/vagrant$ cat vagrant.yaml 
---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.0
:sync: rsync
:puppet: false
:docker: false
:cachier: false
:vms:
- :name: example1
  :docker: true
  :puppet: true
- :name: example4
  :image: centos-7.0
  :puppet: true
:namespace: template
:count: 1
:username: ''
:password: ''
:poolid: []
:repos: []
james@computer:/oh-my-vagrant/vagrant$ vs
Current machine states:

template1                 not created (libvirt)
example1                  not created (libvirt)
example4                  not created (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
james@computer:/oh-my-vagrant/vagrant$

The above output might seem a little long, but if you try these steps out in your terminal, you should get a hang of it fairly quickly. If you poke around in the Vagrantfile, you should see the format of the vms array. Each element in the array should be a dictionary, where the keys correspond to the flags you wish to set. Look at the examples if you need help with the formatting.

Other settings:

As you saw, other settings are available. There are a few notable ones that are worth mentioning. This will also help explain some of the other features that this Vagrantfile provides.

  • domain: This sets the domain part of each vm’s FQDN. The default is example.com, which should work for most environments, but you’re welcome to change this as you see fit.
  • network: This sets the network that is used for the vm’s. You should pick a network/cidr that doesn’t conflict with any other networks on your machine. This is particularly useful when you have multiple vagrant environments hosted off of the same laptop.
  • image: This is the default base image to use for each machine. It can be overridden per-machine in the vm’s list of dictionaries.
  • sync: This is the sync type used for vagrant. rsync is the default and works in all environments. If you’d prefer to fight with the nfs mounts, or try out 9p, both those options are available too.
  • puppet: This option enables or disables integration with puppet. It is possible to override this per machine. This functionality will be expanded in a future version of Oh My Vagrant.
  • docker: This option enables and lists the docker images to set up per vm. It is possible to override this per machine. This functionality will be expanded in a future version of Oh My Vagrant.
  • namespace: This sets the namespace that your Vagrantfile operates in. This value is used as a prefix for the numbered vm’s, as the libvirt network name, and as the primary puppet module to execute.

More on the docker option:

For now, if you specify a list of docker images, they will be automatically pulled into your vm environment. It is recommended that you pre-cache them in an existing base image to save bandwidth. Custom base vagrant images can easily be built with vagrant-builder, but this process is currently undocumented.

I’ll try to write-up a post on this process if there are enough requests. To keep you busy in the meantime, I’ve published a CentOS 7 vagrant base image that includes docker images for CentOS and Fedora. It is being graciously hosted by the GlusterFS community.

What other magic does this all do?

There is a certain amount of magic glue that happens behind the scenes. Here’s a list of some of it:

  • Idempotent /etc/hosts based DNS
  • Easy docker base image installation
  • IP address calculations and assignment with ipaddr
  • Clever cleanup on ‘vagrant destroy’
  • Vagrant docker base image detection
  • Integration with Puppet

If you don’t understand what all of those mean, and you don’t want to go source diving, don’t worry about it! I will explain them in greater detail when it’s important, and hopefully for now everything “just works” and stays out of your way.

Future work:

There’s still a lot more that I have planned, and some parts of the Vagrantfile need clean up, but I figured I’d try and release this early so that you can get hacking right away. If it’s useful to you, please leave a comment and let me know.

Happy hacking,

James

 

June 19, 2014

Community Gluster Image on Docker

If you would like to try out gluster, a new CentOS based docker container is available on the docker hub at https://registry.hub.docker.com/u/gluster/gluster/. This image is very new, so do not use it for production environments. It is meant to be an early community version of gluster running within docker.

For correctness and performance reasons, we recommend running Gluster on a host-mounted XFS volume that resides on a separate device from the root filesystem. For this proof of concept, we use only a single-node gluster daemon.

This community image was originally created by Frederick F. Kautz IV and Harshavardhana.

Usage

Prepare an XFS mount

The preferred method to use gluster is to mount an XFS partition on a separate device. If you want to test the image and do not have an XFS partition available on your system, you can create and mount one using the following commands:
dd if=/dev/zero of=/data/gluster.xfs bs=1M count=2048
mkfs.xfs -isize=512 /data/gluster.xfs
mkdir /mnt/gluster
mount -oloop,inode64,noatime /data/gluster.xfs /mnt/gluster

Run docker with the XFS mount

host # docker run --privileged -i -t -h gluster -v /mnt/gluster:/mnt/vault \
gluster/gluster:latest
container # df -h /mnt/vault
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop4     2014M   45M  1969M   4% /mnt/vault
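The mount steps in the next sections refer to $VOLUME_NAME, but the volume creation itself is not shown in this post; inside the container it would look roughly like this (the volume name vol0 and the brick path under /mnt/vault are assumptions):

container # gluster volume create vol0 gluster:/mnt/vault/brick force
container # gluster volume start vol0
# in the host-side commands below, $VOLUME_NAME would then be vol0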

Access your new gluster volume from the host

Grab the ip address for the container
GLUSTER_CONTAINER_ID=$(docker ps | grep -i gluster | awk {'print $1'})
GLUSTER_IPADDR=$(docker inspect $GLUSTER_CONTAINER_ID | grep -i ipaddr | \
sed -e 's/\"//g' -e 's/\,//g' | awk {'print $2'})
Mount the gluster volume using the IP address obtained in the above section.
mount -t glusterfs ${GLUSTER_IPADDR}:$VOLUME_NAME /mnt/gfs

Accessing your new gluster volume from a container

First, mount the volume to the host as shown in the previous section.
Second, mount the volume into the container at run time:
docker run -i -t -h gluster-client -v /mnt/gfs:/mnt/${VOLUME_NAME} gluster/gluster:latest
Note:
Docker drops CAP_SYS_ADMIN, which prevents the volume from being mounted from within another (unprivileged) container.
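For completeness, a client container that needs to perform the glusterfs mount itself would typically be started either privileged or, on Docker releases that support capability and device flags, with just the pieces FUSE needs (a sketch, not from the original post):

docker run --privileged -i -t -h gluster-client gluster/gluster:latest
# or, on Docker releases that support these flags:
docker run -i -t -h gluster-client --cap-add SYS_ADMIN --device /dev/fuse gluster/gluster:latest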

Shutting down and restarting gluster

Gluster stores metadata about the volume in /var/lib/glusterd and logs in /var/log/glusterfs. In order to preserve state, use docker commit before shutting down the cluster.
docker commit $GLUSTER_CONTAINER_ID mygluster:latest
docker kill $GLUSTER_CONTAINER_ID
To restart gluster, simply run your tagged gluster image.
docker run --privileged -i -t -h gluster -v /mnt/gluster:/mnt/vault mygluster:latest

Next Steps

We are investigating how to run gluster in a docker based multi-node environment. We will write a new blog post covering this topic soon. We are also investigating what changes are necessary to both gluster and docker to help support running gluster in docker.
If you are feeling adventurous, take a look at jpetazzo’s pipework project: https://github.com/jpetazzo/pipework.
February 16, 2014

Running GlusterFS inside docker container

As a part of GlusterFS 3.5 testing and the hackathon, I decided to put GlusterFS inside a Docker container. So I installed Docker on my Fedora 20 desktop:

$ yum install docker-io -y
$ systemctl enable docker.service
ln -s '/usr/lib/systemd/system/docker.service' '/etc/systemd/system/multi-user.target.wants/docker.service'
$ systemctl start docker.service
$ docker version
Client version: 0.7.6
..
Server version: 0.7.6

and then started a Fedora container

$ docker run -i -t mattdm/fedora /bin/bash

Once inside the container, I installed the GlusterFS packages

bash-4.2# yum install glusterfs glusterfs-server -y

And then tried to create a volume

bash-4.2# /usr/sbin/glusterd
bash-4.2# gluster volume create vol 172.17.0.3:/mnt/brick/ force


but I got the following error:

volume create: vol: failed: Glusterfs is not supported on brick: 172.17.0.3:/mnt/brick.
Setting extended attributes failed, reason: Operation not permitted.

From the above error it looked as if setting extended attributes is not supported, which is a basic requirement for GlusterFS. So I tried to test them manually. I was able to set extended attributes in the user namespace but not in the trusted namespace.

bash-4.2# yum install attr -y
bash-4.2# setfattr -n user.foo1 -v "bar" a
bash-4.2# touch a; setfattr -n trusted.foo1 -v "bar" a
setfattr: a: Operation not permitted

With some internet searching I figured out that CAP_SYS_ADMIN is needed for setting extended attributes in the trusted namespace, and to get that inside Docker we need to run the image with the --privileged=true option, like

$ docker run --privileged=true -i -t mattdm/fedora /bin/bash

With that I was able to create the volume and start it:

bash-4.2# gluster volume create vol 172.17.0.3:/mnt/brick/ force
bash-4.2# gluster volume start vol

But when I tried to mount the volume I got the following error:

E [mount.c:267:gf_fuse_mount] 0-glusterfs-fuse: cannot open /dev/fuse (No such file or directory)

This turned out to be a problem specific to the image I am using (mattdm/fedora); I had to mknod /dev/fuse

bash-4.2# mknod /dev/fuse c 10 229

and after that I was able to mount the volume.
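The mount command would be something like this (the address and volume name match the earlier create; the mount point is an assumption):

bash-4.2# mkdir -p /mnt/vol
bash-4.2# mount -t glusterfs 172.17.0.3:/vol /mnt/vol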

On Fedora 20 with Docker version 0.7.6, the default storage driver is device-mapper, on which extended attributes are supported. The AUFS storage driver does not support extended attributes as of now. I have tried the btrfs storage driver with Docker 0.8 as well and was able to use GlusterFS. To use the btrfs storage driver, we need to start the Docker daemon with the following command:

$ docker -d -s btrfs

The above will only work if Docker's storage is on a btrfs partition already prepared by the host system.
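To confirm which storage driver the daemon is actually using, something like this should do (the output format differs between Docker versions):

$ docker info | grep -i driver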