All posts tagged CloudStack


Posted on February 23, 2014

Setting up a test-environment for Apache CloudStack and Gluster

This is an example of how to configure an environment where you can test CloudStack and Gluster. It uses two machines on the same LAN: one acts as a KVM hypervisor, the other as the storage and management server. Because the (virtual) networking on the hypervisor is a little more complex than the networking on the management server, the hypervisor is set up with an OpenVPN connection so that the local LAN is not affected by 'foreign' network traffic.

I am not a CloudStack specialist, so this configuration may not be optimal for real-world usage. The intention is to be able to test CloudStack and its Gluster integration in existing networks. The CloudStack installation and configuration described here is suitable for testing and development systems; for production environments it is highly recommended to follow the CloudStack documentation instead.


.----------------.                          .-------------------.
|                |                          |                   |
| KVM Hypervisor |  <------- LAN ------->   | Management Server |
|                |    ^-- OpenVPN --^       |                   |
'----------------'                          '-------------------'
agent.cloudstack.tld                        storage.cloudstack.tld

Both systems have one network interface with a static IP-address. Additional IP-addresses cannot be used in the LAN. This makes it difficult to access the virtual machines, but that does not matter too much for this testing.

Both systems need a basic installation:

  • Red Hat Enterprise Linux 6.5 (CentOS 6.5 should work too)
  • Fedora EPEL enabled (howto install epel-release)
  • enable ssh access
  • SELinux in permissive mode (or disabled)
  • firewall enabled, but not restricting anything
  • Java 1.7 from the standard java-1.7.0-openjdk packages (not Java 1.6)

On the hypervisor, an additional (internal only) bridge needs to be set up. This bridge will be used for providing IP-addresses to the virtual machines. Each virtual machine seems to need at least 3 IP-addresses; this is a default in CloudStack. This example uses the virtual networks 192.168.N.0/24, where N is 0 to 4.

Configuration for the main cloudbr0 device:


#file: /etc/sysconfig/network-scripts/ifcfg-cloudbr0
DEVICE=cloudbr0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.0.1
NETMASK=255.255.255.0
NM_CONTROLLED=no

And the additional IP-addresses on the cloudbr0 bridge (create 4 files, replace N by 1, 2, 3 and 4):


#file: /etc/sysconfig/network-scripts/ifcfg-cloudbr0:N
DEVICE=cloudbr0:N
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.N.1
NETMASK=255.255.255.0
NM_CONTROLLED=no
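
The four alias files only differ in the value of N; a small shell sketch that generates them in one go, assuming the same values as shown above:


for N in 1 2 3 4; do
  cat << EOF > /etc/sysconfig/network-scripts/ifcfg-cloudbr0:$N
DEVICE=cloudbr0:$N
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.$N.1
NETMASK=255.255.255.0
NM_CONTROLLED=no
EOF
done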

Enable the new cloudbr0 bridge with all its IP-addresses:


# ifup cloudbr0

Any VM with a 192.168.*.* address should be able to get to the real LAN, and ultimately also to the internet. Enabling NAT for the internal virtual networks is the easiest approach:

# iptables -t nat -A POSTROUTING -o eth0 -s 192.168.0.0/24 -j MASQUERADE
# iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -j MASQUERADE
# iptables -t nat -A POSTROUTING -o eth0 -s 192.168.2.0/24 -j MASQUERADE
# iptables -t nat -A POSTROUTING -o eth0 -s 192.168.3.0/24 -j MASQUERADE
# iptables -t nat -A POSTROUTING -o eth0 -s 192.168.4.0/24 -j MASQUERADE
# service iptables save
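
Note that masquerading only works when the hypervisor forwards IPv4 traffic between the bridge and the LAN interface. If that is not enabled already, turn it on (and set net.ipv4.ip_forward = 1 in /etc/sysctl.conf to make it permanent):


# sysctl -w net.ipv4.ip_forward=1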

The hypervisor will need to be set up to act as a gateway for the virtual machines on the cloudbr0 bridge. In order to do so, a very basic OpenVPN service does the trick:


# yum install openvpn
# openvpn --genkey --secret /etc/openvpn/static.key
# cat << EOF > /etc/openvpn/server.conf
dev tun
ifconfig 192.168.200.1 192.168.200.2
secret static.key
EOF
# chkconfig openvpn on
# service openvpn start

On the management server, OpenVPN needs to be configured as a client, so that routing to the virtual networks is possible:


# yum install openvpn
# cat << EOF > /etc/openvpn/client.conf
remote real-hostname-of-hypervisor.example.net
dev tun
ifconfig 192.168.200.2 192.168.200.1
secret static.key
EOF
# scp real-hostname-of-hypervisor.example.net:/etc/openvpn/static.key /etc/openvpn
# chkconfig openvpn on
# service openvpn start
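
With both sides configured, a quick check from the management server shows whether the tunnel is up (using the addresses from the configuration above):


# ping -c 3 192.168.200.1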

In /etc/hosts (on both the hypervisor and management server) the internal hostnames for the environment should be added:


#file: /etc/hosts
192.168.200.1 agent.cloudstack.tld
192.168.200.2 storage.cloudstack.tld

The hypervisor will also function as a DNS-server for the virtual machines. The easiest option is to use dnsmasq, which uses /etc/hosts and /etc/resolv.conf for resolving:


# yum install dnsmasq
# chkconfig dnsmasq on
# service dnsmasq start
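
A quick way to confirm that dnsmasq answers for the internal hostnames is to query it directly; a sketch from the management server, assuming dnsmasq also listens on the tunnel address:


# dig +short agent.cloudstack.tld @192.168.200.1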

The management server is also used as a Gluster Storage Server. Therefore it needs some Gluster packages:


# wget -O /etc/yum.repos.d/glusterfs-epel.repo \
http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/RHEL/glusterfs-epel.repo
# yum install glusterfs-server
# vim /etc/glusterfs/glusterd.vol

# service glusterd restart
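
The edit to glusterd.vol is not spelled out above; it is typically the addition of the rpc-auth-allow-insecure option, which lets clients on unprivileged ports (such as QEMU with libgfapi) talk to glusterd. A minimal sketch of the addition:


#file: /etc/glusterfs/glusterd.vol (add inside the existing "volume management" block)
    option rpc-auth-allow-insecure on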

Create two volumes where CloudStack will store disk images. Before starting the volumes, apply the required settings too. Note that the hostname that holds the bricks should be resolvable by the hypervisor and by the Secondary Storage VMs. This example does not show how to create volumes for production usage; do not create volumes like this for anything other than testing and scratch data.


# mkdir -p /bricks/primary/data
# mkdir -p /bricks/secondary/data
# gluster volume create primary storage.cloudstack.tld:/bricks/primary/data
# gluster volume set primary storage.owner-uid 36
# gluster volume set primary storage.owner-gid 36
# gluster volume set primary server.allow-insecure on
# gluster volume set primary nfs.disable true
# gluster volume start primary
# gluster volume create secondary storage.cloudstack.tld:/bricks/secondary/data
# gluster volume set secondary storage.owner-uid 36
# gluster volume set secondary storage.owner-gid 36
# gluster volume start secondary
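
Before continuing, it does not hurt to verify that both volumes are started and carry the expected options:


# gluster volume info primary
# gluster volume info secondary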

When the preparation is all done, it is time to install Apache CloudStack. Support for Gluster is planned for CloudStack 4.4, but at the moment not all required changes are included in the CloudStack git repository. Therefore the RPM packages need to be built from the Gluster Forge repository where the development is happening. On a system running RHEL-6.5, check out the sources and build the packages (this needs a standard CloudStack development environment, including java-1.7.0-openjdk-devel, Apache Maven and others):


$ git clone git://forge.gluster.org/cloudstack-gluster/cloudstack.git
$ cd cloudstack
$ git checkout -t -b wip/master/gluster origin/wip/master/gluster
$ cd packaging/centos63
$ ./package.sh

In the end, these packages should have been built:

  • cloudstack-management-4.4.0-SNAPSHOT.el6.x86_64.rpm
  • cloudstack-common-4.4.0-SNAPSHOT.el6.x86_64.rpm
  • cloudstack-agent-4.4.0-SNAPSHOT.el6.x86_64.rpm
  • cloudstack-usage-4.4.0-SNAPSHOT.el6.x86_64.rpm
  • cloudstack-cli-4.4.0-SNAPSHOT.el6.x86_64.rpm
  • cloudstack-awsapi-4.4.0-SNAPSHOT.el6.x86_64.rpm

On the management server, install the following packages:


# yum localinstall cloudstack-management-4.4.0-SNAPSHOT.el6.x86_64.rpm \
cloudstack-common-4.4.0-SNAPSHOT.el6.x86_64.rpm \
cloudstack-awsapi-4.4.0-SNAPSHOT.el6.x86_64.rpm

Install and configure the database:


# yum install mysql-server
# chkconfig mysqld on
# service mysqld start
# vim /etc/cloudstack/management/classpath.conf

# cloudstack-setup-databases cloud:secret --deploy-as=root:

Install the systemvm templates:


# mount -t nfs storage.cloudstack.tld:/secondary /mnt
# /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
-m /mnt \
-h kvm \
-u http://jenkins.buildacloud.org/view/master/job/build-systemvm-master/lastSuccessfulBuild/artifact/tools/appliance/dist/systemvmtemplate-master-kvm.qcow2.bz2
# umount /mnt

The management server is now prepared, and the web UI can be configured:


# cloudstack-setup-management

On the hypervisor, install the following additional packages:


# yum install qemu-kvm libvirt glusterfs-fuse
# yum localinstall cloudstack-common-4.4.0-SNAPSHOT.el6.x86_64.rpm \
cloudstack-agent-4.4.0-SNAPSHOT.el6.x86_64.rpm
# cloudstack-setup-agent

Make sure that in /etc/cloudstack/agent/agent.properties the right NICs are being used:


guest.network.device=cloudbr0
private.bridge.name=cloudbr0
private.network.device=cloudbr0
network.direct.device=cloudbr0
public.network.device=cloudbr0

Go to the CloudStack web interface, which should be running on the management server at http://real-hostname-of-mgmt.example.net:8080/client. The default username/password is admin / password.

It is easiest to skip the configuration wizard (not sure if that supports Gluster already). When the normal interface is shown, a new 'Zone' can be added under 'Infrastructure'. The Zone wizard needs the following input:

  • DNS 1: 192.168.0.1
  • Internal DNS 1: 192.168.0.1
  • Hypervisor: KVM

Under POD, use these options:

  • Reserved system gateway: 192.168.0.1
  • Reserved system netmask: 255.255.255.0
  • Start reserved system IP: 192.168.0.10
  • End reserved system IP: 192.168.0.250

Next the network config for the virtual machines:

  • Guest gateway: 192.168.1.1
  • Guest system netmask: 255.255.255.0
  • Guest start IP: 192.168.1.10
  • Guest end IP: 192.168.1.250

Primary storage:

  • Type: Gluster
  • Server: storage.cloudstack.tld
  • Volume: primary

Secondary Storage:

  • Type: nfs
  • Server: storage.cloudstack.tld
  • path: /secondary

Hypervisor agent:

  • hostname: agent.cloudstack.tld
  • username: root
  • password: password

If this all succeeded, the newly created Zone can be enabled. After a while, there should be two system VMs listed under Infrastructure. It is possible to log in to these system VMs and check if everything is working. To do so, log in over SSH on the hypervisor and connect to the VMs through libvirt:


# virsh list
 Id    Name                           State
----------------------------------------------------
 1     s-1-VM                         running
 2     v-2-VM                         running

# virsh console 1
Connected to domain s-1-VM
Escape character is ^]

Debian GNU/Linux 7 s-1-VM ttyS0

s-1-VM login: root
Password: password
...
root@s-1-VM:~#

Log out from the shell, and press CTRL+] to disconnect from the console.

To verify that this VM indeed runs with the QEMU+libgfapi integration, check the log file that libvirt writes and confirm that there is a -drive with a gluster+tcp:// URL in /var/log/libvirt/qemu/s-1-VM.log:


... /usr/libexec/qemu-kvm -name s-1-VM ... -drive file=gluster+tcp://storage.cloudstack.tld:24007/primary/d691ac19-4ec1-47c1-b765-55f804b78bec,...
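
A quick way to check for such an entry is to grep the log for the gluster URL (the disk image UUID will differ per installation):


# grep -o 'gluster+tcp://[^,]*' /var/log/libvirt/qemu/s-1-VM.log
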

Posted on December 1, 2013

Using Gluster as Primary Storage in CloudStack

CloudStack can use a Gluster environment for different kinds of storage:
  1. Primary Storage: mount over the GlusterFS native client (FUSE)
    This post shows how it is working and refers to the patches that make this possible.
  2. Volumes for virtual machines: use the libgfapi integration in QEMU
    The next upcoming task; an initial, untested patch is in the wip-branch.
  3. Secondary Storage: mount over the GlusterFS native client (FUSE)
The current work-in-progress repository on the Gluster Community Forge already has functional support for creating Primary Storage on existing Gluster environments:
  • Infrastructure -> Primary Storage -> Add Primary Storage
    (screenshot: Add Primary Storage)
  • Infrastructure -> Zones -> Add Zone - [wizard]
    (screenshot: Add Primary Storage through the Zone Wizard)
Via the Infrastructure -> Primary Storage menu, the details of the newly created storage can be displayed.
(screenshot: Primary Storage Details)

After creating a virtual machine from the standard CentOS template, it can be verified that the Primary Storage Pool on the Gluster environment is functioning. On the hypervisor that runs the VM:

[root@agent ~]# mount | grep gluster
gluster.cloudstack.example.net:/primary on /mnt/dd697445-f67c-33bc-af52-386de3ff7245 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

[root@agent ~]# ps -C qemu-kvm -o command | grep i-2-3-VM
/usr/libexec/qemu-kvm -name i-2-3-VM ... -drive file=/mnt/dd697445-f67c-33bc-af52-386de3ff7245/1afd48d2-c5e1-44ce-bcb3-051cc4d59716,if=none,id=drive-virtio-disk0,format=qcow2,cache=none ...

The changes to CloudStack that make this possible are located on the Gluster Community Forge and have been posted for review:
  • [#15932] Add support for Primary Storage on Gluster using the libvirt backend
  • [#15933] Add Gluster to the list of protocols in the Management Server

Posted on November 25, 2013

Initial work on Gluster integration with CloudStack

Last week there was a CloudStack Conference at the Beurs van Berlage in Amsterdam. I attended the first day and joined the Hackathon. Without any prior knowledge of CloudStack, I was asked by some of the Gluster community people to have a look at adding support for Gluster in CloudStack. An interesting topic, and of course I'll happily have a go at it.
CloudStack seems quite a nice project. The conference showed an awesome part of the community, loads of workshops and a surprising number of companies that sponsor and contribute to CloudStack. Very impressive!
One of the attendees at the CloudStack Conference was Wido den Hollander. Wido has experience with integrating Ceph in CloudStack, and gave an explanation and some pointers on how storage is implemented.

Integration Notes

libvirt

It seems that the most useful way to integrate Gluster with CloudStack is to make sure libvirt knows how to use a Gluster backend. Checking with some of my colleagues who are part of the group that supports libvirt quickly showed that libvirt knows about Gluster already (Add new net filesystem glusterfs).
This suggests that it should be possible to create a storage pool in libvirt that is hosted on a Gluster environment. A little trial and error shows that a command like this creates the pool:

# virsh pool-create-as --name primary_gluster --type netfs --source-host $(hostname) --source-path /primary --source-format glusterfs --target /mnt/libvirt/primary_gluster

The components that the above command uses, are:
  • primary_gluster: the name of the storage pool in libvirt
  • netfs: the type of the pool, netfs mounts the 'pool' under the given --target
  • $(hostname): one of the Gluster servers that is part of the Trusted Storage Pool that provides the Gluster volume
  • /primary: the name of the Gluster volume
  • /mnt/libvirt/primary_gluster: directory where libvirt will mount the Gluster volume
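
Once the pool exists, it can be inspected with the standard virsh commands (using the pool name from above):

# virsh pool-info primary_gluster
# virsh vol-list primary_gluster
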
Creating a volume (a libvirt volume, which is a file on the Gluster volume) can be done through libvirt:

# virsh vol-create-as --pool primary_gluster --name virsh-created-vol.img --capacity 512M --format raw

This will create the file /mnt/libvirt/primary_gluster/virsh-created-vol.img and that file can be used as a storage backend for a virtual machine. An example of a snippet for the disk that can be attached to a VM:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='gluster' name='/primary/virsh-created-vol.img'>
        <host name='HOSTNAME' port='24007'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>
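
Such a snippet can be attached to an existing domain with virsh; a hedged example, assuming the snippet is saved as gluster-disk.xml (a hypothetical filename) and attached to the test domain just-a-vm:

# virsh attach-device just-a-vm gluster-disk.xml --persistent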

There are some important prerequisites that need to be applied to the Gluster volume so that libvirt can start a virtual machine with the appropriate user. After setting these options on the Gluster volume and in /etc/glusterfs/glusterd.vol, a test virtual machine can be started. The log of the VM (/var/log/libvirt/qemu/just-a-vm.log) shows the QEMU command line, and this contains the path to the storage:

... /usr/libexec/qemu-kvm -name just-a-vm ... -drive file=gluster+tcp://HOSTNAME:24007/primary/virsh-created-vol.img,if=none,id=drive-virtio-disk0,format=raw,cache=none ...
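
For reference, the prerequisites mentioned above usually come down to a handful of settings; a hedged sketch, assuming QEMU runs as the unprivileged qemu user (uid/gid 107 on RHEL-6) and the Gluster volume is called primary:

# gluster volume set primary server.allow-insecure on
# gluster volume set primary storage.owner-uid 107
# gluster volume set primary storage.owner-gid 107

In addition, glusterd itself needs "option rpc-auth-allow-insecure on" in /etc/glusterfs/glusterd.vol, followed by a restart of the glusterd service.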

Design Overview

Because CloudStack utilizes libvirt, it should be relatively straightforward to add support for Gluster in CloudStack. A diagram that shows the main interactions and their components looks like this:

                .--------------.
                |  CloudStack  |
                '-------+------'
                        |
                  .-----+-----.
                  |  libvirt  |
                  '-----+-----'
                        |
          .-------------+-------------.
          |                           |
.---------+----------.     .----------+----------.
| / storage pool /   |     |  virtual machine    |
|  image management  |     |  management         |
'---------+----------'     | / XML description / |
          |                '----------+----------'
          V                           |
........................              V
: / vfs/fuse /        :    .............................
: mount -t glusterfs  :    : / QEMU + libgfapi /       :
:......................:    : qemu file=gluster://...   :
                            :...........................:

The parts that are already functioning are these:
  • libvirt mounts a Gluster volume as a netfs/fuse-filesystem
  • create an XML definition for the disk and pass gluster:// on to QEMU

The actual development work will be in teaching CloudStack to instruct libvirt to use a Storage Pool backed by a Gluster Volume and to attach disks to a virtual machine with the gluster protocol.

CloudStack Storage Subsystem modifications

Wido pointed out that most of the storage changes will be needed in the LibvirtStoragePoolDef and LibvirtStorageAdapter Java classes. Also the Storage Core would need to know about the new storage backend.
After some browsing and reading the sources, the needed modifications looked straightforward. The Gluster backend compares to the NFS backend, which can be used as an example.
Changing the code is the easy part, compared to testing it. Remember that I have no CloudStack background whatsoever... Setting up a CloudStack environment to see if the modifications do anything is far from trivial. Compared to the time I spent on changing the source code, trying to get a minimal test environment functioning took most of my time. At this moment, my patches are untested and therefore I have not posted them for review yet :-/

Setting up a CloudStack environment for testing

Some pointers to set up a development environment:
  • Building CloudStack manually (non RPMs)
  • maven 3.0.4 has been deprecated, use maven 3.0.5 instead
  • Installation Guide
  • RHEL6 requires the Optional Channel for jsvc from the jakarta-commons-daemon-jsvc package
  • install the cloudstack-agent (and -common) package
  • set guid and local.storage.uuid in /etc/cloudstack/agent/agent.properties
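
For the last item, the two properties simply need unique identifiers; a hedged sketch that appends freshly generated UUIDs (keys as named above, values arbitrary):

# echo "guid=$(uuidgen)" >> /etc/cloudstack/agent/agent.properties
# echo "local.storage.uuid=$(uuidgen)" >> /etc/cloudstack/agent/agent.properties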

Running the CloudStack Management server is easy enough when the sources are checked out and built. A command like this works for me:

# mvn -pl :cloud-client-ui jetty:run

To deploy the changes for the cloudstack-agent, I prefer to build and install RPMs. Building these is made easy by the packaging/centos63/package.sh script:

# cd packaging/centos63 ; ./package.sh ; cd -

This script and the resulting packages work well on RHEL-6.5.

Upcoming work

With the test environment in place, I can now start to make changes to the Management Server. The current modifications in the JavaScript code make it possible to select Gluster as a primary storage pool. Unfortunately, I'm no web developer and changing JavaScript isn't something I'm very good at. I will be hacking on it every now and then, and hope to be able to have something suitable for review soon.
Of course, any assistance is welcome! I'm happy to share my work in progress if there is an interest. No guarantees about any working functionality though ;-)