all posts tagged Ubuntu


by on August 4, 2015

Gluster Community Packages

The Gluster Community currently provides GlusterFS packages for the following distributions:

                            3.5 3.6 3.7
Fedora 21                    ¹   ×   ×
Fedora 22                    ×   ¹   ×
Fedora 23                    ×   ×   ¹
Fedora 24                    ×   ×   ¹
RHEL/CentOS 5                ×   ×
RHEL/CentOS 6                ×   ×   ×
RHEL/CentOS 7                ×   ×   ×
Ubuntu 12.04 LTS (precise)   ×   ×
Ubuntu 14.04 LTS (trusty)    ×   ×   ×
Ubuntu 15.04 (vivid)             ×   ×
Ubuntu 15.10 (wily)
Debian 7 (wheezy)            ×   ×
Debian 8 (jessie)            ×   ×   ×
Debian 9 (stretch)           ×   ×   ×
SLES 11                      ×   ×
SLES 12                          ×   ×
OpenSuSE 13                  ×   ×   ×
RHELSA 7                             ×

(Packages are also available in NetBSD and maybe FreeBSD.)

Most packages are available from download.gluster.org

Ubuntu packages are available from Launchpad
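
For Ubuntu users, installing from the Launchpad PPA generally looks like the sketch below. The exact PPA name depends on the release series you want; ppa:gluster/glusterfs-3.7 here is an assumption, so check Launchpad for the current location.

# Add the Gluster community PPA (PPA name is an assumption -- verify on Launchpad)
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:gluster/glusterfs-3.7
sudo apt-get update
sudo apt-get install glusterfs-server glusterfs-client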

As can be seen, the older distributions don’t have packages of the latest GlusterFS, usually because their dependencies are too old or missing. Similarly, the newest distributions don’t have packages of the older versions, for the same reason.

¹ In Fedora, Fedora Updates, or Fedora Updates-Testing for Primary architectures. Secondary architectures seem to be slow to sync with Primary; RPMs for aarch64 are often available from download.gluster.org.

by on July 16, 2014

How to install GlusterFS with a replicated volume over 2 nodes on Ubuntu 14.04

In this tutorial I will explain GlusterFS configuration on Ubuntu 14.04. GlusterFS is an open source distributed file system which provides easy replication over multiple storage nodes, allowing you to create a single volume of storage which spans multiple disks, multiple machines and even multiple data centres.
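
As a quick preview of what the tutorial builds, a two-node replicated volume boils down to a handful of commands once glusterfs-server is installed on both nodes; the host names and brick path below are illustrative assumptions, not values from the tutorial.

# On node1: add node2 to the trusted pool (host names are assumptions)
gluster peer probe node2
# Create a volume replicated across one brick on each node, then start it
gluster volume create myvol replica 2 node1:/data/brick node2:/data/brick
gluster volume start myvol
# Any client can then mount it with the native FUSE client
mount -t glusterfs node1:/myvol /mnt/myvol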

by on October 18, 2013

GlusterFS 3.4.1 Packages for Ubuntu Saucy (13.10)

As a long-time Ubuntu user, I’ve worked to make sure that Debian and Ubuntu are first-class citizens in the Gluster Community. This is not without its challenges – most Gluster developers live in the Fedora/CentOS/RHEL hemisphere, and the GlusterFS version available in Ubuntu is a rather old 3.2.7, two major releases behind the latest and greatest, 3.4.1.

However, I’m happy to report that when the Saucy Salamander hit the download servers yesterday, we had DEBs readily available for downloading. This is completely due to our active community and one of our stars, not to mention community board member, Louis ‘semiosis’ Zuckerman. He had his packages uploaded on October 15, two days before the official Ubuntu 13.10 GA release.

Semiosis’ PPA is the official location for Gluster Community-supported packages of GlusterFS for Ubuntu. Give it a try, Ubuntu users. And as an added bonus, 13.10 also includes packages of OpenStack Havana – you know what that means.

by on September 20, 2013

Gluster Community Day, Thursday

I’m here in New Orleans hacking up a storm and getting to meet fellow gluster users IRL. John Mark Walker started off with a great “State of the GlusterFS union” style talk.

Today Louis (semiosis) gave a great talk about running glusterfs on amazon. It was highly pragmatic and he explained how he chose the number of bricks per host. The talk will be posted online shortly.

Marco Ceppi from Canonical gave a talk about juju and gluster. I haven’t had much time to look at juju, so it was good exposure. Marco’s gluster charm suffers from a lack of high availability peering, but I’m sure that is easily solved, and it isn’t a big issue. I had the same issue when working on puppet-gluster. I’ve written an article about how I solved this problem. I think it’s the most elegant solution, but if anyone has a better idea, please let me know. The solutions I used for puppet can be applied to juju too. Marco and I talked about porting puppet-gluster to ubuntu. We also talked about using puppet inside of juju, with a puppetmaster, but we’re not sure how useful that would be beyond pure hack value.

Joe Julian gave a talk on running MySQL (MariaDB) on glusterfs and getting mostly decent performance. That man knows his gluster internals.

I presented my talk about puppet-gluster. I had a successful live demo, which ran over ssh+screen across the conference centre internet to my home cluster in Montreal. With interspersed talking, the full deploy took about eight minutes. Hope you enjoyed it. Let me know if you have any trouble with your setup and what features you’re missing. The video will be posted shortly.

Thanks again to John Mark Walker, RedHat and gluster.org for sponsoring my trip.

Happy hacking,

James

by on September 18, 2013

Linuxcon day two, Tuesday

Continuing on from yesterday, I’ve met even more interesting people. I chatted with Dianne Mueller about some interesting ideas for gluster+openshift. More to come on that front soon. Hung out with Jono Bacon and talked a bit about puppet-gluster on Ubuntu. If there is interest in the community for this, please let me know. Thanks to John Mark Walker and RedHat for sponsoring me and introducing me to many of these folks. Hello to all the others that I didn’t mention.

On the hacking side of things, I added proper XML parsing and a lot of fancier firewalling work to puppet-gluster. At the moment, here’s how the firewall support works:

  1. Initially, each host doesn’t know about the other nodes.
  2. Puppet runs and each host exports host information to each other node. This opens up the firewall for glusterd so that the hosts can peer.
  3. Now that we know which hosts are in a common pool, we can open up the firewall for each volume’s bricks. Since the volume has not yet been started (or even created) we can’t know which ports are needed, so all incoming ports are permitted from other gluster nodes.
  4. Once the volume is created and started, the TCP port information becomes available and can be consumed as facts. These facts then refine the previously defined firewall rules to allow only the needed ports (see the sketch after this list for how to view those ports by hand).
  5. Your white-listed firewall setup is now complete.
  6. If you wish to avoid using this module to configure your firewall, you can set shorewall => false in your gluster::server class. If you want to specify the allowed IP access control manually, that is possible too.
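
For reference, the per-brick TCP ports that end up in those facts are the same ones the CLI reports; a quick way to inspect them by hand (the volume name is an assumption) is:

# glusterd itself listens on TCP 24007, which is what step 2 opens up
# Show the TCP port each brick of a started volume listens on
gluster volume status examplevol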

I hope you find this useful. I know I do. Let me know, and

Happy Hacking,

James

by on March 15, 2013

Glorious Gluster – How to setup GlusterFS on Rackspace Cloud and Ubuntu 12.10

A few of our projects recently called for a distributed file-system that provided high availability and redundancy. After a tip off from a fellow techie and a quick browse around the net, a solution called GlusterFS appeared to tick all the boxes for what we were wanting.

However, setting it up turned out not to be as trivial as I had originally anticipated. I’m going to try and put down the process we have evolved for setting it up on Ubuntu in the cloud.

A couple of things to clear up first.

  1. We are using Rackspace for our cloud, but beyond the setup of the servers it should still be relevant.
  2. There are a number of ways to interact with Rackspace’s setup, but for this we are going to use the cloud control panel.
  3. We use Ubuntu as our preferred server OS, which means that our config tends to be all over the place compared to other guides.
  4. You will need to set up a minimum of 2 servers and a separate block storage device for each.
  5. We have set up and broken a few different variations of gluster setup so far and make no guarantees that the setup in this blog is infallible, but it’s the best we have so far.

Setting up the hardware

First things first. We are going to need to set up some servers.

Feel free to create any size server you want. Just make sure to select Ubuntu 12.10 (or whatever version you may have that is newer).

You will also need to define a new network to work with. We use this to isolate the traffic between the nodes of our new gluster.

You can create a new network when creating the first of your servers. On the creation page under the networks heading you can find a “Create Network” button.

(screenshot: create-network)

Hopefully this should be quite self-explanatory. Now when you create subsequent servers you will then have the option to attach your new network (“GlusterNet” in my example).

Once the two starting nodes have been created then you need to add some additional block storage to store your data on. Make sure that you create blocks that have sufficient capacity for your needs. Something else to consider is using High Performance SSD storage. It’s a little on the pricey side but well worth the expense if you are trying to eke out every ounce of performance from the implementation.

(screenshot: block-storage)

You will then need to attach one to each of your servers.

(screenshot: attach-storage)

Once attached you will be able to see the details of the block mount point from the block storage details page.

(screenshot: storage-details)

Make a note of the mount point (in this case “/dev/xvdb”) as we will need that in a minute.

Prepare the Server

Now that we have the hardware ready we can shell into a server to set it up.

First you need to shell into your server and update its OS, as the images provided by most cloud suppliers tend not to have the latest patches and updates. In our case it’s as simple as:

apt-get update 
apt-get upgrade

Once that’s done we then need to prepare the block storage device (henceforth referred to as a “brick”).

If you run

fdisk -l

you should see an entry that looks something like this:

Disk /dev/xvdb: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdb doesn't contain a valid partition table

This indicates that our brick needs a partition table and formatting. We can achieve this by running fdisk /dev/xvdb and working through the prompts as follows:

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xe7da4288.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-209715199, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-209715199, default 209715199): 
Using default value 209715199

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

All we are doing here is answering the prompts (“n”, “p”, “1”, the default sector values, then “w”) to create a default partition table with a single partition that uses up the whole disk.

Now running

fdisk -l

again should give us something that looks like this:

Disk /dev/xvdb: 107.4 GB, 107374182400 bytes
43 heads, 44 sectors/track, 110843 cylinders, total 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe7da4288

    Device Boot      Start         End      Blocks   Id  System
/dev/xvdb1            2048   209715199   104856576   83  Linux

As you can now see, we have a valid device, /dev/xvdb1, that we can mount. However, we need to create a valid filesystem on the new brick before we can mount it. I have been doing this with Ext4 rather than XFS (which is the filesystem recommended by gluster), mainly because when I tried using XFS I kept getting some issues with performance and access. I’m sure that with further investigation I could resolve this, but as of yet I haven’t had the chance to. So far, though, I have had zero issues using Ext4. To create the filesystem we run:

mkfs.ext4 -j /dev/xvdb1

Next, create a folder to mount to, easily done by executing:

mkdir -p /glusterfs/brick

Finally, the simplest way to mount the device is via your /etc/fstab by adding the line

/dev/xvdb1       /glusterfs/brick ext4 defaults 1 2

and running

mount -a

as root (this also means that it mounts on boot for you automatically as well).
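
To confirm the brick is mounted where we expect, a quick (purely illustrative) check is:

# Verify the new filesystem is mounted at the brick path
df -h /glusterfs/brick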

Next we need to install the latest gluster version. At the time of writing this was v3.3.1. You can find a version to suit your OS at http://www.gluster.org/download. If you are using Ubuntu you can do the following

apt-get install software-properties-common
add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.3
apt-get update
apt-get install glusterfs-server glusterfs-client

By this point you will have a single working server. To continue, you’re going to need to set up your second server, ready to create your new volume.

Once you have your second (or third, fourth, etc.) server set up, it’s a good idea to add a reference to each one of them to your

/etc/hosts

file. This is not really necessary and you can just use the IP addresses of each server, but it saves you having to remember each IP and makes it easier to identify them.

Remember that we are going to be working with the new network interface you created earlier (i.e. “GlusterNet”). To get the IP of your GlusterNet interface, a quick ifconfig will show you an interface with an IP that matches the CIDR from earlier. In my case I now have 2 IPs of 192.168.3.1 & 192.168.3.2.

So now I add the following lines to my

/etc/hosts

file:

192.168.3.1 gluster1
192.168.3.2 gluster2

Creating our volume

Now that the servers are prepared we can play with the gluster tool. This tool is a lifesaver in getting everything configured quickly, and you can easily get a list of what it’s capable of by running gluster help. Now I’m not going to take you through every command and option, and would recommend reading the gluster manual to learn more.

What this tool actually does is help generate and manipulate all the required config that is then stored at /var/lib/glusterd/.

Firstly we need to tell gluster that we have a pool of servers that will communicate with each other. Gluster refers to these as peers. To do this you need to run

gluster peer probe gluster2

on each server, for each server that will be used, replacing “gluster2” with the names you defined in your /etc/hosts file. This will then create the appropriate files at /var/lib/glusterd/peers/

Now that all our peers have been defined we can get to actually creating the new distributed volume. This however requires a little consideration as there are some decisions you need to make.

If we take a look at the help for creating a new volume we can see that we need to decide on what options to use

volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>

  1. <NEW-VOLNAME>
     – what are we going to name our volume?
  2. [stripe <COUNT>] [replica <COUNT>]
     – are we going to create a striped or replicated volume, and how many “bricks” are we going to create this volume with?
  3. [transport <tcp|rdma|tcp,rdma>]
     – what transport protocol do you want the peers to communicate with?
  4. <NEW-BRICK>
     – which servers/bricks do you want to use?

For more information on how to create your volume and what all the options mean, have a look at these links:

http://gluster.org/community/documentation/index.php/Gluster_3.2:_Configuring_Distributed_Replicated_Volumes

http://gluster.org/community/documentation/index.php/Gluster_3.2:_Configuring_Distributed_Striped_Volumes

For our purposes we are going to run:

gluster volume create myvolume replica 2 transport tcp gluster1:/glusterfs/brick gluster2:/glusterfs/brick

This now creates a new volume that spans both of our servers. You can confirm that this is the case by running

gluster volume info

and you should get something that looks like this:

Volume Name: myvolume
Type: Replicate
Volume ID: d3dd24fd-9482-44c3-9503-24291fad8193
Status: Created
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/glusterfs/brick
Brick2: gluster2:/glusterfs/brick

Running this on both servers should give you the same results.

What you will now find is that the gluster command has created a plethora of files at /var/lib/glusterd/vols/myvolume/. As you work with gluster more and more you will find yourself drawn to these files as they control all the different aspects of how the volume works and performs. Most importantly, we will need some information from these files when we come to configure a client to mount the volume.

All that is left to do now is start the volume which can be easily done with a quick 

gluster volume start myvolume
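
Once started, it is worth confirming that both bricks report as online; something like the following shows the state and listening port of each brick (the output details will vary):

# Check that the volume's bricks are online
gluster volume status myvolume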

At this point we have now completed setting up our volume, but we need to add some security. I would strongly recommend setting up a firewall using ufw to control access to the server. The easiest way to do this is to allow all traffic on your “GlusterNet” network interface, as only the servers you attach to that network will have access. You can find a guide to using ufw at https://help.ubuntu.com/12.10/serverguide/firewall.html.
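
As a rough sketch of that policy (the interface name eth2 is an assumption; use whichever interface carries your GlusterNet traffic):

# Deny everything inbound by default, but keep SSH reachable
ufw default deny incoming
ufw allow ssh
# Allow all traffic arriving on the private GlusterNet interface (name is an assumption)
ufw allow in on eth2 to any
ufw enable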

Mounting a Client

Now that we have a working volume we need to add some clients. To do this you will need to create a new server as above that is attached to the “GlusterNet” network but without the block storage (unless you really want it that is).

Make sure to add your gluster definitions to your /etc/hosts file.

Once you have your new client server ready we can install the gluster client

apt-get install software-properties-common
add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.3
apt-get update
apt-get install glusterfs-client

I’ve seen a number of different guides that tell you to install glusterfs-server as well but I have as yet had no need to as it all works without it.

Now there are a lot of ways that you can mount your new Gluster volume. I have tried a few and have had varying results. What I have found is that the best way is to create a volume file. To do this we create a new file at /etc/glusterfs.vol:

volume gluster1
  type protocol/client
  option transport-type tcp
  option remote-host gluster1
  option remote-subvolume /glusterfs/brick
  option username <username>
  option password <password>
end-volume

volume gluster2
  type protocol/client
  option transport-type tcp
  option remote-host gluster2
  option remote-subvolume /glusterfs/brick
  option username <username>
  option password <password>
end-volume

volume replicate
  type cluster/replicate
  subvolumes gluster1 gluster2
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 400MB
  subvolumes writebehind
end-volume

What you will notice is that there is a <username> and <password> required for this to work. You can find these details on one of your peer servers in the file /var/lib/glusterd/vols/myvolume/trusted-myvolume-fuse.vol.

This /etc/glusterfs.vol file is basically going to inform the gluster client software about how to connect to the gluster volume and all the available nodes to connect to. This provides us with some level of fail-over, so should one node become unavailable the gluster client will seamlessly switch to a different one. It also allows us to define additional “translators”, such as the performance/io-cache one that you can see here. I would strongly recommend reading through the available translators to see which may be useful to you.
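
You can test the client configuration by hand before worrying about boot-time mounting; this uses the same mount point as the Upstart script that follows:

# Create the mount point and mount the volume via the vol file
mkdir -p /glusterfs
mount -t glusterfs /etc/glusterfs.vol /glusterfs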

Now one of the main issues you will find with Ubuntu is that it will fail on boot if you try to add this mount to your fstab. To get around this you can use Upstart. Create the following file at /etc/init/glusterfs-mount.conf, making sure to change <interface> to the interface for your GlusterNet network (i.e. eth0 or eth1 or eth2, you get the idea):

author "Matt Cockayne"
description "Mount GlusterFS after networking available"

start on net-device-up IFACE=<interface>
stop on stopping network
stop on starting shutdown

script
    mount -t glusterfs /etc/glusterfs.vol /glusterfs
end script

As you can see we are using a straight mount command. The magic is that this will not be executed until the start clause validates, which in this case is not until the network interface for “GlusterNet” is up and running properly. You will also see that we are mounting the /etc/glusterfs.vol file to /glusterfs (remember to create this folder to mount to) rather than mounting a network path as you might when mounting an NFS share.

If you wanted, you could also add more to your Upstart script to handle clean unmounting of gluster, thus allowing you to then use the service glusterfs-mount (start|stop|restart) commands.

A quick reboot of the client server should confirm that it boots successfully, and you will now end up with your volume mounted at /glusterfs. You can now test this by creating a new file. I tend to create an empty file at /glusterfs/mounted just so I have a quick reference that the folder is mounted. Once that’s created, if you now go and take a look at /glusterfs/brick on your “peers” you should see that there is now a file called “mounted” sat there looking all smug that it worked.
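
That check is easy to script; a minimal sketch using the host names and paths from above:

# On the client: create a marker file on the mounted volume
touch /glusterfs/mounted
# On each peer, the replicated file should appear inside the brick itself
ssh gluster1 ls -l /glusterfs/brick/mounted
ssh gluster2 ls -l /glusterfs/brick/mounted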

Caveats

Some important things for you to be aware of:

by on January 10, 2013

Creating An NFS-Like Standalone Storage Server With GlusterFS 3.2.x On Ubuntu 12.10

This tutorial shows how to set up a standalone storage server on Ubuntu 12.10. Instead of NFS, I will use GlusterFS here. The client system will be able to access the storage as if it was a local filesystem.

GlusterFS is a clustered file-system capable of scaling to several petabytes. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system. Storage bricks can be made of any commodity hardware such as x86_64 servers with SATA-II RAID and Infiniband HBA.
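
For context, “as if it was a local filesystem” means the client simply mounts the volume with the GlusterFS FUSE client; a minimal sketch (the server and volume names are assumptions) looks like:

# Mount a GlusterFS volume on the client (names are assumptions)
mkdir -p /mnt/glusterfs
mount -t glusterfs server1.example.com:/testvol /mnt/glusterfs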

by on July 18, 2012

OpenVZ: Mounting Host Devices/Partitions/Directories In A Container With Bind Mounts (Debian/Ubuntu)

Sometimes you are in a situation where you need to mount a hard drive, partition or directory from the OpenVZ host inside an OpenVZ container – for example, you add a fast SSD to the host and want to put your container’s MySQL databases on it to make MySQL faster. This tutorial explains how you can mount host devices/partitions/directories in an OpenVZ container with bind mounts.
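
The usual mechanism is a per-container mount script that vzctl runs on the host when the container starts; a minimal sketch (the container ID 101 and the paths are assumptions) might look like:

#!/bin/bash
# Sketch of /etc/vz/conf/101.mount, executed on the host when container 101 starts
source /etc/vz/vz.conf      # global OpenVZ settings
source ${VE_CONFFILE}       # per-container settings, including VE_ROOT
# Bind-mount a host directory into the container's filesystem
mount -n --bind /ssd/mysql ${VE_ROOT}/var/lib/mysql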

by on July 9, 2012

Striping Across Four Storage Nodes With GlusterFS 3.2.x On Ubuntu 12.04

This tutorial shows how to do data striping (segmentation of logically sequential data, such as a single file, so that segments can be assigned to multiple physical devices in a round-robin fashion and thus written concurrently) across four single storage servers (running Ubuntu 12.04) with GlusterFS. The client system (Ubuntu 12.04 as well) will be able to access the storage as if it was a local filesystem.

GlusterFS is a clustered file-system capable of scaling to several petabytes. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system. Storage bricks can be made of any commodity hardware such as x86_64 servers with SATA-II RAID and Infiniband HBA.
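
The heart of such a setup is a single striped volume spanning one brick on each of the four servers; a rough sketch of the commands (host names and brick paths are assumptions) is:

# From server1: add the other three servers to the trusted pool
gluster peer probe server2.example.com
gluster peer probe server3.example.com
gluster peer probe server4.example.com
# Create a volume striped across one brick per server, then start it
gluster volume create testvol stripe 4 transport tcp \
  server1.example.com:/data server2.example.com:/data \
  server3.example.com:/data server4.example.com:/data
gluster volume start testvol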

by on July 1, 2012

Distributed Storage Across Four Storage Nodes With GlusterFS 3.2.x On Ubuntu 12.04

This tutorial shows how to combine four single storage servers (running Ubuntu 12.04) into one large storage server (distributed storage) with GlusterFS. The client system (Ubuntu 12.04 as well) will be able to access the storage as if it was a local filesystem.

GlusterFS is a clustered file-system capable of scaling to several petabytes. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system. Storage bricks can be made of any commodity hardware such as x86_64 servers with SATA-II RAID and Infiniband HBA.
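
A distributed volume is simply one created with no replica or stripe count, so whole files are spread across the bricks; a rough sketch (host names and brick paths are assumptions) is:

# With all four servers peered, create a plain distributed volume and start it
gluster volume create testvol transport tcp \
  server1.example.com:/data server2.example.com:/data \
  server3.example.com:/data server4.example.com:/data
gluster volume start testvol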