The Gluster Community and SwiftStack would like to invite you to join our sprint at PyCon 2013, taking place in the Hyatt Regency, Santa Clara, CA, on Monday, March 18, 2013.
The sprint is meant for anyone who wants to create apps leveraging the Swift API – which means anyone who’s ever wanted or needed to target cloud storage.
Swift is a highly available, distributed, eventually consistent object store, accessed over a REST API. Because that API serves as a standard for distributed object access, an application written against Swift is portable to any compliant platform, including Gluster and Amazon S3.
Since Swift uses REST, applications can use it in many languages, from Python, to Java, to bash, to C#, to anything else that can manage to open a socket.
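To make that concrete, here’s a rough sketch of what talking to Swift looks like with nothing but curl, assuming a tempauth-style endpoint at a hypothetical swift.example.com (substitute your own account, user, and password):
`curl -i -H 'X-Storage-User: myaccount:myuser' -H 'X-Storage-Pass: mypassword' https://swift.example.com/auth/v1.0`
`curl -i -H 'X-Auth-Token: <token from the previous response>' https://swift.example.com/v1/AUTH_myaccount`
The first request returns X-Auth-Token and X-Storage-Url headers; the second uses that token to list the account’s containers.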
Not only will the sprint be an excellent setting in which to stretch your cloud storage skills and interests, but if you develop the best application, you could take home a prize.
The sprint agenda is in ongoing development, but you can take a look as it evolves and add your thoughts to the sprint wiki page.
We hope to see you at PyCon 2013!
A little while back, I tested out the Unified File and Object feature in Gluster 3.3, which taps OpenStack’s Swift component to handle the object half of the file and object combo. It took me kind of a long time to get it all running, so I was pleased to find this blog post promising a Quick and Dirty guide to UFO setup, and made a mental note to return to UFO.
When my colleague John Mark asked me about this iOS Swift client from Rackspace, I figured that now would be a good time to revisit UFO, and do it on one of the Google Compute Engine instances available to me while I’m in my free trial period with the newest member of Google’s cloud computing family. (OpenStack, iOS & Cloud: Feel the Search Engine Optimization!)
That Quick and Dirty Guide
The UFO guide, written by Kaleb Keithley, worked just as quickly as advertised: start with Fedora 16, Fedora 17, or RHEL 6 (or one of the RHEL 6 rebuilds) and end with a simple Gluster install that abides by the OpenStack Swift API. I installed on CentOS 6 because that, along with Ubuntu, is what’s supported right now in Google Compute Engine.
Kaleb notes at the bottom of his post that you might experience authentication issues with RHEL 6. I didn’t have that problem, but I did have to add the extra step of starting the memcached service manually (service memcached start) before starting up the swift service (swift-init main start).
The guide directs you to configure a repository that contains the up-to-date Gluster packages needed. I’m familiar with this repository, as it’s the same one I use on my F17 and CentOS 6 oVirt test systems. I also had to configure the EPEL repository on my CentOS 6 instance, as UFO requires some packages not available in the regular CentOS repositories.
I diverged from the guide in one other place. Where the guide asks you to add a user line to the [filter:tempauth] section of /etc/swift/proxy-server.conf, I found that I had to tack an extra URL onto the end of that line to make the iOS client work:
user_$myvolname_$username=$password .admin https://$myhostname:443/v1/AUTH_$myvolname
Without the extra URL, my UFO setup was pointing the iOS client to a 127.0.0.1 address, which, not surprisingly, the iOS device wasn’t able to access.
The iOS Client (and the Android non-client)
Rackspace’s Cloud Mobile application enables users of the company’s Cloud Servers and Cloud Files offerings to access these services from iOS and Android devices. I tried out both platforms, the former on my iPod Touch (recently upgraded to iOS 6) and the latter on my Nexus S 4G smartphone (which runs a nightly build of Cyanogenmod 10).
My subhead above says Android non-client because, as reviewers in the Google Play store and the developer in this github issue comment both indicate (but the app description and [non-existent] docs do not), the current version of the Android client doesn’t work with the recent, Swift-based incarnation of Rackspace’s Cloud Files service.
What’s more, the Android version of the client does not allow any modification of one’s account settings. When I was trial-and-erroring my way toward figuring out the right account syntax, this got pretty annoying. Also annoying was the absence of any detailed error messages.
Things were better (albeit still undocumented) with the iOS version of the client, which allowed for account details editing, for ignoring invalid ssl certs, and for viewing the error message returned by any failed API operations.
In the parlance of the above Gluster UFO setup guide, here are the correct values for the account creation screen (the one you reach in the iOS client after selecting “Other” on the Provider screen):
- Username: $myvolname:$username
- API Key: $password
- Name: $whateveryouwant
- API Url: https://$myhostname:443/auth/v1.0
- Validate SSL Certificate: OFF
After getting those account details in place, you’ll be able to view the Swift/Gluster containers accessible to your account, create new containers, and upload and download files to and from those containers. There were no options for managing permissions through the iOS client, so when I wanted to make a container world-readable, I did it from a terminal, using the API.
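For the curious, here’s the sort of API call I mean: a sketch using Swift’s standard X-Container-Read ACL header (the $-placeholders follow the setup guide’s conventions, and I’m assuming the gluster-swift build honors standard Swift container ACLs):
`curl -v -X POST -H 'X-Auth-Token: $authtoken' -H 'X-Container-Read: .r:*' -k https://$myhostname:443/v1/AUTH_$myvolname/$mycontainername`
Setting X-Container-Read to .r:* marks the container as readable by anyone, no token required.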
Google Compute Engine
As I mentioned above, I tested this on Google Compute Engine, the Infrastructure-as-a-Service offering that the search giant announced at its last Google I/O conference. I excitedly signed up for the GCE limited preview as soon as it was announced, but for various reasons, I haven’t done as much testing with it as I’d planned.
Here are my bullet-point impressions of GCE:
- CentOS or Ubuntu — On GCE, for now, you run the instance types they give you, and that’s either CentOS 6 or Ubuntu 10.04. You can create your own images, by modifying one of the stock images and going through a little process to export and save it. This comes in handy, because, for now, on GCE, there are…
- No persistent instances — It’s like the earlier days of Amazon EC2. Your VMs lose all their changes when they terminate. There is, however…
- Persistent storage available — You can’t store VMs in persistent images, but you can hook up your VMs to virtual disks that persist, for storing data.
- No SELinux — The CentOS images come with SELinux disabled. This turned out to be annoying for me, as OpenShift Origin and oVirt both expect to find SELinux enabled. This cut short a pair of my tests. I was able to modify the oVirt Engine startup script not to complain about SELinux, but was then foiled due to…
- Monolithic kernel (no module loading) — oVirt engine, which I’d planned to test with a Gluster-only cluster (real virt wouldn’t have worked atop the already-virtualized GCE), wanted to load modules, and there’s no module-loading allowed (for now) on GCE. All told, though…
- GCE is a lot like EC2 — With a bit of familiarity with the ways of EC2, you should feel right at home on GCE. I opened firewall ports for access to port 443 and port 22 using security groups functionality that’s much like what you have on EC2. You launch instances in a similar way, with Web or command line options, and so on.
Back in February 2011, when I joined what ultimately became part of the GlusterFS development team at Red Hat, I had already been interested in low power — as in low power consumption — computing for a long time. For most of my earlier explorations I had used a Linksys WRT54G router — which uses a MIPS-based SoC — and the OpenWRT Linux distribution. My primary focus back then was to see if I could shoehorn onto it the important bits of software that my then employer was shipping on its Intel/Linux-based product. As you might guess, the constraints of the platform were severely limiting — not enough memory, little to no storage, slow CPU, slow network, etc., etc.
I was excited to discover, on my arrival, that the Fedora Project was spearheading a new effort to focus on ARM in general, and was working on cleaning up and rationalizing all the various sources for ARM devices in the Linux kernel source. As icing on the cake, they were targeting several modern and affordable devices, including the BeagleBoard xM, various Dreamplug, Guruplug, Sheevaplug, and Pandaboards, and the TrimSlice, to name a few. What’s appealing about these devices, among other things, is they have 1GHz CPUs, hardware floating point, 512MB RAM, 100baseT ethernet, and USB ports. The TrimSlice has a dual core CPU and SATA onboard too, but it’s substantially more expensive.
I ordered a BeagleBoard xM and attempted to install the preliminary versions of Fedora that were available then. I had issues with a variety of things and eventually decided to abandon the BeagleBoard and buy a TrimSlice H instead. Fedora ARM support had matured somewhat by then, and I sailed through the install with no problems; a much better experience. In the meantime, the Raspberry Pi was announced and I put my name on the waiting list to get one.
One thing to note: currently all ARM CPUs are 32-bit, and most are Little-Endian; all the devices I’ve mentioned so far are Little-Endian. While there are Big-Endian ARM CPUs, they seem to be rare. I’m not aware that anyone has ever run GlusterFS on a Big-Endian machine, even though there are PowerPC builds of GlusterFS for RHEL. The first 64-bit ARM CPUs are in the works, and IIRC we’ll start seeing some of the first ones around the end of 2013.
As it happens, I’m one of the maintainers of GlusterFS for Fedora. For the past year or more, I’ve been doing all the GlusterFS builds in Fedora’s Koji build system. As a result I’ve almost come to prefer to rpmbuild GlusterFS over running make. (There’s probably a pill for that.) In addition to Fedora’s yum repository I also maintain my own yum repository of GlusterFS-3.3.x for Fedora 17 and earlier, and RHEL (including CentOS), where Fedora/EPEL continue to ship GlusterFS-3.2.x for a number of reasons that I won’t go into here. Since I prefer to build GlusterFS with rpmbuild, it was a natural choice to take a GlusterFS source RPM and install it on my TrimSlice. Source RPMs are installed like any other RPM, and the contents land in your ~/rpmbuild/… directory. Then it’s a simple matter to run rpmbuild with
`rpmbuild -bb ~/rpmbuild/SPECS/glusterfs.spec`, wait a few minutes, then install the newly made GlusterFS RPMs with
`cd ~/rpmbuild/RPMS/armhfp; yum localinstall glusterfs*`
The TrimSlice H has room in its case for a 2.5″ laptop drive. I had a 320GB drive lying around after upgrading the drive in my work laptop, so I plugged that into the on-board SATA connector. (That was actually part of the install. There are two install options for the TrimSlice: one is to install on and run from an SD card, the other is to install on and run from the SATA drive. I chose the latter.) When I originally installed I left room on the disk for extra partitions. Now, to use GlusterFS, I added two more partitions to fill up the rest of the drive. Then I made a btrfs file system on one and an xfs file system on the other. We recommend that you create larger inodes on xfs bricks with `mkfs.xfs -i size=512`. These will be my GlusterFS bricks; I’ve mounted them at /bricks/btrfs and /bricks/xfs. Enable and start glusterd with
`systemctl enable glusterd.service; systemctl start glusterd.service`, create the volumes with
`gluster volume create btrfs $brickhostname:/bricks/btrfs; gluster volume create xfs $brickhostname:/bricks/xfs`, start the volumes with
`gluster volume start btrfs; gluster volume start xfs`, et voila, I’m done. You can mount these volumes on your clients with NFS (use -o tcp,vers=3 on most Linux) or through Gluster native FUSE with
`mount -t glusterfs $brickhostname:btrfs $mountpoint` Remember, you’ll need to install the GlusterFS RPMs on your clients to use the Gluster native FUSE option.
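For reference, here’s a condensed sketch of the brick preparation described above; the partition device names are hypothetical placeholders, so adjust them to your own disk layout:
`mkfs.btrfs /dev/sda3`
`mkfs.xfs -i size=512 /dev/sda4`
`mkdir -p /bricks/btrfs /bricks/xfs`
`mount /dev/sda3 /bricks/btrfs; mount /dev/sda4 /bricks/xfs`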
Then, out of the blue, the Raspberry Pi I’d ordered so long ago finally arrived. This little board is about 5cm square and has 100baseT ethernet, an HDMI port, and two USB ports. It’s a bit underpowered, with only a 700MHz CPU and no hardware floating point. The Fedora Project offers a “remix” of Fedora 17 for the Pi; it’s a remix because all the kernel bits haven’t made it into the official kernel source yet, so it’s Fedora 17 with a one-off kernel. Similar to setting up the TrimSlice, I borrowed a 1TB WD Caviar Blue drive and a $15 USB/SATA drive “dock” and created three partitions on the drive: one swap and two file systems. I needed the swap space on the drive because I found that I ran out of memory when compiling. I hadn’t noticed the memory issue on the TrimSlice because I had created a swap device on its drive as a matter of routine. The Pi can’t boot from the drive; it runs strictly from the SD card. With swap space on the drive, though, there’s more than enough memory to compile. As before, I created a btrfs file system on one of the remaining partitions and xfs on the other, built and installed GlusterFS, started glusterd, created and started the volumes as before, et voila, done.
When I thought the Raspberry Pi was never going to arrive, someone here at Red Hat arranged a bulk order of Gooseberry boards. These are $45-ish boards originally intended for an inexpensive tablet that somehow made their way out into the world sans the rest of the tablet. They have an SD slot, a mini USB port, and wireless networking. Fedora doesn’t run on them yet, I need to track down the Ubuntu release for this board and get it set up. More on that in another blog entry later.
And I was able to resurrect my BeagleBoard. This time around I had a much better experience getting it set up. I have another drive dock on order, and I will soon have another pair of bricks in my GlusterFS storage cluster.
Finally, HP and Calxeda are making server-class hardware you can buy today.
This sets up a GlusterFS Unified File and Object (UFO) server on a single node (single brick) Gluster server using the RPMs contained in my YUM repo at http://repos.fedorapeople.org/repos/kkeithle/glusterfs/. This repo contains RPMs for Fedora 16, Fedora 17, and RHEL 6. Alternatively you may use the glusterfs-3.4.0beta1 RPMs from the GlusterFS YUM repo at http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.0beta1/
1. Add the repo to your system. See the README file there for instructions. N.B. If you’re using CentOS or some other RHEL clone you’ll want (need) to add the Fedora EPEL repo — see http://fedoraproject.org/wiki/EPEL.
2. Install glusterfs and UFO (remember to enable the new repo first):
- glusterfs-3.3.1 or glusterfs-3.4.0beta1 on Fedora 17 and Fedora 18:
`yum install glusterfs glusterfs-server glusterfs-fuse glusterfs-swift glusterfs-swift-account glusterfs-swift-container glusterfs-swift-object glusterfs-swift-proxy glusterfs-ufo`
- glusterfs-3.4.0beta1 on Fedora 19, RHEL 6, and CentOS 6:
`yum install glusterfs glusterfs-server glusterfs-fuse openstack-swift openstack-swift-account openstack-swift-container openstack-swift-object openstack-swift-proxy glusterfs-ufo`
3. Start glusterfs:
- On Fedora 17, Fedora 18:
`systemctl start glusterd.service`
- On Fedora 16 or RHEL 6:
`service glusterd start`
4. Create a glusterfs volume:
`gluster volume create $myvolname $myhostname:$pathtobrick`
5. Start the glusterfs volume:
`gluster volume start $myvolname`
6. Create a self-signed cert for UFO:
`cd /etc/swift; openssl req -new -x509 -nodes -out cert.crt -keyout cert.key`
7. Fix up some files in /etc/swift:
`mv swift.conf-gluster swift.conf`
`mv fs.conf-gluster fs.conf`
`mv proxy-server.conf-gluster proxy-server.conf`
`mv account-server/1.conf-gluster account-server/1.conf`
`mv container-server/1.conf-gluster container-server/1.conf`
`mv object-server/1.conf-gluster object-server/1.conf`
8. Configure UFO (edit /etc/swift/proxy-server.conf):
+ add your cert and key to the [DEFAULT] section:
bind_port = 443
cert_file = /etc/swift/cert.crt
key_file = /etc/swift/cert.key
+ add one or more users of the gluster volume to the [filter:tempauth] section:
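For example, a line of this form, with the $-placeholders filled in with your own values (this mirrors the tempauth line shown in the iOS client post above):
user_$myvolname_$username=$password .admin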
+ add the memcache address to the [filter:cache] section:
memcache_servers = 127.0.0.1:11211
9. Generate builders:
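The glusterfs-ufo packaging ships a helper script for this step; if memory serves, the invocation looks like the line below, with your volume name as the argument (treat the exact script name as an assumption and check what the package installed under /usr/bin):
`gluster-swift-gen-builders $myvolname`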
10. Start memcached:
- On Fedora 17:
`systemctl start memcached.service`
- On Fedora 16 or RHEL 6:
`service memcached start`
11. Start UFO:
`swift-init main start`
» This has bitten me more than once. If you ssh -X into the machine running swift, it’s likely that sshd will already be using ports 6010, 6011, and 6012, and will collide with the swift processes trying to use those ports «
12. Get authentication token from UFO:
`curl -v -H 'X-Storage-User: $myvolname:$username' -H 'X-Storage-Pass: $password' -k https://$myhostname:443/auth/v1.0`
(authtoken similar to AUTH_tk2c69b572dd544383b352d0f0d61c2e6d)
13. Create a container:
`curl -v -X PUT -H 'X-Auth-Token: $authtoken' https://$myhostname:443/v1/AUTH_$myvolname/$mycontainername -k`
14. List containers:
`curl -v -X GET -H 'X-Auth-Token: $authtoken' https://$myhostname:443/v1/AUTH_$myvolname -k`
15. Upload a file to a container:
`curl -v -X PUT -T $filename -H 'X-Auth-Token: $authtoken' -H 'Content-Length: $filelen' https://$myhostname:443/v1/AUTH_$myvolname/$mycontainername/$filename -k`
16. Download the file:
`curl -v -X GET -H 'X-Auth-Token: $authtoken' https://$myhostname:443/v1/AUTH_$myvolname/$mycontainername/$filename -k > $filename`
More information and examples are available from
N.B. We (Red Hat, Gluster) generally recommend using xfs for brick volumes; or if you’re feeling brave, btrfs. If you’re using ext4 be aware of the ext4 issue* and if you’re using ext3 make sure you mount it with -o user_xattr.
Within the past couple weeks, Fedora and Gluster rolled out new versions, packed with too many features to discuss in a single blog post. However, a couple of the stand-out updates in each release overlap neatly enough to tackle them together–namely, the inclusion of OpenStack Essex in Fedora 17 and support for using Gluster 3.3 as a storage backend for OpenStack.
I’ve tested OpenStack a couple of times in the past, and I’m happy to report that while the project remains a fairly complicated assemblage of components, the community around OpenStack has done a good job documenting the process of setting up a basic test rig. Going head to head with Amazon Web Services, even within the confines of one’s own organization, won’t be a walk in the park, but it’s fairly easy to get OpenStack up and running in a form suitable for further learning and experimentation.
OpenStack on Fedora 17
The getting started with OpenStack on Fedora 17 howto that I followed for my latest test involves quite a bit of command-line cut and paste, but it didn’t take long for me to go from a minimal-install Fedora 17 virtual machine to a single-node OpenStack installation, complete with compute, image hosting, authentication, and dashboard services–everything I needed to launch VMs, register images, and manage everything from the comfort of a web UI.
A couple of notes: I did everything on this minimal-install Fedora machine as root–since this is a soon-to-be-blown-away test VM, I didn’t bother to create additional users. You may need to sprinkle in some sudos if you’re running as non-root. Also, I hit at least one issue with SELinux (related to glance) during my tests. I never turn off SELinux by default, but once I hit an error on a test box, I throw it into permissive mode.
Also, I elected to run the whole show (the openstack part of it, at least) within a single virtual machine running on my home oVirt installation, so the performance of my guest instances was very slow, but everything worked well enough for me to take OpenStack for a spin, and get to fiddling with trickier OpenStack topics, such as…
The one OpenStack element that the Fedora howto touches on only briefly is OpenStack Swift, the object storage system intended to replace Amazon’s S3. Here’s what the howto has to say about Swift:
These are the minimal steps required to setup a swift installation with keystone authentication, this wouldn’t be considered a working swift system but at the very least will provide you with a working swift API to test clients against, most notably it doesn’t include replication, multiple zones and load balancing.
(Configure swift with keystone)
What an ideal segue for Gluster 3.3, a storage software project with replication and load balancing as its stock in trade. The Gluster portion of my tests was quite a bit trickier than the OpenStack on Fedora part had been, but I learned a lot about Gluster and OpenStack along the way.
Building Gluster 3.3 Packages
First off, Gluster 3.3 shipped a bit after Fedora 17, and the version of Gluster available in the Fedora software repositories is still at 3.2. What’s more, the 3.3 packages offered by the Gluster project target Fedora 16, as well. The Fedora folder on the Gluster download server doesn’t include any source rpms, but I found a spec file for building Fedora rpms in the Gluster source tarball on the download server.
On my Fedora 17 notebook, I fetched the build dependencies for Gluster 3.2 using the command yum-builddep from the yum-utils package:
sudo yum-builddep glusterfs
I grabbed the file glusterfs.spec from the glusterfs-3.3.0.tar.gz tarball, dropped it in ~/rpmbuild/SPECS, and put the tarball into ~/rpmbuild/SOURCES. If you don’t have rpm-build installed on your Fedora machine, you’ll need to do that, as well.
Next, I built my Gluster 3.3 packages for F17:
rpmbuild -bb ~/rpmbuild/SPECS/glusterfs.spec
Then, I copied the packages over to my OpenStack test machine and updated the glusterfs and glusterfs-fuse packages that had been pulled in as dependencies during my OpenStack on F17 install:
scp ~/rpmbuild/RPMS/x86_64/glusterfs-* root@openstackF17:/root
ssh root@openstackF17 yum install -y ./glusterfs-3.3.0-1.fc17.x86_64.rpm glusterfs-fuse-3.3.0-1.fc17.x86_64.rpm
Gluster+OpenStack: The Easy Way
As described on the Connecting with OpenStack Resource Page on the Gluster wiki, there are two ways of using Gluster with OpenStack. The first is super simple, and amounts to locating the images for your running OpenStack instances on Gluster by simply mounting a Gluster volume at the spot where OpenStack expects to place these images. On the resource page, there’s a PDF titled OpenStack VM Storage Guide that steps through the process of creating a four node distributed-replicated volume and mounting it in the right spot. Easy.
I did this with my test OpenStack setup, and it worked as advertised. I kicked off a yum update operation in one of my OpenStack instances, and then ungracefully shut down (pulled the virtual plug on) the Gluster VM node that the instance was calling home. I watched as the yum update process paused for a short time before continuing happily enough on one of the other Gluster nodes I’d configured.
Where things got quite a bit trickier was with the second OpenStack-Gluster integration option, that for Unified Object and File Storage. Gluster’s UFO is based on a slightly modified version of OpenStack Swift, where Gluster brings the storage, and users are able to access files and content either as objects, through Swift’s REST interface, or as regular files, through Gluster’s FUSE or NFS mounts.
Building Gluster UFO Packages
Again, I started by building some packages. The Gluster download site offers UFO (aka gluster-swift) packages for enterprise Linux 6 (RHEL and its relabeled children). There’s a source tarball, but unlike the main glusterfs tarball, the gluster-swift tarball doesn’t include a spec file for building rpms. I located spec files for gluster-swift and gluster-swift-plugin at Gluster’s github site, but these spec files referenced a handful of patches that weren’t in the git repository, so I wasn’t able to build them.
After Googling a while for the missing patches, I found source rpms for gluster-swift and gluster-swift-plugin in a public source repository for Red Hat Storage 2.0. Both of these packages are a hair older than the ones in the Gluster download location: gluster-swift-1.4.8-3 vs. 1.4.8-4 and gluster-swift-plugin-1.0-1 vs. 1.0-2, but I forged ahead with these.
I had to tweak the SPEC files slightly, changing references to the python2.6 in el6 to the python2.7 that ships with Fedora 17, but I managed to build both of them without much hassle, before copying them over to my openstack test machine and installing them:
rpmbuild -bb ~/rpmbuild/SPECS/gluster-swift.spec
rpmbuild -bb ~/rpmbuild/SPECS/gluster-swift-plugin.spec
scp ~/rpmbuild/RPMS/noarch/gluster-swift* root@openstackF17:/root
ssh root@openstackF17 yum install -y ./gluster-swift-*
Gluster-Swift + OpenStack
Over on our openstackF17 machine, the gluster-swift package has placed a bunch of configuration files in /etc/swift. We’re going to leave most of these configurations in place, but we need to make a few modifications, starting with fs.conf:
I’m using the four VM gluster cluster described in the OpenStack VM Storage Guide I mentioned above, which is remote from my openstack server, so I have to change “mount_ip” to the ip of one of my gluster servers, and change “remote_cluster” to yes. If my gluster volume, or part of it, was local, I could have left these values alone.
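After those edits, the relevant bits of my fs.conf looked roughly like the snippet below; the IP is a placeholder for one of your own gluster servers:
[DEFAULT]
mount_ip = 10.1.1.11
remote_cluster = yes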
The other thing required to make the remote gluster cluster bit work is enabling passwordless ssh login between my openstackF17 machine and the gluster server I pointed to in fs.conf:
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub root@gluster1
More config file editing. Next up, proxy-server.conf. In order to get gluster-swift working with OpenStack’s Keystone authentication service, we’re going to grab some of the configuration info from the Fedora 17 OpenStack guide:
Change the “pipeline” line under [pipeline:main], adding “authtoken keystone” to the line, and removing “tempauth”:
pipeline = healthcheck cache authtoken keystone proxy-server
And then add these two sections, for the keystone and authtoken filters we just added to the pipeline. As to the “are these needed” comment question, that comes from the howto in the Fedora wiki, and I don’t know the answer, so I left it in:
[filter:keystone]
paste.filter_factory = keystone.middleware.swift_auth:filter_factory
operator_roles = admin, swiftoperator
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_port = 35357
auth_host = 127.0.0.1
auth_protocol = http
admin_token = ADMINTOKEN
# ??? Are these needed?
service_port = 5000
service_host = 127.0.0.1
service_protocol = http
auth_token = ADMINTOKEN
If you followed along with the Fedora 17 OpenStack howto, you’ll have a file (keystonerc) in your home directory that sets your OpenStack environment variables. Let’s make sure our variables are set correctly:
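If the variables aren’t already set in your current shell, source that file and eyeball the result; a quick sketch, assuming the variable names used by the Fedora howto’s keystonerc:
`. ~/keystonerc`
`env | grep -E 'OS_|ADMIN_TOKEN'`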
Next, we run these commands to replace some placeholder values in our proxy-server.conf file:
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_token $ADMIN_TOKEN
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_token $ADMIN_TOKEN
Now we add the Swift service and endpoint to Keystone:
SERVICEID=$(keystone service-create --name=swift --type=object-store --description="Swift Service" | grep "id " | cut -d "|" -f 3)
echo $SERVICEID # just making sure we got a SERVICEID
keystone endpoint-create --service_id $SERVICEID --publicurl "http://127.0.0.1:8080/v1/AUTH_$(tenant_id)s" --adminurl "http://127.0.0.1:8080/v1/AUTH_$(tenant_id)s" --internalurl "http://127.0.0.1:8080/v1/AUTH_$(tenant_id)s"
Gluster-swift will be looking for Gluster volumes that correspond to Swift account names. We need to figure out what names we need, and create Gluster volumes with those names. We ask Keystone about our account names:
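That command, which I’ll reference again a bit further down, is:
`keystone tenant-list`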
In my setup, this turns up four accounts:
| id | name | enabled |
| 18571133bf9b4236be0ad45f2ccff135 | invisible_to_admin | True |
| 1918b675fa1f4b7f87c2bb3688f6f2f7 | admin | True |
| 42c41f15e6a24fa5b105e89b60af18fb | demo | True |
| decd4d68f50345eeb2eae090e2d32dcb | service | True |
So far, I’ve needed volumes for the admin and demo accounts. You’ll need to name your Gluster volumes after the value in the “id” column. Following the four node example in the OpenStack VM Storage Guide, the command (which you must run from one of your gluster nodes) will look like this, substituting your own Gluster node IPs and your volume name values from keystone tenant-list:
gluster volume create 42c41f15e6a24fa5b105e89b60af18fb replica 2 10.1.1.11:/vmstore 10.1.1.12:/vmstore 10.1.1.13:/vmstore 10.1.1.14:/vmstore
Run the command again so you have volumes that correspond to both the admin and demo tenant ids.
Each Gluster volume needs its own mount point. You don’t have to create your mount points manually on each server. And again, the Gluster volume doesn’t have to live on a remote cluster. Any properly named Gluster volume on a server that gluster-swift knows about (from fs.conf, which we modded earlier) and can access passwordlessly (red spell check underline be damned) ought to work.
All right, almost done. Start or restart memcached, and start gluster-swift:
service memcached restart
swift-init main start
Now, we should be able to test gluster-swift:
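The simplest smoke test is the swift list command referenced in the dashboard discussion below, assuming the swift command-line client is installed and your keystonerc variables are set (you may need to pass the auth URL and credentials explicitly if your client doesn’t pick them up from the environment):
`swift list`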
If all is well, gluster-swift should try to mount the admin volume (the keystonerc file is telling swift to use the admin account), and satisfying hard drive activity gurgling sounds should ensue. If you run the command “mount” you should see that you have a Gluster volume mounted at the mount point “/mnt/gluster-object/AUTH_YOURADMINVOLNAME”. Like so:
gluster1:1918b675fa1f4b7f87c2bb3688f6f2f7 on /mnt/gluster-object/AUTH_1918b675fa1f4b7f87c2bb3688f6f2f7 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
You can test uploading to the volume from the command line:
swift upload container /path/to/file
You ought to be able to ssh in to one of your gluster nodes, navigate to the mount point that corresponds to your admin account volume, and see the file you just uploaded.
For a more GUI-ful experience, we can check out our snazzy gluster-swift store from the OpenStack dashboard (you’ll have installed this if you followed the OpenStack Fedora 17 howto). Make sure your firewall is down or you have port 80 open, and restart your web server for good measure:
service httpd restart
Visit the dashboard at http://YOUROPENSTACKSERVERIP/dashboard, and log in with admin and (assuming you retained the password default from the howto) verybadpass. In the left nav column, click the “Project” tab. The default project is “demo” (which is why we had to create a demo volume). In the left nav column, under “Object Store,” click “Containers,” and create, delete, upload to, download from, etc. at will. In the background, just as with the “swift list” command, gluster-swift should be reacting to the dashboard’s requests by mounting your Gluster volume.
For Further Study: Glance on Gluster-Swift
By default, OpenStack’s image-hosting service, Glance, stores its images in a local directory, but it’s possible to use Swift as a back end for that image storage by changing the backend listed in /etc/glance/glance-api.conf from “file” to “swift” and by correctly hooking up the authentication details there. I’ve yet to get this working, though.
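For anyone who wants to pick up where I left off, the glance-api.conf settings involved look roughly like the sketch below. These are the Essex-era option names as I understand them, with placeholder values, so double-check them against the comments in your own glance-api.conf:
default_store = swift
swift_store_auth_address = http://127.0.0.1:5000/v2.0/
swift_store_user = service:glance
swift_store_key = $glancepassword
swift_store_container = glance
swift_store_create_container_on_put = True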
In this OpenStack on Ubuntu howto, the author notes that a glance package from a particular PPA is required to make this work, due to some issue in the latest (as of 5/28/12) glance package from the official repos. I took a peek at the patches included in this substitute package, and couldn’t immediately tell what, if anything, might be missing from Fedora’s glance package.
If you’re still with me, and you’re interested in setting up all or part of this yourself, don’t hesitate to ask me questions–I puzzled over this for a week or so, and if I can save you some time, that’ll make my toiling more worthwhile to me. Fire away in the comments below, or hit me up on IRC. I’m jbrooks on freenode IRC, and #gluster is one of the channels where you can find me.