all posts tagged RHEL


December 7, 2014

Configured Zabbix to keep my server cool

Recently I got myself an APC NetShelter CX Mini. It is a 12U rack with integrated fans for cooling. At the moment it is populated with some ARM boards (not rack mounted), their PDUs, a switch and (for now) one 2U server.

Surprisingly, the fans of the NetShelter are louder than the server (the rest does not have fans at all, except for the switch). But keeping the fans turned off all the time is not an option. When the server is idle, the temperature of its CPUs stays somewhere between 40 and 50 degrees Celsius. However, when starting several virtual machines for testing some Gluster changes, the temperature rises steadily. To prevent overheating, the fans of the cabinet need to be turned on.

Of course, turning on the fans manually is possible, but it requires me to plug in the power cable. This is not very convenient, as the cabinet is normally kept closed to reduce the noise. With the PDUs and fence_netio from the fence-agents-netio package, the fans inside the cabinet can be controlled remotely. That was a great step already!

Well, things can be even better. I don't want to monitor the temperature of my server myself and then decide when to turn on the fans. After spending some time looking for and comparing different monitoring solutions, I settled on trying Zabbix. Packages for Zabbix are available in Fedora and EPEL, which makes trying it out pretty simple.

After a couple of hours of playing with the installation and configuration, I was able to monitor the basics of my server. With a little manual configuration, the Zabbix Agent on the server can send the temperatures of the CPUs. All I had to do was set up a UserParameter in /etc/zabbix_agentd.conf:

UserParameter=cpu.temp.0,sensors | sed -n -r '/Physical id 0/s/^.*:[[:space:]]+\+([[:digit:]]+\.[[:digit:]]+).*$/\1/p'
UserParameter=cpu.temp.1,sensors | sed -n -r '/Physical id 1/s/^.*:[[:space:]]+\+([[:digit:]]+\.[[:digit:]]+).*$/\1/p'

The above configuration snippet tells the Zabbix Agent on the server to execute sensors (from the lm_sensors package), and filter the output through a sed command. The result is captured and sent to the Zabbix Server.
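A quick way to check the new keys locally is the test mode of the agent; a sketch, assuming the agent binary is installed as zabbix_agentd (the value shown is made up):

$ zabbix_agentd -t cpu.temp.0
cpu.temp.0                                    [t|46.0]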

With the new cpu.temp.0 and .1 keys, these temperature items can be used in the Zabbix webui to set up a trigger, which then invokes an action when the temperature rises above 55 degrees Celsius. When the trigger enters the PROBLEM state, the action calls fence_netio and turns on the port that has the cable for the fans connected. When the trigger returns to normal (checked every 5 minutes, now moved to 10), the port is disabled again.
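The trigger expression for this looks roughly like the following sketch, in Zabbix 2.x syntax; the hostname myserver is made up, and recent Zabbix versions spell the OR operator as "or" instead of "|":

{myserver:cpu.temp.0.last(0)}>55 | {myserver:cpu.temp.1.last(0)}>55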

This is the first time that I have actually set up monitoring with some custom actions. It was quite fun, and I'm certainly happy with the result.

November 5, 2014

Installing GlusterFS 3.4.x, 3.5.x or 3.6.0 on RHEL or CentOS 6.6

With the release of RHEL-6.6 and CentOS-6.6, there are now glusterfs packages in the standard channels/repositories. Unfortunately, these are only the client-side packages (like glusterfs-fuse and glusterfs-api). Users that want to run a Gluster Server on a current RHEL or CentOS now have difficulties installing any of today's current versions of the Gluster Community packages.

The most prominent issue is that the glusterfs package from RHEL has version 3.6.0.28, which is higher than the version 3.6.0 that was released last week. RHEL is shipping a pre-release that was created while the Gluster Community was still developing 3.6. An unfortunate packaging decision added a .28 to the version, where most other pre-releases would fall back to an (rpm-)version like 3.6.0-0.1.something.bla.el6. The difference might look minor, but the result is a major disruption of the much anticipated 3.6 community release.
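How rpm compares these versions can be checked with rpmdev-vercmp from the rpmdevtools package; the release tags in this sketch are made up, and the exact output format differs between versions of the tool:

$ rpmdev-vercmp 3.6.0.28-2.el6 3.6.0-1.el6
3.6.0.28-2.el6 > 3.6.0-1.el6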

To fix this in the easiest way for our community users, we have decided to release version 3.6.1 later this week (maybe on Thursday, November 6). This version is higher than the one in RHEL/CentOS, and therefore yum will prefer the package from the community repository over the one available in RHEL/CentOS. This is also the main reason why no 3.6.0 packages have been provided on the download server.

Installing an older stable release (like 3.4 or 3.5) on RHEL/CentOS 6.6 requires a different approach. At the moment we can offer two solutions. We are still working on making this easier; until that is finalized, some manual actions are required.

Let's assume you want to verify if today's announced glusterfs-3.5.3beta2 packages indeed fix that bug you reported. (These steps apply to the other versions as well; this just happens to be what I have been testing.)

Option A: use exclude in the yum repository files for RHEL/CentOS

  1. download the glusterfs-353beta2-epel.repo file and save it under /etc/yum.repos.d/

  2. edit /etc/yum.repos.d/redhat.repo or /etc/yum.repos.d/CentOS-Base.repo and, under each repository that you find, add the following line:

    exclude=glusterfs*

This prevents yum from installing the glusterfs* packages from the standard RHEL/CentOS repositories, but still allows them to be installed from other repositories. The Red Hat Customer Portal has an article about this configuration too.
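As an illustration, the [base] section of CentOS-Base.repo would then end like this; only the exclude line is added, the other lines are the stock content and may differ on your system:

[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
exclude=glusterfs*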

Option B: install and configure yum-plugin-priorities

Using yum-plugin-priorities is probably a more stable solution. This does not require changes to the standard RHEL/CentOS repositories. However, an additional package needs to get installed.

  1. enable the optional repository when on RHEL (CentOS users can skip this step)

    # subscription-manager repos --list | grep optional-rpms
    # subscription-manager repos --enable=*optional-rpms

  2. install the yum-plugin-priorities package:

    # yum install yum-plugin-priorities

  3. download the glusterfs-353beta2-epel.repo file and save it under /etc/yum.repos.d/

  4. edit the /etc/yum.repos.d/glusterfs-353beta2-epel.repo file and add the following option to each repository definition:

    priority=50

The default priority for repositories is 99. The repositories with the lowest number have the highest priority. As long as the RHEL/CentOS repositories do not have the priority option set, the packages from the glusterfs-353beta2-epel.repo will get preferred by yum.
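A repository definition from the downloaded file would then look something like this sketch; the name and baseurl are illustrative, keep whatever the glusterfs-353beta2-epel.repo file contains and only add the priority line:

[glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.3beta2/EPEL.repo/epel-$releasever/$basearch/
enabled=1
priority=50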

When using the yum-plugin-priorities approach, we highly recommend that you check whether all your repositories have a suitable (or missing) priority option. In case some repositories have the option set while yum-plugin-priorities was not installed yet, the effective order of the repositories may change once the plugin is active. Because of this, we do not want to force yum-plugin-priorities on all the Gluster Community users that run RHEL/CentOS.

In case users still have issues installing the Gluster Community packages on RHEL or CentOS, we recommend getting in touch with us on the Gluster Users mailing list or in the #gluster IRC channel on Freenode.

December 19, 2013

Installing GlusterFS on RHEL 6.4 for OpenStack Havana (RDO)

The OpenCompute systems are the ideal hardware platform for distributed filesystems. Period. Why? Cheap servers with 10GB NICs and a boatload of locally attached cheap storage!

In preparation for deploying RedHat RDO on RHEL, the distributed filesystem I chose was GlusterFS.… Read the rest

The post Installing GlusterFS on RHEL 6.4 for OpenStack Havana (RDO) appeared first on vmware admins.

April 7, 2013

Configuring a bluetooth keyboard system-wide from the command line

Recently I bought a new keyboard, which I intend to use when my laptop is placed in its docking station. There are two external monitors connected, making the display of the laptop rather useless (only two outputs are supported at the same time). In normal circumstances the laptop lid will be closed, so the keyboard is not accessible.

My new keyboard is a Logitech K760 and is connected through bluetooth. Pairing with help from the the XFCE/GNOME tools is easy enough, but this causes the keyboard to be available after login only. That is not very practical. After boot, I have to login through GDM and prefer to not need to use the keyboard of the laptop itself. For this, I needed to figure out how to make the bluetooth keyboard available on system level, and not per user. Descriptions on how to do this seem to be very sparse, and mostly depend on other distributions than RHEL or Fedora. I prefer to use standard tools as much as possible, adding custom scripts for these things makes it more difficult to move configurations between systems. Furthermore the keyboard can be paired to multiple (3) systems at the same time, the F1-3 keys can be used to select a system, similar to a KVM switch.

The most minimal and easy to use tools I could find are included in the test suite of the BlueZ package. Unfortunately, as far as I know these are not packaged, so installing the scripts through the package manager is impractical. But, as these scripts are only needed once for pairing, I think they are a nice solution anyway. The advantage over other options is that the scripts are updated with the bluez software itself, so the same scripts (well, different versions of them) keep working regardless of changes to the bluez API.

The scripts from the bluez test-suite that match the version available in Fedora or RHEL can be fetched with yumdownloader from the yum-utils package (all as a normal unprivileged user):

$ yumdownloader --source bluez

Extract the source RPM by installing it:

$ rpm -ivh bluez-4.66-1.el6.src.rpm

Extract the sources which include the test-suite:

$ rpmbuild --nodeps -bp ~/rpmbuild/SPECS/bluez.spec

Note that the --nodeps parameter is used: without it, the -bp stage would check all BuildRequires dependencies, and most of them are not needed for the test-suite scripts.

After extracting the sources successfully, the test-suite is located under the BUILD directory:

$ cd ~/rpmbuild/BUILD/bluez-4.66/test/

Everything is now ready for pairing, so put the keyboard in discovery mode and scan for it:

$ sudo hcitool scan
Scanning ...
00:1F:20:3C:A2:03 Logitech K760

The keyboard will need to authenticate to the system. simple-agent can be used for that, like this:

$ sudo ./simple-agent hci0 00:1F:20:3C:A2:03
DisplayPasskey (/org/bluez/2117/hci0/dev_00_1F_20_3C_A2_03, 716635)
Release
New device (/org/bluez/2117/hci0/dev_00_1F_20_3C_A2_03)

The simple-agent script waits for a response from the keyboard: type the PIN that is shown (here 716635) on the keyboard and hit enter.

Obviously the keyboard is a device that supports the input class, so test-input can be used to set up the connection:

$ sudo ./test-input connect 00:1F:20:3C:A2:03

If this worked without an error message, mark the keyboard as a trusted device. This makes it possible for the keyboard to connect to the system without requesting approval:

$ sudo ./test-device trusted 00:1F:20:3C:A2:03 yes

After these steps, verify that the keyboard connects automatically after a reboot. This worked for me on my RHEL-6 laptop, and on a Cubieboard installed with Fedora 18 ARM.

March 17, 2013

Use dnsmasq for separating DNS queries

Automatic network configuration with DHCP is great. But if you need to use multiple separated networks at once, it gets more difficult pretty quickly. For example, my RHEL-6 laptop

  1. connects through wifi to the network at home, which provides internet access
  2. accesses remote systems connected via a VPN
  3. and manages virtual machines that need access to any of those

Now, when NetworkManager connects to the VPN, the DNS-servers for the VPN are added to /etc/resolv.conf with a higher priority than the one for the home network. This is fine in a lot of circumstances, but it means that all domain name service lookups go through the VPN first. That is not optimal, and the administrator of the VPN does not need to see all the hostname lookups my laptop is doing either. Also, any lookups for the local network will go through the VPN, fail there, and get retried with the next DNS-server, making queries for the LAN slower than all the others.

The solution sounds simple: Only use the DNS-servers on the VPN for lookups for resources that are on the VPN.

Unfortunately, the configuration is not that simple if it needs to work dynamically. The main configuration file that lists DNS-servers (/etc/resolv.conf) does not offer any option to say that some DNS-servers are to be used for certain domains only. A workaround for this limitation is to use a DNS-server that supports filtering and relaying queries, and have it listen on localhost. This local DNS-server is the only one configured in /etc/resolv.conf, and any added (or removed) network configurations should no longer change /etc/resolv.conf, but the configuration of the local DNS-server instead.

This means that my /etc/resolv.conf looks like this:
nameserver 127.0.0.1
search lan.example.net

The minimal /etc/resolv.conf file is also saved as /etc/resolv.conf.dnsmasq, which is used as a template for restoring the configuration when a VPN service (like OpenVPN) modifies it.

The DNS-server for this setup became dnsmasq. This piece of software was already installed on my laptop as a dependency of libvirt, and offers the simple configuration that this setup can benefit from. For this setup, the libvirt configuration of dnsmasq is not touched; it works fine, and with its integrated DHCP-server I am not tempted to break my virtual machines (not now, and not when I install updates).

The configuration to let dnsmasq listen on localhost, without interfering with the libvirt instance that listens on virbr0, is very minimal as well. My preference is to avoid big changes in packaged configuration files, as these may become difficult to merge with updates. So the only change in /etc/dnsmasq.conf that is required is this (newer versions seem to have it by default):
# Include a another lot of configuration options.
#conf-file=/etc/dnsmasq.more.conf
conf-dir=/etc/dnsmasq.d

An additional file in the /etc/dnsmasq.d directory suffices, /etc/dnsmasq.d/localhost.conf:
no-resolv
no-poll
interface=lo
no-dhcp-interface=lo
bind-interfaces

The default configuration file /etc/dnsmasq.conf contains a good description of these options. It is not needed to repeat them here.

Enabling dnsmasq to start at boot is a prerequisite; otherwise any lookup that uses DNS-servers will fail completely. On my RHEL-6 system, I needed to enable starting of dnsmasq with /sbin/chkconfig dnsmasq on, and start the service with /sbin/service dnsmasq start.
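To check that dnsmasq answers on localhost, a query with dig (from the bind-utils package) should resolve a name from /etc/hosts; the name used here is just an example:

$ dig +short @127.0.0.1 localhost
127.0.0.1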

With this current configuration, only hostnames and IP-addresses that are in /etc/hosts are resolved, which means it is difficult to create any network connections beyond the laptop. The next step is to integrate the available connected networks with the dnsmasq configuration.

NetworkManager is used to configure the network on my laptop. This is convenient as it supports WLAN and can connect to the VPN. In order to teach it to write a dnsmasq configuration file for each network that gets set up, I used an event script, /etc/NetworkManager/dispatcher.d/90-update-resolv.conf:
#!/bin/sh
#
# NetworkManager dispatcher script to prevent messing with DNS servers in the
# LAN.
#
# Author: Niels de Vos
#

DNSMASQ_RESOLV=/etc/dnsmasq.d/resolv-${CONNECTION_UUID}.conf

# write the header of the partial dnsmasq configuration for this connection
function write_dnsmasq_header
{
    if [ ! -e ${DNSMASQ_RESOLV} ]
    then
        echo "# ${DNSMASQ_RESOLV} generated on $(date)" > ${DNSMASQ_RESOLV}
        echo "# Generator: ${0}" >> ${DNSMASQ_RESOLV}
        echo "# Connection: ${CONNECTION_UUID}" >> ${DNSMASQ_RESOLV}
    fi
}

# normal connections pass their DNS-servers through the environment
function create_dnsmasq_config_env
{
    local NS

    write_dnsmasq_header

    for NS in ${IP4_NAMESERVERS}
    do
        echo "server=${NS}" >> ${DNSMASQ_RESOLV}
    done
}

# VPN connections modify /etc/resolv.conf, so take the DNS-servers and the
# domain from there
function create_dnsmasq_config_from_resolv_conf
{
    local NS
    local DOMAIN=""

    write_dnsmasq_header

    DOMAIN=$(awk '/^domain/ {print $2}' /etc/resolv.conf)
    [ -n "${DOMAIN}" ] && DOMAIN="/${DOMAIN}/"

    for NS in $(awk '/^nameserver/ {print $2}' /etc/resolv.conf)
    do
        # make sure the NS is not from another config
        grep -q "[=/]${NS}$" /etc/dnsmasq.d/resolv-*.conf && continue

        echo "server=${DOMAIN}${NS}" >> ${DNSMASQ_RESOLV}
    done
}

function remove_dnsmasq_config
{
    rm -f ${DNSMASQ_RESOLV}
}

# drop partial configurations for connections that are no longer active
function remove_stale_configs
{
    local CONF
    local UUID

    for CONF in /etc/dnsmasq.d/resolv-*.conf
    do
        # in case of a wildcard error
        [ -e "${CONF}" ] || continue

        UUID=$(awk '/^# Connection: / {print $3}' ${CONF})
        if ! ( nmcli -t -f UUID con status | grep -q "^${UUID}$" )
        then
            rm -f ${CONF}
        fi
    done
}

# restore the template /etc/resolv.conf and restart dnsmasq so that added and
# removed partial configurations are picked up
function reload_dnsmasq
{
    cat /etc/resolv.conf.dnsmasq > /etc/resolv.conf
    [ -n "${DHCP4_DOMAIN_SEARCH}" ] && echo "search ${DHCP4_DOMAIN_SEARCH}" >> /etc/resolv.conf
    # "killall -HUP dnsmasq" is not sufficient for new files
    /sbin/service dnsmasq restart > /dev/null 2>&1
}

case "$2" in
"up")
    remove_stale_configs
    create_dnsmasq_config_env
    reload_dnsmasq
    ;;
"vpn-up")
    remove_stale_configs
    create_dnsmasq_config_from_resolv_conf
    reload_dnsmasq
    ;;
"down")
    remove_stale_configs
    remove_dnsmasq_config
    reload_dnsmasq
    ;;
"vpn-down")
    remove_stale_configs
    remove_dnsmasq_config
    reload_dnsmasq
    ;;
esac

This script will write a configuration file like /etc/dnsmasq.d/resolv-0263cda6-edbd-437e-8d36-efb86dcc9112.conf:
# /etc/dnsmasq.d/resolv-0263cda6-edbd-437e-8d36-efb86dcc9112.conf generated on Sun Mar 17 11:57:26 CET 2013
# Generator: /etc/NetworkManager/dispatcher.d/90-update-resolv.conf
# Connection: 0263cda6-edbd-437e-8d36-efb86dcc9112
server=192.168.0.1

The generated configuration file for dnsmasq simply states that there is a DNS-server on 192.168.0.1, which can be used for any query. When the configuration has been written, dnsmasq is restarted (as the script notes, a SIGHUP is not sufficient for newly added files), which causes it to read its configuration files again.

After connecting to a VPN, another partial configuration file is generated. In this case /etc/dnsmasq.d/resolv-ba76186a-9923-4756-aa8a-19706a4d273c.conf:
# /etc/dnsmasq.d/resolv-ba76186a-9923-4756-aa8a-19706a4d273c.conf generated on Sun Mar 17 11:57:41 CET 2013
# Generator: /etc/NetworkManager/dispatcher.d/90-update-resolv.conf
# Connection: ba76186a-9923-4756-aa8a-19706a4d273c
server=/example.com/10.0.0.1
server=/example.com/10.0.0.2

Similar to the main WLAN connection, this configuration contains two DNS-servers, but these are to be used for the example.com network only.

For me this works in the environments I visit: wifi at home, the wired network at the docking station, and several other (non-)public wireless networks.

September 17, 2012

Howto: Using UFO (swift) — A Quick Setup Guide

This sets up a GlusterFS Unified File and Object (UFO) server on a single node (single brick) Gluster server using the RPMs contained in my YUM repo at http://repos.fedorapeople.org/repos/kkeithle/glusterfs/. This repo contains RPMs for Fedora 16, Fedora 17, and RHEL 6. Alternatively you may use the glusterfs-3.4.0beta1 RPMs from the GlusterFS YUM repo at http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.0beta1/

1. Add the repo to your system. See the README file there for instructions. N.B. If you’re using CentOS or some other RHEL clone you’ll want (need) to add the Fedora EPEL repo — see http://fedoraproject.org/wiki/EPEL.

2. Install glusterfs and UFO (remember to enable the new repo first):

  • glusterfs-3.3.1 or glusterfs-3.4.0beta1 on Fedora 17 and Fedora 18: `yum install glusterfs glusterfs-server glusterfs-fuse glusterfs-swift glusterfs-swift-account glusterfs-swift-container glusterfs-swift-object glusterfs-swift-proxy glusterfs-ufo`
  • glusterfs-3.4.0beta1 on Fedora 19, RHEL 6, and CentOS 6: `yum install glusterfs glusterfs-server glusterfs-fuse openstack-swift openstack-swift-account openstack-swift-container openstack-swift-object openstack-swift-proxy glusterfs-ufo`

3. Start glusterfs:

  • On Fedora 17 or Fedora 18: `systemctl start glusterd.service`
  • On Fedora 16 or RHEL 6: `service glusterd start`
  • On CentOS 6.x: `/etc/init.d/glusterd start`

4. Create a glusterfs volume:
`gluster volume create $myvolname $myhostname:$pathtobrick`

5. Start the glusterfs volume:
`gluster volume start $myvolname`

6. Create a self-signed cert for UFO:
`cd /etc/swift; openssl req -new -x509 -nodes -out cert.crt -keyout cert.key`

7. Fix up some files in /etc/swift:

  • `mv swift.conf-gluster swift.conf`
  • `mv fs.conf-gluster fs.conf`
  • `mv proxy-server.conf-gluster proxy-server.conf`
  • `mv account-server/1.conf-gluster account-server/1.conf`
  • `mv container-server/1.conf-gluster container-server/1.conf`
  • `mv object-server/1.conf-gluster object-server/1.conf`
  • `rm {account,container,object}-server.conf`

8. Configure UFO (edit /etc/swift/proxy-server.conf):
+ add your cert and key to the [DEFAULT] section:
bind_port = 443
cert_file = /etc/swift/cert.crt
key_file = /etc/swift/cert.key
+ add one or more users of the gluster volume to the [filter:tempauth] section:
user_$myvolname_$username=$password .admin
+ add the memcache address to the [filter:cache] section:
memcache_servers = 127.0.0.1:11211
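Put together, the relevant parts of proxy-server.conf might then look like this; myvol, admin and secret are placeholder values:

[DEFAULT]
bind_port = 443
cert_file = /etc/swift/cert.crt
key_file = /etc/swift/cert.key

[filter:tempauth]
user_myvol_admin = secret .admin

[filter:cache]
memcache_servers = 127.0.0.1:11211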

9. Generate builders:
`/usr/bin/gluster-swift-gen-builders $myvolname`

10. Start memcached:

  • On Fedora 17: `systemctl start memcached.service`
  • On Fedora 16 or RHEL 6: `service memcached start`
  • On CentOS 6.x: `/etc/init.d/memcached start`

11. Start UFO:

`swift-init main start`

» This has bitten me more than once. If you ssh -X into the machine running swift, it’s likely that sshd will already be using ports 6010, 6011, and 6012, and will collide with the swift processes trying to use those ports «

12. Get authentication token from UFO:
`curl -v -H 'X-Storage-User: $myvolname:$username' -H 'X-Storage-Pass: $password' -k https://$myhostname:443/auth/v1.0`
(authtoken similar to AUTH_tk2c69b572dd544383b352d0f0d61c2e6d)
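With the -v flag, the token shows up in the response headers, along these lines (the values are examples):

< X-Storage-Url: https://$myhostname:443/v1/AUTH_$myvolname
< X-Auth-Token: AUTH_tk2c69b572dd544383b352d0f0d61c2e6d
< X-Storage-Token: AUTH_tk2c69b572dd544383b352d0f0d61c2e6d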

13. Create a container:
`curl -v -X PUT -H 'X-Auth-Token: $authtoken' https://$myhostname:443/v1/AUTH_$myvolname/$mycontainername -k`

14. List containers:
`curl -v -X GET -H 'X-Auth-Token: $authtoken' https://$myhostname:443/v1/AUTH_$myvolname -k`

15. Upload a file to a container:

`curl -v -X PUT -T $filename -H 'X-Auth-Token: $authtoken' -H 'Content-Length: $filelen' https://$myhostname:443/v1/AUTH_$myvolname/$mycontainername/$filename -k`

16. Download the file:

`curl -v -X GET -H 'X-Auth-Token: $authtoken' https://$myhostname:443/v1/AUTH_$myvolname/$mycontainername/$filename -k > $filename`

More information and examples are available from

=======================================================================

N.B. We (Red Hat, Gluster) generally recommend using xfs for brick volumes; or if you’re feeling brave, btrfs. If you’re using ext4 be aware of the ext4 issue* and if you’re using ext3 make sure you mount it with -o user_xattr.

* http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/

May 17, 2012

Updated Wireshark packages for RHEL-6 and Fedora-17 available for testing

[From an email to the gluster-devel mailing list]

Today I have merged support for GlusterFS 3.2 and 3.3 into one Wireshark 'dissector'. The packages with date 20120516 in the version support both the current stable 3.2.x version and the latest 3.3.0qa41. Older 3.3.0 versions will likely have issues due to some changes in the RPC-AUTH protocol used. Updating to the latest qa41 release (or newer) is recommended anyway. I do not expect that we'll add support for earlier 3.3.0 releases.

My repository with packages for RHEL-6 and Fedora-17 contains a .repo file for yum (save it in /etc/yum.repos.d):
- http://repos.fedorapeople.org/repos/devos/wireshark-gluster/

RPMs for other Fedora or RHEL versions can be provided on request. Let me know if you need another version (or architecture).

Single patches for some different Wireshark versions are available from https://github.com/nixpanic/gluster-wireshark.

A full history of commits can be found here:
- https://github.com/nixpanic/gluster-wireshark-1.4/commits/master/
(Support for GlusterFS 3.3 was added by Akhila and Shree, thanks!)

Please test and report success and problems; file issues on GitHub: https://github.com/nixpanic/gluster-wireshark-1.4/issues. Some functionality is still missing, but in its current state the dissector should already be good for most analysis. The more issues get filed, the easier it becomes to track which items are important.

Of course, you can also respond to this email and give feedback :-)

After some more cleanup of the code, this dissector will be passed on for review and inclusion in the upstream Wireshark project. More testing results are therefore much appreciated.