All posts tagged CentOS


Posted on November 27, 2015

Hurry up, only a few days left to do the 2015 Gluster Community Survey

The Gluster Community provides packages for Fedora, CentOS, Debian, Ubuntu, NetBSD and other distributions. All users are important to us, and we would really like to hear how Gluster is (or is not) working out for you, and what improvements are most wanted. It is easy to pass this information along (anonymously) through this year's survey (it's a Google form).

If you would like to comment on the survey itself, please get in touch with Amye.

Posted on August 4, 2015

Gluster Community Packages

The Gluster Community currently provides GlusterFS packages for the following distributions:

                            3.5 3.6 3.7
Fedora 21                    ¹   ×   ×
Fedora 22                    ×   ¹   ×
Fedora 23                    ×   ×   ¹
Fedora 24                    ×   ×   ¹
RHEL/CentOS 5                ×   ×
RHEL/CentOS 6                ×   ×   ×
RHEL/CentOS 7                ×   ×   ×
Ubuntu 12.04 LTS (precise)   ×   ×
Ubuntu 14.04 LTS (trusty)    ×   ×   ×
Ubuntu 15.04 (vivid)             ×   ×
Ubuntu 15.10 (wily)
Debian 7 (wheezy)            ×   ×
Debian 8 (jessie)            ×   ×   ×
Debian 9 (stretch)           ×   ×   ×
SLES 11                      ×   ×
SLES 12                          ×   ×
OpenSuSE 13                  ×   ×   ×
RHELSA 7                             ×

(Packages are also available in NetBSD and maybe FreeBSD.)

Most packages are available from download.gluster.org

Ubuntu packages are available from Launchpad

As can be seen, the old distributions don't have packages of the latest GlusterFS, usually due to dependencies that are too old or missing. Similarly, the new distributions don't have packages of the older versions, for the same reason.

¹ In Fedora, Fedora Updates, or Fedora Updates-Testing for primary architectures. Secondary architectures seem to be slow to sync with primary; RPMs for aarch64 are often available from download.gluster.org.

Posted on June 4, 2015

Stable releases continue, GlusterFS 3.5.4 is now available

GlusterFS 3.5 is the oldest stable release that is still getting updates. GlusterFS 3.5.4 was released yesterday, and the volunteer packagers have already provided RPM packages for different Fedora and EPEL versions. If you are running the 3.5 version on Fedora 20 or 21, you are encouraged to install the updates and provide karma.
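While the update is still in testing, something along these lines should pull it in on a Fedora system (a sketch; the exact repository name depends on your release):

    yum --enablerepo=updates-testing update 'glusterfs*'

Karma can then be left on the Bodhi page for the update.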

Release Notes for GlusterFS 3.5.4

This is a bugfix release. The Release Notes for 3.5.0, 3.5.1, 3.5.2 and 3.5.3 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.5 stable release.

Bugs Fixed:

  • 1092037: Issues reported by Cppcheck static analysis tool
  • 1101138: meta-data split-brain prevents entry/data self-heal of dir/file respectively
  • 1115197: Directory quota does not apply on it's sub-directories
  • 1159968: glusterfs.spec.in: deprecate *.logrotate files in dist-git in favor of the upstream logrotate files
  • 1160711: libgfapi: use versioned symbols in libgfapi.so for compatibility
  • 1161102: self heal info logs are filled up with messages reporting split-brain
  • 1162150: AFR gives EROFS when fop fails on all subvolumes when client-quorum is enabled
  • 1162226: bulk remove xattr should not fail if removexattr fails with ENOATTR/ENODATA
  • 1162230: quota xattrs are exposed in lookup and getxattr
  • 1162767: DHT: Rebalance- Rebalance process crash after remove-brick
  • 1166275: Directory fd leaks in index translator
  • 1168173: Regression tests fail in quota-anon-fs-nfs.t
  • 1173515: [HC] - mount.glusterfs fails to check return of mount command.
  • 1174250: Glusterfs outputs a lot of warnings and errors when quota is enabled
  • 1177339: entry self-heal in 3.5 and 3.6 are not compatible
  • 1177928: Directories not visible anymore after add-brick, new brick dirs not part of old bricks
  • 1184528: Some newly created folders have root ownership although created by unprivileged user
  • 1186121: tar on a gluster directory gives message "file changed as we read it" even though no updates to file in progress
  • 1190633: self-heal-algorithm with option "full" doesn't heal sparse files correctly
  • 1191006: Building argp-standalone breaks nightly builds on Fedora Rawhide
  • 1192832: log files get flooded when removexattr() can't find a specified key or value
  • 1200764: [AFR] Core dump and crash observed during disk replacement case
  • 1202675: Perf: readdirp in replicated volumes causes performance degrade
  • 1211841: glusterfs-api.pc versioning breaks QEMU
  • 1222150: readdirp return 64bits inodes even if enable-ino32 is set

Known Issues:

  • The following configuration changes are necessary for 'qemu' and 'samba vfs plugin' integration with libgfapi to work seamlessly:
    1. gluster volume set <volname> server.allow-insecure on
    2. restarting the volume is necessary
      gluster volume stop <volname>
      gluster volume start <volname>
    3. Edit /etc/glusterfs/glusterd.vol to contain this line:
      option rpc-auth-allow-insecure on
    4. restarting glusterd is necessary
      service glusterd restart
    More details are also documented in the Gluster Wiki on the Libgfapi with qemu libvirt page.
  • For Block Device translator based volumes, the open-behind translator on the client side needs to be disabled.
    gluster volume set <volname> performance.open-behind disabled
  • libgfapi clients calling glfs_fini before a successful glfs_init will cause the client to hang as reported here. The workaround is NOT to call glfs_fini for error cases encountered before a successful glfs_init. This is being tracked in Bug 1134050 for glusterfs-3.5 and Bug 1093594 for mainline.
  • If the /var/run/gluster directory does not exist enabling quota will likely fail (Bug 1117888).
Posted on November 5, 2014

Installing GlusterFS 3.4.x, 3.5.x or 3.6.0 on RHEL or CentOS 6.6

With the release of RHEL-6.6 and CentOS-6.6, there are now glusterfs packages in the standard channels/repositories. Unfortunately, these are only the client-side packages (like glusterfs-fuse and glusterfs-api). Users that want to run a Gluster Server on a current RHEL or CentOS now have difficulties installing any of today's current versions of the Gluster Community packages.

The most prominent issue is that the glusterfs package from RHEL has a version of 3.6.0.28, which is higher than the version 3.6.0 that was released last week. RHEL is shipping a pre-release that was created while the Gluster Community was still developing 3.6. An unfortunate packaging decision added a .28 to the version, where most other pre-releases would fall back to an (rpm-)version like 3.6.0-0.1.something.bla.el6. The difference might look minor, but the result is a major disruption in the much anticipated 3.6 community release.
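The RPM version comparison rules make the problem easy to demonstrate. With the rpmdevtools package installed, something like the following (a sketch; the release suffixes are illustrative and the output is abbreviated) shows why yum considers the RHEL pre-release newer than a community 3.6.0, but older than a 3.6.1:

    $ rpmdev-vercmp 3.6.0.28-2.el6 3.6.0-1.el6
    3.6.0.28-2.el6 > 3.6.0-1.el6
    $ rpmdev-vercmp 3.6.0.28-2.el6 3.6.1-1.el6
    3.6.0.28-2.el6 < 3.6.1-1.el6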

To fix this as quickly and easily as possible for our community users, we have decided to release version 3.6.1 later this week (maybe on Thursday November 6). This version is higher than the version in RHEL/CentOS, and therefore yum will prefer the package from the community repository over the one available in RHEL/CentOS. This is also the main reason why no 3.6.0 packages have been provided on the download server.

Installing an older stable release (like 3.4 or 3.5) on RHEL/CentOS 6.6 requires a different approach. At the moment we can offer two solutions. We are still working on making this easier; until that is finalized, some manual actions are required.

Let's assume you want to verify that today's announced glusterfs-3.5.3beta2 packages indeed fix that bug you reported. (These steps apply to the other versions as well; this just happens to be what I have been testing.)

Option A: use exclude in the yum repository files for RHEL/CentOS

  1. download the glusterfs-353beta2-epel.repo file and save it under /etc/yum.repos.d/

  2. edit /etc/yum.repos.d/redhat.repo or /etc/yum.repos.d/CentOS-Base.repo and, under each repository that you find, add the following line:

    exclude=glusterfs*

This prevents yum from installing the glusterfs* packages from the standard RHEL/CentOS repositories, but allows those packages from others. The Red Hat Customer Portal has an article about this configuration too.
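For example, the [base] section of an edited CentOS-Base.repo would end up looking something like this (a sketch, abbreviated):

    [base]
    name=CentOS-$releasever - Base
    mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
    exclude=glusterfs*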

Option B: install and configure yum-plugin-priorities

Using yum-plugin-priorities is probably a more stable solution. This does not require changes to the standard RHEL/CentOS repositories. However, an additional package needs to be installed.

  1. enable the optional repository on RHEL (CentOS users can skip this step):

    # subscription-manager repos --list | grep optional-rpms
    # subscription-manager repos --enable=*optional-rpms

  2. install the yum-plugin-priorities package:

    # yum install yum-plugin-priorities

  3. download the glusterfs-353beta2-epel.repo file and save it under /etc/yum.repos.d/

  4. edit the /etc/yum.repos.d/glusterfs-353beta2-epel.repo file and add the following option to each repository definition:

    priority=50

The default priority for repositories is 99, and the repositories with the lowest number have the highest priority. As long as the RHEL/CentOS repositories do not have the priority option set, the packages from glusterfs-353beta2-epel.repo will be preferred by yum.
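A repository definition in the glusterfs-353beta2-epel.repo file would then look roughly like this (a sketch; the exact section name and baseurl come from the downloaded file and are illustrative here):

    [glusterfs-epel]
    name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
    baseurl=http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.3beta2/EPEL.repo/epel-$releasever/$basearch/
    enabled=1
    gpgcheck=0
    priority=50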

When using the yum-plugin-priorities approach, we highly recommend that you check whether all your repositories have a suitable (or missing) priority option. If some repositories already have the option set but yum-plugin-priorities was not installed until now, the effective order of the repositories may change once the plugin is active. This is why we do not want to force yum-plugin-priorities on all the Gluster Community users that run on RHEL/CentOS.

In case users still have issues installing the Gluster Community packages on RHEL or CentOS, we recommend getting in touch with us on the Gluster Users mailing list (archive) or in the #gluster IRC channel on Freenode.


Posted on September 30, 2014

Gluster, CIFS, ZFS – kind of part 2

A while ago I put together a post detailing the installation and configuration of two hosts running GlusterFS, which was then presented as CIFS-based storage.

http://jonarcher.info/2014/06/windows-cifs-fileshares-using-glusterfs-ctdb-highly-available-data/

This post gained a bit of interest through the comments and social networks. One of the comments I got was from John Mark Walker, suggesting I look at the samba-gluster VFS method instead of mounting the filesystem using FUSE (directly accessing the volume from Samba, instead of mounting and then presenting it). On top of this, I've also been looking quite a bit at ZFS, whereas previously I had a Linux RAID as the base filesystem. So here is a slightly different approach to my previous post.

Getting prepared

As before, we're looking at two hosts, virtual in the case of this build but more than likely physical in a real-world scenario; either way it's irrelevant. Both of these hosts are running CentOS 6 minimal installs (I'll update to 7 at a later date), with static IP addresses assigned and DNS entries created. I'll also be running everything under a root session; if you don't do the same, just prefix the commands with sudo. For the purposes of this I have also disabled SELinux and removed all firewall rules. I will one day leave SELinux enabled in this configuration, but for now let's leave it out of the equation.
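For reference, disabling both on CentOS 6 looks roughly like this (run on both hosts; clearly not something to leave in place on a production system):

setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
service iptables stop
chkconfig iptables off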

In my case these names and addresses are as follows:

arcstor01 – 192.168.1.210

arcstor02 – 192.168.1.211

First off, let's get the relevant repositories installed (EPEL, ZFS and Gluster):

yum localinstall --nogpgcheck http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el6.noarch.rpm
curl -o /etc/yum.repos.d/gluster.repo http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
curl -o /etc/yum.repos.d/glusterfs-samba-epel.repo http://download.gluster.org/pub/gluster/glusterfs/samba/EPEL.repo/glusterfs-samba-epel.repo

Local filesystem

As previously mentioned, this configuration will be hosted from two virtual machines, each with three disks: one for the OS, and the other two to be used in a ZFS pool.

First off we need to install ZFS itself. Once you have the above zfs-release repo installed, this can be done with the following command:

yum install kernel-devel zfs

Perform this on both hosts.

We can now create a ZFS pool. In my case the disk device names are vdX, but they could be sdX;

fdisk -l

can help you identify the device names; whatever they are, just replace them in the following commands.

Create a ZFS pool

zpool create -f -m /gluster gluster mirror /dev/vdb /dev/vdc

This command will create a ZFS pool mounted at /gluster. Without -m /gluster it would mount at /{poolname}; in this case that is the same thing, I just added the option for clarity. The pool name is gluster, and the redundancy level is mirror, which is similar to RAID1. There are a number of RAID levels available in ZFS, all best explained here: http://www.zfsbuild.com/2010/05/26/zfs-raid-levels/. The final element of the command is which devices to build the pool from, in our case /dev/vdb and /dev/vdc. The -f option forces creation of the pool; it is required to remove the need to create partitions prior to the creation of the pool.
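As an aside, the same command shape covers the other redundancy levels mentioned above; for example, a raidz pool across four hypothetical disks would be created with:

zpool create -f -m /gluster gluster raidz /dev/vdb /dev/vdc /dev/vdd /dev/vde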

Running the command

zpool status

will return the status of the created pool, which if successful should look something similar to:

[root@arcstor01 ~]# zpool status
  pool: gluster
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        gluster     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vdb1    ONLINE       0     0     0
            vdc1    ONLINE       0     0     0

errors: No known data errors

A quick ls and df will also show us that the /gluster mountpoint is present and the pool is mounted; since the pool is a mirror, df should show the size as being half the sum of both drives:

[root@arcstor01 ~]# ls /
bin  boot  cgroup  dev  etc  gluster  home  lib  lib64  lost+found  media  mnt  opt  proc  root  sbin  selinux  srv  sys  tmp  usr  var

[root@arcstor01 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        15G  1.2G   13G   9% /
tmpfs           498M     0  498M   0% /dev/shm
gluster          20G     0   20G   0% /gluster

If this is the case, rinse and repeat on host 2. If this is also successful, we now have a resilient base filesystem on which to host our gluster volumes. There is a bucketload more to ZFS and its capabilities, but that is way outside the confines of this configuration; it is well worth looking into, though.

Glusterising our pool

So now we have a filesystem; let's make it better. Next up: installing GlusterFS, enabling it, then preparing the directories. This part is pretty much identical to the previous post:

yum install glusterfs-server -y

chkconfig glusterd on

service glusterd start

mkdir  -p /gluster/bricks/share/brick1

This needs to be done on both hosts.

Now, only on host1, let's make the two nodes friends, then create and start the gluster volume:

# gluster peer probe arcstor02
peer probe: success.

# gluster vol create share replica 2 arcstor01:/gluster/bricks/share/brick1 arcstor02:/gluster/bricks/share/brick1
volume create: share: success: please start the volume to access data

# gluster vol start share
volume start: share: success

[root@arcstor01 ~]# gluster vol info share

Volume Name: share
Type: Replicate
Volume ID: 73df25d6-1689-430d-9da8-bff8b43d0e8b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: arcstor01:/gluster/bricks/share/brick1
Brick2: arcstor02:/gluster/bricks/share/brick1

If all goes well above, we should have a gluster volume ready to go; this volume will be presented via Samba directly. This configuration also requires a locally available shared area, so we will create another gluster volume to mount locally, in which to store lockfiles and shared config files.

mkdir  -p /gluster/bricks/config/brick1
gluster vol create config replica 2 arcstor01:/gluster/bricks/config/brick1 arcstor02:/gluster/bricks/config/brick1
gluster vol start config
mkdir  /opt/samba-config
mount -t glusterfs localhost:config /opt/samba-config

The share volume could probably be used by specifying a different path in the samba config, but for simplicity we'll keep them separate for now.
The mountpoint for /opt/samba-config will need to be added to fstab to ensure it mounts at boot time.

echo "localhost:config /opt/samba-config glusterfs defaults,_netdev 0 0" >>/etc/fstab

That should take care of it; remember, this needs to be done on both hosts.

Samba and CTDB

We now have a highly resilient datastore which could withstand both disk and host downtime, but we need to make that datastore available for consumption, and also highly available in the process. For this we will use CTDB, as in the previous post. CTDB is a clustered version of the TDB database which sits under Samba. The majority of this section will be the same as the previous post, except for the extra packages and a slightly different config for Samba. Let's install the required packages:

yum -y install ctdb samba samba-common samba-winbind-clients samba-client samba-vfs-glusterfs

For the majority of config files, we will create them in our shared config volume and symlink them to their expected locations. The first file we need to create is /etc/sysconfig/ctdb, but we will do this as /opt/samba-config/ctdb and then link it afterwards:

Note: The files which are created in the shared area should be done only on one host, but the linking needs to be done on both.

vi /opt/samba-config/ctdb

CTDB_RECOVERY_LOCK=/opt/samba-config/lockfile
#CIFS only
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_MANAGES_SAMBA=yes
#CIFS only
CTDB_NODES=/etc/ctdb/nodes

We'll need to remove the existing file in /etc/sysconfig; then we can create the symlink:

rm /etc/sysconfig/ctdb
ln -s /opt/samba-config/ctdb /etc/sysconfig/ctdb

Although we are using Samba, the service we will be running is CTDB, which allows for the extra clustering components. We need to stop and disable the samba services and enable the ctdb ones:

service smb stop
chkconfig smb off
chkconfig ctdb on

With this configuration being a cluster with essentially a single datapoint, we should really use a single entry point; for this, a third "floating" or virtual IP address is employed. More than one could be used, but let's keep this simple: 192.168.1.212. We also need to create a ctdb config file which contains a list of all the nodes in the cluster. Both these files need to be created in the shared location:

vi /opt/samba-config/public_addresses

192.168.1.212/24 eth0

vi /opt/samba-config/nodes

192.168.1.210
192.168.1.211

They both then need to be linked to their expected locations; neither of these files exists yet, so nothing needs to be removed first.

ln -s /opt/samba-config/nodes /etc/ctdb/nodes
ln -s /opt/samba-config/public_addresses /etc/ctdb/public_addresses

The last step is to modify the samba configuration to present the volume via CIFS. I seemed to have issues using a linked file for samba, so I will only use the shared area for storing a copy of the config, which can then be copied to both nodes to keep them identical.

cp /etc/samba/smb.conf /opt/samba-config/

Lets edit that file:

vi /opt/samba-config/smb.conf

Near the top add the following options

clustering = yes
idmap backend = tdb2
private dir = /opt/samba-config/

These turn the clustering (CTDB) features on and specify the shared directory where samba will create lockfiles. You can test starting ctdb at this point to ensure all is working, on both hosts:

cp /opt/samba-config/smb.conf /etc/samba/
service ctdb start

It should start OK; the health status of the cluster can then be checked with:

ctdb status

At this point I was finding that CTDB was not starting correctly. After a little bit of log-watching, I found an error in the samba logs suggesting:

Failed to create pipe directory /run/samba/ncalrpc - No such file or directory

Also, to be search-engine friendly, the CTDB logfile was outputting:

50.samba OUTPUT:ERROR: Samba tcp port 445 is not responding

This was a red herring: the port wasn't responding because the samba part of CTDB wasn't starting. (50.samba is a script in /etc/ctdb/events/ which actually starts the smb process.)

So I created the directory /run/samba and restarted ctdb, and the issue disappeared.
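For reference, the fix amounted to the following (on both hosts):

mkdir -p /run/samba
service ctdb restart

One caveat: on distributions where /run is a tmpfs, the directory would vanish at reboot and need recreating (for example via a tmpfiles.d entry on systemd-based systems); on these CentOS 6 hosts a plain directory persists.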

Now that we have a running service, we can go ahead and add the configuration for the share. A regular samba share would look something like:

[share]
 comment = just a share
 path = /share
 read only = no
 guest ok = yes
 valid users = jon

In the previous post this would have been ideal, as our gluster volume was mounted at /share; but here we are removing a layer and want Samba to talk directly to gluster rather than via the FUSE layer. This is achieved using a VFS object; we installed the samba-vfs-glusterfs package earlier. The configuration is slightly different within the smb.conf file, too. Adding the following to our file should enable access to the share volume we created:

[share]
 comment = gluster vfs share
 path = /
 read only = No
 guest ok = Yes
 kernel share modes = No
 vfs objects = glusterfs
 glusterfs:loglevel = 7
 glusterfs:logfile = /var/log/samba/glusterfs-testvol.log
 glusterfs:volume = share

Notice the glusterfs: options near the bottom; these are specific to the glusterfs VFS object, which is called further up (vfs objects = glusterfs). Another point to note is that the path is /; this is relative to the volume rather than the filesystem, so a path of /test would be a test directory inside the gluster volume.
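So, purely as an illustration (the projects directory here is hypothetical and would need to exist inside the volume), a share exposing only a subdirectory of the same gluster volume would look something like:

[projects]
 comment = subdirectory of the gluster share volume
 path = /projects
 vfs objects = glusterfs
 glusterfs:volume = share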

We can now reload the samba config; let's restart for completeness (on both nodes):

service ctdb restart

From a CIFS client you should now be able to browse to \\192.168.1.212\share (or whatever IP you specified as the floating IP).


All done!

To conclude: here we have created a highly resilient, highly available, very scalable storage solution using some fantastic technologies. We have created a single access method (CIFS on a floating IP) to a datastore which is stored on multiple hosts, which in turn store it upon multiple disks. Talk about redundancy!

Useful links:

http://www.centos.org

http://zfsonlinux.org/

http://www.gluster.org/

http://ctdb.samba.org/

 


Posted on August 13, 2014

Upgrade CentOS 6 to 7 with Upgrade Tools

I decided to try the upgrade process from EL 6 to 7 on the servers I used in my previous blog post “Windows (CIFS) fileshares using GlusterFS and CTDB for Highly available data”

Following the instructions here, I found the process fairly painless. However, there were one or two little niggles which caused various issues, which I will detail here.

The servers were minimal CentOS 6.5 installs, with Gluster volumes shared via CTDB. The extra packages installed had mostly come from the EPEL or GlusterFS repositories, and I believe this is where the issues arise: third-party repositories.

My initial attempt saw me running:

preupg -l

which gave me the output: CentOS6_7

This meant that I had CentOS 6 to 7 upgrade content available to me; this could now be utilised by running:

preupg -s CentOS6_7

which then ran through the pre-upgrade checks and produced a report of whether my system could, or should, be upgraded.

The results came back with several informational items, but more importantly 4 “needs_action” items.

These included "Packages not signed by CentOS", "Removed RPMs", "General" and "Content for enabling and disabling services based on CentOS 6 system".

Firing up links and pointing it at the output preupgrade/result.html file, I took a deeper look into the above details.

"Packages not signed by CentOS", as expected, covered the third-party installed applications, in my case the glusterfs RPMs and the epel-release. The other sections didn't present any great worries, so I pressed on with the upgrade:

centos-upgrade-tool-cli --network 7 --instrepo=http://mirror.centos.org/centos/7/os/x86_64/

Running this takes the data from the previous report and runs an upgrade process based on it. Interestingly, the first part of the process (redhat_upgrade_tool.yum) checks the yum repos that are configured: EPEL "seems OK", whereas the glusterfs-epel ones don't. This called for a little more investigation, as on my first upgrade trial run these packages failed to upgrade; luckily I took a snapshot of the machine before upgrading, so I could try again.

Strangely, even though the $basearch and $releasever variables were used in the config file, manually changing the $releasever to 7 (as $releasever translates to 7.0) seemed to do the trick. I manually edited the EPEL file too, as this contained epel-6 in the URL. After this I also noticed that the gluster services were no longer listed in the INPLACERISK: HIGH category but had been moved to MEDIUM.

Continue with upgrade [Y/N]?

yes please!

The upgrade tool then goes through the process of downloading the boot images and packages ready for the upgrade. For some reason I got a message about the CentOS 7 GPG key being listed but not installed, so while I hunted out the key to import, I re-ran the upgrade tool with the --nogpgcheck switch to skip that check. The tool then finished successfully and prompted me with:

Finished. Reboot to start upgrade.

Ok then, here goes….

Bringing up the console to that machine showed it booting into the images it downloaded in preparation for the upgrade: mostly a screen of RPM package updates and reconfiguration. The update completed fairly quickly, then automatically rebooted.

As mentioned above, this was the second attempt at an upgrade on this machine. The first time it was upgraded, I was presented with the emergency login screen after reboot. This turned out, strangely, to be because the glusterfs packages hadn't been upgraded, so I logged onto the console, brought up eth0 and ran yum update. After a reboot I was faced with a working system.

On the second attempt I managed to ensure the gluster packages were included in the upgrade, so after crossing my fingers, the reboot ended with a login prompt. Great news!

The only issue I faced was Gluster volumes not mounting at boot time, but I was sure this was a systemd configuration issue which could easily be rectified, and it really doesn't change the success of the upgrade process.

All in all, good work from the Red Hat and CentOS teams; I'm happy with the upgrade process. It's not too far removed from FedUp in Fedora, on which I'm sure it's based.

Update: the issues I faced with my gluster volumes not mounting locally were resolved by adding the _netdev directive after defaults in fstab, e.g.:

localhost:data1 /data/data1 glusterfs defaults,_netdev 0 0

All that was occurring was that systemd was trying to mount the device as a local filesystem, which would run before the glusterd service had started. Adding this option essentially delayed the mounting until the network targets were complete.

The other issue that became apparent after I resolved the gluster mounting issue was the CTDB service not running once boot had completed. This was due to the CTDB service trying to start before the filesystems were active, so I modified the ctdb.service file to ensure that it only started after gluster had started, which seemed to be enough. I guess that getting it to start after the filesystems had mounted would be better, but for now it works. To do this I modified the /usr/lib/systemd/system/ctdb.service file and changed the line:

After=network.target

in the [Unit] section to

After=network.target glusterd.service
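(As an aside, on systemd-based systems a drop-in file is arguably cleaner than editing the unit file directly, since it survives package updates. A sketch, assuming the same unit and service names as above:

mkdir -p /etc/systemd/system/ctdb.service.d
cat > /etc/systemd/system/ctdb.service.d/after-gluster.conf <<'EOF'
[Unit]
After=network.target glusterd.service
EOF
systemctl daemon-reload

The direct edit described above works too; it just risks being overwritten when the ctdb package updates.)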

 


Posted on June 19, 2014

Community Gluster Image on Docker

If you would like to try out gluster, a new CentOS-based docker container is available on the Docker Hub at https://registry.hub.docker.com/u/gluster/gluster/. This image is very new, so do not use it for production environments. It is meant to be an early community version of gluster running within docker.

For correctness and performance reasons, we recommend running Gluster on a host-mounted XFS volume that resides on a separate device from the root filesystem. For this proof of concept, we use only a single-node gluster daemon.

This community image was originally created by Frederick F. Kautz IV and Harshavardhana.

Usage

Prepare an XFS mount

The preferred method to use gluster is to mount an XFS partition on a separate device. If you want to test the image and do not have an XFS partition available on your system, you can create and mount one using the following commands:

dd if=/dev/zero of=/data/gluster.xfs bs=1M count=2048
mkfs.xfs -i size=512 /data/gluster.xfs
mkdir /mnt/gluster
mount -o loop,inode64,noatime /data/gluster.xfs /mnt/gluster

Run docker with the XFS mount

host # docker run --privileged -i -t -h gluster -v /mnt/gluster:/mnt/vault \
gluster/gluster:latest
container # df -h /mnt/vault
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop4     2014M   45M  1969M   4% /mnt/vault

Access your new gluster volume from the host

Grab the IP address for the container:

GLUSTER_CONTAINER_ID=$(docker ps | grep -i gluster | awk '{print $1}')
GLUSTER_IPADDR=$(docker inspect $GLUSTER_CONTAINER_ID | grep -i ipaddr | \
sed -e 's/\"//g' -e 's/\,//g' | awk '{print $2}')

Mount the gluster volume using the IP address obtained above:

mount -t glusterfs ${GLUSTER_IPADDR}:$VOLUME_NAME /mnt/gfs
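Note that $VOLUME_NAME must name a gluster volume that already exists inside the container; the stock image does not create one for you. A minimal sketch, using a hypothetical volume name myvol and run from a shell in the container:

container # mkdir -p /mnt/vault/brick
container # gluster volume create myvol gluster:/mnt/vault/brick
container # gluster volume start myvol

Then, on the host, create the mountpoint with mkdir -p /mnt/gfs and use VOLUME_NAME=myvol in the mount command above.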

Accessing your new gluster volume from a container

First, mount the volume to the host as shown in the previous section.

Second, mount the volume when running the container:

docker run -i -t -h gluster-client -v /mnt/gfs:/mnt/${VOLUME_NAME} gluster/gluster:latest

Note: Docker drops CAP_SYS_ADMIN, which prevents the user from mounting a gluster volume from within another container.

Shutting down and restarting gluster

Gluster stores metadata about the volume in /var/lib/glusterd and logs in /var/log/glusterfs. In order to preserve state, use docker commit before shutting down the cluster.

docker commit $GLUSTER_CONTAINER_ID mygluster:latest
docker kill $GLUSTER_CONTAINER_ID

To restart gluster, simply run your tagged gluster image:

docker run --privileged -i -t -h gluster -v /mnt/gluster:/mnt/vault mygluster:latest

Next Steps

We are investigating how to run gluster in a docker-based multi-node environment. We will write a new blog post covering this topic soon. We are also investigating what changes are necessary to both gluster and docker to help support running gluster in docker.

If you are feeling adventurous, take a look at jpetazzo's pipework project: https://github.com/jpetazzo/pipework.
Posted on March 27, 2014

Puppet-Gluster now available as RPM

I've been afraid of RPM and package maintaining [1] for years, but thanks to Kaleb Keithley, I have finally made some RPMs that weren't generated from a high-level tool. Now that I have the boilerplate done, it's a relatively painless process!

In case you don’t know kkeithley, he is a wizard [2] who happens to also be especially cool and hardworking. If you meet him, be sure to buy him a $BEVERAGE. </plug>

A photo of kkeithley after he (temporarily) transformed himself into a wizard penguin.

A photo of kkeithley after he (temporarily) transformed himself into a wizard penguin.

The full source of my changes is available in git.

If you want to make the RPMs yourself, simply clone the puppet-gluster source and run: make rpm. If you'd rather download pre-built RPMs, SRPMs, or source tarballs, they are all being graciously hosted on download.gluster.org, thanks to John Mark Walker and the gluster.org community.

These RPMs will install their contents into /usr/share/puppet/modules/. They should work on Fedora or CentOS, but they do require a puppet package to be installed. I hope to offer them in the future as part of a repository for easier consumption.

There are also RPMs available for puppet-common, puppet-keepalived, puppet-puppet, puppet-shorewall, puppet-yum, and even puppetlabs-stdlib. These are the dependencies required to install the puppet-gluster module.
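Installing the module and its dependencies from a directory of downloaded packages should be a matter of pointing yum at the files, for example (a sketch; the filename globs are illustrative):

yum localinstall puppet-gluster-*.rpm puppet-common-*.rpm puppetlabs-stdlib-*.rpm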

Please let me know if you find any issues with any of the packages, or if you have any recommendations for improvement! I'm new to packaging, so I probably made some mistakes.

Happy Hacking,

James

[1] package maintainer, aka: "paintainer" - according to semiosis, who is right!

[2] wizard as in an awesome, talented, hacker.


Posted on January 27, 2014

Screencasts of Puppet-Gluster + Vagrant

I decided to record some screencasts to show how easy it is to deploy GlusterFS using Puppet-Gluster+Vagrant. You can follow along even if you don’t know anything about Puppet or Vagrant. The hardest part of this process was producing the actual videos!

I recommend first reading my earlier articles if you're planning on following along.

Without any further delay, here are the screencasts:

Part 1: Intro, and provisioning of the Puppet server.

Part 2: Initial building of the Gluster hosts.

Part 3: Finishing the Gluster builds.

Part 4: GlusterFS client mounting and tests.

Part 5: Mixed bag of code, infrastructure tours, examples and other details.

I hope you enjoyed these videos. Thank you to the Gluster.org community for hosting them. If you liked these videos, please consider sponsoring some of my work, or making a donation!

As a side note, the only screencast tool that worked was gtk-recordmydesktop; however, it deleted my second recording (which had to be re-recorded), and the audio stopped working one minute into my third recording (which then had to be separately recorded and mixed in). Amazingly, pitivi was the only tool which worked to properly mix them together!

Happy Hacking,

James

PS: Please note, you may not sell, edit, redistribute, perform, or host these videos elsewhere without my permission. I especially don't want to see them on youtube until Google lets me unlink my youtube account! If you do want my permission to use these videos for something, contact me and we can work something out. I'll surely allow it if it's not for something evil. If you'd rather have an interactive, live demo, let me know!