Update GlusterFS 3.3.1 to 3.4.0 on CentOS 6.4 cluster

Notes from the GlusterFS 3.3.1 -> 3.4.0 upgrade on my storage/compute cluster at ILRI, Kenya. I referenced Vijay Bellur’s blog post about upgrading to 3.4, then adapted the process for my own infrastructure using Ansible (I gave an overview of my Ansible setup here).

Our cluster consists of:

  • Three “storage” nodes (gluster servers)
  • Three “compute” nodes (gluster clients)

All servers and clients are running CentOS 6.4.

System updates

Make sure all servers are running the latest system updates (kernels, etc.!).

First the storage servers:

ansible storage -m shell -a "yum -y upgrade" -K --limit=storage0
ansible storage -m shell -a "reboot" -K --limit=storage0
ansible storage -m shell -a "yum -y upgrade" -K --limit=storage1
ansible storage -m shell -a "reboot" -K --limit=storage1
ansible storage -m shell -a "yum -y upgrade" -K --limit=storage2
ansible storage -m shell -a "reboot" -K --limit=storage2

Then the compute servers:

ansible compute -m shell -a "yum -y upgrade" -K

Reboot one by one, starting with the head node (“hpc”):

ansible compute -m shell -a "reboot" -K --limit=hpc
ansible compute -m shell -a "reboot" -K --limit=compute0
ansible compute -m shell -a "reboot" -K --limit=compute1

Update GlusterFS: storage servers

With GlusterFS we need to update the servers first, then the clients. Unmount all GlusterFS mounts, stop glusterd, and then upgrade to 3.4.0 (a new glusterfs-epel.repo is copied as part of this step):

ansible storage -m shell -a "umount -t glusterfs -a" -u root --limit=storage0
ansible storage -m service -a "name=glusterd state=stopped" -u root --limit=storage0
ansible-playbook storage.yml -u root --tags=glusterfs-server,firewall --limit=storage0
ansible storage -m yum -a "name=glusterfs* state=latest" -u root --limit=storage0
ansible storage -m service -a "name=glusterd state=started" -u root --limit=storage0

Repeat for storage1 and storage2. I suppose if you are really brave and know what you’re doing, you could remove the --limit to do all the servers at once. 😉 A slightly less scary middle ground is the loop below.
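
If you’d rather not paste the same five lines twice more, a small shell loop over the remaining servers does the same thing, still one host at a time (this is just the commands from above wrapped in a for loop):

for host in storage1 storage2; do
    ansible storage -m shell -a "umount -t glusterfs -a" -u root --limit=$host
    ansible storage -m service -a "name=glusterd state=stopped" -u root --limit=$host
    ansible-playbook storage.yml -u root --tags=glusterfs-server,firewall --limit=$host
    ansible storage -m yum -a "name=glusterfs* state=latest" -u root --limit=$host
    ansible storage -m service -a "name=glusterd state=started" -u root --limit=$host
done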

NB: Make sure you update your firewall settings to account for GlusterFS’ new port allocation for bricks; brick ports now start at 49152 (previously they started at 24009). My port configurations are managed by Ansible.
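
If you manage iptables by hand instead, the rules look something like this. The brick range here assumes a hypothetical eight bricks per server (3.4 allocates one port per brick, counting up from 49152), so size yours accordingly:

# glusterd management ports (unchanged in 3.4)
iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
# brick ports: one per brick starting at 49152 (was 24009); range assumes <= 8 bricks
iptables -A INPUT -p tcp --dport 49152:49159 -j ACCEPT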

Check if volumes have all bricks online

All volumes should have their brick processes up (Online = Y, with a valid PID, etc.):

ansible storage -m shell -a "gluster volume status" -u root

Mount gluster volumes

ansible storage -m shell -a "mount -t glusterfs -a" -u root

Update GlusterFS: compute nodes

ansible compute -m shell -a "umount -t glusterfs -a" -u root
ansible-playbook compute.yml -u root --tags=glusterfs-client
ansible compute -m shell -a "mount -a -t glusterfs" -u root

Annnnd Bob’s your uncle. 🙂

Some notes

You might notice that above I sometimes ran ansible with -K and other times with -u root: the first form connects as your local username and escalates with sudo, while the second SSHes in directly as root. I had to do this because I can’t log in as myself once /home is unmounted (it lives on gluster). Both forms use private key authentication, of course.
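
Concretely, the two forms look like this (uptime is just a harmless placeholder command):

# Form 1: SSH as your own user, escalate with sudo (-K prompts for the sudo password)
ansible storage -m shell -a "uptime" -K
# Form 2: SSH directly as root, no sudo involved
ansible storage -m shell -a "uptime" -u root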

Also, Vijay Bellur has some “What’s New” slides for GlusterFS 3.4.
