Hello all fellow GlusterFriends,

I would like you to comment on / correct my upgrade procedure steps for a replica 2 
volume running Gluster 3.7.x.
Then I would like to change replica 2 to replica 3 in order to correct the quorum 
issue that the infrastructure currently has.

Infrastructure setup:
- all clients run on the same nodes as the servers (FUSE mounts)
- under Gluster there is a ZFS pool running as raidz2 with an SSD SLOG/ZIL cache
- both hypervisors run as GlusterFS nodes and also as QEMU compute nodes 
(Ubuntu 16.04 LTS)
- we are running QEMU VMs that access their disks via gfapi (OpenNebula)
- we currently run a 1 x 2 volume, Type: Replicate

Current Versions :
glusterfs-* [package] 3.7.6-1ubuntu1
qemu-*          [package] 2.5+dfsg-5ubuntu10.2glusterfs3.7.14xenial1

What we need : (New versions)
- upgrade GlusterFS to the 3.12 LTM version (the Ubuntu 16.04 LTS packages are EOL - 
see https://www.gluster.org/community/release-schedule/)
        - I want to use 
https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12 as the package 
repository for 3.12 (a short sketch of enabling both PPAs follows this list)
- upgrade QEMU (with built-in support for libgfapi) - 
https://launchpad.net/~monotek/+archive/ubuntu/qemu-glusterfs-3.12
        - (sadly, Ubuntu's packages are built without libgfapi support)
- add a third node to the replica setup of the volume (this is probably the most 
dangerous operation)
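
For reference, enabling those two Launchpad PPAs should roughly look like this (on 
every server) - I assume the ppa: shorthands below map to the URLs above and that 
add-apt-repository (software-properties-common) is installed; please correct me if 
the repository names differ:

        add-apt-repository ppa:gluster/glusterfs-3.12
        add-apt-repository ppa:monotek/qemu-glusterfs-3.12
        apt-get update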

Backup Phase
- backup the "NFS storage" - the raw data that the VMs work with
- stop all running VMs
- backup all VM disks (Qcow2 images) to storage outside of Gluster (a sketch 
follows this list)
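
Roughly, the image backup step could look like the following - the paths are only 
placeholders for illustration (our OpenNebula datastores sit on the FUSE mount, 
/backup is storage outside of Gluster):

        # with all VMs stopped, copy the Qcow2 images off the Gluster volume
        rsync -avP /var/lib/one/datastores/ /backup/one-datastores/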

Upgrading Gluster Phase
- killall glusterfs glusterfsd glusterd (on every server)
        (this should stop all Gluster services - server and client, as they run on 
the same nodes)
- install the new Gluster server and client packages from the repository mentioned 
above (on every server; see the sketch after this list)
- install Monotek's new qemu packages with libgfapi support enabled (on every 
server)
- /etc/init.d/glusterfs-server start (on every server)
- /etc/init.d/glusterfs-server status - verify that everything runs OK (on every 
server)
        - check :
                - gluster volume info
                - gluster volume status
                - check the Gluster FUSE clients - verify that the mounts work as 
expected
- test whether various VMs are able to boot and run as expected (i.e. whether 
libgfapi works in QEMU)
- reboot all nodes - do a system upgrade of packages
- test and check again
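
A rough per-node sketch of the install / verify steps above. The package names and 
the gluster:// example path are my assumptions (I have not yet checked exactly 
which packages the monotek PPA ships), so please correct me if they differ:

        killall glusterfs glusterfsd glusterd
        apt-get update
        apt-get install glusterfs-server glusterfs-client
        apt-get install qemu-kvm qemu-utils    # PPA builds with gfapi support (exact package set to be confirmed)
        /etc/init.d/glusterfs-server start
        gluster --version
        gluster volume info
        gluster volume status
        mount | grep glusterfs                 # verify the FUSE mounts came back
        qemu-img info gluster://node1.san/volume/some-vm-disk.qcow2
                # quick libgfapi check - the volume name and image path are just examples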

Adding third node to replica 2 setup (replica 2 => replica 3)
(the volumes will be mounted and up after the upgrade, and we will have tested that 
VMs can be served via libgfapi = the Gluster upgrade completed successfully)
(next we extend replica 2 to replica 3 while the volumes are mounted but no data is 
being touched = no running VMs, only the glusterfs servers and clients on the nodes)
- issue the command: gluster volume add-brick volume replica 3 
node3.san:/tank/gluster/brick1 (node3 must first be peer-probed into the trusted 
pool - see the sketch after this list)
        so we change:
                Bricks:
                        Brick1: node1.san:/tank/gluster/brick1
                        Brick2: node2.san:/tank/gluster/brick1
        to:
                Bricks:
                        Brick1: node1.san:/tank/gluster/brick1
                        Brick2: node2.san:/tank/gluster/brick1
                        Brick3: node3.san:/tank/gluster/brick1
- check gluster status
- (is a rebalance / heal required here?)
- start all VMs and start celebration :)
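
The add-brick step in a bit more detail - a sketch that assumes the volume is 
literally named "volume" as in the command above (substitute the real volume name):

        gluster peer probe node3.san
        gluster peer status                # node3 should now be listed as a peer
        gluster volume add-brick volume replica 3 node3.san:/tank/gluster/brick1
        gluster volume info volume         # should now show Number of Bricks: 1 x 3 = 3
        gluster volume heal volume full    # trigger a full heal so the new brick gets populated
        gluster volume heal volume info    # watch until the number of entries drops to zero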

My Questions
- are a heal and a rebalance necessary when going from replica 2 to replica 3?
- is this upgrade procedure OK? What more / else should I do in order to perform 
this upgrade correctly?

Many thanks to all for your support. I hope my little preparation howto will help 
others in the same situation.

Best Regards,
Martin