I have an array that was created with:

gluster volume create ghome replica 3 transport tcp \
    machine00:/.g/ghome machine01:/.g/ghome machine02:/.g/ghome
gluster volume add-brick ghome \
    machine04:/.g/ghome machine05:/.g/ghome machine06:/.g/ghome
gluster volume add-brick ghome \
    machine07:/.g/ghome machine08:/.g/ghome machine10:/.g/ghome
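
For reference, that should leave a 3 x 3 distributed-replicate layout, so 
machine04's replica partners are machine05 and machine06.  Abridged, and from 
memory, gluster volume info ought to show something like:

  # gluster volume info ghome
  Volume Name: ghome
  Type: Distributed-Replicate
  Number of Bricks: 3 x 3 = 9
  Transport-type: tcp
  Bricks:
  Brick1: machine00:/.g/ghome
  Brick2: machine01:/.g/ghome
  Brick3: machine02:/.g/ghome
  Brick4: machine04:/.g/ghome
  Brick5: machine05:/.g/ghome
  Brick6: machine06:/.g/ghome
  Brick7: machine07:/.g/ghome
  Brick8: machine08:/.g/ghome
  Brick9: machine10:/.g/ghome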

machine04 died and isn't coming back.  Luckily, I was able to move its ZFS 
disk set over to a new machine09, and it came up just fine.  The ZFS disk set 
only contains /.g; none of the other filesystems (/var, /etc, /) moved with it.
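
Since the brick directory came over intact, I'd expect it to still carry the 
gluster extended attributes it had on machine04.  A quick sanity check on 
machine09 (assuming the pool is mounted at /.g as before, and if I'm 
remembering the xattr name right) would be:

  # ls /.g/ghome
  # getfattr -d -m . -e hex /.g/ghome | grep trusted.glusterfs.volume-id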

The question is, what is the best way to replace machine04 with machine09?

I don't want to destroy any data, and I want to do the replacement live.

Thanks.

I was reading:

  http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Managing%20Volumes/

and found the section "Replacing brick in Replicate/Distributed Replicate 
volumes", but it says:

  Please note that other variants of replace-brick command are not supported.

The example in that section keeps the same machine name and only changes the 
brick path, so I wasn't sure whether moving a brick to a different machine is 
the supported variant or one of the unsupported ones.

I think the right procedure is steps 1 through 6 of the "Replacing brick in 
Replicate/Distributed Replicate volumes" section, with step 6 being:

  # gluster volume replace-brick ghome machine04:/.g/ghome \
        machine09:/.g/ghome commit force

but I want to double-check that this is the right way to do it.
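
Assuming that's right, my plan after the commit force is to confirm the new 
brick shows up online and then watch the self-heal drain, along the lines of:

  # gluster volume status ghome
  # gluster volume heal ghome info
  # gluster volume heal ghome statistics heal-count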

machine09 is already in gluster peer status, and all other machines and bricks 
are up and operational.  Since this is a replica 3 volume and two good copies 
of the data in question remain, the volume is up and serving as normal.
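
Before pulling the trigger I intend to re-verify that, i.e. that machine09 
shows State: Peer in Cluster (Connected) and that there are no pending heals 
on the machine05/machine06 bricks:

  # gluster peer status
  # gluster volume heal ghome info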

I'm using CentOS and Ubuntu 14.04 and 16.04 with GlusterFS 3.7 (i.e. 
ppa:gluster/glusterfs-3.7 and 
http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo).