I've executed "gluster volume replace-brick ... status" on multiple peers at the same time, which resulted in quite an interesting breakage.
It's no longer possible to pause/abort/status/start the replace-brick operation. Please advise. I'm running glusterfs 3.2.6.

root@ca2:~# gluster volume replace-brick sites ca1-int:/data/glusterfs ca2-int:/data/ca1 status
replace-brick status unknown
root@ca2:~# gluster volume replace-brick sites ca1-int:/data/glusterfs ca2-int:/data/ca1 pause
replace-brick pause failed
root@ca2:~# gluster volume replace-brick sites ca1-int:/data/glusterfs ca2-int:/data/ca1 abort
replace-brick abort failed
root@ca2:~# gluster volume replace-brick sites ca1-int:/data/glusterfs ca2-int:/data/ca1 start
replace-brick failed to start

-- 
Tomasz Chmielewski
http://www.ptraveler.com

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users