Re: [Gluster-users] gluster volume replace-brick ... status breaks when executed on multiple nodes

2012-05-26 Thread Krishnan Parthasarathi
Tomasz,

Glusterd version 3.2.6 doesn't handle concurrently issued volume commands
gracefully, and it is known to end up in situations like the one you have
described below. This was fixed early in the development of what we
informally refer to as 3.3.0.
[Ref: https://bugzilla.redhat.com/show_bug.cgi?id=GLUSTER-3320]

Having said that, the gluster CLI's command semantics permit only one
volume command (such as create, start, or stop) to run on the cluster
(storage pool) at a time. Even with the fix for the bug referred to above
(in master), when two gluster commands are issued in parallel, both of
them _may_ fail. The fix ensures that you don't get into any 'breakages';
it is as though the commands 'collided' and both aborted themselves.
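
Since only one volume command may run cluster-wide at a time, a practical
habit is to issue all volume commands from a single admin node and
serialize them there. A minimal sketch using flock(1) from util-linux --
the lock file path and 60-second timeout are arbitrary choices of this
example, not anything gluster itself requires:

```shell
#!/bin/sh
# Serialize gluster volume commands so two invocations never overlap.
# Run all volume commands through this wrapper, on ONE admin node only.
LOCKFILE="${LOCKFILE:-/tmp/gluster-cli.lock}"   # arbitrary writable path

run_serialized() {
    # -w 60: wait up to 60 seconds for the lock rather than failing at once
    flock -w 60 "$LOCKFILE" "$@"
}

# Example usage (on the admin node):
#   run_serialized gluster volume replace-brick sites \
#       ca1-int:/data/glusterfs ca2-int:/data/ca1 status
run_serialized echo "lock acquired, command ran"
```

This only prevents collisions between commands launched through the
wrapper on that one node; it cannot stop a command issued directly on
another peer, which is why keeping administration on a single node
matters.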

Hope that helps,
krish

- Original Message -
From: Tomasz Chmielewski man...@wpkg.org
To: Gluster General Discussion List gluster-users@gluster.org
Sent: Friday, May 25, 2012 11:16:09 PM
Subject: [Gluster-users] gluster volume replace-brick ... status breaks when executed on multiple nodes

I've executed 'gluster volume replace-brick ... status' on multiple peers
at the same time, which resulted in quite an interesting breakage.

It's no longer possible to pause/abort/status/start the replace-brick operation.

Please advise. I'm running glusterfs 3.2.6.

root@ca2:~# gluster volume replace-brick sites ca1-int:/data/glusterfs ca2-int:/data/ca1 status
replace-brick status unknown
root@ca2:~# gluster volume replace-brick sites ca1-int:/data/glusterfs ca2-int:/data/ca1 pause
replace-brick pause failed
root@ca2:~# gluster volume replace-brick sites ca1-int:/data/glusterfs ca2-int:/data/ca1 abort
replace-brick abort failed
root@ca2:~# gluster volume replace-brick sites ca1-int:/data/glusterfs ca2-int:/data/ca1 start
replace-brick failed to start



-- 
Tomasz Chmielewski
http://www.ptraveler.com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users