Re: [Gluster-users] Inconsistent volume

2010-07-28 Thread Steve Wilson

On 07/26/2010 04:37 PM, Andy Pace wrote:

I too would like to know how to "sync up" a replicated pair of bricks. Right 
now I've got a slight difference between the two...

Scale-n-defrag.sh didn't do much either. Looking forward to some help :)

  13182120616 139057220 12362648984   2% /export
Vs
  13181705324 139057208 12362233500   2% /export

Granted, it's a very small amount (and the total available is slightly 
different), but the amount used should be the same, no?



-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Steve Wilson
Sent: Monday, July 26, 2010 3:35 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Inconsistent volume

I have a volume that is distributed and replicated.  While deleting a directory structure 
on the mounted volume, I also restarted the GlusterFS daemon on one of the replicated 
servers.  After the "rm -rf"
command completed, it complained that it couldn't delete a directory because it 
wasn't empty.  But from the perspective of the mounted volume it appeared 
empty.  Looking at the individual bricks, though, I could see that there were 
files remaining in this directory.

My question: what is the proper way to correct this problem and bring the volume back to 
a consistent state?  I've tried using the "ls -alR"
command to force a self-heal but for some reason this always causes the volume 
to become unresponsive from any client after 10 minutes or so.

Some clients/servers are running version 3.0.4 while the others are running 
3.0.5.

Thanks!

Steve

--
Steven M. Wilson, Systems and Network Manager
Markey Center for Structural Biology
Purdue University
(765) 496-1946

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


All,

Does anyone have any hints on how to proceed with this?  All three servers 
are now running version 3.0.5.  Running "du -bs" on each of the bricks shows 
the following:


Replication pair #1:
693999264161  /gluster/jiang-scratch-1a
693998682461  /gluster/jiang-scratch-1b

Replication pair #2:
231056270208  /gluster/jiang-scratch-2a
231049560706  /gluster/jiang-scratch-2b

Replication pair #3:
228227559462  /gluster/jiang-scratch-3a
228839698590  /gluster/jiang-scratch-3b

Is there something I can do manually (and safely) that will bring my 
volume back to a consistent state?
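
One manual approach (just a sketch, not an official procedure) is to diff the 
file listings of the two bricks in a replica pair and then stat any missing 
paths through a client mount so the replicate translator heals only those 
files.  The hostnames and the /mnt/jiang-scratch mount point below are 
placeholders; the brick paths are the ones from the du output above:

  # list every file relative to the brick root on each server
  ssh server1a 'cd /gluster/jiang-scratch-1a && find . -type f | sort' > brick-1a.list
  ssh server1b 'cd /gluster/jiang-scratch-1b && find . -type f | sort' > brick-1b.list

  # paths present on only one side are the out-of-sync files
  diff brick-1a.list brick-1b.list

  # stat each such path through the client mount to trigger self-heal
  # on just that file, e.g.:
  #   stat "/mnt/jiang-scratch/some/relative/path"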


Thanks,

Steve

--
Steven M. Wilson, Systems and Network Manager
Markey Center for Structural Biology
Purdue University
(765) 496-1946

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Inconsistent volume

2010-07-26 Thread Andy Pace
I too would like to know how to "sync up" a replicated pair of bricks. Right 
now I've got a slight difference between the two...

Scale-n-defrag.sh didn't do much either. Looking forward to some help :)

 13182120616 139057220 12362648984   2% /export
Vs
 13181705324 139057208 12362233500   2% /export

Granted, it's a very small amount (and the total available is slightly 
different), but the amount used should be the same, no?
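
For what it's worth, the numbers reported by df can differ slightly even when 
the replicas hold the same data (the local filesystems' own metadata overhead 
isn't necessarily identical), so comparing the bricks directly is more telling. 
A rough sketch, with serverA/serverB as placeholder hostnames and /export as 
the brick path from the df output above:

  # per-directory usage on each brick, two levels deep, sorted by path
  ssh serverA 'du -b --max-depth=2 /export | sort -k2' > usage-a.txt
  ssh serverB 'du -b --max-depth=2 /export | sort -k2' > usage-b.txt

  # lines that differ point at the directories holding the extra bytes
  diff usage-a.txt usage-b.txt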



-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Steve Wilson
Sent: Monday, July 26, 2010 3:35 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Inconsistent volume

I have a volume that is distributed and replicated.  While deleting a directory 
structure on the mounted volume, I also restarted the GlusterFS daemon on one 
of the replicated servers.  After the "rm -rf" 
command completed, it complained that it couldn't delete a directory because it 
wasn't empty.  But from the perspective of the mounted volume it appeared 
empty.  Looking at the individual bricks, though, I could see that there were 
files remaining in this directory.

My question: what is the proper way to correct this problem and bring the 
volume back to a consistent state?  I've tried using the "ls -alR" 
command to force a self-heal but for some reason this always causes the volume 
to become unresponsive from any client after 10 minutes or so.

Some clients/servers are running version 3.0.4 while the others are running 
3.0.5.

Thanks!

Steve

--
Steven M. Wilson, Systems and Network Manager
Markey Center for Structural Biology
Purdue University
(765) 496-1946

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Inconsistent volume

2010-07-26 Thread Steve Wilson
I have a volume that is distributed and replicated.  While deleting a 
directory structure on the mounted volume, I also restarted the 
GlusterFS daemon on one of the replicated servers.  After the "rm -rf" 
command completed, it complained that it couldn't delete a directory 
because it wasn't empty.  But from the perspective of the mounted volume 
it appeared empty.  Looking at the individual bricks, though, I could 
see that there were files remaining in this directory.


My question: what is the proper way to correct this problem and bring 
the volume back to a consistent state?  I've tried using the "ls -alR" 
command to force a self-heal but for some reason this always causes the 
volume to become unresponsive from any client after 10 minutes or so.
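
If the full "ls -alR" is what overwhelms the servers, a throttled walk of the 
mount may behave better: in the 3.0.x releases a plain stat (lookup) on a path 
is normally enough to make the replicate translator heal it.  A minimal 
sketch, assuming the volume is mounted at /mnt/jiang-scratch (adjust the path 
and the sleep to taste):

  MOUNT=/mnt/jiang-scratch

  # stat every entry one at a time instead of recursing all at once
  find "$MOUNT" -print0 |
  while IFS= read -r -d '' p; do
      stat "$p" > /dev/null
      sleep 0.01   # small pause so clients stay responsive
  done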


Some clients/servers are running version 3.0.4 while the others are 
running 3.0.5.


Thanks!

Steve

--
Steven M. Wilson, Systems and Network Manager
Markey Center for Structural Biology
Purdue University
(765) 496-1946

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users