The use case is that we have a multi-TB data partition that we would
like to glusterize. Could we add that store as a brick to a Gluster
volume and have it explicitly rebalance across the volume? Or would
the existing files/layout be ignored?
This would be a big selling point in justifying Gluster.
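For what it's worth, the usual sequence for growing a volume is
something like the following (the volume and brick names here are
made up for illustration):

    # add the pre-populated partition as a new brick
    gluster volume add-brick datavol server5:/export/bigdata

    # explicitly redistribute files across all bricks
    gluster volume rebalance datavol start

    # watch progress
    gluster volume rebalance datavol status

Whether rebalance would pick up files that were on the partition
before it became a brick, rather than ignoring them, is exactly the
open question; that behaviour would need testing before committing
the data.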
On Dec 15, 2011, at 7:03 AM, Brian Rosner wrote:
> Backends (file only exists on brick 3 and 4):
Slight correction: I meant to say bricks 2 and 4
Brian Rosner
On Dec 15, 2011, at 4:53 AM, Pranith Kumar K wrote:
> Brian,
> Could you give the output of the same files/dirs on the backends as well.
I had to recreate the situation, so here is the new log entry showing
the new file it is renaming:
[2011-12-15 13:47:14.513882] W [fuse-bridge.c:1348:fu
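In case it helps with the backend comparison, the state of the file
on each brick can be dumped with getfattr (the brick path below is a
placeholder):

    # run as root on each brick server; dumps the extended attributes
    # GlusterFS keeps on the backend copy of the file
    getfattr -d -m . -e hex /export/brick1/path/to/file

    # and basic ownership/permissions for comparison across bricks
    stat /export/brick1/path/to/file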
On Dec 15, 2011, at 5:05 AM, Pranith Kumar K wrote:
> Brian,
> Sorry, we need the 'id 1630' output on the client machine.
I should have clarified that 1630 is the UID of user i130. I ran 'id i130' on
the client machine:
uid=1630(i130) gid=65534(nogroup) groups=65534(nogroup)
User i130 doe
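One thing that may be worth ruling out, given the 65534(nogroup)
group, is a UID/GID mapping mismatch between the client and the
brick servers; for example:

    # run on the client and on each brick server and compare;
    # the numbers come from the 'id i130' output above
    getent passwd 1630
    getent group 65534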
On 12/15/2011 03:35 AM, Brian Rosner wrote:
> On Dec 13, 2011, at 7:54 PM, Pranith Kumar K wrote:
>> On 12/13/2011 10:56 PM, Brian Rosner wrote:
>>> On Dec 12, 2011, at 1:41 AM, Pranith Kumar K wrote:
Seems like the issue is with that specific file your application is
trying to rename. Could you check
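One low-impact way to see exactly which rename fails, and with what
errno, is to trace only the rename calls of the application (the PID
below is a placeholder):

    # attach to the running application; -f follows children, and
    # failed calls show the errno inline in the log
    strace -f -e trace=rename -p <app-pid> -o /tmp/rename-trace.log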
On 12/15/2011 04:32 PM, Changliang Chen wrote:
Hi pranithk,
Thanks for your reply.
To keep availability, we haven't straced the process. After shutting
down the daemon, the cluster recovered.
In our case:
10.1.1.64 (dfs-client-6): online node; when the other node (65)
restarts, CPU user usage reaches 100% (glusterfsd process)
10.1.
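If it happens again, some information can be gathered without taking
the daemon down; for example (the PID below is a placeholder):

    # per-thread CPU usage inside glusterfsd, to see which thread spins
    top -H -p $(pgrep -d, glusterfsd)

    # syscall summary; strace adds overhead, so keep the window short
    # and detach with Ctrl-C after a few seconds to print the counts
    strace -c -f -p <glusterfsd-pid>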