Re: [Gluster-users] Bricks filling up

2013-04-16 Thread Ling Ho
On 04/16/2013 02:58 PM, Ling Ho wrote: No, this is our in-house patch. A similar fix is still not in 3.3.2qa, but it is in 3.4.0alpha. I couldn't find any bug reported, if any. ... ling On 04/16/2013 02:15 PM, Thomas Wakefield wrote: Do you have the bug # for this patch? On Apr 16, 2013, at 3:48 PM, …

Re: [Gluster-users] Bricks filling up

2013-04-16 Thread Ling Ho
No, this is our in-house patch. A similar fix is still not in 3.3.2qa, but it is in 3.4.0alpha. I couldn't find any bug reported, if any. ... ling On 04/16/2013 02:15 PM, Thomas Wakefield wrote: Do you have the bug # for this patch? On Apr 16, 2013, at 3:48 PM, Ling Ho wrote: M…

Re: [Gluster-users] Bricks filling up

2013-04-16 Thread Ling Ho
max_inodes)) { max = conf->du_stats[i].avail_space; max_inodes = conf->du_stats[i].avail_inodes; ... ling On 04/16/2013 12:38 PM, Thomas Wakefield wrote: Running 3.3.1 on everything, client and servers :( Thomas Wakefield Sr Sys A

Re: [Gluster-users] Bricks filling up

2013-04-16 Thread Ling Ho
On 04/15/2013 06:35 PM, Thomas Wakefield wrote: Help- I have multiple gluster filesystems, all with the setting cluster.min-free-disk: 500GB. My understanding is that this setting should stop new writes to a brick with less than 500GB of free space, but that existing files might expand, wh…
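The setting under discussion is applied with the standard volume-set command. A minimal sketch, assuming a hypothetical volume named `myvol` (`cluster.min-free-disk` accepts a percentage or an absolute size):

```shell
# Ask DHT to stop placing *new* files on bricks with < 500GB free.
# Note: this does not stop files already on a brick from growing.
gluster volume set myvol cluster.min-free-disk 500GB

# Confirm the option is recorded in the volume configuration.
gluster volume info myvol
```

As the thread goes on to note, in 3.3.x this only influences where new files are created; a file that already lives on a nearly full brick can still grow until the brick fills up.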

Re: [Gluster-users] Write failure on distributed volume with free space available

2013-02-08 Thread Ling Ho
(after remounting the volume), and it still writes to the bricks with less than 10% or less than 500GB free. Is this a feature available in later releases only? I am using stable 3.3.1. Thanks, ... ling On 01/28/2013 02:37 PM, Jeff Darcy wrote: On 01/28/2013 05:19 PM, Ling Ho wrote: How…

Re: [Gluster-users] Write failure on distributed volume with free space available

2013-01-28 Thread Ling Ho
How "full" does it have to be before new files start getting written to the other bricks? In my recent experience, I added a new brick to an existing volume while one of the existing 4 bricks was close to full, and yet I constantly get an out-of-space error when trying to write new files. Full r…

Re: [Gluster-users] 3.3 Quota Problems

2012-09-11 Thread Ling Ho
I found this only happens when I mount the volume with the "acl" option. Otherwise, I don't get the "no limit-set option provided" message, and things work fine, without even needing to remount the volume to see changes to quota. ... ling On 09/10/2012 07:15 PM, Ling…

Re: [Gluster-users] 3.3 Quota Problems

2012-09-10 Thread Ling Ho
"no limit-set" option provided ... ling On 09/10/2012 03:01 PM, Ling Ho wrote: I am trying to use directory quota in our environment and face two problems: 1. When a new quota is set on a directory, it doesn't take effect until the volume is remounted on the client. This is a major inconvenience. 2. …

[Gluster-users] 3.3 Quota Problems

2012-09-10 Thread Ling Ho
I am trying to use directory quota in our environment and face two problems: 1. When a new quota is set on a directory, it doesn't take effect until the volume is remounted on the client. This is a major inconvenience. 2. If I add a new quota, quota stops working on the client. This is how to r…
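The quota workflow being tested can be sketched with the 3.3-era CLI. The volume name `myvol` and directory `/projects` here are hypothetical; note the path is relative to the volume root, not to the client mount:

```shell
# Enable the quota feature on the volume, then cap a directory.
gluster volume quota myvol enable
gluster volume quota myvol limit-usage /projects 10GB

# Show configured limits and current usage.
gluster volume quota myvol list
```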

[Gluster-users] stat-prefetch under 3.3

2012-09-05 Thread Ling Ho
Is stat-prefetch enabled by default under 3.3? I don't see the option in the volume file even if I use volume set to turn it on. I am trying to speed up directory listing operations (ls -al) on client nodes. Thanks, ... ling
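For reference, the option can be toggled explicitly (hypothetical volume `myvol`); one possible explanation for the observation above is that in 3.3 the functionality lives in the md-cache translator, so the volfile may not show a translator literally named "stat-prefetch":

```shell
# Turn stat-prefetch on and check that the option is recorded.
gluster volume set myvol performance.stat-prefetch on
gluster volume info myvol
```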

Re: [Gluster-users] RDMA "not fully supported" by GlusterFS 3.3.0 ?!

2012-07-23 Thread Ling Ho
scratched my head more than once, thinking about what I could possibly have forgotten. Then I searched for all information I could find about RDMA and 3.3.0. Here is what I found: - On page 123 of the "GlusterFS Administration Guide 3.3.0", a small note saying: "NOTE: with 3.3.0 rel…

[Gluster-users] rdma under 3.3.0

2012-07-07 Thread Ling Ho
Hello, I just noticed none of my clients is able to mount using rdma. My glusterfs volume is set up with tcp,rdma. On my clients, connected to the server via InfiniBand, I am trying to mount using the transport=rdma and acl options. I am using the volume.rdma naming format for the volume. However, ev…
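A typical RDMA mount invocation for the setup described might look like the following; the server name `server1`, volume `myvol`, and mount point are hypothetical:

```shell
# Native-client mount forcing the RDMA transport, with ACL support.
mount -t glusterfs -o transport=rdma,acl server1:/myvol.rdma /mnt/myvol

# Roughly equivalent direct invocation of the client process.
glusterfs --volfile-server=server1 --volfile-id=myvol.rdma --acl /mnt/myvol
```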

[Gluster-users] Weird file and file truncation problems

2012-06-14 Thread Ling Ho
I have a file on a brick with a weird permission mode. I thought the "T" only appears on zero-length pointer files. -r--r-s--T+ 2 psdatmgr ps-data 98780901596 Jan 18 15:06 e141-r0001-s02-c01.xtc lsof shows it is being held open and read/write by the glusterfs process for the brick. glusterfs…
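The trailing "T" in a mode string means the sticky bit is set while the others-execute bit is not; GlusterFS's DHT translator uses that mode, together with zero length, to mark link-to pointer files on bricks. A minimal illustration of the rendering itself, using a plain scratch file rather than Gluster (GNU `stat` assumed):

```shell
# Sticky bit without any execute bit renders as a trailing 'T';
# DHT link-to files on a brick are zero-length files with this mode.
f=$(mktemp)
chmod 1000 "$f"            # mode 01000: sticky bit only
mode=$(stat -c %A "$f")    # GNU stat; prints the symbolic mode string
echo "$mode"               # -> ---------T
rm -f "$f"
```

The file in the listing above is unusual precisely because it combines the T bit with a large size and a setgid "s", which is what made it worth reporting.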

Re: [Gluster-users] rebalance speedup

2012-06-12 Thread Ling Ho
hence reducing the bottleneck of having only 1 rebalance thread do the work. Additionally, all rebalance processes push data to the destination node, hence better utilizing the network too. Hope this answers your query. With regards, Shishir - Original Message - From: "Ling Ho" To:

[Gluster-users] rebalance speedup

2012-06-12 Thread Ling Ho
I found a discussion earlier this year regarding a planned feature for speeding up rebalancing in 3.3.0. Is there a way to speed up rebalancing now in 3.3.0? Is it possible to let rebalancing between multiple servers make use of more of the available network bandwidth? I have 5 servers with 10…
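Rebalance is driven per volume from the CLI; a sketch with a hypothetical volume `myvol`. Per the reply above, in 3.3 each server in the volume runs its own rebalance process and pushes data toward the destination bricks:

```shell
# Start a rebalance across the volume's servers.
gluster volume rebalance myvol start

# Poll per-node progress (files scanned, rebalanced, failures).
gluster volume rebalance myvol status
```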

Re: [Gluster-users] Brick crashes

2012-06-08 Thread Ling Ho
0x32/0x40 [] sys_mmap_pgoff+0x5c/0x2d0 [] sys_mmap+0x29/0x30 [] system_call_fastpath+0x16/0x1b On 06/08/2012 05:18 PM, Anand Avati wrote: Those are 4.x GB. Can you post dmesg output as well? Also, what's 'ulimit -l' on your system? On Fri, Jun 8, 2012 at 4:41 PM, Ling Ho <

Re: [Gluster-users] Brick crashes

2012-06-08 Thread Ling Ho
files after the crash? Avati On Fri, Jun 8, 2012 at 4:22 PM, Ling Ho <l...@slac.stanford.edu> wrote: Hello, I have a brick that crashed twice today, and a different brick that crashed just a while ago. This is what I see in one of the brick logs: …

[Gluster-users] Brick crashes

2012-06-08 Thread Ling Ho
Hello, I have a brick that crashed twice today, and a different brick that crashed just a while ago. This is what I see in one of the brick logs: patchset: git://git.gluster.com/glusterfs.git patchset: git://git.gluster.com/glusterfs.git signal received: 6 signal received: 6 time of cr…

Re: [Gluster-users] AUX GID > 16

2012-05-31 Thread Ling Ho
reproduce this issue. Let me know. Thanks, Anush On 05/17/2012 07:38 AM, Ling Ho wrote: Hi Anush, Did you have a chance to try to reproduce the error? Should I expect this feature/fix be supported in 3.3? Let me know if I am doing the right thing to test it, or if I should download a different

Re: [Gluster-users] AUX GID > 16

2012-05-16 Thread Ling Ho
Hi Anush, Did you have a chance to try to reproduce the error? Should I expect this feature/fix be supported in 3.3? Let me know if I am doing the right thing to test it, or if I should download a different tarball to try it. Thanks, ... ling On 05/07/2012 11:56 AM, Ling Ho wrote: Hi

Re: [Gluster-users] AUX GID > 16

2012-05-07 Thread Ling Ho
v105:mnt> id uid=10858(tstopr) gid=1109(xs) groups=1109(xs) Thanks, ... ling On 05/06/2012 11:10 PM, Anush Shetty wrote: Hi Ling, Can you please give me the steps used in testing the issue so that I could reproduce it locally. - Anush On 05/05/2012 06:24 AM, Ling Ho wrote: Hi, I trie

[Gluster-users] AUX GID > 16

2012-05-02 Thread Ling Ho
Hello, I am looking at the changes done for bug 767229. I downloaded and installed 3.3.0beta3-1, which seems to have included the changes. But when I make myself a member of more than 16 groups, I can't access any directories or files under my mounted glusterfs file system. I got a "Trans…
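The 16-group ceiling comes from the AUTH_SYS (AUTH_UNIX) credential defined in the ONC RPC specification (RFC 5531), which allows at most 16 supplementary GIDs on the wire; a caller in more groups gets the remainder silently dropped unless the server resolves group membership itself. A minimal sketch of the truncation (the GID values are hypothetical):

```shell
# AUTH_SYS (RFC 5531) carries at most 16 supplementary GIDs per RPC
# call, so a caller in more groups is silently truncated on the wire.
AUTH_SYS_MAX_GIDS=16
gids=$(seq 1109 1133)                        # hypothetical: 25 groups
sent=$(printf '%s\n' $gids | head -n "$AUTH_SYS_MAX_GIDS")
printf '%s\n' $sent | wc -l                  # -> 16
```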