/2013 02:58 PM, Ling Ho wrote:
No this is our in-house patch. A similar fix is still not in 3.3.2qa
but it's in 3.4.0alpha. I couldn't find any bug reported, if any.
...
ling
On 04/16/2013 02:15 PM, Thomas Wakefield wrote:
Do you have the bug # for this patch?
On Apr 16, 2013, at 3:48 PM, Ling Ho wrote:
            max_inodes)) {
        max = conf->du_stats[i].avail_space;
        max_inodes = conf->du_stats[i].avail_inodes;
...
ling
On 04/16/2013 12:38 PM, Thomas Wakefield wrote:
Running 3.3.1 on everything, client and servers :(
Thomas Wakefield
Sr Sys A
On 04/15/2013 06:35 PM, Thomas Wakefield wrote:
Help-
I have multiple gluster filesystems, all with the setting:
cluster.min-free-disk: 500GB. My understanding is that this setting should
stop new writes to a brick with less than 500GB of free space. But that
existing files might expand, wh
(after remounting the
volume), and it still writes to the bricks with less than 10% or less
than 500GB free.
Is this feature available in later releases only? I am using Stable
3.3.1.
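For reference, the option in question is set per volume; a minimal sketch, assuming a volume named `myvol` (the volume name is mine, not from the thread):

```shell
# Tell DHT to stop scheduling new files onto bricks with less than
# 500GB free (a percentage such as "10%" is also accepted).
gluster volume set myvol cluster.min-free-disk 500GB

# Confirm the option took effect.
gluster volume info myvol | grep min-free-disk
```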
Thanks,
...
ling
On 01/28/2013 02:37 PM, Jeff Darcy wrote:
On 01/28/2013 05:19 PM, Ling Ho wrote:
How "full" does it have to be before new files start getting written into
the other bricks?
In my recent experience, I added a new brick to an existing volume while
one of the existing 4 bricks was close to full. And yet I constantly got
out-of-space errors when trying to write new files. Full r
I found this only happens when I mount the volume with the "acl" option.
Otherwise, I don't get the "no limit-set option provided" message, and
things work fine. I don't even need to remount the volume to see
changes to quota.
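For context, the quota and mount commands involved look roughly like this; a sketch assuming a volume `myvol`, a server `server1`, and a directory `/projects` (all three names are mine):

```shell
# Enable directory quota and set a limit (3.3-era syntax).
gluster volume quota myvol enable
gluster volume quota myvol limit-usage /projects 100GB
gluster volume quota myvol list

# Mount with POSIX ACL support -- the option that triggered the
# "no limit-set option provided" message in this report.
mount -t glusterfs -o acl server1:/myvol /mnt/myvol
```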
...
ling
On 09/10/2012 07:15 PM, Ling
t-set" option provided
...
ling
On 09/10/2012 03:01 PM, Ling Ho wrote:
I am trying to use directory quota in our environment and face two problems:
1. When new quota is set on a directory, it doesn't take effect until
the volume is remounted on the client. This is a major inconvenience.
2. If I add a new quota, quota stops working on the client.
This is how to r
Is stat-prefetch enabled by default under 3.3? I don't see the option in
the volume file even if I use volume set to turn it on. I am trying to
speed up directory listing operation (ls -al) on client nodes.
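A hedged example of how one would toggle and then inspect the translator; `myvol` is an assumed volume name:

```shell
# Try enabling stat-prefetch explicitly...
gluster volume set myvol performance.stat-prefetch on

# ...then check whether it appears under "Options Reconfigured".
gluster volume info myvol
```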
Thanks,
...
ling
ratched my head more than once, thinking about what I could possibly have
forgotten. Then I searched for all information I could find about RDMA and
3.3.0.
Here is what I found:
- On page 123 of the "GlusterFS Administration Guide 3.3.0", a small note saying:
"NOTE: with 3.3.0 rel
Hello,
I just noticed none of my clients is able to mount using rdma. My
glusterfs volume is set up with tcp,rdma. On my clients connected to the
server via infiniband, I am trying to mount using transport=rdma, and
acl options. I am using volume.rdma naming format for the volume.
However, ev
I have a file on a brick with a weird permission mode. I thought the "T"
only appears on zero-length pointer files.
-r--r-s--T+ 2 psdatmgr ps-data 98780901596 Jan 18 15:06
e141-r0001-s02-c01.xtc
lsof shows it is being held open and read/written by the glusterfs process
for the brick.
glusterfs
hence
reducing the bottleneck of having only 1 rebalance thread do the work.
Additionally, all rebalance processes push data to the destination node, hence
better utilizing the network too.
Hope this answers your query.
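The per-node rebalancing described above is driven with the usual commands; a sketch assuming a volume `myvol`:

```shell
# Start a rebalance; in 3.3 each server migrates the files it owns,
# pushing data directly to the destination node.
gluster volume rebalance myvol start

# Watch per-node progress (one status line per server).
gluster volume rebalance myvol status
```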
With regards,
Shishir
- Original Message -
From: "Ling Ho"
To:
I found a discussion earlier this year regarding a planned feature for
speeding up rebalancing in 3.3.0.
Is there a way to speed up rebalancing now in 3.3.0? Is it possible to
let rebalancing between multiple servers make use of more of the available
network bandwidth?
I have 5 servers with 10
0x32/0x40
[] sys_mmap_pgoff+0x5c/0x2d0
[] sys_mmap+0x29/0x30
[] system_call_fastpath+0x16/0x1b
On 06/08/2012 05:18 PM, Anand Avati wrote:
Those are 4.x GB. Can you post dmesg output as well? Also, what's
'ulimit -l' on your system?
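To answer the 'ulimit -l' question, one can check the locked-memory limit both for the current shell and for an already-running brick process (the PID below is a placeholder):

```shell
# Max locked memory (in KB, or "unlimited") for the current shell.
ulimit -l

# The same limit as seen by a running brick process --
# replace <pid> with the brick's actual PID:
#   grep 'Max locked memory' /proc/<pid>/limits
```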
On Fri, Jun 8, 2012 at 4:41 PM, Ling Ho <
files after the crash?
Avati
On Fri, Jun 8, 2012 at 4:22 PM, Ling Ho <l...@slac.stanford.edu> wrote:
Hello,
I have a brick that crashed twice today, and another different
brick that crashed just a while ago.
This is what I see in one of the brick logs:
patchset: git://git.gluster.com/glusterfs.git
signal received: 6
time of cr
reproduce this issue.
Let me know.
Thanks,
Anush
On 05/17/2012 07:38 AM, Ling Ho wrote:
Hi Anush,
Did you have a chance to try to reproduce the error? Should I expect
this feature/fix be supported in 3.3?
Let me know if I am doing the right thing to test it, or if I should
download a different tarball to try it.
Thanks,
...
ling
On 05/07/2012 11:56 AM, Ling Ho wrote:
Hi
v105:mnt> id
uid=10858(tstopr) gid=1109(xs) groups=1109(xs)
Thanks,
...
ling
On 05/06/2012 11:10 PM, Anush Shetty wrote:
Hi Ling,
Can you please give me the steps used in testing the issue so that I
could reproduce it locally.
-
Anush
On 05/05/2012 06:24 AM, Ling Ho wrote:
Hi,
I trie
Hello,
I am looking at the changes done for this bug 767229.
I downloaded and installed 3.3.0beta3-1 which seems to have included the
changes. But when I make myself a member of more than 16 groups, I can't
access any directories or files under my mounted glusterfs file system.
I got a "Trans