On 06/14/2013 01:04 PM, John Brunelle wrote:
> Thanks, Jeff! I ran readdir.c on all 23 bricks on the gluster nfs
> server to which my test clients are connected (one client that's
> working, and one that's not; and I ran on those, too). The results
> are attached.
>
> The values it prints are al
Ah, I did not know that about 0x7. Is it noteworthy that the clients do *not* get this?
This is on an NFS mount, and the volume has nfs.enable-ino32 set to On. (I should've pointed that out again when Jeff mentioned FUSE.)
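As background on nfs.enable-ino32: gluster's 64-bit inode numbers have to be squeezed into 32 bits for clients that can't handle large inodes. I don't know the exact hash gluster uses, but the general technique is a deterministic 64-to-32 fold, along these lines (illustrative only):

```c
#include <stdint.h>

/* Deterministically fold a 64-bit inode number into 32 bits.
 * NOTE: illustrative only -- gluster's actual enable-ino32 mapping
 * may differ. Any stable 64->32 fold shares the same caveat:
 * distinct 64-bit inodes can collide in the 32-bit space. */
uint32_t fold_ino32(uint64_t ino64)
{
    return (uint32_t)(ino64 >> 32) ^ (uint32_t)(ino64 & 0xffffffffu);
}
```

The relevant point for this thread is that the mapping changes which inode numbers clients observe, which is why toggling the option plus dropping caches can change what ls returns.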
Side note -- we do have a couple of FUSE mounts, too, and I had not seen this issue there.
Are the ls commands (which list partially, or loop and eventually die of ENOMEM) executed on an NFS mount or a FUSE mount? Or does it happen on both?
Avati
On Fri, Jun 14, 2013 at 11:14 AM, Anand Avati wrote:
> On Fri, Jun 14, 2013 at 10:04 AM, John Brunelle wrote:
On 06/13/2013 03:38 PM, John Brunelle wrote:
> We have a directory containing 3,343 subdirectories. On some
> clients, ls lists only a subset of the directories (a different
> amount on different clients). On others, ls gets stuck in a getdents
> loop and consumes more and more memory until it hits ENOMEM.
Thanks for the reply, Vijay. I set that parameter to On, but it hasn't helped, and in fact things seem a bit worse. After making the change on the volume and dropping caches on some test clients, some now see no subdirectories at all. In my tests before, after dropping caches clients go bac
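For anyone reproducing: "dropping caches" here means writing 3 (free pagecache plus dentries and inodes) to /proc/sys/vm/drop_caches as root. A small C wrapper, parameterized on the control-file path so it can be exercised without root -- the path argument is the only assumption:

```c
#include <stdio.h>

/* Write "3\n" (free pagecache, dentries and inodes) to the given
 * control file. Pass "/proc/sys/vm/drop_caches" for the real thing
 * (requires root). Returns 0 on success, -1 on error. */
int drop_caches(const char *ctl_path)
{
    FILE *f = fopen(ctl_path, "w");
    if (f == NULL)
        return -1;
    int rc = (fputs("3\n", f) >= 0) ? 0 : -1;
    if (fclose(f) != 0)
        rc = -1;
    return rc;
}
```

Equivalent to the usual `echo 3 > /proc/sys/vm/drop_caches`; useful for scripting the before/after comparison across test clients.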
On 06/13/2013 03:38 PM, John Brunelle wrote:
Hello,
We're having an issue with our distributed gluster filesystem:
* gluster 3.3.1 servers and clients
* distributed volume -- 69 bricks (4.6T each) split evenly across 3 nodes
* xfs backend
* nfs clients
* nfs.enable-ino32: On
* servers: CentOS 6.3, 2.6.32-279.14.1.el6.centos.plus.x86_64
* c