Anders,

Most likely this is a memory leak. It would be helpful if you could file a bug on
this. The following information would be useful for fixing the issue:

1. Valgrind reports (if possible).
 a. To start the brick and nfs processes under valgrind, use the following
command line when starting glusterd:
    # glusterd --xlator-option *.run-with-valgrind=yes

    In this case all the valgrind logs can be found in the standard glusterfs
log directory (see the example after this list).

 b. For the client you can run glusterfs under valgrind just like any other
process. Since glusterfs normally daemonizes itself, we need to prevent that
when running under valgrind by keeping it in the foreground; the -N option
does this (an example of checking the resulting log follows the list):
    # valgrind --leak-check=full --log-file=<path-to-valgrind-log> glusterfs --volfile-id=xyz --volfile-server=abc -N /mnt/glfs

2. Once you observe a considerable leak in memory, please take a statedump of
glusterfs:

  # gluster volume statedump <volname>

and attach the reports to the bug (see the note after this list on where the
dump files are written).
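
As a rough sketch of how to inspect the client-side results (assuming the
same <path-to-valgrind-log> and /mnt/glfs used above), after reproducing the
leak you can unmount the volume so valgrind prints its leak summary on exit,
and then look for the "definitely lost" entries:

    # umount /mnt/glfs
    # grep -A2 "definitely lost" <path-to-valgrind-log>

For the brick and nfs processes started through glusterd, the corresponding
valgrind logs should be in the standard glusterfs log directory (typically
/var/log/glusterfs on a default install).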
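
A note on where the statedump lands, assuming a default configuration (i.e.
server.statedump-path has not been changed): the dump files are written on the
server nodes under /var/run/gluster, with the pid and a timestamp in the file
name, so something like the following should show them:

    # ls /var/run/gluster/*.dump.*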

regards,
Raghavendra.

----- Original Message -----
> From: "Anders Blomdell" <anders.blomd...@control.lth.se>
> To: "Gluster Devel" <gluster-devel@gluster.org>
> Sent: Friday, August 1, 2014 12:01:15 AM
> Subject: [Gluster-devel] Monotonically increasing memory
> 
> During rsync of 350000 files, memory consumption of glusterfs
> rose to 12 GB (after approx 14 hours), I take it that this is a
> bug I should try to track down?
> 
> Version is 3.7dev as of tuesday...
> 
> /Anders
> 
> --
> Anders Blomdell                  Email: anders.blomd...@control.lth.se
> Department of Automatic Control
> Lund University                  Phone:    +46 46 222 4625
> P.O. Box 118                     Fax:      +46 46 138118
> SE-221 00 Lund, Sweden
> 
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
> 
