Hi Kaleb,
I went through the logs and I don't see anything significant. What is the test case that reproduces the memory leak? Maybe I can try it on my setup and get back to you.

Pranith
On 10/15/2014 08:57 PM, Kaleb S. KEITHLEY wrote:
As mentioned in the Gluster Community Meeting on IRC today, here are the glusterfs client-side valgrind logs. By 'glusterfs client side' I specifically mean the glusterfs fuse bridge daemon on the client.

http://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/valgrind-3.4-memleak/
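For reference, a minimal sketch of how logs like these can be captured, by running the fuse bridge in the foreground under valgrind. The volume name (testvol), server (server1), and the particular valgrind options are assumptions for illustration, not necessarily what produced the logs above:

    # run the fuse bridge in the foreground (-N) under valgrind,
    # writing the full leak report to a per-run log file
    valgrind --leak-check=full --show-reachable=yes \
             --log-file=/tmp/glusterfs.fuse.out \
             glusterfs -N --volfile-server=server1 \
                       --volfile-id=testvol /mnt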

The basic test is, simply, mount the gluster volume, make a deep directory path on the volume, e.g. /mnt/a/b/c/d/e/f/g, do three or five `ls -R /mnt`, and unmount.
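In shell terms, the basic test looks roughly like this (server1 and testvol are placeholders for the actual server and volume names):

    mount -t glusterfs server1:/testvol /mnt   # mount the gluster volume
    mkdir -p /mnt/a/b/c/d/e/f/g                # make a deep directory path
    for i in 1 2 3; do                         # three (or five) recursive walks
        ls -R /mnt > /dev/null
    done
    umount /mnt                                # unmount when done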

tmp[35]/glusterfs.fuse.out (i.e. tmp3/ and tmp5/) are the logs from the three and five `ls -R` runs, respectively.

tmp[35]+/glusterfs.fuse.out are the logs from the same tests, but with the directories populated with a few files.
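A sketch of how the populated variant might be set up; the number of files per directory here is an assumption:

    # drop a few small files into every directory of the tree
    find /mnt/a -type d | while read -r d; do
        touch "$d/f1" "$d/f2" "$d/f3"
    done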

Notice that, e.g., both tmp[35]/glusterfs.fuse.out show approximately the same amount of {definitely,indirectly,possibly} lost memory. That is, the number of `ls -R` invocations did not affect how much memory was leaked.
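Those categories come from the LEAK SUMMARY block valgrind prints at the end of each log, which has this shape (N and M stand in for the actual byte and block counts in the logs; they are not values from these runs):

    ==PID== LEAK SUMMARY:
    ==PID==    definitely lost: N bytes in M blocks
    ==PID==    indirectly lost: N bytes in M blocks
    ==PID==      possibly lost: N bytes in M blocks
    ==PID==    still reachable: N bytes in M blocks
    ==PID==         suppressed: N bytes in M blocks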

The same is true for tmp[35]+/glusterfs.fuse.out: more `ls -R` runs did not affect the amount of memory leaked. _But_ notice that when the directories are populated with files, more memory is leaked, across the board, than when the directories are empty.

Make sense? Any questions, don't hesitate to ask.

Thanks,

