On 03/27/2014 03:29 AM, Giuseppe Ragusa wrote:
Hi all,
I'm running glusterfs-3.5.20140324.4465475-1.autobuild (from the published
nightly RPM packages) on CentOS 6.5 as the storage solution for oVirt 3.4.0
(also its latest snapshot) on 2 physical nodes (12 GiB RAM) with
self-hosted-engine.

I suppose this should be a good "selling point" for Gluster/oVirt, and I have
solved almost all my oVirt problems, but one remains: the Gluster-provided NFS
server (used as a storage domain for the oVirt self-hosted-engine) grows to
about 8 GiB of RAM usage within roughly one day of a reboot, even with no
actual usage (only the oVirt Engine VM is running on one node, with no other
operations performed on it or on the rest of the cluster). It has even died
before, when I put it under cgroup memory restrictions.

I have seen similar reports on the users and devel mailing lists, and I'm
wondering how I can help in diagnosing this, and/or whether it would be better
to rely on the latest 3.4.x Gluster (though it seems the stable line has had
its share of memory leaks too...).
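If periodic memory statedumps from the NFS server process would be useful, I
can collect those as well; roughly as follows (a sketch, assuming this build
accepts the nfs target for statedump and writes the dumps to the default
/var/run/gluster directory):

  # ask glusterd to dump the NFS server's in-memory state
  gluster volume statedump <volname> nfs

  # the resulting dump files can then be inspected or attached to a bug report
  ls -lt /var/run/gluster/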


Can you please check whether turning off DRC through:

volume set <volname> nfs.drc off

helps?
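
(From the shell on one of the nodes that would be the following; the change
should then appear under "Options Reconfigured" in volume info:)

  # disable the NFS duplicate request cache for the volume backing the storage domain
  gluster volume set <volname> nfs.drc off

  # confirm the option has been applied
  gluster volume info <volname>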

-Vijay


