OK, now I'm re-running the tests with rsync + GlusterFS v3.7.6 + the following 
patches:

===
Kaleb S KEITHLEY (1):
      fuse: use-after-free fix in fuse-bridge, revisited

Pranith Kumar K (1):
      mount/fuse: Fix use-after-free crash

Soumya Koduri (3):
      gfapi: Fix inode nlookup counts
      inode: Retire the inodes from the lru list in inode_table_destroy
      upcall: free the xdr* allocations
===

I ran rsync from one GlusterFS volume to another. While memory usage started 
under 100 MiB, it stalled at around 600 MiB for the source volume and does not 
grow further. For the target volume it is ~730 MiB, so I'm going to do several 
rsync rounds to see whether it grows any more (with no patches, bare 3.7.6 
could consume more than 20 GiB).
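For reference, a minimal sketch of how such rounds can be driven while sampling 
the client's resident set size between runs (the mount paths and log file are 
hypothetical; "glusterfs" is the FUSE client process name):

```shell
#!/bin/sh
# sample_rss NAME — print the total resident set size (KiB) of all processes
# whose command name matches NAME. Prints 0 if no process matches.
sample_rss() {
    ps -C "$1" -o rss= | awk '{sum += $1} END {print sum + 0}'
}

# Illustrative test loop: one rsync round per iteration, RSS sample after each.
# for round in 1 2 3; do
#     rsync -a /mnt/src-vol/ /mnt/dst-vol/      # paths are hypothetical
#     echo "round $round: $(sample_rss glusterfs) KiB" >> /tmp/glusterfs-rss.log
# done
```

If the RSS keeps climbing across rounds on an otherwise idle mount, that points 
at a client-side leak rather than normal cache growth.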

No "kernel notifier loop terminated" message so far on either volume.

I will report more in several days. I hope the current patches will be 
incorporated into 3.7.7.

On Friday, January 22, 2016, 12:53:36 EET Kaleb S. KEITHLEY wrote:
> On 01/22/2016 12:43 PM, Oleksandr Natalenko wrote:
> > On Friday, January 22, 2016, 12:32:01 EET Kaleb S. KEITHLEY wrote:
> >> I presume by this you mean you're not seeing the "kernel notifier loop
> >> terminated" error in your logs.
> > 
> > Correct, but only with simple traversing. Have to test under rsync.
> 
> Without the patch I'd get "kernel notifier loop terminated" within a few
> minutes of starting I/O.  With the patch I haven't seen it in 24 hours
> of beating on it.
> 
> >> Hmmm.  My system is not leaking. Last 24 hours the RSZ and VSZ are
> >> stable:
> >> http://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/client.out
> > 
> > What ops do you perform on mounted volume? Read, write, stat? Is that
> > 3.7.6 + patches?
> 
> I'm running an internally developed I/O load generator written by a guy
> on our perf team.
> 
> It does create, write, read, rename, stat, delete, and more.


_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
