OK, given GlusterFS v3.7.6 with the following patches:
===
Kaleb S KEITHLEY (1):
fuse: use-after-free fix in fuse-bridge, revisited
Pranith Kumar K (1):
mount/fuse: Fix use-after-free crash
Soumya Koduri (3):
gfapi: Fix inode nlookup counts
inode: Retire the inodes from
Here are the results of the "rsync" test. I've got two volumes, source and target,
and I'm rsyncing multiple files from one volume to the other.
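Roughly, the test amounts to the following (mount points are placeholders):
===
# both GlusterFS volumes are FUSE-mounted; paths are placeholders
rsync -a /mnt/source-volume/ /mnt/target-volume/
===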
Source volume:
===
root 22259 3.5 1.5 1204200 771004 ? Ssl Jan23 109:42 /usr/sbin/glusterfs --volfile-server=glusterfs.example.com
Thanks for all your tests and time, it looks promising :)
Regards,
Mathieu CHATEAU
http://www.lotp.fr
2016-01-23 22:30 GMT+01:00 Oleksandr Natalenko :
> OK, now I'm re-performing tests with rsync + GlusterFS v3.7.6 + the
> following
> patches:
>
> ===
> Kaleb S
Also, I've repeated the same "find" test, but with the glusterfs process
launched under valgrind. Here is the valgrind output:
https://gist.github.com/097afb01ebb2c5e9e78d
On Sunday, 24 January 2016, 09:33:00 EET Mathieu Chateau wrote:
> Thanks for all your tests and time, it looks promising
BTW, am I the only one who notices that
max_size=4294965480
is almost 2^32? Could that be an integer overflow?
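For comparison, 2^32 = 4294967296, so the reported value is only 1816 below it:
===
$ echo $(( (1 << 32) - 4294965480 ))
1816
===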
On Sunday, 24 January 2016, 13:23:55 EET Oleksandr Natalenko wrote:
> The leak definitely remains. I did "find /mnt/volume -type d" over a GlusterFS
> volume, with the mentioned patches applied and
The leak definitely remains. I did "find /mnt/volume -type d" over a GlusterFS
volume, with the mentioned patches applied and without the "kernel notifier loop
terminated" message, but the "glusterfs" process had consumed ~4 GiB of RAM by the
time "find" finished.
Here is the statedump:
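(For anyone reproducing this: a client-side statedump can be obtained by sending
SIGUSR1 to the glusterfs process; by default the dump should appear under
/var/run/gluster.)
===
# placeholder PID lookup; the dump file lands under /var/run/gluster/ by default
kill -USR1 $(pidof glusterfs)
===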
OK, now I'm re-performing tests with rsync + GlusterFS v3.7.6 + the following
patches:
===
Kaleb S KEITHLEY (1):
fuse: use-after-free fix in fuse-bridge, revisited
Pranith Kumar K (1):
mount/fuse: Fix use-after-free crash
Soumya Koduri (3):
gfapi: Fix inode nlookup counts
OK, it compiles and runs well now, but it still leaks. I will try to load the volume
with rsync.
On Thursday, 21 January 2016, 20:40:45 EET Kaleb KEITHLEY wrote:
> On 01/21/2016 06:59 PM, Oleksandr Natalenko wrote:
> > I see extra GF_FREE (node); added with two patches:
> >
> > ===
> > $ git diff HEAD~2
On 01/22/2016 12:15 PM, Oleksandr Natalenko wrote:
> OK, compiles and runs well now,
I presume by this you mean you're not seeing the "kernel notifier loop
terminated" error in your logs.
> but still leaks.
Hmmm. My system is not leaking. For the last 24 hours the RSZ and VSZ have been
stable:
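(A simple way to keep an eye on that; the PID is a placeholder:)
===
# sample the FUSE client's VSZ/RSS every 10 minutes; 12345 is a placeholder PID
while sleep 600; do date; ps -o vsz=,rss= -p 12345; done >> glusterfs-mem.log
===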
On Friday, 22 January 2016, 12:32:01 EET Kaleb S. KEITHLEY wrote:
> I presume by this you mean you're not seeing the "kernel notifier loop
> terminated" error in your logs.
Correct, but only with simple traversal. I have yet to test it under rsync.
> Hmmm. My system is not leaking. Last 24 hours the
On 01/22/2016 12:43 PM, Oleksandr Natalenko wrote:
> On Friday, 22 January 2016, 12:32:01 EET Kaleb S. KEITHLEY wrote:
>> I presume by this you mean you're not seeing the "kernel notifier loop
>> terminated" error in your logs.
>
> Correct, but only with simple traversal. I have yet to test it under
I perform the tests using 1) rsync (a massive copy of millions of files) and 2)
find (simple tree traversal).
To check whether the memory leak happens, I use the find tool. I've performed two
traversals (with and without fopen-keep-cache=off) with a remount between them, but
I didn't encounter the "kernel notifier loop
If this message appears way before the volume is unmounted, can you try
to start the volume manually using this command and repeat the tests?
glusterfs --fopen-keep-cache=off --volfile-server=
--volfile-id=/
This will prevent invalidation requests from being sent to the kernel, so
there
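(The server and volume name are stripped above; with placeholder values the full
command would look roughly like this:)
===
# placeholder server, volume name and mount point
glusterfs --fopen-keep-cache=off \
    --volfile-server=glusterfs.example.com \
    --volfile-id=somevolume \
    /mnt/volume
===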
I see extra GF_FREE (node); added with two patches:
===
$ git diff HEAD~2 | gist
https://gist.github.com/9524fa2054cc48278ea8
===
Is that intentional? I guess I'm facing a double-free issue.
On Thursday, 21 January 2016, 17:29:53 EET Kaleb KEITHLEY wrote:
> On 01/20/2016 04:08 AM, Oleksandr Natalenko
With the proposed patches I get the following assertion while copying files to a
GlusterFS volume:
===
glusterfs: mem-pool.c:305: __gf_free: Assertion `0xCAFEBABE == header->magic' failed.
Program received signal SIGABRT, Aborted.
[Switching to Thread 0x7fffe9ffb700 (LWP 12635)]
On 01/20/2016 04:08 AM, Oleksandr Natalenko wrote:
> Yes, there are a couple of messages like this in my logs too (I guess one
> message per remount):
>
> ===
> [2016-01-18 23:42:08.742447] I [fuse-bridge.c:3875:notify_kernel_loop]
> 0-glusterfs-fuse: kernel notifier loop terminated
> ===
>
On 01/21/2016 06:59 PM, Oleksandr Natalenko wrote:
> I see extra GF_FREE (node); added with two patches:
>
> ===
> $ git diff HEAD~2 | gist
> https://gist.github.com/9524fa2054cc48278ea8
> ===
>
> Is that intentional? I guess I'm facing a double-free issue.
>
I presume you're referring to the
I'm seeing a similar problem with 3.7.6.
This latest statedump contains a lot of gf_fuse_mt_invalidate_node_t
objects in fuse. Looking at the code, I see they are used to send
invalidations to kernel FUSE; however, this is done in a separate thread
that writes a log message when it exits.
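A quick way to see how many of those records a statedump contains (the dump path
is just an example):
===
# count invalidate-node allocation records in a client statedump (example path)
grep -c gf_fuse_mt_invalidate_node_t /var/run/gluster/glusterdump.*.dump.*
===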
Yes, there are a couple of messages like this in my logs too (I guess one
message per remount):
===
[2016-01-18 23:42:08.742447] I [fuse-bridge.c:3875:notify_kernel_loop]
0-glusterfs-fuse: kernel notifier loop terminated
===
On Wednesday, 20 January 2016, 09:51:23 EET Xavier Hernandez wrote:
On 01/03/2016 09:23 AM, Oleksandr Natalenko wrote:
Another Valgrind run.
I did the following:
===
valgrind --leak-check=full --show-leak-kinds=all \
    --log-file="valgrind_fuse.log" \
    /usr/bin/glusterfs -N --volfile-server=some.server.com \
    --volfile-id=somevolume /mnt/volume
===
then cd to
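(Once the client exits after umount, the leak totals are at the end of the log file
named above:)
===
# valgrind writes the leak totals at the end of the log
grep -A 6 "LEAK SUMMARY" valgrind_fuse.log
===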
Here is another Valgrind log of a similar scenario, but with drop_caches before
umount:
https://gist.github.com/06997ecc8c7bce83aec1
Also, I've tried to drop caches on a production VM with a GlusterFS volume that has
been mounted and leaking memory for several weeks, with absolutely no effect:
===
root 945 0.1
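(For reference, by "drop caches" I mean the usual kernel knob, run as root:)
===
sync
echo 3 > /proc/sys/vm/drop_caches   # 3 = free page cache, dentries and inodes
===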
On 12/26/2015 04:45 AM, Oleksandr Natalenko wrote:
Also, here is valgrind output from our custom tool, which does GlusterFS volume
traversal (with simple stats) just like the find tool. NFS-Ganesha is not used
in this case.
https://gist.github.com/e4602a50d3c98f7a2766
Hi Oleksandr,
I went
Here are two consecutive statedumps of the memory usage of the brick in question
[1] [2]. The glusterfs client process went from ~630 MB to ~1350 MB of memory
usage in less than one hour.
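A rough way to compare the two dumps is to diff their per-type allocation counters
(dump file names are placeholders):
===
# compare num_allocs counters between two consecutive statedumps
diff <(grep -E "usage-type|num_allocs" dump.1) <(grep -E "usage-type|num_allocs" dump.2)
===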
Volume options:
===
cluster.lookup-optimize: on
cluster.readdir-optimize: on
client.event-threads: 4
Hi Oleksandr,
You are right. The description should have said that it is the limit on the
number of inodes in the LRU list of the inode cache. I have sent a patch
for that.
http://review.gluster.org/#/c/12242/
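(Assuming the option being discussed is network.inode-lru-limit, it can be tuned
per volume, e.g.:)
===
# hypothetical volume name; 16384 is just an example limit
gluster volume set somevolume network.inode-lru-limit 16384
===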
Regards,
Raghavendra Bhat
On Thu, Sep 24, 2015 at 1:44 PM, Oleksandr Natalenko <