Hi Abhishek,
Please use statedumps taken at intervals to determine where the memory is
increasing. See [1] for details.
Regards,
Nithya
[1] https://docs.gluster.org/en/latest/Troubleshooting/statedump/
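The interval-statedump advice above can be sketched in code. Below is a minimal Python sketch (not an official Gluster tool) that diffs the `[... usage-type <type> memusage]` / `size=` sections documented in [1] between two dumps; the section layout is taken from the statedump docs, and the helper names are my own:

```python
import re

def parse_memusage(dump_text):
    """Map each usage-type section of a statedump to its size= value (bytes)."""
    sizes, current = {}, None
    for line in dump_text.splitlines():
        m = re.match(r'\[.+ - usage-type (\S+) memusage\]', line)
        if m:
            current = m.group(1)
        elif current and line.startswith('size='):
            sizes[current] = int(line.split('=', 1)[1])
            current = None
    return sizes

def growing_types(dump_a, dump_b):
    """Usage types whose size increased between two statedumps, largest delta first."""
    a, b = parse_memusage(dump_a), parse_memusage(dump_b)
    grown = {t: b[t] - a.get(t, 0) for t in b if b[t] > a.get(t, 0)}
    return sorted(grown.items(), key=lambda kv: -kv[1])
```

Feeding it two dumps taken, say, an hour apart would rank the allocation types to investigate first.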
On Fri, 7 Jun 2019 at 08:13, ABHISHEK PALIWAL wrote:
Hi Nithya,
We have a setup where a file is copied to, and then deleted from, the
gluster mount point in order to keep the latest file. We noticed that this
causes some memory increase in the glusterfsd process.
We are using valgrind to find the memory leak, but it didn't give any help.
That's why we contacted
Hi Abhishek,
I am still not clear as to the purpose of the tests. Can you clarify why
you are using valgrind and why you think there is a memory leak?
Regards,
Nithya
On Thu, 6 Jun 2019 at 12:09, ABHISHEK PALIWAL wrote:
> Hi Nithya,
>
> Here are the setup details and the test we are doing, as
Hi,
Writing to a volume should not affect glusterd. The stack you have shown in
the valgrind output looks like the memory used to initialise the structures
glusterd uses; it will be freed only when glusterd is stopped.
Can you provide more details to what it is you are trying to test?
Regards,
Nithya
On Tue,
Hi Team,
Please respond on the issue which I raised.
Regards,
Abhishek
On Fri, May 17, 2019 at 2:46 PM ABHISHEK PALIWAL wrote:
Anyone please reply
On Thu, May 16, 2019, 10:49 ABHISHEK PALIWAL wrote:
Hi Team,
I uploaded some valgrind logs from my gluster 5.4 setup. It writes to
the volume every 15 minutes. I stopped glusterd and then copied the logs
away. The test ran for some simulated days. They are zipped in
valgrind-54.zip.
There is lots of info in valgrind-2730.log. Lots of possibly
Here are more RAM usage stats and a statedump of a GlusterFS mount
approaching yet another OOM:
===
root 32495 1.4 88.3 4943868 1697316 ? Ssl Jan13 129:18 /usr/sbin/glusterfs --volfile-server=server.example.com --volfile-id=volume /mnt/volume
===
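Growth like the 88.3% RSS above is easy to track over time without valgrind by sampling the process RSS. A small Python sketch (the `VmRSS:` parsing follows the standard /proc/&lt;pid&gt;/status format; the pid, interval, and function names are hypothetical):

```python
import time

def rss_kib(status_text):
    """Extract VmRSS in KiB from the contents of /proc/<pid>/status."""
    for line in status_text.splitlines():
        if line.startswith('VmRSS:'):
            return int(line.split()[1])  # e.g. 'VmRSS:  1697316 kB'
    return None

def watch(pid, interval=60, samples=5):
    """Print the RSS of a process at fixed intervals to show the growth trend."""
    for _ in range(samples):
        with open('/proc/%d/status' % pid) as f:
            print(rss_kib(f.read()), 'KiB')
        time.sleep(interval)
```

Pairing each sample with a statedump makes it possible to correlate RSS jumps with specific allocation types.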
On 01/12/2016 12:46 PM, Oleksandr Natalenko wrote:
Just in case, here is Valgrind output on FUSE client with 3.7.6 +
API-related patches we discussed before:
https://gist.github.com/cd6605ca19734c1496a4
Thanks for sharing the results. I made changes to fix one leak reported
there wrt '
On 01/13/2016 04:08 PM, Soumya Koduri wrote:
On 01/12/2016 12:17 PM, Mathieu Chateau wrote:
I tried like suggested:
echo 3 > /proc/sys/vm/drop_caches
sync
It lowered usage a bit:
before:
[Inline image 2]
after:
[Inline image 1]
Thanks Mathieu. There is a drop in memory usage after dropping the vfs
cache, but it doesn't seem
I've applied the client_cbk_cache_invalidation leak patch, and here are the
results.
Launch:
===
valgrind --leak-check=full --show-leak-kinds=all
--log-file="valgrind_fuse.log" /usr/bin/glusterfs -N
--volfile-server=server.example.com --volfile-id=somevolume
/mnt/somevolume
find
I have made changes to fix the lookup leak in a different way (as
discussed with Pranith) and uploaded them in the latest patch set #4
- http://review.gluster.org/#/c/13096/
Please check if it resolves the mem leak and hopefully doesn't result in
any assertion :)
Thanks,
Soumya
On
On 01/11/2016 05:11 PM, Oleksandr Natalenko wrote:
Brief test shows that Ganesha stopped leaking and crashing, so it seems
to be good for me.
Thanks for checking.
Nevertheless, back to my original question: what about FUSE client? It
is still leaking despite all the fixes applied. Should
Hello,
I also experience high memory usage on my gluster clients. Sample:
[image: Inline image 1]
Can I help in testing/debugging ?
Regards,
Mathieu CHATEAU
http://www.lotp.fr
2016-01-12 7:24 GMT+01:00 Soumya Koduri:
I tried like suggested:
echo 3 > /proc/sys/vm/drop_caches
sync
It lowered usage a bit:
before:
[image: Inline image 2]
after:
[image: Inline image 1]
Regards,
Mathieu CHATEAU
http://www.lotp.fr
2016-01-12 7:34 GMT+01:00 Mathieu Chateau:
Just in case, here is Valgrind output on FUSE client with 3.7.6 +
API-related patches we discussed before:
https://gist.github.com/cd6605ca19734c1496a4
12.01.2016 08:24, Soumya Koduri wrote:
For the FUSE client, I tried vfs drop_caches as suggested by Vijay in an
earlier mail. Though all the
Brief test shows that Ganesha stopped leaking and crashing, so it seems
to be good for me.
Nevertheless, back to my original question: what about FUSE client? It
is still leaking despite all the fixes applied. Should it be considered
another issue?
11.01.2016 12:26, Soumya Koduri wrote:
OK, I've patched GlusterFS v3.7.6 with 43570a01 and 5cffb56b (the most recent
revisions) and NFS-Ganesha v2.3.0 with 8685abfc (also the most recent revision).
On traversing a GlusterFS volume with many files in one folder via an NFS
mount, I get an assertion:
===
ganesha.nfsd: inode.c:716:
OK, here is the valgrind log of patched Ganesha (I took the recent version of
your patchset, 8685abfc6d) with Entries_HWMARK set to 500.
https://gist.github.com/5397c152a259b9600af0
I see no huge runtime leaks now. However, I've repeated this test with
another volume in replica and got the following
I tried to debug the inode* related leaks and saw some improvements
after applying the patches below when I ran the same test (but with a
smaller load). Could you please apply those patches & confirm the same?
a) http://review.gluster.org/13125
This will fix the inodes & their ctx related leaks
Correct, I used a FUSE mount. Shouldn't gfapi be used by the FUSE mount
helper (/usr/bin/glusterfs)?
On Tuesday, 5 January 2016, 22:52:25 EET Soumya Koduri wrote:
> On 01/05/2016 05:56 PM, Oleksandr Natalenko wrote:
> > Unfortunately, both patches didn't make any difference for me.
> >
> > I've
On 01/05/2016 05:56 PM, Oleksandr Natalenko wrote:
Unfortunately, both patches didn't make any difference for me.
I've patched 3.7.6 with both patches, recompiled, and installed the patched
GlusterFS package on the client side and mounted a volume with ~2M files.
Then I performed the usual tree traversal
On 01/06/2016 03:53 AM, Oleksandr Natalenko wrote:
OK, I've repeated the same traversing test with patched GlusterFS API, and
here is new Valgrind log:
https://gist.github.com/17ecb16a11c9aed957f5
A FUSE mount doesn't use the gfapi helper. Does your GlusterFS API
application above call
OK, I've repeated the same traversing test with patched GlusterFS API, and
here is new Valgrind log:
https://gist.github.com/17ecb16a11c9aed957f5
Still leaks.
On Tuesday, 5 January 2016, 22:52:25 EET Soumya Koduri wrote:
> On 01/05/2016 05:56 PM, Oleksandr Natalenko wrote:
> > Unfortunately,
Unfortunately, both patches didn't make any difference for me.
I've patched 3.7.6 with both patches, recompiled, and installed the patched
GlusterFS package on the client side and mounted a volume with ~2M files.
Then I performed the usual tree traversal with a simple "find".
Memory RES value went from
Thanks for sharing the results. Shall look at the leaks and update.
-Soumya
On 12/26/2015 04:45 AM, Oleksandr Natalenko wrote:
Also, here is valgrind output with our custom tool, which traverses a
GlusterFS volume (with simple stats) just like the find tool. In this case
NFS-Ganesha is not
1. test with Cache_Size = 256 and Entries_HWMark = 4096
Before find . -type f:
root 3120 0.6 11.0 879120 208408 ? Ssl 17:39 0:00 /usr/bin/ganesha.nfsd -L /var/log/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT
After:
root 3120 11.4 24.3 1170076 458168 ? Ssl
On 12/25/2015 08:56 PM, Oleksandr Natalenko wrote:
What units Cache_Size is measured in? Bytes?
It's actually (Cache_Size * sizeof_ptr) bytes. If possible, could you
please run the ganesha process under valgrind? It will help in detecting leaks.
Thanks,
Soumya
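Taking Soumya's formula above at face value, the byte footprint of the table itself is a one-line calculation; a hypothetical helper (the default sizeof_ptr = 8 assumes a 64-bit build):

```python
def cache_table_bytes(cache_size, sizeof_ptr=8):
    """Per the explanation above, Cache_Size counts entries, not bytes:
    the table itself costs Cache_Size * sizeof_ptr bytes (8 per pointer
    on a 64-bit build)."""
    return cache_size * sizeof_ptr
```

So Cache_Size = 256 is only 2048 bytes of table, and even 4096 is 32 KiB; the cached inode entries themselves, not the table, dominate memory usage.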
25.12.2015 16:58, Soumya Koduri
What units Cache_Size is measured in? Bytes?
25.12.2015 16:58, Soumya Koduri wrote:
On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote:
Another addition: it seems to be a GlusterFS API library memory leak,
because NFS-Ganesha also consumes a huge amount of memory while doing an
ordinary "find . -type f" via NFSv4.2 on a remote client. Here is the
memory usage:
===
root 5416 34.2 78.5
Also, here is valgrind output with our custom tool, which traverses a
GlusterFS volume (with simple stats) just like the find tool. In this case
NFS-Ganesha is not used.
https://gist.github.com/e4602a50d3c98f7a2766
One may see GlusterFS-related leaks here as well.
On Friday, 25 December 2015
OK, I've rebuilt GlusterFS v3.7.6 with debug enabled, as well as NFS-Ganesha
with debug enabled (and the libc allocator).
Here are my test steps:
1. launch nfs-ganesha:
valgrind --leak-check=full --show-leak-kinds=all --log-file="valgrind.log" /opt/nfs-ganesha/bin/ganesha.nfsd -F -L
The issue is still present in 3.7.6. Any suggestions?
24.09.2015 10:14, Oleksandr Natalenko wrote:
In our GlusterFS deployment we've encountered something like a memory
leak in the GlusterFS FUSE client.
We use a replicated (×2) GlusterFS volume to store mail (exim+dovecot,
maildir format). Here is inode
Another addition: it seems to be a GlusterFS API library memory leak,
because NFS-Ganesha also consumes a huge amount of memory while doing an
ordinary "find . -type f" via NFSv4.2 on a remote client. Here is the
memory usage:
===
root 5416 34.2 78.5 2047176 1480552 ? Ssl 12:02 117:54
Google "vdsm memory leak"; it's been discussed on the list last year and
earlier this year...
On Thu, Sep 24, 2015 at 10:14 AM, Oleksandr Natalenko <oleksa...@natalenko.name> wrote:
Oh, my bad...
Could it be this one?
https://bugzilla.redhat.com/show_bug.cgi?id=1126831
Anyway, on oVirt+Gluster I experienced similar behavior...
On Thu, Sep 24, 2015 at 10:32 AM, Oleksandr Natalenko <oleksa...@natalenko.name> wrote:
> We use bare GlusterFS installation with no oVirt involved.
In our GlusterFS deployment we've encountered something like a memory leak
in the GlusterFS FUSE client.
We use a replicated (×2) GlusterFS volume to store mail (exim+dovecot,
maildir format). Here are inode stats for both bricks and the mountpoint:
===
Brick 1 (Server 1):
Filesystem
We use bare GlusterFS installation with no oVirt involved.
24.09.2015 10:29, Gabi C wrote:
Google "vdsm memory leak"; it's been discussed on the list last year and
earlier this year...
___
Gluster-users mailing list
Gluster-users@gluster.org
I've checked the statedump of the volume in question and haven't found lots
of iobufs as mentioned in that bug report.
However, I've noticed that there are lots of LRU records like this:
===
[conn.1.bound_xl./bricks/r6sdLV07_vd0_mail/mail.lru.1]
gfid=c4b29310-a19d-451b-8dd1-b3ac2d86b595
nlookup=1
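A quick way to quantify "lots of LRU records" is to count those section headers per bound_xl path across statedumps. A Python sketch assuming the [conn.N.bound_xl.&lt;path&gt;.lru.M] naming shown above (the helper name and sample data are mine):

```python
import re
from collections import Counter

def lru_counts(dump_text):
    """Count [conn.N.bound_xl.<path>.lru.M] sections per bound_xl path.
    A count that keeps growing across successive statedumps suggests
    the inode LRU list is not being pruned."""
    counts = Counter()
    for line in dump_text.splitlines():
        m = re.match(r'\[conn\.\d+\.bound_xl\.(.+)\.lru\.\d+\]', line)
        if m:
            counts[m.group(1)] += 1
    return counts
```

Comparing the counts from two dumps taken hours apart would show whether the LRU list is the growing component.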
GlusterFS, RedHat Inc.
- Original Message -
From: Philip Poten philip.po...@gmail.com
To: Rajesh Amaravathi raj...@redhat.com
Cc: gluster-users@gluster.org
Sent: Thursday, June 21, 2012 1:03:53 PM
Subject: Re: [Gluster-users] Memory leak with glusterfs NFS on 3.2.6
Hi Rajesh,
We are handling
RedHat Inc.
From: Xavier Normand xavier.norm...@gmail.com
To: Philip Poten philip.po...@gmail.com
Cc: gluster-users@gluster.org
Sent: Tuesday, June 12, 2012 6:32:41 PM
Subject: Re: [Gluster-users] Memory leak with glusterfs NFS on 3.2.6
Hi Philip,
I do have
-
From: Philip Poten philip.po...@gmail.com
To: gluster-users@gluster.org
Sent: Tuesday, June 12, 2012 1:30:17 AM
Subject: [Gluster-users] Memory leak with glusterfs NFS on 3.2.6
Hi,
we're running a distributed-replicated setup for our images, and while
we use a caching proxy for the hotset, quite
On 06/12/2012 05:15 AM, gluster-users-requ...@gluster.org wrote:
Date: Mon, 11 Jun 2012 22:00:17 +0200
From: Philip Poten philip.po...@gmail.com
Subject: [Gluster-users] Memory leak with glusterfs NFS on 3.2.6
To: gluster-users@gluster.org
Message-ID
2012/6/12 Dan Bretherton d.a.brether...@reading.ac.uk:
I wonder if this memory leak is the cause of the NFS performance degradation
I reported in April.
That's probable, since the performance does go down for us too when
the glusterfs process reaches a large percentage of RAM. My initial
guess
Hi Philip,
I do have about the same problem that you describe. Here is my setup:
Gluster: two bricks running gluster 3.2.6
Clients: 4 clients running the native gluster FUSE client, 2 clients running the NFS client.
My NFS clients are not doing that much traffic, but I was able to see after a couple of days that
Hi,
we're running a distributed-replicated setup for our images, and while
we use a caching proxy for the hotset, quite a few requests land on
glusterfs (3.2.6 on squeeze). Since the glusterfs FUSE client experiences
regular hangs which require reboots (I couldn't find a solution to that
yet), we run