On Mon, Feb 1, 2016 at 2:24 PM, Soumya Koduri wrote:
>
>
> On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote:
>
>> Wait. It seems to be my bad.
>>
>> Before unmounting I do drop_caches (2), and glusterfs process CPU usage
>> goes to 100% for a while. I haven't waited for it to drop to 0%, and
>>
Here is the report on the DHT-related leaks patch ("rsync" test).
RAM usage before drop_caches: [1]
Statedump before drop_caches: [2]
RAM usage after drop_caches: [3]
Statedump after drop_caches: [4]
Statedumps diff: [5]
Valgrind output: [6]
[1] https://gist.github.com/ca8d56834c14c4bfa98e
[2] htt
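A minimal sketch of how statedumps like the ones linked above can be produced for the FUSE client; the mount point and dump directory below are assumptions:
===
# Find the glusterfs client process serving the mount
PID=$(pgrep -f 'glusterfs.*/mnt/volume' | head -n1)

# SIGUSR1 asks the client to write a statedump, normally under /var/run/gluster
kill -USR1 "$PID"
ls -lt /var/run/gluster | head
===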
- Original Message -
> From: "Serkan Çoban"
>
> Will those patches be available in 3.7.7?
3.7.7 was released yesterday, so no.
But 3.7.7 has another issue; there won't be packages available from
download.gluster.org.
Pranith has already indicated that we will fix the other issue and
On 02.02.2016 10:07, Xavier Hernandez wrote:
Could it be memory used by Valgrind itself to track glusterfs' memory
usage?
Could you repeat the test without Valgrind and see if the memory usage
after dropping caches returns to low values?
Yup. Here are the results:
===
pf@server:~ » ps aux |
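A sketch of this kind of memory check with Valgrind out of the picture; the process name and output fields are assumptions:
===
# Resident set size (RSS, in KiB) of every glusterfs client process
ps -C glusterfs -o pid,rss,vsz,etime,args

# Re-read it periodically while the test is running
watch -n 60 'ps -C glusterfs -o pid,rss,vsz'
===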
Hi Oleksandr,
On 01/02/16 19:28, Oleksandr Natalenko wrote:
Please take a look at updated test results.
Test: find /mnt/volume -type d
RAM usage after "find" finishes: ~ 10.8G (see "ps" output [1]).
Statedump after "find" finishes: [2].
Then I did drop_caches, and RAM usage dropped to ~4.7G
Please take a look at updated test results.
Test: find /mnt/volume -type d
RAM usage after "find" finishes: ~ 10.8G (see "ps" output [1]).
Statedump after "find" finishes: [2].
Then I did drop_caches, and RAM usage dropped to ~4.7G [3].
Statedump after drop_caches: [4].
Here is the diff between the statedumps.
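A sketch of one way such a before/after comparison can be produced; the file names are placeholders (real dumps are usually named glusterdump.<pid>.dump.<timestamp>):
===
# Keep only the allocation counters that changed between the two dumps
diff -u statedump.before statedump.after | grep -E '^[+-].*(size=|num_allocs=)'
===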
Will those patches be available in 3.7.7?
On Mon, Feb 1, 2016 at 1:28 PM, Soumya Koduri wrote:
>
>
> On 02/01/2016 02:48 PM, Xavier Hernandez wrote:
>>
>> Hi,
>>
>> On 01/02/16 09:54, Soumya Koduri wrote:
>>>
>>>
>>>
>>> On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote:
Wait. It seems
On 02/01/2016 02:48 PM, Xavier Hernandez wrote:
Hi,
On 01/02/16 09:54, Soumya Koduri wrote:
On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote:
Wait. It seems to be my bad.
Before unmounting I do drop_caches (2), and glusterfs process CPU usage
goes to 100% for a while. I haven't waited fo
Hi Oleksandr,
On 01/02/16 09:09, Oleksandr Natalenko wrote:
Wait. It seems to be my bad.
Before unmounting I do drop_caches (2), and glusterfs process CPU usage
goes to 100% for a while.
That's the expected behavior after applying the nlookup count patch. As
it's configured now, gluster won'
Hi,
On 01/02/16 09:54, Soumya Koduri wrote:
On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote:
Wait. It seems to be my bad.
Before unmounting I do drop_caches (2), and glusterfs process CPU usage
goes to 100% for a while. I didn't wait for it to drop to 0% and instead
performed the unmount. It
On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote:
Wait. It seems to be my bad.
Before unmounting I do drop_caches (2), and glusterfs process CPU usage
goes to 100% for a while. I didn't wait for it to drop to 0% and instead
performed the unmount. It seems glusterfs is purging inodes and that's
Wait. It seems to be my bad.
Before unmounting I do drop_caches (2), and glusterfs process CPU usage
goes to 100% for a while. I didn't wait for it to drop to 0% and instead
performed the unmount. It seems glusterfs is purging inodes, and that's
why it uses 100% of CPU. I've re-tested it, waiting for the inode purge to
finish before unmounting.
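For reference, a minimal sketch of the drop_caches step described here, assuming the volume is mounted at /mnt/volume; value 2 drops dentries and inodes, 3 would also drop the page cache:
===
# Flush dirty data first, then ask the kernel to drop dentries and inodes (run as root)
sync
echo 2 > /proc/sys/vm/drop_caches

# Watch the glusterfs client while it purges its inode table; CPU usage
# should settle back near 0% once the purge completes
top -p "$(pgrep -f 'glusterfs.*/mnt/volume' | head -n1)"
===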
On 01/31/2016 03:05 PM, Oleksandr Natalenko wrote:
Unfortunately, this patch doesn't help.
RAM usage when "find" finishes is ~9G.
Here is the statedump before drop_caches: https://gist.github.com/fc1647de0982ab447e20
[mount/fuse.fuse - usage-type gf_common_mt_inode_ctx memusage]
size=706766688
num
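A sketch of one way to pull the largest memusage sections like the one above out of a statedump; the dump file name is a placeholder:
===
# Pair every size= value with the section header above it and list the biggest
awk -F= '/^\[/ {sec=$0} /^size=/ {print $2, sec}' glusterdump.sample.dump | sort -rn | head
===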
Unfortunately, this patch doesn't help.
RAM usage when "find" finishes is ~9G.
Here is the statedump before drop_caches: https://gist.github.com/fc1647de0982ab447e20
And after drop_caches: https://gist.github.com/5eab63bc13f78787ed19
And here is Valgrind output: https://gist.github.com/2490aeac448320d
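A sketch of how such a Valgrind run can be set up, assuming the client is started by hand; server name, volume name and mount point are placeholders:
===
# Keep the client in the foreground (-N) so Valgrind can follow it;
# the leak summary lands in the log file once the volume is unmounted
valgrind --leak-check=full --show-leak-kinds=all \
         --log-file=/tmp/glusterfs-valgrind.log \
         glusterfs -N --volfile-server=server1 --volfile-id=volume /mnt/volume
===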
There's another inode leak caused by an incorrect counting of lookups on
directory reads. Here's a patch that solves the problem for 3.7:
http://review.gluster.org/13324
Hopefully, with this patch the memory leaks should disappear.
Xavi
On 29.01.2016 19:09, Oleksandr Natalenko wrote:
> Here
On 01/29/2016 01:09 PM, Oleksandr Natalenko wrote:
Here is an intermediate summary of the current memory leak investigation in
the FUSE client.
I use the GlusterFS v3.7.6 release with the following patches:
===
Kaleb S KEITHLEY (1):
fuse: use-after-free fix in fuse-bridge, revisited
Pranith Kumar K (