Re: [Gluster-devel] Monotonically increasing memory

2014-08-01 Thread Anders Blomdell
On 2014-08-01 02:02, Harshavardhana wrote:
 On Thu, Jul 31, 2014 at 11:31 AM, Anders Blomdell
 anders.blomd...@control.lth.se wrote:
 During rsync of 35 files, memory consumption of glusterfs
 rose to 12 GB (after approx 14 hours), I take it that this is a
 bug I should try to track down?

 
 Does it ever come down? What happens if you rsync the same files again?
 Does it OOM?
Well, it OOM'd my Firefox first (that's how good I monitor my experiments :-()
No, memory usage does not come down by itself, AFAICT.
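A minimal way to keep an eye on that is to sample the client's resident set
size periodically; this is only a sketch, and the process name, interval and
log path below are assumptions rather than anything from my actual setup:

  #!/bin/sh
  # Log the resident set size (RSS, in kB) of every glusterfs process once
  # a minute, so growth during a long rsync run can be inspected afterwards.
  while true; do
      printf '%s ' "$(date +%FT%T)"
      ps -C glusterfs -o rss= | paste -sd' ' -
      sleep 60
  done >> /var/tmp/glusterfs-rss.log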


On 2014-08-01 02:12, Raghavendra Gowdappa wrote:
 Anders,
 
 Most likely it's a case of a memory leak. It would be helpful if you could file
 a bug on this. The following information would be useful to fix the issue:
 
 1. valgrind reports (if possible).
  a. To start the brick and NFS processes under valgrind, you can use the
 following command line when starting glusterd:
 # glusterd --xlator-option *.run-with-valgrind=yes
 
 In this case all the valgrind logs can be found in the standard glusterfs log
 directory.
 
  b. For the client, you can start glusterfs under valgrind just like any other
 process. Since glusterfs normally daemonizes itself, it needs to be kept in the
 foreground while running under valgrind; the -N option does that:
 # valgrind --leak-check=full --log-file=path-to-valgrind-log glusterfs \
     --volfile-id=xyz --volfile-server=abc -N /mnt/glfs
 
 2. Once you observe a considerable leak in memory, please get a statedump of 
 glusterfs
 
   # gluster volume statedump volname
 
 and attach the reports to the bug.
Since it looks like Pranith has a clue, I'll leave it for a few weeks (other
pressing duties).

On 2014-08-01 03:24, Pranith Kumar Karampuri wrote:
 Yes, I saw the following leaks too when I tested it a week back. These were
 the leaks:
 You should probably take a statedump and see what datatypes are leaking.
 
 root@localhost - /usr/local/var/run/gluster
 14:10:26 ? awk -f /home/pk1/mem-leaks.awk glusterdump.22412.dump.1406174043
 [mount/fuse.fuse - usage-type gf_common_mt_char memusage]
 size=341240
 num_allocs=23602
 max_size=347987
 max_num_allocs=23604
 total_allocs=653194
 ... 
I'll revisit this in a few weeks,

Harshavardhana, Raghavendra, Pranith (and all others),

Gluster is one of the most responsive Open Source projects I have
participated in thus far; I'm very happy with all the support, help and
encouragement I have received. Even though my initial tests weren't fully
satisfactory, you are the main reason for my perseverance :-)

/Anders



-- 
Anders Blomdell  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University  Phone:+46 46 222 4625
P.O. Box 118 Fax:  +46 46 138118
SE-221 00 Lund, Sweden



Re: [Gluster-devel] Monotonically increasing memory

2014-08-01 Thread Anders Blomdell
On 2014-08-01 08:56, Pranith Kumar Karampuri wrote:
 
 On 08/01/2014 12:09 PM, Anders Blomdell wrote:
 On 2014-08-01 02:02, Harshavardhana wrote:
  On Thu, Jul 31, 2014 at 11:31 AM, Anders Blomdell
  anders.blomd...@control.lth.se wrote:
 During rsync of 35 files, memory consumption of glusterfs
 rose to 12 GB (after approx 14 hours), I take it that this is a
 bug I should try to track down?

  Does it ever come down? What happens if you rsync the same files again?
  Does it OOM?
  Well, it OOM'd my Firefox first (that's how good I monitor my experiments :-()
  No, memory usage does not come down by itself, AFAICT.


 On 2014-08-01 02:12, Raghavendra Gowdappa wrote:
  Anders,
  Most likely it's a case of a memory leak. It would be helpful if you could
  file a bug on this. The following information would be useful to fix the issue:

  1. valgrind reports (if possible).
    a. To start the brick and NFS processes under valgrind, you can use the
  following command line when starting glusterd:
   # glusterd --xlator-option *.run-with-valgrind=yes

   In this case all the valgrind logs can be found in the standard glusterfs
  log directory.

    b. For the client, you can start glusterfs under valgrind just like any
  other process. Since glusterfs normally daemonizes itself, it needs to be
  kept in the foreground while running under valgrind; the -N option does that:
   # valgrind --leak-check=full --log-file=path-to-valgrind-log glusterfs \
       --volfile-id=xyz --volfile-server=abc -N /mnt/glfs

 2. Once you observe a considerable leak in memory, please get a statedump 
 of glusterfs

# gluster volume statedump volname

  and attach the reports to the bug.
 Since it looks like Pranith has a clue, I'll leave it for a few weeks (other
 pressing duties).

 On 2014-08-01 03:24, Pranith Kumar Karampuri wrote:
  Yes, I saw the following leaks too when I tested it a week back. These were
  the leaks:
  You should probably take a statedump and see what datatypes are leaking.

 root@localhost - /usr/local/var/run/gluster
 14:10:26 ? awk -f /home/pk1/mem-leaks.awk glusterdump.22412.dump.1406174043
 [mount/fuse.fuse - usage-type gf_common_mt_char memusage]
 size=341240
 num_allocs=23602
 max_size=347987
 max_num_allocs=23604
 total_allocs=653194
 ...
 I'll revisit this in a few weeks,

 Harshavardhana, Raghavendra, Pranith (and all others),

  Gluster is one of the most responsive Open Source projects I have
  participated in thus far; I'm very happy with all the support, help and
  encouragement I have received. Even though my initial tests weren't fully
  satisfactory, you are the main reason for my perseverance :-)
 Yay, good :-). Do you have any suggestions on where we need to improve as
 a community to make it easier for new contributors?
http://review.gluster.org/#/c/8181/ (will hopefully come around and
review that, real soon now...)

Otherwise, no. I will recommend Gluster as an eminent crash course in
git, Gerrit and continuous integration. Keep up the good work.

/Anders 

-- 
Anders Blomdell  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University  Phone:+46 46 222 4625
P.O. Box 118 Fax:  +46 46 138118
SE-221 00 Lund, Sweden



Re: [Gluster-devel] Monotonically increasing memory

2014-08-01 Thread Justin Clift
- Original Message -
From: Anders Blomdell anders.blomd...@control.lth.se
To: Pranith Kumar Karampuri pkara...@redhat.com, Gluster Devel 
gluster-devel@gluster.org, Harshavardhana har...@harshavardhana.net, 
Raghavendra Gowdappa rgowd...@redhat.com
Sent: Friday, 1 August, 2014 7:39:55 AM
Subject: Re: [Gluster-devel] Monotonically increasing memory

snip
 Harshavardhana, Raghavendra, Pranith (and all others),

 Gluster is one of the most responsive Open Source projects I have
 participated in thus far; I'm very happy with all the support, help and
 encouragement I have received. Even though my initial tests weren't fully
 satisfactory, you are the main reason for my perseverance :-)

Awesome.  Thank you for the encouraging feedback.  This kind of thing
really helps make people's day. :D

+ Justin


Re: [Gluster-devel] Monotonically increasing memory

2014-07-31 Thread Harshavardhana
On Thu, Jul 31, 2014 at 11:31 AM, Anders Blomdell
anders.blomd...@control.lth.se wrote:
 During rsync of 35 files, memory consumption of glusterfs
 rose to 12 GB (after approx 14 hours), I take it that this is a
 bug I should try to track down?


Does it ever come down? What happens if you rsync the same files again?
Does it OOM?

-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes


Re: [Gluster-devel] Monotonically increasing memory

2014-07-31 Thread Raghavendra Gowdappa
Anders,

Most likely it's a case of a memory leak. It would be helpful if you could file
a bug on this. The following information would be useful to fix the issue:

1. valgrind reports (if possible).
 a. To start the brick and NFS processes under valgrind, you can use the
following command line when starting glusterd:
# glusterd --xlator-option *.run-with-valgrind=yes

In this case all the valgrind logs can be found in the standard glusterfs log
directory.

 b. For the client, you can start glusterfs under valgrind just like any other
process. Since glusterfs normally daemonizes itself, it needs to be kept in the
foreground while running under valgrind; the -N option does that:
# valgrind --leak-check=full --log-file=path-to-valgrind-log glusterfs \
    --volfile-id=xyz --volfile-server=abc -N /mnt/glfs

2. Once you observe a considerable leak in memory, please get a statedump of 
glusterfs

  # gluster volume statedump volname

and attach the reports to the bug.
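Put together, a rough sketch of such a session could look like the following;
the volume name, volfile server, mount point and log path are just the
placeholders from the commands above, not a real configuration:

  #!/bin/sh
  # 1a. Restart glusterd so that brick and NFS processes run under valgrind.
  glusterd --xlator-option '*.run-with-valgrind=yes'

  # 1b. Mount the client under valgrind; -N keeps glusterfs in the foreground.
  valgrind --leak-check=full --log-file=/var/log/glusterfs/valgrind-client.log \
      glusterfs --volfile-id=xyz --volfile-server=abc -N /mnt/glfs &

  # ... reproduce the leak (e.g. a long rsync run) ...

  # 2. Once memory use has grown noticeably, take a statedump; the dump files
  #    appear in the gluster run directory (e.g. /usr/local/var/run/gluster).
  gluster volume statedump volname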

regards,
Raghavendra.

- Original Message -
 From: Anders Blomdell anders.blomd...@control.lth.se
 To: Gluster Devel gluster-devel@gluster.org
 Sent: Friday, August 1, 2014 12:01:15 AM
 Subject: [Gluster-devel] Monotonically increasing memory
 
 During rsync of 35 files, memory consumption of glusterfs
 rose to 12 GB (after approx 14 hours), I take it that this is a
 bug I should try to track down?
 
 Version is 3.7dev as of Tuesday...
 
 /Anders
 
 --
 Anders Blomdell  Email: anders.blomd...@control.lth.se
 Department of Automatic Control
 Lund University  Phone:+46 46 222 4625
 P.O. Box 118 Fax:  +46 46 138118
 SE-221 00 Lund, Sweden
 


Re: [Gluster-devel] Monotonically increasing memory

2014-07-31 Thread Pranith Kumar Karampuri
Yes, I saw the following leaks too when I tested it a week back. These were
the leaks:

You should probably take a statedump and see what datatypes are leaking.

root@localhost - /usr/local/var/run/gluster
14:10:26 ? awk -f /home/pk1/mem-leaks.awk glusterdump.22412.dump.1406174043
[mount/fuse.fuse - usage-type gf_common_mt_char memusage]
size=341240
num_allocs=23602
max_size=347987
max_num_allocs=23604
total_allocs=653194

[mount/fuse.fuse - usage-type gf_common_mt_mem_pool memusage]
size=4335440
num_allocs=45159
max_size=7509032
max_num_allocs=77391
total_allocs=530058

[performance/quick-read.r2-quick-read - usage-type gf_common_mt_asprintf memusage]
size=182526
num_allocs=30421
max_size=182526
max_num_allocs=30421
total_allocs=30421

[performance/quick-read.r2-quick-read - usage-type gf_common_mt_char memusage]
size=547578
num_allocs=30421
max_size=547578
max_num_allocs=30421
total_allocs=30421

[performance/quick-read.r2-quick-read - usage-type gf_common_mt_mem_pool memusage]
size=3117196
num_allocs=52999
max_size=3117368
max_num_allocs=53000
total_allocs=109484

[cluster/distribute.r2-dht - usage-type gf_common_mt_asprintf memusage]
size=257304
num_allocs=82988
max_size=257304
max_num_allocs=82988
total_allocs=97309

[cluster/distribute.r2-dht - usage-type gf_common_mt_char memusage]
size=2082904
num_allocs=82985
max_size=2082904
max_num_allocs=82985
total_allocs=101346

[cluster/distribute.r2-dht - usage-type gf_common_mt_mem_pool memusage]
size=9958372
num_allocs=165972
max_size=9963396
max_num_allocs=165980
total_allocs=467956

root@localhost - /usr/local/var/run/gluster
14:10:28 ?
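
mem-leaks.awk itself isn't included here; a rough stand-in along the following
lines would pick out the memusage sections of a statedump that still hold a
high number of live allocations (the 10000 threshold is an arbitrary choice):

  awk -v threshold=10000 '
      /^\[.* memusage\]$/          { header = $0; buf = ""; next }
      header && /^[a-z_]+=[0-9]+$/ {
          buf = buf $0 "\n"
          split($0, kv, "=")
          if (kv[1] == "num_allocs")  allocs = kv[2]
          if (kv[1] == "total_allocs") {       # last line of each section
              if (allocs + 0 > threshold + 0)
                  printf "%s\n%s\n", header, buf
              header = ""
          }
      }
  ' glusterdump.22412.dump.1406174043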

Pranith

On 08/01/2014 12:01 AM, Anders Blomdell wrote:

During rsync of 35 files, memory consumption of glusterfs
rose to 12 GB (after approx 14 hours), I take it that this is a
bug I should try to track down?

Version is 3.7dev as of Tuesday...

/Anders


