Re: [Gluster-devel] Monotonically increasing memory

2014-08-01 Thread Anders Blomdell
On 2014-08-01 02:02, Harshavardhana wrote:
 On Thu, Jul 31, 2014 at 11:31 AM, Anders Blomdell
 anders.blomd...@control.lth.se wrote:
 During rsync of 35 files, memory consumption of glusterfs
 rose to 12 GB (after approx 14 hours); I take it that this is a
 bug I should try to track down?

 
 Does it ever come down? What happens if you repeatedly rsync the same
 files again? Does it OOM?
Well, it OOM'd my firefox first (that's how good I monitor my experiments :-()
No, memory usage does not come down by itself, AFAICT.
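
[Note: a simple way to watch for this kind of growth over a long run; the
process selector and the one-minute interval are assumptions, adjust to
taste:

  # while true; do date; ps -o pid=,rss=,comm= -C glusterfs; sleep 60; done >> rss.log

Each sample logs the resident set size (in kB) of every glusterfs process,
so a monotonic climb shows up directly in rss.log.]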


On 2014-08-01 02:12, Raghavendra Gowdappa wrote:
 Anders,
 
 Most likely it's a memory leak. It would be helpful if you could file a bug
 on this. The following information would be useful for fixing the issue:
 
 1. valgrind reports (if possible).
  a. To start brick and nfs processes under valgrind, you can use the
 following command line when starting glusterd:
 # glusterd --xlator-option *.run-with-valgrind=yes
 
 In this case all the valgrind logs can be found in the standard glusterfs
 log directory.
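
[Note: a minimal sketch of that workflow, assuming a systemd-managed host
and the default log location /var/log/glusterfs (both assumptions):

  # systemctl stop glusterd
  # glusterd --xlator-option *.run-with-valgrind=yes
  # ls /var/log/glusterfs/bricks/

The valgrind output for the brick and nfs processes should then appear
alongside the usual logs in that directory.]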
 
  b. For the client, you can start glusterfs under valgrind just like any
 other process. Since glusterfs daemonizes itself, we need to keep it in the
 foreground when running under valgrind; the -N option does that:
 # valgrind --leak-check=full --log-file=path-to-valgrind-log glusterfs 
 --volfile-id=xyz --volfile-server=abc -N /mnt/glfs
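
[Note: once the mount is unmounted and the client process exits, the leak
report can be pulled from the log named by --log-file, e.g.:

  # grep -A 5 "LEAK SUMMARY" path-to-valgrind-log

which prints the definitely/indirectly/possibly-lost byte counts that
valgrind tallies at exit.]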
 
 2. Once you observe a considerable leak in memory, please get a statedump of 
 glusterfs
 
   # gluster volume statedump volname
 
 and attach the reports to the bug.
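
[Note: the statedump files land in gluster's run directory -- typically
/var/run/gluster on packaged builds, or /usr/local/var/run/gluster on
source builds as in Pranith's session below -- named
glusterdump.<pid>.dump.<timestamp>:

  # gluster volume statedump volname
  # ls /var/run/gluster
  glusterdump.22412.dump.1406174043
]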
Since it looks like Pranith has a clue, I'll leave it for a few weeks (other
pressing duties).

On 2014-08-01 03:24, Pranith Kumar Karampuri wrote:
 Yes, even I saw the following leaks when I tested it a week back.
 You should probably take a statedump and see what datatypes are leaking.
 These were the leaks:
 
 root@localhost - /usr/local/var/run/gluster
 14:10:26 ? awk -f /home/pk1/mem-leaks.awk glusterdump.22412.dump.1406174043
 [mount/fuse.fuse - usage-type gf_common_mt_char memusage]
 size=341240
 num_allocs=23602
 max_size=347987
 max_num_allocs=23604
 total_allocs=653194
 ... 
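
[Note: /home/pk1/mem-leaks.awk itself isn't shown in the thread; a
hypothetical awk sketch in the same spirit, with an arbitrary threshold of
10000 allocations, might look like:

  # flag statedump memusage sections with a high allocation count
  /usage-type .* memusage/ { section = $0 }
  /^size=/                 { size = substr($0, 6) }
  /^num_allocs=/           { n = substr($0, 12)
                             if (n + 0 > 10000)
                                 print section "\nsize=" size "\nnum_allocs=" n }
]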
I'll revisit this in a few weeks,

Harshavardhana, Raghavendra, Pranith (and all others),

Gluster is one of the most responsive Open Source projects I
have participated in thus far; I'm very happy with all the support,
help and encouragement I have received. Even though my initial
tests weren't fully satisfactory, you are the main reason for
my perseverance :-)

/Anders



-- 
Anders Blomdell  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University  Phone: +46 46 222 4625
P.O. Box 118 Fax: +46 46 138118
SE-221 00 Lund, Sweden



Re: [Gluster-devel] Monotonically increasing memory

2014-08-01 Thread Anders Blomdell
On 2014-08-01 08:56, Pranith Kumar Karampuri wrote:
 
 On 08/01/2014 12:09 PM, Anders Blomdell wrote:
 [snip]
 Harshavardhana, Raghavendra, Pranith (and all others),

 Gluster is one of the most responsive Open Source projects I
 have participated in thus far; I'm very happy with all the support,
 help and encouragement I have received. Even though my initial
 tests weren't fully satisfactory, you are the main reason for
 my perseverance :-)
 Yay, good :-). Do you have any suggestions on where we need to improve as
 a community that would make it easier for new contributors?
http://review.gluster.org/#/c/8181/ (will hopefully come around and
review that, real soon now...)

Otherwise, no. I will recommend gluster as an eminent crash-course in
git, gerrit and continuous integration. Keep up the good work.

/Anders 

-- 
Anders Blomdell  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University  Phone: +46 46 222 4625
P.O. Box 118 Fax: +46 46 138118
SE-221 00 Lund, Sweden



Re: [Gluster-devel] regarding mempool documentation patch

2014-08-01 Thread Vijay Bellur

On 08/01/2014 10:53 AM, Pranith Kumar Karampuri wrote:

hi,
 If there are no more comments, could we take
http://review.gluster.com/#/c/8343 in?



Thanks, have merged the patch.

-Vijay






[Gluster-devel] regarding resolution for fuse/server

2014-08-01 Thread Pranith Kumar Karampuri

hi,
 Does anyone know why there is different code for resolution in
fuse vs server? There are some differences too: for instance, the server
asserts about resolution types like RESOLVE_MUST/RESOLVE_NOT etc., whereas
fuse doesn't do any such thing. Wondering if there is any reason why the
code is different in these two xlators.


Pranith


[Gluster-devel] glusterfs-3.5.2 RPMs are now available

2014-08-01 Thread Lalatendu Mohanty

On 07/31/2014 05:07 PM, Niels de Vos wrote:

On Thu, Jul 31, 2014 at 04:06:46AM -0700, Gluster Build System wrote:


SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.2.tar.gz

Bugs that were marked for 3.5.2 and did not get fixed in this release are
being moved to the new 3.5.3 release tracker:
- https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.5.3

Please add 'glusterfs-3.5.3' to the 'Blocks' field of bugs to propose
their inclusion in GlusterFS 3.5.3.

Link to the release notes of glusterfs-3.5.2:
- http://blog.nixpanic.net/2014/07/glusterfs-352-has-been-released.html
- 
https://github.com/gluster/glusterfs/blob/release-3.5/doc/release-notes/3.5.2.md

Remember that packages for different distributions (downstream) will get
created after this (upstream) release.

Thanks,
Niels



[RPMs for EL5, 6, 7]
RPMs are available at download.gluster.org [1].

[Fedora]
GlusterFS-3.5.2 RPMs for Fedora will be available from the Fedora
updates-testing YUM repository. After they have passed a nominal testing
period, they will be available in the Fedora updates YUM repository.


The RPMs were built using the Fedora Project's Koji build system, so you
can also find them on Koji [2].



[1] http://download.gluster.org/pub/gluster/glusterfs/LATEST/
[2] https://koji.fedoraproject.org/koji/packageinfo?packageID=5443

Thanks,
Lala



Re: [Gluster-devel] Monotonically increasing memory

2014-08-01 Thread Justin Clift
- Original Message -
From: Anders Blomdell anders.blomd...@control.lth.se
To: Pranith Kumar Karampuri pkara...@redhat.com, Gluster Devel 
gluster-devel@gluster.org, Harshavardhana har...@harshavardhana.net, 
Raghavendra Gowdappa rgowd...@redhat.com
Sent: Friday, 1 August, 2014 7:39:55 AM
Subject: Re: [Gluster-devel] Monotonically increasing memory

snip
 Harshavardhana, Raghavendra, Pranith (and all others),

 Gluster is one of the most responsive Open Source projects I
 have participated in thus far; I'm very happy with all the support,
 help and encouragement I have received. Even though my initial
 tests weren't fully satisfactory, you are the main reason for
 my perseverance :-)

Awesome.  Thank you for the encouraging feedback.  This kind of thing
really helps make people's day. :D

+ Justin


Re: [Gluster-devel] regarding resolution for fuse/server

2014-08-01 Thread Anand Avati
There are subtle differences between fuse and server. In fuse the inode
table does not use LRU pruning, so expected inodes are guaranteed to be
cached. For example, when a mkdir() FOP arrives, fuse will already have
checked with a lookup, and the kernel guarantees that another thread cannot
have created the directory in the meantime (with a mutex on the parent dir).
In the server, either because of LRU pruning or racing threads, you need to
re-evaluate the situation to be sure. RESOLVE_MUST/NOT makes more sense in
the server because of this.

HTH
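
[Note: an illustrative C sketch of that idea, with simplified names; this
is not the actual server xlator code:

  #include <errno.h>

  /* RESOLVE_MUST: entry must already exist (e.g. open, unlink).
   * RESOLVE_NOT:  entry must not exist yet (e.g. mkdir, create). */
  typedef enum { RESOLVE_MUST, RESOLVE_NOT } resolve_type_t;

  /* 'found' stands in for whether the inode could be resolved in the
   * table. On the server, LRU pruning or a racing client means the state
   * seen at lookup time may no longer hold, so the precondition is
   * re-asserted here rather than trusted. */
  static int check_resolution (resolve_type_t type, int found)
  {
          if (type == RESOLVE_MUST && !found)
                  return -ENOENT;   /* expected the entry to exist */
          if (type == RESOLVE_NOT && found)
                  return -EEXIST;   /* expected the entry to be absent */
          return 0;                 /* precondition holds, proceed */
  }
]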

On Fri, Aug 1, 2014 at 2:43 AM, Pranith Kumar Karampuri pkara...@redhat.com
 wrote:

 [snip]
