Re: [Gluster-devel] Monotonically increasing memory

2014-07-31 Thread Pranith Kumar Karampuri


On 08/01/2014 12:09 PM, Anders Blomdell wrote:

On 2014-08-01 02:02, Harshavardhana wrote:
On Thu, Jul 31, 2014 at 11:31 AM, Anders Blomdell wrote:

During rsync of 35 files, memory consumption of glusterfs
rose to 12 GB (after approx 14 hours), I take it that this is a
bug I should try to track down?


Does it ever come down? What happens if you repeatedly rsync the same files
again? Does it OOM?

Well, it OOM'd my Firefox first (that's how well I monitor my experiments :-()
No, memory usage does not come down by itself, AFAICT.


On 2014-08-01 02:12, Raghavendra Gowdappa wrote:
Anders,

Most likely it's a case of a memory leak. It would be helpful if you could file a bug on
this. The following information would be useful for fixing the issue:

1. valgrind reports (if possible).
  a. To start the brick and nfs processes under valgrind, you can use the following
command line when starting glusterd:
 # glusterd --xlator-option *.run-with-valgrind=yes

 In this case all the valgrind logs can be found in the standard glusterfs log
directory.

  b. For the client, you can start glusterfs under valgrind just like any other process.
Since glusterfs normally daemonizes itself, we need to prevent that by running it in the
foreground while under valgrind; the -N option does that:
 # valgrind --leak-check=full --log-file= glusterfs --volfile-id=xyz --volfile-server=abc -N /mnt/glfs

2. Once you observe a considerable leak in memory, please take a statedump of
glusterfs:

   # gluster volume statedump 

and attach the reports to the bug.

Since it looks like Pranith has a clue, I'll leave it for a few weeks (other
pressing duties).

On 2014-08-01 03:24, Pranith Kumar Karampuri wrote:

Yes, I too saw the following leaks when I tested it a week back. These were
the leaks:
You should probably take a statedump and see what datatypes are leaking.

root@localhost - /usr/local/var/run/gluster
14:10:26 ? awk -f /home/pk1/mem-leaks.awk glusterdump.22412.dump.1406174043
[mount/fuse.fuse - usage-type gf_common_mt_char memusage]
size=341240
num_allocs=23602
max_size=347987
max_num_allocs=23604
total_allocs=653194
...

I'll revisit this in a few weeks,

Harshavardhana, Raghavendra, Pranith (and all others),

Gluster is one of the most responsive Open Source projects I
have participated in thus far; I'm very happy with all the support,
help and encouragement I have received. Even though my initial
tests weren't fully satisfactory, you are the main reason for
my perseverance :-)
Yay! Good :-). Do you have any suggestions on where we need to improve as a
community that would make it easier for new contributors?


Pranith


/Anders





___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Monotonically increasing memory

2014-07-31 Thread Anders Blomdell
On 2014-08-01 02:02, Harshavardhana wrote:
> On Thu, Jul 31, 2014 at 11:31 AM, Anders Blomdell wrote:
>> During rsync of 35 files, memory consumption of glusterfs
>> rose to 12 GB (after approx 14 hours), I take it that this is a
>> bug I should try to track down?
>>
> 
> Does it ever come down? What happens if you repeatedly rsync the same files
> again? Does it OOM?
Well, it OOM'd my Firefox first (that's how well I monitor my experiments :-()
No, memory usage does not come down by itself, AFAICT.


On 2014-08-01 02:12, Raghavendra Gowdappa wrote:
> Anders,
> 
> Most likely it's a case of a memory leak. It would be helpful if you could file a bug
> on this. The following information would be useful for fixing the issue:
> 
> 1. valgrind reports (if possible).
>  a. To start the brick and nfs processes under valgrind, you can use the following
> command line when starting glusterd:
> # glusterd --xlator-option *.run-with-valgrind=yes
> 
> In this case all the valgrind logs can be found in the standard glusterfs log
> directory.
> 
>  b. For the client, you can start glusterfs under valgrind just like any other process.
> Since glusterfs normally daemonizes itself, we need to prevent that by running it in
> the foreground while under valgrind; the -N option does that:
> # valgrind --leak-check=full --log-file= glusterfs --volfile-id=xyz --volfile-server=abc -N /mnt/glfs
> 
> 2. Once you observe a considerable leak in memory, please take a statedump of
> glusterfs:
> 
>   # gluster volume statedump 
> 
> and attach the reports to the bug.
Since it looks like Pranith has a clue, I'll leave it for a few weeks (other
pressing duties).

On 2014-08-01 03:24, Pranith Kumar Karampuri wrote:
> Yes, I too saw the following leaks when I tested it a week back. These were
> the leaks:
> You should probably take a statedump and see what datatypes are leaking.
> 
> root@localhost - /usr/local/var/run/gluster
> 14:10:26 ? awk -f /home/pk1/mem-leaks.awk glusterdump.22412.dump.1406174043
> [mount/fuse.fuse - usage-type gf_common_mt_char memusage]
> size=341240
> num_allocs=23602
> max_size=347987
> max_num_allocs=23604
> total_allocs=653194
> ... 
I'll revisit this in a few weeks,

Harshavardhana, Raghavendra, Pranith (and all others),

Gluster is one of the most responsive Open Source projects I
have participated in thus far; I'm very happy with all the support,
help and encouragement I have received. Even though my initial
tests weren't fully satisfactory, you are the main reason for
my perseverance :-)

/Anders



-- 
Anders Blomdell  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University  Phone:+46 46 222 4625
P.O. Box 118 Fax:  +46 46 138118
SE-221 00 Lund, Sweden

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] regarding mempool documentation patch

2014-07-31 Thread Pranith Kumar Karampuri

hi,
If there are no more comments, could we take http://review.gluster.com/#/c/8343 in?


Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Monotonically increasing memory

2014-07-31 Thread Pranith Kumar Karampuri
Yes, I too saw the following leaks when I tested it a week back. These
were the leaks:

You should probably take a statedump and see what datatypes are leaking.

root@localhost - /usr/local/var/run/gluster
14:10:26 ? awk -f /home/pk1/mem-leaks.awk glusterdump.22412.dump.1406174043
[mount/fuse.fuse - usage-type gf_common_mt_char memusage]
size=341240
num_allocs=23602
max_size=347987
max_num_allocs=23604
total_allocs=653194

[mount/fuse.fuse - usage-type gf_common_mt_mem_pool memusage]
size=4335440
num_allocs=45159
max_size=7509032
max_num_allocs=77391
total_allocs=530058

[performance/quick-read.r2-quick-read - usage-type gf_common_mt_asprintf 
memusage]

size=182526
num_allocs=30421
max_size=182526
max_num_allocs=30421
total_allocs=30421

[performance/quick-read.r2-quick-read - usage-type gf_common_mt_char 
memusage]

size=547578
num_allocs=30421
max_size=547578
max_num_allocs=30421
total_allocs=30421

[performance/quick-read.r2-quick-read - usage-type gf_common_mt_mem_pool 
memusage]

size=3117196
num_allocs=52999
max_size=3117368
max_num_allocs=53000
total_allocs=109484

[cluster/distribute.r2-dht - usage-type gf_common_mt_asprintf memusage]
size=257304
num_allocs=82988
max_size=257304
max_num_allocs=82988
total_allocs=97309

[cluster/distribute.r2-dht - usage-type gf_common_mt_char memusage]
size=2082904
num_allocs=82985
max_size=2082904
max_num_allocs=82985
total_allocs=101346

[cluster/distribute.r2-dht - usage-type gf_common_mt_mem_pool memusage]
size=9958372
num_allocs=165972
max_size=9963396
max_num_allocs=165980
total_allocs=467956

[performance/quick-read.r2-quick-read - usage-type gf_common_mt_asprintf 
memusage]

size=182526
num_allocs=30421
max_size=182526
max_num_allocs=30421
total_allocs=30421

[performance/quick-read.r2-quick-read - usage-type gf_common_mt_char 
memusage]

size=547578
num_allocs=30421
max_size=547578
max_num_allocs=30421
total_allocs=30421

[performance/quick-read.r2-quick-read - usage-type gf_common_mt_mem_pool 
memusage]

size=3117196
num_allocs=52999
max_size=3117368
max_num_allocs=53000
total_allocs=109484

[cluster/distribute.r2-dht - usage-type gf_common_mt_asprintf memusage]
size=257304
num_allocs=82988
max_size=257304
max_num_allocs=82988
total_allocs=97309

[cluster/distribute.r2-dht - usage-type gf_common_mt_char memusage]
size=2082904
num_allocs=82985
max_size=2082904
max_num_allocs=82985
total_allocs=101346

[cluster/distribute.r2-dht - usage-type gf_common_mt_mem_pool memusage]
size=9958372
num_allocs=165972
max_size=9963396
max_num_allocs=165980
total_allocs=467956


root@localhost - /usr/local/var/run/gluster
14:10:28 ?
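
mem-leaks.awk is not attached here; a rough equivalent (a guess, not the actual
script) that prints the memusage sections of a statedump whose current
num_allocs is still high would be something like this — the 10000 cut-off is
arbitrary:

awk '
/usage-type .* memusage/ { section = $0 }
/^num_allocs=/ {
        n = $0; sub(/^num_allocs=/, "", n)
        if (section != "" && n + 0 > 10000) {   # arbitrary threshold
                print section; print "num_allocs=" n; print ""
        }
        section = ""
}' glusterdump.22412.dump.1406174043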

Pranith

On 08/01/2014 12:01 AM, Anders Blomdell wrote:

During rsync of 35 files, memory consumption of glusterfs
rose to 12 GB (after approx 14 hours), I take it that this is a
bug I should try to track down?

Version is 3.7dev as of Tuesday...

/Anders



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] When inode table is populated?

2014-07-31 Thread Raghavendra G
On Wed, Jul 30, 2014 at 12:43 PM, Anoop C S  wrote:

>
> On 07/30/2014 12:29 PM, Raghavendra Gowdappa wrote:
>
>>
>> - Original Message -
>>
>>> From: "Jiffin Thottan" 
>>> To: gluster-devel@gluster.org
>>> Sent: Wednesday, July 30, 2014 12:22:30 PM
>>> Subject: [Gluster-devel] When  inode table is populated?
>>>
>>> Hi,
>>>
>>> When we were trying to call rename from a translator (in reconfigure) using
>>> STACK_WIND, the inode table (this->itable) value seems to be null.
>>>
>>> Since an inode is required for performing rename, when does the inode table get
>>> populated, and why is it not populated in reconfigure or init?
>>>
>> Not every translator has an inode table (nor is it required to). Only the
>> translators which do inode management (like fuse-bridge, protocol/server,
>> libgfapi, possibly the nfsv3 server??) will have an inode table associated with
>> them.
>
>
I was not entirely correct when I made the above statement. Though inode
management is done by the above-mentioned translators (fuse-bridge, server,
libgfapi etc.), inode tables are _not associated_ with them (with the nfsv3
server as an exception, which does have an inode table associated with it).

For a client (fuse mount, libgfapi etc.), the top-level xlator in the graph
holds the inode table. If you used the --volume-name option on the client,
the xlator name specified there is set as the top. Otherwise, whatever
happens to be the topmost xlator is set as the top; usually this is one of
acl, meta, worm etc. (please refer to the code in graph.c, which picks the
"first" xlator in the graph).

On the brick process, itables are associated with the xlators "bound" to
protocol/server. The concept of a bound xlator was introduced so that clients
have the flexibility to connect to any xlator in the server graph (based on
what functionality they want). The bound xlators of a server are specified
through "option subvolumes" in the brick volfile. IIRC, we can have multiple
bound xlators for a single protocol/server (though the volfiles we generate
using the gluster CLI use only one).

>> If you need to access itable, you can do that using inode->table.
>>
>
> Here the old location was created by assigning a fixed gfid using
> STACK_WIND (mkdir) from the translator's notify(). And the inode was null inside
> its cbk function. Thus the old location doesn't have an inode value.


I assume you are speaking in the context of the trash translator, and that you
probably want to create a ".trash" directory on each of the bricks. It is a bit
tricky to initiate an mkdir fop from the notify of an xlator. For mkdir, the
pargfid, name and inode members of the loc_t structure have to be populated.
The inode would be a new inode (obtained by calling inode_new()), but we need
a pointer to the itable to pass as an argument to inode_new(). Based on whether
you are loading the trash translator on the client or the server, you can get at
the itable as explained above.
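
Roughly, the wind could look something like the sketch below (untested, from
memory, not actual trash translator code; the itable has to come from one of the
places described above, signatures follow the 3.x headers and may differ slightly
between releases, error handling and loc/ref cleanup are omitted, and a fixed
gfid would be requested by putting a "gfid-req" entry into xdata instead of the
NULL passed here):

/* Hypothetical sketch: winding an mkdir for ".trash" from inside an xlator. */
#include <string.h>
#include "xlator.h"
#include "inode.h"
#include "logging.h"

static int32_t
trash_mkdir_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                 int32_t op_ret, int32_t op_errno, inode_t *inode,
                 struct iatt *buf, struct iatt *preparent,
                 struct iatt *postparent, dict_t *xdata)
{
        gf_log (this->name, GF_LOG_INFO,
                ".trash mkdir returned %d (errno %d)", op_ret, op_errno);
        STACK_DESTROY (frame->root);
        return 0;
}

static int
trash_create_dir (xlator_t *this, inode_table_t *itable)
{
        loc_t         loc   = {0, };
        call_frame_t *frame = create_frame (this, this->ctx->pool);

        /* parent is the root of the volume (gfid ...0001) */
        loc.parent = inode_ref (itable->root);
        memset (loc.pargfid, 0, sizeof (loc.pargfid));
        loc.pargfid[15] = 1;

        loc.name  = ".trash";           /* real code should gf_strdup() these  */
        loc.path  = "/.trash";          /* and loc_wipe() in the callback      */
        loc.inode = inode_new (itable); /* fresh inode, as described above     */

        STACK_WIND (frame, trash_mkdir_cbk, FIRST_CHILD (this),
                    FIRST_CHILD (this)->fops->mkdir,
                    &loc, 0755, 0022, NULL);
        return 0;
}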


>
>
>>  Or should we create a private inode table and generate an inode using it?
>>>
>>
This means your ".trash" directory wouldn't be accessible through the standard
inode resolution code in fuse-bridge/libgfapi/server. In short, I would
advise you not to use a private inode table, though you could probably make
it work through some hacks.


>
>>> -Jiffin
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
>>>
>>>  ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
>>
>
> --
> Anoop C S
> +91 740 609 8342
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Raghavendra G
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Monotonically increasing memory

2014-07-31 Thread Raghavendra Gowdappa
Anders,

Most likely it's a case of a memory leak. It would be helpful if you could file a bug on
this. The following information would be useful for fixing the issue:

1. valgrind reports (if possible).
 a. To start the brick and nfs processes under valgrind, you can use the following
command line when starting glusterd:
# glusterd --xlator-option *.run-with-valgrind=yes

In this case all the valgrind logs can be found in the standard glusterfs log
directory.

 b. For the client, you can start glusterfs under valgrind just like any other process.
Since glusterfs normally daemonizes itself, we need to prevent that by running it in the
foreground while under valgrind; the -N option does that:
# valgrind --leak-check=full --log-file= glusterfs --volfile-id=xyz --volfile-server=abc -N /mnt/glfs

2. Once you observe a considerable leak in memory, please take a statedump of
glusterfs:

  # gluster volume statedump 

and attach the reports to the bug.
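
Put together, the steps would look roughly like this (the valgrind log path and
VOLNAME below are just placeholders, since the exact values depend on your
setup; note that the --xlator-option glob should be quoted so the shell doesn't
expand it):

# 1a. bricks/nfs under valgrind (logs go to the usual glusterfs log directory)
glusterd --xlator-option '*.run-with-valgrind=yes'

# 1b. client mount under valgrind, kept in the foreground with -N
valgrind --leak-check=full --log-file=/tmp/glfs-client-valgrind.log \
    glusterfs --volfile-id=xyz --volfile-server=abc -N /mnt/glfs

# 2. once memory has grown noticeably, take a statedump (VOLNAME is a placeholder)
gluster volume statedump VOLNAME
# the dump files land in the runtime directory, e.g. /usr/local/var/run/gluster/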

regards,
Raghavendra.

- Original Message -
> From: "Anders Blomdell" 
> To: "Gluster Devel" 
> Sent: Friday, August 1, 2014 12:01:15 AM
> Subject: [Gluster-devel] Monotonically increasing memory
> 
> During rsync of 35 files, memory consumption of glusterfs
> rose to 12 GB (after approx 14 hours), I take it that this is a
> bug I should try to track down?
> 
> Version is 3.7dev as of Tuesday...
> 
> /Anders
> 
> --
> Anders Blomdell  Email: anders.blomd...@control.lth.se
> Department of Automatic Control
> Lund University  Phone:+46 46 222 4625
> P.O. Box 118 Fax:  +46 46 138118
> SE-221 00 Lund, Sweden
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Monotonically increasing memory

2014-07-31 Thread Harshavardhana
On Thu, Jul 31, 2014 at 11:31 AM, Anders Blomdell wrote:
> During rsync of 35 files, memory consumption of glusterfs
> rose to 12 GB (after approx 14 hours), I take it that this is a
> bug I should try to track down?
>

Does it ever come down? What happens if you repeatedly rsync the same files
again? Does it OOM?

-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Monotonically increasing memory

2014-07-31 Thread Anders Blomdell
During rsync of 35 files, memory consumption of glusterfs 
rose to 12 GB (after approx 14 hours), I take it that this is a 
bug I should try to track down? 

Version is 3.7dev as of Tuesday...

/Anders

-- 
Anders Blomdell  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University  Phone:+46 46 222 4625
P.O. Box 118 Fax:  +46 46 138118
SE-221 00 Lund, Sweden

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Poll for new Weekly meeting time is tied. Need tie breaking votes. :)

2014-07-31 Thread Justin Clift
Hi all,

There's currently a tie in the poll to decide the new meeting time
for our Weekly GlusterFS Community Meeting:

  * 5:30PM / 12:00 UTC - 12 votes (26%)
  * 6:30PM / 13:00 UTC -  8 votes (17%)
  * 7:30PM / 14:00 UTC - 10 votes (22%)
  * 8:30PM / 15:00 UTC - 12 votes (26%)

If you're someone who attends or would like to attend, and haven't
yet cast a vote, please do so:

  http://goo.gl/SwsKBf

Regards and best wishes,

Justin Clift

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] glusterfs-3.5.2 released

2014-07-31 Thread Niels de Vos
On Thu, Jul 31, 2014 at 04:06:46AM -0700, Gluster Build System wrote:
> 
> 
> SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.2.tar.gz

Bugs that were marked for 3.5.2 and did not get fixed with this
release are being moved to the new 3.5.3 release tracker:
- https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.5.3

Please add 'glusterfs-3.5.3' to the 'Blocks' field of bugs to propose them for
inclusion in GlusterFS 3.5.3.

Link to the release notes of glusterfs-3.5.2:
- http://blog.nixpanic.net/2014/07/glusterfs-352-has-been-released.html
- https://github.com/gluster/glusterfs/blob/release-3.5/doc/release-notes/3.5.2.md

Remember that packages for different distributions (downstream) will get 
created after this (upstream) release.

Thanks,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] glusterfs-3.5.2 released

2014-07-31 Thread Gluster Build System


SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.2.tar.gz

This release is made off jenkins-release-86

-- Gluster Build System
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel