Re: [Gluster-users] Gfapi memleaks, all versions

2016-11-08 Thread Prasanna Kalever
Apologies for the delay in responding; it took me a while to switch over to this thread.

As someone rightly pointed out in the discussion above, starting and
stopping a VM via libvirt (virsh) will result in at least 2
glfs_new/glfs_init/glfs_fini call sequences.
In fact, there are 3 calls involved: 2 in the libvirt context (mostly
for stat, reading headers and chown) and 1 in the qemu context (the
actual read/write IO). Since qemu is forked out and executed in its own
process memory context, its call does not incur a leak in libvirt; also,
on stopping the VM, the qemu process dies.
That is not all: if we attach 4 extra disks, the total number of glfs_*
call sequences becomes (4+1)*2 in libvirt plus (4+1)*1 in qemu space,
i.e. 15.
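
To make the accounting concrete, each of those glfs_* call sequences
amounts to one full gfapi connect/disconnect cycle, roughly as in the
minimal sketch below (the volume name "testvol" and host "server1" are
placeholders; this is not the actual libvirt/qemu code):

#include <stdio.h>
#include <glusterfs/api/glfs.h>

/* One connect/disconnect cycle, as performed per disk in each of the
 * libvirt and qemu contexts described above. */
int main(void)
{
    glfs_t *fs = glfs_new("testvol");          /* allocate the volume handle */
    if (!fs)
        return 1;

    /* point the handle at a volfile server (host/port are placeholders) */
    glfs_set_volfile_server(fs, "tcp", "server1", 24007);

    if (glfs_init(fs) != 0) {                  /* builds the whole client xlator graph */
        glfs_fini(fs);
        return 1;
    }

    /* ... stat / header reads / actual IO would happen here ... */

    glfs_fini(fs);                             /* teardown; the per-call leak shows up here */
    return 0;
}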

What has been done so far in QEMU:
I have submitted a patch to qemu that caches the glfs object, so there
is one glfs object per volume; the glfs_* call sequences are thereby
reduced from N (in the above case 4+1=5) to 1 per volume.
This improves performance by reducing the number of calls, reduces
memory consumption (each instance occupies ~300 MB VSZ), and reduces
the leak (~7-10 MB per call).
Note that this patch is already in master [1].
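
For readers who have not looked at the patch, the caching idea can be
sketched roughly as below. This is only an illustrative sketch of the
refcounting technique, not the actual qemu change; glfs_cache_entry,
cache_get and cache_put are names invented for this example:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <glusterfs/api/glfs.h>

/* Hypothetical per-volume cache entry; the real qemu patch differs in detail. */
struct glfs_cache_entry {
    char volname[128];
    glfs_t *fs;
    unsigned refcount;
    struct glfs_cache_entry *next;
};

static struct glfs_cache_entry *cache_head;

/* Return a cached glfs object for the volume, creating it on first use. */
static glfs_t *cache_get(const char *volname, const char *server)
{
    struct glfs_cache_entry *e;

    for (e = cache_head; e; e = e->next) {
        if (strcmp(e->volname, volname) == 0) {
            e->refcount++;                 /* reuse: no additional glfs_init() */
            return e->fs;
        }
    }

    e = calloc(1, sizeof(*e));
    if (!e)
        return NULL;
    snprintf(e->volname, sizeof(e->volname), "%s", volname);
    e->fs = glfs_new(volname);
    if (!e->fs || glfs_set_volfile_server(e->fs, "tcp", server, 24007) != 0 ||
        glfs_init(e->fs) != 0) {
        if (e->fs)
            glfs_fini(e->fs);
        free(e);
        return NULL;
    }
    e->refcount = 1;
    e->next = cache_head;
    cache_head = e;
    return e->fs;
}

/* Drop a reference; only the last put actually calls glfs_fini(). */
static void cache_put(const char *volname)
{
    struct glfs_cache_entry **pp, *e;

    for (pp = &cache_head; (e = *pp) != NULL; pp = &e->next) {
        if (strcmp(e->volname, volname) == 0) {
            if (--e->refcount == 0) {
                *pp = e->next;
                glfs_fini(e->fs);
                free(e);
            }
            return;
        }
    }
}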

What about libvirt, then?
Almost the same approach: I am planning to cache the connection (the
glfs object) until all the disks are initialized, and only then call
glfs_fini().
Thereby we reduce the N * 2 calls (from the above case, (4+1)*2 = 10)
to 1. Work on this change is in progress; expect it by the end of the
week, most likely.
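
Building on the cache sketch above (and reusing the hypothetical
cache_get/cache_put helpers), the intended libvirt-side flow would
roughly look like the following. This is again purely illustrative;
prepare_all_disks and the per-disk glfs_stat calls are stand-ins for
whatever per-disk work libvirt actually performs:

#include <stdio.h>
#include <sys/stat.h>
#include <glusterfs/api/glfs.h>

/* Illustrative only: one init for the volume, five disk preparations,
 * one fini, instead of an init/fini pair per disk. */
static int prepare_all_disks(const char *volname, const char *server,
                             const char *paths[], int ndisks)
{
    glfs_t *fs = cache_get(volname, server);    /* single glfs_init() */
    if (!fs)
        return -1;

    for (int i = 0; i < ndisks; i++) {
        struct stat st;
        if (glfs_stat(fs, paths[i], &st) != 0)  /* per-disk stat/header work */
            fprintf(stderr, "stat failed for %s\n", paths[i]);
    }

    cache_put(volname);                         /* single glfs_fini() at the end */
    return 0;
}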


[1] https://lists.gnu.org/archive/html/qemu-devel/2016-10/msg07087.html


--
Prasanna



On Thu, Oct 27, 2016 at 12:23 PM, Pranith Kumar Karampuri wrote:
> +Prasanna
>
> Prasanna changed qemu code to reuse the glfs object for adding multiple
> disks from same volume using refcounting. So the memory usage went down from
> 2GB to 200MB in the case he targetted. Wondering if the same can be done for
> this case too.
>
> Prasanna could you let us know if we can use refcounting even in this case.
>
>
> On Wed, Sep 7, 2016 at 10:28 AM, Oleksandr Natalenko
>  wrote:
>>
>> Correct.
>>
>> On September 7, 2016 1:51:08 AM GMT+03:00, Pranith Kumar Karampuri
>>  wrote:
>> >On Wed, Sep 7, 2016 at 12:24 AM, Oleksandr Natalenko <
>> >oleksa...@natalenko.name> wrote:
>> >
>> >> Hello,
>> >>
>> >> thanks, but that is not what I want. I have no issues debugging gfapi
>> >apps,
>> >> but have an issue with GlusterFS FUSE client not being handled
>> >properly by
>> >> Massif tool.
>> >>
>> >> Valgrind+Massif does not handle all forked children properly, and I
>> >believe
>> >> that happens because of some memory corruption in GlusterFS FUSE
>> >client.
>> >>
>> >
>> >Is this the same libc issue that we debugged and provided with the
>> >option
>> >to avoid it?
>> >
>> >
>> >>
>> >> Regards,
>> >>   Oleksandr
>> >>
>> >> On субота, 3 вересня 2016 р. 18:21:59 EEST feihu...@sina.com wrote:
>> >> >  Hello,  Oleksandr
>> >> > You can compile that simple test code posted
>> >> > here(http://www.gluster.org/pipermail/gluster-users/2016-
>> >> August/028183.html
>> >> > ). Then, run the command
>> >> > $>valgrind cmd: G_SLICE=always-malloc G_DEBUG=gc-friendly valgrind
>> >> > --tool=massif  ./glfsxmp the cmd will produce a file like
>> >> massif.out.,
>> >> >  the file is the memory leak log file , you can use ms_print tool
>> >as
>> >> below
>> >> > command $>ms_print  massif.out.
>> >> > the cmd will output the memory alloc detail.
>> >> >
>> >> > the simple test code just call glfs_init and glfs_fini 100 times to
>> >found
>> >> > the memory leak,  by my test, all xlator init and fini is the main
>> >memory
>> >> > leak function. If you can locate the simple code memory leak code,
>> >maybe,
>> >> > you can locate the leak code in fuse client.
>> >> >
>> >> > please enjoy.
>> >>
>> >>
>> >> ___
>> >> Gluster-users mailing list
>> >> Gluster-users@gluster.org
>> >> http://www.gluster.org/mailman/listinfo/gluster-users
>> >>
>>
>
>
>
> --
> Pranith

Re: [Gluster-users] Gfapi memleaks, all versions

2016-10-26 Thread Pranith Kumar Karampuri
+Prasanna

Prasanna changed the qemu code to reuse the glfs object when adding multiple
disks from the same volume, using refcounting. As a result, memory usage went
down from 2 GB to 200 MB in the case he targeted. I am wondering if the same
can be done for this case too.

Prasanna, could you let us know if we can use refcounting in this case as well?


On Wed, Sep 7, 2016 at 10:28 AM, Oleksandr Natalenko <
oleksa...@natalenko.name> wrote:

> Correct.
>
> On September 7, 2016 1:51:08 AM GMT+03:00, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
> >On Wed, Sep 7, 2016 at 12:24 AM, Oleksandr Natalenko <
> >oleksa...@natalenko.name> wrote:
> >
> >> Hello,
> >>
> >> thanks, but that is not what I want. I have no issues debugging gfapi
> >apps,
> >> but have an issue with GlusterFS FUSE client not being handled
> >properly by
> >> Massif tool.
> >>
> >> Valgrind+Massif does not handle all forked children properly, and I
> >believe
> >> that happens because of some memory corruption in GlusterFS FUSE
> >client.
> >>
> >
> >Is this the same libc issue that we debugged and provided with the
> >option
> >to avoid it?
> >
> >
> >>
> >> Regards,
> >>   Oleksandr
> >>
> >> On субота, 3 вересня 2016 р. 18:21:59 EEST feihu...@sina.com wrote:
> >> >  Hello,  Oleksandr
> >> > You can compile that simple test code posted
> >> > here(http://www.gluster.org/pipermail/gluster-users/2016-
> >> August/028183.html
> >> > ). Then, run the command
> >> > $>valgrind cmd: G_SLICE=always-malloc G_DEBUG=gc-friendly valgrind
> >> > --tool=massif  ./glfsxmp the cmd will produce a file like
> >> massif.out.,
> >> >  the file is the memory leak log file , you can use ms_print tool
> >as
> >> below
> >> > command $>ms_print  massif.out.
> >> > the cmd will output the memory alloc detail.
> >> >
> >> > the simple test code just call glfs_init and glfs_fini 100 times to
> >found
> >> > the memory leak,  by my test, all xlator init and fini is the main
> >memory
> >> > leak function. If you can locate the simple code memory leak code,
> >maybe,
> >> > you can locate the leak code in fuse client.
> >> >
> >> > please enjoy.
> >>
> >>
> >> ___
> >> Gluster-users mailing list
> >> Gluster-users@gluster.org
> >> http://www.gluster.org/mailman/listinfo/gluster-users
> >>
>
>


-- 
Pranith

Re: [Gluster-users] Gfapi memleaks, all versions

2016-09-06 Thread Oleksandr Natalenko
Correct.

On September 7, 2016 1:51:08 AM GMT+03:00, Pranith Kumar Karampuri wrote:
>On Wed, Sep 7, 2016 at 12:24 AM, Oleksandr Natalenko <
>oleksa...@natalenko.name> wrote:
>
>> Hello,
>>
>> thanks, but that is not what I want. I have no issues debugging gfapi
>apps,
>> but have an issue with GlusterFS FUSE client not being handled
>properly by
>> Massif tool.
>>
>> Valgrind+Massif does not handle all forked children properly, and I
>believe
>> that happens because of some memory corruption in GlusterFS FUSE
>client.
>>
>
>Is this the same libc issue that we debugged and provided with the
>option
>to avoid it?
>
>
>>
>> Regards,
>>   Oleksandr
>>
>> On субота, 3 вересня 2016 р. 18:21:59 EEST feihu...@sina.com wrote:
>> >  Hello,  Oleksandr
>> > You can compile that simple test code posted
>> > here(http://www.gluster.org/pipermail/gluster-users/2016-
>> August/028183.html
>> > ). Then, run the command
>> > $>valgrind cmd: G_SLICE=always-malloc G_DEBUG=gc-friendly valgrind
>> > --tool=massif  ./glfsxmp the cmd will produce a file like
>> massif.out.,
>> >  the file is the memory leak log file , you can use ms_print tool
>as
>> below
>> > command $>ms_print  massif.out.
>> > the cmd will output the memory alloc detail.
>> >
>> > the simple test code just call glfs_init and glfs_fini 100 times to
>found
>> > the memory leak,  by my test, all xlator init and fini is the main
>memory
>> > leak function. If you can locate the simple code memory leak code,
>maybe,
>> > you can locate the leak code in fuse client.
>> >
>> > please enjoy.
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>


Re: [Gluster-users] Gfapi memleaks, all versions

2016-09-06 Thread Pranith Kumar Karampuri
On Wed, Sep 7, 2016 at 12:24 AM, Oleksandr Natalenko <
oleksa...@natalenko.name> wrote:

> Hello,
>
> thanks, but that is not what I want. I have no issues debugging gfapi apps,
> but have an issue with GlusterFS FUSE client not being handled properly by
> Massif tool.
>
> Valgrind+Massif does not handle all forked children properly, and I believe
> that happens because of some memory corruption in GlusterFS FUSE client.
>

Is this the same libc issue that we debugged and for which we provided an
option to avoid it?


>
> Regards,
>   Oleksandr
>
> On субота, 3 вересня 2016 р. 18:21:59 EEST feihu...@sina.com wrote:
> >  Hello,  Oleksandr
> > You can compile that simple test code posted
> > here(http://www.gluster.org/pipermail/gluster-users/2016-
> August/028183.html
> > ). Then, run the command
> > $>valgrind cmd: G_SLICE=always-malloc G_DEBUG=gc-friendly valgrind
> > --tool=massif  ./glfsxmp the cmd will produce a file like
> massif.out.,
> >  the file is the memory leak log file , you can use ms_print tool as
> below
> > command $>ms_print  massif.out.
> > the cmd will output the memory alloc detail.
> >
> > the simple test code just call glfs_init and glfs_fini 100 times to found
> > the memory leak,  by my test, all xlator init and fini is the main memory
> > leak function. If you can locate the simple code memory leak code, maybe,
> > you can locate the leak code in fuse client.
> >
> > please enjoy.
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 
Pranith

Re: [Gluster-users] Gfapi memleaks, all versions

2016-09-06 Thread Oleksandr Natalenko
Hello,

Thanks, but that is not what I want. I have no issues debugging gfapi apps,
but I do have an issue with the GlusterFS FUSE client not being handled
properly by the Massif tool.

Valgrind+Massif does not handle all the forked children properly, and I
believe that happens because of some memory corruption in the GlusterFS FUSE
client.

Regards,
  Oleksandr

On субота, 3 вересня 2016 р. 18:21:59 EEST feihu...@sina.com wrote:
>  Hello,  Oleksandr
> You can compile that simple test code posted
> here(http://www.gluster.org/pipermail/gluster-users/2016-August/028183.html
> ). Then, run the command
> $>valgrind cmd: G_SLICE=always-malloc G_DEBUG=gc-friendly valgrind
> --tool=massif  ./glfsxmp the cmd will produce a file like massif.out., 
>  the file is the memory leak log file , you can use ms_print tool as below
> command $>ms_print  massif.out.
> the cmd will output the memory alloc detail.
> 
> the simple test code just call glfs_init and glfs_fini 100 times to found
> the memory leak,  by my test, all xlator init and fini is the main memory
> leak function. If you can locate the simple code memory leak code, maybe,
> you can locate the leak code in fuse client.
> 
> please enjoy.



Re: [Gluster-users] Gfapi memleaks, all versions

2016-09-06 Thread feihu929
 Hello,  Oleksandr
You can compile the simple test code posted here
(http://www.gluster.org/pipermail/gluster-users/2016-August/028183.html).
Then run the following command:

$> G_SLICE=always-malloc G_DEBUG=gc-friendly valgrind --tool=massif ./glfsxmp

This will produce a file named massif.out.<pid>, which contains the memory
profile. You can then print the allocation details with the ms_print tool:

$> ms_print massif.out.<pid>

The simple test code just calls glfs_init and glfs_fini 100 times to expose
the memory leak. In my tests, the xlator init and fini paths are the main
source of the leak. If you can track down the leaking code with this simple
test, you may be able to locate the leaking code in the FUSE client as well.

Please enjoy.
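
In case the linked post is unreachable: the test is conceptually just a loop
of glfs_init/glfs_fini pairs. A minimal stand-in (with a placeholder volume
name and server, not the original glfsxmp code) looks like this:

#include <stdio.h>
#include <glusterfs/api/glfs.h>

/* Repeatedly build and tear down a gfapi instance so that Valgrind/Massif
 * can attribute whatever memory glfs_fini() fails to release. */
int main(void)
{
    for (int i = 0; i < 100; i++) {
        glfs_t *fs = glfs_new("testvol");                /* placeholder volume */
        if (!fs)
            return 1;
        glfs_set_volfile_server(fs, "tcp", "localhost", 24007);
        if (glfs_init(fs) != 0) {
            fprintf(stderr, "glfs_init failed on iteration %d\n", i);
            glfs_fini(fs);
            return 1;
        }
        glfs_fini(fs);
    }
    return 0;
}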

Re: [Gluster-users] Gfapi memleaks, all versions

2016-09-03 Thread Oleksandr Natalenko
Hello.

On субота, 3 вересня 2016 р. 03:06:50 EEST Pranith Kumar Karampuri wrote:
> On a completely different note, I see that you used massif for doing this
> analysis. Oleksandr is looking for some help in using massif to provide
> more information in a different usecase. Could you help him?

Yup, that would be nice. Please let me know how you use the Massif tool with
GlusterFS; I need to use it with the FUSE client but have had no luck with it
so far.

Regards,
  Oleksandr

Re: [Gluster-users] Gfapi memleaks, all versions

2016-09-02 Thread Pranith Kumar Karampuri
On Fri, Sep 2, 2016 at 11:41 AM,  wrote:

> Hi,
>
> *Pranith*
> >There was a time when all this clean up was not necessary because
> >the mount process would die anyway. Then when gfapi came in, the process
> >wouldn't die anymore :-), so we have to fix all the shortcuts we took over
> >the years to properly fix it. So lot of work :-/. It is something that
> >needs to be done properly (multi threaded nature of the workloads make it
> a
> >bit difficult), because the previous attempts at fixing it caused the
> >process to die because of double free etc.
>
> When libgfapi used by libvirtd, the glfs_init and glfs_fini will call two
> times with start or stop virtual machine, and with other libvirt command
> like virsh domblkinfo which will visit glusterfs api, SO, as long as
> libvirtd start and stop vm, the libvirtd process will leak large memory,
> about 1.5G memory consume by 100 times call glfs_init and glfs_fini
>

Fixing it all at once is a very big effort. I have thought a little about how
to deliver smaller fixes per release so that we can address this bug over a
few releases. Fortunately, in the virt layer only client-side xlators are
loaded, and among those the performance xlators are generally disabled except
for write-behind, at least when the 'group-virt' setting is used. So that
should probably be the first set of xlators where we concentrate our efforts
to improve just this use case.
In short, write-behind, shard, dht, afr and the client xlators are probably a
good place to start for this case.

On a completely different note, I see that you used Massif for this analysis.
Oleksandr is looking for some help in using Massif to provide more information
in a different use case. Could you help him?


> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 
Pranith

Re: [Gluster-users] Gfapi memleaks, all versions

2016-09-02 Thread feihu929
Hi, Pranith

>There was a time when all this clean up was not necessary because
>the mount process would die anyway. Then when gfapi came in, the process
>wouldn't die anymore :-), so we have to fix all the shortcuts we took over
>the years to properly fix it. So lot of work :-/. It is something that
>needs to be done properly (multi threaded nature of the workloads make it a
>bit difficult), because the previous attempts at fixing it caused the
>process to die because of double free etc.

When libgfapi is used by libvirtd, glfs_init and glfs_fini are called twice
for every start or stop of a virtual machine, and also by other libvirt
commands that touch the GlusterFS API, such as virsh domblkinfo. So, as long
as libvirtd keeps starting and stopping VMs, the libvirtd process leaks a
large amount of memory: about 1.5 GB consumed by 100 calls to glfs_init and
glfs_fini.

Re: [Gluster-users] Gfapi memleaks, all versions

2016-08-26 Thread Pranith Kumar Karampuri
On Fri, Aug 26, 2016 at 3:01 PM, Piotr Rybicki  wrote:

>
>
> On 2016-08-25 at 23:22, Joe Julian wrote:
>
>> I don't think "unfortunatelly with no attraction from developers" is
>> fair. Most of the leaks that have been reported against 3.7 have been
>> fixed recently. Clearly, with 132 contributors, not all of them can, or
>> should, work on fixing bugs. New features don't necessarily interfere
>> with bug fixes.
>>
>> The solution is to file good bug reports,
>> https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS , and
>> include valgrind reports, state dumps, and logs.
>>
>> This is a community, not a company. Nobody's under an obligation to
>> prioritize your needs over the needs of others. What I *have* seen is
>> that the more effort you put in to identifying a problem, the easier it
>> is to find a developer that can help you. If you need help, ask. It's
>> not just developers that can help you. I spend countless hours on IRC
>> helping people and I don't make a dime off doing so.
>>
>> Finally, if you have a bug that's not getting any attention, feel free
>> to email the maintainer of the component you've reported a bug against.
>> Be nice. They're good people and willing to help.
>>
>
> Hello Joe.
>
> First, I didn't wish do offend anyone. If one feels this way - I'm sorry
> for that. I just wanted to get attraction to this memleak(s) issue.
>
> I really like gluster project, and all I wish is to make it better.
>
> I just filled bug report:
> https://bugzilla.redhat.com/show_bug.cgi?id=1370417


Hi Piotr,
Sorry for the delay in addressing this issue, but it is not something that is
easy to fix. The first round of fixing the leaks, done by Poornima to free up
the inode tables, took a lot of time (probably months, I could be wrong; it
was a very big effort) and touched the whole project. She also addressed the
graph deallocation part, but I think it is not enabled yet (I have CCed her on
this thread as well). We have a new feature called brick multiplexing in the
works which makes it very important to handle this leak, so IMO it will be
fixed properly as part of that feature and its stabilization.

There was a time when all this cleanup was not necessary because the mount
process would die anyway. Then gfapi came in and the process no longer dies
:-), so we have to fix all the shortcuts we took over the years. That is a lot
of work :-/. It is something that needs to be done carefully (the
multi-threaded nature of the workloads makes it a bit difficult), because
previous attempts at fixing it caused the process to die due to double frees
and the like.


>
>
> Thank You & best regards
>
> Piotr Rybicki
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 
Pranith

Re: [Gluster-users] Gfapi memleaks, all versions

2016-08-26 Thread Piotr Rybicki



On 2016-08-25 at 23:22, Joe Julian wrote:

I don't think "unfortunatelly with no attraction from developers" is
fair. Most of the leaks that have been reported against 3.7 have been
fixed recently. Clearly, with 132 contributors, not all of them can, or
should, work on fixing bugs. New features don't necessarily interfere
with bug fixes.

The solution is to file good bug reports,
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS , and
include valgrind reports, state dumps, and logs.

This is a community, not a company. Nobody's under an obligation to
prioritize your needs over the needs of others. What I *have* seen is
that the more effort you put in to identifying a problem, the easier it
is to find a developer that can help you. If you need help, ask. It's
not just developers that can help you. I spend countless hours on IRC
helping people and I don't make a dime off doing so.

Finally, if you have a bug that's not getting any attention, feel free
to email the maintainer of the component you've reported a bug against.
Be nice. They're good people and willing to help.


Hello Joe.

First, I didn't wish to offend anyone; if anyone feels that way, I'm sorry.
I just wanted to draw attention to this memleak issue.

I really like the Gluster project, and all I wish is to make it better.

I have just filed a bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1370417

Thank You & best regards
Piotr Rybicki


Re: [Gluster-users] Gfapi memleaks, all versions

2016-08-25 Thread Joe Julian
I don't think "unfortunatelly with no attraction from developers" is 
fair. Most of the leaks that have been reported against 3.7 have been 
fixed recently. Clearly, with 132 contributors, not all of them can, or 
should, work on fixing bugs. New features don't necessarily interfere 
with bug fixes.


The solution is to file good bug reports, 
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS , and 
include valgrind reports, state dumps, and logs.


This is a community, not a company. Nobody's under an obligation to 
prioritize your needs over the needs of others. What I *have* seen is 
that the more effort you put in to identifying a problem, the easier it 
is to find a developer that can help you. If you need help, ask. It's 
not just developers that can help you. I spend countless hours on IRC 
helping people and I don't make a dime off doing so.


Finally, if you have a bug that's not getting any attention, feel free 
to email the maintainer of the component you've reported a bug against. 
Be nice. They're good people and willing to help.


On 08/25/2016 12:26 PM, Piotr Rybicki wrote:



On 2016-08-24 at 08:49, feihu...@sina.com wrote:

Hello
There is a large memleak (as reported by valgrind) in all gluster 
versions, even in 3.8.3




Although I cant help You with that, i'm happy that some one else 
pointed this issue.


I'm reporting memleak issues from some time, unfortunatelly with no 
attraction from developers.


To be honest, i'd rather see theese memleak issues addressed, than 
optimisations/new features.


Best regards
Piotr Rybicki


Re: [Gluster-users] Gfapi memleaks, all versions

2016-08-25 Thread Piotr Rybicki



On 2016-08-24 at 08:49, feihu...@sina.com wrote:

Hello
There is a large memleak (as reported by valgrind) in all gluster 
versions, even in 3.8.3




Although I can't help you with that, I'm happy that someone else has pointed
out this issue.


I'm reporting memleak issues from some time, unfortunatelly with no 
attraction from developers.


To be honest, I'd rather see these memleak issues addressed than
optimisations/new features.


Best regards
Piotr Rybicki