Re: [Gluster-users] Gfapi memleaks, all versions

2016-08-26 Thread Piotr Rybicki



On 2016-08-25 at 23:22, Joe Julian wrote:

I don't think "unfortunatelly with no attraction from developers" is
fair. Most of the leaks that have been reported against 3.7 have been
fixed recently. Clearly, with 132 contributors, not all of them can, or
should, work on fixing bugs. New features don't necessarily interfere
with bug fixes.

The solution is to file good bug reports,
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS , and
include valgrind reports, state dumps, and logs.
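For example, roughly (MYVOL and the client binary are placeholders):

# valgrind report for a leaking gfapi client
valgrind --leak-check=full --show-reachable=yes ./your-gfapi-client 2> valgrind.log

# statedump of the volume's brick processes (written to /var/run/gluster by default)
gluster volume statedump MYVOL

# client and server logs live under /var/log/glusterfs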

This is a community, not a company. Nobody's under an obligation to
prioritize your needs over the needs of others. What I *have* seen is
that the more effort you put into identifying a problem, the easier it
is to find a developer that can help you. If you need help, ask. It's
not just developers that can help you. I spend countless hours on IRC
helping people and I don't make a dime off doing so.

Finally, if you have a bug that's not getting any attention, feel free
to email the maintainer of the component you've reported a bug against.
Be nice. They're good people and willing to help.


Hello Joe.

First, I didn't wish to offend anyone. If anyone feels that way, I'm sorry
for that. I just wanted to draw attention to this memleak issue.


I really like the gluster project, and all I wish is to make it better.

I have just filed a bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1370417

Thank you & best regards
Piotr Rybicki
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gfapi memleaks, all versions

2016-08-25 Thread Piotr Rybicki



On 2016-08-24 at 08:49, feihu...@sina.com wrote:

Hello
There is a large memleak (as reported by valgrind) in all gluster 
versions, even in 3.8.3




Although I can't help you with that, I'm happy that someone else has
pointed out this issue.


I've been reporting memleak issues for some time, unfortunately with no
traction from developers.


To be honest, I'd rather see these memleak issues addressed than
optimisations/new features.


Best regards
Piotr Rybicki
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Gfapi memleaks, all versions

2016-08-01 Thread Piotr Rybicki
==31396==    still reachable: 18,144 bytes in 3 blocks
==31396==         suppressed: 0 bytes in 0 blocks
==31396==
==31396== For counts of detected and suppressed errors, rerun with: -v
==31396== ERROR SUMMARY: 52 errors from 52 contexts (suppressed: 0 from 0)

Best regards
Piotr Rybicki
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Fwd: libgfapi libvirt memory leak version 3.7.8

2016-02-12 Thread Piotr Rybicki

Hi All

I have to report that there is a memory leak in the latest version of Gluster:

gluster: 3.7.8
libvirt: 1.3.1

The memory leak occurs when starting a domain (virsh start DOMAIN) which
accesses its drive via libgfapi (although the leak is much smaller than with
gluster 3.5.X).

I believe libvirt itself uses libgfapi only to check the existence of a disk.
Libvirt calls glfs_init and glfs_fini when doing this check.
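For illustration, a minimal reproducer of that glfs_init/glfs_fini cycle
(VOLNAME and SERVER are placeholders for a real volume and server) could look
roughly like this:

/* repro.c - minimal glfs_init()/glfs_fini() cycle, intended to be run
 * under valgrind. VOLNAME and SERVER are placeholders. */
#include <stdio.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    glfs_t *fs = glfs_new("VOLNAME");            /* allocate a volume handle */
    if (!fs) {
        fprintf(stderr, "glfs_new failed\n");
        return 1;
    }
    glfs_set_volfile_server(fs, "tcp", "SERVER", 24007);
    if (glfs_init(fs) != 0) {                    /* connect and fetch the volfile */
        fprintf(stderr, "glfs_init failed\n");
        glfs_fini(fs);
        return 1;
    }
    glfs_fini(fs);                               /* whatever is still held after this is what valgrind reports */
    return 0;
}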

When the drive is accessed via a file (gluster FUSE mount), there is no
memory leak when starting the domain.

my drive definition (libgfapi):

[the disk XML was stripped by the list archive; the surviving comment on the
host element read: "connection is still via tcp. Defining 'tcp' here doesn't
make any difference."]
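A representative libvirt disk stanza for libgfapi access looks roughly like
the following (an illustration only, not the original definition; SERVER_IP,
pool and FILE.img are placeholders):

<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='gluster' name='pool/FILE.img'>
    <!-- connection is still via tcp; declaring 'tcp' here makes no difference -->
    <host name='SERVER_IP' port='24007' transport='tcp'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>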
  
  
  
  


I first reported this to the libvirt developers, but they blame gluster.

valgrind details (libgfapi):

# valgrind --leak-check=full --show-reachable=yes
--child-silent-after-fork=yes libvirtd --listen 2> libvirt-gfapi.log

On the other console:
virsh start DOMAIN
...wait...
virsh shutdown DOMAIN
...wait and stop valgrind/libvirtd
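Alternatively, to take libvirtd out of the picture, the minimal reproducer
sketched above can be run under valgrind directly (assuming the glusterfs-api
pkg-config file is installed):

gcc repro.c -o repro $(pkg-config --cflags --libs glusterfs-api)
valgrind --leak-check=full --show-reachable=yes ./repro 2> repro-gfapi.log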

valgrind log:

==5767== LEAK SUMMARY:
==5767==    definitely lost: 19,666 bytes in 96 blocks
==5767==    indirectly lost: 21,194 bytes in 123 blocks
==5767==      possibly lost: 2,699,140 bytes in 68 blocks
==5767==    still reachable: 986,951 bytes in 15,038 blocks
==5767==         suppressed: 0 bytes in 0 blocks
==5767==
==5767== For counts of detected and suppressed errors, rerun with: -v
==5767== ERROR SUMMARY: 96 errors from 96 contexts (suppressed: 0 from 0)

full log:
http://195.191.233.1/libvirt-gfapi.log
http://195.191.233.1/libvirt-gfapi.log.bz2

Best regards
Piotr Rybicki
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Fwd: Re: libgfapi libvirt memory leak version 3.7.8

2016-02-12 Thread Piotr Rybicki


On 2016-02-11 at 16:02, Piotr Rybicki wrote:

Hi All

I have to report that there is a memory leak in the latest version of Gluster:

gluster: 3.7.8
libvirt: 1.3.1

The memory leak occurs when starting a domain (virsh start DOMAIN) which
accesses its drive via libgfapi (although the leak is much smaller than with
gluster 3.5.X).

I believe libvirt itself uses libgfapi only to check the existence of a disk.
Libvirt calls glfs_init and glfs_fini when doing this check.

When the drive is accessed via a file (gluster FUSE mount), there is no
memory leak when starting the domain.

my drive definition (libgfapi):

[the disk XML was stripped by the list archive; the surviving comment on the
host element read: "connection is still via tcp. Defining 'tcp' here doesn't
make any difference."]
   
   
   
   
 

I first reported this to the libvirt developers, but they blame gluster.

valgrind details (libgfapi):

# valgrind --leak-check=full --show-reachable=yes
--child-silent-after-fork=yes libvirtd --listen 2> libvirt-gfapi.log

On the other console:
virsh start DOMAIN
...wait...
virsh shutdown DOMAIN
...wait and stop valgrind/libvirtd

valgrind log:

==5767== LEAK SUMMARY:
==5767==    definitely lost: 19,666 bytes in 96 blocks
==5767==    indirectly lost: 21,194 bytes in 123 blocks
==5767==      possibly lost: 2,699,140 bytes in 68 blocks
==5767==    still reachable: 986,951 bytes in 15,038 blocks
==5767==         suppressed: 0 bytes in 0 blocks
==5767==
==5767== For counts of detected and suppressed errors, rerun with: -v
==5767== ERROR SUMMARY: 96 errors from 96 contexts (suppressed: 0 from 0)

full log:
http://195.191.233.1/libvirt-gfapi.log
http://195.191.233.1/libvirt-gfapi.log.bz2

Best regards
Piotr Rybicki


An even simpler way to see the memleak is to run valgrind on:

qemu-img info gluster://SERVER_IP:0/pool/FILE.img
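For completeness, the corresponding full command, reusing the valgrind flags
from above, would be along these lines:

valgrind --leak-check=full --show-reachable=yes qemu-img info \
  gluster://SERVER_IP:0/pool/FILE.img 2> qemu-img-gfapi.log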

==6100== LEAK SUMMARY:
==6100==    definitely lost: 19,846 bytes in 98 blocks
==6100==    indirectly lost: 2,479,205 bytes in 182 blocks
==6100==      possibly lost: 240,600 bytes in 7 blocks
==6100==    still reachable: 3,271,130 bytes in 2,931 blocks
==6100==         suppressed: 0 bytes in 0 blocks

Best regards
Piotr Rybicki
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] iostat not showing data transfer while doing read operation with libgfapi

2015-11-10 Thread Piotr Rybicki

On 2015-11-10 at 04:01, satish kondapalli wrote:

Hi,

I am running a performance test comparing FUSE vs libgfapi. I have a
single node; the client and server are running on the same node. I have an
NVMe SSD device as storage.

My volume info::

[root@sys04 ~]# gluster vol info
Volume Name: vol1
Type: Distribute
Volume ID: 9f60ceaf-3643-4325-855a-455974e36cc7
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 172.16.71.19:/mnt_nvme/brick1
Options Reconfigured:
performance.cache-size: 0
performance.write-behind: off
performance.read-ahead: off
performance.io-cache: off
performance.strict-o-direct: on


fio Job file::

[global]
direct=1
runtime=20
time_based
ioengine=gfapi
iodepth=1
volume=vol1
brick=172.16.71.19
rw=read
size=128g
bs=32k
group_reporting
numjobs=1
filename=128g.bar

While doing a sequential read test, I am not seeing any data transfer on
the device with the iostat tool. It looks like the gfapi engine is reading from
the cache, because I am reading from the same file with different block sizes.

But I disabled io-cache for my volume. Can someone tell me where fio is
reading the data from?


Hi.

It is normal not to see traffic on the ethernet interface when using the
native RDMA protocol (not TCP via IPoIB).


Try perfquery -x to see the traffic counters increase on the RDMA interface.
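Rough usage sketch, assuming the infiniband-diags package is installed (exact
counter names can vary by HCA and firmware):

# extended (64-bit) port counters; PortXmitData/PortRcvData are counted in 4-byte words
perfquery -x

# watch them change while the fio job runs
watch -n 1 'perfquery -x | grep -E "PortXmitData|PortRcvData"'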

Regards
Piotr Rybicki
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users