Re: [Gluster-users] Volume with only one node

2016-10-25 Thread Oleksandr Natalenko
Hello. 25.10.2016 10:08, Maxence Sartiaux wrote: I need to migrate an old 2-node cluster to a Proxmox cluster with replicated Gluster storage between those two (and a third arbiter node). I'd like to create a volume with a single node, migrate the data onto this volume from the old server and
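A minimal sketch of that migration path, assuming hypothetical hostnames (node1, node2, arbiter) and brick path /data/brick; the final add-brick step needs a release that supports converting an existing volume to arbiter:
===
# create the volume with a single brick on the new node and start it
gluster volume create gv0 node1:/data/brick/gv0
gluster volume start gv0

# copy the data from the old server onto a mount of gv0, then grow it:
# first to replica 2, later to replica 3 with an arbiter brick
gluster volume add-brick gv0 replica 2 node2:/data/brick/gv0
gluster volume add-brick gv0 replica 3 arbiter 1 arbiter:/data/brick/gv0
===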

Re: [Gluster-users] strange memory consumption with libgfapi

2016-10-25 Thread Oleksandr Natalenko
Hello. 25.10.2016 09:11, Pavel Cernohorsky wrote: Unfortunately it is not possible to use valgrind properly, because libgfapi seems to leak just by initializing and deinitializing (tested with different code). Use Valgrind with the Massif tool. That would definitely help.
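A minimal sketch of such a Massif run, assuming a hypothetical gfapi test binary named ./gfapi_test:
===
# Massif profiles heap usage over time instead of listing individual leaks
valgrind --tool=massif ./gfapi_test

# the run writes massif.out.<pid>; render its snapshots with ms_print
ms_print massif.out.* | less
===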

Re: [Gluster-users] Setting op-version after upgrade

2016-10-22 Thread Oleksandr Natalenko
IIRC, the latest op-version is 30712. On October 22, 2016 1:38:44 PM GMT+02:00, mabi wrote: >Hello, > >I just upgraded from GlusterFS 3.7.12 to 3.7.16 and checked the >op-version for all my volumes and found out that I am still using >op-version 30706 as you can see below: >
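For reference, a sketch of checking and raising the cluster op-version; the option name is real, the glusterd.info path assumes a default installation:
===
# the currently active op-version is recorded by glusterd
grep operating-version /var/lib/glusterd/glusterd.info

# bump it once every peer runs the upgraded binaries
gluster volume set all cluster.op-version 30712
===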

Re: [Gluster-users] Gfapi memleaks, all versions

2016-09-06 Thread Oleksandr Natalenko
Correct. On September 7, 2016 1:51:08 AM GMT+03:00, Pranith Kumar Karampuri <pkara...@redhat.com> wrote: >On Wed, Sep 7, 2016 at 12:24 AM, Oleksandr Natalenko < >oleksa...@natalenko.name> wrote: > >> Hello, >> >> thanks, but that is not what I want.

Re: [Gluster-users] [Gluster-devel] Profiling GlusterFS FUSE client with Valgrind's Massif tool

2016-09-06 Thread Oleksandr Natalenko
Created BZ for it [1]. [1] https://bugzilla.redhat.com/show_bug.cgi?id=1373630 On Tuesday, 6 September 2016 at 23:32:51 EEST Pranith Kumar Karampuri wrote: > I included you on a thread on users, let us see if he can help you out. > > On Mon, Aug 29, 2016 at 4:02 PM, Oleksandr

Re: [Gluster-users] Gfapi memleaks, all versions

2016-09-06 Thread Oleksandr Natalenko
Hello, thanks, but that is not what I want. I have no issues debugging gfapi apps, but have an issue with GlusterFS FUSE client not being handled properly by Massif tool. Valgrind+Massif does not handle all forked children properly, and I believe that happens because of some memory corruption

Re: [Gluster-users] Gfapi memleaks, all versions

2016-09-03 Thread Oleksandr Natalenko
Hello. On Saturday, 3 September 2016 at 03:06:50 EEST Pranith Kumar Karampuri wrote: > On a completely different note, I see that you used massif for doing this > analysis. Oleksandr is looking for some help in using massif to provide > more information in a different usecase. Could you help him?

Re: [Gluster-users] Shard storage suggestions

2016-07-18 Thread Oleksandr Natalenko
I'd say, like this: /.shard/d2/18/D218CD1C-4BD9-40D7-9810-86B3F7932509.1 18.07.2016 10:31, Gandalf Corvotempesta wrote: AFAIK gluster stores each shard in a single directory. With huge files this could lead to millions of small shard files in the same directory, which would certainly lead to a

Re: [Gluster-users] [Gluster-devel] Huge VSZ (VIRT) usage by glustershd on dummy node

2016-06-08 Thread Oleksandr Natalenko
. Pranith? 08.06.2016 10:06, Pranith Kumar Karampuri wrote: On Wed, Jun 8, 2016 at 12:33 PM, Oleksandr Natalenko <oleksa...@natalenko.name> wrote: Yup, I can do that, but please note that RSS does not change. Will statedump show VIRT values? Also, I'm looking at the numbers now, a
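A sketch of taking such a statedump from the self-heal daemon: SIGUSR1 makes a gluster process write its state to /var/run/gluster/ (the pgrep pattern is an assumption about how the shd process is named here):
===
# ask glustershd to dump its state, then inspect the newest dump file
kill -USR1 $(pgrep -f glustershd)
ls -lt /var/run/gluster/ | head
===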

Re: [Gluster-users] [Gluster-devel] Huge VSZ (VIRT) usage by glustershd on dummy node

2016-06-08 Thread Oleksandr Natalenko
. On Wed, Jun 8, 2016 at 12:03 PM, Oleksandr Natalenko <oleksa...@natalenko.name> wrote: Also, I've checked shd log files, and found out that for some reason shd constantly reconnects to bricks: [1] Please note that suggested fix [2] by Pranith does not help, VIRT value still grows: ==

Re: [Gluster-users] [Gluster-devel] Huge VSZ (VIRT) usage by glustershd on dummy node

2016-06-08 Thread Oleksandr Natalenko
in a similar way. On Mon, Jun 6, 2016 at 1:54 PM, Oleksandr Natalenko <oleksa...@natalenko.name> wrote: Hello. We use v3.7.11, replica 2 setup between 2 nodes + 1 dummy node for keeping volumes metadata. Now we observe huge VSZ (VIRT) usage by glustershd on dummy node: === root 1510

Re: [Gluster-users] [Gluster-devel] Huge VSZ (VIRT) usage by glustershd on dummy node

2016-06-06 Thread Oleksandr Natalenko
]' | grep 8192K | wc -l 9261 $ echo "9261*(8192+4)" | bc 75903156 === Which is roughly the 70G+ I have in VIRT. 06.06.2016 11:24, Oleksandr Natalenko wrote: Hello. We use v3.7.11, replica 2 setup between 2 nodes + 1 dummy node for keeping volumes metadata. Now we observe huge
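A sketch reconstructing that arithmetic from pmap output; the grep pattern assumes pmap's default size column, since the original command is truncated in the archive:
===
# count the 8192K anonymous mappings of the self-heal daemon
pmap $(pgrep -f glustershd) | grep -c ' 8192K '

# each mapping plus 4K, as computed in the message above
echo "9261*(8192+4)" | bc   # 75903156 KiB, i.e. the ~70G+ seen in VIRT
===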

Re: [Gluster-users] [Gluster-devel] Huge VSZ (VIRT) usage by glustershd on dummy node

2016-06-06 Thread Oleksandr Natalenko
-threaded shd works, but it could be leaking threads in a similar way. On Mon, Jun 6, 2016 at 1:54 PM, Oleksandr Natalenko <oleksa...@natalenko.name> wrote: Hello. We use v3.7.11, replica 2 setup between 2 nodes + 1 dummy node for keeping volumes metadata. Now we observe huge VSZ (VIRT)

[Gluster-users] Huge VSZ (VIRT) usage by glustershd on dummy node

2016-06-06 Thread Oleksandr Natalenko
Hello. We use v3.7.11, replica 2 setup between 2 nodes + 1 dummy node for keeping volumes metadata. Now we observe huge VSZ (VIRT) usage by glustershd on dummy node: === root 15109 0.0 13.7 76552820 535272 ? Ssl May26 2:11 /usr/sbin/glusterfs -s localhost --volfile-id

Re: [Gluster-users] [Gluster-devel] Idea: Alternate Release process

2016-05-30 Thread Oleksandr Natalenko
30.05.2016 05:08, Sankarshan Mukhopadhyay wrote: It would perhaps be worthwhile to extend this release timeline/cadence discussion into (a) End-of-Life definition and invocation (b) whether a 'long term support' (assuming that is what LTS is) is of essentially any value to users of GlusterFS.

Re: [Gluster-users] [Gluster-devel] Idea: Alternate Release process

2016-05-11 Thread Oleksandr Natalenko
My 2 cents on timings etc. Rationale: 1. deliver new features to users as fast as possible to get the feedback; 2. leave an option of using an LTS branch for those who do not want to update too often. Definition: * "stable release" — .0 tag that receives critical bugfixes and security updates for

Re: [Gluster-users] gluster for web hosting

2016-04-30 Thread Oleksandr Natalenko
On Saturday, 30 April 2016 at 19:46:30 EEST Gandalf Corvotempesta wrote: > All servers have 2x10G? just for HA or load balanced? Everything is 2×10G, for both HA (virtual chassis) and load balancing. > 3xreplica2 means you have 3 gluster servers with a replica of 2 or you have > 3 servers

Re: [Gluster-users] gluster for web hosting

2016-04-30 Thread Oleksandr Natalenko
ogy and how many sites are you > hosting? > > Any front cache like varnish? > > We are planning two clusters: one for mass hosting (in the order of ten > thousands of websites) and one for virtual machines (the webservers that > would access to the hosting cluster) > Il 30 a

Re: [Gluster-users] gluster for web hosting

2016-04-30 Thread Oleksandr Natalenko
Yup, what are you interested in? On April 30, 2016 3:39:00 PM GMT+03:00, Gandalf Corvotempesta wrote: >Anyone using gluster as storage backends for web servers (hosting >wordpress, joomla, .) >in production environment willing to share some info ?

Re: [Gluster-users] [Gluster-devel] [RFC] FUSE bridge based on GlusterFS API

2016-04-07 Thread Oleksandr Natalenko
On Thursday, 7 April 2016 at 16:12:07 EEST Jeff Darcy wrote: > "Considered wrong" might be overstating the case. It might be useful to > keep in mind that the fuse-bridge code predates GFAPI by a considerable > amount. In fact, significant parts of GFAPI were borrowed from the > existing

Re: [Gluster-users] Convert replica 2 to replica 3 arbiter 1

2016-03-29 Thread Oleksandr Natalenko
Unfortunately, one cannot convert replica 2 into replica 3 arbiter 1 yet, but I really hope this feature lands in the 3.7 branch before the 3.8 release. See the latest community meeting log for the discussion [1] (starting from 15:27:03). [1] https://meetbot.fedoraproject.org/gluster-meeting/2016-03-23/

Re: [Gluster-users] Arbiter brick size estimation

2016-03-19 Thread Oleksandr Natalenko
And for a 256b inode: (597904 - 33000) / (1066036 - 23) == 530 bytes per inode. So I still consider 1k a good estimate for an average workload. Regards, Oleksandr. On Thursday, 17 March 2016 at 09:58:14 EET Ravishankar N wrote: > Looks okay to me Oleksandr. You might want to make a github

Re: [Gluster-users] Arbiter brick size estimation

2016-03-19 Thread Oleksandr Natalenko
wrote: > On 03/05/2016 03:45 PM, Oleksandr Natalenko wrote: > > In order to estimate GlusterFS arbiter brick size, I've deployed test > > setup > > with replica 3 arbiter 1 volume within one node. Each brick is located on > > separate HDD (XFS with inode size == 5

Re: [Gluster-users] Arbiter brick size estimation

2016-03-19 Thread Oleksandr Natalenko
Ravi, I will definitely arrange the results into some short handy document and post it here. Also, @JoeJulian on IRC suggested that I perform this test on XFS bricks with inode sizes of 256b and 1k: === 22:38 <@JoeJulian> post-factum: Just wondering what 256 byte inodes might look like for
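A sketch of formatting test bricks with those inode sizes; the device names are hypothetical:
===
# XFS inode size is set at mkfs time; 256, 512 and 1024 bytes are all valid
mkfs.xfs -f -i size=256  /dev/sdb1
mkfs.xfs -f -i size=512  /dev/sdc1
mkfs.xfs -f -i size=1024 /dev/sdd1
===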

Re: [Gluster-users] Arbiter brick size estimation

2016-03-18 Thread Oleksandr Natalenko
Ravi, here is the summary: [1] Regards, Oleksandr. [1] https://gist.github.com/e8265ca07f7b19f30bb3 On Thursday, 17 March 2016 at 09:58:14 EET Ravishankar N wrote: > On 03/16/2016 10:57 PM, Oleksandr Natalenko wrote: > > OK, I've repeated the test with the following hierarchy: >

Re: [Gluster-users] Arbiter brick size estimation

2016-03-08 Thread Oleksandr Natalenko
Hi. On Tuesday, 8 March 2016 at 19:13:05 EET Ravishankar N wrote: > I think the first one is right because you still haven't used up all the > inodes (2036637 used vs. the max. permissible 3139091). But again this > is an approximation because not all files would be 899 bytes. For > example if

[Gluster-users] Arbiter brick size estimation

2016-03-05 Thread Oleksandr Natalenko
In order to estimate GlusterFS arbiter brick size, I've deployed a test setup with a replica 3 arbiter 1 volume within one node. Each brick is located on a separate HDD (XFS with inode size == 512). Using GlusterFS v3.7.6 + memleak patches. Volume options are kept at defaults. Here is the script that
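The script itself is truncated in the archive; a minimal sketch of such a population run, assuming a hypothetical FUSE mount at /mnt/test and an arbiter brick at /bricks/arbiter:
===
#!/bin/bash
# create many small files on the volume, then check what the arbiter brick costs
MNT=/mnt/test
ARB=/bricks/arbiter

for i in $(seq 1 1000000); do
    d=$(( i / 1000 ))
    mkdir -p "$MNT/$d"
    head -c 4096 /dev/urandom > "$MNT/$d/file_$i"
done

# space (KiB) and inode usage on the arbiter brick
df -k "$ARB"
df -i "$ARB"
===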

Re: [Gluster-users] [Gluster-devel] 3.7.8 client is slow

2016-02-22 Thread Oleksandr Natalenko
David, could you please cross-post your observations to the following bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1309462 ? It seems you have faced a similar issue. On Monday, 22 February 2016 at 16:46:01 EET David Robinson wrote: > The 3.7.8 FUSE client is significantly slower than

Re: [Gluster-users] [Gluster-devel] GlusterFS v3.7.8 client leaks summary — part II

2016-02-16 Thread Oleksandr Natalenko
Hmm, OK. I've rechecked 3.7.8 with the following patches (latest revisions): === Soumya Koduri (3): gfapi: Use inode_forget in case of handle objects inode: Retire the inodes from the lru list in inode_table_destroy rpc: Fix for rpc_transport_t leak === Here is Valgrind

Re: [Gluster-users] [Gluster-devel] GlusterFS v3.7.8 client leaks summary — part II

2016-02-11 Thread Oleksandr Natalenko
2016 15:37, Oleksandr Natalenko wrote: Hi, folks. Here are new test results regarding the client memory leak. I use v3.7.8 with the following patches: === Soumya Koduri (2): inode: Retire the inodes from the lru list in inode_table_destroy gfapi: Use inode_forget in case of hand

Re: [Gluster-users] [Gluster-devel] GlusterFS v3.7.8 client leaks summary — part II

2016-02-11 Thread Oleksandr Natalenko
oumya? [1] https://github.com/pfactum/xglfs [2] https://github.com/pfactum/xglfs/blob/master/xglfs_destroy.c#L30 [3] https://gist.github.com/aec72b6164a695cf2d44 11.02.2016 10:12, Oleksandr Natalenko wrote: And here goes "rsync" test results (v3.7.8 + two patches by Soumya). 2 volu

[Gluster-users] GlusterFS v3.7.8 client leaks summary — part II

2016-02-10 Thread Oleksandr Natalenko
Hi, folks. Here are new test results regarding the client memory leak. I use v3.7.8 with the following patches: === Soumya Koduri (2): inode: Retire the inodes from the lru list in inode_table_destroy gfapi: Use inode_forget in case of handle objects === Those are the only 2 not merged

Re: [Gluster-users] FUSE fuse_main

2016-02-10 Thread Oleksandr Natalenko
Also, you may take a look at my dirty GlusterFS FUSE API client that uses fuse_main(): https://github.com/pfactum/xglfs 11.02.2016 02:10, Samuel Hall wrote: Hello everyone, I am trying to find where FUSE is initialized and the fuse_main function is called by Gluster. Normally the call by

Re: [Gluster-users] [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-02 Thread Oleksandr Natalenko
jay Bellur wrote: On 01/29/2016 01:09 PM, Oleksandr Natalenko wrote: Here is an intermediate summary of current memory leaks in the FUSE client investigation. I use the GlusterFS v3.7.6 release with the following patches: === Kaleb S KEITHLEY (1): fuse: use-after-free fix in fuse-bridge, revisite

Re: [Gluster-users] [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-02 Thread Oleksandr Natalenko
thub.com/7013b493d19c8c5fffae [4] https://gist.github.com/cc38155b57e68d7e86d5 [5] https://gist.github.com/6a24000c77760a97976a [6] https://gist.github.com/74bd7a9f734c2fd21c33 On Monday, 1 February 2016 at 14:24:22 EET Soumya Koduri wrote: On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote

Re: [Gluster-users] [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Oleksandr Natalenko
: On 01/31/2016 03:05 PM, Oleksandr Natalenko wrote: Unfortunately, this patch doesn't help. RAM usage on "find" finish is ~9G. Here is statedump before drop_caches: https://gist.github.com/ fc1647de0982ab447e20 [mount/fuse.fuse - usage-type gf_common_mt_inode_ctx memusage] size

Re: [Gluster-users] [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Oleksandr Natalenko
7 [3] https://gist.github.com/7013b493d19c8c5fffae [4] https://gist.github.com/cc38155b57e68d7e86d5 [5] https://gist.github.com/6a24000c77760a97976a [6] https://gist.github.com/74bd7a9f734c2fd21c33 On Monday, 1 February 2016 at 14:24:22 EET Soumya Koduri wrote: > On 02/01/2016 01:39

Re: [Gluster-users] [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-01-31 Thread Oleksandr Natalenko
> > Hopefully with this patch the > memory leaks should disapear. > > Xavi > > On 29.01.2016 19:09, Oleksandr > > Natalenko wrote: > > Here is intermediate summary of current memory > > leaks in FUSE client > > > investigation. > > > > I use G

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2016-01-29 Thread Oleksandr Natalenko
the lru list in inode_table_destroy upcall: free the xdr* allocations === I've repeated the "rsync" test under Valgrind, and here is the Valgrind output: https://gist.github.com/f8e0151a6878cacc9b1a I see DHT-related leaks. On Monday, 25 January 2016 at 02:46:32 EET Oleksandr Natal

[Gluster-users] GlusterFS FUSE client leaks summary — part I

2016-01-29 Thread Oleksandr Natalenko
Here is an intermediate summary of current memory leaks in the FUSE client investigation. I use the GlusterFS v3.7.6 release with the following patches: === Kaleb S KEITHLEY (1): fuse: use-after-free fix in fuse-bridge, revisited Pranith Kumar K (1): mount/fuse: Fix use-after-free crash
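For context, a sketch of running such a FUSE client under Valgrind: the client is started in the foreground with -N so Valgrind can report on exit (volume and mount names follow the ps output quoted elsewhere in these threads):
===
# start the FUSE client in the foreground under Valgrind
valgrind --leak-check=full --log-file=/tmp/glusterfs-valgrind.log \
    /usr/sbin/glusterfs -N \
    --volfile-server=server.example.com --volfile-id=volume /mnt/volume

# in another shell: generate load, then unmount to let Valgrind print its report
find /mnt/volume -type f > /dev/null
umount /mnt/volume
===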

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2016-01-25 Thread Oleksandr Natalenko
6 max_size=725575836 max_num_allocs=7552489 total_allocs=90843958 [cluster/distribute.asterisk_records-dht - usage-type gf_common_mt_char memusage] size=586404954 num_allocs=7572836 max_size=586405157 max_num_allocs=7572839 total_allocs=80463096 === Ideas? On Monday, 25 January 2016 at 02:46:32 EET

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2016-01-24 Thread Oleksandr Natalenko
oks promising :) > > > Cordialement, > Mathieu CHATEAU > http://www.lotp.fr > > 2016-01-23 22:30 GMT+01:00 Oleksandr Natalenko <oleksa...@natalenko.name>: > > OK, now I'm re-performing tests with rsync + GlusterFS v3.7.6 + the > > following &g

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2016-01-24 Thread Oleksandr Natalenko
BTW, am I the only one who sees that max_size=4294965480 is almost 2^32? Could that be an integer overflow? On Sunday, 24 January 2016 at 13:23:55 EET Oleksandr Natalenko wrote: > The leak definitely remains. I did "find /mnt/volume -type d" over GlusterFS > volume, with mentioned

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2016-01-24 Thread Oleksandr Natalenko
3:00 EET Mathieu Chateau wrote: > Thanks for all your tests and times, it looks promising :) > > > Cordialement, > Mathieu CHATEAU > http://www.lotp.fr > > 2016-01-23 22:30 GMT+01:00 Oleksandr Natalenko <oleksa...@natalenko.name>: > > OK, now I'm re-pe

Re: [Gluster-users] [Gluster-devel] GlusterFS FUSE client hangs on rsyncing lots of file

2016-01-23 Thread Oleksandr Natalenko
: > On Thu, Jan 21, 2016 at 10:49 AM, Pranith Kumar Karampuri < > > pkara...@redhat.com> wrote: > > On 01/18/2016 02:28 PM, Oleksandr Natalenko wrote: > >> XFS. Server side works OK, I'm able to mount volume again. Brick is 30% > >> full. > > > >

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2016-01-23 Thread Oleksandr Natalenko
nt patches will be incorporated into 3.7.7. On Friday, 22 January 2016 at 12:53:36 EET Kaleb S. KEITHLEY wrote: > On 01/22/2016 12:43 PM, Oleksandr Natalenko wrote: > > On Friday, 22 January 2016 at 12:32:01 EET Kaleb S. KEITHLEY wrote: > >> I presume by this you mean you're not seei

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2016-01-22 Thread Oleksandr Natalenko
OK, it compiles and runs well now, but still leaks. I will try to load the volume with rsync. On Thursday, 21 January 2016 at 20:40:45 EET Kaleb KEITHLEY wrote: > On 01/21/2016 06:59 PM, Oleksandr Natalenko wrote: > > I see extra GF_FREE (node); added with two patches: > > > > ===

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2016-01-22 Thread Oleksandr Natalenko
On Friday, 22 January 2016 at 12:32:01 EET Kaleb S. KEITHLEY wrote: > I presume by this you mean you're not seeing the "kernel notifier loop > terminated" error in your logs. Correct, but only with simple traversing. I have to test under rsync. > Hmmm. My system is not leaking. Last 24 hours the

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2016-01-21 Thread Oleksandr Natalenko
its > prematurely. > > If that solves the problem, we could try to determine the cause of the > premature exit and solve it. > > Xavi > > On 20/01/16 10:08, Oleksandr Natalenko wrote: > > Yes, there are couple of messages like this in my logs too (I guess one >

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2016-01-21 Thread Oleksandr Natalenko
I see extra GF_FREE (node); added with two patches: === $ git diff HEAD~2 | gist https://gist.github.com/9524fa2054cc48278ea8 === Is that intentional? I guess I am facing a double-free issue. On Thursday, 21 January 2016 at 17:29:53 EET Kaleb KEITHLEY wrote: > On 01/20/2016 04:08 AM, Oleksandr Natale

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2016-01-21 Thread Oleksandr Natalenko
o.6 === On Thursday, 21 January 2016 at 17:29:53 EET Kaleb KEITHLEY wrote: > On 01/20/2016 04:08 AM, Oleksandr Natalenko wrote: > > Yes, there are couple of messages like this in my logs too (I guess one > > message per each remount): > > > > === > > [2016-01-18 23

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2016-01-20 Thread Oleksandr Natalenko
t sure if there could be any other error > that can cause this. > > Xavi > > On 20/01/16 00:13, Oleksandr Natalenko wrote: > > Here is another RAM usage stats and statedump of GlusterFS mount > > approaching to just another OOM: > > > > === > > root

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-19 Thread Oleksandr Natalenko
Here are more RAM usage stats and a statedump of a GlusterFS mount approaching yet another OOM: === root 32495 1.4 88.3 4943868 1697316 ? Ssl Jan13 129:18 /usr/sbin/glusterfs --volfile-server=server.example.com --volfile-id=volume /mnt/volume ===

Re: [Gluster-users] [Gluster-devel] GlusterFS FUSE client hangs on rsyncing lots of file

2016-01-18 Thread Oleksandr Natalenko
D status, and > the brick process and related thread are also in the D status. > And the brick dev disk util is 100%. > > On Sun, Jan 17, 2016 at 6:13 AM, Oleksandr Natalenko > > <oleksa...@natalenko.name> wrote: > > Wrong assumption, rsync hung again. > > > >

Re: [Gluster-users] [Gluster-devel] GlusterFS FUSE client hangs on rsyncing lots of file

2016-01-16 Thread Oleksandr Natalenko
:09:51 EET Oleksandr Natalenko wrote: > Another observation: if rsyncing is resumed after hang, rsync itself > hangs a lot faster because it does stat of already copied files. So, the > reason may be not writing itself, but massive stat on GlusterFS volume > as well. > > 15.01.201

Re: [Gluster-users] [Gluster-devel] GlusterFS FUSE client hangs on rsyncing lots of file

2016-01-16 Thread Oleksandr Natalenko
Wrong assumption, rsync hung again. On Saturday, 16 January 2016 at 22:53:04 EET Oleksandr Natalenko wrote: > One possible reason: > > cluster.lookup-optimize: on > cluster.readdir-optimize: on > > I've disabled both optimizations, and at least as of now rsync still does > its
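The two options toggled for this test, as a sketch with a hypothetical volume name:
===
gluster volume set volume cluster.lookup-optimize off
gluster volume set volume cluster.readdir-optimize off

# confirm the reconfigured options
gluster volume info volume | grep optimize
===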

Re: [Gluster-users] GlusterFS FUSE client hangs on rsyncing lots of file

2016-01-15 Thread Oleksandr Natalenko
Another observation: if rsyncing is resumed after a hang, rsync itself hangs a lot faster because it stats already copied files. So the reason may be not the writing itself, but massive stat on the GlusterFS volume as well. 15.01.2016 09:40, Oleksandr Natalenko wrote: While doing rsync over

[Gluster-users] GlusterFS FUSE client hangs on rsyncing lots of file

2016-01-14 Thread Oleksandr Natalenko
While doing rsync over millions of files from an ordinary partition to a GlusterFS volume, just after approximately the first 2 million files an rsync hang happens, and the following info appears in dmesg: === [17075038.924481] INFO: task rsync:10310 blocked for more than 120 seconds. [17075038.931948] "echo 0 >

Re: [Gluster-users] GlusterFS FUSE client hangs on rsyncing lots of file

2016-01-14 Thread Oleksandr Natalenko
Here is a similar issue described on serverfault.com: https://serverfault.com/questions/716410/rsync-crashes-machine-while-performing-sync-on-glusterfs-mounted-share I've checked GlusterFS logs with no luck — as if nothing happened. P.S. GlusterFS v3.7.6. 15.01.2016 09:40, Oleksandr Natalenko

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-13 Thread Oleksandr Natalenko
ome issues. 13.01.2016 12:56, Soumya Koduri wrote: On 01/13/2016 04:08 PM, Soumya Koduri wrote: On 01/12/2016 12:46 PM, Oleksandr Natalenko wrote: Just in case, here is Valgrind output on FUSE client with 3.7.6 + API-related patches we discussed before: https://gist.github.com/cd6605ca19

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-11 Thread Oleksandr Natalenko
Just in case, here is Valgrind output on FUSE client with 3.7.6 + API-related patches we discussed before: https://gist.github.com/cd6605ca19734c1496a4 12.01.2016 08:24, Soumya Koduri wrote: For fuse client, I tried vfs drop_caches as suggested by Vijay in an earlier mail. Though all the
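A sketch of the drop_caches check being discussed: compare the client's RSS before and after forcing the kernel to drop dentries and inodes (the pgrep pattern is an assumption about the mount's command line):
===
# FUSE client RSS before
ps -o rss= -p $(pgrep -f 'volfile-id=volume')

# drop page cache, dentries and inodes so the client receives forgets
sync
echo 3 > /proc/sys/vm/drop_caches

# FUSE client RSS after
ps -o rss= -p $(pgrep -f 'volfile-id=volume')
===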

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-11 Thread Oleksandr Natalenko
post the patch soon. Thanks for your patience! -Soumya On 01/07/2016 07:34 PM, Oleksandr Natalenko wrote: OK, I've patched GlusterFS v3.7.6 with 43570a01 and 5cffb56b (the most recent revisions) and NFS-Ganesha v2.3.0 with 8685abfc (most recent revision too). On traversing GlusterFS volume with many f

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-07 Thread Oleksandr Natalenko
OK, I've patched GlusterFS v3.7.6 with 43570a01 and 5cffb56b (the most recent revisions) and NFS-Ganesha v2.3.0 with 8685abfc (most recent revision too). On traversing a GlusterFS volume with many files in one folder via an NFS mount I get an assertion: === ganesha.nfsd: inode.c:716:

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-06 Thread Oleksandr Natalenko
Ganesha error: === ganesha.nfsd: inode.c:716: __inode_forget: Assertion `inode->nlookup >= nlookup' failed. === 06.01.2016 08:40, Soumya Koduri wrote: On 01/06/2016 03:53 AM, Oleksandr Natalenko wrote: OK, I've repeated the same traversing test with patched GlusterFS API, and here

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Oleksandr Natalenko
Correct, I used a FUSE mount. Shouldn't gfapi be used by the FUSE mount helper (/usr/bin/glusterfs)? On Tuesday, 5 January 2016 at 22:52:25 EET Soumya Koduri wrote: > On 01/05/2016 05:56 PM, Oleksandr Natalenko wrote: > > Unfortunately, both patches didn't make any difference for me. >

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Oleksandr Natalenko
OK, I've repeated the same traversing test with patched GlusterFS API, and here is the new Valgrind log: https://gist.github.com/17ecb16a11c9aed957f5 Still leaks. On Tuesday, 5 January 2016 at 22:52:25 EET Soumya Koduri wrote: > On 01/05/2016 05:56 PM, Oleksandr Natalenko wrote: > > Unfo

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Oleksandr Natalenko
the above patches (or rather Gluster) and I am currently debugging it. Thanks, Soumya On 12/25/2015 11:34 PM, Oleksandr Natalenko wrote: 1. test with Cache_Size = 256 and Entries_HWMark = 4096 Before find . -type f: root 3120 0.6 11.0 879120 208408 ? Ssl 17:39 0:00 /usr/bin/ ga

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2016-01-03 Thread Oleksandr Natalenko
> > > >> From: "Pranith Kumar Karampuri" <pkara...@redhat.com> > >> To: "Oleksandr Natalenko" <oleksa...@natalenko.name>, "Soumya Koduri" > >> <skod...@redhat.com> Cc: gluster-users@gluster.org, > >> gluster

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2016-01-03 Thread Oleksandr Natalenko
Here is another Valgrind log of a similar scenario but with drop_caches before umount: https://gist.github.com/06997ecc8c7bce83aec1 Also, I've tried to drop caches on a production VM with a GlusterFS volume mounted and memleaking for several weeks, with absolutely no effect: === root 945 0.1

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-25 Thread Oleksandr Natalenko
, server-side GlusterFS cache or server kernel page cache is the cause). There are ~1.8M files on this test volume. On Friday, 25 December 2015 at 20:28:13 EET Soumya Koduri wrote: > On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: > > Another addition: it seems to be GlusterFS AP

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-25 Thread Oleksandr Natalenko
What units is Cache_Size measured in? Bytes? 25.12.2015 16:58, Soumya Koduri wrote: On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: Another addition: it seems to be GlusterFS API library memory leak because NFS-Ganesha also consumes huge amount of memory while doing ordinary "find .

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-25 Thread Oleksandr Natalenko
. 20:28:13 EET Soumya Koduri wrote: > On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: > > Another addition: it seems to be GlusterFS API library memory leak > > because NFS-Ganesha also consumes huge amount of memory while doing > > ordinary "find . -type f" via

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-25 Thread Oleksandr Natalenko
l valgrind output: https://gist.github.com/eebd9f94ababd8130d49 One can see probable massive leaks at the end of the valgrind output, related to both GlusterFS and NFS-Ganesha code. On Friday, 25 December 2015 at 23:29:07 EET Soumya Koduri wrote: > On 12/25/2015 08:56 PM, Oleksandr Na

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-24 Thread Oleksandr Natalenko
This is still an actual issue for 3.7.6. Any suggestions? 24.09.2015 10:14, Oleksandr Natalenko wrote: In our GlusterFS deployment we've encountered something like a memory leak in the GlusterFS FUSE client. We use a replicated (×2) GlusterFS volume to store mail (exim+dovecot, maildir format). Here are inode

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-24 Thread Oleksandr Natalenko
/usr/bin/ganesha.nfsd -L /var/log/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT === 1.4G is too much for a simple stat() :(. Ideas? 24.12.2015 16:32, Oleksandr Natalenko wrote: This is still an actual issue for 3.7.6. Any suggestions? 24.09.2015 10:14, Oleksandr Natalenko wrote: In our

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2015-10-13 Thread Oleksandr Natalenko
list of the inode cache. I have sent a patch for that. http://review.gluster.org/#/c/12242/ [3] Regards, Raghavendra Bhat On Thu, Sep 24, 2015 at 1:44 PM, Oleksandr Natalenko <oleksa...@natalenko.name> wrote: I've checked statedump of volume in question and haven't found lots of iobuf as men
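The patch referenced above adds an LRU limit to the inode cache; as an assumption about what may already help here, a related existing tunable that bounds the server-side inode table is network.inode-lru-limit (hypothetical volume name and value):
===
gluster volume set volume network.inode-lru-limit 65536
===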

[Gluster-users] Memory leak in GlusterFS FUSE client

2015-09-24 Thread Oleksandr Natalenko
In our GlusterFS deployment we've encountered something like a memory leak in the GlusterFS FUSE client. We use a replicated (×2) GlusterFS volume to store mail (exim+dovecot, maildir format). Here are inode stats for both bricks and the mountpoint: === Brick 1 (Server 1): Filesystem

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2015-09-24 Thread Oleksandr Natalenko
We use a bare GlusterFS installation with no oVirt involved. 24.09.2015 10:29, Gabi C wrote: google vdsm memory leak... it's been discussed on the list last year and earlier this one...

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2015-09-24 Thread Oleksandr Natalenko
I've checked the statedump of the volume in question and haven't found lots of iobuf entries as mentioned in that bug report. However, I've noticed that there are lots of LRU records like this: === [conn.1.bound_xl./bricks/r6sdLV07_vd0_mail/mail.lru.1] gfid=c4b29310-a19d-451b-8dd1-b3ac2d86b595 nlookup=1

[Gluster-users] GlusterFS cache architecture

2015-08-25 Thread Oleksandr Natalenko
Hello. I'm trying to investigate how GlusterFS manages cache on both the server and client side, but unfortunately I cannot find any exhaustive, appropriate and up-to-date information. The situation is that we have, say, 2 GlusterFS nodes (server_a and server_b) with a replicated volume
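Client-side caching in GlusterFS is spread across several translators (io-cache, quick-read, md-cache, write-behind); a sketch of their most common knobs, with a hypothetical volume name:
===
gluster volume set volume performance.cache-size 256MB
gluster volume set volume performance.md-cache-timeout 1
gluster volume set volume performance.quick-read on
gluster volume set volume performance.write-behind on
===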