Re: [Gluster-devel] Volume usage mismatch problem

2016-02-01 Thread Manikandan Selvaganesh
Hi, please find my comments inline.
> Hi all,
>
> A 'Quota' problem occurs with Gluster 3.7.6 in the following test case
> (it does not occur if the volume quota is not enabled).
>
> A volume usage mismatch occurs when using glusterfs-3.7.6 on ZFS.
>
> Can you help with the following problems?
>
> 1. z
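
A minimal way to reproduce the comparison being described, assuming a volume
named "myvol" mounted at /mnt/myvol (both names are illustrative, not from
the thread):

    # enable quota and set a limit on the volume root
    gluster volume quota myvol enable
    gluster volume quota myvol limit-usage / 100GB

    # what quota accounting reports...
    gluster volume quota myvol list

    # ...versus what the filesystem itself reports
    df -h /mnt/myvol
    du -sh /mnt/myvol

On a ZFS backend, features such as compression can make du/df block counts
diverge from quota's byte accounting, which is one plausible source of such
a mismatch.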

[Gluster-devel] regarding GF_CONTENT_KEY and dht2 - perf with small files

2016-02-01 Thread Pranith Kumar Karampuri
Hi,

Background: the quick-read and open-behind xlators were developed to help
small-file read workloads (an Apache web server, tar, etc.) by fetching the
file's data in the lookup FOP itself. What happens is: when a lookup FOP is
executed, GF_CONTENT_KEY is added to xdata with the max length, and posix x
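
Both xlators can be toggled per volume; a quick sketch for measuring
small-file reads with and without them (the volume name "myvol" is
illustrative):

    # baseline: read small files with the two xlators disabled
    gluster volume set myvol performance.quick-read off
    gluster volume set myvol performance.open-behind off

    # then re-enable them; quick-read can serve small reads from the
    # file content fetched during the lookup FOP itself
    gluster volume set myvol performance.quick-read on
    gluster volume set myvol performance.open-behind on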

Re: [Gluster-devel] Losing connection to bricks, Gluster processes restarting

2016-02-01 Thread Atin Mukherjee
Initially I suspected server-quorum to be the culprit, but that is not the
case. By any chance, is your network flaky?

On 02/01/2016 10:33 PM, Logan Barfield wrote:
> Volume Name: data02
> Type: Replicate
> Volume ID: 1c8928b1-f49e-4950-be06-0f8ce5adf870
> Status: Started
> Number of Bricks: 1
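
Two quick checks along these lines, using the peer address from the volume
info quoted below and the stock log locations (adjust paths to your install):

    # look for packet loss or latency spikes between the replica peers
    ping -c 60 10.1.1.11

    # look for disconnect messages in the brick and glusterd logs
    grep -i disconnect /var/log/glusterfs/bricks/*.log
    grep -i disconnect /var/log/glusterfs/etc-glusterfs-glusterd.vol.log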

Re: [Gluster-devel] [Gluster-users] glustershd fail to start after 3.7.7 upgrade

2016-02-01 Thread Vijay Bellur
On 02/01/2016 10:24 PM, Pranith Kumar Karampuri wrote:
On 02/01/2016 10:16 PM, Joe Julian wrote:
WTF?

        if (!xattrs_list) {
                ret = -EINVAL;
                gf_msg (this->name, GF_LOG_ERROR, -ret,
                        AFR_MSG_NO_CHANGELOG,
                        "Unable to fetch afr pending c

Re: [Gluster-devel] [Gluster-users] glustershd fail to start after 3.7.7 upgrade

2016-02-01 Thread Pranith Kumar Karampuri
On 02/01/2016 10:16 PM, Joe Julian wrote:
WTF?

        if (!xattrs_list) {
                ret = -EINVAL;
                gf_msg (this->name, GF_LOG_ERROR, -ret,
                        AFR_MSG_NO_CHANGELOG,
                        "Unable to fetch afr pending changelogs. Is op-version"
                        " >

Re: [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Oleksandr Natalenko
Please take a look at the updated test results.

Test: find /mnt/volume -type d

RAM usage after "find" finishes: ~10.8G (see "ps" output [1]).
Statedump after "find" finishes: [2].
Then I did drop_caches, and RAM usage dropped to ~4.7G [3].
Statedump after drop_caches: [4].
Here is the diff between s
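
For reference, statedumps like those mentioned above are typically produced
and inspected as follows (the pgrep pattern and dump directory are common
defaults, not taken from this thread):

    # ask the FUSE client to write a statedump (by default under /var/run/gluster)
    kill -USR1 "$(pgrep -f 'glusterfs.*/mnt/volume')"

    # the inode table sizes are the numbers to compare across dumps
    grep -E 'itable\.(active_size|lru_size)' /var/run/gluster/glusterdump.*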

Re: [Gluster-devel] Losing connection to bricks, Gluster processes restarting

2016-02-01 Thread Logan Barfield
Volume Name: data02
Type: Replicate
Volume ID: 1c8928b1-f49e-4950-be06-0f8ce5adf870
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster-stor01:/export/data/brick02 <-- 10.1.1.10
Brick2: gluster-stor02:/export/data/brick02 <-- 10.1.1.11
Options Reconfigured: s

Re: [Gluster-devel] [Gluster-users] glustershd fail to start after 3.7.7 upgrade

2016-02-01 Thread Joe Julian
WTF?

        if (!xattrs_list) {
                ret = -EINVAL;
                gf_msg (this->name, GF_LOG_ERROR, -ret,
                        AFR_MSG_NO_CHANGELOG,
                        "Unable to fetch afr pending changelogs. Is op-version"
                        " >= 30707?");
                goto out;
        }
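
The log message is asking whether the cluster op-version was raised after
the upgrade; a minimal check-and-bump sketch (paths are the stock defaults):

    # check the operating version recorded on each server
    grep operating-version /var/lib/glusterd/glusterd.info

    # once every node runs 3.7.7, raise the cluster op-version
    gluster volume set all cluster.op-version 30707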

Re: [Gluster-devel] [Gluster-users] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Soumya Koduri
On 02/01/2016 02:48 PM, Xavier Hernandez wrote:
Hi,

On 01/02/16 09:54, Soumya Koduri wrote:
On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote:
Wait. It seems to be my bad. Before unmounting I do drop_caches (2), and the
glusterfs process's CPU usage goes to 100% for a while. I haven't waited fo

Re: [Gluster-devel] [Gluster-users] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Xavier Hernandez
Hi Oleksandr,

On 01/02/16 09:09, Oleksandr Natalenko wrote:
Wait. It seems to be my bad. Before unmounting I do drop_caches (2), and the
glusterfs process's CPU usage goes to 100% for a while.

That's the expected behavior after applying the nlookup count patch. As it's
configured now, gluster won'

Re: [Gluster-devel] [Gluster-users] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Xavier Hernandez
Hi,

On 01/02/16 09:54, Soumya Koduri wrote:
On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote:
Wait. It seems to be my bad. Before unmounting I do drop_caches (2), and the
glusterfs process's CPU usage goes to 100% for a while. I hadn't waited for
it to drop to 0%, and instead performed the unmount. It

Re: [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Soumya Koduri
On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote:
Wait. It seems to be my bad. Before unmounting I do drop_caches (2), and the
glusterfs process's CPU usage goes to 100% for a while. I hadn't waited for
it to drop to 0%, and instead performed the unmount. It seems glusterfs is
purging inodes, and that's

Re: [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Oleksandr Natalenko
Wait. It seems to be my bad. Before unmounting I do drop_caches (2), and the
glusterfs process's CPU usage goes to 100% for a while. I hadn't waited for
it to drop to 0%, and instead performed the unmount. It seems glusterfs is
purging inodes, and that's why it uses 100% of the CPU. I've re-tested it,
waiting
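
A sketch of the re-test procedure being described, assuming a FUSE mount at
/mnt/volume (the mount point and wait loop are illustrative):

    # drop kernel caches; the glusterfs client then spins purging its inode table
    sync
    echo 2 > /proc/sys/vm/drop_caches

    # wait until the client's CPU usage settles near 0% before unmounting
    PID=$(pgrep -f 'glusterfs.*/mnt/volume')
    while [ "$(ps -o %cpu= -p "$PID" | tr -d ' ' | cut -d. -f1)" -gt 0 ]; do
            sleep 5
    done
    umount /mnt/volume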