[Gluster-devel] More news on 3.7.11

2016-04-15 Thread Kaushal M
Some more (bad) news on the status of 3.7.11. I've been doing some more tests with release-3.7, and found that the fix for solving daemons failing to start when management encryption is enabled doesn't work in all cases. Now I've got 2 options I can take, and would like some opinions on which I ...

[Gluster-devel] Should it be possible to disable own-thread for encrypted RPC?

2016-04-15 Thread Kaushal M
Hi Jeff, I've been testing release-3.7 in the lead-up to tagging 3.7.11, and found that the fix I did to allow daemons to start when management encryption is enabled doesn't always work. The daemons fail to start because they can't connect to glusterd to fetch the volfiles, and the connection ...
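
For context, a minimal sketch of the setup under which this failure shows up, assuming the documented defaults for management encryption (the secure-access marker file and the /etc/ssl/glusterfs.* TLS material); paths may differ on your distribution:

    # Enable management encryption on a node (documented default paths):
    #   /etc/ssl/glusterfs.pem  - node certificate
    #   /etc/ssl/glusterfs.key  - private key
    #   /etc/ssl/glusterfs.ca   - CA bundle
    # The presence of this marker file switches glusterd and the other
    # daemons to TLS for the management connection:
    touch /var/lib/glusterd/secure-access
    systemctl restart glusterd

    # The failing case: auxiliary daemons (shd, nfs, ...) must fetch their
    # volfiles from glusterd over this encrypted connection at startup.
    gluster volume status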

Re: [Gluster-devel] [Gluster-Maintainers] More news on 3.7.11

2016-04-15 Thread Atin Mukherjee
On 04/15/2016 01:32 PM, Kaushal M wrote: > Some more (bad) news on the status of 3.7.11. > > I've been doing some more tests with release-3.7, and found that the > fix for solving daemons failing to start when management encryption is > enabled doesn't work in all cases. > > Now I've got 2 options ...

Re: [Gluster-devel] [Gluster-Maintainers] More news on 3.7.11

2016-04-15 Thread Niels de Vos
On Fri, Apr 15, 2016 at 01:32:23PM +0530, Kaushal M wrote: > Some more (bad) news on the status of 3.7.11. > > I've been doing some more tests with release-3.7, and found that the > fix for solving daemons failing to start when management encryption is > enabled doesn't work in all cases. > > Now ...

Re: [Gluster-devel] [Gluster-Maintainers] More news on 3.7.11

2016-04-15 Thread Kaushal M
On Fri, Apr 15, 2016 at 2:36 PM, Niels de Vos wrote: > On Fri, Apr 15, 2016 at 01:32:23PM +0530, Kaushal M wrote: >> Some more (bad) news on the status of 3.7.11. >> >> I've been doing some more tests with release-3.7, and found that the >> fix for solving daemons failing to start when management encryption ...

Re: [Gluster-devel] More news on 3.7.11

2016-04-15 Thread Emmanuel Dreyfus
On Fri, Apr 15, 2016 at 01:32:23PM +0530, Kaushal M wrote: > Or, > 2. Revert the IPv6 patch that exposed this problem IMO the good practice when a change breaks a stable release is to back it out, and work on a better fix on master for a later pull-up to stable. -- Emmanuel Dreyfus m...@netbsd.org
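
A minimal git sketch of the back-out-and-refix workflow described above (the commit ids are hypothetical placeholders):

    # Back the offending change out of the stable branch:
    git checkout release-3.7
    git revert <ipv6-patch-sha>

    # Develop the better fix on master ...
    git checkout master
    # ... and later pull it up (backport) to the stable branch:
    git checkout release-3.7
    git cherry-pick -x <better-fix-sha>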

Re: [Gluster-devel] Should it be possible to disable own-thread for encrypted RPC?

2016-04-15 Thread Jeff Darcy
> I've been testing release-3.7 in the lead-up to tagging 3.7.11, and > found that the fix I did to allow daemons to start when management > encryption is enabled doesn't always work. The daemons fail to start > because they can't connect to glusterd to fetch the volfiles, and the > connection ...

[Gluster-devel] Assertion failed: ec_get_inode_size

2016-04-15 Thread Serkan Çoban
Hi, during read/write tests to a 78x(16+4) distributed disperse volume from 50 clients, one client hangs on read/write with the following logs: [2016-04-14 11:11:04.728580] W [MSGID: 122056] [ec-combine.c:866:ec_combine_check] 0-v0-disperse-6: Mismatching xdata in answers of 'LOOKUP' [2016-04-14 ...

Re: [Gluster-devel] [Gluster-users] Assertion failed: ec_get_inode_size

2016-04-15 Thread Ashish Pandey
I think this is the statedump of only one brick. We would require statedumps from all the bricks, and from the client process (the fuse process, or the nfs process if it is mounted through nfs). Ashish - Original Message - From: "Serkan Çoban" To: "Ashish Pandey" Cc: "Gluster Users", "Gluster ...

Re: [Gluster-devel] [Gluster-users] Assertion failed: ec_get_inode_size

2016-04-15 Thread Serkan Çoban
Sorry for the typo: brick statedump file. On Fri, Apr 15, 2016 at 11:41 AM, Serkan Çoban wrote: > Hi, I reproduced the problem; the brick log file is at the link below: > https://www.dropbox.com/s/iy09j7mm2hrsf03/bricks-02.5677.dump.1460705370.gz?dl=0 > > > On Thu, Apr 14, 2016 at 8:07 PM, Ashish Pandey wrote ...

Re: [Gluster-devel] [Gluster-users] Assertion failed: ec_get_inode_size

2016-04-15 Thread Serkan Çoban
Yes, it is only one brick on which the error appears. I can send all the other brick dumps too. How can I get a statedump on the fuse client? There is no gluster command there.

Re: [Gluster-devel] Assertion failed: ec_get_inode_size

2016-04-15 Thread Serkan Çoban
Here is the related brick log:
/var/log/glusterfs/bricks/bricks-02.log:[2016-04-14 11:31:25.700556] E [inodelk.c:309:__inode_unlock_lock] 0-v0-locks: Matching lock not found for unlock 0-9223372036854775807, by 94d29e885e7f on 0x7f037413b990
/var/log/glusterfs/bricks/bricks-02.log:[2016-04-14 ...

Re: [Gluster-devel] [Gluster-users] Assertion failed: ec_get_inode_size

2016-04-15 Thread Serkan Çoban
Hi, I reproduced the problem; the brick log file is at the link below: https://www.dropbox.com/s/iy09j7mm2hrsf03/bricks-02.5677.dump.1460705370.gz?dl=0 On Thu, Apr 14, 2016 at 8:07 PM, Ashish Pandey wrote: > Hi Serkan, > > Could you also provide us the statedump of all the brick processes and > clients? > ...

Re: [Gluster-devel] [Gluster-users] Assertion failed: ec_get_inode_size

2016-04-15 Thread Ashish Pandey
To get the statedump of a fuse client: 1) get the PID of the fuse mount process, 2) kill -USR1 <PID>. The statedump can be found in the same directory where you get it for a brick process. The following link could be helpful for future reference: https://github.com/gluster/glusterfs/blob/master/doc/debugging/state ...
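
As a sketch, the steps above on a client node (the PID comes from step 1; /var/run/gluster is the usual default dump directory, adjust if yours differs):

    pgrep -af glusterfs           # 1) find the fuse mount process and its PID
    kill -USR1 <PID>              # 2) ask it to write a statedump
    ls /var/run/gluster/*.dump.*  # the dumps land here by default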

Re: [Gluster-devel] [Gluster-users] Assertion failed: ec_get_inode_size

2016-04-15 Thread Serkan Çoban
Hi Ashish, sorry for the question, but do you want all brick statedumps from all servers, or all brick dumps from one server? All servers' brick dumps are nearly 700MB zipped. On Fri, Apr 15, 2016 at 2:16 PM, Ashish Pandey wrote: > > To get the statedump of a fuse client: > 1) get the PID of the fuse ...

Re: [Gluster-devel] [Gluster-users] Assertion failed: ec_get_inode_size

2016-04-15 Thread Ashish Pandey
Hi Serkan, could you also provide us the statedumps of all the brick processes and clients? Commands to generate statedumps for brick processes/nfs server/quotad: For bricks: gluster volume statedump <VOLNAME> For the nfs server: gluster volume statedump <VOLNAME> nfs We can find the directory where the statedump ...
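
Spelled out with a placeholder volume name (the <VOLNAME> placeholder is mine; dumps typically land under /var/run/gluster on each node):

    gluster volume statedump <VOLNAME>        # all brick processes of the volume
    gluster volume statedump <VOLNAME> nfs    # the NFS server process
    ls /var/run/gluster/                      # default statedump location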

Re: [Gluster-devel] [Gluster-Maintainers] More news on 3.7.11

2016-04-15 Thread Vijay Bellur
On 04/15/2016 05:29 AM, Emmanuel Dreyfus wrote: On Fri, Apr 15, 2016 at 01:32:23PM +0530, Kaushal M wrote: Or, 2. Revert the IPv6 patch that exposed this problem IMO the good practice when a change breaks a stable release is to back it out, and work on a better fix on master for a later pull-up to stable. ...

[Gluster-devel] WORM/Retention Feature - 15/04/2016

2016-04-15 Thread Karthik Subrahmanya
Hi all, This week's status:
-Added an option to switch between the existing volume-level WORM and the file-level WORM (see the sketch after this list)
-Fixed the issue with the rename fop on the distributed volume
-Wrote some test cases for the current work
-Updated the design specs
Plan for next week:
-Handling the other fops ...
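
As a hedged sketch of what the interface could look like: the volume-level switch below is the existing, documented option; the file-level option name is an assumption based on this work in progress, not a final interface:

    # Existing, documented volume-level WORM toggle:
    gluster volume set <VOLNAME> features.worm enable
    # Assumed name for the new file-level mode (work in progress):
    gluster volume set <VOLNAME> features.worm-file-level enable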

Re: [Gluster-devel] [Gluster-users] Assertion failed: ec_get_inode_size

2016-04-15 Thread Ashish Pandey
Actually, it was my mistake; I overlooked the configuration you provided. It will be huge. I would suggest taking statedumps on all the nodes and grepping for "BLOCKED" in the statedump files on all the nodes. See if you can find any such line in any file, and send those files. No need to send ...
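
A small sketch of that triage, assuming the default /var/run/gluster dump directory; run it on every node:

    gluster volume statedump <VOLNAME>             # regenerate dumps on this node
    grep -l "BLOCKED" /var/run/gluster/*.dump.*    # only these files need sending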