Some more (bad) news on the status of 3.7.11.
I've been doing some more tests with release-3.7, and found that the
fix for solving daemons failing to start when management encryption is
enabled doesn't work in all cases.
Now I've got 2 options I can take, and would like some opinions on
which I should take.
Hi Jeff,
I've been testing release-3.7 in the lead up to tagging 3.7.11, and
found that the fix I did to allow daemons to start when management
encryption is enabled doesn't always work. The daemons fail to start
because they can't connect to glusterd to fetch the volfiles, and the
connection fails.
On 04/15/2016 01:32 PM, Kaushal M wrote:
> Some more (bad) news on the status of 3.7.11.
>
> I've been doing some more tests with release-3.7, and found that the
> fix for solving daemons failing to start when management encryption is
> enabled doesn't work in all cases.
>
> Now I've got 2 options I can take, and would like some opinions on
> which I should take.
On Fri, Apr 15, 2016 at 01:32:23PM +0530, Kaushal M wrote:
> Some more (bad) news on the status of 3.7.11.
>
> I've been doing some more tests with release-3.7, and found that the
> fix for solving daemons failing to start when management encryption is
> enabled doesn't work in all cases.
>
> Now I've got 2 options I can take, and would like some opinions on
> which I should take.
On Fri, Apr 15, 2016 at 2:36 PM, Niels de Vos wrote:
> On Fri, Apr 15, 2016 at 01:32:23PM +0530, Kaushal M wrote:
>> Some more (bad) news on the status of 3.7.11.
>>
>> I've been doing some more tests with release-3.7, and found that the
>> fix for solving daemons failing to start when management encryption is
>> enabled doesn't work in all cases.
On Fri, Apr 15, 2016 at 01:32:23PM +0530, Kaushal M wrote:
> Or,
> 2. Revert the IPv6 patch that exposed this problem
IMO the good practice when a change breaks a stable release
is to back it out, and work on a better fix on master for a later
pull up to stable.
--
Emmanuel Dreyfus
m...@netbsd.org
> I've been testing release-3.7 in the lead up to tagging 3.7.11, and
> found that the fix I did to allow daemons to start when management
> encryption is enabled doesn't always work. The daemons fail to start
> because they can't connect to glusterd to fetch the volfiles, and the
> connection fails.
Hi,
During read/write tests to a 78x(16+4) distributed disperse volume
from 50 clients, one client hangs on read/write with the following
logs:
[2016-04-14 11:11:04.728580] W [MSGID: 122056]
[ec-combine.c:866:ec_combine_check] 0-v0-disperse-6: Mismatching xdata
in answers of 'LOOKUP'
[2016-04-14
I think this is the statedump of only one brick.
We would require statedumps from all the bricks, and from the client process
(the fuse client, or the NFS server process if the volume is mounted through NFS).
Ashish
- Original Message -
From: "Serkan Çoban"
To: "Ashish Pandey"
Cc: "Gluster Users" , "Glu
Sorry for the typo, I meant the brick statedump file.
On Fri, Apr 15, 2016 at 11:41 AM, Serkan Çoban wrote:
> Hi, I reproduced the problem, the brick log file is at the link below:
> https://www.dropbox.com/s/iy09j7mm2hrsf03/bricks-02.5677.dump.1460705370.gz?dl=0
>
>
On Thu, Apr 14, 2016 at 8:07 PM, Ashish Pandey wrote:
Yes, it is only one brick in which the error appears. I can send all the other
brick dumps too.
How can I get a statedump on the fuse client? There is no gluster command there.
Here is the related brick log:
/var/log/glusterfs/bricks/bricks-02.log:[2016-04-14 11:31:25.700556] E
[inodelk.c:309:__inode_unlock_lock] 0-v0-locks: Matching lock not
found for unlock 0-9223372036854775807, by 94d29e885e7f on
0x7f037413b990
/var/log/glusterfs/bricks/bricks-02.log:[2016-04-14
Hi, I reproduced the problem, the brick log file is at the link below:
https://www.dropbox.com/s/iy09j7mm2hrsf03/bricks-02.5677.dump.1460705370.gz?dl=0
On Thu, Apr 14, 2016 at 8:07 PM, Ashish Pandey wrote:
> Hi Serkan,
>
> Could you also provide us the statedump of all the brick processes and
> clients?
>
To get the statedump of the fuse client:
1 - get the PID of the fuse mount process
2 - send it SIGUSR1: kill -USR1 <pid>
The statedump can be found in the same directory where you get the ones for the brick processes.
The following link could be helpful for future reference:
https://github.com/gluster/glusterfs/blob/master/doc/debugging/state
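Roughly, the steps above look like this on the client (just a sketch; the mount point /mnt/v0 is an assumption, and /var/run/gluster is the usual default dump directory but may differ on your build):
# 1. find the PID of the glusterfs fuse mount process for the mount point
pgrep -af 'glusterfs.*/mnt/v0'
# 2. ask that process to write a statedump (replace 12345 with the PID found above)
kill -USR1 12345
# the dump is typically written as glusterdump.<pid>.dump.<timestamp> in the statedump directory
ls -l /var/run/gluster/glusterdump.12345.dump.*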
Hi Ashish,
Sorry for the question, but do you want all brick statedumps from all
servers, or all brick dumps from one server?
The brick dumps from all servers are nearly 700MB zipped.
On Fri, Apr 15, 2016 at 2:16 PM, Ashish Pandey wrote:
>
> To get the state dump of fuse client-
> 1 - get the PID of the fuse mount process
Hi Serkan,
Could you also provide us the statedump of all the brick processes and clients?
Commands to generate statedumps for the brick processes/nfs server/quotad:
For bricks: gluster volume statedump <VOLNAME>
For the nfs server: gluster volume statedump <VOLNAME> nfs
We can find the directory where the statedump files are created
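For example (a sketch only; the volume name v0 is taken from the logs earlier in the thread, and /var/run/gluster is the usual default dump directory, which may differ on your setup):
# statedumps for all brick processes of the volume
gluster volume statedump v0
# statedump for the gluster NFS server process
gluster volume statedump v0 nfs
# statedump for quotad
gluster volume statedump v0 quotad
# the dump files should show up in the statedump directory, /var/run/gluster by default
ls -l /var/run/gluster/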
On 04/15/2016 05:29 AM, Emmanuel Dreyfus wrote:
On Fri, Apr 15, 2016 at 01:32:23PM +0530, Kaushal M wrote:
Or,
2. Revert the IPv6 patch that exposed this problem
IMO the good practice when a change breaks a stable release
is to back it out, and work on a better fix on master for a later
pull up to stable.
Hi all,
This week's status:
-Added an option to switch between the existing volume-level WORM and the file-level WORM
-Fixed the issue with the rename fop on distributed volumes
-Wrote some test cases for the current work
-Updated the design specs
Plan for next week:
-Handling the other fops
Actually it was my mistake, I overlooked the configuration you provided. It will
be huge.
I would suggest taking statedumps on all the nodes and grepping for
"BLOCKED" in the statedump files on all the nodes.
See if you can find any such line in any file and send those files. No need to
send the statedumps that have no such entries.
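Something like the following should work for that pass (a sketch; /var/run/gluster is assumed as the statedump directory):
# run on each node; lists only the statedump files that mention blocked locks
grep -l "BLOCKED" /var/run/gluster/*.dump.*
# optionally, count the blocked entries per file to see which bricks are worst hit
grep -c "BLOCKED" /var/run/gluster/*.dump.*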