Re: [Gluster-devel] pthread_mutex misusage in glusterd_op_sm

2014-11-28 Thread Emmanuel Dreyfus
Krishnan Parthasarathi wrote:
> I would like to add your root cause analysis to the commit message before
> merging. Hope that's OK with you.

Sure, go ahead.

--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org

[Gluster-devel] Bitrot detection discussion, revisited

2014-11-28 Thread Dave McAllister
Trying again for the Bitrot Discussion: Dec 2nd, 1300 UTC / 0500 PST.

Scheduled: http://bit.ly/12bAQDr

davemc

--
Dave McAllister
GlusterFS: Your simple solution to scale-out file systems

Re: [Gluster-devel] NetBSD regression tests: reviews required

2014-11-28 Thread Vijay Bellur
On 11/28/2014 09:44 PM, Emmanuel Dreyfus wrote:
> On Fri, Nov 28, 2014 at 03:55:22PM +, Justin Clift wrote:
>> We're trying to get the NetBSD side of things running 100%, and
>> waiting on these is blocking us. ;)
>
> And as the fixes crop up, I have a few others to share :-)

More the merrier :-). Cur

Re: [Gluster-devel] BitRot notes

2014-11-28 Thread Vijay Bellur
On 11/28/2014 08:30 AM, Venky Shankar wrote:
[snip]

1. Can the bitd be one per node like self-heal-daemon and other "global"
services? I worry about creating 2 * N processes for N bricks in a node.
Maybe we can consider having one thread per volume/brick etc. in a single
bitd process to make it
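The point above is cut off in the archive, but the trade-off it raises (2 * N per-brick daemon processes versus a single bitd process running one worker thread per brick) can be sketched with a small, purely hypothetical pthreads example. The brick list and the worker body below are illustrative placeholders, not GlusterFS code.

/* Hypothetical sketch: one bitd-like process with a worker thread per
 * brick, instead of a separate daemon process for every brick. */
#include <pthread.h>
#include <stdio.h>

static void *
brick_worker (void *arg)
{
        const char *brick = arg;

        /* Placeholder for per-brick work (checksumming, scrubbing, ...). */
        printf ("worker running for brick %s\n", brick);
        return NULL;
}

int
main (void)
{
        const char *bricks[] = { "/bricks/b1", "/bricks/b2", "/bricks/b3" };
        pthread_t   tids[3];
        int         i;

        for (i = 0; i < 3; i++)
                pthread_create (&tids[i], NULL, brick_worker,
                                (void *) bricks[i]);
        for (i = 0; i < 3; i++)
                pthread_join (tids[i], NULL);
        return 0;
}

One design consideration either way: separate per-brick processes give better fault isolation (one crash does not take down the others), while a single threaded daemon saves memory and process count.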

Re: [Gluster-devel] NetBSD regression tests: reviews required

2014-11-28 Thread Emmanuel Dreyfus
On Fri, Nov 28, 2014 at 03:55:22PM +, Justin Clift wrote:
> We're trying to get the NetBSD side of things running 100%, and
> waiting on these is blocking us. ;)

And as the fixes crop up, I have a few others to share :-)

Currently I test with:
http://review.gluster.org/8074
http://review.gluster.o

Re: [Gluster-devel] NetBSD regression tests: reviews required

2014-11-28 Thread Vijay Bellur
On 11/28/2014 09:25 PM, Justin Clift wrote:
> On Sat, 22 Nov 2014 16:55:07 +0100 m...@netbsd.org (Emmanuel Dreyfus) wrote:
>> Some news on triggered NetBSD regression tests: we still have a few
>> tests that always fail in basic. I tweaked the regression test
>> launching script to skip them, so that we ca

Re: [Gluster-devel] NetBSD regression tests: reviews required

2014-11-28 Thread Justin Clift
On Sat, 22 Nov 2014 16:55:07 +0100 m...@netbsd.org (Emmanuel Dreyfus) wrote:
> Some news on triggered NetBSD regression tests: we still have a few
> tests that always fail in basic. I tweaked the regression test
> launching script to skip them, so that we can get some useful
> results until they a

Re: [Gluster-devel] glusterfs 3.6.0beta3 fills up inodes

2014-11-28 Thread Justin Clift
On Fri, 28 Nov 2014 11:46:04 + Andrea Tartaglia wrote:
> Hi,
>
> I ran into this problem again. After purging the .processed directory,
> everything went OK for a while. But now the .processing directory is
> filling up, which is going to lead to the same issue.
>
> After some further investigation I fo

Re: [Gluster-devel] snapshot restore and USS

2014-11-28 Thread RAGHAVENDRA TALUR
On Thu, Nov 27, 2014 at 2:59 PM, Raghavendra Bhat wrote:
> Hi,
>
> With USS to access snapshots, we depend on the last snapshot of the volume
> (or the latest snapshot) to resolve some issues.
> Ex:
> Say there is a directory called "dir" within the root of the volume and USS
> is enabled. Now when .s
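The quoted example is cut off above; for context, the access pattern it refers to is browsing snapshots through the virtual ".snaps" directory that USS exposes under directories of the volume. A rough, hypothetical illustration of that client-side view follows; the mount point and names are made up, and this sketches only the access pattern, not the resolution issue being discussed.

/* Hypothetical illustration: with USS enabled, snapshots of "dir" can be
 * listed through the virtual ".snaps" entry. Paths are made up. */
#include <stdio.h>
#include <dirent.h>

int
main (void)
{
        DIR           *dp = opendir ("/mnt/vol/dir/.snaps");
        struct dirent *de;

        if (!dp) {
                perror ("opendir");
                return 1;
        }
        /* Each entry is a snapshot name; /mnt/vol/dir/.snaps/<snap> shows
         * "dir" as it existed when that snapshot was taken. */
        while ((de = readdir (dp)) != NULL)
                printf ("snapshot: %s\n", de->d_name);
        closedir (dp);
        return 0;
}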

Re: [Gluster-devel] pthread_mutex misusage in glusterd_op_sm

2014-11-28 Thread Krishnan Parthasarathi
Emmanuel,

The patch can be found here: http://review.gluster.org/9212. Let me know if
it works for you. I would like to add your root cause analysis to the commit
message before merging. Hope that's OK with you.

thanks,
kp

- Original Message -
> Emmanuel,
>
> OK, let me send out a patch

Re: [Gluster-devel] glusterfs 3.6.0beta3 fills up inodes

2014-11-28 Thread Andrea Tartaglia
Hi,

I ran into this problem again. After purging the .processed directory,
everything went OK for a while. But now the .processing directory is filling
up, which is going to lead to the same issue.

After some further investigation I found that this is happening only on the
secondary node; the primary on

Re: [Gluster-devel] spurious error in self-heald.t

2014-11-28 Thread Emmanuel Dreyfus
Would such a workaround make sense?

diff --git a/xlators/cluster/afr/src/afr-self-heald.c b/xlators/cluster/afr/src/afr-self-heald.c
index a341015..dd7ac1a 100644
--- a/xlators/cluster/afr/src/afr-self-heald.c
+++ b/xlators/cluster/afr/src/afr-self-heald.c
@@ -547,6 +579,11 @@ afr_shd_full_sweep

[Gluster-devel] spurious error in self-heald.t

2014-11-28 Thread Emmanuel Dreyfus
Hi

By looping on tests/basic/afr/self-heald.t I can sometimes get a crash on
this assertion:

#4  0xb9d2d966 in client3_3_opendir (frame=0xbb289918, this=0xbb2bc018,
    data=0xb89fe73c) at client-rpc-fops.c:4412
4412            GF_ASSERT_AND_GOTO_WITH_ERROR (this->name,
4413
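For readers not familiar with this style of macro, the name suggests the usual "validate an argument, log the failure, and jump to the function's error label" idiom. The following is a simplified, hypothetical sketch of that idiom, not the real GF_ASSERT_AND_GOTO_WITH_ERROR definition from the GlusterFS headers.

/* Hypothetical sketch of an "assert and goto with error" idiom. This is NOT
 * the actual GlusterFS macro; it only illustrates the general pattern. */
#include <stdio.h>
#include <errno.h>

#define EXAMPLE_ASSERT_AND_GOTO_WITH_ERROR(name, cond, label, err, errval) \
        do {                                                                \
                if (!(cond)) {                                              \
                        fprintf (stderr, "%s: assertion '%s' failed\n",     \
                                 (name), #cond);                            \
                        (err) = (errval);                                   \
                        goto label;                                         \
                }                                                           \
        } while (0)

static int
example_opendir (const char *path)
{
        int op_errno = 0;

        /* Bail out to the "unwind" label if the argument is missing. */
        EXAMPLE_ASSERT_AND_GOTO_WITH_ERROR ("example", path != NULL,
                                            unwind, op_errno, EINVAL);
        printf ("would open %s\n", path);
        return 0;
unwind:
        return -op_errno;
}

int
main (void)
{
        return example_opendir (NULL) ? 1 : 0;
}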