Krishnan Parthasarathi wrote:
> I would like to add your root cause analysis to the commit message before
> merging. Hope that's OK with you.
Sure, go ahead.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
Trying again for the Bitrot Discussion,
Dec 2nd, 1300 UTC / 0500 PST
Scheduled: http://bit.ly/12bAQDr
davemc
--
Dave McAllister
GlusterFS: Your simple solution to scale-out file systems
On 11/28/2014 09:44 PM, Emmanuel Dreyfus wrote:
On Fri, Nov 28, 2014 at 03:55:22PM +, Justin Clift wrote:
We're trying to get the NetBSD side of things running 100%, and
waiting on these is blocking us. ;)
And as the fixes crop up, I have a few others to share :-)
More the merrier :-).
Cur
On 11/28/2014 08:30 AM, Venky Shankar wrote:
[snip]
1. Can the bitd be one per node like self-heal-daemon and other "global"
services? I worry about creating 2 * N processes for N bricks in a node.
Maybe we can consider having one thread per volume/brick etc. in a single
bitd process to make it
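(Illustration only, not from the thread.) A minimal sketch of the
one-process, thread-per-brick idea: a single bitd-like daemon that starts
one worker thread per brick instead of forking a process per brick. The
names here (brick_ctx_t, brick_worker) are hypothetical and this is not
GlusterFS code.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* hypothetical per-brick context */
typedef struct {
        char *brick_path;
} brick_ctx_t;

/* the per-brick signing/scrubbing loop would live here */
static void *
brick_worker (void *arg)
{
        brick_ctx_t *ctx = arg;
        printf ("worker started for %s\n", ctx->brick_path);
        return NULL;
}

int
main (int argc, char **argv)
{
        pthread_t *threads;
        int        i;

        if (argc < 2) {
                fprintf (stderr, "usage: %s <brick-path> ...\n", argv[0]);
                return 1;
        }

        threads = calloc (argc - 1, sizeof (*threads));
        if (!threads)
                return 1;

        /* one thread per brick inside a single daemon process,
         * instead of 2 * N separate processes for N bricks */
        for (i = 1; i < argc; i++) {
                brick_ctx_t *ctx = calloc (1, sizeof (*ctx));
                if (!ctx)
                        return 1;
                ctx->brick_path = argv[i];
                if (pthread_create (&threads[i - 1], NULL, brick_worker, ctx))
                        return 1;
        }

        for (i = 1; i < argc; i++)
                pthread_join (threads[i - 1], NULL);

        free (threads);
        return 0;
}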
On Fri, Nov 28, 2014 at 03:55:22PM +, Justin Clift wrote:
> We're trying to get the NetBSD side of things running 100%, and
> waiting on these is blocking us. ;)
And as the fixes crop up, I have a few others to share :-)
Currently I test with:
http://review.gluster.org/8074
http://review.gluster.o
On 11/28/2014 09:25 PM, Justin Clift wrote:
On Sat, 22 Nov 2014 16:55:07 +0100
m...@netbsd.org (Emmanuel Dreyfus) wrote:
Some news on triggered NetBSD regression tests: we still have a few
tests that always fail in basic. I tweaked the regression test
launching script to skip them, so that we ca
On Sat, 22 Nov 2014 16:55:07 +0100
m...@netbsd.org (Emmanuel Dreyfus) wrote:
> Some news on triggered NetBSD regression tests: we still have a few
> tests that always fail in basic. I tweaked the regression test
> launching script to skip them, so that we can get some useful
> results until they a
On Fri, 28 Nov 2014 11:46:04 +
Andrea Tartaglia wrote:
> Hi, I ran into this problem again. After purging the .processed
> directory, everything went OK for a while. But now the .processing
> directory is filling up, which is going to lead to the same issue.
>
> After some further investigation I fo
On Thu, Nov 27, 2014 at 2:59 PM, Raghavendra Bhat wrote:
> Hi,
>
> With USS to access snapshots, we depend on the last snapshot of the
> volume (or the latest snapshot) to resolve some issues.
> Ex:
> Say there is a directory called "dir" within the root of the volume and USS
> is enabled. Now when .s
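(Illustration only, not from the thread.) A minimal sketch of "pick the
latest snapshot of the volume" as a plain C helper; the snap_info_t type
and its fields are hypothetical, not the actual GlusterFS snapshot
structures:

#include <stddef.h>
#include <time.h>

/* hypothetical snapshot record */
typedef struct {
        const char *name;
        time_t      created_at;
} snap_info_t;

/* return the most recently created snapshot, or NULL if none exist */
static const snap_info_t *
latest_snapshot (const snap_info_t *snaps, size_t count)
{
        const snap_info_t *latest = NULL;
        size_t             i;

        for (i = 0; i < count; i++) {
                if (!latest || snaps[i].created_at > latest->created_at)
                        latest = &snaps[i];
        }
        return latest;
}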
Emmanuel,
The patch can be found here: http://review.gluster.org/9212.
Let me know if it works for you. I would like to add your
root cause analysis to the commit message before merging.
Hope that's OK with you.
thanks,
kp
- Original Message -
> Emmanuel,
>
> OK, let me send out a patch
Hi, I ran into this problem again. After purging the .processed
directory, everything went OK for a while. But now the .processing
directory is filling up, which is going to lead to the same issue.
After some further investigation I found that this is happening only on
the secondary node, the primary on
Would such a workaround make sense?
diff --git a/xlators/cluster/afr/src/afr-self-heald.c b/xlators/cluster/afr/src/afr-self-heald.c
index a341015..dd7ac1a 100644
--- a/xlators/cluster/afr/src/afr-self-heald.c
+++ b/xlators/cluster/afr/src/afr-self-heald.c
@@ -547,6 +579,11 @@ afr_shd_full_sweep
Hi
By looping on tests/basic/afr/self-heald.t I can sometimes have a crash
on this assertion:
#4 0xb9d2d966 in client3_3_opendir (frame=0xbb289918, this=0xbb2bc018,
data=0xb89fe73c) at client-rpc-fops.c:4412
4412        GF_ASSERT_AND_GOTO_WITH_ERROR (this->name,
4413
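(Illustration only, not from the thread.) The backtrace points at an
assert-and-goto style macro: validate an argument, record an error, and
jump to an error label instead of continuing with a bad pointer. The
sketch below shows that general pattern; it is not the actual
GF_ASSERT_AND_GOTO_WITH_ERROR definition from GlusterFS.

#include <errno.h>
#include <stdio.h>

/* illustrative only -- NOT the real GlusterFS macro */
#define ASSERT_AND_GOTO_WITH_ERROR(name, cond, label, errvar, errval)  \
        do {                                                            \
                if (!(cond)) {                                          \
                        fprintf (stderr, "%s: assertion failed: %s\n",  \
                                 name, #cond);                          \
                        errvar = errval;                                \
                        goto label;                                     \
                }                                                       \
        } while (0)

static int
opendir_like (const char *path)
{
        int op_errno = 0;

        ASSERT_AND_GOTO_WITH_ERROR ("client", path != NULL, unwind,
                                    op_errno, EINVAL);
        /* normal opendir handling would follow here */
        return 0;

unwind:
        return -op_errno; /* caller sees the negated errno */
}

int
main (void)
{
        printf ("%d\n", opendir_like (NULL));
        return 0;
}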