On Fri, 12 Dec 2014 11:32:31 +0530
Atin Mukherjee wrote:
The issue is cracked now!
http://review.gluster.org/#/c/9269/ should solve it. The commit message is
self-explanatory, but here is a brief summary of the problem and the solution:
If we look at mgmt_v3-locks.t, it tries to perform multiple syncop
transactions in parallel. Now, since these operations are attempted o
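To make "multiple syncop transactions in parallel" concrete, below is a rough, hypothetical sketch of two volume-set transactions being fired at glusterd at the same time; the volume names and the option are made up, and this is not the actual content of mgmt_v3-locks.t:

  # Hypothetical sketch: two volume-set transactions run concurrently,
  # so both go through glusterd's mgmt_v3 locking path at the same time.
  V0=patchy1   # assumed volume names
  V1=patchy2

  gluster volume set $V0 performance.write-behind off &
  pid1=$!
  gluster volume set $V1 performance.write-behind off &
  pid2=$!

  # Both transactions are expected to succeed.
  wait $pid1 || echo "volume set on $V0 failed"
  wait $pid2 || echo "volume set on $V1 failed"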
Justin,
I only got a chance to look into this yesterday, as last week I was
in RHEL 7 training. I have some interesting facts to share, which are as
follows:
When I execute the script which sets options for two different volumes
(one in the background) in a loop, after a few iterations the memory
consumption
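For reference, here is a minimal sketch of that kind of reproducer (an assumed script, not the author's exact one): it sets an option on two volumes per iteration, runs one of the two sets in the background, and samples glusterd's resident memory after each pass:

  # Assumed reproducer along the lines described above.
  V0=vol0   # assumed volume names
  V1=vol1

  for i in $(seq 1 100); do
      gluster volume set $V0 performance.write-behind off &   # background
      gluster volume set $V1 performance.write-behind on      # foreground
      wait                                                    # let the background set finish

      # Sample glusterd's resident set size (Linux; an assumed way of watching memory).
      rss_kb=$(ps -o rss= -C glusterd | head -n 1)
      echo "iteration $i: glusterd RSS ${rss_kb} kB"
  done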
On 12/03/2014 07:36 PM, Justin Clift wrote:
On Tue, 02 Dec 2014 10:05:36 +0530
Atin Mukherjee wrote:
> It's on my radar; I am in the process of analysing it. The last patch set
> was on the clean-up part of the test cases. I felt the changes could
> have solved it, but I am afraid it didn't. I tried to reproduce it
> multiple times in my local
On 12/01/2014 09:43 PM, Emmanuel Dreyfus wrote:
On Mon, Dec 01, 2014 at 09:42:09PM +0530, Ravishankar N wrote:
> I see the mgmt locks spurious-failure-fix patches referenced above have been
> merged but I am still able to hit it on master.
> In case some motivated soul wants to have a look, here is the log:
> http://build.gluster.org/job/racks
On Wed, 05 Nov 2014 14:58:06 +0530
Atin Mukherjee wrote:
> Can there be any cases where a glusterd instance may go down
> unexpectedly without a crash?
>
> [1] http://build.gluster.org/job/rackspace-regression-2GB-triggered/2319/consoleFull
Has anyone gotten back to you about this?
+ Justin
On Fri, 31 Oct 2014 10:17:28 +0530
Atin Mukherjee wrote:
> Justin,
>
> For the last three runs, I've observed the same failure. I think it's
> really time to debug this without any further delay. Can you
> please share a Rackspace machine so that I can debug this issue?
Yep, this is very doable
On Fri, 31 Oct 2014 12:47:21 +0100
Xavier Hernandez wrote:
I've filed a bug and uploaded a patch for the master and release-3.6
branches for this problem.
master:
  bug: https://bugzilla.redhat.com/show_bug.cgi?id=1159269
  patch: http://review.gluster.org/9031/
release-3.6:
  bug: https://bugzilla.redhat.com/show_bug.cgi?id=1159284
  patch: http://rev
On 10/31/2014 09:31 AM, Xavier Hernandez wrote:
Hi Atin,
On 10/31/2014 05:47 AM, Atin Mukherjee wrote:
> On 08/24/2014 11:41 PM, Justin Clift wrote:
>> I'd be kind of concerned about dropping the test case instead of it
>> being fixed. It sort of seems like these last few spurious failures
>> may be due to subtle bugs in GlusterFS (my impression :>),
On Fri, Aug 22, 2014 at 10:23 PM, Atin Mukherjee wrote:
> IIRC, we were marking the verified as +1 in case of a known spurious
> failure, can't we continue to do the same for the known spurious
> failures just to unblock the patches getting merged till the problems
> are resolved?
While it's under
IIRC, we were marking Verified as +1 in the case of a known spurious
failure; can't we continue to do the same for the known spurious
failures, just to unblock the patches getting merged until the problems
are resolved?
~Atin
On 06/12/2014 09:06 PM, Pranith Kumar Karampuri wrote:
Avra/Poornima,
Please look into this.
Patch ==> http://review.gluster.com/#/c/6483/9
Author ==> Poornima pguru...@redhat.com
Build triggered by ==> amarts
Build-url ==> http://build.gluster.org/job/regression/4847/consoleFull
Do