Re: [Gluster-devel] Moratorium on new patch acceptance

2015-05-20 Thread Vijaikumar M



On Tuesday 19 May 2015 09:50 PM, Shyam wrote:

On 05/19/2015 11:23 AM, Vijaikumar M wrote:



On Tuesday 19 May 2015 08:36 PM, Shyam wrote:

On 05/19/2015 08:10 AM, Raghavendra G wrote:

After discussion with Vijaykumar Mallikarjuna, and taking the other inputs in
this thread into account, we propose that all quota tests comply with the
following criteria:

* always use dd with oflag=append (to make sure there are no parallel
writes) and conv=fdatasync (to make sure errors, if any, are delivered to
the application; turning off flush-behind is optional since fdatasync acts
as a barrier)

OR

* turn off write-behind in the NFS client and on the glusterfs server.

What do you people think is a better test scenario?

Also, we don't have confirmation on the RCA that parallel writes are
indeed the culprits. We are trying to reproduce the issue locally.
@Shyam, it would be helpful if you can confirm the hypothesis :).


Ummm... I thought we had acknowledged that quota checks are done during the
WIND and usage is updated during the UNWIND, that io-threads has IOs in
flight (as well as possibly IOs queued in io-threads), and that the writes
in the case mentioned are 256K each. Put together, in my head this forms a
good RCA: we write more than needed because of the in-flight IOs on the
brick, and the resolution is to control the in-flight IOs from the
application.

In terms of actual proof, we would need to instrument the code and
check. When you say it does not fail for you, does the file stop growing
once quota is reached, or does it end up at a random size greater than the
quota? That itself may explain or point to the RCA.

The basic things needed from an application are:
- Sync IOs, so that there aren't too many in-flight IOs and the
application waits for each IO to complete
- Based on the tests below, if we keep the dd block size low and use
oflag=sync we can achieve this; with higher block sizes we cannot

Test results:
1) noac:
  - NFS sends a COMMIT (internally translates to a flush) post each IO
request (NFS WRITES are still with the UNSTABLE flag)
  - Ensures prior IO is complete before next IO request is sent (due
to waiting on the COMMIT)
  - Fails if the IO size is large, i.e. in the test case being discussed I
changed the failing dd line to "TEST ! dd if=/dev/zero
of=$N0/$mydir/newfile_2 *bs=10M* count=1 conv=fdatasync" and this still
fails at times, as the writes are sent as 256k chunks to the
server and we see the same behavior
  - noac + performance.nfs.flush-behind: off +
performance.flush-behind: off + performance.nfs.strict-write-ordering:
on + performance.strict-write-ordering: on +
performance.nfs.write-behind: off + performance.write-behind: off
- Still see similar failures, i.e. at times the 10MB file is created
successfully by the modified dd command above

Overall, the switch works, but not always. If we are to use this
variant then we need to mandate that quota tests using dd do not try to
go beyond the configured quota limit in a single dd IO.

2) oflag=sync:
  - Exactly the same behavior as above.

3) Added everything (and possibly the kitchen sink) to the test case, as
attached, and still see failures:
  - Yes, I have made the test fail intentionally (of sorts) by using
3M per dd IO and 2 IOs to go beyond the quota limit.
  - The intention is to demonstrate that we still get parallel IOs
from NFS client
  - The test would work if we reduce the block size per IO (reliability
is a border condition here, and we would need specific rules, like block
size and how many blocks, before we state the quota is exceeded, etc.)
  - The test would work if we just go beyond the quota, and then check
that a separate dd instance is *not* able to exceed the quota, which is
why I put up that patch.
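
For illustration, a minimal sketch of a quota test along those lines (the
quota limit, file names and block counts are placeholders of mine, and
$CLI/$V0/$N0/$mydir are the usual test-framework variables):

  # Set a 5MB limit on the test directory (placeholder values).
  TEST $CLI volume quota $V0 enable
  TEST $CLI volume quota $V0 limit-usage /$mydir 5MB

  # Fill up to (and possibly past) the limit; oflag=sync keeps at most one
  # write in flight, and the exit status is deliberately ignored because we
  # do not care exactly where this dd gets stopped.
  dd if=/dev/zero of=$N0/$mydir/fill bs=256k count=24 oflag=sync || true

  # Enforcement is then checked with a separate dd that must fail with EDQUOT.
  TEST ! dd if=/dev/zero of=$N0/$mydir/beyond bs=256k count=4 oflag=sync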

What next?


Hi Shyam,

I tried running the test with the dd option 'oflag=append' and didn't see
the issue. Can you please try this option and see if it works?


Did that (in the attached script that I sent) and it still failed.

Please note:
- This dd command passes (or fails with EDQUOT)
  - dd if=/dev/zero of=$N0/$mydir/newfile_2 bs=512 count=10240 
oflag=append oflag=sync conv=fdatasync
  - We can even drop append and fdatasync, as sync sends a commit per 
block written, which is better for the test and for quota enforcement, 
whereas fdatasync sends one at the end and sometimes fails (with larger 
block sizes, say 1M)

  - We can change bs to [512 - 256k]

Here you are trying to write 5M of data, which is always written, so the 
test will fail.




- This dd command fails (or writes all the data)
  - dd if=/dev/zero of=$N0/$mydir/newfile_2 bs=3M count=2 oflag=append 
oflag=sync conv=fdatasync


Here you are trying to write 6M of data (exceeding the quota limit by only 
1M), so the test can fail. With count=3, the test passes.




The reasoning is that when we write a larger block size, NFS sends the 
write in multiple 256k chunks and then sends the commit before the 
next block. As a result, if we exceed quota in the *last block* that we 
are writing, we *may* fail. If we exceed quota in the last-but-one 
block we wil

[Gluster-devel] Geo-rep: xattrs and acls syncing using tar over ssh!!

2015-05-20 Thread Kotresh Hiremath Ravishankar
Hi,

Geo-replication was recently enhanced to sync xattrs and acls as well,
from the master gluster volume to the slave. Geo-rep syncs data in two
modes, namely "rsync" and "tar over ssh". Syncing acls and xattrs via
rsync works fine, but with tar over ssh there are a couple of issues.

Issue 1: xattrs don't sync with tar over ssh.

Reason: untar doesn't respect the '--overwrite' option when used along
with '--xattrs'. Instead it unlinks the file if it already exists on the
destination and re-creates it afresh. But all entry operations are
currently banned on the aux-gfid-mount, as they may lead to a
gfid-mismatch if I am correct, and hence it fails with EPERM. This happens
only when some xattr is set on a file in the master volume.
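
For what it's worth, a rough way to see Issue 1 outside geo-rep (the paths,
host and xattr below are hypothetical, and this only approximates what the
tar-over-ssh mode does):

  # Set an xattr on an existing file on the master, then sync it into the
  # slave's aux-gfid mount the way tar-over-ssh does.
  setfattr -n user.foo -v bar /mnt/master/file1
  tar -cf - --xattrs -C /mnt/master file1 | \
      ssh slave-host "tar -xf - --xattrs --overwrite -C /mnt/slave-gfid-mount"
  # With --xattrs, untar ignores --overwrite and instead unlinks the existing
  # file and re-creates it; that entry operation is rejected on the
  # aux-gfid mount, so extraction fails with EPERM.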


Issue 2: acls on directories do not sync with tar over ssh.

Reason: tar tries to opendir ".gfid/", which is not supported
by the gfid-access translator, as readdirp can't be handled on virtual
inodes, and hence it fails with ENOTSUP, whereas it syncs fine for files.
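
A hypothetical illustration of the directory case (the gfid is a placeholder):

  # Listing a directory through the virtual .gfid/ namespace needs
  # opendir/readdirp on a virtual inode, which the gfid-access translator
  # does not handle, so it fails with ENOTSUP.
  ls /mnt/slave-gfid-mount/.gfid/<dir-gfid>/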

Please let me know your comments on the above issues. Is there anything
that can be done in the gfid-access translator to fix them without breaking
things?


Thanks and Regards,
Kotresh H R

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Moratorium on new patch acceptance

2015-05-20 Thread Raghavendra G
On Tue, May 19, 2015 at 8:36 PM, Shyam wrote:

> On 05/19/2015 08:10 AM, Raghavendra G wrote:
>
>> After discussion with Vijaykumar mallikarjuna and other inputs in this
>> thread, we are proposing all quota tests to comply to following criteria:
>>
>> * use dd always with oflag=append (to make sure there are no parallel
>> writes) and conv=fdatasync (to make sure errors, if any are delivered to
>> application. Turning off flush-behind is optional since fdatasync acts
>> as a barrier)
>>
>> OR
>>
>> * turn off write-behind in nfs client and glusterfs server.
>>
>> What do you people think is a better test scenario?
>>
>> Also, we don't have confirmation on the RCA that parallel writes are
>> indeed the culprits. We are trying to reproduce the issue locally.
>> @Shyam, it would be helpful if you can confirm the hypothesis :).
>>
>
> Ummm... I thought we acknowledge that quota checks are done during the
> WIND and updated during UNWIND, and we have io threads doing in flight IOs
> (as well as possible IOs in io threads queue) and we have 256K writes in
> the case mentioned. Put together, in my head this forms a good RCA that we
> write more than needed due to the in flight IOs on the brick. We need to
> control the in flight IOs as a resolution for this from the application.
>
> In terms of actual proof, we would need to instrument the code and check.
> When you say it does not fail for you, does the file stop once quota is
> reached or is a random size greater than quota? Which itself may explain or
> point to the RCA.
>
> The basic thing needed from an application is,
> - Sync IOs, so that there aren't too many in flight IOs and the
> application waits for each IO to complete
> - Based on tests below if we keep block size in dd lower and use
> oflag=sync we can achieve the same, if we use higher block sizes we cannot
>
> Test results:
> 1) noac:
>   - NFS sends a COMMIT (internally translates to a flush) post each IO
> request (NFS WRITES are still with the UNSTABLE flag)
>   - Ensures prior IO is complete before next IO request is sent (due to
> waiting on the COMMIT)
>   - Fails if IO size is large, i.e in the test case being discussed I
> changed the dd line that was failing as "TEST ! dd if=/dev/zero
> of=$N0/$mydir/newfile_2 *bs=10M* count=1 conv=fdatasync" and this fails at
> times, as the writes here are sent as 256k chunks to the server and we
> still see the same behavior
>   - noac + performance.nfs.flush-behind: off + performance.flush-behind:
> off + performance.nfs.strict-write-ordering: on +
> performance.strict-write-ordering: on + performance.nfs.write-behind: off +
> performance.write-behind: off
> - Still see similar failures, i.e at times 10MB file is created
> successfully in the modified dd command above
>
> Overall, the switch works, but not always. If we are to use this variant
> then we need to announce that all quota tests using dd not try to go beyond
> the quota limit set in a single IO from dd.
>
> 2) oflag=sync:
>   - Exactly the same behavior as above.
>
> 3) Added all (and possibly the kitches sink) to the test case, as
> attached, and still see failures,
>   - Yes, I have made the test fail intentionally (of sorts) by using 3M
> per dd IO and 2 IOs to go beyond the quota limit.
>   - The intention is to demonstrate that we still get parallel IOs from
> NFS client
>   - The test would work if we reduce the block size per IO (reliably is a
> border condition here, and we need specific rules like block size and how
> many blocks before we state quota is exceeded etc.)
>   - The test would work if we just go beyond the quota, and then check a
> separate dd instance as being able to *not* exceed the quota. Which is why
> I put up that patch.
>
> What next?
>

The only thing left out now is background quota accounting. Vijaykumar
Mallikarjuna is working on making this behaviour configurable. The proposed
behaviour of marker is:

1. If fd is opened with O_SYNC, make accounting foreground (write is
unwound only after accounting is complete)
2. When soft-limit is crossed, enforcer instructs marker to make accounting
foreground.
3. In all other cases, marker accounting is done in the background with
respect to fops.

Handling parallel writes is a bit more involved and hence will be sent out
as a different patch. However, our regression tests will be written using
dd's sync flag (oflag=sync), thereby avoiding parallel writes.
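
Concretely, the dd invocations in such tests would look something like this
(file name and sizes are placeholders):

  dd if=/dev/zero of=$N0/$mydir/datafile bs=256k count=40 oflag=sync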

We are waiting for these changes to test whether the issue is fixed.
Fingers crossed :).


>
>> regards,
>> Raghavendra.
>>
>> On Tue, May 19, 2015 at 5:27 PM, Raghavendra G wrote:
>>
>>
>>
>> On Tue, May 19, 2015 at 4:26 PM, Jeff Darcy wrote:
>>
>> > No, my suggestion was aimed at not having parallel writes. In
>> this case quota
>> > won't even fail the writes with EDQUOT because of reasons
>> explained above.
>> > Yes, we need to disable flush-behind along with this so that
>> errors are

Re: [Gluster-devel] Are we dropping tests/bugs/snapshot/bug-1112559.t ?

2015-05-20 Thread Avra Sengupta
I've sent a patch (http://review.gluster.org/#/c/10840/) to remove this 
test from the test-suite. Once it gets merged I will re-open the clone of 
this bug for the 3.7.0 branch and backport the patch.


Regards,
Avra

On 05/20/2015 12:07 PM, Krishnan Parthasarathi wrote:

No concerns.

- Original Message -

Given that the fix verified by this test is no longer present, I
think we should remove the test from the test-suite itself. Could
anyone confirm whether there are any concerns in doing so? If not, I will
send a patch to do the same.

Regards,
Avra

On 05/08/2015 11:28 AM, Avra Sengupta wrote:

Raised a bug, and sent a patch(http://review.gluster.org/10660)
marking this test as a bad test.

Regards,
Avra

On 05/08/2015 11:24 AM, Krishnan Parthasarathi wrote:

It's a snapshot-clone feature team's decision. I have no objection if
you want it removed.

- Original Message -

On 05/08/2015 07:40 AM, Pranith Kumar Karampuri wrote:

hi,
Are we dropping tests/bugs/snapshot/bug-1112559.t? If yes, is the
patch for removing it already sent?

Waiting to get a confirmation from glusterd folks.

Rafi KC


Pranith




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] How to get rid of NFS on 3.7.0?

2015-05-20 Thread Atin Mukherjee


On 05/18/2015 08:50 PM, Atin Mukherjee wrote:
> Here is the issue:
> 
> Locking on a volume fails with the following error:
> 
> [2015-05-18 09:47:56.038463] E
> [glusterd-syncop.c:562:_gd_syncop_mgmt_lock_cbk] 0-management: Could not
> find peer with ID 70e65fb9-cc9d-16ba-a4f4-5fb90100
> 
> [2015-05-18 09:47:56.038527] E [glusterd-syncop.c:111:gd_collate_errors]
> 0-: Locking failed   on 85eb78cd-8ffa-49ca-b3e7-d5030bc3124d. Please
> check log file for details.
> [2015-05-18 09:47:56.038574] E
> [glusterd-syncop.c:1804:gd_sync_task_begin] 0-management:  Locking
> Peers Failed.
> 
> From the above log it is clear that the peer was not found, and that's
> because an incorrect peer id is passed to glusterd_peerinfo_find.
> 
> http://review.gluster.org/#/c/10192 introduced this problem. In
> _gd_syncop_mgmt_lock_cbk the peerid is taken from frame->cookie, which is
> the address of a local variable in gd_syncop_mgmt_lock(), so the moment
> that function goes out of scope the peerid is no longer valid. We would
> need to allocate the peerid on the heap and free it on demand to solve
> it (more or less similar to http://review.gluster.org/#/c/10192/1 )
Posted a patch [1] to fix this. I'll backport it to 3.7 once it's
accepted in mainline.

[1] http://review.gluster.org/#/c/10842/

~Atin
> 
> ~Atin
> 
> On 05/18/2015 02:15 PM, Emmanuel Dreyfus wrote:
>> On Mon, May 18, 2015 at 01:48:52AM -0400, Krishnan Parthasarathi wrote:
>>> I am not sure why volume-status isn't working.
>>
>> My understanding is that glusterd considers a lock to be held by the 
>> NFS component, while it is not started.
>>
> 

-- 
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] How to get rid of NFS on 3.7.0?

2015-05-20 Thread Emmanuel Dreyfus
On Wed, May 20, 2015 at 06:36:49PM +0530, Atin Mukherjee wrote:
> Posted a patch [1] to fix this. I'll backport it to 3.7 once its
> accepted in mainline.

I applied it to release-3.7 along with review.gluster.org/10830
and that fixes the problem.

-- 
Emmanuel Dreyfus
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Moratorium on new patch acceptance

2015-05-20 Thread Vijay Bellur

On 05/19/2015 11:56 PM, Vijay Bellur wrote:

On 05/18/2015 08:03 PM, Vijay Bellur wrote:

On 05/16/2015 03:34 PM, Vijay Bellur wrote:



I will send daily status updates from Monday (05/18) about this so that
we are clear about where we are and what needs to be done to remove this
moratorium. Appreciate your help in having a clean set of regression
tests going forward!



We have made some progress since Saturday. The problem with glupy.t has
been fixed - thanks to Niels! All but the following tests have developers
looking into them:

 ./tests/basic/afr/entry-self-heal.t

 ./tests/bugs/replicate/bug-976800.t

 ./tests/bugs/replicate/bug-1015990.t

 ./tests/bugs/quota/bug-1038598.t

 ./tests/basic/ec/quota.t

 ./tests/basic/quota-nfs.t

 ./tests/bugs/glusterd/bug-974007.t

Can submitters of these test cases or current feature owners pick these
up and start looking into the failures please? Do update the spurious
failures etherpad [1] once you pick up a particular test.


[1] https://public.pad.fsfe.org/p/gluster-spurious-failures



Update for today - all tests that are known to fail have owners. Thanks
everyone for chipping in! I think we should be able to lift this
moratorium and resume normal patch acceptance shortly.



Today's update - Pranith fixed a bunch of failures in erasure coding and 
Avra removed a test that was not relevant anymore - thanks for that!


Quota, afr, snapshot & tiering tests are being looked into. Will provide 
an update on where we are with these tomorrow.


Thanks,
Vijay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Moratorium on new patch acceptance

2015-05-20 Thread Pranith Kumar Karampuri



On 05/21/2015 12:07 AM, Vijay Bellur wrote:

On 05/19/2015 11:56 PM, Vijay Bellur wrote:

On 05/18/2015 08:03 PM, Vijay Bellur wrote:

On 05/16/2015 03:34 PM, Vijay Bellur wrote:



I will send daily status updates from Monday (05/18) about this so 
that
we are clear about where we are and what needs to be done to remove 
this

moratorium. Appreciate your help in having a clean set of regression
tests going forward!



We have made some progress since Saturday. The problem with glupy.t has
been fixed - thanks to Niels! All but following tests have developers
looking into them:

 ./tests/basic/afr/entry-self-heal.t

 ./tests/bugs/replicate/bug-976800.t

 ./tests/bugs/replicate/bug-1015990.t

 ./tests/bugs/quota/bug-1038598.t

 ./tests/basic/ec/quota.t

 ./tests/basic/quota-nfs.t

 ./tests/bugs/glusterd/bug-974007.t

Can submitters of these test cases or current feature owners pick these
up and start looking into the failures please? Do update the spurious
failures etherpad [1] once you pick up a particular test.


[1] https://public.pad.fsfe.org/p/gluster-spurious-failures



Update for today - all tests that are known to fail have owners. Thanks
everyone for chipping in! I think we should be able to lift this
moratorium and resume normal patch acceptance shortly.



Today's update - Pranith fixed a bunch of failures in erasure coding 
and Avra removed a test that was not relevant anymore - thanks for that!

Xavi and I each sent a patch to fix these. But...
I ran the regression 4 times before merging; it succeeded 3 times and 
failed once on xml.t, and I thought those were the last fixes for this 
problem. However, Ashish found a way to recreate the same EIO errors, so 
all is not well yet. Xavi is sending one more patch tomorrow which 
addresses that problem as well. While testing another patch on master I 
found that there is a use-after-free issue in ec :-(. I am not able to 
send the fix for it because gerrit seems to have run out of space:


Compressing objects: 100% (9/9), done.
Writing objects: 100% (9/9), 1.10 KiB | 0 bytes/s, done.
Total 9 (delta 7), reused 0 (delta 0)
fatal: Unpack error, check server log
error: unpack failed: error No space left on device <<--


PS: Since valgrind was giving me so much pain, I used AddressSanitizer for 
debugging this memory corruption. It is amazing! I followed 
http://tsdgeos.blogspot.in/2014/03/asan-and-gcc-how-to-get-line-numbers-in.html 
to get the backtrace with line numbers. It doesn't generate a core with 
gcc-4.8 though (I had to use the -N flag when starting the mount process 
to get the output on stderr). I think with future versions of gcc we won't 
need to do all this. I will try and post my experience once I upgrade to 
Fedora 22, which has gcc5.
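
For anyone wanting to try the same, this is roughly what it boils down to
(the exact flags, volume name and paths are my own assumptions, not commands
taken from this thread):

  # Build with AddressSanitizer; -g/-O0 and frame pointers keep the reports
  # readable.
  CFLAGS="-g -O0 -fsanitize=address -fno-omit-frame-pointer" \
      LDFLAGS="-fsanitize=address" ./configure --enable-debug
  make && make install

  # Run the mount process in the foreground (-N) so the ASan report is
  # printed on stderr instead of being lost when the process daemonizes.
  glusterfs -N --volfile-server=localhost --volfile-id=patchy /mnt/patchy

  # With gcc-4.8's libasan the report contains only raw addresses; piping it
  # through asan_symbolize.py or addr2line recovers file:line information,
  # as described in the blog post above.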


Pranith


Quota, afr, snapshot & tiering tests are being looked into. Will 
provide an update on where we are with these tomorrow.


Thanks,
Vijay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Are we dropping tests/bugs/snapshot/bug-1112559.t ?

2015-05-20 Thread Avra Sengupta
Thanks for merging the patch. I have backported it 
(http://review.gluster.org/#/c/10871/) to the release-3.7 branch as well.


Regards,
Avra

On 05/20/2015 05:52 PM, Avra Sengupta wrote:
I've sent a patch(http://review.gluster.org/#/c/10840/) to remove this 
from the test-suite. Once it get's merged I will re-open the clone of 
this bug for 3.7.0 branch, and backport the patch.


Regards,
Avra

On 05/20/2015 12:07 PM, Krishnan Parthasarathi wrote:

No concerns.

- Original Message -
Given that the fix which is tested by this patch is no longer 
present, I

think we should remove this patch from the test-suite itself. Could
anyone confirm if there are any concerns in doing so. If not I will 
send

a patch to do the same.

Regards,
Avra

On 05/08/2015 11:28 AM, Avra Sengupta wrote:

Raised a bug, and sent a patch(http://review.gluster.org/10660)
marking this test as a bad test.

Regards,
Avra

On 05/08/2015 11:24 AM, Krishnan Parthasarathi wrote:

It's a snapshot-clone feature team's decision. I have no objection if
you want it removed.

- Original Message -

On 05/08/2015 07:40 AM, Pranith Kumar Karampuri wrote:

hi,
Are we dropping tests/bugs/snapshot/bug-1112559.t? If 
yes is

the
patch for removing it already sent?

Waiting to get a confirmation from glusterd folks .

Rafi KC


Pranith






___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Are we dropping tests/bugs/snapshot/bug-1112559.t ?

2015-05-20 Thread Krishnan Parthasarathi
The smoke test has failed due to a Jenkins-related issue. We need to
retrigger the smoke test. I am not aware of how the infrastructure works.

Anyone, any ideas?

- Original Message -
> Thanks for merging the patch. I have backported
> it(http://review.gluster.org/#/c/10871/) to release 3.7 branch as well.
> 
> Regards,
> Avra
> 
> On 05/20/2015 05:52 PM, Avra Sengupta wrote:
> > I've sent a patch(http://review.gluster.org/#/c/10840/) to remove this
> > from the test-suite. Once it get's merged I will re-open the clone of
> > this bug for 3.7.0 branch, and backport the patch.
> >
> > Regards,
> > Avra
> >
> > On 05/20/2015 12:07 PM, Krishnan Parthasarathi wrote:
> >> No concerns.
> >>
> >> - Original Message -
> >>> Given that the fix which is tested by this patch is no longer
> >>> present, I
> >>> think we should remove this patch from the test-suite itself. Could
> >>> anyone confirm if there are any concerns in doing so. If not I will
> >>> send
> >>> a patch to do the same.
> >>>
> >>> Regards,
> >>> Avra
> >>>
> >>> On 05/08/2015 11:28 AM, Avra Sengupta wrote:
>  Raised a bug, and sent a patch(http://review.gluster.org/10660)
>  marking this test as a bad test.
> 
>  Regards,
>  Avra
> 
>  On 05/08/2015 11:24 AM, Krishnan Parthasarathi wrote:
> > It's a snapshot-clone feature team's decision. I have no objection if
> > you want it removed.
> >
> > - Original Message -
> >> On 05/08/2015 07:40 AM, Pranith Kumar Karampuri wrote:
> >>> hi,
> >>> Are we dropping tests/bugs/snapshot/bug-1112559.t? If
> >>> yes is
> >>> the
> >>> patch for removing it already sent?
> >> Waiting to get a confirmation from glusterd folks .
> >>
> >> Rafi KC
> >>
> >>> Pranith
> >>>
> >
> 
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Are we dropping tests/bugs/snapshot/bug-1112559.t ?

2015-05-20 Thread Vijay Bellur

On 05/21/2015 12:23 PM, Krishnan Parthasarathi wrote:

Smoke test has failed due to jenkins related issue. We need to retrigger smoke
test. I am not aware of how the infrastructure works?

Anyone, any ideas?



If you are logged in to Jenkins, you should be able to find 
Retrigger/Retrigger All links on the page for the job. Clicking on that 
should take care of launching smoke test(s) again.


-Vijay


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Are we dropping tests/bugs/snapshot/bug-1112559.t ?

2015-05-20 Thread Atin Mukherjee


On 05/21/2015 12:27 PM, Vijay Bellur wrote:
> On 05/21/2015 12:23 PM, Krishnan Parthasarathi wrote:
>> Smoke test has failed due to jenkins related issue. We need to
>> retrigger smoke
>> test. I am not aware of how the infrastructure works?
>>
>> Anyone, any ideas?
>>
> 
> If you are logged in to Jenkins, you should be able to find
> Retrigger/Retrigger All links on the page for the job. Clicking on that
> should take care of launching smoke test(s) again.
Retriggered
> 
> -Vijay
> 
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel

-- 
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel