Re: [Gluster-devel] compound fop design first cut

2015-12-07 Thread Pranith Kumar Karampuri



On 12/08/2015 09:02 AM, Pranith Kumar Karampuri wrote:



On 12/08/2015 02:53 AM, Shyam wrote:

Hi,

Why not think along the lines of new FOPs like fop_compound(_cbk), 
where the inargs to this FOP are a list of FOPs to execute (either in 
order or in any order)?
That is the intent. The question is how we specify the fops that we 
want to do and the arguments to each fop. In this approach, for example, 
xl_fxattrop_writev() is a new FOP. The list of fops that need to be done 
is fxattrop, then writev, in that order, and the arguments are the union 
of the arguments needed to perform fxattrop and writev. The reason this 
fop is not implemented throughout the graph is to avoid changing most of 
the stack on the brick side in the first cut of the implementation, i.e. 
quota/barrier/geo-rep/io-threads priorities/bit-rot may have to implement 
these new compound fops. We still get the benefit of avoiding the network 
round trips.
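
To make that concrete, a rough sketch of what such a per-combination fop's 
prototype could look like (names here are illustrative, not from an actual 
patch) is simply the union of the two argument lists:

    /* hypothetical prototype sketch; all types are from the usual
     * libglusterfs headers (glusterfs.h, dict.h, iobuf.h) */
    int32_t
    xl_fxattrop_writev (call_frame_t *frame, xlator_t *this, fd_t *fd,
                        gf_xattrop_flags_t xattrop_flags,
                        dict_t *xattrop_dict, dict_t *xattrop_xdata,
                        struct iovec *vector, int32_t count, off_t off,
                        uint32_t write_flags, struct iobref *iobref,
                        dict_t *writev_xdata);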


With a scheme like the above we could,
 - compound any set of FOPs (of course, we need to take care here, 
but still the feasibility exists)
It still exists, but the fop space blows up, with one new fop for each 
combination.
 - Each xlator can inspect the compound relation and choose to 
uncompound it. So if an xlator cannot perform FOPA+B as a single 
compound FOP, it can choose to send FOPA and then FOPB and chain up 
the responses back to the compound request sent to it. Also, the 
intention here would be to leverage existing FOP code in any xlator, 
to appropriately modify the inargs
 - The RPC payload is constructed based on existing FOP RPC 
definitions, but compounded based on the compound FOP RPC definition
This will be done in phase-3, after learning a bit more about how best 
to implement it so that we avoid stuffing arguments into xdata in the 
future as much as possible. After that we can choose to retire the 
compound-fop-sender and receiver xlators.


Possibly on the brick graph as well, pass these down as compounded 
FOPs, till someone decides to break it open and do it in phases 
(ultimately POSIX xlator).
This will be done in phase-2. At the moment we are not giving any 
choice to the xlators on the brick side.


The intention would be to break a compound FOP in case an xlator in 
between cannot support it or, even, expand a compound FOP request; say 
fxattropAndWrite is an AFR compounding decision, but a compound 
request to AFR may be WriteAndClose, hence AFR needs to extend this 
compound request.
Yes. There was a discussion with Krutika: if shard wants to do a write 
and then an xattrop in a single fop, then dht needs to implement 
dht_writev_fxattrop(), which should look somewhat similar to 
dht_writev(), and afr will need to implement afr_writev_fxattrop() as a 
full-blown transaction, where it needs to take data+metadata domain 
locks, do the data+metadata pre-op, wind to 
compound_fop_sender_writev_fxattrop(), then do the data+metadata post-op 
and unlock.


If we were to do writev and fxattrop separately, the fops will be (in the 
unoptimized case):

1) finodelk for write
2) fxattrop for pre-op of write
3) write
4) fxattrop for post-op of write
5) unlock for write
6) finodelk for fxattrop
7) fxattrop for pre-op of shard-fxattrop
8) shard-fxattrop
9) fxattrop for post-op of shard-fxattrop
10) unlock for fxattrop

If AFR chooses to implement writev_fxattrop, that means a data+metadata 
transaction:
1) finodelk in the data and metadata domains simultaneously (just like we 
take multiple locks in rename)
2) pre-op for the data and metadata parts as part of the compound fop
3) writev+fxattrop
4) post-op for the data and metadata parts as part of the compound fop
5) unlocks simultaneously

So it is still a 2x reduction in the number of network fops, except maybe 
for locking.


The above is just an off-the-cuff thought on the same.
We need to arrive at a consensus about how to specify the list of fops 
and their arguments. The reason I went against list_of_fops is to make it 
easier to discover the possible optimizations we can do per compound fop 
(inspired by ec's implementation of multiplication by all possible 
elements in the Galois field, where multiplication with a different 
number has a different optimization). Could you elaborate more on the 
idea you have about list_of_fops and its arguments? Maybe we can come up 
with combinations of fops where we can employ this technique of just 
list_of_fops and wind. I think the rest of the solutions you mentioned 
are where it will converge over time. The intention is to avoid network 
round trips without waiting for the whole stack to change, as much as 
possible.
Maybe I am overthinking it. Not a lot of combinations could be 
transactions. In any case, do let me know what you have in mind.
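
For reference, one purely hypothetical way such a generic list_of_fops 
argument could be expressed (none of these structures exist today; only 
glusterfs_fop_t, dict_t and friends are real libglusterfs types):

    /* hypothetical sketch only -- not an existing Gluster structure */
    typedef struct {
            glusterfs_fop_t fop;             /* which fop this entry is */
            union {
                    struct {
                            gf_xattrop_flags_t flags;
                            dict_t            *dict;
                    } fxattrop;
                    struct {
                            struct iovec  *vector;
                            int32_t        count;
                            off_t          off;
                            uint32_t       flags;
                            struct iobref *iobref;
                    } writev;
                    /* ... one member per supported fop ... */
            } args;
            dict_t *xdata;
    } compound_fop_entry_t;

    typedef struct {
            int                   n_entries;
            compound_fop_entry_t *entries;   /* executed in array order */
    } compound_fop_req_t;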




Pranith


The scheme below seems too specific to my eyes, and looks like we 
would be defining specific compound FOPs rather than having the ability 
to define generic ones.


On 12/07/2015 04:08 AM, Pranith Kumar Karampuri wrote:

hi,

Draft of the design doc:

Main motivation for the design of this feature is 


[Gluster-devel] building/installing on FreeBSD

2015-12-07 Thread Rick Macklem
Hi,

I've been trying to build/install 3.7.6 from the source tarball onto
FreeBSD and have had some luck.

I basically needed to apply this little patch:
--- cli/src/Makefile.in.sav 2015-12-06 17:06:59.252807000 -0500
+++ cli/src/Makefile.in 2015-12-06 17:07:44.741783000 -0500
@@ -63,8 +63,8 @@ am__DEPENDENCIES_1 =
 gluster_DEPENDENCIES =  \
$(top_builddir)/libglusterfs/src/libglusterfs.la \
$(am__DEPENDENCIES_1) $(am__DEPENDENCIES_1) \
-   $(top_builddir)/rpc/xdr/src/libgfxdr.la \
$(top_builddir)/rpc/rpc-lib/src/libgfrpc.la \
+   $(top_builddir)/rpc/xdr/src/libgfxdr.la \
$(am__DEPENDENCIES_1)
 AM_V_lt = $(am__v_lt_$(V))
 am__v_lt_ = $(am__v_lt_$(AM_DEFAULT_VERBOSITY))
@@ -327,8 +327,9 @@ gluster_SOURCES = cli.c registry.c input
 cli-cmd-system.c cli-cmd-misc.c cli-xml-output.c cli-quotad-client.c 
cli-cmd-snapshot.c
 
 gluster_LDADD = $(top_builddir)/libglusterfs/src/libglusterfs.la $(GF_LDADD) \
-   $(RLLIBS) $(top_builddir)/rpc/xdr/src/libgfxdr.la \
+   $(RLLIBS) \
$(top_builddir)/rpc/rpc-lib/src/libgfrpc.la \
+   $(top_builddir)/rpc/xdr/src/libgfxdr.la \
$(XML_LIBS)
 
 gluster_LDFLAGS = $(GF_LDFLAGS)

to avoid an undefined reference for xdr_auth_glusterfs_parms_v2.
(I also found that building without libxml2 doesn't work, because fields
 guarded by #if HAVE_LIB_XML are used in the code. Maybe it would be nicer
 if configure failed when libxml2 isn't installed, like it does for Bison, etc.)

Now, I can build/install it, but it isn't building any shared *.so files.
As such, the binaries basically fail.

I have zero experience with libtool. So, does someone happen to know what
it takes to get it to build the shared libraries?
I didn't do autogen.sh. I just used configure. Do I need to run autogen.sh?

Thanks in advance for any help, rick
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] glusterfs as dovecot backend storage

2015-12-07 Thread Emmanuel Dreyfus
On Mon, Dec 07, 2015 at 05:18:08PM +0530, Pranith Kumar Karampuri wrote:
> Do you mind adding gluster-users to that thread? It would be nice to know
> what are the problems they ran into to fix them.

If you have some answers, go there and cross-post with gluster-users...

-- 
Emmanuel Dreyfus
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] glusterfs as dovecot backend storage

2015-12-07 Thread Emmanuel Dreyfus
Hello

In case nobody noticed, there is an ongoing discussion on the dovecot 
mailing list about using glusterfs as mail storage.  Some people
ran into trouble and I think knowledgeable hints would be of great
value there.

NB: I did not dare to attempt such a setup, regardless of how
appealing it is. I fear troubles too much. :-)

-- 
Emmanuel Dreyfus
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] compound fop design first cut

2015-12-07 Thread Pranith Kumar Karampuri

hi,

Draft of the design doc:

The main motivation for this feature is to reduce network round trips by 
sending more than one fop in a single network operation, preferably 
without introducing new rpcs.


There are 2 new xlators: compound-fop-sender and compound-fop-receiver. 
compound-fop-sender is going to be loaded on top of each client xlator on 
the mount/client, and compound-fop-receiver is going to be loaded below 
the server xlator on the bricks. On the mount/client side, from the 
caller xlator down to the compound-fop-sender xlator, the xlators can 
choose to implement this extra compound-fop handling. Once the fop 
reaches compound-fop-sender, it will try to choose a base fop onto which 
it encodes the other fop in the base fop's xdata, and winds the base fop 
to the client xlator. The client xlator sends the base fop with the 
encoded xdata to the server xlator on the brick using the rpc of the base 
fop. Once the server xlator does resolve_and_resume(), it winds the base 
fop to the compound-fop-receiver xlator. This xlator decodes the extra 
fop from the xdata of the base fop. Based on the order encoded in the 
xdata, it executes the separate fops one after the other and stores the 
cbk response arguments of both operations. It then encodes the response 
of the extra fop into the base fop's response xdata and unwinds the fop 
to the server xlator, which sends the response using the base rpc's 
response structure. The client xlator will unwind the base fop to 
compound-fop-sender, which will decode the response into the compound 
fop's response arguments and unwind to the parent xlators.

I will take the fxattrop+write operation that we want to implement in 
afr as an example to explain how things may look.

compound_fop_sender_fxattrop_write (call_frame_t *frame, xlator_t *this,
                                    fd_t *fd,
                                    gf_xattrop_flags_t fxattrop_flags,
                                    dict_t *fxattrop_dict,
                                    dict_t *fxattrop_xdata,
                                    struct iovec *vector,
                                    int32_t count,
                                    off_t off,
                                    uint32_t writev_flags,
                                    struct iobref *iobref,
                                    dict_t *writev_xdata)
{
        0) Remember the compound fop and take the base fop as writev().
           In writev_xdata add the following key/value pairs:
        1) "xattrop-flags" -> fxattrop_flags
        2) for each fxattrop_dict key -> "fxattrop-dict-<key>", value
        3) for each fxattrop_xdata key -> "fxattrop-xdata-<key>", value
        4) "order" -> "fxattrop, writev"
        5) "compound-fops" -> "fxattrop"
        6) Wind writev()
}
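
A minimal sketch (error handling simplified) of what steps 1-5 above could 
look like using the libglusterfs dict API (dict_set_int32, dict_set_str, 
dict_set, dict_foreach); the key names and helper names are illustrative, 
not a fixed protocol:

    /* copies one fxattrop_dict entry into the base fop's xdata under a
     * "fxattrop-dict-" prefix; fxattrop_xdata would be copied the same
     * way with a "fxattrop-xdata-" prefix */
    static int
    copy_with_prefix (dict_t *src, char *key, data_t *value, void *data)
    {
            dict_t *dst = data;            /* the base fop's xdata */
            char    pkey[256] = {0};

            snprintf (pkey, sizeof (pkey), "fxattrop-dict-%s", key);
            return dict_set (dst, pkey, value);
    }

    static int
    encode_fxattrop_in_writev_xdata (dict_t *writev_xdata,
                                     gf_xattrop_flags_t fxattrop_flags,
                                     dict_t *fxattrop_dict)
    {
            if (dict_set_int32 (writev_xdata, "xattrop-flags",
                                (int32_t) fxattrop_flags))
                    return -1;
            if (dict_foreach (fxattrop_dict, copy_with_prefix,
                              writev_xdata) < 0)
                    return -1;
            if (dict_set_str (writev_xdata, "order", "fxattrop, writev"))
                    return -1;
            return dict_set_str (writev_xdata, "compound-fops", "fxattrop");
    }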

compound_fop_sender_fxattrop_write_cbk (...)
{
        /* decode the response args and call parent_fxattrop_write_cbk() */
}

parent_fxattrop_write_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                           int32_t fxattrop_op_ret,
                           int32_t fxattrop_op_errno,
                           dict_t *fxattrop_dict,
                           dict_t *fxattrop_xdata,
                           int32_t writev_op_ret,
                           int32_t writev_op_errno,
                           struct iatt *writev_prebuf,
                           struct iatt *writev_postbuf,
                           dict_t *writev_xdata)
{
        /* ... */
}
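
A small sketch of the decode step mentioned in 
compound_fop_sender_fxattrop_write_cbk() above; the response key names are 
assumptions that pair with the response encoding sketched further below in 
the receiver's writev cbk:

    /* hypothetical helper; needs <errno.h> and libglusterfs dict.h */
    static void
    decode_fxattrop_rsp (dict_t *writev_rsp_xdata,
                         int32_t *fxattrop_op_ret,
                         int32_t *fxattrop_op_errno)
    {
            *fxattrop_op_ret   = -1;
            *fxattrop_op_errno = ENOTSUP;

            if (!writev_rsp_xdata)
                    return;

            dict_get_int32 (writev_rsp_xdata, "fxattrop-op-ret",
                            fxattrop_op_ret);
            dict_get_int32 (writev_rsp_xdata, "fxattrop-op-errno",
                            fxattrop_op_errno);
            /* the fxattrop dict/xdata would be rebuilt from their
             * prefixed keys, mirroring the request-side encoding, before
             * unwinding both result sets to parent_fxattrop_write_cbk() */
    }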

compound_fop_receiver_writev (call_frame_t *frame, xlator_t *this, fd_t *fd,
                              struct iovec *vector,
                              int32_t count,
                              off_t off,
                              uint32_t flags,
                              struct iobref *iobref,
                              dict_t *writev_xdata)
{
        0) Check if writev_xdata has "compound-fops", else default_writev()
        1) Decode writev_xdata from the above encoding -> fxattrop flags,
           fxattrop_dict, fxattrop_xdata
        2) Get "order"
        3) Store all of the above in 'local'
        4) Wind fxattrop() with compound_receiver_fxattrop_cbk_writev_wind()
           as the cbk
}
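
A sketch of what step 1's decoding could look like, pulling the prefixed 
fxattrop keys back out of the base fop's xdata; the prefix and helper names 
mirror the sender-side sketch above and are assumptions:

    /* needs <string.h> and libglusterfs dict.h */
    static int
    extract_prefixed (dict_t *src, char *key, data_t *value, void *data)
    {
            dict_t     *dst    = data;          /* rebuilt fxattrop dict */
            const char *prefix = "fxattrop-dict-";
            size_t      len    = strlen (prefix);

            if (strncmp (key, prefix, len) == 0)
                    return dict_set (dst, key + len, value);
            return 0;
    }

    static dict_t *
    decode_fxattrop_dict (dict_t *writev_xdata)
    {
            dict_t *fxattrop_dict = dict_new ();

            if (!fxattrop_dict)
                    return NULL;
            if (dict_foreach (writev_xdata, extract_prefixed,
                              fxattrop_dict) < 0) {
                    dict_unref (fxattrop_dict);
                    return NULL;
            }
            return fxattrop_dict;
    }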

compound_receiver_fxattrop_cbk_writev_wind (call_frame_t *frame, void *cookie,
                                            xlator_t *this, int32_t op_ret,
                                            int32_t op_errno, dict_t *dict,
                                            dict_t *xdata)
{
        0) Store the fxattrop cbk args
        1) Perform writev() with the stored writev params, with
           compound_receiver_writev_cbk() as the cbk
}

compound_receiver_writev_cbk (call_frame_t *frame, void *cookie,
                              xlator_t *this, int32_t op_ret,
                              int32_t op_errno, struct iatt *prebuf,
                              struct iatt *postbuf, dict_t *xdata)
{
        0) Store the writev cbk args
        1) Encode the fxattrop response into writev_xdata with an encoding
           similar to the one in compound_fop_sender_fxattrop_write()
        2) Unwind writev()
}
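
Step 1 above could encode the stored fxattrop result roughly as follows; 
the key names pair with the sender-side decode sketch earlier and are 
assumptions:

    /* hypothetical helper; libglusterfs dict.h */
    static int
    encode_fxattrop_rsp (dict_t *writev_rsp_xdata, int32_t fxattrop_op_ret,
                         int32_t fxattrop_op_errno)
    {
            /* the fxattrop response dict would also be copied in here
             * with a "fxattrop-dict-" style prefix, as in the request
             * encoding */
            if (dict_set_int32 (writev_rsp_xdata, "fxattrop-op-ret",
                                fxattrop_op_ret))
                    return -1;
            return dict_set_int32 (writev_rsp_xdata, "fxattrop-op-errno",
                                   fxattrop_op_errno);
    }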

This example is just to show how things may look, but the actual 
implementation may just have all base fops calling a common function to 
perform the operations in the order given in the receiver xl. Yet to 
think about that. It is probably better to encode the fop number from 
glusterfs_fop_t rather than the fop string in the dictionary.
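
If the fop number is encoded that way, the receiver's check could look 
roughly like this (the key name is an assumption):

    /* glusterfs_fop_t and GF_FOP_NULL are from glusterfs.h */
    static glusterfs_fop_t
    get_compound_fop (dict_t *xdata)
    {
            int32_t fop = GF_FOP_NULL;

            if (!xdata || dict_get_int32 (xdata, "compound-fop", &fop))
                    return GF_FOP_NULL;       /* not a compound request */
            return (glusterfs_fop_t) fop;
    }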


This is phase-1 of the change because we don't 

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting (December 8, 2015)

2015-12-07 Thread Manikandan Selvaganesh
Hi all,

This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC  
(in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thank you :-)

--
Regards,
Manikandan Selvaganesh.

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] tools/georepcli: Gluster Geo-replication CLI tool for ease of use

2015-12-07 Thread Aravinda

Hi,

Created a tool to simplify the manual steps involved in Geo-replication 
session creation (previously known as georepsetup[1]), along with 
enhancements to the status command.


Posted the patch for review[2]. Refer to README.md[3] for documentation.

Please review. Thanks.

Comments & Suggestions Welcome.

[1] http://aravindavk.in/blog/introducing-georepsetup/
[2] http://review.gluster.org/#/c/12460/
[3] http://review.gluster.org/#/c/12460/3/tools/georepcli/README.md

--
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] glusterfs as dovecot backend storage

2015-12-07 Thread Pranith Kumar Karampuri
Do you mind adding gluster-users to that thread? It would be nice to 
know what problems they ran into, so we can fix them.


Pranith
On 12/07/2015 03:44 PM, Emmanuel Dreyfus wrote:

Hello

In case nobody noticed, there is an ongoing discussion on the dovecot
mailing list about using glusterfs as mail storage.  Some people
ran into trouble and I think knowledgeable hints would be of great
value there.

NB: I did not dare to attempt such a setup, regardless of how
appealing it is. I fear troubles too much. :-)



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] intermittent test failure: tests/basic/tier/record-metadata-heat.t ?

2015-12-07 Thread Michael Adam
FYI: tests/basic/tier/record-metadata-heat.t failed in

https://build.gluster.org/job/rackspace-regression-2GB-triggered/16562/consoleFull

triggered for

http://review.gluster.org/#/c/12830/

I can see no relation.
(In fact, that patch should not add any new failures.)

Michael


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] intermittent test failure: sparse-file-self-heal.t ?

2015-12-07 Thread Michael Adam
Here is a failure of sparse-file-self-heal.t:

https://build.gluster.org/job/rackspace-regression-2GB-triggered/16561/consoleFull

triggered by http://review.gluster.org/#/c/12826/

I can't see how this is related.

Michael


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] intermittent test failure: tests/basic/tier/record-metadata-heat.t ?

2015-12-07 Thread Vijay Bellur

On 12/07/2015 06:51 PM, Vijay Bellur wrote:

On 12/07/2015 06:47 PM, Michael Adam wrote:

FYI: tests/basic/tier/record-metadata-heat.t failed in

https://build.gluster.org/job/rackspace-regression-2GB-triggered/16562/consoleFull


triggered for

http://review.gluster.org/#/c/12830/

I can see no relation.
(In fact, that patch should not add any new failures.)



tests/basic/tier/record-metadata-heat.t is listed in is_bad_test().
Hence this failure should not affect the result of a regression run.

Dan mentioned to me earlier today that a fix is being worked on for this
test marked as bad.



Looks like the run failed due to:

/tests/bugs/fuse/bug-924726.t (Wstat: 0 Tests: 20 Failed: 1)
  Failed test:  20

Raghavendra - this test has been reported previously too as affecting 
other regression runs. Can you please take a look, as you are the 
original author of this test unit? I tried reproducing the problem in my 
local setup a few times but it does not seem to happen easily.


Thanks,
Vijay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] building/installing on FreeBSD

2015-12-07 Thread Kaleb S. KEITHLEY
On 12/07/2015 08:42 AM, Rick Macklem wrote:

> to avoid an undefined reference for xdr_auth_glusterfs_parms_v2.
> (I also found that building without libxml2 doesn't work, because fields
>  #if HAVE_LIB_XML are used in the code. Maybe it would be nicer if configure
>  failed when libxml2 isn't installed, like it does for Bison, etc.)

File a bug[1], and/or submit a patch[2]

> 
> Now, I can build/install it, but it isn't building any shared *.so files.
> As such, the binaries basically fail.
> 
> I have zero experience with libtool. So, does someone happen to know what
> it takes to get it to build the shared libraries?
> I didn't do autogen.sh. I just used configure. Do I need to run autogen.sh?
> 

I pretty much always run autogen.sh. On FreeBSD 10, my builds produce
shared libs and binaries. Your patch looked a little suspect.

I was able to build both 3.7.6 from the tarball and the head of the
release-3.7 branch in git with `./autogen.sh && ./configure
--disable-tiering && make`

[1]https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
[2]http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] more re: building on FreeBSD

2015-12-07 Thread Rick Macklem
Ok, after I did autogen.sh before configure, the build did make the
shared libraries. (The patch I posted in the last email wasn't needed,
since library ordering won't matter for shared libs. It must have been
trying to do static linking without doing autogen.sh.)

It fails near the end of "make install" in glusterfind, but that doesn't
much matter for me.

It does now seem to be working and the only thing I notice is that the
daemons "glusterfsd" spend most of their time in "R" (run) state, only
occasionally sleeping on "select" for a short period.
Is this expected?

It did allow me to create a volume and mount it with mount_glusterfs.

Thanks for the "autogen.sh" hint, rick
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] intermittent test failure: tests/basic/tier/record-metadata-heat.t ?

2015-12-07 Thread Raghavendra Gowdappa
> Looks like the run failed due to:
> 
> /tests/bugs/fuse/bug-924726.t (Wstat: 0 Tests: 20 Failed: 1)
>Failed test:  20
> 
> Raghavendra - this test has been reported previously too as affecting
> other regression runs. Can you please take a look in as you are the
> original author of this test unit? I tried reproducing the problem in my
> local setup a few times but that does not seem to happen easily.

A fix has been sent to:
http://review.gluster.org/12906

> 
> Thanks,
> Vijay
> 
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel