Re: [Gluster-infra] Build failure - Job #23157

2017-01-18 Thread Raghavendra Gowdappa
Thanks Anoop :).

- Original Message -
> From: "Anoop C S" 
> To: "Raghavendra Gowdappa" , "Nigel Babu" 
> 
> Cc: gluster-infra@gluster.org
> Sent: Wednesday, January 18, 2017 1:57:48 PM
> Subject: Re: [Gluster-infra] Build failure - Job #23157
> 
> On Wed, 2017-01-18 at 03:20 -0500, Raghavendra Gowdappa wrote:
> > 1414242
> 
> This is a private bug. You need to mark it as not private so that the bug
> status can be extracted correctly.
> 
> --Anoop C S.
> 
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-infra


[Gluster-infra] Build failure - Job #23157

2017-01-18 Thread Raghavendra Gowdappa
https://build.gluster.org/job/compare-bug-version-and-git-branch/23157/console

23:34:34 + /opt/qa/jenkins/scripts/compare-bug-version-and-git-branch.sh
23:34:36 Failed to get details for BUG id 1414242, please contact an admin or 
email gluster-infra@gluster.org.
23:34:36 1
23:34:37 BUG id 1414242 has an invalid status as . Acceptable status values are 
NEW, ASSIGNED or POST.
23:34:37 Build step 'Execute shell' marked build as failure
23:34:37 Finished: FAILURE

regards,
Raghavendra


Re: [Gluster-infra] [regression tests] Aborted runs and diagnostic information

2016-09-30 Thread Raghavendra Gowdappa


- Original Message -
> From: "Nigel Babu" 
> To: "Raghavendra Gowdappa" 
> Cc: "gluster-infra" , "Gluster Devel" 
> 
> Sent: Wednesday, September 28, 2016 6:28:11 PM
> Subject: Re: [regression tests] Aborted runs and diagnostic information
> 
> We don't collect any information before aborting tests. However, if we can
> put together a list of things we'd like to collect before aborting jobs,
> I can look into what can be done to collect it.

Thanks Nigel. Apart from what I mentioned in my previous mail, let's wait for 
some time for others to comment. If there are no new additions, we can go 
ahead and collect the items in my list. Some points to note:

* to enable dumping of all objects, we need to do

echo "all=yes" >> $statedumpdir/glusterdump.options

before we issue commands to collect statedumps. 

* we need to do

# kill -SIGUSR1 

to collect a statedump of the client process, as there is no CLI command for it.

* for bricks,

[root@unused rhs-glusterfs]# gluster volume statedump
Usage: volume statedump  [nfs|quotad] 
[all|mem|iobuf|callpool|priv|fd|inode|history]
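Put together, the collection steps above could be scripted roughly as follows. This is only a sketch: the volume name and the statedump directory are assumptions, and the script should be adjusted for the actual slave setup.

```shell
#!/bin/sh
# Sketch of the statedump collection steps above. VOLNAME and
# statedumpdir are assumptions, not values taken from the build slaves.
VOLNAME=${VOLNAME:-patchy}
statedumpdir=${statedumpdir:-/var/run/gluster}

if command -v gluster >/dev/null 2>&1; then
    # Dump all objects (inodes, call pool, fds, history, ...), not just defaults.
    echo "all=yes" >> "$statedumpdir/glusterdump.options"

    # Clients: there is no CLI command, so signal each mount process directly.
    for pid in $(pgrep -x glusterfs); do
        kill -USR1 "$pid"
    done

    # Bricks (and nfs/quotad) can be dumped through the CLI.
    gluster volume statedump "$VOLNAME" all
else
    echo "gluster not installed; nothing to collect"
fi
```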

> 
> On Wed, Sep 28, 2016 at 4:19 PM, Raghavendra Gowdappa 
> wrote:
> 
> > Hi all,
> >
> > Do we collect any diagnostic information before aborting tests as in [1]?
> > If yes, where can I find them?
> >
> > If not, I think the following information would be useful:
> >
> > 1. ps output of all relevant gluster processes and tests running on them
> > (to find status of processes like 'D' etc)
> > 2. Statedump of client and brick processes (better to dump all information
> > like inodes, call-stack, etc)
> > 3. coredump of client and brick processes
> >
> > If you think any other information is helpful, please add to the list.
> >
> > In the specific case of [1], the test runs fine on my local machine, but
> > always hangs on build machines. So having this information would be helpful.
> >
> > [1] https://build.gluster.org/job/netbsd7-regression/886/console
> >
> > regards,
> > Raghavendra
> >
> 
> 
> 
> --
> nigelb
> 


[Gluster-infra] [regression tests] Aborted runs and diagnostic information

2016-09-28 Thread Raghavendra Gowdappa
Hi all,

Do we collect any diagnostic information before aborting tests as in [1]? If 
yes, where can I find them?

If not, I think the following information would be useful:

1. ps output of all relevant gluster processes and tests running on them (to 
find status of processes like 'D' etc)
2. Statedump of client and brick processes (better to dump all information like 
inodes, call-stack, etc)
3. coredump of client and brick processes
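The three items above could be collected with a short script run just before the abort. A sketch, with an assumed output directory and process-name pattern; gcore (from gdb) may not be on every slave, hence the guard:

```shell
#!/bin/sh
# Sketch of collecting items 1-3 above before a run is aborted.
OUTDIR=${OUTDIR:-$(mktemp -d)}

# 1. ps output: the STAT column exposes states such as 'D' (uninterruptible sleep).
ps axww -o pid,stat,wchan,etime,args > "$OUTDIR/ps.txt"
grep gluster "$OUTDIR/ps.txt" > "$OUTDIR/ps-gluster.txt" || true

# 2. Statedumps: SIGUSR1 makes glusterfs/glusterfsd processes dump their state.
for pid in $(pgrep gluster); do
    kill -USR1 "$pid" 2>/dev/null
done

# 3. Core dumps, without killing the processes.
if command -v gcore >/dev/null 2>&1; then
    for pid in $(pgrep gluster); do
        gcore -o "$OUTDIR/core" "$pid" >/dev/null 2>&1
    done
fi
echo "diagnostics written to $OUTDIR"
```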

If you think any other information is helpful, please add to the list.

In the specific case of [1], the test runs fine on my local machine, but always 
hangs on build machines. So having this information would be helpful.

[1] https://build.gluster.org/job/netbsd7-regression/886/console

regards,
Raghavendra


Re: [Gluster-infra] [Gluster-devel] Smoke is failing for patch 14512

2016-05-31 Thread Raghavendra Gowdappa
+gluster-infra

- Original Message -
> From: "Hari Gowtham" 
> To: "Shyam" 
> Cc: "Gluster Devel" 
> Sent: Wednesday, June 1, 2016 11:11:56 AM
> Subject: Re: [Gluster-devel] Smoke is failing for patch 14512
> 
> Hi,
> 
> I'm seeing the smoke test failures too
> http://review.gluster.org/#/c/14540/3
> 
> 
> - Original Message -
> > From: "Shyam" 
> > To: "Aravinda" , "Gluster Devel"
> > 
> > Sent: Tuesday, May 31, 2016 7:21:38 PM
> > Subject: Re: [Gluster-devel] Smoke is failing for patch 14512
> > 
> > On 05/30/2016 04:02 AM, Aravinda wrote:
> > > Hi,
> > >
> > > Smoke is failing for the patch http://review.gluster.org/#/c/14512
> > >
> > > I am unable to guess the reason for the failure. Please help.
> > > https://build.gluster.org/job/glusterfs-devrpms/16768/console
> > 
> > I got similar failures here,
> > 1) https://build.gluster.org/job/glusterfs-devrpms/16755/console
> > (slave25, as your failure is also on slave25)
> > 
> > What failed was,
> > # /usr/bin/yum --installroot /var/lib/mock/fedora-22-x86_64/root/
> > --releasever 22 install @buildsys-build --setopt=tsflags=nocontexts
> > 
> > 2) https://build.gluster.org/job/glusterfs-devrpms/16762/console (slave24)
> > 
> > What failed here is slightly different,
> > # /usr/bin/yum --installroot /var/lib/mock/fedora-22-x86_64/root/ -y
> > --releasever 22 update --setopt=tsflags=nocontexts
> > 
> > The console output shows, "Results and/or logs in:
> > /home/jenkins/root/workspace/glusterfs-devrpms/RPMS/fc22/x86_64/" and
> > "ERROR: Command failed. See logs for output.", but I guess these logs
> > are no longer available as this is a chroot env that is cleaned up post
> > the task, right?
> > 
> > I am just adding this to the list, as I do not know what the failure is
> > due to.
> > 
> > Any pointers anyone?
> > 
> > >
> > > --
> > > regards
> > > Aravinda
> > >
> > >
> > >
> > > ___
> > > Gluster-devel mailing list
> > > gluster-de...@gluster.org
> > > http://www.gluster.org/mailman/listinfo/gluster-devel
> > >
> > ___
> > Gluster-devel mailing list
> > gluster-de...@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> > 
> 
> --
> Regards,
> Hari.
> 
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 


Re: [Gluster-infra] [Gluster-devel] [smoke failure] Permission denied error while install-pyglupyPYTHON

2016-05-11 Thread Raghavendra Gowdappa
+gluster-infra

- Original Message -
> From: "Raghavendra Gowdappa" 
> To: "Gluster Devel" 
> Sent: Thursday, May 12, 2016 10:44:07 AM
> Subject: [Gluster-devel] [smoke failure] Permission denied error while
> install-pyglupyPYTHON
> 
> https://build.gluster.org/job/smoke/27674/console
> 
> 06:09:06 /bin/mkdir: cannot create directory
> `/usr/lib/python2.6/site-packages/gluster': Permission denied
> 06:09:06 make[6]: *** [install-pyglupyPYTHON] Error 1
> 06:09:06 make[5]: *** [install-am] Error 2
> 06:09:06 make[4]: *** [install-recursive] Error 1
> 06:09:06 make[3]: *** [install-recursive] Error 1
> 06:09:06 make[2]: *** [install-recursive] Error 1
> 06:09:06 make[1]: *** [install-recursive] Error 1
> 06:09:06 make: *** [install-recursive] Error 1
> 06:09:06 Build step 'Execute shell' marked build as failure
> 06:09:06 Finished: FAILURE
> 
> regards,
> Raghavendra
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 


[Gluster-infra] hang in netbsd regression while building

2016-04-29 Thread Raghavendra Gowdappa
While trying to get the NetBSD regressions to pass on [1], I am seeing hangs 
while building glusterfs. I had seen similar behavior for other patches earlier 
too, but Kaushal had fixed it. Any help is appreciated. Also, if you let me know 
the procedure for fixing this issue, I can do it myself if I encounter it again.

[1] http://review.gluster.org/#/c/14102/

regards,
Raghavendra


[Gluster-infra] build seems to be hung again

2016-04-21 Thread Raghavendra Gowdappa
Hi all,

The NetBSD regression [1] has been running for 3 hrs, but it's still building. 
Seems like there is some issue here. Is anyone aware of what's going on?

[1] https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/16000/

regards,
Raghavendra


[Gluster-infra] glibc bug and our upstream regression infra

2016-03-02 Thread Raghavendra Gowdappa
Hi all,

While working on a customer case we stumbled upon a corruption in glibc [1]. 
I've heard from various people that there have recently been frequent crashes in 
the upstream Linux regression runs. Just a pointer that is worth checking out.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1305406

regards,
Raghavendra


Re: [Gluster-infra] [Gluster-devel] NetBSD tests not running to completion.

2016-01-10 Thread Raghavendra Gowdappa
> On 01/07/2016 02:39 PM, Emmanuel Dreyfus wrote:
> > On Wed, Jan 06, 2016 at 05:49:04PM +0530, Ravishankar N wrote:
> >> I re-triggered NetBSD regressions for
> >> http://review.gluster.org/#/c/13041/3
> >> but they are being run in silent mode and are not completing. Can some one
> >> from the infra-team take a look? The last 22 tests in
> >> https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/ have
> >> failed. Highly unlikely that something is wrong with all those patches.
> > I note your latest test completed with an error in mount-nfs-auth.t:
> > https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/13260/consoleFull
> >
> > Would you have the jenkins build that did not complete, so that I can have a
> > look at it?
> >
> > Generally speaking, I have to point out that NetBSD regression does shed
> > light on generic bugs; we had a recent example with quota-nfs.t. For now
> > there are no other well-supported platforms, but if you want glusterfs to
> > be really portable, removing the mandatory NetBSD regression is not a good
> > idea: portability bugs will crop up.
> >
> > Even a daily or weekly regression run seems a bad idea to me. If you do not
> > prevent integration of patches that break NetBSD regression, they will get
> > in, and tests will break one by one over time. I have first-hand experience
> > of this situation, from when I was actually trying to catch up with NetBSD
> > regression. Many times I reached something reliable enough to become
> > mandatory, and got broken by a new patch before it actually became
> > mandatory.
> >
> > IMO, relaxing NetBSD regression requirement means the project drops the
> > goal
> > of being portable.
> >
> hi Emmanuel,
>   This Sunday I have some time I can spend helping in making
> tests better for NetBSD. I have seen bugs that are caught only by NetBSD
> regression just recently, so I see value in making NetBSD more reliable.

+1. As Manu and Ravi's conversation pointed out, it's better to take a call 
based on data (how many tests are failing, how many are spurious). As my recent 
work on quota-nfs.t shows, I was actively trying to find a reproducer for the 
write-behind issue, but the reproducer proved elusive. We were able to hit the 
bug only very inconsistently. Couple that with the pressure to take things to 
closure, and a tendency to push things under the carpet creeps in.

Having said that, you can find some of my commits where NetBSD results were 
skipped (or the completion of NetBSD runs was not waited for). Knowing that the 
infra is stable and that there are fewer false positives will shift 
responsibility onto developers to own the issue and fix it.

> Please let me know what things we can work on. It would help if you give
> me something specific to glusterfs to make it more valuable in the short
> term. Over time I would like to learn enough to share the load with you,
> however little it may be (please bear with me, I sometimes go quiet). Here
> are the initial things I would like to know to begin with:

I can try to help out here too, but mostly on a best-effort basis, as there are 
other responsibilities on which I am evaluated directly.

> 
> 1) How to set up NetBSD VMs on my laptop which is of exact version as
> the ones that are run on build systems.
> 2) How to prevent NetBSD machines hang when things crash (At least I
> used to see that the machines hang when fuse crashes before, not sure if
> this is still the case)? (This failure needs manual intervention at the
> moment on NetBSD regressions, if we make it report failures and pick
> next job that would be the best way forward)
> 3) We should come up with a list of known problems and how to
> troubleshoot those problems, when things are not going smooth in NetBSD.
> Again, we really need to make things automatic, this should be last
> resort. Our top goal should be to make NetBSD machines report failures
> and go to execute next job.
> 4) How can we make debugging better in NetBSD? In the worst case we can
> make all tests execute in trace/debug mode on NetBSD.
> 
> I really want to appreciate the fine job you have done so far in making
> sure glusterfs is stable on NetBSD.

++1. I appreciate Emmanuel's effort and support over such a long time and will 
try to chip in to whatever extent I can.

> 
> Infra team,
> I think we need to make some improvements to our infra. We need
> to get information about health of linux, NetBSD regression builds.
> 1) Something like, in the last 100 builds how many builds succeeded on
> Linux, how many succeeded on NetBSD.
> 2) What are the tests that failed in the last 100 builds and how many
> times on both Linux and NetBSD. (I actually wrote scripts for some of this,
> but the whole command output has changed, making my scripts stale.)
> Any other ideas you guys have?
> 3) Which components have highest number of spurious failures.
> 4) How many builds did not complete/manually aborted etc.
> 
> Once we start measuring these things, next s
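The per-platform health numbers asked for above could be tallied mechanically. A sketch using hard-coded sample results; on the real setup the list would presumably come from the Jenkins JSON API (the `api/json?tree=builds[result]` endpoint and the `<job>` placeholder are assumptions, not verified against build.gluster.org):

```shell
#!/bin/sh
# Tally pass/fail counts for the last N builds of a job. The sample data
# below stands in for the 'result' fields that would come from e.g.
#   https://build.gluster.org/job/<job>/api/json?tree=builds[result]
results='SUCCESS
FAILURE
SUCCESS
ABORTED
FAILURE
SUCCESS'

printf '%s\n' "$results" | sort | uniq -c | sort -rn
```

The same pipeline, run once per job, would give the Linux-vs-NetBSD comparison in point 1.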

Re: [Gluster-infra] [Gluster-devel] NetBSD tests not running to completion.

2016-01-10 Thread Raghavendra Gowdappa


- Original Message -
> From: "Ravishankar N" 
> To: "Emmanuel Dreyfus" 
> Cc: "gluster-infra" , "Gluster Devel" 
> 
> Sent: Thursday, January 7, 2016 3:14:12 PM
> Subject: Re: [Gluster-devel] NetBSD tests not running to completion.
> 
> On 01/07/2016 02:39 PM, Emmanuel Dreyfus wrote:
> > On Wed, Jan 06, 2016 at 05:49:04PM +0530, Ravishankar N wrote:
> >> I re-triggered NetBSD regressions for
> >> http://review.gluster.org/#/c/13041/3
> >> but they are being run in silent mode and are not completing. Can some one
> >> from the infra-team take a look? The last 22 tests in
> >> https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/ have
> >> failed. Highly unlikely that something is wrong with all those patches.
> > I note your latest test completed with an error in mount-nfs-auth.t:
> > https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/13260/consoleFull
> Yes, the test that failed is "dd if=/dev/zero of=$N0/test-big-write
> count=500 bs=1024k"
> I don't know why.

Did the test fail (with an error)? or was it hung?

> > Would you have the jenkins build that did not complete, so that I can have a
> > look at it?
> Unfortunately I did not save the link.
> >
> > Generally speaking, I have to point out that NetBSD regression does shed
> > light on generic bugs; we had a recent example with quota-nfs.t. For now
> > there are no other well-supported platforms, but if you want glusterfs to
> > be really portable, removing the mandatory NetBSD regression is not a good
> > idea: portability bugs will crop up.
> This is all IMHO:
> I am not that big a fan of portable software development at all, more so
> for system software. Maybe it is a thing in the desktop application
> development world. Even in a recent RFE on gluster-devel for an
> eventing framework, I saw concerns raised about systemd and dbus APIs
> because they are Linux-specific. To me that is a silly reason to
> reinvent the wheel. I'd rather the software stick to one platform and
> build on whatever the platform has to offer than attempt to run on all
> OSes.
> 
> >
> > Even a daily or weekly regression run seems a bad idea to me. If you do not
> > prevent integration of patches that break NetBSD regression, they will get
> > in, and tests will break one by one over time. I have first-hand experience
> > of this situation, from when I was actually trying to catch up with NetBSD
> > regression. Many times I reached something reliable enough to become
> > mandatory, and got broken by a new patch before it actually became
> > mandatory.
> >
> > IMO, relaxing NetBSD regression requirement means the project drops the
> > goal
> > of being portable.
> >
> It is not the failure of regressions that bothers me. It is the lack of
> infrastructure to debug failures that is irritating. If Linux
> regressions fail, I can just run them on my laptop and figure things
> out. For NetBSD, I'd have to take a slave offline. Then I have to send a
> mail about it and hope that everyone has seen it and does not
> accidentally use the system while I'm debugging. Then there's the part
> where none of the bash/gdb/whatever debugging tools on Linux that you
> are used to will work on NetBSD. Like I pointed out in another thread,
> at least a VM image for NetBSD that I can run on my laptop would be of
> great benefit.
> 
> 
> 
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 

Re: [Gluster-infra] [Gluster-devel] Lot of Netbsd regressions 'Waiting for the next available executor'

2015-12-24 Thread Raghavendra Gowdappa
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/12961/consoleFull
 

Seems to be hung. Maybe a hung syscall? I've tried to kill it, but it seems 
like it's not dead. Maybe patch #12594 is causing some issues on netbsd. It has 
passed gluster regression.

- Original Message -
> From: "Ravishankar N" 
> To: "Gluster Devel" , "gluster-infra" 
> 
> Sent: Thursday, December 24, 2015 9:27:53 AM
> Subject: [Gluster-devel] Lot of Netbsd regressions 'Waiting for the next  
> available executor'
> 
> $subject.
> Since yesterday.
> The build queue is growing. Something's wrong.
> 
> " If you see a little black clock icon in the build queue as shown below, it
> is an indication that your job is sitting in the queue unnecessarily." is
> what it says.
> 
> 
> 
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-infra] [Gluster-devel] Lot of Netbsd regressions 'Waiting for the next available executor'

2015-12-24 Thread Raghavendra Gowdappa


- Original Message -
> From: "Raghavendra Gowdappa" 
> To: "Ravishankar N" 
> Cc: "Gluster Devel" , "gluster-infra" 
> 
> Sent: Thursday, December 24, 2015 12:11:46 PM
> Subject: Re: [Gluster-devel] Lot of Netbsd regressions 'Waiting for the next  
> available executor'
> 
> https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/12961/consoleFull
> 
> Seems to be hung. Maybe a hung syscall? I've tried to kill it, but it seems
> like it's not dead. Maybe patch #12594 is causing some issues on netbsd. It
> has passed gluster regression.

s/gluster/Linux/

> 
> - Original Message -
> > From: "Ravishankar N" 
> > To: "Gluster Devel" , "gluster-infra"
> > 
> > Sent: Thursday, December 24, 2015 9:27:53 AM
> > Subject: [Gluster-devel] Lot of Netbsd regressions 'Waiting for the next
> > available executor'
> > 
> > $subject.
> > Since yesterday.
> > The build queue is growing. Something's wrong.
> > 
> > " If you see a little black clock icon in the build queue as shown below,
> > it
> > is an indication that your job is sitting in the queue unnecessarily." is
> > what it says.
> > 
> > 
> > 
> > ___
> > Gluster-devel mailing list
> > gluster-de...@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-infra] [Gluster-devel] Unable to send patches to gerrit

2015-06-11 Thread Raghavendra Gowdappa
Now I'm not able to open/review/merge any patches on review.gluster.org.
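The "No space left on device" unpack error below can mean either free blocks or free inodes are exhausted on the server. A quick way to check both; the data path is an assumption (on the real gerrit host it would be the git repository directory):

```shell
#!/bin/sh
# Check both free blocks and free inodes -- a git object store full of
# small loose objects can run out of inodes while df -h still shows space.
GIT_DATA=${GIT_DATA:-.}   # assumed path; point at the gerrit git data dir

df -h "$GIT_DATA"   # free blocks
df -i "$GIT_DATA"   # free inodes

# Largest entries under the path, to see what could be pruned.
du -xsh "$GIT_DATA"/* 2>/dev/null | sort -rh | head -10
```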

- Original Message -
> From: "Avra Sengupta" 
> To: "Pranith Kumar Karampuri" , "Anoop C S" 
> , gluster-de...@gluster.org,
> "Kaushal M" , "Vijay Bellur" , 
> "gluster-infra" 
> Sent: Thursday, 11 June, 2015 11:16:15 AM
> Subject: Re: [Gluster-devel] Unable to send patches to gerrit
> 
> +Adding gluster-infra
> 
> On 06/11/2015 10:39 AM, Pranith Kumar Karampuri wrote:
> > Last time when this happened Kaushal/vijay fixed it if I remember
> > correctly.
> > +kaushal +Vijay
> >
> > Pranith
> > On 06/11/2015 10:38 AM, Anoop C S wrote:
> >>
> >> On 06/11/2015 10:33 AM, Ravishankar N wrote:
> >>> I'm unable to push a patch on release-3.6, getting different
> >>> errors every time:
> >>>
> >> This happens for master too. I continuously get the following error:
> >>
> >> error: unpack failed: error No space left on device
> >>
> >>> [ravi@tuxpad glusterfs]$ ./rfc.sh [detached HEAD a59646a] afr:
> >>> honour selfheal enable/disable volume set options Date: Sat May 30
> >>> 10:23:33 2015 +0530 3 files changed, 108 insertions(+), 4
> >>> deletions(-) create mode 100644 tests/basic/afr/client-side-heal.t
> >>> Successfully rebased and updated
> >>> refs/heads/3.6_honour_heal_options. Counting objects: 11, done.
> >>> Delta compression using up to 4 threads. Compressing objects: 100%
> >>> (11/11), done. Writing objects: 100% (11/11), 1.77 KiB | 0 bytes/s,
> >>> done. Total 11 (delta 9), reused 0 (delta 0) *error: unpack failed:
> >>> error No space left on device** **fatal: Unpack error, check server
> >>> log* To ssh://itisr...@git.gluster.org/glusterfs.git ! [remote
> >>> rejected] HEAD -> refs/for/release-3.6/bug-1230259 (n/a (unpacker
> >>> error)) error: failed to push some refs to
> >>> 'ssh://itisr...@git.gluster.org/glusterfs.git' [ravi@tuxpad
> >>> glusterfs]$
> >>>
> >>>
> >>> [ravi@tuxpad glusterfs]$ ./rfc.sh [detached HEAD 8b28efd] afr:
> >>> honour selfheal enable/disable volume set options Date: Sat May 30
> >>> 10:23:33 2015 +0530 3 files changed, 108 insertions(+), 4
> >>> deletions(-) create mode 100644 tests/basic/afr/client-side-heal.t
> >>> Successfully rebased and updated
> >>> refs/heads/3.6_honour_heal_options. *fatal: internal server
> >>> error** **fatal: Could not read from remote repository.** **
> >>> **Please make sure you have the correct access rights** **and the
> >>> repository exists.*
> >>>
> >>>
> >>> Anybody else facing problems? -Ravi
> >>>
> >>>
> >>>
> >>> ___ Gluster-devel
> >>> mailing list gluster-de...@gluster.org
> >>> http://www.gluster.org/mailman/listinfo/gluster-devel
> >>>
> >> ___
> >> Gluster-devel mailing list
> >> gluster-de...@gluster.org
> >> http://www.gluster.org/mailman/listinfo/gluster-devel
> >
> > ___
> > Gluster-devel mailing list
> > gluster-de...@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> 
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 