[Gluster-devel] New Defects reported by Coverity Scan for gluster/glusterfs

2016-07-20 Thread scan-admin

Hi,

Please find the latest report on new defect(s) introduced to gluster/glusterfs 
found with Coverity Scan.

22 new defect(s) introduced to gluster/glusterfs found with Coverity Scan.
27 defect(s), reported by Coverity Scan earlier, were marked fixed in the 
recent build analyzed by Coverity Scan.

New defect(s) Reported-by: Coverity Scan
Showing 20 of 22 defect(s)


** CID 1357876:  Memory - illegal accesses  (USE_AFTER_FREE)
/home/vijay/workspace/glusterfs/glusterfs/rpc/rpc-lib/src/rpc-transport.c: 680 
in rpc_transport_inet_options_build()



*** CID 1357876:  Memory - illegal accesses  (USE_AFTER_FREE)
/home/vijay/workspace/glusterfs/glusterfs/rpc/rpc-lib/src/rpc-transport.c: 680 
in rpc_transport_inet_options_build()
674 goto out;
675 }
676 
677 ret = dict_set_dynstr (dict, "remote-host", host);
678 if (ret) {
679 GF_FREE (host);
>>> CID 1357876:  Memory - illegal accesses  (USE_AFTER_FREE)
>>> Passing freed pointer "host" as an argument to "_gf_log".
680 gf_log (THIS->name, GF_LOG_WARNING,
681 "failed to set remote-host with %s", host);
682 goto out;
683 }
684 
685 ret = dict_set_int32 (dict, "remote-port", port);

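One way to resolve this defect (a sketch only, based on the snippet above and not necessarily the fix merged upstream) is to emit the warning while "host" is still valid and free it afterwards:

    ret = dict_set_dynstr (dict, "remote-host", host);
    if (ret) {
            /* log first, while "host" is still a valid pointer ... */
            gf_log (THIS->name, GF_LOG_WARNING,
                    "failed to set remote-host with %s", host);
            /* ... then release it */
            GF_FREE (host);
            goto out;
    }
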
** CID 1357875:  Code maintainability issues  (UNUSED_VALUE)
/xlators/experimental/jbr-server/src/jbr-cg.c: 667 in jbr_lk_perform_local_op()



*** CID 1357875:  Code maintainability issues  (UNUSED_VALUE)
/xlators/experimental/jbr-server/src/jbr-cg.c: 667 in jbr_lk_perform_local_op()
661 goto out;
662 } else {
663 list_add_tail(>qlinks, 
>aqueue);
664 ++(ictx->active);
665 }
666 UNLOCK(>lock);
>>> CID 1357875:  Code maintainability issues  (UNUSED_VALUE)
>>> Assigning value from "jbr_perform_lk_on_leader(frame, this, fd, cmd, 
>>> flock, xdata)" to "ret" here, but that stored value is overwritten before 
>>> it can be used.
667 ret = jbr_perform_lk_on_leader (frame, this, fd, cmd,
668 flock, xdata);
669 }
670 
671 ret = 0;
672 out:

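A sketch of one possible remedy (again based only on the snippet above, not on the actual upstream fix): propagate the status returned by jbr_perform_lk_on_leader() instead of unconditionally overwriting it with 0.

            ret = jbr_perform_lk_on_leader (frame, this, fd, cmd,
                                            flock, xdata);
            if (ret)
                    /* keep the leader call's failure status instead of
                     * silently discarding it */
                    goto out;
    }

    ret = 0;
    out:
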
** CID 1357874:  Insecure data handling  (TAINTED_SCALAR)
/home/vijay/workspace/glusterfs/glusterfs/xlators/mgmt/glusterd/src/glusterd-geo-rep.c:
 819 in _fcbk_statustostruct()



*** CID 1357874:  Insecure data handling  (TAINTED_SCALAR)
/home/vijay/workspace/glusterfs/glusterfs/xlators/mgmt/glusterd/src/glusterd-geo-rep.c:
 819 in _fcbk_statustostruct()
813 while (isspace (*v))
814 v++;
815 v = gf_strdup (v);
816 if (!v)
817 return -1;
818 
>>> CID 1357874:  Insecure data handling  (TAINTED_SCALAR)
>>> Assigning: "k" = "gf_strdup", which taints "k".
819 k = gf_strdup (resbuf);
820 if (!k) {
821 GF_FREE (v);
822 return -1;
823 }
824 

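Tainted-scalar reports like this one are normally silenced by validating the externally supplied buffer before it is duplicated and reused. The check below is purely illustrative (PATH_MAX is an arbitrary placeholder limit, not taken from the original code):

    /* hypothetical sanity check: reject unreasonably long keys coming
     * from the external status output before duplicating them */
    if (strnlen (resbuf, PATH_MAX) >= PATH_MAX) {
            GF_FREE (v);
            return -1;
    }

    k = gf_strdup (resbuf);
    if (!k) {
            GF_FREE (v);
            return -1;
    }
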
** CID 1357873:  Security best practices violations  (STRING_OVERFLOW)
/home/vijay/workspace/glusterfs/glusterfs/xlators/mgmt/glusterd/src/glusterd-volume-ops.c:
 2159 in glusterd_op_create_volume()



*** CID 1357873:  Security best practices violations  (STRING_OVERFLOW)
/home/vijay/workspace/glusterfs/glusterfs/xlators/mgmt/glusterd/src/glusterd-volume-ops.c:
 2159 in glusterd_op_create_volume()
2153 if (ret) {
2154 gf_msg (this->name, GF_LOG_ERROR, 0,
2155 GD_MSG_DICT_GET_FAILED, "Unable to get volume 
name");
2156 goto out;
2157 }
2158 
>>> CID 1357873:  Security best practices violations  (STRING_OVERFLOW)
>>> You might overrun the 261 byte fixed-size string "volinfo->volname" by 
>>> copying "volname" without checking the length.
2159 strncpy (volinfo->volname, volname, strlen (volname));
2160 GF_ASSERT (volinfo->volname);
2161 
2162 ret = dict_get_int32 (dict, "type", >type);
2163 if (ret) {
2164 gf_msg (this->name, GF_LOG_ERROR, 0,

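This defect and CID 1357872 below share the same pattern: strncpy() bounded by the source length, which neither limits the copy to the destination buffer nor guarantees NUL termination. A minimal sketch of the usual remedy, bounding by the destination instead (assuming silent truncation is acceptable here):

    /* bound the copy by the destination, not the source, and keep the
     * result NUL-terminated */
    strncpy (volinfo->volname, volname, sizeof (volinfo->volname) - 1);
    volinfo->volname[sizeof (volinfo->volname) - 1] = '\0';

    /* an equivalent alternative:
     * snprintf (volinfo->volname, sizeof (volinfo->volname), "%s", volname);
     */
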
** CID 1357872:  Security best practices violations  (STRING_OVERFLOW)
/home/vijay/workspace/glusterfs/glusterfs/xlators/mgmt/glusterd/src/glusterd-utils.c:
 3454 in glusterd_import_volinfo()

[Gluster-devel] Fwd: New Defects reported by Coverity Scan for gluster/glusterfs

2016-07-20 Thread Atin Mukherjee
Hi,

Please find the latest report on new defect(s) introduced to
gluster/glusterfs found with Coverity Scan.

22 new defect(s) introduced to gluster/glusterfs found with Coverity Scan.
27 defect(s), reported by Coverity Scan earlier, were marked fixed in the
recent build analyzed by Coverity Scan.

New defect(s) Reported-by: Coverity Scan
Showing 20 of 22 defect(s)

** CID 1357872:  Security best practices violations  (STRING_OVERFLOW)
/home/vijay/workspace/glusterfs/glusterfs/xlators/mgmt/glusterd/src/glusterd-utils.c:
 3454 in glusterd_import_volinfo()



*** CID 1357872:  Security best practices violations  (STRING_OVERFLOW)
/home/vijay/workspace/glusterfs/glusterfs/xlators/mgmt/glusterd/src/glusterd-utils.c:
 3454 in glusterd_import_volinfo()
3448 goto out;
3449 }
3450 
3451 ret = glusterd_volinfo_new (_volinfo);
3452 if (ret)
3453 goto out;
>>> CID 1357872:  Security best practices violations  (STRING_OVERFLOW)
>>> You might overrun the 261 byte fixed-size string "new_volinfo->volname" by 
>>> copying "volname" without checking the length.
3454 strncpy (new_volinfo->volname, volname, strlen (volname));
3455 
3456 memset (key, 0, sizeof (key));
3457 snprintf (key, sizeof (key), "%s%d.type", prefix, count);
3458 ret = dict_get_int32 (peer_data, key, _volinfo->type);
3459 if (ret) {

** CID 1357871: (RESOURCE_LEAK)
/xlators/experimental/jbr-server/src/jbr-cg.c: 10664 in jbr_open_term()
/xlators/experimental/jbr-server/src/jbr-cg.c: 10668 in 

Re: [Gluster-devel] Question on merging zfs snapshot support into the mainline glusterfs

2016-07-20 Thread Vijay Bellur

On 07/19/2016 11:01 AM, Atin Mukherjee wrote:



On Tue, Jul 19, 2016 at 7:29 PM, Rajesh Joseph wrote:



On Tue, Jul 19, 2016 at 11:23 AM, Sriram wrote:

Hi Rajesh,

I'd thought about moving the zfs-specific implementation to
something like

xlators/mgmt/glusterd/src/plugins/zfs-specifs-stuffs for the
initial go. Could you let me know if this works or is in sync with
what you'd thought about?

Sriram


Hi Sriram,

Sorry, I was not able to spend much time on this. I would prefer you
move the code to

xlators/mgmt/glusterd/plugins/src/zfs-specifs-stuffs



How about having it under
xlators/mgmt/glusterd/plugins/snapshot/src/zfs-specifs-stuffs, so that
if we have to write plugins for other features in future, they can be
segregated?



It would be nicer to avoid "specific-stuff" or similar in the naming. 
We can probably leave it at 
xlators/mgmt/glusterd/plugins/snapshot/src/zfs. The naming would be 
sufficient to indicate that the code is specific to zfs snapshots.


-Vijay
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] tier: breaking down the monolith processing function

2016-07-20 Thread Vijay Bellur

On 07/19/2016 07:54 AM, Milind Changire wrote:

I've attempted to break the tier_migrate_using_query_file() function
into relatively smaller functions. The important one is
tier_migrate_link().




Can tier_migrate_link() be broken down further? Having more than 80-100 
LOC in a function normally looks excessive to me.


Thanks,
Vijay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Proposing a framework to leverage existing Python unit test standards for our testing

2016-07-20 Thread Vijay Bellur

On 07/20/2016 08:43 AM, Kaushal M wrote:

On Wed, Jul 20, 2016 at 1:41 PM, Jonathan Holloway  wrote:

Hi Gluster-Devel,

There's been some conversation about standard Python unit test formats (PyUnit, 
PyTest, Nose) and potentially leveraging a tool I've been working on (called 
Glusto) that wraps those standards as well as covers the fundamentals required 
of the DiSTAF framework. I'm reaching out to propose this to the Gluster-Devel 
Community for consideration.


Finally! I'd been waiting forever, wondering when we would start discussing
this in the community. Thanks for starting this, Jonathan.



Some of the primary features Glusto offers are:
- Reads and writes yaml, json, and ini config file formats (including Ansible 
host files).
- Provides SSH, RPyC, logging (w/ ANSI color support), configuration, 
templating (via Jinja), and simple REST methods.
- Implements cartesian product combinations with standard PyUnit class format 
for the Gluster runs_on_volumes/runs_on_mounts/reuse-setup requirements.
- Wraps the Python standard framework modules (PyUnit, PyTest, Nose) in a 
single command with a config file option.
- Tests can also be run from the CLI, IDLE, or unittest savvy tools (e.g., 
Eclipse PyDev).
- Glusto methods can also be used from IDLE for troubleshooting during 
development--as well as in scripts.
- Allows for leveraging existing unit test features such as skip decorators, 
pytest markers, etc.



These all seem really good! I particularly like the idea of having the
ability to use standard python test frameworks.

Glusto is something that the DiSTAF core would have become; it just already exists now.
The work done to get test generation working (cartesian products)
also shows that it's flexible.

Glusto + DiSTAF libs seem to me like they will be a good combination.


Agree here.





I know this was a brief and high-level intro to Glusto. This is just to get the 
topic started, and we can cover details in discussion.


Having a demo of glusto would be nice. Even a recorded demo would be
good.



+1. Can we schedule a demo of glusto over a hangout or bluejeans for the 
community?


Thank you for posting about glusto. I look forward to checking out its 
capabilities.


Regards,
Vijay
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Gluster weekly community meeting minutes 20-Jul-2016

2016-07-20 Thread Kaleb S. KEITHLEY
Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-20/gluster_community_weekly_meeting.2016-07-20-12.00.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-20/gluster_community_weekly_meeting.2016-07-20-12.00.txt
Log:
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-20/gluster_community_weekly_meeting.2016-07-20-12.00.log.html

Next week's meeting will be held at 12:00 UTC on 27 July 2016 in
#gluster-meeting on freenode. See you all next week.

===
#gluster-meeting: Community Meeting
===


Meeting started by kkeithley at 12:00:27 UTC. The full logs are
available at
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-13/community_meeting.2016-07-13-12.00.log.html
.



Meeting summary
---
* roll call  (kkeithley, 12:00:56)

* GlusterFS 4.0  (kkeithley, 12:03:50)

* next week's host  (kkeithley, 12:04:27)

* GlusterFS 4.0  (kkeithley, 12:07:17)

* GlusterFS 3.9  (kkeithley, 12:11:42)

* GlusterFS 3.8  (kkeithley, 12:14:48)
  * LINK:
https://download.gluster.org/pub/gluster/glusterfs/download-stats.html
(kkeithley, 12:17:04)

* GlusterFS 3.7  (kkeithley, 12:17:51)
  * ACTION: kshlm and ndevos to respond to
http://www.gluster.org/pipermail/maintainers/2016-July/001063.html
(kkeithley, 12:23:49)
  * problem with the order in which packages are installed: the geo-rep
package is installed after the server package, but the server calls
gsyncd provided by geo-rep, resulting in an older version of the
binary being used.  (kkeithley, 12:30:11)

* next week's meeting chair  (kkeithley, 12:31:19)

* GlusterFS 3.6  (kkeithley, 12:34:45)

* Infrastructure  (kkeithley, 12:37:42)

* NFS-Ganesha  (kkeithley, 12:42:08)

* Samba  (kkeithley, 12:42:51)

* AIs from last week  (kkeithley, 12:44:00)
  * ACTION: kshlm, csim to chat with nigelb about setting up faux/pseudo
user email for gerrit, bugzilla, github  (kkeithley, 12:47:43)
  * ACTION: rastar to look at 3.6 build failures on BSD  (kkeithley,
12:48:32)
  * ACTION: kshlm will start a mailing list discussion on EOLing 3.6
(kkeithley, 12:49:58)
  * ACTION: kshlm to setup GD2 CI on centos-ci  (kkeithley, 12:53:02)

* chair for next week's meeting  (kkeithley, 12:53:17)

* Open Floor  (kkeithley, 12:55:20)
  * IDEA: quick summary of our release - what went well, what we can
improve, what we did improve this time.  (kkeithley, 12:59:30)

Meeting ended at 13:00:55 UTC.




Action Items

* kshlm and ndevos to respond to
  http://www.gluster.org/pipermail/maintainers/2016-July/001063.html
* kshlm, csim to chat with nigelb about setting up faux/pseudo user
  email for gerrit, bugzilla, github
* rastar to look at 3.6 build failures on BSD
* kshlm will start a mailing list discussion on EOLing 3.6
* kshlm to setup GD2 CI on centos-ci




Action Items, by person
---
* nigelb
  * kshlm, csim to chat with nigelb about setting up faux/pseudo user
email for gerrit, bugzilla, github
* rastar
  * rastar to look at 3.6 build failures on BSD
* **UNASSIGNED**
  * kshlm and ndevos to respond to
http://www.gluster.org/pipermail/maintainers/2016-July/001063.html
  * kshlm will start a mailing list discussion on EOLing 3.6
  * kshlm to setup GD2 CI on centos-ci




People Present (lines said)
---
* kkeithley (113)
* nigelb (16)
* post-factum (16)
* atinm (8)
* jdarcy (6)
* kotreshhr (6)
* aravindavk (6)
* rastar (6)
* partner (4)
* skoduri (3)
* zodbot (3)
* ira (2)
* msvbhat (1)
* Saravanakmr (1)
* karthik_ (1)
* ramky (1)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Gluster's Public Social Media Accounts

2016-07-20 Thread Vijay Bellur

On 07/20/2016 01:04 PM, Michael Scherer wrote:

On Wednesday, 20 July 2016 at 12:12 -0400, Vijay Bellur wrote:

On 07/19/2016 10:57 AM, Brian Proffitt wrote:



On Tue, Jul 19, 2016 at 9:36 AM, Vijay Bellur wrote:

[snip]



Do we have a process for notifying about URLs, articles, etc.
that could be on these social media forums?


What process would that be, beyond planet + already putting it on
twitter for personal accounts? BKP can weigh in if other communities do
this differently, but that's part of why we have planet.gluster.org



One possible technique would be what the Fedora community does: they set
up a mailing list just for sharing social media
links. People send what they find in, and this gets posted to the
appropriate social media account, if the content is relevant.



sounds like a good idea to me!

misc - can you please create a new mailing list "gluster-social-media"
with Amye, Brian, Me and Jeff as the initial members? Anybody who is
interested would be welcome to join!


nigel would say "open a bug report", but he is not here so I say it :).



Before nigel chimes in with that, the bug report is at [1] :).

Thanks!
Vijay

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1358447

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Gluster's Public Social Media Accounts

2016-07-20 Thread Vijay Bellur

On 07/19/2016 10:57 AM, Brian Proffitt wrote:



On Tue, Jul 19, 2016 at 9:36 AM, Vijay Bellur wrote:

[snip]



Do we have a process for notifying about URLs, articles, etc.
that could be on these social media forums?


What process would that be, beyond planet + already putting it on
twitter for personal accounts? BKP can weigh in if other communities do
this differently, but that's part of why we have planet.gluster.org



One possible technique would be what the Fedora community does: they set
up a mailing list just for sharing social media
links. People send what they find in, and this gets posted to the
appropriate social media account, if the content is relevant.



sounds like a good idea to me!

misc - can you please create a new mailing list "gluster-social-media" 
with Amye, Brian, Me and Jeff as the initial members? Anybody who is 
interested would be welcome to join!


-Vijay


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] NetBSD machine job and machine changes

2016-07-20 Thread Nigel Babu
Hello folks,

As you may know, the NetBSD machines have been having infra failures for
a while now. A lot of the failures were around disk space issues. Emmanuel has
pointed out that our NetBSD machines have about 24 GB of unpartitioned space.
I've taken a few machines, created an "f" partition from that space, and mounted it at
`/data`. I've created symlinks for `/build` and `/archives` pointing to
folders inside `/data`. This should give us plenty of space. I'll soon have
a cleanup script run at the start of every job. The following machines have had
this done. If you notice any issues with them, please let me know:

nbslave71.cloud.gluster.org
nbslave72.cloud.gluster.org
nbslave74.cloud.gluster.org
nbslave75.cloud.gluster.org
nbslave77.cloud.gluster.org
nbslave7g.cloud.gluster.org

If you're curious to know more, you can find all my notes on bug 1351626.

--
nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NetBSD machine job and machine changes

2016-07-20 Thread Jeff Darcy
> As you may know, the NetBSD machines have been having infra failures for
> a while now. A lot of the failures were around disk space issues. Emmanuel
> has
> pointed out that our NetBSD machines have about 24 GB of unpartitioned space.
> I've taken a few machines, created an "f" partition from that space, and mounted it at
> `/data`. I've created symlinks for `/build` and `/archives` pointing to
> folders inside `/data`. This should give us plenty of space. I'll soon have
> a cleanup script run at the start of every job. The following machines have had
> this done. If you notice any issues with them, please let me know:
> 
> nbslave71.cloud.gluster.org
> nbslave72.cloud.gluster.org
> nbslave74.cloud.gluster.org
> nbslave75.cloud.gluster.org
> nbslave77.cloud.gluster.org
> nbslave7g.cloud.gluster.org
> 
> If you're curious to know more, you can find all my notes on bug 1351626.

Thanks, Nigel.  Great job.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] ./tests/basic/afr/split-brain-favorite-child-policy.t regression failure on NetBSD

2016-07-20 Thread Pranith Kumar Karampuri
On Mon, Jul 18, 2016 at 4:18 PM, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:

> Hi,
>
> The above mentioned test has failed for the patch
> http://review.gluster.org/#/c/14927/1
> and is not related to my patch. Can someone from AFR team look into it?
>
>
> https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/18132/console


The logs are removed now. But at least from the log of the run, one
possibility for this is that the option didn't take effect in shd by the time
"gluster volume heal" was executed. I need to discuss some things with Ravi
about this; I will send a patch for this tomorrow. Thanks for the
notification, Kotresh.


>
> Thanks and Regards,
> Kotresh H R
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] GlusterFS-3.7.13 released!

2016-07-20 Thread Kaushal M
Apologies for the late announcement.

GlusterFS-3.7.13 has been released. This release fixes 2 serious
libgfapi bugs and several other bugs. The release notes can be found
at [1].

The source tarball and prebuilt packages can be downloaded from [2].

Please report any bugs found using [3].

Thanks,
Kaushal

[1] 
https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.13.md
[2] https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.13/
[3] https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS=3.7.13
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Proposing a framework to leverage existing Python unit test standards for our testing

2016-07-20 Thread Kaushal M
On Wed, Jul 20, 2016 at 1:41 PM, Jonathan Holloway  wrote:
> Hi Gluster-Devel,
>
> There's been some conversation about standard Python unit test formats 
> (PyUnit, PyTest, Nose) and potentially leveraging a tool I've been working on 
> (called Glusto) that wraps those standards as well as covers the fundamentals 
> required of the DiSTAF framework. I'm reaching out to propose this to the 
> Gluster-Devel Community for consideration.

Finally! I'd been waiting forever, wondering when we would start discussing
this in the community. Thanks for starting this, Jonathan.

>
> Some of the primary features Glusto offers are:
> - Reads and writes yaml, json, and ini config file formats (including Ansible 
> host files).
> - Provides SSH, RPyC, logging (w/ ANSI color support), configuration, 
> templating (via Jinja), and simple REST methods.
> - Implements cartesian product combinations with standard PyUnit class format 
> for the Gluster runs_on_volumes/runs_on_mounts/reuse-setup requirements.
> - Wraps the Python standard framework modules (PyUnit, PyTest, Nose) in a 
> single command with a config file option.
> - Tests can also be run from the CLI, IDLE, or unittest savvy tools (e.g., 
> Eclipse PyDev).
> - Glusto methods can also be used from IDLE for troubleshooting during 
> development--as well as in scripts.
> - Allows for leveraging existing unit test features such as skip decorators, 
> pytest markers, etc.


These all seem really good! I particularly like the idea of having the
ability to use standard python test frameworks.

Glusto is something that the DiSTAF core would have become; it just already exists now.
The work done to get test generation working (cartesian products)
also shows that it's flexible.

Glusto + DiSTAF libs seem to me like they will be a good combination.

>
> I know this was a brief and high-level intro to Glusto. This is just to get 
> the topic started, and we can cover details in discussion.

Having a demo of glusto would be nice. Even a recorded demo would be
good.

>
> The Glusto repo is at http://github.com/loadtheaccumulator/glusto
> Docs are at http://glusto.readthedocs.io/ (with some additional information 
> being added over the next couple of days).
>
> Please take a look and provide any questions or comments.
>
> Cheers,
> Jonathan
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Proposing a framework to leverage existing Python unit test standards for our testing

2016-07-20 Thread Jonathan Holloway
Hi Gluster-Devel,

There's been some conversation about standard Python unit test formats (PyUnit, 
PyTest, Nose) and potentially leveraging a tool I've been working on (called 
Glusto) that wraps those standards as well as covers the fundamentals required 
of the DiSTAF framework. I'm reaching out to propose this to the Gluster-Devel 
Community for consideration.

Some of the primary features Glusto offers are:
- Reads and writes yaml, json, and ini config file formats (including Ansible 
host files).
- Provides SSH, RPyC, logging (w/ ANSI color support), configuration, 
templating (via Jinja), and simple REST methods.
- Implements cartesian product combinations with standard PyUnit class format 
for the Gluster runs_on_volumes/runs_on_mounts/reuse-setup requirements.
- Wraps the Python standard framework modules (PyUnit, PyTest, Nose) in a 
single command with a config file option.
- Tests can also be run from the CLI, IDLE, or unittest savvy tools (e.g., 
Eclipse PyDev).
- Glusto methods can also be used from IDLE for troubleshooting during 
development--as well as in scripts.
- Allows for leveraging existing unit test features such as skip decorators, 
pytest markers, etc.

I know this was a brief and high-level intro to Glusto. This is just to get the 
topic started, and we can cover details in discussion.

The Glusto repo is at http://github.com/loadtheaccumulator/glusto
Docs are at http://glusto.readthedocs.io/ (with some additional information 
being added over the next couple of days).

Please take a look and provide any questions or comments.

Cheers,
Jonathan
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Regression failures in last 3 days

2016-07-20 Thread Soumya Koduri



On 07/20/2016 12:41 PM, Soumya Koduri wrote:



On 07/20/2016 12:00 PM, Soumya Koduri wrote:



On 07/20/2016 11:55 AM, Ravishankar N wrote:

On 07/20/2016 11:51 AM, Kotresh Hiremath Ravishankar wrote:

Hi,

Here is the patch for br-stub.t failures.
http://review.gluster.org/14960
Thanks Soumya for root causing this.

Thanks and Regards,
Kotresh H R



arbiter-mount.t has failed despite having this check.:-(


Hmm, right. With regard to your question posted in another mail -


All the tests seem to have failed because the NFS export is not

available (nothing wrong with the .t itself). I've CC'ed the NFS folks.
Maybe we can increase the value of NFS_EXPORT_TIMEOUT?

Increasing "NFS_EXPORT_TIMEOUT" will not help as it determines the
maximum mount of time "showmount' command should take to complete.
Probably we should either  wait/loop for "NFS_EXPORT_TIMEOUT" amount of
time till the NFS server becomes available before executing 'showmount'.



I have submitted the below patch to query the "showmount" command output in a
loop. Comments are welcome.

- http://review.gluster.org/#/c/14961/


Ah, sorry, I misinterpreted the "EXPECT_WITHIN" keyword. It seems to be 
already doing the iteration. From the arbiter test logs [1], I see that 
the NFS service is already started by the time the showmount command is issued.


[2016-07-19 13:00:40.533847] I [rpc-drc.c:689:rpcsvc_drc_init] 
0-rpc-service: DRC is turned OFF
[2016-07-19 13:00:40.533881] I [MSGID: 112110] [nfs.c:1524:init] 0-nfs: 
NFS service started

..

[2016-07-19 13:00:40.706206]:++ 
G_LOG:./tests/basic/afr/arbiter-mount.t: TEST: 18 1 
is_nfs_export_available ++


Not sure why the command would still fail. Are you able to reproduce 
this issue locally on any test machine? We can add an exit whenever this 
command fails and then examine the service.


Thanks,
Soumya


[1] 
https://build.gluster.org/job/rackspace-regression-2GB-triggered/22354/consoleFull


Thanks,
Soumya


Thanks,
Soumya


-Ravi

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Regression failures in last 3 days

2016-07-20 Thread Soumya Koduri



On 07/20/2016 12:00 PM, Soumya Koduri wrote:



On 07/20/2016 11:55 AM, Ravishankar N wrote:

On 07/20/2016 11:51 AM, Kotresh Hiremath Ravishankar wrote:

Hi,

Here is the patch for br-stub.t failures.
http://review.gluster.org/14960
Thanks Soumya for root causing this.

Thanks and Regards,
Kotresh H R



arbiter-mount.t has failed despite having this check.:-(


Hmm, right. With regard to your question posted in another mail -


All the tests seem to have failed because the NFS export is not

available (nothing wrong with the .t itself). I've CC'ed the NFS folks.
Maybe we can increase the value of NFS_EXPORT_TIMEOUT?

Increasing "NFS_EXPORT_TIMEOUT" will not help as it determines the
maximum mount of time "showmount' command should take to complete.
Probably we should either  wait/loop for "NFS_EXPORT_TIMEOUT" amount of
time till the NFS server becomes available before executing 'showmount'.



I have submitted the below patch to query the "showmount" command output in a 
loop. Comments are welcome.


- http://review.gluster.org/#/c/14961/

Thanks,
Soumya


Thanks,
Soumya


-Ravi

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Regression failures in last 3 days

2016-07-20 Thread Soumya Koduri



On 07/20/2016 11:55 AM, Ravishankar N wrote:

On 07/20/2016 11:51 AM, Kotresh Hiremath Ravishankar wrote:

Hi,

Here is the patch for br-stub.t failures.
http://review.gluster.org/14960
Thanks Soumya for root causing this.

Thanks and Regards,
Kotresh H R



arbiter-mount.t has failed despite having this check.:-(


Hmm, right. With regard to your question posted in another mail -

>>> All the tests seem to have failed because the NFS export is not 
available (nothing wrong with the .t itself). I've CC'ed the NFS folks. 
Maybe we can increase the value of NFS_EXPORT_TIMEOUT?


Increasing "NFS_EXPORT_TIMEOUT" will not help as it determines the 
maximum mount of time "showmount' command should take to complete. 
Probably we should either  wait/loop for "NFS_EXPORT_TIMEOUT" amount of 
time till the NFS server becomes available before executing 'showmount'.


Thanks,
Soumya


-Ravi

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Regression failures in last 3 days

2016-07-20 Thread Ravishankar N

On 07/20/2016 11:51 AM, Kotresh Hiremath Ravishankar wrote:

Hi,

Here is the patch for br-stub.t failures.
http://review.gluster.org/14960
Thanks Soumya for root causing this.

Thanks and Regards,
Kotresh H R



arbiter-mount.t has failed despite having this check.:-(
-Ravi
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Regression failures in last 3 days

2016-07-20 Thread Kotresh Hiremath Ravishankar
Hi,

Here is the patch for br-stub.t failures.
http://review.gluster.org/14960
Thanks Soumya for root causing this.

Thanks and Regards,
Kotresh H R

- Original Message -
> From: "Poornima Gurusiddaiah" 
> To: "Gluster Devel" , "Kotresh Hiremath 
> Ravishankar" , "Rajesh
> Joseph" , "Ravishankar N" , 
> "Ashish Pandey" 
> Sent: Wednesday, July 20, 2016 10:43:33 AM
> Subject: Regression failures in last 3 days
> 
> Hi,
> 
> Below are the list of test cases that have failed regression in the last 3
> days. Please take a look at them:
> 
> ./tests/bitrot/br-stub.t ; Failed 8 times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22356/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22355/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22340/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22325/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22322/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22316/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22313/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22293/consoleFull
> 
> ./tests/bugs/snapshot/bug-1316437.t ; Failed 6 times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22361/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22343/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22340/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22329/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22327/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22324/consoleFull
> 
> ./tests/basic/afr/arbiter-mount.t ; Failed 4 times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22354/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22353/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22311/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22306/consoleFull
> 
> ./tests/basic/ec/ec.t ; Failed 3 times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22335/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22290/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22287/consoleFull
> 
> ./tests/bugs/disperse/bug-1236065.t ; Failed 1 times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22339/consoleFull
> 
> ./tests/basic/afr/add-brick-self-heal.t ; Failed 1 times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22315/consoleFull
> 
> ./tests/basic/tier/tierd_check.t ; Failed 2 times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22299/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22296/consoleFull
> 
> ./tests/bugs/glusterd/bug-041.t ; Failed 1 times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22331/consoleFull
> 
> ./tests/bugs/glusterd/bug-1089668.t ; Failed 1 times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22304/consoleFull
> 
> ./tests/basic/ec/ec-new-entry.t ; Failed 1 times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22359/consoleFull
> 
> ./tests/basic/uss.t ; Failed 1 times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22352/consoleFull
> 
> ./tests/basic/geo-replication/marker-xattrs.t ; Failed 1 times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22337/consoleFull
> 
> ./tests/basic/bd.t ; Failed 1 times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22351/consoleFull
> 
> ./tests/bugs/bug-1110262.t ; Failed 1 times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22286/consoleFull
> 
> ./tests/performance/open-behind.t ; Failed 1 times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22284/consoleFull
> 
> ./tests/basic/quota-anon-fd-nfs.t ; Failed 

Re: [Gluster-devel] Regression failures in last 3 days

2016-07-20 Thread Sakshi Bansal


- Original Message -
From: "Atin Mukherjee" 
To: "Poornima Gurusiddaiah" , "Avra Sengupta" 
, "Sakshi Bansal" 
Cc: "Gluster Devel" , "Kotresh Hiremath Ravishankar" 
, "Rajesh Joseph" , "Ravishankar N" 
, "Ashish Pandey" 
Sent: Wednesday, July 20, 2016 11:40:04 AM
Subject: Re: [Gluster-devel] Regression failures in last 3 days

On Wed, Jul 20, 2016 at 10:43 AM, Poornima Gurusiddaiah  wrote:

> Hi,
>
> Below are the list of test cases that have failed regression in the last 3
> days. Please take a look at them:
>
> *./tests/bitrot/br-stub.t* ; Failed *8* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22356/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22355/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22340/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22325/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22322/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22316/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22313/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22293/consoleFull
>
> *./tests/bugs/snapshot/bug-1316437.t* ; Failed *6* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22361/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22343/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22340/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22329/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22327/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22324/consoleFull
>
> *./tests/basic/afr/arbiter-mount.t* ; Failed *4* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22354/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22353/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22311/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22306/consoleFull
>
> *./tests/basic/ec/ec.t* ; Failed *3* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22335/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22290/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22287/consoleFull
>
> *./tests/bugs/disperse/bug-1236065.t* ; Failed *1* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22339/consoleFull
>
> *./tests/basic/afr/add-brick-self-heal.t* ; Failed *1* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22315/consoleFull
>
> *./tests/basic/tier/tierd_check.t* ; Failed *2* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22299/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22296/consoleFull
>
> *./tests/bugs/glusterd/bug-041.t* ; Failed *1* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22331/consoleFull
>

First of all, this test needs to move to the snapshot/USS directory. Who is up
for this patch? The issue here is that glusterfsd does a fork() and hence, while the
process comes up, you may be able to see two PIDs for an interim time period,
and that's what I could see here.
@Avra/Rajesh - Is there any reason why we are checking for snapd_pids in
two different ways? Does it give any benefit?


>
> *./tests/bugs/glusterd/bug-1089668.t* ; Failed *1* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22304/consoleFull
>

> Sakshi, I believe this is similar to http://review.gluster.org/14885 ;
> there is no time window between rebalance stop and remove-brick, and hence
> the latter can fail at any time: if the rebalance process hasn't communicated
> back to glusterd by that time, remove-brick is bound to fail, isn't it?

Yes, that seems to be the issue here. I can send a patch to fix it.

>
> 

Re: [Gluster-devel] Regression failures in last 3 days

2016-07-20 Thread Ravishankar N

On 07/20/2016 10:43 AM, Poornima Gurusiddaiah wrote:

*./tests/basic/afr/arbiter-mount.t* ; Failed *4* times
Regression Links: 
https://build.gluster.org/job/rackspace-regression-2GB-triggered/22354/consoleFull
Regression Links: 
https://build.gluster.org/job/rackspace-regression-2GB-triggered/22353/consoleFull
Regression Links: 
https://build.gluster.org/job/rackspace-regression-2GB-triggered/22311/consoleFull
Regression Links: 
https://build.gluster.org/job/rackspace-regression-2GB-triggered/22306/consoleFull


All the tests seem to have failed because the NFS export is not 
available (nothing wrong with the .t itself). I've CC'ed the NFS folks. 
Maybe we can increase the value of NFS_EXPORT_TIMEOUT?


-Ravi

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Regression failures in last 3 days

2016-07-20 Thread Atin Mukherjee
On Wed, Jul 20, 2016 at 10:43 AM, Poornima Gurusiddaiah  wrote:

> Hi,
>
> Below are the list of test cases that have failed regression in the last 3
> days. Please take a look at them:
>
> *./tests/bitrot/br-stub.t* ; Failed *8* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22356/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22355/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22340/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22325/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22322/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22316/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22313/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22293/consoleFull
>
> *./tests/bugs/snapshot/bug-1316437.t* ; Failed *6* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22361/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22343/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22340/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22329/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22327/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22324/consoleFull
>
> *./tests/basic/afr/arbiter-mount.t* ; Failed *4* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22354/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22353/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22311/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22306/consoleFull
>
> *./tests/basic/ec/ec.t* ; Failed *3* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22335/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22290/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22287/consoleFull
>
> *./tests/bugs/disperse/bug-1236065.t* ; Failed *1* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22339/consoleFull
>
> *./tests/basic/afr/add-brick-self-heal.t* ; Failed *1* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22315/consoleFull
>
> *./tests/basic/tier/tierd_check.t* ; Failed *2* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22299/consoleFull
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22296/consoleFull
>
> *./tests/bugs/glusterd/bug-041.t* ; Failed *1* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22331/consoleFull
>

First of all, this test needs to move to the snapshot/USS directory. Who is up
for this patch? The issue here is that glusterfsd does a fork() and hence, while the
process comes up, you may be able to see two PIDs for an interim time period,
and that's what I could see here.
@Avra/Rajesh - Is there any reason why we are checking for snapd_pids in
two different ways? Does it give any benefit?
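
For illustration only (this is not glusterfsd's actual startup code): in a conventional daemonizing fork, both PIDs are visible to anything polling the process table during the short window between fork() and the parent's exit.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    int
    main (void)
    {
            pid_t child = fork ();

            if (child < 0)
                    exit (EXIT_FAILURE);

            if (child > 0)
                    /* parent: until this exit completes, a pgrep-style check
                     * briefly sees two PIDs for the "same" daemon */
                    exit (EXIT_SUCCESS);

            /* child: detach and carry on as the daemon */
            setsid ();
            printf ("daemon pid %d\n", (int) getpid ());
            pause ();
            return 0;
    }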


>
> *./tests/bugs/glusterd/bug-1089668.t* ; Failed *1* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22304/consoleFull
>

Sakshi, I believe this is similar to http://review.gluster.org/14885 ;
there is no time window between rebalance stop and remove-brick, and hence
the latter can fail at any time: if the rebalance process hasn't communicated
back to glusterd by that time, remove-brick is bound to fail, isn't it?

>
> *./tests/basic/ec/ec-new-entry.t* ; Failed *1* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22359/consoleFull
>
> *./tests/basic/uss.t* ; Failed* 1* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22352/consoleFull
>
> *./tests/basic/geo-replication/marker-xattrs.t* ; Failed *1* times
> Regression Links:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22337/consoleFull
>
> *./tests/basic/bd.t* ; Failed *1* times
> Regression Links:
>