Hi All,

Thanks, Justin, for the setup (slave30).

I executed the whole regression suite on slave30 multiple times.

Once, ./tests/basic/mgmt_v3-locks.t failed with a core:
 http://build.gluster.org/job/rackspace-regression-2GB-joe/12/console

Test Summary Report
-------------------
./tests/basic/mgmt_v3-locks.t                   (Wstat: 0 Tests: 14 Failed: 3)
  Failed tests:  11-13
Files=250, Tests=4897, 3968 wallclock secs ( 1.91 usr  1.41 sys + 330.06 cusr 
457.27 csys = 790.65 CPU)
Result: FAIL
+ RET=1
++ ls -l /core.20215
++ wc -l

There was a glusterd crash.

Log files and core files are available at
http://build.gluster.org/job/rackspace-regression-2GB-joe/12/console
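For reference, here is a minimal sketch of the kind of post-run core check the trace above (`ls -l /core.20215`, `wc -l`) suggests the wrapper performs. This is not the actual run-tests.sh logic; the `check_cores` helper and its messages are hypothetical:

```shell
#!/bin/sh
# Hypothetical sketch of a post-run core-file check; the real
# regression wrapper's behaviour may differ.
check_cores() {
    # Count core files dropped in the given directory during the run.
    count=$(ls "$1"/core.* 2>/dev/null | wc -l)
    if [ "$count" -gt 0 ]; then
        echo "Found $count core file(s); run should be marked FAILED."
        return 1
    fi
    echo "No core files found."
    return 0
}

# Example usage against a scratch directory (on this run, a check
# against / would have found /core.20215).
dir=$(mktemp -d)
check_cores "$dir"                          # no cores yet
touch "$dir/core.12345"
check_cores "$dir" || echo "core detected"  # simulated crash artifact
rm -rf "$dir"
```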


The very next regression run failed on bug-1112559.t with the same port
unavailability:
 http://build.gluster.org/job/rackspace-regression-2GB-joe/13/console
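One quick way to confirm this theory on the slave is to check whether the management port is still held by a stale process after the crash. A hedged sketch, assuming glusterd's default management port 24007 and the iproute2 `ss` utility being available on the VM:

```shell
#!/bin/sh
# Hypothetical debugging sketch: see whether glusterd's management port
# (24007 by default) is still bound, which would explain the "port
# unavailability" seen by the next test run.
port=24007
if ss -ltn 2>/dev/null | grep -q ":$port "; then
    echo "Port $port is still in use; a stale glusterd may be holding it."
else
    echo "Port $port is free."
fi
```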


After this I restarted slave30, executed the whole regression suite again, and 
never hit this issue.
It looks like the issue does not originate in bug-1112559.t; the failure in 
bug-1112559.t test 10 is the result of a previous failure or crash.

Regards,
Joe

----- Original Message -----
From: "Justin Clift" <jus...@gluster.org>
To: "Vijay Bellur" <vbel...@redhat.com>
Cc: "Avra Sengupta" <aseng...@redhat.com>, "Joseph Fernandes" 
<josfe...@redhat.com>, "Pranith Kumar Karampuri" <pkara...@redhat.com>, 
"Gluster Devel" <gluster-devel@gluster.org>
Sent: Thursday, July 10, 2014 8:26:49 PM
Subject: Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

On 10/07/2014, at 12:44 PM, Vijay Bellur wrote:
<snip>
> A lot of regression runs are failing because of this test unit. Given feature 
> freeze is around the corner, shall we provide a +1 verified manually for 
> those patchsets that fail this test?


Went through and did this manually, as "Gluster Build System".

Also got Joe set up so he can debug things on a Rackspace VM
to find out what's wrong.

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift
_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
