Re: [Gluster-devel] Want more spurious regression failure alerts... ?

2014-06-17 Thread Sachin Pandit
One more spurious failure.

./tests/bugs/bug-1038598.t  (Wstat: 0 Tests: 28 Failed: 1)
  Failed test:  28
Files=237, Tests=4632, 4619 wallclock secs ( 2.13 usr  1.48 sys + 832.41 cusr 697.97 csys = 1533.99 CPU)
Result: FAIL

Patch : http://review.gluster.org/#/c/8060/
Build URL : http://build.gluster.org/job/rackspace-regression-2GB/186/consoleFull

~ Sachin.


- Original Message -
From: Justin Clift jus...@gluster.org
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Sunday, June 15, 2014 3:55:05 PM
Subject: Re: [Gluster-devel] Want more spurious regression failure alerts...
?

On 15/06/2014, at 3:36 AM, Pranith Kumar Karampuri wrote:
 On 06/13/2014 06:41 PM, Justin Clift wrote:
 Hi Pranith,
 
 Do you want me to keep sending you spurious regression failure
 notification?
 
 There's a fair few of them isn't there?
 I am doing one run on my VM. I will get back with the ones that fail on my 
 VM. You can also do the same on your machine.

Cool, that should help. :)

These are the spurious failures found when running the rackspace-regression-2G
tests over Friday and yesterday:

  * bug-859581.t -- SPURIOUS
    * 4846 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140614:14:33:41.tgz
    * 6009 - http://slave20.cloud.gluster.org/logs/glusterfs-logs-20140613:20:24:58.tgz
    * 6652 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140613:22:04:16.tgz
    * 7796 - http://slave20.cloud.gluster.org/logs/glusterfs-logs-20140614:14:22:53.tgz
    * 7987 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140613:15:21:04.tgz
    * 7992 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:20:21:15.tgz
    * 8014 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:20:39:01.tgz
    * 8054 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:13:15:50.tgz
    * 8062 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:13:28:48.tgz

  * mgmt_v3-locks.t -- SPURIOUS
    * 6483 - build.gluster.org - http://build.gluster.org/job/regression/4847/consoleFull
    * 6630 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140614:15:42:39.tgz
    * 6946 - http://slave21.cloud.gluster.org/logs/glusterfs-logs-20140613:20:57:27.tgz
    * 7392 - http://slave21.cloud.gluster.org/logs/glusterfs-logs-20140613:13:57:20.tgz
    * 7852 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:19:23:17.tgz
    * 8014 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:20:39:01.tgz
    * 8015 - http://slave23.cloud.gluster.org/logs/glusterfs-logs-20140613:14:26:01.tgz
    * 8048 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:18:13:07.tgz

  * bug-918437-sh-mtime.t -- SPURIOUS
    * 6459 - http://slave21.cloud.gluster.org/logs/glusterfs-logs-20140614:18:28:43.tgz
    * 7493 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140613:10:30:16.tgz
    * 7987 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:14:23:02.tgz
    * 7992 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:20:21:15.tgz

  * fops-sanity.t -- SPURIOUS
    * 8014 - http://slave20.cloud.gluster.org/logs/glusterfs-logs-20140613:18:18:33.tgz
    * 8066 - http://slave20.cloud.gluster.org/logs/glusterfs-logs-20140614:21:35:57.tgz

  * bug-857330/xml.t -- SPURIOUS
    * 7523 - logs may (?) be hard to parse due to other failure data for this CR in them
    * 8029 - http://slave23.cloud.gluster.org/logs/glusterfs-logs-20140613:16:46:03.tgz

If we resolve these five, our regression testing should be a *lot* more
predictable. :)

Text file (attached to this email) has the bulk test results.  Manually
cut-n-pasted from browser to the text doc, so be wary of possible typos. ;)


 Give the output of: for i in `cat problematic-ones.txt`; do echo $i $(git log $i | grep Author | tail -1); done
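For reference, the one-liner above can be written out more robustly as below. This is a hedged sketch, not from the thread: the `--format` string and the quoting are my additions, and it assumes problematic-ones.txt lists one test path per line.

```shell
# For each problematic test file, print the file name and the author of its
# oldest commit touching it (git log lists commits newest-first, so
# `tail -n 1` picks the earliest line, matching the original
# `grep Author | tail -1`). Quoting protects against odd path names.
while read -r t; do
    printf '%s %s\n' "$t" "$(git log --format='Author: %an <%ae>' -- "$t" | tail -n 1)"
done < problematic-ones.txt
```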
 
 Maybe we should make 1 BZ for the lot, and attach the logs
 to that BZ for later analysis?
 I am already using 1092850 for this.

Good info. :)

+ Justin



--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Want more spurious regression failure alerts... ?

2014-06-15 Thread Pranith Kumar Karampuri


On 06/15/2014 03:55 PM, Justin Clift wrote:

On 15/06/2014, at 3:36 AM, Pranith Kumar Karampuri wrote:

On 06/13/2014 06:41 PM, Justin Clift wrote:

Hi Pranith,

Do you want me to keep sending you spurious regression failure
notification?

There's a fair few of them isn't there?

I am doing one run on my VM. I will get back with the ones that fail on my VM. 
You can also do the same on your machine.

Cool, that should help. :)

These are the spurious failures found when running the rackspace-regression-2G
tests over Friday and yesterday:

   * bug-859581.t -- SPURIOUS
 * 4846 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140614:14:33:41.tgz
 * 6009 - http://slave20.cloud.gluster.org/logs/glusterfs-logs-20140613:20:24:58.tgz
 * 6652 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140613:22:04:16.tgz
 * 7796 - http://slave20.cloud.gluster.org/logs/glusterfs-logs-20140614:14:22:53.tgz
 * 7987 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140613:15:21:04.tgz
 * 7992 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:20:21:15.tgz
 * 8014 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:20:39:01.tgz
 * 8054 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:13:15:50.tgz
 * 8062 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:13:28:48.tgz

Xavi,
 Please review http://review.gluster.org/8069



   * mgmt_v3-locks.t -- SPURIOUS
 * 6483 - build.gluster.org - http://build.gluster.org/job/regression/4847/consoleFull
 * 6630 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140614:15:42:39.tgz
 * 6946 - http://slave21.cloud.gluster.org/logs/glusterfs-logs-20140613:20:57:27.tgz
 * 7392 - http://slave21.cloud.gluster.org/logs/glusterfs-logs-20140613:13:57:20.tgz
 * 7852 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:19:23:17.tgz
 * 8014 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:20:39:01.tgz
 * 8015 - http://slave23.cloud.gluster.org/logs/glusterfs-logs-20140613:14:26:01.tgz
 * 8048 - http://slave24.cloud.gluster.org/logs/glusterfs-logs-20140613:18:13:07.tgz

Avra,
 Could you take a look.



   * bug-918437-sh-mtime.t -- SPURIOUS
 * 6459 - http://slave21.cloud.gluster.org/logs/glusterfs-logs-20140614:18:28:43.tgz
 * 7493 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140613:10:30:16.tgz
 * 7987 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:14:23:02.tgz
 * 7992 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:20:21:15.tgz

Vijay, Could you review and merge http://review.gluster.com/8068


   * fops-sanity.t -- SPURIOUS
 * 8014 - http://slave20.cloud.gluster.org/logs/glusterfs-logs-20140613:18:18:33.tgz
 * 8066 - http://slave20.cloud.gluster.org/logs/glusterfs-logs-20140614:21:35:57.tgz

Still trying to figure this one out. May take a while.


   * bug-857330/xml.t -- SPURIOUS
 * 7523 - logs may (?) be hard to parse due to other failure data for this CR in them
 * 8029 - http://slave23.cloud.gluster.org/logs/glusterfs-logs-20140613:16:46:03.tgz

Kaushal,
  Do you want to change the regression test to expect failures in 
commands executed by EXPECT_WITHIN? That is, if the command it executes 
fails, have it give different output than the one EXPECT_WITHIN expects. 
I fixed quite a few 'heal full'-based spurious failures where the test 
waits for 'cat some-file' to give some output, but by the time 
EXPECT_WITHIN executes 'cat' the file hasn't even been created yet. I 
guess even normal.t would benefit from this change?
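The retry pattern Pranith describes can be sketched as a small shell helper. This is an illustrative stand-in, not the actual EXPECT_WITHIN from the Gluster test framework; the function name, sentinel value, and 1-second poll interval are my assumptions.

```shell
# expect_within_sketch TIMEOUT EXPECTED CMD...
# Polls CMD until it prints EXPECTED or TIMEOUT seconds pass. A failing
# command (e.g. `cat` on a file that hasn't been created yet) is mapped to
# a sentinel value instead of aborting, so the loop simply retries.
expect_within_sketch () {
    local timeout=$1 expected=$2 got deadline
    shift 2
    deadline=$(( $(date +%s) + timeout ))
    while [ "$(date +%s)" -le "$deadline" ]; do
        got=$("$@" 2>/dev/null) || got='<command-failed>'
        [ "$got" = "$expected" ] && return 0
        sleep 1
    done
    echo "expected '$expected', last got '$got'" >&2
    return 1
}
```

With this shape, a test waiting on 'cat some-file' keeps polling through the window in which the file does not exist yet, instead of failing spuriously on the first attempt.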


Pranith


If we resolve these five, our regression testing should be a *lot* more
predictable. :)

Text file (attached to this email) has the bulk test results.  Manually
cut-n-pasted from browser to the text doc, so be wary of possible typos. ;)



Give the output of: for i in `cat problematic-ones.txt`; do echo $i $(git log $i | grep Author | tail -1); done

Maybe we should make 1 BZ for the lot, and attach the logs
to that BZ for later analysis?

I am already using 1092850 for this.

Good info. :)

+ Justin





Re: [Gluster-devel] Want more spurious regression failure alerts... ?

2014-06-15 Thread Vijay Bellur

On 06/15/2014 04:42 PM, Pranith Kumar Karampuri wrote:

   * bug-918437-sh-mtime.t -- SPURIOUS
 * 6459 - http://slave21.cloud.gluster.org/logs/glusterfs-logs-20140614:18:28:43.tgz
 * 7493 - http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140613:10:30:16.tgz
 * 7987 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:14:23:02.tgz
 * 7992 - http://slave10.cloud.gluster.org/logs/glusterfs-logs-20140613:20:21:15.tgz

Vijay, Could you review and merge http://review.gluster.com/8068




Reviewed and merged, thanks.

-Vijay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Want more spurious regression failure alerts... ?

2014-06-14 Thread Pranith Kumar Karampuri


On 06/13/2014 06:41 PM, Justin Clift wrote:

Hi Pranith,

Do you want me to keep sending you spurious regression failure
notification?

There's a fair few of them, isn't there?

I am doing one run on my VM. I will get back with the ones that fail on 
my VM. You can also do the same on your machine.


Give the output of: for i in `cat problematic-ones.txt`; do echo $i $(git log $i | grep Author | tail -1); done


Maybe we should make 1 BZ for the lot, and attach the logs
to that BZ for later analysis?

I am already using 1092850 for this.


+ Justin



[Gluster-devel] Want more spurious regression failure alerts... ?

2014-06-13 Thread Justin Clift
Hi Pranith,

Do you want me to keep sending you spurious regression failure
notification?

There's a fair few of them, isn't there?

Maybe we should make 1 BZ for the lot, and attach the logs
to that BZ for later analysis?

+ Justin
