Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

2014-07-14 Thread Joseph Fernandes
Hi All,

Thanks Justin for the setup (slave30). 

Executed the whole regression suite on slave30 multiple times. 

Once there was a failure of ./tests/basic/mgmt_v3-locks.t with a core
 http://build.gluster.org/job/rackspace-regression-2GB-joe/12/console

Test Summary Report
---
./tests/basic/mgmt_v3-locks.t   (Wstat: 0 Tests: 14 Failed: 3)
  Failed tests:  11-13
Files=250, Tests=4897, 3968 wallclock secs ( 1.91 usr  1.41 sys + 330.06 cusr 
457.27 csys = 790.65 CPU)
Result: FAIL
+ RET=1
++ ls -l /core.20215
++ wc -l

There is a glusterd crash.

Log files and core files are available @ 
http://build.gluster.org/job/rackspace-regression-2GB-joe/12/console
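
For anyone picking this up, a quick way to pull a backtrace out of that core
(a sketch; it assumes the glusterd binary on the slave lives at
/usr/sbin/glusterd and that debuginfo is installed):

  $ gdb --batch -ex 'thread apply all bt' /usr/sbin/glusterd /core.20215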


And in the very next regression run, bug-1112559.t failed with the same port unavailability:
 http://build.gluster.org/job/rackspace-regression-2GB-joe/13/console


After this I restarted slave30 and executed the whole regression suite again,
and never hit this issue.
It looks like the issue does not originate in bug-1112559.t. The failure in
bug-1112559.t test 10 is the result of a previous failure or crash.
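
A simple pre-flight check before re-running the suite makes that easy to
confirm (a sketch; the core path matches the console output above):

  $ ps ax | grep '[g]luster'   # any glusterd/glusterfsd left over from the last run?
  $ ls /core.* 2>/dev/null     # any cores from an earlier crash?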

Regards,
Joe

- Original Message -
From: Justin Clift jus...@gluster.org
To: Vijay Bellur vbel...@redhat.com
Cc: Avra Sengupta aseng...@redhat.com, Joseph Fernandes 
josfe...@redhat.com, Pranith Kumar Karampuri pkara...@redhat.com, 
Gluster Devel gluster-devel@gluster.org
Sent: Thursday, July 10, 2014 8:26:49 PM
Subject: Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

On 10/07/2014, at 12:44 PM, Vijay Bellur wrote:
snip
 A lot of regression runs are failing because of this test unit. Given feature 
 freeze is around the corner, shall we provide a +1 verified manually for 
 those patchsets that fail this test?


Went through and did this manually, as Gluster Build System.

Also got Joe set up so he can debug things on a Rackspace VM
to find out what's wrong.

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

2014-07-14 Thread Justin Clift
On 14/07/2014, at 7:40 AM, Joseph Fernandes wrote:
snip
 After this I restarted slave30 and executed the whole regression suite again,
 and never hit this issue.
 It looks like the issue does not originate in bug-1112559.t. The failure in
 bug-1112559.t test 10 is the result of a previous failure or crash.


Thanks Joe, that's excellent.  Sounds like we're making
progress. :)

Git blame has Avra's name all over mgmt_v3-locks.t, so I
guess Avra will take it from here. ;)

Avra, are that core file and the logs for it useful?  It's
easy to give you a login for slave30 too, so you can
investigate at your leisure.  (useful?)

Regards and best wishes,

Justin Clift

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

2014-07-10 Thread Vijay Bellur

On 07/08/2014 01:54 PM, Avra Sengupta wrote:

In the test case, we are checking gluster snap status to see if all the
bricks are alive. One of the snap bricks fails to start up, and hence we
see the failure. The brick fails to bind, with an 'Address already in use'
error. But if we look closely, that same log also says 'binding to  failed',
where the address is missing. So it might be trying to bind to the wrong
(or empty) address.

Following are the brick logs for the same:

[2014-07-07 11:20:15.662573] I
[rpcsvc.c:2142:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service:
Configured rpc.outstanding-rpc-limit with value 64
[2014-07-07 11:20:15.662634] W [options.c:898:xl_opt_validate]
0-ad94478591fc41648c9674b10143e3d2-server: option 'listen-port' is
deprecated, preferred is 'transport.socket.listen-port', continuing with
correction
[2014-07-07 11:20:15.662758] E [socket.c:710:__socket_server_bind]
0-tcp.ad94478591fc41648c9674b10143e3d2-server: binding to  failed:
Address already in use
[2014-07-07 11:20:15.662776] E [socket.c:713:__socket_server_bind]
0-tcp.ad94478591fc41648c9674b10143e3d2-server: Port is already in use
[2014-07-07 11:20:15.662795] W [rpcsvc.c:1531:rpcsvc_transport_create]
0-rpc-service: listening on transport failed
[2014-07-07 11:20:15.662810] W [server.c:920:init]
0-ad94478591fc41648c9674b10143e3d2-server: creation of listener failed
[2014-07-07 11:20:15.662821] E [xlator.c:425:xlator_init]
0-ad94478591fc41648c9674b10143e3d2-server: Initialization of volume
'ad94478591fc41648c9674b10143e3d2-server' failed, review your volfile again
[2014-07-07 11:20:15.662836] E [graph.c:322:glusterfs_graph_init]
0-ad94478591fc41648c9674b10143e3d2-server: initializing translator failed
[2014-07-07 11:20:15.662847] E [graph.c:525:glusterfs_graph_activate]
0-graph: init failed
[2014-07-07 11:20:15.664283] W [glusterfsd.c:1182:cleanup_and_exit] (--
0-: received signum (0), shutting down

Regards,
Avra

On 07/08/2014 11:28 AM, Joseph Fernandes wrote:

Hi Pranith,

I am looking into this issue. Will keep you posted on the progress by EOD.

Regards,
~Joe

- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: josfe...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org, Rajesh Joseph
rjos...@redhat.com, Sachin Pandit span...@redhat.com,
aseng...@redhat.com
Sent: Monday, July 7, 2014 8:42:24 PM
Subject: Re: [Gluster-devel] regarding spurious failure
tests/bugs/bug-1112559.t


On 07/07/2014 06:18 PM, Pranith Kumar Karampuri wrote:

Joseph,
 Any updates on this? It failed 5 regressions today.
http://build.gluster.org/job/rackspace-regression-2GB/541/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/175/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/173/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/166/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/172/consoleFull

One more :
http://build.gluster.org/job/rackspace-regression-2GB/543/console

Pranith


CC some more folks who work on snapshot.



A lot of regression runs are failing because of this test unit. Given 
feature freeze is around the corner, shall we provide a +1 verified 
manually for those patchsets that fail this test?


-Vijay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

2014-07-10 Thread Niels de Vos
On Thu, Jul 10, 2014 at 05:14:08PM +0530, Vijay Bellur wrote:
 On 07/08/2014 01:54 PM, Avra Sengupta wrote:
 In the test case, we are checking gluster snap status to see if all the
 bricks are alive. One of the snap bricks fails to start up, and hence we
 see the failure. The brick fails to bind, with an 'Address already in use'
 error. But if we look closely, that same log also says 'binding to  failed',
 where the address is missing. So it might be trying to bind to the wrong
 (or empty) address.
 
 Following are the brick logs for the same:
 
 [2014-07-07 11:20:15.662573] I
 [rpcsvc.c:2142:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service:
 Configured rpc.outstanding-rpc-limit with value 64
 [2014-07-07 11:20:15.662634] W [options.c:898:xl_opt_validate]
 0-ad94478591fc41648c9674b10143e3d2-server: option 'listen-port' is
 deprecated, preferred is 'transport.socket.listen-port', continuing with
 correction
 [2014-07-07 11:20:15.662758] E [socket.c:710:__socket_server_bind]
 0-tcp.ad94478591fc41648c9674b10143e3d2-server: binding to  failed:
 Address already in use
 [2014-07-07 11:20:15.662776] E [socket.c:713:__socket_server_bind]
 0-tcp.ad94478591fc41648c9674b10143e3d2-server: Port is already in use
 [2014-07-07 11:20:15.662795] W [rpcsvc.c:1531:rpcsvc_transport_create]
 0-rpc-service: listening on transport failed
 [2014-07-07 11:20:15.662810] W [server.c:920:init]
 0-ad94478591fc41648c9674b10143e3d2-server: creation of listener failed
 [2014-07-07 11:20:15.662821] E [xlator.c:425:xlator_init]
 0-ad94478591fc41648c9674b10143e3d2-server: Initialization of volume
 'ad94478591fc41648c9674b10143e3d2-server' failed, review your volfile again
 [2014-07-07 11:20:15.662836] E [graph.c:322:glusterfs_graph_init]
 0-ad94478591fc41648c9674b10143e3d2-server: initializing translator failed
 [2014-07-07 11:20:15.662847] E [graph.c:525:glusterfs_graph_activate]
 0-graph: init failed
 [2014-07-07 11:20:15.664283] W [glusterfsd.c:1182:cleanup_and_exit] (--
 0-: received signum (0), shutting down
 
 Regards,
 Avra
 
 On 07/08/2014 11:28 AM, Joseph Fernandes wrote:
 Hi Pranith,
 
 I am looking into this issue. Will keep you posted on the progress by EOD.
 
 Regards,
 ~Joe
 
 - Original Message -
 From: Pranith Kumar Karampuri pkara...@redhat.com
 To: josfe...@redhat.com
 Cc: Gluster Devel gluster-devel@gluster.org, Rajesh Joseph
 rjos...@redhat.com, Sachin Pandit span...@redhat.com,
 aseng...@redhat.com
 Sent: Monday, July 7, 2014 8:42:24 PM
 Subject: Re: [Gluster-devel] regarding spurious failure
 tests/bugs/bug-1112559.t
 
 
 On 07/07/2014 06:18 PM, Pranith Kumar Karampuri wrote:
 Joseph,
  Any updates on this? It failed 5 regressions today.
 http://build.gluster.org/job/rackspace-regression-2GB/541/consoleFull
 http://build.gluster.org/job/rackspace-regression-2GB-triggered/175/consoleFull
 http://build.gluster.org/job/rackspace-regression-2GB-triggered/173/consoleFull
 http://build.gluster.org/job/rackspace-regression-2GB-triggered/166/consoleFull
 http://build.gluster.org/job/rackspace-regression-2GB-triggered/172/consoleFull
 One more :
 http://build.gluster.org/job/rackspace-regression-2GB/543/console
 
 Pranith
 
 CC some more folks who work on snapshot.
 
 
 A lot of regression runs are failing because of this test unit.
 Given feature freeze is around the corner, shall we provide a +1
 verified manually for those patchsets that fail this test?

I don't think that is easily possible. We also need to remove the -1 
verified that the Gluster Build System sets. I'm not sure how we 
should be doing that. Maybe it's better to disable (parts of) the 
test-case?

Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

2014-07-10 Thread Justin Clift
On 10/07/2014, at 1:41 PM, Niels de Vos wrote:
 On Thu, Jul 10, 2014 at 05:14:08PM +0530, Vijay Bellur wrote:
snip
 A lot of regression runs are failing because of this test unit.
 Given feature freeze is around the corner, shall we provide a +1
 verified manually for those patchsets that fail this test?
 
 I don't think that is easily possible. We also need to remove the -1 
 verified that the Gluster Build System sets. I'm not sure how we 
 should be doing that. Maybe it's better to disable (parts of) the 
 test-case?


We can set results manually as the Gluster Build System by using the
gerrit command from build.gluster.org.

Looking at the failure here:

  http://build.gluster.org/job/rackspace-regression-2GB-triggered/276/console

At the bottom, it shows this was the command run to communicate
failure:

  $ ssh bu...@review.gluster.org gerrit review --message ''\''
http://build.gluster.org/job/rackspace-regression-2GB-triggered/276/consoleFull 
: FAILED'\''' --project=glusterfs --verified=-1 --code-review=0 
d8296086ddaf7ef4a4667f5cec413d64a56fd382

So, we run the same thing from the jenkins user on build.gluster.org,
but change the result bits to +1 and SUCCESS. And a better message:

  $ sudo su - jenkins
  $ ssh bu...@review.gluster.org gerrit review --message ''\''
Ignoring previous spurious failure : SUCCESS'\''' --project=glusterfs 
--verified=+1 --code-review=0 d8296086ddaf7ef4a4667f5cec413d64a56fd382

Seems to work:

  http://review.gluster.org/#/c/8285/
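
If you want to double-check the labels from the command line rather than the
web UI, a gerrit query should work too (a sketch; same build account as above):

  $ ssh bu...@review.gluster.org gerrit query --current-patch-set 8285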

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

2014-07-10 Thread Joseph Fernandes
Hi All,

1) Tried reproducing the issue in a local setup by running the regression test
multiple times in a for loop. But the issue never hit!
2) As Avra pointed out, the logs suggest that the port (49159) assigned by
glusterd (host1) to the snap brick is already in use by some other process
(see the sketch below).
3) For the time being I can comment out the TEST that is failing, i.e. comment
out the check of the snap brick status, so the regression test doesn't block
any check-ins.
4) If we can get the Rackspace system where the regression tests are actually
run, we can reproduce and point out the root cause.
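
A minimal sketch of that loop, plus a check for whatever is squatting on the
port (assuming the tests are driven through prove, as the regression harness
does; 49159 is the port from point 2):

  $ for i in $(seq 1 20); do prove -vf tests/bugs/bug-1112559.t || break; done
  $ ss -tlnp | grep ':49159'   # which process is holding the snap brick port?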

Regards,
~Joe 

- Original Message -
From: Justin Clift jus...@gluster.org
To: Niels de Vos nde...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Thursday, July 10, 2014 6:25:16 PM
Subject: Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

On 10/07/2014, at 1:41 PM, Niels de Vos wrote:
 On Thu, Jul 10, 2014 at 05:14:08PM +0530, Vijay Bellur wrote:
snip
 A lot of regression runs are failing because of this test unit.
 Given feature freeze is around the corner, shall we provide a +1
 verified manually for those patchsets that fail this test?
 
 I don't think that is easily possible. We also need to remove the -1 
 verified that the Gluster Build System sets. I'm not sure how we 
 should be doing that. Maybe it's better to disable (parts of) the 
 test-case?


We can set results manually as the Gluster Build System by using the
gerrit command from build.gluster.org.

Looking at the failure here:

  http://build.gluster.org/job/rackspace-regression-2GB-triggered/276/console

At the bottom, it shows this was the command run to communicate
failure:

  $ ssh bu...@review.gluster.org gerrit review --message ''\''
http://build.gluster.org/job/rackspace-regression-2GB-triggered/276/consoleFull 
: FAILED'\''' --project=glusterfs --verified=-1 --code-review=0 
d8296086ddaf7ef4a4667f5cec413d64a56fd382

So, we run the same thing from the jenkins user on build.gluster.org,
but change the result bits to +1 and SUCCESS. And a better message:

  $ sudo su - jenkins
  $ ssh bu...@review.gluster.org gerrit review --message ''\''
Ignoring previous spurious failure : SUCCESS'\''' --project=glusterfs 
--verified=+1 --code-review=0 d8296086ddaf7ef4a4667f5cec413d64a56fd382

Seems to work:

  http://review.gluster.org/#/c/8285/

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

2014-07-10 Thread Justin Clift
On 10/07/2014, at 2:27 PM, Joseph Fernandes wrote:
 Hi All,
 
 1) Tried reproducing the issue in a local setup by running the regression test
 multiple times in a for loop. But the issue never hit!
 2) As Avra pointed out, the logs suggest that the port (49159) assigned by
 glusterd (host1) to the snap brick is already in use by some other process.
 3) For the time being I can comment out the TEST that is failing, i.e. comment
 out the check of the snap brick status, so the regression test doesn't block
 any check-ins.
 4) If we can get the Rackspace system where the regression tests are actually
 run, we can reproduce and point out the root cause.

Sure.  Remote access via ssh is definitely workable.  I'll email
you the details. :)

+ Justin


 Regards,
 ~Joe 

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

2014-07-10 Thread Joseph Fernandes
Sent a patch that temporarily disables the failing TEST:

http://review.gluster.org/#/c/8259/
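
For reference, disabling a check in the .t framework is usually just a matter
of commenting out the assertion, along these lines (a sketch only; the helper
name snap_brick_status_check is hypothetical, not the actual line from
bug-1112559.t):

  # Disabled pending root cause of the spurious port clash:
  # EXPECT_WITHIN 30 "Y" snap_brick_status_check $V0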

- Original Message -
From: Joseph Fernandes josfe...@redhat.com
To: Justin Clift jus...@gluster.org
Cc: Niels de Vos nde...@redhat.com, Gluster Devel 
gluster-devel@gluster.org, Vijay Bellur vbel...@redhat.com
Sent: Thursday, July 10, 2014 6:57:34 PM
Subject: Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

Hi All,

1) Tried reproducing the issue in a local setup by running the regression test
multiple times in a for loop. But the issue never hit!
2) As Avra pointed out, the logs suggest that the port (49159) assigned by
glusterd (host1) to the snap brick is already in use by some other process.
3) For the time being I can comment out the TEST that is failing, i.e. comment
out the check of the snap brick status, so the regression test doesn't block
any check-ins.
4) If we can get the Rackspace system where the regression tests are actually
run, we can reproduce and point out the root cause.

Regards,
~Joe 

- Original Message -
From: Justin Clift jus...@gluster.org
To: Niels de Vos nde...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Thursday, July 10, 2014 6:25:16 PM
Subject: Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

On 10/07/2014, at 1:41 PM, Niels de Vos wrote:
 On Thu, Jul 10, 2014 at 05:14:08PM +0530, Vijay Bellur wrote:
snip
 A lot of regression runs are failing because of this test unit.
 Given feature freeze is around the corner, shall we provide a +1
 verified manually for those patchsets that fail this test?
 
 I don't think that is easily possible. We also need to remove the -1 
 verified that the Gluster Build System sets. I'm not sure how we 
 should be doing that. Maybe it's better to disable (parts of) the 
 test-case?


We can set results manually as the Gluster Build System by using the
gerrit command from build.gluster.org.

Looking at the failure here:

  http://build.gluster.org/job/rackspace-regression-2GB-triggered/276/console

At the bottom, it shows this was the command run to communicate
failure:

  $ ssh bu...@review.gluster.org gerrit review --message ''\''
http://build.gluster.org/job/rackspace-regression-2GB-triggered/276/consoleFull 
: FAILED'\''' --project=glusterfs --verified=-1 --code-review=0 
d8296086ddaf7ef4a4667f5cec413d64a56fd382

So, we run the same thing from the jenkins user on build.gluster.org,
but change the result bits to +1 and SUCCESS. And a better message:

  $ sudo su - jenkins
  $ ssh bu...@review.gluster.org gerrit review --message ''\''
Ignoring previous spurious failure : SUCCESS'\''' --project=glusterfs 
--verified=+1 --code-review=0 d8296086ddaf7ef4a4667f5cec413d64a56fd382

Seems to work:

  http://review.gluster.org/#/c/8285/

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

2014-07-10 Thread Justin Clift
On 10/07/2014, at 12:44 PM, Vijay Bellur wrote:
snip
 A lot of regression runs are failing because of this test unit. Given feature 
 freeze is around the corner, shall we provide a +1 verified manually for 
 those patchsets that fail this test?


Went through and did this manually, as Gluster Build System.

Also got Joe set up so he can debug things on a Rackspace VM
to find out what's wrong.

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

2014-07-08 Thread Avra Sengupta
In the test case, we are checking gluster snap status to see if all the
bricks are alive. One of the snap bricks fails to start up, and hence we
see the failure. The brick fails to bind, with an 'Address already in use'
error. But if we look closely, that same log also says 'binding to  failed',
where the address is missing. So it might be trying to bind to the wrong
(or empty) address.


Following are the brick logs for the same:

[2014-07-07 11:20:15.662573] I 
[rpcsvc.c:2142:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: 
Configured rpc.outstanding-rpc-limit with value 64
[2014-07-07 11:20:15.662634] W [options.c:898:xl_opt_validate] 
0-ad94478591fc41648c9674b10143e3d2-server: option 'listen-port' is 
deprecated, preferred is 'transport.socket.listen-port', continuing with 
correction
[2014-07-07 11:20:15.662758] E [socket.c:710:__socket_server_bind] 
0-tcp.ad94478591fc41648c9674b10143e3d2-server: binding to  failed: 
Address already in use
[2014-07-07 11:20:15.662776] E [socket.c:713:__socket_server_bind] 
0-tcp.ad94478591fc41648c9674b10143e3d2-server: Port is already in use
[2014-07-07 11:20:15.662795] W [rpcsvc.c:1531:rpcsvc_transport_create] 
0-rpc-service: listening on transport failed
[2014-07-07 11:20:15.662810] W [server.c:920:init] 
0-ad94478591fc41648c9674b10143e3d2-server: creation of listener failed
[2014-07-07 11:20:15.662821] E [xlator.c:425:xlator_init] 
0-ad94478591fc41648c9674b10143e3d2-server: Initialization of volume 
'ad94478591fc41648c9674b10143e3d2-server' failed, review your volfile again
[2014-07-07 11:20:15.662836] E [graph.c:322:glusterfs_graph_init] 
0-ad94478591fc41648c9674b10143e3d2-server: initializing translator failed
[2014-07-07 11:20:15.662847] E [graph.c:525:glusterfs_graph_activate] 
0-graph: init failed
[2014-07-07 11:20:15.664283] W [glusterfsd.c:1182:cleanup_and_exit] (-- 
0-: received signum (0), shutting down
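
One quick way to see exactly what address and port the failing brick was
handed is to look at its command line on the slave (a sketch; glusterfsd
carries its bind options as arguments):

  $ ps ax | grep '[g]lusterfsd'   # shows the --brick-port and volfile args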


Regards,
Avra

On 07/08/2014 11:28 AM, Joseph Fernandes wrote:

Hi Pranith,

I am looking into this issue. Will keep you posted on the progress by EOD.

Regards,
~Joe

- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: josfe...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org, Rajesh Joseph rjos...@redhat.com, 
Sachin Pandit span...@redhat.com, aseng...@redhat.com
Sent: Monday, July 7, 2014 8:42:24 PM
Subject: Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t


On 07/07/2014 06:18 PM, Pranith Kumar Karampuri wrote:

Joseph,
 Any updates on this? It failed 5 regressions today.
http://build.gluster.org/job/rackspace-regression-2GB/541/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/175/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/173/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/166/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/172/consoleFull


One more : http://build.gluster.org/job/rackspace-regression-2GB/543/console

Pranith


CC some more folks who work on snapshot.

Pranith

On 07/05/2014 11:19 AM, Pranith Kumar Karampuri wrote:

hi Joseph,
 The test above failed on a documentation patch, so it has got to
be a spurious failure.
Check
http://build.gluster.org/job/rackspace-regression-2GB-triggered/150/consoleFull
for more information

Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel




Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

2014-07-08 Thread Avra Sengupta

Adding rhs-gabbar

On 07/08/2014 01:54 PM, Avra Sengupta wrote:
In the test case, we are checking gluster snap status to see if all 
the bricks are alive. One of the snap bricks fails to start up, and 
hence we see the failure. The brick fails to bind, with an 'Address 
already in use' error. But if we look closely, that same log also says 
'binding to  failed', where the address is missing. So it might be 
trying to bind to the wrong (or empty) address.


Following are the brick logs for the same:

[2014-07-07 11:20:15.662573] I 
[rpcsvc.c:2142:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: 
Configured rpc.outstanding-rpc-limit with value 64
[2014-07-07 11:20:15.662634] W [options.c:898:xl_opt_validate] 
0-ad94478591fc41648c9674b10143e3d2-server: option 'listen-port' is 
deprecated, preferred is 'transport.socket.listen-port', continuing 
with correction
[2014-07-07 11:20:15.662758] E [socket.c:710:__socket_server_bind] 
0-tcp.ad94478591fc41648c9674b10143e3d2-server: binding to  failed: 
Address already in use
[2014-07-07 11:20:15.662776] E [socket.c:713:__socket_server_bind] 
0-tcp.ad94478591fc41648c9674b10143e3d2-server: Port is already in use
[2014-07-07 11:20:15.662795] W [rpcsvc.c:1531:rpcsvc_transport_create] 
0-rpc-service: listening on transport failed
[2014-07-07 11:20:15.662810] W [server.c:920:init] 
0-ad94478591fc41648c9674b10143e3d2-server: creation of listener failed
[2014-07-07 11:20:15.662821] E [xlator.c:425:xlator_init] 
0-ad94478591fc41648c9674b10143e3d2-server: Initialization of volume 
'ad94478591fc41648c9674b10143e3d2-server' failed, review your volfile 
again
[2014-07-07 11:20:15.662836] E [graph.c:322:glusterfs_graph_init] 
0-ad94478591fc41648c9674b10143e3d2-server: initializing translator failed
[2014-07-07 11:20:15.662847] E [graph.c:525:glusterfs_graph_activate] 
0-graph: init failed
[2014-07-07 11:20:15.664283] W [glusterfsd.c:1182:cleanup_and_exit] 
(-- 0-: received signum (0), shutting down


Regards,
Avra

On 07/08/2014 11:28 AM, Joseph Fernandes wrote:

Hi Pranith,

I am looking into this issue. Will keep you posted on the progress by EOD.

Regards,
~Joe

- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: josfe...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org, Rajesh Joseph 
rjos...@redhat.com, Sachin Pandit span...@redhat.com, 
aseng...@redhat.com

Sent: Monday, July 7, 2014 8:42:24 PM
Subject: Re: [Gluster-devel] regarding spurious failure 
tests/bugs/bug-1112559.t



On 07/07/2014 06:18 PM, Pranith Kumar Karampuri wrote:

Joseph,
 Any updates on this? It failed 5 regressions today.
http://build.gluster.org/job/rackspace-regression-2GB/541/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/175/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/173/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/166/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/172/consoleFull
One more : 
http://build.gluster.org/job/rackspace-regression-2GB/543/console


Pranith


CC some more folks who work on snapshot.

Pranith

On 07/05/2014 11:19 AM, Pranith Kumar Karampuri wrote:

hi Joseph,
 The test above failed on a documentation patch, so it has got to
be a spurious failure.
Check
http://build.gluster.org/job/rackspace-regression-2GB-triggered/150/consoleFull
for more information

Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel




Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

2014-07-07 Thread Pranith Kumar Karampuri


On 07/07/2014 06:18 PM, Pranith Kumar Karampuri wrote:

Joseph,
Any updates on this? It failed 5 regressions today.
http://build.gluster.org/job/rackspace-regression-2GB/541/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/175/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/173/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/166/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/172/consoleFull


One more : http://build.gluster.org/job/rackspace-regression-2GB/543/console

Pranith



CC some more folks who work on snapshot.

Pranith

On 07/05/2014 11:19 AM, Pranith Kumar Karampuri wrote:

hi Joseph,
The test above failed on a documentation patch, so it has got to 
be a spurious failure.
Check 
http://build.gluster.org/job/rackspace-regression-2GB-triggered/150/consoleFull 
for more information


Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel






Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

2014-07-07 Thread Joseph Fernandes
Hi Pranith,

I am looking into this issue. Will keep you posted on the progress by EOD.

Regards,
~Joe

- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: josfe...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org, Rajesh Joseph 
rjos...@redhat.com, Sachin Pandit span...@redhat.com, aseng...@redhat.com
Sent: Monday, July 7, 2014 8:42:24 PM
Subject: Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t


On 07/07/2014 06:18 PM, Pranith Kumar Karampuri wrote:
 Joseph,
 Any updates on this? It failed 5 regressions today.
 http://build.gluster.org/job/rackspace-regression-2GB/541/consoleFull
 http://build.gluster.org/job/rackspace-regression-2GB-triggered/175/consoleFull
 http://build.gluster.org/job/rackspace-regression-2GB-triggered/173/consoleFull
 http://build.gluster.org/job/rackspace-regression-2GB-triggered/166/consoleFull
 http://build.gluster.org/job/rackspace-regression-2GB-triggered/172/consoleFull

One more : http://build.gluster.org/job/rackspace-regression-2GB/543/console

Pranith


 CC some more folks who work on snapshot.

 Pranith

 On 07/05/2014 11:19 AM, Pranith Kumar Karampuri wrote:
 hi Joseph,
 The test above failed on a documentation patch, so it has got to 
 be a spurious failure.
 Check 
 http://build.gluster.org/job/rackspace-regression-2GB-triggered/150/consoleFull
 for more information

 Pranith
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

2014-07-04 Thread Pranith Kumar Karampuri

hi Joseph,
The test above failed on a documentation patch, so it has got to be 
a spurious failure.
Check 
http://build.gluster.org/job/rackspace-regression-2GB-triggered/150/consoleFull 
for more information


Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel