Re: [Gluster-devel] netbsd regression update : cdc.t

2015-05-05 Thread Krishnan Parthasarathi


- Original Message -
> Krishnan Parthasarathi  wrote:
> 
> > On gdb'ing into one of the brick processes, I see the following backtrace.
> > This is seen with other threads in the process too. This makes it difficult
> > to analyse what could have gone wrong. Is there something I am missing?
> 
> Obviously frame 4 and higher are irrelevant, but you have glusterfs code
> in frame 3, with arguments, file and line. I guess if you type frame 3
> you get the code.
> 
> This posix_health_check_thread_proc() is probably spawned by
> pthread_create() and therefore has no caller.

Hmm. I didn't put my point clearly. The backtrace I chose to demonstrate the
problem is not representative. My bad.

The following threads have only the top-most frame. Is that expected?


[Switching to thread 8 (LWP 8)]
#0  0xbb3c84e7 in ___lwp_park60 () from /usr/lib/libc.so.12
(gdb) bt
#0  0xbb3c84e7 in ___lwp_park60 () from /usr/lib/libc.so.12
(gdb) t 9
[Switching to thread 9 (LWP 7)]
#0  0xbb3c84e7 in ___lwp_park60 () from /usr/lib/libc.so.12
(gdb) bt
#0  0xbb3c84e7 in ___lwp_park60 () from /usr/lib/libc.so.12
(gdb) t 10 
[Switching to thread 10 (LWP 6)]
#0  0xbb3c84e7 in ___lwp_park60 () from /usr/lib/libc.so.12
(gdb) bt
#0  0xbb3c84e7 in ___lwp_park60 () from /usr/lib/libc.so.12
(gdb) t 11
[Switching to thread 11 (LWP 5)]
#0  0xbb3c84e7 in ___lwp_park60 () from /usr/lib/libc.so.12
(gdb) bt
#0  0xbb3c84e7 in ___lwp_park60 () from /usr/lib/libc.so.12
(gdb) t 12
[Switching to thread 12 (LWP 4)]
#0  0xbb3c84e7 in ___lwp_park60 () from /usr/lib/libc.so.12
(gdb) bt
#0  0xbb3c84e7 in ___lwp_park60 () from /usr/lib/libc.so.12



On a related note, how do I identify the thread processing network
events via event_dispatch_poll? I want to make sure that none of the
above threads could possibly have poll(2) in their backtraces.
If we can assert that none of the threads is waiting on poll(2), that
would explain what we observe.
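One way to check this (a rough sketch, assuming gdb can attach to the brick and
that matching on the glusterfsd name is good enough to pick a PID) is to dump
every thread's backtrace in batch mode and grep for the event loop:

# Dump all thread backtraces of one brick process and look for the poll thread;
# event_dispatch_poll should show up in exactly one of them if it is healthy.
BRICK_PID=$(pgrep -f glusterfsd | head -n 1)   # illustrative way to pick a brick
gdb -batch -p "$BRICK_PID" -ex 'thread apply all bt' 2>/dev/null \
    | grep -E 'Thread |event_dispatch_poll|poll'

Of course, if the frames are as corrupt as in the traces above, the absence of
event_dispatch_poll in the output would not be conclusive.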

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] new test failure in tests/basic/mount-nfs-auth.t

2015-05-05 Thread Jiffin Tony Thottan



On 06/05/15 07:03, Pranith Kumar Karampuri wrote:

Niels,
 Any ideas?

http://build.gluster.org/job/rackspace-regression-2GB-triggered/8462/consoleFull

mount.nfs: access denied by server while mounting 
slave46.cloud.gluster.org:/patchy
mount.nfs: access denied by server while mounting 
slave46.cloud.gluster.org:/patchy
mount.nfs: access denied by server while mounting 
slave46.cloud.gluster.org:/patchy
dd: closing output file `/mnt/nfs/0/test-big-write': Input/output error
[20:48:27] ./tests/basic/mount-nfs-auth.t ..
not ok 33

Pranith



This is a strange issue which has not been noticed until now.

There are no notable errors in nfs.log when this feature is on.

I think that when the test fails, we should be able to fetch the contents of the
/var/lib/glusterd/nfs/{exports,netgroups} files to get a clearer picture.
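Something along these lines could be run by hand on the slave (or hooked into the
test's failure path) to capture them; this is only a sketch, and where exactly to
hook it into mount-nfs-auth.t is left open:

# Preserve the generated NFS auth files so a failed run leaves evidence behind.
for f in /var/lib/glusterd/nfs/exports /var/lib/glusterd/nfs/netgroups; do
    echo "===== $f ====="
    cat "$f" 2>&1        # prints the file, or the error if it is missing
done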


Thanks,
Jiffin



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel




Re: [Gluster-devel] netbsd regression update : cdc.t

2015-05-05 Thread Emmanuel Dreyfus
Krishnan Parthasarathi  wrote:

> On gdb'ing into one of the brick processes, I see the following backtrace.
> This is seen with other threads in the process too. This makes it difficult
> to analyse what could have gone wrong. Is there something I am missing?

Obviously frame 4 and higher are irrelevant, but you have glusterfs code
in frame 3, with arguments, file and line. I guess if you type frame 3
you get the code. 

This posix_health_check_thread_proc() is probably spawned by
pthread_create() and therefore has no caller.

-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] spurious regression failures for ./tests/basic/fops-sanity.t

2015-05-05 Thread Vijay Bellur

On 05/06/2015 10:44 AM, Nithya Balachandran wrote:

Hi,

Should I file a new BZ for this particular failure for the 3.7 branch?  I have 
used BZ 1163543 (generic test failure BZ) to submit the patch on master.




Filing a new BZ is recommended.

Thanks,
Vijay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] netbsd regression update : cdc.t

2015-05-05 Thread Krishnan Parthasarathi


- Original Message -
> Krishnan Parthasarathi  wrote:
> 
> > We need help in getting gdb to work with proper stack frames. It is mostly
> > my lack of *BSD knowledge.
> 
> What problem do you run into?

On gdb'ing into one of the brick processes, I see the following backtrace.
This is seen with other threads in the process too. This makes it difficult
to analyse what could have gone wrong. Is there something I am missing?

Thread 1 (process 15311):
#0  0xbb35d7d7 in _sys___nanosleep50 () from /usr/lib/libc.so.12
#1  0xbb688aa7 in __nanosleep50 () from /usr/lib/libpthread.so.1
#2  0xbb3cbcd7 in sleep () from /usr/lib/libc.so.12
#3  0xb9cef8da in posix_health_check_thread_proc (data=0xbb1db030) at
posix-helpers.c:1685
#4  0xbb68cbca in ?? () from /usr/lib/libpthread.so.1
#5  0xbb3acbb0 in __mknod50 () from /usr/lib/libc.so.12
#6  0xb7aa5000 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] spurious regression failures for ./tests/basic/fops-sanity.t

2015-05-05 Thread Nithya Balachandran
Hi,

Should I file a new BZ for this particular failure for the 3.7 branch?  I have 
used BZ 1163543 (generic test failure BZ) to submit the patch on master.


Regards,
Nithya

- Original Message -
> From: "Nithya Balachandran" 
> To: "Soumya Koduri" 
> Cc: "Gluster Devel" 
> Sent: Wednesday, May 6, 2015 10:07:56 AM
> Subject: Re: [Gluster-devel] spurious regression failures for 
> ./tests/basic/fops-sanity.t
> 
> Hi,
> 
> I have submitted patch http://review.gluster.org/#/c/10590/ for this.
> 
> 
> Regards,
> Nithya
> 
> - Original Message -
> > From: "Soumya Koduri" 
> > To: "Pranith Kumar Karampuri" , "Nithya Balachandran"
> > 
> > Cc: "Gluster Devel" 
> > Sent: Wednesday, May 6, 2015 9:51:41 AM
> > Subject: Re: [Gluster-devel] spurious regression failures for
> > ./tests/basic/fops-sanity.t
> > 
> > I consistently see this failure for one of my patches -
> > 
> > http://review.gluster.org/#/c/10568/ -
> > http://build.gluster.org/job/rackspace-regression-2GB-triggered/8483/consoleFull
> > 
> > This test passed when I ran it on my workspace.
> > 
> > Thanks,
> > Soumya
> > 
> > On 05/02/2015 08:00 AM, Pranith Kumar Karampuri wrote:
> > >
> > > On 05/01/2015 10:05 PM, Nithya Balachandran wrote:
> > >> Hi,
> > >>
> > >> Can you point me to a Jenkins run with this failure?
> > > I don't have one. But it is very easy to re-create. Just run the
> > > following in your workspace
> > > while prove -rfv tests/basic/fops-sanity.t; do :; done
> > > At least on my machine this failed in 5-10 minutes. Very consistent
> > > failure :-)
> > >
> > > Pranith
> > >>
> > >> Regards,
> > >> Nithya
> > >>
> > >>
> > >>
> > >> - Original Message -
> > >>> From: "Pranith Kumar Karampuri" 
> > >>> To: "Shyam" , "Raghavendra Gowdappa"
> > >>> , "Nithya Balachandran"
> > >>> , "Susant Palai" 
> > >>> Cc: "Gluster Devel" 
> > >>> Sent: Friday, 1 May, 2015 5:07:12 PM
> > >>> Subject: spurious regression failures for ./tests/basic/fops-sanity.t
> > >>>
> > >>> hi,
> > >>>   I see the following logs when the failure happens:
> > >>> [2015-05-01 10:37:44.157477] E
> > >>> [dht-helper.c:900:dht_migration_complete_check_task] 0-patchy-dht:
> > >>> (null): failed to get the 'linkto' xattr No data available
> > >>> [2015-05-01 10:37:44.157504] W [fuse-bridge.c:2190:fuse_readv_cbk]
> > >>> 0-glusterfs-fuse: 25: READ => -1 (No data available)
> > >>>
> > >>> Then the program fails with following message:
> > >>> read failed: No data available
> > >>> read returning junk
> > >>> fd based file operation 1 failed
> > >>> read failed: No data available
> > >>> read returning junk
> > >>> fstat failed : No data available
> > >>> fd based file operation 2 failed
> > >>> read failed: No data available
> > >>> read returning junk
> > >>> dup fd based file operation failed
> > >>> not ok 10
> > >>>
> > >>> Could you let us know when this can happen and post a patch which will
> > >>> fix it? Please let us know who is going to fix it.
> > >>>
> > >>> Pranith
> > >>>
> > >
> > > ___
> > > Gluster-devel mailing list
> > > Gluster-devel@gluster.org
> > > http://www.gluster.org/mailman/listinfo/gluster-devel
> > 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] netbsd regression update : cdc.t

2015-05-05 Thread Emmanuel Dreyfus
Krishnan Parthasarathi  wrote:

> We need help in getting gdb to work with proper stack frames. It is mostly
> my lack of *BSD knowledge.

What problem do you run into?

-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] spurious regression failures for ./tests/basic/fops-sanity.t

2015-05-05 Thread Nithya Balachandran
Hi,

I have submitted patch http://review.gluster.org/#/c/10590/ for this.


Regards,
Nithya

- Original Message -
> From: "Soumya Koduri" 
> To: "Pranith Kumar Karampuri" , "Nithya Balachandran" 
> 
> Cc: "Gluster Devel" 
> Sent: Wednesday, May 6, 2015 9:51:41 AM
> Subject: Re: [Gluster-devel] spurious regression failures for 
> ./tests/basic/fops-sanity.t
> 
> I consistently see this failure for one of my patches -
> 
> http://review.gluster.org/#/c/10568/ -
> http://build.gluster.org/job/rackspace-regression-2GB-triggered/8483/consoleFull
> 
> This test passed when I ran it on my workspace.
> 
> Thanks,
> Soumya
> 
> On 05/02/2015 08:00 AM, Pranith Kumar Karampuri wrote:
> >
> > On 05/01/2015 10:05 PM, Nithya Balachandran wrote:
> >> Hi,
> >>
> >> Can you point me to a Jenkins run with this failure?
> > I don't have one. But it is very easy to re-create. Just run the
> > following in your workspace
> > while prove -rfv tests/basic/fops-sanity.t; do :; done
> > At least on my machine this failed in 5-10 minutes. Very consistent
> > failure :-)
> >
> > Pranith
> >>
> >> Regards,
> >> Nithya
> >>
> >>
> >>
> >> - Original Message -
> >>> From: "Pranith Kumar Karampuri" 
> >>> To: "Shyam" , "Raghavendra Gowdappa"
> >>> , "Nithya Balachandran"
> >>> , "Susant Palai" 
> >>> Cc: "Gluster Devel" 
> >>> Sent: Friday, 1 May, 2015 5:07:12 PM
> >>> Subject: spurious regression failures for ./tests/basic/fops-sanity.t
> >>>
> >>> hi,
> >>>   I see the following logs when the failure happens:
> >>> [2015-05-01 10:37:44.157477] E
> >>> [dht-helper.c:900:dht_migration_complete_check_task] 0-patchy-dht:
> >>> (null): failed to get the 'linkto' xattr No data available
> >>> [2015-05-01 10:37:44.157504] W [fuse-bridge.c:2190:fuse_readv_cbk]
> >>> 0-glusterfs-fuse: 25: READ => -1 (No data available)
> >>>
> >>> Then the program fails with following message:
> >>> read failed: No data available
> >>> read returning junk
> >>> fd based file operation 1 failed
> >>> read failed: No data available
> >>> read returning junk
> >>> fstat failed : No data available
> >>> fd based file operation 2 failed
> >>> read failed: No data available
> >>> read returning junk
> >>> dup fd based file operation failed
> >>> not ok 10
> >>>
> >>> Could you let us know when this can happen and post a patch which will
> >>> fix it? Please let us know who is going to fix it.
> >>>
> >>> Pranith
> >>>
> >
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] When will 3.6 be considered stable? (was: Replace brick 3.4.2 with 3.6.2?)

2015-05-05 Thread David Robinson
Sorry for the delay... Long day of flights... OK.  Here goes my attempt 
to explain what was happening:


First, my setup.  I am using a replica-2 setup with four nodes.  These 
are:


gfsib01a
gfsib01b
gfsib02a
gfsib02b, where the 1a/1b and 2a/2b are replica pairs.

I am using a number of segregated networks.

gfsib01a 10.200.70.1
gfsib01b 10.200.71.1
gfsib02a 10.200.70.2
gfsib02b 10.200.71.2

where 10.200.x.x is my InfiniBand network.  Gluster is also connected to 
my super-computer nodes on a 10.214.x.x network through the gigabit 
interface.


Our DNS resolves gfsib01a to the 10.200.x.x network.  When our initial 
system was set up and we were accessing gluster from a non-InfiniBand 
network space (i.e. on a machine with no InfiniBand card, and therefore 
no access to the 10.200 network), we adjusted the DNS entries by placing 
the following in the /etc/hosts file on the machine:


/etc/hosts [only done on machines without access to 10.200 IB network]:
gfsib01a 10.214.70.1
gfsib01b 10.214.71.1
gfsib02a 10.214.70.2
gfsib02b 10.214.71.2

This setup was recommended by the Red Hat guys who came out to demo 
gluster for us a year or two ago.  This is how we were instructed to 
set up multiple-network access with gluster.  Basically, it tricked name 
resolution so that gfsib01a.corvidtec.com resolved to something that 
could be reached from a node that didn't have access to the 10.200 
network.


10.200 traffic would be routed through ib0 on nodes where there was an 
IB card.
10.214 traffic would be routed through eth0 on nodes where there was no 
IB card, and hence, no access to the 10.200 network.


This worked for us until we upgraded to 3.6.3.  At that point, we ran 
into issues where some of the nodes would mount /homegfs and some would 
fail with timeout issues.  For those that did actually mount (430 of the 
nodes out of 1500 completed the mount, the rest timed out), /homegfs was 
accessible.  However, when I tried to switch to a user whose home 
directory was on /homegfs, it would sit there for roughly  20-30 seconds 
before completing.  Something in the ssh connection was taking a very 
long time.  Once you were connected, it behaved normally and operated 
fine without any performance issues.


Now begins my best guess as to what happened, with my fully admitted 
novice-level understanding of how this works.  Let the speculation 
begin... It looks like something changed in 3.6.3 with the name 
resolution/IP handling.  My best guess is that FUSE needs to "see" all 
of the nodes to be able to write to them.  When I mounted gfsib01a 
effectively using "10.214.70.1:/homegfs /homegfs", it found gfsib01a 
without any issues.  However, it looks like 3.6.3 now returns the 
10.200.x.x address space back to the FUSE mount for the other nodes in 
the volume (gfsib01b, gfsib02a, gfsib02b).  At that point, the route 
doesn't work because the node doesn't have access to the 10.200 network 
space.  I fixed this by adding a route to the nodes so that 10.200 
traffic goes out the 10.214 ethernet port, and by removing the DNS 
adjustments in /etc/hosts.
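For what it's worth, the fix on the non-IB nodes amounted to something like the
following (the interface name and the /16 prefix are assumptions here; the real
netmask should be used):

# Send 10.200.x.x (IB-side) traffic out the gigabit interface that carries the
# 10.214.x.x network, since these nodes have no direct path to the IB network.
ip route add 10.200.0.0/16 dev eth0
# ...and then remove the gfsib* overrides from /etc/hosts on those nodes.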


Again, I am guessing here, but do you know if the name resolution that 
is passed back changed in 3.6.3?  Did it send back the machine names 
(gfsib01a, gfsib01b, gfsib02a, gfsib02b) prior to 3.6.3, whereas now it 
sends back IP addresses?  Or something along those lines?


Once I added the routes and eliminated the "spoofing" in the /etc/hosts 
file, everything worked fine.


On a more positive note, it does seem to be behaving well.  The previous 
heal-fails have been cleaned up and it no longer continually shows 
failed heals.  The only thing I have noticed is that I am getting a lot 
of these in the logs:


[2015-05-06 04:25:15.293175] D [registry.c:408:cli_cmd_register] 0-cli: Returning 0
[2015-05-06 04:25:15.293184] D [registry.c:408:cli_cmd_register] 0-cli: Returning 0
[2015-05-06 04:25:15.293192] D [registry.c:408:cli_cmd_register] 0-cli: Returning 0
[2015-05-06 04:25:15.293200] D [registry.c:408:cli_cmd_register] 0-cli: Returning 0
[2015-05-06 04:25:15.375447] D [cli-cmd-volume.c:1825:cli_check_gsync_present] 0-cli: Returning 0
[2015-05-06 04:25:15.375511] D [registry.c:408:cli_cmd_register] 0-cli: Returning 0
[2015-05-06 04:25:15.375522] D [registry.c:408:cli_cmd_register] 0-cli: Returning 0
[2015-05-06 04:25:15.375538] D [registry.c:408:cli_cmd_register] 0-cli: Returning 0
[2015-05-06 04:25:15.375552] D [registry.c:408:cli_cmd_register] 0-cli: Returning 0
[2015-05-06 04:25:15.375562] D [registry.c:408:cli_cmd_register] 0-cli: Returning 0
[2015-05-06 04:25:15.375572] D [registry.c:408:cli_cmd_register] 0-cli: Returning 0
[2015-05-06 04:25:15.375581] D [registry.c:408:cli_cmd_register] 0-cli: Returning 0
[2015-05-06 04:25:15.375588] D [registry.c:408:cli_cmd_register] 0-cli: Returning 0
[2015-05-06 04:25:15.375597] D [registry.c:408:cli_cmd_register] 0-cli: Returning 0
[2015-05-06 04:25:15.375604] D [registry.c:408:cli_cmd_register] 0-cli: Returning 0
[2015-05-06 04:25:1

[Gluster-devel] Regression failure of tests/basic/afr/data-self-heal.t

2015-05-05 Thread Ravishankar N
TL;DR: Need to come up with a fix for AFR data self-heal from clients 
(mounts).


data-self-heal.t creates a 1x2 volume, sets AFR changelog xattrs 
directly on the files in the backend bricks, then runs a full heal to 
heal the files.


The test fails intermittently when run in a loop because data self-heal 
attempts non-blocking locks before healing, and the two heal threads 
(one per brick) might try to acquire the lock at the same time and both 
might fail. In afr-v1, only one thread gets spawned if both bricks are 
on the same node. In afr-v2 we cannot do this because, unlike v1, there 
is no conservative merge in afr_opendir_cbk(). We are not sure that 
adding a conservative merge to v2 is a good idea, because it involves 
(multiple) readdirs on both bricks and computing a checksum on the 
entries to detect a mismatch, which can be a costly operation when done 
from clients. Making the locks blocking could cause one heal thread to 
block instead of moving on to heal other files while the other thread 
holds the lock. One approach is to do what ec does: use a virtual xattr 
and handle it in the getxattr FOP to trigger data heals from clients. 
More thought needs to be given to this.
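For illustration, the ec-style trigger amounts to a client issuing a getxattr on
a virtual key through the mount point, roughly like this (the xattr name and the
path below are only illustrative, not necessarily the exact key ec uses):

# Ask the translator to heal one file by reading a virtual xattr via the mount.
getfattr -n trusted.ec.heal /mnt/glusterfs/0/testfile

The heal is then driven by an explicit request handled in a single FOP, which
sidesteps the two-threads-racing-for-the-lock problem described above.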


Regards,
Ravi


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] spurious regression failures for ./tests/basic/fops-sanity.t

2015-05-05 Thread Soumya Koduri

I consistently see this failure for one of my patches -

http://review.gluster.org/#/c/10568/ -
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8483/consoleFull

This test passed when I ran it on my workspace.

Thanks,
Soumya

On 05/02/2015 08:00 AM, Pranith Kumar Karampuri wrote:


On 05/01/2015 10:05 PM, Nithya Balachandran wrote:

Hi,

Can you point me to a Jenkins run with this failure?

I don't have one. But it is very easy to re-create. Just run the
following in your workspace
while prove -rfv tests/basic/fops-sanity.t; do :; done
At least on my machine this failed in 5-10 minutes. Very consistent
failure :-)

Pranith


Regards,
Nithya



- Original Message -

From: "Pranith Kumar Karampuri" 
To: "Shyam" , "Raghavendra Gowdappa"
, "Nithya Balachandran"
, "Susant Palai" 
Cc: "Gluster Devel" 
Sent: Friday, 1 May, 2015 5:07:12 PM
Subject: spurious regression failures for ./tests/basic/fops-sanity.t

hi,
  I see the following logs when the failure happens:
[2015-05-01 10:37:44.157477] E
[dht-helper.c:900:dht_migration_complete_check_task] 0-patchy-dht:
(null): failed to get the 'linkto' xattr No data available
[2015-05-01 10:37:44.157504] W [fuse-bridge.c:2190:fuse_readv_cbk]
0-glusterfs-fuse: 25: READ => -1 (No data available)

Then the program fails with following message:
read failed: No data available
read returning junk
fd based file operation 1 failed
read failed: No data available
read returning junk
fstat failed : No data available
fd based file operation 2 failed
read failed: No data available
read returning junk
dup fd based file operation failed
not ok 10

Could you let us know when this can happen and post a patch which will
fix it? Please let us know who is going to fix it.

Pranith



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] core while running tests/bugs/snapshot/bug-1112559.t

2015-05-05 Thread Joseph Fernandes
CCing Venky and Kotresh

- Original Message -
From: "Jeff Darcy" 
To: "Pranith Kumar Karampuri" 
Cc: "Joseph Fernandes" , "Avra Sengupta" 
, "Rajesh Joseph" , "Gluster Devel" 

Sent: Wednesday, May 6, 2015 8:39:23 AM
Subject: Re: [Gluster-devel] core while running 
tests/bugs/snapshot/bug-1112559.t

>  Could you please look at this issue:
> http://build.gluster.org/job/rackspace-regression-2GB-triggered/8456/consoleFull

I looked at this one for a while.  It looks like a brick failed to
start because changelog failed to initialize, but neither the core
nor the logs shed much light on why.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] core while running tests/bugs/snapshot/bug-1112559.t

2015-05-05 Thread Pranith Kumar Karampuri

Looping in Kotresh and Aravinda

Pranith
On 05/06/2015 08:39 AM, Jeff Darcy wrote:

  Could you please look at this issue:
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8456/consoleFull

I looked at this one for a while.  It looks like a brick failed to
start because changelog failed to initialize, but neither the core
nor the logs shed much light on why.


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] core while running tests/bugs/snapshot/bug-1112559.t

2015-05-05 Thread Jeff Darcy
>  Could you please look at this issue:
> http://build.gluster.org/job/rackspace-regression-2GB-triggered/8456/consoleFull

I looked at this one for a while.  It looks like a brick failed to
start because changelog failed to initialize, but neither the core
nor the logs shed much light on why.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] netbsd regression update : cdc.t

2015-05-05 Thread Krishnan Parthasarathi


- Original Message -
> On Mon, May 04, 2015 at 09:20:45AM +0530, Atin Mukherjee wrote:
> > I see the following log from the brick process:
> > 
> > [2015-05-04 03:43:50.309769] E [socket.c:823:__socket_server_bind]
> > 4-tcp.patchy-server: binding to  failed: Address already in use
> 
> This happens before the failing test 52 (volume stop), on test 51, which is
> volume reset network.compression operation.
> 
> At that time the volume is already started, with the brick process running.
> volume reset network.compression causes the brick process to be started
> again. But since the previous brick process was not terminated, it still
> holds the port and the new process fails to start.
> 
> As a result we have a volume started with its only brick not running.
> It seems volume stop waits for the missing brick to get online and
> here is why we fail.

[Correction(s) in root cause analysis for posterity]
The brick process is _not_ restarted. The graph change detection algorithm
is not capable of handling the addition of a translator to the server-side graph.
As a result, the brick process calls init() on all its existing translators, most
importantly the server translator in this case. As part of the server translator's
init(), the listening socket is bound again. That is what the log messages say
too.

> 
> The patch below is enough to work around the problem: first stop the
> volume before doing volume reset network.compression.
> 
> Questions:
> 1) is it expected that volume reset network.compression restarts
>    the bricks?

It _doesn't_. See above.

> 2) shall we consider it a bug that volume stop waits for bricks that
>    are down? I think we should.

volume-stop has two important phases in its execution. In the first phase,
glusterd sends an RPC asking the brick to kill itself after performing a cleanup.
Subsequently, glusterd issues a kill(2) system call if the brick process didn't
kill itself (yet). In this case, glusterd issues the RPC, but the brick process
doesn't 'process' it. Since the stack frames were corrupted (as gdb claimed) we
couldn't analyse the cause further. At this point, we suspect that the brick
process's poll(2) thread is blocked for some reason.

We need help in getting gdb to work with proper stack frames; it is mostly down
to my lack of *BSD knowledge. With the fix for cdc.t we avoid the immediate
regression failure, but we don't know the real underlying issue yet. Any help
would be appreciated.
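For reference, the ordering change that the workaround describes boils down to
something like this in CLI terms (the volume name and the script-mode flag are
illustrative; the actual change lives in the cdc.t test itself):

# Stop the volume while its brick is still responsive, then reset the option.
gluster --mode=script volume stop patchy
gluster --mode=script volume reset patchy network.compression
gluster --mode=script volume start patchy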


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] new test failure in tests/basic/mount-nfs-auth.t

2015-05-05 Thread Pranith Kumar Karampuri

Niels,
Any ideas?

http://build.gluster.org/job/rackspace-regression-2GB-triggered/8462/consoleFull

mount.nfs: access denied by server while mounting 
slave46.cloud.gluster.org:/patchy
mount.nfs: access denied by server while mounting 
slave46.cloud.gluster.org:/patchy
mount.nfs: access denied by server while mounting 
slave46.cloud.gluster.org:/patchy
dd: closing output file `/mnt/nfs/0/test-big-write': Input/output error
[20:48:27] ./tests/basic/mount-nfs-auth.t ..
not ok 33

Pranith

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] spurious regression status

2015-05-05 Thread Pranith Kumar Karampuri

hi,
  Please backport the patches that fix spurious regressions to 3.7 
as well. This is the status of regressions now:


 * ./tests/bugs/quota/bug-1035576.t (Wstat: 0 Tests: 24 Failed: 2)

 * Failed tests:  20-21

 * 
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8329/consoleFull


 * ./tests/bugs/snapshot/bug-1112559.t: 1 new core files

 * 
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8308/consoleFull

 * One more occurrence -

 * Failed tests:  9, 11

 * 
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8430/consoleFull


 * ./tests/basic/ec/ec-12-4.t (Wstat: 0 Tests: 541 Failed: 1)

 * Failed test:  11

 * 
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8312/consoleFull

 * Hopefully fixed now!


 * ./tests/geo-rep/georep-rsync-changelog.t (Wstat: 256 Tests: 3 Failed: 0)

 * Non-zero exit status: 1

 * http://build.gluster.org/job/rackspace-regression-2GB-triggered/8168/console



 * ./tests/basic/quota-anon-fd-nfs.t (failed-test: 21)

 * Happens in: master
   
(http://build.gluster.org/job/rackspace-regression-2GB-triggered/8147/consoleFull)
   


 * Being investigated by: ?


 * tests/bugs/snapshot/bug-1112559.t (Failed tests:  9, 11)

 * Happens in: master

 * Being investigated by: ?



 * tests/features/glupy.t

 * nuked tests 7153, 7167, 7169, 7173, 7212


 * tests/basic/volume-snapshot-clone.t

 * http://review.gluster.org/#/c/10053/

 * Came back on April 9

 * http://build.gluster.org/job/rackspace-regression-2GB-triggered/6658/


 * tests/basic/uss.t

 * https://bugzilla.redhat.com/show_bug.cgi?id=1209286

 * http://review.gluster.org/#/c/10143/

 * Came back on April 9

 * http://build.gluster.org/job/rackspace-regression-2GB-triggered/6660/

 * ./tests/bugs/glusterfs/bug-867253.t (Wstat: 0 Tests: 9 Failed: 1)

 * Failed test:  8


Please feel free to let us know if you take up something from this list.

Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] core while running tests/bugs/snapshot/bug-1112559.t

2015-05-05 Thread Pranith Kumar Karampuri

hi,
Could you please look at this issue: 
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8456/consoleFull


Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Regression test failures - Call for Action

2015-05-05 Thread Emmanuel Dreyfus
Justin Clift  wrote:

> This kind of error message at the end of a failure log indicates
> the VM has self-disconnected from Jenkins and needs rebooting.
> Haven't found any other way to fix it. :/

If the connection is dropped, this means there is something that sends a
TCP RST. Perhaps we could run tcpdump on the VM to see whether it comes from
the VM or from the Rackspace infrastructure?
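A minimal capture along those lines, assuming the slave's public interface is
eth0 (the interface name and output path are assumptions):

# Record only TCP RST segments so we can tell whether the reset originates on
# the VM itself or somewhere upstream in the infrastructure.
tcpdump -i eth0 -n -w /var/tmp/jenkins-rst.pcap 'tcp[tcpflags] & tcp-rst != 0'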

-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] spurious failure in tests/bugs/cli/bug-1087487.t

2015-05-05 Thread Gaurav Garg
Forgot to mention the patch URL: http://review.gluster.org/#/c/10475.

The patch owner should use the upstream bug ID when sending a patch upstream and 
the downstream bug ID when sending a patch to the downstream branch.

Thanks 

Regards
Gaurav

- Original Message -
From: "Gaurav Garg" 
To: "Pranith Kumar Karampuri" 
Cc: "Gluster Devel" , "Sakshi Bansal" 

Sent: Tuesday, May 5, 2015 10:46:51 PM
Subject: Re: [Gluster-devel] spurious failure in tests/bugs/cli/bug-1087487.t

Hi Pranith,


Actually, the problem is in the sender's patch. This is intended behavior, not a 
spurious failure. The current patch does not solve what the bug actually 
describes; in addition, the patch owner should look into the failing test case 
and modify it if the change in the patch genuinely requires the test case to be 
modified.

I have posted a comment on the patch itself. Once that comment is resolved, 
this problem will disappear.

ccing patch owner. 

Thank you

~Gaurav

- Original Message -
From: "Pranith Kumar Karampuri" 
To: "Gaurav Garg" 
Cc: "Gluster Devel" 
Sent: Tuesday, May 5, 2015 8:52:16 PM
Subject: spurious failure in tests/bugs/cli/bug-1087487.t

Gaurav,
  Please look into 
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8409/console

Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] spurious failure in tests/bugs/cli/bug-1087487.t

2015-05-05 Thread Gaurav Garg
Hi Pranith,


Actually, the problem is in the sender's patch. This is intended behavior, not a 
spurious failure. The current patch does not solve what the bug actually 
describes; in addition, the patch owner should look into the failing test case 
and modify it if the change in the patch genuinely requires the test case to be 
modified.

I have posted a comment on the patch itself. Once that comment is resolved, 
this problem will disappear.

ccing patch owner. 

Thank you

~Gaurav

- Original Message -
From: "Pranith Kumar Karampuri" 
To: "Gaurav Garg" 
Cc: "Gluster Devel" 
Sent: Tuesday, May 5, 2015 8:52:16 PM
Subject: spurious failure in tests/bugs/cli/bug-1087487.t

Gaurav,
  Please look into 
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8409/console

Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] hacks to workaround glupy.t failure

2015-05-05 Thread Emmanuel Dreyfus
Some ideas on the NetBSD glupy.t failure, which may also have some 
consequences on Linux.

After http://review.gluster.org/10248/ was merged, the glupy.py installation
path changed from @BUILD_PYTHON_SITE_PACKAGE@/gluster to 
@BUILD_PYTHON_SITE_PACKAGE@/gluster/glupy

Since nothing cleans up @BUILD_PYTHON_SITE_PACKAGE@/gluster ,
nodes that ran previous regression tests get two versions of glupy.py,
one being outdated:
/usr/lib/python2.6/site-packages/gluster/glupy/glupy.py
/usr/lib/python2.6/site-packages/gluster/glupy/glupy.pyo
/usr/lib/python2.6/site-packages/gluster/glupy/__init__.py
/usr/lib/python2.6/site-packages/gluster/glupy/__init__.pyc
/usr/lib/python2.6/site-packages/gluster/glupy/glupy.pyc
/usr/lib/python2.6/site-packages/gluster/glupy/__init__.pyo
/usr/lib/python2.6/site-packages/gluster/glupy.py
/usr/lib/python2.6/site-packages/gluster/glupy.pyo
/usr/lib/python2.6/site-packages/gluster/__init__.py
/usr/lib/python2.6/site-packages/gluster/__init__.pyc
/usr/lib/python2.6/site-packages/gluster/glupy.pyc
/usr/lib/python2.6/site-packages/gluster/__init__.pyo

Cleaning the outdated /usr/lib/python2.6/site-packages/gluster/* files helps 
a bit and I get further.
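The cleanup was roughly this (a sketch; the stale files are the ones listed
above, and the site-packages prefix obviously differs between nodes):

# Remove the stale pre-10248 copies that live directly under .../gluster/,
# keeping the new gluster/glupy/ package directory intact.
rm -f /usr/lib/python2.6/site-packages/gluster/glupy.py \
      /usr/lib/python2.6/site-packages/gluster/glupy.pyc \
      /usr/lib/python2.6/site-packages/gluster/glupy.pyo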

Attempting to load helloworld.py on the command line produces this:
Traceback (most recent call last):
  File "xlators/features/glupy/examples/helloworld.py", line 2, in 
from gluster.glupy import *
ImportError: No module named gluster.glupy

I am not sure how it is supposed to work, but the following is enough to get
it past this:
cd /usr/pkg/lib/python2.7/site-packages/gluster/
cp glupy/glupy.py ./__init__.py

Next problem:
Traceback (most recent call last):
  File "xlators/features/glupy/examples/helloworld.py", line 2, in 
from gluster.glupy import *
  File "/usr/pkg/lib/python2.7/site-packages/gluster/__init__.py", line 15, in 

dl = CDLL(os.getenv("PATH_GLUSTERFS_GLUPY_MODULE", ""),RTLD_GLOBAL)
  File "/usr/pkg/lib/python2.7/ctypes/__init__.py", line 365, in __init__
self._handle = _dlopen(self._name, mode)
OSError: /autobuild/install/lib/libglusterfs.so.0: Undefined PLT symbol 
"backtrace" (symnum = 21)


I try
export LD_PRELOAD=/usr/pkg/lib/libexecinfo.so
which contains the missing symbol. I get:

Traceback (most recent call last):
  File "xlators/features/glupy/examples/helloworld.py", line 4, in 
class xlator (Translator):
NameError: name 'Translator' is not defined

Right, this is hack city!
cd /usr/pkg/lib/python2.7/site-packages/gluster
cat glupy/glupy.py >> glupy/__init__.py 

helloworld.py loads on the command line, and glupy.t passes:

Running tests in file ./tests/features/glupy.t
[16:38:58] ./tests/features/glupy.t .. 
[16:38:58] ./tests/features/glupy.t .. ok 6004 ms
[16:39:04]
All tests successful.
Files=1, Tests=6,  6 wallclock secs ( 0.03 usr  0.05 sys +  0.19 cusr  2.36 
csys =  2.63 CPU)
Result: PASS

Can anyone knowledgeable with Python help with doing the clean fixes?

-- 
Emmanuel Dreyfus
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] spurious failure in tests/bugs/cli/bug-1087487.t

2015-05-05 Thread Pranith Kumar Karampuri

Gaurav,
 Please look into 
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8409/console


Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting today at 12:00 UTC (~in 10 minutes)

2015-05-05 Thread Atin Mukherjee
Hi all,

This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Etherpad to collect pending patches for 3.7

2015-05-05 Thread Raghavendra Talur

Hi,

Here is the etherpad link to track the pending patches to
be merged in 3.7 branch before release.
If you have any, please add them to the list.

https://public.pad.fsfe.org/p/pending_gluster_3.7_patches

Thanks,
Raghavendra Talur
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Regression test failures - Call for Action

2015-05-05 Thread Justin Clift
On 5 May 2015, at 03:40, Jeff Darcy  wrote:
Jeff's patch failed again with same problem:
>> http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/4531/console
> 
> Wouldn't have expected anything different.  This one looks like a
> problem in the Jenkins/Gerrit infrastructure.

This kind of error message at the end of a failure log indicates
the VM has self-disconnected from Jenkins and needs rebooting.
Haven't found any other way to fix it. :/

Happens with both CentOS and NetBSD regression runs.

[...]
FATAL: Unable to delete script file /var/tmp/hudson8377790745169807524.sh

hudson.util.IOException2: remote file operation failed: /var/tmp/hudson8377790745169807524.sh at hudson.remoting.Channel@2bae0315:nbslave72.cloud.gluster.org
    at hudson.FilePath.act(FilePath.java:900)
    at hudson.FilePath.act(FilePath.java:877)
    at hudson.FilePath.delete(FilePath.java:1262)
    at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:101)
    at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:60)
[...]

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Regression test failures - Call for Action

2015-05-05 Thread Vijay Bellur

On 05/05/2015 08:13 AM, Pranith Kumar Karampuri wrote:


On 05/05/2015 08:10 AM, Jeff Darcy wrote:

Jeff's patch failed again with same problem:
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/4531/console


Wouldn't have expected anything different.  This one looks like a
problem in the Jenkins/Gerrit infrastructure.

Sorry for the mis-communication, I was referring to the same infra problem.



The situation seems much better now. Thanks everyone for your prompt 
actions!


We are still some distance away from ensuring that our regression runs are 
clean. Let us continue responding promptly to regression failures, to help 
prevent a lockdown of master for all patches.


Regards,
Vijay
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] spurious failure in quota-nfs.t

2015-05-05 Thread Sachin Pandit


- Original Message -
> From: "Raghavendra Gowdappa" 
> To: "Sachin Pandit" 
> Cc: "Pranith Kumar Karampuri" , "Gluster Devel" 
> 
> Sent: Tuesday, May 5, 2015 3:27:32 PM
> Subject: Re: [Gluster-devel] spurious failure in quota-nfs.t
> 
> 
> 
> - Original Message -
> > From: "Sachin Pandit" 
> > To: "Pranith Kumar Karampuri" 
> > Cc: "Gluster Devel" 
> > Sent: Tuesday, May 5, 2015 3:24:59 PM
> > Subject: Re: [Gluster-devel] spurious failure in quota-nfs.t
> > 
> > 
> > 
> > - Original Message -
> > > From: "Pranith Kumar Karampuri" 
> > > To: "Vijaikumar Mallikarjuna" , "Sachin Pandit"
> > > 
> > > Cc: "Gluster Devel" 
> > > Sent: Tuesday, May 5, 2015 8:43:18 AM
> > > Subject: spurious failure in quota-nfs.t
> > > 
> > > hi Vijai/Sachin,
> > > http://build.gluster.org/job/rackspace-regression-2GB-triggered/8268/console
> > > Doesn't seem like an obvious failure. Know anything about it?
> > 
> > Hi Pranith,
> > 
> > I checked the logs and could not find any significant information.
> > It seems like the marker has failed to update the extended
> > attributes till the root.
> 
> Any idea why it couldn't update till root?

No, after going through the logs, I did not see any information
regarding that.


> 
> > I have started the execution of this
> > test case in a loop, it has completed more than 20 successful
> > runs till now. I will update in this thread if I make any
> > progress on root causing the issue.
> > 
> > ~ Sachin.
> > > 
> > > Pranith
> > > 
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> > 
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] spurious failure in quota-nfs.t

2015-05-05 Thread Raghavendra Gowdappa


- Original Message -
> From: "Sachin Pandit" 
> To: "Pranith Kumar Karampuri" 
> Cc: "Gluster Devel" 
> Sent: Tuesday, May 5, 2015 3:24:59 PM
> Subject: Re: [Gluster-devel] spurious failure in quota-nfs.t
> 
> 
> 
> - Original Message -
> > From: "Pranith Kumar Karampuri" 
> > To: "Vijaikumar Mallikarjuna" , "Sachin Pandit"
> > 
> > Cc: "Gluster Devel" 
> > Sent: Tuesday, May 5, 2015 8:43:18 AM
> > Subject: spurious failure in quota-nfs.t
> > 
> > hi Vijai/Sachin,
> > http://build.gluster.org/job/rackspace-regression-2GB-triggered/8268/console
> > Doesn't seem like an obvious failure. Know anything about it?
> 
> Hi Pranith,
> 
> I checked the logs and could not find any significant information.
> It seems like the marker has failed to update the extended
> attributes till the root. 

Any idea why it couldn't update till root?

> I have started the execution of this
> test case in a loop, it has completed more than 20 successful
> runs till now. I will update in this thread if I make any
> progress on root causing the issue.
> 
> ~ Sachin.
> > 
> > Pranith
> > 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] spurious failure in quota-nfs.t

2015-05-05 Thread Sachin Pandit


- Original Message -
> From: "Pranith Kumar Karampuri" 
> To: "Vijaikumar Mallikarjuna" , "Sachin Pandit" 
> 
> Cc: "Gluster Devel" 
> Sent: Tuesday, May 5, 2015 8:43:18 AM
> Subject: spurious failure in quota-nfs.t
> 
> hi Vijai/Sachin,
> http://build.gluster.org/job/rackspace-regression-2GB-triggered/8268/console
> Doesn't seem like an obvious failure. Know anything about it?

Hi Pranith,

I checked the logs and could not find any significant information.
It seems like the marker has failed to update the extended
attributes till the root. I have started the execution of this
test case in a loop, it has completed more than 20 successful
runs till now. I will update in this thread if I make any
progress on root causing the issue.
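If it shows up again, one thing worth capturing is the quota xattrs along the
path on the bricks; a rough sketch (the brick paths are illustrative, and the
exact xattr key names can vary between versions):

# Compare the quota xattrs from the leaf directory up to the brick root; a stale
# value near the root would point at marker's upward update not completing.
for d in /d/backends/patchy1/dir /d/backends/patchy1; do
    getfattr -d -m 'trusted.glusterfs.quota' -e hex "$d"
done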

~ Sachin.
> 
> Pranith
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] ec spurious regression failures

2015-05-05 Thread Pranith Kumar Karampuri

I have yet to debug why it is failing in these new tests on NetBSD.

Pranith
On 05/05/2015 01:54 PM, Emmanuel Dreyfus wrote:

On Tue, May 05, 2015 at 01:45:03PM +0530, Pranith Kumar Karampuri wrote:

Already updated the status about this in the earlier mail.
http://review.gluster.org/10539 is the fix.

That one only touches bug-1202244-support-inode-quota.t ...



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] ec spurious regression failures

2015-05-05 Thread Pranith Kumar Karampuri


On 05/05/2015 01:54 PM, Emmanuel Dreyfus wrote:

On Tue, May 05, 2015 at 01:45:03PM +0530, Pranith Kumar Karampuri wrote:

Already updated the status about this in the earlier mail.
http://review.gluster.org/10539 is the fix.

That one only touches bug-1202244-support-inode-quota.t ...

RCA: http://www.gluster.org/pipermail/gluster-devel/2015-May/044799.html

Pranith




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] ec spurious regression failures

2015-05-05 Thread Emmanuel Dreyfus
On Tue, May 05, 2015 at 01:45:03PM +0530, Pranith Kumar Karampuri wrote:
> Already updated the status about this in the earlier mail.
> http://review.gluster.org/10539 is the fix.

That one only touches bug-1202244-support-inode-quota.t ...

-- 
Emmanuel Dreyfus
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] ec spurious regression failures

2015-05-05 Thread Pranith Kumar Karampuri


On 05/05/2015 01:35 PM, Vijay Bellur wrote:

On 05/05/2015 11:40 AM, Emmanuel Dreyfus wrote:

Emmanuel Dreyfus  wrote:


I sent http://review.gluster.org/10540 to address it completely. Not
sure if it works on netBSD. Emmanuel help!!


I launched test runs in a loop on nbslave70. More later.


Failed on first pass:
Test Summary Report
---
./tests/basic/ec/ec-3-1.t(Wstat: 0 Tests: 217 Failed: 4)
   Failed tests:  133-134, 138-139
./tests/basic/ec/ec-4-1.t(Wstat: 0 Tests: 253 Failed: 6)
   Failed tests:  152-153, 157-158, 162-163
./tests/basic/ec/ec-5-1.t(Wstat: 0 Tests: 289 Failed: 8)
   Failed tests:  171-172, 176-177, 181-182, 186-187
./tests/basic/ec/ec-readdir.t(Wstat: 0 Tests: 9 Failed: 1)
   Failed test:  9
./tests/basic/ec/quota.t (Wstat: 0 Tests: 24 Failed: 1)
   Failed test:  24





In addition ec-12-4.t has started failing again [1]. Have added a note 
about this to the etherpad.
Already updated the status about this in the earlier mail. 
http://review.gluster.org/10539 is the fix.


-Vijay

[1] 
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8312/consoleFull


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] ec spurious regression failures

2015-05-05 Thread Vijay Bellur

On 05/05/2015 11:40 AM, Emmanuel Dreyfus wrote:

Emmanuel Dreyfus  wrote:


I sent http://review.gluster.org/10540 to address it completely. Not
sure if it works on netBSD. Emmanuel help!!


I launched test runs in a loop on nbslave70. More later.


Failed on first pass:
Test Summary Report
---
./tests/basic/ec/ec-3-1.t(Wstat: 0 Tests: 217 Failed: 4)
   Failed tests:  133-134, 138-139
./tests/basic/ec/ec-4-1.t(Wstat: 0 Tests: 253 Failed: 6)
   Failed tests:  152-153, 157-158, 162-163
./tests/basic/ec/ec-5-1.t(Wstat: 0 Tests: 289 Failed: 8)
   Failed tests:  171-172, 176-177, 181-182, 186-187
./tests/basic/ec/ec-readdir.t(Wstat: 0 Tests: 9 Failed: 1)
   Failed test:  9
./tests/basic/ec/quota.t (Wstat: 0 Tests: 24 Failed: 1)
   Failed test:  24





In addition ec-12-4.t has started failing again [1]. Have added a note 
about this to the etherpad.


-Vijay

[1] 
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8312/consoleFull

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel