On Wednesday 06 May 2015 10:53 PM, Vijay Bellur wrote:
On 05/06/2015 06:52 AM, Pranith Kumar Karampuri wrote:
hi,
Please backport the patches that fix spurious regressions to 3.7
as well. This is the status of regressions now:
* ./tests/bugs/quota/bug-1035576.t (Wstat: 0 Tests: 24
Nithya Balachandran nbala...@redhat.com wrote:
My apologies Emmanuel. We should have caught that.
Well, it is not your fault: you just assume Jenkins is there to catch
your errors, but Jenkins appears to be unreliable.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
On 05/07/2015 01:56 PM, Pranith Kumar Karampuri wrote:
Sorry wrong test. Correct test is: tests/bugs/quota/bug-1035576.t
(http://build.gluster.org/job/rackspace-regression-2GB-triggered/8329/consoleFull)
Tests 20 and 21 failed because /mnt/glusterfs/0/a/f creation failed as
seen from the
On Thu, May 07, 2015 at 02:15:20PM -0400, Jeff Darcy wrote:
Last week, those of us who were together in Bangalore had a meeting to
discuss the GlusterFS 4.0 plan. Once we'd covered what's already in
the plan[1] we had a very productive brainstorming session on what else
we might want to
There are no sleeps between polls in the event poll thread, ref:
event_dispatch_poll(...).
I am not sure if we are referring to the same 'poll'. I haven't gotten a chance
to look into
this. I will try adding logs when I get back to this.
~kp
- Original Message -
Krishnan Parthasarathi
Atin Mukherjee amukh...@redhat.com wrote:
Please look at glusterd_stop_volume () in
xlators/mgmt/glusterd/src/glusterd-volume-ops.c
I see someone thought it was a good idea to duplicate
GLUSTERFS_GET_AUX_MOUNT_PIDFILE in cli/src/cli.h and
xlators/mgmt/glusterd/src/glusterd.h...
In this
On Thu, May 07, 2015 at 02:44:08PM -0400, Jeff Darcy wrote:
I believe the right way to express this is: retire the Gluster NFS
(gnfs) server. (Ganesha does NFSv3, and will continue to do NFSv3, as
well as 4, 4.1, 4.2, and pNFS.)
Personally I'd like to go further and say that any
Hi All,
As we all know, our regression tests are killing us. On average, one
regression run takes approximately two and a half hours to complete.
So I guess this is the right time to think about enhancing our
regression.
Proposal 1:
Create a new option for the daemons to specify that it
On Thu, May 07, 2015 at 12:04:17PM -0700, Joe Julian wrote:
On 05/07/2015 11:15 AM, Jeff Darcy wrote:
Last week, those of us who were together in Bangalore had a meeting to
discuss the GlusterFS 4.0 plan. Once we'd covered what's already in
the plan[1] we had a very productive brainstorming
Hi,
gdb debugging shows the root cause seems to be quite straightforward. The
gluster version is 3.4.5 and the stack:
#0 0x7eff735fe354 in dht_getxattr_cbk (frame=0x7eff775b6360,
cookie=&lt;value optimized out&gt;, this=&lt;value optimized out&gt;, op_ret=&lt;value
optimized out&gt;, op_errno=0,
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8782/consoleFull
Failed test case : tests/bugs/replicate/bug-976800.t
I've added it in the etherpad as well.
--
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
On 05/08/2015 03:47 PM, Atin Mukherjee wrote:
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8782/consoleFull
Failed test case : tests/bugs/replicate/bug-976800.t
I've added it in the etherpad as well.
Thanks Atin! I see that the test doesn't disable flush-behind, which can
Fixed a similar crash in dht_getxattr_cbk here:
http://review.gluster.org/#/c/10467/
Susant
- Original Message -
From: Paul Guo bigpaul...@foxmail.com
To: gluster-devel@gluster.org
Sent: Friday, 8 May, 2015 3:25:01 PM
Subject: [Gluster-devel] gluster crashes in dht_getxattr_cbk() due to
On 05/08/2015 08:45 AM, Pranith Kumar Karampuri wrote:
Do you guys have any ideas for keeping the regression failures under
control?
I sent a patch to append the commands being run in the .t files to
gluster logs @ http://review.gluster.org/#/c/10667/
While it certainly doesn't help check
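[Editor's note: a minimal sketch of the command-logging idea behind that patch. The helper name, log path, and timestamp format are hypothetical, not taken from the actual change at review.gluster.org.]

```shell
# Hypothetical helper: record each test command before running it, so a
# failing .t test can be correlated with entries in the gluster logs.
CMD_LOG="${CMD_LOG:-/tmp/gluster-test-commands.log}"

log_and_run() {
    # Append a timestamped record of the command, then execute it as-is.
    printf '%s RUN: %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$*" >> "$CMD_LOG"
    "$@"
}

log_and_run echo "volume start patchy"
```

Each TEST line in a .t file could be routed through such a wrapper, leaving the command's own exit status untouched.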
On 8 May 2015, at 13:16, Jeff Darcy jda...@redhat.com wrote:
snip
Perhaps the change that's needed
is to make the fixing of likely-spurious test failures a higher
priority than adding new features.
YES! A million times Yes.
We need to move this project to operating with _0 regression
On 8 May 2015, at 16:19, Jeff Darcy jda...@redhat.com wrote:
snip
Proposal 2:
Use IP addresses instead of host names, because resolving a host name
takes a good amount of time, and sometimes even causes spurious
failures.
If resolution is taking a long time, that's probably fixable
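[Editor's note: one hedged way a test script could sidestep repeated lookups is to resolve the name once up front and reuse the address. A sketch; the variable names are illustrative and `getent` availability is assumed.]

```shell
# Resolve the host name a single time; later commands reuse the cached
# IP instead of triggering resolution on every invocation.
HOST=localhost
HOST_IP=$(getent hosts "$HOST" | awk '{ print $1; exit }')
echo "using $HOST_IP for $HOST"
```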
On 8 May 2015, at 04:15, Pranith Kumar Karampuri pkara...@redhat.com wrote:
snip
2) If the same test fails on different patches more than 'x' number of times
we should do something drastic. Let us decide on 'x' and what the drastic
measure is.
Sure. That number is 0.
If it fails more than
Proposal 1:
Create a new option for the daemons to specify that it is running as
test mode, then we can skip fsync calls used for data durability.
Alternatively, run tests with the relevant directories (bricks and
/var/lib stuff) in ramdisks. No code change needed, but some tests
might need
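[Editor's note: for illustration, the ramdisk variant might look like the following. It requires root, and the mount points are assumptions based on typical regression setups, not verified paths.]

```shell
# Back the brick directories and glusterd's working state with tmpfs so
# fsync() becomes nearly free, with no change to the daemons themselves.
mount -t tmpfs -o size=2g   tmpfs /d/backends        # brick backing store
mount -t tmpfs -o size=256m tmpfs /var/lib/glusterd  # daemon state
```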
On 8 May 2015, at 10:02, Mohammed Rafi K C rkavu...@redhat.com wrote:
Hi All,
As we all know, our regression tests are killing us. On average, one
regression run takes approximately two and a half hours to complete.
So I guess this is the right time to think about enhancing our
Fix is in. Master and release-3.7 branch is fine now.
Reason:
Development of patches 10620 and 9572 started in parallel. Both
regressions completed successfully, but the patches had conflicting
changes. After one patch was merged, the regression was not run again,
or the conflict was missed in the manual rebase.
Patches:
Thanks Paul. That's for an ancient series of GlusterFS (3.4.x) we're
not really looking to release further updates for.
If that's the version you guys are running in your production
environment, have you looked into moving to a newer release series?
+ Justin
On 8 May 2015, at 10:55, Paul
On 05/08/2015 09:14 PM, Aravinda wrote:
Fix is in. Master and release-3.7 branch is fine now.
Reason:
Development of patches 10620 and 9572 started in parallel. Both
regressions completed successfully, but the patches had conflicting
changes. After one patch was merged, the regression was not run again, or the conflict was missed in
On 05/08/2015 08:34 PM, Justin Clift wrote:
On 8 May 2015, at 13:16, Jeff Darcy jda...@redhat.com wrote:
snip
Perhaps the change that's needed
is to make the fixing of likely-spurious test failures a higher
priority than adding new features.
YES! A million times Yes.
We need to move this
On 05/08/2015 10:53 PM, Justin Clift wrote:
Seems like a new one, so it's been added to the Etherpad.
http://build.gluster.org/job/regression-test-burn-in/23/console
This looks very similar to the data-self-heal.t test, where healing
fails to happen because both the threads end up not
On 05/08/2015 08:54 PM, Justin Clift wrote:
On 8 May 2015, at 10:02, Mohammed Rafi K C rkavu...@redhat.com wrote:
Hi All,
As we all know, our regression tests are killing us. On average, one
regression run takes approximately two and a half hours to complete.
So I guess this is the
On 05/08/2015 09:14 AM, Krishnan Parthasarathi wrote:
- Original Message -
hi,
I think we fixed quite a few heavy hitters in the past week, and a
reasonable number of regression runs are passing, which is a good sign.
Most of the new heavy hitters in regression failures seem to
On 05/08/2015 04:45 PM, Ravishankar N wrote:
On 05/08/2015 08:45 AM, Pranith Kumar Karampuri wrote:
Do you guys have any ideas for keeping the regression failures under
control?
I sent a patch to append the commands being run in the .t files to
gluster logs @
Hi All,
Thanks to all our efforts, we are inching closer towards 3.7.0. Here is
the updated release schedule for 3.7.0:
1. Patch merge deadline and beta2 tagging - 1600 UTC 05/09
2. Final test window starts at 1600 UTC 05/09
3. If no significant problems are observed in 2., I will go ahead
On 05/09/2015 12:33 AM, Jeff Darcy wrote:
I submit a patch for a new component, changing the log level of one of
the logs for which there is not a single caller after you moved it from
INFO to DEBUG. So the code is not at all going to be executed. Yet the
regressions will fail. I am 100% sure it has
On 8 May 2015, at 18:41, Pranith Kumar Karampuri pkara...@redhat.com wrote:
snip
Break the regression tests into parts that can be run in parallel.
So, instead of the regression testing for a particular CR going from the
first test to the last in a serial sequence, we break it up into a
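[Editor's note: a sketch of the parallel-split idea. The grouping and the runner function are hypothetical; a real split would have to keep tests sharing state in the same group.]

```shell
# Run independent groups of .t files concurrently and wait for all of
# them; each group stands in for one chunk of the former serial run.
run_group() {
    for t in "$@"; do
        echo "ran $t"   # stand-in for actually invoking the test
    done
}

run_group tests/basic/one.t tests/basic/two.t &
run_group tests/bugs/three.t &
wait
echo "all groups finished"
```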
On 05/08/2015 08:16 AM, Jeff Darcy wrote:
Here are some of the things that I can think of: 0) Maintainers
should also maintain tests that are in their component.
It is not possible for me as glusterd co-maintainer to 'maintain'
tests that are added under tests/bugs/glusterd. Most of them don't
On 05/09/2015 02:31 AM, Jeff Darcy wrote:
What is so special about 'test' code?
A broken test blocks everybody's progress in a way that an incomplete
feature does not.
It is still code. If maintainers are maintaining feature code and are
held responsible for it, why not test code? It is not that
hi Kotresh/Aravinda,
Do you guys know anything about the following core, which comes because
of a changelog xlator init failure? It just failed regression on one of my
patches: http://review.gluster.org/#/c/10688
24 [2015-05-08 21:34:47.750460] E [xlator.c:426:xlator_init]
0-patchy-changelog:
On 05/09/2015 03:26 AM, Pranith Kumar Karampuri wrote:
hi Kotresh/Aravinda,
Do you guys know anything about the following core, which comes
because of a changelog xlator init failure? It just failed regression on
one of my patches: http://review.gluster.org/#/c/10688
Sorry wrong URL, this is
On Thursday 07 May 2015 10:50 AM, Sachin Pandit wrote:
- Original Message -
From: Vijay Bellur vbel...@redhat.com
To: Pranith Kumar Karampuri pkara...@redhat.com, Gluster Devel
gluster-devel@gluster.org, Rafi Kavungal
Chundattu Parambil rkavu...@redhat.com, Aravinda
On 05/09/2015 01:25 AM, Pranith Kumar Karampuri wrote:
On 05/08/2015 09:14 AM, Krishnan Parthasarathi wrote:
- Original Message -
hi,
I think we fixed quite a few heavy hitters in the past week and
reasonable number of regression runs are passing which is a good sign.
Hi Pranith,
I think you pasted the wrong patch link.
Could you share the correct patch link?
Thanks and Regards,
Kotresh H R
- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Kotresh Hiremath Ravishankar khire...@redhat.com, Aravinda
Vishwanathapura Krishna
Hi all,
I installed Gluster on one node, disabled Gluster NFS, and exported the path via the
kernel NFS server; both run on the same Linux system.
Another Linux client mounts the NFS path, and when dd'ing a file on the NFS
mount point, the I/O sometimes returns an error.
Does anyone know what the reason could be, and whether there is a corresponding
I think we should remove the "if it is a known bad test, treat it as
success" code at some point, and never add it back in the future.
I disagree. We were in a cycle where a fix for one bad regression test
would be blocked because of others, so it was impossible to make any
progress at all. The cycle had
The deluge of regression failures is a direct consequence of last-minute
merges during the (extended) feature freeze. We did well to contain this. Great
stuff!
If we want to avoid this, we should not accept (large) feature merges just
before the feature freeze.
I would add that we shouldn't accept