On 05/08/2015 08:45 AM, Pranith Kumar Karampuri wrote:
Do you guys have any ideas for keeping the regression failures under
control?
I sent a patch to append the commands being run in the .t files to
gluster logs @ http://review.gluster.org/#/c/10667/
While it certainly doesn't help check …
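As a rough illustration of the idea (not the actual patch in review 10667): a TEST-style wrapper, like the one in tests/include.rc, could write a marker into the daemon log before running each command, so log output can be correlated with the .t line that produced it. The log path and the log_test_command helper below are assumptions:

#!/bin/bash
# Hypothetical sketch, not the change in review 10667: record each test
# command in the glusterd log before executing it.

GLUSTERD_LOG=${GLUSTERD_LOG:-/var/log/glusterfs/glusterd.log}   # assumed path

log_test_command () {
    # Record the .t file, line number, and command text in the log.
    # BASH_SOURCE[2]/BASH_LINENO[1] point at the TEST call in the .t file
    # when both functions live in a sourced include file.
    echo "[$(date -u '+%Y-%m-%d %H:%M:%S')] TEST ${BASH_SOURCE[2]}:${BASH_LINENO[1]}: $*" \
        >> "$GLUSTERD_LOG"
}

TEST () {
    log_test_command "$@"
    "$@"
}

# Usage inside a .t file:
#   TEST glusterd
#   TEST gluster volume create patchy replica 2 host1:/b1 host2:/b2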
On 8 May 2015, at 13:16, Jeff Darcy jda...@redhat.com wrote:
snip
Perhaps the change that's needed
is to make the fixing of likely-spurious test failures a higher
priority than adding new features.
YES! A million times Yes.
We need to move this project to operating with _0 regression …
On 8 May 2015, at 04:15, Pranith Kumar Karampuri pkara...@redhat.com wrote:
snip
2) If the same test fails on different patches more than 'x' times, we
should do something drastic. Let us decide on 'x' and on what the drastic
measure is.
Sure. That number is 0.
If it fails more than …
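A back-of-the-envelope sketch of how such a count could be gathered, assuming each regression run saves its failed .t paths one per line into a shared directory; the layout, file names, and threshold handling below are all assumptions, not the project's actual tooling:

#!/bin/bash
# Hypothetical sketch: tally how often each .t file failed across runs.
# Assumes each run saved its failed test paths, one per line, into a
# *.txt file under $LOG_DIR; this is not the real Jenkins layout.

LOG_DIR=${1:-./regression-failures}
THRESHOLD=${2:-0}   # per the reply above, 'x' should be 0

sort "$LOG_DIR"/*.txt | uniq -c | sort -rn |
while read -r count test; do
    if [ "$count" -gt "$THRESHOLD" ]; then
        echo "escalate: $test failed $count times across patches"
    fi
done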
On 05/09/2015 12:33 AM, Jeff Darcy wrote:
I submit a patch for a new component, or change the log-level of one of
the logs for which there is not a single caller after you moved it from
INFO to DEBUG. So the code is not going to be executed at all. Yet the
regressions will fail. I am 100% sure it has …
On 05/08/2015 08:16 AM, Jeff Darcy wrote:
Here are some of the things that I can think of: 0) Maintainers
should also maintain tests that are in their component.
It is not possible for me as glusterd co-maintainer to 'maintain'
tests that are added under tests/bugs/glusterd. Most of them don't …
On 05/09/2015 02:31 AM, Jeff Darcy wrote:
What is so special about 'test' code?
A broken test blocks everybody's progress in a way that an incomplete
feature does not.
It is still code; if maintainers are maintaining feature code and are
held responsible for it, why not test code? It is not that …
On 05/09/2015 01:25 AM, Pranith Kumar Karampuri wrote:
On 05/08/2015 09:14 AM, Krishnan Parthasarathi wrote:
snip
I think we should remove the 'if it is a known bad test, treat it as
success' code at some point, and never add it back in the future.
I disagree. We were in a cycle where a fix for one bad regression test
would be blocked because of others, so it was impossible to make any
progress at all. The cycle had …
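For readers joining here: the code being debated is a whitelist check in the test driver that converts failures of listed tests into successes. A minimal sketch of such a mechanism, assuming a run-tests.sh-style driver; the list contents, function names, and prove invocation are assumptions:

#!/bin/bash
# Minimal sketch of an 'if it is a known bad test, treat it as success'
# escape hatch; the list below is hypothetical, not Gluster's actual one.

KNOWN_BAD_TESTS="tests/bugs/glusterd/bug-0000000.t"   # hypothetical entry

is_known_bad () {
    case " $KNOWN_BAD_TESTS " in
        *" $1 "*) return 0 ;;
        *)        return 1 ;;
    esac
}

run_test () {
    prove -v "$1"
    local ret=$?
    if [ "$ret" -ne 0 ] && is_known_bad "$1"; then
        echo "ignoring failure of known-bad test $1"
        ret=0    # exactly the 'treat it as success' behaviour in question
    fi
    return $ret
}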
The deluge of regression failures is a direct consequence of last-minute
merges during the (extended) feature freeze. We did well to contain this.
Great stuff!
If we want to avoid this, we should not accept (large) feature merges just
before feature freeze.
I would add that we shouldn't accept …
hi,
I think we fixed quite a few heavy hitters in the past week, and a
reasonable number of regression runs are passing, which is a good sign.
Most of the new heavy hitters among the regression failures seem to be
code problems in quota/afr/ec; not sure about tier.t (Need to get more
info about …