> Ah! now I understood the confusion. I never said the maintainer should fix
> all the bugs in tests. I am only saying that they maintain tests, just
> like we maintain code. Whether you personally work on it or not, you at
> least have an idea of what the problem is and what the solution is, so
> some […]
> Hmm... I am not sure. Most of the fixes I saw in the last week were bugs
> in tests or .rc files. The failures in afr and ec were problems that
> existed even in 3.6. They are showing up more now, probably because 3.7
> is a bit more parallel.
If we merged features ahead of time and spaced them […]
On 05/09/2015 01:25 AM, Pranith Kumar Karampuri wrote:
>
> On 05/08/2015 09:14 AM, Krishnan Parthasarathi wrote:
>>
>> ----- Original Message -----
>>> hi,
>>> I think we fixed quite a few heavy hitters in the past week and
>>> reasonable number of regression runs are passing which is a […]
On 05/09/2015 02:31 AM, Jeff Darcy wrote:
> What is so special about 'test' code?
A broken test blocks everybody's progress in a way that an incomplete
feature does not.
> It is still code. If maintainers are maintaining feature code and are
> held responsible for it, why not test code? It is not that the
> maintainer is the only one who fixes all the […]
On 05/09/2015 12:33 AM, Jeff Darcy wrote:
> I submit a patch for a new component, or one changing the log-level of a
> log for which there is not a single caller after you moved it from
> INFO -> DEBUG. So the code is not going to be executed at all. Yet the
> regressions will fail. I am 100% sure it has nothing to do with my
> patch. I neither […]
I agree on the experience and have no questions/comments on that.
The deal is, though, that we at least had people (including myself)
ignoring spurious failures and re-triggering jobs to get that +1
Verified and move on. Which causes issues, as the failures could at
least have been flagged for others to be […]
On 8 May 2015, at 04:15, Pranith Kumar Karampuri wrote:
> 2) If the same test fails on different patches more than 'x' number of
> times, we should do something drastic. Let us decide on 'x' and what
> the drastic measure is.
Sure. That number is 0.
If it fails more than 0 times on different […]
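The "'x' distinct-patch failures" rule above could be sketched as a small filter over failure records. Everything here is an illustrative assumption, not an existing Gluster tool: the `flag_flaky` helper, the `"<test> <patch-id>"` input format, and the sample data are all made up; a real version would pull results from the Jenkins regression jobs.

```shell
# Illustrative sketch only: read "<test> <patch-id>" failure records on
# stdin and print each test that failed on more than $1 distinct patches.
flag_flaky() {
    threshold="$1"
    # sort -u collapses repeat failures of the same test on the same patch,
    # so awk ends up counting distinct patches per test.
    sort -u | awk -v t="$threshold" \
        '{seen[$1]++} END {for (name in seen) if (seen[name] > t) print name}'
}

# Made-up sample data: tier.t failed on two different patches.
printf '%s\n' 'tier.t c/10001' 'tier.t c/10002' 'quota.t c/10001' \
    | flag_flaky 1
```

With Justin's proposed 'x' of 0, the threshold drops to zero and any test that fails on even one unrelated patch gets flagged.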
On 05/08/2015 08:34 PM, Justin Clift wrote:
On 8 May 2015, at 13:16, Jeff Darcy wrote:
> Perhaps the change that's needed
> is to make the fixing of likely-spurious test failures a higher
> priority than adding new features.
YES! A million times yes.
We need to move this project to operating with _0 regression
failures_ as the normal state […]
On 05/08/2015 08:16 AM, Jeff Darcy wrote:
Here are some of the things that I can think of:
0) Maintainers should also maintain the tests that are in their component.
It is not possible for me as glusterd co-maintainer to 'maintain'
tests that are added under tests/bugs/glusterd. Most of them don't […]
> The deluge of regression failures is a direct consequence of last-minute
> merges during the (extended) feature freeze. We did well to contain this.
> Great stuff!
> If we want to avoid this we should not accept (large) feature merges just
> before feature freeze.
I would add that we shouldn't accept […]
> I think we should remove the "if it is a known bad test, treat it as
> success" code at some point and never add it again in the future.
I disagree. We were in a cycle where a fix for one bad regression test
would be blocked because of others, so it was impossible to make any
progress at all. The cycle […]
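The escape hatch being debated here could be sketched roughly as follows. The names (`KNOWN_BAD`, `report_test`) and the status-code parameter are illustrative assumptions, not the actual run-tests.sh code: a failing test on the list is flagged loudly but does not fail the whole run, so fixes for other tests can still land.

```shell
# Hedged sketch of a "known bad test treated as success" mechanism.
KNOWN_BAD="tests/bugs/flaky-one.t"

report_test() {
    name="$1" status="$2"     # $2 stands in for the real .t exit code
    if [ "$status" -eq 0 ]; then
        echo "PASSED: $name"
    else
        case " $KNOWN_BAD " in
            # Listed tests are reported but do not fail the run.
            *" $name "*) echo "IGNORED (known bad): $name"; return 0 ;;
            *)           echo "FAILED: $name"; return 1 ;;
        esac
    fi
}

report_test tests/bugs/solid.t 0
report_test tests/bugs/flaky-one.t 1
```

The trade-off Jeff describes is visible in the sketch: without the `IGNORED` branch, every run containing `flaky-one.t` fails, including the runs carrying the fix for it.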
On 05/08/2015 08:45 AM, Pranith Kumar Karampuri wrote:
Do you guys have any ideas for keeping the regression failures under
control?
I sent a patch to append the commands being run in the .t files to the
gluster logs @ http://review.gluster.org/#/c/10667/
While it certainly doesn't help check re […]
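The idea behind the patch above might look something like this in a shell harness; the `TEST` helper and `LOGFILE` here are assumptions for illustration, not the contents of change 10667. Each test command is recorded before it runs, so a regression failure can be lined up against the daemon logs by timestamp.

```shell
# Sketch (not the actual patch): log each test command, then run it.
LOGFILE="${LOGFILE:-$(mktemp)}"

TEST() {
    # Timestamped trace first, so the log shows what was about to run
    # even if the command hangs or crashes.
    echo "[$(date '+%H:%M:%S')] TEST: $*" >> "$LOGFILE"
    "$@"
}

TEST echo hello                      # runs and leaves a trace in $LOGFILE
grep -c 'TEST: echo hello' "$LOGFILE"
```

As the thread notes, this doesn't prevent failures; it only makes a failed run much easier to reconstruct after the fact.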
hi,
I think we fixed quite a few heavy hitters in the past week and a
reasonable number of regression runs are passing, which is a good sign.
Most of the new heavy hitters in the regression failures seem to be code
problems in quota/afr/ec; not sure about tier.t. (Need to get more info
about ar […]