Is this the case where we have two 'categories'?

  1) tests that never worked

  2) tests that recently broke

I think that a #2 should never persist for more than one build
iteration, as either things get fixed or backed out.  I suppose then we
are really talking about category #1, and that we don't have the "broken
window" problem as we never had the window there in the first place?

I think it's important to understand this (if it's actually true).

geir


Tim Ellison wrote:
> Nathan Beyer wrote:
>> How are other projects handling this? My opinion is that tests which are
>> expected and known to pass should always be running. If one fails and the
>> failure can be independently recreated, then it should either be posted to
>> the list, if trivial (typo in the build file?), or logged as a JIRA issue.
> 
> Agreed, the tests we have enabled are run on each build (hourly if
> things are being committed), and failures are sent to commit list.
> 
>> If it's broken for a significant amount of time (weeks, months), then rather
>> than excluding the test, I would propose moving it to a "broken" or
>> "possibly invalid" source folder that's out of the test path. If it doesn't
>> already have a JIRA issue, then one should be created.
> 
> Yes, though I'd be inclined to move it sooner -- tests should not stay
> broken for more than a couple of days.
> 
> Recently our breakages have been invalid tests rather than a broken
> implementation, but they still need to be investigated and resolved.
> 
>> I've been living with consistently failing tests for a long time now.
>> Recently it was the unstable Socket tests, but I've been seeing the WinXP
>> long file name [1] test failing for months.
> 
> IMHO you should be shouting about it!  The alternative is that we
> tolerate a few broken windows and overall quality slips.
> 
>> I think we may be unnecessarily complicating some of this by assuming that
>> all of the donated tests that are currently excluded and failing are
>> completely valid. I believe that the currently excluded tests are either
>> failing because they aren't isolated according to the suggested test layout
>> or they are invalid tests; I suspect that HARMONY-619 [1] is a case of the
>> latter.
>>
>> So I go back to my original suggestion: implement the testing proposal, then
>> fix or move any excluded tests so that they work properly, or determine that
>> they are invalid and delete them.
> 
> Yes, the tests do need improvements too.
> 
> Regards,
> Tim
> 
> 
>> [1] https://issues.apache.org/jira/browse/HARMONY-619
>>
> 
> 
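
For illustration only, the "broken" / "possibly invalid" source-folder idea
Nathan describes above might look roughly like the Ant sketch below. The
folder name (src/test/broken), fileset layout, and property names are
assumptions made up for this example, not the actual Harmony build scripts:

  <!-- Sketch, assuming a conventional Ant/JUnit setup: regular tests live
       under src/test/java; anything parked as broken or possibly invalid
       moves to src/test/broken, which no target compiles or runs. -->
  <junit printsummary="on" haltonfailure="no">
    <classpath refid="tests.classpath"/>
    <formatter type="plain"/>
    <batchtest todir="${tests.report.dir}">
      <fileset dir="src/test/java">
        <include name="**/*Test.java"/>
        <!-- Belt and braces: also skip anything still tagged as broken. -->
        <exclude name="**/broken/**"/>
      </fileset>
    </batchtest>
  </junit>

The point is simply that a quarantined test is taken off the test path
entirely, with a JIRA issue tracking it, rather than rotting in a live
exclude list.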
