2010/5/31 Nicolas Cellier <nicolas.cellier.aka.n...@gmail.com>:
> 2010/5/30 Stéphane Ducasse <stephane.duca...@inria.fr>:
>>
>> On May 30, 2010, at 8:52 PM, Chris Muller wrote:
>>
>>> (Copying squeak-dev too).
>>>
>>> I'm not sold on the whole test timeout thing.  When I run tests, I
>>> want to know the answer to the question, "is the software working?"
>>>
>>> Putting a timeout on tests trades a slower, but definitive, "yes" or
>>> "no" for a supposedly-faster "maybe".  But is getting a "maybe" back
>>> really faster?  I've just incurred the cost of running a test suite,
>>> but I'm left without my answer.  I get a "maybe"; what am I supposed
>>> to do next?  Find a faster machine?  Hack into the code to fiddle
>>> with a timeout pragma?  That's not faster.
>>
>> Thanks, this is a really good point.
>>
>>> But the reason given for the change was not about running tests
>>> interactively (the 99% case); rather, all tests from the beginning of
>>> time are now saddled with a timeout for the 1% case:
>>>
>>>  "The purpose of the timeout is to catch issues like infinite loops,
>>> unexpected user input etc. in automated test environments."
>>>
>>> If tests are supposed to be quick (and deterministic) anyway, wouldn't
>>> an infinite loop or user input be caught the first time the test was
>>> run (interactively)?  Seriously, when we make software changes, we
>>> run the tests interactively first, and then the purpose of the
>>> night-time automated test environment is to catch regressions in the
>>> merged code.
>>
>> Yes, this is what I was also implying in my previous mail.
>> If we have a test server, having a timeout does not really help,
>> and I wonder about the infinite-loop case, because it may be really rare.
>>
>
> My opinion differs here. Every test should run in a short time frame,
> with a few exceptions. So it seems reasonable to only have to specify
> a default timeout for your architecture and specific timeouts for a
> few particular tests (or test classes).
>
> Your main argument is that manual tuning will always be better than
> automated default behaviour, and we can only agree on that one.
>
> But there are two pragmatic cases you don't take into account:
> - the case when community-supplied test cases do not comply with these
> rather implicit requirements, and the image integrator does not have
> time to dig into each case and do the fine-tuning on behalf of the
> rest of the community.
> - the case when automated tests are used for exploratory package testing.
>
> In the first case, the integrator just puts in a threshold that will
> reject some tests; it is then up to the rest of the community to
> invest more time in solving the problem.
>

In other words, the timeout has the advantage of turning a rather
implicit requirement into an explicit one.
It is then up to test producers to fine-tune their tests with respect
to this requirement, or to use the available hooks for long tests,
rather than leaving the integrator to guess.

Nicolas
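
(For illustration, a minimal sketch of such a hook, assuming the
<timeout: seconds> tag from Andreas' announcement quoted at the end of
this thread; the class, the test and the 120-second budget are invented:)

    MyPackageTest >> testFullRepositoryRebuild
        "A deliberately long test declares its own budget explicitly,
         instead of leaving the integrator to guess a suitable limit."
        <timeout: 120>
        self assert: MyRepository rebuildAll isEmpty  "hypothetical call answering the failures"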

> Maybe Andreas was also addressing the case of network-in-the-loop tests.
> The test timeout can then be seen as a quick hack for bypassing the
> low-level timeout and retry count (which are not always that easily
> accessible in some APIs...).
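
(Again a hypothetical sketch; the class name, the URL and the use of
HTTPSocket are only illustrative. The point is that the test-level
timeout bounds the whole test, including retries that the HTTP API may
not let us configure directly:)

    MyNetworkTest >> testRemoteServiceAnswers
        "The <timeout:> tag caps the whole round trip, DNS lookup and
         connection retries included, without touching the socket API."
        <timeout: 30>
        self deny: (HTTPSocket httpGet: 'http://www.example.com/') isNil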
>
> Concerning the occurrence of infinite loops, some are produced by
> incompatible packages. So when you automate the exploration of
> compatible packages, a timeout might help, because I doubt you will
> have explored each case interactively. Of course, you can always put a
> timeout at a higher level, in your bash script or something, but it
> would not be particularly fine-grained, would it?
>
> One typical case I often bump into is classes defining printOn: by
> sending storeOn: and vice versa, and likewise for printString and
> printOn:. If the core happens to change between two releases, and you
> have a subclass defined in your package, the probability of running
> into one of these infinite loops increases.
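
(A sketch of the pattern, with invented class names. Your package was
written against an old core where the inherited storeOn: did not depend
on printOn:; a later core release reverses that, and the two methods
end up calling each other forever:)

    "In your package:"
    MySubclass >> printOn: aStream
        "Reuse the store format for printing."
        self storeOn: aStream

    "In the new core release, the inherited method becomes:"
    CoreSuperclass >> storeOn: aStream
        "Store by printing (new default)."
        self printOn: aStream

    "Now 'MySubclass new printString' never returns:
     printOn: -> storeOn: -> printOn: -> ..."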
>
> Nicolas
>
>>> In that case, the high-level test controller which spits out the
>>> results could and should be responsible for handling "unexpected user
>>> input" and/or putting in a timeout, not each and every last test
>>> method.
>>>
>>> IMO, we want short tests, so let's just write them to be short.  If
>>> they're too long, then the encouragement to shorten them comes from
>>> our own impatience when running them interactively.  Running them in
>>> batch at night requires no patience, because we're sleeping; besides,
>>> the batch processor should take responsibility for handling those
>>> rare scenarios at a higher level.
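
(One way a batch runner could take that responsibility today, as a
rough sketch using Squeak's BlockClosure>>valueWithin:onTimeout:; the
test class and the ten-minute budget are invented:)

    | result |
    result := [MyPackageTest suite run]
        valueWithin: 10 minutes
        onTimeout: [Transcript show: 'Suite aborted after 10 minutes'; cr. nil].
    result ifNotNil: [Transcript show: result printString; cr]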
>>
>> I agree.
>> Thanks for sharing your thoughts.
>> So the issue is settled. :)
>>
>>
>>>
>>> Regards,
>>>  Chris
>>>
>>>
>>> On Sat, May 29, 2010 at 2:53 AM, stephane ducasse
>>> <stephane.duca...@free.fr> wrote:
>>>> Hi guys
>>>>
>>>> In Squeak, Andreas introduced the idea of a test timeout.
>>>> Do you think that this is interesting?
>>>>
>>>> Stef
>>>>
>>>> SUnit
>>>> -----
>>>> All test cases now have an associated timeout after which the test is
>>>> considered failed. The purpose of the timeout is to catch issues like
>>>> infinite loops, unexpected user input etc. in automated test
>>>> environments. Timeouts can be set on an individual test basis using
>>>> the <timeout: seconds> tag, or for an entire test case by
>>>> implementing the #defaultTimeout method.
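
(As an illustration of the second hook, a sketch of a whole test case
opting into a larger budget; the class name and the figure are
invented, and it assumes #defaultTimeout answers a number of seconds,
as the <timeout: seconds> tag suggests:)

    MySlowIntegrationTest >> defaultTimeout
        "Every test in this case gets five minutes instead of the default."
        ^ 300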