On Tue Jul 08 22:59:40 2008, particle wrote:
> the configure tests take too much time to run, and should be sped up
> by whatever means necessary so as to take a much smaller percentage of
> the overall time for the test suite.

Another solution here would be to not run them by default. The purpose of 'make test' should be to verify that the parrot functionality works on the target system.

The purpose of testing the configuration system should be to diagnose when we are unable to build parrot, or wish to port to a new platform that doesn't work out of the box, no?

My vote would be to, in addition to making the tests faster, also not run them by default. (I know I can just do "coretest". I'm arguing that something like coretest should be the default.)

There's no point in running more tests, even if they're faster, if they're not telling us something we need to know. I'm not saying the tests NEVER tell us that; they just don't typically help the person running 'make test'.

> from the looks of it, there's a
> lot of setup and teardown in each step test file that requires the
> same code to be repeated.
> 
> one example is that only a few lines of code differ between
> t/steps/inter_charset-01.t and t/steps/inter_charset-02.t, and mainly
> because there is one more test in the latter. why are all these tests
> being repeated?
> 
> another example is that with so many files, perl is invoked many, many
> times. starting processes on windows is expensive, so every file (300
> in t/steps and t/configure) is another invocation of perl. this adds
> up quickly to slow windows testing to a crawl.
> 
> if instead these tests were data-driven, so the tests were in data
> format, and the configure system had a way to reset itself to a
> previous state between tests, then we could have many fewer
> invocations of perl--at least two orders of magnitude fewer
> invocations, i reckon.
> 
> comments, suggestions, and questions welcome.
> ~jerry
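
For what it's worth, the data-driven setup jerry describes could look roughly like the sketch below: a table of cases, a snapshot of the configure state taken once, and a reset between cases, all inside a single perl invocation. The case table, run_step(), and the state hash here are hypothetical stand-ins, not the real Parrot::Configure API.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical case table: which step to run and the one data
# key/value pair we expect it to set. In the real suite this data
# could live in a separate file per step.
my @cases = (
    { step => 'inter::charset',  key => 'charset',  want => 'ascii'   },
    { step => 'inter::encoding', key => 'encoding', want => 'fixed_8' },
);

# Stand-in for the configure state; snapshot it before any step runs.
my %state    = ( charset => undef, encoding => undef );
my %snapshot = %state;

my ( $ran, $passed ) = ( 0, 0 );
for my $case (@cases) {
    %state = %snapshot;                  # reset to the pre-step state
    run_step( $case->{step}, \%state );  # hypothetical step runner
    $ran++;
    $passed++
        if defined $state{ $case->{key} }
        && $state{ $case->{key} } eq $case->{want};
}
print "$passed/$ran cases passed\n";

# Toy stand-in for executing a configure step against the state hash.
sub run_step {
    my ( $name, $state ) = @_;
    $state->{charset}  = 'ascii'   if $name eq 'inter::charset';
    $state->{encoding} = 'fixed_8' if $name eq 'inter::encoding';
}
```

One perl startup instead of one per .t file is exactly where the two-orders-of-magnitude savings would come from on windows; the reset-between-cases step is what makes sharing the process safe.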


-- 
Will "Coke" Coleda
