> I thought of an alternative which might have a number of the benefits of
> this solution with fewer of the drawbacks.
>
> The idea is to create one big test file that is run in the normal
> way. Everything would only need to be loaded once instead of N times.
> There wouldn't be the usual persistence issues, either.

I tried a couple of different acceleration techniques earlier this
year, and I do remember playing with the "one big test file" approach.
I can't remember exactly why I didn't go further with it, because it
was indeed fast.  I think it may have been because the output of all
the test scripts was flattened into one huge list, so when a test
failed somewhere, it was hard to figure out which file it was in.
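The idea itself is simple enough: a single .t file that pulls in all
the individual scripts, something like the sketch below (the t/units/
path is made up, and it assumes those scripts don't each set their own
test plan):

    # all_tests.t - one big test file, run by Test::Harness as usual
    use strict;
    use warnings;
    use Test::More 'no_plan';

    for my $script (sort glob 't/units/*.t') {
        # Pull each script into this one interpreter, so shared
        # modules get compiled only once...
        do $script;
        diag "error in $script: $@" if $@;
        # ...but every ok() lands in the same flat TAP stream,
        # which is why failures are hard to trace back to a file.
    }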

I also tried a forking system suggested by Fergal Daly:  I had a main
script that preloaded as many library modules as possible and then
forked each test script as a child process.  This was also quite fast,
but I don't think I ever figured out how to get Test::Harness to parse
the output of the forked children properly.  
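The general shape of such a runner is something like this (the
preloaded modules and the glob are just placeholders, and the part I
never sorted out, collecting the children's TAP output for
Test::Harness, isn't shown):

    use strict;
    use warnings;

    # Preload the heavy shared modules once, in the parent
    # (whichever modules your test suite actually uses).
    use DBI;
    use CGI;

    for my $script (sort glob 't/*.t') {
        my $pid = fork;
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {
            # Child: run one test script inside the already-loaded
            # interpreter, then exit without returning to the loop.
            do $script;
            warn "error in $script: $@" if $@;
            exit 0;
        }
        waitpid $pid, 0;    # run the scripts one at a time
    }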

>  - BEGIN and END blocks may need some care. For example, an END block
>    may be used to remove test data before the next test runs.

That's another caveat with the PersistentPerl approach: END blocks seem
to run only on the first request.
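For example, a test script that cleans up after itself like this (the
filename is made up):

    END {
        unlink 't/data/scratch.db';    # remove test data when done
    }

apparently only gets its cleanup run on the first request, so later
runs start with stale data lying around.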


Michael



---
Michael Graham <[EMAIL PROTECTED]>
