On 31 Dec 2007, at 23:07, Eric Wilhelm wrote:
# from Adrian Howard
# on Monday 31 December 2007 11:18:
That's been my experience too. I've caught many nice bugs that would
have been missed by completely clean-slate tests.
Are they bugs in the tests or actual bugs?
Both. More the latter.
In any case I don't care where the bugs are - I still want to find
them :-)
1) Windows & fork :-/
Big deal? Are the issues relevant to preloading or only concurrency?
If the latter, then just turn off the concurrency feature.
Both, I would have thought - but it's been some time since I've played
with fork on Windows. Maybe things have got better since I last tried.
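
(To give a feel for what I mean - perl's fork on Windows is emulated with
interpreter threads, so I'd expect any fork-based preloader to want an
escape hatch roughly like the sketch below. Both subs are hypothetical
stand-ins, not anything T::A or Eric's code actually provides.)

#!/usr/bin/perl
use strict;
use warnings;

my @tests = glob 't/*.t';

# fork() on Windows is emulated with interpreter threads, so fall back
# to plain one-process-per-script runs there. Both subs are stand-ins.
if ($^O eq 'MSWin32') {
    run_serially(@tests);
}
else {
    run_via_forking_preloader(@tests);
}

sub run_serially {
    system($^X, $_) for @_;    # one fresh perl process per script
}

sub run_via_forking_preloader {
    # where the preload-then-fork logic would go; stubbed for now
    run_serially(@_);
}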
2) With previous test parallelisation hacks that I've put together,
the biggest problem I've come across is test suites that assume that
only one test script is running at a time. My hunch, from having had
to mess with several large, messy commercial test suites in the past,
is that something like T::A is more likely to work than a forking/
parallel solution for many suites.
Again, preloading or concurrency?
Concurrency.
If preloading-only via fork gives a working test suite and loads faster
than aggregation, it is a win independent of whether your tests play
nice with concurrency.
[snip]
Fair point with preloading vs concurrency.
That said, I've had more problems with test suites hiding bugs
because of per-process isolation than I have with the reverse. So a
pre-loading fork still isn't a clear win for me in all situations.
To put it another way: I have tools available to help me
intentionally isolate things. Things that hide any unintentional
interactions may also be hiding bugs.
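
For the record, the preloading-only-via-fork thing I'm picturing is roughly
the sketch below: load the heavy modules once, then fork a fresh child per
test script and run them strictly one at a time, so the isolation story
doesn't change. The module names are placeholders and the result handling
is hand-waved - this isn't how T::A (or Eric's preloader) actually does it.

#!/usr/bin/perl
use strict;
use warnings;

use POSIX qw(WEXITSTATUS);

# The slow-to-load dependencies would be preloaded here, e.g.
#   use Moose ();
#   use DBIx::Class ();

my @tests = glob 't/*.t';

for my $test (@tests) {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {
        # Child: run one test script with everything already loaded.
        $0 = $test;
        do "./$test";
        die "$test died: $@" if $@;
        exit 0;
    }

    waitpid $pid, 0;    # strictly sequential - no concurrency
    printf "%s: exit %d\n", $test, WEXITSTATUS($?);
}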
Not that a nice parallelisation system wouldn't be welcome too -
especially one that can distribute tests over multiple machines.....
Bit of a different bag of beasts there. Though I suspect it is doable
in much the same code as a forking preloader (e.g. reading results on
a pipe, etc.) For instance, assume master and slave preloaders where
the master does the SGI::FAM bits and restarts the slaves whenever the
files have changed (or something like that.) Which test to run maps
from per-process to per-node quite nicely and independently of the
startup code.
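
Just to check we're picturing the same shape of thing - something like
this on the master side, with the per-node test mapping sitting outside
the startup code? The node names and both helper subs are made-up
placeholders; the real file watching would be the SGI::FAM bits you
mention, and restarting the slaves is the part I'm hand-waving over.

#!/usr/bin/perl
use strict;
use warnings;

my @nodes = qw(node1 node2 node3);    # hypothetical slave hosts
my @tests = glob 't/*.t';

# "Which test to run" mapped per node rather than per process:
# a simple round-robin partition of the test list.
my %plan;
push @{ $plan{ $nodes[ $_ % @nodes ] } }, $tests[$_] for 0 .. $#tests;

while (1) {
    wait_for_source_change();    # stand-in for the SGI::FAM bits
    restart_slave($_, $plan{$_}) for @nodes;
}

sub wait_for_source_change { sleep 5 }    # placeholder: poll instead of FAM

sub restart_slave {
    my ($node, $tests) = @_;
    # placeholder: tell the preloader on $node to re-exec and hand it
    # its share of the test scripts
    warn "would restart $node with: @$tests\n";
}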
I had some evil hacks in the past that just SSHed the tests onto
multiple machines, then aggregated/munged the results into a single
TAP stream. Evil - but it worked. It should be much easier the next
time I want to do it with the new TAP infrastructure.
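
(Something along the lines of the sketch below, I'd guess - run each
script remotely over ssh, feed the captured output to TAP::Parser, and
let TAP::Parser::Aggregator do the munging. Host names and the remote
path are made up, and there's no error handling.)

#!/usr/bin/perl
use strict;
use warnings;

use TAP::Parser;
use TAP::Parser::Aggregator;

# made-up hosts and test lists
my %remote_tests = (
    'box1.example.com' => [ 't/basic.t', 't/api.t' ],
    'box2.example.com' => [ 't/web.t' ],
);

my $aggregator = TAP::Parser::Aggregator->new;

for my $host (sort keys %remote_tests) {
    for my $test ( @{ $remote_tests{$host} } ) {
        # run the test remotely and capture its TAP output
        my $tap = qx(ssh $host "cd ~/myapp && perl $test");

        my $parser = TAP::Parser->new({ tap => $tap });
        $parser->run;    # consume the whole stream
        $aggregator->add("$host:$test", $parser);
    }
}

printf "passed %d of %d\n",
    scalar $aggregator->passed, scalar $aggregator->total;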
Cheers,
Adrian