# from Sam Vilain
# on Sunday 30 December 2007 18:24:
>> Essentially, it concatenates tests together and runs them
>> in one process.
>> ...
>Yuck.
>
>Why not just load Perl once and fork for the execution of each test
>script.
I think most approaches of this nature are going to fall into one form
of yuck or another.
You're basically saying that we do() the test file, which is something
I was playing with almost a year ago when we started discussing
parallel tests. I think I bailed out of pursuing it because the tests
were failing, though I guess I never dug into why. On cursory
inspection, at least one error was that the DATA sections weren't
working.
Now, assuming that both ways carry the same caveats, I too would prefer
the latter because it not only eliminates load time but also allows
concurrency and provides isolation via the child-process boundary.
Hmm, could one slurp the __DATA__/__END__ block into a scalar and
`open DATA, '<', \$scalar` on behalf of the subprocess?
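Something like this is what I'm imagining (just a sketch; the sub name
and the marker regex are made up and I haven't actually tried it):

    use strict;
    use warnings;

    # Grab everything after __DATA__/__END__ in the test file and reopen
    # the child's DATA handle as an in-memory filehandle over that text.
    sub fake_data_handle {
        my ($test_file) = @_;
        open my $fh, '<', $test_file or die "open $test_file: $!";
        my ($seen_marker, $data) = (0, '');
        while (my $line = <$fh>) {
            if ($seen_marker) { $data .= $line; next; }
            $seen_marker = 1 if $line =~ /^__(?:DATA|END)__\s*$/;
        }
        close $fh;
        no warnings 'once';
        open *main::DATA, '<', \$data or die "reopen DATA: $!";
    }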
PPI?
I don't see anything in Test::Aggregate where "which modules to preload"
is specified. Presumably, it just lets Perl's compile phase handle that.
Perhaps something in-between would be workable, where we wrap each $code
string in e.g. `'$___var_youre_sure_to_never_need_or_want222 = sub {'
. $code . '}'` and then compile everything and call each sub in the
fork?
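Roughly like this (again just a sketch; error handling is minimal,
concurrency is left out, and anything with its own __END__/__DATA__
would still blow up):

    use strict;
    use warnings;

    my %compiled;
    for my $t_file (@ARGV) {
        open my $fh, '<', $t_file or die "open $t_file: $!";
        my $code = do { local $/; <$fh> };
        close $fh;
        # compile everything up front; a die here is a compile failure
        $compiled{$t_file} = eval "sub { $code }"
            or die "compile $t_file: $@";
    }

    for my $t_file (sort keys %compiled) {
        my $pid = fork();
        die "fork: $!" unless defined $pid;
        if ($pid == 0) {    # child: run just this one test's sub
            $compiled{$t_file}->();
            exit 0;
        }
        waitpid $pid, 0;    # serial for now
    }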
Of course, there would still be the issue of Test::More compilation,
skip_all and that other fun stuff.
So to get away from that wholesale blindfolded compilation, you really
need to either non-invasively parse (PPI) and reassemble or externally
declare the preloaded modules, right?
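E.g. something along these lines for the PPI route (sketch only; I
haven't checked how well it holds up against real-world test files):

    use strict;
    use warnings;
    use PPI;

    # pull the `use` statements out of a test file non-invasively, so
    # the preload list can be derived rather than declared
    my $doc = PPI::Document->new($ARGV[0])
        or die "could not parse $ARGV[0]";

    for my $inc (@{ $doc->find('PPI::Statement::Include') || [] }) {
        next unless $inc->type eq 'use';
        my $module = $inc->module or next;   # skips `use 5.006` etc.
        print "$module\n";
    }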
I think I would like something like a gateway drug to Test::Class,
where the "what to preload" is something like includes/excludes of
lib/* (don't forget that sticky cross-platform shim that requires Win32
modules), though maybe all of that should be handled by the Build tool
so we're just looking at roughly $(find blib -name '*.pm').
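For the Build-tool flavor, something as dumb as this might do (sketch;
the path-to-package mangling is naive and untested):

    use strict;
    use warnings;
    use File::Find;

    # preload everything under blib/lib, deriving package names from paths
    my @pm_files;
    find(sub { push @pm_files, $File::Find::name if /\.pm\z/ }, 'blib/lib');

    for my $file (@pm_files) {
        (my $module = $file) =~ s{^blib/lib/}{};
        $module =~ s{/}{::}g;
        $module =~ s{\.pm\z}{};
        eval "require $module; 1"
            or warn "could not preload $module: $@";
    }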
And maybe even look at it through the lens of mst's PPerl, with
SGI::FAM causing it to continually retry the pre-loading as you make
changes, so by the time you can hit alt+tab it has already compiled all
of blib?
Of course, a lot of that probably happens at the harness level (pprove?)
rather than in a .t file.
I also wonder about debuggability, where `prove aggtests/foo.t` is
potentially a completely different landscape from the full run. If the
full run takes 10 minutes and the landscape-specific failure is at the
end, that seems a bit costly.
--Eric
--
The only thing that could save UNIX at this late date would be a new $30
shareware version that runs on an unexpanded Commodore 64.
--Don Lancaster (1991)
---------------------------------------------------
http://scratchcomputing.com
---------------------------------------------------