On 11/06/2012 01:25 PM, Karen Etheridge wrote:
> On Tue, Nov 06, 2012 at 09:59:48AM -0800, Jonathan Swartz wrote:
>> For each test run, instead of loading a .t file, you're making a request 
>> against the Starman server. So you can obviously hit it with multiple 
>> simultaneous requests.
> 
> For something so simple, you could also use Parallel::ForkManager, with
> each child passing back a serialized Test::Builder object which contained
> the results of all the tests that were run.  The trickiest problem is
> consolidating all the test results back together and then emitting the
> proper TAP output.
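
The approach Karen describes can be sketched roughly like this (a minimal
sketch, assuming Parallel::ForkManager is installed from CPAN; the inline
job subs here stand in for real .t files, and a real harness would pass
back serialized Test::Builder state rather than a simple pass/fail tally):

```perl
use strict;
use warnings;
use Parallel::ForkManager;   # CPAN module, not core

# Hypothetical per-worker "test jobs"; a real harness would run one .t file each.
my %jobs = (
    'a.t' => sub { return { pass => 3, fail => 0 } },
    'b.t' => sub { return { pass => 2, fail => 1 } },
);

my %results;
my $pm = Parallel::ForkManager->new(2);

# finish()'s data payload is serialized in the child and handed to this
# callback in the parent -- this is where consolidation happens.
$pm->run_on_finish(sub {
    my ($pid, $exit, $ident, $signal, $core, $data) = @_;
    $results{$ident} = $data;
});

for my $name (keys %jobs) {
    $pm->start($name) and next;          # parent continues the loop
    $pm->finish(0, $jobs{$name}->());    # child passes its results back
}
$pm->wait_all_children;

# Consolidate all children's results into one summary
my ($pass, $fail) = (0, 0);
for my $r (values %results) {
    $pass += $r->{pass};
    $fail += $r->{fail};
}
print "passed=$pass failed=$fail\n";
```

The hard part Karen mentions -- emitting proper TAP from the merged
results -- would replace that final print.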

Not long ago I worked on a project where I needed to move 8 million
images to S3; each image had a file and some associated database rows.

We first wrote a solution using Parallel::ForkManager, but benchmarks
showed it would take 9 days to complete.

I rewrote it much like the solution Jonathan describes: a small control
script submitted jobs to a pre-forking Apache/mod_perl server for
processing.

That version benchmarked at 2 days, and ultimately bottlenecked on
bandwidth, rather than on CPU as the first solution had.

Based on that, I think Starman-prove could perform very well. I also
have a large test suite that I'm always trying to make run faster, so I
like the idea a lot.

Using a number of other techniques, I've already gotten the run time
down from about 25 minutes to 4.5 minutes. We now run the full suite on
every push, rather than a few times per day.

A lot of our run time reduction was getting the tests to be
parallel-friendly, which involved some different tricks to allow them to
share the same database without problems.
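
One common trick for this (an assumption on my part -- the post doesn't
say which tricks were used) is to wrap each test in a transaction and
roll it back, so parallel workers never commit conflicting rows to the
shared database. A minimal sketch using DBI, with DBD::SQLite standing
in for the real database:

```perl
use strict;
use warnings;
use DBI;   # DBD::SQLite (CPAN) used here purely for illustration

# Run a test body inside a transaction, then discard all of its writes.
sub run_test_in_txn {
    my ($dbh, $test) = @_;
    $dbh->begin_work;
    my $ok = eval { $test->($dbh); 1 };
    $dbh->rollback;            # undo every change, pass or fail
    die $@ unless $ok;
}

my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1 });
$dbh->do('CREATE TABLE users (name TEXT)');

run_test_in_txn($dbh, sub {
    my ($dbh) = @_;
    $dbh->do(q{INSERT INTO users VALUES ('alice')});
});

# The insert above never committed, so the table is still empty.
my ($count) = $dbh->selectrow_array('SELECT COUNT(*) FROM users');
print "rows after test: $count\n";
```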

I also use Test::Class, and still run all of those tests one at a time.
One of our future optimizations is to use something like
Test::Class::Load, but I suspect we will run into some problems there
that Starman-prove would solve.

  Mark
