I'd like to propose an addition to the Test::Harness parsing
rules to support dependency analysis. That, in turn, allows
monitoring for file changes and selective, immediate
re-execution of test files. Is this the right forum for
that discussion?
Building on the mini_harness.plx example from
Test::Harness::Straps, I added checks for declarations like
the following:
DEPENDS_ON "file" # implicit test file dependency
"test_file" DEPENDS_ON "module_file"
"test_file" DEPENDS_ON "data_file"
I expect a new Test::Harness::depends_on(@) function would be
the best way to generate the declarations. For now, however,
I just add the following line to the end of my test files:
map {print qq(DEPENDS_ON "$INC{$_}"\n) } keys(%INC);
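For what it's worth, the helper I have in mind could be as
small as this; the name and behaviour are only my suggestion,
not anything Test::Harness provides today:

# Hypothetical depends_on(): with no arguments, declare every
# file loaded so far (the values of %INC); with arguments,
# declare those files explicitly.
sub depends_on {
    my @files = @_ ? @_ : values %INC;
    print qq(DEPENDS_ON "$_"\n) for @files;
}

# At the end of a test file:
#   depends_on();                      # modules pulled in via use/require
#   depends_on('t/data/config.yml');   # plus an explicit data file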
In a script with a perpetual loop, I check the modification
times of the test files and the modules each uses. Then,
whenever something changes, I rerun the affected tests.
The same loop also generates a simple dashboard.
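In case it is clearer in code, that loop boils down to
something like the following (heavily trimmed; %deps is the
dependency map built from the DEPENDS_ON lines, and t/*.t and
the two-second sleep are just examples):

use strict;
use warnings;

my %deps;     # test file => [ dependencies ], from the DEPENDS_ON lines
my %mtime;    # watched file => modification time last seen

sub run_test { my ($t) = @_; system $^X, $t }   # stand-in for the real runner

while (1) {
    for my $test (glob 't/*.t') {
        my $changed = 0;
        for my $file ($test, @{ $deps{$test} || [] }) {
            my $now = (stat $file)[9] or next;
            $changed = 1 if !exists $mtime{$file} || $now > $mtime{$file};
            $mtime{$file} = $now;
        }
        run_test($test) if $changed;    # rerun only the affected tests
    }
    # regenerate the dashboard from the latest results here
    sleep 2;
}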
With this code, fewer than 200 lines all told, I get
continuous, automatic testing as I write new tests and new
code. I've found it a very powerful feedback system.
As a newcomer to this list, I'm not sure what needs to
happen to get Test::Harness expanded. I can provide reference
code, but I expect some discussion needs to happen first.
Scott
P.S. I'll also be requesting that the STDERR output from
tests be captured. That would let a more sophisticated
dashboard, an HTML document for example, cross-link from a
failed test to its diagnostics.
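To show what I mean, here is a sketch of the sort of capture
I'd want, using the core IPC::Open3 module (the dashboard/
path and log naming are made up):

use strict;
use warnings;
use IPC::Open3;
use Symbol 'gensym';

# Run one test, keeping its STDERR separate from the TAP on
# STDOUT. Draining the handles one after the other can block on
# very chatty tests; a real harness would select() over both.
sub run_test_capturing_stderr {
    my ($test) = @_;
    my $err = gensym;    # open3 won't autovivify the error handle
    my $pid = open3(my $in, my $out, $err, $^X, $test);
    close $in;
    my @tap  = <$out>;   # goes to the TAP parser as before
    my @diag = <$err>;   # diagnostics for the dashboard to link to
    waitpid $pid, 0;

    (my $log = $test) =~ tr{/.}{__};
    open my $fh, '>', "dashboard/$log.stderr" or die "can't write log: $!";
    print {$fh} @diag;
    close $fh;
    return \@tap;
}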