On Mon, Sep 16, 2019 at 04:13:49PM -0700, Jonathan Nieder wrote:

> Most tests use "setup" or "set up" in the names of test assertions
> that are required by later tests.  It's very helpful for debugging and
> maintenance to be able to skip or reorder some tests, so I've been
> able to rely on this a bit.  Of course there's no automated checking
> in place for that, so there are plenty of test scripts that are
> exceptions to it.
> 
> If we introduce a test_setup helper, then we would not have to rely on
> convention any more.  A test_setup test assertion would represent a
> "barrier" that all later tests in the file can rely on.  We could
> introduce some automated checking that these semantics are respected,
> and then we get a maintainability improvement in every test script
> that uses test_setup.  (In scripts without any test_setup, treat all
> test assertions as barriers since they haven't been vetted.)
> 
> With such automated tests in place, we can then try updating all tests
> that say "setup" or "set up" to use test_setup and see what fails.
> 
> Some other tests cannot run in parallel for other reasons (e.g. HTTP
> tests).  These can be declared as such, and then we have the ability
> to run arbitrary individual tests in parallel.

This isn't quite the same, but couldn't we get most of the gain just by
splitting the tests into more scripts? As you note, we already run those
in parallel, so it increases the granularity of our parallelism. And you
don't have to worry about skipping tests 1 through 18 if they're in
another file; you just don't consider them at all. It also Just Works
with things like the HTTP tests, which choose their ports under the
assumption that other scripts may be running simultaneously.
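
(For reference, the port trick works roughly like this: each script
derives its default port from its own script number, so no two scripts
can collide no matter how they are scheduled. Quoting test-lib.sh and
lib-httpd.sh from memory:

  this_test=${0##*/}
  this_test=${this_test%%-*}                       # e.g. "t5561"
  LIB_HTTPD_PORT=${LIB_HTTPD_PORT-${this_test#t}}  # default port 5561
)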

It doesn't help with the case where test 1 does setup, and then tests
2, 3, and 4 are logically independent (so any of them could be skipped
individually).
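
(That is the case Jonathan's test_setup targets. For concreteness, I
imagine the helper itself would be little more than a thin wrapper,
with the real work done by an external checker; nothing like this
exists in test-lib.sh yet, so this is purely a sketch:

  # Hypothetical: run the assertion as usual, but mark it as a
  # "barrier" that later tests may rely on.  The barrier semantics
  # would be enforced by automated checking over the scripts, not
  # by the helper itself.
  test_setup () {
          test_expect_success "$@"
  }
)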

If anybody is interested in splitting up scripts, the obvious ones to
look at are the ones that take the longest (t9001 takes 55s on my
system, though the whole suite runs in only 95s). Of course you can get
most of the parallelism benefit by using "prove --state=slow,save",
which runs the slowest scripts first and thus ends the run with lots
of short scripts (rather than one slow one chewing a single CPU while
the rest sit idle).
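
For example, from the t/ directory, something like:

  $ prove --timer --jobs 16 --state=slow,save ./t[0-9]*.sh

(the first run merely saves the timings, so the slowest-first ordering
kicks in from the second run onward).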

> Most of the time in a test run involves multiple test scripts running
> in parallel already, so this isn't a huge win for the time to complete
> a normal test run.  It helps more with expensive runs like --valgrind.

Two suggestions that are easier than trying to make --valgrind faster
(examples of both below):

  - use SANITIZE=address, which is way cheaper than valgrind (and
    catches more things!)

  - use --valgrind-only=17 to run everything else in "fast" mode, but
    check the test you care about
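
Concretely, assuming the usual Makefile knob and test-lib option,
something like:

  $ make SANITIZE=address test                 # whole suite under ASan
  $ ./t9001-send-email.sh --valgrind-only=17   # valgrind one test only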

-Peff
