SZEDER Gábor wrote:
> On Mon, Sep 16, 2019 at 11:42:08AM -0700, Emily Shaffer wrote:

>>  - try and make progress towards running many tests from a single test
>>    file in parallel - maybe this is too big, I'm not sure if we know how
>>    many of our tests are order-dependent within a file for now...
>
> Forget it, too many (most?) of them are order-dependent.

Hm, I remember a conversation about this with Thomas Rast a while ago.
It seemed possible at the time.

Most tests use "setup" or "set up" in the names of test assertions
that later tests depend on.  Being able to skip or reorder some tests
is very helpful for debugging and maintenance, so I've been relying on
this convention a bit.  Of course there's no automated checking in
place for it, so there are plenty of test scripts that are exceptions.
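The convention looks like this in practice.  A minimal sketch, with a
stub standing in for test_expect_success (the real helper in
t/test-lib.sh also handles prerequisites, tracing, and result
reporting):

```shell
#!/bin/sh
# Stub for illustration only; not the real t/test-lib.sh helper.
test_expect_success () {
	echo "* $1" &&
	eval "$2" || { echo "failed: $1"; exit 1; }
}

# A "setup" test that later tests depend on -- by convention only:
test_expect_success 'setup' '
	mkdir -p repo &&
	echo content >repo/file
'

# This test silently relies on the state created above; nothing
# enforces the ordering, which is what makes reordering risky.
test_expect_success 'file has expected content' '
	grep content repo/file
'
```

Nothing ties the second test to the first except the "setup" naming,
so a runner that skips or shuffles tests has to guess.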

If we introduce a test_setup helper, then we would not have to rely on
convention any more.  A test_setup test assertion would represent a
"barrier" that all later tests in the file can rely on.  We could
introduce some automated checking that these semantics are respected,
and then we get a maintainability improvement in every test script
that uses test_setup.  (In scripts without any test_setup, treat all
test assertions as barriers since they haven't been vetted.)
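To make the barrier idea concrete, here is an assumed sketch of what
such a helper might look like; the name test_setup comes from the
proposal above, but the body is hypothetical, not anything that exists
in test-lib.sh:

```shell
#!/bin/sh
# Hypothetical test_setup helper (assumed implementation).  It runs
# like test_expect_success, but additionally marks a barrier: every
# later test may depend on its effects, so a runner that skips,
# reorders, or parallelizes tests must still execute barriers, in
# order, before anything that follows them.
test_setup () {
	echo "barrier: $1" &&
	eval "$2" || { echo "setup failed: $1"; exit 1; }
}

test_setup 'create repository' '
	mkdir -p repo &&
	echo one >repo/file
'

# A later test can assume the barrier's state exists:
test -f repo/file && echo "state from barrier is visible"
```

A checker could then, for example, run each non-barrier test in
isolation after replaying only the barriers before it, and flag any
test that fails.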

With such automated tests in place, we can then try updating all tests
that say "setup" or "set up" to use test_setup and see what fails.

Some other tests cannot run in parallel for other reasons (e.g. HTTP
tests).  These can be declared as such, and then we have the ability
to run arbitrary individual tests in parallel.

For most of a normal test run, multiple test scripts are already
running in parallel, so this isn't a huge win for its wall-clock time.
It helps more with expensive runs like --valgrind.

Thanks,
Jonathan
