Jeff King <p...@peff.net> writes:

> I do wonder if in general it should be the responsibility of skippable
> tests to make sure we end up with the same state whether they are run or
> not. That might manage the complexity more. But I certainly don't mind
> tests being defensive like you have here.

Speaking "in general", I would say that any test should be prepared
to be turned into a skippable one, and every test should make sure
it leaves the same state whether it is skipped, succeeds, or fails
somewhere in the middle.

That is theoretically achievable (e.g. you assume you always start
from an empty repository, do your thing, and arrange to leave an
empty repository behind with test_when_finished), and the cognitive
cost for developers to do so can be reduced by teaching the
test_expect_{success/failure} helpers to be responsible for the
"arrange to leave an empty repository" part.  But it is quite a big
departure from the way our tests are currently done, i.e. prepare
the environment once and then have each of multiple tests observe
one thing in that environment (e.g. "does it work well with
--dry-run?  how about without?").
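
To illustrate (a made-up sketch, not taken from any existing
script), such a self-contained test would register its own cleanup
up front, so the repository looks the same afterwards no matter how
the test ends:

    test_expect_success 'add --dry-run does not touch the index' '
            # remove the untracked file even if a later step fails
            test_when_finished "rm -f untracked" &&
            echo content >untracked &&
            git add --dry-run untracked &&
            test_must_fail git ls-files --error-unmatch untracked
    '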

Also, it would make the runtime cost of the tests a lot larger, as
setup and teardown would need to happen for each individual test.
So I do not think it is a good goal in practice.

Perhaps what you suggest may be a good middle ground.  When you add
a prerequisite to an existing test, it becomes your responsibility
to make sure the test leaves the same state whether it runs or is
skipped.  That way, you would know that tests that come later will
not be affected by your change.
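
For example (again made up, using the stock SYMLINKS prerequisite),
a test like this leaves the working tree the same whether it runs
or is skipped, because its only side effects are undone by
test_when_finished:

    test_expect_success SYMLINKS 'status notices a dangling symlink' '
            # undo the symlink and the output file either way
            test_when_finished "rm -f dangling out" &&
            ln -s does-not-exist dangling &&
            git status --porcelain >out &&
            grep dangling out
    '

so the tests that follow it cannot tell the difference.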
