> From: Mark Stosberg <m...@summersault.com>
>
>> OK, but you still have to clean out your database before you start each
>> independent chunk of your test suite, otherwise you start from an
>> unknown state. 
>
>In a lot of cases, this isn't true. This pattern is quite common:
>
>1. Insert entity.
>2. Test with entity just inserted.
>
>Since all that my test cares about is the unique entity or entities, the
>state of the rest of the database doesn't matter. The state that matters
>is in a "known state".

For many of the test suites I've worked on, the business rules are complex 
enough that this is a complete non-starter. I *must* have a database in a 
known-good state at the start of every test run.

    is $customer_table->count, 2, "We should find the correct number of records";
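For instance, one common way to guarantee that known-good starting state is to wrap each test case in a transaction and roll it back afterwards. Here's a minimal sketch (the `customer` table and `with_clean_db` helper are illustrative, and it assumes DBD::SQLite is installed):

```perl
use strict;
use warnings;
use DBI;
use Test::More;

# Illustrative schema in an in-memory SQLite database.
my $dbh = DBI->connect( 'dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1, AutoCommit => 1 } );
$dbh->do('CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)');

# Run a test case inside a transaction and throw away its changes,
# so every case starts from the same known-good state.
sub with_clean_db {
    my ($test) = @_;
    $dbh->begin_work;
    $test->();
    $dbh->rollback;    # discard anything the test inserted
}

with_clean_db( sub {
    $dbh->do(q{INSERT INTO customer (name) VALUES ('Alice'), ('Bob')});
    my ($count) = $dbh->selectrow_array('SELECT COUNT(*) FROM customer');
    is $count, 2, 'We should find the correct number of records';
} );

# After the rollback the table is empty again: the next case starts clean.
my ($count) = $dbh->selectrow_array('SELECT COUNT(*) FROM customer');
is $count, 0, 'rollback restored the known-good state';

done_testing;
```

The transaction trick only works when the code under test doesn't commit or manage transactions itself, but when it applies, it's both fast and deterministic.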

>We have a cron job that runs overnight to clean up anything that was
>missed in Jenkins's runs.

No offense, but that scares me. If this strategy is so successful, why do you 
even need to clean anything up? You could accumulate cruft forever, right?

For example, I might want to randomize the order in which I run my tests 
(theoretically, the order in which you run separate test cases SHOULD NOT 
MATTER), but if I don't have a clean environment, I can't know if a passing 
test is accidentally relying on something a previous test case created. This 
often manifests when a test suite passes but an individual test program fails 
(and vice versa). That's a big no-no. (Note that I distinguish between a test 
case and a test: a test case might insert some data, test it, insert more data, 
test the altered data, and so on. There are no guarantees in that scenario if I 
have a dirty database of unknown state).
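Randomizing the order is cheap to try. A minimal sketch (the `t/*.t` layout is the usual convention, but adjust the glob to taste): shuffle the test programs before handing them to the harness, and hidden ordering dependencies tend to surface quickly.

```perl
use strict;
use warnings;
use List::Util qw(shuffle);
use TAP::Harness;

# Shuffle the test files so each run exercises a different order.
# Any test that silently depends on state left behind by an earlier
# test will eventually fail when it runs first.
my @tests = shuffle glob 't/*.t';

if (@tests) {
    my $harness = TAP::Harness->new( { verbosity => 0 } );
    $harness->runtests(@tests);
}
else {
    print "no t/*.t files found; nothing to run\n";
}
```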

>We expect our tests to generally work in the face of a "dirty" database.
>If they don't, that's considered a flaw in the test. 

Which implies that you might be unknowingly relying on something a previous 
test did, a problem I've repeatedly encountered in poorly designed test suites.


>This is important when running several tests against the same database
>at the same time. Even if we did wipe the database before we tested,
>all the other tests running in parallel would be considered to be
>making the database "dirty". Thus, if a pristine database is a
>requirement, only one test could run against the database at a time.

There are multiple strategies people use to get around this limitation, but 
this is the first time I've ever heard of anyone suggesting that a dirty test 
database is desirable.

>We run our tests 4x parallel against the same database, matching the
>cores available in the machine.

Your tests run against a different test database per pid.
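A sketch of what I mean (the base name and connection line are illustrative): derive the test database name from the process id, and parallel jobs can never trample each other's state.

```perl
use strict;
use warnings;

# Give each test worker its own database by folding the pid ($$)
# into the database name. Four parallel jobs get four databases.
my $base_name = 'myapp_test';
my $db_name   = sprintf '%s_%d', $base_name, $$;

# Each worker would then connect to its own database, e.g.:
#   my $dbh = DBI->connect("dbi:Pg:dbname=$db_name", $user, $pass);
print "this worker would use: $db_name\n";
```

A setup step creates (and a teardown step drops) the per-pid database, so no cron job is ever needed to sweep up leftovers.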

Or you run them against multiple remote databases with TAP::Harness::Remote or 
TAP::Harness::Remote::EC2.

Or you run them single-threaded in a single process instead of multiple 
processes.

Or maybe profiling exposes issues that weren't previously apparent.

Or you fall back on a truncating strategy instead of rebuilding 
(http://www.slideshare.net/Ovid/turbo-charged-test-suites-presentation). That's 
often a lot faster.
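The idea, roughly (a sketch assuming DBD::SQLite, with illustrative table names): empty every table between runs instead of dropping and rebuilding the schema. Note that SQLite has no TRUNCATE, so DELETE stands in for it here; on Postgres or MySQL you'd issue TRUNCATE TABLE instead.

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( 'dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1 } );
$dbh->do($_) for (
    'CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)',
    'CREATE TABLE orders   (id INTEGER PRIMARY KEY, customer_id INTEGER)',
);
$dbh->do(q{INSERT INTO customer (name) VALUES ('Alice')});

# Empty every user table, leaving the schema itself in place.
sub truncate_all {
    my ($dbh) = @_;
    for my $table ( $dbh->tables( undef, undef, undef, 'TABLE' ) ) {
        next if $table =~ /sqlite_/;    # skip SQLite's internal tables
        $dbh->do("DELETE FROM $table");
    }
}

truncate_all($dbh);
my ($count) = $dbh->selectrow_array('SELECT COUNT(*) FROM customer');
print "customer rows after truncation: $count\n";
```

Since the schema survives, you skip the expensive DDL on every run; you pay only for clearing the rows, which is where the speedup comes from.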

There are so many ways of attacking this problem which don't involve trying to 
debug an unknown, non-deterministic state.

>We also share the same database between developers and the test suite.
>This "dirty" environment can work like a feature, as it can sometimes
>produce unexpected and "interesting" states that were missed by a
>clean-room testing approach that so carefully controlled the
>environment that it excluded some real-world possibilities.

I've been there, in one of my first attempts at writing tests about a decade 
ago. I got very tired of testing that I had successfully altered the state of 
the database, only to find out that another developer had been running the 
test suite at the same time and altered that state too, leaving both of us 
trying to figure out why our tests were randomly failing.

I'll be honest, I've been doing testing for a long, long time and this is the 
first time that I can recall anyone arguing for an approach like this. I'm not 
saying you're wrong, but you'll have to do a lot of work to convince people 
that starting out with an effectively random environment is a good way to test 
code.

Cheers,
Ovid
--
Twitter - http://twitter.com/OvidPerl/
Buy my book - http://bit.ly/beginning_perl
Buy my other book - http://www.oreilly.com/catalog/perlhks/
Live and work overseas - http://www.overseas-exile.com/
