On 09/30/10 14:19, Frank Schönheit wrote:
>> However, if you have a complex test for which you can show that it
>> works reliably enough on all relevant platforms and on all buildbots
>> so that it can be executed during every build, then it is no problem
>> to actually include that test in every build (i.e., to go down the
>> "if there ever crop up new tests..." route detailed in the OP).
> What would be your requirement for "can show"? 10 runs in a row which
> don't fail? 100? 1000? On one, two, three, four, five, or six
> platforms?
>
> In other words: I'd prefer doing it the other way 'round: include
> tests for which we're *very* sure that they work reliably, and later
> exclude those for which reality proves us wrong.
>
> Personally, I'd put a large number (but not all) of
> dbaccess/qa/complex, forms/qa/integration, and connectivity/qa/complex
> (the latter only after the integration of CWS dba34b) into the
> "reliable" list. At the moment, I execute all of those manually for
> each and every CWS, but this is somewhat unfortunate, given that we
> (nearly) have the infrastructure ready to automate this.
I am no better at giving useful numbers here than you or anybody else.
If you want your tests integrated directly into the build, and you
think their failure rate is acceptable, go ahead and put them in the
build. People will start telling you if your assumption about tolerable
failure rates matches theirs. (And if it doesn't, be prepared to remove
your tests from the build again. Basically, that's what I went through
when integrating the currently existing subsequenttests wholesale into
the build.)
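
For what it's worth, one rough way to put a number on "can show": treat
each run as independent and ask what per-run failure rate N consecutive
passes still allows. A back-of-the-envelope sketch (plain arithmetic,
not tied to any of our tooling):

  # After n consecutive passing runs, the exact 95% upper confidence
  # bound on the per-run failure probability p solves
  # (1 - p)**n = 0.05.

  def failure_rate_bound(n, confidence=0.95):
      """Upper bound on failure probability after n consecutive passes."""
      return 1.0 - (1.0 - confidence) ** (1.0 / n)

  for n in (10, 100, 1000):
      print(n, round(failure_rate_bound(n), 4))
  # 10 -> ~0.2589, 100 -> ~0.0295, 1000 -> ~0.003
  # i.e. the "rule of three": the bound is roughly 3/n for large n.

So 100 clean runs only demonstrate a failure rate somewhere below ~3%
per run; whether that is tolerable for every build is exactly the
judgment call above.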
-Stephan