On 09/24/10 15:32, Frank Schönheit wrote:
>> If there ever crop up new tests that do require a complete OOo
>> installation,
> While I agree that the unoapi tests are quite fragile, the current
> subsequenttests are more than this. In particular, there are complex
> test cases which I'd claim are much more stable. (More precisely, I'd
> claim this for the complex tests in at least forms and dbaccess, since
> we spent significant effort in the past to actually make them stable
> and reliable.)
> So, I would be somewhat unhappy to throw all those "they require a
> running OOo instance" tests into the same "unreliable" category.
See the list of sporadic failures at the end of
<http://wiki.services.openoffice.org/wiki/Test_Cleanup#unoapi_Tests_2>.
Many of them involve problems during process shutdown, and many of them
are probably generic enough to affect not only qa/unoapi tests but also
qa/complex tests.
However, if you have a complex test that demonstrably works reliably
enough on all relevant platforms and buildbots to be executed during
every build, then there is no problem including that test in every
build (i.e., going down the "if there ever crop up new tests..." route
detailed in the OP).
> Other than that, I'd claim that for a halfway complex implementation,
> you pretty early reach a state where you need at least a UNO
> infrastructure, and quickly even a running office. So, I don't share
> your optimism that new tests can nearly always be written to not
> require a running OOo.
The trick is to let writing tests guide you when writing an
implementation, so that the resulting implementation is indeed (unit)
testable. See for example
<http://www.growing-object-oriented-software.com/> for some food for
thought. However, how well this works out for us remains to be seen,
indeed...
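To illustrate the idea, here is a minimal, hypothetical sketch (all
names are invented for illustration and are not from the OOo code
base): the implementation expresses what it needs from the office as a
narrow interface, so a unit test can substitute an in-memory fake
instead of connecting to a running OOo instance.

```java
// Hypothetical sketch: decoupling logic from a running office so it is
// unit-testable. None of these names come from the OOo code base.

import java.util.HashMap;
import java.util.Map;

// The only thing the implementation needs from the "office" is
// expressed as a narrow interface.
interface DocumentStore {
    boolean contains(String name);
    void store(String name, String content);
}

// The logic under test depends only on the interface, not on a live
// UNO connection.
class SafeSaver {
    private final DocumentStore store;

    SafeSaver(DocumentStore store) {
        this.store = store;
    }

    // Stores the document, refusing to overwrite an existing one.
    boolean saveNew(String name, String content) {
        if (store.contains(name)) {
            return false; // would overwrite; caller must pick another name
        }
        store.store(name, content);
        return true;
    }
}

// In a unit test, an in-memory fake stands in for the office.
class FakeStore implements DocumentStore {
    private final Map<String, String> docs = new HashMap<>();
    public boolean contains(String name) { return docs.containsKey(name); }
    public void store(String name, String content) { docs.put(name, content); }
}

public class SafeSaverDemo {
    public static void main(String[] args) {
        SafeSaver saver = new SafeSaver(new FakeStore());
        System.out.println(saver.saveNew("report.odt", "first"));  // true
        System.out.println(saver.saveNew("report.odt", "second")); // false: already exists
    }
}
```

The point is not the (trivial) logic but the seam: only a thin adapter
that implements the interface on top of UNO ever needs a running
office, and that adapter can be exercised separately by the (fewer,
slower) subsequenttests.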
> One more reason to keep a subsequenttests infrastructure which can be
> run all the time (i.e., excludes unoapi) -- we'll need it sooner
> rather than later if we take "write tests" seriously.
The subsequenttests infrastructure will not go away. And I urge every
developer to routinely run subsequenttests for each CWS (just as you
routinely ran cwscheckapi for each CWS in the past) -- it is just that
its output is apparently not stable enough for automatic processing.
-Stephan