Hi Stephan,

>> So, I would be somewhat unhappy to throw all those "they require a
>> running OOo instance" tests into the same "unreliable" category.
> 
> See the list of sporadic failures at the end of 
> <http://wiki.services.openoffice.org/wiki/Test_Cleanup#unoapi_Tests_2>. 
>   Many of them deal with problems during process shut down, and many of 
> them are probably generic enough to not only affect qa/unoapi tests, but 
> also qa/complex tests.

Indeed, this list is horrifying. Given that the problems there affect not
only UNO-API or complex tests, but potentially each and every client
accessing OOo via a remote bridge - shouldn't fixing them have a somewhat
higher priority? (Yes, that's a rhetorical question.)

> However, if you have a complex test for which you can show that it works 
> reliably enough on all relevant platforms and on all buildbots so that 
> it can be executed during every build -- no problem to actually include 
> that test in every build (i.e., go down the "if there ever crop up new 
> tests..." route detailed in the OP).

What would be your requirement for "can show"? 10 runs in a row which
don't fail? 100? 1000? On one, two, three, four, five, or six platforms?

In other words: I'd prefer doing it the other way 'round: include tests
for which we're *very* sure that they work reliably, and later exclude
those for which reality proves us wrong.

Personally, I'd put a large number (but not all) of dbaccess/qa/complex,
forms/qa/integration, and connectivity/qa/complex (the latter only after
the integration of CWS dba34b) into the "reliable" list. At the moment,
I execute all of those manually for each and every CWS, but this is
somewhat unfortunate, given that we (nearly) have the infrastructure to
automate this in place.

> The trick is to let writing tests guide you when writing an 
> implementation, so that the resulting implementation is indeed (unit) 
> testable.  See for example 
> <http://www.growing-object-oriented-software.com/> for some food for 
> thought.  However, how well this works out for us needs to be seen, 
> indeed...

Well, this "the trick is ..." part is exactly why I think that issueing
a statement like "from now on, we do tests for our code" won't work -
this is a complex topic, with a lot of "tricks" to know, so "Just Do
It!" is an approach which simply doesn't work. But okay, that's a
different story.

Even if I (and others) get my fingers onto a TDD book (something I have
been planning to do for quite a while), this doesn't mean that everything
is immediately testable without a running UNO environment (or even a
running OOo). So, I continue to think that having the infrastructure for
this is good and necessary.
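
Just to illustrate what "requires a running UNO environment" means in
practice, here is a minimal sketch of the pattern such Java-based tests
follow (the class name is made up, and this is not taken from any existing
qa/complex test) - it uses the standard Bootstrap helper from the Java UNO
runtime to talk to a soffice process:

  import com.sun.star.comp.helper.Bootstrap;
  import com.sun.star.frame.XDesktop;
  import com.sun.star.lang.XMultiComponentFactory;
  import com.sun.star.uno.UnoRuntime;
  import com.sun.star.uno.XComponentContext;

  public class MinimalOfficeTest {
      public static void main(String[] args) throws Exception {
          // Bootstrap.bootstrap() starts (or connects to) a soffice process
          // and returns the remote component context - this is the part that
          // makes such tests depend on a running OOo instance.
          XComponentContext xContext = Bootstrap.bootstrap();
          XMultiComponentFactory xFactory = xContext.getServiceManager();

          // Obtain the Desktop service over the remote bridge.
          Object desktop = xFactory.createInstanceWithContext(
              "com.sun.star.frame.Desktop", xContext);
          XDesktop xDesktop =
              (XDesktop) UnoRuntime.queryInterface(XDesktop.class, desktop);

          // ... exercise the API under test here ...

          // Shutting the office down again is exactly the step where many of
          // the sporadic failures mentioned above tend to show up.
          xDesktop.terminate();
      }
  }

Everything between bootstrap() and terminate() goes over the remote bridge,
which is why the shutdown problems from the wiki list can bite any such
test, no matter how simple its actual test logic is.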

>> One reason more to keep a subsequenttests infrastructure which can be
>> run all the time (i.e. excludes unoapi) - we'll need it more sooner than
>> later, if we take "write tests" serious.
> 
> The subsequenttests infrastructure will not go away.  And I urge every 
> developer to routinely run subsequenttests for each CWS (just as you 
> routinely ran cwscheckapi for each CWS in the past) -- it is just that 
> its output is apparently not stable enough for automatic processing.

Not even for manual processing ... There are a few quirks which gave me
headaches in my last CWSes, when I ran subsequenttests, and which often
ended in a "Just Don't Do It" habit. But that's a different story, too -
and yes, we'd better embark on fixing those quirks.

Ciao
Frank

-- 
ORACLE
Frank Schönheit | Software Engineer | [email protected]
Oracle Office Productivity: http://www.oracle.com/office
