Martin Hollmichel wrote:

> Jörg Jahnke wrote:
>> Hi,
>>
>> one of the questions is whether it would be acceptable for everyone to
>> run a small regression test-suite prior to the QA-approval of a CWS.
>> These tests would probably run several hours, depending on the
>> hardware being used, and therefore cost time and hardware-resources.
>>
>> Do you think it's worth it?
>>
> I think it's not primarily a matter of running the regression suite
> before QA approval but of having a small set of meaningful regression
> tests available?

The whole discussion IMHO goes in the wrong direction, as it neglects
an important word from Martin's mail: "meaningful".

Before discussing additional regression tests we first must find out
*what* we have to test. Yes, having reliable tests that hopefully don't
take too much time is the *necessary* precondition without which I
wouldn't even consider it. But there is more: we must get an idea which
areas of the code we need to check because they are known to have a
history of regressions. Why should we run "regression tests" on code
and functionality that never carried a high regression risk and most
probably never will? That would be a waste of time, and we already
waste too much of it.

Please consider: even in the QA process we currently do not execute
every possible existing test on a CWS, for several reasons, mainly the
extraordinarily long time it takes to execute them all. I assume the
same applies to the tests we are considering now. So what we are
currently discussing are *selected* tests that most probably help to
avoid regressions. *What* we select here is crucial for success.
Martin tried to address this with his "20%" rule mentioned in another
mail, but I'm not sure that makes sense - IMHO we need to cover *the*
20% (or maybe even less) of the code that is worth the effort.

If we neglect that, we will most probably end up doing things for the
sake of doing things (or, more precisely, testing things for the sake
of testing things). Please let's first come up with a strategy to
define the areas of interest and implement it (not on this list ;-))
before we jump to conclusions. After that we can think about what kinds
of tests are best suited to safeguard us against future regressions in
these areas, whether such tests perhaps already exist, or whether we
have to create new ones.

There is something else that should be thought-provoking: AFAIK most or
nearly all of the regressions discovered on the master in the last
releases were not found by the existing automated tests. They were
found by manual testing by users. So what makes us think that applying
the existing test cases earlier and more often will help us find these
regressions? For me this is a hint that we might need at least
additional, or even different, tests if we want to test efficiently.
I'm not sure about that, but it would be careless to ignore this fact.

So currently I don't know where this discussion will end. If the
expected result is a confirmation that developers agree to executing
some arbitrary tests, not known yet, to test something not defined yet,
I doubt it will ever come to an end. But if we are talking about tests
that are reliable, not too slow, and specifically designed to probe the
"dark corners" known to produce regressions more frequently, I think
that wouldn't meet a lot of resistance. But that's not where we are now.

So my question is: any objections to my three suggested preconditions?
I know, "not too slow" still must be defined. But as IMHO it is the
least important condition, I don't expect it to be the most critical
point.

If we agree on that, the next steps should be working on fulfilling
these conditions and then discussing the procedures we want to
implement. Not the other way around.

BTW: if more testing on the developer side is considered important,
please let's first put more effort (and resources!) into getting the
API and integration tests finalized so that we can run them with
confidence on a regular basis. Regressions in our API implementations
are very serious ones, as they most probably cause regressions
elsewhere too, and they can also "kill" 3rd-party code. And they can be
detected comparatively fast and easily!

We should also use C++-based test cases more frequently so that we can
integrate testing code into the build process. This will help gain
better acceptance in the community.

Ciao,
Mathias

-- 
Mathias Bauer (mba) - Project Lead OpenOffice.org Writer
OpenOffice.org Engineering at Sun: http://blogs.sun.com/GullFOSS
Please don't reply to "[EMAIL PROTECTED]".
I use it for the OOo lists and only rarely read other mails sent to it.
