Hi Mathias,

Mathias Bauer wrote:
Martin Hollmichel wrote:

Jörg Jahnke schrieb:
Hi,

one of the questions is whether it would be acceptable for everyone to
run a small regression test suite prior to the QA approval of a CWS.
These tests would probably run for several hours, depending on the
hardware being used, and would therefore cost time and hardware resources.

Do you think it's worth it?

I think it's not primarily a matter of running the regression suite
before QA approval, but of having a small set of meaningful regression
tests available.

The whole discussion IMHO goes in the wrong direction, as it neglects
an important word mentioned in this mail from Martin: "meaningful".

Before discussing additional regression tests we first must find out
*what* we have to test. Yes, having reliable tests that hopefully don't
take too much time is the *necessary* precondition without which I
wouldn't even consider it. But there is more: we must get an idea which
areas of the code we need to check because they are known to have a
history of producing regressions. Why should we run "regression tests"
on code and functionality that never carried a high regression risk and
most probably never will? That would be a waste of time, and we already
waste too much of it.

The QA team has identified 45 tests that quickly reveal broken
functionality. We use these tests for release testing, when the team
has to determine quickly whether a build is good or not. So we do have
such a set of test cases, and they cover more than 80% of all OOo files.

Please consider: even in the QA process we currently do not execute
every possible existing test on a CWS, for several reasons, mainly the
extraordinarily long time it takes to execute them all. I assume the
same should apply to the tests we are considering now. So what we are
currently discussing are *selected* tests that most probably help to
avoid regressions. *What* we select here is crucial for success.
Martin tried to account for this with his "20%" rule mentioned in
another mail, but I'm not sure if that makes sense - IMHO we need to
cover *the* 20% (or maybe even less) of the code that is worth the effort.

I do not want to run every test on every CWS; that doesn't make sense.
But then the tooling must offer a selection field, so that a developer
who changed something in tables in Writer can select the areas 'tables'
and 'writer'. All test cases that exercise tables in Writer and check
their functionality are then selected. On top of that, some other tests
will run to guard against general regressions in the other applications
(about 4 hours).

[...]

There is something else that should be thought-provoking: AFAIK most or
nearly all of the regressions we discovered on the master in the last
releases were not found by the existing automated tests. They were
found through manual testing by users. So what makes us think that
applying the existing test cases earlier and more often will help us to
find these regressions? For me this is a hint that we might need at
least additional or even different tests if we want to test efficiently.
I'm not sure about that, but it would be careless to ignore this fact.

You are right that the automated tests did not find all regressions in
the master. But some of them would have been found if more tests had
been mandatory. In the past only two smaller tests were mandatory for
approving a CWS. Many testers run more than these tests, but not all of
them do. Therefore some regressions went into the master that could
have been identified by the existing test cases.

On the other hand, do not forget the regressions that were identified
by the automated test scripts and sent the CWS back to development.
This process will be sped up, because the developer does not have to
wait until the responsible QA person has time.

So mandatory tests will help to identify more regressions before the
integration of a CWS, but not all of them. That is right and cannot be
denied.

So currently I don't know where this discussion will end. If the
expected result is a confirmation that developers agree to execute some
arbitrary tests, not yet known, to test something not yet defined, I
doubt that it will come to an end. But if we are talking about tests
that are reliable, not too slow, and specifically designed to probe the
"dark corners" that are known to produce regressions more frequently, I
don't think that would meet a lot of resistance. But that's not where
we are now.

I don't think so.

So my question is: any objections to my three suggested preconditions?
I know, "not too slow" still has to be defined. But as IMHO it is the
least important condition, I don't expect it to be the most critical
point.

If 'not too slow' means that the rest of the automated testing has to
be done by the QA team, then we do not need mandatory tests for
developers, because QA would then need the same effort to check the
CWSs anyway.

For me it is important that the automated testing time is spent between
the development process and the QA process. I do not want to put more
effort on the developers. If the tests run successfully, neither the
developer nor the QA person has to spend resources on them. That must
be the goal. If it isn't, then we shouldn't talk about this change in
our processes any more.

If we agree on that, the next step should be to work on fulfilling
these conditions and then to discuss the procedures we want to
implement. Not the other way around.

I think most of the conditions are fulfilled; that is why Jörg started
the discussion on OOo. But you are right: I saw from the long thread
that we have to talk more about it internally before we start to
implement something.

BTW: if more testing on the developer side is considered important,
please let's first put more effort (and resources!) into getting the
API and integration tests final, so that we can run them with good
faith on a regular basis. Regressions in our API implementations are
very serious ones, as they most probably also cause regressions
elsewhere, and they can "kill" 3rd-party code. And they can be detected
comparatively quickly and easily!

That is what I have always wanted: more testing at the code level. This
will identify regressions faster and more cheaply.

We should also use C++-based test cases more frequently, so that we can
integrate testing code into the build process. This will help to get
better acceptance in the community.

+1

Thorsten

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
