Hi,

-------- Original Message --------
> From: Ingrid Halama <[email protected]> ...
> So I would like to see mandatory automatic tests that detect whether the
> important user scenarios still work properly, whether files are still
> rendered as they should, whether the performance of the office has not
> significantly decreased, ... . We have a lot of tests already, even if
> there is much room for improvement. In principle some of the tests are
> mandatory already, but this rule gets ignored very often.

The problem with this rule is that only a very limited set of people can follow it. E.g. running automated tests and getting reliable results is almost restricted to the Sun QA team in Hamburg at the moment. So, no matter what rules we define, for the moment we either have to break them or we will delay the integration of CWSes. If we delay the integration, we delay public testing. And if we delay public testing, we will find critical errors (those that cannot be identified by automatic testing) even later.

I know, I still have to write a more complete report about automatic testing. :( But as I suggested in another thread, I did some comparisons with automated testing on an OOO310m1 build from Sun and one from a buildbot. The good thing is that there are not many differences (the buildbot build had about 3 more errors and 10 more warnings). The bad thing is that I had a total of 190 errors in the release and required tests. I have not yet had the time to analyze what happened, but these results are not usable. (And I would still say I know how to get "good" results from the testtool.)

André

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
