Jörg Jahnke wrote:
> Hi,
>
> the reason why the Wiki page speaks of mandatory tests I have mentioned
> in a previous mail:
>
> Jörg Jahnke schrieb:
>> The problem with such tests not being mandatory is that, sooner or
>> later, some tests would break. That again would lead to a state where
>> the user of the tests could not be sure whether a broken test case
>> means that he introduced a bug or whether he just encountered an old
>> problem that broke the test cases before. He would have to start a
>> tedious search to find out the cause of the problem - just like the
>> testers have to do nowadays. And then people would simply not use the
>> tests because the effort is too high...
>
> Ause just informed me about another solution that might remove the need
> to have the tests run on every CWS, i.e. we would not need to make the
> tests mandatory. His idea is to run the tests on the Master Workspace
> prior to announcing the CWS as "ready for CWS use". If a test fails,
> this would result in a P1 issue that has to be fixed before the MWS can
> be used by everyone - very similar to how we handle the Smoketest on
> the MWS nowadays.
That's not the same (not even similar). The smoketest is also run on every
CWS, and it is mandatory. This ensures that it works in all cases, with the
exception of the rare cases where 1) a platform-specific problem appears
and this platform was not tested, or 2) the integration of multiple CWSs
exposed a problem which was not there in the respective CWSs.

> Additionally, the list of tests to run would be checked in to CVS, so
> that we could disable a test for every user on a given milestone if a
> fix cannot be done in time.
>
> That way a developer would get an _optional_ means of doing regression
> tests, with no obligation to always run them. If the developer feels
> that he should run the tests, he can do so and invest the (machine)
> time. If he thinks the tests will be of no additional help, he just
> does not run them.
>
> Of course the question then is how often such a regression happens. If
> we have to expect half a dozen P1 bugs each milestone due to the mass
> of regressions, then "mandatory for every CWS" seems the better
> solution to me. But if we expect such a P1 bug from the automatic
> tests only once every 2 or 3 milestones (or hopefully even less
> often), then this seems an acceptable way to me.
>
> Does that make sense?

Well, this moves the burden of hunting down which of the 40 or so
simultaneously integrated CWSs is responsible for a regression onto RE,
usually even under considerable time pressure (you don't want to know how
many times Kurt and I have been asked when m212 will be ready). That's
absolutely not the idea behind all this. We know that the cost of a bug is
smallest if it is found as early as possible. The best case is that the
responsible developer finds it: the developer usually has an immediate
idea where to look if something breaks, and there is no need for the
considerable overhead of filing and tracking a specific bug.
One of the better ways of ensuring that as many bugs as possible are found
as early as possible are regression tests. For this to work well, four
things need to be ensured:

0) The regression test must be reasonable. This means it must be easy to
start and must finish in an acceptable time. After it has finished, it
must be immediately clear whether it succeeded or failed.

1) If the regression test fails, then only a change in the latest
development (read: CWS) can be responsible for triggering it; otherwise we
place too much burden on the developer (look at the current situation with
assertions). This means, of course, that the main code line has to be
clean with respect to the regression tests at all times.

2) To fulfill the condition mentioned in 1) (main code line always clean),
no CWS with a regression regarding these tests can ever be integrated. It
is of course allowed to temporarily disable certain tests (if they need to
be rewritten etc.) with a CWS, but then they will be disabled in the main
code line as well.

3) The only way to fulfill 2) is to make the regression tests mandatory on
a CWS. RE will run the same tests on the MWS as well, but this is only for
catching issues arising from simultaneous integration and to have a
fallback for the odd cases (like regressions which only happen on a single
platform that was not tested, etc.).

I will vote strongly against moving the whole responsibility for ensuring
that the regression tests still work onto RE, which is where it will lie
if the tests are optional on a CWS. The reason is that RE usually has no
immediate idea why something goes wrong, so RE would have to initiate a
full bug search, filing and handling cycle. This could delay the
publishing of a milestone quite a bit. It is also nearly as costly as the
current situation, where QA initiates the issue cycle.
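To illustrate condition 0) together with the checked-in test list idea
quoted above, here is a minimal sketch of a runner. All file names, the
test-list format, and the commands are hypothetical illustrations, not the
actual OOo tooling:

```python
#!/usr/bin/env python
# Sketch of a regression-test runner: easy to start, and success or
# failure is immediately clear from the output and the exit status.
# The file name "tests.lst" and its format are assumptions: one shell
# command per line, a leading '#' disables a test, so a checked-in
# list can switch a broken test off for everyone on a milestone.
import subprocess
import sys

def run_tests(listfile="tests.lst"):
    """Run every enabled test in listfile; return the failed commands."""
    failed = []
    with open(listfile) as f:
        for line in f:
            cmd = line.strip()
            if not cmd or cmd.startswith("#"):  # empty or disabled
                continue
            result = subprocess.call(cmd, shell=True)
            print("%-4s %s" % ("PASS" if result == 0 else "FAIL", cmd))
            if result != 0:
                failed.append(cmd)
    return failed

if __name__ == "__main__":
    # A non-zero exit status lets build scripts stop immediately,
    # which is what makes the test cheap to run on every CWS.
    sys.exit(1 if run_tests() else 0)
```

The point of the disable mechanism is exactly what Jörg describes: if a
test cannot be fixed in time, one checked-in change turns it off for every
user of the milestone instead of leaving a known-red test in place.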
If we want to make regression tests a working tool, we have to share the
responsibility: the developers ensure on their CWS that their changes do
not break the regression tests; RE ensures that there is no breakage due
to integration and also tests on all platforms, which would be too much of
a burden on a CWS.

Heiner

> Jörg
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> ---------------------------------------------------------------------