On Wed, Jan 11, 2012 at 2:10 PM, Michael Scherer <m...@zarb.org> wrote:
> On Wednesday, January 11, 2012 at 11:24 -0500, Juan Luis Baptiste wrote:
>
> So trusting and having bugs are totally unrelated. And if you doubt that
> bugs appear, just see our bugzilla. We trust upstream (most of them),
> and yet there are bugs.
No, they're not totally unrelated. When we don't have the manpower to do
thorough QA on every package, we need to trust the packager (and
upstream, of course) to have done his best to test the new version,
without expecting him to have tested all the new features. Or do you
expect a QA member to get a list of all the new features of a backport
and start testing them one by one? That's what I call unrealistic in
practice.

>> If you think that all version backports should be tested in the same
>> way as updates by QA, then all version upgrades in cauldron should be
>> tested by QA before pushing them to the BS, right?
>
> No, they should be tested before being put in the stable release. And
> that's exactly what we do by freezing and testing before release.

Of course, but again, we can't test *all* the new features of *all* the
programs that are going into a new release; we do our best for most of
them. Critical components like the installer, kernel, drak* tools, etc.
need more testing, and that's where our (very small) QA team should
spend its time after a freeze. For the rest, we do our best to test
after each version update of a package.

>> why risk a bug
>> on a program when updating to a new mga version and not when doing a
>> backport? It's exactly the same situation.
>
> That was already extensively discussed in the past, but if we do the
> same stuff as in Mandriva, we will end up with the same result as in
> Mandriva.
> - people don't test backports, because that's not mandatory
> => some bugs slip.

Of course, and that will also happen when updating packages during the
development cycle of cauldron. Yes, we freeze to be able to test, but we
can't test every new feature of all applications. We test the most
critical stuff, where we can't risk bugs (and even there some slip
through at times).

> In the end, users complain that the distribution is broken, and that
> impacts our image.
> We cannot tell them "do not mix", because we cannot tell them to
> update backports without fear, as that would be lying. And in the end,
> saying "this is not supported, but we offer it to you" is just sending
> a confusing message.
>
> If we start to give out low-quality stuff as Mageia, people will just
> think Mageia is low quality.

Users will complain anyway: either because there are no backports of
their favorite application, or because a backported version has a bug,
so we need to find a balance between those two. Expecting the same
amount of testing for a backport as for an update would put too much
burden on QA and make the backporting process too slow for users. So we
need more relaxed tests for backports, enough to guarantee that the
application's main features work without putting too much burden on QA,
whereas updates need to guarantee that a bug is really fixed. How to
define what those tests should be? That's the issue as I see it. We
could have a "backports team", though, that would do QA for backports
without taking time from the updates QA team...

The other problem is the third-party repos, which bring lots of
trouble because their packages are of low quality and don't follow our
standards. If we don't have our own backports, and don't move fast
enough, users will continue to use those third-party repos, which will
also feed the "Mageia is of low quality" perception.

--
Juancho