Re: QA process for "middleware" (libfoo, python-bar) packages: some ideas
On Thu, Dec 17, 2009 at 8:28 AM, Anderson Lizardo wrote:
> One such case I found where QA failed was on the "rootsh" package (I
> copied Faheem, who maintains the package). I found that the rootsh
> version in the "extras" repository could not be removed using the
> Application Manager. So I tried on the command line, and I noticed there
> was a syntax error in the postrm script (a missing "then"), preventing
> the removal of the package. I just noticed it looks like
> https://bugs.maemo.org/show_bug.cgi?id=6014; I will comment more
> there.

Correction: this is not the same bug; it just shows the same error in the logs attached there (in a different context). I'll open a separate bug report for it.

Regards,
--
Anderson Lizardo
OpenBossa Labs - INdT
Manaus - Brazil
___
maemo-developers mailing list
maemo-developers@maemo.org
https://lists.maemo.org/mailman/listinfo/maemo-developers
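The failure class described above (a maintainer script that the shell cannot parse, aborting removal) can be caught before upload with a plain syntax check. A minimal sketch, using a hypothetical postrm fragment (the actual rootsh script is not shown here; the config-file path is illustrative):

```shell
broken=$(mktemp); fixed=$(mktemp)

# The reported failure class: an "if" without its "then".
cat > "$broken" <<'EOF'
if [ -f /etc/rootsh.conf ]
    rm -f /etc/rootsh.conf
fi
EOF

# Corrected form: "then" terminates the condition list.
cat > "$fixed" <<'EOF'
if [ -f /etc/rootsh.conf ]; then
    rm -f /etc/rootsh.conf
fi
EOF

# "sh -n" parses without executing, so it catches this before dpkg does.
sh -n "$broken" 2>/dev/null || echo "broken postrm: syntax error"
sh -n "$fixed" && echo "fixed postrm: parses cleanly"
rm -f "$broken" "$fixed"
```

A check like this could run in the autobuilder (or lintian) over every maintainer script in a package, with no device needed.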
Re: QA process for "middleware" (libfoo, python-bar) packages: some ideas
On Wed, Dec 16, 2009 at 4:39 PM, Tim Teulings wrote:
> Hello!
>
>> I believe we can improve on this area: to have safer and optimized
>
> Do we have a problem? Does testing applications instead of enablers make
> us let problems go into extras that do not pass the extras criteria?

I believe application testing will catch most problems before packages hit extras. But there are some kinds of problems (such as installation/removal problems) which I believe are not covered by the QA criteria (and which I think automated tests could help with).

>> Let me give some examples of such possible bugs:
>>
>> * A new version of libfoo is uploaded with unexpected ABI (Application
>> Binary Interface) changes. At the same time some application is
>> compiled against it and it works fine, so the package (together with
>> libfoo) is promoted to extras. OTOH, this new version of libfoo does not
>> play well with the existing packages in extras, which will break or
>> even fail to start.
>
> Yes, this might be a problem. Could we test this automatically by
> checking the missing symbols of binaries against the symbols offered by
> libraries (for hard linking errors)? Non-hard linking errors can
> probably only be detected by testing all depending applications?

While I've not seen a single mention of this problem so far (maybe because we don't upload updated library packages very often, AFAICT from my sparse monitoring of the extras-cauldron mailing list), I think these kinds of "cross-application" breaks are very possible as long as there is no clear policy on how to handle enablers/middleware used by many applications. For the official Maemo libraries we trust the internal Nokia QA, but for community enablers (which includes the entire PyMaemo stack), the QA is our own responsibility.

Testing all depending applications, if done manually, might not scale in the long term, as the number of applications tends to grow. But I think it might be worth including in the QA criteria some requirement to review the depending applications whenever reviewing an application will also trigger an enabler update.

> Yes. There were discussions in the past about how to handle, manage,
> and maintain libraries that have multiple dependent applications with
> different maintainers. I do not remember that a solution was found
> (besides "talk to each other", "file a bug").

As I said earlier, I think we need to come up with specific QA guidelines for common libraries/bindings, so that a library update due to application A does not break an application B that depends on that same library. For that we might start by creating a list of such libraries. "apt-cache" has the information needed to find dependencies shared by more than one application.

>> * Require (or strongly recommend) *all* packages to have at least some
>> sort of unit/functional testing. In fact, most packages imported from
>> Debian have tests, but what usually happens is that they are disabled
>> so the package can build on Scratchbox.
>
> IMHO that does not solve the above problems, and such a strong
> requirement would possibly keep a number of libraries out of the
> repository (including mine). Possibly even ones that are part of the
> platform? In fact, to solve the above problems this would mean that I do
> not have to test my application, but I must test in my application
> whether all the functions I call from foreign code are available and do
> what I expect them to do. Of course, if I wrote tests for my library
> they would always pass, and the library could still break applications
> at any time. If I drop functions, I will drop the tests, too. If I
> change the meaning of a function, I will adapt the test, too. The same
> goes for applications. You want to test interactions between
> applications and libraries, so you must have test cases for this
> interaction. And while I appreciate automatic test suites, I and most
> other small projects cannot manage this for lack of resources. I likely
> find 90% of my bugs using application functionality tests much faster
> (doing server development in my day job, things are different...).

"Unit testing" is one approach (of many possible). Maybe tests can be made optional (and an application might gain "bonus points" on QA if it has good test coverage), but I think there should be at least some infrastructure that allows running any available automated tests on an application and collecting the results, so that the developer does not have to remember to run them before each upload. This would not "block" the upload, but the upload of a source package could trigger an automatic run of the tests.

>> * Have some way to run these tests automatically from the package
>> sources when uploading packages to the autobuilder.
>> * Exercise installation of packages (maybe using piuparts? See
>> http://packages.debian.org/sid/piuparts), if possible on a real
>> device.
>
> I think the maemo version of lintian does/will do such stuff, but not by
> installing; rather by checking the package for known failures. A working
> installation is not good enough.
Re: QA process for "middleware" (libfoo, python-bar) packages: some ideas
Hello!

> I believe we can improve on this area: to have safer and optimized

Do we have a problem? Does testing applications instead of enablers make us let problems go into extras that do not pass the extras criteria?

> Let me give some examples of such possible bugs:
>
> * A new version of libfoo is uploaded with unexpected ABI (Application
> Binary Interface) changes. At the same time some application is
> compiled against it and it works fine, so the package (together with
> libfoo) is promoted to extras. OTOH, this new version of libfoo does not
> play well with the existing packages in extras, which will break or
> even fail to start.

Yes, this might be a problem. Could we test this automatically by checking the missing symbols of binaries against the symbols offered by libraries (for hard linking errors)? Non-hard linking errors can probably only be detected by testing all depending applications?

> * A new version of python-foo (some binding for libfoo) is uploaded to
> extras-devel, but contains a serious flaw that makes it segfault when
> calling method xyz(). At the same time, a GUI application is uploaded
> which depends on that newer version, but does not use xyz() at all. In
> this case, the current QA checks would basically allow the application
> and python-foo to enter extras, but some future application which
> tries to use xyz() will segfault, blocking it from entering extras
> until python-foo is fixed.

Yes. There were discussions in the past about how to handle, manage, and maintain libraries that have multiple dependent applications with different maintainers. I do not remember that a solution was found (besides "talk to each other", "file a bug").

> * Require (or strongly recommend) *all* packages to have at least some
> sort of unit/functional testing. In fact, most packages imported from
> Debian have tests, but what usually happens is that they are disabled
> so the package can build on Scratchbox.

IMHO that does not solve the above problems, and such a strong requirement would possibly keep a number of libraries out of the repository (including mine). Possibly even ones that are part of the platform? In fact, to solve the above problems this would mean that I do not have to test my application, but I must test in my application whether all the functions I call from foreign code are available and do what I expect them to do. Of course, if I wrote tests for my library they would always pass, and the library could still break applications at any time. If I drop functions, I will drop the tests, too. If I change the meaning of a function, I will adapt the test, too. The same goes for applications. You want to test interactions between applications and libraries, so you must have test cases for this interaction. And while I appreciate automatic test suites, I and most other small projects cannot manage this for lack of resources. I likely find 90% of my bugs using application functionality tests much faster (doing server development in my day job, things are different...).

> * Have some way to run these tests automatically from the package
> sources when uploading packages to the autobuilder.
> * Exercise installation of packages (maybe using piuparts? See
> http://packages.debian.org/sid/piuparts), if possible on a real
> device.

I think the maemo version of lintian does/will do such stuff, but not by installing; rather by checking the package for known failures. A working installation is not good enough. You would need to start the application, but how do you check that it works? We should solve the easy problems first, and extending such a mechanism possibly fixes/finds more problems faster.

> * Test disk-full conditions, and random errors during
> installation of packages from Extras.

Disk full on installation is a problem of the packaging mechanism and normally not a problem of the package (as long as it does not run space-consuming scripts of its own during installation). To check disk-full conditions in the application itself, you must install it, run it, and trigger its writing functionality. Doing this automatically is somewhere between difficult and impossible.

> * Promote usage of ABI check tools.

Yes. As mentioned above.

I would suggest that testers collect recurring testing failures which they feel could be found automatically, and contact the build masters in such cases (by filing a bug/enhancement request) - if they are not doing this already.

--
Regards... Tim
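The symbol-comparison idea raised above can be sketched with standard binutils tools. A minimal sketch, with the `nm` output stubbed by inline lists so it is self-contained (the symbol names are illustrative):

```shell
# Real lists would come from the dynamic symbol tables, e.g.:
#   nm -D --undefined-only app        -> symbols the binary needs
#   nm -D --defined-only  libfoo.so   -> symbols the library exports
needed=$(mktemp); exported=$(mktemp)
printf '%s\n' foo_init foo_connect foo_xyz | sort > "$needed"
printf '%s\n' foo_init foo_connect        | sort > "$exported"

# comm -23 prints lines only in the first file: symbols the application
# references that the new library version no longer offers.
missing=$(comm -23 "$needed" "$exported")
if [ -n "$missing" ]; then
    echo "possible ABI break, missing symbols: $missing"
fi
rm -f "$needed" "$exported"
```

This only catches hard linking errors (removed or renamed symbols), exactly as the mail notes; behavioral changes behind an unchanged symbol still require testing the depending applications.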
QA process for "middleware" (libfoo, python-bar) packages: some ideas
Hi,

I'm not sure if there's a better list to send this to (or even whether it would be better to use talk.maemo.org), so feel free to redirect me to the right channel.

For lack of a better name, I'm calling "middleware" the packages that exist solely as "enablers" for developing our great GUI applications. So far, it looks like the QA process is geared towards user/GUI applications. For instance, the "thumbs up/down" system is restricted to applications in user/* categories, and the QA testers are not instructed to check for possible problems in packages outside those categories.

I believe we can improve in this area, to have safer and optimized middleware packages, such as libraries, language bindings, data packages, build tools, and so on. While the user will obviously notice bugs in the UI, problems with the middleware (for instance, an "unknown symbol" bug or a segfault due to unexpected ABI changes) are likely to be the most serious ones, IMHO.

Let me give some examples of such possible bugs:

* A new version of libfoo is uploaded with unexpected ABI (Application Binary Interface) changes. At the same time, some application is compiled against it and works fine, so the package (together with libfoo) is promoted to extras. OTOH, this new version of libfoo does not play well with the existing packages in extras, which will break or even fail to start.

* A new version of python-foo (some binding for libfoo) is uploaded to extras-devel, but contains a serious flaw that makes it segfault when calling method xyz(). At the same time, a GUI application is uploaded which depends on that newer version but does not use xyz() at all. In this case, the current QA checks would basically allow the application and python-foo to enter extras, but some future application which tries to use xyz() will segfault, blocking it from entering extras until python-foo is fixed.

I have some ideas to improve (and assure) the quality of middleware packages:

* Require (or strongly recommend) *all* packages to have at least some sort of unit/functional testing. In fact, most packages imported from Debian have tests, but what usually happens is that they are disabled so the package can build on Scratchbox.
* Have some way to run these tests automatically from the package sources when uploading packages to the autobuilder.
* Exercise installation of packages (maybe using piuparts? See http://packages.debian.org/sid/piuparts), if possible on a real device.
* Test disk-full conditions and random errors during installation of packages from Extras.
* Promote usage of ABI check tools.

Any other ideas?

Regards,
--
Anderson Lizardo
OpenBossa Labs - INdT
Manaus - Brazil
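The second idea in the list above (running a package's tests automatically on upload, without blocking it) could look roughly like this on the autobuilder side. A minimal sketch: the `debian/tests/run` path is an assumed convention for this illustration, not an existing autobuilder feature, and the source tree is stubbed with a trivial passing test:

```shell
# Stub an unpacked source tree containing a test runner (assumed layout).
srcdir=$(mktemp -d)
mkdir -p "$srcdir/debian/tests"
printf '#!/bin/sh\nexit 0\n' > "$srcdir/debian/tests/run"
chmod +x "$srcdir/debian/tests/run"

# Hypothetical post-build hook: run tests if present, record the result,
# never block the upload on a failure.
if [ -x "$srcdir/debian/tests/run" ]; then
    if "$srcdir/debian/tests/run"; then
        result="PASS"
    else
        result="FAIL (reported to the maintainer, upload not blocked)"
    fi
else
    result="no tests found"
fi
echo "automated tests: $result"
rm -rf "$srcdir"
```

The key design point is the non-blocking report: maintainers get the signal on every upload, while packages without tests (or with known-failing tests) still build as before.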