Hello!

> I believe we can improve on this area: to have safer and optimized

Do we have a problem? Does testing applications instead of enablers let
problems slip into extras that do not pass the extras criteria?

> Let me give some examples of such possible bugs:

> * A new version of libfoo is uploaded with unexpected ABI (Application
> Binary Interface) changes. At the same time some application is
> compiled against it and it works fine, so the package (together with
> libfoo) is promoted to extras. OTOH, this new version of libfoo does not
> play well with the existing packages in extras, which will break or
> even fail to start

Yes, this might be a problem. Could we test this automatically by
checking the undefined symbols of binaries against the symbols offered
by the libraries (to catch hard linking errors)? Errors that are not
hard linking errors can probably only be detected by testing all
dependent applications.
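
Something like this minimal sketch is what I have in mind, assuming the
binaries and libraries are already unpacked somewhere (weak and
versioned symbols, and symbols satisfied by the platform libraries,
would need extra care):

import subprocess

def dynamic_symbols(path, only):
    # "only" is either "--undefined-only" or "--defined-only";
    # nm -D lists the dynamic symbol table.
    out = subprocess.run(
        ["nm", "-D", only, path],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line.split()[-1] for line in out.splitlines() if line.strip()}

def missing_symbols(binary, libraries):
    needed = dynamic_symbols(binary, "--undefined-only")
    offered = set()
    for lib in libraries:
        offered |= dynamic_symbols(lib, "--defined-only")
    return needed - offered

# Example (paths made up):
# print(missing_symbols("/usr/bin/someapp", ["/usr/lib/libfoo.so.1"]))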

> * A new version of python-foo (some binding for libfoo) is uploaded to
> extras-devel, but contains a serious flaw that makes it segfault when
> calling method xyz(). At the same time, a GUI application is uploaded
> which depends on that newer version, but does not use xyz() at all. In
> this case, the current QA checks would basically allow the application
> and python-foo to enter extras, but some future application which
> tries to use xyz() will segfault, blocking it from entering extras
> until python-foo is fixed.

Yes. There were discussions in the past about how to handle, manage and
maintain libraries that have multiple dependent applications with
different maintainers. I do not remember that a solution was found
(besides "talk to each other" and "file a bug").
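
For the python-foo example, even a trivial per-library smoke test would
catch the flaw before promotion. A minimal sketch (the module name and
the signature of xyz() are made up):

import unittest

import foo  # hypothetical Python binding for libfoo

class BindingSmokeTest(unittest.TestCase):
    def test_xyz_does_not_crash(self):
        # A segfault here aborts the test runner and fails the check,
        # unlike an application that simply never calls xyz().
        foo.xyz()

if __name__ == "__main__":
    unittest.main()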

> * Require (or strongly recommend) *all* packages to have at least some
> sort of unit/functional testing. In fact, most packages imported from
> Debian have tests, but what usually happens is that they are disabled
> so the package can build on scratchbox.

IMHO that does not solve the above problems, and such a strong
requirement would possibly keep a number of libraries out of the
repository (including mine), possibly even ones that are part of the
platform. In fact, to solve the above problems it would not be enough
to test my application itself; I would have to test, from within my
application, whether every function I call from foreign code is
available and does what I expect it to do.

Of course, if I wrote tests for my own library they would always pass
and could still break applications at any time: if I drop a function I
drop its test too, and if I change the meaning of a function I adapt
the test too. The same goes for applications. You want to test the
interactions between applications and libraries, so you must have test
cases for exactly this interaction. And while I appreciate automatic
test suites, I and most other small projects cannot manage this for
lack of resources. I likely find 90% of my bugs much faster using
application functionality tests (doing server development in my job,
things are different...).
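
Just to illustrate what such an interaction test would look like from
the application side (the function name frobnicate is made up): the
application checks its own assumptions about the foreign code, not the
library in isolation.

import foo  # hypothetical binding the application depends on

def test_foreign_api_contract():
    # The function we call must still exist...
    assert hasattr(foo, "frobnicate")
    # ...and must still do what we expect it to do.
    assert foo.frobnicate(0) == 0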

> * Have some way to run these tests automatically from the package
> sources when uploading packages to the autobuilder.
> * Exercise installation of packages (maybe using piuparts? See
> http://packages.debian.org/sid/piuparts), if possible on a real
> device.

I think the maemo version of lintian does/will do such stuff, but not
by installing the package, rather by checking it for known failures. A
working installation is not good enough anyway: you would need to start
the application, but how do you check that it works? We should solve
the easy problems first; extending such a mechanism will possibly
fix/find more problems faster.
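
If we did want to exercise installation automatically, a rough sketch
could look like the following; the queue directory is made up, and
piuparts itself installs, upgrades and removes the package in a chroot
and reports broken maintainer scripts or leftover files:

import glob
import subprocess
import sys

def check_debs(queue_dir="/var/autobuilder/incoming"):
    failures = []
    for deb in glob.glob(queue_dir + "/*.deb"):
        # A non-zero exit status means piuparts found a problem.
        if subprocess.run(["piuparts", deb]).returncode != 0:
            failures.append(deb)
    return failures

if __name__ == "__main__":
    bad = check_debs()
    for deb in bad:
        print("piuparts failed for", deb, file=sys.stderr)
    sys.exit(1 if bad else 0)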

> * Test disk-full conditions, and random errors during package
> installation of packages from Extras.

Disk full on installation is a problem of the packaging mechanism and
normally not a problem of the package (as long as it does not run
space-consuming scripts of its own during installation). For checking
disk-full conditions in the application you must install it, run it and
trigger its writing functionality. Doing this automatically is
somewhere between difficult and impossible.
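
One can at least simulate failing writes without actually filling a
disk, e.g. by capping the maximum file size with RLIMIT_FSIZE; the
error is then EFBIG rather than ENOSPC, but it exercises the same
error-handling path in the application (Unix only):

import resource
import signal

# Ignore SIGXFSZ so an oversized write raises OSError instead of
# killing the process.
signal.signal(signal.SIGXFSZ, signal.SIG_IGN)
resource.setrlimit(resource.RLIMIT_FSIZE, (4096, 4096))

with open("/tmp/diskfull-test", "wb", buffering=0) as f:
    f.write(b"x" * 4096)  # fills the file exactly up to the limit
    try:
        f.write(b"x")     # the next byte must fail with EFBIG
    except OSError as e:
        print("write failed as expected:", e)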

> * Promote usage of ABI check tools.

Yes. As mentioned above.
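
As a poor man's ABI check one could at least diff the exported dynamic
symbols between two versions of a library; symbols that disappeared are
a strong hint of an incompatible change (changed structure layouts or
changed semantics are of course not caught by this):

import subprocess

def exported_symbols(lib):
    out = subprocess.run(
        ["nm", "-D", "--defined-only", lib],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line.split()[-1] for line in out.splitlines() if line.strip()}

def removed_symbols(old_lib, new_lib):
    return exported_symbols(old_lib) - exported_symbols(new_lib)

# Example (paths made up):
# print(removed_symbols("libfoo.so.1.0", "libfoo.so.1.1"))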

I would suggest that the testers collect recurring testing failures
which they feel could be found automatically, and contact the build
masters in such cases (by filing a bug/enhancement request), if they
are not doing this already anyway.

-- 
Regards...
       Tim
