On 04/01/2013 09:11, Jochen Breuer wrote:
Hi everyone,

I've already posted this to the Mageia forum, but doktor5000 suggested
to also post this to the mailing list.

I'd like to ask whether it would make sense for Mageia to automatically
test freshly built RPM packages. The idea isn't new: Ubuntu runs
regression tests with python-unit for a lot of the packages in its
distribution:
https://wiki.ubuntu.com/MeetingLogs/devweek0909/RegressionTests

I think that automated regression tests would make QA's job a lot
easier. Failures within packages would surface much faster, and a lot
of manual checking could be avoided. Don't get me wrong: every package
that passes the regression tests should still be checked by a human. ;)

The following scenario could be used for this. After the package has
been built, the RPM (or job) is passed to the regression test server.
This server spawns a new virtual machine with the Mageia version the
package should be checked on. Once the machine has booted, the package
is installed with all the needed dependencies and the tests are
executed. If everything went smoothly, the package can then be checked
again manually by QA. If one or more tests failed, something is wrong
with the package even though the build itself worked, and the
maintainer/developer might want to take another look at it.
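
A minimal sketch of such a runner, assuming vagrant's default /vagrant
shared folder; the run-package-tests helper inside the VM is made up,
nothing of this exists yet:

#!/usr/bin/env python
# Hypothetical test-runner sketch: boot a throwaway VM, install the
# freshly built RPM with its runtime dependencies, run the tests,
# then destroy the VM. run-package-tests is an assumption.
import subprocess
import sys

def run_package_tests(rpm_name):
    # the Vagrantfile in the current directory selects the Mageia base box
    subprocess.check_call(["vagrant", "up"])
    try:
        # /vagrant is vagrant's default shared folder on the guest
        subprocess.check_call(
            ["vagrant", "ssh", "-c",
             "sudo urpmi --auto /vagrant/" + rpm_name])
        # a non-zero exit code means at least one test failed
        return subprocess.call(
            ["vagrant", "ssh", "-c", "run-package-tests " + rpm_name]) == 0
    finally:
        # throw the instance away so the next run starts from a clean image
        subprocess.check_call(["vagrant", "destroy", "-f"])

if __name__ == "__main__":
    sys.exit(0 if run_package_tests(sys.argv[1]) else 1)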
We already run existing test suites during package build... Running them again on a different host would just change the execution environment to use runtime dependencies instead of build-time dependencies. I don't know whether the potential results are worth the additional infrastructure needed.

To set this up, a combination of hudson (hudson-ci.org) and vagrant
(vagrantup.com) could be used. vagrant makes it possible to use
VirtualBox to boot several VM instances from base images that are
thrown away after everything is done. This way multiple versions of
Mageia could be tested on one well-equipped server. One drawback of
VirtualBox would be the missing ARM support.
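
Testing several releases on one server could then be a simple loop
over per-release directories, each holding a Vagrantfile that points
at the matching base box. The directory names and the
run-package-tests helper are again made up; VAGRANT_CWD tells vagrant
where to look for the Vagrantfile:

# Hypothetical dispatcher: one subdirectory per Mageia release, each
# with a Vagrantfile pointing at the matching VirtualBox base box.
import os
import subprocess

RELEASES = ["mageia-2", "mageia-3", "cauldron"]  # assumed directory names

def test_on_all_releases(rpm_name):
    results = {}
    for release in RELEASES:
        # vagrant reads the Vagrantfile from $VAGRANT_CWD, so one
        # subdirectory per release selects the matching base box
        env = dict(os.environ, VAGRANT_CWD=release)
        subprocess.check_call(["vagrant", "up"], env=env)
        try:
            # same install-and-test step as in the sketch above
            results[release] = subprocess.call(
                ["vagrant", "ssh", "-c",
                 "sudo urpmi --auto /vagrant/" + rpm_name +
                 " && run-package-tests " + rpm_name], env=env) == 0
        finally:
            subprocess.check_call(["vagrant", "destroy", "-f"], env=env)
    return results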

Let's use something like a JSON lib for Python as an example. The
package build was successful. But Python is a highly dynamic language,
so some submodule is missing due to a wrong install path, even though
"import json_xy" of the main module works flawlessly. Without further
testing this will only surface when a poor developer uses the
submodule, or when a second package that depends on that submodule is
tested by QA and shows errors or misbehaviour.
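
A smoke test for this case is short: instead of trusting the main
import, try every submodule the package is supposed to ship. json_xy
and its submodule names are of course made up:

# Hypothetical smoke test: importing the top-level module is not
# enough, so try every submodule the package is supposed to ship.
import importlib
import unittest

SUBMODULES = ["json_xy", "json_xy.decoder", "json_xy.encoder"]  # made up

class ImportSmokeTest(unittest.TestCase):
    def test_all_submodules_importable(self):
        for name in SUBMODULES:
            # a wrong install path raises ImportError here even though
            # a plain "import json_xy" would still succeed
            importlib.import_module(name)

if __name__ == "__main__":
    unittest.main()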
Second example, same lib. The JSON lib usually ships with a handy
executable to lint-check JSON files, but upstream decided to change
that. Due to an error in the upstream repo, the executable script file
is still there, but it is empty or full of gibberish; lint checking
with this executable is no longer possible. Automated testing would
catch this very quickly, without someone having to sit there pasting
JSON into a file to feed the executable with it.
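
This case boils down to feeding the executable known-good and
known-bad input and checking the exit codes; the json_xy_lint name is
again an assumption:

# Hypothetical check for the shipped lint executable: an empty or
# garbage script would no longer accept valid JSON, which this catches.
import subprocess
import tempfile
import unittest

def lint(text):
    with tempfile.NamedTemporaryFile(suffix=".json") as f:
        f.write(text.encode("utf-8"))
        f.flush()
        # exit code 0 is assumed to mean "valid JSON"
        return subprocess.call(["json_xy_lint", f.name])

class LintExecutableTest(unittest.TestCase):
    def test_valid_json_passes(self):
        self.assertEqual(lint('{"a": 1}'), 0)

    def test_invalid_json_fails(self):
        self.assertNotEqual(lint('{"a": '), 0)

if __name__ == "__main__":
    unittest.main()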
Which sounds like "writing tests", not "executing already existing tests", which is a whole different story...

--
BOFH excuse #139:

UBNC (user brain not connected)
