Tarek,
We recently talked about incorporating the testing scheme I started in
distutils-buildbot directly into the release path for Distutils to avoid issues
like the ones with the most recent release.
I am putting this out on the list for comments; I'm NOT interested in
bikeshedding this to death. Anyone else is welcome to do their own project if
they feel strongly enough about it. I'm going to take opinions into account,
but I want the tool built before the next release, not the next millennium
release of Python.
The way I started with the project in distutils-buildbot
(http://bitbucket.org/tarek/distutils-buildbot/, smoke-and-mirrors branch:
bin/smoke2.py, bin/smoke2.cfg, and bin/smoke_base_products.cfg) was to assume a
brand-new system, e.g. a newly initialized Cloud Server, and to build out
everything necessary to run the tests under Python 2.4, 2.5, 2.6, 2.7dev, 3.1,
and 3.2dev.
While this approach will be fine when we move to building on-demand
buildbots, it is much too time-consuming, both in development effort and in
elapsed run time, to use for this release cycle. It would be much better (for
now) to just make sure you're running it on a machine that can already do the
job.
The main goal is to come up with a simple way to run full-scope tests,
i.e. from install through the full test suite, on as many large products, and
with as many versions of Python, as practical.
What we had originally started was to build and run the test suites for
the following (spec'd in bin/smoke_base_products.cfg; an illustrative config
section follows the list):
Twisted
numpy
tarek_extensions
warped
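For illustration, a product section in that file might end up looking
something like this; the option names here are my guesses, not what's in the
file today:

    [numpy]
    pythons = 2.5 2.6
    install = %(python)s setup.py install
    test    = %(python)s -c "import numpy; numpy.test()"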
I had stopped at the point where it reads the configuration files and
knows which products to test on which versions of Python.
At this point, I'd like to move this into Distribute proper and get it
into the "pre-release" test process.
So...
First, we need to decide where to put the testing sub-project. I'm
assuming it can just live in the /tests subdirectory, maybe with a
subdirectory to hold this whole project as its own module.
Here's what I'd like to see it do:
Run pre-test checks, i.e. verify that the machine is capable of running
the tests:
1> Assume that `python2.4`, `python2.5`, `python2.6`, and
`python3.1` will each invoke a properly set-up Python interpreter. Making that
true is beyond the scope of this project; it might be taken up at a later time.
2> Create a simple way to verify installed utilities
before proceeding with tests (anyone have one of these around? a sketch
follows this list). Something like:
ASSERT(`nosetests -V` == `nosetests version 0.11.1`)
3> Allow some corrective action to be taken to satisfy
the assertions above (e.g. install nose 0.11.1 from PyPI) for each supported
version of Python.
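For 2> and 3>, here's a minimal Python sketch of the check-and-correct
step, assuming a data-driven table of required utilities. The utility names,
expected versions, and the easy_install fallback are all illustrative, not
decided:

    import subprocess

    # Hypothetical table: utility -> (version-check command, expected
    # version string, corrective install command). Example values only.
    REQUIRED = {
        "nose": (["nosetests", "-V"], "0.11.1",
                 ["python2.6", "-m", "easy_install", "nose==0.11.1"]),
    }

    def verify_utilities(required=REQUIRED, fix=False):
        """Return a list of problem strings; optionally try to fix them."""
        problems = []
        for name, (check, expected, install) in sorted(required.items()):
            try:
                proc = subprocess.Popen(check, stdout=subprocess.PIPE,
                                        stderr=subprocess.STDOUT)
                output = proc.communicate()[0].decode("ascii", "replace")
            except OSError:
                output = ""  # utility not found on PATH
            if expected in output:
                continue
            if fix and subprocess.call(install) == 0:
                continue  # corrected; a re-run will re-verify
            problems.append("%s: wanted %s, got %r" % (name, expected, output))
        return problems

Keeping it table-driven means adding a required utility is a one-line change,
same as adding a product.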
Then:

    for each product to be tested:
        for each Python version this product is to be tested on:
            install the product
            run its test suite
            record any failures
Pretty simple, but it will avoid the sorts of issues we just had with the
most recent release, makes adding new products easy (just add a configuration
section), and can be moved out to the buildbots once it does what we want.
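For concreteness, a first-cut sketch of that loop in Python. The PRODUCTS
dict and the install/test commands are placeholder assumptions; the real
versions would come from each product's configuration section:

    import subprocess

    # Hypothetical stand-in for a parsed smoke_base_products.cfg:
    # each product maps to the Pythons it should be tested under.
    PRODUCTS = {
        "numpy":   ["python2.5", "python2.6"],
        "Twisted": ["python2.4", "python2.5", "python2.6"],
    }

    def smoke(products=PRODUCTS):
        """Install and test each product under each Python; collect failures."""
        failures = []
        for product, pythons in sorted(products.items()):
            for python in pythons:
                # Placeholder commands; the real invocations belong in the
                # product's config section (run from its unpacked sdist).
                steps = [("install", [python, "setup.py", "install"]),
                         ("test",    [python, "setup.py", "test"])]
                for label, argv in steps:
                    if subprocess.call(argv) != 0:
                        failures.append((product, python, label))
                        break  # no point testing a failed install
        return failures

    if __name__ == "__main__":
        for product, python, step in smoke():
            print("FAIL: %s under %s during %s" % (product, python, step))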
Comments etc. gladly accepted as long as there are no bikeshedding
tools (hammers, paint-brushes, Erlang) in hand.
Thanks,
S
aka/Steve Steiner
aka/ssteinerX