Vincent Massol wrote:
-----Original Message-----
From: Jesse McConnell [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 28 December 2005 20:44
To: Maven Developers List
Subject: Re: [discussion] Integration testing location
I worry a bit about mixing unit and integration tests generally...
maybe the recommended case would be for them to go into
src/integration-test/java or something along those lines...
logically the structure src/test/java and src/test/it doesn't do it for me,
since the integration tests are probably written in Java anyway, so that kinda
breaks the spirit of the 'src'/'type'/'language' convention we have going...
Right. I hadn't seen it this way (I thought src/test was for all tests and
that src/main was for all runtime sources) but I think you're right. I'm
fine with src/it/java.
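For what it's worth, a project can already try that layout by adding src/it/java as an extra test source root. A minimal sketch using the build-helper-maven-plugin (the coordinates and goal name below are from memory, so treat this as an untested example):

  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>build-helper-maven-plugin</artifactId>
        <executions>
          <execution>
            <!-- register src/it/java as an additional test source root -->
            <phase>generate-test-sources</phase>
            <goals>
              <goal>add-test-source</goal>
            </goals>
            <configuration>
              <sources>
                <source>src/it/java</source>
              </sources>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>

That only gets the extra sources compiled along with the ordinary unit tests, though; it doesn't give them their own place in the lifecycle, which is the other half of this discussion.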
as for the rest of it, I like adding the phases...
Sure, but there's a very common use case for integration testing: the need to
set up the environment before the tests and to clean it up after them. Of
course you could write the plugin so that it supports doing all of this
itself, but then you're no longer flexible and you're not providing a solution
for lots of other use cases.
For example, Cargo could define a cargo:test goal which would start the
container, run the tests and stop the container. But that doesn't cover all
the variations, like:
- the user simply wants to redeploy the new artifacts into the container
without starting the container again
- the user wants to write his/her tests using TestNG rather than JUnit
(or any other test framework)
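As a rough sketch of the lifecycle-phase alternative (the plugin coordinates, goal names and phase names below are written from memory and would need checking against the real Cargo plugin), the user's POM could bind the container start/stop around the tests:

  <plugin>
    <groupId>org.codehaus.cargo</groupId>
    <artifactId>cargo-maven2-plugin</artifactId>
    <executions>
      <execution>
        <!-- start the container (and deploy the artifact) before the integration tests -->
        <id>start-container</id>
        <phase>pre-integration-test</phase>
        <goals>
          <goal>start</goal>
        </goals>
      </execution>
      <execution>
        <!-- stop the container once the integration tests have run -->
        <id>stop-container</id>
        <phase>post-integration-test</phase>
        <goals>
          <goal>stop</goal>
        </goals>
      </execution>
    </executions>
  </plugin>

Each of the variations above then becomes a matter of changing the bindings in the POM (for instance, dropping the start execution to reuse a running container, or letting whatever test framework is bound to the integration-test phase run the tests), rather than a new goal in the Cargo plugin.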
Sounds more like functional testing to me... anything that needs a
deployed system is a far more complex situation.
I am working with a PhD student at CERN on distributed testing, and
there is a project, GridUnit, that does some good stuff already: running
JUnit tests across a farm of nodes, collecting and presenting the
results. Clearly, presenting the results gets more complex once you have
tested on 20 boxes; you want to know what failed everywhere, what failed
somewhere, and whether there is any commonality to the partial failures.
The perspective we are taking is that a test run is just another thing
to deploy; you have a test listener to collect results and logs from
across the machines, then test runners on different machines running
different tests. The listener collects the results, post-processes them
and then you can act on the outcome (report failures, host the reports,
etc).
In this view, functional testing of a deployment is just another
deployment. It's different from a production deployment, but not very
different; just a different deployment descriptor to process.
-Steve