Ok, responses inline.

Vincent Massol wrote:

> It's a test so I guess it's "normal" you may wish to run it in the "release"
> lifecycle as part of "m2 install".
We don't actually have a release lifecycle, so we need to think about how this applies. I would like to keep it a little separate, though. I think we might use a specific profile for a release, since you can affect the plugins bound there as well as the configuration (e.g. set a higher standard for release, turn off debugging, add in a class obfuscation plugin, etc.).
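A rough sketch of what such a release profile could look like in a POM follows. The profile id, plugin coordinates and the configuration value are all illustrative here, not an agreed convention:

```xml
<profiles>
  <profile>
    <id>release</id>
    <build>
      <plugins>
        <!-- e.g. raise the coverage bar when building a release -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-clover-plugin</artifactId>
          <configuration>
            <!-- hypothetical value for illustration -->
            <targetPercentage>85%</targetPercentage>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```

The point being that plugin bindings and configuration inside the profile only take effect when the release profile is activated.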

> Hmm... I had forgotten about that. I think that there is a valid use case
> for having clover:check as part of "m2 install" even though it's a bit heavy
> as you say. But it'd be optional of course. The same applies for checkstyle,
> pmd, simian, etc.
I agree.



> Yep, this is what I've done yesterday while implementing the clover plugin.
> I've found only 2 use cases so far:
Thanks for this, it's much easier to get a whole view this way.

> Use case 1:
> ===========
>
> - Run "m2 site:site"
> - It generates the clover report by calling the clover:report goal
> - Still open: whether we spawn a "m2 install" lifecycle or not.
I think we have to. Otherwise it gets quite confusing as to which steps belong to which lifecycle, and you go through them in the "wrong order" - e.g. compile, clover:compile, test, clover:test, ...

> Maybe we
> shouldn't by default so that "m2 site:site" generates a report of what is
> currently in the existing clover database. The user would run whatever
> he/she wants before that (m2 install site).
I'd prefer that the information be encapsulated in the POM; the "forking" could then specify to the clover plugin whether to run integration tests or normal tests.

> Or maybe we should check the
> freshness of the clover database file against the source files (main and
> test) and regenerate only if a source is newer than the clover db.
This is the long term goal, and I think we can do this. The best thing about this is that it works as designed now, and gets more efficient when we implement more timestamp checking later. And that checking can apply to anything taking a set of sources and a set of outputs.

> - If the project has modules, clover:report will merge the clover databases
> of the modules to generate an aggregated clover report. I guess this could
> be turned on/off by a configuration property.
Yep, we already have aggregation proposals lying around somewhere, and this is how it should work. The clover plugin needs to operate contextually.
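For the on/off switch, something like the following could work. The `aggregate` parameter name is hypothetical; whatever we settle on, the point is a simple boolean in the plugin configuration:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-clover-plugin</artifactId>
  <configuration>
    <!-- hypothetical parameter: merge the module databases into
         one aggregated report at the top level -->
    <aggregate>true</aggregate>
  </configuration>
</plugin>
```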

> Use case 2:
> ===========
>
> - Run "m2 install"
> - It runs clover:instrument in the generate-sources phase and then the other
> phase depending on the packaging type
Not sure what you mean by "and then the other phase depending on the packaging type".

> - It runs clover:check as part of the "verification" phase (I've currently
> bound it to the integration-tests phase while we discuss it).
I agree with a verification phase, but I think the goals in here should fork the goals they require each time, rerunning things when the inputs have changed and relying on timestamp checking to avoid doing too much work.

The reason I suggest this is that if you have something like jcoverage and clover both configured (which you might do if they produce different metrics), and the classes they produce are not compatible with each other, you really need to have two sets of output classes, and run the tests twice.

I guess the downside is always running the tests twice when you really just want clover enabled on the normal run. I don't see any other way presently, though - a fundamental issue here is: "should you package a set of classes different from what you have just tested?" I know it is theoretically OK for clover; it just sounds like dangerous ground.

Let's see how this operates in the real world, then aim to improve it in Maven 2.1. So clover:check gets bound to a verification phase and executes the new lifecycle, configured by an XML file to do the binding.

I'm still not confident in this design. It seems that if verification is registered it should actually happen in test (like the test plugin itself does), and everything funnels through the one lifecycle, but then if that goal is executed standalone it does the fork. I will think about it some more.
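For reference, the binding I have in mind would look roughly like this in the POM. This assumes the proposed "verify" phase name, which is not final, and illustrative plugin coordinates:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-clover-plugin</artifactId>
  <executions>
    <execution>
      <!-- bind clover:check into the proposed verification phase -->
      <phase>verify</phase>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```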

> This is run
> only if it is activated:
This only seems necessary to work around the fact that it may or may not be included in the packaging, so I wouldn't think it is really necessary. Activation should be done by the way the plugin is bound to the lifecycle.

> - It does NOT generate any report. For that you'll need to explicitly call
> "m2 clover:report" or run the site generation (see use case 1).
Agreed.

> - Should we have 2 sets of sources and should we make sure that the clover
> plugin does not affect the main sources so that we cannot mistakenly package
> for production clovered sources? If so, we'll need to spawn a full new "m2
> install" lifecycle from within the clover plugin. I guess we need this. I
> haven't implemented it yet. ATM my clover plugin clovers the main sources
> and not a copy. I'll need some help for how to implement this.
You should be spitting out the clovered sources to a second directory, and IIUC you compile both and test with both.

I think where this gets tricky is packaging, which is only a problem when you want to clover something in integration tests. I'm confident we can handle this using the existing lifecycle and attached artifacts, but we'd need to flesh it out some more. I'd prefer we let this particular case slide to Maven 2.1, however.

> Use case 3:
> ===========
>
> - Run "m2 clover:check" or "m2 clover:report"
> - This would work even if the "activated" config property is false.
> - Users of this use case will typically set "activated" to false so that
> clover doesn't run automatically as part of "m2 install".
Activation is by inclusion in the POM. I think execution of these goals would either have no configuration in the POM, or at least no executions listed (just using the plugin configuration to set what would be used from the command line).
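So use case 3 could look like this in the POM: plugin configuration present, but no executions listed, so nothing runs as part of the lifecycle and "m2 clover:check" picks the configuration up when invoked by hand (coordinates and the parameter shown are illustrative):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-clover-plugin</artifactId>
  <!-- no <executions>: the goal is never bound to a phase, so it only
       runs when invoked explicitly from the command line -->
  <configuration>
    <!-- hypothetical value for illustration -->
    <targetPercentage>80%</targetPercentage>
  </configuration>
</plugin>
```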

So, to summarise:
- add a verification phase, so it goes ..., test, ..., package, integration-test, verify, install, ... (though this is quite late - verify after packaging?)
- the existing lifecycle design suffices for each use case, as detailed below:

Use case 1: site:site
- execution of the clover report operates in either aggregation mode (top level) or execute mode (individual project)
- execution of the report in execute mode will first run m2 test or an appropriately configured goal, using an overlaid lifecycle mapping and appropriately modified classes directories
- report then takes those results and produces the report from the database
- works the same for clover:report standalone

Use case 2: clover:check in verification phase
- execution of check works as above
- takes those results and performs the check from the database

Use case 3: clover:check standalone
- execution of check works as above
- takes those results and performs the check from the database

- Brett

