John Casey wrote:

> Not at all; I mean running the test. In order to run one of these tests (which are orchestrated by something akin to the maven-verifier from a JUnit or other java-driven test case), you must run JUnit or whatever, so you can be sure you have the same options enabled, environment present, and assertions tested as take place in the test-case code.

I stand by my claim that running JUnit is very easy.

> For instance, simply cd into src/test/resources/it0105 in core-integration-tests, and see if you can figure out what it tests, and how. You can't, not without looking at the JUnit code that drives it, to determine what the criteria for success are, and which flags and goals/lifecycle phases to invoke.

I have two remarks about this. 1) Don't do it that way, and 2) We can compromise on the directory structure.

1) Don't do it that way: I think this is a very weird way to approach the tests that only seems natural because people are used to doing it that way. I'll try to explain what I mean with a hypothetical story.

Suppose you were testing some XML serializer/deserializer, and you had a bunch of JUnit tests checked into src/test/java and a bunch of XML files checked into src/test/resources.

If you're going to go browsing around to see what kinds of tests are available, where are you going to go first? I'd argue that the most natural thing to do is to go look at the tests themselves in src/test/java. The XML files aren't tests... they're just test RESOURCES.

Looking at the tests is a good idea for another reason also: the tests can (must) have some reference in their code to the resources they're using, telling you (and the JVM!) where to find the resources. The resources, on the other hand, don't have to (and typically won't) have a reference to the tests that will run them.

Similarly, Maven projects aren't tests. They may be test resources. It's weird to go browsing around the test resources trying to figure out what tests might run them. Instead, you should be looking at the tests themselves. Indeed, looking at the tests it's very easy to see what resources they're using.
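
To make that concrete, here's roughly what a maven-verifier-driven test looks like. This is a sketch with a made-up project name and assertions, not a real test from core-integration-tests, but the shape is the point: the test itself names the project directory, the goals it runs, and the success criteria.

import java.io.File;

import junit.framework.TestCase;

import org.apache.maven.it.Verifier;
import org.apache.maven.it.util.ResourceExtractor;

public class MyFeatureIntegrationTest
    extends TestCase
{
    public void testMyFeature()
        throws Exception
    {
        // Hypothetical project/artifact names, purely for illustration.
        // The test says where its project-under-test (the resource) lives...
        File testDir = ResourceExtractor.simpleExtractResources( getClass(), "/my-feature-project" );

        Verifier verifier = new Verifier( testDir.getAbsolutePath() );

        // ...which goals/phases it invokes...
        verifier.executeGoal( "install" );

        // ...and what counts as success.
        verifier.verifyErrorFreeLog();
        verifier.assertFilePresent( "target/my-feature-project-1.0.jar" );

        verifier.resetStreams();
    }
}

You never have to guess any of that from the resource directory; it's all right there in the test.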

Even in the maven-invoker-plugin case, you shouldn't go digging around in the project-under-test *first*. First you should look at the commands/assertions in the POM; see what it's doing. THEN go look at the project-under-test. It's the same thing: look at the test first, not the resource.

2) We can compromise on the directory structure: Despite those remarks, sure, it would be nicer if you could have your JUnit/TestNG integration test in the same directory as its own test resources.

That's true of all JUnit/TestNG tests by the way, not just Maven integration tests. The Maven standard is to keep test resources in src/test/resources, separate from the tests in src/test/java. But it doesn't have to be that way... lots of people put their resources in the same directory as the classes that use them.

You could, if you wished, just put them all together into one directory, claiming that this directory is both the testSourceDirectory AND the testResource directory. (I think you'd want some exclusion/inclusion rules to keep everything straight, but that's not so bad.)

With that said, I think that one should resist the temptation to do this in all but the simplest cases. What about integration tests that test multiple projects? You can't put the same test in multiple directories. Or what about projects that are used by multiple integration tests? The relationship of tests to resources can be many-to-many.

I claim that the clearest thing is to follow the Maven standard, even if it's sub-optimal in this case: keep your resources in one directory and have your tests refer to them.

Ultimately, this part of the argument is about which directory structure is clearest. As we know, arguments about clarity of directory structure can be highly controversial, but are never conclusive and don't amount to very much. Nobody "wins" arguments about directory structure; everyone's a loser.

If you buy my argument that it's good to write tests in a real test framework in a normal language, that writing tests in a POM is not ideal, then I'm pretty sure we can deal with any remaining concerns about directory structure. I'm easy! :-)

> Again, it's not just about running the tests, but being able to actually debug a failing test effectively. Tests work best when they're easy to understand and work with, and when a maven core-integration-test fails, you can definitely see how this setup falls down. Running and re-running the same test without change from the IDE isn't useful for debugging, and running the invoker from this kind of code with remote debugging enabled is virtually impossible... incidentally, if you've figured out how to do it, I'd be interested in learning.

I wrote a lot of new integration tests in the maven-verifier style for Surefire, and I think debugging it doesn't suck, and I'll happily share my "secret." Here's my work cycle:

0) I do some work in the IDE. When I feel good about what I've done, I mvn install to get the plugin into my local repository. (My IDE has a button to run mvn.) Now we're ready for integration testing.

1) Run the tests from the IDE; tests fail. I read the log file referenced in the stack trace.

Suppose I don't understand why the test is failing yet.

I've got two options now: I can run the test under a debugger purely in the IDE, or I can attempt to reproduce the test failure manually.

2a) Run the test in a debugger purely in the IDE

I tweak the test to add a MAVEN_OPTS environment variable, including the -Xrunjdwp string. (It would be easy to add some sugar to Verifier and/or Invoker to make this easier; I didn't want to go fooling around with the Verifier, so I just copied and pasted out of my notes when I needed this.)
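
Concretely, inside a Verifier-driven test the tweak is something like this. (A sketch: setEnvironmentVariable is my assumption about the maven-verifier version in use, and older Verifiers let you pass a Map of environment variables to executeGoal instead; the JDWP string itself is just the standard JVM debug syntax.)

// suspend=y makes the forked Maven wait on startup until a debugger
// attaches on port 8000.
String mavenOpts =
    "-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000";

Verifier verifier = new Verifier( testDir.getAbsolutePath() );

// NOTE: setEnvironmentVariable assumes a Verifier version that has it.
verifier.setEnvironmentVariable( "MAVEN_OPTS", mavenOpts );

verifier.executeGoal( "install" );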

I re-run the test in Run mode (not Debug mode); eventually the forked process suspends for me to attach a debugger. I attach and step through both processes and analyze the problem. [I could have run the test in Debug mode, and then had two debuggers: one for the test and its assertions, and another for the forked Maven. But that's rarely useful.]

(You can also use -Dmaven.surefire.debug=true if you need to debug Surefire's forked JUnit processes. [In 2.4, you can even configure the port!])

2b) Manual repro

I double-click on the test failure in my IDE. The test states clearly which project it's using, and what it's doing to the resource. I open a command line window to that directory and manually reproduce the test by calling mvn with appropriate environment variables, system properties, etc.

[In particular, I DON'T "bounce back and forth" between the resources directory and the test. The test is open in a window in my IDE; my command window is only open to the resources subdirectory.]

If I can't manually repro the failure, I examine the test more closely to see if there's a bug in the test or if I made a mistake. If I can repro it, I might launch the test with mvnDebug, allowing me to debug Surefire's forked JUnit tests remotely.

> In my opinion, there should certainly be hooks available to generate a JUnit wrapper around an integration test, but that wrapper should not carry information that only exists outside the test project directory. I'd favor something more like having an orchestrator POM that calls something like the invoker to run the real build using the real POM, then verifies the results of that build.

-0 on putting the test in the same directory as the project-under-test, -1 on putting the assertions in a POM.

Verifications can get pretty interesting. For example, for Surefire, I verify that it's working correctly by instantiating the SurefireReportParser and passing it the XML files in target/surefire-reports. I even made a little reusable helper function called assertTestSuiteResults that lets me cleanly assert in one line how many passes, failures, and skipped tests there should be in a given Surefire run.
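
To give you the flavor, here's a hand-rolled sketch of such a helper. To keep it self-contained it reads the counts straight off the <testsuite> element of a surefire-reports XML file rather than going through SurefireReportParser, but the idea is the same.

import java.io.File;

import javax.xml.parsers.DocumentBuilderFactory;

import junit.framework.Assert;

import org.w3c.dom.Element;

public class SurefireAssertions
{
    // Usage, e.g.:
    //   assertTestSuiteResults( 3, 0, 1, 0,
    //       new File( testDir, "target/surefire-reports/TEST-MyTest.xml" ) );
    // Assumes the report's <testsuite> element carries the usual
    // tests/errors/failures/skipped attributes.
    public static void assertTestSuiteResults( int tests, int errors, int failures, int skipped,
                                               File reportFile )
        throws Exception
    {
        Element suite = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder().parse( reportFile ).getDocumentElement();

        Assert.assertEquals( "tests", tests, Integer.parseInt( suite.getAttribute( "tests" ) ) );
        Assert.assertEquals( "errors", errors, Integer.parseInt( suite.getAttribute( "errors" ) ) );
        Assert.assertEquals( "failures", failures, Integer.parseInt( suite.getAttribute( "failures" ) ) );
        Assert.assertEquals( "skipped", skipped, Integer.parseInt( suite.getAttribute( "skipped" ) ) );
    }
}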

Writing snippets of reusable test code is trivial when you're writing your tests in Java with JUnit/TestNG, but a pain in the butt when you need to state your assertions in a POM file.

And what if you want to write data-driven tests? Re-run failures only? Mark tests as temporarily ignored (showing up as "yellow" on the report)? Make one test depend on the results of another, auto-skipping if earlier tests fail? These features are nice when you're running unit tests, but they are lifesavers when you're running slow integration tests.
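
For instance, in TestNG every one of those is just an annotation attribute. (Illustrative sketch only; the test names and data below are made up.)

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class SlowIntegrationTests
{
    @DataProvider( name = "goals" )
    public Object[][] goals()
    {
        return new Object[][] { { "package" }, { "install" } };
    }

    // Data-driven: runs once per row supplied by the provider.
    @Test( dataProvider = "goals" )
    public void buildSucceedsFor( String goal )
    {
        // ... drive a Verifier build with this goal and assert on the result ...
    }

    // Temporarily switched off while it's known to be broken.
    @Test( enabled = false )
    public void knownBrokenScenario()
    {
    }

    // Automatically skipped if the build test it depends on fails.
    @Test( dependsOnMethods = "buildSucceedsFor" )
    public void deploysAfterSuccessfulBuild()
    {
    }
}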

My real point is that in trying to write another test-runner plugin, you end up having to write another test framework; we should not do this, because the existing test frameworks are great. We DEFINITELY shouldn't do this simply for directory-structure reasons... we can tweak those if we want.

> Best of all, it could be written as a very simple archetype that generates a portable test case, which can live in almost any directory structure within the integration-test aggregator build, making it easier to organize the test cases according to functionality.

I believe someone (Brian?) has already done this for tests written in the maven-verifier style. I approve.

-Dan

