I think Tim has a valid point, or at least the point I'm inferring seems
valid: the testing technology is not the real issue. Either JUnit or TestNG
can solve this problem; more specifically, it can be solved by grouping
arbitrary tests.
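
For example, TestNG lets you tag individual test methods with arbitrary
groups and then include or exclude those groups at run time. A minimal
sketch (the class, method and group names here are all hypothetical):

    // Sketch only: class, methods and group names are made up.
    import org.testng.annotations.Test;

    public class StringTest {

        // Pure API behaviour, expected to pass on Harmony and the RI.
        @Test(groups = { "api" })
        public void testSubstring() { /* ... */ }

        // Exercises a Harmony-specific implementation detail.
        @Test(groups = { "impl" })
        public void testInternalCaching() { /* ... */ }

        // Known failure, excluded until it has been investigated.
        @Test(groups = { "failing" })
        public void testBrokenCase() { /* ... */ }
    }

A suite definition (or the command line) can then include the 'api' group
and exclude 'failing' without moving a single file.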

I've been playing with reorganizing the 'luni' module using the suggested
directory layout, and it really doesn't seem to provide much value. Also, I'm
a bit partial to the concept of one source directory (src/main/java), one
test source directory (src/test/java) and any number of resource
(src/main/resources/*) and test resource directories (src/test/resources/*)
as defined by the Maven 2 POM.
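
For reference, that convention gives each module a layout along these lines
(a sketch only; the annotations are mine):

    luni/
      src/main/java/         <- implementation source
      src/main/resources/    <- implementation resources
      src/test/java/         <- all test source, API and impl alike
      src/test/resources/    <- test resources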

The only practical value I saw in the directory layout was that in Eclipse I
could just select a single folder and run all of the API tests against an
RI. The same can be said for any of the other test folders, but that same
convenience can just as easily be achieved via TestSuites.

As such, I'm in alignment with Tim's thoughts on just using TestSuites to
define the major groupings. I think the proposed naming conventions of
'o.a.h.test.<module>.java.package' are fine. The only addition I would make
is to add guidelines on class names, so that pure API tests, Harmony tests
and failing tests can live in the same package. Something as trivial as
XXXAPITest, XXXImplTest and XXXFailingTest would work. Perhaps a similar
approach can be used for platform-specific tests. These tests would then be
grouped, per module, into an APITestSuite, an ImplTestSuite, a
FailingTestSuite and platform-specific TestSuites.
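
To make that concrete, here is a minimal sketch of a per-module suite in
JUnit 3.8 style (the test class names are hypothetical, following the
XXXAPITest convention above):

    // Sketch only: groups the pure API tests for the 'luni' module.
    package org.apache.harmony.test.luni;

    import junit.framework.Test;
    import junit.framework.TestSuite;

    public class APITestSuite {
        public static Test suite() {
            TestSuite suite = new TestSuite("luni API tests");
            // Illustrative classes named per the XXXAPITest convention.
            suite.addTestSuite(
                org.apache.harmony.test.luni.java.lang.StringAPITest.class);
            suite.addTestSuite(
                org.apache.harmony.test.luni.java.util.HashMapAPITest.class);
            return suite;
        }
    }

Selecting APITestSuite in Eclipse and running it against an RI then gives
exactly the one-click behaviour the directory layout offered.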

Regarding tests that must be on the bootclasspath, I would say either just
put everything on the bootclasspath (is there any real harm?) or use pattern
sets for bootclasspath tests (80% of the time the classes will be java*/*).

Regarding stress tests, performance tests and integration tests, I believe
these are patently different and should be developed in their own projects.

My 2 cents...

-Nathan

> -----Original Message-----
> From: Tim Ellison [mailto:[EMAIL PROTECTED]
> <snip/>
> 
> Considering just the JUnit tests that we have at the moment...
> 
> Do I understand you correctly that you agree with the idea of creating
> 'suites of tests' using metadata (such as TestNG's annotations or
> whatever) and not by using the file system layout currently being
> proposed?
> 
> I know that you are also thinking about integration tests, stress tests,
> performance tests, etc. as well but just leaving those aside at the
> moment.
> 
> Regards,
> Tim
> 
> 
> > Thanks
> > Mikhail
> >
> >
> >> Stress
> >> ...and so on...
> >>
> >>
> >> If you weigh up all of the different possible permutations and then
> >> consider that the above list is highly likely to be extended as things
> >> progress it is obvious that we are eventually heading for large amounts
> >> of related test code scattered or possibly duplicated across numerous
> >> "hard wired" source directories. How maintainable is that going to be ?
> >>
> >> If we want to run different tests in different configurations then IMHO
> >> we need to be thinking a whole lot smarter. We need to be thinking about
> >> keeping tests for specific areas of functionality together (thus easing
> >> maintenance); we need something quick and simple to re-configure if
> >> necessary (pushing whole directories of files around the place does not
> >> seem a particularly lightweight approach); and something that is not
> >> going to potentially mess up contributed patches when the file they
> >> patch is found to have been recently pushed from source folder A to B.
> >>
> >> To connect into another recent thread, there have been some posts lately
> >> about handling some test methods that fail on Harmony and have meant
> >> that entire test case classes have been excluded from our test runs. I
> >> have also been noticing some API test methods that pass fine on Harmony
> >> but fail when run against the RI. Are the different behaviours down to
> >> errors in the Harmony implementation ? An error in the RI implementation
> >> ? A bug in the RI Javadoc ? Only after some investigation has been
> >> carried out do we know for sure. That takes time. What do we do with the
> >> test methods in the meantime ? Do we push them round the file system
> >> into yet another new source folder ? IMHO we need a testing strategy
> >> that enables such "problem" methods to be tracked easily without
> >> disruption to the rest of the other tests.
> >>
> >> A couple of weeks ago I mentioned that the TestNG framework [2] seemed
> >> like a reasonably good way of allowing us to both group together
> >> different kinds of tests and permit the exclusion of individual
> >> tests/groups of tests [3]. I would like to strongly propose that we
> >> consider using TestNG as a means of providing the different test
> >> configurations required by Harmony. Using a combination of annotations
> >> and XML to capture the kinds of sophisticated test configurations that
> >> people need, and that allows us to specify down to the individual
> >> method, has got to be more scalable and flexible than where we are
> >> headed now.
> >>
> >> Thanks for reading this far.
> >>
> >> Best regards,
> >> George
> >>
> >>
> >> [1]
> >> http://incubator.apache.org/harmony/subcomponents/classlibrary/testing.html
> >>
> >> [2] http://testng.org
> >> [3]
> >> http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200606.mbox/[EMAIL PROTECTED]
> >>
> >>

