George Harley wrote:
Richard Liang wrote:
George Harley wrote:
Hi,
If annotations were to be used to help us categorise tests in order
to simplify the definition of test configurations - what's included
and excluded, etc. - then a core set of annotations would need to be
agreed by the project. Consider the possibilities that the TestNG
"@Test" annotation offers us in this respect.
First, if a test method was identified as being broken and needed to
be excluded from all test runs while awaiting investigation, then it
would be a simple matter of setting its enabled field like this:
@Test(enabled=false)
public void myTest() {
    ...
}
Temporarily disabling a test method in this way means that it can be
left in its original class and we do not have to refer to it in any
suite configuration (e.g. in the suite XML file).
If a test method was identified as being broken on a specific
platform then we could make use of the groups field of the "@Test"
type by making the method a member of a group that identifies its
predicament. Something like this:
@Test(groups={"state.broken.win.IA32"})
public void myOtherTest() {
...
}
The configuration for running tests on Windows would then
specifically exclude any test method (or class) that was a member of
that group.
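As a sketch of what that might look like (the suite, test and
package names below are invented for illustration), the TestNG suite
file for the Windows runs could contain something like:
<suite name="HarmonyClasslibTests">
  <test name="win.IA32.tests">
    <groups>
      <run>
        <!-- skip anything flagged as broken on this platform -->
        <exclude name="state.broken.win.IA32" />
      </run>
    </groups>
    <packages>
      <package name="org.apache.harmony.tests.*" />
    </packages>
  </test>
</suite>
Nothing in the test source needs to move; only the suite file
changes between platform runs.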
Making a test method or type a member of a well-known group
(well-known in the sense that the name and meaning have been agreed
within the project) is essentially adding some descriptive
attributes to the test, rather like adjectives (the groups)
qualifying nouns (the tests) in the English language. To take
another example, if there was a test class that contained methods
only intended to be run on Windows, and that were all specific to
Harmony (i.e. not API tests), then one could envisage the following
kind of annotation:
@Test(groups={"type.impl", "os.win.IA32"})
public class MyTestClass {
public void testOne() {
...
}
public void testTwo() {
...
}
@Test(enabled=false)
public void brokenTest() {
...
}
}
Here the annotation on MyTestClass applies to all of its public test
methods; the method-level annotation on brokenTest then switches
that one method off.
So what are the well-known TestNG groups that we could define for
use inside Harmony? Here are some of my initial thoughts:
* type.impl -- tests that are specific to Harmony
* state.broken.<platform id> -- tests broken on a specific platform
* state.broken -- tests broken on every platform but we want to
decide whether or not to run from our suite configuration
* os.<platform id> -- tests that are to be run only on the
specified platform (a test could be a member of more than one of
these; see the sketch below)
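To sketch that last point (the method name here is invented), a
Harmony-specific test that should run on both IA32 platforms could
simply combine groups:
@Test(groups={"type.impl", "os.win.IA32", "os.linux.IA32"})
public void myMultiPlatformTest() {
    ...
}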
What does everyone else think? Does such a scheme sound reasonable?
Just one question: what's the default test annotation? I mean for
the passing API tests that will be run on every platform. Thanks a lot.
Best regards,
Richard
Hi Richard,
I think that just the basic @Test annotation on its own will suffice.
Any better suggestions are welcome.
Just thinking about how to filter for the target test groups :-)
I tried to use the following groups to define the win.IA32 API tests,
but it seems that tests with the bare default annotation @Test cannot
be selected. Am I missing anything? Thanks a lot.
<groups>
  <run>
    <include name=".*" />
    <include name="os.win.IA32" />
    <exclude name="type.impl" />
    <exclude name="state.broken" />
    <exclude name="state.broken.win.IA32" />
    <exclude name="os.linux.IA32" />
  </run>
</groups>
The groups I defined:
@Test
@Test(groups={"os.win.IA32"})
@Test(groups={"os.win.IA32", "state.broken.win.IA32"})
@Test(groups={"type.impl"})
@Test(groups={"state.broken"})
@Test(groups={"os.linux.IA32"})
@Test(groups={"state.broken.linux.IA32"})
Best regards,
Richard.
Best regards,
George
Thanks for reading this far.
Best regards,
George
George Harley wrote:
Hi,
Just seen Tim's note on test support classes and it really caught
my attention as I have been mulling over this issue for a little
while now. I think that it is a good time for us to return to the
topic of class library test layouts.
The current proposal [1] sets out to segment our different types of
test by placing them in different file locations. After looking at
the recent changes to the LUNI module tests (where the layout
guidelines were applied) I have a real concern that there are
serious problems with this approach. We have started down a track
of continually growing the number of test source folders as new
categories of test are identified, and IMHO that is going to bring
complexity and maintenance issues with these tests.
Consider the dimensions of tests that we have ...
* API
* Harmony-specific
* Platform-specific
* Run on classpath
* Run on bootclasspath
* Behaves differently between Harmony and the RI
* Stress
...and so on...
If you weigh up all of the different possible permutations, and then
consider that the above list is highly likely to be extended as
things progress, it is obvious that we are eventually heading for
large amounts of related test code scattered, or possibly duplicated,
across numerous "hard-wired" source directories. How maintainable
is that going to be?
If we want to run different tests in different configurations then
IMHO we need to be thinking a whole lot smarter. We need to keep
tests for specific areas of functionality together (thus easing
maintenance); we need something quick and simple to re-configure
when necessary (pushing whole directories of files around does not
seem a particularly lightweight approach); and we need something
that will not break contributed patches when the file they touch
turns out to have been recently moved from source folder A to B.
To connect to another recent thread, there have been posts lately
about test methods that fail on Harmony, failures which have meant
that entire test case classes were excluded from our test runs. I
have also noticed some API test methods that pass fine on Harmony
but fail when run against the RI. Are the different behaviours down
to errors in the Harmony implementation? An error in the RI
implementation? A bug in the RI Javadoc? Only after some
investigation has been carried out do we know for sure. That takes
time. What do we do with the test methods in the meantime? Do we
push them around the file system into yet another new source
folder? IMHO we need a testing strategy that enables such "problem"
methods to be tracked easily without disruption to the rest of the
tests.
A couple of weeks ago I mentioned that the TestNG framework [2]
seemed like a reasonably good way of allowing us both to group
together different kinds of tests and to permit the exclusion of
individual tests or groups of tests [3]. I would like to strongly
propose that we consider using TestNG as a means of providing the
different test configurations required by Harmony. Using a
combination of annotations and XML to capture the kinds of
sophisticated test configurations that people need, with control
right down to the individual method, has got to be more scalable
and flexible than where we are headed now.
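For example, an individual problem method could be kept in its
original class and simply excluded in the suite XML (the test, class
and method names below are invented for illustration):
<test name="luni.tests">
  <classes>
    <class name="org.apache.harmony.tests.java.lang.StringTest">
      <methods>
        <!-- awaiting investigation of the failure -->
        <exclude name="test_intern" />
      </methods>
    </class>
  </classes>
</test>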
Thanks for reading this far.
Best regards,
George
[1] http://incubator.apache.org/harmony/subcomponents/classlibrary/testing.html
[2] http://testng.org
[3] http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200606.mbox/[EMAIL PROTECTED]
--
Richard Liang
China Software Development Lab, IBM