Nathan Beyer wrote:
Two suggestions:
1. Approve the testing strategy [1] and implement/rework the modules
appropriately.
2. Fix the tests!

-Nathan

[1]
http://incubator.apache.org/harmony/subcomponents/classlibrary/testing.html

Hi Nathan,

What are your thoughts on running or not running test cases that contain problematic test methods while those methods are being investigated and fixed up?


Best regards,
George



-----Original Message-----
From: Geir Magnusson Jr [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, June 27, 2006 12:09 PM
To: harmony-dev@incubator.apache.org
Subject: Re: [classlib][testing] excluding the failed tests



George Harley wrote:
Hi Geir,

As you may recall, a while back I floated the idea and supplied some
seed code to define all known failing test methods in an XML file
(an "exclusions list") that could be used by JUnit at test run time to
skip over them while allowing the rest of the test methods in a class to
run [1]. Obviously I thought about that when catching up with this
thread but, more importantly, your comment about being reluctant to have
more dependencies on JUnit also motivated me to go off and read some
more about TestNG [2].

It was news to me that TestNG provides out-of-the-box support for
excluding specific test methods as well as groups of methods (where the
groups are declared in source file annotations or Javadoc comments).
Even better, it can do this on existing JUnit test code, provided that
the necessary meta-data is present (annotations if compiling to a 1.5
target; Javadoc comments if targeting 1.4, as we currently do). There is a
utility available in the TestNG download and also in the Eclipse support
plug-in that helps migrate directories of existing JUnit tests to TestNG
by adding in the basic meta-data (although for me the Eclipse version
also tried to break the test class inheritance from
junit.framework.TestCase which was definitely not what was required).
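
For illustration, here is roughly what the 1.4-style meta-data looks
like on an existing JUnit method, if I have the syntax right (the
method name and group name are mine, invented for the example):

/**
 * Member of the "broken" group: a TestNG run configured to exclude
 * that group skips just this method while the rest of the TestCase
 * still runs. (Javadoc-style meta-data since we target 1.4; on a 1.5
 * target this would be the @Test(groups = {"broken"}) annotation.)
 *
 * @testng.test groups = "broken"
 */
public void test_getBytes() {
    // existing JUnit test body, unchanged
}

The exclusion itself is then declared in the suite configuration file
(an <exclude> entry under the run groups), so toggling a test back on
needs no change to the build scripts.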

Perhaps ... just perhaps ... we should be looking at something like
TestNG (or my wonderful "exclusions list" :-) ) to provide the
granularity of test configuration that we need.
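
To make the comparison concrete, the exclusions list I had in mind was
something along these lines (the element and attribute names here are
invented for illustration; the actual seed code is attached to
HARMONY-263):

<exclusions>
    <test class="tests.api.java.lang.StringTest">
        <method name="test_getBytes"
                reason="fails on Harmony, under investigation"/>
    </test>
</exclusions>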

Just a thought.
How 'bout that ;)

geir

Best regards,
George

[1] http://issues.apache.org/jira/browse/HARMONY-263
[2] http://testng.org



Geir Magnusson Jr wrote:
Alexei Zakharov wrote:

Hi,
+1 for (3), but I think it would be better to define a suite() method and
enumerate the passing tests there rather than commenting out the code.
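
For example, a minimal sketch (the class and method names are only
illustrative):

public static junit.framework.Test suite() {
    // Enumerate only the methods known to pass; the failing methods
    // are simply never added, so the rest of the TestCase still runs.
    junit.framework.TestSuite suite = new junit.framework.TestSuite();
    suite.addTest(new StringTest("test_charAt"));
    suite.addTest(new StringTest("test_indexOf"));
    // ... and so on for the remaining passing tests ...
    return suite;
}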

I'm reluctant to see more dependencies on JUnit when we could control
this at a level higher, in the build system.

Hard to explain, I guess, but if our exclusions are buried in .java, I
would think that reporting and tracking over time is going to be much
harder.

geir


2006/6/27, Richard Liang <[EMAIL PROTECTED]>:

Hello Vladimir,

+1 to option 3). We should comment out the failing test cases and add a
FIXME to remind us to diagnose the problems later. ;-)
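
For instance (the method name and the JIRA number below are
placeholders, not real references):

// FIXME: fails on Harmony - see HARMONY-XXXX. Re-enable once the
// implementation (or the test) has been fixed.
// public void test_getBytes() {
//     ...
// }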

Vladimir Ivanov wrote:

I see your point. But I feel that we can miss regressions in
non-tested code if we exclude whole TestCases. Right now, for example,
we miss testing of java.lang.Class/Process/Thread/String and some
other classes.

While we have failing tests and don't want to pay attention to these
failures, we can:
1) Leave things as they are - do not run TestCases that contain
failing tests.
2) Split each mixed TestCase into a separate "failing TestCase" and
"passing TestCase" and exclude the "failing TestCases". When a test or
the implementation is fixed, we move tests from the failing TestCase
to the passing one.
3) Comment out the failing tests within TestCases. It is better to run
58 tests for String instead of 0.
4) Run all TestCases, then compare the test run results with the 'list
of known failures' and see whether new failures have appeared. This, I
think, is better than 1, 2 and 3, but the overhead is that we maintain
two lists - the list of known failing tests and an exclude list where
we put crashing tests. (A sketch of this comparison follows below.)
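
A minimal sketch of what option 4 could look like (the file names and
the one-test-per-line format are assumptions for illustration, not an
existing Harmony utility):

import java.io.*;
import java.util.*;

public class KnownFailuresCheck {

    // Diff the failures from the latest run against the list of
    // known failures; anything left over is a new regression.
    public static void main(String[] args) throws IOException {
        Set known = readLines("known_failures.txt");
        Set current = readLines("current_failures.txt");
        current.removeAll(known);
        for (Iterator i = current.iterator(); i.hasNext();) {
            System.out.println("NEW FAILURE: " + i.next());
        }
        if (!current.isEmpty()) {
            System.exit(1); // fail the build only on new failures
        }
    }

    // Reads the non-empty lines (test names) of a file into a set.
    private static Set readLines(String fileName) throws IOException {
        Set result = new HashSet();
        BufferedReader in = new BufferedReader(new FileReader(fileName));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                line = line.trim();
                if (line.length() > 0) {
                    result.add(line);
                }
            }
        } finally {
            in.close();
        }
        return result;
    }
}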

Thanks, Vladimir
On 6/26/06, Tim Ellison <[EMAIL PROTECTED]> wrote:

Mikhail Loenko wrote:

Hi Vladimir,

IMHO the tests are to verify that an update does not introduce any
regression. So there are two options: remember exactly which tests may
fail, or remember that all tests must pass. I believe the latter is a
bit easier and safer.

+1

Tim


Thanks,
Mikhail

2006/6/26, Vladimir Ivanov <[EMAIL PROTECTED]>:

Hi,
Working with the tests I noticed that we are excluding some tests just
because several tests from a single TestCase fail.

For example, the TestCase 'tests.api.java.lang.StringTest' has 60
tests and only 2 of them fail. But the build excludes the whole
TestCase, so we simply miss testing of the java.lang.String
implementation.

Do we really need to exclude whole TestCases in the 'ant test' target?

My suggestion is: do not exclude any test unless it crashes the VM. If
somebody needs a list of tests that always pass, a separate target can
be added to the build (see the sketch just below).

Do you think we should add a 'test-all' target to the build?
Thanks, Vladimir
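
Something like this, perhaps (a sketch only - the target, property and
pattern names are made up rather than taken from the current build
files):

<target name="test-all" depends="compile.tests">
    <junit fork="yes" printsummary="on">
        <classpath refid="tests.class.path"/>
        <batchtest todir="${tests.results}">
            <fileset dir="${tests.src}">
                <include name="**/*Test.java"/>
                <!-- only tests that crash the VM stay excluded -->
                <exclude name="**/SomeVMCrashingTest.java"/>
            </fileset>
        </batchtest>
    </junit>
</target>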



Tim Ellison ([EMAIL PROTECTED])
IBM Java technology centre, UK.

--
Richard Liang
China Software Development Lab, IBM


