Hi folks,
I'd like to investigate tests/api/java/net/DatagramSocketTest.java in the
luni module. I have updated the wiki page
(http://wiki.apache.org/harmony/Excluded_tests). I also plan
to study other excluded tests in the luni module when I finish these.
Vladimir Ivanov wrote:
New page http://wiki.apache.org/harmony/Excluded_tests was added to the WIKI
(referred from http://wiki.apache.org/harmony/ClassLibrary).
It would be good if, before investigating a test, one specified 'in
progress, Name' next to the module name, showing it is under investigation
Great job. Vladimir ;-)
Yes Vladimir, nice job!
I have updated the data for the beans module. Since the reason for the
failures of most of the excluded tests is not known yet, I just put their
names there without any comment on why they were excluded.
Thanks,
2006/7/14, Richard Liang [EMAIL PROTECTED]:
Great job. Vladimir ;-)
Hello Vladimir,
How about the progress?
New page http://wiki.apache.org/harmony/Excluded_tests was added to the WIKI
(referred from http://wiki.apache.org/harmony/ClassLibrary).
It would be good if, before investigating a test, one specified 'in
progress, Name' next to the module name, showing that the investigation is
being done by Name.
Thanks,
On 7/7/06, Tim Ellison [EMAIL PROTECTED] wrote:
...
Currently I'm looking at the excluded TestCases, and it requires more time
than I expected.
I'll prepare a report/summary about excluded TestCases at the end of this
process.
Thanks, Vladimir
On 7/7/06, Tim Ellison [EMAIL PROTECTED] wrote:
Hi,
If there are really useful tests that are being unnecessarily excluded
by being in the same *Test class, then you may want to consider moving
the failing tests into SecureRandom3Test and excluding that -- but by
the sound of it all SecureRandom tests will be failing.
I think it's a nice
Thanks George and Tim. I was out during the last week and today was reading
the threads from oldest to newest. :)
I agree, a general solution using TestSuites or even TestNG is better
than my temporary one. However, defining a general approach can take a
long time. Anyway, let's move our
Vladimir Ivanov wrote:
More details: it is the
org/apache/harmony/security/tests/java/security/SecureRandom2Test.java
test.
At present it has 2 failing tests with messages about the SHA1PRNG
algorithm (no support for the SHA1PRNG provider).
Looks like these are valid tests for non-implemented
because of a bug, then log an issue about the bug and
try to fix the issue.
-Nathan
-Original Message-
From: Vladimir Ivanov [mailto:[EMAIL PROTECTED]
Sent: Wednesday, July 05, 2006 12:41 AM
To: harmony-dev@incubator.apache.org
Subject: Re: [classlib][testing] excluding the failed tests
Yesterday I tried to add a regression test to an existing TestCase in the
security module, but found that the TestCase is in the exclude list. I had to
un-exclude it, run it, check that my test passes, and exclude the TestCase
again – it was a little inconvenient; besides, my new valid (I believe) regression
Nathan Beyer wrote:
How are other projects handling this? My opinion is that tests which are
expected and known to pass should always be running, and if they fail and the
failure can be independently recreated, then it's something to be posted on
the list, if trivial (typo in a build file?), or
Is this the case where we have two 'categories'?
1) tests that never worked
2) tests that recently broke
I think that #2 should never persist for more than one build
iteration, as either things get fixed or backed out. I suppose then we
are really talking about category #1, and that we
Based on what I've seen of the excluded tests, category 1 is the predominant
case. This could be validated by looking at old revisions in SVN.
-Nathan
Nathan Beyer wrote:
Based on what I've seen of the excluded tests, category 1 is the predominant
case. This could be validated by looking at old revisions in SVN.
I'm sure that is true; I'm just saying that the build system's 'normal'
state is that all enabled tests pass. My concern was over
Best regards,
George
-Original Message-
From: Geir Magnusson Jr [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 27, 2006 12:09 PM
To: harmony-dev@incubator.apache.org
Subject: Re: [classlib][testing] excluding the failed tests
George Harley wrote:
Hi Geir,
As you may recall, a while back I floated the idea and supplied some
seed code to define all known failing test methods in an XML file
(an exclusions list) that could be used by JUnit at test run time
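The exclusions-list idea described above (failing test methods listed in an XML file consulted at test run time) can be sketched roughly as follows. This is a minimal illustration, not the actual seed code from the thread: the `exclusions.xml` element names, the `ExclusionFilter` class, and the `Class#method` key format are all assumptions.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.HashSet;
import java.util.Set;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Hypothetical exclusion filter: reads an XML exclusions list once and
// answers whether a given test method should be skipped at run time.
public class ExclusionFilter {
    private final Set<String> excluded = new HashSet<>();

    public ExclusionFilter(InputStream xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(xml);
        NodeList entries = doc.getElementsByTagName("exclude");
        for (int i = 0; i < entries.getLength(); i++) {
            Element e = (Element) entries.item(i);
            // Key is "ClassName#methodName"; this format is an assumption.
            excluded.add(e.getAttribute("class") + "#" + e.getAttribute("method"));
        }
    }

    public boolean isExcluded(String testClass, String testMethod) {
        return excluded.contains(testClass + "#" + testMethod);
    }

    public static void main(String[] args) throws Exception {
        String xml = "<exclusions>"
                + "<exclude class='SecureRandom2Test' method='testNextBytes'/>"
                + "</exclusions>";
        ExclusionFilter f = new ExclusionFilter(
                new ByteArrayInputStream(xml.getBytes("UTF-8")));
        System.out.println(f.isExcluded("SecureRandom2Test", "testNextBytes"));
        System.out.println(f.isExcluded("SecureRandom2Test", "testSetSeed"));
    }
}
```

A custom JUnit runner (or a TestSuite builder) could consult such a filter when assembling the suite, so exclusions live in one visible file instead of being scattered through build scripts or commented-out code.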
Hi,
+1 for (3), but I think it would be better to define a suite() method and
enumerate the passing tests there rather than comment out the code.
2006/6/27, Richard Liang [EMAIL PROTECTED]:
Hello Vladimir,
+1 to option 3). We shall comment the failed test cases out and add a
FIXME to remind us to
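The suite()-enumeration suggestion above would, in JUnit 3, mean a static suite() that adds only the known-passing tests. The plain-Java sketch below illustrates the idea without depending on junit.framework; the class and method names are hypothetical, and a real version would return a junit.framework.TestSuite instead of a list of names.

```java
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.List;

// Sketch: instead of commenting out failing tests, keep them all in the
// class and enumerate only the passing ones in suite(); the runner invokes
// just that list, so excluded tests stay visible in the source.
public class SecureRandom2TestSketch {
    public void testGetInstance() { /* passes */ }
    public void testSetSeed()     { /* passes */ }
    public void testNextBytes()   {
        // Known failure: no SHA1PRNG provider support yet.
        throw new RuntimeException("SHA1PRNG provider not implemented");
    }

    // Analogue of JUnit 3's static suite(): enumerate passing tests only.
    public static List<String> suite() {
        return Arrays.asList("testGetInstance", "testSetSeed");
    }

    // Invoke each enumerated test reflectively; returns the number run.
    public static int runSuite() throws Exception {
        SecureRandom2TestSketch t = new SecureRandom2TestSketch();
        int run = 0;
        for (String name : suite()) {
            Method m = SecureRandom2TestSketch.class.getMethod(name);
            m.invoke(t);
            run++;
        }
        return run;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runSuite() + " tests run, testNextBytes skipped");
    }
}
```

Compared with commenting tests out, the excluded test still compiles and shows up in the source, so it is less likely to be forgotten.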
There was a submission that enabled finer control of failing tests (even
by platform etc.)
I may be wrong but commenting out tests usually means that they never
get fixed; even putting them into exclude clauses in the ant script is
too hidden for me -- I prefer to see the exclusions and failures
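The thread does not show the "finer control of failing tests (even by platform)" submission Tim mentions, but the general shape might be an OS guard consulted before a test runs. This is a hypothetical helper, not the actual submission:

```java
// Hypothetical per-platform exclusion check: a test (or suite builder)
// could consult this, so an exclusion can be scoped to one OS only.
public class PlatformExcludes {
    // True if the current JVM's os.name starts with the given prefix,
    // case-insensitively (os.name is e.g. "Linux" or "Windows XP").
    public static boolean excludedOn(String osPrefix) {
        return System.getProperty("os.name").toLowerCase()
                .startsWith(osPrefix.toLowerCase());
    }

    public static void main(String[] args) {
        if (excludedOn("windows")) {
            System.out.println("skipping test on Windows");
        } else {
            System.out.println("running test");
        }
    }
}
```

Keeping the platform condition in code (rather than buried in an ant exclude clause) keeps the exclusion visible next to the test it affects.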
Tim Ellison wrote:
There was a submission that enabled finer control of failing tests (even
by platform etc.)
I may be wrong but commenting out tests usually means that they never
get fixed;
Yes, that was my concern as well.
even putting them into exclude clauses in the ant script is
Hi Vladimir,
IMHO the tests are there to verify that an update does not introduce any
regression. So there are two options: remember exactly which tests may fail,
or remember that all tests must pass. I believe the latter one is a bit
easier and safer.
Thanks,
Mikhail
2006/6/26, Vladimir Ivanov
Mikhail Loenko wrote:
Hi Vladimir,
IMHO the tests are there to verify that an update does not introduce any
regression. So there are two options: remember exactly which tests may fail,
or remember that all tests must pass. I believe the latter one is a bit
easier and safer.
+1
Tim
Thanks,
I see your point.
But I feel that we can miss regressions in untested code if we exclude
TestCases.
Now, for example, we miss testing of java.lang.Class/Process/Thread/String
and some other classes.
While we have failing tests and don't want to pay attention to these
failures, we can:
1) Leave
Hello Vladimir,
+1 to option 3). We shall comment the failed test cases out and add a
FIXME to remind us to diagnose the problems later. ;-)
Vladimir Ivanov wrote:
I see your point.
But I feel that we can miss regressions in untested code if we exclude
TestCases.
Now, for example, we miss