Alexei Zakharov wrote:
Hi George,

Wow, those guys are fast! Thanks for the link. Do you know when they
plan to release 5.0 officially?

Regards,


Hi Alexei,

Actually, I just saw this announcement in my news reader about 15 minutes ago ...

http://beust.com/weblog/archives/000400.html

Best regards,
George



2006/7/19, George Harley <[EMAIL PROTECTED]>:
Hi Alexei,

I just downloaded the latest working build of TestNG 5.0 [1] and support
for the "jvm" attribute is in there. This is not the official release
build.

Best regards,
George

[1] http://testng.org/testng-5.0.zip


Alexei Zakharov wrote:
> Hi George,
>
> Agreed, we may experience problems in case of a VM hang or crash. I
> suggest this only as a temporary solution. BTW, the fact that the TestNG
> Ant task still doesn't have such attributes looks like a sign to me -
> TestNG may still be immature in some aspects. I'm still comparing TestNG
> and JUnit.
>
> Regards,
>
> 2006/7/19, George Harley <[EMAIL PROTECTED]>:
>> Hi Alexei,
>>
>> It's encouraging to hear that (Ant + TestNG + sample tests) all worked
>> fine together on Harmony. In answer to your question I suppose that the
>> ability to fork the tests in a separate VM means that we do not run the
>> risk of possible bugs in Harmony affecting the test harness and
>> therefore the outcome of the tests.
>>
>> Best regards,
>> George
>>
>>
>> Alexei Zakharov wrote:
>> > Probably my previous message was not clear enough.
>> > Why can't we just invoke everything, including Ant, on top of Harmony
>> > for now? At least I was able to build and run the test-14 examples from
>> > the TestNG 4.7 distribution solely on top of j9 + our classlib today.
>> >
>> > C:\Java\testng-4.7\test-14>set JAVA_HOME=c:\Java\harmony\enhanced\classlib\trunk\deploy\jdk\jre
>> >
>> > C:\Java\testng-4.7\test-14>ant -Dbuild.compiler=org.eclipse.jdt.core.JDTCompilerAdapter run
>> > Buildfile: build.xml
>> >
>> > prepare:
>> >
>> > compile:
>> >     [echo]                                  -- Compiling JDK 1.4 tests --
>> >
>> > run:
>> >     [echo]                                  -- Running JDK 1.4 tests   --
>> >     [echo]                                  -- testng-4.7-jdk14.jar  --
>> >
>> > [testng-14] ===============================================
>> > [testng-14] TestNG JDK 1.4
>> > [testng-14] Total tests run: 179, Failures: 10, Skips: 0
>> > [testng-14] ===============================================
>> > ...
>> >
>> > Exactly the same results as with Sun JDK 1.4.
>> > Note: you may need to tweak the build.xml a little bit to achieve this.
>> >
>> > Thanks,
>> >
>> > 2006/7/19, George Harley <[EMAIL PROTECTED]>:
>> >> Hi Richard,
>> >>
>> >> Actually the Ant task always runs the tests in a forked VM. At present,
>> >> however, the task does not support specifying the forked VM (i.e. there
>> >> is no equivalent to the JUnit Ant task's "jvm" attribute). This matter
>> >> has already been raised with the TestNG folks who seem happy to
>> >> introduce this.
>> >>
>> >> In the meantime we could run the tests using the Ant java task.
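>> >>
>> >> Something along these lines ought to work as a stop-gap (just a
>> >> sketch - the TestNG jar name and the test classes location are
>> >> placeholders): org.testng.TestNG is the command line entry point, and
>> >> the java task's "jvm" attribute lets us point at the Harmony launcher
>> >> once fork is enabled.
>> >>
>> >>   <java classname="org.testng.TestNG" fork="true"
>> >>         jvm="c:\Java\harmony\enhanced\classlib\trunk\deploy\jdk\jre\bin\java">
>> >>     <classpath>
>> >>       <pathelement location="testng-5.0-jdk15.jar"/>
>> >>       <pathelement location="build/tests"/>
>> >>     </classpath>
>> >>     <!-- the suite file drives which tests/groups actually run -->
>> >>     <arg value="testng.xml"/>
>> >>   </java>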
>> >>
>> >>
>> >> Best regards,
>> >> George
>> >>
>> >>
>> >>
>> >> Richard Liang wrote:
>> >> > According to "TestNG Ant Task" [1], it seems that the TestNG Ant task
>> >> > does not support forking a new JVM; that is, we must launch Ant using
>> >> > Harmony itself. Any comments? Thanks a lot.
>> >> >
>> >> > [1] http://testng.org/doc/ant.html
>> >> >
>> >> > Best regards,
>> >> > Richard
>> >> >
>> >> > George Harley wrote:
>> >> >> Andrew Zhang wrote:
>> >> >>> On 7/18/06, George Harley <[EMAIL PROTECTED]> wrote:
>> >> >>>>
>> >> >>>> Oliver Deakin wrote:
>> >> >>>> > George Harley wrote:
>> >> >>>> >> <SNIP!>
>> >> >>>> >>
>> >> >>>> >> Here the annotation on MyTestClass applies to all of its test
>> >> >>>> >> methods.
>> >> >>>> >>
>> >> >>>> >> So what are the well-known TestNG groups that we could define
>> >> >>>> >> for use
>> >> >>>> >> inside Harmony ? Here are some of my initial thoughts:
>> >> >>>> >>
>> >> >>>> >>
>> >> >>>> >> * type.impl  --  tests that are specific to Harmony
>> >> >>>> >
>> >> >>>> > So tests are implicitly API unless specified otherwise?
>> >> >>>> >
>> >> >>>> > I'm slightly confused by your definition of impl tests as "tests
>> >> >>>> > that are
>> >> >>>> > specific to Harmony". Does this mean that impl tests are only
>> >> >>>> > those that test classes in org.apache.harmony packages?
>> >> >>>> > I thought that impl was our way of saying "tests that need to go
>> >> >>>> > on the bootclasspath".
>> >> >>>> >
>> >> >>>> > I think I just need a little clarification...
>> >> >>>> >
>> >> >>>>
>> >> >>>> Hi Oliver,
>> >> >>>>
>> >> >>>> I was using the definition of implementation-specific tests that we
>> >> >>>> currently have on the Harmony testing conventions web page. That is,
>> >> >>>> implementation-specific tests are those that are dependent on some
>> >> >>>> aspect of the Harmony implementation and would therefore not pass
>> >> >>>> when run against the RI or other conforming implementations. It's
>> >> >>>> orthogonal to the classpath/bootclasspath issue.
>> >> >>>>
>> >> >>>>
>> >> >>>> >> * state.broken.<platform id>  --  tests broken on a specific
>> >> >>>> >> platform
>> >> >>>> >>
>> >> >>>> >> * state.broken  --  tests broken on every platform, but where we
>> >> >>>> >> want to decide whether or not to run them from our suite
>> >> >>>> >> configuration
>> >> >>>> >>
>> >> >>>> >> * os.<platform id>  --  tests that are to be run only on the
>> >> >>>> >> specified platform (a test could be a member of more than one of
>> >> >>>> >> these)
>> >> >>>> >
>> >> >>>> > And the defaults for these are an unbroken state and runs on any
>> >> >>>> > platform.
>> >> >>>> > That makes sense...
>> >> >>>> >
>> >> >>>> > Will the platform ids be organised in a similar way to the
>> >> >>>> > platform ids
>> >> >>>> > we've discussed before for organisation of native code [1]?
>> >> >>>> >
>> >> >>>>
>> >> >>>> The actual string used to identify a particular platform can be
>> >> >>>> whatever we want it to be, just so long as we are consistent. So,
>> >> >>>> yes, the ids mentioned in the referenced email would seem a good
>> >> >>>> starting point. Do we need to include a 32-bit/64-bit identifier ?
>> >> >>>>
>> >> >>>>
>> >> >>>> > So all tests are, by default, in an all-platforms (or shared)
>> >> >>>> > group.
>> >> >>>> > If a test fails on all Windows platforms, it is marked with
>> >> >>>> > state.broken.windows.
>> >> >>>> > If a test fails on Windows but only on, say, amd hardware,
>> >> >>>> > it is marked state.broken.windows.amd.
>> >> >>>> >
>> >> >>>>
>> >> >>>> Yes. Agreed.
>> >> >>>>
>> >> >>>>
>> >> >>>> > Then when you come to run tests on your windows amd machine,
>> >> >>>> > you want to include all tests in the all-platform (shared) group,
>> >> >>>> > os.windows and os.windows.amd, and exclude all tests in the
>> >> >>>> > state.broken, state.broken.windows and state.broken.windows.amd
>> >> >>>> > groups.
>> >> >>>> >
>> >> >>>> > Does this tally with what you were thinking?
>> >> >>>> >
>> >> >>>>
>> >> >>>> Yes, that is the idea.
>> >> >>>>
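>> >> >>>> Expressed as a testng.xml fragment, that windows/amd run might look
>> >> >>>> something like this (a sketch only - the suite/test names, the
>> >> >>>> "shared" group name and the class name are placeholders):
>> >> >>>>
>> >> >>>>   <suite name="classlib">
>> >> >>>>     <test name="win.amd">
>> >> >>>>       <groups>
>> >> >>>>         <run>
>> >> >>>>           <!-- pull in the all-platform tests plus the os groups -->
>> >> >>>>           <include name="shared"/>
>> >> >>>>           <include name="os.windows"/>
>> >> >>>>           <include name="os.windows.amd"/>
>> >> >>>>           <!-- keep the known-broken tests out of the run -->
>> >> >>>>           <exclude name="state.broken"/>
>> >> >>>>           <exclude name="state.broken.windows"/>
>> >> >>>>           <exclude name="state.broken.windows.amd"/>
>> >> >>>>         </run>
>> >> >>>>       </groups>
>> >> >>>>       <classes>
>> >> >>>>         <class name="org.apache.harmony.tests.SomeTest"/>
>> >> >>>>       </classes>
>> >> >>>>     </test>
>> >> >>>>   </suite>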
>> >> >>>>
>> >> >>>> >>
>> >> >>>> >>
>> >> >>>> >> What does everyone else think ? Does such a scheme sound
>> >> >>>> >> reasonable ?
>> >> >>>> >
>> >> >>>> > I think so - it seems to cover our current requirements. Thanks
>> >> >>>> > for coming up with this!
>> >> >>>> >
>> >> >>>>
>> >> >>>> Thanks, but I don't see it as final yet really. It would be great
>> >> >>>> to prove the worth of this by doing a trial on one of the existing
>> >> >>>> modules, ideally something that contains tests that are
>> >> >>>> platform-specific.
>> >> >>>
>> >> >>>
>> >> >>> Hello George, how about doing a trial on the NIO module?
>> >> >>>
>> >> >>> So far as I know, there are several platform-dependent tests in the
>> >> >>> NIO module. :)
>> >> >>>
>> >> >>> The assert statements are commented out in these tests, with a
>> >> >>> "FIXME" mark.
>> >> >>>
>> >> >>> Furthermore, I have also found some platform-dependent behaviours of
>> >> >>> FileChannel. If TestNG is applied to NIO, I will contribute new
>> >> >>> tests for FileChannel and fix the bugs in the source code.
>> >> >>>
>> >> >>> What's your opinion? Any suggestions/comments?
>> >> >>>
>> >> >>> Thanks!
>> >> >>>
>> >> >>
>> >> >> Hi Andrew,
>> >> >>
>> >> >> That sounds like a very good idea. If there is agreement in the
>> >> >> project that 5.0 annotations are the way to go (as opposed to the
>> >> >> pre-5.0 Javadoc comment support offered by TestNG) then to the best
>> >> >> of my knowledge all that is stopping us from doing this trial is the
>> >> >> lack of a 5.0 VM to run the Harmony tests on. Hopefully that will be
>> >> >> addressed soon. When it is I would be happy to get stuck into this
>> >> >> trial.
>> >> >>
>> >> >> Best regards,
>> >> >> George
>> >> >>
>> >> >>
>> >> >>>> Best regards,
>> >> >>>> George
>> >> >>>>
>> >> >>>>
>> >> >>>> > Regards,
>> >> >>>> > Oliver
>> >> >>>> >
>> >> >>>> > [1]
>> >> >>>> > http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/[EMAIL PROTECTED]
>> >> >>>>
>> >> >>>> >
>> >> >>>> >
>> >> >>>> >>
>> >> >>>> >> Thanks for reading this far.
>> >> >>>> >>
>> >> >>>> >> Best regards,
>> >> >>>> >> George
>> >> >>>> >>
>> >> >>>> >>
>> >> >>>> >>
>> >> >>>> >> George Harley wrote:
>> >> >>>> >>> Hi,
>> >> >>>> >>>
>> >> >>>> >>> Just seen Tim's note on test support classes and it really
>> >> >>>> >>> caught my attention as I have been mulling over this issue for
>> >> >>>> >>> a little while now. I think that it is a good time for us to
>> >> >>>> >>> return to the topic of class library test layouts.
>> >> >>>> >>>
>> >> >>>> >>> The current proposal [1] sets out to segment our different
>> >> >>>> >>> types of test by placing them in different file locations.
>> >> >>>> >>> After looking at the recent changes to the LUNI module tests
>> >> >>>> >>> (where the layout guidelines were applied) I have a real
>> >> >>>> >>> concern that there are serious problems with this approach. We
>> >> >>>> >>> have started down a track of just continually growing the
>> >> >>>> >>> number of test source folders as new categories of test are
>> >> >>>> >>> identified and IMHO that is going to bring complexity and
>> >> >>>> >>> maintenance issues with these tests.
>> >> >>>> >>>
>> >> >>>> >>> Consider the dimensions of tests that we have ...
>> >> >>>> >>>
>> >> >>>> >>> API
>> >> >>>> >>> Harmony-specific
>> >> >>>> >>> Platform-specific
>> >> >>>> >>> Run on classpath
>> >> >>>> >>> Run on bootclasspath
>> >> >>>> >>> Behaves differently between Harmony and the RI
>> >> >>>> >>> Stress
>> >> >>>> >>> ...and so on...
>> >> >>>> >>>
>> >> >>>> >>>
>> >> >>>> >>> If you weigh up all of the different possible permutations and
>> >> >>>> >>> then consider that the above list is highly likely to be
>> >> >>>> >>> extended as things progress it is obvious that we are
>> >> >>>> >>> eventually heading for large amounts of related test code
>> >> >>>> >>> scattered or possibly duplicated across numerous "hard wired"
>> >> >>>> >>> source directories. How maintainable is that going to be ?
>> >> >>>> >>>
>> >> >>>> >>> If we want to run different tests in different configurations
>> >> >>>> >>> then IMHO we need to be thinking a whole lot smarter. We need
>> >> >>>> >>> to be thinking about keeping tests for specific areas of
>> >> >>>> >>> functionality together (thus easing maintenance); we need
>> >> >>>> >>> something quick and simple to re-configure if necessary
>> >> >>>> >>> (pushing whole directories of files around the place does not
>> >> >>>> >>> seem a particularly lightweight approach); and something that
>> >> >>>> >>> is not going to potentially mess up contributed patches when
>> >> >>>> >>> the file they patch is found to have been recently pushed from
>> >> >>>> >>> source folder A to B.
>> >> >>>> >>>
>> >> >>>> >>> To connect into another recent thread, there have been some
>> >> >>>> >>> posts lately about handling some test methods that fail on
>> >> >>>> >>> Harmony and have meant that entire test case classes have been
>> >> >>>> >>> excluded from our test runs. I have also been noticing some
>> >> >>>> >>> API test methods that pass fine on Harmony but fail when run
>> >> >>>> >>> against the RI. Are the different behaviours down to errors in
>> >> >>>> >>> the Harmony implementation ? An error in the RI
>> >> >>>> >>> implementation ? A bug in the RI Javadoc ? Only after some
>> >> >>>> >>> investigation has been carried out do we know for sure. That
>> >> >>>> >>> takes time. What do we do with the test methods in the
>> >> >>>> >>> meantime ? Do we push them round the file system into yet
>> >> >>>> >>> another new source folder ? IMHO we need a testing strategy
>> >> >>>> >>> that enables such "problem" methods to be tracked easily
>> >> >>>> >>> without disruption to the rest of the other tests.
>> >> >>>> >>>
>> >> >>>> >>> A couple of weeks ago I mentioned that the TestNG framework
>> >> >>>> >>> [2] seemed like a reasonably good way of allowing us to both
>> >> >>>> >>> group together different kinds of tests and permit the
>> >> >>>> >>> exclusion of individual tests/groups of tests [3]. I would
>> >> >>>> >>> like to strongly propose that we consider using TestNG as a
>> >> >>>> >>> means of providing the different test configurations required
>> >> >>>> >>> by Harmony. Using a combination of annotations and XML to
>> >> >>>> >>> capture the kinds of sophisticated test configurations that
>> >> >>>> >>> people need, and that allows us to specify down to the
>> >> >>>> >>> individual method, has got to be more scalable and flexible
>> >> >>>> >>> than where we are headed now.
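>> >> >>>> >>>
>> >> >>>> >>> For instance, taking a single "problem" method out of the run
>> >> >>>> >>> rather than excluding the whole test class could be captured
>> >> >>>> >>> in the suite XML along these lines (a sketch - the class and
>> >> >>>> >>> method names are made up):
>> >> >>>> >>>
>> >> >>>> >>>   <classes>
>> >> >>>> >>>     <class name="tests.api.java.net.SocketTest">
>> >> >>>> >>>       <methods>
>> >> >>>> >>>         <!-- track the problem method here, run everything else -->
>> >> >>>> >>>         <exclude name="test_bind"/>
>> >> >>>> >>>       </methods>
>> >> >>>> >>>     </class>
>> >> >>>> >>>   </classes>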
>> >> >>>> >>>
>> >> >>>> >>> Thanks for reading this far.
>> >> >>>> >>>
>> >> >>>> >>> Best regards,
>> >> >>>> >>> George
>> >> >>>> >>>
>> >> >>>> >>>
>> >> >>>> >>> [1]
>> >> >>>> >>> http://incubator.apache.org/harmony/subcomponents/classlibrary/testing.html
>> >> >>>> >>>
>> >> >>>> >>>
>> >> >>>> >>> [2] http://testng.org
>> >> >>>> >>> [3]
>> >> >>>> >>> http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200606.mbox/[EMAIL PROTECTED]



