Not to add more fuel to the fire, but with all things being relative, so far this topic has been a comparison of TestNG and JUnit v3.8. From what I understand, the latest JUnit v4.1 provides many of the same annotation features that TestNG does, as well as guaranteed compatibility with JUnit v3-based tests.
If we were to compare moving to TestNG with upgrading to JUnit v4.1, would there still be as much value in the proposition to move to TestNG?

-Nathan

> -----Original Message-----
> From: George Harley [mailto:[EMAIL PROTECTED]
> Sent: Monday, July 10, 2006 3:57 PM
> To: harmony-dev@incubator.apache.org
> Subject: Re: [classlib] Testing conventions - a proposal
>
> Alexei Zakharov wrote:
> > Hi George,
> >
> >> For the purposes of this discussion it would be fascinating to find out
> >> why you refer to TestNG as being an "unstable" test harness. What is
> >> that statement based on?
> >
> > My exact statement was referring to TestNG as "probably unstable"
> > rather than simply "unstable". ;) This statement was based on posts
> > from Richard Liang about the bug in the TestNG migration tool, and on
> > common sense. If the project has such an obvious bug in one place, it
> > may well have other bugs in other places. JUnit is a quite famous and
> > widely used toolkit that has proved to be stable enough. TestNG is
> > neither famous nor widely used. And IMHO it makes sense to be careful
> > with exciting new tools until we *really* need their innovative
> > functionality.
>
> Hi Alexei,
>
> Last I heard, Richard posted saying that there was no bug in the
> migration tool [1]. The command line tool is designed to locate JUnit
> tests under a specified location and add the TestNG annotations to them.
> That's what it does.
>
> You are right to say that it makes sense to be careful in this matter.
> Nobody wants to do anything that affects Harmony in an adverse way.
>
> Best regards,
> George
>
> [1]
> http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200607.mbox/[EMAIL PROTECTED]
>
>
> > 2006/7/10, George Harley <[EMAIL PROTECTED]>:
> >> Alexei Zakharov wrote:
> >> >> Actually, there's a very valid benefit for using TestNG markers (=
> >> >> annotations/JavaDoc) for grouping tests; the directory structure is a
> >> >> tree, whereas the markers can form any slice of tests, and the sets
> >> >
> >> > Concerning TestNG vs JUnit, I would just like to draw your attention
> >> > to the fact that it is possible to achieve the same level of test
> >> > grouping/slicing with JUnit TestSuites. You may define any number of
> >> > intersecting suites - XXXAPIFailingSuite, XXXHYSpecificSuite,
> >> > XXXWinSpecificSuite or whatever - without the necessity of migrating
> >> > to a new (probably unstable) test harness.
> >> > Just my two cents.
> >>
> >> Hi Alexei,
> >>
> >> You are quite correct that JUnit test suites are another alternative
> >> here. If I recall correctly, their use was discussed in the very early
> >> days of this project, but it came to nothing and we instead went down
> >> the route of using exclusion filters in the Ant JUnit task. That
> >> approach does not offer much in the way of fine-grained control and
> >> relies on us pushing stuff around the repository. Hence the kicking
> >> off of this thread.
> >>
> >> For the purposes of this discussion it would be fascinating to find out
> >> why you refer to TestNG as being an "unstable" test harness. What is
> >> that statement based on?
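Alexei's intersecting-suites point can be made concrete without any harness at all: a suite is effectively just a named set of tests, and nothing stops two suites from overlapping. Below is a minimal, framework-free Java sketch of that idea (all suite and test names are hypothetical stand-ins, not real Harmony tests):

```java
import java.util.*;

public class IntersectingSuites {
    // Hypothetical test-case names; each "suite" is just a set of tests,
    // so the same test can belong to several suites at once.
    static final Map<String, Set<String>> SUITES = Map.of(
            "XXXAPIFailingSuite",  Set.of("testRead", "testSeek"),
            "XXXWinSpecificSuite", Set.of("testSeek", "testDriveLetter"),
            "XXXHYSpecificSuite",  Set.of("testRead", "testHarmonyQuirk"));

    /** Tests that sit in both suites -- overlap a strict tree cannot express. */
    static Set<String> inBoth(String a, String b) {
        Set<String> result = new HashSet<>(SUITES.get(a));
        result.retainAll(SUITES.get(b));
        return result;
    }
}
```

With real JUnit 3.8 suites the same effect comes from adding the same TestCase class to several `suite()` methods; the overlap is what lets suites mimic TestNG-style slicing.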
> >>
> >> Best regards,
> >> George
> >>
> >>
> >> > 2006/7/8, Alex Blewitt <[EMAIL PROTECTED]>:
> >> >> On 08/07/06, Geir Magnusson Jr <[EMAIL PROTECTED]> wrote:
> >> >> >
> >> >> > So while I like the annotations, and expect we can use them
> >> >> > effectively, I have an instinctive skepticism of annotations right
> >> >> > now because in general (in general, in Java), I'm not convinced
> >> >> > we've used them enough to grok good design patterns.
> >> >>
> >> >> There's really no reason to get hung up on the annotations. TestNG
> >> >> works just as well with JavaDoc source comments; annotations are only
> >> >> another means to that end. (They're probably a better one for the
> >> >> future, but it's just an implementation detail.)
> >> >>
> >> >> > Now since I still haven't read the thread fully, I'm jumping to
> >> >> > conclusions, taking it to the extreme, etc. etc., but my thinking in
> >> >> > writing the above is that if we bury everything about our test
> >> >> > 'parameter space' in annotations, some of the visible organization
> >> >> > we have now with the on-disk layout becomes invisible, and the
> >> >> > readable "summaries" of aspects of testing that we'd have in an XML
> >> >> > metadata document (or whatever) are also hard, because you need to
> >> >> > scan the sources to find all instances of annotation "X".
> >> >>
> >> >> I'm hoping that this would be just as applicable to using JavaDoc
> >> >> variants, and that the problem's not with annotations per se.
> >> >>
> >> >> In either case, both are grokkable with tools -- either
> >> >> annotation-savvy readers or a JavaDoc tag processor, and it wouldn't
> >> >> be hard to configure one of those to periodically scan the codebase
> >> >> to generate reports. Furthermore, as long as the annotation X is well
> >> >> defined, *you* don't have to scan it -- you leave it up to TestNG to
> >> >> figure it out.
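The "scan the sources to find all instances of annotation X" step that worries Geir is mechanical enough to script. Here is a rough, framework-free sketch of such a report generator working over JavaDoc-style tags; note that the `@testng.test groups = "..."` syntax is only an approximation of TestNG's pre-annotation markup, and the source fragment is invented for illustration:

```java
import java.util.*;
import java.util.regex.*;

public class GroupReport {
    // A fragment of hypothetical test source. The JavaDoc tag syntax is
    // approximate TestNG JDK-1.4-style markup, used purely for illustration.
    static final String SOURCE = """
            /** @testng.test groups = "io,win32" */
            public void testFileHandleOnWindows() {}

            /** @testng.test groups = "io" */
            public void testStreamRead() {}
            """;

    /** Maps each group name to the test methods marked with it. */
    static Map<String, List<String>> report(String source) {
        Map<String, List<String>> byGroup = new TreeMap<>();
        Matcher m = Pattern.compile(
                "groups\\s*=\\s*\"([^\"]+)\"\\s*\\*/\\s*public void (\\w+)")
                .matcher(source);
        while (m.find()) {
            for (String group : m.group(1).split(",")) {
                byGroup.computeIfAbsent(group.trim(), k -> new ArrayList<>())
                       .add(m.group(2));
            }
        }
        return byGroup;
    }
}
```

A periodic build step running something like this would give back the readable "summaries" that an on-disk layout provides for free.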
> >> >>
> >> >> Actually, there's a very valid benefit for using TestNG markers (=
> >> >> annotations/JavaDoc) for grouping tests; the directory structure is a
> >> >> tree, whereas the markers can form any slice of tests, and the sets
> >> >> don't need to be strict subsets (with a tree, everything has to be a
> >> >> strict subset of its parents). That means that it's possible to
> >> >> define a marker IO to run all the IO tests, or a marker Win32 to run
> >> >> all the Win32 tests, and both of those will contain IO-specific Win32
> >> >> tests. You can't do that in a tree structure without duplicating
> >> >> content somewhere along the line (e.g. /win/io or /io/win). Neither
> >> >> of these scales well: every time you add a new dimension, you double
> >> >> the structure of the directory, whereas with TestNG you merely add a
> >> >> new marker. So if you wanted to have (say) boot classpath tests vs
> >> >> API tests, then you'd have to have /api/win/io and /boot/win/io (or
> >> >> various permutations as applicable).
> >> >>
> >> >> Most of the directory-based arguments seem to be along the lines of
> >> >> "/api/win/io is better! No, /win/io/api is better!". Just have an
> >> >> 'api', 'win', 'io' TestNG marker, and then let TestNG figure out
> >> >> which ones to run. You can then even get specific, and only run the
> >> >> Windows IO API tests, if you really want -- but if you don't, you get
> >> >> the benefit of being able to run all IO tests (both API and boot).
> >> >>
> >> >> There doesn't seem to be any benefit to having a strict tree-like
> >> >> structure to the tests when it's possible to have a multi-dimensional
> >> >> matrix of all possible combinations that's managed by the tool.
> >> >>
> >> >> Alex.
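Alex's slicing argument can be sketched in plain Java. The custom `Groups` annotation below is a hypothetical stand-in for TestNG's real `groups` attribute (this illustrates the idea, not TestNG's API), and a reflective selector shows how one method can sit in both the IO and Win32 slices without any directory duplication:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;
import java.util.*;

public class MarkerSlices {
    // Stand-in for TestNG's groups attribute, kept pure-JDK so the
    // slicing idea is visible without the harness itself.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Groups { String[] value(); }

    @Groups({"io"})          public void testStreamRead()     {}
    @Groups({"io", "win32"}) public void testDriveLetterIo()  {}
    @Groups({"win32"})       public void testRegistryAccess() {}

    /** Selects every test method carrying the given marker. The result is a
     *  slice, not a subtree: one method may appear in several slices. */
    static Set<String> slice(String group) {
        Set<String> selected = new TreeSet<>();
        for (Method m : MarkerSlices.class.getDeclaredMethods()) {
            Groups g = m.getAnnotation(Groups.class);
            if (g != null && Arrays.asList(g.value()).contains(group)) {
                selected.add(m.getName());
            }
        }
        return selected;
    }
}
```

Note that `testDriveLetterIo` is reachable from both the "io" slice and the "win32" slice; reproducing that overlap with directories would force a /win/io or /io/win duplicate.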
>
>
> ---------------------------------------------------------------------
> Terms of use : http://incubator.apache.org/harmony/mailing.html
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]