On Mon, Oct 14, 2013 at 10:48 AM, Robert Muir <[email protected]> wrote:
> On Mon, Oct 14, 2013 at 10:41 AM, Michael McCandless
> <[email protected]> wrote:
>> This has actually bit me before too ...
>>
>> I mean, sure, I do eventually notice that it ran too quickly and so it
>> was not in fact really SUCCESSFUL.
>>
>> Why would Rob's example fail?  In that case, it would have in fact run
>> TestIndexWriter, right?  (Sure, other modules didn't have such a test,
>> but the fact that one of the visited modules did have the test should
>> mean that the overall ant run is SUCCESSFUL?).  Is it just too hard
>> with ant to make this logic be "across modules"?
>>
>
> 'ant test' needs to do a lot more than the specialized python script
> you have to repeat one test.
Right, I agree this is hard to fix, because of ant / randomizedtesting
/ our build-script limitations.

But I still think it's wrong that "ant test -Dtestcase=foo
-Dtestmethod=bar" finishes with BUILD SUCCESSFUL when you have an
accidental typo and in fact nothing ran.  It's like javac declaring
success when you mis-typed the name of one of your java source files.

I know and agree this is really, really hard for us to fix, but I
still think it's wrong: it's so trappy.

Maybe we need a new "ant run-this-test-for-certain" target or something.

> so I think you should modify the latter instead of trying to make the
> whole build system complicated.

Yeah, I fixed luceneutil ... it's of course hackity, since I peek at
the stdout for "OK (0 tests)" and then call that a failure.

Also, luceneutil "cheats", since this particular beasting tool
(repeatLuceneTest.py) only runs one module (you have to cd to that
directory first).  The distributed beasting tool (runRemoteTests.py)
does run all modules, though ...

Mike McCandless

http://blog.mikemccandless.com

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
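P.S. The stdout check described above is roughly this shape (a minimal sketch only, not the actual luceneutil code; the function name and the exact summary strings matched are illustrative assumptions):

```python
import re

def run_looks_successful(ant_stdout: str) -> bool:
    """Treat an 'ant test' run as a failure if the JUnit summary shows
    that zero tests actually executed, even when ant itself printed
    BUILD SUCCESSFUL (e.g. because -Dtestcase / -Dtestmethod had a typo
    and matched nothing)."""
    # A summary like "OK (0 tests)" means the filter selected no tests.
    if re.search(r'OK \(0 tests\)', ant_stdout):
        return False
    # Otherwise fall back to ant's own verdict.
    return 'BUILD SUCCESSFUL' in ant_stdout

# Typo'd test name: ant says SUCCESSFUL, but nothing ran.
print(run_looks_successful("OK (0 tests)\nBUILD SUCCESSFUL"))   # False
# Tests really ran.
print(run_looks_successful("OK (12 tests)\nBUILD SUCCESSFUL"))  # True
```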
