On Saturday 16 September 2006 14:42, Fergal Daly wrote:

> So, a passing TODO tells me that either the author
> - writes bad tests
> - means something else entirely by TODO
> - didn't run the test suite before this release
I can buy these, to some degree.

> or
> - my environment is different to the authors in a way that has an
> impact on the workings of the module

This is the one that bothers me. Ignoring the issue that this is Really Not a Job for the Parser, I'm uncomfortable effectively recommending that people *not* use a module until someone deliberately reports "Hey, it works better than you think on this platform!" to the author and the author releases a completely new version to un-TODO those tests.

I don't want to get in the habit of telling installers to ignore test failures, because that is Just Bad.

I don't want to get in the habit of making busywork for authors, because I don't want to do it myself, I don't want them to skip adding legitimate and useful tests, and I don't want them to skimp on tests where a TODO test is appropriate.

I don't want someone to suggest maintaining external test metadata about expected platforms and expected skips and expected TODOs, because hey, look over there, that's overengineering something that could be simple.

If and only if bonus tests actually *harm* the use of the module should it be a possibility that they cause an error condition.

I've seen a lot of hypothetical situations, but how about real examples of TODO tests that actually exist? I know how Schwern and Ian and I have documented them in various forms; how do people actually use them? (Also, again, Not The Parser's Job.)

--
c
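(For anyone following the thread without the TAP details at hand: a TODO test is marked with a `# TODO` directive in the raw TAP stream, and the harness treats its *failure* as expected. A "passing TODO" is one that succeeds anyway. A minimal illustrative stream, with invented test names:)

```
ok 1 - module loads
not ok 2 - handles unicode filenames # TODO not yet implemented
ok 3 - handles long paths # TODO not yet implemented
```

Line 2 is the normal case: an expected failure the harness reports as non-fatal. Line 3 is the "bonus" test under discussion: it was declared TODO but passed, which is exactly the situation where we disagree about whether the parser or harness should raise any kind of error.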