2009/7/28 Nikodemus Siivola <[email protected]>:
>> If new tests are added, it may be that suddenly N+M tests are failing,
>> indicating regressions their changes introduced---which they actually
>> did not do.
> I would not be concerned about this.
Me neither - I quite recently ran into this situation with abcl. My procedure for making changes is controlled enough that I knew what had happened: I never update both abcl and the tests simultaneously.

I have some 0.02 about this discussion. I don't see a need for excluding new tests: if they are correct ANSI tests, I believe they should be run just like the others. Nor do I think new tests need to be separated in any special way - just add them to whichever file is most suitable, and only create a new file if absolutely necessary.

As for the suggested consensus, I'd expect 3-4 people to agree that a test is correct and in compliance with the spec. I don't see it as necessary that those people be implementation representatives; it's enough that they are in general agreement and have some reasonable justification for the new test. This doesn't mean the 3-4 people have to be on the mailing list. I'm not suggesting a strict process - I expect people to be able to use some sense. And I don't recommend giving anyone veto powers.

_______________________________________________
Implementation-hackers mailing list
[email protected]
http://common-lisp.net/cgi-bin/mailman/listinfo/implementation-hackers
