* I use my own test framework, attached, which was inspired by the
original version of JUnit. Its important features are:
* Every test is lexically enclosed.
What does this mean precisely?
* Every test or group of tests has a name.
Test-case names are optional in SRFI 64 and C/C. IIRC group names are
required.
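For reference, a short sketch of how naming works in SRFI 64 (the import form varies by implementation; the group name given to `test-begin` is required, while per-test names are optional):

```scheme
(import (srfi 64))          ; R7RS spelling; some systems differ

(test-begin "arithmetic")           ; group name: required
(test-equal "addition" 4 (+ 2 2))   ; named test case
(test-equal 6 (* 2 3))              ; anonymous test case: name omitted
(test-end "arithmetic")
```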
* Tests are first-class objects.
This is nice. SRFI 64 uses "test specifiers" to refer to tests; I can't
find first-class test objects in its public interface.
Test objects should probably go into a distinct SRFI from the runners
and the definition framework. It could be a test middleware, in a similar
vein to how WSGI/Rack/Ring are HTTP middleware.
* One can run tests in a mode where only failures are reported. This
way, one doesn't have to wade through output in order to figure out
whether everything passed, or what failed.
Also nice, for the runner.
* It's possible to run individual tests or test groups or all defined
tests.
* Test groups can be defined concisely.
In SRFI 64 and C/C this is `test-group` or `test-begin/test-end`.
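For comparison, the two SRFI 64 spellings of the same group (`test-group` is shorthand for a matched `test-begin`/`test-end` pair):

```scheme
;; Short form: test-group wraps test-begin/test-end.
(test-group "strings"
  (test-assert (string? "abc"))
  (test-equal 3 (string-length "abc")))

;; Long form: explicit begin/end markers.
(test-begin "strings")
(test-assert (string? "abc"))
(test-equal 3 (string-length "abc"))
(test-end "strings")
```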
* Tests only pass if they return the symbol `passed`. That makes it
harder for buggy tests to appear to pass when they actually never ran.
This is completely novel.
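A minimal sketch of the idea (the `run-test` helper here is an assumption for illustration, not the attached framework's actual API):

```scheme
;; Hypothetical runner: a test body must return the symbol 'passed, so
;; a body that returns #t by accident, or never reaches its final
;; expression, is reported as a failure rather than silently passing.
(define (run-test name thunk)
  (if (eq? (thunk) 'passed)
      (list name 'pass)
      (list name 'fail)))

;; Returns #t rather than 'passed, so it fails:
(run-test "bogus" (lambda () (= 1 1)))        ; => ("bogus" fail)
;; A conforming test ends by returning 'passed explicitly:
(run-test "real" (lambda () (= 1 1) 'passed)) ; => ("real" pass)
```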
* The assert macro uses simple heuristics to display the values that
were passed to it. This makes it less necessary to have a family of
assert macros for different purposes.
* There is an assert-signals-condition macro to test that an
expression causes a particular condition to be raised.
Similar to test-error in SRFI 64 and C/C.
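The SRFI 64 counterpart looks like this (how precisely the error-type argument is matched is implementation-dependent; `#t` matches any error):

```scheme
;; test-error passes when evaluating the expression raises an error.
(test-begin "errors")
(test-error "division by zero signals" #t (/ 1 0))
(test-error "car of non-pair signals" #t (car '()))
(test-end "errors")
```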
* Failure reports show the captured continuation of the failing test.
This continuation can be used with MIT Scheme's `debug` to walk the
stack of the failure, examining variables, etc. This is particularly
useful when an unexpected condition is raised during the test.
This is seriously cool. Would be great to have this in a runner.
I think in addition to MIT Scheme, at least Gambit and Chez have a
continuation-aware interactive debugger.
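A very rough, hypothetical sketch of capturing a continuation at the failure point with portable `call/cc` (plain `call/cc` only captures the continuation, not everything a native stack-walking debugger like MIT Scheme's would show; `assert-true` and `failure-k` are names invented for this example):

```scheme
;; Stash the continuation of a failing assertion so a debugger or REPL
;; session can inspect or re-enter it later.
(define failure-k #f)

(define (assert-true thunk)
  (call-with-current-continuation
    (lambda (k)
      (unless (thunk)
        (set! failure-k k)              ; capture the failing continuation
        (error "assertion failed")))))
```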