Bob Friesenhahn <bfrie...@simple.dallas.tx.us> writes:

> The philosophy of TAP is rather different than Automake's existing test
> suite.  These differences may prove challenging when switching from
> Automake tests to TAP:
> * Every TAP test needs to somehow know its unique test number (so it
>   can print a result "ok 35").  If the tests are all in the same
>   module then they can simply increment an integer but if they are
>   not then there is a problem (or they can always plan for one
>   test).

This isn't the case if you have a sufficiently new version of the TAP
driver.  The TAP protocol allows the tests to not be numbered.  You lose
some robustness, since you then potentially don't know whether a test was
skipped silently, but it's supported by C TAP Harness in the runtests.c
driver.  (The basic C TAP library doesn't support generating that style
of output, but that's because the library takes care of the counting for
you, so you don't need to keep track of test numbers.)

> * TAP tests need to know if they are expected to pass or fail.

This is true, and there isn't support for supplying that information
outside of the test program itself.  I'd be happy to implement a TAP
extension to allow this for my test driver, though, since it seems
obviously useful.  But I think your particular use case may have a
better solution.

> A major feature of how GraphicsMagick is using the Automake tests
> framework is that it determines which tests are XFAIL based on what
> features were configured into the software.  The list of tests which
> are expected to fail is produced based on Automake conditionals.  It
> is useful for the user to see tests XFAIL as a reminder that the
> configuration may be missing something they wanted.  How is this best
> handled for TAP tests?

What I do for similar situations in my packages is to test, at the start
of a test program that relies on optional features, whether those
features are available and, if not, call skip_all.  (The TAP protocol
representation is an initial "1..0 # skip ..." plan line.)  That will
result in that test being marked as skipped, which is *sort of* like
XFAIL.  You then get output like:

    Running all tests listed in TESTS.  If any tests fail, run the failing
    test program with runtests -o to see more details.

    kafs/basic..............skipped (AFS not available)
    kafs/haspag.............skipped (AFS not available)
    pam-util/args...........ok
    pam-util/fakepam........ok
    pam-util/logging........ok
    pam-util/options........ok
    pam-util/vector.........ok
    portable/asprintf.......ok
    portable/daemon.........ok
    portable/getaddrinfo....ok
    portable/getnameinfo....ok
    portable/getopt.........ok
    portable/inet_aton......ok
    portable/inet_ntoa......ok
    portable/inet_ntop......ok
    portable/mkstemp........ok
    portable/setenv.........ok
    portable/snprintf.......ok
    portable/strlcat........ok
    portable/strlcpy........ok
    portable/strndup........ok
    util/buffer.............ok
    util/concat.............ok
    util/fdflag.............ok
    util/messages...........ok
    util/messages-krb5......ok
    util/network............ok
    util/vector.............ok
    util/xmalloc............skipped (xmalloc tests only run for maintainer)
    util/xwrite.............ok

    All tests successful, 3 tests skipped.
    Files=30,  Tests=2110,  5.13 seconds (0.00 usr + 0.00 sys = 0.00 CPU)

where the first two are a missing system feature and the last is a test
case that I only run if a particular environment variable is set.  You
can do similar things for any other missing feature, and of course skip
only part of the tests in a particular test case if you want (although
currently, with C TAP Harness, skipping individual tests doesn't report
the reason for the skip the way tests skipped with skip_all do).
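In case a concrete skeleton helps, here's a minimal sketch of that
pattern with the basic C TAP library.  The getenv check is just a
stand-in for whatever feature detection the package actually needs
(HAVE_AFS here is a made-up variable, and the header path may differ
depending on how the harness is embedded):

    #include <stdlib.h>

    #include <tap/basic.h>

    int
    main(void)
    {
        /*
         * Stand-in feature check: skip the whole test program when the
         * optional feature wasn't configured in.  skip_all prints the
         * "1..0 # skip AFS not available" plan line and exits
         * successfully, so runtests reports the program as skipped
         * rather than failed.
         */
        if (getenv("HAVE_AFS") == NULL)
            skip_all("AFS not available");

        /* Normal tests of the optional feature follow. */
        plan(2);
        ok(1, "feature initializes");
        ok(1, "feature does something useful");
        return 0;
    }

The same skeleton covers the maintainer-only case: just key the
skip_all call off whatever environment variable or configure result is
appropriate.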
--
Russ Allbery (r...@stanford.edu)             <http://www.eyrie.org/~eagle/>