Since std.experimental.testing got rejected, I've started to think about what's next. I started working on a new project that would allow its test runner to be used on any project with no other intervention from the user. The idea would be to use `-unittest` as now, until a test failed, then use the tool to get the output and options of std.experimental.testing / unit-threaded without changing or adding to the source files. It turned out to be more complicated than I thought.

The main problem is that currently the only way to get at individual unit tests is `__traits(getUnitTests)`. That means compile-time reflection, which means enumerating all modules that should be tested. This duplicates information the build system already knows. Furthermore, knowing _how_ to reflect on modules is... tricky. If a module `bar` in a package `foo` is in directory `src`, then the compile-time string should be "foo.bar", _not_ "src.foo.bar". No generic tool can know that; it's build-system specific. One could open the files and parse the module declaration at the top, but obviously this is not the best idea ever.
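To make the problem concrete, here's a minimal sketch of what compile-time test discovery looks like (the module name "foo.bar" and the helper are mine, for illustration; this assumes compiling with `-unittest` so the unittest blocks exist):

    module runner;

    // The names must be compile-time strings matching the *module*
    // declarations ("foo.bar"), not the file paths ("src/foo/bar.d").
    void runModuleTests(Modules...)()
    {
        import std.stdio : writeln;
        foreach (moduleName; Modules)
        {
            mixin("import mod = " ~ moduleName ~ ";");
            foreach (test; __traits(getUnitTests, mod))
            {
                writeln("Running ", __traits(identifier, test));
                test();
            }
        }
    }

    void main()
    {
        runModuleTests!("foo.bar")(); // "src.foo.bar" would fail to compile
    }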

This isn't a problem for anybody using unit-threaded, because I've encouraged keeping tests separate from production code. If all tests live in a base package called `tests`, things are considerably easier.
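The reason it's easier: with an agreed-upon root package, the module name can be derived mechanically from the file path. A hypothetical helper (assuming paths relative to the import root, with "/" separators):

    import std.array : replace;
    import std.path : stripExtension;

    // Only reliable when the path is relative to the import root.
    string moduleNameFromPath(string path)
    {
        return path.stripExtension.replace("/", ".");
    }

    unittest
    {
        assert(moduleNameFromPath("tests/foo/bar.d") == "tests.foo.bar");
    }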

So I started working on a tool that would use the information from `dub describe`: dub knows where all the files are and how they should be imported (there's a sketch of the idea after the list below). However, there are two problems with that:

1) It's a _lot_ of work, and who knows who'll actually use this
2) It would limit the tool's applicability to dub projects
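For reference, the idea was roughly this. I'm going from memory on the JSON layout ("packages" entries with "files" carrying "role"/"path"), so double-check the field names against your dub version:

    import std.json : parseJSON;
    import std.process : execute;
    import std.stdio : writeln;

    void main()
    {
        // Ask dub for its view of the project and list the source files.
        auto dub = execute(["dub", "describe"]);
        assert(dub.status == 0, "dub describe failed");

        auto json = parseJSON(dub.output);
        foreach (pkg; json["packages"].array)
            foreach (file; pkg["files"].array)
                if (file["role"].str == "source")
                    writeln(file["path"].str);
    }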

The only other way to run unit tests differently is to change the runner at runtime. However, the way it works right now, each `ModuleInfo` contains a single function pointer that calls all the unit tests in that module. There's no granularity.
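This is what's available today via druntime's hooks; the sketch below installs a custom runner, but note it can't do better than whole modules:

    import core.runtime : Runtime;
    import std.stdio : writeln;

    shared static this()
    {
        // The hook runs after module constructors and before main;
        // returning true lets main run afterwards.
        Runtime.moduleUnitTester = function bool()
        {
            foreach (m; ModuleInfo)       // m is a ModuleInfo*
            {
                if (auto fp = m.unitTest) // one pointer per *module*...
                {
                    writeln("Testing ", m.name);
                    fp();                 // ...so this runs all its tests at once
                }
            }
            return true;
        };
    }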

I'm thinking it might be worth changing how `ModuleInfo` is written so that unit tests are chained in a linked list or just emitted as an array of function pointers instead. No reflection needed, no figuring out how to build the project all over again. Metadata could also be attached to each individual test; this is where a string for its name could go. To keep it extensible, the metadata could be an associative array of string to string.
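Purely hypothetical sketch of what I mean; none of this exists in druntime today:

    // Per-test entries with extensible metadata, instead of one
    // function pointer for the whole module.
    struct UnitTestInfo
    {
        string[string] metadata;  // e.g. ["name": "fooWorks"]
        void function() testFunc; // exactly one unittest block
    }

    // With that, a runtime runner could pick tests by name:
    void runByName(const(UnitTestInfo)[] tests, string name)
    {
        foreach (t; tests)
            if (auto p = "name" in t.metadata)
                if (*p == name)
                    t.testFunc();
    }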

Is this worth pursuing?

Atila
