Johannes Pfau wrote:
> Current situation:
> The compiler combines all unittests of a module into one huge function.
> If a unittest in a module fails, the rest won't be executed. The
> runtime (which is responsible for calling that per-module unittest
> method) must always run all unittests of a module.
> 
> Goal:
> The runtime / test runner can decide for every test if it wants to
> continue testing or abort. It should also be possible to run single
> tests and skip some tests. As a secondary goal the runtime should
> receive the filename and line number of the unittest declaration.
> 
> Proposal:
> Introduce a new 'MInewunitTest' ModuleInfo flag in object.d and in the
> compiler. If MInewunitTest is present, the moduleinfo does not contain
> a unittester function. Instead it contains an array (slice) of UnitTest
> structs. So the new module property looks like this:
> ----
> @property UnitTest[] unitTest() nothrow pure;
> ----
> 
> the UnitTest struct looks like this:
> ----
> struct UnitTest
> {
>    string name; //Not used yet
>    string fileName;
>    uint line;
>    void function() testFunc;
> }
> ----
> 
> The compiler generates a static array of all UnitTest objects for every
> module and sets the UnitTest[] slice in the moduleinfo to point to this
> static array. As the compiler already generates individual functions
> for every unittest, this isn't too difficult.
> 
> 
> Proof of Concept:
> I haven't done any dmd hacking before so this might be terrible code,
> but it is working as expected and can be used as a guide on how to
> implement this:
> https://github.com/jpf91/druntime/compare/newUnittest
> https://github.com/jpf91/dmd/compare/newUnittest
> 
> In this POC the MInewunitTest flag is not present yet; the new method
> is always used. The implementation in druntime is also only minimally
> changed. The compiler changes allow an advanced test runner to do a
> lot more:
> 
> * Be a GUI tool / use colored output / ...
> * Run single, specific tests, skip tests, ...
> * Execute tests in a different process, communicate with IPC. This way
>   we can deal with segmentation faults in unit tests.
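
With per-test entry points, even a simple runner can keep going after a failure and report file/line for each test. The following is a self-contained sketch: the UnitTest struct is copied from the proposal, while runTests and the sample tests are hypothetical stand-ins for what the runtime would get out of ModuleInfo.

```d
import std.stdio : writefln;

// Copied from the proposal; in the real patch this array would come
// from each module's ModuleInfo instead of being built by hand.
struct UnitTest
{
    string name;     // not used yet
    string fileName;
    uint line;
    void function() testFunc;
}

// Hypothetical runner: unlike the current one-function-per-module
// scheme, a failing test no longer prevents the remaining tests
// from running.
size_t runTests(UnitTest[] tests)
{
    size_t failures;
    foreach (t; tests)
    {
        try
        {
            t.testFunc();
            writefln("%s:%s\tSUCCESS", t.fileName, t.line);
        }
        catch (Throwable e) // includes AssertError; the runner chooses to go on
        {
            ++failures;
            writefln("%s:%s\tFAILURE: %s", t.fileName, t.line, e.msg);
        }
    }
    return failures;
}

void main()
{
    auto tests = [
        UnitTest("", "demo.d", 10, function { assert(1 + 1 == 2); }),
        UnitTest("", "demo.d", 20, function { assert(false, "intentional failure"); }),
        UnitTest("", "demo.d", 30, function { assert("abc".length == 3); }),
    ];
    auto failures = runTests(tests);
    assert(failures == 1); // the middle test failed, but all three ran
}
```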

Very recently I polished a tool I wrote called dtest.
http://jkm.github.com/dtest/dtest.html
The one thing I wanted to support but failed to implement is calling
individual unittests. I looked into it and thought I could find a way
to inspect the assembly with some C library, but I couldn't make it
work. Currently each module has a __modtest function which calls its
unittests.
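
For reference, this is roughly all the granularity a custom runner gets today: at most one tester function per module, reachable through ModuleInfo (a minimal sketch; the per-module testers are only non-null when the code was compiled with -unittest).

```d
import std.stdio : writeln;

void main()
{
    // Today's granularity: one compiler-generated tester per module
    // (__modtest), exposed via ModuleInfo. Individual unittests inside
    // a module are invisible from here.
    foreach (m; ModuleInfo)
    {
        if (m is null)
            continue;
        if (auto tester = m.unitTest) // null unless built with -unittest
        {
            writeln("running unittests of ", m.name);
            tester(); // runs ALL of the module's unittests, or stops at the first failure
        }
    }
}
```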

I haven't looked into segmentation faults, but I think you can already
handle them: you just need to install your own segmentation fault
handler. I should add this to dtest. dtest also lets you continue
executing the tests if an assertion fails, and it can turn failures
into breakpoints. When you use GNU ld you can even continue and break
on any thrown Throwable.
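
Installing such a handler is only a few lines. A sketch using the C signal API follows; the handler's behavior here (report and exit) is my choice for illustration, not what dtest does.

```d
import core.stdc.signal : signal, SIGSEGV, SIG_ERR;
import core.stdc.stdio : fprintf, stderr;
import core.stdc.stdlib : _Exit;

// Runs in async-signal context: stick to nothrow @nogc code here.
extern (C) void onSegfault(int) nothrow @nogc
{
    fprintf(stderr, "segmentation fault during unittest run\n");
    _Exit(1); // report and bail out instead of dying silently
}

void main()
{
    auto prev = signal(SIGSEGV, &onSegfault);
    assert(prev !is SIG_ERR); // handler installed
    // ... run the unittests here; a crashing test is now reported ...
}
```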

In summary, I think everything can already be done, just not at the
level of individual unittests. But I also think that this is important,
and this restriction alone is enough to merge your pull request after a
review.
The changes should be backward compatible, though. I think there is no
need to make the runtime more complex: just let it execute the single
__modtest function as before, but add the array of unittests. I'd be
happy to extend dtest to use this array, because I found no other
solution.

> Sample output:
> Testing generated/linux/debug/32/unittest/std/array
> std/array.d:86          SUCCESS
> std/array.d:145         SUCCESS
> std/array.d:183         SUCCESS
> std/array.d:200         SUCCESS
> std/array.d:231         SUCCESS
> std/array.d:252         SUCCESS
> std/array.d:317         SUCCESS

See
https://buildhive.cloudbees.com/job/jkm/job/dtest/16/console
for dtest's output.
$ ./dtest --output=xml
Testing 1 modules: ["dtest_unittest"]
====== Run 1 of 1 ======
PASS dtest_unittest
========================
All modules passed: ["dtest_unittest"]

This also generates a JUnit/GTest-compatible XML report.

Executing ./failing gives more interesting output:
$ ./failing --abort=asserts
Testing 3 modules: ["exception", "fail", "pass"]
====== Run 1 of 1 ======
FAIL exception
object.Exception@tests/exception.d(3): first exception
object.Exception@tests/exception.d(4): second exception
FAIL fail
core.exception.AssertError@fail(5): unittest failure
PASS pass
========================
Failed modules (2 of 3): ["exception", "fail"]

I also found some inconsistency in the output when asserts have no
message. It would be nice if that could be fixed too.
http://d.puremagic.com/issues/show_bug.cgi?id=8652

> The perfect solution:
> Would allow user defined attributes on tests, so you could name them,
> assign categories, etc. But till we have those user defined attributes,
> this seems to be a good solution.

This is orthogonal to your proposal. You just want every unittest
exposed as an individual function. How to define attributes for
functions is a different story.
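
For what it's worth, once attributes on unittest blocks exist, the two would compose naturally. The sketch below is hypothetical: it assumes UDA syntax on unittest blocks and a __traits(getUnitTests, ...) primitive for discovering them, neither of which the language offers today.

```d
module demo;

@("array append") // hypothetical: a string attribute naming the test
unittest
{
    assert([1] ~ 2 == [1, 2]);
}

void main()
{
    // Hypothetical discovery: enumerate this module's unittests and
    // read the attributes attached to each one.
    foreach (test; __traits(getUnitTests, demo))
    {
        foreach (attr; __traits(getAttributes, test))
            pragma(msg, attr); // the test's name, known at compile time
        test();
    }
}
```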

Jens
