DMD 2.045 improves the built-in unit tests: the run now resumes after a test 
fails (only the first failed assert of each unittest block is reported).

There are many features that a professional unittest system is expected to 
offer today; I could write a long list. But as I have explained in the past, 
it's a bad idea to try to implement all those things in dmd.

So a good solution that keeps all the advantages is:
- To add to dmd the "core" features that are both important and hard to 
implement nicely in an external library or IDE (or to make D more flexible, so 
writing such libs becomes possible, but this may not be easy).
- To add to dmd the compile-time reflection, run-time reflection or hooks that 
external unittest libs/IDEs can use to extend the built-in unit testing 
functionality.

It's not easy to find such core features (ones that can be used by an IDE, but 
are usable from the normal command line too); this is my first try, and I may 
be wrong. Feel free to add items you think are necessary, or to remove items 
you know can be implemented nicely in an external library. Later I can write an 
enhancement request.

---------------------

1) It's very useful to have a way to catch static asserts too, because 
templates and other constructs can contain static asserts, which for example 
can be used to test that input types or constants are correct. When I write 
unittests for those templates I want to verify that they actually fail their 
static asserts when I use them in the wrong way.

A possible syntax (this has to act like static assert):
static throws(foo(10), StaticException1);
static throws(foo(10), StaticException1, StaticException2, ...);

A version that catches run-time asserts (this has to act like an assert):
throws(foo(10), Exception1);
throws(foo(10), Exception1, Exception2, ...);
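
For the static version a partial workaround is already possible today: 
__traits(compiles, ...) can verify that a wrong usage fails to compile, 
although it can't tell which static assert has fired. A small sketch (the 
half() template is just an invented example):

T half(T)(T x) {
    static assert(__traits(isIntegral, T), "half: an integral type is required");
    return x / 2;
}

unittest {
    static assert(__traits(compiles, half(10)));     // correct usage compiles
    static assert(!__traits(compiles, half("hi")));  // wrong usage does not
}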


There are ways to partially implement this for run-time asserts, but badly:

import std.conv: text;

void throws(Exceptions, TCallable, string filename=__FILE__, int line=__LINE__)
           (lazy TCallable callable) {
    try
        callable();
    catch (Throwable e) {
        // Throwable instead of Exception, so the AssertError thrown by a
        // failed assert is caught too.
        if (cast(Exceptions)e !is null)
            return;
    }

    assert(0, text(filename, "(", line, "): doesn't throw any of the specified exceptions."));
}
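
For example, using Phobos' to!int, which throws a ConvException on bad input:

import std.conv: to, ConvException;

unittest {
    throws!(ConvException)(to!int("hello"));  // passes: ConvException was thrown
}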


If that syntax is asking too much, an intermediate solution can be acceptable, 
like the following (but the point of this list is to enumerate important things 
that are not easy to implement well in an external library):

static throws!(StaticException1)(foo(10));
static throws!(StaticException1, StaticException2)(foo(10));

throws!(Exception1)(foo(10));
throws!(Exception1, Exception2)(foo(10));

---------------------

2) Names for unittests. Giving names to things in the universe is an essential 
first step if you want to try to understand some part of it. The compiler makes 
sure that in each module no two unittests share the same name. An example:

int sqr(int x) { return x * x; }

/// asserts that it doesn't return a negative value
unittest(sqr) {
    assert(sqr(10) >= 0);
    assert(sqr(-10) >= 0);
}

---------------------

3) Each unittest error has to report the (optional) name of the unittest it is 
contained in. For example:

test4(sqr,6): unittest failure

---------------------

4) The dmd JSON output has to list all the unittests, with their optional name 
(because an IDE can use this information to do many important things).
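
Something like this hypothetical sketch (the exact field names here are my 
invention, not the current dmd JSON schema):

{
    "kind" : "unittest",
    "name" : "sqr",
    "file" : "mymodule.d",
    "line" : 75,
    "comment" : "asserts that it doesn't return a negative value"
}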

---------------------

5) Optional ddoc text for unittests (to allow IDEs to show the programmer the 
purpose of a specific test that has failed).

Unittest ddoc comments shouldn't show up inside the HTML generated with -D, 
because the user of a module doesn't need to know the purpose of its unittests. 
So maybe they appear only inside the JSON output.

---------------------

6a) A way to enable only the unittests of one module. A project contains 
several modules, and when I work on one module I often want to run only its 
unittests. In general it's quite useful to be able to disable unittests 
selectively.

6b) A way to define groups of unittests (crossing modules too): sometimes you 
want to test a feature spread across more than one module, or you want to tell 
apart quick and slow unittests, to run the fast ones often and the
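
(A rough approximation of the slow/fast distinction is possible today with 
version blocks, as in this sketch, but it doesn't scale to many overlapping 
groups:)

// Compiled in only when the tests are built with -version=slow_tests:
version (slow_tests) {
    unittest {
        // ...expensive checks here...
    }
}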


One way to support unittest groups is to allow optional tag names after the 
unittest name; the tags define (possibly overlapping) groups:

unittest(foo_name, group1_tag_name, group2_tag_name, ...) {...}


One way to extend the -unittest compiler switch syntax to use those tags: allow 
multiple -unittest switches on the same command line, allow a = that specifies 
the name of a module whose unittests have to run, and a =tag: that specifies 
one tag name:

-unittest=module_foo -unittest=tag:group1_tag_name

This is just a first idea for unittest group management; better designs are 
possible.

====================================


Three more half-baked things; if you know how to improve/design these ideas you 
can tell me:

A) A serious unittest system needs a way to allow sharing of setup and shutdown 
code between tests.

From the Python unittest documentation: a test fixture is the preparation 
needed to perform one or more tests, and any associated cleanup actions. This 
may involve, for example, creating temporary or proxy databases, directories, 
or starting a server process, etc.

Fixtures can be supported at the package, module, class and function level. 
The setup always runs before its test (or group of tests).

setUp(): Method called to prepare the test fixture.
tearDown(): Method called immediately after the test method has been called and 
the result recorded.
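
In D a module-level fixture can be partially emulated today, because static 
constructors run before the module's unittests; a minimal sketch (the names 
are mine):

int[] testData;  // the shared fixture

static this() {  // acts as a module-level setUp()
    testData = [1, 2, 3];
}

unittest {
    scope(exit) testData = null;  // acts as a per-test tearDown()
    assert(testData.length == 3);
}

But there is no comparable spot for a tearDown() that runs right after each 
test, and nothing at the package or group level.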

---------------------

B) There are situations where you don't want to count how many unittests have 
failed, but want to fix a bug with a debugger. For this, a command-line switch 
that turns unittest asserts back into normal (aborting) asserts can be useful.

---------------------

C) I'd like a way to associate a specific unittest with the function, class, 
module, or something else that it tests, because this can be useful in several 
different ways. But this seems not easy to design. Keep in mind that tests can 
be in a different module. I don't know how to design this, or if it can be 
designed well.
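
One possible direction is a marker attached through attributes; this sketch 
uses an invented Tests struct, and assumes attributes are allowed on unittest 
blocks:

struct Tests(alias target) {}  // invented marker that names the tested symbol

int sqr(int x) { return x * x; }

@(Tests!sqr) unittest {
    assert(sqr(-10) >= 0);
}

A runner or IDE could then recover the association through compile-time 
reflection.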

---------------------

Bye,
bearophile
