On 25/03/11 06:09, Steven Schveighoffer wrote:
On Thu, 24 Mar 2011 00:17:03 -0400, Graham St Jack <graham.stj...@internode.on.net> wrote:

Regarding unit tests - I have never been a fan of putting unit test code into the modules being tested because:

* Doing so introduces stacks of unnecessary imports, and bloats the module.

As Jonathan says, version(unittest) works. No need to bloat unnecessarily.
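
For example, something like this keeps the test-only imports out of a normal build (a minimal sketch; the imports chosen are made up):

version (unittest)
{
    // only compiled in when -unittest is passed
    import std.algorithm : equal;
}

unittest
{
    assert(equal([1, 2, 3], [1, 2, 3]));
}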

Agreed. However, all the circularity problems pop up when you compile with -unittest.


* Executing the unittests happens at run time rather than during the build.

Compile-time code execution is not a good idea for unit tests. It is always more secure and accurate to execute tests in the environment of the application, not the compiler.

I didn't say during compilation - the build tool I use executes the test programs automatically.


Besides, this is an implementation detail. It is easily mitigated. For example, phobos' unit tests can be run simply by doing:

make -f posix.mak unittest

and it builds + runs all unit tests. This can be viewed as part of the "Build process".

The problem I have with this is that executing the tests requires a "special" build and run which is optional. It is the optional part that is the key problem. In my last workplace, I set up a big test suite that was optional, and by the time we got around to running it, so many tests were broken that it was way too difficult to maintain. In my current workplace, the tests are executed as part of the build process, so you discover regressions ASAP.


All that unittests (as in the keyword) seem to have going for them is that they are an aid to documentation.


The huge benefit of D's unit tests is that the test is physically close to the code it's testing. This helps in debugging and managing updates. When you update the code to fix a bug, the unit test is right there to modify as well.
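
For instance (a made-up example), the fix and its regression test can sit side by side in the same module:

int clamp(int x, int lo, int hi)
{
    return x < lo ? lo : (x > hi ? hi : x);
}

unittest
{
    // the test lives right next to the function it covers
    assert(clamp(5, 0, 10) == 5);
    assert(clamp(-1, 0, 10) == 0);
    assert(clamp(42, 0, 10) == 10);
}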

I guess that was what I was alluding to as well. I certainly agree that having the tests that close is handy for users of a module. The extra point you make is that the unittest approach is also easier for the maintainer, which is fair enough.


The whole point of unittests is that if they are not easy to write and conveniently located, people won't write them. You may have a really good system and good coding practices that allow you to implement tests the way you do. But I typically will forget to update tests when I'm updating code. It's much simpler if I can just add a new line right where I'm fixing the code.

In practice I find that unit tests are often big and complex, and they deserve to be separate programs in their own right. The main exception to this is low-level libraries (like phobos?).


What I do instead is put unit tests into separate modules, and use a custom build system that compiles, links AND executes the unit test modules (when out of date of course). The build fails if a test does not pass.

The separation of the test from the code under test has plenty of advantages and no downside that I can see - assuming you use a build system that understands the idea. Some of the advantages are:
* No code-bloat or unnecessary imports.

Not a real problem with version(unittest).

* Much easier to manage inter-module dependencies.

Not sure what you mean here.

I mean that the tests typically have to import way more modules than the code under test, and separating them is a key step in eliminating circular imports.
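
As a rough sketch of what I mean (file and module names invented):

// mylib/queue.d - the code under test needs few or no imports
module mylib.queue;
struct Queue(T) { /* ... */ }

// test/queue_test.d - the separate test program pulls in the heavy imports
module test.queue_test;
import mylib.queue;
import std.stdio;
import std.random;

void main()
{
    Queue!int q;
    // ... exercise the queue and assert on the results ...
    writeln("queue tests passed");
}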


* The tests can be fairly elaborate, and can serve as well-documented examples of how to use the code under test.

This is not a point against unit tests; they can be this way as well. Unit testing phobos takes probably a minute on my system, including building the files. They are as complex as they need to be.

Conceded - it doesn't matter where the tests are, they can be as big as they need to be.

As for the time tests take, an important advantage of my approach is that the test programs only execute if their test-passed file is out of date. This means that in a typical build, very few (often 0 or 1) tests have to be run, and doing so usually adds way less than a second to the build time. After every single build (even in release mode), you know for sure that all the tests pass, and it doesn't cost you any time or effort.
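
Expressed as make-style rules purely for illustration (the real tool is a custom build system, and the names here are invented), the idea is roughly:

# rebuild the test program only when its sources change
queue_test: test/queue_test.d mylib/queue.d
	dmd -of$@ test/queue_test.d mylib/queue.d

# re-run the test only when the test program is newer than the recorded
# pass; the build fails if the test fails
queue_test.passed: queue_test
	./queue_test && touch $@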


* Since they only execute during the build, and even then only when out of date, they can afford to be more complete tests (i.e. use plenty of CPU time).

IMO unit tests should not be run along with the full application. I'd suggest a simple blank main function for the unit test build. I think even dmd (or rdmd?) will do this for you.
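
For example, something like this (assuming a dmd/rdmd recent enough to have these switches):

rdmd --main -unittest mymodule.d

or

dmd -unittest -main mymodule.d && ./mymodule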

There is no requirement to also run your application when running unit tests.

That is my point exactly. I don't run tests as part of the application - the tests are separate utilities intended to be run automatically by the build tool. They can also be run manually to assist in debugging when something goes wrong.


* If the code builds, you know all the unit tests pass. No need for a special unittest build and manual running of assorted programs to see if the tests pass.

This all stems from your assumption that you have to run unittests along with your main application.

When I use D unit tests, my command line is:

<command to build library/app> unittests

e.g.

make unittests

No special build situations are required. You can put this into your normal build script if you wish (i.e. build 2 targets, one unit-tested version and one release version).

i.e.:

all: unittests app
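
A rough sketch of such a makefile (file names invented, and assuming dmd's -main switch to supply a stub main for the test build):

LIBSRC = $(wildcard mylib/*.d)

all: unittests app

# compile the library with -unittest plus a stub main, then run the tests
unittests: $(LIBSRC)
	dmd -unittest -main -of$@ $(LIBSRC)
	./$@

app: main.d $(LIBSRC)
	dmd -of$@ main.d $(LIBSRC)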

* No need for special builds with -unittest turned on.

Instead you need a special build of other external files? I don't see any advantage here -- on one hand, you are building special extra files, on the other hand you are building the same files you normally build (which you should already have a list of) with the -unittest flag. I actually find the latter simpler.

-Steve

The difference in approach is basically this:

With unittest, tests and production code are in the same files, and are either built together and run together (too slow), or built separately and run separately (optional testing).

With my approach, tests and production code are in different files, built at the same time and run separately. The build system also automatically runs them if their results-file is out of date (mandatory testing).


Both approaches are good in that unit testing happens, which is very important. What I like about my approach is that the tests get run automatically when needed, so regressions are discovered immediately (if the tests are good enough). I guess you could describe the difference as automatic incremental testing versus manually-initiated batch testing.


--
Graham St Jack
