On 8/1/2014 1:02 PM, H. S. Teoh via Digitalmars-d wrote:
Not to mention, it lets you *really* unittest your code thoroughly,
because it isolates each separate stage of the processing into its own
self-contained unit, with clearly-defined standard interconnects. Your
unittest can therefore easily hook up to the unit's interfaces, inject
arbitrary inputs, and run arbitrary tests on its output -- without
needing hackery like redirecting stdout just to confirm, for example,
that a numerical result was computed correctly.

In the traditional imperative programming style, you often have code
that has loops within loops within loops, with complex interactions
between each loop body and its surrounding context. It's generally
impossible (or very hard) to extricate the inner loop code from its
tightly-coupled context, which means your unittest will have a hard time
"reaching into" the innards of the nested loops to verify the
correctness of each piece of code.  Often, the result is that rather
than testing each *unit* of the code, you have to do a holistic test by
running the entire machinery of nested loops inside a sandbox and
capturing its output (via stdout redirection, or instrumenting dependent
subsystems, etc.) -- hardly a *unit* test anymore! -- and still, you
have the problem that there are too many possible code paths that these
nested loops may run through, so it would be hard to have any confidence
that you've covered sufficiently many paths in the test. You're almost
certain to miss important boundary cases.

I agree and want to amplify this a bit. When a function accepts ranges as its input/output parameters, it tends to be a template function. This makes it easy for a unittest to "mock up" ranges and feed them to the function under test.

I've used this to great success with Warp.
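Here's a minimal sketch of the idea in D (the function below is a made-up example, not code from Warp): because the parameter is a range, the function is a template, so the unittest can feed it a plain array, a lazy generator, or any hand-rolled mock range and check the result directly -- no stdout redirection required.

import std.algorithm : map, sum;
import std.range : iota, isInputRange;

// Hypothetical range-based function: any input range of numbers will do.
auto sumOfSquares(R)(R r)
    if (isInputRange!R)
{
    return r.map!(x => x * x).sum;
}

unittest
{
    // A plain array is a valid range -- the simplest possible "mock" input.
    assert(sumOfSquares([1, 2, 3]) == 14);

    // A lazily generated range works just as well, no allocation needed.
    assert(sumOfSquares(iota(1, 4)) == 14);

    // Boundary case that's easy to cover once the unit stands alone.
    assert(sumOfSquares(iota(0, 0)) == 0);
}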
