On 7/25/2015 2:53 AM, Jonathan M Davis wrote:
> Oh, definitely. But while 100% unit test coverage is a huge step forward, I also
> think that for truly solid code, you want to go beyond that and make sure that
> you test corner cases and the like, test with a large enough variety of types
> with templates to catch behavioral bugs, etc. So, I don't think that we want to
> stop at 100% code coverage, but we do need to make sure that we're at 100% first
> and foremost.

There's another thing I discovered. If functions are broken up into smaller logical units, unit testing gets easier and there are fewer bugs. For example, the dmd code that reads the dmd.conf file was a single function that read the file, allocated memory, did the parsing, built the data structures, etc.

By splitting all these things up, suddenly it gets a lot easier to test! For example, just use the normal file I/O functions to read the file, which are tested elsewhere. Boom, no need to construct test files. The parsing logic can be easily handled by its own unit tests. And so on.
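To make the idea concrete, here's a minimal sketch in D. The function names and the simplified key=value format are hypothetical, not the actual dmd.conf code: the point is that once parsing takes a string rather than a file path, its `unittest` block needs no files on disk, and the I/O wrapper is so thin it needs no dedicated test at all.

```d
import std.file : readText;
import std.string : splitLines, strip;
import std.algorithm.searching : startsWith;
import std.array : split;

// Hypothetical parser: pure logic over a string, so tests
// can feed it literals instead of constructed test files.
string[string] parseConf(string text)
{
    string[string] vars;
    foreach (line; text.splitLines)
    {
        line = line.strip;
        if (line.length == 0 || line.startsWith(";"))
            continue;                       // skip blanks and comments
        auto parts = line.split("=");
        if (parts.length == 2)
            vars[parts[0].strip] = parts[1].strip;
    }
    return vars;
}

// The I/O wrapper just composes readText (tested in Phobos)
// with the parser (tested below).
string[string] readConf(string path)
{
    return parseConf(readText(path));
}

unittest
{
    auto vars = parseConf("; a comment\nDFLAGS = -I/usr/include/dmd\n");
    assert(vars["DFLAGS"] == "-I/usr/include/dmd");
    assert("missing" !in vars);
}
```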

I think there's also a learned skill in writing the fewest orthogonal unit tests that give 100% coverage, rather than a blizzard of tests that overlap each other.
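As a toy illustration of orthogonal tests (my example, not from the dmd code): a function with two branches needs exactly two assertions for full coverage, one per branch, with no overlap between them.

```d
// Hypothetical two-branch function.
int clampPositive(int x)
{
    return x < 0 ? 0 : x;
}

unittest
{
    assert(clampPositive(-5) == 0); // exercises the x < 0 branch
    assert(clampPositive(7) == 7);  // exercises the pass-through branch
}
```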


(I remember the calculator revolution. It happened my freshman year at
college. September 1975 had $125 slide rules in the campus bookstore. December
they were at $5 cutout prices, and were gone by January. I never saw anyone
use a slide rule again. I've never seen a technological switchover happen so
fast, before or since.)

If only folks thought that D's advantages over C++ were that obvious. ;)

Unfortunately, D's advantages only become more than a grab-bag of features after you've used it for a while. We are also still discovering the right way to use them.
