On 27/10/12 00:45, H. S. Teoh wrote:
http://d.puremagic.com/issues/show_bug.cgi?id=8900

:-(

(The code there is called cartesianProd, but it's the reduced test
case, so it doesn't actually compute a cartesian product; that's just
where it came from.)

So far, the outstanding blockers for cartesianProduct are:
1) Compiler bug which causes unittest failure:

        std/range.d(4629): Error: variable lower used before set
        std/range.d(4630): Error: variable upper used before set

(Jonathan had a pull request with a Phobos workaround for this, which I
_think_ is already merged, but the autotester is still failing at this
point. :-/)

2) Issue 8542 (crosstalk between template instantiations)

3) And now, issue 8900 (zip fails to compile with repeat(char[]))

So there's still no joy for cartesianProduct. :-(

I'm getting a bit frustrated with the Phobos bugs related to ranges and
std.algorithm. I think we need to increase the number of unittests. And
by that I mean, GREATLY increase the number of unittests. Most of the
current tests are merely sanity tests for the most common usage
patterns and the most basic types, or tests added after specific bugs
were fixed.

This is inadequate.

We need to actively unittest corner cases, rare combinations, unusual
usages, etc. Torture-test various combinations of range constructs.
Algorithms. Nested range constructs. Nested algorithms. Deliberately
cook up nasty tests that try their best to break the code by using
unusual parameters, unusual range-like objects, strange data, etc. Go
beyond the simple cases to test non-trivial things. We need unittests
that pass unusual structs and objects into the range constructs and
algorithms, and make sure they actually work as we have been _assuming_
they should.

I have a feeling there are a LOT of bugs lurking in there behind
overlooked corner cases, off-by-one errors, and other such careless slips,
as well as code that only works for basic types like arrays, which
starts breaking when you hand it something non-trivial.  All these
issues must be weeded out and prevented from slipping back in.

Here's a start:

- Create a set of structs/classes (inside a version(unittest) block)
   that are input, forward, bidirectional, output, etc. ranges, and
   that are NOT merely arrays.

- There should be some easy way, perhaps using std.random, of creating
   non-trivial instances of these things.  These should be put in a
   separate place, perhaps outside the std/ subdirectory, where they can
   be imported into unittest blocks by std.range, std.algorithm, and
   whatever else needs extensive testing.

- Use these ranges as input for testing range constructs and algorithms.

- For best results, use a compile-time loop to iterate over
   combinations of these range types, and run each combination through
   the same set of tests. This will improve the currently spotty test
   coverage.  Perhaps provide some templated functions that, given a
   set of range types (from the above structs/classes) and a set of
   functions, run through all combinations of them to make sure they
   all work. (We run unittests separately anyway, so we aren't afraid
   of long-running tests.) A rough sketch of what this could look like
   follows below.


T


I think that unit tests aren't very effective without code coverage.

One fairly non-disruptive thing we could do: implement code coverage
for templates. Currently, templates get no code coverage numbers at
all. We could add a code-coverage equivalent for templates: report
which lines actually got instantiated.
I bet this would show _huge_ gaps in the existing test suite.
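
As a toy illustration (made up here, not real Phobos code) of the kind
of blind spot I mean:

    import std.range;   // gives arrays .front/.empty/.popFront via UFCS

    // The unittest below only ever instantiates this template with a
    // type that has .front, so the `else` branch is never semantically
    // analyzed: it never runs, it never gets a coverage count, and
    // nothing reports it as untested, even though it contains an
    // off-by-one bug.
    auto firstElement(R)(R r)
    {
        static if (is(typeof(r.front)))
            return r.front;
        else
            return r[1];    // oops: should be r[0]
    }

    unittest
    {
        assert(firstElement([1, 2, 3]) == 1);   // passes; bug stays invisible
    }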
