On Thu, 2016-06-02 at 12:29 +0200, Bernd Schmidt wrote:
> On 06/01/2016 11:19 PM, David Malcolm wrote:
> > This is effectively v5 of the unittests proposal; for the earlier
> > versions see:
> 
> In general: nice to see this moving forward.

Thanks.

> There are some issues with the patch kit; some patches seem to fix
> issues with earlier ones (#3 or #13). One patch adds a sorry, the next
> changes it to inform; patch #1 adds includes to toplev.c without
> adding the included files. All these issues should be resolved: any
> patch series should compile if it is applied only partially up to a
> point; no early patch should depend on later ones. Also, bugfixes and
> behaviour changes should be merged directly into the code so as to not
> hopelessly confuse the reviewers.
> Some of the cover letters, in particular #1, seem to contain outdated
> information.

Sorry about that.  I was hoping to emphasize the changes from the last
version, but clearly it made things more confusing.

> > I also patched things so that it aborts on the first failure,
> > making failures easy to track down in the debugger, though
> > requiring us to make the selftests robust.
> 
> Is there any kind of doubt that this is a good requirement?

I was uncertain how acceptable the "run the tests from within
gcc/Makefile.in" approach would be, so I was hedging my bets somewhat
in the kit.  It seems like you do like the gcc/Makefile.in approach,
which implies all of the following:
(1) tests must be fast (which is why I left out the test-ggc tests for
now, as one of them is relatively slow);
(2) tests must be 100% reliable;
(3) it must be trivially easy to track down failures (since in this
approach a test failure means a build failure); "make selftests-gdb" is
the solution here.

Given (3), we should halt on the first failure (as this version of the
kit does).  My experience when writing new tests and
experimenting with the gcc/Makefile.in approach was that "halt on first
failure" was much easier to work with than "run everything and report".


> > * conversion of the Levenshtein selftest from a plugin to using
> >    -fself-test (we could move various other tests from plugins to
> > run
> >    earlier).
> 
> That sounds good.
> 
> > Patch 4 shows a way of filtering the tests using a command-line
> > regex.
> 
> What's the use case for this?

Suppose many tests are failing and you suspect an issue in, say,
bitmap.c.  This would let you run e.g. ".*bitmap.*" to better isolate
the issue.
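
To make the use case concrete, the filtering amounts to something like
this sketch (the names test_info, first_test and run_matching_tests
are illustrative, not necessarily the kit's actual API):

#include <regex.h>
#include <stdio.h>
#include <stdlib.h>

/* One node per registered test.  */
struct test_info
{
  const char *name;
  void (*run) (void);
  test_info *next;
};

static test_info *first_test;

/* Run every registered test whose name matches PATTERN, a POSIX
   extended regex.  */

static void
run_matching_tests (const char *pattern)
{
  regex_t re;
  if (regcomp (&re, pattern, REG_EXTENDED | REG_NOSUB) != 0)
    {
      fprintf (stderr, "invalid selftest filter: %s\n", pattern);
      exit (1);
    }
  for (test_info *t = first_test; t; t = t->next)
    if (regexec (&re, t->name, 0, NULL, 0) == 0)
      t->run ();
  regfree (&re);
}

So running cc1 with ".*bitmap.*" as the filter (however the option
ends up being spelled) would run just the bitmap tests.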

Maybe this suggests that we should go even simpler, and hard-code the
order in which tests run, manually encoding the dependencies (e.g. test
the fundamental types first, then test the things that build on top of
them, etc).  This would mean losing auto-registration, but it has the
virtue of simplicity.
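
Something like the following sketch (the function names are
illustrative):

/* Run all selftests, in an explicit order that mirrors the
   dependencies: fundamental data structures first, then the things
   built on top of them.  */

static void
run_all_tests ()
{
  bitmap_c_tests ();
  vec_c_tests ();
  input_c_tests ();
  diagnostic_show_locus_c_tests ();
  /* ...and so on.  */
}

With halt-on-first-failure, a breakage in a fundamental type would
then stop the run before any of the tests that build on it, which
helps with the scenario below.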

Thought experiment: imagine trying to bring up gcc on a new host, and
3/4 of the selftests are failing; it turns out to be some sizeof()
assumption about int vs long or somesuch in one of the fundamental data
structures.  How do we make it easy to isolate such a problem?

> > For (a), this version of the patch kit switches to stopping at the
> > first failure.
> > For (b), this version of the patch kit stays with auto-registration.
> > For (c), this version stays with a "lite" version of GTest.
> > 
> > Thoughts?  Does the "running tests early" approach suggest answers
> > to these?
> 
> I think this is mostly a good resolution, 

Given that we're going with "halt on first failure", there's no longer
a distinction between EXPECT_EQ and ASSERT_EQ.  I'll eliminate the
former in favor of the latter.
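
For reference, with halt-on-first-failure ASSERT_EQ can reduce to
something like this minimal sketch (assuming <stdio.h> and <stdlib.h>;
the kit's actual macro may record more than this):

#define ASSERT_EQ(EXPECTED, ACTUAL)                             \
  do {                                                          \
    if ((EXPECTED) != (ACTUAL))                                 \
      {                                                         \
        fprintf (stderr, "%s:%i: ASSERT_EQ (%s, %s) failed\n",  \
                 __FILE__, __LINE__, #EXPECTED, #ACTUAL);       \
        abort ();                                               \
      }                                                         \
  } while (0)

The abort () is what makes "make selftests-gdb" useful here: the
debugger stops right at the failing assertion.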

> although I have one particular issue that rubs me the wrong way, where
> I'd go even "liter" on the GTest. In my view, tests are functions, and
> using object-orientation leads to oddly contorted code. An example can
> be found in patch #5:
> 
> +class range_contains_point_tests : public ::selftest::test
> +{
> + protected:
> +  layout_range
> +  make_range (int start_line, int start_col,
> +           int end_line, int end_col)
> (note also that this defeats the purpose of the GNU function
> formatting, which is to enable grep ^function_name to find them)
> +  {
> +    const expanded_location start_exploc
> +      = {"test.c", start_line, start_col, NULL, false};
> +    expanded_location finish_exploc
> +      = {"test.c", end_line, end_col, NULL, false};
> +
> +    return layout_range (&start_exploc, &finish_exploc, false,
> +                      &start_exploc);
> +  }
> +};
> 
> I think I raised this before, but defining a class only to define
> functions inside it seems relatively pointless; if anything you want
> namespaces. This one doesn't even seem to contain any references to
> the selftest framework, so it could immediately be converted to a
> function.
> 
> Other such cases appear to use the EXPECT_EQ etc. macros, which call
> pass and fail methods, but these just delegate immediately to the
> runner's pass and fail. Which then raises the question whether the
> runner needs to be an object if there's only ever going to be one -
> once again I think a functional rather than object-oriented style
> would be more suitable to the problem. The main (only?) reason to have
> tests declared as objects is for the auto-registration; beyond that it
> serves very little purpose.

I agree that there's plenty of scope for simplification here (I think
the choice to go with halt-on-first-failure makes the runner class
redundant).  I'll have a go at simplifying things; I'll try to keep
auto-registration for now, but I'm in two minds about it.
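
For instance, the make_range helper quoted above could become a plain
function; a sketch, reusing the code from patch #5 and restoring the
GNU formatting so that grep ^make_range finds it:

namespace selftest {

/* Build a layout_range for testing, spanning the given coordinates
   within a dummy file "test.c".  */

static layout_range
make_range (int start_line, int start_col, int end_line, int end_col)
{
  const expanded_location start_exploc
    = {"test.c", start_line, start_col, NULL, false};
  const expanded_location finish_exploc
    = {"test.c", end_line, end_col, NULL, false};
  return layout_range (&start_exploc, &finish_exploc, false,
                       &start_exploc);
}

}  // namespace selftest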

> There are some inconsistent spellings of an end comment across the
> series:
> +}  // anon namespace
> +}  /* anon namespace.  */
> 
> I wasn't sure we were doing these at all, but it turns out the former
> is the canonical one (with a single space).

Will fix.

Thanks

Dave
