On Thu, 26 Mar 2015 20:17:27 +0100 Mateusz Loskot <[email protected]> wrote:

ML> >  Assuming there are no objections to using it, I'd like to add catch.hpp
ML> > 
(https://github.com/philsquared/Catch/blob/master/single_include/catch.hpp)
ML> > directly to SOCI repository, e.g. as tests/catch.hpp. I don't think it's
ML> > worth the trouble to use a submodule for it, but we could do this too if
ML> > people prefer it.
ML> 
ML> Sure. I'd create a branch in the upstream repo, so we can all test and
ML> contribute to the new tests transition.

 As you've noticed, currently there is a catch-tests branch in my fork, but
I could push it to SOCI/soci, of course. The trouble is that it doesn't do
what you prefer...

ML> >  I am less sure about how to handle the transition from assert() to CATCH.
ML> > One possibility would be to just globally replace all asserts with
ML> > REQUIRE() macros. This would be suboptimal (if only because some of the
ML> > asserts should probably be replaced with CHECK(), to allow the tests to
ML> > continue if one of them fails), but should be relatively simple to do.
ML> 
ML> I believe in single assert per test routine/case...see below.

 Sorry, this is unworkable. Currently, i.e. with the catch-tests branch and
Firebird, we have

        All tests passed (1851 assertions in 54 test cases)

i.e. an average of more than 34 assertions per test case. Multiplying the
number of test cases by 34 is not going to help.

 Of course, the current tests are not well-organized and there is ample
scope for improvement here. And we definitely should be splitting some test
cases into sections using the eponymous macro. But even then we still won't
have a single assert per test case, or even per section. And why should
this be a goal in the first place? I really don't see any good reason to
require it.
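
 For reference, here is the sort of organization I mean, sketched against
catch.hpp (the test case and its contents are invented for illustration,
not taken from the actual suite):

```cpp
#include "catch.hpp"
#include <vector>

// Hypothetical example: one test case, common setup at the top, several
// SECTIONs each containing more than one assertion. Catch re-runs the test
// case once per leaf section, repeating the shared setup each time.
TEST_CASE("vector used as a stack")
{
    std::vector<int> v;
    v.push_back(1);

    SECTION("pushing grows the vector")
    {
        v.push_back(2);
        CHECK(v.size() == 2);
        CHECK(v.back() == 2);
    }

    SECTION("popping shrinks it")
    {
        v.pop_back();
        CHECK(v.empty());
    }
}
```

Each SECTION gets a fresh run of the enclosing test case, yet each section
still naturally contains several CHECK()s, which is why I don't think one
assertion per test case is achievable or even desirable.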


ML> My plan was to keep the current tests and start (re)writing with CATCH

 This would be my preference as well, in theory, but looking at the amount
of existing testing code, I realized it would take an inordinate amount of
time, so it's just not going to happen in practice. Hence I decided to do
what I can to improve things right now instead of hoping that the best case
somehow happens on its own in the future...

ML> in new folder: tests/integrate.

 What would be the meaning of integration tests for SOCI? I.e. the
integration of what with what would they be testing? I am not sure how we
can write more than just unit tests for a library like this one.

ML> Finally, along the transition, I had an idea to refactor and clean up
ML> the tests slightly to test single thing
ML> per routines/cases, as short as possible, with Arrange/Act/Assert
ML> principle in mind.
ML> Perhaps some modularisation and groupping would be a good idea too.

 Yes, it would be definitely nice to streamline all this. But,
realistically, who is going to do it and when is it going to happen?


ML> Having explained my points above, I don't think that find & replace
ML> assert with CATCH macro is the way to go.

 OK, as you know, I've already done this in the meantime. I actually didn't
want to start on this, I just wanted to add a new test (for checking the
error messages) using CATCH. But it turned out that I couldn't do it
without switching all the rest to CATCH first, because I had to use
TEST_CASE or TEST_CASE_METHOD for all the test functions to run them. So I
started doing this, and then was annoyed by seeing CATCH messages about N
tests passing but 0 checks being verified, so I also went ahead and
replaced the assert()s with CHECK()s.
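
 To make the mechanical part of the change concrete, here is a sketch of
what the conversion looks like (the test name and the helper functions are
hypothetical, just to show the shape; the macros come from catch.hpp):

```cpp
#include "catch.hpp"
#include <string>

// Hypothetical helpers, standing in for whatever the real test exercises.
int count_rows();
std::string last_error();

// Before: a plain function, called explicitly by the old test driver:
//
//     void test1()
//     {
//         assert(count_rows() == 3);
//         assert(last_error().empty());
//     }
//
// After: a self-registering TEST_CASE. CHECK() records the failure together
// with the expanded values and keeps going, where assert() would abort the
// whole process.
TEST_CASE("row count and error state")
{
    CHECK(count_rows() == 3);
    CHECK(last_error().empty());
}
```

Because test cases register themselves, individual tests can then be run by
name or wildcard from the command line of the resulting test binary, with
no extra driver code on our side.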

 So the question now is whether this makes things better or worse. IMHO it
does improve them; as the commit message says, using CATCH:

- Provides information about the failed test and the values of the
  variables in it.
- Allows the tests to continue running even if one of them fails.
- Allows running just a subset of the tests via a flexible selection
  mechanism.

and all this is pretty convenient already. Additionally, it allows writing
new tests using CATCH and organizing them better, which was the original
impetus for this change.

 It took me a couple of hours to make the changes so far and I'll probably
need at least as much time to update the other backends (so far I've only
done it for Firebird), so I'd rather not do it if we don't want to replace
the asserts with CATCH CHECK()s. But IMHO that would be a waste: the
benefits above are real and can be had today without any drawbacks (AFAICS).

 The possibilities I see are:

1. Throw away catch-tests branch and start adding new tests using CATCH
   in tests/unit.
  + This would allow cleaner tests, eventually.
  - This still needs more work, notably at the CMake level (which I
    personally try to avoid as much as possible).
  - We will be stuck with the inconvenient-to-use/debug assert tests for
    the foreseeable future.
  - It will also be confusing to have two parallel test suites; we'd have
    to explain to people submitting PRs that they need to update one and
    not the other, probably more than once.

2. Complete the work done for Firebird on catch-tests branch for the other
   backends and merge it.
  + This is the fastest way to get a working CATCH-based test suite with
    all the advantages listed above.
  - I don't see any, really.

3. Improve the tests on the catch-tests branch to make them more modular,
   better organized etc.
  + This would be the best outcome.
  - Requires more work than I, or I suspect anybody else, can spend on it
    right now.

 I would prefer to do (2), as debugging assert failures in the current
tests is painful and I really like the more detailed error messages CATCH
gives us in case of failure, so for me the benefit of getting them a.s.a.p.
seems pretty important. But if there is strong opposition to doing it, I'd
go with (1).

 Please let me know what you think,
VZ
