On Sat, 1 Jan 2011, Vincent Torri wrote:

> 500 KB: you mean the tarball? Because if you look at the source code
> itself, it's 120 KB (the other files are mostly autotools ones, around
> 460 KB), and if you look at the library, the static one is about 38 KB.
> Not a lot. Also, these days, package managers can install it easily.

Sure, for those systems that have a package manager for which Check is packaged, and on which Check works. I somehow suspect that curl runs on far more platforms than Check does.

> What's the problem with using an external tool for unit testing? If you use your own unit testing framework, you duplicate code (hence more work, and possible bugs in your unit testing code).

Why I want the test tools "bundled":

A - We avoid the risk of version skew: some Check versions have bugs and
    others don't, or they change format/API at some point, etc.

B - We support lots of platforms and targets on which Check isn't easily
    installed with a package manager. This is perhaps especially important
    for the autobuilds concept to work as smoothly and easily as possible.

Why I think our own code is a good idea instead of using Check:

1 - I don't know what Check aims for in terms of portability, but I'm afraid
    that our bar is (much) higher. We risk limiting the unit tests to fewer
    architectures.

    And yes, this matters. I want everyone to be able to not only use
    (lib)curl but also to develop curl, and you cannot do that easily and
    safely if you can't run the tests on your platform.

    Portability is one of the cornerstones curl stands on. I work hard on
    not restricting it any further than what we already do, and I'd much
    rather focus on *increasing* portability.

    I also have no desire to have to work on yet another supporting library
    or tool in case we WOULD face problems in this or other areas.

2 - There is not a lot of code needed for the actual unit testing. Most of
    what Check is and does are things we ALREADY support and do in curl
    (most likely in a different way).

3 - We have an established architecture for tests. We would have to adapt
    Check and its tests quite a bit for it to offer the same features and
    the same integration that our own can. I'm thinking of valgrind
    support, easy repeating of a test case to run gdb on a failure, and
    memory leak and error detection with our own memory debugging system
    (see the first sketch after this list).

    I think I am one of the most frequent users of these features and I
    would not like to see them sacrificed.

    (As a somewhat telling sign of how useful this is: my rewrite of the
    unit test for the patch I posted yesterday immediately showed a flaw
    and a memory leak in the test - that is now corrected in my version
    of it.)

4 - We do not sacrifice anything in terms of how hard it is to write the
    unit tests themselves, as we can basically use the exact same set of
    macros (see the second sketch after this list). And the actual unit
    tests should be the really important part of this exercise, I believe.
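
To make the memory debugging point in (3) concrete, here is a rough
sketch of the principle - NOT our actual memdebug code; dbg_init(),
dbg_malloc(), dbg_free() and the log format are all made up for this
mail. Every allocation and free gets logged with file and line, and a
script can pair the entries up afterwards to point out leaks:

#include <stdio.h>
#include <stdlib.h>

static FILE *memlog; /* destination for the allocation log */

void dbg_init(const char *logfile)
{
  memlog = fopen(logfile, "w");
}

void *dbg_malloc(size_t size, const char *file, int line)
{
  void *ptr = malloc(size);
  if(memlog)
    fprintf(memlog, "MEM %s:%d malloc(%lu) = %p\n",
            file, line, (unsigned long)size, ptr);
  return ptr;
}

void dbg_free(void *ptr, const char *file, int line)
{
  if(memlog)
    fprintf(memlog, "MEM %s:%d free(%p)\n", file, line, ptr);
  free(ptr);
}

/* code under test gets these instead of the plain versions: */
#define malloc(size) dbg_malloc(size, __FILE__, __LINE__)
#define free(ptr)    dbg_free(ptr, __FILE__, __LINE__)

int main(void)
{
  char *buf;
  dbg_init("memdump.log");
  buf = malloc(64);   /* gets logged with file and line */
  free(buf);          /* a missing free would show up as a leak */
  return 0;
}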
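And for (4), a minimal sketch of what a bundled assertion header could
look like - fail_unless()/fail_if() follow Check's API, while the
failure counter and the example test are invented for this illustration:

#include <stdio.h>

static int unitfail; /* number of failed assertions so far */

#define fail_unless(expr, msg)                              \
  do {                                                      \
    if(!(expr)) {                                           \
      fprintf(stderr, "%s:%d Assertion '%s' failed: %s\n",  \
              __FILE__, __LINE__, #expr, msg);              \
      unitfail++;                                           \
    }                                                       \
  } while(0)

#define fail_if(expr, msg) fail_unless(!(expr), msg)

/* an example unit test written against the macros: */
int main(void)
{
  int value = 2;
  fail_unless(value == 2, "value should start out as 2");
  fail_if(value < 0, "value must never be negative");
  return unitfail ? 1 : 0;
}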

Calling this "rewriting stuff that others have already written" shows a huge lack of understanding of what we already have and of the work it would take to incorporate Check into it. I'm not just a "NIH" guy.

> Btw, as we are speaking of unit testing: what we are doing with our unit testing is code coverage, using gcov and lcov to get HTML output of the coverage of the unit tests. If you are interested, I have written an m4 macro to integrate such a process quite easily into configure.ac (the Makefile.am stuff is also very easy).

That'd be great! But of course it could just as well be done for all testing and not "just" the unit testing parts.

I would like to get code coverage for a test "round" on a regular basis so that we can focus on expanding the test suite in the areas where we have poor coverage right now.

--

 / daniel.haxx.se
