on Mon Jan 19 2009, Brad King <brad.king-AT-kitware.com> wrote:

> Hi Dave,
>
> I think some of the confusion is because my original posting proposed
> *two* different approaches to testing:
>
> 1.) Build all the (run) tests for one library into a single executable.
>  Each test gets its own TU but the tests are linked together.  Execution
> of the tests is delayed until test time at which point CTest runs the
> executable over and over with different arguments for each test.

That approach has promise for us, but it would take some investment
because, in general, our test executables are more granular than that.
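
If I'm picturing that right, the CMake side of approach #1 would look
roughly like the sketch below (the target and file names are invented):

  # One executable holds every run-test TU for the library.
  add_executable(test_foo
    test_foo_main.cpp    # dispatches on argv[1] to pick a test
    test_feature_a.cpp
    test_feature_b.cpp)
  target_link_libraries(test_foo boost_foo)

  # CTest runs the same binary once per test, selected by argument.
  add_test(foo.feature_a test_foo feature_a)
  add_test(foo.feature_b test_foo feature_b)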

> 2.) Build the tests as individual executables, but not until *test*
> time.  The idea is that we drive testing with CTest, but each test is a
> *recursive invocation* of CTest with its --build-and-test feature.  This
> feature drives the build of a test source file through the native
> toolchain as part of running the test.  The output of the test includes
> all the native toolchain stuff and the test executable's output.
> However, every test is its own separate recursive invocation of ctest,
> so its output is recorded separately from other tests.

Seems plausible.
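
In other words, if I follow, each test would be registered as something
like this (a sketch only; the paths and names are guesses):

  # Each test is a recursive CTest invocation that builds and then runs
  # one tiny test project through the native toolchain.
  add_test(foo.feature_a
    ${CMAKE_CTEST_COMMAND} --build-and-test
      ${CMAKE_CURRENT_SOURCE_DIR}/feature_a   # source dir of the test project
      ${CMAKE_CURRENT_BINARY_DIR}/feature_a   # where to build it
      --build-generator "${CMAKE_GENERATOR}"
      --build-makeprogram "${CMAKE_MAKE_PROGRAM}"
      --build-project feature_a
      --test-command feature_a)               # run the resulting executable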

> Run-tests can use either #1 or #2.
>
> Compile-only tests should use #2 since the interesting part of the test
> is the compilation, and compile-fail tests can clearly not be linked to
> other tests.

Right, I think.
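
I assume a compile-fail test then amounts to running only the build step
and inverting its result, morally like this (again a sketch; the names
are invented):

  # The interesting part is the compilation itself, so the "test" is just
  # the recursive build; WILL_FAIL inverts pass/fail.
  add_test(foo.compile_fail_bad_use
    ${CMAKE_CTEST_COMMAND} --build-and-test
      ${CMAKE_CURRENT_SOURCE_DIR}/bad_use
      ${CMAKE_CURRENT_BINARY_DIR}/bad_use
      --build-generator "${CMAKE_GENERATOR}"
      --build-project bad_use)
  set_tests_properties(foo.compile_fail_bad_use
    PROPERTIES WILL_FAIL TRUE)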

> David Abrahams wrote:
>> on Thu Jan 15 2009, Brad King <brad.king-AT-kitware.com> wrote:
>>> The question here is whether one wants to test with the same tools users
>>> might use to build the project.  If one user's tool doesn't provide
>>> per-rule information, then we need log scraping to test it.
>> 
>> Except that I contest your premise that no intrinsic per-rule
>> information support implies log scraping.  If there is support for the
>> use of replacement tools ("cl-wrapper" instead of "cl"), you can also
>> avoid log scraping.
>
> My argument is simply that if there *were no way* to get per-rule info
> from a native build tool then log scraping is necessary.  

'course

> I fully agree
> that it may be possible to avoid it with sufficient effort for every
> native tool.  Log scraping has been working well enough for us that
> we've not been motivated to put in this effort.

Well, if someone else is maintaining it, I might be convinced not to
care whether you do the job by log scraping or by reading tea
leaves. ;-)

> If you can point me at documentation about how to do this in VS I'd love
> to see it.  I know the Intel compiler does it, but that is a
> full-fledged plugin that even supports its own project file format.  We
> would probably need funding to do something that heavy-weight.

I know nothing about how to do that.  I think Eric Niebler might be able
to help you; he's done this kind of tool integration in the past.

>>>>   Frankly I'm not sure what logfile scraping has to do with the
>>>>   structural problems you've mentioned.
>>>
>>> I'm only referring to the test part of the anti-logscraping code.  The
>>> python command wrappers are there to avoid log scraping, 
>> 
>> Sorry, I'm not up on the details of the system, so I don't know what
>> "python command wrappers" refers to.
>
> Currently in boost every test compilation command line is invoked
> through a python command that wraps around the real command.  In the
> current system this is necessary to avoid log scraping since the tests
> are done during the main build.

OK.

>>> but if the tests were run through CTest then no log scraping would be
>>> needed.
>> 
>> Now I'm really confused.  On one hand, you say it's necessary to run
>> tests through the native toolchains, and that implies log scraping.  On
>> the other, you suggest running tests through CTest and say that doesn't
>> imply log scraping.  I must be misinterpreting something.  Could you
>> please clarify?
>
> See approach #2 above.

OK.

>>>> * Boost developers need the ability to change something in their
>>>>   libraries and then run a test that checks everything in Boost that
>>>>   could have been affected by that change without rebuilding and
>>>>   re-testing all of Boost (i.e. "incremental retesting").
>>>
>>> How does the current solution solve that problem (either Boost.Build or
>>> the current CMake system)?
>> 
>> Boost.Build does it by making test results into targets that depend on
>> successful runs of up-to-date test executables.  Test executables are
>> targets that depend on boost library binaries and headers.
>
> CTest will need some work to make this fully incremental.  Basically it
> lacks the timestamp tracking needed to skip re-running tests whose
> dependencies haven't changed, which is probably why Troy put the tests
> into the build in the current system.

Prolly.  If you can make the changes, I'm on board.
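
Just so we mean the same thing: I take "put the tests into the build" to
be a stamp-file pattern along these lines (a sketch, with invented names):

  # The test result is a file-level target that depends on an up-to-date
  # test executable, so the build only re-runs tests whose inputs changed.
  add_custom_command(
    OUTPUT  feature_a.passed
    COMMAND test_foo feature_a
    COMMAND ${CMAKE_COMMAND} -E touch feature_a.passed
    DEPENDS test_foo
    COMMENT "Running foo.feature_a")
  add_custom_target(check_foo DEPENDS feature_a.passed)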

> As things stand now, the above approaches work as follows.  Approach #1
> will compile/link test executables during the main build with full
> dependencies.  Approach #2 will drive a separate native build, with its
> own dependency tracking, for every test.  Both approaches will still run
> every test executable, though.  I'm sure we can address this problem.

Great.

> How does Boost.Build decide whether a compile-fail test needs to be
> re-attempted?  Does its dependency scanning detect when something has
> changed that could alter the outcome of a new compilation attempt?

Yep; a compile-fail result has the same invalidation properties as
a compile-success result.

>> I don't see how that solves problem a).  If one TU of a test executable
>> (corresponding to a feature) fails to compile, do you somehow build the
>> executable with all the remaining TUs?
>
> Approach #1 will not.  

Maybe it should, though.  That seems like a kick*ss feature.  Have the
test report which test TUs failed to build, and still run the tests from
the remaining TUs as requested.

-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com
