Re: [Boost-cmake] Analysis of the current CMake system

2009-03-05 Thread Brad King

David Abrahams wrote:

Click on Show Filters, and you can create a filter that shows only errors, 
warnings or
other items that you want.  The idea will be to be able to save the queries and 
create
your own custom views for different purposes.


Nice capability; not a great interface.  If I want to see just things
with errors, I need a filter that says

+--+ +--+ ++
|build errors  | |is not| ||
+--+ +--+ ++


I often click on the errors column label to sort by error count.  Then all the
zeros go to the bottom.

-Brad
___
Boost-cmake mailing list
Boost-cmake@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/boost-cmake


Re: [Boost-cmake] Analysis of the current CMake system

2009-03-05 Thread David Abrahams

on Wed Mar 04 2009, Bill Hoffman  wrote:

> David Abrahams wrote:
>
>>> Another new feature in CTest/CDash development heads which may be of 
>>> interest
>>> to Boost is support for subproject labeling. See their main dashboard page 
>>> here:
>>>
>>>   
>>> http://trilinos-dev.sandia.gov/cdash/index.php?project=Trilinos&date=2009-02-27
>>
>> Functionally, it's pretty darned good.  
>>
> Cool!
>> Room for improvement:
>>
>> * I think we'd want a view that's limited to issues, so a subproject
>>   doesn't show up if it's all green (or has only warnings, configurable
>>   according to project).  See
>>   http://www.boost.org/development/tests/trunk/developer/issues.html
>>
> You can do that now from here:
>
> http://trilinos-dev.sandia.gov/cdash/index.php?project=Trilinos&display=project&date=2009-02-27
>
> Click on Show Filters, and you can create a filter that shows only errors, 
> warnings or
> other items that you want.  The idea will be to be able to save the queries 
> and create
> your own custom views for different purposes.

Nice capability; not a great interface.  If I want to see just things
with errors, I need a filter that says

+--+ +--+ ++
|build errors  | |is not| ||
+--+ +--+ ++

>> * It doesn't make good use of space in my browser window unless I make
>>   the browser really narrow.  It would take some design work, but it
>>   would be good to be able to get a bigger picture view at a glance,
>>   especially when the browser window is large.
>>
> We are open to suggestions.  

I'll give it some thought.

> There is a cdash mailing list here:
>
> http://www.cdash.org/cdash/help/mailing.html
>> * It's kinda ugly, but I suppose we can tweak the CSS ourselves.
>>
> Hey!  (we actually hired a graphics designer for that...)

Sorry!

-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


Re: [Boost-cmake] Analysis of the current CMake system

2009-03-04 Thread Bill Hoffman

David Abrahams wrote:


Another new feature in CTest/CDash development heads which may be of interest
to Boost is support for subproject labeling. See their main dashboard page here:

  
http://trilinos-dev.sandia.gov/cdash/index.php?project=Trilinos&date=2009-02-27


Functionally, it's pretty darned good.  


Cool!

Room for improvement:

* I think we'd want a view that's limited to issues, so a subproject
  doesn't show up if it's all green (or has only warnings, configurable
  according to project).  See
  http://www.boost.org/development/tests/trunk/developer/issues.html


You can do that now from here:

http://trilinos-dev.sandia.gov/cdash/index.php?project=Trilinos&display=project&date=2009-02-27

Click on Show Filters, and you can create a filter that shows only 
errors, warnings or other items that you want.  The idea will be to be 
able to save the queries and create your own custom views for different 
purposes.



* It doesn't make good use of space in my browser window unless I make
  the browser really narrow.  It would take some design work, but it
  would be good to be able to get a bigger picture view at a glance,
  especially when the browser window is large.


We are open to suggestions.  There is a cdash mailing list here:

http://www.cdash.org/cdash/help/mailing.html

* It's kinda ugly, but I suppose we can tweak the CSS ourselves.


Hey!  (we actually hired a graphics designer for that...)


-Bill


Re: [Boost-cmake] Analysis of the current CMake system

2009-03-03 Thread Brad King

troy d. straszheim wrote:
Maybe I'm missing something...  how do I tell what code has been 
compiled?  By the build timestamp?


CTest's submissions are broken into "Nightly", "Experimental", and
"Continuous" categories.  Look at a page like this:

  
http://trilinos-dev.sandia.gov/cdash/index.php?project=Trilinos&subproject=Teuchos&date=2009-02-27

The "Nightly" section tests code from the repository trunk as of a
project-defined "nightly start time" on the specified date. Additional
Nightly sections can be defined for branches, like the
"Nightly 2.6 Release" section on CMake's dashboard:

  http://www.cdash.org/CDash/index.php?project=CMake

"Experimental" submissions are for users' builds of their own versions,
possibly with local changes.  "Continuous" submissions come from machines
that constantly check the SCM server for new changes, build, and then
report their updates along with the results.

Normally the "Updates" column shows changes, and there is a link at the top
of the page for the version, but it looks like the Trilinos CDash is not
configured to show it.  CMake's does.

I've been working on better CTest support for version control tools, and
one of my next TODOs when I get back to the topic is to submit a global
project revision number (except for CVS).  CDash would then be able to
have a link to the SCM's web viewer for the exact tested version.

On their main page I see only a download for a main Trilinos tarball...
Trilinos version X doesn't tell me much about what version of Stokhos
is included, or what Stokhos' required and optional dependencies (and
their version ranges) are.  Do you happen to know how they handle this?


I don't know how Trilinos folks manage subproject versioning.  It might
just be all one version for the entire project.

CDash currently does not have a notion of mix-and-match subproject versioning.

-Brad



Re: [Boost-cmake] Analysis of the current CMake system

2009-03-03 Thread troy d. straszheim

David Abrahams wrote:



Trilinos is a very large project, so they label pieces of it in CMake,
CTest propagates the labels with failure reports, and CDash interprets
the labels to break results down by subproject.  The same thing could
be used for Boost's library hierarchy.


Awesome.



Maybe I'm missing something...  how do I tell what code has been 
compiled?  By the build timestamp?


On their main page I see only a download for a main Trilinos tarball...
Trilinos version X doesn't tell me much about what version of Stokhos
is included, or what Stokhos' required and optional dependencies (and
their version ranges) are.  Do you happen to know how they handle this?


-t


Re: [Boost-cmake] Analysis of the current CMake system

2009-03-03 Thread David Abrahams

on Tue Mar 03 2009, Brad King  wrote:

> Brad King wrote:
>> The current CDash release will not understand CTest build submissions that
>> use this launcher interface, so one would need CDash from its SVN trunk to
>> try it.
>
> Sandia's Trilinos project is using CTest launchers with CDash from SVN trunk.
> Here is an example page showing errors recorded by the launcher mode:
>
>   http://trilinos-dev.sandia.gov/cdash/viewBuildError.php?buildid=927
>
> We're still tweaking the layout if you have suggestions.

It definitely delivers the important basics.  I could spin a nice
wishlist for you, but it looks OK.

> Another new feature in CTest/CDash development heads which may be of interest
> to Boost is support for subproject labeling. See their main dashboard page 
> here:
>
>   
> http://trilinos-dev.sandia.gov/cdash/index.php?project=Trilinos&date=2009-02-27

Functionally, it's pretty darned good.  

Room for improvement:

* I think we'd want a view that's limited to issues, so a subproject
  doesn't show up if it's all green (or has only warnings, configurable
  according to project).  See
  http://www.boost.org/development/tests/trunk/developer/issues.html

* It doesn't make good use of space in my browser window unless I make
  the browser really narrow.  It would take some design work, but it
  would be good to be able to get a bigger picture view at a glance,
  especially when the browser window is large.

* It's kinda ugly, but I suppose we can tweak the CSS ourselves.

* It should use JavaScript to allow drilling down without rebuilding the
  whole screen, à la https://svn.boost.org/trac/boost/browser

> Trilinos is a very large project, so they label pieces of it in CMake,
> CTest propagates the labels with failure reports, and CDash interprets
> the labels to break results down by subproject.  The same thing could
> be used for Boost's library hierarchy.

Awesome.

-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


Re: [Boost-cmake] Analysis of the current CMake system

2009-03-03 Thread Brad King

Brad King wrote:

The current CDash release will not understand CTest build submissions that
use this launcher interface, so one would need CDash from its SVN trunk to
try it.


Sandia's Trilinos project is using CTest launchers with CDash from SVN trunk.
Here is an example page showing errors recorded by the launcher mode:

  http://trilinos-dev.sandia.gov/cdash/viewBuildError.php?buildid=927

We're still tweaking the layout if you have suggestions.

Another new feature in CTest/CDash development heads which may be of interest
to Boost is support for subproject labeling. See their main dashboard page here:

  
http://trilinos-dev.sandia.gov/cdash/index.php?project=Trilinos&date=2009-02-27

Trilinos is a very large project, so they label pieces of it in CMake, CTest
propagates the labels with failure reports, and CDash interprets the labels to
break results down by subproject.  The same thing could be used for Boost's
library hierarchy.
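As a rough sketch of how a project wires this up (the target and test names
below are invented for illustration, not Trilinos' or Boost's actual ones),
the labeling lives in ordinary CMake code:

```cmake
# Attach a subproject label to a library target and to its tests;
# CTest forwards the label with results and CDash groups by it.
add_library(example_lib example.cpp)
set_property(TARGET example_lib PROPERTY LABELS "ExampleSubproject")

add_test(example_test example_test_driver)
set_property(TEST example_test PROPERTY LABELS "ExampleSubproject")
```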

-Brad


Re: [Boost-cmake] Analysis of the current CMake system

2009-03-03 Thread Brad King

troy d. straszheim wrote:

So the first order of business would be to remove this kind of thing:

set(CMAKE_CXX_COMPILE_OBJECT
    "\"${PYTHON_EXECUTABLE}\" \"${BOOST_TEST_DRIVER}\" cxx_compile_object ${CMAKE_CXX_COMPILE_OBJECT}")


We're 100% agreed on the need for this, as far as I can see.


A couple weeks ago I committed changes in CMake CVS HEAD to provide an
interface like this.  It works only for Makefile generators, but is now
a builtin feature.

Troy, if you build CMake from CVS and run with "--help-properties" you
will see these properties:

  RULE_LAUNCH_COMPILE
   Specify a launcher for compile rules.

   Makefile generators prefix compiler commands with the given launcher
   command line.  This is intended to allow launchers to intercept build
   problems with high granularity.  Non-Makefile generators currently
   ignore this property.

  RULE_LAUNCH_CUSTOM
   Specify a launcher for custom rules.

   Makefile generators prefix custom commands with the given launcher
   command line.  This is intended to allow launchers to intercept build
   problems with high granularity.  Non-Makefile generators currently
   ignore this property.

  RULE_LAUNCH_LINK
   Specify a launcher for link rules.

   Makefile generators prefix link and archive commands with the given
   launcher command line.  This is intended to allow launchers to
   intercept build problems with high granularity.  Non-Makefile
   generators currently ignore this property.

CTest provides a launcher interface too.  Look in Modules/CTest.cmake for
this code:

  IF(CTEST_USE_LAUNCHERS)
    SET(CTEST_LAUNCH_COMPILE "\"${CMAKE_CTEST_COMMAND}\" --launch --target-name <TARGET_NAME> --build-dir <CMAKE_CURRENT_BINARY_DIR> --output <OBJECT> --source <SOURCE> --language <LANGUAGE> --")
    SET(CTEST_LAUNCH_LINK    "\"${CMAKE_CTEST_COMMAND}\" --launch --target-name <TARGET_NAME> --build-dir <CMAKE_CURRENT_BINARY_DIR> --output <TARGET> --target-type <TARGET_TYPE> --language <LANGUAGE> --")
    SET(CTEST_LAUNCH_CUSTOM  "\"${CMAKE_CTEST_COMMAND}\" --launch --target-name <TARGET_NAME> --build-dir <CMAKE_CURRENT_BINARY_DIR> --output <OUTPUT> --")
    SET_PROPERTY(GLOBAL PROPERTY RULE_LAUNCH_COMPILE "${CTEST_LAUNCH_COMPILE}")
    SET_PROPERTY(GLOBAL PROPERTY RULE_LAUNCH_LINK "${CTEST_LAUNCH_LINK}")
    SET_PROPERTY(GLOBAL PROPERTY RULE_LAUNCH_CUSTOM "${CTEST_LAUNCH_CUSTOM}")
  ENDIF(CTEST_USE_LAUNCHERS)

When a project sets CTEST_USE_LAUNCHERS to true, CTest uses this interface
to run all Make rules through an (undocumented) API 'ctest --launch'.  It
allows us to report full information about failed build rules (working dir,
full command line, stdout, stderr, return code, etc.) with no log scraping.
The launcher also does some simple scraping of compiler output to detect
warnings (needed for some Windows compilers which warn on stdout).
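For illustration, the essence of what such a launcher records per rule can be
sketched in Python (the real 'ctest --launch' is C++; the field names here are
invented):

```python
import subprocess
import sys

def launch(real_command, cwd="."):
    """Run one build rule and record everything about it -- full command,
    stdout, stderr, and return code -- so no log scraping is needed."""
    proc = subprocess.run(real_command, cwd=cwd,
                          capture_output=True, text=True)
    return {
        "command": real_command,
        "stdout": proc.stdout,
        "stderr": proc.stderr,
        "returncode": proc.returncode,
    }

# Stand-in for a compiler invocation:
record = launch([sys.executable, "-c", "print('compiled ok')"])
print(record["returncode"])  # → 0
```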

The current CDash release will not understand CTest build submissions that
use this launcher interface, so one would need CDash from its SVN trunk to
try it.  However, you should be able to use the above code as an example
to create your python/Trac reporting stuff.

-Brad


Re: [Boost-cmake] Analysis of the current CMake system

2009-02-05 Thread Bill Hoffman

troy d. straszheim wrote:


Further, I have to note that command-line VS builds should be
supported for one simple reason: nmake does not support parallel
builds and probably never will. This makes VS the easiest way of
running a parallel build on Windows (locally or distributed with
additional tools). GNU make from MSYS is out of the question
because MSYS seems far from production-grade.

Should IDE builds be considered support-worthy, it would of
course be necessary to test manually before releases.

I hope that some of this helps


Interesting, thanks for this.   Would the 'cwrap' approach (where the 
compiler is called via a wrapper) work under vcbuild?  (Forgive me, I 
don't do windows).  The only reason I paid attention to NMAKE is that it 
was somewhat familiar to me.




No, that would not help.  vcbuild uses the project files.  CTest can
already run devenv from the command line, which is almost the same thing.
I think with VS you really have to test the project files for
performance, and to make sure they work.  We are working on a way of
extracting the build information from the log files produced by VS.


-Bill


Re: [Boost-cmake] Analysis of the current CMake system

2009-02-05 Thread troy d. straszheim

Ingo Albrecht wrote:

Note that vcbuild (the command line driver for VS builds) has command
line arguments for specifying strings to prefix log messages at various log
levels with. This should make log scraping of the compilation much more
reliable, although it still disgusts me. This does not work for CTest though,
because it tests using cmake scripts.

Running vcbuild is certainly no alternative for trying the build in the IDE,
but it should be sufficient for continuous integration.

Further, I have to note that command-line VS builds should be
supported for one simple reason: nmake does not support parallel
builds and probably never will. This makes VS the easiest way of
running a parallel build on Windows (locally or distributed with
additional tools). GNU make from MSYS is out of the question
because MSYS seems far from production-grade.

Should IDE builds be considered support-worthy, it would of
course be necessary to test manually before releases.

I hope that some of this helps


Interesting, thanks for this.   Would the 'cwrap' approach (where the 
compiler is called via a wrapper) work under vcbuild?  (Forgive me, I 
don't do windows).  The only reason I paid attention to NMAKE is that it 
was somewhat familiar to me.


-t


Re: [Boost-cmake] Analysis of the current CMake system

2009-02-05 Thread Ingo Albrecht

troy d. straszheim schrieb:

Brad King wrote:

troy d. straszheim wrote:

I don't quite get "That doesn't mean we can't test some tools without
log-scraping".

I see two different cases here.  There's the developer working under
visual studio or emacs who wants to run some tests.  This guy knows (or
should know) how to find what compile/link flags were used, chase down
warnings and whatnot.  In this case the message "test X failed, go look
at the logs" is fine.  Let the user use his tool.


How do we know that a VS IDE build works unless we test it?  Testing it
requires log scraping unless we write some kind of plugin, which may not
be possible for all tools.  Even if we do log scraping for VS IDE
builds, we can still use "CWrap" for generators where it can be 
implemented.


I think we're agreeing here.  There is testing the VS IDE build, and 
then there is testing the VS IDE build and trying to report every 
single thing that could go wrong in the course of the build to a 
cdashish server somewhere.  I'd assume that IDE builds would need to 
be tested by somebody sitting in front of the IDE.  If something goes 
wrong the IDE tells you and you go figure out what it is. make and 
nmake builds, running on slaves in a datacenter someplace, would need 
to report everything that goes wrong.  I don't think the 
slave-testing-process should get complicated by IDEs that constantly 
get tested manually anyhow.

Note that vcbuild (the command line driver for VS builds) has command
line arguments for specifying strings to prefix log messages at various log
levels with. This should make log scraping of the compilation much more
reliable, although it still disgusts me. This does not work for CTest though
because it tests using cmake scripts.

Running vcbuild is certainly no alternative for trying the build in the IDE,
but it should be sufficient for continuous integration.

Further, I have to note that command-line VS builds should be
supported for one simple reason: nmake does not support parallel
builds and probably never will. This makes VS the easiest way of
running a parallel build on Windows (locally or distributed with
additional tools). GNU make from MSYS is out of the question
because MSYS seems far from production-grade.

Should IDE builds be considered support-worthy, it would of
course be necessary to test manually before releases.

I hope that some of this helps
 Ingo

PS On my background:
I'm a Berlin-based software developer currently building
a new CMake-based build system for our pretty extensive
in-house media engine, to be found with source at
http://y60.artcom.de/. No boost in there though. Yet.



Re: [Boost-cmake] Analysis of the current CMake system

2009-01-21 Thread troy d. straszheim

Brad King wrote:

troy d. straszheim wrote:

I don't quite get "That doesn't mean we can't test some tools without
log-scraping".

I see two different cases here.  There's the developer working under
visual studio or emacs who wants to run some tests.  This guy knows (or
should know) how to find what compile/link flags were used, chase down
warnings and whatnot.  In this case the message "test X failed, go look
at the logs" is fine.  Let the user use his tool.


How do we know that a VS IDE build works unless we test it?  Testing it
requires log scraping unless we write some kind of plugin, which may not
be possible for all tools.  Even if we do log scraping for VS IDE
builds, we can still use "CWrap" for generators where it can be implemented.


I think we're agreeing here.  There is testing the VS IDE build, and then there 
is testing the VS IDE build and trying to report every single thing that could 
go wrong in the course of the build to a cdashish server somewhere.  I'd assume 
that IDE builds would need to be tested by somebody sitting in front of the IDE. 
 If something goes wrong the IDE tells you and you go figure out what it is. 
make and nmake builds, running on slaves in a datacenter someplace, would need 
to report everything that goes wrong.  I don't think the slave-testing-process 
should get complicated by IDEs that constantly get tested manually anyhow.




CTest has several parts.  One is to monitor builds, another is to run
the tests.  Currently it uses log scraping to monitor builds since the
native tools don't provide per-rule reports and we haven't created
"CWrap" to work around the limitation.  For running the tests it has
always separated output on a per-test basis.



OK, we're on the same page.


It runs all the tests through python command wrappers to capture
individual output, and therefore has to generate its own compiler
command line invocations instead of using CMake's knowledge of the
native tools.  Currently -c and -o options are hard-coded AFAICS.

Right... do you see a different way to get this done?


Use --build-and-test and with "CWrap" enabled (or currently the python
approximation of it).



Ok.


We could make this a CMake feature by teaching the generators to wrap
the compiler up with a tool we distribute with CMake.  Then you won't
have to hack the compilation rule variables for Boost or depend on
python.

Presumably the name of this tool (let's call it CWrap?) would have some
interface that you could implement however you like... and if that is
implementable in python, you've given your users lots of ways to tweak
their build system.   We're OK with being dependent on python.


Whatever interface we create to tell the generators to do this can also
specify the wrapper (hook) command.  We can provide a simple tool to be
used by default and just specify a command-line interface for a custom tool.


Sounds good.


Testing with ctest's --build-and-test feature.  The entire build and
execution of every test would be captured independently of other tests.

Or a python script that does what ctest's --build-and-test does...
IMV a lot more flexibility, a lot less code.


How is duplicating functionality less code?  If --build-and-test is
missing something, we can extend it.


Well, the ability to easily talk to anything that has python bindings and e.g. 
report results via XML-RPC to a trac server doesn't duplicate anything in ctest, 
and that'd be a real PITA to code up in C++.   To support this kind of thing, I 
could envision some python bindings to ctest: ctest does its thing, calls a 
python function (that it has been passed) as it collects results.  I'd have to 
think about this.
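The kind of Python-side reporting Troy mentions might look like this sketch
(the server URL and the 'testresults.submit' method are purely hypothetical;
real Trac XML-RPC plugins expose different methods):

```python
import xmlrpc.client

def report_result(server_url, test_name, passed, log):
    """Push one test result to an XML-RPC endpoint (hypothetical API)."""
    proxy = xmlrpc.client.ServerProxy(server_url)
    # 'testresults.submit' is an invented method name for illustration.
    return proxy.testresults.submit(test_name, passed, log)

# Constructing a proxy makes no network connection yet:
proxy = xmlrpc.client.ServerProxy("http://localhost/rpc")
```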



The code in question tells CMake to generate a python script that looks
like this (on Windows):

  sys.path.append("c:\path\with\backslashes\to\some\file.txt")
  #^^ escape sequence?


I'd have to look back.  This stuff was indeed working on windows;
anyhow, looks like we can detangle a lot of this with some of your help
making tweaks to cmake itself.


Python seems to leave the backslashes if the sequence doesn't happen to
be a supported escape.  You may just be getting lucky.



Yeah it'd be nice to not have to do all this, obviously bugs breed in this kind 
of stuff.
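Brad's point about escape sequences can be demonstrated directly: whether an
unescaped Windows path survives depends entirely on which letter happens to
follow each backslash.

```python
# An unrecognized escape like "\p" keeps its backslash, but a recognized
# one like "\t" silently becomes a TAB, corrupting the path.
kept = "c:\path"       # 7 characters, backslash preserved
corrupted = "c:\test"  # 6 characters, contains a literal TAB
print(len(kept), len(corrupted))  # → 7 6
```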


So the first order of business would be to remove this kind of thing:

set(CMAKE_CXX_COMPILE_OBJECT
    "\"${PYTHON_EXECUTABLE}\" \"${BOOST_TEST_DRIVER}\" cxx_compile_object ${CMAKE_CXX_COMPILE_OBJECT}")


We're 100% agreed on the need for this, as far as I can see.  You may have ideas 
about the interface.  It gets executed as:


   wrapper build_dir opcode target arg0 arg1 arg2 ... argN

e.g.:

   mywrap.py /path/to/build create_shared_library libsomething.so gcc -shared 
-o libsomething.so somebody.o somebodyelse.o -lstdc++


'opcode' is one of cxx_compile_object, create_shared_library, 
create_static_library, link_executable.
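A minimal wrapper honoring that interface might look like this sketch (the
record fields and the persistence step are illustrative, not the actual
boost-cmake driver):

```python
import subprocess
import sys

def wrap(argv):
    """Invoked as: wrapper build_dir opcode target arg0 ... argN
    Runs the real tool and captures its outcome for later reporting."""
    build_dir, opcode, target = argv[0], argv[1], argv[2]
    real_command = argv[3:]
    proc = subprocess.run(real_command, cwd=build_dir,
                          capture_output=True, text=True)
    record = {"opcode": opcode, "target": target,
              "returncode": proc.returncode,
              "stdout": proc.stdout, "stderr": proc.stderr}
    # The real driver would persist `record` (e.g. pickle it) here.
    return proc.returncode

if __name__ == "__main__":
    sys.exit(wrap(sys.argv[1:]))
```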


The driver currently pickles all this informa

Re: [Boost-cmake] Analysis of the current CMake system

2009-01-19 Thread David Abrahams

on Mon Jan 19 2009, Brad King  wrote:

> Hi Dave,
>
> I think some of the confusion is because my original posting proposed
> *two* different approaches to testing:
>
> 1.) Build all the (run) tests for one library into a single executable.
>  Each test gets its own TU but the tests are linked together.  Execution
> of the tests is delayed until test time at which point CTest runs the
> executable over and over with different arguments for each test.

That approach has promise for us, but it would take some investment
because, in general, our test executables are more granular than that.

> 2.) Build the tests as individual executables, but not until *test*
> time.  The idea is that we drive testing with CTest, but each test is a
> *recursive invocation* of CTest with its --build-and-test feature.  This
> feature drives the build of a test source file through the native
> toolchain as part of running the test.  The output of the test includes
> all the native toolchain stuff and the test executable's output.
> However, every test is its own separate recursive invocation of ctest,
> so its output is recorded separately from other tests.

Seems plausible.

> Run-tests can use either #1 or #2.
>
> Compile-only tests should use #2 since the interesting part of the test
> is the compilation, and compile-fail tests can clearly not be linked to
> other tests.

Right, I think.

> David Abrahams wrote:
>> on Thu Jan 15 2009, Brad King  wrote:
>>> The question here is whether one wants to test with the same tools users
>>> might use to build the project.  If one user's tool doesn't provide
>>> per-rule information then we need log-scraping to test it.  
>> 
>> Except that I contest your premise that no intrinsic per-rule
>> information support implies log scraping.  If there is support for the
>> use of replacement tools ("cl-wrapper" instead of "cl"), you can also
>> avoid log scraping.
>
> My argument is simply that if there *were no way* to get per-rule info
> from a native build tool then log scraping is necessary.  

'course

> I fully agree
> that it may be possible to avoid it with sufficient effort for every
> native tool.  Log scraping has been working well enough for us that
> we've not been motivated to put in this effort.

Well, if someone else is maintaining it, I might be convinced not to
care whether you do the job by log scraping or by reading tea
leaves. ;-)

> If you can point me at documentation about how to do this in VS I'd love
> to see it.  I know the Intel compiler does it, but that is a
> full-fledged plugin that even supports its own project file format.  We
> would probably need funding to do something that heavy-weight.

I know nothing about how to do that.  I think Eric Niebler might be able
to help you; he's done this kind of tool integration in the past.

   Frankly I'm not sure what logfile scraping has to do with the
   structural problems you've mentioned.
>>>
>>> I'm only referring to the test part of the anti-logscraping code.  The
>>> python command wrappers are there to avoid log scraping, 
>> 
>> Sorry, I'm not up on the details of the system, so I don't know what
>> "python command wrappers" refers to.
>
> Currently in boost every test compilation command line is invoked
> through a python command that wraps around the real command.  In the
> current system this is necessary to avoid log scraping since the tests
> are done during the main build.

OK.

>>> but if the tests were run through CTest then no log scraping would be
>>> needed.
>> 
>> Now I'm really confused.  On one hand, you say it's necessary to run
>> tests through the native toolchains, and that implies log scraping.  On
>> the other, you suggest running tests through CTest and say that doesn't
>> imply log scraping.  I must be misinterpreting something.  Could you
>> please clarify?
>
> See approach #2 above.

OK.

 * Boost developers need the ability to change something in their
   libraries and then run a test that checks everything in Boost that
   could have been affected by that change without rebuilding and
   re-testing all of Boost (i.e. "incremental retesting").
>>>
>>> How does the current solution solve that problem (either Boost.Build or
>>> the current CMake system)?
>> 
>> Boost.Build does it by making test results into targets that depend on
>> successful runs of up-to-date test executables.  Test executables are
>> targets that depend on boost library binaries and headers.
>
> CTest will need some work to make this totally minimal.  Basically it is
> missing timestamps to avoid re-running tests, which is probably why Troy
> put the tests into the build in the current system.

Prolly.  If you can make the changes, I'm on board.

> As things stand now, the above approaches work as follows.  Approach #1
> will compile/link test executables during the main build with full
> dependencies.  Approach #2 will drive the individual native build system
> for every test which has its own dependencies.

Re: [Boost-cmake] Analysis of the current CMake system

2009-01-19 Thread Brad King
Hi Dave,

I think some of the confusion is because my original posting proposed
*two* different approaches to testing:

1.) Build all the (run) tests for one library into a single executable.
 Each test gets its own TU but the tests are linked together.  Execution
of the tests is delayed until test time at which point CTest runs the
executable over and over with different arguments for each test.

2.) Build the tests as individual executables, but not until *test*
time.  The idea is that we drive testing with CTest, but each test is a
*recursive invocation* of CTest with its --build-and-test feature.  This
feature drives the build of a test source file through the native
toolchain as part of running the test.  The output of the test includes
all the native toolchain stuff and the test executable's output.
However, every test is its own separate recursive invocation of ctest,
so its output is recorded separately from other tests.

Run-tests can use either #1 or #2.

Compile-only tests should use #2 since the interesting part of the test
is the compilation, and compile-fail tests can clearly not be linked to
other tests.
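Approach #1, sketched in CMake (names invented): one executable links all of a
library's test TUs, and each add_test call selects a case via an argument:

```cmake
# One test executable per library; each test case is an argument.
add_executable(mylib_tests test_main.cpp test_case_a.cpp test_case_b.cpp)
add_test(mylib.case_a mylib_tests case_a)
add_test(mylib.case_b mylib_tests case_b)
```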

David Abrahams wrote:
> on Thu Jan 15 2009, Brad King  wrote:
>> The question here is whether one wants to test with the same tools users
>> might use to build the project.  If one user's tool doesn't provide
>> per-rule information then we need log-scraping to test it.  
> 
> Except that I contest your premise that no intrinsic per-rule
> information support implies log scraping.  If there is support for the
> use of replacement tools ("cl-wrapper" instead of "cl"), you can also
> avoid log scraping.

My argument is simply that if there *were no way* to get per-rule info
from a native build tool then log scraping is necessary.  I fully agree
that it may be possible to avoid it with sufficient effort for every
native tool.  Log scraping has been working well enough for us that
we've not been motivated to put in this effort.

If you can point me at documentation about how to do this in VS I'd love
to see it.  I know the Intel compiler does it, but that is a
full-fledged plugin that even supports its own project file format.  We
would probably need funding to do something that heavy-weight.

>>>   Frankly I'm not sure what logfile scraping has to do with the
>>>   structural problems you've mentioned.
>> I'm only referring to the test part of the anti-logscraping code.  The
>> python command wrappers are there to avoid log scraping, 
> 
> Sorry, I'm not up on the details of the system, so I don't know what
> "python command wrappers" refers to.

Currently in boost every test compilation command line is invoked
through a python command that wraps around the real command.  In the
current system this is necessary to avoid log scraping since the tests
are done during the main build.

>> but if the tests were run through CTest then no log scraping would be
>> needed.
> 
> Now I'm really confused.  On one hand, you say it's necessary to run
> tests through the native toolchains, and that implies log scraping.  On
> the other, you suggest running tests through CTest and say that doesn't
> imply log scraping.  I must be misinterpreting something.  Could you
> please clarify?

See approach #2 above.

>>> * Boost developers need the ability to change something in their
>>>   libraries and then run a test that checks everything in Boost that
>>>   could have been affected by that change without rebuilding and
>>>   re-testing all of Boost (i.e. "incremental retesting").
>> How does the current solution solve that problem (either Boost.Build or
>> the current CMake system)?
> 
> Boost.Build does it by making test results into targets that depend on
> successful runs of up-to-date test executables.  Test executables are
> targets that depend on boost library binaries and headers.

CTest will need some work to make this totally minimal.  Basically it
lacks timestamp checks that would let it skip re-running up-to-date
tests, which is probably why Troy put the tests into the build in the
current system.

As things stand now, the above approaches work as follows.  Approach #1
will compile/link test executables during the main build with full
dependencies.  Approach #2 will drive the individual native build system
for every test which has its own dependencies.  Both approaches will
still run every test executable though.  I'm sure we can address this
problem.

How does Boost.Build decide whether a compile-fail test needs to be
re-attempted?  Does its dependency scanning determine when something has
changed that could make a new compilation attempt turn out differently?

>>> c) Adding a feature to a library requires modifying existing test code.
>> I don't understand what you mean here.  Are you saying that to test a
>> new feature, the test dispatcher needs to be updated to link in the new
>> test?  
> 
> I don't know what a test dispatcher is.  If you want to maximally
> isolate the tests for the new feature, you can put them in a new
> translation unit, but something has to call into that translation unit
> from main if the tests are going to run.

Re: [Boost-cmake] Analysis of the current CMake system

2009-01-16 Thread David Abrahams

on Thu Jan 15 2009, Brad King  wrote:

> troy d. straszheim wrote:
>> I don't quite get "That doesn't mean we can't test some tools without
>> log-scraping".
>> 
>> I see two different cases here.  There's the developer working under
>> visual studio or emacs who wants to run some tests.  This guy knows (or
>> should know) how to find what compile/link flags were used, chase down
>> warnings and whatnot.  In this case the message "test X failed, go look
>> at the logs" is fine.  Let the user use his tool.
>
> How do we know that a VS IDE build works unless we test it?  Testing it
> requires log scraping unless we write some kind of plugin, which may not
> be possible for all tools.  Even if we do log scraping for VS IDE
> builds, we can still use "CWrap" for generators where it can be implemented.

I know for a fact it's possible to hook the builtin VS IDE compiler
command.  After all, that happens when you install the Intel compiler
chain.  In fact, I think you get a menu that lets you choose which
toolset you're using under the covers.

>>> We could make this a CMake feature by teaching the generators to wrap
>>> the compiler up with a tool we distribute with CMake.  Then you won't
>>> have to hack the compilation rule variables for Boost or depend on
>>> python.
>> 
>> Presumably the name of this tool (let's call it CWrap?) would have some
>> interface that you could implement however you like... and if that is
>> implementable in python, you've given your users lots of ways to tweak
>> their build system.   We're OK with being dependent on python.
>
> Whatever interface we create to tell the generators to do this can also
> specify the wrapper (hook) command.  We can provide a simple tool to be
> used by default and just specify a command-line interface for a custom tool.

e.g. "python some_script.py ..."

>>> Testing with ctest's --build-and-test feature.  The entire build and
>>> execution of every test would be captured independently of other tests.
>> 
>> Or a python script that does what ctest's --build-and-test does...
>> IMV a lot more flexibility, a lot less code.
>
> How is duplicating functionality less code?  If --build-and-test is
> missing something, we can extend it.

Whatever you guys can provide quickly enough to keep us from wanting to
build it ourselves will be most gratefully appreciated!

>>> The code in question tells CMake to generate a python script that looks
>>> like this (on Windows):
>>>
>>>   sys.path.append("c:\path\with\backslashes\to\some\file.txt")
>>>   #^^ escape sequence?
>>>
>> 
>> I'd have to look back.  This stuff was indeed working on windows;
>> anyhow, looks like we can detangle a lot of this with some of your help
>> making tweaks to cmake itself.
>
> Python seems to leave the backslashes if the sequence doesn't happen to
> be a supported escape.  You may just be getting lucky.

Fortunately it's easy enough to keep luck out of the equation in this
case.
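The luck involved is easy to demonstrate: Python leaves unrecognized
escape sequences like \p alone, but recognized ones like \t and \b
silently corrupt the path.  A quick illustration (made-up paths):

```python
# Unrecognized escapes like \p survive by accident; recognized ones
# like \t (tab) and \b (backspace) silently mangle the path.
lucky   = "c:\path\with"    # \p, \w are not escapes -> 12 chars, intact
mangled = "c:\temp\build"   # \t and \b collapse -> 11 chars, corrupted
safe    = r"c:\temp\build"  # raw string -> 13 chars, every backslash kept

print(repr(mangled))  # shows the embedded tab and backspace characters
```

Raw strings (or forward slashes, which Windows APIs accept) take luck
out of the equation.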

-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com
___
Boost-cmake mailing list
Boost-cmake@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/boost-cmake


Re: [Boost-cmake] Analysis of the current CMake system

2009-01-16 Thread David Abrahams

on Thu Jan 15 2009, Brad King  wrote:

> David Abrahams wrote:
>> * logfile scraping is too hopelessly fragile to make for a good testing
>>   system, and there are better and possibly even easier alternatives.
>
> The question here is whether one wants to test with the same tools users
> might use to build the project.  If one user's tool doesn't provide
> per-rule information then we need log-scraping to test it.  

Except that I contest your premise that no intrinsic per-rule
information support implies log scraping.  If there is support for the
use of replacement tools ("cl-wrapper" instead of "cl"), you can also
avoid log scraping.

> That doesn't mean we can't test some tools without log-scraping
> though.
>
>>   Frankly I'm not sure what logfile scraping has to do with the
>>   structural problems you've mentioned.
>
> I'm only referring to the test part of the anti-logscraping code.  The
> python command wrappers are there to avoid log scraping, 

Sorry, I'm not up on the details of the system, so I don't know what
"python command wrappers" refers to.

> but if the tests were run through CTest then no log scraping would be
> needed.

Now I'm really confused.  On one hand, you say it's necessary to run
tests through the native toolchains, and that implies log scraping.  On
the other, you suggest running tests through CTest and say that doesn't
imply log scraping.  I must be misinterpreting something.  Could you
please clarify?

>> * Boost developers need the ability to change something in their
>>   libraries and then run a test that checks everything in Boost that
>>   could have been affected by that change without rebuilding and
>>   re-testing all of Boost (i.e. "incremental retesting").
>
> How does the current solution solve that problem (either Boost.Build or
> the current CMake system)?

Boost.Build does it by making test results into targets that depend on
successful runs of up-to-date test executables.  Test executables are
targets that depend on boost library binaries and headers.  From what I
gather of the current CMake system it is doing something similar.

Of course I understand the downsides of this arrangement that you are
describing (too much / too complicated dependency info leads to slow
execution).  On the other hand, it turns out that Boost.Build spends
relatively little time doing actual dependency analysis (its slowness at
incremental re-test time comes from elsewhere).

>>> The large number of high-level targets places many rules in the outer
>>> make level which leads to very long startup times (look at
>>> CMakeFiles/Makefile2, which make needs to parse many times).  
>> 
>> Just curious: why does make need to parse the same file many times?
>
> I haven't looked at it in detail recently, but on quick inspection I
> think it is now just twice.  Each of the two times works with separate
> rules so they could probably be split into two files.  However, the file
> has never been very big for any of our projects because we don't have a
> huge number of top-level targets (since VS doesn't work in that case).

Thanks for explaining.

>>> Boost needs to build libraries, documentation, and other files to be
>>> placed in the install tree.  Rules to build these parts can fit in
>>> relatively few high-level targets and should certainly use them.  
>> 
>> Sorry, what "should certainly use" what?  Rules should use the targets?
>
> Bad wording on my part.  I meant that it is fine to use top-level
> targets to drive the build of libraries and documentation.
>
>>> However, there are some disadvantages:
>>>
>>>   (a) If one test fails to compile none of its tests can run
>>>   (b) A bad test may accidentally link due to symbols from another test
>> 
>> c) Adding a feature to a library requires modifying existing test code.
>
> I don't understand what you mean here.  Are you saying that to test a
> new feature, the test dispatcher needs to be updated to link in the new
> test?  

I don't know what a test dispatcher is.  If you want to maximally
isolate the tests for the new feature, you can put them in a new
translation unit, but something has to call into that translation unit
from main if the tests are going to run.

> FYI, CMake provides a command to generate the dispatcher for you
> (create_test_sourcelist).

Oh, nice; problem solved.
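For reference, create_test_sourcelist generates a driver whose main()
dispatches on a command-line argument to the test function in each listed
source file, so no hand-written dispatcher is needed.  A minimal sketch
(file and target names here are illustrative):

```cmake
# Generates TestDriver.c containing main(); each listed source must
# define a function named after its file, e.g.
#   int test_rank(int argc, char* argv[])
create_test_sourcelist(test_sources TestDriver.c
  test_rank.c
  test_is_pod.c)
add_executable(type_traits_tests ${test_sources})

# One CTest entry per test; the generated driver dispatches on argv[1].
add_test(NAME test_rank   COMMAND type_traits_tests test_rank)
add_test(NAME test_is_pod COMMAND type_traits_tests test_is_pod)
```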

>>> CTest doesn't do log-scraping to detect errors.  Every test gets run
>>> individually and its output is recorded separately.  Boost's current
>>> system puts the tests inside the build and then jumps through hoops
>>> to avoid log-scraping of the results.
>> 
>> What kind of hoops?
>
> It runs all the tests through python command wrappers to capture
> individual output, and therefore has to generate its own compiler
> command line invocations instead of using CMake's knowledge of the
> native tools.  Currently -c and -o options are hard-coded AFAICS.

Hm.  It should be possible to use CMake's knowledge of native tools and
still inject a wrapper.  If

Re: [Boost-cmake] Analysis of the current CMake system

2009-01-15 Thread Brad King
troy d. straszheim wrote:
> I don't quite get "That doesn't mean we can't test some tools without
> log-scraping".
> 
> I see two different cases here.  There's the developer working under
> visual studio or emacs who wants to run some tests.  This guy knows (or
> should know) how to find what compile/link flags were used, chase down
> warnings and whatnot.  In this case the message "test X failed, go look
> at the logs" is fine.  Let the user use his tool.

How do we know that a VS IDE build works unless we test it?  Testing it
requires log scraping unless we write some kind of plugin, which may not
be possible for all tools.  Even if we do log scraping for VS IDE
builds, we can still use "CWrap" for generators where it can be implemented.

(We can use the term "CWrap" for this discussion and come up with a
better name later.)

> One problem with the current implementation is that every test is a
> toplevel target;  this is easy to fix, if you just write the tests to
> flatfiles in the build hierarchy (like ctest does it) and use a driver
> script to find the files and run the tests (that is nice and clean for
> tests that involve just running compiled binaries, I haven't thought
> about how it would work for compile and compile-fail tests).

As I said in my original post, compile and compile-fail tests will work
fine with --build-and-test.  You shouldn't need to create your own
python testing script.  We can add features to CTest as needed.
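One way a compile-fail test could ride on --build-and-test is to invert
the expected result with CTest's WILL_FAIL property (a hedged sketch;
the names and layout are invented):

```cmake
# The recursive ctest attempts to configure and build the one-file
# project; for a compile-fail test that build is expected to break.
add_test(NAME compile_fail.foo
  COMMAND ${CMAKE_CTEST_COMMAND} --build-and-test
          ${CMAKE_CURRENT_SOURCE_DIR}/compile_fail/foo
          ${CMAKE_CURRENT_BINARY_DIR}/compile_fail/foo
          --build-generator ${CMAKE_GENERATOR})
set_tests_properties(compile_fail.foo PROPERTIES WILL_FAIL TRUE)
```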

> If I remember correctly the 'log scraping' was more about ctest having
> to scrape
> the build log to find compile errors.   Is this no longer necessary?

CTest has several parts.  One is to monitor builds, another is to run
the tests.  Currently it uses log scraping to monitor builds since the
native tools don't provide per-rule reports and we haven't created
"CWrap" to work around the limitation.  For running the tests it has
always separated output on a per-test basis.

>> It runs all the tests through python command wrappers to capture
>> individual output, and therefore has to generate its own compiler
>> command line invocations instead of using CMake's knowledge of the
>> native tools.  Currently -c and -o options are hard-coded AFAICS.
> 
> Right... do you see a different way to get this done?

Use --build-and-test with "CWrap" enabled (or currently the python
approximation of it).

>> We could make this a CMake feature by teaching the generators to wrap
>> the compiler up with a tool we distribute with CMake.  Then you won't
>> have to hack the compilation rule variables for Boost or depend on
>> python.
> 
> Presumably the name of this tool (let's call it CWrap?) would have some
> interface that you could implement however you like... and if that is
> implementable in python, you've given your users lots of ways to tweak
> their build system.   We're OK with being dependent on python.

Whatever interface we create to tell the generators to do this can also
specify the wrapper (hook) command.  We can provide a simple tool to be
used by default and just specify a command-line interface for a custom tool.

>> Testing with ctest's --build-and-test feature.  The entire build and
>> execution of every test would be captured independently of other tests.
> 
> Or a python script that does what ctest's --build-and-test does...
> IMV a lot more flexibility, a lot less code.

How is duplicating functionality less code?  If --build-and-test is
missing something, we can extend it.

>> The code in question tells CMake to generate a python script that looks
>> like this (on Windows):
>>
>>   sys.path.append("c:\path\with\backslashes\to\some\file.txt")
>>   #^^ escape sequence?
>>
> 
> I'd have to look back.  This stuff was indeed working on windows;
> anyhow, looks like we can detangle a lot of this with some of your help
> making tweaks to cmake itself.

Python seems to leave the backslashes if the sequence doesn't happen to
be a supported escape.  You may just be getting lucky.

-Brad


Re: [Boost-cmake] Analysis of the current CMake system

2009-01-15 Thread troy d. straszheim

Brad King wrote:

David Abrahams wrote:

* logfile scraping is too hopelessly fragile to make for a good testing
  system, and there are better and possibly even easier alternatives.


The question here is whether one wants to test with the same tools users
might use to build the project.  If one user's tool doesn't provide
per-rule information then we need log-scraping to test it.  That doesn't
mean we can't test some tools without log-scraping though.



I don't quite get "That doesn't mean we can't test some tools without 
log-scraping".

I see two different cases here.  There's the developer working under visual 
studio or emacs who wants to run some tests.  This guy knows (or should know) 
how to find what compile/link flags were used, chase down warnings and whatnot. 
 In this case the message "test X failed, go look at the logs" is fine.  Let 
the user use his tool.


Then there's the 'tester' who operates testing slave boxes that poll svn, build 
and run things.  The test slave operator doesn't care about which tool, just 
that it works.   The output of these slave runs ends up on a website somewhere,
and in this case the information about the build should be exhaustive:  every 
step, every flag, every warning, every command line, every 
pass/fail/warn/notrun/timeout should be available, so that the library author 
can get as much info as possible without asking the slave operator to dig 
around in his build.


  Frankly I'm not sure what logfile scraping has to do with the
  structural problems you've mentioned.


I'm only referring to the test part of the anti-logscraping code.  The
python command wrappers are there to avoid log scraping, but if the
tests were run through CTest then no log scraping would be needed.


One problem with the current implementation is that every test is a toplevel 
target;  this is easy to fix, if you just write the tests to flatfiles in the 
build hierarchy (like ctest does it) and use a driver script to find the files 
and run the tests (that is nice and clean for tests that involve just running 
compiled binaries, I haven't thought about how it would work for compile and 
compile-fail tests).  I have since converted a different project (as large as 
boost) to this method and am very happy with the results.  You can easily 
customize the test-running business... it's just a python script, which seems to 
me the right tool for the job, a scripting task.  Getting information about 
building/testing out to python, intact and in its entirety, makes communicating 
with servers a real pleasure (using xmlrpc under python, for instance, to talk 
to the xmlrpc plugin inside a Trac instance).


If I remember correctly the 'log scraping' was more about ctest having to scrape
the build log to find compile errors.   Is this no longer necessary?


* Boost developers need the ability to change something in their
  libraries and then run a test that checks everything in Boost that
  could have been affected by that change without rebuilding and
  re-testing all of Boost (i.e. "incremental retesting").


How does the current solution solve that problem (either Boost.Build or
the current CMake system)?


The large number of high-level targets places many rules in the outer
make level which leads to very long startup times (look at
CMakeFiles/Makefile2, which make needs to parse many times).  

Just curious: why does make need to parse the same file many times?


I haven't looked at it in detail recently, but on quick inspection I
think it is now just twice.  Each of the two times works with separate
rules so they could probably be split into two files.  However, the file
has never been very big for any of our projects because we don't have a
huge number of top-level targets (since VS doesn't work in that case).


Boost needs to build libraries, documentation, and other files to be
placed in the install tree.  Rules to build these parts can fit in
relatively few high-level targets and should certainly use them.  

Sorry, what "should certainly use" what?  Rules should use the targets?


Bad wording on my part.  I meant that it is fine to use top-level
targets to drive the build of libraries and documentation.


However, there are some disadvantages:

  (a) If one test fails to compile none of its tests can run
  (b) A bad test may accidentally link due to symbols from another test

c) Adding a feature to a library requires modifying existing test code.


I don't understand what you mean here.  Are you saying that to test a
new feature, the test dispatcher needs to be updated to link in the new
test?  FYI, CMake provides a command to generate the dispatcher for you
(create_test_sourcelist).


CTest doesn't do log-scraping
to detect errors.  Every test gets run individually and its output is
recorded separately.  Boost's current system puts the tests inside the
build and then jumps through hoops to avoid log-scraping of the
results.

What kind of hoops?


It runs all the tests through python command wrappers to capture
individual output, and therefore has to generate its own compiler
command line invocations instead of using CMake's knowledge of the
native tools.  Currently -c and -o options are hard-coded AFAICS.

Re: [Boost-cmake] Analysis of the current CMake system

2009-01-15 Thread Brad King
David Abrahams wrote:
> * logfile scraping is too hopelessly fragile to make for a good testing
>   system, and there are better and possibly even easier alternatives.

The question here is whether one wants to test with the same tools users
might use to build the project.  If one user's tool doesn't provide
per-rule information then we need log-scraping to test it.  That doesn't
mean we can't test some tools without log-scraping though.

>   Frankly I'm not sure what logfile scraping has to do with the
>   structural problems you've mentioned.

I'm only referring to the test part of the anti-logscraping code.  The
python command wrappers are there to avoid log scraping, but if the
tests were run through CTest then no log scraping would be needed.

> * Boost developers need the ability to change something in their
>   libraries and then run a test that checks everything in Boost that
>   could have been affected by that change without rebuilding and
>   re-testing all of Boost (i.e. "incremental retesting").

How does the current solution solve that problem (either Boost.Build or
the current CMake system)?

>> The large number of high-level targets places many rules in the outer
>> make level which leads to very long startup times (look at
>> CMakeFiles/Makefile2, which make needs to parse many times).  
> 
> Just curious: why does make need to parse the same file many times?

I haven't looked at it in detail recently, but on quick inspection I
think it is now just twice.  Each of the two times works with separate
rules so they could probably be split into two files.  However, the file
has never been very big for any of our projects because we don't have a
huge number of top-level targets (since VS doesn't work in that case).

>> Boost needs to build libraries, documentation, and other files to be
>> placed in the install tree.  Rules to build these parts can fit in
>> relatively few high-level targets and should certainly use them.  
> 
> Sorry, what "should certainly use" what?  Rules should use the targets?

Bad wording on my part.  I meant that it is fine to use top-level
targets to drive the build of libraries and documentation.

>> However, there are some disadvantages:
>>
>>   (a) If one test fails to compile none of its tests can run
>>   (b) A bad test may accidentally link due to symbols from another test
> 
> c) Adding a feature to a library requires modifying existing test code.

I don't understand what you mean here.  Are you saying that to test a
new feature, the test dispatcher needs to be updated to link in the new
test?  FYI, CMake provides a command to generate the dispatcher for you
(create_test_sourcelist).

>> CTest doesn't do log-scraping
>> to detect errors.  Every test gets run individually and its output is
>> recorded separately.  Boost's current system puts the tests inside the
>> build and then jumps through hoops to avoid log-scraping of the
>> results.
> 
> What kind of hoops?

It runs all the tests through python command wrappers to capture
individual output, and therefore has to generate its own compiler
command line invocations instead of using CMake's knowledge of the
native tools.  Currently -c and -o options are hard-coded AFAICS.

>> This brings us to the log-scraping issue in general.  CMake permits
>> users to build projects using their native tools, including Visual
>> Studio, Xcode and Makefiles.  In order to make sure builds with these
>> tools work, the test system must drive builds through them too.
>> Testing with just one type of native tool is insufficient.  
> 
> That's at least somewhat debatable.  To get results for all the
> different toolchains would require more testing resources, would it not?

Yes.  In our model we ask users to contribute testing resources for the
toolchains they want supported which we don't have.  If no one cares
enough about a platform/compiler to submit tests, we don't need to
support that platform.

>> Since the native tools do not support per-rule reporting log-scraping
>> is necessary.
> 
> Also somewhat debatable.  If we can get xcode to invoke
> "boost-g++-wrapper" instead of "g++," we can still get per-rule
> reporting, right?

If per-rule reporting is not available from a native tool we have to do
log-scraping.  What's debatable is whether we can work around a lack of
explicit support in the native tools.

We could make this a CMake feature by teaching the generators to wrap
the compiler up with a tool we distribute with CMake.  Then you won't
have to hack the compilation rule variables for Boost or depend on python.

>> Problem (a) is automatically handled by the testing solution I propose
>> above since test results are recorded and reported individually.
> 
> Sorry, what did you propose above?

Testing with ctest's --build-and-test feature.  The entire build and
execution of every test would be captured independently of other tests.

>> Problem (c) is a lack of convenience when the build error is subtle
>> enough to requi

Re: [Boost-cmake] Analysis of the current CMake system

2009-01-15 Thread David Abrahams

on Thu Jan 15 2009, "troy d. straszheim"  wrote:

> What to you say to the original question about the preferred format?

Don't know what to say yet, sorry.

-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


Re: [Boost-cmake] Analysis of the current CMake system

2009-01-15 Thread David Abrahams

on Thu Jan 15 2009, "Beman Dawes"  wrote:

>> Have you tried helping a Boost newbie go through the process of
>> building and installing Boost lately? It's extremely painful, but we
>> don't see that pain because we've all gone through the initial hurdles
>> of getting bjam setup just right for our own configurations.
>
> I certainly see the pain - I'm the one who does the builds for a
> client of mine. It is a painful mess now, that's for sure.
>
> Any progress on pre-built binaries?

BoostPro Computing provides them for Windows.  We've just done the extra
work necessary to make it fairly easy to push out a new release, and
we're expecting to post a 1.37 release in the next 24 hours.

>> That's the wrong thing to optimize: we need to optimize for the case
>> where a new user downloads Boost and wants to build/use it
>> immediately. Those users only care about a single compiler---the one
>> they use for their day-to-day work---and would greatly benefit from
>> being able to use their platform-specific tools (Visual Studio,
>> xCode, whatever).
>
> True, but they have to build several variants for that compiler, and
> on some platforms the 32 and 64 bit flavors are more like two separate
> compilers.

I think the number of actual end-users who need to build multiple
variants is relatively small.  Even if a company is shipping multiple
versions of a product, end users are typically doing the
compile-edit-debug cycle with a single compiler and variant.

>> If we're going to go through the effort of introducing a new build
>> system, we need to focus on the user experience first and foremost.
>
> If you don't give the developers tools they can use, they won't switch
> to CMake.
>
> If you don't give the release managers tools they can use, they won't
> switch to CMake.
>
> And of course the users needs have to be met too.
>
> It is a three-legged stool. All three legs have to bear the weight, or
> it topples.

Good.  Maybe you could be explicit about the needs that, in your view,
are brought by each group?

-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


Re: [Boost-cmake] Analysis of the current CMake system

2009-01-15 Thread David Abrahams

on Wed Jan 14 2009, Brad King  wrote:

> Hi Folks,
>
> I'm considering attending BoostCon 2009 to provide developer-level
> CMake expertise, 

Yes, please!

> and I'm looking into proposing a session as Hartmut
> requested.  

Also yes please.

> In preparation I've downloaded and tried the current
> system and read back through some of the discussion on this list:
>
>   http://thread.gmane.org/gmane.comp.lib.boost.cmake/4
>   http://thread.gmane.org/gmane.comp.lib.boost.cmake/10
>
> The current system feels very bulky compared to CMake-generated build
> systems I've used for projects of comparable size.  This is primarily
> due to the use of top-level targets for the tests.  IMO the efforts to
> provide test-to-library dependencies and to avoid log-scraping have
> steered the system off-course.

While I'm fully prepared to believe that the project could be better
structured, I'm still convinced that:

* logfile scraping is too hopelessly fragile to make for a good testing
  system, and there are better and possibly even easier alternatives.
  Frankly I'm not sure what logfile scraping has to do with the
  structural problems you've mentioned.

* Boost developers need the ability to change something in their
  libraries and then run a test that checks everything in Boost that
  could have been affected by that change without rebuilding and
  re-testing all of Boost (i.e. "incremental retesting").

> One of the goals of CMake is to let developers use their favorite
> native tools.  These include Visual Studio and Xcode along with Make
> tools.  

Hurrah (not "horrors," at least, not for me)!

> In order to support these tools CMake's build model separates
> high-level targets from low-level file dependencies.  

It's just like Boost.Build in that way.

> The add_executable(), add_library(), and add_custom_target() commands
> create high-level targets.  Each target contains file-level rules to
> build individual sources.
>
> In generated VS and Xcode projects the high-level targets become
> top-level items in the GUIs.  These IDEs define file-level rules
> inside each target.  In generated Makefiles there is a two-level
> system in which the outer level knows only about inter-dependencies
> among high-level targets and the inner level loads the file-level
> rules inside each target.  This design yields fast builds because each
> make process sees a relatively small set of rules (and makes automatic
> dependency scanning easy and reliable).  It also makes the
> representation of build rules in the IDEs tractable.  The key is that
> there should be relatively few high-level targets compared to the
> number of file-level rules.

Reminder: much of what's in Boost lives in headers.

The modularization work
(https://svn.boost.org/trac/boost/wiki/CMakeModularizeLibrary) could
help us maintain the ability to do incremental retesting without
representing the dependencies of every individual header file.  It would
be nice if module dependencies could be used as a first-level work
eliminator and the system could still avoid retesting things that really
weren't affected, though.  There are a few large-scale, highly modular,
library designs in Boost that only appear as dependencies in bits and
pieces in other libraries.

> Currently Boost's CMake system creates about two high-level targets
> for each test.  One is the add_executable() to build the test and the
> other is the add_custom_target() to run the test.  This results in a
> very large number of high-level targets.  The generated Xcode and VS
> projects are simply too large for the IDEs to load (I waited 10
> minutes for VS to try), which already defeats one purpose of using
> CMake.  

Big problem.

> This leaves the Makefile generators as the only option.
>
> The large number of high-level targets places many rules in the outer
> make level which leads to very long startup times (look at
> CMakeFiles/Makefile2, which make needs to parse many times).  

Just curious: why does make need to parse the same file many times?

> For example, I run
>
>   time make type_traits-rank_test VERBOSE=1
>
> and get
>
>   52.49s user 0.31s system 96% cpu 54.595 total
>
> but only about 1s of that time was actually spent running the compiler
> and linker.
>
> Boost needs to build libraries, documentation, and other files to be
> placed in the install tree.  Rules to build these parts can fit in
> relatively few high-level targets and should certainly use them.  

Sorry, what "should certainly use" what?  Rules should use the targets?

> This
> is currently done when not building the tests (BUILD_TESTING=OFF).
> The problem lies in building the tests.  These do not belong in
> high-level targets for the reasons described above.  I'd like to help
> you create an alternative solution.

Yay

> There are four kinds of tests:
>
>   boost_test_run
>   boost_test_run_fail
>   boost_test_compile
>   boost_test_compile_fail
>
> Let's first consider the run and run_fail tests.  In our proj

Re: [Boost-cmake] Analysis of the current CMake system

2009-01-15 Thread troy d. straszheim

Brad King wrote:



The boost-cmake-for-users talk could of course reflect whatever we
get done between now and then.


Has anyone submitted anything for this yet?  We (Kitware) can present
our CMake/CTest/CDash/CPack software process in general, but the
boost-specific part should probably be done by one of its authors or
maintainers.  Do you want to do a combined (two-part) session, or should
I submit a separate proposal for the general part?


I'm going to submit something as one of the authors of the boost-specific 
bits...  what say we each submit a 60-minute talk, yours 'cmake in general' and 
mine 'cmake for boost', and we'll figure out how to make them jibe as the 
details come out.  For instance the CTest/CDash bits need to be consistent, I 
assume that they currently wouldn't be.  (is this roughly what David/Beman want?)


-t




___
Boost-cmake mailing list
Boost-cmake@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/boost-cmake


Re: [Boost-cmake] Analysis of the current CMake system

2009-01-15 Thread troy d. straszheim

David Abrahams wrote:

on Wed Jan 14 2009, "troy d. straszheim"  wrote:


Hi Brad,

There is a lot to discuss here.  I'll go back later and make specific comments. 
It'd
be great to talk in person at boostcon, (boostcon rocks, by the way.)

I understand/agree with a lot of your points (especially bulkiness, and the 
need to
reduce the number of toplevel targets), in most cases because I've learned more 
about
cmake since I implemented what is currently on the boost trunk.

Brad King wrote:
[snip] 


In summary, I'd like to help you folks address these issues.  Some of
the work will be in Boost's CMake code and some in CMake itself.  The
work will benefit both projects.  We can arrange to meet at BoostCon,
but we can probably get a lot of discussion done on this list before
then.  BTW, can anyone suggest a preferred format for a BoostCon
session from the boost-cmake-devs' point of view?

I don't personally see a formal presentation to boost-cmake devs as being 
useful,
there just aren't enough of us (last I checked there were three).


Who are you counting?  


I was counting Doug, Michael and myself


I don't think I've done anything substantial with
Boost-CMake but would still be *very* interested in such a talk.


That's good news...  I stand corrected.  What do you say to the original 
question about the preferred format?


-t





Re: [Boost-cmake] Analysis of the current CMake system

2009-01-15 Thread Brad King
troy d. straszheim wrote:
> There is a lot to discuss here.  I'll go back later and make specific
> comments.  It'd be great to talk in person at boostcon, (boostcon rocks,
> by the way.)
> 
> I understand/agree with a lot of your points (especially bulkiness, and
> the need to reduce the number of toplevel targets), in most cases
> because I've learned more about cmake since I implemented what is
> currently on the boost trunk.

Great.  I'll wait for your specific comments to continue discussion.

> Brad King wrote:
>> In summary, I'd like to help you folks address these issues.  Some of
>> the work will be in Boost's CMake code and some in CMake itself.  The
>> work will benefit both projects.  We can arrange to meet at BoostCon,
>> but we can probably get a lot of discussion done on this list before
>> then.  BTW, can anyone suggest a preferred format for a BoostCon
>> session from the boost-cmake-devs' point of view?
> 
> I don't personally see a formal presentation to boost-cmake devs as 
> being useful, there just aren't enough of us (last I checked there
> were three).  I'd suggest we just sit down together... there are
> plenty of conference rooms available at all times.

Sure.  We can look at the conference schedule when it is available and
choose a time to meet.

> The boost-cmake-for-users talk could of course reflect whatever we
> get done between now and then.

Has anyone submitted anything for this yet?  We (Kitware) can present
our CMake/CTest/CDash/CPack software process in general, but the
boost-specific part should probably be done by one of its authors or
maintainers.  Do you want to do a combined (two-part) session, or should
I submit a separate proposal for the general part?

-Brad


Re: [Boost-cmake] Analysis of the current CMake system

2009-01-15 Thread Beman Dawes
On Wed, Jan 14, 2009 at 6:16 PM, Doug Gregor  wrote:
> On Wed, Jan 14, 2009 at 3:02 PM, Beman Dawes  wrote:
>> On Wed, Jan 14, 2009 at 11:52 AM, Brad King  wrote:
>>>..
>>> One of the goals of CMake is to let developers use their favorite
>>> native tools.
>>
>> Horrors! As a boost developer, the last thing in the world I want is
>> to have to know anything about a platform's native tools. I just want
>> to be able to enter the CMake equivalent of "bjam" in the directory
>> I'm testing, and have it build and run all tests for all installed
>> compilers. Perhaps with appropriate arguments if I only want to test
> with a subset of the compilers, or a single test, or pass in some
>> compiler arguments.
>
> This is exactly the argument that got us into our current build-tools
> mess. We've always placed so much emphasis on making things easy for
> Boost *developers* that we've made them extremely tough for Boost
> *users*. This feature---the ability to run "bjam" once and run
> everything across multiple compilers---is responsible for the majority
> of the damage, because we've been architecting bjam for multiple
> compilers at the expense of the common case of a single system
> compiler.

We had this discussion before and decided it would be fine for boost
developers if what they were running was actually just a script that
called the underlying CMake setups multiple times. But at the
level of reporting, both local and on the web, there has to be a
single summary available that brings together all test results.

> Have you tried helping a Boost newbie go through the process of
> building and installing Boost lately? It's extremely painful, but we
> don't see that pain because we've all gone through the initial hurdles
> of getting bjam set up just right for our own configurations.

I certainly see the pain - I'm the one who does the builds for a
client of mine. It is a painful mess now, that's for sure.

Any progress on pre-built binaries?

> That's
> the wrong thing to optimize: we need to optimize for the case where a
> new user downloads Boost and wants to build/use it immediately. Those
> users only care about a single compiler---the one they use for their
> day-to-day work---and would greatly benefit from being able to use
> their platform-specific tools (Visual Studio, Xcode, whatever).

True, but they have to build several variants for that compiler, and
on some platforms the 32 and 64 bit flavors are more like two separate
compilers.

> If we're going to go through the effort of introducing a new build
> system, we need to focus on the user experience first and foremost.

If you don't give the developers tools they can use, they won't switch to CMake.

If you don't give the release managers tools they can use, they won't
switch to CMake.

And of course the users' needs have to be met too.

It is a three-legged stool. All three legs have to bear the weight, or
it topples.

--Beman


Re: [Boost-cmake] Analysis of the current CMake system

2009-01-15 Thread Beman Dawes
On Thu, Jan 15, 2009 at 9:30 AM, David Abrahams  wrote:
>
> on Wed Jan 14 2009, "troy d. straszheim"  wrote:
>
>> Hi Brad,
>>
>> There is a lot to discuss here.  I'll go back later and make specific 
>> comments. It'd
>> be great to talk in person at boostcon, (boostcon rocks, by the way.)
>>
>> I understand/agree with a lot of your points (especially bulkiness, and the 
>> need to
>> reduce the number of toplevel targets), in most cases because I've learned 
>> more about
>> cmake since I implemented what is currently on the boost trunk.
>>
>> Brad King wrote:
>>> [snip]
>>>
>>> In summary, I'd like to help you folks address these issues.  Some of
>>> the work will be in Boost's CMake code and some in CMake itself.  The
>>> work will benefit both projects.  We can arrange to meet at BoostCon,
>>> but we can probably get a lot of discussion done on this list before
>>> then.  BTW, can anyone suggest a preferred format for a BoostCon
>>> session from the boost-cmake-devs' point of view?
>>
>> I don't personally see a formal presentation to boost-cmake devs as being 
>> useful,
>> there just aren't enough of us (last I checked there were three).
>
> Who are you counting?  I don't think I've done anything substantial with
> Boost-CMake but would still be *very* interested in such a talk.

Likewise. And I am sure there will be other BoostCon attendees who
aren't reading this list who will also be very interested.

--Beman


Re: [Boost-cmake] Analysis of the current CMake system

2009-01-15 Thread David Abrahams

on Wed Jan 14 2009, "troy d. straszheim"  wrote:

> Hi Brad,
>
> There is a lot to discuss here.  I'll go back later and make specific 
> comments. It'd
> be great to talk in person at boostcon, (boostcon rocks, by the way.)
>
> I understand/agree with a lot of your points (especially bulkiness, and the 
> need to
> reduce the number of toplevel targets), in most cases because I've learned 
> more about
> cmake since I implemented what is currently on the boost trunk.
>
> Brad King wrote:
>> [snip] 
>>
>> In summary, I'd like to help you folks address these issues.  Some of
>> the work will be in Boost's CMake code and some in CMake itself.  The
>> work will benefit both projects.  We can arrange to meet at BoostCon,
>> but we can probably get a lot of discussion done on this list before
>> then.  BTW, can anyone suggest a preferred format for a BoostCon
>> session from the boost-cmake-devs' point of view?
>
> I don't personally see a formal presentation to boost-cmake devs as being 
> useful,
> there just aren't enough of us (last I checked there were three).

Who are you counting?  I don't think I've done anything substantial with
Boost-CMake but would still be *very* interested in such a talk.

-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


Re: [Boost-cmake] Analysis of the current CMake system

2009-01-14 Thread Doug Gregor
On Wed, Jan 14, 2009 at 3:02 PM, Beman Dawes  wrote:
> On Wed, Jan 14, 2009 at 11:52 AM, Brad King  wrote:
>>..
>> One of the goals of CMake is to let developers use their favorite
>> native tools.
>
> Horrors! As a boost developer, the last thing in the world I want is
> to have to know anything about a platform's native tools. I just want
> to be able to enter the CMake equivalent of "bjam" in the directory
> I'm testing, and have it build and run all tests for all installed
> compilers. Perhaps with appropriate arguments if I only want to test
> with a subset of the compilers, or a single test, or pass in some
> compiler arguments.

This is exactly the argument that got us into our current build-tools
mess. We've always placed so much emphasis on making things easy for
Boost *developers* that we've made them extremely tough for Boost
*users*. This feature---the ability to run "bjam" once and run
everything across multiple compilers---is responsible for the majority
of the damage, because we've been architecting bjam for multiple
compilers at the expense of the common case of a single system
compiler.

Have you tried helping a Boost newbie go through the process of
building and installing Boost lately? It's extremely painful, but we
don't see that pain because we've all gone through the initial hurdles
of getting bjam set up just right for our own configurations. That's
the wrong thing to optimize: we need to optimize for the case where a
new user downloads Boost and wants to build/use it immediately. Those
users only care about a single compiler---the one they use for their
day-to-day work---and would greatly benefit from being able to use
their platform-specific tools (Visual Studio, Xcode, whatever).

If we're going to go through the effort of introducing a new build
system, we need to focus on the user experience first and foremost.

  - Doug


Re: [Boost-cmake] Analysis of the current CMake system

2009-01-14 Thread Beman Dawes
On Wed, Jan 14, 2009 at 11:52 AM, Brad King  wrote:
>..
> One of the goals of CMake is to let developers use their favorite
> native tools.

Horrors! As a boost developer, the last thing in the world I want is
to have to know anything about a platform's native tools. I just want
to be able to enter the CMake equivalent of "bjam" in the directory
I'm testing, and have it build and run all tests for all installed
compilers. Perhaps with appropriate arguments if I only want to test
with a subset of the compilers, or a single test, or pass in some
compiler arguments.

And of course I'll never even have access to most of the platforms my
tests run on. So the only native development tool is the regression
test reporting system.

You probably know all that, and it doesn't take anything away from
many of the things you are saying. But the fact that a Boost developer
doesn't even have access to native tools on many platforms needs to be
kept in mind. IIUC, this line of argument supports your suggestion
below that "This leaves the Makefile generators as the only option."

>...
> The problem lies in building the tests.  These do not belong in
> high-level targets for the reasons described above.  I'd like to help
> you create an alternative solution.

The offer is much appreciated!

--Beman


Re: [Boost-cmake] Analysis of the current CMake system

2009-01-14 Thread troy d. straszheim

Hi Brad,

There is a lot to discuss here.  I'll go back later and make specific comments. 
 It'd be great to talk in person at boostcon, (boostcon rocks, by the way.)


I understand/agree with a lot of your points (especially bulkiness, and the need 
to reduce the number of toplevel targets), in most cases because I've learned 
more about cmake since I implemented what is currently on the boost trunk.


Brad King wrote:
[snip] 


In summary, I'd like to help you folks address these issues.  Some of
the work will be in Boost's CMake code and some in CMake itself.  The
work will benefit both projects.  We can arrange to meet at BoostCon,
but we can probably get a lot of discussion done on this list before
then.  BTW, can anyone suggest a preferred format for a BoostCon
session from the boost-cmake-devs' point of view?


I don't personally see a formal presentation to boost-cmake devs as being 
useful, there just aren't enough of us (last I checked there were three).

I'd suggest we just sit down together... there are plenty of conference rooms
available at all times.   The boost-cmake-for-users talk could of course reflect
whatever we get done between now and then.

-t



[Boost-cmake] Analysis of the current CMake system

2009-01-14 Thread Brad King
Hi Folks,

I'm considering attending BoostCon 2009 to provide developer-level
CMake expertise, and I'm looking into proposing a session as Hartmut
requested.  In preparation I've downloaded and tried the current
system and read back through some of the discussion on this list:

  http://thread.gmane.org/gmane.comp.lib.boost.cmake/4
  http://thread.gmane.org/gmane.comp.lib.boost.cmake/10

The current system feels very bulky compared to CMake-generated build
systems I've used for projects of comparable size.  This is primarily
due to the use of top-level targets for the tests.  IMO the efforts to
provide test-to-library dependencies and to avoid log-scraping have
steered the system off-course.

One of the goals of CMake is to let developers use their favorite
native tools.  These include Visual Studio and Xcode along with Make
tools.  In order to support these tools CMake's build model separates
high-level targets from low-level file dependencies.  The
add_executable(), add_library(), and add_custom_target() commands
create high-level targets.  Each target contains file-level rules to
build individual sources.
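For readers less familiar with CMake, the three commands can be
illustrated with a minimal sketch; the target and source names here are
invented for the example:

```cmake
# Each of these calls creates one high-level target.
add_library(boost_regex STATIC regex.cpp)         # a library target
add_executable(regex_demo demo.cpp)               # a program target
target_link_libraries(regex_demo boost_regex)
add_custom_target(docs COMMAND doxygen Doxyfile)  # arbitrary commands
```

The file-level rules (compiling regex.cpp, demo.cpp, running the
dependency scanner) live inside each of these targets, not at the top
level.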

In generated VS and Xcode projects the high-level targets become
top-level items in the GUIs.  These IDEs define file-level rules
inside each target.  In generated Makefiles there is a two-level
system in which the outer level knows only about inter-dependencies
among high-level targets and the inner level loads the file-level
rules inside each target.  This design yields fast builds because each
make process sees a relatively small set of rules (and makes automatic
dependency scanning easy and reliable).  It also makes the
representation of build rules in the IDEs tractable.  The key is that
there should be relatively few high-level targets compared to the
number of file-level rules.
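A rough sketch of that two-level layout, as the Makefile generator emits
it; the file names match what CMake actually writes, but the rule bodies
are simplified and the libA/libB target names are invented:

```make
# Top-level Makefile: every target is forwarded into Makefile2, which
# holds the dependency graph among high-level targets.
libA:
	$(MAKE) -f CMakeFiles/Makefile2 libA

# CMakeFiles/Makefile2: orders the high-level targets, then recurses
# into a per-target makefile containing only that target's file-level
# rules, so each make process parses a small rule set.
CMakeFiles/libA.dir/all: CMakeFiles/libB.dir/all
	$(MAKE) -f CMakeFiles/libA.dir/build.make CMakeFiles/libA.dir/build
```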

Currently Boost's CMake system creates about two high-level targets
for each test.  One is the add_executable() to build the test and the
other is the add_custom_target() to run the test.  This results in a
very large number of high-level targets.  The generated Xcode and VS
projects are simply too large for the IDEs to load (I waited 10
minutes for VS to try), which already defeats one purpose of using
CMake.  This leaves the Makefile generators as the only option.
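In CMake terms, each test today contributes roughly a pair like the
following; this is a guess at the shape of the expansion, not the actual
boost_test_run implementation:

```cmake
# Target 1: build the test executable.
add_executable(type_traits-rank_test rank_test.cpp)

# Target 2: run it.  With thousands of tests, each such pair adds
# entries to the outer make level (and to the IDE project files).
add_custom_target(type_traits-rank_test-run
  COMMAND type_traits-rank_test
  DEPENDS type_traits-rank_test)
```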

The large number of high-level targets places many rules in the outer
make level which leads to very long startup times (look at
CMakeFiles/Makefile2, which make needs to parse many times).  For
example, I run

  time make type_traits-rank_test VERBOSE=1

and get

  52.49s user 0.31s system 96% cpu 54.595 total

but only about 1s of that time was actually spent running the compiler
and linker.

Boost needs to build libraries, documentation, and other files to be
placed in the install tree.  Rules to build these parts can fit in
relatively few high-level targets and should certainly use them.  This
is currently done when not building the tests (BUILD_TESTING=OFF).
The problem lies in building the tests.  These do not belong in
high-level targets for the reasons described above.  I'd like to help
you create an alternative solution.

There are four kinds of tests:

  boost_test_run
  boost_test_run_fail
  boost_test_compile
  boost_test_compile_fail

Let's first consider the run and run_fail tests.  In our projects we
typically link all tests for a given package into one (or a few)
executable(s).  The executable's main() dispatches the individual
tests based on name.  For example, one might manually run the test
mentioned above like this:

  bin/type_traits_tests rank_test

This reduces the number of top-level targets in the build to one per
library to be tested.  It also reduces total link time and disk
usage (especially for static linking and when many tests share common
template instantiations).  However, there are some disadvantages:

  (a) If one test fails to compile, none of the tests in that executable can run
  (b) A bad test may accidentally link due to symbols from another test

Problem (a) is not a big deal IMO.  If a test doesn't compile, the
library it tests has a serious problem and needs manual attention
anyway.  Problem (b) may or may not be a big problem for Boost (it
isn't for us).  However, there is an alternative tied to the
treatment of compile_fail tests.

Let's now consider the compile and compile_fail tests.  The compile
tests could be built into a single executable along with the run and
run_fail tests above, but the compile_fail tests cannot.  Boost's
current solution drives the test compilation by generating an actual
compiler command line.  It bypasses CMake's knowledge of the native
build tools and tries to run its own command line which may not work
on all compilers.  Furthermore, this is not always representative of
how users build programs against boost (they may use CMake, or create
their own VS or Xcode project files).
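For comparison, a hedged sketch of how CTest itself can drive a compile
check through the real native build tool; the paths, generator, and
target name are placeholders:

```sh
# Configure and build a small external test project with the native
# tools, so a compile (or compile_fail) test behaves exactly as a
# user's own build would.
ctest --build-and-test tests/rank_test_src tests/rank_test_build \
      --build-generator "Unix Makefiles" \
      --build-target rank_test_compile
```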

CTest provides an explicit feature, its --build-and-test mode,
specifically for testing sample external projects built against