At 08:19 PM 5/28/2003, Aleksey Gurtovoy wrote:
>Eric Friedman wrote:
>> I apologize if this has already been asked, but why aren't the
>> libs/mpl/test sources included in regression testing? I know some
>> tests are missing and some are perhaps not as robust as they might
>> be, but it seems some testing is better than no testing.
>
>Definitely, and besides, although not systematic, the tests do cover
>most of the library's functionality.
>
>As Beman already replied, the reason they are not included in the
>main Boost regression run is two-fold. First, given the large number
>of tests and the current format of the compiler status table, it
>would make the latter even more uninformative, to the point of being
>useless (for a human reader, at least). Secondly, many tests are
>compile-time intensive (and some compilers are notoriously slow with
>templates), which for a typical regression run on 8-10 compilers means
>about an hour of additional time. Unless regressions are run on a
>designated standalone machine, that can be too much.
>
>That's not to say that the situation is not going to improve, though
>- here at Meta we have enough computation resources that the last
>issue can be ignored, and solving the first one is on our to-do
>list (we are already running regular nightly regressions -
>http://boost.sourceforge.net/regression-logs/cs-win32_metacomm.html).

One possible short-term fix might be to run the MPL tests separately and post the results as a separate table.

Long term, some kind of hierarchical approach might help with the reporting side of the equation, perhaps with an intermediate web page that collapses all of a library's tests down to one line. Rene's summary page shows that a relatively small organizational effort can make reporting much more accessible.
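For what it's worth, the collapsing step could be as simple as the sketch below. Everything here is hypothetical illustration — the library names, test names, and the `summarize` helper are made up; a real report generator would read the bjam test logs rather than an in-memory dictionary. It only shows the roll-up an intermediate page might do: one summary line per library instead of one row per test.

```python
def summarize(results):
    """Collapse per-test pass/fail results into one line per library.

    results: {library: {test_name: passed (bool)}}
    returns: {library: 'pass (n)' or 'fail (k/n)'}
    """
    summary = {}
    for lib, tests in results.items():
        failures = [name for name, ok in tests.items() if not ok]
        if failures:
            summary[lib] = "fail (%d/%d)" % (len(failures), len(tests))
        else:
            summary[lib] = "pass (%d)" % len(tests)
    return summary

# Hypothetical data, just to show the shape of the output.
results = {
    "mpl": {"apply": True, "fold": True, "vector": False},
    "variant": {"basic": True},
}
print(summarize(results))
```

A drill-down page could then expand any `fail` line back into the full per-test table, keeping the top-level status page readable even with hundreds of MPL tests.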

Ideas appreciated.

--Beman


_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
