Re: [boost] is the link-fail test working correctly in the regression tests?

2003-01-05 Thread David Abrahams
"John Maddock" <[EMAIL PROTECTED]> writes:

>> That sounds like a smart move.  It should be easy enough if we can
>> encode that feature into the toolsets.  Can you take care of that part
>> of the job?  If so, it would be very easy for me to update testing.jam
>> and we'd be done.
>
> Not easily, I don't currently have access to those compilers (although the
> options used for EDG based compilers are documented in EDG's generic docs):
> we really need to get some more regression tests running.

It's -tused you're referring to, isn't it?

>> > Actually this is less of a problem here (for config tests), because
>> > we would expect the tests to build and run on some platforms, so the
>> > tests get tested in both directions (that there are some platforms
>> > where they should build and do so, and others where they should not
>> > and don't do so).
>>
>> I'm still very confused about this one, but I have an inkling of what
>> might be going on.  I can understand how you could make a config test
>> which would want to be (compile-fail|run-success), and I can
>> understand how you could use return-code-inversion to make it
>> (compile-fail|run-fail), but I can't understand what kind of useful
>> test could be (compile-fail|link-fail|run-fail).
>
> Let me try again. We have a series of config regression tests (one per
> macro), and, taking feature macros for example, each can be tested in two
> directions:
>
> The macro is defined in our config: verify that the test code compiles,
> links, and runs.
> The macro is not defined in our config: verify that trying to
> compile+link+run fails at some point (otherwise we could enable this
> feature).

Right... but I'm still a little confused.  I don't think you actually
want to test "in two directions".  Presumably you want to have a
single test which checks that the macro is set appropriately, no?

> For example, consider BOOST_HAS_PTHREAD_MUTEXATTR_SETTYPE: there are three
> reasons why we might not want to set this:
> 1) the function is not present in the headers (code doesn't compile, because
> the API is unsupported).
> 2) the function is present but linking fails (it's in the header but not the
> library - probably the toolset is set up wrongly for multithreaded code, or
> some other such problem)

I'm not convinced that a failure at this stage should make the test
succeed if it's just reflecting a problem with the toolset.

> 3) compiling and linking succeed, but the function doesn't actually work
> (it's a non-functioning stub); this situation does actually seem to be
> occurring on some platforms, leading to deadlocks when creating and using
> recursive mutexes.  The test doesn't currently test for this, but should do
> so if I can figure out how :-(
>
> To conclude then, if BOOST_HAS_PTHREAD_MUTEXATTR_SETTYPE is not set, then I
> want to be able to verify that the test code does not compile+link+run;
> otherwise the test should fail, because the macro should have been set.
>
> I hope that's making sense now,

Now it sounds like you're not "testing in two directions".  Oh, I see:
that "otherwise" does not refer to the case where
BOOST_HAS_PTHREAD_MUTEXATTR_SETTYPE is set; that's handled
differently.  What I still don't understand is how you're going to
write the Jamfile for this test, since the test type has to be
determined based on the value of the macro -- 'any-fail' if it's not
set and 'run' if it's set -- and there's no provision for that sort of
feedback from header files into the build system.

Maybe you're planning to build two tests?

#if BOOST_HAS_FEATURE && BOOST_EXPECT_FAIL
// feature is available, so the expected-failure variant is satisfied trivially
# error
#elif !BOOST_HAS_FEATURE && !BOOST_EXPECT_FAIL
// feature is unavailable, so the plain 'run' variant is satisfied trivially
int main() { return 0; }
#else

// test code here

#endif

And then build two tests with the same source code: a 'run' test, and
an 'any-fail' test with BOOST_EXPECT_FAIL ??
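
Just to make that concrete, here's roughly the shape I'd imagine for the
"// test code here" part in the pthread case -- only a sketch, not anything
lifted from the actual config tests, and note that some platforms spell the
type constant PTHREAD_MUTEX_RECURSIVE_NP instead:

#include <pthread.h>

int main()
{
    pthread_mutexattr_t attr;
    pthread_mutex_t m;

    // fails to compile if the API isn't declared (case 1), fails to link
    // if the symbol isn't in the library (case 2)
    if (pthread_mutexattr_init(&attr) != 0) return 1;
    if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE) != 0) return 1;
    if (pthread_mutex_init(&m, &attr) != 0) return 1;

    // case 3 (non-functioning stub): a recursive mutex must be lockable
    // twice from the same thread; a stub would deadlock or return an error,
    // so the run step fails (or times out) instead of passing
    if (pthread_mutex_lock(&m) != 0) return 1;
    if (pthread_mutex_lock(&m) != 0) return 1;
    pthread_mutex_unlock(&m);
    pthread_mutex_unlock(&m);

    pthread_mutex_destroy(&m);
    pthread_mutexattr_destroy(&attr);
    return 0;
}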

-Dave

-- 
   David Abrahams
   [EMAIL PROTECTED] * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution




Re: [boost] is the link-fail test working correctly in the regression tests?

2003-01-04 Thread David Abrahams
"John Maddock" <[EMAIL PROTECTED]> writes:

>> > the problem remains: if we have a "compile-fail" test, the failure
>> > may be delayed until link time if the compiler does link-time
>> > template instantiation.  The reason we're not seeing this cropping
>> > up in the current tests is that the compilers that were exhibiting
>> > that behaviour are no longer being tested (SGI's compiler, for example).
>>
>> OK, I believe you.  What I'm suggesting is that we ought to check for
>> specific compilers which do this, and do an explicit
>> compile-or-link-fail test in that case for all current compile-fail
>> tests.  I believe it is too hard for programmers to keep track of
>> which expected compilation failures may involve template
>> instantiation.  In fact most of them do, so we'd have to change most
>> of our compile-fail tests to say link-fail.
>
> OK, but that implies that most current compile-fail tests would need
> to have an "int main(){}" added.  Actually, thinking about it, most
> compilers that do link-time template instantiation have an option to
> force the instantiation of all used templates (at compile time), so
> maybe the way to handle this is just to modify the compiler
> requirements inside the compile-fail rule definition?

That sounds like a smart move.  It should be easy enough if we can
encode that feature into the toolsets.  Can you take care of that part
of the job?  If so, it would be very easy for me to update testing.jam
and we'd be done.

>> >> Maybe we need some platform/compiler-dependent configuration which
>> >> chooses the appropriate criterion for success.
>> >
>> > It's not unreliable at all, it's the exact negative of a run test.  It
>> > allows a negative to be tested: that if a feature macro is *not* set,
>> > then a failure should occur if it is set, otherwise we are possibly
>> > mis-configured.
>>
>> My point is that it might easily report false successes when something
>> else is wrong, e.g. you just made a typo in a variable name.
>
> Which is true for all compile-fail tests as well.  

Yes.  All I'm saying is that a regular run-fail test has stricter
requirements.  Simple typos that just create compilation errors will
not allow them to succeed.  That's why I don't want to replace
run-fail with your "compile/link/run fail"...

...although now the only expected failure tests we have left are
compile-fail.  So I don't know what to do with the others.

> Actually this is less of a problem here (for config tests), because
> we would expect the tests to build and run on some platforms, so the
> tests get tested in both directions (that there are some platforms
> where they should build and do so, and others where they should not
> and don't do so).

I'm still very confused about this one, but I have an inkling of what
might be going on.  I can understand how you could make a config test
which would want to be (compile-fail|run-success), and I can
understand how you could use return-code-inversion to make it
(compile-fail|run-fail), but I can't understand what kind of useful
test could be (compile-fail|link-fail|run-fail).

-Dave

-- 
   David Abrahams
   [EMAIL PROTECTED] * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution




Re: [boost] is the link-fail test working correctly in the regression tests?

2003-01-01 Thread David Abrahams
"John Maddock" <[EMAIL PROTECTED]> writes:

>> I intentionally changed it because it seemed as though a test which
>> was supposed to fail to link, but which fails to compile, should not be
>> deemed a success.  I think I did this by analogy with run-fail, where
>> we were masking some actual compile-time failures which should not
>> have been registered as successes.
>>
>>
>> Of course we seem to have no tests which are really expected to fail
>> linking anymore...
>
> I can't actually think of any uses for that

There are some idioms which are expected to fail at link time but not
compile-time.  For example, suppose someone tries to copy or assign to
boost::noncopyable?
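
The usual shape of that kind of failure is a copy constructor that is
declared private but never defined: code that has access to it (a member or
a friend) compiles cleanly and only falls over when the linker can't find
the definition.  A contrived sketch, not real boost code:

class uncopyable                 // stand-in for the boost::noncopyable idea
{
public:
    uncopyable() {}
    uncopyable duplicate() const { return uncopyable(*this); }  // compiles fine
private:
    uncopyable(const uncopyable&);             // declared, never defined
    uncopyable& operator=(const uncopyable&);  // declared, never defined
};

int main()
{
    uncopyable x;
    x.duplicate();   // no diagnostic until the linker fails to find the
                     // copy constructor's definition
    return 0;
}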

> the problem remains: if we have a "compile-fail" test, the failure
> may be delayed until link time if the compiler does link-time
> template instantiation.  The reason we're not seeing this cropping
> up in the current tests is that the compilers that were exhibiting
> that behaviour are no longer being tested (SGI's compiler, for example).

OK, I believe you.  What I'm suggesting is that we ought to check for
specific compilers which do this, and do an explicit
compile-or-link-fail test in that case for all current compile-fail
tests.  I believe it is too hard for programmers to keep track of
which expected compilation failures may involve template
instantiation.  In fact most of them do, so we'd have to change most
of our compile-fail tests to say link-fail.
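
A typical case looks something like this (a made-up sketch, not one of the
actual tests): the error only exists inside a template body, so whether it
shows up at the compile step or the link step depends on when the compiler
instantiates that body.

template <class T>
void must_not_compile(T)
{
    // the error is only issued when this body is instantiated; a compiler
    // that defers instantiation to link time reports nothing at the compile
    // step, and the expected failure slips through to the link step
    typedef char assertion[sizeof(T) == 0 ? 1 : -1];
}

int main()
{
    must_not_compile(1.0);   // the instantiation is what triggers the failure
    return 0;
}

An option that forces instantiation of all used templates at compile time
would put the diagnostic back where the compile-fail rule expects it.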

>> > BTW I could use an equivalent run-fail test for boost-config,
>> > meaning: "this file either doesn't compile, link, or run", which is
>> > of course the opposite of the current run-fail.  So a better naming
>> > convention is required all round :-)
>>
>> Wow, that sounds like a pretty unreliable test.  There are so many
>> ways things can go wrong, and you want to accept any of them?
>>
>> Maybe we need some platform/compiler-dependent configuration which
>> chooses the appropriate criterion for success.
>
> It's not unreliable at all, it's the exact negative of a run test.  It
> allows a negative to be tested: that if a feature macro is *not* set, then a
> failure should occur if it is set, otherwise we are possibly mis-configured.

My point is that it might easily report false successes when something
else is wrong, e.g. you just made a typo in a variable name.

-- 
   David Abrahams
   [EMAIL PROTECTED] * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution




Re: [boost] is the link-fail test working correctly in the regression tests?

2002-12-29 Thread David Abrahams
"John Maddock" <[EMAIL PROTECTED]> writes:

>>
>> That test seems to not compile.  A test that is supposed to not link
>> fails if it doesn't even get to the link stage.
>>
>> Why is this test labelled link-fail?
>> I don't know.  Jeremy?
>
> That's not the meaning of the original link-fail test: we started
> off with compile-fail, but because some compilers don't instantiate
> templates until link time, we had to introduce link-fail to mean:
> "either this doesn't compile, or it compiles but doesn't link".
> Obviously the meaning got lost somewhere.  

I intentionally changed it because it seemed as though a test which
was supposed to fail to link, but which fails to compile, should not be
deemed a success.  I think I did this by analogy with run-fail, where
we were masking some actual compile-time failures which should not
have been registered as successes.  


Of course we seem to have no tests which are really expected to fail
linking anymore...

> BTW I could use an equivalent run-fail test for boost-config,
> meaning: "this file either doesn't compile, link, or run", which is
> of course the opposite of the current run-fail.  So a better naming
> convention is required all round :-)

Wow, that sounds like a pretty unreliable test.  There are so many
ways things can go wrong, and you want to accept any of them?

Maybe we need some platform/compiler-dependent configuration which
chooses the appropriate criterion for success.
-- 
   David Abrahams
   [EMAIL PROTECTED] * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution




Re: [boost] is the link-fail test working correctly in the regression tests?

2002-12-22 Thread David Abrahams
"John Maddock" <[EMAIL PROTECTED]> writes:

> I notice from the Win32 regression test results that all compilers are
> listed as failing the static_assert_test_fail_8 "link-fail" test.  Yet when
> I look at the actual Jam output they are actually *passing* the test.  Any
> ideas what's going on?

That test seems to not compile.  A test that is supposed to not link
fails if it doesn't even get to the link stage.

Why is this test labelled link-fail?
I don't know.  Jeremy?

-- 
   David Abrahams
   [EMAIL PROTECTED] * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution
