Re: [boost] test_fp_comparisons and rounding errors

2003-07-05 Thread Guillaume Melquiond
On Sat, 5 Jul 2003, Beman Dawes wrote:

> So where does that leave us? There must be some reason Gennadiy's code is
> producing results one or two epsilons greater than expected.

It's just because the test cases are wrong. For example, at line 157 there
is:

tmp = 11; tmp /= 10;
BOOST_CHECK_CLOSE_SHOULD_PASS_N( (tmp*tmp-tmp), 11./100, 1+3 );

The test will pass only if the result is at most 2 ulps away. Is that
possible?

Let t be tmp and e1, e2 be rounding error terms (each smaller than ulp/2).

rounded(rounded(t*t) - t)
  = ((t*t) * (1+e1) - t) * (1+e2)
  = t*t - t + t*t * (e1+e2+e1*e2) - t*e2

So how much is the relative error? For example, if e1=ulp/2 and e2=0, the
absolute error reaches t*t*ulp/2 = 0.605*ulp. So the relative error is
0.605*ulp / (11/100) = 5.5*ulp. The result may be 6 ulps away! (and maybe
more)

Thinking that rounding errors simply add up is a misunderstanding of
floating-point arithmetic (if it were that easy, interval arithmetic
wouldn't be useful).
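
For those who want to check, a quick way to count the actual distance is to
walk from one value to the other with nextafter (just a sketch of mine, not
Boost.Test code; std::nextafter is C99/C++11, older compilers may only have
::nextafter in <math.h>):

#include <cmath>
#include <iostream>

int main()
{
    double tmp = 11; tmp /= 10;
    double lhs = tmp * tmp - tmp;   // rounded(rounded(t*t) - t)
    double rhs = 11. / 100;         // rounded(11/100)

    // count the representable doubles separating the two results
    int ulps = 0;
    for ( double x = rhs; x != lhs; x = std::nextafter( x, lhs ) )
        ++ulps;

    std::cout << "lhs and rhs are " << ulps << " ulps apart\n";
    return 0;
}

On an IEEE-754 double this should print a distance of several ulps, well
above the 2 ulps the test allows.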

Guillaume

___
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] Re: Re: is_nan

2003-07-05 Thread Guillaume Melquiond
On Sat, 5 Jul 2003, Fernando Cacciola wrote:

> Thanks to Gabriel we may have an is_nan() right now.
> Is there anything else that the interval library uses which might be better
> packed as a compiler-platform specific routine?

All the hardware rounding mode selection stuff. It's equivalent to the
<fenv.h> C header file. In the interval library, it's handled by at least
9 files (all the boost/numeric/interval/detail/*_rounding_control.hpp
headers, for example), and new files may be added each time a new
compiler-platform combination needs support.
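
What those headers provide is essentially this, expressed with the C99
interface (a sketch; the real detail headers use platform-specific
mechanisms precisely because <fenv.h> cannot be relied on everywhere):

#include <fenv.h>   /* C99; many current compilers lack it, hence the detail headers */
#include <stdio.h>

int main()
{
    /* volatile operands keep the compiler from folding 1/3 at compile time;
       strictly, #pragma STDC FENV_ACCESS ON is also required */
    volatile double one = 1.0, three = 3.0;

    fesetround( FE_DOWNWARD );
    double lo = one / three;        /* rounded toward -infinity */
    fesetround( FE_UPWARD );
    double hi = one / three;        /* rounded toward +infinity */
    fesetround( FE_TONEAREST );     /* restore the default mode */

    printf( "%.17g <= 1/3 <= %.17g\n", lo, hi );
    return 0;
}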

Guillaume

___
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


[boost] Re: test_fp_comparisons and rounding errors

2003-07-05 Thread Gennadiy Rozental
"Beman Dawes" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> test_fp_comparisons has been failing for a long time. The issue has to do
> with how much tolerance to allow for rounding errors.
>
> The close_at_tolerance algorithm calculates tolerance as follows:
>
> n*std::numeric_limits<FPT>::epsilon()/2   (1)
>
> where n is the number of possible floating-point rounding errors.
>
> The particular test case that is failing calls the comparison algorithm
> with a rounding error argument of 4. That allows for up to 2 epsilons
> tolerance.

Thanks, Beman, for bringing this up. I was actually playing with this test
lately. It does indeed fail on the following comparison:

I compare 0.11 with 0.11 (computed in two different ways).

Left side calculated as: 11./100 (1 rounding error).
Right side calculated as: tmp = 11./10 (1 rounding error); tmp*tmp - tmp (2
rounding errors).

In sum we have 1+1+2 = 4 rounding errors. According to my understanding, the
relative error of the calculation should not exceed the tolerance calculated
with formula (1) above. This statement does not hold.

What I've tried to do is to analyze the ratio "relative error"/(epsilon/2)
for different compilers and FPTs. Here is the code:




#include <iostream>
#include <limits>

// stand-in for Boost.Test's detail::fpt_abs
template<typename FPT>
FPT fpt_abs( FPT v ) { return v < 0 ? -v : v; }

template<typename FPT>
void foo( FPT* = 0 )
{
    FPT tmp  = 11;
    tmp /= 10;
    FPT tmp1 = tmp * tmp - tmp;

    FPT tmp2 = 11./100;

    FPT diff = fpt_abs( tmp1 - tmp2 );

    FPT d1 = diff / tmp1;
    FPT d2 = diff / tmp2;

    FPT r1 = d1 / std::numeric_limits<FPT>::epsilon() * 2;
    FPT r2 = d2 / std::numeric_limits<FPT>::epsilon() * 2;

    std::cout << "diff= "    << diff << std::endl;
    std::cout << "epsilon= " << std::numeric_limits<FPT>::epsilon() << std::endl;
    std::cout << "r1= " << r1 << std::endl;
    std::cout << "r2= " << r2 << std::endl;
}

int
main()
{
    foo<float>();
    foo<double>();
    foo<long double>();

    return 0;
}




Here are the results produced by different compilers:

Borland command line:
---
diff= 2.98023e-08
epsilon= 1.19209e-07
r1= 4.54545
r2= 4.54545
diff= 1.11022e-16
epsilon= 2.22045e-16
r1= 9.09091
r2= 9.09091
diff= 5.42101e-19
epsilon= 1.0842e-19
r1= 90.9091
r2= 90.9091

Metrowerks
-
diff= 2.98023e-08
epsilon= 1.19209e-07
r1= 4.54545
r2= 4.54545
diff= 9.71445e-17
epsilon= 2.22045e-16
r1= 7.95455
r2= 7.95455
diff= 9.71445e-17
epsilon= 2.22045e-16
r1= 7.95455
r2= 7.95455

GCC 3.2.3
---
diff= 2.98023e-08
epsilon= 1.19209e-07
r1= 4.54545
r2= 4.54545
diff= 1.11022e-16
epsilon= 2.22045e-16
r1= 9.09091
r2= 9.09091
diff= 5.42101e-19
epsilon= 1.0842e-19
r1= 90.9091
r2= 90.9091

MSVC 6.5
--------
diff= 0
epsilon= 1.19209e-007
r1= 0
r2= 0
diff= 9.71445e-017
epsilon= 2.22045e-016
r1= 7.95455
r2= 7.95455
diff= 9.71445e-017
epsilon= 2.22045e-016
r1= 7.95455
r2= 7.95455

MSVC6.5 + STLport
--
diff= 0
epsilon= 1.19209e-007
r1= 0
r2= 0
diff= 9.71445e-017
epsilon= 2.22045e-016
r1= 7.95455
r2= 7.95455
diff= 9.71445e-017
epsilon= 2.22045e-016
r1= 7.95455
r2= 7.95455

MSVC7.1
--
diff= 2.98023e-008
epsilon= 1.19209e-007
r1= 4.54545
r2= 4.54545
diff= 9.71445e-017
epsilon= 2.22045e-016
r1= 7.95455
r2= 7.95455
diff= 9.71445e-017
epsilon= 2.22045e-016
r1= 7.95455
r2= 7.95455

Mingw
---
diff= 2.98023e-08
epsilon= 1.19209e-07
r1= 4.54545
r2= 4.54545
diff= 9.71445e-17
epsilon= 1.11022e-16
r1= 15.9091
r2= 15.9091
diff= 9.71445e-17
epsilon= 1.11022e-16
r1= 15.9091
r2= 15.9091




As you can see, it's never less than 4 (whenever the difference is nonzero
at all).

One idea I had is that maybe the testing tool itself introduces an extra
error during the relative error calculation. It takes an extra 6(?)
operations before the comparison is performed. Still, that does not explain
all the results above, and I am not sure it's correct.
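
A cheap way to test that hypothesis (my sketch, separate from the attached
program) is to redo only the checking arithmetic in long double, so the
tool's own operations add (nearly) no error; if the printed ratio barely
changes, the tool is not the culprit. (This assumes long double is actually
wider than double, which on MSVC it is not.)

#include <iostream>
#include <limits>

int main()
{
    double tmp = 11; tmp /= 10;
    double tmp1 = tmp * tmp - tmp;
    double tmp2 = 11. / 100;

    // widen only the *checking* arithmetic, not the computation under test
    long double diff = tmp1 < tmp2 ? (long double)tmp2 - tmp1
                                   : (long double)tmp1 - tmp2;
    long double r2 = diff / tmp2 / std::numeric_limits<double>::epsilon() * 2;

    std::cout << "r2 (checker error removed) = " << (double)r2 << std::endl;
    return 0;
}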

I attached the test program for anybody interested.

Regards,

Gennadiy.


___
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


[boost] Re: Re: is_nan

2003-07-05 Thread Fernando Cacciola
"Joel de Guzman" <[EMAIL PROTECTED]> escribió en el mensaje news:[EMAIL PROTECTED]
> Fernando Cacciola <[EMAIL PROTECTED]> wrote:
> > Gabriel Dos Reis <[EMAIL PROTECTED]> wrote in
>
> >> Yes.  It is an incorrect (unfortunately popular)
> >> implementation.
> >>
> > Right. We should say that more often. It is incorrect
> > however popular.
> >
> > Most compilers provide a non standard extension for this
> > purpose.
> > For instance, Borland uses _isnan.
> > In general, these extensions are found on <cmath>.
> > The best approach, IMO, is to have a boost::is_nan() with
> > compiler specific implementations.
>
> Hi,
>
> We have an is_nan(float) implementation (for quiet NaNs of course)
> that does just that. Right now, it supports most compilers on Win32 but it
> should be straightforward to support others. Right now, it is tested on:
>
> g++2.95.3 (MinGW)
> g++3.1 (MinGW)
> Borland 5.5.1
> Comeau 4.24 (Win32)
> Microsoft Visual C++ 6
> Microsoft Visual C++ 7
> Microsoft Visual C++ 7.1
> Metrowerks CodeWarrior 7.2
>
> The default implementation assumes IEEE754 floating point. It takes advantage
> of the fact that IEEE754 has a well defined binary layout for quiet NaNs. Platform
> specific implementations forward the call to the correct platform header when
> available. The code does not rely on the availability of numeric_limits.
>
> I hope this can be used as the basis of a standardized boost::is_nan utility.
> Thoughts?
>
It works for me.
Checking directly for the bit patterns is the way it has been done for years
on the Windows platform.
As long as this implementation is correctly dispatched from the user function,
I see no problem.

BTW, it just popped into my mind: could SFINAE be arranged to detect the
existence of a global non-member function? We could write an implementation
that dispatches to _isnan(), __isnan(), and so on if available.
(Though this would break the ODR, on the other hand.)
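
Something like the old sizeof trick might do it (a rough sketch of mine;
strictly an overload-resolution game rather than SFINAE proper, fragile,
and subject to the ODR caveat above):

#include <iostream>

// fallback with a deliberately odd return size; a declaration is enough,
// since sizeof does not evaluate its operand
struct isnan_fallback_t { char pad[64]; };
isnan_fallback_t _isnan( ... );

// if a real int _isnan(double) is also in scope (e.g. via <float.h> on
// Windows compilers), it is the better match, so the size differs
const bool has_isnan = sizeof( _isnan( 1.0 ) ) != sizeof( isnan_fallback_t );

int main()
{
    std::cout << "has _isnan: " << has_isnan << std::endl;
    return 0;
}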

Fernando Cacciola



___
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


[boost] Re: Re: is_nan

2003-07-05 Thread Fernando Cacciola


--
Fernando Cacciola
fernando_cacciola-at-movi-dot-com-dot-ar
"Guillaume Melquiond" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> On Fri, 4 Jul 2003, Fernando Cacciola wrote:
>
> > Gabriel Dos Reis <[EMAIL PROTECTED]> wrote in message
> > news:[EMAIL PROTECTED]
> > > "jvd" <[EMAIL PROTECTED]> writes:
> > >
> > > | Dear boosters,
> > > |
> > > | seems like this code
> > > |
> > > | template< typename T >
> > > | bool is_nan( const T& v )
> > > | {
> > > | return std::numeric_limits<T>::has_quiet_NaN && (v != v);
> > > | }
> > > |
> > > | does not work correctly on some machines.
> > >
> > > Yes.  It is an incorrect (unfortunately popular) implementation.
> > >
> > Right. We should say that more often. It is incorrect however popular.
>
> Yes, it is incorrect for C++. But it's something we can hope to see one
> day. For example, in the LIA-1 annex I about C language bindings, it is
> written that != is a binding for the IEEE-754 ?<> operator (unordered
> compare). In the C9X annex F.8.3 about relational operators, it is written
> that the optimization "x != x -> false" is not allowed since "The
> statement x != x is true if x is a NaN". And so on.
>
Yes of course... but we will have to wait until the LIA-1 bindings make it
into C++.
Not too long, I hope.

> > Most compilers provide a non standard extension for this purpose.
> > For instance, Borland uses _isnan.
> > In general, these extensions are found on <cmath>.
>
> In fact, since it is not specified by the C++ standard, isnan comes from
> the C headers and is supposed to be found in <math.h>.
>
Right... I was actually thinking of the C header but wrote it incorrectly.
I meant <math.h>.

> > The best approach, IMO, is to have a boost::is_nan() with compiler specific
> > implementations.
>
> Yes, and there also were discussions on this mailing-list about a
>  header. But unless somebody finds the time to tackle this
> whole problem...
>
Thanks to Gabriel we may have an is_nan() right now.
Is there anything else that the interval library uses which might be better
packed as a compiler-platform specific routine?

Fernando Cacciola




___
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] test_fp_comparisons and rounding errors

2003-07-05 Thread Beman Dawes
At 01:29 PM 7/5/2003, Guillaume Melquiond wrote:
>On Sat, 5 Jul 2003, Beman Dawes wrote:
>
>> fpt_abs may do a unary "-" operation. I'm assuming, perhaps incorrectly,
>> that it is essentially a subtraction and subject to a possible rounding
>> error.
>
>I don't think there is a system where the opposite of a representable
>number would not be a representable number. So there should be no rounding
>error when changing signs. At least not with IEEE-754 floating point
>numbers.
>
>Moreover, 0-x may lead to results different from -x. So changing sign is
>essentially "not" a subtraction :-).

So where does that leave us? There must be some reason Gennadiy's code is 
producing results one or two epsilons greater than expected.

--Beman

___
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] test_fp_comparisons and rounding errors

2003-07-05 Thread Guillaume Melquiond
On Sat, 5 Jul 2003, Beman Dawes wrote:

> fpt_abs may do a unary "-" operation. I'm assuming, perhaps incorrectly,
> that it is essentially a subtraction and subject to a possible rounding
> error.

I don't think there is a system where the opposite of a representable
number would not be a representable number. So there should be no rounding
error when changing signs. At least not with IEEE-754 floating point
numbers.

Moreover, 0-x may lead to results different from -x. So changing sign is
essentially "not" a subtraction :-).

Guillaume

PS: for those who wonder how 0-x can be different from -x, think about
exceptional cases like x=0 or x=NaN.
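
A two-line illustration of the x=0 case:

#include <cstdio>

int main()
{
    double x = 0.0;
    // 0 - (+0) rounds to +0, while -(+0) is -0: same value, different sign bit
    std::printf( "%g %g\n", 0.0 - x, -x );   // typically prints "0 -0"
    return 0;
}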

___
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] test_fp_comparisons and rounding errors

2003-07-05 Thread Beman Dawes
At 12:18 PM 7/5/2003, Guillaume Melquiond wrote:

>On Sat, 5 Jul 2003, Beman Dawes wrote:
>
>> What is happening here? It seems to me that the error checking code
>> itself that computes the values to be checked (d1 and d2) introduces one
>> to four possible additional rounding errors. (The error checking code
>> always does one subtraction, possibly one subtraction in an abs, possibly
>> another subtraction in an abs, and possibly one division). That would
>> account for the observed behavior.
>
>Sorry, I don't see all the subtractions you are speaking about, I only
>see these three floating-point operations in close_at_tolerance:
>
>  FPT diff = detail::fpt_abs( left - right );
>  FPT d1   = detail::safe_fpt_division( diff, detail::fpt_abs( right ) );
>  FPT d2   = detail::safe_fpt_division( diff, detail::fpt_abs( left ) );

fpt_abs may do a unary "-" operation. I'm assuming, perhaps incorrectly,
that it is essentially a subtraction and subject to a possible rounding
error.

--Beman

___
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] test_fp_comparisons and rounding errors

2003-07-05 Thread Guillaume Melquiond
On Sat, 5 Jul 2003, Beman Dawes wrote:

> test_fp_comparisons has been failing for a long time. The issue has to do
> with how much tolerance to allow for rounding errors.
>
> The close_at_tolerance algorithm calculates tolerance as follows:
>
> n*std::numeric_limits<FPT>::epsilon()/2
>
> where n is the number of possible floating-point rounding errors.
>
> The particular test case that is failing calls the comparison algorithm
> with a rounding error argument of 4. That allows for up to 2 epsilons
> tolerance.
>
> But if you step through the code, the actual error test values computed (d1
> and d2, in the code) are 3 epsilons for float, and 4 epsilons for double and
> long double.
>
> What is happening here? It seems to me that the error checking code itself
> that computes the values to be checked (d1 and d2) introduces one to four
> possible additional rounding errors. (The error checking code always does
> one subtraction, possibly one subtraction in an abs, possibly another
> subtraction in an abs, and possibly one division). That would account for
> the observed behavior.

Sorry, I don't see all the subtractions you are speaking about; I only
see these three floating-point operations in close_at_tolerance:

  FPT diff = detail::fpt_abs( left - right );
  FPT d1   = detail::safe_fpt_division( diff, detail::fpt_abs( right ) );
  FPT d2   = detail::safe_fpt_division( diff, detail::fpt_abs( left ) );

We can suppose the user doesn't want to compare numbers with a relative
error bigger than 1/2 (they would no longer be "close" numbers).
Consequently, the subtraction should not produce any rounding error (thanks
to Sterbenz's lemma). The divisions, however, will produce rounding errors.

Consequently, the relative errors d1 and d2 are off by ulp/2.
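
As a reminder, Sterbenz's lemma says that if b/2 <= a <= 2b (same signs),
then a - b is exactly representable, so the subtraction is exact. A spot
check (my own example, not code from the test):

#include <cassert>

int main()
{
    // 0.105/2 <= 0.11 <= 2*0.105, so the subtraction below is exact
    double a = 0.11, b = 0.105;
    assert( ( a - b ) + b == a );   // holds because a - b had no rounding error
    return 0;
}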

> Is this analysis correct? I know almost nothing about floating-point
> arithmetic, so it is probably flawed. But it looks to me as if the correct
> tolerance formula is (n+m)*std::numeric_limits<FPT>::epsilon()/2, where m is
> 1,2,3, or 4 depending on the exact logic path through

m should then be 1, unless I am mistaken.

> close_at_tolerance::operator(). It would also be easy to change the code to
> eliminate some of the operations which add additional possible rounding
> errors.

If the number of ulps is a power of 2 (and consequently tolerance is
2**(-k)), this code doesn't produce any rounding error:

  FPT diff = abs(a - b);
  bool c1 = diff <= tolerance * abs(a);
  bool c2 = diff <= tolerance * abs(b);

(if a and b are not too small)
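
Wrapped up as a predicate, it would be something along these lines (my
sketch of the strong form, where both conditions must hold; not the actual
close_at_tolerance code):

#include <cmath>
#include <limits>

// tolerance is assumed to be a power of two (e.g. n*eps/2 with n itself a
// power of two), so tolerance*abs(x) is computed without rounding error
template<typename FPT>
bool close( FPT a, FPT b, FPT tolerance )
{
    FPT diff = std::abs( a - b );
    return diff <= tolerance * std::abs( a )
        && diff <= tolerance * std::abs( b );
}

int main()
{
    double eps = std::numeric_limits<double>::epsilon();
    return close( 1.0, 1.0 + eps, 2 * eps ) ? 0 : 1;   // one ulp apart: close
}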

> If the above is all wrong, perhaps someone can offer the correct analysis
> for why the tests fail.
>
> --Beman

There are also some strange things.

For example, what is the meaning of
  BOOST_CHECK_CLOSE_SHOULD_PASS_N
   (1, 1+std::numeric_limits<FPT>::epsilon() / 2, 1); ?
Indeed, 1+epsilon/2 is equal to 1 (by definition of epsilon), so this test
doesn't have any interest, does it?
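
One can verify the identity directly (a one-assert check of my own):

#include <cassert>
#include <limits>

int main()
{
    // 1 + epsilon/2 is exactly halfway between 1 and the next double;
    // round-to-nearest-even sends it back to exactly 1
    assert( 1.0 + std::numeric_limits<double>::epsilon() / 2 == 1.0 );
    return 0;
}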

Regards,

Guillaume

PS: I find the logging of test_fp_comparisons really ugly. I hope it is
not intentional.

___
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


[boost] test_fp_comparisons and rounding errors

2003-07-05 Thread Beman Dawes
test_fp_comparisons has been failing for a long time. The issue has to do
with how much tolerance to allow for rounding errors.

The close_at_tolerance algorithm calculates tolerance as follows:

   n*std::numeric_limits<FPT>::epsilon()/2

where n is the number of possible floating-point rounding errors.

The particular test case that is failing calls the comparison algorithm 
with a rounding error argument of 4. That allows for up to 2 epsilons 
tolerance.

But if you step through the code, the actual error test values computed (d1
and d2, in the code) are 3 epsilons for float, and 4 epsilons for double and
long double.

What is happening here? It seems to me that the error checking code itself 
that computes the values to be checked (d1 and d2) introduces one to four 
possible additional rounding errors. (The error checking code always does 
one subtraction, possibly one subtraction in an abs, possibly another 
subtraction in an abs, and possibly one division). That would account for 
the observed behavior.

Is this analysis correct? I know almost nothing about floating-point 
arithmetic, so it is probably flawed. But it looks to me as if the correct 
tolerance formula is (n+m)*std::numeric_limits<FPT>::epsilon()/2, where m is
1,2,3, or 4 depending on the exact logic path through 
close_at_tolerance::operator(). It would also be easy to change the code to 
eliminate some of the operations which add additional possible rounding 
errors.
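
In code, the corrected formula would be something like this (a sketch; the
actual close_at_tolerance interface may differ):

#include <limits>

// n: possible rounding errors in the compared computations
// m: rounding errors added by the checking code itself (1 to 4?)
template<typename FPT>
FPT combined_tolerance( unsigned n, unsigned m )
{
    return ( n + m ) * std::numeric_limits<FPT>::epsilon() / 2;
}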

If the above is all wrong, perhaps someone can offer the correct analysis 
for why the tests fail.

--Beman 

___
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] Re: is_nan

2003-07-05 Thread Joel de Guzman
Fernando Cacciola <[EMAIL PROTECTED]> wrote:
> Gabriel Dos Reis <[EMAIL PROTECTED]> wrote in

>> Yes.  It is an incorrect (unfortunately popular)
>> implementation. 
>> 
> Right. We should say that more often. It is incorrect
> however popular. 
> 
> Most compilers provide a non standard extension for this
> purpose. 
> For instance, Borland uses _isnan.
> In general, these extensions are found on <cmath>.
> The best approach, IMO, is to have a boost::is_nan() with
> compiler specific implementations.

Hi,

We have an is_nan(float) implementation (for quiet NaNs of course) 
that does just that. Right now, it supports most compilers on Win32 but it
should be straightforward to support others. Right now, it is tested on:

g++2.95.3 (MinGW)
g++3.1 (MinGW)
Borland 5.5.1
Comeau 4.24 (Win32)
Microsoft Visual C++ 6
Microsoft Visual C++ 7
Microsoft Visual C++ 7.1
Metrowerks CodeWarrior 7.2

The default implementation assumes IEEE754 floating point. It takes advantage
of the fact that IEEE754 has a well defined binary layout for quiet NaNs. Platform
specific implementations forward the call to the correct platform header when 
available. The code does not rely on the availability of numeric_limits. 
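
The core of it is a bit test roughly like this (sketched from memory, not
the actual source; note it flags any NaN, and a quiet NaN additionally has
the top fraction bit, 0x00400000, set):

#include <cstring>

bool is_nan_sketch( float v )
{
    // IEEE754 single: NaN <=> exponent bits all ones, fraction non-zero
    unsigned int bits = 0;                  // assumes 32-bit unsigned int
    std::memcpy( &bits, &v, sizeof v );     // copy to avoid aliasing issues
    return ( bits & 0x7f800000u ) == 0x7f800000u   // exponent all ones
        && ( bits & 0x007fffffu ) != 0;            // fraction non-zero
}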

I hope this can be used as the basis of a standardized boost::is_nan utility.
Thoughts?

-- 
Joel de Guzman
joel at boost-consulting.com
http://www.boost-consulting.com
http://spirit.sf.net

___
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


[boost] Re: Boost::thread feature request: thread priority

2003-07-05 Thread Maxim Egorushkin
"Alexander Terekhov" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
>
> Maxim Egorushkin wrote:
> [...]
> > Seems like the issue is undefined behaviour when casting away volatile.
> > Do I get it right?
>
> Yes, UB is the issue ("one of the most obvious tips of the iceberg").
>
> Well, more on volatiles (low level stuff) can be found below. Uhmm,
> as for "higher level safety" (and also efficiency)... you might want
> to take a look at ADA's "protected types" (and the "proxy model"):
>
> http://groups.google.com/groups?threadm=3D7E18EB.BC80C2EB%40web.de
> http://groups.google.com/groups?threadm=3D7E24FC.E1AE86B0%40web.de
> (Subject: Re: POSIX Threads and Ada)

That was quite instructive. Thank you, Alexander, very much.



___
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] Compiler status for GCC 3.3

2003-07-05 Thread Joerg Walter

- Original Message -
From: "Gabriel Dos Reis" <[EMAIL PROTECTED]>
To: "Boost mailing list" <[EMAIL PROTECTED]>
Sent: Monday, June 30, 2003 12:06 PM
Subject: Re: [boost] Compiler status for GCC 3.3


[...]

> | I'm not sure about this. Paul C. Leopardi and Guillaume Melquiond already
> | reported the issue, Paul also analyzed it here
> | http://groups.yahoo.com/group/ublas-dev/message/676
> | http://groups.yahoo.com/group/ublas-dev/message/676
> |
> | In essence: setting -fabi-version=0 should solve the problem.
>
> On the other hand if your native compiler is GCC and your system was
> not configured with that setting, then you may get into trouble --
> since you'll be mixing translation units with different ABIs.

It sounds as if GCC 3.3 itself could be affected by the -fabi-version
setting. If, say, libstdc++ isn't binary compatible when built with
different -fabi-version settings, don't we then have two different compilers
depending on configure's -fabi-version?

Thanks,
Joerg


___
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost