test_fp_comparisions has been failing for a long time. The issue has to do with how much tolerance to allow for rounding errors.

The close_at_tolerance algorithm calculates tolerance as follows:

n*std::numeric_limits<T>::epsilon()/2

where n is the number of possible floating-point rounding errors.
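For concreteness, the computation amounts to something like this (just a sketch; fraction_tolerance is my name, not the library's):

    #include <limits>

    // Sketch of the tolerance: n possible rounding errors, each worth at
    // most half an epsilon of relative error.
    template <typename T>
    T fraction_tolerance(unsigned n)
    {
        return n * std::numeric_limits<T>::epsilon() / T(2);
    }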

The particular test case that is failing calls the comparison algorithm with a rounding-error argument of 4, which allows a tolerance of up to 2 epsilons.

But if you step through the code, the actual error values computed (d1 and d2 in the code) are 3 epsilons for float and 4 epsilons for double and long double.
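Here is roughly how I expressed those values in epsilons (a sketch; I'm assuming d1 and d2 are the relative differences measured against each operand):

    #include <cmath>
    #include <cstdio>
    #include <limits>

    // Report the relative difference between a and b in units of epsilon,
    // for comparing the stepped-through d1/d2 against the tolerance.
    template <typename T>
    void report(T a, T b)
    {
        T eps  = std::numeric_limits<T>::epsilon();
        T diff = std::fabs(a - b);
        std::printf("d1 = %g eps, d2 = %g eps\n",
                    double(diff / std::fabs(b) / eps),
                    double(diff / std::fabs(a) / eps));
    }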

What is happening here? It seems to me that the error-checking code itself, which computes the values to be checked (d1 and d2), introduces one to four possible additional rounding errors. (The error-checking code always does one subtraction, possibly one subtraction in an abs, possibly another subtraction in an abs, and possibly one division.) That would account for the observed behavior.
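For illustration, I imagine the check looks roughly like this (a rough reconstruction from the behavior, not the actual library source; the exact number of operations that can round depends on the real code path):

    #include <cmath>

    // Rough reconstruction, not the actual library code. Each commented
    // operation can round, on top of the n errors the caller asked to allow.
    template <typename T>
    bool close_enough(T left, T right, T tolerance)
    {
        T diff = std::fabs(left - right);   // the subtraction can round
        T d1   = diff / std::fabs(right);   // this division can round
        T d2   = diff / std::fabs(left);    // and so can this one
        return d1 <= tolerance && d2 <= tolerance;
    }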

Is this analysis correct? I know almost nothing about floating-point arithmetic, so it is probably flawed. But it looks to me as if the correct tolerance formula is (n+m)*std::numeric_limits<T>::epsilon()/2, where m is 1, 2, 3, or 4 depending on the exact logic path through close_at_tolerance::operator(). It would also be easy to change the code to eliminate some of the operations that add additional possible rounding errors.
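In code, the proposed change would be roughly this (again a sketch; m would be determined inside operator() from the path actually taken):

    #include <limits>

    // Sketch of the proposed tolerance: n rounding errors allowed by the
    // caller, plus m introduced by the comparison code itself.
    template <typename T>
    T adjusted_tolerance(unsigned n, unsigned m)
    {
        return (n + m) * std::numeric_limits<T>::epsilon() / T(2);
    }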

If the above is all wrong, perhaps someone can offer the correct analysis for why the tests fail.

--Beman
