------- Comment #12 from rob1weld at aol dot com  2007-06-07 13:42 -------
I've done some more testing.

With GNU/Linux 4.0 the file: /usr/include/bits/mathdef.h has this in it:

# if defined __FLT_EVAL_METHOD__ && __FLT_EVAL_METHOD__ == 0
/* When using -mfpmath=sse, values are computed with the precission of the
   used type.  */
typedef float float_t;          /* `float' expressions are evaluated as
                                   `float'.  */
typedef double double_t;        /* `double' expressions are evaluated as
                                   `double'.  */
# else
/* The ix87 FPUs evaluate all values in the 80 bit floating-point format
   which is also available for the user as `long double'.  Therefore we
   define:  */
typedef long double float_t;    /* `float' expressions are evaluated as
                                   `long double'.  */
typedef long double double_t;   /* `double' expressions are evaluated as
                                   `long double'.  */
# endif

That means that, for many people but NOT everyone, `float' and `double'
expressions are actually evaluated as _long_ doubles. I have not examined every
line of GCC to determine whether GCC really evaluates floats and doubles at the
correct width - regardless of the actual width they are stored at.

Since Cygwin's GCC 3.4.4 doesn't support long doubles, it does not do that.

Memory access is quicker, but things like sticky bits will not be in the right
spot and there will be too much precision. "Too much precision" is bad when
tests rely on an exact implementation; otherwise it is usually OK.


I've made a simple hack to the _root_ ./configure file ONLY - NOT one line of
GCC's source code was changed - to alter the manner in which GCC is configured.
It will need more testing. You _might_ be able to simulate what I am doing by
typing:

export CFLAGS="-mfpmath=sse -msse2"

After doing that simply configure and make GCC as you normally would.

(Note: I am _not_ using the above CFLAGS method but making more complicated
changes within the root configure file, so your results _might_ not be the
same.)

After building GCC all my test results were the SAME: failures and passes were identical.
I was also able to compile mpfr without error (as before). The ONLY difference
was that Paranoia was in a worse state than before, failing miserably.

I altered paranoia to use long doubles instead of doubles and re-ran it. This
is what it says now (edited):

...
        Precision:      long double;
        Version:        10 February 1989;
...
Searching for Radix and Precision.
Radix = 2 .
Closest relative separation found is U1 = 5.421010862427522170037264e-20 .
The number of significant digits of the Radix is 64 .
...
Checking rounding on multiply, divide and add/subtract.
Multiplication appears to round correctly.
Division appears to round correctly.
Addition/Subtraction appears to round correctly.
Checking for sticky bit.
Sticky bit apparently used correctly.
...
Running test of square root(x).
Testing if sqrt(X * X) == X for 20 Integers X.
Test for sqrt monotonicity.
sqrt has passed a test for Monotonicity.
Testing whether sqrt is rounded or chopped.
Square root appears to be correctly rounded
...
Seeking Underflow thresholds UfThold and E0.
Smallest strictly positive number found is E0 = 3.6452e-4951 .
Since comparison denies Z = 0, evaluating (Z + Z) / Z should be safe.
What the machine gets for (Z + Z) / Z is  2 .
This is O.K., provided Over/Underflow has NOT just been signaled.
Underflow is gradual; it incurs Absolute Error =
(roundoff in UfThold) < E0.
The Underflow threshold is 3.36210314311209350662719777e-4932,  below which
calculation may suffer larger Relative error than merely roundoff.
Since underflow occurs below the threshold
UfThold = (2) ^ (-16382)
only underflow should afflict the expression
        (2) ^ (-32764);
actually calculating yields: 0 .
This computed value is O.K.
Testing X^((X + 1) / (X - 1)) vs. exp(2) = 7.38905609893065022739794268 as X ->
1.
Accuracy seems adequate.
Testing powers Z^Q at four nearly extreme values.  ... no discrepancies found.
...
Searching for Overflow threshold:
This may generate an error.
Can `Z = -Y' overflow?
Trying it on Y = -inf .
Seems O.K.
Overflow threshold is V  = 1.18973149535723176502126385e+4932 .
Overflow saturates at V0 = inf .
No Overflow should be signaled for V * 1 = 1.18973149535723176502126385e+4932
                           nor for V / 1 = 1.18973149535723176502126385e+4932 .

No failures, defects nor flaws have been discovered.
Rounding appears to conform to the proposed IEEE standard P854.
The arithmetic diagnosed appears to be Excellent!


-- Note: Previously I was getting output like this:

Searching for Radix and Precision.
Radix = 2.000000 .
Closest relative separation found is U1 = 1.1102230e-16 .

Recalculating radix and precision
 confirms closest relative separation U1 .
Radix confirmed.
The number of significant digits of the Radix is 53.000000 .
...
Checking rounding on multiply, divide and add/subtract.
* is neither chopped nor correctly rounded.
/ is neither chopped nor correctly rounded.
Addition/Subtraction neither rounds nor chops.
Sticky bit used incorrectly or not at all.
FLAW:  lack(s) of guard digits or failure(s) to correctly round or chop
(noted above) count as one flaw in the final tally below.

Does Multiplication commute?  Testing on 20 random pairs.
     No failures found in 20 integer pairs.

Running test of square root(x).
Testing if sqrt(X * X) == X for 20 Integers X.
Test for sqrt monotonicity.
sqrt has passed a test for Monotonicity.
Testing whether sqrt is rounded or chopped.
Square root is neither chopped nor correctly rounded.
Observed errors run from -5.0000000e-01 to 5.0000000e-01 ulps.
...
Seeking Underflow thresholds UfThold and E0.
Smallest strictly positive number found is E0 = 4.94066e-324 .
Since comparison denies Z = 0, evaluating (Z + Z) / Z should be safe.
What the machine gets for (Z + Z) / Z is  2.00000000000000000e+00 .
This is O.K., provided Over/Underflow has NOT just been signaled.
Underflow is gradual; it incurs Absolute Error =
(roundoff in UfThold) < E0.
The Underflow threshold is 2.22507385850720188e-308,  below which
calculation may suffer larger Relative error than merely roundoff.
Since underflow occurs below the threshold
UfThold = (2.00000000000000000e+00) ^ (-1.02200000000000000e+03)
only underflow should afflict the expression
        (2.00000000000000000e+00) ^ (-2.04400000000000000e+03);
actually calculating yields: 0.00000000000000000e+00 .
This computed value is O.K.

Testing X^((X + 1) / (X - 1)) vs. exp(2) = 7.38905609893065218e+00 as X -> 1.
DEFECT:  Calculated 7.38905609548934539e+00 for
        (1 + (-1.11022302462515654e-16) ^ (-1.80143985094819840e+16);
        differs from correct value by -3.44130679508225512e-09 .
        This much error may spoil financial
        calculations involving tiny interest rates.
Testing powers Z^Q at four nearly extreme values. ... no discrepancies found.
...
Searching for Overflow threshold:
This may generate an error.
Can `Z = -Y' overflow?
Trying it on Y = -inf .
Seems O.K.
Overflow threshold is V  = 1.79769313486231571e+308 .
Overflow saturates at V0 = inf .
No Overflow should be signaled for V * 1 = 1.79769313486231571e+308
                           nor for V / 1 = 1.79769313486231571e+308 .
...
The number of  DEFECTs  discovered =         1.
The number of  FLAWs  discovered =           1.

The arithmetic diagnosed may be Acceptable
despite inconvenient Defects.


As stated above, I did NOT alter the _source_ code, only the way that GCC is
configured.

GCC was already using "too much precision"; now it has "way too much precision"
- which is no worse than "too much precision", and arguably better.

All programs I tested compiled the same, with the same passes and failures. Only
Paranoia performs differently, and after fixing it to test for the extra
precision, it reports that GCC is better than it ever was.

I will do more testing before submitting a dozen-line patch.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=32180
