Greetings! When compiling some functions that were previously
interpreted under GCL, I have one small test failure on 32-bit Intel
only:
ev (e7, alfa=2, vita=1); (of rtest8.mac)
returns
[0.052961027786557, 4.8572257327350623E-17, 50, 0]
instead of
[0.052961027786557, 5.551115123125785E-17, 50, 0]
This is due solely to the loss of one bit of precision in res24 of
slatec::dqc25s when the optimized compile rounds the multiply-add to 64
bits instead of keeping it in the 80-bit extended precision that the
x87 FPU provides:
0.5391596517903231 (80 bits) vs 0.539159651790323 (64 bits)
(%i6) :lisp (integer-decode-float 0.5391596517903231)
4856318413792211
-53
1
(%i6) :lisp (integer-decode-float 0.539159651790323)
4856318413792210
-53
1
(%i6)
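For reference, here is a minimal sketch in plain Common Lisp (runnable
as a single form at the :lisp prompt; the d0 exponent markers only
force double-float reading) confirming that the two values differ by
exactly one unit in the last place of the 53-bit significand:

(let ((a 0.5391596517903231d0)    ; value from the 80-bit x87 path
      (b 0.539159651790323d0))    ; value from the 64-bit path
  (multiple-value-bind (sa ea) (integer-decode-float a)
    (multiple-value-bind (sb eb) (integer-decode-float b)
      (list (- sa sb)                ; => 1, i.e. one ulp
            (= ea eb)                ; => T, same exponent (-53)
            (/ (abs (- a b)) a)))))  ; relative error, about 2d-16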
In the generated C, the mul-add rounded to 64 bits appears as
V40= (double)(V40)
    +(double)((double)(((V29))->lfa.lfa_self[(long)((long)0)+((long)(V31)-((long)1))])
    *(double)(((V26))->lfa.lfa_self[(long)(V27)+((long)(V31)-((long)1))]));
vs.
{double V253=
    lf(fLrow_major_aref((V29),(long)((long)0)+((long)(V31)-((long)1))));
 {double V254=
    lf(fLrow_major_aref((V26),(long)(V27)+((long)(V31)-((long)1))));
  V40= (double)(V40)+(double)((double)(/* INLINE-ARGS */V253)*(double)(/* INLINE-ARGS */V254));}}
I don't really think this is a GCL bug, but I have set up the Debian
package build to fail on any test failure. Is there a way to raise the
floating-point tolerance here so that this result is accepted?
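Just to illustrate the kind of check I have in mind (approx-equal below
is a made-up helper for this sketch, not an existing GCL or Maxima
facility), something along these lines would accept the 32-bit result
while still catching real regressions:

;; Hypothetical helper: compare two result lists element-wise, allowing
;; a small absolute/relative slack for floats instead of requiring
;; bit-for-bit equality.
(defun approx-equal (xs ys &optional (tol 1d-10))
  (every (lambda (x y)
           (if (and (floatp x) (floatp y))
               (<= (abs (- x y)) (* tol (max (abs x) (abs y) 1d0)))
               (equal x y)))
         xs ys))

;; (approx-equal '(0.052961027786557d0 4.8572257327350623d-17 50 0)
;;               '(0.052961027786557d0 5.551115123125785d-17 50 0))
;; => T: the error estimates in the second slot differ by only about
;;    7d-18 in absolute terms, far below the tolerance.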
Take care,
--
Camm Maguire [email protected]
==========================================================================
"The earth is but one country, and mankind its citizens." -- Baha'u'llah