Greetings, and thanks so much!

In general, 5.47 introduced a few testsuite errors like this one,
revolving around floating-point precision and error bounds.  The GCL
code being exercised has to deal not only with 32/64-bit platforms, but
also with extended-precision hardware, notably the 80-bit x87 unit on
x86.  Maxima's testsuite, when run across all Debian architectures, is
therefore very useful in hardening the underlying GCL routines.

This particular issue is, as stated, i386-only, and disappears in 5.46
built with the same GCL under the same environment.  So while it need
not indicate an error in Maxima code, it must be due to some change in
Maxima's code between 5.46 and 5.47.  I thought identifying that change
might point out a GCL bug.

By way of comparison, 5.47 also showed one ulp_test failure, which
appeared only on 32-bit machines lacking the underlying 80-bit extended
precision of x86.  We traced this to a weakness in GCL's big_to_double
routine, which had been relying on the floating-point hardware to
implement round-to-even when adding a ulp to the down-rounded result of
mpz_get_d.  We now do this explicitly at the integer level before the
double conversion.

These error-bound failures really do not matter to anyone who is not
trying to ensure the GCL code is correct.  I'll try for a bit longer to
see if I can flush out any bugs, and then just add an expected failure
where appropriate.

Take care,

Robert Dodier <robert.dod...@gmail.com> writes:

> On Sun, May 26, 2024 at 4:38 PM Camm Maguire <c...@maguirefamily.org> wrote:
>
>> Greetings!  Can anyone point out what might have changed in the maxima
>> code from 5.46 to 5.47 to cause a failure in the following test from
>> rtest8.mac using the same environment and compiler (gcl)?
>
>> ev (e1, bu=4);
>> [63.75, 7.077671781985375E-13, 31, 0];
>> =============================================================================
>>
>> Correct in 5.46, but in 5.47 I get:
>>
>> [63.75, 7.081127676410173E-13, 31, 0]
>>
>> Same floating point lisp code implementation.  Only on 32bit i386.
>
> Well, looks like the relevant code (src/numerical/slatec/dqk31.lisp)
> didn't change between 5.46.0 and 5.47.0, to judge by the empty result
> from git log. Is it possible that different GCL versions were used? If
> 64 bit GCL results are correct and 32 bit is different, is it possible
> that the 32 bit implementation differs in some way?
>
> The second number is an error estimate -- since the integrand is a
> polynomial (u^3), the error estimate is going to be some polynomial
> too. The differing error estimates suggest that rounding was different
> or something. We could probably track down exactly where the
> computations are diverging, but maybe it's not anything to worry about
> -- I don't know.
>
> best,
>
> Robert
>
>
>

-- 
Camm Maguire                                        c...@maguirefamily.org
==========================================================================
"The earth is but one country, and mankind its citizens."  --  Baha'u'llah
