On Oct 3, 5:11 am, Fredrik Johansson <fredrik.johans...@gmail.com>
wrote:

My guess is that you have not talked this over with a numerical
analyst.

> The purpose of this code is *not* to add a list of binary
> fractions accurately.

It is unlikely that the best way to add a list of numbers you are
given is to start by throwing out the information that provides the
detailed values of those numbers. Yet that is what you appear to be
doing.

> It is to approximate the sum of symbolically
> defined terms, which with a high probability don't have exact binary
> representations.

So you are saying that I am lying, or don't know what I'm doing,
when I type in s = 1.234501000 (or maybe even 1234501.0/1000000.0)?

You are saying that you are so sure that I don't mean that number
that you are going to throw out some of its digits.

And let us say that I didn't type in that number, but instead typed
r = 5559698243588505001/4503599627370496000
and then coerced it to a float; call it rf.
Are you still so sure that I meant some irrational number that you
are going to throw out some of the digits I supplied?

Would you then compute s-rf by rounding each of them to 5 digits,
and report zero? Or would you compute the difference between s and r
exactly, about 7.07e-17?
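
For what it's worth, this is easy to check. Here is a sketch in
Python (mine, purely for illustration) using the standard fractions
module; the point is that a float IS an exact binary rational, so
the difference can be computed exactly rather than guessed at:

    from fractions import Fraction

    s = 1.234501                  # what the user typed, stored as an exact binary fraction
    r = Fraction(5559698243588505001, 4503599627370496000)
    rf = float(r)                 # r coerced to the nearest binary float

    print(Fraction(s))            # the exact binary rational that s denotes
    print(Fraction(s) - r)        # the exact difference between s and r
    print(Fraction(s) - Fraction(rf))  # the exact difference between s and rf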

> In this context, including much smaller terms is a
> waste of time.

I think people would expect Sage to get the arithmetically closest
representable answer, and would not be so concerned about the
possibility of wasting a little time.

>
> I'll illustrate with a simple example (in decimal). Consider three
> symbolic terms (with high probability irrational or at least not exact
> binary numbers)

How could you possibly know that the terms you are viewing are not
exact binary numbers?
Your best bet is to believe that the numbers you are asked to add
together are exactly the binary numbers you were given, because any
other bet is less accurate.
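
Indeed, every float you are handed is some exact binary number, and
you can see exactly which one. For instance, in Python (my
illustration, nothing to do with the code under discussion):

    x = 1.234501
    print(x.as_integer_ratio())   # the exact integer ratio this float denotes
    print(x.hex())                # the same exact value, written in hexadecimal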

> which to 10 digit accuracy are:
>
> 1.234501000
> 1.000000000e-10
> -1.234500000
>
> Suppose we want to evaluate their sum to a precision of 5 digits.

If you want to do arithmetic on data that is given precisely to 10
decimal digits, it makes sense to do it in 10 (or more) decimal
digit arithmetic. If you want the answer to be correct to 10
decimals, you ordinarily have to do a little extra work to get it
right. If instead you want to evaluate the sum of a new set of data,
namely those numbers rounded to 5 digits, using 5-digit arithmetic,
that is a different task. The first two numbers add up to
1.2345010001 if you do it accurately, but maybe you are not... for 5
digits, you are adding 1.2345 and 1.0000e-10,
which rounds, in 5-digit arithmetic, to
1.2345.
Next, if you add -1.2345 the sum is zero.
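
You can reproduce that 5-digit computation directly, for instance
with Python's decimal module (my illustration only; nobody is
proposing this as the summation algorithm):

    from decimal import Decimal, getcontext

    getcontext().prec = 5              # round every operation to 5 significant digits
    t = Decimal('1.2345') + Decimal('1.0000E-10')
    print(t)                           # 1.2345 -- the 1.0000e-10 is rounded away
    print(t + Decimal('-1.2345'))      # 0.0000 -- the correctly rounded 5-digit sum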


> The
> sum should be about 1.0001e-6.

No, you are not adding in 5 digits at all. You apparently are using 10
digits.

> First we evaluate each term to 5
> digits:
>
> 1.2345
> 1.0000e-10
> -1.2345
>
> Now, adding these terms in order naively yields 0.0 which is wrong.

Actually, 0.0 is perfectly correct in 5-digit arithmetic.

> Adding them using compensated (or exact) summation yields 1.0000e-10,
> which is the exact sum of the approximated terms but the wrong value
> for the sum of the exact terms.

Well, it is a far more accurate sum of the three numbers you
provided than you would expect from a 5-digit adder.
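
The double-precision analogue is easy to demonstrate: Python's
math.fsum returns the correctly rounded sum of the floats it is
actually given (again my sketch, not the code under discussion):

    import math

    terms = [1.2345, 1e-10, -1.2345]
    print(sum(terms))        # naive left-to-right sum: close to 1e-10, but carries a rounding error
    print(math.fsum(terms))  # 1e-10 exactly: the exact sum of the three floats supplied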

> The only way to compute the sum of the
> exact terms accurately is to restart with higher precision
> approximations for the terms.

You can compute the sum perfectly accurately. If you restart with
different values, you can compute THAT sum perfectly accurately as
well. You are confusing summation with some idea about better
approximation of the terms.


You are missing a basic fact: you think there are some "other"
numbers, perhaps irrational (why not transcendental? why not complex
with itsy-bitsy imaginary parts?), that are the "accurate" values,
and that your summation program is somehow required to read the
minds of the values to get "higher precision".
>
> > Is this just copied from MPFR?
>
> No.

I guess I'm relieved at that.  It would have shaken my faith in the
MPFR authors.
>
> > It may be a good idea if comparison of 2 numbers is very very much
> > cheaper than addition of 2 numbers, and checking for cancellation is
> > cheap.  I didn't notice any
> > reference to the literature in the file I looked at, but maybe I
> > wasn't looking
> > in the right place.
>
> I don't know any particular references for this algorithm.

I can understand that.

> It's not
> really original; it's just summation with a separate error estimate.

Maybe it's not a good idea, and that's why the people who have written
papers on
the subject don't suggest it.

> It's similar to what MPFR (for example) might do internally to track
> errors when evaluating special functions that are sums of multiple
> terms, only here it is explicit and adapted to handle general
> expressions.

Bounding of errors in evaluating special functions in MPFR is, I'm
fairly sure,
done in a sensible way. Typically one ends up summing a series of
decreasing terms with alternating sign. This does not appear to be
what you are doing.
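
For a series of decreasing terms with alternating signs, the
truncation error is bounded by the magnitude of the first omitted
term, so the stopping test is cheap. A sketch of that standard
textbook bound in Python (mine, not MPFR's actual code):

    from math import factorial

    def alt_exp_neg1(tol):
        """Approximate exp(-1) = sum_{k>=0} (-1)**k / k!.

        The terms alternate in sign and decrease in magnitude, so the
        truncation error is bounded by the first omitted term; we stop
        as soon as the next term drops below tol."""
        s = 0.0
        k = 0
        while 1.0 / factorial(k) >= tol:
            s += (-1) ** k / factorial(k)
            k += 1
        return s

    print(alt_exp_neg1(1e-12))    # agrees with math.exp(-1) to about 12 digits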

While your method may work sometimes, I suspect it is neither fast nor
reliable.

RJF
