On Jul 8, 11:36 am, Mark Dickinson <dicki...@gmail.com> wrote:
> I think that's because we're talking at cross-purposes.
>
> To clarify, suppose you want to compute some value (pi;  log(2);
> AGM(1, sqrt(2)); whatever...) to 1000 significant decimal places.
> Then typically the algorithm (sometimes known as Ziv's onion-peeling
> method) looks like:
>
> (1) Compute an initial approximation to 1002 digits (say), with known
> absolute error (given by a suitable error analysis); for the sake of
> argument, let's say that you use enough intermediate precision to
> guarantee an absolute error of < 0.6 ulps.
>
> (2) Check to see whether that approximation unambiguously gives you
> the correctly-rounded 1000 digits that you need.
>
> (3) If not, increase the target precision (say by 3 digits) and try
> again.
>
> It's the precision increase in (3) that I was calling small, and
> similarly it's step (3) that isn't usually needed more than once or
> twice.  (In general, for most functions and input values;  I dare say
> there are exceptions.)
>
> Step (1) will often involve using significantly more than the target
> precision for intermediate computations, depending very much on what
> you happen to be trying to compute.  IIUC, it's the extra precision in
> step (1) that you don't want to call 'small', and I agree.
>
> IOW, I'm saying that the extra precision required *due to the table-
> maker's dilemma* generally isn't a concern.
Yes, though I think attributing only the precision added in step (3) to
the table-maker's dilemma isn't entirely correct.  While it would
certainly be less of a dilemma if we could precompute the necessary
precision, that doesn't help us if the required precision is, in
general, unbounded.  As such, I think it's really two dilemmas for the
price of one.
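
For concreteness, here's roughly what that loop looks like using
Python's decimal module.  (Caveat: decimal's ln() is itself already
correctly rounded, so this only illustrates the shape of the method;
the choice of ln(2), the two guard digits, and the three-digit bump
per retry are arbitrary choices of mine, not anything the method
prescribes.)

from decimal import Decimal, localcontext

def ln2_correctly_rounded(digits):
    guard = 2                      # step (1): extra working precision
    while True:
        with localcontext() as ctx:
            ctx.prec = digits + guard
            approx = Decimal(2).ln()   # approximation at working precision
            # One unit in the last place of the working-precision result.
            ulp = Decimal(1).scaleb(approx.adjusted() - ctx.prec + 1)
            # Step (2): if approx - ulp and approx + ulp round to the
            # same 'digits'-digit value, the rounding is unambiguous.
            ctx.prec = digits
            lo = +(approx - ulp)   # unary + rounds to current precision
            hi = +(approx + ulp)
        if lo == hi:
            return lo
        guard += 3                 # step (3): bump the precision, retry

print(ln2_correctly_rounded(1000))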

> I actually agree with much of what you've said.  It was just the
> "impossible" claim that went over the top (IMO).  The MPFR library
> amply demonstrates that computing many transcendental functions to
> arbitrary precision, with correctly rounded results, is indeed
> possible.
That's because you're latching onto that word instead of the whole
sentence in context, and making a much bigger deal out of it than is
appropriate.  The fact that I may not be able to complete a given
calculation at an arbitrary precision is not something that can be
ignored.  It's the same notional problem as with arbitrary-precision
integers: is it better to run out of memory or to overflow the
calculation?  That, of course, is a trick question.
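
To make the integer analogy concrete (a toy illustration of my own,
not anything from the thread): a fixed-width accumulator silently
wraps, while Python's unbounded int simply keeps growing until memory
gives out.

def fixed_width_factorial(n):
    acc = 1
    for i in range(2, n + 1):
        # Wraps modulo 2**64, like a C uint64_t would.
        acc = (acc * i) & 0xFFFFFFFFFFFFFFFF
    return acc

def unbounded_factorial(n):
    acc = 1
    for i in range(2, n + 1):
        acc *= i   # never overflows; memory is the only limit
    return acc

print(fixed_width_factorial(25))   # silently wrong: 21! already exceeds 2**64
print(unbounded_factorial(25))     # exact: 15511210043330985984000000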

Adam
