"Hendrik van Rooyen" <[EMAIL PROTECTED]> wrote:

>"Nick Maclaren" <[EMAIL PROTECTED]> wrote:
>
>> What I don't know is how much precision this approximation loses when
>> used in real applications, and I have never found anyone else who has
>> much of a clue, either.
>> 
>I would suspect that this is one of those questions that is simple
>to ask but horribly difficult to answer. If the hardware has already
>thrown the precision away, how do you study it? You either need two
>different parallel engines doing the same computation so you can
>compare the results, or you have to write a big simulation, and then
>you bring your own simulation errors into the picture. There be Dragons...

Actually, this is a very well studied part of computer science called
"interval arithmetic".  As you say, you carry every computation out twice:
once rounding down to get a lower bound, and once rounding up to get an
upper bound.  When you're done, you can be confident that the true answer
lies somewhere within the resulting interval.

For people just getting into it, it can be shocking to realize just how
wide the interval can become after some computations.
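Here is a minimal sketch in Python of the idea (the Interval class and the
one-ULP widening via math.nextafter are my own illustration; a real interval
library would use the FPU's directed rounding modes instead, but the effect
is the same: bounds only ever get wider):

    import math

    class Interval:
        """A toy closed interval [lo, hi] with outward rounding."""

        def __init__(self, lo, hi=None):
            self.lo = lo
            self.hi = lo if hi is None else hi

        def _widen(self, lo, hi):
            # Nudge each bound outward by one ULP to stand in for
            # round-toward-minus-infinity / round-toward-plus-infinity.
            return Interval(math.nextafter(lo, -math.inf),
                            math.nextafter(hi, math.inf))

        def __add__(self, other):
            return self._widen(self.lo + other.lo, self.hi + other.hi)

        def __sub__(self, other):
            return self._widen(self.lo - other.hi, self.hi - other.lo)

        def __mul__(self, other):
            products = [self.lo * other.lo, self.lo * other.hi,
                        self.hi * other.lo, self.hi * other.hi]
            return self._widen(min(products), max(products))

        def __repr__(self):
            return "[%r, %r]" % (self.lo, self.hi)

    x = Interval(0.1)                      # 0.1 is not exactly representable
    y = Interval(1.0) - x * Interval(3.0)
    print(y)                               # an interval bracketing 0.7

Run a longer chain of operations through something like this and you can
watch the bounds drift apart, which is exactly the effect I mean.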
-- 
Tim Roberts, [EMAIL PROTECTED]
Providenza & Boekelheide, Inc.