Steven D'Aprano writes:

> On Thu, 30 May 2013 13:45:13 +1000, Chris Angelico wrote:
> 
> > Let's suppose someone is told to compare floating point numbers by
> > seeing if the absolute value of the difference is less than some
> > epsilon.
> 
> Which is usually the wrong way to do it! Normally one would prefer
> *relative* error, not absolute:
> 
> # absolute error:
> abs(a - b) < epsilon
> 
> 
> # relative error:
> abs(a - b)/a < epsilon
> 

...

I wonder why floating-point errors are not routinely discussed in
terms of ulps (units in the last place). There is a recipe for
calculating the difference of two floating-point numbers in ulps
(sketched below), and it's possible to find the previous or next
floating-point number, but I don't know of any programming language
that has built-in support for these.

Why isn't this considered the most natural measure of how close a
floating-point result is to a given value? The meaning is roughly
this: how many representable floating-point numbers lie between the
two.

"close enough" if abs(ulps(a, b)) < 3 else "not close enough"

"equal" if ulps(a, b) == 0 else "not equal"

There must be some subtle technical issues here, too, but it puzzles
me that this measure of closeness is so rarely even mentioned when
absolute and relative error are discussed, even though those are
computed using the very same approximate arithmetic whose accuracy is
being measured. Scary.

Got light?
