G'day all.
Quoting Thorkil Naur <[EMAIL PROTECTED]>:
> Finding the "machine epsilon", perhaps, that is, the smallest
> (floating point, surely) number for which 1.0+machine_eps==1.0
> would be a candidate?
The machine epsilon is the smallest relative separation between two
adjacent normalised floating point numbers. (The largest is the
machine epsilon multiplied by the radix, more or less.)
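For concreteness, here is one common sketch for finding that quantity at runtime (my own illustration, not anything from the standard libraries): keep halving a candidate until adding it to 1.0 no longer changes 1.0, and take the last value that did. For IEEE 754 doubles this lands on 2^-52.

```haskell
-- Estimate the machine epsilon for Double by repeated halving.
-- We stop when eps/2 is absorbed by 1.0, so eps itself is the
-- smallest power of two that still perturbs 1.0 under addition.
machineEps :: Double
machineEps = go 1.0
  where
    go eps
      | 1.0 + eps / 2 == 1.0 = eps
      | otherwise            = go (eps / 2)
```

(In practice you would compute it directly from `floatRadix` and `floatDigits` rather than by a loop; the loop just makes the definition vivid.)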
So as I understand it, if you're thinking in terms of relative error, this test:

    (fabs(x1 - x2) < machine_eps * fabs(x1))

should be true if and only if x1 == x2, assuming that x1 and x2 are
nonzero and normalised.
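Translated into Haskell (a sketch of the same C test; the name `nearlyEqual` is mine), with the epsilon derived from the `RealFloat` class rather than hard-coded:

```haskell
-- The C test above, rephrased in Haskell: abs plays the role of fabs,
-- and eps is the machine epsilon for Double, i.e. radix^(1 - digits).
nearlyEqual :: Double -> Double -> Bool
nearlyEqual x1 x2 = abs (x1 - x2) < eps * abs x1
  where
    eps = fromInteger (floatRadix x1) ^^ (1 - floatDigits x1)
```

Under the definition of machine epsilon given earlier, two normalised nonzero doubles closer than `eps * abs x1` cannot be distinct, which is why the test behaves as an equality check.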
I've always had the impression that using the machine epsilon for
pseudo-equality testing is fairly useless, especially if you can work
out a meaningful problem-specific tolerance.
What seems to be more useful is using the machine epsilon to compute an
estimate of how much relative error your algorithm accumulates. I've
seen this in a lot of Serious Numeric Code(tm), and I've done it myself
(probably inexpertly) a few times.
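As a toy illustration of that idea (my own sketch, emphatically not Serious Numeric Code): sum a list while also computing a crude worst-case bound on the accumulated rounding error, on the assumption that each addition contributes roughly one epsilon of relative error.

```haskell
-- Sum a list of Doubles and return a crude worst-case bound on the
-- absolute rounding error accumulated by the additions. The bound
-- n * eps * sum |x_i| is pessimistic but easy to reason about.
sumWithErrBound :: [Double] -> (Double, Double)
sumWithErrBound xs = (s, errBound)
  where
    s        = sum xs
    eps      = 2 ^^ (-52)                 -- machine epsilon for Double
    n        = fromIntegral (length xs)
    errBound = n * eps * sum (map abs xs)
```

You could then feed `errBound` (or a relative version of it) into an approximate-equality test as a problem-derived tolerance, instead of using the bare machine epsilon.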
I haven't tried this, but I imagine that a computed relative error
estimate could be useful for assisting your approximate-equality
tests under some circumstances. Richard might know of cases where
this sort of thing pays off.
Cheers,
Andrew Bromage
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe