On Tue, May 25, 2010 at 12:34 PM, Joe Marshall <jmarsh...@alum.mit.edu> wrote:
> On Tue, May 25, 2010 at 7:23 AM, Michael Sperber
> <sper...@deinprogramm.de> wrote:
>> However, the IEEE operations aren't defined in terms of
>> those intervals: they are defined (simplifying somewhat) as operations
>> on "exact" numbers followed by rounding.
>
> Followed by rounding *if necessary*. And it often isn't necessary.
> Many so-called rounding errors come from the translation from base 10
> input to base 2, or from base 2 to base 10 on printing. The computation
> itself can often proceed without rounding. For example, integer arithmetic
> for add, subtract, and multiply are *exact* for floating-point integer values
> in the range -2^52 to 2^52. There will be *no* rounding whatsoever.
>
> Floating point isn't `magic' or a `black art', it's just a little trickier
> than rationals, and maybe on par with complex numbers.
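Joe's exactness point is easy to check at the REPL. A rough sketch
(assuming, as in PLT, that inexact reals are IEEE 754 doubles):

  (define big (exact->inexact (- (expt 2 52) 1)))  ; 4503599627370495.0
  (+ big 1.0)              ; => 4503599627370496.0 -- no rounding
  (- big 1.0)              ; => 4503599627370494.0 -- no rounding
  (* 1000003.0 1000003.0)  ; => 1000006000009.0    -- no rounding
  (= (+ big 1.0) (exact->inexact (expt 2 52)))     ; => #t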
If the best we could say about IEEE floating point were that it's a
valid alternative for 53-bit signed integers, then it would be a
failure, wouldn't it? Fortunately we can say much better, but then we
have to start admitting to ourselves that it can't precisely represent
things like 64-bit MAXINT or the simple fraction 1/3.

Floating point is really useful, but it will always be a black art.
And much trickier than complex numbers, which are just real numbers
that brought along a(n imaginary) friend.

I hope I'm not too far off topic here, though. I think the original
topic had more to do with the semantics of the `integer?` predicate
than with the actual representation of inexact numbers.

--Carl
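P.S. For concreteness, a rough sketch of those limits (again assuming
IEEE doubles), along with the `integer?` behavior the thread started
from:

  (= (exact->inexact (- (expt 2 64) 1))
     (exact->inexact (expt 2 64)))  ; => #t -- 2^64-1 rounds to 2^64
  (= 1/3 (exact->inexact 1/3))      ; => #f -- no double is exactly 1/3
  (integer? 2.0)                    ; => #t -- inexact but integer-valued
  (integer? (expt 2.0 64))          ; => #t -- huge and rounded, still an "integer"
  (integer? 1/3)                    ; => #f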