On Sep 29, 2009, at 10:16 AM, John Cowan wrote:

> Andrew Reilly scripsit:
>
>> I can't, at the moment, think of *any* situation where a fixnum
>> overflow that results in an inexact result would be appropriate,
>> except perhaps for toy problems like fact. It certainly isn't the
>> right answer for any example of linear iteration, counting, array
>> index calculation, or cryptography.
>
> If your inexacts are IEEE double floats, as they are on almost all
> Schemes, then arithmetic operations on inexact integers, provided
> they derive from an exact source, will be quite correct up to
> +/- 2^53-1. That's a hefty sort of range for linear iteration or
> counting, and it's way too big for array or even file indexing.
> As for cryptography, when you need bignums, you need them, there
> is no arguing that.
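[The +/- 2^53-1 figure above is easy to check empirically. The sketch below uses Python rather than Scheme, but the behavior is a property of IEEE binary64 itself, not of any language: every integer of magnitude up to 2^53 is exactly representable, and exactness is silently lost just past that bound.]

```python
# IEEE double has a 53-bit significand, so every integer with
# magnitude <= 2^53 is exactly representable; beyond that, gaps
# appear between representable values.
limit = 2 ** 53

assert float(limit - 1) == limit - 1       # still exact
assert float(limit) == limit               # 2^53 itself is a power of two
assert float(limit) + 1 == float(limit)    # ...but 2^53 + 1 rounds away
assert int(float(limit + 1)) != limit + 1  # exactness silently lost
```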
If implementations are expected to provide IEEE double floats, then
they can certainly be expected to provide exact integers with 53-bit
precision, by assigning an alternate tag to boxed exact integers
represented as doubles - or to boxed 64-bit integers. This doesn't
preclude having unboxed fixnums in a smaller range.

But, of course, I don't think it *is* reasonable to expect IEEE
doubles. Implementations which use IEEE singles provide no more
exact precision than 24-bit fixnums, and that is the much more
likely scenario in a small Scheme. For what it's worth, a cycle
counter on a 2GHz processor overflows the exact range of an IEEE
double (2^53 cycles) in about 52 days.

> People hear about all the problems with floating-point fractions
> and cumulative error, and they are often seduced into overpessimism
> about floating point, thinking that floating-point integers have
> the same issues. They don't, as long as you stay within the
> significance range noted above.
>
> Someone proposed in an earlier thread the notion of flonum-only
> Schemes, and I shot it down because of the lack of distinction
> between exact and inexact numbers. But if there were just one extra
> bit available, which was propagated through arithmetic expressions
> by ORing, we'd have a pretty nice fixnum/flonum Scheme whose
> fixnums are internally flonums.

Ultimately the conceptual problem is with exact -> inexact overflow,
not with the range of exact integers provided. Arguing that some
inexact representations may provide more "actually exact" inexact
range than their truly-exact integers do is just a way of saying
that those implementations are not providing the full range of exact
integers that they easily could - and it begins to sound like an
exercise in defending your favorite R5RS implementation, which is
exactly what this statement describes.
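[Cowan's "one extra bit, propagated by ORing" proposal can be sketched in a few lines. The model below is illustrative Python, not any existing Scheme's representation; the class and its operations are invented for the sketch. Each flonum carries an inexact flag that is ORed across operations, and - to address the exact -> inexact overflow point above - a result whose magnitude leaves the 53-bit exact range is also marked inexact, since the double may have rounded.]

```python
from dataclasses import dataclass

LIMIT = 2.0 ** 53  # largest magnitude at which doubles hit every integer

@dataclass(frozen=True)
class Num:
    """A flonum carrying one exactness bit (hypothetical model)."""
    value: float
    inexact: bool = False

    def _combine(self, result, other):
        # OR the inexact bits; a result past the 53-bit range is also
        # marked inexact, because the double may no longer be exact.
        return Num(result,
                   self.inexact or other.inexact or abs(result) > LIMIT)

    def __add__(self, other):
        return self._combine(self.value + other.value, other)

    def __mul__(self, other):
        return self._combine(self.value * other.value, other)

# Exact operands within the 53-bit range stay exact...
a = Num(3.0) + Num(4.0)
assert a.value == 7.0 and not a.inexact

# ...one inexact operand taints the result, as OR propagation requires...
b = a * Num(0.5, inexact=True)
assert b.inexact

# ...and overflowing the exact integer range loses exactness too.
big = Num(2.0 ** 52) * Num(4.0)
assert big.inexact
```

Under this model the fixnum/flonum distinction costs one bit per number and one OR per operation, which is the attraction of the proposal; the objection in the reply stands, though, since the exactness bit does not prevent the overflow, it only records it.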
-- 
Brian Mastenbrook
[email protected]
http://brian.mastenbrook.net/

_______________________________________________
r6rs-discuss mailing list
[email protected]
http://lists.r6rs.org/cgi-bin/mailman/listinfo/r6rs-discuss
