On Tue, Sep 29, 2009 at 11:16:22AM -0400, John Cowan wrote:
> Andrew Reilly scripsit:
> 
> > I can't, at the moment, think of *any* situation where a fixnum overflow
> > that results in an inexact result would be appropriate, except perhaps
> > for toy problems like fact.  It certainly isn't the right answer for
> > any example of linear iteration, counting, array index calculation,
> > or cryptography.
> 
> If your inexacts are IEEE double floats, as they are on almost all
> Schemes, then arithmetic operations on inexact integers, provided they
> derive from an exact source, will be quite correct up to +/- 2^53-1.

But will the system *know* that they're exact integers?  Will
they still work as arguments to vector-ref?  Will they display as
exact integers?  There was mention earlier of this scheme causing
an issue with file positioning, and it seems likely that there
are others, unless extra bits are carried around to distinguish
these doubles as "actually integers", at which point it seems
that just having an extended-precision integer representation
would be a win.

> People hear about all the problems with floating-point fractions and
> cumulative error, and they are often seduced into overpessimism about
> floating point, thinking that floating-point integers have the same
> issues.  They don't, as long as you stay within the significance range
> noted above.

And how do you know that you are (within the significance range)?
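For what it's worth, the boundary itself is easy to exhibit; a Python
sketch (same IEEE doubles) of where integer arithmetic in doubles
silently stops being exact:

```python
# Integers are exactly representable as doubles up to 2^53;
# 2^53 + 1 is the first integer that is not.
limit = 2 ** 53

assert float(limit) == float(limit + 1)      # 2^53 + 1 rounds to 2^53
assert float(limit - 1) != float(limit - 2)  # below the limit, all distinct

# Past the boundary, adding 1 is silently lost -- no overflow signal.
assert float(limit) + 1.0 == float(limit)
```

The point being that nothing in the arithmetic tells you when you have
crossed the line; you have to carry the abs(x) <= 2^53 invariant in
your head (or in extra checks).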

Does having two different zeros cause trouble in the world of
Scheme exact integers?
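On the two-zeros question, a small Python sketch (again, the same IEEE
doubles) of why +0.0 and -0.0 are at least observably different in a
way no exact integer ever is:

```python
import math

# The two IEEE zeros compare equal, so naive code mostly won't notice...
assert 0.0 == -0.0

# ...but they are distinguishable: the sign survives copysign and
# printing, which has no analogue for an exact integer zero.
assert math.copysign(1.0, -0.0) == -1.0
assert str(-0.0) == "-0.0"
```

So any "doubles as exact integers" story needs a rule for which zero an
exact 0 maps to, and for what happens when a -0.0 flows back in.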

I really don't know.  Your arguments sound kind of valid, but my
reaction is still "eww!"

Cheers,

-- 
Andrew

_______________________________________________
r6rs-discuss mailing list
[email protected]
http://lists.r6rs.org/cgi-bin/mailman/listinfo/r6rs-discuss