On 01/10/2014 10:08 PM, Daniel Micay wrote:
> I don't think failure on overflow is very useful. It's still a bug if
> you overflow when you don't intend it. If we did have a fast big
> integer type, it would make sense to wrap it with an enum heading down
> a separate branch for small and large integers, and branching on the
> overflow flag to expand to a big integer. I think this is how Python's
> integers are implemented.
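
A minimal sketch, in today's Rust syntax rather than the 2014
language, of the enum-based split described above.  The big
variant is faked with i128 just to keep the example
self-contained; a real implementation would hold an
arbitrary-precision type.

    #[derive(Debug)]
    enum Int {
        Small(i64),
        Big(i128), // stand-in for a real big-integer representation
    }

    impl Int {
        fn add(self, rhs: i64) -> Int {
            match self {
                // Fast path: stay in the machine word unless the add overflows.
                Int::Small(a) => match a.checked_add(rhs) {
                    Some(sum) => Int::Small(sum),
                    None => Int::Big(a as i128 + rhs as i128), // promote on overflow
                },
                Int::Big(a) => Int::Big(a + rhs as i128),
            }
        }
    }

    fn main() {
        println!("{:?}", Int::Small(i64::MAX).add(1)); // prints Big(9223372036854775808)
    }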

Failure on overflow *can* be useful in production code, using
tasks to encapsulate suspect computations.  Big-integer types
can be useful, too.  A big-integer type that uses small-integer
arithmetic until overflow is a clever trick, but it's purely
an implementation trick.  Architecturally, it makes no sense
to expose the trick to users.
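
A rough sketch, in current Rust, of that encapsulation idea:
run the overflow-checked arithmetic in its own thread (standing
in here for a 2014 task) and treat a panic as a contained
failure.  The function name and the use of checked_add are
illustrative assumptions, not anything from the original
posting.

    use std::thread;

    fn suspect_sum(values: Vec<i32>) -> Option<i32> {
        thread::spawn(move || {
            values
                .into_iter()
                .try_fold(0i32, |acc, v| acc.checked_add(v)) // None on overflow
                .expect("integer overflow in suspect_sum")   // panic stays in this thread
        })
        .join()
        .ok() // a panicked thread joins as Err; map that to None
    }

    fn main() {
        assert_eq!(suspect_sum(vec![1, 2, 3]), Some(6));
        assert_eq!(suspect_sum(vec![i32::MAX, 1]), None); // overflow was contained
    }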

The fundamental error in the original posting was saying machine
word types are somehow not "CORRECT".  Such types have perfectly
defined behavior and performance in all conditions. They just
don't pretend to model what a mathematician calls an "integer".
They *do* model what actual machines actually do. It makes
sense to call them something other than "integer", but "i32"
*is* something else.
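
To make that concrete: in current Rust the wrap-around of a
machine word is spelled out explicitly, and it is exactly the
two's-complement behavior the hardware gives you.

    fn main() {
        let x: i32 = i32::MAX;
        // A defined result, not an error: the 32-bit add wraps.
        assert_eq!(x.wrapping_add(1), i32::MIN);
    }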

It also makes sense to make a library that tries to emulate
an actual integer type.  That belongs in a library because it's
purely a construct: nothing in any physical machine resembles
an actual integer.  Furthermore, since it is an emulation,
details vary for practical reasons. No single big-integer or
overflow-trapping type can meet all needs. (If you try, you
fail users who need it simple.)  That's OK, because anyone
can code another, and a simple default can satisfy most users.
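
One way such a library construct might look: a thin
overflow-trapping wrapper over a machine word.  This is only an
illustration of how easily "another" can be coded, not a
proposal for any particular library.

    use std::ops::Add;

    #[derive(Clone, Copy, Debug, PartialEq)]
    struct Trapping(i32);

    impl Add for Trapping {
        type Output = Trapping;
        fn add(self, rhs: Trapping) -> Trapping {
            // Trap (panic) on overflow instead of silently wrapping.
            Trapping(self.0.checked_add(rhs.0).expect("i32 overflow"))
        }
    }

    fn main() {
        assert_eq!(Trapping(2) + Trapping(3), Trapping(5));
        // Trapping(i32::MAX) + Trapping(1) would panic here.
    }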

In fact, i64 satisfies almost all users almost all the time.

Nathan Myers
