On 01/11/2014 03:14 PM, Daniel Micay wrote:
> On Sat, Jan 11, 2014 at 6:06 PM, Nathan Myers <[email protected]> wrote:
>> A big-integer type that uses small-integer
>> arithmetic until overflow is a clever trick, but it's purely
>> an implementation trick. Architecturally, it makes no sense
>> to expose the trick to users.
>
> I didn't suggest exposing it to users. I suggested defining a wrapper
> around the big integer type with better performance characteristics
> for small integers.
Your wrapper sounds to me like THE big-integer type. The thing you
called a "big integer" doesn't need a name.
>> No single big-integer or
>> overflow-trapping type can meet all needs. (If you try, you
>> fail users who need it simple.) That's OK, because anyone
>> can code another, and a simple default can satisfy most users.
>
> What do you mean by default? If you don't know the bounds, a big
> integer is clearly the only correct choice. If you do know the
> bounds, you can use a fixed-size integer. I don't think any default
> other than a big integer is sane, so I don't think Rust needs a
> default inference fallback.
As I said,
>> In fact, i64 satisfies almost all users almost all the time.
No one would complain about a built-in "i128" type. The thing
about a fixed-size type is that there are no implementation
choices to leak out. Overflowing an i128 variable is quite
difficult, and 128-bit operations are still lots faster than on
any variable-precision type. I could live with "int" == "i128".
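For a sense of that headroom (i128 postdates this thread, so the
sketch below assumes a current compiler): i128::MAX is about 1.7e38,
and checked arithmetic shows exactly where it gives out.

    fn main() {
        let x: i128 = 1_000_000_000_000_000_000; // 10^18, near i64::MAX
        // Squaring gives 10^36, which still fits: i128::MAX ~= 1.7e38.
        let sq = x.checked_mul(x).expect("10^36 fits in an i128");
        println!("{}", sq);
        // Three more decimal digits is too many: 10^39 overflows.
        assert!(sq.checked_mul(1_000).is_none());
    }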
Nathan Myers