Walter Bright:

Why stop at 64 bits? Why not make there only be one integral type, and it is of whatever precision is necessary to hold the value? This is quite doable, and has been done.

I think no one in this thread has asked for *bignums by default*.


But at a terrible performance cost.

Nope, this is a significant fallacy of yours.
Common Lisp (and OCaml) use tagged integers by default, and they are very far from "terrible". Tagged integers cause no heap allocation as long as the values stay small. In many situations the Common Lisp compiler is also able to infer that an integer can't grow too large, and replaces it with a fixnum. And it's easy to add annotations in critical spots to ask the compiler to use a fixnum, squeezing out all the performance. The result is code that's fast in most situations, but more often correct. In D you drive with your eyes shut: sometimes it's hard for me to know whether an integral overflow has occurred somewhere in a long computation.
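
A minimal sketch of what I mean by driving with eyes shut: D's built-in integers wrap silently, and you only find out if you explicitly ask, e.g. through core.checkedint from druntime:

import std.stdio : writeln;
import core.checkedint : muls;

void main() {
    long x = 4_000_000_000L;
    // Wraps around silently; prints a negative number, no error anywhere:
    writeln(x * x);

    // The overflow is detected only because we check for it explicitly:
    bool overflow = false;
    long y = muls(x, x, overflow);
    writeln(y, " overflowed: ", overflow);   // overflowed: true
}

With tagged integers the runtime notices the overflow for you and promotes the value instead of wrapping.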


And, yes, in D you can create your own "BigInt" datatype which exhibits this behavior.

Currently D's BigInt doesn't have a small-integer optimization. And even once that library problem is fixed, I think the compiler won't perform on BigInts the optimizations it performs on ints, because it doesn't know BigInt's properties.
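
As a rough sketch of what I mean (the type and names here are hypothetical, not a proposal for Phobos): a small-integer-optimized bigint keeps its value in a plain long on the stack and only promotes to a heap-backed std.bigint.BigInt when a checked operation actually overflows:

import std.bigint : BigInt;
import core.checkedint : adds;

struct SmallBigInt {
    private long small;
    private BigInt big;      // used only after promotion
    private bool isBig;

    this(long v) { small = v; }

    BigInt toBig() { return isBig ? big : BigInt(small); }

    SmallBigInt opBinary(string op : "+")(SmallBigInt rhs) {
        SmallBigInt r;
        if (!isBig && !rhs.isBig) {
            bool overflow = false;
            r.small = adds(small, rhs.small, overflow);
            if (!overflow)
                return r;                 // fast path: no heap allocation
        }
        r.isBig = true;
        r.big = toBig() + rhs.toBig();    // slow path: real BigInt math
        return r;
    }
}

void main() {
    auto a = SmallBigInt(long.max);
    auto b = a + SmallBigInt(1);          // overflows, promotes to BigInt
    assert(b.toBig() == BigInt(long.max) + 1);
}

With a layout like this, code that stays within the long range performs no allocations at all, which is essentially what the Lisp fixnum/bignum scheme gives you for free.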

Bye,
bearophile
