On Tuesday, 11 December 2012 at 21:57:38 UTC, Walter Bright wrote:
> On 12/11/2012 10:45 AM, bearophile wrote:
>> Walter Bright:

>>> Why stop at 64 bits? Why not make there only be one integral type,
>>> and it is of whatever precision is necessary to hold the value?
>>> This is quite doable, and has been done.

>> I think no one has asked for *bignums by default* in this thread.

> I know they didn't ask. But they did ask for 64 bits, and the exact
> same argument will apply to bignums, as I pointed out.


Agreed.

>>> But at a terrible performance cost.
>> Nope, this is a significant fallacy of yours. Common Lisp (and OCaml)
>> use tagged integers by default, and they are very far from being
>> "terrible". Tagged integers cause no heap allocation as long as the
>> values stay small. The Common Lisp compiler can also, in various
>> situations, infer that an integer can't grow too large and replace it
>> with a fixnum, and it's easy to add annotations in critical spots
>> asking the compiler to use a fixnum, to squeeze out all the
>> performance.

> I don't notice anyone reaching for Lisp or OCaml for high-performance
> applications.


I don't know about Common Lisp's performance; I've never tried it on anything where that really matters. But OCaml is genuinely fast. I don't know how it handles integers internally.
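
For reference, the scheme bearophile describes above is low-bit pointer tagging, which is also why OCaml's native int is 63-bit on a 64-bit machine. A minimal C sketch of the idea (an illustration only, not OCaml's or any Lisp's actual implementation):

#include <stdint.h>
#include <stdio.h>

/* A word is either a small integer stored inline ("fixnum") or a
 * pointer to a heap object. Pointers are aligned, so their low bit
 * is always 0; setting it to 1 marks a fixnum. */
typedef uintptr_t value;

#define IS_FIXNUM(v)   ((v) & 1)
#define TAG_FIXNUM(n)  (((value)(n) << 1) | 1)
#define UNTAG(v)       ((intptr_t)(v) >> 1)

int main(void) {
    value a = TAG_FIXNUM(40);
    value b = TAG_FIXNUM(2);
    /* Small-integer arithmetic is untag/add/retag, all in registers;
     * nothing is heap-allocated unless a result overflows the fixnum
     * range and has to be promoted to a boxed bignum. */
    value sum = TAG_FIXNUM(UNTAG(a) + UNTAG(b));
    if (IS_FIXNUM(sum))
        printf("%ld\n", (long)UNTAG(sum));  /* prints 42 */
    return 0;
}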

> That's irrelevant to this discussion. It is not a problem with the
> language. Anyone can improve the library one if they desire, or do
> their own.


The library is part of the language. What is a language with no vocabulary?

>> I think the compiler doesn't perform the optimizations on BigInts
>> that it does on ints, because it doesn't know about bigint
>> properties.

> I think the general lack of interest in bigints indicates that the
> built-in types work well enough for most work.

That argument is fallacious. Being more widely used doesn't mean being better; otherwise PHP and C++ would be some of the best languages ever made.
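
On bearophile's optimization point above, the difference is easy to see in C terms: the compiler knows the algebra of built-in integers, while a library bignum is just an opaque call. A small sketch (bigint_mul_small is a hypothetical library function, named here only for illustration):

#include <stdint.h>

/* Built-in integers: the compiler knows their algebra, so this
 * multiply is strength-reduced (e.g. to lea/shl on x86) and can be
 * constant-folded, reassociated, or hoisted out of loops. */
int64_t scale(int64_t x) { return x * 10; }

/* A library bignum is an opaque call: the optimizer doesn't know
 * that bigint arithmetic is pure, associative, or overflow-free,
 * so none of those transformations apply. */
struct bigint;  /* opaque type (hypothetical) */
struct bigint *bigint_mul_small(const struct bigint *x, int64_t k);

struct bigint *scale_big(const struct bigint *x) {
    return bigint_mul_small(x, 10);
}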
