On 04/27/2012 03:55 AM, James Miller wrote:
On Friday, 27 April 2012 at 00:56:13 UTC, Tryo[17] wrote:

D provides an auto type facility that determines the type that
can best accommodate a particular value. What prevents it from
determining that the only type that can accommodate that value
is a BigInt, the same way it decides between int, long, ulong,
etc.?
Because the compiler doesn't know how to make a BigInt: BigInt is part
of the library, not the language.
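
To illustrate the distinction (a rough sketch; exact diagnostics depend
on the compiler version): the compiler does pick among the built-in
integer types when it types a literal, but it cannot conjure up a
library type like BigInt on its own.

void main()
{
    auto a = 2_147_483_647;    // typed as int: the literal fits in 32 bits
    auto b = 2_147_483_648;    // typed as long: too big for int, so the
                               // next built-in type is chosen
    // auto c = 99_999_999_999_999_999_999;
    // error: no built-in type can hold this, and the compiler
    // will not invent a BigInt for you
}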

Why couldn't to!string be overloaded to take a BigInt?
It is; it's the same overload that takes other objects.
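
For example, a minimal sketch using std.conv and std.bigint as they
exist today:

import std.bigint : BigInt;
import std.conv : to;

void main()
{
    auto n = BigInt("123456789012345678901234567890");
    string s = to!string(n);   // dispatches to the generic overload,
                               // which uses BigInt's toString
    assert(s == "123456789012345678901234567890");
}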

The point is this: currently 2^^31 will produce a negative long
value on my system. It's not that the value is wrong; the variable
simply cannot hold the magnitude of the result for this
calculation, so it wraps around and produces a negative value.
However, 2^^n for n>=32 produces a value of 0. Why not
produce the value and let the user choose what to put it into?
Why not make the language BigInt-aware? What is the
negative effect of taking BigInt out of the library and making it
an official part of the language?

Because this is a native language. The idea is to be close to the
hardware, and that means fixed-sized integers, fixed-sized floats and
having to live with that. Making BigInt part of the language opens up
the door for a whole host of other things to become "part of the
language". While we're at it, why don't we make matrices part of the
language, and regexes, and we might as well move all that datetime stuff
into the language too. Oh, and I would love to see all the signals stuff
in there too.
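
To make the trade-off concrete, here is a small sketch (using
std.bigint as it stands; BigInt supports the usual arithmetic
operators, including ^^): fixed-size arithmetic wraps, and arbitrary
precision is something you opt into explicitly.

import std.bigint : BigInt;
import std.stdio : writeln;

void main()
{
    // Fixed-size arithmetic: the 32-bit result wraps around.
    int x = 1 << 30;
    writeln(x * 2);            // 2^31 does not fit in an int: -2147483648

    // Arbitrary precision is an explicit, library-level choice.
    writeln(BigInt(2) ^^ 31);  // 2147483648
}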

The reason we don't put everything in the language is that the more
you put into the language, the harder it becomes to change. There are
more than enough bugs in D

s/in D/in the DMD frontend/

right now, and adding more features into the language
means a higher burden for core development. There is a trend of trying
to move away from tight integration into the compiler, and by extension
the language. Associative arrays are being reworked so that most of the
work is done in object.d, with the end result that the compiler only
has to convert T[U] into AA(T, U) and do a similar conversion for AA
literals. This means that there is no extra fancy work for the compiler
to do to support AAs.
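
As a toy sketch of what that lowering buys you (this is not the actual
object.d implementation; the AA struct, opIndex and opIndexAssign below
are purely illustrative): once the compiler maps the T[U] syntax onto a
library template, all of the behaviour can live in ordinary D code.

// Toy stand-in for what `int[string]` could lower to; not the real
// druntime code.
struct AA(Key, Value)
{
    private Key[] keys;
    private Value[] values;

    // Called for `aa[key] = value`.
    Value opIndexAssign(Value value, Key key)
    {
        foreach (i, k; keys)
            if (k == key) { values[i] = value; return value; }
        keys ~= key;
        values ~= value;
        return value;
    }

    // Called for `aa[key]`.
    Value opIndex(Key key)
    {
        foreach (i, k; keys)
            if (k == key) return values[i];
        throw new Exception("key not found");
    }

    size_t length() const { return keys.length; }
}

void main()
{
    AA!(string, int) table;   // what `int[string] table;` would lower to
    table["one"] = 1;
    table["two"] = 2;
    assert(table["one"] == 1 && table.length == 2);
}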

Also, D is designed for efficiency. If I don't want a BigInt, and all of
the extra memory that comes with it, then I would rather have an error. I
don't want what /should/ be a fast system to slow down because I
accidentally type 1 << 33 instead of 1 << 23; I want an error of some sort.
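
For what it's worth, a constant out-of-range shift on a fixed-size int
is already a compile-time error with dmd (the exact message may vary by
version), which is exactly the kind of diagnostic you lose if the
compiler silently promotes to BigInt instead:

void main()
{
    int fast = 1 << 23;     // fine: fits comfortably in a 32-bit int
    // int oops = 1 << 33;  // rejected: shift count is out of range for int
    long wide = 1L << 33;   // if you want 64 bits, you say so and pay for it
}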

The real solution here isn't to just blindly allow arbitrary features to
be "in the language" as it were, but to make it easier to integrate
library solutions so they feel like part of the language.
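
std.bigint is already a decent example of that direction (a small
sketch; only the literal syntax still gives the library away, since you
have to write BigInt(...) instead of a plain number):

import std.bigint : BigInt;
import std.stdio : writeln;

void main()
{
    // Operator overloading makes the library type read much like a built-in.
    auto f = BigInt(1);
    foreach (i; 2 .. 51)
        f *= i;                 // 50! with no overflow and no special syntax
    writeln("50! = ", f);
}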

--
James Miller
