Robert Fraser wrote:
Eljay wrote:
Is there ANY use case where you'd need a 256-bit integer instead of a
BigInteger? Even 128 is a bit dodgy. UUIDs and what not are identifiers,
not numbers, so have no problem being stored in a struct wrapping a
ubyte[].
Fixed point arithmetic!!!
Seriously, 256-bit fixed-point numbers (roughly integers, but with
renormalization after multiplication and division) can represent the
position of anything in the visible universe at a resolution finer than
the Planck length (below which physical distance stops making sense
anyway). I would find that pretty nice to work with, actually; it would
be a lot nicer than doubles.
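
A quick back-of-envelope check of that claim, assuming roughly 8.8e26 m
for the diameter of the observable universe and 1.6e-35 m for the Planck
length:

    import std.math : log2;
    import std.stdio : writefln;

    void main()
    {
        immutable double universe = 8.8e26;  // observable universe diameter, m
        immutable double planck   = 1.6e-35; // Planck length, m
        // Bits needed to address every Planck-length step across the universe:
        writefln("bits: %.0f", log2(universe / planck)); // prints about 205
    }

So roughly 205 bits cover the integer range, leaving about 50 fractional
bits of headroom below the Planck scale in a 256-bit format.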
I agree compilers should support 256+ bit _data_... But doing so with an
entirely new numeric data-type is probably a bad idea. Special treatment
for certain constructs and library support is a much better idea.
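
For concreteness, the library route would look something like this
minimal sketch; UInt256 and its limb layout are invented for
illustration, not an actual proposal:

    // A 256-bit value type built from four 64-bit limbs.
    struct UInt256
    {
        ulong[4] limbs; // little-endian: limbs[0] is least significant

        UInt256 opBinary(string op : "+")(UInt256 rhs) const
        {
            UInt256 r;
            ulong carry = 0;
            foreach (i; 0 .. 4)
            {
                immutable s1 = limbs[i] + rhs.limbs[i]; // may wrap
                immutable s2 = s1 + carry;              // may wrap
                // At most one of the two additions can wrap around.
                carry = (s1 < limbs[i]) | (s2 < s1);
                r.limbs[i] = s2;
            }
            return r;
        }
    }

Usable, but every operation goes through an ordinary function call that
the optimizer has to see through.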
I don't get this. I am doing research in compiler optimizations, and I
can say that library-supported value types are painful to reason about:
since they are not part of the language, they end up not being properly
formalized, and the compiler loses optimization opportunities.
It makes more sense to declare some growth opportunity for the future,
and then add new types (with new keywords) in future language revisions,
enabled by flags. This complicates the parser a little, but it does not
have to be overly complicated in a handwritten parser.
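
To illustrate the flag idea, a handwritten lexer could gate the new
keywords on a language-revision flag, along these lines (the "int256"
keyword and the flag are invented for illustration):

    enum TokKind { identifier, kwInt256 }

    TokKind classifyWord(string word, bool revisionNext)
    {
        // Without the flag, "int256" stays an ordinary identifier,
        // so existing code that uses the name keeps compiling.
        if (revisionNext && word == "int256")
            return TokKind.kwInt256;
        return TokKind.identifier;
    }

That one branch per gated keyword is essentially the whole parser cost
of the scheme.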
/ Mattias