Alex Rønne Petersen:
I don't think the language really makes it clear whether
overflows and underflows are well-defined. Do we guarantee that
for any integral type T, T.max + 1 == T.min and T.min - 1 ==
T.max?
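The wraparound property being asked about can be illustrated with Rust's explicitly-wrapping integer operations (this is Rust, not D, and is only an analogy for the behavior the question asks D to guarantee):

```rust
fn main() {
    // The property in question: T.max + 1 == T.min and T.min - 1 == T.max.
    // Rust's wrapping_* operations make two's-complement wraparound
    // explicit and well-defined on every target.
    assert_eq!(i8::MAX.wrapping_add(1), i8::MIN); // 127 + 1 wraps to -128
    assert_eq!(i8::MIN.wrapping_sub(1), i8::MAX); // -128 - 1 wraps to 127
    assert_eq!(u8::MAX.wrapping_add(1), u8::MIN); // 255 + 1 wraps to 0
    println!("wraparound property holds for these types");
}
```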
This is relevant in particular for GDC and LDC since they
target a lot of weird architectures.
bearophile:
In a good system language I'd like to see something better than
what's present in C#. So I'd like the language to offer the
programmer a choice among three or four different semantics for
integral operations:
1) A single standard semantics where results wrap around on
overflow, as in Java;
2) A wraparound semantics that adapts to whatever is fastest on
the target CPU, as in C;
3) Standard wraparound for unsigned values, and run-time errors
when a signed value overflows (or goes out of its range);
4) Run-time errors whenever any value, signed or unsigned,
overflows (or goes out of its range), as in Ada.
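The run-time difference between the wraparound options (1 and 2) and the error-on-overflow options (3 and 4) can be sketched in Rust, where `wrapping_add` stands in for the wraparound semantics and `checked_add` for the Ada-style checked semantics. The function names here are invented for this illustration:

```rust
// Semantics 1/2: defined two's-complement wraparound, as in Java.
fn wrap_add(a: i32, b: i32) -> i32 {
    a.wrapping_add(b)
}

// Semantics 3/4: a run-time error on signed overflow, as in Ada.
// checked_add returns None on overflow, which we turn into a panic.
fn checked_signed_add(a: i32, b: i32) -> i32 {
    a.checked_add(b).expect("signed integer overflow")
}

fn main() {
    assert_eq!(wrap_add(i32::MAX, 1), i32::MIN); // silently wraps
    assert_eq!(checked_signed_add(1, 2), 3);     // in range, fine
    // checked_signed_add(i32::MAX, 1) would panic at run time.
    println!("sketch of wraparound vs. checked semantics");
}
```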
Bye,
bearophile