On Tuesday, 6 October 2015 at 09:28:29 UTC, Marc Schütz wrote:
> I see, this is a new problem introduced by `char + int = char`.
> But at least the following could be disallowed without
> introducing problems:
>
>     int a = 'a';
>     char b = 32;
>
> But strictly speaking, we already accept overflow (i.e. loss of
> precision) for ints, so it's a bit inconsistent to disallow it
> for chars.
Yes, D does not have overflow; it has modular arithmetic. So the
same question arises for an enumeration (like character ranges):
do you want it to be modular (a circle) or monotonic (a line)?
Neither is a good fit for Unicode. It would probably make the
most sense to split the Unicode universe into multiple typed
ranges, some enumerations, some not, and avoid char altogether.
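To make the quoted snippet concrete, here is a minimal sketch of the conversions being debated, assuming a current D compiler (the variable names `a` and `b` follow the quoted snippet; whether `char b = 32;` should stay legal is exactly the open question in this thread):

```d
import std.stdio : writeln;

void main()
{
    int a = 'a';   // implicit char -> int widening; a == 97
    char b = 32;   // currently allowed: the literal 32 fits in a char

    // char + int promotes to int, so narrowing back needs a cast,
    // and the cast wraps modulo 256 -- modular (a circle), not
    // monotonic (a line):
    char c = cast(char)('z' + 200); // 322 % 256 == 66, i.e. 'B'

    writeln(a, " ", cast(int) b, " ", c); // prints "97 32 B"
}
```

The cast result lands on an unrelated code point, which is why modular semantics sit so poorly with character ranges.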