On Tuesday, 25 November 2014 at 15:52:22 UTC, Ola Fosheim Grøstad wrote:
> I assume you are basically saying that Walter's view is that
> matching C++ is more important than getting it right, because
> some people might expect C++ behaviour. Yet Ada chose a
> different path and is considered a better language with respect
> to correctness.
The C++ legacy is huge, especially in culture. That said, the
true issue is in beliefs (which probably stem from the 16-bit
era). I can't judge Ada, having no experience with it, though
the examples of Java and .NET show how marginal the importance
of unsigned types is.
> I think it is important to get the definitions consistent and
> sound so they are easy to reason about, both for users and
> implementors. So one should choose whether the type is
> primarily monotonic, with incorrect values "truncated into"
> modulo N, or if the type is primarily modular.
In this light, the examples by Marco Leise become interesting:
he tries to avoid wrapping even for unsigned types, so yes, the
types are primarily monotonic and optimized for small values.
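To make the two readings concrete, here is a minimal C sketch
(C being where this distinction usually comes up); the function
names are mine, and __builtin_add_overflow is a GCC/Clang
builtin used only to spell out the "detect instead of wrap"
variant:

#include <stdio.h>
#include <limits.h>

/* Modular reading: unsigned addition wraps modulo 2^N by
   definition, so a wrapped result is a correct value, not an
   error. */
unsigned modular_add(unsigned a, unsigned b)
{
    return a + b;               /* UINT_MAX + 1 == 0 by definition */
}

/* Monotonic reading: addition is supposed to grow, and a wrapped
   result is an incorrect value, so it is detected rather than
   kept. __builtin_add_overflow is specific to GCC/Clang; other
   compilers need a manual check such as (a > UINT_MAX - b). */
int monotonic_add(unsigned a, unsigned b, unsigned *out)
{
    return !__builtin_add_overflow(a, b, out);  /* 1 = ok, 0 = would wrap */
}

int main(void)
{
    unsigned r;
    printf("%u\n", modular_add(UINT_MAX, 1));        /* prints 0 */
    printf("%d\n", monotonic_add(UINT_MAX, 1, &r));  /* prints 0: rejected */
    return 0;
}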
> If addition is defined to be primarily monotonic, it means you
> can optimize "if(x < x+1)…" into "if (true)…". If it is
> defined to be primarily modular, then you cannot.
Such optimizations have a bad reputation. If they were more
conservative and didn't propagate backwards through the code
flow, the situation would probably be better. Also, isn't
(x < x+1) a suspicious expression in the first place? Is it a
good idea to mess with it?
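For reference, this is roughly what the quoted optimization
looks like in C, where the question usually comes up; the
behaviour noted in the comments is what GCC and Clang typically
do at -O2, not something the expression itself guarantees:

/* Signed int: overflow is undefined behaviour, so the compiler
   may treat addition as monotonic and fold the comparison away;
   GCC and Clang typically compile this to "return 1;". */
int signed_check(int x)
{
    return x < x + 1;
}

/* Unsigned int: overflow wraps by definition (modular), so the
   comparison must stay: for x == UINT_MAX, x + 1 is 0 and the
   result is 0. */
int unsigned_check(unsigned x)
{
    return x < x + 1u;
}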