On 07/06/2011 00:20, Timon Gehr wrote:
<snip>
> I'd much prefer the behavior to be defined as 1<<x; being equivalent to
> 1<<(0x1f&x); (That's what D effectively does during runtime. It is also
> what the machine code supports, at least on x86).

Defining the behaviour to match that of one brand of processor would be arbitrary and confusing. Why not define it just to shift by the requested number of bits?
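To illustrate what "shift by the requested number of bits" would mean, here is a minimal sketch in Java (the helper name `shiftLeft` and the class are my own, hypothetical names, not anything from D or its runtime): a shift count at or beyond the width of the type simply yields 0, instead of the count being wrapped by the hardware.

```java
public class ShiftDemo {
    // Hypothetical helper: fully defined left shift on a 32-bit int.
    // Shifting by 32 or more bits shifts everything out, so the result is 0,
    // rather than the count being masked to its low 5 bits as x86 does.
    static int shiftLeft(int value, int count) {
        return count >= 32 ? 0 : value << count; // assumes count >= 0
    }

    public static void main(String[] args) {
        System.out.println(shiftLeft(1, 5));   // 32
        System.out.println(shiftLeft(1, 32));  // 0, not 1
    }
}
```

With this definition the result is the same on every machine, and when the count is a compile-time constant the branch folds away entirely.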

Any extra processor instructions to make it behave correctly for cases where this number >= 32 would be part of the backend code generation. And if the right operand is a compile-time constant (as it usually is), these extra instructions can be eliminated, or at least optimised for the particular value.

> Are there any practical downsides to making the behavior defined? (Except
> that the CTFE code would have to be fixed). I think Java does it too.

Apparently Java shifts are modulo the number of bits in the type of the left operand: only the low 5 bits of the shift count are used for int, and only the low 6 for long. You'd think it was an oversight in the original implementation that was kept for bug compatibility, but then you could well ask how they dealt with finding the behaviour to be machine dependent (contrary to the whole philosophy of Java).
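Java does indeed define this masking behaviour in the language itself (JLS §15.19), so it is the same on every platform rather than machine dependent. A quick demonstration:

```java
public class JavaShiftMod {
    public static void main(String[] args) {
        // For int, only the low 5 bits of the count are used:
        System.out.println(1 << 32);   // 1, because 32 & 0x1f == 0
        System.out.println(1 << 33);   // 2, because 33 & 0x1f == 1
        // For long, only the low 6 bits of the count are used:
        System.out.println(1L << 64);  // 1, because 64 & 0x3f == 0
    }
}
```

So Java made exactly the choice Timon describes: the x86 masking behaviour was written into the language specification, which makes it portable even though it is arguably surprising.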

Stewart.
