Currently, the behavior of a shift by more than the size in bits of the operand is undefined. (Well, it's an 'error', but unchecked.)
    int x=32; x=1<<x;
    int y=1<<32;
    assert(x!=y); // yes, this actually passes! (DMD 2.053)

The result is different depending on whether or not it was computed during CTFE/constant folding. I'd much prefer the behavior to be defined so that 1<<x; is equivalent to 1<<(0x1f&x); That's what D effectively does at runtime, and it is also what the machine code supports, at least on x86.

Are there any practical downsides to making the behavior defined? (Except that the CTFE code would have to be fixed.) I think Java does it too.

Timon
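For comparison, Java does define this: the JLS (§15.19) specifies that for an int shift only the low five bits of the count are used, which is exactly the 1<<(0x1f&x) behavior proposed above. A small sketch illustrating that Java's runtime and constant folding agree:

```java
// Java masks the shift count of an int shift to its low 5 bits (JLS 15.19),
// so 1 << x is defined to equal 1 << (0x1f & x) for any x.
public class ShiftMask {
    public static void main(String[] args) {
        int x = 32;
        // Runtime shift by a variable and the explicitly masked shift agree:
        System.out.println((1 << x) == (1 << (0x1f & x)));  // true
        // A shift count of 32 wraps to 0, 33 wraps to 1, etc.
        System.out.println(1 << 32);  // prints 1, not 0
        System.out.println(1 << 33);  // prints 2
    }
}
```

There is no CTFE/runtime divergence in Java because the masking is part of the language definition, not an accident of the x86 SHL instruction.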