Re: Removing undefined behavior of bitshifts

2011-06-11 Thread Stewart Gordon

On 07/06/2011 00:20, Timon Gehr wrote:
<snip>

> I'd much prefer the behavior to be defined as 1<<x; being equivalent to
> 1<<(0x1f&x); (That's what D effectively does during runtime. It is also what
> the machine code supports, at least in x87).


Defining the behaviour to match that of one brand of processor would be arbitrary and 
confusing.  Why not define it just to shift by the requested number of bits?


Any extra processor instructions to make it behave correctly for cases where this number
>= 32 would be part of the backend code generation.  And if the right operand is a
compile-time constant (as it probably is usually), these extra instructions can be
eliminated or at least optimised to the particular value.



> Are there any practical downsides to making the behavior defined? (Except that
> the CTFE code would have to be fixed). I think Java does it too.


Apparently Java shifts are modulo the number of bits in the type of the left operand.  Or 
something like that.  You'd think it was an oversight in the original implementation that 
was kept for bug compatibility, but you could well ask how they dealt with finding the 
behaviour to be machine dependent (contrary to the whole philosophy of Java).


Stewart.


Re: Removing undefined behavior of bitshifts

2011-06-11 Thread Timon Gehr
Timon Gehr wrote:
> On 07/06/2011 00:20, Timon Gehr wrote:
> <snip>
> I'd much prefer the behavior to be defined as 1<<x; being equivalent to
> 1<<(0x1f&x); (That's what D effectively does during runtime. It is also what
> the machine code supports, at least in x87).
>
> Defining the behaviour to match that of one brand of processor would be
> arbitrary and confusing.

Well, not too much. It is the easiest behavior to implement in hardware.
I thought that it is the most common behavior on different brands of processors,
but I might be mistaken.
Is there any processor that badly needs D support that handles it differently?

> Why not define it just to shift by the requested number of bits?
Because it would turn code that is currently

z = x << y;

to

z = y < (8*sizeof(x)) ? x << y : 0;

On many platforms. Given that you seldom want to shift by a custom amount, and
that it could eliminate some bugs, it might be a reasonable trade-off.



> Any extra processor instructions to make it behave correctly for cases where
> this number >= 32 would be part of the backend code generation.  And if the
> right operand is a compile-time constant (as it probably is usually), these
> extra instructions can be eliminated or at least optimised to the particular
> value.
>
> Are there any practical downsides to making the behavior defined? (Except
> that the CTFE code would have to be fixed). I think Java does it too.
>
> Apparently Java shifts are modulo the number of bits in the type of the left
> operand.  Or something like that.  You'd think it was an oversight in the
> original implementation that was kept for bug compatibility, but you could
> well ask how they dealt with finding the behaviour to be machine dependent
> (contrary to the whole philosophy of Java).
>
> Stewart.

I don't even care so much about what the result is, but I feel that saying the
program is in error / the behavior is undefined, when actually you'd just get
back some number, is not optimal. (It allows the compiler to do anything if
that case occurs.)
I would prefer to make the behavior at least implementation-defined (just a
formal change on the D website) or even defined, with some runtime overhead.


Timon


Re: Removing undefined behavior of bitshifts

2011-06-11 Thread s_lange

On 09.06.2011 09:08, Don wrote:
> s_lange wrote:
> > As of yet, there are only general-purpose integer rotate instructions
> > on x86 processors, and very few other CPUs and µCs actually implement
> > rotate instructions.
>
> Really? Itanium, PowerPC, ARM, 6502, Z80, PIC all have rotate
> instructions. I've never used a processor that didn't.
Sorry, my bad, I should have said that there are only a few RISC-like
processors that don't have rotate instructions: MIPS, SPARC...
Although you are right that many accumulator CPUs and µCs do have rotate
instructions: 8051, 6502, Z80 and PIC.


Re: Removing undefined behavior of bitshifts

2011-06-11 Thread Paul D. Anderson
Timon Gehr Wrote:

> Timon Gehr wrote:
> > On 07/06/2011 00:20, Timon Gehr wrote:
> > <snip>
> > I'd much prefer the behavior to be defined as 1<<x; being equivalent to
> > 1<<(0x1f&x); (That's what D effectively does during runtime. It is also
> > what the machine code supports, at least in x87).
> >
> > Defining the behaviour to match that of one brand of processor would be
> > arbitrary and confusing.
>
> Well, not too much. It is the easiest behavior to implement in hardware.
> I thought that it is the most common behavior on different brands of
> processors, but I might be mistaken.
> Is there any processor that badly needs D support that handles it
> differently?
>
> > Why not define it just to shift by the requested number of bits?
> Because it would turn code that is currently
>
> z = x << y;
>
> to
>
> z = y < (8*sizeof(x)) ? x << y : 0;
>
> On many platforms. Given that you seldom want to shift by a custom amount,
> and that it could eliminate some bugs, it might be a reasonable trade-off.
>
> > Any extra processor instructions to make it behave correctly for cases
> > where this number >= 32 would be part of the backend code generation.
> > And if the right operand is a compile-time constant (as it probably is
> > usually), these extra instructions can be eliminated or at least
> > optimised to the particular value.
> >
> > Are there any practical downsides to making the behavior defined?
> > (Except that the CTFE code would have to be fixed). I think Java does
> > it too.
> >
> > Apparently Java shifts are modulo the number of bits in the type of the
> > left operand.  Or something like that.  You'd think it was an oversight
> > in the original implementation that was kept for bug compatibility, but
> > you could well ask how they dealt with finding the behaviour to be
> > machine dependent (contrary to the whole philosophy of Java).
> >
> > Stewart.
>
> I don't even care so much about what the result is, but I feel that saying
> the program is in error / the behavior is undefined, when actually you'd
> just get back some number, is not optimal. (It allows the compiler to do
> anything if that case occurs.)
> I would prefer to make the behavior at least implementation-defined (just
> a formal change on the D website) or even defined, with some runtime
> overhead.
>
> Timon

FWIW, the General Decimal Arithmetic specification (which I'm getting close to 
implementing in D: it's always just one week away!) specifies that a shift or 
rotate by more than the current precision is disallowed (returns a NaN).

I suppose the clarity on this point is possible because the context, including 
the precision, is part of the specification, and the means to indicate failure 
is also defined.

Unfortunately no such specification exists for integers (at least none that 
doesn't start religious wars).

The clarity is offset, however, by the requirement to shift by decimal digits,
not bits. Just a heads up that this won't be in the first release.

Paul




Re: Removing undefined behavior of bitshifts

2011-06-09 Thread Don

s_lange wrote:
> On 07.06.2011 01:20, Timon Gehr wrote:
> > I'd much prefer the behavior to be defined as 1<<x; being equivalent to
> > 1<<(0x1f&x); (That's what D effectively does during runtime. It is also
> > what the machine code supports, at least in x87).
>
> Well, you probably mean what x86 processors do.
>
> The behaviour of x86 processors in this regard is (at least) well
> defined, but not uniform. It all depends whether you use general-purpose
> integer instructions or some 64, 128, or 256-bit SIMD instructions (MMX,
> SSE, AVX).
>
> Int denotes 32-bit integers; operations on differently sized integers are
> analogous.
>
> For general-purpose integer instructions, the following identities hold
> true:
> (l<<x == l<<(x & 0x1F)) and (l>>x == l>>(x & 0x1F)) for any (signed |
> unsigned) int l and x, regardless whether arithmetical or logical right
> shifts are used.
>
> However, for SIMD integer instructions, things are a little different
> and the following identities hold true:
> (l<<x == ((unsigned int)x >= 0x20 ? 0 : l<<x)) for any (signed |
> unsigned) int l and x,
> (l>>x == ((unsigned int)x >= 0x20 ? 0 : l>>x)) for any (signed |
> unsigned) int x and any unsigned int l (using logical right shifts),
> (l>>x == ((unsigned int)x >= 0x20 ? -1 : l>>x)) for any (signed |
> unsigned) int x and any signed int l (using arithmetic right shifts).


> As of yet, there are only general-purpose integer rotate instructions on
> x86 processors, and very few other CPUs and µCs actually implement
> rotate instructions.

Really? Itanium, PowerPC, ARM, 6502, Z80, PIC all have rotate
instructions. I've never used a processor that didn't.


Removing undefined behavior of bitshifts

2011-06-06 Thread Timon Gehr
Currently, the behavior of a shift by more than the size in bits of the
operand is undefined. (Well, it's an 'error', but unchecked.)

int x=32;
x=1<<x;
int y=1<<32;
assert(x!=y); // yes, this actually passes! (DMD 2.053)

The result is different depending on whether or not it was computed during
CTFE/Constant folding.

I'd much prefer the behavior to be defined as 1<<x; being equivalent to
1<<(0x1f&x); (That's what D effectively does during runtime. It is also what
the machine code supports, at least in x87).

Are there any practical downsides to making the behavior defined? (Except that
the CTFE code would have to be fixed). I think Java does it too.


Timon