On Tue, Jun 28, 2005 at 02:32:04PM +0200, Gabriel Dos Reis wrote:
> Robert Dewar <[EMAIL PROTECTED]> writes:
> 
> | Gabriel Dos Reis wrote:
> | 
> | > The issue here is whether if the hardware consistently display a
> | > semantics, GCC should not allow access to that consistent semantics
> | > under the name that "the standard says it is undefined behaviour".
> | > Consider the case of converting a void* to a F*, where F is a function
> | > type.
> | 
> | Well the "hardware consistently displaying a semantics" is not so
> | cut and dried as you think (consider the loop instruction and other
> | arithmetic on the x86 for instance in the context of generating code
> | for loops).
> 
> Please do remember that this is hardware dependent.  If you have
> problems with x86, it does not mean you have the same with a PPC or a
> Sparc. 

For that matter, PPC also has undefined behaviour for the integer
division of 0x80000000 by -1 (according to the architecture
specification). I just checked on a 400MHz PPC750, and the result
register ends up containing -1.

A side effect is that (INT_MIN % -1) comes out as INT_MAX, which is
really surprising. It is reasonable to expect the absolute value of
x % y to be strictly less than the absolute value of y; C99
effectively requires this whenever the division is defined, since
(a/b)*b + a%b must equal a when a/b is representable.

On x86, the same operation raises a divide-error exception (vector 0,
the same vector as divide by zero), which turns into a signal under
most (all?) operating systems (SIGFPE under Linux).

Now in practice, what would be the cost of checking whether the
divisor is -1 and taking an alternate path that computes the correct
result (in modulo arithmetic) for this case?
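Such an alternate path is cheap because division by -1 needs no divide
at all: the quotient is just the negation (which, in two's-complement
modulo arithmetic, maps INT_MIN back to INT_MIN) and the remainder is
always 0. A minimal sketch of what the compiler could emit, written as
C helpers (the names are mine, not anything GCC provides):

```c
#include <limits.h>

/* Division with the d == -1 case special-cased, so INT_MIN / -1
   yields INT_MIN (modulo-arithmetic negation) instead of trapping
   or producing an unspecified value. */
static int checked_div(int n, int d)
{
    if (d == -1)
        return (int)(0u - (unsigned)n);  /* wraps: -INT_MIN == INT_MIN */
    return n / d;
}

/* Matching remainder: anything % -1 is exactly 0. */
static int checked_mod(int n, int d)
{
    if (d == -1)
        return 0;
    return n % d;
}
```

The unsigned negation keeps the special case free of undefined
behaviour even at the C level.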

I can see a moderate code-size impact, something like 4 or 5 machine
instructions per integer division, but hardly a performance impact,
since the other branch contains a divide instruction that already
takes many clock cycles.

        Regards,
        Gabriel