https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82677

--- Comment #11 from rguenther at suse dot de <rguenther at suse dot de> ---
On Thu, 26 Oct 2017, nisse at lysator dot liu.se wrote:

> --- Comment #10 from Niels Möller <nisse at lysator dot liu.se> ---
> Out of curiosity, how is this handled for division instructions generated by
> gcc, with no __asm__ involved? E.g., consider
> 
> int foo(int d) {
>   int r = 1234567;
>   if (d != 0)
>     r = r / d;
>   return r;
> }
> 
> On an architecture where the div instruction doesn't raise any exception on
> divide by zero, this function could be compiled to a division instruction +
> conditional move, without any branch instruction (a sketch of that shape
> follows below). Right?
> 
> But on most architectures, that optimization would be invalid, and the
> compiler must somehow know that. Is that a property of the representation
> of the division expression? Or is it tied to some property of the
> instruction pattern for the divide instruction?
> 
> My question really is: What would it take to mark an __asm__ expression
> so that it's treated in the same way as a plain C division?
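
For illustration, a branch-free C shape of the example above (a sketch
only; the function name is made up). On a target whose divide instruction
is defined not to trap on a zero divisor, the final select below could be
lowered to a conditional move:

int foo_branchless(int d) {
  int q = 1234567 / (d != 0 ? d : 1);  /* divisor 1 keeps the C-level
                                          division defined when d == 0 */
  return d != 0 ? q : 1234567;         /* select result, no branch needed */
}

(The substituted divisor exists only to keep the C source well defined;
hypothetical non-trapping hardware with a defined divide-by-zero result
would not need it.)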

The division is possibly trapping (due to the special value zero) and thus
is never hoisted out of a conditional region.
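
Concretely, a sketch of the transformation this forbids (the function name
is made up): executing the division unconditionally would make foo(0)
divide by zero, which traps on many targets (for example, SIGFPE on x86):

int foo_hoisted_invalid(int d) {
  int q = 1234567 / d;       /* hoisted out of the guard: runs for d == 0
                                and may trap */
  return d != 0 ? q : 1234567;
}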

This is similar to a pointer dereference where the pointer may be null, btw.
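
The pointer analogue, as a sketch (hypothetical helper name): the load has
to stay under the null check, because hoisting it would fault when a null
pointer is passed:

int deref_or_default(int *p) {
  int r = -1;
  if (p != 0)
    r = *p;                  /* may fault for p == 0, so never hoisted
                                above the guard */
  return r;
}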
