On Thu, Mar 02, 2017 at 02:46:29PM +0100, Ingo Molnar wrote:
> 
> * Josh Poimboeuf <jpoim...@redhat.com> wrote:
> 
> > On Thu, Mar 02, 2017 at 07:31:39AM +0100, Ingo Molnar wrote:
> > > 
> > > * Josh Poimboeuf <jpoim...@redhat.com> wrote:
> > > 
> > > > On Wed, Mar 01, 2017 at 11:42:54PM +0100, Arnd Bergmann wrote:
> > > > > On Wed, Mar 1, 2017 at 5:53 PM, Josh Poimboeuf <jpoim...@redhat.com> wrote:
> > > > > > On Wed, Mar 01, 2017 at 04:27:29PM +0100, Arnd Bergmann wrote:
> > > > > 
> > > > > > I see no apparent reason for the ud2.
> > > > > 
> > > > > It's the possible division by zero. This change would avoid the ud2:
> > > > > 
> > > > > diff --git a/drivers/i2c/busses/i2c-img-scb.c b/drivers/i2c/busses/i2c-img-scb.c
> > > > > index db8e8b40569d..a2b09c518225 100644
> > > > > --- a/drivers/i2c/busses/i2c-img-scb.c
> > > > > +++ b/drivers/i2c/busses/i2c-img-scb.c
> > > > > @@ -1196,6 +1196,8 @@ static int img_i2c_init(struct img_i2c *i2c)
> > > > >         clk_khz /= prescale;
> > > > > 
> > > > >         /* Setup the clock increment value */
> > > > > +       if (clk_khz < 1)
> > > > > +               clk_khz = 1;
> > > > >         inc = (256 * 16 * bitrate_khz) / clk_khz;
> > > > > 
> > > > >         /*
> > > > 
> > > > Ok, I see what gcc is doing.
> > > > 
> > > >         clk_khz = clk_get_rate(i2c->scb_clk) / 1000;
> > > >         ...
> > > >         inc = (256 * 16 * bitrate_khz) / clk_khz;
> > > > 
> > > > Because CONFIG_HAVE_CLK isn't set, clk_get_rate() returns 0, which means
> > > > clk_khz is always zero, so the last statement *always* results in a
> > > > divide-by-zero.  So that looks like a bug in the code.
> > > > 
> > > > However, I'm baffled by how gcc handles it.  Instead of:
> > > > 
> > > >   a) reporting a compile-time warning/error; or
> > > > 
> > > >   b) letting the #DE (divide error) exception happen;
> > > > 
> > > > it inserts a 'ud2', resulting in a #UD (invalid opcode).  Why?!?
> > > 
> > > Well, technically an invalid opcode is shorter code than generating an
> > > (integer) division by zero exception, right?
> > 
> > What does that matter if it's the wrong behavior?
> 
> Well, both terminate the program, and it's obvious if you look at it with a 
> debugger what happened, right?

If it were obvious, we wouldn't be having this discussion :-)

The only thing obvious to me was that gcc mysteriously removed a bunch
of code and replaced it with a 'ud2' instruction in the middle of the
function for no apparent reason.
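
For reference, here's a minimal standalone sketch of the pattern. It's
not the actual kernel code: fake_clk_get_rate() and compute_inc() are
made-up names standing in for the !CONFIG_HAVE_CLK clk_get_rate() stub
and the i2c-img-scb math:

/*
 * fake_clk_get_rate() stands in for the !CONFIG_HAVE_CLK
 * clk_get_rate() stub, which always returns 0.
 */
static unsigned long fake_clk_get_rate(void)
{
	return 0;
}

unsigned int compute_inc(unsigned int bitrate_khz)
{
	unsigned int clk_khz = fake_clk_get_rate() / 1000;

	/*
	 * After inlining, gcc can prove clk_khz == 0, so this division
	 * is undefined behavior and gcc is allowed to replace the
	 * statement with a trap (ud2 on x86) instead of emitting a div
	 * instruction.
	 */
	return (256 * 16 * bitrate_khz) / clk_khz;
}

Whether a given gcc version turns that into a ud2, a plain div, or a
compile-time warning presumably depends on the version and flags, but
it's the same shape as what happened here.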

-- 
Josh
