On Wed, Mar 01, 2017 at 11:42:54PM +0100, Arnd Bergmann wrote:
> On Wed, Mar 1, 2017 at 5:53 PM, Josh Poimboeuf <jpoim...@redhat.com> wrote:
> > On Wed, Mar 01, 2017 at 04:27:29PM +0100, Arnd Bergmann wrote:
> 
> > I see no apparent reason for the ud2.
> 
> It's the possible division by zero. This change would avoid the ud2:
> 
> diff --git a/drivers/i2c/busses/i2c-img-scb.c b/drivers/i2c/busses/i2c-img-scb.c
> index db8e8b40569d..a2b09c518225 100644
> --- a/drivers/i2c/busses/i2c-img-scb.c
> +++ b/drivers/i2c/busses/i2c-img-scb.c
> @@ -1196,6 +1196,8 @@ static int img_i2c_init(struct img_i2c *i2c)
>         clk_khz /= prescale;
> 
>         /* Setup the clock increment value */
> +       if (clk_khz < 1)
> +               clk_khz = 1;
>         inc = (256 * 16 * bitrate_khz) / clk_khz;
> 
>         /*

Ok, I see what gcc is doing.

        clk_khz = clk_get_rate(i2c->scb_clk) / 1000;
        ...
        inc = (256 * 16 * bitrate_khz) / clk_khz;

Because CONFIG_HAVE_CLK isn't set, clk_get_rate() returns 0, which means
clk_khz is always zero, so the 'inc' calculation *always* results in a
divide-by-zero.  That looks like a genuine bug in the driver code.
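
To make it concrete, here's a minimal standalone sketch of the pattern
gcc ends up seeing after inlining (with optimization enabled; exact
behavior varies by gcc version, and the names here are made up for
illustration -- the stub stands in for the !CONFIG_HAVE_CLK
clk_get_rate() stub in include/linux/clk.h):

        /* Without CONFIG_HAVE_CLK, clk_get_rate() is a static inline
         * stub that just returns 0, roughly like this: */
        static inline unsigned long fake_clk_get_rate(void)
        {
                return 0;
        }

        unsigned int calc_inc(unsigned int bitrate_khz)
        {
                /* After inlining, gcc can prove clk_khz == 0. */
                unsigned int clk_khz = fake_clk_get_rate() / 1000;

                return (256 * 16 * bitrate_khz) / clk_khz;
        }

Since the zero divisor comes from an inlined function rather than a
literal, gcc doesn't flag the division statement itself, but constant
propagation still proves the divisor is zero.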

However, I'm baffled by how gcc handles it.  Instead of:

  a) reporting a compile-time warning/error; or

  b) letting the #DE (divide error) exception happen;

it inserts a 'ud2' instruction, resulting in a #UD (invalid opcode)
exception.  Why?!?
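
For contrast, when the divisor is only known at run time, gcc takes
path (b): it emits a real 'div' instruction and a zero divisor raises
#DE when the code actually executes.  A minimal sketch (same assumed
gcc -O2 setup as above):

        /* Divisor not provable at compile time: gcc emits an actual
         * division, and den == 0 raises #DE at run time, not #UD. */
        unsigned int div_runtime(unsigned int num, unsigned int den)
        {
                return num / den;
        }

It's only when gcc can prove the divisor is zero at compile time, as
in the img_i2c_init() case, that it replaces the division with the
'ud2'.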

-- 
Josh
