On Mon, Nov 18, 2013 at 1:11 PM, Bernd Edlinger
<bernd.edlin...@hotmail.de> wrote:
> Hi,
>
>
> This modified test case exposes a bug in the already approved part of the 
> strict-volatile-bitfields patch:
>
> #include <stdlib.h>
>
> typedef struct {
>   char pad;
>   int arr[0];
> } __attribute__((packed)) str;
>
> str *
> foo (int* src)
> {
>   str *s = malloc (sizeof (str) + sizeof (int));
>   s->arr[0] = 0x12345678;
>   asm volatile("":::"memory");
>   *src = s->arr[0];
>   return s;
> }
>
>
> As we know, this test case triggered a recursion in store_bit_field on ARM
> and on PowerPC, which is no longer reproducible after this patch is applied:
> http://gcc.gnu.org/ml/gcc-patches/2013-11/msg02025.html
>
> Additionally, it triggered a recursion in extract_bit_field, but _only_ on my
> local copy of the trunk.
> I had this patch installed, but did not expect it to change anything unless
> the values are volatile.
> That was caused by this hunk in the strict-volatile-bitfields v4 patch:
>
>
> @@ -1691,45 +1736,19 @@ extract_fixed_bit_field (enum machine_mo
>          includes the entire field.  If such a mode would be larger than
>          a word, we won't be doing the extraction the normal way.  */
>
> -      if (MEM_VOLATILE_P (op0)
> -         && flag_strict_volatile_bitfields > 0)
> -       {
> -         if (GET_MODE_BITSIZE (GET_MODE (op0)) > 0)
> -           mode = GET_MODE (op0);
> -         else if (target && GET_MODE_BITSIZE (GET_MODE (target)) > 0)
> -           mode = GET_MODE (target);
> -         else
> -           mode = tmode;
> -       }
> -      else
> -       mode = get_best_mode (bitsize, bitnum, 0, 0,
> -                             MEM_ALIGN (op0), word_mode, MEM_VOLATILE_P (op0));
> +      mode = GET_MODE (op0);
> +      if (GET_MODE_BITSIZE (mode) == 0
> +         || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
> +       mode = word_mode;
> +      mode = get_best_mode (bitsize, bitnum, 0, 0,
> +                           MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
>
>        if (mode == VOIDmode)
>         /* The only way this should occur is if the field spans word
>            boundaries.  */
>         return extract_split_bit_field (op0, bitsize, bitnum, unsignedp);
>
> So the problem arose because this function originally did not look at
> GET_MODE (op0) and always used word_mode.  That was changed, but the change
> also affected non-volatile data.
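> 
> To make the new mode selection concrete, here is a minimal stand-alone sketch
> of the replacement hunk's logic, with modes reduced to access widths in bits.
> WORD_BITS and pick_limit_bits are made-up names for this sketch only, assuming
> a 32-bit word_mode:
> 
> #include <stdio.h>
> 
> #define WORD_BITS 32   /* assume a 32-bit word_mode target */
> 
> /* Start from the width of op0's mode, but fall back to the word width when
>    that width is zero (BLKmode) or wider than a word; the result is the upper
>    bound that would then be handed to get_best_mode.  */
> static int
> pick_limit_bits (int op0_mode_bits)
> {
>   if (op0_mode_bits == 0 || op0_mode_bits > WORD_BITS)
>     return WORD_BITS;
>   return op0_mode_bits;
> }
> 
> int
> main (void)
> {
>   printf ("BLKmode (0 bits)  -> limit %d\n", pick_limit_bits (0));   /* 32 */
>   printf ("QImode (8 bits)   -> limit %d\n", pick_limit_bits (8));   /*  8 */
>   printf ("DImode (64 bits)  -> limit %d\n", pick_limit_bits (64));  /* 32 */
>   return 0;
> }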
>
>
> Now, if we solve this differently and install the C++ memory model patch,
> we can avoid introducing the recursion in the extract path
> and at the same time remove these two hunks from the update patch:
>
> +  else if (MEM_P (str_rtx)
> +          && MEM_VOLATILE_P (str_rtx)
> +          && flag_strict_volatile_bitfields > 0)
> +    /* This is a case where -fstrict-volatile-bitfields doesn't apply
> +       because we can't do a single access in the declared mode of the field.
> +       Since the incoming STR_RTX has already been adjusted to that mode,
> +       fall back to word mode for subsequent logic.  */
> +    str_rtx = adjust_address (str_rtx, word_mode, 0);
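> 
> For reference, a minimal sketch of the situation this hunk was guarding
> (this example is mine, not part of the patch): a volatile bit-field whose
> declared mode (int, i.e. SImode) may not allow a single aligned access
> because the packed layout misaligns its container, so the code fell back to
> word_mode for the subsequent logic:
> 
> struct s
> {
>   char pad;
>   volatile unsigned int f : 24;   /* declared mode int, container starts at
>                                      a misaligned offset due to packing */
> } __attribute__ ((packed));
> 
> unsigned int
> read_f (struct s *p)
> {
>   return p->f;   /* a single access in the declared mode may not be possible
>                     here; that is the fallback case handled above */
> }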
>
>
>
> Attached you'll find a new version of the bitfields-update patch; it is again
> relative to the already approved version of the volatile-bitfields patch v4,
> part 1/2.
>
> Bootstrapped and regression-tested on x86_64-linux-gnu;
> additionally tested with an ARM cross-compiler.
>
>
> OK for trunk?

Ok.

Thanks,
Richard.

>
> Thanks
> Bernd.
