On Mon, Mar 16, 2015 at 11:53 AM, Bernd Edlinger
<bernd.edlin...@hotmail.de> wrote:
>
> Hi,
>
>
> While looking at m68k I noticed the following, which is
> a general problem:
>
> If the alignment of the structure is less than sizeof(field), the
> strict volatile bitfields code may read beyond the end of the
> structure!
>
> Consider this example:
>
> struct s
> {
>   char x : 8;
>   volatile unsigned int y : 31;
>   volatile unsigned int z : 1;
> } __attribute__((packed));
>
> struct s global;
>
>
> Here we have sizeof(struct s) == 5 and alignment(global) == 1.
> However, when we access global.z we read a 32-bit word
> at offset 4, which touches 3 bytes that are not safe to use.
>
> Something like that never happens with -fno-strict-volatile-bitfields,
> because, IIRC, with the sole exception of the simple_mem_bitfield_p
> code path, an access mode larger than MEM_ALIGN(x) is never used.

Are you sure?  There is still PR36043 ...

> In this example, if I want to use the packed attribute,
> I also have to use the aligned(4) attribute; this satisfies the
> check "MEM_ALIGN (op0) < modesize", which is IMO always necessary
> for strict volatile bitfields, not only on STRICT_ALIGNMENT targets.

But your patch still somehow special-cases them.

> On a target that has BIGGEST_ALIGNMENT < BITS_PER_WORD,
> you have to add __attribute__((aligned(4))) to the structure
> to use strict volatile bitfields.
>
> I had to do that in the pr23623.c test case to make it pass on m68k,
> for instance.
>
>
> I have attached the updated patch.  As explained before, the check
> "MEM_ALIGN (op0) < modesize" should always be done in
> strict_volatile_bitfield_p.
>
> For the targets that usually enable -fstrict-volatile-bitfields,
> nothing changes, except that when we use "packed" on the structure
> we also need to add an aligned(4) attribute.  For m68k, where the
> natural alignment of any structure is <= 2, we need to force
> aligned(4) if we want to ensure the access is in SImode.
>
> Bootstrapped and reg-tested on x86_64-linux-gnu.
> OK for trunk?

So - shouldn't the check be

  if (MEM_ALIGN (op0) < GET_MODE_ALIGNMENT (fieldmode))
    return false;

instead?  And looking at

   /* Check for cases of unaligned fields that must be split.  */
-  if (bitnum % BITS_PER_UNIT + bitsize > modesize
-      || (STRICT_ALIGNMENT
-         && bitnum % GET_MODE_ALIGNMENT (fieldmode) + bitsize > modesize))
+  if (bitnum % (STRICT_ALIGNMENT ? modesize : BITS_PER_UNIT)
+      + bitsize > modesize)
     return false;

I wonder what the required semantics are - are they not that we need to access
the whole "underlying" field with the access (the C++ memory model only
requires we don't go beyond the field)?  It seems that information isn't readily
available here though.  So the check verifies that we can access the field
with a single access using fieldmode.  Which means (again),

  if (bitnum % (STRICT_ALIGNMENT
                ? GET_MODE_ALIGNMENT (fieldmode) : BITS_PER_UNIT)
      + bitsize > modesize)

Also this means that for !STRICT_ALIGNMENT platforms the MEM_ALIGN check
isn't sufficient, which is what the other hunks in the patch are meant to fix?

It seems at least the

@@ -988,6 +995,16 @@ store_bit_field (rtx str_rtx, unsigned HOST_WIDE_I

          str_rtx = narrow_bit_field_mem (str_rtx, fieldmode, bitsize, bitnum,
                                          &bitnum);
+         if (!STRICT_ALIGNMENT
+             && bitnum + bitsize > GET_MODE_BITSIZE (fieldmode))
+           {
+             unsigned HOST_WIDE_INT offset;
+             offset = (bitnum + bitsize + BITS_PER_UNIT - 1
+                       - GET_MODE_BITSIZE (fieldmode)) / BITS_PER_UNIT;
+             str_rtx = adjust_bitfield_address (str_rtx, fieldmode, offset);
+             bitnum -= offset * BITS_PER_UNIT;
+           }
+         gcc_assert (bitnum + bitsize <= GET_MODE_BITSIZE (fieldmode));

hunk could do with a comment.

That said, I fail to see how the issue is specific to
strict volatile bitfields.

I also hope somebody else will look at the patch - I don't feel like the
appropriate person to review changes in this area (even if I did so in
the past).

Thanks,
Richard.

>
> Thanks
> Bernd.
>
