http://gcc.gnu.org/bugzilla/show_bug.cgi?id=55547
--- Comment #4 from Alexandre Oliva <aoliva at gcc dot gnu.org> 2012-12-07 23:29:28 UTC ---

I don't understand how this sort of unaligned access that modifies
unrelated objects can fit in with any reasonable threaded memory model,
but I guess that's beyond the scope of this bug report ;-)

Here's a patch that restores negative sizes after alignment adjustments,
so that the tests for negative sizes elsewhere kick in.  We might want
to use absolute values elsewhere to get stricter results, but this
should be at least conservatively better.

Uros, I'm regstrapping this on x86_64; would you please give it a spin
on alpha?  TIA,

--- a/gcc/alias.c
+++ b/gcc/alias.c
@@ -2100,14 +2100,20 @@ memrefs_conflict_p (int xsize, rtx x, int ysize, rtx y, HOST_WIDE_INT c)
   /* Deal with alignment ANDs by adjusting offset and size so as to
      cover the maximum range, without taking any previously known
-     alignment into account.  */
+     alignment into account.  Make a size negative after such an
+     adjustment, so that, if we end up with e.g. two SYMBOL_REFs, we
+     assume a potential overlap, because they may end up in contiguous
+     memory locations and the stricter-alignment access may span over
+     part of both.  */
   if (GET_CODE (x) == AND && CONST_INT_P (XEXP (x, 1)))
     {
       HOST_WIDE_INT sc = INTVAL (XEXP (x, 1));
       unsigned HOST_WIDE_INT uc = sc;
-      if (xsize > 0 && sc < 0 && -uc == (uc & -uc))
+      if (sc < 0 && -uc == (uc & -uc))
	{
-	  xsize -= sc + 1;
+	  if (xsize > 0)
+	    xsize = -xsize;
+	  xsize += sc + 1;
	  c -= sc + 1;
	  return memrefs_conflict_p (xsize, canon_rtx (XEXP (x, 0)),
				     ysize, y, c);
@@ -2117,9 +2123,11 @@ memrefs_conflict_p (int xsize, rtx x, int ysize, rtx y, HOST_WIDE_INT c)
     {
       HOST_WIDE_INT sc = INTVAL (XEXP (y, 1));
       unsigned HOST_WIDE_INT uc = sc;
-      if (ysize > 0 && sc < 0 && -uc == (uc & -uc))
+      if (sc < 0 && -uc == (uc & -uc))
	{
-	  ysize -= sc + 1;
+	  if (ysize > 0)
+	    ysize = -ysize;
+	  ysize += sc + 1;
	  c += sc + 1;
	  return memrefs_conflict_p (xsize, x, ysize,
				     canon_rtx (XEXP (y, 0)), c);