Kim,

thank you for the suggestions. Here is an updated webrev:

http://cr.openjdk.java.net/~avoitylov/webrev.8231612.04/

The assembly generated with inline functions is equivalent to that with
the macros, and the testing passed as well.

-Aleksei

On 07/12/2019 02:54, Kim Barrett wrote:
>> On Dec 6, 2019, at 8:32 AM, Aleksei Voitylov <aleksei.voity...@bell-sw.com> 
>> wrote:
>>
>> Hi Kim,
>>
>> (cced hotspot-runtime as the updated webrev has code changes).
>>
>> Thank you for the suggestion to address this at the code level. The
>> following change makes GCC generate correct code:
>>
>> webrev: http://cr.openjdk.java.net/~avoitylov/webrev.8231612.03/
>> issue: https://bugs.openjdk.java.net/browse/JDK-8231612
>>
>> GCC alias analysis at the RTL level has some limitations [1]. In particular,
>> it sometimes cannot differentiate between pointers and integers and can
>> make incorrect assumptions about whether two pointers actually alias
>> each other. We are hitting this in
>> Atomic::CmpxchgByteUsingInt::operator(), which is used on some platforms.
>>
>> The solution is to reference the same memory region using only one
>> pointer. Anything with a second pointer here continues to confuse GCC,
>> and would anyway be fragile in view of this GCC bug which surfaces
>> easily on ARM32.
>>
>> I considered making the get/set byte operations inline functions, but
>> preferred the locality that the defines bring.
>>
>> Tested with JTReg, JCK and jcstress on ARM32 and Solaris SPARC with no
>> regressions. The original problem also disappeared.
>> No regressions in JCK VM, Lang, JTReg HotSpot on Linux x86, x86_64,
>> AArch64, PPC, Mac.
>>
>> Thanks,
>>
>> -Aleksei
>>
>> [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=49330
>>
>> On 19/11/2019 19:01, Kim Barrett wrote:
>>>> On Nov 19, 2019, at 10:18 AM, Aleksei Voitylov 
>>>> <aleksei.voity...@bell-sw.com> wrote:
>>>>
>>>> Kim,
>>>>
>>>> you are right, to play it safe we need to -fno-tree-pre all relevant
>>>> bool occurrences. I'll get back with an updated webrev when testing is
>>>> finished.
>>> I think just sprinkling the build system with -fno-tree-pre to address
>>> this doesn’t scale in the face of people writing new code.
>>>
>>> I think we need to find a way to write CmpxchgByteUsingInt in such a way
>>> that it doesn’t tickle this bug, or forbid byte-sized cmpxchg (which
>>> would be painful), or provide some entirely other implementation
>>> approach.
> This looks like a good solution to me.
>
> (I like this approach even without the gcc bug (which was an
> entertaining read itself); this change lets me delete an item from my
> followup list from templatizing Atomic.)
>
> A couple of really minor nits:
>
> (1) In the macros there are various expressions of the form
>
>    x << BitsPerByte * idx 
>
> Please put parens around the shift calculation, e.g.
>
>    x << (BitsPerByte * idx)
>
> so readers don't need to remember the operator precedence order.
>
> (2) In this line:
>
>  871   uint32_t cur = SET_BYTE_IN_INT(*aligned_dest, canon_compare_value, idx);
>
> I'd like "*aligned_dest" replaced with "Atomic::load(aligned_dest)".
>
> In addition, I think functions would be preferable to macros for the
> helpers.  Put them in the Atomic::CmpxchgByteUsingInt class, with
> inline definitions where you put the macro definitions.
>
>
