On Mon, Nov 14, 2022 at 8:48 AM Jakub Jelinek <ja...@redhat.com> wrote:
>
> Hi!
>
> Working virtually out of Baker Island.
>
> We got a response from AMD in
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104688#c10
> so the following patch starts treating AMD with AVX and CMPXCHG16B
> ISAs like Intel by using vmovdqa for atomic load/store in libatomic.
>
> Ok for trunk if it passes bootstrap/regtest?
>
> 2022-11-13  Jakub Jelinek  <ja...@redhat.com>
>
>         PR target/104688
>         * config/x86/init.c (__libat_feat1_init): Revert 2022-03-17 change
>         - on x86_64 no longer clear bit_AVX if CPU vendor is not Intel.
>
> --- libatomic/config/x86/init.c.jj      2022-03-17 18:48:56.708723194 +0100
> +++ libatomic/config/x86/init.c 2022-11-13 18:23:26.315440071 -1200
> @@ -34,18 +34,6 @@ __libat_feat1_init (void)
>    unsigned int eax, ebx, ecx, edx;
>    FEAT1_REGISTER = 0;
>    __get_cpuid (1, &eax, &ebx, &ecx, &edx);
> -#ifdef __x86_64__
> -  if ((FEAT1_REGISTER & (bit_AVX | bit_CMPXCHG16B))
> -      == (bit_AVX | bit_CMPXCHG16B))
> -    {
> -      /* Intel SDM guarantees that 16-byte VMOVDQA on 16-byte aligned address
> -        is atomic, but so far we don't have this guarantee from AMD.  */
> -      unsigned int ecx2 = 0;
> -      __get_cpuid (0, &eax, &ebx, &ecx2, &edx);
> -      if (ecx2 != signature_INTEL_ecx)
> -       FEAT1_REGISTER &= ~bit_AVX;

We still need this check, but it should also be bypassed for the AMD
signature (see the sketch below); there are x86 vendors other than Intel
and AMD, and we have no such atomicity guarantee from them.

OK with the above addition.
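
To make that concrete, here is a rough, untested sketch of what I mean,
keeping the cpuid vendor check but accepting the signature_AMD_ecx
constant from <cpuid.h> in addition to signature_INTEL_ecx:

#ifdef __x86_64__
  if ((FEAT1_REGISTER & (bit_AVX | bit_CMPXCHG16B))
      == (bit_AVX | bit_CMPXCHG16B))
    {
      /* Intel SDM guarantees that 16-byte VMOVDQA on a 16-byte aligned
         address is atomic, and per the AMD response in PR target/104688
         AMD can be treated the same way.  Other x86 vendors with AVX
         have made no such guarantee, so keep clearing bit_AVX for them.  */
      unsigned int ecx2 = 0;
      __get_cpuid (0, &eax, &ebx, &ecx2, &edx);
      if (ecx2 != signature_INTEL_ecx && ecx2 != signature_AMD_ecx)
        FEAT1_REGISTER &= ~bit_AVX;
    }
#endif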

Thanks,
Uros.

> -    }
> -#endif
>    /* See the load in load_feat1.  */
>    __atomic_store_n (&__libat_feat1, FEAT1_REGISTER, __ATOMIC_RELAXED);
>    return FEAT1_REGISTER;
>
>         Jakub
>
