https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103066

            Bug ID: 103066
           Summary: __sync_val_compare_and_swap/__sync_bool_compare_and_swap
                    aren't optimized
           Product: gcc
           Version: 12.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: target
          Assignee: unassigned at gcc dot gnu.org
          Reporter: hjl.tools at gmail dot com
                CC: crazylht at gmail dot com, wwwhhhyyy333 at gmail dot com
            Blocks: 103065
  Target Milestone: ---
            Target: i386,x86-64

From the CPU's point of view, getting a cache line for writing is more
expensive than reading.  See Appendix A.2 Spinlock in:

https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/xeon-lock-scaling-analysis-paper.pdf

The full compare-and-swap grabs the cache line in exclusive state and causes
excessive cache line bouncing.

[hjl@gnu-cfl-1 tmp]$ cat x.c
extern int m;

int test(int oldv, int newv)
{
  return __sync_val_compare_and_swap (&m, oldv, newv);
}
[hjl@gnu-cfl-1 tmp]$ gcc -S -O2 x.c
[hjl@gnu-cfl-1 tmp]$ cat x.s
        .file   "x.c"
        .text
        .p2align 4
        .globl  test
        .type   test, @function
test:
.LFB0:
        .cfi_startproc
        movl    %edi, %eax
        lock cmpxchgl   %esi, m(%rip)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
GCC should first emit a normal load and return immediately if the cmpxchgl
would fail (i.e. if the loaded value doesn't equal oldv), so the cache line
is only taken exclusive when the CAS can actually succeed.
        ret
        .cfi_endproc
.LFE0:
        .size   test, .-test
        .ident  "GCC: (GNU) 11.2.1 20211019 (Red Hat 11.2.1-6)"
        .section        .note.GNU-stack,"",@progbits
[hjl@gnu-cfl-1 tmp]$
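The suggested codegen can be sketched at the source level (a hypothetical
illustration using the __atomic builtins, not the actual backend change; the
function name test_opt and the relaxed-load-first shape are assumptions here):

```c
#include <stdbool.h>

int m;

/* Sketch of the suggested transformation: read the cache line in shared
   state first; only execute lock cmpxchg (which needs the line in
   exclusive state) when the compare can actually succeed.  Note the
   early-exit path skips the full barrier of the lock'd instruction. */
int test_opt(int oldv, int newv)
{
  int cur = __atomic_load_n (&m, __ATOMIC_RELAXED);
  if (cur != oldv)
    return cur;                 /* CAS would fail; avoid the write.  */
  /* On failure, oldv is updated with the current value of m, so
     returning oldv yields the old value in both cases, matching the
     __sync_val_compare_and_swap return convention.  */
  __atomic_compare_exchange_n (&m, &oldv, newv, false,
                               __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
  return oldv;
}
```

Under contention this keeps failing waiters in shared state instead of
bouncing the line exclusively between cores, as the Intel lock-scaling
paper above describes for spinlocks.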


Referenced Bugs:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103065
[Bug 103065] [meta] atomic operations aren't optimized
