https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103068

            Bug ID: 103068
           Summary: gomp_mutex_lock_slow isn't optimized
           Product: gcc
           Version: 12.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: libgomp
          Assignee: unassigned at gcc dot gnu.org
          Reporter: hjl.tools at gmail dot com
                CC: crazylht at gmail dot com, jakub at gcc dot gnu.org,
                    wwwhhhyyy333 at gmail dot com
  Target Milestone: ---
            Target: i386,x86-64

From the CPU's point of view, getting a cache line for writing is more
expensive than reading.  See Appendix A.2 Spinlock in:

https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/xeon-lock-scaling-analysis-paper.pdf

An unconditional compare-and-swap grabs the cache line in exclusive
state and causes excessive cache line bouncing.
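
For illustration only (this is not libgomp code, and the helper name is
made up), a read-before-write "test-and-test-and-set" spin keeps the
cache line in shared state while the lock is held and only requests
exclusive ownership once the lock looks free:

static inline void
spin_lock_ttas (int *lock)
{
  for (;;)
    {
      /* Spin on a plain read; the cache line stays shared while the
         lock is held, so no write traffic is generated.  */
      while (__atomic_load_n (lock, __ATOMIC_RELAXED) != 0)
        ;
      /* The lock looks free; only now request exclusive ownership.  */
      if (__atomic_exchange_n (lock, 1, __ATOMIC_ACQUIRE) == 0)
        return;
    }
}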

gomp_mutex_lock_slow has

void
gomp_mutex_lock_slow (gomp_mutex_t *mutex, int oldval)
{
  /* First loop spins a while.  */
  while (oldval == 1)
    {   
      if (do_spin (mutex, 1)) 
        {
          /* Spin timeout, nothing changed.  Set waiting flag.  */
          oldval = __atomic_exchange_n (mutex, -1, MEMMODEL_ACQUIRE);
          if (oldval == 0)
            return;
          futex_wait (mutex, -1);
          break;
        }
      else
        {
          /* Something changed.  If now unlocked, we're good to go.  */
          oldval = 0;

Suggestion: add a plain check for *mutex == 1 here and continue the
spin loop while the lock is still held, so that
__atomic_compare_exchange_n is only issued when it has a chance to
succeed.

          if (__atomic_compare_exchange_n (mutex, &oldval, 1, false,
                                           MEMMODEL_ACQUIRE, MEMMODEL_RELAXED))
            return;
        }
    }

  /* Second loop waits until mutex is unlocked.  We always exit this
     loop with wait flag set, so next unlock will awaken a thread.  */
  while ((oldval = __atomic_exchange_n (mutex, -1, MEMMODEL_ACQUIRE)))
    do_wait (mutex, -1);
}
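
One possible shape of the suggested change, shown against the else
branch above.  This is only a sketch, not a tested patch; the relaxed
atomic load is my choice of "normal check", any equivalent plain read
of *mutex would serve:

      else
        {
          /* Sketch: re-read the mutex first and keep spinning while it
             is still locked, so the compare-and-swap is only issued
             when it can succeed and the cache line is not requested
             for exclusive ownership needlessly.  */
          if (__atomic_load_n (mutex, MEMMODEL_RELAXED) == 1)
            continue;

          /* Something changed.  If now unlocked, we're good to go.  */
          oldval = 0;
          if (__atomic_compare_exchange_n (mutex, &oldval, 1, false,
                                           MEMMODEL_ACQUIRE, MEMMODEL_RELAXED))
            return;
        }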
