On Mon, Nov 28, 2011 at 05:33:17PM -0800, Richard Henderson wrote:
> On 11/27/2011 02:57 PM, Alan Modra wrote:
> > This is the mutex part.  Depends on
> > http://gcc.gnu.org/ml/gcc-patches/2011-11/msg02454.html for MEMMODEL_*
> > values.
> >
> > 	* config/linux/mutex.h: Use atomic rather than sync builtins.
> > 	* config/linux/mutex.c: Likewise.  Comment.  Use -1 for waiting state.
> > 	* config/linux/omp-lock.h: Comment fix.
> > 	* config/linux/arm/mutex.h: Delete.
> > 	* config/linux/powerpc/mutex.h: Delete.
> > 	* config/linux/ia64/mutex.h: Delete.
> > 	* config/linux/mips/mutex.h: Delete.
>
> Looks good modulo _4/_n and using the success return of
> __atomic_compare_exchange.
Did you see my comment about the OpenMP spec requirement that
critical, ordered and parallel regions have a flush on entry and on
exit?  So should I commit a version with

static inline void
gomp_mutex_lock (gomp_mutex_t *mutex)
{
  int oldval = 0;
  /* FIXME: This should just be MEMMODEL_ACQUIRE, MEMMODEL_RELAXED
     but GOMP_critical_start and other functions rely on the lock
     acquisition doing a flush.  See OpenMP API version 3.1 section
     2.8.6 flush Construct.  */
  if (__builtin_expect (!__atomic_compare_exchange_n (mutex, &oldval, 1,
						      false, MEMMODEL_ACQ_REL,
						      MEMMODEL_ACQUIRE), 0))
    gomp_mutex_lock_slow (mutex, oldval);
}

or as posted in
http://gcc.gnu.org/ml/gcc-patches/2011-11/msg02455.html?  Perhaps with
the following as well

	* ordered.c (gomp_ordered_sync): Add MEMMODEL_RELEASE fence.
	* critical.c (GOMP_critical_start): Likewise.

I think we're OK on parallel regions by virtue of gomp_barrier_wait,
but I may not know what I'm talking about.  This stuff fries my
brains.

Index: ordered.c
===================================================================
--- ordered.c	(revision 181770)
+++ ordered.c	(working copy)
@@ -199,6 +199,9 @@ gomp_ordered_sync (void)
   if (team == NULL || team->nthreads == 1)
     return;
 
+  /* There is an implicit flush on entry to an ordered region.  */
+  __atomic_thread_fence (MEMMODEL_RELEASE);
+
   /* ??? I believe it to be safe to access this data without taking the
      ws->lock.  The only presumed race condition is with the previous
      thread on the queue incrementing ordered_cur such that it points
Index: critical.c
===================================================================
--- critical.c	(revision 181770)
+++ critical.c	(working copy)
@@ -33,6 +33,8 @@ static gomp_mutex_t default_lock;
 void
 GOMP_critical_start (void)
 {
+  /* There is an implicit flush on entry to a critical region.  */
+  __atomic_thread_fence (MEMMODEL_RELEASE);
   gomp_mutex_lock (&default_lock);
 }

-- 
Alan Modra
Australia Development Lab, IBM