Author: majnemer
Date: Wed Jul 15 22:13:02 2015
New Revision: 242378

URL: http://llvm.org/viewvc/llvm-project?rev=242378&view=rev
Log:
[Intrin.h] Use compiler builtins to model memory barriers

_ReadBarrier, _WriteBarrier, and _ReadWriteBarrier are essentially
memory barriers of one form or another.  Model these as
atomic_signal_fence(ATOMIC_SEQ_CST).
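
For context, atomic_signal_fence/__atomic_signal_fence is a
compiler-only barrier: it forbids the compiler from reordering memory
accesses across it but emits no fence instruction, matching the old
empty asm with a "memory" clobber.  A minimal sketch (identifiers are
illustrative, not from the header):

  /* Compiles with Clang/GCC; no includes needed for the builtins. */
  volatile int payload;  /* illustrative globals */
  volatile int flag;

  void publish(int value) {
    payload = value;
    /* The compiler may not reorder the two stores across this point,
       but no instruction is emitted for the fence itself. */
    __atomic_signal_fence(__ATOMIC_SEQ_CST);
    flag = 1;
  }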

__faststorefence is a curious intrinsic.  Its sole purpose seems to be
to provide an alternative to mfence when that instruction is slow.
However, mfence is not always slow and, on certain CPUs, is generally
preferable to a 'lock or' sequence.  Give the compiler the freedom to
select the best sequence for the fence.
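
By contrast, a seq_cst thread fence asks for a full hardware fence and
leaves instruction selection to the backend (mfence, or a locked RMW
such as 'lock orq $0, (%rsp)', whichever the target prefers).  A hedged
sketch of the new behavior (the function name is illustrative, not from
the header):

  /* Full memory fence; the backend picks the cheapest encoding. */
  void full_fence_sketch(void) {
    __atomic_thread_fence(__ATOMIC_SEQ_CST);
  }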

Modified:
    cfe/trunk/lib/Headers/Intrin.h

Modified: cfe/trunk/lib/Headers/Intrin.h
URL: 
http://llvm.org/viewvc/llvm-project/cfe/trunk/lib/Headers/Intrin.h?rev=242378&r1=242377&r2=242378&view=diff
==============================================================================
--- cfe/trunk/lib/Headers/Intrin.h (original)
+++ cfe/trunk/lib/Headers/Intrin.h Wed Jul 15 22:13:02 2015
@@ -770,27 +770,25 @@ _InterlockedCompareExchange64(__int64 vo
 
/*----------------------------------------------------------------------------*\
 |* Barriers
 \*----------------------------------------------------------------------------*/
-#if defined(__i386__) || defined(__x86_64__)
 static __inline__ void __DEFAULT_FN_ATTRS
 __attribute__((__deprecated__("use other intrinsics or C++11 atomics instead")))
 _ReadWriteBarrier(void) {
-  __asm__ volatile ("" : : : "memory");
+  __atomic_signal_fence(__ATOMIC_SEQ_CST);
 }
 static __inline__ void __DEFAULT_FN_ATTRS
 __attribute__((__deprecated__("use other intrinsics or C++11 atomics instead")))
 _ReadBarrier(void) {
-  __asm__ volatile ("" : : : "memory");
+  __atomic_signal_fence(__ATOMIC_SEQ_CST);
 }
 static __inline__ void __DEFAULT_FN_ATTRS
 __attribute__((__deprecated__("use other intrinsics or C++11 atomics instead")))
 _WriteBarrier(void) {
-  __asm__ volatile ("" : : : "memory");
+  __atomic_signal_fence(__ATOMIC_SEQ_CST);
 }
-#endif
 #ifdef __x86_64__
 static __inline__ void __DEFAULT_FN_ATTRS
 __faststorefence(void) {
-  __asm__ volatile("lock orq $0, (%%rsp)" : : : "memory");
+  __atomic_thread_fence(__ATOMIC_SEQ_CST);
 }
 #endif
 
/*----------------------------------------------------------------------------*\

