Document each atomic operation provided by urcu/uatomic.h, along with
their memory barrier guarantees.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
---
diff --git a/doc/Makefile.am b/doc/Makefile.am
index bec1d7c..db9811c 100644
--- a/doc/Makefile.am
+++ b/doc/Makefile.am
@@ -1 +1 @@
-dist_doc_DATA = rcu-api.txt
+dist_doc_DATA = rcu-api.txt uatomic-api.txt
diff --git a/doc/uatomic-api.txt b/doc/uatomic-api.txt
new file mode 100644
index 0000000..3605acf
--- /dev/null
+++ b/doc/uatomic-api.txt
@@ -0,0 +1,233 @@
+Userspace RCU Atomic Operations API
+by Mathieu Desnoyers and Paul E. McKenney
+
+
+This document describes the <urcu/uatomic.h> API, the atomic
+operations provided by the Userspace RCU library. The general rule
+regarding memory barriers is that only uatomic_xchg(),
+uatomic_cmpxchg(), uatomic_add_return(), and uatomic_sub_return() imply
+full memory barriers before and after the atomic operation. Other
+primitives do not guarantee any memory barrier.
+
+Only atomic operations performed on integers ("int" and "long", signed
+and unsigned) are supported on all architectures. Some architectures
+also support 1-byte and 2-byte atomic operations. Those architectures
+define UATOMIC_HAS_ATOMIC_BYTE and UATOMIC_HAS_ATOMIC_SHORT,
+respectively, when uatomic.h is included. Trying to perform an atomic
+write to a type whose size is not supported by the architecture will
+trigger an illegal instruction.
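+
+For instance, 1-byte or 2-byte atomic types can be guarded at compile
+time (a minimal sketch; "seq" is a hypothetical variable):
+
+       #include <urcu/uatomic.h>
+
+       #ifdef UATOMIC_HAS_ATOMIC_SHORT
+       static unsigned short seq;      /* 2-byte atomics supported */
+       #else
+       static unsigned long seq;       /* fall back to word size */
+       #endif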
+
+In the description below, "type" is a type that can be atomically
+written to by the architecture. It needs to be at most word-sized, and
+its alignment needs to be greater than or equal to its size.
+
+type uatomic_set(type *addr, type v)
+
+       Atomically write @v into @addr.
+
+type uatomic_read(type *addr)
+
+       Atomically read the content of @addr and return it.
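+
+       For instance (a minimal sketch; "state", publish_state() and
+       snapshot_state() are hypothetical):
+
+               #include <urcu/uatomic.h>
+
+               static unsigned long state;
+
+               void publish_state(unsigned long v)
+               {
+                       /* Atomic store; no memory barrier implied. */
+                       uatomic_set(&state, v);
+               }
+
+               unsigned long snapshot_state(void)
+               {
+                       /* Atomic load; no memory barrier implied. */
+                       return uatomic_read(&state);
+               }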
+
+type uatomic_cmpxchg(type *addr, type old, type new)
+
+       Atomically check if @addr contains @old. If true, replace the
+       content of @addr with @new. Return the value previously
+       contained in @addr. This function implies a full memory barrier
+       before and after the atomic operation.
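+
+       For instance, a compare-and-swap retry loop that increments a
+       counter unless it is zero (a minimal sketch; "counter" and
+       add_unless_zero() are hypothetical):
+
+               #include <urcu/uatomic.h>
+
+               static unsigned long counter;
+
+               int add_unless_zero(unsigned long v)
+               {
+                       unsigned long old, ret;
+
+                       ret = uatomic_read(&counter);
+                       for (;;) {
+                               old = ret;
+                               if (!old)
+                                       return 0;       /* was zero */
+                               ret = uatomic_cmpxchg(&counter, old,
+                                                     old + v);
+                               if (ret == old)
+                                       return 1;       /* updated */
+                       }
+               }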
+
+type uatomic_xchg(type *addr, type new)
+
+       Atomically replace the content of @addr with @new, and return
+       the value previously contained in @addr. This function implies
+       a full memory barrier before and after the atomic operation.
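+
+       For instance, a simple test-and-set spinlock (a minimal sketch;
+       "lock", spin_lock() and spin_unlock() are hypothetical):
+
+               #include <urcu/arch.h>  /* cmm_smp_mb(), caa_cpu_relax() */
+               #include <urcu/uatomic.h>
+
+               static int lock;
+
+               void spin_lock(void)
+               {
+                       /* uatomic_xchg() implies a full barrier. */
+                       while (uatomic_xchg(&lock, 1))
+                               caa_cpu_relax();        /* busy-wait */
+               }
+
+               void spin_unlock(void)
+               {
+                       cmm_smp_mb();   /* order critical section before release */
+                       uatomic_set(&lock, 0);
+               }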
+
+type uatomic_add_return(type *addr, type v)
+type uatomic_sub_return(type *addr, type v)
+
+       Atomically increment/decrement the content of @addr by @v, and
+       return the resulting value. These functions imply a full memory
+       barrier before and after the atomic operation.
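+
+       For instance, reference counting where dropping the last
+       reference triggers release (a minimal sketch; "refcount",
+       get_ref(), put_ref() and release() are hypothetical):
+
+               #include <urcu/uatomic.h>
+
+               extern void release(void);
+
+               static long refcount = 1;
+
+               void get_ref(void)
+               {
+                       uatomic_add_return(&refcount, 1);
+               }
+
+               void put_ref(void)
+               {
+                       /* Full barriers order accesses around the drop. */
+                       if (!uatomic_sub_return(&refcount, 1))
+                               release();      /* last reference */
+               }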
+
+void uatomic_and(type *addr, type mask)
+void uatomic_or(type *addr, type mask)
+
+       Atomically write the result of a bitwise "and"/"or" between the
+       content of @addr and @mask into @addr. These functions do not
+       imply memory barriers by themselves; when needed, barriers can
+       be requested explicitly by using
+       cmm_smp_mb__before_uatomic_and(),
+       cmm_smp_mb__after_uatomic_and(),
+       cmm_smp_mb__before_uatomic_or(), and
+       cmm_smp_mb__after_uatomic_or().
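+
+       For instance, setting a flag bit with explicit ordering around
+       the operation (a minimal sketch; "flags", FLAG_READY and
+       set_ready() are hypothetical):
+
+               #include <urcu/uatomic.h>
+
+               #define FLAG_READY     0x1UL
+
+               static unsigned long flags;
+
+               void set_ready(void)
+               {
+                       cmm_smp_mb__before_uatomic_or();
+                       uatomic_or(&flags, FLAG_READY);
+                       cmm_smp_mb__after_uatomic_or();
+               }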
+
+void uatomic_add(type *addr, type v)
+void uatomic_sub(type *addr, type v)
+
+       Atomically increment/decrement the content of @addr by @v.
+       These functions do not imply memory barriers by themselves;
+       when needed, barriers can be requested explicitly by using
+       cmm_smp_mb__before_uatomic_add(),
+       cmm_smp_mb__after_uatomic_add(),
+       cmm_smp_mb__before_uatomic_sub(), and
+       cmm_smp_mb__after_uatomic_sub().
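+
+       For instance, a statistics counter that needs no ordering
+       guarantee (a minimal sketch; "nr_events" and account_event()
+       are hypothetical):
+
+               #include <urcu/uatomic.h>
+
+               static unsigned long nr_events;
+
+               void account_event(void)
+               {
+                       /* No barrier needed for a plain statistic. */
+                       uatomic_add(&nr_events, 1);
+               }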
+
+void uatomic_inc(type *addr)
+void uatomic_dec(type *addr)
+
+       Atomically increment/decrement the content of @addr by 1.
+       These functions do not imply memory barriers by themselves;
+       when needed, barriers can be requested explicitly by using
+       cmm_smp_mb__before_uatomic_inc(),
+       cmm_smp_mb__after_uatomic_inc(),
+       cmm_smp_mb__before_uatomic_dec(), and
+       cmm_smp_mb__after_uatomic_dec().
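+
+       For instance, tracking active readers, with explicit barriers
+       ordering the counter updates against the accesses they bracket
+       (a minimal sketch; "active_readers", reader_enter() and
+       reader_exit() are hypothetical):
+
+               #include <urcu/uatomic.h>
+
+               static unsigned long active_readers;
+
+               void reader_enter(void)
+               {
+                       uatomic_inc(&active_readers);
+                       cmm_smp_mb__after_uatomic_inc();
+               }
+
+               void reader_exit(void)
+               {
+                       cmm_smp_mb__before_uatomic_dec();
+                       uatomic_dec(&active_readers);
+               }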
-- 
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com
