On 07/30/2013 12:49 PM, Andrew MacLeod wrote:
I split the original patch into some smaller hunks, and cleaned up a few bits and pieces here and there... following:


Not looking for a review, but posting progress to this point since I'm off for 2 weeks, and someone might want to play with this.

I re-examined implementing the atomic type as its own canonical type instead of a qualified variant, as well as some variations... and ultimately, I think the current qualified implementation I posted is probably the best one to work with. It's very convenient to be able to get to the non-atomic type by using TYPE_MAIN_VARIANT... Anyway, the original 5 patches stand, but will require closer auditing (starting at http://gcc.gnu.org/ml/gcc-patches/2013-07/msg01478.html).

I've attached 2 patches and a header file here. One is the changes required for handling the C11 atomic expressions... i.e., x += 4.2, and the other is a first cut at the content for the stdatomic.h include file, and a required change for it.

The atomic expression patch works on many cases, but falls short with any kind of complexity on the RHS of an expression which needs to be mapped to an atomic load. While parsing, we can't tell, and the RHS tends to become an expression before we really know. So in the comments I have my proposed solution, which I will look at when I return in 2 weeks. Basically, rather than have the ATOMIC_TYPE attribute ripple all the way through an expression, as soon as we see an ATOMIC_TYPE, we wrap it in a new node, ATOMIC_EXPR, which holds the atomic expression and sets its own expression type to the non-atomic variant. This means the only expressions which will have the TYPE_ATOMIC flag set will be ones which are wrapped in an ATOMIC_EXPR, and those are the ones we need to go and replace with an __atomic_load or whatever.

I have also punted on the floating point exception stuff which footnote 113 requires... we'll worry about that later.

Nevertheless, the patch that is there can do some interesting things... it can compile and execute this test file successfully as proof of concept, with all the right conversions and use of lock-free routines when available. (Use -mcx16 on an x86_64 box, btw, or the __atomic_*_16 routines will be unresolved.) Complex types also work. long double complex is 32 bytes in size, and will make calls into libatomic to resolve those operations.

#include <stdio.h>

double __attribute__((atomic)) a;
float __attribute__((atomic)) t;
long double __attribute__((atomic))  ld;
char __attribute__((atomic)) c;
int __attribute__((atomic)) i;
long long __attribute__((atomic)) l;
int g;
int main ()
{
  g = 40;
  ld = t = a = 4.567;
  c = l = i = g + 2;
  printf ("%f == %f == %Lf == 4.567\n", t, a, ld);
  printf ("%d == %d == %lld == 42\n", c, i, l);
}


I have also attached a mockup of stdatomic.h... I haven't even tried compiling or testing it yet, so it may have a syntax error or something, but I wanted to see if the functionality was possible. The only thing missing is the ability to find the non-atomic type of an atomic type.

I've included an additional 2-line patch which should change the meaning of __typeof__ (again untested, the joys of imminently leaving for 2 weeks :-). I'm not sure the normal practical uses of __typeof__ have much meaning for an atomic type; it seems far more useful for __typeof__ of an atomic qualified type to return the non-atomic variant. If there is resistance to that definition, then I'll need to create a __nonatomic_typeof__ variant... that's just a bit more work. In any case, that patch is required for stdatomic.h to be implemented. You can see where I used it to call the generic atomic operations with all the proper types for the non-atomic fields which need a temp var. The header file is actually quite short. I don't know how to implement it other than with macro wrappers around the builtins.

And that's it. I'll be back in 2 weeks to get back to wrapping this up, and adding some testcases and error reporting.

unless *YOU*  beat me to it :-)

Andrew




	* c/c-parser.c (c_parser_expr_no_commas): Call build_atomic_load on RHS.
	* c/c-typeck.c (build_modify_expr): Call build_atomic_assign for atomic
	expressions.
	(build_atomic_assign): New.  Construct appropriate sequences for
	atomic assignment operations.
	(build_atomic_load):  New.  Construct an atomic load.
	* c-family/c-common.h (build_atomic_load): Prototype.

Index: gcc/c/c-parser.c
===================================================================
*** gcc/c/c-parser.c	(revision 201248)
--- gcc/c/c-parser.c	(working copy)
*************** c_parser_expr_no_commas (c_parser *parse
*** 5440,5445 ****
--- 5440,5472 ----
        code = BIT_IOR_EXPR;
        break;
      default:
+       /* TODO
+ 	 This doesn't work.  Expressions like ~a where a is atomic do not
+ 	 function properly, since the RHS is parsed/consumed as an entire
+ 	 expression, and at the time we don't know whether this expression
+ 	 is an RHS or LHS, so we don't know if we need a load or not.
+ 
+ 	 I think we need to introduce an ATOMIC_EXPR node, and whenever the
+ 	 type of an expression becomes TYPE_ATOMIC(), we immediately hang the
+ 	 expression off a new ATOMIC_EXPR node as operand 0, and change the
+ 	 type of the ATOMIC_EXPR node to TYPE_MAIN_VARIANT(atomic_type).  This
+ 	 will encapsulate all the expressions which need to be handled with an
+ 	 ATOMIC_EXPR node; then at this point, scan the RHS and see if there
+ 	 are any ATOMIC_EXPRs, and replace those nodes with atomic loads of
+ 	 the ATOMIC_EXPR operand.
+ 
+ 	 This will also change the LHS processing in build_modify_expr...
+ 	 although *in theory* the top-level expression *ought* to be the
+ 	 only thing that should have an ATOMIC_EXPR(), so it may be as
+ 	 simple as checking whether the LHS is an ATOMIC_EXPR node rather
+ 	 than the current check of ATOMIC_TYPE (lhs).
+ 
+ 	 This also means the TYPE_ATOMIC flag in expressions should ONLY
+ 	 occur on the operand of an ATOMIC_EXPR() node... anywhere else
+ 	 would be an error.  */
+       if (TREE_CODE (lhs.value) != ERROR_MARK 
+ 	  && TYPE_ATOMIC (TREE_TYPE (lhs.value)))
+ 	lhs.value = build_atomic_load (op_location, lhs.value);
        return lhs;
      }
    c_parser_consume_token (parser);
Index: gcc/c/c-typeck.c
===================================================================
*** gcc/c/c-typeck.c	(revision 201248)
--- gcc/c/c-typeck.c	(working copy)
*************** static void readonly_warning (tree, enum
*** 103,108 ****
--- 103,110 ----
  static int lvalue_or_else (location_t, const_tree, enum lvalue_use);
  static void record_maybe_used_decl (tree);
  static int comptypes_internal (const_tree, const_tree, bool *, bool *);
+ static tree build_atomic_assign (location_t, tree, enum tree_code, tree);
+ 
  
  /* Return true if EXP is a null pointer constant, false otherwise.  */
  
*************** build_modify_expr (location_t location,
*** 4830,4835 ****
--- 4832,4838 ----
    tree lhstype = TREE_TYPE (lhs);
    tree olhstype = lhstype;
    bool npc;
+   bool is_atomic_op;
  
    /* Types that aren't fully specified cannot be used in assignments.  */
    lhs = require_complete_type (lhs);
*************** build_modify_expr (location_t location,
*** 4842,4847 ****
--- 4845,4852 ----
    if (!objc_is_property_ref (lhs) && !lvalue_or_else (location, lhs, lv_assign))
      return error_mark_node;
  
+   is_atomic_op = TYPE_ATOMIC (TREE_TYPE (lhs));
+ 
    if (TREE_CODE (rhs) == EXCESS_PRECISION_EXPR)
      {
        rhs_semantic_type = TREE_TYPE (rhs);
*************** build_modify_expr (location_t location,
*** 4872,4883 ****
      {
        lhs = c_fully_fold (lhs, false, NULL);
        lhs = stabilize_reference (lhs);
-       newrhs = build_binary_op (location,
- 				modifycode, lhs, rhs, 1);
  
!       /* The original type of the right hand side is no longer
! 	 meaningful.  */
!       rhs_origtype = NULL_TREE;
      }
  
    if (c_dialect_objc ())
--- 4877,4893 ----
      {
        lhs = c_fully_fold (lhs, false, NULL);
        lhs = stabilize_reference (lhs);
  
!       /* Construct the RHS for any non-atomic compound assignment.  */
!       if (!is_atomic_op)
!         {
! 	  newrhs = build_binary_op (location,
! 				    modifycode, lhs, rhs, 1);
! 
! 	  /* The original type of the right hand side is no longer
! 	     meaningful.  */
! 	  rhs_origtype = NULL_TREE;
! 	}
      }
  
    if (c_dialect_objc ())
*************** build_modify_expr (location_t location,
*** 4944,4949 ****
--- 4954,4968 ----
  		    "enum conversion in assignment is invalid in C++");
      }
  
+   /* If the lhs is atomic, remove that qualifier.  */
+   if (is_atomic_op)
+     {
+       lhstype = build_qualified_type (lhstype, 
+ 				      TYPE_QUALS (lhstype) & ~TYPE_QUAL_ATOMIC);
+       olhstype = build_qualified_type (olhstype, 
+ 				       TYPE_QUALS (olhstype) & ~TYPE_QUAL_ATOMIC);
+     }
+ 
    /* Convert new value to destination type.  Fold it first, then
       restore any excess precision information, for the sake of
       conversion warnings.  */
*************** build_modify_expr (location_t location,
*** 4970,4978 ****
  
    /* Scan operands.  */
  
!   result = build2 (MODIFY_EXPR, lhstype, lhs, newrhs);
!   TREE_SIDE_EFFECTS (result) = 1;
!   protected_set_expr_location (result, location);
  
    /* If we got the LHS in a different type for storing in,
       convert the result back to the nominal type of LHS
--- 4989,5002 ----
  
    /* Scan operands.  */
  
!   if (is_atomic_op)
!     result = build_atomic_assign (location, lhs, modifycode, newrhs);
!   else
!     {
!       result = build2 (MODIFY_EXPR, lhstype, lhs, newrhs);
!       TREE_SIDE_EFFECTS (result) = 1;
!       protected_set_expr_location (result, location);
!     }
  
    /* If we got the LHS in a different type for storing in,
       convert the result back to the nominal type of LHS
*************** c_build_va_arg (location_t loc, tree exp
*** 10972,10974 ****
--- 10996,11207 ----
  		"C++ requires promoted type, not enum type, in %<va_arg%>");
    return build_va_arg (loc, expr, type);
  }
+ 
+ 
+ /* Expand atomic compound assignments into an appropriate sequence as
+    specified by the C11 standard, section 6.5.16.2.
+     given
+        _Atomic T1 E1
+        T2 E2
+        E1 op= E2
+ 
+  This sequence is used for integer, floating point and complex types. 
+ 
+  In addition, the 'fe' prefixed routines may need to be invoked for
+  floating point and complex types when annex F is in effect (regarding
+  floating point or exceptional conditions).  See 6.5.16.2 footnote 113:
+ 
+  TODO: these are not implemented as yet, but the comments are placed at the
+  correct locations in the code for the appropriate calls to be made.  They
+  should only be issued if the expression type is !INTEGRAL_TYPE_P().
+ 
+   T1 newval;
+   T1 old;
+   T1 *addr;
+   T2 val;
+   fenv_t fenv;
+ 
+   addr = &E1;
+   val = (E2);
+   __atomic_load (addr, &old, SEQ_CST);
+   feholdexcept (&fenv);				<<-- float & complex only
+ loop:
+     newval = old op val;
+     if (__atomic_compare_exchange_strong (addr, &old, &newval, SEQ_CST,
+ 					  SEQ_CST))
+       goto done;
+     feclearexcept (FE_ALL_EXCEPT);		<<-- float & complex only
+     goto loop;
+ done:
+   feupdateenv (&fenv);				<<-- float & complex only
+ 
+ 
+  Also note that the compiler simply issues the generic form of the atomic
+  operations.  This requires temp(s) whose addresses are taken.  The atomic
+  processing is smart enough to figure out when the size of an object can
+  utilize a lock-free version, and converts the built-in call to the
+  appropriate lock-free routine.  The optimizers will then dispose of any
+  temps that are no longer required, and lock-free implementations are
+  utilized for integer, float and complex as long as there is target
+  support for the required size. 
+ 
+  If the operator is NOP_EXPR, then this is a simple assignment, and an
+  __atomic_store is issued to perform the assignment rather than the above loop.
+ 
+ */
+ 
+ /* Build an atomic assignment at LOC, expanding into the proper sequence to
+    store LHS MODIFYCODE= RHS.  Return a value representing the result of 
+    the operation.  */
+ tree
+ build_atomic_assign (location_t loc, tree lhs, enum tree_code modifycode,
+ 		     tree rhs)
+ {
+   tree fndecl, func_call;
+   vec<tree, va_gc> *params;
+   tree val, nonatomic_type, newval, newval_addr;
+   tree old, old_addr;
+   tree stmt, goto_stmt;
+   tree loop_label, loop_decl, done_label, done_decl;
+ 
+   tree lhs_type = TREE_TYPE (lhs);
+   tree lhs_addr = build_unary_op (loc, ADDR_EXPR, lhs, 0);
+   tree seq_cst = build_int_cst (integer_type_node, MEMMODEL_SEQ_CST);
+ 
+   gcc_assert (TYPE_ATOMIC (lhs_type));
+ 
+   /* Allocate enough vector elements for a compare_exchange.  */
+   vec_alloc (params, 6);
+ 
+   /* Remove the qualifiers for the rest of the expressions and create
+      the VAL temp variable to hold the RHS.  */
+   nonatomic_type = build_qualified_type (lhs_type, TYPE_UNQUALIFIED);
+   val = create_tmp_var (nonatomic_type, NULL);
+   TREE_ADDRESSABLE (val) = 1;
+   rhs = build2 (MODIFY_EXPR, nonatomic_type, val, rhs);
+   SET_EXPR_LOCATION (rhs, loc);
+   add_stmt (rhs);
+ 
+   /* NOP_EXPR indicates it's a straight store of the RHS.  Simply issue
+      an atomic_store.  */
+   if (modifycode == NOP_EXPR)
+     {
+       /* Build __atomic_store (&lhs, &val, SEQ_CST)  */
+       rhs = build_unary_op (loc, ADDR_EXPR, val, 0);
+       fndecl = builtin_decl_explicit (BUILT_IN_ATOMIC_STORE);
+       params->quick_push (lhs_addr);
+       params->quick_push (rhs);
+       params->quick_push (seq_cst);
+       func_call = build_function_call_vec (loc, fndecl, params, NULL);
+       add_stmt (func_call);
+ 
+       /* Val is the value which was stored, return it for any further value
+ 	 propagation.  */
+       return val;
+     }
+ 
+   /* Create the variables and labels required for the op= form.  */
+   old = create_tmp_var (nonatomic_type, NULL);
+   old_addr = build_unary_op (loc, ADDR_EXPR, old, 0);
+   TREE_ADDRESSABLE (old) = 1;
+ 
+   newval = create_tmp_var (nonatomic_type, NULL);
+   newval_addr = build_unary_op (loc, ADDR_EXPR, newval, 0);
+   TREE_ADDRESSABLE (newval) = 1;
+ 
+   loop_decl = create_artificial_label (loc);
+   loop_label = build1 (LABEL_EXPR, void_type_node, loop_decl);
+ 
+   done_decl = create_artificial_label (loc);
+   done_label = build1 (LABEL_EXPR, void_type_node, done_decl);
+ 
+   /* __atomic_load (addr, &old, SEQ_CST).  */
+   fndecl = builtin_decl_explicit (BUILT_IN_ATOMIC_LOAD);
+   params->quick_push (lhs_addr);
+   params->quick_push (old_addr);
+   params->quick_push (seq_cst);
+   func_call = build_function_call_vec (loc, fndecl, params, NULL);
+   add_stmt (func_call);
+   params->truncate (0);
+ 
+   /* TODO if (!integral)  issue feholdexcept (&fenv); */
+ 
+   /* loop:  */
+   add_stmt (loop_label);
+ 
+   /* newval = old op val;  */
+   rhs = build_binary_op (loc, modifycode, old, val, 1);
+   rhs = build2 (MODIFY_EXPR, nonatomic_type, newval, rhs);
+   SET_EXPR_LOCATION (rhs, loc);
+   add_stmt (rhs);
+ 
+   /* if (__atomic_compare_exchange (addr, &old, &new, false, SEQ_CST, SEQ_CST))
+        goto done;  */
+   fndecl = builtin_decl_explicit (BUILT_IN_ATOMIC_COMPARE_EXCHANGE);
+   params->quick_push (lhs_addr);
+   params->quick_push (old_addr);
+   params->quick_push (newval_addr);
+   params->quick_push (integer_zero_node);
+   params->quick_push (seq_cst);
+   params->quick_push (seq_cst);
+   func_call = build_function_call_vec (loc, fndecl, params, NULL);
+ 
+   goto_stmt = build1 (GOTO_EXPR, void_type_node, done_decl);
+   SET_EXPR_LOCATION (goto_stmt, loc);
+ 
+   stmt = build3 (COND_EXPR, void_type_node, func_call, goto_stmt, NULL_TREE);
+   SET_EXPR_LOCATION (stmt, loc);
+   add_stmt (stmt);
+   
+   /* TODO if (!integral) issue feclearexcept (FE_ALL_EXCEPT);  */
+ 
+   /* goto loop;  */
+   goto_stmt  = build1 (GOTO_EXPR, void_type_node, loop_decl);
+   SET_EXPR_LOCATION (goto_stmt, loc);
+   add_stmt (goto_stmt);
+  
+   /* done:  */
+   add_stmt (done_label);
+ 
+   /* TODO if (!integral) issue feupdateenv (&fenv);  */
+ 
+   /* Newval is the value that was successfully stored, return that.  */
+   return newval;
+ }
+ 
+ 
+ /* This simply performs an atomic load from EXPR and returns the temp it was
+    loaded into.  */
+ 
+ tree
+ build_atomic_load (location_t loc, tree expr)
+ {
+   vec<tree, va_gc> *params;
+   tree nonatomic_type, tmp, tmp_addr, fndecl, func_call;
+   tree expr_type = TREE_TYPE (expr);
+   tree expr_addr = build_unary_op (loc, ADDR_EXPR, expr, 0);
+   tree seq_cst = build_int_cst (integer_type_node, MEMMODEL_SEQ_CST);
+ 
+   gcc_assert (TYPE_ATOMIC (expr_type));
+ 
+   /* Expansion of a generic atomic load may require an additional element,
+      so allocate enough to prevent a resize.  */
+   vec_alloc (params, 4);
+ 
+   /* Remove the qualifiers for the rest of the expressions and create
+      the TMP variable to hold the loaded value.  */
+   nonatomic_type = build_qualified_type (expr_type, TYPE_UNQUALIFIED);
+   tmp = create_tmp_var (nonatomic_type, NULL);
+   tmp_addr = build_unary_op (loc, ADDR_EXPR, tmp, 0);
+   TREE_ADDRESSABLE (tmp) = 1;
+ 
+   /* Issue __atomic_load (&expr, &tmp, SEQ_CST);  */
+   fndecl = builtin_decl_explicit (BUILT_IN_ATOMIC_LOAD);
+   params->quick_push (expr_addr);
+   params->quick_push (tmp_addr);
+   params->quick_push (seq_cst);
+   func_call = build_function_call_vec (loc, fndecl, params, NULL);
+   add_stmt (func_call);
+ 
+   /* Return tmp, which contains the value loaded.  */
+   return tmp;
+ }
Index: gcc/c-family/c-common.h
===================================================================
*** gcc/c-family/c-common.h	(revision 201248)
--- gcc/c-family/c-common.h	(working copy)
*************** extern int field_decl_cmp (const void *,
*** 547,552 ****
--- 547,554 ----
  extern void resort_sorted_fields (void *, void *, gt_pointer_operator,
  				  void *);
  extern bool has_c_linkage (const_tree decl);
+ extern tree build_atomic_load (location_t, tree);
+ 
  
  /* Switches common to the C front ends.  */
  
	* cp/parser.c (cp_parser_simple_type_specifier): Change TYPEOF for
	atomic types to return the non-atomic type.

Index: gcc/cp/parser.c
===================================================================
*** gcc/cp/parser.c	(revision 201248)
--- gcc/cp/parser.c	(working copy)
*************** cp_parser_simple_type_specifier (cp_pars
*** 14262,14267 ****
--- 14262,14269 ----
        /* If it is not already a TYPE, take its type.  */
        if (!TYPE_P (type))
  	type = finish_typeof (type);
+       if (TYPE_ATOMIC (type))
+         type = TYPE_MAIN_VARIANT (type);
  
        if (decl_specs)
  	cp_parser_set_decl_spec_type (decl_specs, type,

typedef enum memory_order
{
  memory_order_relaxed,
  memory_order_consume,
  memory_order_acquire,
  memory_order_release,
  memory_order_acq_rel,
  memory_order_seq_cst
} memory_order;


typedef _Atomic _Bool              atomic_bool;
typedef _Atomic char               atomic_char;
typedef _Atomic signed char        atomic_schar;
typedef _Atomic unsigned char      atomic_uchar;
typedef _Atomic short              atomic_short;
typedef _Atomic unsigned short     atomic_ushort;
typedef _Atomic int                atomic_int;
typedef _Atomic unsigned int       atomic_uint;
typedef _Atomic long               atomic_long;
typedef _Atomic unsigned long      atomic_ulong;
typedef _Atomic long long          atomic_llong;
typedef _Atomic unsigned long long atomic_ullong;
typedef _Atomic char16_t           atomic_char16_t;
typedef _Atomic char32_t           atomic_char32_t;
typedef _Atomic wchar_t            atomic_wchar_t;
typedef _Atomic int_least8_t       atomic_int_least8_t;
typedef _Atomic uint_least8_t      atomic_uint_least8_t;
typedef _Atomic int_least16_t      atomic_int_least16_t;
typedef _Atomic uint_least16_t     atomic_uint_least16_t;
typedef _Atomic int_least32_t      atomic_int_least32_t;
typedef _Atomic uint_least32_t     atomic_uint_least32_t;
typedef _Atomic int_least64_t      atomic_int_least64_t;
typedef _Atomic uint_least64_t     atomic_uint_least64_t;
typedef _Atomic int_fast8_t        atomic_int_fast8_t;
typedef _Atomic uint_fast8_t       atomic_uint_fast8_t;
typedef _Atomic int_fast16_t       atomic_int_fast16_t;
typedef _Atomic uint_fast16_t      atomic_uint_fast16_t;
typedef _Atomic int_fast32_t       atomic_int_fast32_t;
typedef _Atomic uint_fast32_t      atomic_uint_fast32_t;
typedef _Atomic int_fast64_t       atomic_int_fast64_t;
typedef _Atomic uint_fast64_t      atomic_uint_fast64_t;
typedef _Atomic intptr_t           atomic_intptr_t;
typedef _Atomic uintptr_t          atomic_uintptr_t;
typedef _Atomic size_t             atomic_size_t;
typedef _Atomic ptrdiff_t          atomic_ptrdiff_t;
typedef _Atomic intmax_t           atomic_intmax_t;
typedef _Atomic uintmax_t          atomic_uintmax_t;


#define ATOMIC_VAR_INIT(VALUE)  (VALUE)
#define atomic_init(PTR, VAL)   do { *(PTR) = (VAL); } while (0)

/* TODO actually kill the dependency.  */
#define kill_dependency(Y)      (Y)

#define atomic_thread_fence     __atomic_thread_fence
#define atomic_signal_fence     __atomic_signal_fence 
#define atomic_is_lock_free(OBJ) __atomic_is_lock_free (sizeof (*(OBJ)), NULL)

#define ATOMIC_BOOL_LOCK_FREE                   \
                        __atomic_is_lock_free (sizeof (atomic_bool), NULL)
#define ATOMIC_CHAR_LOCK_FREE                   \
                        __atomic_is_lock_free (sizeof (atomic_char), NULL)
#define ATOMIC_CHAR16_T_LOCK_FREE               \
                        __atomic_is_lock_free (sizeof (atomic_char16_t), NULL)
#define ATOMIC_CHAR32_T_LOCK_FREE               \
                        __atomic_is_lock_free (sizeof (atomic_char32_t), NULL)
#define ATOMIC_WCHAR_T_LOCK_FREE                \
                        __atomic_is_lock_free (sizeof (atomic_wchar_t), NULL)
#define ATOMIC_SHORT_LOCK_FREE                  \
                        __atomic_is_lock_free (sizeof (atomic_short), NULL)
#define ATOMIC_INT_LOCK_FREE                    \
                        __atomic_is_lock_free (sizeof (atomic_int), NULL)
#define ATOMIC_LONG_LOCK_FREE                   \
                        __atomic_is_lock_free (sizeof (atomic_long), NULL)
#define ATOMIC_LLONG_LOCK_FREE                  \
                        __atomic_is_lock_free (sizeof (atomic_llong), NULL)
#define ATOMIC_POINTER_LOCK_FREE                \
                        __atomic_is_lock_free (sizeof (_Atomic void *), NULL)


/* TODO: Note this requires the __typeof__ definition to drop the atomic
   qualifier, which means __typeof__ (atomic type) returns the underlying 
   non-atomic type.
   I think this makes sense, as most meaningful uses of __typeof__ on an
   atomic object would want the non-atomic version to be useful, as it is
   above.

   If we don't want to change that meaning, we'll need to implement a
   __typeof__ variant which does this. 
   
   Also note that the header file uses the generic form of __atomic builtins,
   which requires the address to be taken of the value parameter, and then
   we pass that value on.   This allows the macros to work for any type,
   and the compiler is smart enough to convert these to lock-free _N 
   variants if possible, and throw away the temps.  */

#define atomic_store_explicit(PTR, VAL, MO) ({          \
  __typeof__ (*(PTR)) __tmp  = (VAL);                   \
  __atomic_store ((PTR), &__tmp, (MO)); })

#define atomic_store(PTR, VAL)                          \
  atomic_store_explicit (PTR, VAL, __ATOMIC_SEQ_CST)


#define atomic_load_explicit(PTR, MO) ({                \
  __typeof__ (*(PTR)) __tmp;                            \
  __atomic_load ((PTR), &__tmp, (MO));                  \
  __tmp; })

#define atomic_load(PTR)  atomic_load_explicit (PTR, __ATOMIC_SEQ_CST)


#define atomic_exchange_explicit(PTR, VAL, MO) ({       \
  __typeof__ (*(PTR)) __tmp  = (VAL);                   \
  __atomic_exchange_n ((PTR), __tmp, (MO)); })

#define atomic_exchange(PTR, VAL)                       \
  atomic_exchange_explicit(PTR, VAL, __ATOMIC_SEQ_CST)


#define atomic_compare_exchange_strong_explicit(PTR, VAL, DES, SUC, FAIL) ({ \
  __typeof__ (*(PTR)) __tmp  = (DES);                   \
  __atomic_compare_exchange_n ((PTR), (VAL), &__tmp, 0, (SUC), (FAIL)); })

#define atomic_compare_exchange_strong(PTR, VAL, DES)                      \
  atomic_compare_exchange_strong_explicit(PTR, VAL, DES, __ATOMIC_SEQ_CST, \
                                          __ATOMIC_SEQ_CST)

#define atomic_compare_exchange_weak_explicit(PTR, VAL, DES, SUC, FAIL) ({  \
  __typeof__ (*(PTR)) __tmp  = (DES);                   \
  __atomic_compare_exchange_n ((PTR), (VAL), &__tmp, 1, (SUC), (FAIL)); })

#define atomic_compare_exchange_weak(PTR, VAL, DES)                      \
  atomic_compare_exchange_weak_explicit(PTR, VAL, DES, __ATOMIC_SEQ_CST, \
                                        __ATOMIC_SEQ_CST)



#define atomic_fetch_add(PTR, VAL) __atomic_fetch_add ((PTR), (VAL),    \
                                                       __ATOMIC_SEQ_CST)
#define atomic_fetch_add_explicit(PTR, VAL, MO)                         \
                          __atomic_fetch_add ((PTR), (VAL), (MO))

#define atomic_fetch_sub(PTR, VAL) __atomic_fetch_sub ((PTR), (VAL),    \
                                                       __ATOMIC_SEQ_CST)
#define atomic_fetch_sub_explicit(PTR, VAL, MO)                         \
                          __atomic_fetch_sub ((PTR), (VAL), (MO))

#define atomic_fetch_or(PTR, VAL) __atomic_fetch_or ((PTR), (VAL),      \
                                                       __ATOMIC_SEQ_CST)
#define atomic_fetch_or_explicit(PTR, VAL, MO)                  \
                          __atomic_fetch_or ((PTR), (VAL), (MO))

#define atomic_fetch_xor(PTR, VAL) __atomic_fetch_xor ((PTR), (VAL),    \
                                                       __ATOMIC_SEQ_CST)
#define atomic_fetch_xor_explicit(PTR, VAL, MO)                         \
                          __atomic_fetch_xor ((PTR), (VAL), (MO))

#define atomic_fetch_and(PTR, VAL) __atomic_fetch_and ((PTR), (VAL),    \
                                                       __ATOMIC_SEQ_CST)
#define atomic_fetch_and_explicit(PTR, VAL, MO)                         \
                          __atomic_fetch_and ((PTR), (VAL), (MO))


#if __GCC_ATOMIC_TEST_AND_SET_TRUEVAL == 1
    typedef _Bool atomic_flag;
#else
    typedef unsigned char atomic_flag;
#endif

#define ATOMIC_FLAG_INIT        0


#define atomic_flag_test_and_set(PTR)                                   \
                        __atomic_test_and_set ((PTR), __ATOMIC_SEQ_CST)
#define atomic_flag_test_and_set_explicit(PTR, MO)                      \
                        __atomic_test_and_set ((PTR), (MO))

#define atomic_flag_clear(PTR)  __atomic_clear ((PTR), __ATOMIC_SEQ_CST)
#define atomic_flag_clear_explicit(PTR, MO)   __atomic_clear ((PTR), (MO))



