From: Arvind Sankar
> Sent: 14 October 2020 22:27
...
> +/*
> + * This version is i.e. to prevent dead stores elimination on @ptr
> + * where gcc and llvm may behave differently when otherwise using
> + * normal barrier(): while gcc behavior gets along with a normal
> + * barrier(), llvm needs an explicit input variable to be assumed
> + * clobbered. The issue is as follows: while the inline asm might
> + * access any memory it wants, the compiler could have fit all of
> + * @ptr into memory registers instead, and since @ptr never escaped
> + * from that, it proved that the inline asm wasn't touching any of
> + * it. This version works well with both compilers, i.e. we're telling
> + * the compiler that the inline asm absolutely may see the contents
> + * of @ptr. See also: https://llvm.org/bugs/show_bug.cgi?id=15495
> + */
> +# define barrier_data(ptr) __asm__ __volatile__("": :"r"(ptr) :"memory")
That comment doesn't actually match the asm statement, although the asm statement probably has the desired effect.

The "r"(ptr) constraint only passes the address of the buffer into the asm - it doesn't say anything at all about the associated memory. What "r"(ptr) actually does is force the address of the associated data to be taken, which means that on-stack space must actually be allocated. The "memory" clobber will then force the registers caching the variable to be written out to that stack space.

If you only want to force stores on a single data structure, you actually want:

	#define barrier_data(ptr) asm volatile("" :: "m"(*ptr))

although it would be best then to add an explicit size and an associated cast.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)