Hi, OK, this was going to be a quick patch, but after sleeping on it I think it deserves a better analysis... I can prove the comment is incorrect with a test program, but I'm less sure about the reasoning that leads me to also call it misleading.
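For what it's worth, the kind of litmus test I mean looks roughly like the sketch below. To be clear, this is not my actual test program (that one I can send out on request), just an illustration of the store->load pattern; the pthread harness and variable names are made up for the sketch. Each thread stores to its own flag, issues sfence then lfence, and then loads the other thread's flag. If sfence/lfence together gave store/load ordering, both threads could never read 0 in the same run; in practice that outcome can show up, though how readily depends on the CPU. Compile with gcc -O2 -lpthread on x86.

#include <pthread.h>
#include <stdio.h>

static volatile int X, Y;
static volatile int r0, r1;

/* Stand-ins for smp_wmb()/smp_rmb() on x86 (gcc inline asm). */
#define wmb()	asm volatile("sfence" ::: "memory")
#define rmb()	asm volatile("lfence" ::: "memory")

static void *cpu0(void *arg)
{
	X = 1;		/* store 1 -> X */
	wmb();
	rmb();
	r0 = Y;		/* load <- Y */
	return NULL;
}

static void *cpu1(void *arg)
{
	Y = 1;		/* mirror image on the other CPU */
	wmb();
	rmb();
	r1 = X;
	return NULL;
}

int main(void)
{
	int i;

	for (i = 0; i < 1000000; i++) {
		pthread_t a, b;

		X = Y = 0;
		pthread_create(&a, NULL, cpu0, NULL);
		pthread_create(&b, NULL, cpu1, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);

		/* With a full mfence in place of the wmb();rmb() pair,
		 * both loads returning 0 would be forbidden. */
		if (r0 == 0 && r1 == 0)
			printf("store/load reordering seen, iteration %d\n", i);
	}
	return 0;
}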
The comment being removed by this patch is incorrect and misleading (I think). Consider this sequence:

1. load ...
2. store 1 -> X
3. wmb
4. rmb
5. load a <- Y
6. store ...

4 will only ensure ordering of 1 with 5. 3 will only ensure ordering of 2 with 6. Further, a CPU with strictly in-order stores will still only provide that 2 and 6 are ordered (effectively, it is the same as a weakly ordered CPU with wmb after every store). In all cases, 5 may still be executed before 2 is visible to other CPUs!

The additional piece of the puzzle that mb() provides is the store/load ordering, which fundamentally cannot be achieved with any combination of rmb()s and wmb()s.

This can be an unexpected result if one expected any sort of global ordering guarantee from barriers (e.g. that the barriers themselves are sequentially consistent with other types of barriers). However, sfence or lfence barriers need only provide a partial ordering of memory operations -- consider that wmb may be implemented as nothing more than inserting a special barrier entry in the store queue, or, in the case of x86, it can be a noop as the store queue is in order. And an rmb may be implemented as a directive to prevent subsequent loads only so long as there are no previous outstanding loads (while there could still be stores sitting in store queues).

I can actually see the occasional load/store being reordered around lfence on my Core 2. That doesn't prove my above assertions, but it does show the comment is wrong (unless my program is -- can send it out by request).

So: mb() and smp_mb() always have required and always will require a full mfence or lock prefixed instruction on x86. And we should remove this comment.

[ This is true for x86's sfence/lfence, but it raises a question about Linux's memory barriers. Does anybody expect that a sequence of smp_wmb and smp_rmb would ever provide a full smp_mb barrier? I've always assumed no, but I don't know if it is actually documented anywhere. ]

Signed-off-by: Nick Piggin <[EMAIL PROTECTED]>
---
Index: linux-2.6/include/asm-i386/system.h
===================================================================
--- linux-2.6.orig/include/asm-i386/system.h
+++ linux-2.6/include/asm-i386/system.h
@@ -214,11 +214,6 @@ static inline unsigned long get_limit(un
  */
 
 
-/*
- * Actually only lfence would be needed for mb() because all stores done
- * by the kernel should be already ordered. But keep a full barrier for now.
- */
-
 #define mb() alternative("lock; addl $0,0(%%esp)", "mfence", X86_FEATURE_XMM2)
 #define rmb() alternative("lock; addl $0,0(%%esp)", "lfence", X86_FEATURE_XMM2)