Hi Oleg,

On 02/26/2015 08:29 PM, Oleg Nesterov wrote:
>> @@ -341,7 +359,13 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
>>                          * Thus: if is now 0, then it will stay 0.
>>                          */
>>                         if (sma->complex_count == 0) {
>> -                               /* fast path successful! */
>> +                               /*
>> +                                * Fast path successful!
>> +                                * We only need a final memory barrier.
>> +                                * (see sem_wait_array() for details).
>> +                                */
>> +                               smp_rmb();
>> +
> I'll try to read this again tomorrow, but so far I am confused.
>
> Most probably I missed something, but this looks unneeded at first glance.
No, my fault:
I thought about sem_wait_array() for a long time, and then I did a copy&paste without thinking it through properly.

The sequence is:

thread A:
    spin_lock(&local)

thread B:
    complex_count=??;
    spin_unlock(&global); <<< release_mb

thread A:
    spin_unlock_wait(&global); <<< control_mb
    smp_mb__after_control_barrier(); <<< acquire_mb

    <<< now everything from thread B is visible.
    <<< and: thread B has dropped the lock, it can't change any protected var
    <<< and: a new thread C can't acquire a lock, we hold &local.

    if (complex_count == 0) goto success;
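
To make the ordering above concrete, here is a minimal user-space C11 model of the same pattern. This is my own sketch, not the ipc/sem.c code: the variable names and the mapping to the kernel primitives are illustrative only, and the simple spin loop plus acquire fence only models the intended semantics of spin_unlock_wait() followed by a barrier after the control dependency.

/*
 * User-space model of the sequence above (illustrative only):
 *
 *   global_locked            ~ the global lock held by thread B
 *   spin until it reads 0    ~ spin_unlock_wait(&global)  (control_mb)
 *   acquire fence            ~ the barrier after the control dependency
 *   release store            ~ spin_unlock(&global)       (release_mb)
 */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_int global_locked = 1;   /* thread B starts with "global" held */
static int complex_count = -1;         /* protected by "global" */

static void *thread_b(void *arg)
{
        (void)arg;
        complex_count = 0;                            /* complex_count = ??; */
        atomic_store_explicit(&global_locked, 0,
                              memory_order_release);  /* spin_unlock(&global) */
        return NULL;
}

static void *thread_a(void *arg)
{
        (void)arg;
        /* spin_lock(&local) would be here; omitted, nobody else contends. */

        while (atomic_load_explicit(&global_locked, memory_order_relaxed))
                ;                                     /* spin_unlock_wait(&global) */
        atomic_thread_fence(memory_order_acquire);    /* acquire_mb */

        /* Everything thread B wrote before its unlock is visible now. */
        if (complex_count == 0)
                printf("fast path: complex_count == 0\n");
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&b, NULL, thread_b, NULL);
        pthread_create(&a, NULL, thread_a, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
}

(Build with something like "cc -std=c11 -pthread". The point is only that the acquire fence after the spin is what guarantees thread A sees everything thread B wrote before its unlock; without it, the read of complex_count is not ordered against the spin and would be a data race in this model.)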

I'll update the patch.
(cc stable, starting from 3.10...)

--
    Manfred