* Waiman Long <waiman.l...@hp.com> wrote:

> On 04/17/2014 11:58 AM, Peter Zijlstra wrote:
> >On Thu, Apr 17, 2014 at 11:03:57AM -0400, Waiman Long wrote:
> >>+static __always_inline void
> >>+clear_pending_set_locked(struct qspinlock *lock, u32 val)
> >>+{
> >>+   struct __qspinlock *l = (void *)lock;
> >>+
> >>+   ACCESS_ONCE(l->locked_pending) = 1;
> >>+}
> >>@@ -157,8 +251,13 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
> >>     * we're pending, wait for the owner to go away.
> >>     *
> >>     * *,1,1 ->  *,1,0
> >>+    *
> >>+    * this wait loop must be a load-acquire such that we match the
> >>+    * store-release that clears the locked bit and create lock
> >>+    * sequentiality; this is because not all try_clear_pending_set_locked()
> >>+    * implementations imply full barriers.
> >You renamed the function referred to in the above comment.
> >
> 
> Sorry, will fix the comments.

I suggest reverting the rename instead: 
try_clear_pending_set_locked() conveys the intent more clearly.
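
For illustration, here is a minimal standalone sketch of the
acquire/release pairing the quoted comment describes, using C11 atomics
in place of the kernel's smp_load_acquire()/smp_store_release(); the
toy_* names and the single-byte lock layout are made up for this sketch
and are not the qspinlock code:

	#include <stdatomic.h>

	struct toy_lock {
		_Atomic unsigned char locked;	/* 1 = held, 0 = free */
	};

	/*
	 * Waiter side: spin until the owner goes away. The acquire load
	 * pairs with the release store in toy_unlock(), so everything the
	 * previous owner wrote before unlocking is visible afterwards.
	 */
	static inline void toy_wait_for_owner(struct toy_lock *lock)
	{
		while (atomic_load_explicit(&lock->locked, memory_order_acquire))
			;	/* a real implementation would cpu_relax() here */
	}

	/*
	 * Owner side: clearing the locked byte with release semantics
	 * publishes the critical section to the next acquirer without
	 * requiring a full barrier.
	 */
	static inline void toy_unlock(struct toy_lock *lock)
	{
		atomic_store_explicit(&lock->locked, 0, memory_order_release);
	}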

Thanks,

        Ingo