On Fri, Oct 04, 2013 at 02:13:00PM +0200, Oleg Nesterov wrote:
> On 10/04, Peter Zijlstra wrote:
> >
> > On Fri, Oct 04, 2013 at 01:15:13PM +0200, Oleg Nesterov wrote:
> > > > What's "exclusive" mean? One writer at a time?
> > >
> > > Yes,
> >
> > I'm not entirely sure what the advantage is of having that logic in this
> > primitive. Shouldn't that be something the user of this rcu_sync stuff
> > does (or not) depending on its needs?
> 
> Yes, the user can do the locking itself. But I think this option can help.
> If nothing else it can help to avoid another mutex/whatever and unnecessary
> wakeups/schedules, even if this is minor.
> 
> And rcu_sync_enter() should return "bool", namely "need_sync". IOW,
> rcu_sync_enter() == T means that this thread has done the FAST -> SLOW
> transition; this is particularly useful in "exclusive" mode.
> 
> Consider percpu_down_write(). It takes rw_sem for writing (and this blocks
> the readers) before clear_fast_ctr(), but we only need to do this after
> sync_sched(), so it could do
> 
>       if (rcu_sync_enter(&brw->rcu_sync))
>               atomic_add(clear_fast_ctr(brw), &brw->slow_read_ctr);
>       else
>               ; /* the above was already done */
> 
>       /* exclude readers */
>       down_write(&brw->rw_sem);
> 
> and now ->rw_sem is only needed to serialize readers/writer.
> 
> Sure, this is all minor (and we will probably copy the "pending writer"
> logic from cpu_hotplug_begin/get_online_cpus).
> 
> But we can get this feature almost for free, so I think it makes sense.
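
For reference, a rough sketch of the bool-returning rcu_sync_enter() described
in the quote; the struct layout, state and field names (GP_IDLE, GP_PASSED,
gp_count, gp_wait, rss_lock) are assumed from the rcu_sync patches under
discussion rather than quoted verbatim:

	/* Sketch only: the return value tells the caller whether *it* did FAST -> SLOW. */
	bool rcu_sync_enter(struct rcu_sync_struct *rss)
	{
		bool need_wait, need_sync;

		spin_lock_irq(&rss->rss_lock);
		need_wait = rss->gp_count++;		/* a GP is already in flight */
		need_sync = rss->gp_state == GP_IDLE;	/* we are the first writer */
		if (need_sync)
			rss->gp_state = GP_PENDING;
		spin_unlock_irq(&rss->rss_lock);

		if (need_sync) {
			rss->sync();			/* e.g. synchronize_sched() */
			rss->gp_state = GP_PASSED;
			wake_up_all(&rss->gp_wait);
		} else if (need_wait) {
			wait_event(rss->gp_wait, rss->gp_state == GP_PASSED);
		}

		return need_sync;	/* T: this thread did the FAST -> SLOW transition */
	}

With that return value, percpu_down_write() could gate the clear_fast_ctr()
step as in the snippet quoted above.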

Well, the whole reason I asked is that adding that completion in there didn't
smell at all like "free" to me; not to mention that I hadn't realized you were
using it as a semaphore.
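
Concretely, the pattern in question looks something like the following; the
->exclusive flag and ->cycle completion are illustrative names, not the posted
patch. The completion is complete()d once at init time and again by every
rcu_sync_exit(), so it behaves as a binary semaphore admitting one writer at
a time:

	/* Illustrative only: a completion doing duty as a writer-serializing semaphore. */
	void rcu_sync_enter(struct rcu_sync_struct *rss)
	{
		if (rss->exclusive)
			wait_for_completion(&rss->cycle);	/* one writer at a time */
		/* ... FAST -> SLOW transition, sync(), ... */
	}

	void rcu_sync_exit(struct rcu_sync_struct *rss)
	{
		/* ... possibly begin the SLOW -> FAST transition ... */
		if (rss->exclusive)
			complete(&rss->cycle);			/* admit the next writer */
	}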

Also, what would be the use once you convert the per-cpu rwsem over to
the scheme I used with hotplug?
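
For context, the cpu_hotplug_begin()/get_online_cpus() "pending writer" logic
referenced in the quote, condensed from memory of kernel/cpu.c around that
time (simplified, error handling trimmed):

	void get_online_cpus(void)
	{
		might_sleep();
		if (cpu_hotplug.active_writer == current)
			return;				/* the writer may recurse as a reader */
		mutex_lock(&cpu_hotplug.lock);		/* blocks while the writer holds it */
		cpu_hotplug.refcount++;
		mutex_unlock(&cpu_hotplug.lock);
	}

	static void cpu_hotplug_begin(void)
	{
		cpu_hotplug.active_writer = current;	/* announce the pending writer */

		for (;;) {
			mutex_lock(&cpu_hotplug.lock);
			if (likely(!cpu_hotplug.refcount))
				break;			/* no readers left; keep the lock */
			__set_current_state(TASK_UNINTERRUPTIBLE);
			mutex_unlock(&cpu_hotplug.lock);
			schedule();			/* put_online_cpus() will wake us */
		}
	}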

I'm really starting to think we shouldn't do this in rcu_sync at all.