On Wed, Oct 24, 2012 at 04:22:17PM -0400, Mikulas Patocka wrote:
> 
> 
> On Wed, 24 Oct 2012, Paul E. McKenney wrote:
> 
> > On Tue, Oct 23, 2012 at 05:39:43PM -0400, Mikulas Patocka wrote:
> > > 
> > > 
> > > On Tue, 23 Oct 2012, Paul E. McKenney wrote:
> > > 
> > > > On Tue, Oct 23, 2012 at 01:29:02PM -0700, Paul E. McKenney wrote:
> > > > > On Tue, Oct 23, 2012 at 08:41:23PM +0200, Oleg Nesterov wrote:
> > > > > > On 10/23, Paul E. McKenney wrote:
> > > > > > >
> > > > > > >  * Note that this guarantee implies a further memory-ordering guarantee.
> > > > > > >  * On systems with more than one CPU, when synchronize_sched() returns,
> > > > > > >  * each CPU is guaranteed to have executed a full memory barrier since
> > > > > > >  * the end of its last RCU read-side critical section
> > > > > >          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > > > > > 
> > > > > > Ah wait... I misread this comment.
> > > > > 
> > > > > And I miswrote it.  It should say "since the end of its last RCU-sched
> > > > > read-side critical section."  So, for example, RCU-sched need not force
> > > > > a CPU that is idle, offline, or (eventually) executing in user mode to
> > > > > execute a memory barrier.  Fixed this.
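
To illustrate the corrected wording (a hypothetical two-CPU sketch, not code
from any patch; shared_data, snapshot, reader() and updater() are made-up
names):

	int shared_data;	/* written inside the reader's critical section */
	int snapshot;		/* read by the updater after the grace period */

	/* CPU 0: an RCU-sched read-side critical section. */
	static void reader(void)
	{
		rcu_read_lock_sched();
		shared_data = 1;
		rcu_read_unlock_sched();
	}

	/* CPU 1: the updater. */
	static void updater(void)
	{
		synchronize_sched();
		/*
		 * If reader()'s critical section began before the call to
		 * synchronize_sched(), it has ended by now and CPU 0 has
		 * executed a full memory barrier since its end, so the
		 * store to shared_data is guaranteed to be visible here.
		 */
		snapshot = shared_data;
	}
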
> > > 
> > > Or you can write "each CPU that is executing kernel code is guaranteed
> > > to have executed a full memory barrier".
> > 
> > Perhaps I could, but it isn't needed, nor is it particularly helpful.
> > Please see suggestions in preceding email.
> 
> It is helpful, because if you add this requirement (which already holds for
> the current implementation), you can drop rcu_read_lock_sched() and
> rcu_read_unlock_sched() from the following code that you submitted.
> 
> static inline void percpu_up_read(struct percpu_rw_semaphore *p)
> {
>         /*
>          * Decrement our count, but protected by RCU-sched so that
>          * the writer can force proper serialization.
>          */
>         rcu_read_lock_sched();
>         this_cpu_dec(*p->counters);
>         rcu_read_unlock_sched();
> }
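
For context, the reason those rcu_read_lock_sched()/rcu_read_unlock_sched()
calls are there is that the writer orders itself against the readers with
synchronize_sched().  The sketch below is an illustrative reconstruction of
that writer-side pattern, not the code from the patch; the mtx and locked
fields and the __percpu_count() helper are assumed names:

	static inline void percpu_down_write(struct percpu_rw_semaphore *p)
	{
		mutex_lock(&p->mtx);	/* assumed writer-exclusion mutex */
		p->locked = true;	/* assumed flag diverting new readers */
		/*
		 * After this grace period every reader has either seen
		 * ->locked and taken the slow path, or is counted in
		 * ->counters.
		 */
		synchronize_sched();
		while (__percpu_count(p->counters))	/* assumed per-CPU sum helper */
			msleep(1);
		/*
		 * The writer must still ensure that the readers' decrements,
		 * and the accesses preceding them, are ordered before it
		 * proceeds; that is the ordering the RCU-sched critical
		 * section in percpu_up_read() above (or, under the proposal
		 * below, the strengthened synchronize_sched() guarantee by
		 * itself) is meant to provide.
		 */
	}
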
> 
> > > The current implementation fulfills this requirement; you can just add it
> > > to the specification so that whoever changes the implementation keeps it.
> > 
> > I will consider doing that if and when someone shows me a situation where
> > adding that requirement makes things simpler and/or faster.  From what I
> > can see, your example does not do so.
> > 
> >                                                     Thanx, Paul
> 
> If you do, the above code can be simplified to:
> {
>       barrier();
>       this_cpu_dec(*p->counters);
> }
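
Restating the reasoning behind that simplification (the comments below are
illustrative, assuming the strengthened guarantee above were adopted; this is
not a version anyone in the thread posted):

	static inline void percpu_up_read(struct percpu_rw_semaphore *p)
	{
		/*
		 * barrier() only keeps the compiler from moving the
		 * critical-section accesses past the decrement.  The claim
		 * is that, with the strengthened guarantee, the writer's
		 * synchronize_sched() already forces a full memory barrier
		 * on this CPU (it is executing kernel code), ordering the
		 * decrement and the accesses before it against the writer's
		 * later reads of the counters, so no explicit read-side
		 * critical section is needed here.
		 */
		barrier();
		this_cpu_dec(*p->counters);
	}
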

The readers are lightweight enough that you are worried about the overhead
of rcu_read_lock_sched() and rcu_read_unlock_sched()?  Really???

                                                        Thanx, Paul
