> -----Original Message-----
> From: linux-kernel-ow...@vger.kernel.org [mailto:linux-kernel-
> ow...@vger.kernel.org] On Behalf Of Sebastian Andrzej Siewior
> Sent: Thursday, March 7, 2019 22:34
> To: Liu, Yongxin
> Cc: linux-kernel@vger.kernel.org; linux-rt-us...@vger.kernel.org;
> t...@linutronix.de; rost...@goodmis.org; dan.j.willi...@intel.com;
> pagu...@redhat.com; Gortmaker, Paul; linux-nvd...@lists.01.org
> Subject: Re: [PATCH RT] nvdimm: make lane acquirement RT aware
> 
> On 2019-03-06 17:57:09 [+0800], Yongxin Liu wrote:
> > In this change, we replace get_cpu/put_cpu with local_lock_cpu/
> > local_unlock_cpu, and introduce per CPU variable "ndl_local_lock".
> > Due to preemption on RT, this lock can avoid race condition for the
> > same lane on the same CPU. When CPU number is greater than the lane
> > number, lane can be shared among CPUs. "ndl_lock->lock" is used to
> > protect the lane in this situation.
> 
> so what was the reason that get_cpu() can't be replaced with
> raw_smp_processor_id()?
> 
> Sebastian

The lane is a critical resource that needs to be protected; one CPU can use
only one lane at a time. If the CPU count is greater than the total number of
lanes, a lane can be shared among CPUs.

In a non-RT kernel, get_cpu() disables preemption by calling preempt_disable()
first, so only one thread on a given CPU can hold the lane.

In an RT kernel, using only raw_smp_processor_id() does not protect the lane:
two threads on the same CPU could get the same lane at the same time.

In this patch, a two-level lock avoids that race on the lane:

          CPU A                  CPU B (B == A % num_lanes)
 
    task A1    task A2     task B1    task B2
       |          |           |          |
       |__________|           |__________|
            |                      |
       ndl_local_lock           ndl_local_lock
            |                      |
            |______________________|
                       |
                       |
                  ndl_lock->lock
                       |
                       |
                      lane

 
Thanks,
Yongxin
