On 10/26, Can Guo wrote:
> On 2020-10-24 23:06, Jaegeuk Kim wrote:
> > From: Jaegeuk Kim <jaeg...@google.com>
> > 
> > When running a stress test that enables/disables clkgating, we sometimes
> > hit a device timeout. This patch avoids a subtle race condition to
> > address it.
> > 
> > If we use __ufshcd_release(), I've seen gate_work run in parallel with
> > ungate_work, which results in a UFS timeout when doing hibern8. We
> > should avoid that.
> > 
> 
> I don't understand this comment. gate_work and ungate_work are queued on
> an ordered workqueue and an ordered workqueue executes at most one work item
> at any given time in the queued order. How can the two run in parallel?

When UFS got stuck, I saw this sequence from the clkgating tracepoint:

- REQ_CLKS_OFF
- CLKS_OFF
- REQ_CLKS_OFF
- REQ_CLKS_ON
..

With active_reqs, I don't see any problem.
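
(For context, an abbreviated sketch of __ufshcd_release(), based on the mainline
drivers/scsi/ufs/ufshcd.c of roughly this period; the body is trimmed and should be
read as an approximation, not the verbatim driver code. The point is that this path
can set REQ_CLKS_OFF and queue gate_work on its own, whereas the patched sysfs store
below only adjusts active_reqs under host_lock and leaves gating to the normal
release path.)

static void __ufshcd_release(struct ufs_hba *hba)
{
	if (!ufshcd_is_clkgating_allowed(hba))
		return;

	hba->clk_gating.active_reqs--;

	/* Several "keep clocks on" conditions elided here. */
	if (hba->clk_gating.active_reqs || hba->clk_gating.is_suspended ||
	    hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL)
		return;

	/* Starts another REQ_CLKS_OFF -> gate_work transition. */
	hba->clk_gating.state = REQ_CLKS_OFF;
	trace_ufshcd_clk_gating(dev_name(hba->dev), hba->clk_gating.state);
	queue_delayed_work(hba->clk_gating.clk_gating_workq,
			   &hba->clk_gating.gate_work,
			   msecs_to_jiffies(hba->clk_gating.delay_ms));
}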

> 
> Thanks,
> 
> Can Guo.
> 
> > Signed-off-by: Jaegeuk Kim <jaeg...@google.com>
> > ---
> >  drivers/scsi/ufs/ufshcd.c | 12 ++++++------
> >  1 file changed, 6 insertions(+), 6 deletions(-)
> > 
> > diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
> > index b8f573a02713..e0b479f9eb8a 100644
> > --- a/drivers/scsi/ufs/ufshcd.c
> > +++ b/drivers/scsi/ufs/ufshcd.c
> > @@ -1807,19 +1807,19 @@ static ssize_t ufshcd_clkgate_enable_store(struct device *dev,
> >             return -EINVAL;
> > 
> >     value = !!value;
> > +
> > +   spin_lock_irqsave(hba->host->host_lock, flags);
> >     if (value == hba->clk_gating.is_enabled)
> >             goto out;
> > 
> > -   if (value) {
> > -           ufshcd_release(hba);
> > -   } else {
> > -           spin_lock_irqsave(hba->host->host_lock, flags);
> > +   if (value)
> > +           hba->clk_gating.active_reqs--;
> > +   else
> >             hba->clk_gating.active_reqs++;
> > -           spin_unlock_irqrestore(hba->host->host_lock, flags);
> > -   }
> > 
> >     hba->clk_gating.is_enabled = value;
> >  out:
> > +   spin_unlock_irqrestore(hba->host->host_lock, flags);
> >     return count;
> >  }
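
For readability, this is how ufshcd_clkgate_enable_store() would read with the hunk
applied. The prologue (dev_get_drvdata()/kstrtou32() and the local declarations) is
assumed from the surrounding mainline code and is not part of the diff:

static ssize_t ufshcd_clkgate_enable_store(struct device *dev,
		struct device_attribute *attr, const char *buf, size_t count)
{
	struct ufs_hba *hba = dev_get_drvdata(dev);
	unsigned long flags;
	u32 value;

	if (kstrtou32(buf, 0, &value))
		return -EINVAL;

	value = !!value;

	/* host_lock makes the is_enabled check and the counter update atomic. */
	spin_lock_irqsave(hba->host->host_lock, flags);
	if (value == hba->clk_gating.is_enabled)
		goto out;

	if (value)
		hba->clk_gating.active_reqs--;	/* re-enable gating */
	else
		hba->clk_gating.active_reqs++;	/* hold the clocks on */

	hba->clk_gating.is_enabled = value;
out:
	spin_unlock_irqrestore(hba->host->host_lock, flags);
	return count;
}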
