On Fri, 22 Jan 2021 at 20:39, Rafael J. Wysocki <raf...@kernel.org> wrote:
>
> On Thu, Jan 21, 2021 at 8:01 PM Rafael J. Wysocki <raf...@kernel.org> wrote:
> >
> > On Thu, Jan 21, 2021 at 6:23 PM Nicolas Pitre <npi...@baylibre.com> wrote:
> > >
> > > The clock API splits its interface into sleepable and atomic contexts:
> > >
> > > - clk_prepare/clk_unprepare for stuff that might sleep
> > >
> > > - clk_enable/clk_disable for anything that may be done in atomic context
> > >
> > > The code handling runtime PM for clocks only calls clk_disable() on
> > > suspend requests, and clk_enable() on resume requests. This means that
> > > runtime PM with clock providers that only have the prepare/unprepare
> > > methods implemented is basically useless.
> > >
> > > Many clock implementations can't accommodate atomic contexts. This is
> > > often the case when communication with the clock happens through another
> > > subsystem like I2C or SCMI.
> > >
> > > Let's make the clock PM code useful with such clocks by safely invoking
> > > clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when
> > > such clocks are registered with the PM layer then pm_runtime_irq_safe()
> > > can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume()
> > > may be invoked in atomic context.
> > >
> > > For clocks that do implement the enable and disable methods, everything
> > > just works as before.
> > >
> > > Signed-off-by: Nicolas Pitre <npi...@baylibre.com>
> > >
> > > ---
> > >
> > > On Thu, 21 Jan 2021, Rafael J. Wysocki wrote:
> > >
> > > > So I'm going to drop this patch from linux-next until the issue is
> > > > resolved, thanks!
> > >
> > > Here's the fixed version.
> > Applied instead of the v1, thanks!
> > > Changes from v1:
> > >
> > > - Moved clk_is_enabled_when_prepared() declaration under
> > >   CONFIG_HAVE_CLK_PREPARE and provided a dummy definition when that
> > >   config option is unset.
> > >
> > > diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
> > > index ced6863a16..a62fb0f9b1 100644
> > > --- a/drivers/base/power/clock_ops.c
> > > +++ b/drivers/base/power/clock_ops.c
> > > @@ -23,6 +23,7 @@
> > >  enum pce_status {
> > >  	PCE_STATUS_NONE = 0,
> > >  	PCE_STATUS_ACQUIRED,
> > > +	PCE_STATUS_PREPARED,
> > >  	PCE_STATUS_ENABLED,
> > >  	PCE_STATUS_ERROR,
> > >  };
> > > @@ -32,8 +33,102 @@ struct pm_clock_entry {
> > >  	char *con_id;
> > >  	struct clk *clk;
> > >  	enum pce_status status;
> > > +	bool enabled_when_prepared;
> > >  };
> > >
> > > +/**
> > > + * pm_clk_list_lock - ensure exclusive access for modifying the PM clock
> > > + * entry list.
> > > + * @psd: pm_subsys_data instance corresponding to the PM clock entry list
> > > + * and clk_op_might_sleep count to be modified.
> > > + *
> > > + * Get exclusive access before modifying the PM clock entry list and the
> > > + * clock_op_might_sleep count to guard against concurrent modifications.
> > > + * This also protects against a concurrent clock_op_might_sleep and PM clock
> > > + * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not
> > > + * happen in atomic context, hence both the mutex and the spinlock must be
> > > + * taken here.
> > > + */
> > > +static void pm_clk_list_lock(struct pm_subsys_data *psd)
> > > +{
> > > +	mutex_lock(&psd->clock_mutex);
> > > +	spin_lock_irq(&psd->lock);
> > > +}
> > > +
> > > +/**
> > > + * pm_clk_list_unlock - counterpart to pm_clk_list_lock().
> > > + * @psd: the same pm_subsys_data instance previously passed to
> > > + * pm_clk_list_lock().
> > > + */
> > > +static void pm_clk_list_unlock(struct pm_subsys_data *psd)
>
> Locking annotations for sparse were missing here and above, so I've
> added them by hand.
>
> Please double check the result in my linux-next branch (just pushed).
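The annotations referred to above are not quoted here, so the snippet below is
only a sketch of the usual shape, marking the spinlock side with sparse's
__acquires()/__releases() markers; the pm_clk_list_unlock() body is likewise
not quoted and is assumed to drop the locks in reverse order. The authoritative
version is the one in the linux-next branch.

static void pm_clk_list_lock(struct pm_subsys_data *psd)
	__acquires(&psd->lock)
{
	mutex_lock(&psd->clock_mutex);
	spin_lock_irq(&psd->lock);
}

static void pm_clk_list_unlock(struct pm_subsys_data *psd)
	__releases(&psd->lock)
{
	/* Release in the reverse order of pm_clk_list_lock(). */
	spin_unlock_irq(&psd->lock);
	mutex_unlock(&psd->clock_mutex);
}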
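For background on the split that the commit message describes, here is a
minimal, purely illustrative consumer-side sketch; the function and variable
names are invented and are not part of the patch:

#include <linux/clk.h>
#include <linux/spinlock.h>

/*
 * Hypothetical consumer: clk_prepare()/clk_unprepare() may sleep (e.g. the
 * clock is controlled over I2C or SCMI), so they must run in process
 * context; only clk_enable()/clk_disable() may be called under a spinlock.
 */
static int example_clk_start(struct clk *clk, spinlock_t *lock)
{
	int ret;

	ret = clk_prepare(clk);		/* sleepable context */
	if (ret)
		return ret;

	spin_lock_irq(lock);		/* atomic from here on */
	ret = clk_enable(clk);
	spin_unlock_irq(lock);

	if (ret)
		clk_unprepare(clk);	/* back in sleepable context */

	return ret;
}

clk_prepare_enable() and clk_disable_unprepare() combine the two halves for
callers that are always in sleepable context. A runtime-PM callback running in
atomic context can only do the enable/disable half, which gates nothing when
the provider implements only prepare/unprepare; hence the change above.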
May I request that you add:

Reported-by: Naresh Kamboju <naresh.kamb...@linaro.org>

- Naresh