Re: [PATCH V8 1/6] PM / Domains: Add support to select performance-state of domains

2017-08-02 Thread Viresh Kumar
On 31-07-17, 09:44, Viresh Kumar wrote:
> On 29-07-17, 10:24, Ulf Hansson wrote:
> > Let's invent a new genpd flag, GENPD_FLAG_PERF_STATE!
> > 
> > The creator of the genpd then needs to set this before calling
> > pm_genpd_init(). Similar to how we are dealing with GENPD_FLAG_PM_CLK.
> > 
> > The requirement for GENPD_FLAG_PERF_STATES is to have the
> > ->get_performance_state() assigned. This shall be verified during
> > pm_genpd_init().
> > 
> > The pm_genpd_has_performance_state() then only needs to return true in
> > case the device's genpd has GENPD_FLAG_PERF_STATE set - else false.
> > 
> > Regarding ->set_performance_state(), let's just make it optional - and
> > when trying to set a new performance state, just walk the genpd
> > hierarchy, from the bottom up, then invoke the callback when it's
> > assigned.
> 
> Sounds good.

Actually, I don't think we need this flag at all. The presence of the
get_performance_state() callback itself can be used as a flag here instead of
defining a new one.

With both your solution above and mine, we pretty much don't check for the
presence of the set_performance_state() callbacks in the master hierarchy. If
it's present, we call it; otherwise nothing happens.
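
To make that concrete, here is a minimal user-space model of the
callback-presence check (the struct and function names are simplified
stand-ins for the genpd ones, not the real kernel API):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Simplified stand-in for struct generic_pm_domain. */
struct pm_domain {
        const char *name;
        int (*get_performance_state)(struct pm_domain *pd, unsigned long rate);
        int (*set_performance_state)(struct pm_domain *pd, unsigned int state);
};

/*
 * No GENPD_FLAG_PERF_STATE needed: the presence of the
 * ->get_performance_state() callback itself acts as the flag.
 */
static bool has_performance_state(const struct pm_domain *pd)
{
        return pd && pd->get_performance_state;
}

/* Made-up rate->state mapping, only so the callback exists. */
static int dummy_get_state(struct pm_domain *pd, unsigned long rate)
{
        return rate > 1000000 ? 2 : 1;
}

int main(void)
{
        struct pm_domain with_cb = { "A", dummy_get_state, NULL };
        struct pm_domain without_cb = { "B", NULL, NULL };

        printf("%s: %d\n", with_cb.name, has_performance_state(&with_cb));       /* 1 */
        printf("%s: %d\n", without_cb.name, has_performance_state(&without_cb)); /* 0 */
        return 0;
}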

-- 
viresh


Re: [PATCH V8 1/6] PM / Domains: Add support to select performance-state of domains

2017-07-30 Thread Viresh Kumar
On 29-07-17, 10:24, Ulf Hansson wrote:
> Let's invent a new genpd flag, GENPD_FLAG_PERF_STATE!
> 
> The creator of the genpd then needs to set this before calling
> pm_genpd_init(). Similar to how we are dealing with GENPD_FLAG_PM_CLK.
> 
> The requirement for GENPD_FLAG_PERF_STATES is to have the
> ->get_performance_state() assigned. This shall be verified during
> pm_genpd_init().
> 
> The pm_genpd_has_performance_state() then only needs to return true in
> case the device's genpd has GENPD_FLAG_PERF_STATE set - else false.
> 
> Regarding ->set_performance_state(), let's just make it optional - and
> when trying to set a new performance state, just walk the genpd
> hierarchy, from the bottom up, then invoke the callback when it's
> assigned.

Sounds good.
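
For reference, a compilable sketch of the proposed init-time validation
(the flag bit and names here are assumptions for illustration, not the
real genpd definitions):

#include <errno.h>
#include <stdio.h>

#define GENPD_FLAG_PERF_STATE   (1U << 8)       /* assumed bit, illustration only */

struct pm_domain {
        unsigned int flags;
        int (*get_performance_state)(struct pm_domain *pd, unsigned long rate);
};

/* Verify at init time: the flag requires ->get_performance_state(). */
static int pm_domain_init_check(const struct pm_domain *pd)
{
        if ((pd->flags & GENPD_FLAG_PERF_STATE) && !pd->get_performance_state)
                return -EINVAL;
        return 0;
}

int main(void)
{
        struct pm_domain bad = { .flags = GENPD_FLAG_PERF_STATE };

        printf("init: %d\n", pm_domain_init_check(&bad));       /* -22 (-EINVAL) */
        return 0;
}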

-- 
viresh


Re: [PATCH V8 1/6] PM / Domains: Add support to select performance-state of domains

2017-07-29 Thread Ulf Hansson
On 28 July 2017 at 13:00, Viresh Kumar  wrote:
> On 21-07-17, 10:35, Ulf Hansson wrote:
>> >> > +/*
>> >> > + * Returns true if anyone in genpd's parent hierarchy has
>> >> > + * set_performance_state() set.
>> >> > + */
>> >> > +static bool genpd_has_set_performance_state(struct generic_pm_domain 
>> >> > *genpd)
>> >> > +{
>> >>
>> >> So this function will become indirectly called by generic drivers
>> >> that support DVFS of the genpd for their devices.
>> >>
>> >> I think the data you validate here would be better to be pre-validated
>> >> at pm_genpd_init() and at pm_genpd_add|remove_subdomain() and the
>> >> result stored in a variable in the genpd struct. Especially when a
>> >> subdomain is added, that is a point when you can verify the
>> >> *_performance_state() callbacks, and thus make sure it's a correct
>> >> setup from the topology point of view.
>
> Looks like I have to keep this routine as is and your solution may not
> work well. :(
>
>> > Something like this ?
>> >
>> > diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
>> > index 4a898e095a1d..182c1911ea9c 100644
>> > --- a/drivers/base/power/domain.c
>> > +++ b/drivers/base/power/domain.c
>> > @@ -466,25 +466,6 @@ static int genpd_dev_pm_qos_notifier(struct 
>> > notifier_block *nb,
>> > return NOTIFY_DONE;
>> >  }
>> >
>> > -/*
>> > - * Returns true if anyone in genpd's parent hierarchy has
>> > - * set_performance_state() set.
>> > - */
>> > -static bool genpd_has_set_performance_state(struct generic_pm_domain 
>> > *genpd)
>> > -{
>> > -   struct gpd_link *link;
>> > -
>> > -   if (genpd->set_performance_state)
>> > -   return true;
>> > -
>> > -   list_for_each_entry(link, &genpd->slave_links, slave_node) {
>> > -   if (genpd_has_set_performance_state(link->master))
>> > -   return true;
>> > -   }
>> > -
>> > -   return false;
>> > -}
>> > -
>> >  /**
>> >   * pm_genpd_has_performance_state - Checks if power domain does 
>> > performance
>> >   * state management.
>> > @@ -507,7 +488,7 @@ bool pm_genpd_has_performance_state(struct device *dev)
>> >
>> > /* The parent domain must have set get_performance_state() */
>> > if (!IS_ERR(genpd) && genpd->get_performance_state) {
>> > -   if (genpd_has_set_performance_state(genpd))
>> > +   if (genpd->can_set_performance_state)
>> > return true;
>> >
>> > /*
>> > @@ -1594,6 +1575,8 @@ static int genpd_add_subdomain(struct 
>> > generic_pm_domain *genpd,
>> > if (genpd_status_on(subdomain))
>> > genpd_sd_counter_inc(genpd);
>> >
>> > +   subdomain->can_set_performance_state += 
>> > genpd->can_set_performance_state;
>> > +
>> >   out:
>> > genpd_unlock(genpd);
>> > genpd_unlock(subdomain);
>> > @@ -1654,6 +1637,8 @@ int pm_genpd_remove_subdomain(struct 
>> > generic_pm_domain *genpd,
>> > if (genpd_status_on(subdomain))
>> > genpd_sd_counter_dec(genpd);
>> >
>> > +   subdomain->can_set_performance_state -= 
>> > genpd->can_set_performance_state;
>> > +
>> > ret = 0;
>> > break;
>> > }
>> > @@ -1721,6 +1706,7 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
>> > genpd->max_off_time_changed = true;
>> > genpd->provider = NULL;
>> > genpd->has_provider = false;
>> > +   genpd->can_set_performance_state = !!genpd->set_performance_state;
>> > genpd->domain.ops.runtime_suspend = genpd_runtime_suspend;
>> > genpd->domain.ops.runtime_resume = genpd_runtime_resume;
>> > genpd->domain.ops.prepare = pm_genpd_prepare;
>> > diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
>> > index bf90177208a2..995d0cb1bc14 100644
>> > --- a/include/linux/pm_domain.h
>> > +++ b/include/linux/pm_domain.h
>> > @@ -64,6 +64,7 @@ struct generic_pm_domain {
>> > unsigned int suspended_count;   /* System suspend device counter */
>> > unsigned int prepared_count;/* Suspend counter of prepared 
>> > devices */
>> > unsigned int performance_state; /* Max requested performance state 
>> > */
>> > +   unsigned int can_set_performance_state; /* Number of parent 
>> > domains supporting set state */
>> > int (*power_off)(struct generic_pm_domain *domain);
>> > int (*power_on)(struct generic_pm_domain *domain);
>> > int (*get_performance_state)(struct device *dev, unsigned long 
>> > rate);
>> >
>>
>> Yes!
>
> The above diff will work fine only for the case where the master
> domain has all its masters set properly before genpd_add_subdomain()
> is called for the subdomain, as the genpd->can_set_performance_state
> count wouldn't change after that. But if the masters of the
> master are linked to the master after genpd_add_subdomain() is called
> for the subdomain, then we wouldn't be able to update the
> 

Re: [PATCH V8 1/6] PM / Domains: Add support to select performance-state of domains

2017-07-28 Thread Viresh Kumar
On 21-07-17, 10:35, Ulf Hansson wrote:
> >> > +/*
> >> > + * Returns true if anyone in genpd's parent hierarchy has
> >> > + * set_performance_state() set.
> >> > + */
> >> > +static bool genpd_has_set_performance_state(struct generic_pm_domain 
> >> > *genpd)
> >> > +{
> >>
> >> So this function will become indirectly called by generic drivers
> >> that support DVFS of the genpd for their devices.
> >>
> >> I think the data you validate here would be better to be pre-validated
> >> at pm_genpd_init() and at pm_genpd_add|remove_subdomain() and the
> >> result stored in a variable in the genpd struct. Especially when a
> >> subdomain is added, that is a point when you can verify the
> >> *_performance_state() callbacks, and thus make sure it's a correct
> >> setup from the topology point of view.

Looks like I have to keep this routine as is and your solution may not
work well. :(

> > Something like this ?
> >
> > diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
> > index 4a898e095a1d..182c1911ea9c 100644
> > --- a/drivers/base/power/domain.c
> > +++ b/drivers/base/power/domain.c
> > @@ -466,25 +466,6 @@ static int genpd_dev_pm_qos_notifier(struct 
> > notifier_block *nb,
> > return NOTIFY_DONE;
> >  }
> >
> > -/*
> > - * Returns true if anyone in genpd's parent hierarchy has
> > - * set_performance_state() set.
> > - */
> > -static bool genpd_has_set_performance_state(struct generic_pm_domain 
> > *genpd)
> > -{
> > -   struct gpd_link *link;
> > -
> > -   if (genpd->set_performance_state)
> > -   return true;
> > -
> > -   list_for_each_entry(link, &genpd->slave_links, slave_node) {
> > -   if (genpd_has_set_performance_state(link->master))
> > -   return true;
> > -   }
> > -
> > -   return false;
> > -}
> > -
> >  /**
> >   * pm_genpd_has_performance_state - Checks if power domain does performance
> >   * state management.
> > @@ -507,7 +488,7 @@ bool pm_genpd_has_performance_state(struct device *dev)
> >
> > /* The parent domain must have set get_performance_state() */
> > if (!IS_ERR(genpd) && genpd->get_performance_state) {
> > -   if (genpd_has_set_performance_state(genpd))
> > +   if (genpd->can_set_performance_state)
> > return true;
> >
> > /*
> > @@ -1594,6 +1575,8 @@ static int genpd_add_subdomain(struct 
> > generic_pm_domain *genpd,
> > if (genpd_status_on(subdomain))
> > genpd_sd_counter_inc(genpd);
> >
> > +   subdomain->can_set_performance_state += 
> > genpd->can_set_performance_state;
> > +
> >   out:
> > genpd_unlock(genpd);
> > genpd_unlock(subdomain);
> > @@ -1654,6 +1637,8 @@ int pm_genpd_remove_subdomain(struct 
> > generic_pm_domain *genpd,
> > if (genpd_status_on(subdomain))
> > genpd_sd_counter_dec(genpd);
> >
> > +   subdomain->can_set_performance_state -= 
> > genpd->can_set_performance_state;
> > +
> > ret = 0;
> > break;
> > }
> > @@ -1721,6 +1706,7 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
> > genpd->max_off_time_changed = true;
> > genpd->provider = NULL;
> > genpd->has_provider = false;
> > +   genpd->can_set_performance_state = !!genpd->set_performance_state;
> > genpd->domain.ops.runtime_suspend = genpd_runtime_suspend;
> > genpd->domain.ops.runtime_resume = genpd_runtime_resume;
> > genpd->domain.ops.prepare = pm_genpd_prepare;
> > diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
> > index bf90177208a2..995d0cb1bc14 100644
> > --- a/include/linux/pm_domain.h
> > +++ b/include/linux/pm_domain.h
> > @@ -64,6 +64,7 @@ struct generic_pm_domain {
> > unsigned int suspended_count;   /* System suspend device counter */
> > unsigned int prepared_count;/* Suspend counter of prepared 
> > devices */
> > unsigned int performance_state; /* Max requested performance state 
> > */
> > +   unsigned int can_set_performance_state; /* Number of parent domains 
> > supporting set state */
> > int (*power_off)(struct generic_pm_domain *domain);
> > int (*power_on)(struct generic_pm_domain *domain);
> > int (*get_performance_state)(struct device *dev, unsigned long 
> > rate);
> >
> 
> Yes!

The above diff will work fine only for the case where the master
domain has all its masters set properly before genpd_add_subdomain()
is called for the subdomain, as the genpd->can_set_performance_state
count wouldn't change after that. But if the masters of the
master are linked to the master after genpd_add_subdomain() is called
for the subdomain, then we wouldn't be able to update the
subdomain->can_set_performance_state field later.

For example, consider this scenario:

   Domain A (has set_performance_state())

   Domain B                Domain C
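
A toy model of the late-linking problem described here (a hypothetical
three-domain chain where A is B's master and B is C's master; the counter
is only propagated at link time, exactly as in the diff above):

#include <stdio.h>

/* Toy model: can_set counts how many domains in the master hierarchy
 * (including self) support set_performance_state(). */
struct domain {
        const char *name;
        unsigned int can_set;
};

/* Mimics the diff's genpd_add_subdomain(): the count is propagated
 * once, at link time, and never revisited afterwards. */
static void add_subdomain(struct domain *master, struct domain *sub)
{
        sub->can_set += master->can_set;
}

int main(void)
{
        struct domain a = { "A", 1 };   /* has set_performance_state() */
        struct domain b = { "B", 0 };
        struct domain c = { "C", 0 };

        add_subdomain(&b, &c);  /* C linked under B first: inherits 0 */
        add_subdomain(&a, &b);  /* B linked under A later: B becomes 1 ... */

        /* ... but C is never revisited, so its count is now stale: */
        printf("A=%u B=%u C=%u\n", a.can_set, b.can_set, c.can_set);    /* 1 1 0 */
        return 0;
}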


Re: [PATCH V8 1/6] PM / Domains: Add support to select performance-state of domains

2017-07-24 Thread Viresh Kumar
On 23-07-17, 09:20, Ulf Hansson wrote:
> I should have been more clear. Walking the master list, then checking
> each link without using locks - why is that safe?
> 
> Then even if you think it's safe, please explain in detail why it's
> needed.
> 
> Walking the slave list as being done for power off/on should work
> perfectly okay for your case as well. No?

I got it. I will try to explain why it is done this way with the help
of two versions of the genpd_update_performance_state() routine. The first
one is from a previous version and the second one is from the current
series.

Just as a note, the problem is not with traversing the slave_links list
but the master_links list.

1.) Previous version (has deadlock issues, as you reported then).

>> +static int genpd_update_performance_state(struct generic_pm_domain *genpd,
>> + int depth)
>> +{

The genpd is already locked here..

>> +   struct generic_pm_domain_data *pd_data;
>> +   struct generic_pm_domain *subdomain;
>> +   struct pm_domain_data *pdd;
>> +   unsigned int state = 0, prev;
>> +   struct gpd_link *link;
>> +   int ret;
>> +
>> +   /* Traverse all devices within the domain */
>> +   list_for_each_entry(pdd, &genpd->dev_list, list_node) {
>> +   pd_data = to_gpd_data(pdd);
>> +
>> +   if (pd_data->performance_state > state)
>> +   state = pd_data->performance_state;
>> +   }

Above is fine as we traverse the list of devices that are powered by the
PM domain. No additional locks are required.

>> +   /* Traverse all subdomains within the domain */
>> +   list_for_each_entry(link, &genpd->master_links, master_node) {
>> +   subdomain = link->slave;
>> +
>> +   if (subdomain->performance_state > state)
>> +   state = subdomain->performance_state;
>> +   }

But this is not fine at all. subdomain->performance_state might get
updated from another thread for another genpd. And so we need locking
here, but we can't do that as we need to take locks starting from
slaves to masters. This is what you correctly pointed out in earlier
versions.

>> +   if (genpd->performance_state == state)
>> +   return 0;
>> +
>> +   /*
>> +* Not all domains support updating performance state. Move on to 
>> their
>> +* parent domains in that case.
>> +*/
>> +   if (genpd->set_performance_state) {
>> +   ret = genpd->set_performance_state(genpd, state);
>> +   if (!ret)
>> +   genpd->performance_state = state;
>> +
>> +   return ret;
>> +   }
>> +
>> +   prev = genpd->performance_state;
>> +   genpd->performance_state = state;

This is racy as well (because of the earlier traversal of the
master-list), as genpd->performance_state might currently be getting read
for one of its masters (from another instance of this same routine).

>> +
>> +   list_for_each_entry(link, &genpd->slave_links, slave_node) {
>> +   struct generic_pm_domain *master = link->master;
>> +
>> +   genpd_lock_nested(master, depth + 1);
>> +   ret = genpd_update_performance_state(master, depth + 1);
>> +   genpd_unlock(master);
>>
>> ... Handle errors here.
>> +   }
>> +
>> +   return 0;
>> +}

So the conclusion was that we surely can't lock the subdomains while
running genpd_update_performance_state() for a master genpd.

And that's what the below latest code tried to address.

2.) New code, which shouldn't have any of those deadlock issues.
 
>> +static int genpd_update_performance_state(struct generic_pm_domain *genpd,

genpd is still locked from its caller.

>> + int depth)
>> +{
>> +   struct generic_pm_domain_data *pd_data;
>> +   struct generic_pm_domain *master;
>> +   struct pm_domain_data *pdd;
>> +   unsigned int state = 0, prev;
>> +   struct gpd_link *link;
>> +   int ret;
>> +
>> +   /* Traverse all devices within the domain */
>> +   list_for_each_entry(pdd, &genpd->dev_list, list_node) {
>> +   pd_data = to_gpd_data(pdd);
>> +
>> +   if (pd_data->performance_state > state)
>> +   state = pd_data->performance_state;
>> +   }
>> +

Above remains the same and shouldn't have any issues.

>> +   /* Traverse all subdomains within the domain */
>> +   list_for_each_entry(link, &genpd->master_links, master_node) {
>> +   if (link->performance_state > state)
>> +   state = link->performance_state;
>> +   }
>> +

Instead of a single performance_state field for the entire subdomain
structure, we store a performance_state value for each
master <-> subdomain pair. And this field is protected by the master
lock, always.

Since the genpd was already locked, the link->performance_state field
of all its subdomains can be accessed 
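
To make the per-link scheme self-contained, here is a compilable
user-space model (simplified names, fixed-size arrays instead of the
kernel's linked lists - a sketch of the idea, not the actual genpd code):

#include <stdio.h>

#define MAX_LINKS 4

/* One entry per master <-> subdomain pair; perf_state mirrors
 * link->performance_state and is only touched under the master's lock. */
struct link {
        unsigned int perf_state;        /* subdomain's request, cached at the master */
};

struct domain {
        const char *name;
        struct link master_links[MAX_LINKS];    /* links to subdomains */
        int nr_links;
};

/* Aggregate the highest cached request. No subdomain locks are needed
 * because the cached copies live on the master's side of each link. */
static unsigned int aggregate(const struct domain *d)
{
        unsigned int state = 0;
        int i;

        for (i = 0; i < d->nr_links; i++)
                if (d->master_links[i].perf_state > state)
                        state = d->master_links[i].perf_state;
        return state;
}

int main(void)
{
        struct domain m = { .name = "master", .nr_links = 2 };

        m.master_links[0].perf_state = 3;       /* subdomain X requested 3 */
        m.master_links[1].perf_state = 5;       /* subdomain Y requested 5 */
        printf("%s -> state %u\n", m.name, aggregate(&m));      /* 5 */
        return 0;
}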


Re: [PATCH V8 1/6] PM / Domains: Add support to select performance-state of domains

2017-07-23 Thread Ulf Hansson
On 21 July 2017 at 11:05, Viresh Kumar  wrote:
> On 21-07-17, 10:35, Ulf Hansson wrote:
>> This depends on how drivers are dealing with runtime PM in conjunction
>> with the new pm_genpd_update_performance_state().
>>
>> In case you don't want to manage some of this in genpd, then each
>> driver will have to drop their constraints every time they are about
>> to runtime suspend its device. And restore them at runtime resume.
>>
>> To me, that seems like a bad idea. Then it's better to make genpd
>> deal with this - somehow.
>
> Right. So we should call the ->set_performance_state() from off/on as
> well. Will do that.
>
>> Yes!
>>
>> On top of that change, you could also add some validation of the
>> get/set callbacks, if there are any constraints on how they must be
>> assigned.
>
> I am not sure if I understood that, sorry. What other constraints are
> you talking about ?

Just thinking that if a genpd is about to be added as a subdomain, and
it has ->get_performance_state(), but not ->set_performance_state(),
perhaps we should require its master to have
->set_performance_state().

Anyway, I let you do the thinking of what is and what is not needed here.

[...]

>>
>> My main concern is the order in which you take the locks. We never take
>> a master's lock before the current domain lock.
>
> Right and this patch doesn't break that.
>
>> And when walking the topology, we use the slave links and lock the
>> first master from that list. Continue with that tree, then get back
>> to the slave list and pick the next master.
>
> Again, that's how this patch does it.
>
>> If you change that order, we could end up getting deadlocks.
>
> And because that order isn't changed at all, we shouldn't have
> deadlocks.

True. Trying to clarify more below...

>
>> >> A general comment is that I think you should look more closely in the
>> >> code of genpd_power_off|on(). And also how it calls the
>> >> ->power_on|off() callbacks.
>> >>
>> >> Depending whether you want to update the performance state of the
>> >> master domain before the subdomain or the opposite, you will find one
>> >> of them being suited for this case as well.
>> >
>> > Isn't it very much similar to that already ? The only major difference
>> > is link->performance_state and I already explained why is it required
>> > to be done that way to avoid deadlocks.
>>
>> No, because you walk the master lists. Thus getting a different order of
>> locks.
>>
>> I did some drawing of this, using the slave links, and I don't see any
>> issues why you can't use that instead.
>
> Damn, I am confused about which part you are talking about. Let me paste
> the code here once again and clarify how this is supposed to work just fine :)

I should have been more clear. Walking the master list, then checking
each link without using locks - why is that safe?

Then even if you think it's safe, please explain in detail why it's needed.

Walking the slave list as being done for power off/on should work
perfectly okay for your case as well. No?
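
As an aside, the value of a fixed lock order can be shown with two plain
mutexes (a generic sketch, not genpd code): as long as every thread locks
the current domain before its master, neither can hold the lock the other
is waiting for.

#include <pthread.h>
#include <stdio.h>

/* Two levels of a domain hierarchy: always lock the child (current
 * domain) before the parent (master), in every code path. */
static pthread_mutex_t child_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t parent_lock = PTHREAD_MUTEX_INITIALIZER;

static void *update(void *arg)
{
        /* If one caller instead took parent_lock first, two threads
         * could each hold one lock and wait forever for the other. */
        pthread_mutex_lock(&child_lock);
        pthread_mutex_lock(&parent_lock);
        printf("thread %ld: child+parent held\n", (long)arg);
        pthread_mutex_unlock(&parent_lock);
        pthread_mutex_unlock(&child_lock);
        return NULL;
}

int main(void)
{
        pthread_t t1, t2;

        pthread_create(&t1, NULL, update, (void *)1L);
        pthread_create(&t2, NULL, update, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
}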

[...]

Kind regards
Uffe


Re: [PATCH V8 1/6] PM / Domains: Add support to select performance-state of domains

2017-07-21 Thread Viresh Kumar
On 21-07-17, 10:35, Ulf Hansson wrote:
> This depends on how drivers are dealing with runtime PM in conjunction
> with the new pm_genpd_update_performance_state().
> 
> In case you don't want to manage some of this in genpd, then each
> driver will have to drop their constraints every time they are about
> to runtime suspend its device. And restore them at runtime resume.
> 
> To me, that seems like a bad idea. Then it's better to make genpd
> deal with this - somehow.

Right. So we should call the ->set_performance_state() from off/on as
well. Will do that.

> Yes!
> 
> On top of that change, you could also add some validation of the
> get/set callbacks, if there are any constraints on how they must be
> assigned.

I am not sure if I understood that, sorry. What other constraints are
you talking about ?

> >> From a locking point of view we always traverse the topology from
> >> the bottom up. In other words, we walk the genpd's ->slave_links, and
> >> lock the masters in the order they are defined via the slave_links
> >> list. The order is important to avoid deadlocks. I don't think you
> >> should walk the master_links as being done above, especially not
> >> without using locks.
> >
> > So we need to look at the performance states of the subdomains of a
> > master. The way it is done in this patch with help of
> > link->performance_state, we don't need that locking while traversing
> > the master_links list. Here is how:
> >
> > - Master's (genpd) master_links list is only updated under master's
> >   lock, which we have already taken here. So master_links list can't
> >   get updated concurrently.
> >
> > - The link->performance_state field of a subdomain (or slave) is only
> >   updated from within the master's lock. And we are reading it here
> >   from the same lock.
> >
> > AFAIU, there shouldn't be any deadlocks or locking issues here. Can
> > you describe some case that may blow up ?
> 
> My main concern is the order in which you take the locks. We never take
> a master's lock before the current domain lock.

Right and this patch doesn't break that.

> And when walking the topology, we use the slave links and lock the
> first master from that list. Continue with that tree, then get back
> to the slave list and pick the next master.

Again, that's how this patch does it.

> If you change that order, we could end up getting deadlocks.

And because that order isn't changed at all, we shouldn't have
deadlocks.

> >> A general comment is that I think you should look more closely in the
> >> code of genpd_power_off|on(). And also how it calls the
> >> ->power_on|off() callbacks.
> >>
> >> Depending whether you want to update the performance state of the
> >> master domain before the subdomain or the opposite, you will find one
> >> of them being suited for this case as well.
> >
> > Isn't it very much similar to that already ? The only major difference
> > is link->performance_state and I already explained why is it required
> > to be done that way to avoid deadlocks.
> 
> No, because you walk the master lists. Thus getting a different order of
> locks.
> 
> I did some drawing of this, using the slave links, and I don't see any
> issues why you can't use that instead.

Damn, I am confused about which part you are talking about. Let me paste
the code here once again and clarify how this is supposed to work just fine :)

>> +static int genpd_update_performance_state(struct generic_pm_domain *genpd,
>> + int depth)
>> +{

genpd is already locked.

>> +   struct generic_pm_domain_data *pd_data;
>> +   struct generic_pm_domain *master;
>> +   struct pm_domain_data *pdd;
>> +   unsigned int state = 0, prev;
>> +   struct gpd_link *link;
>> +   int ret;
>> +
>> +   /* Traverse all devices within the domain */
>> +   list_for_each_entry(pdd, &genpd->dev_list, list_node) {
>> +   pd_data = to_gpd_data(pdd);
>> +
>> +   if (pd_data->performance_state > state)
>> +   state = pd_data->performance_state;
>> +   }
>> +
>> +   /* Traverse all subdomains within the domain */
>> +   list_for_each_entry(link, &genpd->master_links, master_node) {

This is the only place where we look at all the sub-domains, but we
don't need locking here at all as link->performance_state is only
accessed while "genpd" is locked. It doesn't need sub-domain's lock.

>> +   if (link->performance_state > state)
>> +   state = link->performance_state;
>> +   }
>> +
>> +   if (genpd->performance_state == state)
>> +   return 0;
>> +
>> +   if (genpd->set_performance_state) {
>> +   ret = genpd->set_performance_state(genpd, state);
>> +   if (!ret)
>> +   genpd->performance_state = state;
>> +
>> +   return ret;
>> +   }
>> +
>> +   /*
>> +* Not all domains support updating performance state. Move on to 
>> their


Re: [PATCH V8 1/6] PM / Domains: Add support to select performance-state of domains

2017-07-21 Thread Ulf Hansson
[...]

>
>> What happens when a power domain gets powered off and then on. Is the
>> performance state restored? Please elaborate a bit on this.
>
> Can this happen while the genpd is still in use? If not then we
> wouldn't have a problem here as the users of it would have revoked
> their constraints by now.

This depends on how drivers are dealing with runtime PM in conjunction
with the new pm_genpd_update_performance_state().

In case you don't want to manage some of this in genpd, then each
driver will have to drop their constraints every time they are about
to runtime suspend its device. And restore them at runtime resume.

To me, that seems like a bad idea. Then it's better to make genpd
deal with this - somehow.

[...]

>>
>> I think a better name of this function is:
>> dev_pm_genpd_has_performance_state(). What do you think?
>
> Sure.
>
>> We might even want to decide to explicitly use the terminology
>> "DVFS" instead. In that case, perhaps converting the names of the
>> callbacks/API to use "dvfs" instead. For the API added here, maybe
>> dev_pm_genpd_can_dvfs().
>
> I am not sure about that really. Because in most of the cases genpd
> wouldn't do any freq switching, but only voltage.

Fair enough, let's stick with "performance" then. However, then please
make sure to not mention DVFS in the changelog/comments as it could be
confusing.

>
>> > Note that, the performance level as returned by
>> > ->get_performance_state() for the parent domain of a device is used for
>> > all domains in parent hierarchy.
>>
>> Please clarify a bit on this. What exactly does this mean?
>
> For a hierarchy like this:
>
> PPdomain 0   PPdomain 1
>||
>--
> |
>  Pdomain
> |
>   device
>
> ->dev_get_performance_state(dev) would be called for the device and it
> will return a single value (X) representing the performance index of
> its parent ("Pdomain"). But the direct parent domain may not support

This is not parent or parent domain, but just "domain" or perhaps "PM
domain" to make it clear.

> setting of performance index and so we need to propagate the call to
> parents of Pdomain. And that would be PPdomain 0 and 1.

Use "master domains" instead.

>
> Now the paragraph in the commit says that the same performance index
> value X will be used for both these PPdomains, as we don't want to
> make things more complex to begin with.

Alright, I get it, thanks!
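
A toy model of that propagation (hypothetical names; it only shows the
one rule discussed here, namely that the same value X reaches every
master once no level below can program it):

#include <stdio.h>

#define MAX_MASTERS 4

struct domain {
        const char *name;
        int (*set_state)(struct domain *d, unsigned int state);
        struct domain *masters[MAX_MASTERS];
        int nr_masters;
};

static int print_set(struct domain *d, unsigned int state)
{
        printf("%s <- state %u\n", d->name, state);
        return 0;
}

/* Pass the same X up the hierarchy until a domain can program it. */
static void propagate(struct domain *d, unsigned int x)
{
        int i;

        if (d->set_state) {
                d->set_state(d, x);     /* this level handles it; stop here */
                return;
        }
        for (i = 0; i < d->nr_masters; i++)
                propagate(d->masters[i], x);
}

int main(void)
{
        struct domain pp0 = { .name = "PPdomain 0", .set_state = print_set };
        struct domain pp1 = { .name = "PPdomain 1", .set_state = print_set };
        struct domain p = { .name = "Pdomain",
                            .masters = { &pp0, &pp1 }, .nr_masters = 2 };

        propagate(&p, 7);       /* the same X (7) reaches both PPdomains */
        return 0;
}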

>
>> > Tested-by: Rajendra Nayak 
>> > Signed-off-by: Viresh Kumar 
>> > ---
>> >  drivers/base/power/domain.c | 223 
>> > 
>> >  include/linux/pm_domain.h   |  22 +
>> >  2 files changed, 245 insertions(+)
>> >
>> > diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
>> > index 71c95ad808d5..d506be9ff1f7 100644
>> > --- a/drivers/base/power/domain.c
>> > +++ b/drivers/base/power/domain.c
>> > @@ -466,6 +466,229 @@ static int genpd_dev_pm_qos_notifier(struct 
>> > notifier_block *nb,
>> > return NOTIFY_DONE;
>> >  }
>> >
>> > +/*
>> > + * Returns true if anyone in genpd's parent hierarchy has
>> > + * set_performance_state() set.
>> > + */
>> > +static bool genpd_has_set_performance_state(struct generic_pm_domain 
>> > *genpd)
>> > +{
>>
>> So this function will become indirectly called by generic drivers
>> that support DVFS of the genpd for their devices.
>>
>> I think the data you validate here would be better to be pre-validated
>> at pm_genpd_init() and at pm_genpd_add|remove_subdomain() and the
>> result stored in a variable in the genpd struct. Especially when a
>> subdomain is added, that is a point when you can verify the
>> *_performance_state() callbacks, and thus make sure it's a correct
>> setup from the topology point of view.
>
> Something like this ?
>
> diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
> index 4a898e095a1d..182c1911ea9c 100644
> --- a/drivers/base/power/domain.c
> +++ b/drivers/base/power/domain.c
> @@ -466,25 +466,6 @@ static int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
> return NOTIFY_DONE;
>  }
>
> -/*
> - * Returns true if anyone in genpd's parent hierarchy has
> - * set_performance_state() set.
> - */
> -static bool genpd_has_set_performance_state(struct generic_pm_domain *genpd)
> -{
> -   struct gpd_link *link;
> -
> -   if (genpd->set_performance_state)
> -   return true;
> -
> -   list_for_each_entry(link, &genpd->slave_links, slave_node) {
> -   if (genpd_has_set_performance_state(link->master))
> -   return true;
> -   }
> -
> -   return false;
> -}
> -
>  /**
>   * pm_genpd_has_performance_state - Checks if power domain does performance
>   * state management.
> @@ -507,7 +488,7 @@ bool pm_genpd_has_performance_state(struct device *dev)
>
> /* The parent domain must have set get_performance_state() */

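A fuller sketch of that pre-validation idea, assuming a hypothetical
has_set_performance_state field cached in struct generic_pm_domain. It would
be refreshed from pm_genpd_init() and pm_genpd_add|remove_subdomain(), so the
check at DVFS time becomes O(1):

/*
 * Hypothetical: cache the result of the hierarchy walk. Masters must be
 * refreshed before their slaves, which holds when this is run at init
 * time and again whenever a subdomain link is added or removed.
 */
static void genpd_update_has_set_performance_state(struct generic_pm_domain *genpd)
{
        struct gpd_link *link;

        if (genpd->set_performance_state) {
                genpd->has_set_performance_state = true;
                return;
        }

        genpd->has_set_performance_state = false;
        list_for_each_entry(link, &genpd->slave_links, slave_node) {
                if (link->master->has_set_performance_state) {
                        genpd->has_set_performance_state = true;
                        break;
                }
        }
}
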
Re: [PATCH V8 1/6] PM / Domains: Add support to select performance-state of domains

2017-07-19 Thread Viresh Kumar
On 17-07-17, 14:38, Ulf Hansson wrote:
> On 21 June 2017 at 09:10, Viresh Kumar  wrote:
> > Some platforms have the capability to configure the performance state of
> > their Power Domains. The performance levels are identified by positive
> > integer values; a lower value represents a lower performance state.
> >
> > This patch adds a new genpd API: pm_genpd_update_performance_state().
> > The caller passes the affected device and the frequency representing its
> > next DVFS state.
> >
> > The power domains get two new callbacks:
> >
> > - get_performance_state(): This is called by the genpd core to retrieve
> >   the performance state (integer value) corresponding to a target
> >   frequency for the device. This state is used by the genpd core to find
> >   the highest requested state by all the devices powered by a domain.
> 
> Please clarify this a bit more.
> 
> I guess what you want to say is that genpd aggregates the requested
> performance states of all its devices and its subdomains, to be able to
> set a correct (highest requested) performance state.

Right.

> Moreover, could you perhaps explain a bit on *when* this callback
> becomes invoked.

Sure, it's done when pm_genpd_update_performance_state() is called for
a device. On such an event, the genpd core first calls
get_performance_state() and gets the device's target state. It then
aggregates the states of all devices/subdomains of the parent domain
of this device and finally calls set_performance_state() for the genpd.
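
In rough code form the sequence is as below. This is not the patch's exact
code: locking is omitted, the callback signatures are inferred from the
changelog, and genpd_aggregate_performance_state() is a made-up name for the
aggregation step:

/* Hypothetical aggregation helper, using the fields named in the patch. */
static int genpd_aggregate_performance_state(struct generic_pm_domain *genpd,
                                             int new_state)
{
        struct pm_domain_data *pdd;
        struct gpd_link *link;
        int state = new_state;

        /* Highest state requested by the domain's own devices... */
        list_for_each_entry(pdd, &genpd->dev_list, list_node)
                state = max(state, to_gpd_data(pdd)->performance_state);

        /* ...and by each of its subdomains. */
        list_for_each_entry(link, &genpd->master_links, master_node)
                state = max(state, link->performance_state);

        return state;
}

int pm_genpd_update_performance_state(struct device *dev, unsigned long rate)
{
        struct generic_pm_domain *genpd = dev_to_genpd(dev);
        int state;

        /* 1. Translate the device's target frequency into a state. */
        state = genpd->get_performance_state(dev, rate);
        if (state < 0)
                return state;

        /* 2. Aggregate the highest request across devices and subdomains. */
        state = genpd_aggregate_performance_state(genpd, state);

        /* 3. Program this domain (or its masters) with the winning state. */
        return genpd->set_performance_state(genpd, state);
}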

> >
> > - set_performance_state(): The highest state retrieved from above
> >   interface is then passed to this callback to finally program the
> >   performance state of the power domain.
> 
> When will this callback be invoked?

See above.

> What happens when a power domain gets powered off and then on. Is the
> performance state restored? Please elaborate a bit on this.

Can this happen while the genpd is still in use? If not then we
wouldn't have a problem here as the users of it would have revoked
their constraints by now.

> > The power domains can avoid supplying these callbacks, if they don't
> > support setting performance-states.
> >
> > A power domain may have only get_performance_state() callback, if it
> > doesn't have the capability of changing the performance state itself but
> > someone in its parent hierarchy has.
> >
> > A power domain may have only set_performance_state(), if it doesn't have
> > any direct devices below it but subdomains. And so the
> > get_performance_state() will never get called from the core.
> >
> 
> It seems like the ->get_performance_state() is a device specific
> callback, while the ->set_performance_state() is a genpd domain
> callback.

Yes.

> I am wondering whether we should try to improve the names of the
> callbacks to reflect this.

What about dev_get_performance_state() and
genpd_set_performance_state() ?

> > The more common case would be to have both the callbacks set.
> >
> > Another API, pm_genpd_has_performance_state(), is also added to let
> > other parts of the kernel check if the power domain of a device supports
> > performance-states or not. This could have been done from
> 
> I think a better name of this function is:
> dev_pm_genpd_has_performance_state(). What do you think?

Sure.

> We might even want to decide to explicitly stay using the terminology
> "DVFS" instead. In such case, perhaps converting the names of the
> callbacks/API to use "dvfs" instead. For the API added here, maybe
> dev_pm_genpd_can_dvfs().

I am not sure about that really. Because in most of the cases genpd
wouldn't do any freq switching, but only voltage.

> > pm_genpd_update_performance_state() as well, but that routine gets
> > called every time we do DVFS for the device and it wouldn't be optimal
> > in that case.
> 
> So pm_genpd_update_performance_state() is also a new API added in
> $subject patch. But there is no information about it in the changelog,
> besides the above. Please add that.

Yeah, I missed that and will include, along the lines of what I said at the
beginning of this reply, how this leads to the other callbacks being invoked.

> Moreover, perhaps we should rename the function to dev_pm_genpd_set_dvfs()

Not sure as earlier said.

> > Note that, the performance level as returned by
> > ->get_performance_state() for the parent domain of a device is used for
> > all domains in parent hierarchy.
> 
> Please clarify a bit on this. What exactly does this mean?

For a hierarchy like this:

     PPdomain 0       PPdomain 1
          |                |
          ------------------
                   |
                Pdomain
                   |
                 device

->dev_get_performance_state(dev) would be called for the device and it
will return a single value (X) representing the performance index of
its parent ("Pdomain"). But the direct parent domain may not support
setting of performance index and so we need to propagate the call to
parents of Pdomain. And that would be PPdomain 0 and 1.

Now the 

Re: [PATCH V8 1/6] PM / Domains: Add support to select performance-state of domains

2017-07-17 Thread Ulf Hansson
On 21 June 2017 at 09:10, Viresh Kumar  wrote:
> Some platforms have the capability to configure the performance state of
> their Power Domains. The performance levels are identified by positive
> integer values; a lower value represents a lower performance state.
>
> This patch adds a new genpd API: pm_genpd_update_performance_state().
> The caller passes the affected device and the frequency representing its
> next DVFS state.
>
> The power domains get two new callbacks:
>
> - get_performance_state(): This is called by the genpd core to retrieve
>   the performance state (integer value) corresponding to a target
>   frequency for the device. This state is used by the genpd core to find
>   the highest requested state by all the devices powered by a domain.

Please clarify this a bit more.

I guess what you want to say is that genpd aggregates the requested
performance states of all its devices and its subdomains, to be able to
set a correct (highest requested) performance state.

Moreover, could you perhaps explain a bit on *when* this callback
becomes invoked.

>
> - set_performance_state(): The highest state retrieved from above
>   interface is then passed to this callback to finally program the
>   performance state of the power domain.

When will this callback be invoked?

What happens when a power domain gets powered off and then on. Is the
performance state restored? Please elaborate a bit on this.

>
> The power domains can avoid supplying these callbacks, if they don't
> support setting performance-states.
>
> A power domain may have only get_performance_state() callback, if it
> doesn't have the capability of changing the performance state itself but
> someone in its parent hierarchy has.
>
> A power domain may have only set_performance_state(), if it doesn't have
> any direct devices below it but subdomains. And so the
> get_performance_state() will never get called from the core.
>

It seems like the ->get_performance_state() is a device specific
callback, while the ->set_performance_state() is a genpd domain
callback.

I am wondering whether we should try to improve the names of the
callbacks to reflect this.

> The more common case would be to have both the callbacks set.
>
> Another API, pm_genpd_has_performance_state(), is also added to let
> other parts of the kernel check if the power domain of a device supports
> performance-states or not. This could have been done from

I think a better name of this function is:
dev_pm_genpd_has_performance_state(). What do you think?

We might even want to decide to explicitly stay using the terminology
"DVFS" instead. In such case, perhaps converting the names of the
callbacks/API to use "dvfs" instead. For the API added here, maybe
dev_pm_genpd_can_dvfs().

> pm_genpd_update_performance_state() as well, but that routine gets
> called every time we do DVFS for the device and it wouldn't be optimal
> in that case.

So pm_genpd_update_performance_state() is also a new API added in
$subject patch. But there is no information about it in the changelog,
besides the above. Please add that.

Moreover, perhaps we should rename the function to dev_pm_genpd_set_dvfs()

>
> Note that, the performance level as returned by
> ->get_performance_state() for the parent domain of a device is used for
> all domains in parent hierarchy.

Please clarify a bit on this. What exactly does this mean?

>
> Tested-by: Rajendra Nayak 
> Signed-off-by: Viresh Kumar 
> ---
>  drivers/base/power/domain.c | 223 ++++++++++++++++++++++++++++++++++++++++
>  include/linux/pm_domain.h   |  22 +
>  2 files changed, 245 insertions(+)
>
> diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
> index 71c95ad808d5..d506be9ff1f7 100644
> --- a/drivers/base/power/domain.c
> +++ b/drivers/base/power/domain.c
> @@ -466,6 +466,229 @@ static int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
> return NOTIFY_DONE;
>  }
>
> +/*
> + * Returns true if anyone in genpd's parent hierarchy has
> + * set_performance_state() set.
> + */
> +static bool genpd_has_set_performance_state(struct generic_pm_domain *genpd)
> +{

So this function will become indirectly called by generic drivers
that support DVFS of the genpd for their devices.

I think the data you validate here would be better to be pre-validated
at pm_genpd_init() and at pm_genpd_add|remove_subdomain() and the
result stored in a variable in the genpd struct. Especially when a
subdomain is added, that is a point when you can verify the
*_performance_state() callbacks, and thus make sure it's a correct
setup from the topology point of view.

> +   struct gpd_link *link;
> +
> +   if (genpd->set_performance_state)
> +   return true;
> +
> +   list_for_each_entry(link, &genpd->slave_links, slave_node) {
> +   if (genpd_has_set_performance_state(link->master))
> +   return true;
> +   }
> +
> +   return false;
> +}
> +
> +/**
> + * 

[PATCH V8 1/6] PM / Domains: Add support to select performance-state of domains

2017-06-21 Thread Viresh Kumar
Some platforms have the capability to configure the performance state of
their Power Domains. The performance levels are identified by positive
integer values; a lower value represents a lower performance state.

This patch adds a new genpd API: pm_genpd_update_performance_state().
The caller passes the affected device and the frequency representing its
next DVFS state.

The power domains get two new callbacks:

- get_performance_state(): This is called by the genpd core to retrieve
  the performance state (integer value) corresponding to a target
  frequency for the device. This state is used by the genpd core to find
  the highest requested state by all the devices powered by a domain.

- set_performance_state(): The highest state retrieved from above
  interface is then passed to this callback to finally program the
  performance state of the power domain.

The power domains can avoid supplying these callbacks, if they don't
support setting performance-states.

A power domain may have only get_performance_state() callback, if it
doesn't have the capability of changing the performance state itself but
someone in its parent hierarchy has.

A power domain may have only set_performance_state(), if it doesn't have
any direct devices below it but subdomains. And so the
get_performance_state() will never get called from the core.

The more common case would be to have both the callbacks set.

Another API, pm_genpd_has_performance_state(), is also added to let
other parts of the kernel check if the power domain of a device supports
performance-states or not. This could have been done from
pm_genpd_update_performance_state() as well, but that routine gets
called every time we do DVFS for the device and it wouldn't be optimal
in that case.

Note that, the performance level as returned by
->get_performance_state() for the parent domain of a device is used for
all domains in parent hierarchy.
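
For illustration, a hypothetical platform could wire the two callbacks up as
below (the foo_* names, thresholds and register write are made up, and the
callback signatures are inferred from the description above):

/* Hypothetical register write into the domain's performance controller. */
static int foo_write_perf_register(int state);

/* Map the device's target frequency to a performance index. */
static int foo_get_performance_state(struct device *dev, unsigned long rate)
{
        if (rate <= 100000000)
                return 1;
        if (rate <= 200000000)
                return 2;
        return 3;
}

/* Program the aggregated (highest) index into the hardware. */
static int foo_set_performance_state(struct generic_pm_domain *genpd,
                                     int state)
{
        return foo_write_perf_register(state);
}

static struct generic_pm_domain foo_pd = {
        .name                   = "foo-pd",
        .get_performance_state  = foo_get_performance_state,
        .set_performance_state  = foo_set_performance_state,
};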

Tested-by: Rajendra Nayak 
Signed-off-by: Viresh Kumar 
---
 drivers/base/power/domain.c | 223 ++++++++++++++++++++++++++++++++++++++++
 include/linux/pm_domain.h   |  22 +
 2 files changed, 245 insertions(+)

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 71c95ad808d5..d506be9ff1f7 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -466,6 +466,229 @@ static int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
return NOTIFY_DONE;
 }
 
+/*
+ * Returns true if anyone in genpd's parent hierarchy has
+ * set_performance_state() set.
+ */
+static bool genpd_has_set_performance_state(struct generic_pm_domain *genpd)
+{
+   struct gpd_link *link;
+
+   if (genpd->set_performance_state)
+   return true;
+
+   list_for_each_entry(link, &genpd->slave_links, slave_node) {
+   if (genpd_has_set_performance_state(link->master))
+   return true;
+   }
+
+   return false;
+}
+
+/**
+ * pm_genpd_has_performance_state - Checks if power domain does performance
+ * state management.
+ *
+ * @dev: Device whose power domain is getting inquired.
+ *
+ * This must be called by the user drivers, before they start calling
+ * pm_genpd_update_performance_state(), to guarantee that all dependencies are
+ * met and the device's genpd supports performance states.
+ *
+ * It is assumed that the user driver guarantees that the genpd wouldn't be
+ * detached while this routine is getting called.
+ *
+ * Returns "true" if device's genpd supports performance states, "false"
+ * otherwise.
+ */
+bool pm_genpd_has_performance_state(struct device *dev)
+{
+   struct generic_pm_domain *genpd = genpd_lookup_dev(dev);
+
+   /* The parent domain must have set get_performance_state() */
+   if (!IS_ERR(genpd) && genpd->get_performance_state) {
+   if (genpd_has_set_performance_state(genpd))
+   return true;
+
+   /*
+* A genpd with ->get_performance_state() callback must also
+* allow setting performance state.
+*/
+   dev_err(dev, "genpd doesn't support setting performance state\n");
+   }
+
+   return false;
+}
+EXPORT_SYMBOL_GPL(pm_genpd_has_performance_state);
+
+/*
+ * Re-evaluate performance state of a power domain. Finds the highest requested
+ * performance state by the devices and subdomains within the power domain and
+ * then tries to change its performance state. If the power domain doesn't have
+ * a set_performance_state() callback, then we move the request to its parent
+ * power domain.
+ *
+ * Locking: Access (or update) to device's "pd_data->performance_state" field
+ * happens only with parent domain's lock held. Subdomains have their
+ * "genpd->performance_state" protected with their own lock (and they are the
+ * only user of this field) and their per-parent-domain
+ * "link->performance_state" field is protected with individual parent power
+ * domain's lock and is only accessed/updated