Re: [PATCH v5 3/5] cgroup/cpuset: make callback_lock raw

2018-11-09 Thread Juri Lelli
On 08/11/18 14:11, Waiman Long wrote:
> On 11/07/2018 11:38 AM, Juri Lelli wrote:
> > Hi,
> >
> > On 07/11/18 07:53, Tejun Heo wrote:
> >> Hello,
> >>
> >> On Tue, Sep 25, 2018 at 04:34:16PM +0200, Juri Lelli wrote:
> >>> It would be great if you could please have a look at the proposed change
> >>> below (and the rest of the set of course :-).
> >> Yeah, looks good to me.  Please feel free to add
> >>
> >>  Acked-by: Tejun Heo 
> > Thanks!
> >
> >>> Another bit I'd feel more comfortable about after hearing your word on it
> >>> is this one (discussed in 5/5):
> >>>
> >>> https://lore.kernel.org/lkml/20180925130750.GA25664@localhost.localdomain/
> >> Can you please loop Waiman Long  into discussion?
> >> He's working on cgroup2 cpuset support which might collide.
> > Sure, I originally worked on this on top of his series, but haven't
> > tried with the latest version. Hopefully the two series won't generate
> > too many conflicts. I'll try to find some time soon to check again.
> >
> > In the meantime, Waiman Long, how do you feel about it? :-)
> >
> > Thread starts at (if you missed it)
> >
> > https://lore.kernel.org/lkml/20180903142801.20046-1-juri.le...@redhat.com/
> >
> > Best,
> >
> > - Juri
> 
> Your patches look good to me. There will be some minor conflicts, I
> think, but nothing big.

Thanks a lot for reviewing them.

Going to test and respin soon.

Best,

- Juri


Re: [PATCH v5 3/5] cgroup/cpuset: make callback_lock raw

2018-11-08 Thread Waiman Long
On 11/07/2018 11:38 AM, Juri Lelli wrote:
> Hi,
>
> On 07/11/18 07:53, Tejun Heo wrote:
>> Hello,
>>
>> On Tue, Sep 25, 2018 at 04:34:16PM +0200, Juri Lelli wrote:
>>> It would be great if you could please have a look at the proposed change
>>> below (and the rest of the set of course :-).
>> Yeah, looks good to me.  Please feel free to add
>>
>>  Acked-by: Tejun Heo 
> Thanks!
>
> >>> Another bit I'd feel more comfortable about after hearing your word on it
> >>> is this one (discussed in 5/5):
>>>
>>> https://lore.kernel.org/lkml/20180925130750.GA25664@localhost.localdomain/
>> Can you please loop Waiman Long  into discussion?
>> He's working on cgroup2 cpuset support which might collide.
> Sure, I originally worked on this on top of his series, but haven't
> tried with the latest version. Hopefully the two series won't generate
> too many conflicts. I'll try to find some time soon to check again.
>
> In the meantime, Waiman Long, how do you feel about it? :-)
>
> Thread starts at (if you missed it)
>
> https://lore.kernel.org/lkml/20180903142801.20046-1-juri.le...@redhat.com/
>
> Best,
>
> - Juri

Your patches look good to me. There will be some minor conflicts, I
think, but nothing big.

Cheers,
Longman



Re: [PATCH v5 3/5] cgroup/cpuset: make callback_lock raw

2018-11-08 Thread Juri Lelli
On 07/11/18 17:38, Juri Lelli wrote:
> Hi,
> 
> On 07/11/18 07:53, Tejun Heo wrote:
> > Hello,
> > 
> > On Tue, Sep 25, 2018 at 04:34:16PM +0200, Juri Lelli wrote:
> > > It would be great if you could please have a look at the proposed change
> > > below (and the rest of the set of course :-).
> > 
> > Yeah, looks good to me.  Please feel free to add
> > 
> >  Acked-by: Tejun Heo 
> 
> Thanks!
> 
> > > Another bit I'd feel more comfortable about after hearing your word on it
> > > is this one (discussed in 5/5):
> > > 
> > > https://lore.kernel.org/lkml/20180925130750.GA25664@localhost.localdomain/
> > 
> > Can you please loop Waiman Long  into discussion?
> > He's working on cgroup2 cpuset support which might collide.
> 
> Sure, I originally worked on this on top of his series, but haven't
> tried with the latest version. Hopefully the two series won't generate
> too many conflicts. I'll try to find some time soon to check again.

So, conflicts weren't too bad (on top of v14).

I guess I'll wait for Waiman Long's patches to land for rebasing again
and testing.

Best,

- Juri


Re: [PATCH v5 3/5] cgroup/cpuset: make callback_lock raw

2018-11-07 Thread Juri Lelli
Hi,

On 07/11/18 07:53, Tejun Heo wrote:
> Hello,
> 
> On Tue, Sep 25, 2018 at 04:34:16PM +0200, Juri Lelli wrote:
> > It would be great if you could please have a look at the proposed change
> > below (and the rest of the set of course :-).
> 
> Yeah, looks good to me.  Please feel free to add
> 
>  Acked-by: Tejun Heo 

Thanks!

> > Another bit I'd feel more comfortable about after hearing your word on it
> > is this one (discussed in 5/5):
> > 
> > https://lore.kernel.org/lkml/20180925130750.GA25664@localhost.localdomain/
> 
> Can you please loop Waiman Long  into discussion?
> He's working on cgroup2 cpuset support which might collide.

Sure, I originally worked on this on top of his series, but haven't
tried with the latest version. Hopefully the two series won't generate
too many conflicts. I'll try to find some time soon to check again.

In the meantime, Waiman Long, how do you feel about it? :-)

Thread starts at (if you missed it)

https://lore.kernel.org/lkml/20180903142801.20046-1-juri.le...@redhat.com/

Best,

- Juri


Re: [PATCH v5 3/5] cgroup/cpuset: make callback_lock raw

2018-11-07 Thread Tejun Heo
Hello,

On Tue, Sep 25, 2018 at 04:34:16PM +0200, Juri Lelli wrote:
> It would be great if you could please have a look at the proposed change
> below (and the rest of the set of course :-).

Yeah, looks good to me.  Please feel free to add

 Acked-by: Tejun Heo 

> Another bit I'd feel more comfortable about after hearing your word on it
> is this one (discussed in 5/5):
> 
> https://lore.kernel.org/lkml/20180925130750.GA25664@localhost.localdomain/

Can you please loop Waiman Long  into discussion?
He's working on cgroup2 cpuset support which might collide.

Thanks.

-- 
tejun


Re: [PATCH v5 3/5] cgroup/cpuset: make callback_lock raw

2018-11-07 Thread Juri Lelli
Hi,

Ping.

Thanks,

- Juri

On 25/09/18 16:34, Juri Lelli wrote:
> Hi Li Zefan and Tejun Heo,
> 
> It would be great if you could please have a look at the proposed change
> below (and the rest of the set of course :-).
> 
> Another bit I'd feel more comfortable about after hearing your word on it
> is this one (discussed in 5/5):
> 
> https://lore.kernel.org/lkml/20180925130750.GA25664@localhost.localdomain/
> 
> Best,
> 
> - Juri
> 
> On 03/09/18 16:27, Juri Lelli wrote:
> > callback_lock grants the holder read-only access to cpusets.  To fix a
> > synchronization issue between cpusets and the scheduler core, callback_lock
> > now needs to be available to core scheduler code.
> > 
> > Convert callback_lock to a raw_spinlock, so that it is always safe to
> > acquire from atomic context.
> > 
> > Signed-off-by: Juri Lelli 
> > ---
> >  kernel/cgroup/cpuset.c | 66 +-
> >  1 file changed, 33 insertions(+), 33 deletions(-)
> > 
> > diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> > index 266f10cb7222..5b43f482fa0f 100644
> > --- a/kernel/cgroup/cpuset.c
> > +++ b/kernel/cgroup/cpuset.c
> > @@ -288,7 +288,7 @@ static struct cpuset top_cpuset = {
> >   */
> >  
> >  static DEFINE_MUTEX(cpuset_mutex);
> > -static DEFINE_SPINLOCK(callback_lock);
> > +static DEFINE_RAW_SPINLOCK(callback_lock);
> >  
> >  static struct workqueue_struct *cpuset_migrate_mm_wq;
> >  
> > @@ -922,9 +922,9 @@ static void update_cpumasks_hier(struct cpuset *cs, struct cpumask *new_cpus)
> > continue;
> > rcu_read_unlock();
> >  
> > -   spin_lock_irq(&callback_lock);
> > +   raw_spin_lock_irq(&callback_lock);
> > cpumask_copy(cp->effective_cpus, new_cpus);
> > -   spin_unlock_irq(&callback_lock);
> > +   raw_spin_unlock_irq(&callback_lock);
> >  
> > WARN_ON(!is_in_v2_mode() &&
> > !cpumask_equal(cp->cpus_allowed, cp->effective_cpus));
> > @@ -989,9 +989,9 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
> > if (retval < 0)
> > return retval;
> >  
> > -   spin_lock_irq(&callback_lock);
> > +   raw_spin_lock_irq(&callback_lock);
> > cpumask_copy(cs->cpus_allowed, trialcs->cpus_allowed);
> > -   spin_unlock_irq(&callback_lock);
> > +   raw_spin_unlock_irq(&callback_lock);
> >  
> > /* use trialcs->cpus_allowed as a temp variable */
> > update_cpumasks_hier(cs, trialcs->cpus_allowed);
> > @@ -1175,9 +1175,9 @@ static void update_nodemasks_hier(struct cpuset *cs, nodemask_t *new_mems)
> > continue;
> > rcu_read_unlock();
> >  
> > -   spin_lock_irq(&callback_lock);
> > +   raw_spin_lock_irq(&callback_lock);
> > cp->effective_mems = *new_mems;
> > -   spin_unlock_irq(&callback_lock);
> > +   raw_spin_unlock_irq(&callback_lock);
> >  
> > WARN_ON(!is_in_v2_mode() &&
> > !nodes_equal(cp->mems_allowed, cp->effective_mems));
> > @@ -1245,9 +1245,9 @@ static int update_nodemask(struct cpuset *cs, struct cpuset *trialcs,
> > if (retval < 0)
> > goto done;
> >  
> > -   spin_lock_irq(&callback_lock);
> > +   raw_spin_lock_irq(&callback_lock);
> > cs->mems_allowed = trialcs->mems_allowed;
> > -   spin_unlock_irq(&callback_lock);
> > +   raw_spin_unlock_irq(&callback_lock);
> >  
> > /* use trialcs->mems_allowed as a temp variable */
> > update_nodemasks_hier(cs, &trialcs->mems_allowed);
> > @@ -1338,9 +1338,9 @@ static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
> > spread_flag_changed = ((is_spread_slab(cs) != is_spread_slab(trialcs))
> > || (is_spread_page(cs) != is_spread_page(trialcs)));
> >  
> > -   spin_lock_irq(&callback_lock);
> > +   raw_spin_lock_irq(&callback_lock);
> > cs->flags = trialcs->flags;
> > -   spin_unlock_irq(&callback_lock);
> > +   raw_spin_unlock_irq(&callback_lock);
> >  
> > if (!cpumask_empty(trialcs->cpus_allowed) && balance_flag_changed)
> > rebuild_sched_domains_locked();
> > @@ -1755,7 +1755,7 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
> > cpuset_filetype_t type = seq_cft(sf)->private;
> > int ret = 0;
> >  
> > -   spin_lock_irq(&callback_lock);
> > +   raw_spin_lock_irq(&callback_lock);
> >  
> > switch (type) {
> > case FILE_CPULIST:
> > @@ -1774,7 +1774,7 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
> > ret = -EINVAL;
> > }
> >  
> > -   spin_unlock_irq(&callback_lock);
> > +   raw_spin_unlock_irq(&callback_lock);
> > return ret;
> >  }
> >  
> > @@ -1989,12 +1989,12 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
> >  
> > cpuset_inc();
> >  
> > -   spin_lock_irq(&callback_lock);
> > +   raw_spin_lock_irq(&callback_lock);
> > if (is_in_v2_mode()) {

Re: [PATCH v5 3/5] cgroup/cpuset: make callback_lock raw

2018-09-25 Thread Juri Lelli
Hi Li Zefan and Tejun Heo,

It would be great if you could please have a look at the proposed change
below (and the rest of the set of course :-).

Another bit I'd feel more comfortable about after hearing your word on it
is this one (discussed in 5/5):

https://lore.kernel.org/lkml/20180925130750.GA25664@localhost.localdomain/

Best,

- Juri

On 03/09/18 16:27, Juri Lelli wrote:
> callback_lock grants the holder read-only access to cpusets.  To fix a
> synchronization issue between cpusets and the scheduler core, callback_lock
> now needs to be available to core scheduler code.
> 
> Convert callback_lock to a raw_spinlock, so that it is always safe to
> acquire from atomic context.
> 
> Signed-off-by: Juri Lelli 
> ---
>  kernel/cgroup/cpuset.c | 66 +-
>  1 file changed, 33 insertions(+), 33 deletions(-)
> 
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 266f10cb7222..5b43f482fa0f 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -288,7 +288,7 @@ static struct cpuset top_cpuset = {
>   */
>  
>  static DEFINE_MUTEX(cpuset_mutex);
> -static DEFINE_SPINLOCK(callback_lock);
> +static DEFINE_RAW_SPINLOCK(callback_lock);
>  
>  static struct workqueue_struct *cpuset_migrate_mm_wq;
>  
> @@ -922,9 +922,9 @@ static void update_cpumasks_hier(struct cpuset *cs, struct cpumask *new_cpus)
>   continue;
>   rcu_read_unlock();
>  
> - spin_lock_irq(&callback_lock);
> + raw_spin_lock_irq(&callback_lock);
>   cpumask_copy(cp->effective_cpus, new_cpus);
> - spin_unlock_irq(&callback_lock);
> + raw_spin_unlock_irq(&callback_lock);
>  
>   WARN_ON(!is_in_v2_mode() &&
>   !cpumask_equal(cp->cpus_allowed, cp->effective_cpus));
> @@ -989,9 +989,9 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
>   if (retval < 0)
>   return retval;
>  
> - spin_lock_irq(&callback_lock);
> + raw_spin_lock_irq(&callback_lock);
>   cpumask_copy(cs->cpus_allowed, trialcs->cpus_allowed);
> - spin_unlock_irq(&callback_lock);
> + raw_spin_unlock_irq(&callback_lock);
>  
>   /* use trialcs->cpus_allowed as a temp variable */
>   update_cpumasks_hier(cs, trialcs->cpus_allowed);
> @@ -1175,9 +1175,9 @@ static void update_nodemasks_hier(struct cpuset *cs, nodemask_t *new_mems)
>   continue;
>   rcu_read_unlock();
>  
> - spin_lock_irq(&callback_lock);
> + raw_spin_lock_irq(&callback_lock);
>   cp->effective_mems = *new_mems;
> - spin_unlock_irq(&callback_lock);
> + raw_spin_unlock_irq(&callback_lock);
>  
>   WARN_ON(!is_in_v2_mode() &&
>   !nodes_equal(cp->mems_allowed, cp->effective_mems));
> @@ -1245,9 +1245,9 @@ static int update_nodemask(struct cpuset *cs, struct cpuset *trialcs,
>   if (retval < 0)
>   goto done;
>  
> - spin_lock_irq(&callback_lock);
> + raw_spin_lock_irq(&callback_lock);
>   cs->mems_allowed = trialcs->mems_allowed;
> - spin_unlock_irq(&callback_lock);
> + raw_spin_unlock_irq(&callback_lock);
>  
>   /* use trialcs->mems_allowed as a temp variable */
>   update_nodemasks_hier(cs, &trialcs->mems_allowed);
> @@ -1338,9 +1338,9 @@ static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
>   spread_flag_changed = ((is_spread_slab(cs) != is_spread_slab(trialcs))
>   || (is_spread_page(cs) != is_spread_page(trialcs)));
>  
> - spin_lock_irq(&callback_lock);
> + raw_spin_lock_irq(&callback_lock);
>   cs->flags = trialcs->flags;
> - spin_unlock_irq(&callback_lock);
> + raw_spin_unlock_irq(&callback_lock);
>  
>   if (!cpumask_empty(trialcs->cpus_allowed) && balance_flag_changed)
>   rebuild_sched_domains_locked();
> @@ -1755,7 +1755,7 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
>   cpuset_filetype_t type = seq_cft(sf)->private;
>   int ret = 0;
>  
> - spin_lock_irq(&callback_lock);
> + raw_spin_lock_irq(&callback_lock);
>  
>   switch (type) {
>   case FILE_CPULIST:
> @@ -1774,7 +1774,7 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
>   ret = -EINVAL;
>   }
>  
> - spin_unlock_irq(&callback_lock);
> + raw_spin_unlock_irq(&callback_lock);
>   return ret;
>  }
>  
> @@ -1989,12 +1989,12 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
>  
>   cpuset_inc();
>  
> - spin_lock_irq(&callback_lock);
> + raw_spin_lock_irq(&callback_lock);
>   if (is_in_v2_mode()) {
>   cpumask_copy(cs->effective_cpus, parent->effective_cpus);
>   cs->effective_mems = parent->effective_mems;
>   }
> - spin_unlock_irq(&callback_lock);
> + raw_spin_unlock_irq(&callback_lock);