On Mon 15-02-16 13:21:25, Tejun Heo wrote:
> Hello, Michal.
> 
> On Mon, Feb 15, 2016 at 06:33:46PM +0100, Michal Hocko wrote:
> > On Wed 10-02-16 10:55:03, Tejun Heo wrote:
> > [...]
> > > --- a/kernel/workqueue.c
> > > +++ b/kernel/workqueue.c
> > > @@ -570,6 +570,16 @@ static struct pool_workqueue *unbound_pwq_by_node(struct workqueue_struct *wq,
> > >                                             int node)
> > >  {
> > >   assert_rcu_or_wq_mutex_or_pool_mutex(wq);
> > > +
> > > + /*
> > > +  * XXX: @node can be NUMA_NO_NODE if CPU goes offline while a
> > > +  * delayed item is pending.  The plan is to keep CPU -> NODE
> > > +  * mapping valid and stable across CPU on/offlines.  Once that
> > > +  * happens, this workaround can be removed.
> > > +  */
> > > + if (unlikely(node == NUMA_NO_NODE))
> > > +         return wq->dfl_pwq;
> > 
> > I am not sure this is completely true of the code as it currently
> > stands. Don't we also need to use cpu_to_mem to handle memoryless CPUs?
> 
> I'm not sure.  I think we still want to distinguish workers for a
> memoryless node from its neighboring node with memory.  We don't want
> work items for the latter to be randomly distributed to the former
> after all.

I am not sure I understand. Does that mean that a node with no memory
would have its own WQ-specific pool? I might be missing something, but I
thought that cpu_to_node will return NUMA_NO_NODE if the CPU is
memoryless. Or do you expect that cpu_to_node will always return a valid
node id even when the node doesn't contain any memory?

-- 
Michal Hocko
SUSE Labs
