On Wed, Mar 20, 2013 at 8:26 AM, Lai Jiangshan wrote:
>> 	for_each_node(node)
>> 		if (pwq_tbl[node] != dfl_pwq)
>> 			kfree(pwq_tbl[node]);
>> 	kfree(dfl_pwq);
>
> I also missed this:
> we still need put_unbound_pool() before free(pwq).
Yeap, we do.
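
Putting the two points together, the error-path cleanup would roughly look
like the sketch below (a sketch only, not the final patch; it assumes that
dropping the pool reference via put_unbound_pool(pwq->pool) before the
kfree() is the right order):

	/* free per-node pwqs, skipping slots that alias the default pwq */
	if (pwq_tbl) {
		for_each_node(node) {
			if (!pwq_tbl[node] || pwq_tbl[node] == dfl_pwq)
				continue;
			put_unbound_pool(pwq_tbl[node]->pool);
			kfree(pwq_tbl[node]);
		}
	}
	/* then release the default pwq exactly once */
	if (dfl_pwq) {
		put_unbound_pool(dfl_pwq->pool);
		kfree(dfl_pwq);
	}
	kfree(pwq_tbl);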
On Wed, Mar 20, 2013 at 11:03:53PM +0800, Lai Jiangshan wrote:
> > +enomem:
> > +	free_workqueue_attrs(tmp_attrs);
> > +	if (pwq_tbl) {
> > +		for_each_node(node)
> > +			kfree(pwq_tbl[node]);
>
> It will free dfl_pwq multiple times.
Oops, you're right.
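
For reference, the double free happens because some pwq_tbl[] slots can hold
the same pointer as dfl_pwq (which is why the earlier loop checks
pwq_tbl[node] != dfl_pwq), so a blanket kfree() over the table hits the
default pwq once per aliasing slot. Illustrative only, node_a/node_b are
made-up placeholders:

	/* slots with no node-specific pwq alias the default one */
	pwq_tbl[node_a] = dfl_pwq;
	pwq_tbl[node_b] = dfl_pwq;

	/* the quoted enomem loop then frees the same object twice */
	kfree(pwq_tbl[node_a]);	/* frees dfl_pwq */
	kfree(pwq_tbl[node_b]);	/* double free of dfl_pwq */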
On Wed, Mar 20, 2013 at 8:00 AM, Tejun Heo wrote:
> Currently, an unbound workqueue has a single current, or first, pwq
> (pool_workqueue) to which all new work items are queued. This often
> isn't optimal on NUMA machines as workers may jump around across node
> boundaries and work items get assigned to workers without any regard
> to NUMA affinity.