Re: [PATCH 5/7] workqueue: use default pwq when failing to allocate node pwq
On Thu, Apr 04, 2013 at 10:05:36AM +0800, Lai Jiangshan wrote:
> When we fail to allocate the node pwq, we can use the default pwq
> for the node.
>
> Thus we can avoid failure after the default pwq has been allocated,
> and remove some code for the failure path.

I don't know about this one.  The reason why we fall back to the default
pwq during CPU UP/DOWN is that we don't want to interfere with CPU
hotplug, which doesn't really have much to do with specific workqueues
and shouldn't fail even when things go pretty hairy - e.g. if the user
turned off the screen of his/her phone, or a laptop is thrown into a
backpack with the lid closed, CPU_DOWNs during suspend had better not
fail because of a memory allocation.

apply_workqueue_attrs() is different.  We *want* to notify the issuer
that something went wrong and that the requested action couldn't be
fulfilled in full.  We don't want to hide a failure - it would show up
as a silent performance degradation that nobody knows the reason for.

So, nope, doesn't look like a good idea to me.

Thanks.

-- 
tejun
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
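[Editorial note: the contrast Tejun draws - propagate the allocation failure to the caller versus silently substitute a shared default - can be sketched in plain userspace C. This is an illustrative sketch only, not kernel code; the types and function names (`node_res`, `alloc_node_res`, `apply_attrs_strict`, `apply_attrs_fallback`) are hypothetical stand-ins for the pwq machinery.]

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical per-node resource; alloc fails when 'fail' is nonzero,
 * simulating an -ENOMEM from alloc_unbound_pwq(). */
struct node_res { int node; };

static struct node_res *alloc_node_res(int node, int fail)
{
    struct node_res *r;

    if (fail)
        return NULL;
    r = malloc(sizeof(*r));
    if (r)
        r->node = node;
    return r;
}

/* The style the reply argues for: unwind and propagate the failure,
 * so the issuer knows the requested attributes were not applied. */
static int apply_attrs_strict(struct node_res **tbl, int nodes, int fail_at)
{
    int i, j;

    for (i = 0; i < nodes; i++) {
        tbl[i] = alloc_node_res(i, i == fail_at);
        if (!tbl[i]) {
            for (j = 0; j < i; j++)   /* undo partial work */
                free(tbl[j]);
            return -ENOMEM;           /* caller sees the failure */
        }
    }
    return 0;
}

/* The style the patch proposes: substitute a shared default.  The call
 * "succeeds", but the failed node quietly runs with degraded placement. */
static int apply_attrs_fallback(struct node_res **tbl, int nodes,
                                int fail_at, struct node_res *dfl)
{
    int i;

    for (i = 0; i < nodes; i++) {
        tbl[i] = alloc_node_res(i, i == fail_at);
        if (!tbl[i])
            tbl[i] = dfl;             /* failure hidden from the caller */
    }
    return 0;
}
```

The fallback variant never returns an error, which is exactly the silent performance degradation the reply objects to: nothing in the return value tells the issuer that one node is sharing the default resource.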
[PATCH 5/7] workqueue: use default pwq when failing to allocate node pwq
When we fail to allocate the node pwq, we can use the default pwq
for the node.

Thus we can avoid failure after the default pwq has been allocated,
and remove some code for the failure path.

Signed-off-by: Lai Jiangshan
---
 kernel/workqueue.c |   28 +++++++---------------------
 1 files changed, 7 insertions(+), 21 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index a383eaf..737646d 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3751,17 +3751,6 @@ static struct pool_workqueue *alloc_unbound_pwq(struct workqueue_struct *wq,
 	return pwq;
 }
 
-/* undo alloc_unbound_pwq(), used only in the error path */
-static void free_unbound_pwq(struct pool_workqueue *pwq)
-{
-	lockdep_assert_held(&wq_pool_mutex);
-
-	if (pwq) {
-		put_unbound_pool(pwq->pool);
-		kfree(pwq);
-	}
-}
-
 /**
  * wq_calc_node_cpumask - calculate a wq_attrs' cpumask for the specified node
  * @attrs: the wq_attrs of interest
@@ -3891,12 +3880,12 @@ int apply_workqueue_attrs(struct workqueue_struct *wq,
 	for_each_node(node) {
 		if (wq_calc_node_cpumask(attrs, node, -1, tmp_attrs->cpumask)) {
 			pwq_tbl[node] = alloc_unbound_pwq(wq, tmp_attrs);
-			if (!pwq_tbl[node])
-				goto enomem_pwq;
-		} else {
-			dfl_pwq->refcnt++;
-			pwq_tbl[node] = dfl_pwq;
+			if (pwq_tbl[node])
+				continue;
+			/* fallback to dfl_pwq if the allocation failed */
 		}
+		dfl_pwq->refcnt++;
+		pwq_tbl[node] = dfl_pwq;
 	}
 
 	mutex_unlock(&wq_pool_mutex);
@@ -3931,10 +3920,6 @@ out_free:
 	return ret;
 
 enomem_pwq:
-	free_unbound_pwq(dfl_pwq);
-	for_each_node(node)
-		if (pwq_tbl && pwq_tbl[node] != dfl_pwq)
-			free_unbound_pwq(pwq_tbl[node]);
 	mutex_unlock(&wq_pool_mutex);
 	put_online_cpus();
 enomem:
@@ -4017,7 +4002,8 @@ static void wq_update_unbound_numa(struct workqueue_struct *wq, int cpu,
 	if (!pwq) {
 		pr_warning("workqueue: allocation failed while updating NUMA affinity of \"%s\"\n",
 			   wq->name);
-		goto out_unlock;
+		mutex_lock(&wq->mutex);
+		goto use_dfl_pwq;
 	}
 
 	/*
-- 
1.7.7.6
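[Editorial note: the core pattern of the patch's apply_workqueue_attrs() hunk - on per-node allocation failure, point the table slot at the shared default and take a reference so every entry can later be released uniformly - can be sketched in userspace C. This is an illustrative sketch; `struct pwq`, `fill_pwq_table`, and the bitmask-driven failure injection are hypothetical stand-ins, and the real code uses kernel allocators and locking omitted here.]

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical refcounted stand-in for a pool_workqueue. */
struct pwq { int refcnt; };

#define NR_NODES 4

/* Mirrors the patch's loop: try a per-node allocation; on failure, point
 * the slot at the shared default and bump its refcount, so teardown can
 * treat every table entry identically (drop one reference per slot). */
static void fill_pwq_table(struct pwq **tbl, struct pwq *dfl, int fail_mask)
{
    int node;

    for (node = 0; node < NR_NODES; node++) {
        struct pwq *pwq = NULL;

        if (!(fail_mask & (1 << node))) {   /* simulated alloc_unbound_pwq() */
            pwq = malloc(sizeof(*pwq));
            if (pwq)
                pwq->refcnt = 1;
        }
        if (pwq) {
            tbl[node] = pwq;
            continue;
        }
        /* fallback to dfl if the allocation failed */
        dfl->refcnt++;
        tbl[node] = dfl;
    }
}
```

Because each fallback slot holds its own reference on the default, the function can no longer fail partway through, which is what lets the patch delete free_unbound_pwq() and the enomem_pwq unwind path - and is also precisely the silent-success behavior the maintainer objects to in the reply above.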