Re: [PATCH 0/2] workqueue: fix a bug when numa mapping is changed

2015-04-01 Thread Kamezawa Hiroyuki
On 2015/04/02 10:36, Gu Zheng wrote: Hi Kame, TJ, On 04/01/2015 04:30 PM, Kamezawa Hiroyuki wrote: On 2015/04/01 12:02, Tejun Heo wrote: On Wed, Apr 01, 2015 at 11:55:11AM +0900, Kamezawa Hiroyuki wrote: Now, hot-added cpus will have the lowest free cpu id. Because of this, in most of

Re: [PATCH 0/2] workqueue: fix a bug when numa mapping is changed

2015-04-01 Thread Gu Zheng
Hi Kame, TJ, On 04/01/2015 04:30 PM, Kamezawa Hiroyuki wrote: > On 2015/04/01 12:02, Tejun Heo wrote: >> On Wed, Apr 01, 2015 at 11:55:11AM +0900, Kamezawa Hiroyuki wrote: >>> Now, hot-added cpus will have the lowest free cpu id. >>> >>> Because of this, in most of systems which has only

Re: [PATCH 0/2] workqueue: fix a bug when numa mapping is changed

2015-04-01 Thread Kamezawa Hiroyuki
On 2015/04/01 12:02, Tejun Heo wrote: On Wed, Apr 01, 2015 at 11:55:11AM +0900, Kamezawa Hiroyuki wrote: Now, hot-added cpus will have the lowest free cpu id. Because of this, in most systems that have only cpu-hot-add, cpu-ids are always contiguous even after cpu hot add. In enterprise,

Re: [PATCH 0/2] workqueue: fix a bug when numa mapping is changed

2015-03-31 Thread Tejun Heo
On Tue, Mar 31, 2015 at 11:02:42PM -0400, Tejun Heo wrote: > Ugh... so, cpu number allocation on hot-add is part of userland > interface that we're locked into? Tying hotplug and id allocation > order together usually isn't a good idea. What if the cpu up fails > while running the notifiers?

Re: [PATCH 0/2] workqueue: fix a bug when numa mapping is changed

2015-03-31 Thread Tejun Heo
On Wed, Apr 01, 2015 at 11:55:11AM +0900, Kamezawa Hiroyuki wrote: > Now, hot-added cpus will have the lowest free cpu id. > > Because of this, in most systems that have only cpu-hot-add, cpu-ids are always > contiguous even after cpu hot add. > In enterprise, this would be considered as
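The id-allocation behavior Kamezawa describes above can be sketched as a small simulation (illustrative Python, not kernel code; names are hypothetical): a hot-added cpu takes the lowest free id, so an add-only system keeps a contiguous id space, while an offline followed by a hot-add reuses the freed id for a possibly different physical cpu.

```python
def lowest_free_id(present):
    """Return the smallest id not currently in use (lowest-free allocation)."""
    i = 0
    while i in present:
        i += 1
    return i

present = set()
# Boot with 4 cpus, then hot-add two more: the id space stays contiguous.
for _ in range(6):
    present.add(lowest_free_id(present))
assert sorted(present) == [0, 1, 2, 3, 4, 5]

# Hot-remove cpu 2, then hot-add a different cpu: id 2 is reused,
# so the same cpu id can now map to a different physical cpu/node.
present.discard(2)
assert lowest_free_id(present) == 2
```

This is why tying id allocation order to hotplug order looks stable on add-only systems but breaks down once removal enters the picture.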

Re: [PATCH 0/2] workqueue: fix a bug when numa mapping is changed

2015-03-31 Thread Kamezawa Hiroyuki
On 2015/04/01 0:28, Tejun Heo wrote: Hello, Kamezawa. On Tue, Mar 31, 2015 at 03:09:05PM +0900, Kamezawa Hiroyuki wrote: But this may be considered as API change for most hot-add users. Hmm... Why would it be? What can that possibly break? Now, hot-added cpus will have the lowest free

Re: [PATCH 0/2] workqueue: fix a bug when numa mapping is changed

2015-03-31 Thread Tejun Heo
Hello, Kamezawa. On Tue, Mar 31, 2015 at 03:09:05PM +0900, Kamezawa Hiroyuki wrote: > But this may be considered as API change for most hot-add users. Hmm... Why would it be? What can that possibly break? > So, for now, I vote for determining ids at online but record it is a good way. If we

Re: [PATCH 0/2] workqueue: fix a bug when numa mapping is changed

2015-03-31 Thread Kamezawa Hiroyuki
On 2015/03/30 18:49, Gu Zheng wrote: Hi Kame-san, On 03/27/2015 12:42 AM, Kamezawa Hiroyuki wrote: On 2015/03/27 0:18, Tejun Heo wrote: Hello, On Thu, Mar 26, 2015 at 01:04:00PM +0800, Gu Zheng wrote: wq generates the numa affinity (pool->node) for all the possible cpu's per cpu workqueue

Re: [PATCH 0/2] workqueue: fix a bug when numa mapping is changed

2015-03-30 Thread Gu Zheng
Hi Kame-san, On 03/27/2015 12:42 AM, Kamezawa Hiroyuki wrote: > On 2015/03/27 0:18, Tejun Heo wrote: >> Hello, >> >> On Thu, Mar 26, 2015 at 01:04:00PM +0800, Gu Zheng wrote: >>> wq generates the numa affinity (pool->node) for all the possible cpu's >>> per cpu workqueue at init stage, that

Re: [PATCH 0/2] workqueue: fix a bug when numa mapping is changed

2015-03-26 Thread Kamezawa Hiroyuki
On 2015/03/27 0:18, Tejun Heo wrote: Hello, On Thu, Mar 26, 2015 at 01:04:00PM +0800, Gu Zheng wrote: wq generates the numa affinity (pool->node) for all the possible cpu's per cpu workqueue at init stage, that means the affinity of currently un-present ones may be incorrect, so we need to

Re: [PATCH 0/2] workqueue: fix a bug when numa mapping is changed

2015-03-26 Thread Tejun Heo
Hello, On Thu, Mar 26, 2015 at 01:04:00PM +0800, Gu Zheng wrote: > wq generates the numa affinity (pool->node) for all the possible cpu's > per cpu workqueue at init stage, that means the affinity of currently un-present > ones may be incorrect, so we need to update the pool->node for the new
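The init-time-affinity problem quoted here can be illustrated with a minimal sketch (hypothetical names, not the actual workqueue code): the node for every *possible* cpu is derived once at init, so cpus that are not yet present get a placeholder that has to be refreshed when the cpu actually comes up.

```python
NUMA_NO_NODE = -1  # stand-in for the kernel's "no node" sentinel

# cpu -> node mapping known at boot; possible-but-absent cpus are unknown.
cpu_to_node_boot = {0: 0, 1: 0}          # cpus 2 and 3 are possible but absent
possible_cpus = [0, 1, 2, 3]

# Init-stage affinity for every possible cpu's pool, as described above.
pool_node = {cpu: cpu_to_node_boot.get(cpu, NUMA_NO_NODE)
             for cpu in possible_cpus}
assert pool_node[2] == NUMA_NO_NODE      # unknown until the cpu is hot-added

def cpu_up(cpu, node):
    """On hot-add, refresh the pool's node with the now-known mapping."""
    pool_node[cpu] = node

cpu_up(2, node=1)                        # cpu 2 turns out to live on node 1
assert pool_node[2] == 1
```

The patch series under discussion is about performing exactly this kind of refresh at the right hotplug event.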

Re: [PATCH 0/2] workqueue: fix a bug when numa mapping is changed

2015-03-26 Thread Gu Zheng
Hi Kame-san, On 03/26/2015 11:12 AM, Kamezawa Hiroyuki wrote: > On 2015/03/26 11:17, Gu Zheng wrote: >> Yasuaki Ishimatsu found that with node online/offline, cpu<->node >> relationship is established. Because workqueue uses info which was >> established at boot time, but it may be changed by

Re: [PATCH 0/2] workqueue: fix a bug when numa mapping is changed

2015-03-25 Thread Kamezawa Hiroyuki
On 2015/03/26 11:17, Gu Zheng wrote: > Yasuaki Ishimatsu found that with node online/offline, cpu<->node > relationship is established. Because workqueue uses info which was > established at boot time, but it may be changed by node hotplugging. > > Once pool->node points to a stale node,

[PATCH 0/2] workqueue: fix a bug when numa mapping is changed

2015-03-25 Thread Gu Zheng
Yasuaki Ishimatsu found that with node online/offline, cpu<->node relationship is established. Because workqueue uses info which was established at boot time, but it may be changed by node hotplugging. Once pool->node points to a stale node, the following allocation failure happens. == SLUB:
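The failure mode in this cover letter can be sketched as a simulation (illustrative Python, not the kernel code; the allocator here is a hypothetical stand-in for a node-affine allocation such as SLUB's): a node id cached at boot goes stale when the node is hot-removed, so node-affine allocation against it fails, while falling back to "any node" would succeed.

```python
NUMA_NO_NODE = -1                   # "any node" sentinel, as in the kernel
online_nodes = {0, 1}
pool_node = 1                       # node cached at boot, as described above

def alloc_on_node(node):
    """Stand-in for a node-affine allocator: fails for offline nodes."""
    if node != NUMA_NO_NODE and node not in online_nodes:
        return None                 # models the SLUB failure in the report
    return object()

online_nodes.discard(1)             # node 1 is hot-removed
assert alloc_on_node(pool_node) is None          # stale pool->node -> failure

# Mitigation sketch: validate the cached node before using it.
node = pool_node if pool_node in online_nodes else NUMA_NO_NODE
assert alloc_on_node(node) is not None           # fallback succeeds
```

The thread then debates where the real fix belongs: refreshing the cached mapping at hotplug time rather than validating it at every allocation.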

[PATCH 0/2] workqueue: fix a bug when numa mapping is changed v4

2014-12-16 Thread Kamezawa Hiroyuki
This is v4. Thank you for hints/comments to previous versions. I think this version only contains necessary things and is not invasive. Tested several patterns of node hotplug and it seems to work well. Changes since v3 - removed changes against get_unbound_pool() - removed codes in cpu offline
