From: Andi Kleen <a...@linux.intel.com>

3.4.104-rc1 review patch.  If anyone has any objections, please let me know.
------------------

commit e7b691b085fda913830e5280ae6f724b2a63c824 upstream.

slab_node() could access current->mempolicy from interrupt context.
However, there's a race condition during exit where the mempolicy is
first freed and then the pointer zeroed.

Using this from interrupts seems bogus anyway. The interrupt will
interrupt a random process and therefore get a random mempolicy.
Many times, this will be idle's, which no one can change.

Just disable this here and always use the local node for slab
allocations from interrupts. I also cleaned up the callers of
slab_node(), which always passed the same argument.

I believe the original mempolicy code did that in fact, so it's
likely a regression.

v2: send version with correct logic
v3: simplify. fix typo.

Reported-by: Arun Sharma <asha...@fb.com>
Cc: penb...@kernel.org
Cc: c...@linux.com
Signed-off-by: Andi Kleen <a...@linux.intel.com>
[tdmac...@twitter.com: rework control flow based on feedback from
 c...@linux.com, fix logic, and clean up current task_struct reference]
Acked-by: David Rientjes <rient...@google.com>
Acked-by: Christoph Lameter <c...@linux.com>
Acked-by: KOSAKI Motohiro <kosaki.motoh...@jp.fujitsu.com>
Signed-off-by: David Mackey <tdmac...@twitter.com>
Signed-off-by: Pekka Enberg <penb...@kernel.org>
Signed-off-by: Zefan Li <lize...@huawei.com>
---
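For readers following the logic rather than the diff itself, here is a
minimal, stand-alone C sketch of the control flow slab_node() has after
this patch. Everything around it (the task_stub type, current_task,
irq_context, and the stand-in values for MPOL_F_LOCAL and
numa_node_id()) is a hypothetical user-space placeholder for the
corresponding kernel primitive, not kernel code; only the early
in_interrupt() bail-out followed by the current->mempolicy check
mirrors the actual change below.

/*
 * Sketch only: user-space stand-ins for kernel primitives so the
 * patched slab_node() control flow can be read and compiled on its own.
 */
#include <stdio.h>

#define MPOL_F_LOCAL (1 << 1)   /* stand-in value, not the kernel header's */

struct mempolicy {
        unsigned short flags;
};

struct task_stub {
        struct mempolicy *mempolicy;
};

/* Hypothetical replacements for current, in_interrupt(), numa_node_id(). */
static struct task_stub current_task = { .mempolicy = NULL };
#define current (&current_task)

static int irq_context;
static int in_interrupt(void) { return irq_context; }
static unsigned numa_node_id(void) { return 0; }

/*
 * Same shape as the patched mm/mempolicy.c: answer with the local node
 * when called in interrupt context, so current->mempolicy of a
 * half-exited task is never dereferenced.
 */
static unsigned slab_node(void)
{
        struct mempolicy *policy;

        if (in_interrupt())
                return numa_node_id();

        policy = current->mempolicy;
        if (!policy || policy->flags & MPOL_F_LOCAL)
                return numa_node_id();

        /* The real function goes on to switch on the policy mode. */
        return numa_node_id();
}

int main(void)
{
        struct mempolicy pol = { .flags = 0 };

        current->mempolicy = &pol;
        printf("process context -> node %u\n", slab_node());

        irq_context = 1;        /* pretend an interrupt fired mid-exit */
        printf("interrupt context -> node %u\n", slab_node());
        return 0;
}

The ordering is the point: in_interrupt() is tested before current is
touched at all, which closes the exit-time window the changelog
describes, where the mempolicy has already been freed but the pointer
has not yet been cleared.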
 include/linux/mempolicy.h |    2 +-
 mm/mempolicy.c            |    8 +++++++-
 mm/slab.c                 |    4 ++--
 mm/slub.c                 |    2 +-
 4 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index fe07e5a..cc49b23 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -205,7 +205,7 @@ extern struct zonelist *huge_zonelist(struct vm_area_struct *vma,
 extern bool init_nodemask_of_mempolicy(nodemask_t *mask);
 extern bool mempolicy_nodemask_intersects(struct task_struct *tsk,
                                const nodemask_t *mask);
-extern unsigned slab_node(struct mempolicy *policy);
+extern unsigned slab_node(void);

 extern enum zone_type policy_zone;

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 5cec36b..87a43cc 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1609,8 +1609,14 @@ static unsigned interleave_nodes(struct mempolicy *policy)
  * task can change it's policy.  The system default policy requires no
  * such protection.
  */
-unsigned slab_node(struct mempolicy *policy)
+unsigned slab_node(void)
 {
+       struct mempolicy *policy;
+
+       if (in_interrupt())
+               return numa_node_id();
+
+       policy = current->mempolicy;
        if (!policy || policy->flags & MPOL_F_LOCAL)
                return numa_node_id();

diff --git a/mm/slab.c b/mm/slab.c
index da2bb68..3eb1c38 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3336,7 +3336,7 @@ static void *alternate_node_alloc(struct kmem_cache *cachep, gfp_t flags)
        if (cpuset_do_slab_mem_spread() && (cachep->flags & SLAB_MEM_SPREAD))
                nid_alloc = cpuset_slab_spread_node();
        else if (current->mempolicy)
-               nid_alloc = slab_node(current->mempolicy);
+               nid_alloc = slab_node();
        if (nid_alloc != nid_here)
                return ____cache_alloc_node(cachep, flags, nid_alloc);
        return NULL;
@@ -3368,7 +3368,7 @@ static void *fallback_alloc(struct kmem_cache *cache, gfp_t flags)

 retry_cpuset:
        cpuset_mems_cookie = get_mems_allowed();
-       zonelist = node_zonelist(slab_node(current->mempolicy), flags);
+       zonelist = node_zonelist(slab_node(), flags);

 retry:
        /*
diff --git a/mm/slub.c b/mm/slub.c
index c6f225f..54ac6e9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1617,7 +1617,7 @@ static struct page *get_any_partial(struct kmem_cache *s, gfp_t flags,

        do {
                cpuset_mems_cookie = get_mems_allowed();
-               zonelist = node_zonelist(slab_node(current->mempolicy), flags);
+               zonelist = node_zonelist(slab_node(), flags);

                for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
                        struct kmem_cache_node *n;

-- 
1.7.9.5