On Wed, 12 Jul 2017 16:11:28 +0200 Alexander Potapenko <gli...@google.com> 
wrote:

> >> At creation time the kmem_cache structure is private and no one can run a
> >> free operation.
> I've double-checked the code path and this turned out to be a false
> positive caused by KMSAN not instrumenting the contents of mm/slub.c
> (i.e. the initialization of the spinlock remained unnoticed).
> Christoph is indeed right that the kmem_cache structure is private, so
> a race is not possible here.
> I am sorry for the false alarm.
> >> > Inviting a use-after-free?  I guess not, as there should be no way
> >> > to look up these items at this stage.
> >>
> >> Right.
> >
> > Still.  It looks bad, and other sites do these things in the other order.
> If the maintainers agree the initialization order needs to be fixed,
> we'll need to remove the (irrelevant) KMSAN report from the patch
> description.

Yup.  I did this:

From: Alexander Potapenko <gli...@google.com>
Subject: slub: tidy up initialization ordering

- free_kmem_cache_nodes() frees the cache node before nulling out a
  reference to it

- init_kmem_cache_nodes() publishes the cache node before initializing it

Neither of these matters at runtime because the cache nodes cannot be
looked up by any other thread.  But it's neater and more consistent to
reorder these.

Link: http://lkml.kernel.org/r/20170707083408.40410-1-gli...@google.com
Signed-off-by: Alexander Potapenko <gli...@google.com>
Cc: Christoph Lameter <c...@linux.com>
Cc: Pekka Enberg <penb...@kernel.org>
Cc: David Rientjes <rient...@google.com>
Cc: Joonsoo Kim <iamjoonsoo....@lge.com>
Signed-off-by: Andrew Morton <a...@linux-foundation.org>
---

 mm/slub.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff -puN mm/slub.c~slub-make-sure-struct-kmem_cache_node-is-initialized-before-publication mm/slub.c
--- a/mm/slub.c~slub-make-sure-struct-kmem_cache_node-is-initialized-before-publication
+++ a/mm/slub.c
@@ -3358,8 +3358,8 @@ static void free_kmem_cache_nodes(struct
        struct kmem_cache_node *n;
 
        for_each_kmem_cache_node(s, node, n) {
-               kmem_cache_free(kmem_cache_node, n);
                s->node[node] = NULL;
+               kmem_cache_free(kmem_cache_node, n);
        }
 }
 
@@ -3389,8 +3389,8 @@ static int init_kmem_cache_nodes(struct
                        return 0;
                }
 
-               s->node[node] = n;
                init_kmem_cache_node(n);
+               s->node[node] = n;
        }
        return 1;
 }
_
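For anyone who wants the pattern in isolation: the rule the patch enforces
is "initialize fully before publishing a pointer; unpublish before
freeing".  Below is a minimal userspace C sketch of the same discipline.
The names (table, publish_node, retire_node) are made up for illustration
and are not kernel APIs; as the thread notes, the ordering only matters
once concurrent readers can actually look the pointer up.

#include <stdlib.h>

struct node {
	int state;
};

/* Stand-in for s->node[]: the table other code would look nodes up in. */
static struct node *table[16];

/* Initialize the node fully, then make it visible via the table. */
static int publish_node(int slot)
{
	struct node *n = malloc(sizeof(*n));

	if (!n)
		return -1;
	n->state = 1;		/* initialize first... */
	table[slot] = n;	/* ...then publish */
	return 0;
}

/* Hide the node from lookups, then free it. */
static void retire_node(int slot)
{
	struct node *n = table[slot];

	table[slot] = NULL;	/* unpublish first... */
	free(n);		/* ...then free */
}

int main(void)
{
	if (publish_node(0))
		return 1;
	retire_node(0);
	return 0;
}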
