________________________________________
From: David Rientjes <rient...@google.com>
Sent: July 31, 2020 7:45
To: Zhang, Qiang
Cc: c...@linux.com; penb...@kernel.org; iamjoonsoo....@lge.com; a...@linux-foundation.org; linux...@kvack.org; linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3] mm/slab.c: add node spinlock protect in __cache_free_alien
On Thu, 30 Jul 2020, qiang.zh...@windriver.com wrote:

> From: Zhang Qiang <qiang.zh...@windriver.com>
>
> for example:
>
> node0
>    cpu0                                            cpu1
>    slab_dead_cpu
>       >mutex_lock(&slab_mutex)
>       >cpuup_canceled                              slab_dead_cpu
>          >mask = cpumask_of_node(node)                >mutex_lock(&slab_mutex)
>          >n = get_node(cachep0, node0)
>          >spin_lock_irq(&n->list_lock)
>          >if (!cpumask_empty(mask)) == true
>             >spin_unlock_irq(&n->list_lock)
>             >goto free_slab
>       ....
>       >mutex_unlock(&slab_mutex)
>
>    ....                                            >cpuup_canceled
>                                                       >mask = cpumask_of_node(node)
>    kmem_cache_free(cachep0)                           >n = get_node(cachep0, node0)
>       >__cache_free_alien(cachep0)                    >spin_lock_irq(&n->list_lock)
>          >n = get_node(cachep0, node0)                >if (!cpumask_empty(mask)) == false
>          >if (n->alien && n->alien[page_node])           >alien = n->alien
>             >alien = n->alien[page_node]                 >n->alien = NULL
>             >....                                        >spin_unlock_irq(&n->list_lock)
>                                                       >....
>

As mentioned in the review of v1 of this patch, we likely want to do a fix
for cpuup_canceled() instead.

I see, you mean the fix should be made in the cpuup_canceled() function instead?
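
To make the race above easier to follow, here is a rough, hypothetical sketch of the direction the $subject line ("add node spinlock protect in __cache_free_alien") describes. It is not the actual v3 diff (which is not quoted in this mail), only a simplified illustration: dereference n->alien while holding n->list_lock, the same lock cpuup_canceled() holds when it clears n->alien in the right-hand column of the diagram. Alien-cache draining and irq handling are elided.

/*
 * Hypothetical sketch only, heavily simplified from mm/slab.c; not the
 * actual v3 patch. The point is that n->alien is checked and read under
 * n->list_lock, so cpuup_canceled() cannot clear it between the check
 * and the use.
 */
static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
			      int node, int page_node)
{
	struct kmem_cache_node *n = get_node(cachep, node);
	struct alien_cache *alien = NULL;

	spin_lock(&n->list_lock);	/* added: serializes with cpuup_canceled() */
	if (n->alien && n->alien[page_node])
		alien = n->alien[page_node];
	spin_unlock(&n->list_lock);

	if (alien) {
		/* queue objp into the alien cache for page_node ... */
	} else {
		/* ... or free it directly back to page_node's free lists */
	}
	return 1;
}

Note that even with this, the alien_cache itself could presumably still be freed by cpuup_canceled() once n->list_lock is dropped, which may be why the review comment above prefers fixing the race on the cpuup_canceled() side instead.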