On Fri, 12 Oct 2007, Yasunori Goto wrote:
> > > +	down_read(&slub_lock);
> > > +	list_for_each_entry(s, &slab_caches, list) {
> > > +		local_node = page_to_nid(virt_to_page(s));
> > > +		if (local_node == offline_node)
> > > +			/* This slub is on the offline node. */
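The check quoted above asks which NUMA node the kmem_cache structure itself was allocated on. As a minimal illustration of that idiom, taken out of the patch context (the helper name cache_struct_on_node is made up here; virt_to_page() and page_to_nid() are stock kernel API):

#include <linux/types.h>
#include <linux/mm.h>		/* virt_to_page(), page_to_nid() */
#include <linux/slab.h>		/* struct kmem_cache */

/*
 * Does the kmem_cache structure itself live on the given node?
 * virt_to_page() maps the structure's kernel virtual address back to its
 * struct page; page_to_nid() reports the node that page belongs to.
 */
static inline bool cache_struct_on_node(struct kmem_cache *s, int node)
{
	return page_to_nid(virt_to_page(s)) == node;
}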
On Fri, 12 Oct 2007, Yasunori Goto wrote:

> If pages on the new node are available, slub can use them before making
> new kmem_cache_nodes. So, this callback should be called
> BEFORE pages on the node are available.

If it's called before pages on the node are available then it must
fall back and cannot [...]
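For context, the ordering being discussed comes from the memory hotplug notifier chain: MEM_GOING_ONLINE is raised before the new pages become usable, MEM_ONLINE only afterwards. A rough sketch of how such a callback could be hooked up follows; the slab_mem_going_online_callback() and slab_mem_offline_callback() names are placeholders for the hot-add and hot-remove handlers (one possible shape of the hot-add side is sketched after the changelog below), while the notifier API itself is the standard one from linux/memory.h:

#include <linux/memory.h>	/* register_memory_notifier(), MEM_GOING_ONLINE, ... */
#include <linux/notifier.h>	/* NOTIFY_OK, NOTIFY_BAD */

static int slab_memory_callback(struct notifier_block *self,
				unsigned long action, void *arg)
{
	int ret = 0;

	switch (action) {
	case MEM_GOING_ONLINE:
		/* The new node has no usable pages yet: set up kmem_cache_node now. */
		ret = slab_mem_going_online_callback(arg);
		break;
	case MEM_OFFLINE:
	case MEM_CANCEL_ONLINE:
		/* The node lost its memory (or onlining failed): tear it back down. */
		slab_mem_offline_callback(arg);
		break;
	default:
		break;
	}
	return ret ? NOTIFY_BAD : NOTIFY_OK;
}

static struct notifier_block slab_memory_nb = {
	.notifier_call = slab_memory_callback,
};

/* Registered early, e.g. from kmem_cache_init():
 *	register_memory_notifier(&slab_memory_nb);
 */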
This is to make kmem_cache_nodes of all SLUBs for the new node when
memory hot-add is called. This fixes a panic caused by a NULL pointer
access in discard_slab() after memory hot-add.

If pages on the new node are available, slub can use them before making
new kmem_cache_nodes. So, this callback should be called
BEFORE pages on the node are available.
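A sketch of what that before-pages-are-available callback might look like, assuming the mm/slub.c internals of the time (slub_lock, slab_caches, s->node[], init_kmem_cache_node(), kmalloc_caches[0] backing struct kmem_cache_node) and a struct memory_notify argument carrying status_change_nid. This is not the patch as posted, only an illustration of the approach; it is meant to live inside mm/slub.c, which already pulls in the needed headers:

static int slab_mem_going_online_callback(void *arg)
{
	struct memory_notify *marg = arg;
	struct kmem_cache_node *n;
	struct kmem_cache *s;
	int nid = marg->status_change_nid;
	int ret = 0;

	/*
	 * If the node already had memory, its kmem_cache_node structures
	 * already exist and nothing needs to be done.
	 */
	if (nid < 0)
		return 0;

	/*
	 * The node is being brought online but has no usable pages yet,
	 * so this allocation necessarily falls back to another node.
	 */
	down_read(&slub_lock);
	list_for_each_entry(s, &slab_caches, list) {
		n = kmem_cache_alloc(kmalloc_caches, GFP_KERNEL);
		if (!n) {
			ret = -ENOMEM;
			break;
		}
		init_kmem_cache_node(n);
		s->node[nid] = n;
	}
	up_read(&slub_lock);
	return ret;
}

With the kmem_cache_node in place before any page of the new node becomes allocatable, a later discard_slab() on that node no longer trips over a NULL s->node[nid] pointer, which is the panic the changelog describes.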