On Thu, 3 Aug 2017 16:25:46 -0700 Linus Torvalds 
<torva...@linux-foundation.org> wrote:

> On Thu, Aug 3, 2017 at 4:11 PM, Andrew Morton <a...@linux-foundation.org> 
> wrote:
> >
> > Where is this INIT_LIST_HEAD()?
> 
> I think it's this one:
> 
>         list_del_init(&info->shrinklist);
> 
> in shmem_unused_huge_shrink().

OK.

> > I'm not sure I'm understanding this.  AFAICT all the list operations to
> > which you refer are synchronized under spin_lock(&sbinfo->shrinklist_lock)?
> 
> No, notice how shmem_unused_huge_shrink() does the
> 
>         list_move(&info->shrinklist, &to_remove);
> 
> and
> 
>         list_move(&info->shrinklist, &list);
> 
> to move to (two different) private lists under the shrinklist_lock,
> but once it is on that private "list/to_remove" list, it is then
> accessed outside the locked region.

So the code is using sbinfo->shrinklist_lock to protect
sbinfo->shrinklist AND to protect all the per-inode info->shrinklists.
Except the coverage is incomplete.
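
To spell the pattern out (a simplified sketch from my reading of the
code, not the verbatim mm/shmem.c source - nothing_to_split() is a
made-up stand-in for the real "anything to gain?" check, and the batch
limiting and error handling are elided):

static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo)
{
        LIST_HEAD(list);
        LIST_HEAD(to_remove);
        struct shmem_inode_info *info, *next;

        spin_lock(&sbinfo->shrinklist_lock);
        list_for_each_entry_safe(info, next, &sbinfo->shrinklist, shrinklist) {
                /* pin the inode; if it is already going away, drop the entry */
                if (!igrab(&info->vfs_inode)) {
                        list_del_init(&info->shrinklist);
                        continue;
                }
                if (nothing_to_split(info))
                        list_move(&info->shrinklist, &to_remove);
                else
                        list_move(&info->shrinklist, &list);
        }
        spin_unlock(&sbinfo->shrinklist_lock);

        /*
         * From here on the private lists are walked with no lock held.
         * If an iput() below drops the last reference,
         * shmem_evict_inode() will do a locked list_del_init() on
         * info->shrinklist - which races with this unlocked walk if
         * the entry is still sitting on a private list at that point.
         */
        list_for_each_entry_safe(info, next, &to_remove, shrinklist) {
                list_del_init(&info->shrinklist);       /* unlocked */
                iput(&info->vfs_inode);
        }
        /* ... the entries on "list" get similar unlocked treatment ... */
        return 0;
}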

Presumably it's too expensive to extend sbinfo->shrinklist_lock
coverage in shmem_unused_huge_shrink() (or is it?  - this is huge
pages).  An alternative would be to add a new
shmem_inode_info.shrinklist_lock whose mandate is to protect
shmem_inode_info.shrinklist.
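
Something like this, perhaps (a sketch of the idea only, not a tested
patch - both shmem_unused_huge_shrink() and shmem_evict_inode() would
have to take the new lock, and its ordering against
sbinfo->shrinklist_lock would need to be nailed down):

struct shmem_inode_info {
        spinlock_t              shrinklist_lock;  /* new: guards ->shrinklist */
        struct list_head        shrinklist;
        struct inode            vfs_inode;
        /* other existing fields unchanged */
};

/* hypothetical helper: the formerly-unlocked removals would become */
static void shmem_shrinklist_del(struct shmem_inode_info *info)
{
        spin_lock(&info->shrinklist_lock);
        list_del_init(&info->shrinklist);
        spin_unlock(&info->shrinklist_lock);
}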

> Honestly, I don't love this situation, or the patch, but I think the
> patch is likely the right thing to do.

Well, we could view the premature dropping of sbinfo->shrinklist_lock
in shmem_unused_huge_shrink() as a performance optimization and put
some big fat comments in there explaining what's going on.  But it's
tricky, and it's not clear that such an optimization is warranted.
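
Note that giving up the optimization wouldn't mean holding the lock
across the whole second pass anyway - iput() can sleep, so it has to
stay outside the spinlock regardless.  It would just mean re-taking
the lock around each list operation, e.g. (sketch, matching the
layout above):

        list_for_each_entry_safe(info, next, &to_remove, shrinklist) {
                spin_lock(&sbinfo->shrinklist_lock);
                list_del_init(&info->shrinklist);
                spin_unlock(&sbinfo->shrinklist_lock);
                iput(&info->vfs_inode);   /* may sleep: outside the lock */
        }

That's one lock/unlock pair per entry, which is presumably the cost
being weighed against the comments-only approach.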


