On Tue, 20 Oct 2015 12:53:17 -0700 Dave Hansen <[email protected]> wrote:
> From: Dave Hansen <[email protected]>
>
> I have a hugetlbfs user which is never explicitly allocating huge pages
> with 'nr_hugepages'.  They only set 'nr_overcommit_hugepages' and then
> let the pages be allocated from the buddy allocator at fault time.
>
> This works, but they noticed that mbind() was not doing them any good
> and the pages were being allocated without respect for the policy they
> specified.
>
> The code in question is this:
>
> > struct page *alloc_huge_page(struct vm_area_struct *vma,
> ...
> >	page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, gbl_chg);
> >	if (!page) {
> >		page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
>
> dequeue_huge_page_vma() is smart and will respect the VMA's memory
> policy.  But, it only grabs _existing_ huge pages from the huge page
> pool.  If the pool is empty, we fall back to alloc_buddy_huge_page(),
> which obviously can't do anything with the VMA's policy because it
> isn't even passed the VMA.
>
> Almost everybody preallocates huge pages.  That's probably why nobody
> has ever noticed this.  Looking back at the git history, I don't think
> this _ever_ worked from when alloc_buddy_huge_page() was introduced in
> 7893d1d5, 8 years ago.
>
> The fix is to pass vma/addr down into the places where we actually
> call into the buddy allocator.  It's fairly straightforward plumbing.
> This has been lightly tested.

huh.  Fair enough.

> b/mm/hugetlb.c |  111 ++++++++++++++++++++++++++++++++++++++++++++++++++-------

Is it worth deporking this for the CONFIG_NUMA=n case?
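
For illustration, a minimal sketch of the plumbing being described --
not the actual diff.  The helper name alloc_buddy_huge_page_with_mpol()
is hypothetical here, and the alloc_pages_vma() argument list is
approximate; both are assumptions rather than quotes from the patch:

	/*
	 * Sketch only: a vma-aware fallback so the buddy allocator
	 * sees the VMA's mempolicy instead of NUMA_NO_NODE.
	 */
	static struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
			struct vm_area_struct *vma, unsigned long addr)
	{
		gfp_t gfp = htlb_alloc_mask(h) | __GFP_COMP | __GFP_NOWARN;

		if (!vma)
			/* no VMA context: behave like the old code */
			return alloc_pages(gfp, huge_page_order(h));

		/* alloc_pages_vma() honors mbind()/set_mempolicy() */
		return alloc_pages_vma(gfp, huge_page_order(h), vma, addr,
				       numa_node_id());
	}

	/* ...and the fault-time fallback in alloc_huge_page() becomes: */
	page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, gbl_chg);
	if (!page)
		page = alloc_buddy_huge_page_with_mpol(h, vma, addr);

With CONFIG_NUMA=n, alloc_pages_vma() collapses to plain alloc_pages()
anyway, which is presumably what the deporking question above is about.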

