On Tue, Aug 12, 2014 at 05:28:52AM -0700, Eric Dumazet wrote:
> On Tue, 2014-08-12 at 09:07 +0300, Kirill A. Shutemov wrote:
> > On Tue, Aug 12, 2014 at 08:00:54AM +0300, Oren Twaig wrote:
> > >If not, is there any fast way to change this behavior ? Maybe by
> > >changing the granularity/alignment of such allocations to allow such
> > >mapping ?
> > 
> > What's the point to use vmalloc() in this case?
> 
> Look at various large hashes we have in the system, all using
> vmalloc() :
> 
> [    0.006856] Dentry cache hash table entries: 16777216 (order: 15, 
> 134217728 bytes)
> [    0.033130] Inode-cache hash table entries: 8388608 (order: 14, 67108864 
> bytes)
> [    1.197621] TCP established hash table entries: 524288 (order: 11, 8388608 
> bytes)

I see lower-order allocation in upstream code. Is it some distribution
tweak?

> I would imagine a performance difference if we were using hugepages.

Okay, it's *probably* a valid point.

The hash tables are only allocated with vmalloc() on NUMA systems, if
hashdist=1 (the default on NUMA).  This is done to distribute the memory
between nodes. In the NUMA_NO_NODE case, vmalloc() backs the whole range
with 0-order page allocations: no physically contiguous memory for
hugepage mappings.

I guess we could teach vmalloc() to interleave between nodes in PMD_SIZE
chunks rather than PAGE_SIZE if the caller asks for a big memory allocation.
Although, I'm not sure if it would fit all vmalloc() users.

We would also need to allocate a PMD_SIZE-aligned virtual address range
to be able to map the allocated memory with PMDs.

It's *potentially* interesting research project. Any volunteers?

-- 
 Kirill A. Shutemov