* Peter Zijlstra <pet...@infradead.org> wrote:

> On Mon, Dec 18, 2017 at 12:45:13PM -0800, Dave Hansen wrote:
> > On 12/18/2017 12:41 PM, Peter Zijlstra wrote:
> > >> I also don't think the user_shared area of the fixmap can get *that*
> > >> big.  Does anybody know offhand what the theoretical limits are there?
> > > Problem there is the nr_cpus term I think, we currently have up to 8k
> > > CPUs, but I can see that getting bigger in the future.
> > 
> > It only matters if we go over 512GB, though.  Is the per-cpu part of the
> > fixmap ever more than 512GB/8k=64MB?
> 
> Unlikely, I think the LDT (@ 32 pages / 128K) and the DS (@ 2*4 pages /
> 32K) are the largest entries in there.

Note that with the latest state of things the LDT is not in the fixmap anymore;
it's mapped separately, via Andy's following patch:

  e86aaee3f2d9: ("x86/pti: Put the LDT in its own PGD if PTI is on")

We have the IDT, the per-CPU entry area and the Debug Store (on Intel CPUs)
mapped in the fixmap area, in addition to the usual fixmap entries that are a
handful of pages. (That's on 64-bit - on 32-bit we have a pretty large kmap
area.)

The biggest contribution to the size of the fixmap area is struct
cpu_entry_area (FIX_CPU_ENTRY_AREA_BOTTOM..FIX_CPU_ENTRY_AREA_TOP), which is
~180k, i.e. 44 pages.

Our current NR_CPUS limit is 8,192 CPUs, but even with 65,536 CPUs the fixmap
area would still only be ~12 GB total - so we are far from running out of
space.

Btw., the DS portion of the fixmap is not 64K but 128K. Here's the current
size distribution of struct cpu_entry_area:

  ::gdt                   -    4K
  ::tss                   -   12K
  ::entry_trampoline      -    4K
#if 64-bit
  ::exception_stacks      -   20K
#endif
#if Intel
  ::cpu_debug_store       -    4K
  ::cpu_debug_buffers     -  128K
#endif

So ::cpu_debug_buffers (struct debug_store_buffers) is the biggest fixmap
chunk, by far, distributed the following way:

  ::bts_buffer[]          -   64K
  ::pebs_buffer[]         -   64K

Thanks,

        Ingo
