On 25/08/2025 09:31, Kevin Brodsky wrote:
>>> Note: the performance impact of set_memory_pkey() is likely to be
>>> relatively low on arm64 because the linear mapping uses PTE-level
>>> descriptors only. This means that set_memory_pkey() simply changes the
>>> attributes of some PTE descriptors. However, some systems may be able to
>>> use higher-level descriptors in the future [5], meaning that
>>> set_memory_pkey() may have to split mappings. Allocating page tables
>>
>> I'm supposed the page table hardening feature will be opt-in due to
>> its overhead? If so I think you can just keep kernel linear mapping
>> using PTE, just like debug page alloc.
>
> Indeed, I don't expect it to be turned on by default (in defconfig). If
> the overhead proves too large when block mappings are used, it seems
> reasonable to force PTE mappings when kpkeys_hardened_pgtables is
> enabled.
I had a closer look at what happens when the linear map uses block
mappings, rebasing this series on top of [1]. Unfortunately, this is
worse than I thought: it does not work at all as things stand.

The main issue is that calling set_memory_pkey() in pagetable_*_ctor()
can cause the linear map to be split, which requires new PTP(s) to be
allocated, which means more nested call(s) to set_memory_pkey(). This
explodes, as a non-recursive lock is taken on that path. More
fundamentally, this cannot work unless we can explicitly allocate PTPs
from either:

1. A pool of PTE-mapped pages, or
2. A pool of memory that is already mapped with the right pkey (at any
   level)

This is where I have to apologise to Rick for not having studied his
series more thoroughly, as patch 17 [2] covers this issue very well in
its commit message. It seems fair to say there is no ideal or simple
solution, though. Rick's patch reserves enough (PTE-mapped) memory for
fully splitting the linear map, which is relatively simple but not very
pleasant.

Chatting with Ryan Roberts, we figured out another approach, improving
on solution 1 mentioned in [2]. It would rely on allocating all PTPs
from a special pool (without using set_memory_pkey() in
pagetable_*_ctor()), along these lines:

1. 2 pages are reserved at all times (with the appropriate pkey).
2. Try to allocate a 2M block. If needed, use a reserved page as PMD to
   split a PUD. If successful, set the block's pkey - the entire block
   can now be used for PTPs. Replenish the reserve from the block if
   needed.
3. If no block is available, make an order-2 allocation (4 pages). If
   needed, use 1-2 reserved pages to split PUD/PMD. Set the pkey of the
   4 pages, and take 1-2 of them to replenish the reserve if needed.

This ensures that we never run out of PTPs for splitting. We may get
into an OOM situation more easily due to the order-2 requirement, but
the risk remains low compared to requiring a 2M block. A bigger concern
is concurrency - do we need a per-CPU cache?
Reserving a 2M block per CPU could be very much overkill.

No matter which solution is used, this clearly increases the complexity
of kpkeys_hardened_pgtables. Mike Rapoport has posted a number of RFCs
[3][4] that aim to address this problem more generally, but no consensus
seems to have emerged, and I'm not sure they would completely solve this
specific problem either.

For now, my plan is to stick to solution 3 from [2], i.e. force the
linear map to be PTE-mapped. This is easily done on arm64 as it is the
default, and is required for rodata=full unless [1] is applied and the
system supports BBML2_NOABORT. See [1] for the potential performance
improvements we'd be missing out on (~5% ballpark). I'm not quite sure
what the picture looks like on x86 - it may well be more significant,
as Rick suggested.

- Kevin

[1] https://lore.kernel.org/all/[email protected]/
[2] https://lore.kernel.org/all/[email protected]/
[3] https://lore.kernel.org/lkml/[email protected]/
[4] https://lore.kernel.org/all/[email protected]/
