On 4/7/22 17:27, Kirill A. Shutemov wrote:
> On Thu, Apr 07, 2022 at 07:28:54AM -0700, Richard Henderson wrote:
>> On 4/7/22 06:18, Kirill A. Shutemov wrote:
>>>> The new hook is incorrect, in that it doesn't apply to addresses
>>>> along the TLB fast path.
>>> I'm not sure what you mean by that. The tlb_hit() mechanics work: we
>>> strip the tag bits before the TLB lookup.
>>> Could you elaborate?
>> The fast path does not clear the bits, so you enter the slow path
>> before you get to clearing the bits. You've lost most of the advantage
>> of the TLB already.
> Sorry for my ignorance, but what do you mean by fast path here?
The fast path is the TLB lookup code that is generated by the JIT
compiler. If the TLB hits, the memory access doesn't go through any C
code. I think tagged addresses always fail the fast path in your patch.
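Very roughly, that generated lookup is equivalent to the sketch below.
This is simplified and the names are made up; the real comparison is
emitted inline by each TCG backend rather than written as C:

#include <stdbool.h>
#include <stdint.h>

/* Simplified sketch of the softmmu fast path, not QEMU's actual code.
 * The JIT emits the equivalent of this comparison inline, so a hit
 * never enters C. */
typedef struct {
    uint64_t addr_read;   /* page-aligned guest address of the entry */
    uintptr_t addend;     /* host - guest offset for the translation */
} TLBEntrySketch;

#define PAGE_MASK_SKETCH  (~(uint64_t)0xfff)   /* assuming 4K pages */

static inline bool tlb_hit_sketch(const TLBEntrySketch *e, uint64_t va)
{
    /* An address carrying LAM metadata differs from the stored
     * (untagged) page address in the high bits, so this compare fails
     * and every tagged access falls through to the slow path. */
    return e->addr_read == (va & PAGE_MASK_SKETCH);
}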
>> While a proper tagged address will have the tag removed in CR2 during
>> a page fault, an improper tagged address (with bit 63 not matching bit
>> 47 or bit 56) should have the original address reported in CR2.
> Hm. I don't see that in the spec. It rather points in the other
> direction:
>
>   Page faults report the faulting linear address in CR2. Because LAM
>   masking (by sign-extension) applies before paging, the faulting
>   linear address recorded in CR2 does not contain the masked
>   metadata.
>
> Yes, it talks about CR2 in the case of a page fault, not a #GP due to
> the canonicality check, but still.
>> I could imagine a hook that could aid the victim cache in ignoring the
>> tag, so that we need to go through tlb_fill fewer times. But I
>> wouldn't want to include that in the base version of this feature, and
>> I'd want to take more than a moment in the design so that it could be
>> used by ARM and RISC-V as well.
> But what other options do you see? Clearing the bits before the TLB
> lookup matches the architectural spec and makes INVLPG match the
> described behaviour without special handling.
Ah, INVLPG handling is messy indeed.
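A rough sketch of why the strip-before-lookup placement keeps INVLPG
trivial (made-up names, assuming LAM48; not the patch's actual code):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch of strip-before-lookup: untag once, up front,
 * so the TLB and INVLPG only ever see masked addresses. */
static uint64_t untag_sketch(uint64_t va, bool lam48_on)
{
    /* Sign-extend from bit 47 when LAM48 is active. */
    return lam48_on ? (uint64_t)((int64_t)(va << 16) >> 16) : va;
}

static void mmu_lookup_sketch(uint64_t va, bool lam48_on)
{
    va = untag_sketch(va, lam48_on);  /* strip before the TLB compare */
    /* ... normal TLB lookup, tlb_fill on miss, all on the masked va ... */
    (void)va;
}

static void invlpg_sketch(uint64_t va, bool lam48_on)
{
    /* The INVLPG operand gets the same masking, so it finds the entry
     * that was installed under the masked address, with no special
     * case needed. */
    va = untag_sketch(va, lam48_on);
    /* ... flush the TLB entry covering va ... */
    (void)va;
}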
Paolo