On 09/22/2012 04:15 PM, Blue Swirl wrote:
> >
> >> This could have nice cleanup effects, though, and for example enable
> >> a generic 'info vmtree' to discover VA->PA mappings for any target
> >> instead of the current MMU table walkers.
> >
> > How?  That's in a hardware-defined format that's completely invisible
> > to the memory API.
>
> It's invisible now, but target-specific code could grab the mappings
> and feed them to the memory API. The memory API would then see each
> CPU's virtual memory as an address space that maps onto the physical
> memory address space.
>
> For RAM-backed MMU tables, as on x86 and Sparc32, writes to the page
> table memory areas would need to be tracked like SMC. For in-MMU TLBs,
> this would not be needed.
>
> Again, if performance degraded, this would not be worthwhile. I'd
> expect VA->PA mappings to change at least at the rate of context
> switches + page faults + mmap/exec activity, so this could amount to
> thousands of changes per second per CPU.
>
> In theory KVM could use the memory API as a CPU-type-agnostic way to
> exchange this information. I'd expect the KVM exit rate to be nowhere
> near as high, though, and in many cases the mapping information would
> not need to be exchanged at all. It would not improve performance
> there either.
>
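
For reference, the quoted proposal boils down to something like the sketch
below.  It borrows present-day memory API entry points
(memory_region_init_alias, address_space_init) and a made-up
expose_cpu_va_space() helper purely to show the shape of the idea; it is
not working code and it ignores permissions, invalidation and huge pages.

static void expose_cpu_va_space(CPUState *cpu, MemoryRegion *sysmem)
{
    MemoryRegion *va_root = g_new0(MemoryRegion, 1);
    AddressSpace *va_as = g_new0(AddressSpace, 1);
    char *name = g_strdup_printf("cpu%d-va", cpu->cpu_index);

    /* One container per CPU, holding that CPU's guest-virtual view. */
    memory_region_init(va_root, NULL, name, UINT64_MAX);
    address_space_init(va_as, va_root, name);
    g_free(name);

    /*
     * The target-specific walker would then add one alias per live
     * mapping, pointing a guest-virtual page at its guest-physical
     * backing in sysmem:
     *
     *     memory_region_init_alias(alias, NULL, "va-page", sysmem,
     *                              gpa, TARGET_PAGE_SIZE);
     *     memory_region_add_subregion(va_root, gva, alias);
     *
     * and it would have to tear those aliases down every time the guest
     * edits its page tables or switches contexts.
     */
}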

First, the memory API does not operate at that level.  It handles (guest
physical) -> (host virtual | io callback) translations.  These are
(guest virtual) -> (guest physical) translations.
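
To make the distinction concrete, here is a sketch using names from the
current tree (take the exact functions as illustrative only, nothing in
this thread depends on them):

/*
 *   guest virtual  --(per-target MMU code)-->  guest physical
 *   guest physical --(memory API)-->           host RAM / I/O callback
 */
static void show_layers(CPUState *cpu, vaddr gva)
{
    /* The layer the memory API never sees: per-CPU, hardware-defined. */
    hwaddr gpa = cpu_get_phys_page_debug(cpu, gva);
    if (gpa == -1) {
        return;                 /* not currently mapped by the guest MMU */
    }

    /* The layer the memory API does handle: the machine-wide map. */
    MemoryRegionSection sec = memory_region_find(get_system_memory(), gpa, 1);
    if (sec.mr) {
        printf("gva 0x%" PRIx64 " -> gpa 0x%" PRIx64 " -> %s '%s'\n",
               (uint64_t)gva, (uint64_t)gpa,
               memory_region_is_ram(sec.mr) ? "RAM" : "MMIO",
               memory_region_name(sec.mr));
        memory_region_unref(sec.mr);
    }
}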

Second, the memory API is machine-wide and designed for coarse maps.
Processor memory maps are per-cpu and page-grained.  (The memory API
actually needs to support page-grained maps (for IOMMUs) and per-cpu
maps (SMM) efficiently, but that's another story.)
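
(For scale: a machine's physical map is typically a handful of RAM and
MMIO regions, while mirroring even one 4 GiB guest-virtual map at page
granularity would mean 4 GiB / 4 KiB = 2^20, i.e. about a million
entries, per address space, per CPU.  The 4 GiB figure is only for
illustration.)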

Third, we know from the pre-NPT/EPT days that tracking all mappings
destroys performance.  It's much better to do this on demand.
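
That on-demand shape, very roughly (this is a self-contained toy, not
QEMU's actual softmmu TLB code; all names and numbers are made up):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define TLB_SIZE   256              /* direct-mapped, one per vCPU */

typedef struct { uint64_t vpage, ppage; int valid; } TlbEntry;
typedef struct { TlbEntry tlb[TLB_SIZE]; } CpuMmu;

/* Stand-in for the target-specific page table walk. */
static uint64_t walk_page_tables(uint64_t vpage)
{
    return vpage + 0x100;           /* fake VA->PA mapping for the demo */
}

/* Translate on demand: only walk when an access actually misses. */
static uint64_t translate(CpuMmu *mmu, uint64_t vaddr)
{
    uint64_t vpage = vaddr >> PAGE_SHIFT;
    TlbEntry *e = &mmu->tlb[vpage % TLB_SIZE];

    if (!e->valid || e->vpage != vpage) {
        e->vpage = vpage;
        e->ppage = walk_page_tables(vpage);
        e->valid = 1;
    }
    return (e->ppage << PAGE_SHIFT) | (vaddr & ((1 << PAGE_SHIFT) - 1));
}

/* Context switch: drop the cache instead of mirroring every page table
 * update into a machine-wide map. */
static void flush(CpuMmu *mmu)
{
    for (int i = 0; i < TLB_SIZE; i++) {
        mmu->tlb[i].valid = 0;
    }
}

int main(void)
{
    CpuMmu mmu = { 0 };
    printf("0x%llx\n", (unsigned long long)translate(&mmu, 0x40001234));
    flush(&mmu);
    return 0;
}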

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.

