On Fri, Feb 13, 2015 at 6:57 PM, Andrew Cooper <andrew.coop...@citrix.com> wrote:
> On 13/02/15 02:11, Kai Huang wrote:
>> On 02/12/2015 10:10 PM, Andrew Cooper wrote:
>>> On 12/02/15 06:54, Tian, Kevin wrote:
>>>>>> which presumably
>>>>>> means that the PML buffer flush needs to be aware of which gfns are
>>>>>> mapped by superpages to be able to correctly set a block of bits in
>>>>>> the logdirty bitmap.
>>>>>
>>>>> Unfortunately PML itself can't tell us whether the logged GPA comes
>>>>> from a superpage or not, but even with PML we still need to split
>>>>> superpages into 4K pages, just as the traditional write-protection
>>>>> approach does. I think this is because live migration should be based
>>>>> on 4K page granularity. Marking all 512 bits of a 2M page dirty for a
>>>>> single write doesn't make sense in either the write-protection or the
>>>>> PML case.
>>>>
>>>> agree. extending one write to a superpage enlarges the dirty set
>>>> unnecessarily. since the spec doesn't say superpage logging is not
>>>> supported, I'd expect a 4k-aligned entry to be logged if the write
>>>> falls within a superpage.
>>>
>>> The spec states that a gfn is appended to the log strictly on the
>>> transition of the D bit from 0 to 1.
>>>
>>> In the case of a 2M superpage, there is a single D bit for the entire
>>> 2M range.
>>>
>>> The plausible (working) scenarios I can see are:
>>>
>>> 1) superpages are not supported (not indicated by the whitepaper).
>>
>> A better description would be: PML doesn't check whether it's a
>> superpage; it just operates on the D-bit, no matter what the page size is.
>>
>>> 2) a single entry will be written which must be taken to cover the
>>>    entire 2M range.
>>> 3) an individual entry is written for every access.
>>
>> Below is the reply from our hardware guy related to PML on superpages.
>> It should answer this accurately.
>>
>> "As noted in Section 1.3, logging occurs whenever the CPU would set an
>> EPT D bit.
>>
>> It does not matter whether the D bit is in an EPT PTE (4KB page), EPT
>> PDE (2MB page), or EPT PDPTE (1GB page).
>>
>> In all cases, the GPA written to the PML log will be the address of the
>> write that causes the D bit in question to be updated, with bits 11:0
>> cleared.
>>
>> This means that, in the case in which the D bit is in an EPT PDE or an
>> EPT PDPTE, the log entry will communicate which 4KB region within the
>> larger page was being written.
>>
>> Once the D bit is set in one of these entries, a subsequent write to the
>> larger page will not generate a log entry, even if that write is to a
>> different 4KB region within the larger page. This is because log entries
>> are created only when a D bit is being set, and a write will not cause a
>> D bit to be set if the page's D bit is already set.
>>
>> The log entries do not communicate the level of the EPT paging-structure
>> entry in which the D bit was set (i.e., they do not communicate the page
>> size)."
>
> Thanks for the clarification.
>
> The result of this behaviour is that the PML flush logic is going to have
> to look up each gfn and check whether it is mapped by a superpage, which
> will add a sizeable overhead.

Sorry that I am replying from my personal email account, as I can't access
my company account.

I don't think we need to check whether the gfn is mapped by a superpage.
The PML flush does a very simple thing:

1) read out the PML index;
2) loop over all valid GPAs logged in the PML buffer, according to the PML
   index, and call paging_mark_dirty for each of them;
3) reset the PML index to 511, which essentially resets the PML buffer to
   be empty again.

The above process doesn't need to know whether the GFN is mapped by a
superpage or not.
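Roughly, the flush would look something like the sketch below. This is only
an illustration of the three steps above, not the actual patch: the struct,
the helper names (pml_buffer_flush, mark_dirty) and the handling of a full
buffer are made-up assumptions.

    #include <stdint.h>

    #define PML_ENTITY_NUM    512                  /* PML buffer holds 512 GPA entries */
    #define PML_INDEX_EMPTY   (PML_ENTITY_NUM - 1) /* index 511 == buffer is empty */

    /* Illustrative per-vcpu snapshot of the PML state read back from the VMCS. */
    struct pml_state {
        uint16_t index;                    /* PML index field from the VMCS */
        uint64_t buffer[PML_ENTITY_NUM];   /* contents of the 4K PML buffer page */
    };

    /* Stand-in for paging_mark_dirty(); the real function would set the gfn's
     * bit in the log-dirty bitmap. */
    static void mark_dirty(uint64_t gfn)
    {
        (void)gfn;
    }

    static void pml_buffer_flush(struct pml_state *pml)
    {
        /* 1) Read out the PML index.  Hardware fills the buffer from entry 511
         *    downwards and decrements the index on each logged write, so 511
         *    means "empty" and an underflowed index means "full". */
        unsigned int first = (pml->index >= PML_ENTITY_NUM) ? 0 : pml->index + 1;

        if ( pml->index == PML_INDEX_EMPTY )
            return;                        /* nothing has been logged */

        /* 2) Loop over all valid GPAs and mark the corresponding 4K gfns dirty.
         *    No superpage lookup is needed: each logged GPA already identifies
         *    the 4K region that was written, per the hardware reply above. */
        for ( unsigned int i = first; i < PML_ENTITY_NUM; i++ )
            mark_dirty(pml->buffer[i] >> 12);   /* logged GPA has bits 11:0 cleared */

        /* 3) Reset the index to 511 so the buffer is effectively empty again. */
        pml->index = PML_INDEX_EMPTY;
    }

In the real implementation the index and buffer would of course be read from
the vcpu's VMCS and PML page rather than from a plain struct, but the flow
is the same three steps.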
Actually, for superpages, as you can see in my design, they will still be
set to read-only in the PML case, as we still need to split superpages into
4K pages even with PML. Therefore a superpage in log-dirty mode will first
be split into 4K pages on the EPT violation, and those 4K pages will then
follow the PML path.

> It is also not conducive to minimising the data transmitted in the
> migration stream.

Yes, PML itself is unlikely to minimise the data transmitted in the
migration stream, as how many dirty pages are transmitted is entirely up to
the guest. But it reduces the EPT violations caused by 4K page write
protection, so theoretically PML can reduce the CPU cycles spent in
hypervisor context, leaving more cycles for guest mode, and it's therefore
reasonable to expect better guest performance.

> One future option might be to shatter all the EPT superpages when
> logdirty is enabled.

This is what I designed originally.

> This would be ok for a domain which is being migrated away, but would be
> suboptimal for snapshot operations; Xen currently has no ability to
> coalesce pages back into superpages.

Doesn't this issue exist in the current log-dirty implementation anyway? So
although PML doesn't solve it, it doesn't introduce any regression either.
To me, coalescing pages back into superpages is a separate optimisation,
not directly related to PML.

> It also interacts poorly with HAP vram tracking which enables logdirty
> mode itself.

Why would PML interact poorly with HAP vram tracking?

Thanks,
-Kai

> ~Andrew

--
Thanks,
-Kai

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel