>>> On 09.03.16 at 15:52, <jbeul...@suse.com> wrote:
>>>> On 09.03.16 at 15:36, <david.vra...@citrix.com> wrote:
>> On 09/03/16 14:01, Jan Beulich wrote:
>>>>>> On 08.03.16 at 19:44, <david.vra...@citrix.com> wrote:
>>>> --- a/xen/arch/x86/mm.c
>>>> +++ b/xen/arch/x86/mm.c
>>>> @@ -5641,7 +5641,7 @@ int map_pages_to_xen(
>>>>             flush_flags |= FLUSH_TLB_GLOBAL;       \
>>>>         if ( (flags & _PAGE_PRESENT) &&            \
>>>>              (((o_) ^ flags) & PAGE_CACHE_ATTRS) ) \
>>>> -            flush_flags |= FLUSH_CACHE;           \
>>>> +            flush_flags |= FLUSH_CACHE_BY_VA;     \
>>>>     } while (0)
>>>
>>> No, that's too simple. You may flush by VA only if the MFN didn't
>>> change (or else you flush the wrong page).
>>
>> Cachability changes go through update_xen_mappings(), which always uses
>> the same MFN -> virt mapping, so the MFN never changes.
>
> But the code you change affects map_pages_to_xen(), of which
> only one caller is update_xen_mappings(). It would be calling for
> hard to debug bugs if we baked in an assumption that only the
> latter can do cachability adjustments.
And there's a second issue here: map_pages_to_xen() updates the idle
page tables, only part of which is actually in use on a huge machine in
the context of a PV guest, so the above is also risking a #PF on a
guest address for the CLFLUSH.

Setting the new flag only when the virtual address whose mapping gets
changed is within [DIRECTMAP_VIRT_START, HYPERVISOR_VIRT_END) would
seem an acceptable assumption: we know we're never changing the
VFN <-> PFN relationship inside the 1:1 mapping.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel