On 08/02/17 18:06, George Dunlap wrote:
> On 08/02/17 13:20, Andrew Cooper wrote:
>> On 08/02/17 13:13, Jan Beulich wrote:
>>>>>> On 07.02.17 at 19:48, <andrew.coop...@citrix.com> wrote:
>>>> Until the IPI has completed, other processors might be running on this
>>>> nested p2m object.  clear_domain_page() does not guarantee to make
>>>> 8-byte atomic updates, which means that a pagewalk on a remote
>>>> processor might encounter a partial update.
>>>>
>>>> This is currently safe, as other issues prevent a nested p2m from ever
>>>> being shared between two cpus (although this is contrary to the
>>>> original plan).
>>>>
>>>> Setting p2m->np2m_base to P2M_BASE_EADDR before the IPI ensures that
>>>> the IPI'd processors won't continue to use the flushed mappings.
>>>>
>>>> While modifying this function, remove all the trailing whitespace and tweak
>>>> style in the affected areas.
>>>>
>>>> Signed-off-by: Andrew Cooper <andrew.coop...@citrix.com>
>>> Reviewed-by: Jan Beulich <jbeul...@suse.com>
>>> but ...
>>>
>>>> @@ -1633,19 +1635,21 @@ p2m_flush_table(struct p2m_domain *p2m)
>>>>  
>>>>      /* This is no longer a valid nested p2m for any address space */
>>>>      p2m->np2m_base = P2M_BASE_EADDR;
>>>> -    
>>>> -    /* Zap the top level of the trie */
>>>> -    mfn = pagetable_get_mfn(p2m_get_pagetable(p2m));
>>>> -    clear_domain_page(mfn);
>>>>  
>>>>      /* Make sure nobody else is using this p2m table */
>>>>      nestedhvm_vmcx_flushtlb(p2m);
>>>>  
>>>> +    /* Zap the top level of the trie */
>>> s/trie/tree/ here, as you touch it anyway?
>>
>> Trie here refers to the data structure
>> https://en.wikipedia.org/wiki/Trie, which is the structure implemented
>> by processor pagetables.  It is more specific than just calling them
>> trees.
> 
> Never heard that before, but we seem to already have at least six
> instances in the hypervisor (and an ocaml file called 'trie.mli'), so I
> guess there's precedent. :-)
> 
> Reviewed-by: George Dunlap <george.dun...@citrix.com>
> 
> I'll check this one in.

Or maybe I won't. :-)

 -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
