Re: [Xen-devel] [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.

2016-07-07 Thread Julien Grall



On 05/07/16 21:21, Sergej Proskurin wrote:

On 07/05/2016 05:37 PM, Julien Grall wrote:

When altp2m is active, the hostp2m is not used. All changes are applied
directly to the individual altp2m views. As far as I know, the only
places where a mapping is removed from a specific p2m view are the
functions unmap_regions_rw_cache, unmap_mmio_regions, and
guest_physmap_remove_page. Each of these functions, however, passes the
hostp2m to apply_p2m_changes, where the mapping is eventually removed.
So we definitely remove mappings from the hostp2m, but they would not
be removed from the associated altp2m views.

Can you state a scenario where and why pages might need to be removed
from the hostp2m at run time? In that case, maybe it would make sense to
extend the three functions (unmap_regions_rw_cache, unmap_mmio_regions,
and guest_physmap_remove_page) to also remove the mappings from all
available altp2m views?


All the functions you mentioned can be called after the domain has been
created. If you grep for guest_physmap_remove_page in the source code,
you will find that it is used to unmap grant entries from memory (see
replace_grant_host_mapping) or when decreasing the memory reservation
(see do_memory_op).

Note that, from my understanding, x86 has the same problem.



Yes, the x86 implementation works similarly to the ARM one. I will need
to think about that. One solution could be to simply extend the
previously mentioned functions to unmap the particular entry in all
currently available altp2m views (assuming we allocate the altp2m views
on demand, as discussed in patch #05).

Whatever the solution turns out to be, it will need to be ported to the
x86 implementation as well.


Actually, the x86 code does propagate the change (see
p2m_altp2m_propagate_change). I missed it while going through the code
the first time.
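
For the ARM side, a propagation hook in the same spirit could look roughly
like the sketch below. This is only an illustration: the names
d->arch.altp2m_p2m[], MAX_ALTP2M, altp2m_lock()/altp2m_unlock() and
remove_mapping_from_view() are assumptions based on this series and do not
exist in the current tree in this form.

    /*
     * Sketch only: remove a GFN range from every allocated altp2m view,
     * mirroring what x86's p2m_altp2m_propagate_change does for removals.
     */
    static void altp2m_propagate_remove(struct domain *d,
                                        paddr_t start_gpaddr,
                                        paddr_t end_gpaddr)
    {
        unsigned int i;

        altp2m_lock(d);

        for ( i = 0; i < MAX_ALTP2M; i++ )
        {
            struct p2m_domain *ap2m = d->arch.altp2m_p2m[i];

            /* Views are allocated on demand; skip the unused slots. */
            if ( ap2m == NULL )
                continue;

            /* Hypothetical per-view removal helper. */
            remove_mapping_from_view(ap2m, start_gpaddr, end_gpaddr);
        }

        altp2m_unlock(d);
    }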


Cheers,

--
Julien Grall



Re: [Xen-devel] [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.

2016-07-06 Thread Sergej Proskurin
Hi Julien,


On 07/06/2016 04:28 PM, Julien Grall wrote:
>
>
> On 05/07/16 21:21, Sergej Proskurin wrote:
>> I agree on this point as well. However, we should maybe think of another
>> name for flush_tlb_domain, especially if we do not flush the entire
>> domain (including all currently active altp2m views). What do you think?
>
> This function is only used within p2m.c, although it is exported. What
> about renaming this function to flush_tlb_p2m and instead passing the
> p2m as a parameter?
>
> Regards,
>

That sounds good to me :) I will do that. Thank you.
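
A rough sketch of what I have in mind (assuming we either store the VTTBR
in struct p2m_domain, e.g. p2m->vttbr, or look it up per p2m; the final
implementation may differ):

    /*
     * Sketch only: flush the TLB entries associated with a single p2m by
     * temporarily switching to its VTTBR (and hence its VMID).
     */
    void flush_tlb_p2m(struct p2m_domain *p2m)
    {
        unsigned long flags = 0;
        uint64_t ovttbr = READ_SYSREG64(VTTBR_EL2);

        /* Switch to the VMID of the p2m to be flushed, if necessary. */
        if ( ovttbr != p2m->vttbr )
        {
            local_irq_save(flags);
            WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
            isb();
        }

        flush_tlb();   /* flushes all entries tagged with the current VMID */

        if ( ovttbr != p2m->vttbr )
        {
            WRITE_SYSREG64(ovttbr, VTTBR_EL2);
            isb();
            local_irq_restore(flags);
        }
    }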

Cheers,
~Sergej



Re: [Xen-devel] [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.

2016-07-06 Thread Julien Grall



On 05/07/16 21:21, Sergej Proskurin wrote:

I agree on this point as well. However, we should maybe think of another
name for flush_tlb_domain, especially if we do not flush the entire
domain (including all currently active altp2m views). What do you think?


This function is only used within p2m.c, although it is exported. What
about renaming this function to flush_tlb_p2m and instead passing the
p2m as a parameter?


Regards,

--
Julien Grall



Re: [Xen-devel] [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.

2016-07-05 Thread Sergej Proskurin
Hi Julien,

On 07/05/2016 05:37 PM, Julien Grall wrote:
> 
> 
> On 05/07/16 15:48, Sergej Proskurin wrote:
>> On 07/04/2016 10:32 PM, Julien Grall wrote:
>>> On 04/07/2016 12:45, Sergej Proskurin wrote:
 +p2m_load_altp2m_VTTBR(n);
 +else
 +p2m_load_VTTBR(n->domain);
 +
   isb();

   if ( is_32bit_domain(n->domain) )
 @@ -119,22 +154,42 @@ void p2m_restore_state(struct vcpu *n)
   void flush_tlb_domain(struct domain *d)
   {
   unsigned long flags = 0;
 +struct vcpu *v = NULL;

 -/* Update the VTTBR if necessary with the domain d. In this case,
 - * it's only necessary to flush TLBs on every CPUs with the current VMID
 - * (our domain).
 +/*
 + * Update the VTTBR if necessary with the domain d. In this case, it is only
 + * necessary to flush TLBs on every CPUs with the current VMID (our
 + * domain).
   */
   if ( d != current->domain )
   {
   local_irq_save(flags);
 -p2m_load_VTTBR(d);
 -}

 -flush_tlb();
 +/* If altp2m is active, update VTTBR and flush TLBs of every VCPU */
 +if ( altp2m_active(d) )
 +{
 +for_each_vcpu( d, v )
 +{
 +p2m_load_altp2m_VTTBR(v);
 +flush_tlb();
 +}
 +}
 +else
 +{
 +p2m_load_VTTBR(d);
 +flush_tlb();
 +}
>>>
>>> Why do you need to do such a thing? If the VMID is the same, a single
>>> call to flush_tlb() will nuke all the entries for the given TLBs.
>>>
>>> If the VMID is not shared, then I am not even sure why you would need
>>> to flush the TLBs for all the altp2ms.
>>>
>>
>> If the VMID is shared between multiple views and we changed one entry
>> in one specific view, we might reuse a TLB entry that is not part of the
>> currently active altp2m view of the domain. And even if we assign unique
>> VMIDs to the individual altp2m views (as discussed in patch #04), as far
>> as I understand we will still need to flush the mappings of all altp2ms
>> at this point, because (AFAIK) changes in individual altp2m views still
>> need to be propagated to the TLBs. Please correct me if I am wrong on
>> this point.
> 
> You seem to be confused about how TLB flush instructions work on ARM.
> The function flush_tlb() will flush the TLBs on all processors for the
> current VMID. If the VMID is shared, then calling the flush N times with
> the same VMID is just a waste of processor cycles.
> 

You are right, I was confusing things, sorry. Thank you for setting that
straight.

> Now, if the VMID is not shared (and based on your current series),
> flush_tlb_domain is called in two places:
>- p2m_alloc_table
>- apply_p2m_changes
> 
> The former allocates the root table for a specific p2m. In the latter,
> the changes are only applied to a specific p2m, and those changes are
> currently not replicated in all the p2ms. So flushing the TLBs for each
> altp2m view does not seem useful here either.
> 

I agree on this point as well. However, we should maybe think of another
name for flush_tlb_domain, especially if we do not flush the entire
domain (including all currently active altp2m views). What do you think?

Also, we would need another flushing routine that takes a specific p2m
as an argument so that its VTTBR can be used while flushing the TLBs.

>>
>>> I have looked at Xen with this series applied and noticed that when
>>> you remove a page from the hostp2m, the mapping in the altp2m is not
>>> removed. So the guest may use a page that has been freed previously.
>>> Did I miss something?
>>>
>>
>> When altp2m is active, the hostp2m is not used. All changes are applied
>> directly to the individual altp2m views. As far as I know, the only
>> places where a mapping is removed from a specific p2m view are the
>> functions unmap_regions_rw_cache, unmap_mmio_regions, and
>> guest_physmap_remove_page. Each of these functions, however, passes the
>> hostp2m to apply_p2m_changes, where the mapping is eventually removed.
>> So we definitely remove mappings from the hostp2m, but they would not
>> be removed from the associated altp2m views.
>>
>> Can you state a scenario where and why pages might need to be removed
>> from the hostp2m at run time? In that case, maybe it would make sense to
>> extend the three functions (unmap_regions_rw_cache, unmap_mmio_regions,
>> and guest_physmap_remove_page) to also remove the mappings from all
>> available altp2m views?
> 
> All the functions you mentioned can be called after the domain has been
> created. If you grep for guest_physmap_remove_page in the source code,
> you will find that it is used to unmap grant entries from memory (see
> replace_grant_host_mapping) or when decreasing the memory reservation
> (see do_memory_op).

Re: [Xen-devel] [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.

2016-07-05 Thread Julien Grall



On 05/07/16 15:48, Sergej Proskurin wrote:

On 07/04/2016 10:32 PM, Julien Grall wrote:

On 04/07/2016 12:45, Sergej Proskurin wrote:

+p2m_load_altp2m_VTTBR(n);
+else
+p2m_load_VTTBR(n->domain);
+
  isb();

  if ( is_32bit_domain(n->domain) )
@@ -119,22 +154,42 @@ void p2m_restore_state(struct vcpu *n)
  void flush_tlb_domain(struct domain *d)
  {
  unsigned long flags = 0;
+struct vcpu *v = NULL;

-/* Update the VTTBR if necessary with the domain d. In this case,
- * it's only necessary to flush TLBs on every CPUs with the current VMID
- * (our domain).
+/*
+ * Update the VTTBR if necessary with the domain d. In this case, it is only
+ * necessary to flush TLBs on every CPUs with the current VMID (our
+ * domain).
   */
  if ( d != current->domain )
  {
  local_irq_save(flags);
-p2m_load_VTTBR(d);
-}

-flush_tlb();
+/* If altp2m is active, update VTTBR and flush TLBs of every VCPU */
+if ( altp2m_active(d) )
+{
+for_each_vcpu( d, v )
+{
+p2m_load_altp2m_VTTBR(v);
+flush_tlb();
+}
+}
+else
+{
+p2m_load_VTTBR(d);
+flush_tlb();
+}


Why do you need to do such a thing? If the VMID is the same, a single
call to flush_tlb() will nuke all the entries for the given TLBs.

If the VMID is not shared, then I am not even sure why you would need
to flush the TLBs for all the altp2ms.



If the VMID is shared between multiple views and we changed one entry
in one specific view, we might reuse a TLB entry that is not part of the
currently active altp2m view of the domain. And even if we assign unique
VMIDs to the individual altp2m views (as discussed in patch #04), as far
as I understand we will still need to flush the mappings of all altp2ms
at this point, because (AFAIK) changes in individual altp2m views still
need to be propagated to the TLBs. Please correct me if I am wrong on
this point.


You seem to be confused about how TLB flush instructions work on ARM.
The function flush_tlb() will flush the TLBs on all processors for the
current VMID. If the VMID is shared, then calling the flush N times with
the same VMID is just a waste of processor cycles.
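
For reference, the flush used here boils down to an inner-shareable,
VMID-scoped stage 1+2 invalidation, roughly like the sketch below on arm64
(illustration only; the exact barriers in Xen's inline helper may differ):

    /* Invalidate all stage 1 and stage 2 TLB entries for the *current*
     * VMID on every CPU in the inner-shareable domain. */
    static inline void flush_guest_tlb_current_vmid(void)
    {
        asm volatile(
            "dsb ish;"            /* complete prior page-table updates */
            "tlbi vmalls12e1is;"  /* stage 1+2, current VMID, inner-shareable */
            "dsb ish;"            /* wait for the invalidation to finish */
            "isb;"
            : : : "memory");
    }

That is why a single call covers every view that shares the VMID.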


Now, if the VMID is not shared (and based on your current series),
flush_tlb_domain is called in two places:
   - p2m_alloc_table
   - apply_p2m_changes

The former allocates the root table for a specific p2m. In the latter,
the changes are only applied to a specific p2m, and those changes are
currently not replicated in all the p2ms. So flushing the TLBs for each
altp2m view does not seem useful here either.





I have looked at Xen with this series applied and noticed that when
you remove a page from the hostp2m, the mapping in the altp2m is not
removed. So the guest may use a page that has been freed previously.
Did I miss something?



When altp2m is active, the hostp2m is not used. All changes are applied
directly to the individual altp2m views. As far as I know, the only
places where a mapping is removed from a specific p2m view are the
functions unmap_regions_rw_cache, unmap_mmio_regions, and
guest_physmap_remove_page. Each of these functions, however, passes the
hostp2m to apply_p2m_changes, where the mapping is eventually removed.
So we definitely remove mappings from the hostp2m, but they would not
be removed from the associated altp2m views.

Can you state a scenario where and why pages might need to be removed
from the hostp2m at run time? In that case, maybe it would make sense to
extend the three functions (unmap_regions_rw_cache, unmap_mmio_regions,
and guest_physmap_remove_page) to also remove the mappings from all
available altp2m views?


All the functions you mentioned can be called after the domain has been
created. If you grep for guest_physmap_remove_page in the source code,
you will find that it is used to unmap grant entries from memory (see
replace_grant_host_mapping) or when decreasing the memory reservation
(see do_memory_op).


Note that, from my understanding, x86 has the same problem.

Regards,

--
Julien Grall



Re: [Xen-devel] [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.

2016-07-05 Thread Sergej Proskurin
Hi Julien,


On 07/04/2016 10:32 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 04/07/2016 12:45, Sergej Proskurin wrote:
>> This commit makes sure that flushing the TLBs of a domain considers all
>> of the associated altp2m views. Therefore, in case a domain other than
>> the currently active one shall flush its TLBs, the implementation loops
>> over the VTTBRs of the different altp2m mappings of every vCPU and
>> flushes the TLBs for each of them. This way, a change to any of the
>> altp2m mappings is taken into account. Note that the domain whose TLBs
>> are to be flushed is not locked at this point.
>>
>> Signed-off-by: Sergej Proskurin 
>> ---
>> Cc: Stefano Stabellini 
>> Cc: Julien Grall 
>> ---
>>  xen/arch/arm/p2m.c | 71 --
>>  1 file changed, 63 insertions(+), 8 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 7e721f9..019f10e 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -15,6 +15,8 @@
>>  #include 
>>  #include 
>>
>> +#include 
>> +
>>  #ifdef CONFIG_ARM_64
>>  static unsigned int __read_mostly p2m_root_order;
>>  static unsigned int __read_mostly p2m_root_level;
>> @@ -79,12 +81,41 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
>>   P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
>>  }
>>
>> +static uint64_t p2m_get_altp2m_vttbr(struct vcpu *v)
>> +{
>> +struct domain *d = v->domain;
>> +uint16_t index = vcpu_altp2m(v).p2midx;
>> +
>> +if ( index == INVALID_ALTP2M )
>> +return INVALID_MFN;
>> +
>> +BUG_ON(index >= MAX_ALTP2M);
>> +
>> +return d->arch.altp2m_vttbr[index];
>> +}
>> +
>> +static void p2m_load_altp2m_VTTBR(struct vcpu *v)
>
> Please try to share code when possible. For instance, a big part
> of this helper is similar to p2m_load_VTTBR, assuming you get the p2m
> rather than the VTTBR directly.
>

I will combine these two functions, thank you.
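
A possible combined helper (sketch only; it assumes the p2m carries a
back-pointer to its domain and its VTTBR, e.g. p2m->domain and p2m->vttbr,
which may require small additions to struct p2m_domain):

    static void p2m_load_VTTBR(struct p2m_domain *p2m)
    {
        if ( is_idle_domain(p2m->domain) )
            return;

        BUG_ON(!p2m->vttbr);
        WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);

        isb(); /* Ensure update is visible */
    }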

>> +{
>> +struct domain *d = v->domain;
>> +uint64_t vttbr = p2m_get_altp2m_vttbr(v);
>> +
>> +if ( is_idle_domain(d) )
>> +return;
>> +
>> +BUG_ON(vttbr == INVALID_MFN);
>> +WRITE_SYSREG64(vttbr, VTTBR_EL2);
>> +
>> +isb(); /* Ensure update is visible */
>> +}
>> +
>>  static void p2m_load_VTTBR(struct domain *d)
>>  {
>>  if ( is_idle_domain(d) )
>>  return;
>> +
>>  BUG_ON(!d->arch.vttbr);
>>  WRITE_SYSREG64(d->arch.vttbr, VTTBR_EL2);
>> +
>
> Spurious changes.
>

I will revert these, thank you.

>>  isb(); /* Ensure update is visible */
>>  }
>>
>> @@ -101,7 +132,11 @@ void p2m_restore_state(struct vcpu *n)
>>  WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
>>  isb();
>>
>> -p2m_load_VTTBR(n->domain);
>> +if ( altp2m_active(n->domain) )
>
> This would benefit from an unlikely (maybe within altp2m_active).
>

Fair enough. I will insert an unlikely macro into altp2m_active.
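
Something along these lines (a sketch only; it assumes altp2m_active()
simply reads a d->arch.altp2m_active flag, as introduced earlier in this
series):

    static inline bool_t altp2m_active(const struct domain *d)
    {
        /* altp2m is expected to be disabled for the vast majority of domains. */
        return unlikely(d->arch.altp2m_active);
    }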

>> +p2m_load_altp2m_VTTBR(n);
>> +else
>> +p2m_load_VTTBR(n->domain);
>> +
>>  isb();
>>
>>  if ( is_32bit_domain(n->domain) )
>> @@ -119,22 +154,42 @@ void p2m_restore_state(struct vcpu *n)
>>  void flush_tlb_domain(struct domain *d)
>>  {
>>  unsigned long flags = 0;
>> +struct vcpu *v = NULL;
>>
>> -/* Update the VTTBR if necessary with the domain d. In this case,
>> - * it's only necessary to flush TLBs on every CPUs with the current VMID
>> - * (our domain).
>> +/*
>> + * Update the VTTBR if necessary with the domain d. In this case, it is only
>> + * necessary to flush TLBs on every CPUs with the current VMID (our
>> + * domain).
>>   */
>>  if ( d != current->domain )
>>  {
>>  local_irq_save(flags);
>> -p2m_load_VTTBR(d);
>> -}
>>
>> -flush_tlb();
>> +/* If altp2m is active, update VTTBR and flush TLBs of every VCPU */
>> +if ( altp2m_active(d) )
>> +{
>> +for_each_vcpu( d, v )
>> +{
>> +p2m_load_altp2m_VTTBR(v);
>> +flush_tlb();
>> +}
>> +}
>> +else
>> +{
>> +p2m_load_VTTBR(d);
>> +flush_tlb();
>> +}
>
> Why do you need to do such a thing? If the VMID is the same, a single
> call to flush_tlb() will nuke all the entries for the given TLBs.
>
> If the VMID is not shared, then I am not even sure why you would need
> to flush the TLBs for all the altp2ms.
>

If the VMID is shared between multiple views and we changed one entry
in one specific view, we might reuse a TLB entry that is not part of the
currently active altp2m view of the domain. And even if we assign unique
VMIDs to the individual altp2m views (as discussed in patch #04), as far
as I understand we will still need to flush the mappings of all altp2ms
at this point, because (AFAIK) changes in individual altp2m views still
need to be propagated to the TLBs. Please correct me if I am wrong on
this point.

Re: [Xen-devel] [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.

2016-07-04 Thread Julien Grall

Hello Sergej,

On 04/07/2016 12:45, Sergej Proskurin wrote:

This commit makes sure that flushing the TLBs of a domain considers all
of the associated altp2m views. Therefore, in case a domain other than
the currently active one shall flush its TLBs, the implementation loops
over the VTTBRs of the different altp2m mappings of every vCPU and
flushes the TLBs for each of them. This way, a change to any of the
altp2m mappings is taken into account. Note that the domain whose TLBs
are to be flushed is not locked at this point.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/p2m.c | 71 --
 1 file changed, 63 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 7e721f9..019f10e 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -15,6 +15,8 @@
 #include 
 #include 

+#include 
+
 #ifdef CONFIG_ARM_64
 static unsigned int __read_mostly p2m_root_order;
 static unsigned int __read_mostly p2m_root_level;
@@ -79,12 +81,41 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
  P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
 }

+static uint64_t p2m_get_altp2m_vttbr(struct vcpu *v)
+{
+struct domain *d = v->domain;
+uint16_t index = vcpu_altp2m(v).p2midx;
+
+if ( index == INVALID_ALTP2M )
+return INVALID_MFN;
+
+BUG_ON(index >= MAX_ALTP2M);
+
+return d->arch.altp2m_vttbr[index];
+}
+
+static void p2m_load_altp2m_VTTBR(struct vcpu *v)


Please try to share code when possible. For instance, a big part
of this helper is similar to p2m_load_VTTBR, assuming you get the p2m
rather than the VTTBR directly.



+{
+struct domain *d = v->domain;
+uint64_t vttbr = p2m_get_altp2m_vttbr(v);
+
+if ( is_idle_domain(d) )
+return;
+
+BUG_ON(vttbr == INVALID_MFN);
+WRITE_SYSREG64(vttbr, VTTBR_EL2);
+
+isb(); /* Ensure update is visible */
+}
+
 static void p2m_load_VTTBR(struct domain *d)
 {
 if ( is_idle_domain(d) )
 return;
+
 BUG_ON(!d->arch.vttbr);
 WRITE_SYSREG64(d->arch.vttbr, VTTBR_EL2);
+


Spurious changes.


 isb(); /* Ensure update is visible */
 }

@@ -101,7 +132,11 @@ void p2m_restore_state(struct vcpu *n)
 WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
 isb();

-p2m_load_VTTBR(n->domain);
+if ( altp2m_active(n->domain) )


This would benefit from an unlikely (maybe within altp2m_active).


+p2m_load_altp2m_VTTBR(n);
+else
+p2m_load_VTTBR(n->domain);
+
 isb();

 if ( is_32bit_domain(n->domain) )
@@ -119,22 +154,42 @@ void p2m_restore_state(struct vcpu *n)
 void flush_tlb_domain(struct domain *d)
 {
 unsigned long flags = 0;
+struct vcpu *v = NULL;

-/* Update the VTTBR if necessary with the domain d. In this case,
- * it's only necessary to flush TLBs on every CPUs with the current VMID
- * (our domain).
+/*
+ * Update the VTTBR if necessary with the domain d. In this case, it is only
+ * necessary to flush TLBs on every CPUs with the current VMID (our
+ * domain).
  */
 if ( d != current->domain )
 {
 local_irq_save(flags);
-p2m_load_VTTBR(d);
-}

-flush_tlb();
+/* If altp2m is active, update VTTBR and flush TLBs of every VCPU */
+if ( altp2m_active(d) )
+{
+for_each_vcpu( d, v )
+{
+p2m_load_altp2m_VTTBR(v);
+flush_tlb();
+}
+}
+else
+{
+p2m_load_VTTBR(d);
+flush_tlb();
+}


Why do you need to do such a thing? If the VMID is the same, a single
call to flush_tlb() will nuke all the entries for the given TLBs.


If the VMID is not shared, then I am not even sure why you would need to 
flush the TLBs for all the altp2ms.


I have looked at Xen with this series applied and noticed that when you
remove a page from the hostp2m, the mapping in the altp2m is not
removed. So the guest may use a page that has been freed previously.
Did I miss something?


Regards,

--
Julien Grall



Re: [Xen-devel] [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.

2016-07-04 Thread Sergej Proskurin


On 07/04/2016 01:45 PM, Sergej Proskurin wrote:
> This commit makes sure that flushing the TLBs of a domain considers all
> of the associated altp2m views. Therefore, in case a domain other than
> the currently active one shall flush its TLBs, the implementation loops
> over the VTTBRs of the different altp2m mappings of every vCPU and
> flushes the TLBs for each of them. This way, a change to any of the
> altp2m mappings is taken into account. Note that the domain whose TLBs
> are to be flushed is not locked at this point.
> 
> Signed-off-by: Sergej Proskurin 
> ---
> Cc: Stefano Stabellini 
> Cc: Julien Grall 
> ---
>  xen/arch/arm/p2m.c | 71 --
>  1 file changed, 63 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 7e721f9..019f10e 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -15,6 +15,8 @@
>  #include 
>  #include 
>  
> +#include 
> +
>  #ifdef CONFIG_ARM_64
>  static unsigned int __read_mostly p2m_root_order;
>  static unsigned int __read_mostly p2m_root_level;
> @@ -79,12 +81,41 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
>   P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
>  }
>  
> +static uint64_t p2m_get_altp2m_vttbr(struct vcpu *v)
> +{
> +struct domain *d = v->domain;
> +uint16_t index = vcpu_altp2m(v).p2midx;
> +
> +if ( index == INVALID_ALTP2M )
> +return INVALID_MFN;
> +
> +BUG_ON(index >= MAX_ALTP2M);
> +
> +return d->arch.altp2m_vttbr[index];
> +}
> +
> +static void p2m_load_altp2m_VTTBR(struct vcpu *v)
> +{
> +struct domain *d = v->domain;
> +uint64_t vttbr = p2m_get_altp2m_vttbr(v);
> +
> +if ( is_idle_domain(d) )
> +return;
> +
> +BUG_ON(vttbr == INVALID_MFN);
> +WRITE_SYSREG64(vttbr, VTTBR_EL2);
> +
> +isb(); /* Ensure update is visible */
> +}
> +
>  static void p2m_load_VTTBR(struct domain *d)
>  {
>  if ( is_idle_domain(d) )
>  return;
> +
>  BUG_ON(!d->arch.vttbr);
>  WRITE_SYSREG64(d->arch.vttbr, VTTBR_EL2);
> +
>  isb(); /* Ensure update is visible */
>  }
>  
> @@ -101,7 +132,11 @@ void p2m_restore_state(struct vcpu *n)
>  WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
>  isb();
>  
> -p2m_load_VTTBR(n->domain);
> +if ( altp2m_active(n->domain) )
> +p2m_load_altp2m_VTTBR(n);
> +else
> +p2m_load_VTTBR(n->domain);
> +
>  isb();
>  
>  if ( is_32bit_domain(n->domain) )
> @@ -119,22 +154,42 @@ void p2m_restore_state(struct vcpu *n)
>  void flush_tlb_domain(struct domain *d)
>  {
>  unsigned long flags = 0;
> +struct vcpu *v = NULL;
>  
> -/* Update the VTTBR if necessary with the domain d. In this case,
> - * it's only necessary to flush TLBs on every CPUs with the current VMID
> - * (our domain).
> +/*
> + * Update the VTTBR if necessary with the domain d. In this case, it is only
> + * necessary to flush TLBs on every CPUs with the current VMID (our
> + * domain).
>   */
>  if ( d != current->domain )
>  {
>  local_irq_save(flags);
> -p2m_load_VTTBR(d);
> -}
>  
> -flush_tlb();
> +/* If altp2m is active, update VTTBR and flush TLBs of every VCPU */
> +if ( altp2m_active(d) )
> +{
> +for_each_vcpu( d, v )
> +{
> +p2m_load_altp2m_VTTBR(v);
> +flush_tlb();
> +}
> +}
> +else
> +{
> +p2m_load_VTTBR(d);
> +flush_tlb();
> +}
> +}
> +else
> +flush_tlb();

Flushing all vCPUs' TLBs must also be performed for the current domain.
Does it make sense to flush the current domain's TLBs at this point as
well? If the current domain's TLBs were modified and need flushing, that
will be done when the VTTBR is loaded on the next VM entry anyway.

>  
>  if ( d != current->domain )
>  {
> -p2m_load_VTTBR(current->domain);
> +/* Make sure altp2m mapping is valid. */
> +if ( altp2m_active(current->domain) )
> +p2m_load_altp2m_VTTBR(current);
> +else
> +p2m_load_VTTBR(current->domain);
>  local_irq_restore(flags);
>  }
>  }
> 

-- 
M.Sc. Sergej Proskurin
Wissenschaftlicher Mitarbeiter

Technische Universität München
Fakultät für Informatik
Lehrstuhl für Sicherheit in der Informatik

Boltzmannstraße 3
85748 Garching (bei München)

Tel. +49 (0)89 289-18592
Fax +49 (0)89 289-18579

prosku...@sec.in.tum.de
www.sec.in.tum.de



[Xen-devel] [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.

2016-07-04 Thread Sergej Proskurin
This commit makes sure that flushing the TLBs of a domain considers all
of the associated altp2m views. Therefore, in case a domain other than
the currently active one shall flush its TLBs, the implementation loops
over the VTTBRs of the different altp2m mappings of every vCPU and
flushes the TLBs for each of them. This way, a change to any of the
altp2m mappings is taken into account. Note that the domain whose TLBs
are to be flushed is not locked at this point.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/p2m.c | 71 --
 1 file changed, 63 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 7e721f9..019f10e 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -15,6 +15,8 @@
 #include 
 #include 
 
+#include 
+
 #ifdef CONFIG_ARM_64
 static unsigned int __read_mostly p2m_root_order;
 static unsigned int __read_mostly p2m_root_level;
@@ -79,12 +81,41 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
  P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
 }
 
+static uint64_t p2m_get_altp2m_vttbr(struct vcpu *v)
+{
+struct domain *d = v->domain;
+uint16_t index = vcpu_altp2m(v).p2midx;
+
+if ( index == INVALID_ALTP2M )
+return INVALID_MFN;
+
+BUG_ON(index >= MAX_ALTP2M);
+
+return d->arch.altp2m_vttbr[index];
+}
+
+static void p2m_load_altp2m_VTTBR(struct vcpu *v)
+{
+struct domain *d = v->domain;
+uint64_t vttbr = p2m_get_altp2m_vttbr(v);
+
+if ( is_idle_domain(d) )
+return;
+
+BUG_ON(vttbr == INVALID_MFN);
+WRITE_SYSREG64(vttbr, VTTBR_EL2);
+
+isb(); /* Ensure update is visible */
+}
+
 static void p2m_load_VTTBR(struct domain *d)
 {
 if ( is_idle_domain(d) )
 return;
+
 BUG_ON(!d->arch.vttbr);
 WRITE_SYSREG64(d->arch.vttbr, VTTBR_EL2);
+
 isb(); /* Ensure update is visible */
 }
 
@@ -101,7 +132,11 @@ void p2m_restore_state(struct vcpu *n)
 WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
 isb();
 
-p2m_load_VTTBR(n->domain);
+if ( altp2m_active(n->domain) )
+p2m_load_altp2m_VTTBR(n);
+else
+p2m_load_VTTBR(n->domain);
+
 isb();
 
 if ( is_32bit_domain(n->domain) )
@@ -119,22 +154,42 @@ void p2m_restore_state(struct vcpu *n)
 void flush_tlb_domain(struct domain *d)
 {
 unsigned long flags = 0;
+struct vcpu *v = NULL;
 
-/* Update the VTTBR if necessary with the domain d. In this case,
- * it's only necessary to flush TLBs on every CPUs with the current VMID
- * (our domain).
+/*
+ * Update the VTTBR if necessary with the domain d. In this case, it is only
+ * necessary to flush TLBs on every CPUs with the current VMID (our
+ * domain).
  */
 if ( d != current->domain )
 {
 local_irq_save(flags);
-p2m_load_VTTBR(d);
-}
 
-flush_tlb();
+/* If altp2m is active, update VTTBR and flush TLBs of every VCPU */
+if ( altp2m_active(d) )
+{
+for_each_vcpu( d, v )
+{
+p2m_load_altp2m_VTTBR(v);
+flush_tlb();
+}
+}
+else
+{
+p2m_load_VTTBR(d);
+flush_tlb();
+}
+}
+else
+flush_tlb();
 
 if ( d != current->domain )
 {
-p2m_load_VTTBR(current->domain);
+/* Make sure altp2m mapping is valid. */
+if ( altp2m_active(current->domain) )
+p2m_load_altp2m_VTTBR(current);
+else
+p2m_load_VTTBR(current->domain);
 local_irq_restore(flags);
 }
 }
-- 
2.8.3




[Xen-devel] [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.

2016-07-04 Thread Sergej Proskurin
This commit makes sure that flushing the TLBs of a domain considers all
of the associated altp2m views. Therefore, in case a domain other than
the currently active one shall flush its TLBs, the implementation loops
over the VTTBRs of the different altp2m mappings of every vCPU and
flushes the TLBs for each of them. This way, a change to any of the
altp2m mappings is taken into account. Note that the domain whose TLBs
are to be flushed is not locked at this point.

Signed-off-by: Sergej Proskurin 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
---
 xen/arch/arm/p2m.c | 71 --
 1 file changed, 63 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 7e721f9..019f10e 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -15,6 +15,8 @@
 #include 
 #include 
 
+#include 
+
 #ifdef CONFIG_ARM_64
 static unsigned int __read_mostly p2m_root_order;
 static unsigned int __read_mostly p2m_root_level;
@@ -79,12 +81,41 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
  P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
 }
 
+static uint64_t p2m_get_altp2m_vttbr(struct vcpu *v)
+{
+struct domain *d = v->domain;
+uint16_t index = vcpu_altp2m(v).p2midx;
+
+if ( index == INVALID_ALTP2M )
+return INVALID_MFN;
+
+BUG_ON(index >= MAX_ALTP2M);
+
+return d->arch.altp2m_vttbr[index];
+}
+
+static void p2m_load_altp2m_VTTBR(struct vcpu *v)
+{
+struct domain *d = v->domain;
+uint64_t vttbr = p2m_get_altp2m_vttbr(v);
+
+if ( is_idle_domain(d) )
+return;
+
+BUG_ON(vttbr == INVALID_MFN);
+WRITE_SYSREG64(vttbr, VTTBR_EL2);
+
+isb(); /* Ensure update is visible */
+}
+
 static void p2m_load_VTTBR(struct domain *d)
 {
 if ( is_idle_domain(d) )
 return;
+
 BUG_ON(!d->arch.vttbr);
 WRITE_SYSREG64(d->arch.vttbr, VTTBR_EL2);
+
 isb(); /* Ensure update is visible */
 }
 
@@ -101,7 +132,11 @@ void p2m_restore_state(struct vcpu *n)
 WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
 isb();
 
-p2m_load_VTTBR(n->domain);
+if ( altp2m_active(n->domain) )
+p2m_load_altp2m_VTTBR(n);
+else
+p2m_load_VTTBR(n->domain);
+
 isb();
 
 if ( is_32bit_domain(n->domain) )
@@ -119,22 +154,42 @@ void p2m_restore_state(struct vcpu *n)
 void flush_tlb_domain(struct domain *d)
 {
 unsigned long flags = 0;
+struct vcpu *v = NULL;
 
-/* Update the VTTBR if necessary with the domain d. In this case,
- * it's only necessary to flush TLBs on every CPUs with the current VMID
- * (our domain).
+/*
+ * Update the VTTBR if necessary with the domain d. In this case, it is only
+ * necessary to flush TLBs on every CPUs with the current VMID (our
+ * domain).
  */
 if ( d != current->domain )
 {
 local_irq_save(flags);
-p2m_load_VTTBR(d);
-}
 
-flush_tlb();
+/* If altp2m is active, update VTTBR and flush TLBs of every VCPU */
+if ( altp2m_active(d) )
+{
+for_each_vcpu( d, v )
+{
+p2m_load_altp2m_VTTBR(v);
+flush_tlb();
+}
+}
+else
+{
+p2m_load_VTTBR(d);
+flush_tlb();
+}
+}
+else
+flush_tlb();
 
 if ( d != current->domain )
 {
-p2m_load_VTTBR(current->domain);
+/* Make sure altp2m mapping is valid. */
+if ( altp2m_active(current->domain) )
+p2m_load_altp2m_VTTBR(current);
+else
+p2m_load_VTTBR(current->domain);
 local_irq_restore(flags);
 }
 }
-- 
2.8.3

