Re: [PATCH 03/26] KVM: arm64: Factor out stage 2 page table data from struct kvm

2020-05-27 Thread Alexandru Elisei
Hi Marc,

On 5/27/20 9:41 AM, Marc Zyngier wrote:
> Hi Alex,
>
> On 2020-05-12 17:53, Alexandru Elisei wrote:
>> Hi,
>>
>> On 5/12/20 12:17 PM, James Morse wrote:
>>> Hi Alex, Marc,
>>>
>>> (just on this last_vcpu_ran thing...)
>>>
>>> On 11/05/2020 17:38, Alexandru Elisei wrote:
 On 4/22/20 1:00 PM, Marc Zyngier wrote:
> From: Christoffer Dall 
>
> As we are about to reuse our stage 2 page table manipulation code for
> shadow stage 2 page tables in the context of nested virtualization, we
> are going to manage multiple stage 2 page tables for a single VM.
>
> This requires some pretty invasive changes to our data structures,
> which moves the vmid and pgd pointers into a separate structure and
> change pretty much all of our mmu code to operate on this structure
> instead.
>
> The new structure is called struct kvm_s2_mmu.
>
> There is no intended functional change by this patch alone.
> diff --git a/arch/arm64/include/asm/kvm_host.h
> b/arch/arm64/include/asm/kvm_host.h
> index 7dd8fefa6aecd..664a5d92ae9b8 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -63,19 +63,32 @@ struct kvm_vmid {
>  u32    vmid;
>  };
>
> -struct kvm_arch {
> +struct kvm_s2_mmu {
>  struct kvm_vmid vmid;
>
> -    /* stage2 entry level table */
> -    pgd_t *pgd;
> -    phys_addr_t pgd_phys;
> -
> -    /* VTCR_EL2 value for this VM */
> -    u64    vtcr;
> +    /*
> + * stage2 entry level table
> + *
> + * Two kvm_s2_mmu structures in the same VM can point to the same pgd
> + * here.  This happens when running a non-VHE guest hypervisor which
> + * uses the canonical stage 2 page table for both vEL2 and for vEL1/0
> + * with vHCR_EL2.VM == 0.
 It makes more sense to me to say that a non-VHE guest hypervisor will use the
 canonical stage *1* page table when running at EL2
>>> Can KVM say anything about stage1? It's totally under the guest's control
>>> even at vEL2...
>>
>> It just occurred to me that "canonical stage 2 page table" refers to the L0
>> hypervisor stage 2, not to the L1 hypervisor stage 2. If you don't mind my
>> suggestion, perhaps the comment can be slightly improved to avoid any 
>> confusion?
>> Maybe something along the lines of "[..] This happens when running a
>> non-VHE guest
>> hypervisor, in which case we use the canonical stage 2 page table for both 
>> vEL2
>> and for vEL1/0 with vHCR_EL2.VM == 0".
>
> If the confusion stems from the lack of guest stage-2, how about:
>
> "This happens when running a guest using a translation regime that isn't
>  affected by its own stage-2 translation, such as a non-VHE hypervisor
>  running at vEL2, or for vEL1/EL0 with vHCR_EL2.VM == 0. In that case,
>  we use the canonical stage-2 page tables."
>
> instead? Does this lift the ambiguity?

Yes, that's perfect.

Thanks,
Alex
>
> Thanks,
>
>     M.
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm



Re: [PATCH 03/26] KVM: arm64: Factor out stage 2 page table data from struct kvm

2020-05-12 Thread Alexandru Elisei
Hi,

On 5/12/20 12:17 PM, James Morse wrote:
> Hi Alex, Marc,
>
> (just on this last_vcpu_ran thing...)
>
> On 11/05/2020 17:38, Alexandru Elisei wrote:
>> On 4/22/20 1:00 PM, Marc Zyngier wrote:
>>> From: Christoffer Dall 
>>>
>>> As we are about to reuse our stage 2 page table manipulation code for
>>> shadow stage 2 page tables in the context of nested virtualization, we
>>> are going to manage multiple stage 2 page tables for a single VM.
>>>
>>> This requires some pretty invasive changes to our data structures,
>>> which moves the vmid and pgd pointers into a separate structure and
>>> change pretty much all of our mmu code to operate on this structure
>>> instead.
>>>
>>> The new structure is called struct kvm_s2_mmu.
>>>
>>> There is no intended functional change by this patch alone.
>>> diff --git a/arch/arm64/include/asm/kvm_host.h 
>>> b/arch/arm64/include/asm/kvm_host.h
>>> index 7dd8fefa6aecd..664a5d92ae9b8 100644
>>> --- a/arch/arm64/include/asm/kvm_host.h
>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>> @@ -63,19 +63,32 @@ struct kvm_vmid {
>>> u32vmid;
>>>  };
>>>  
>>> -struct kvm_arch {
>>> +struct kvm_s2_mmu {
>>> struct kvm_vmid vmid;
>>>  
>>> -   /* stage2 entry level table */
>>> -   pgd_t *pgd;
>>> -   phys_addr_t pgd_phys;
>>> -
>>> -   /* VTCR_EL2 value for this VM */
>>> -   u64vtcr;
>>> +   /*
>>> +* stage2 entry level table
>>> +*
>>> +* Two kvm_s2_mmu structures in the same VM can point to the same pgd
>>> +* here.  This happens when running a non-VHE guest hypervisor which
>>> +* uses the canonical stage 2 page table for both vEL2 and for vEL1/0
>>> +* with vHCR_EL2.VM == 0.
>> It makes more sense to me to say that a non-VHE guest hypervisor will use the
>> canonical stage *1* page table when running at EL2
> Can KVM say anything about stage1? It's totally under the guest's control
> even at vEL2...

It just occurred to me that "canonical stage 2 page table" refers to the L0
hypervisor stage 2, not to the L1 hypervisor stage 2. If you don't mind my
suggestion, perhaps the comment can be slightly improved to avoid any confusion?
Maybe something along the lines of "[..] This happens when running a non-VHE 
guest
hypervisor, in which case we use the canonical stage 2 page table for both vEL2
and for vEL1/0 with vHCR_EL2.VM == 0".

Thanks,
Alex


Re: [PATCH 03/26] KVM: arm64: Factor out stage 2 page table data from struct kvm

2020-05-12 Thread James Morse
Hi Alex,

On 12/05/2020 16:47, Alexandru Elisei wrote:
> On 5/12/20 12:17 PM, James Morse wrote:
>> On 11/05/2020 17:38, Alexandru Elisei wrote:
>>> On 4/22/20 1:00 PM, Marc Zyngier wrote:
 From: Christoffer Dall 

 As we are about to reuse our stage 2 page table manipulation code for
 shadow stage 2 page tables in the context of nested virtualization, we
 are going to manage multiple stage 2 page tables for a single VM.

 This requires some pretty invasive changes to our data structures,
 which moves the vmid and pgd pointers into a separate structure and
 change pretty much all of our mmu code to operate on this structure
 instead.

 The new structure is called struct kvm_s2_mmu.

 There is no intended functional change by this patch alone.
 diff --git a/arch/arm64/include/asm/kvm_host.h 
 b/arch/arm64/include/asm/kvm_host.h
 index 7dd8fefa6aecd..664a5d92ae9b8 100644
 --- a/arch/arm64/include/asm/kvm_host.h
 +++ b/arch/arm64/include/asm/kvm_host.h
 @@ -63,19 +63,32 @@ struct kvm_vmid {
u32vmid;
  };
  
 -struct kvm_arch {
 +struct kvm_s2_mmu {
struct kvm_vmid vmid;
  
 -  /* stage2 entry level table */
 -  pgd_t *pgd;
 -  phys_addr_t pgd_phys;
 -
 -  /* VTCR_EL2 value for this VM */
 -  u64vtcr;
 +  /*
 +   * stage2 entry level table
 +   *
 +   * Two kvm_s2_mmu structures in the same VM can point to the same pgd
 +   * here.  This happens when running a non-VHE guest hypervisor which
 +   * uses the canonical stage 2 page table for both vEL2 and for vEL1/0
 +   * with vHCR_EL2.VM == 0.

>>> It makes more sense to me to say that a non-VHE guest hypervisor will use
>>> the canonical stage *1* page table when running at EL2

>> Can KVM say anything about stage1? It's totally under the guest's control
>> even at vEL2...

> It is. My interpretation of the comment was that if the guest doesn't have
> virtual stage 2 enabled (we're not running a guest of the L1 hypervisor),
> then the L0 host can use the same L0 stage 2 tables because we're running the
> same guest (the L1 VM), regardless of the actual exception level for the
> guest.

I think you're right, but I can't see where stage 1 comes in to it!


> If I remember correctly, KVM assigns different vmids for guests running at
> vEL1/0 and vEL2 with vHCR_EL2.VM == 0 because the translation regimes are
> different, but keeps the same translation tables.

Interesting. Is that because vEL2 really has ASIDs so it needs its own VMID 
space?



>>> (the "Non-secure EL2 translation regime" as ARM DDI 0487F.b calls it on 
>>> page D5-2543).
>>> I think that's
>>> the only situation where vEL2 and vEL1&0 will use the same L0 stage 2 
>>> tables. It's
>>> been quite some time since I reviewed the initial version of the NV 
>>> patches, did I
>>> get that wrong?
>>
 +   */
 +  pgd_t   *pgd;
 +  phys_addr_t pgd_phys;
  
/* The last vcpu id that ran on each physical CPU */
int __percpu *last_vcpu_ran;

>>> It makes sense for the other fields to be part of kvm_s2_mmu, but I'm
>>> struggling to figure out why last_vcpu_ran is here. Would you mind sharing
>>> the rationale? I don't see this change in v1 or v2 of the NV series.

>> Marc may have a better rationale. My thinking was because kvm_vmid is in 
>> here too.
>>
>> last_vcpu_ran exists to prevent KVM accidentally emulating CNP without the
>> opt-in (we call it de facto CNP).
>>
>> The guest may expect to be able to use asid-4 with different page tables on 
>> different

> I'm afraid I don't know what asid-4 is.

Sorry - 4 was just a random number![0]
'to use the same asid number on different vcpus'.


>> vCPUs, assuming the TLB isn't shared. But if KVM is switching between those
>> vCPUs on one physical CPU, the TLB is shared, ... the VMID and ASID are the
>> same, but the page tables are not. Not fun to debug!
>>
>>
>> NV makes this problem per-stage2: because each stage2 has its own VMID, we
>> need to track the vcpu_id that last ran this stage2 on this physical CPU. If
>> it's not the same, we need to blow away this VMID's TLB entries.
>>
>> The workaround lives in virt/kvm/arm/arm.c::kvm_arch_vcpu_load()
> 
> Makes sense, thank you for explaining that.

Great,


Thanks,

James


[0] https://xkcd.com/221/


Re: [PATCH 03/26] KVM: arm64: Factor out stage 2 page table data from struct kvm

2020-05-12 Thread Alexandru Elisei
Hi,

On 5/12/20 12:17 PM, James Morse wrote:
> Hi Alex, Marc,
>
> (just on this last_vcpu_ran thing...)
>
> On 11/05/2020 17:38, Alexandru Elisei wrote:
>> On 4/22/20 1:00 PM, Marc Zyngier wrote:
>>> From: Christoffer Dall 
>>>
>>> As we are about to reuse our stage 2 page table manipulation code for
>>> shadow stage 2 page tables in the context of nested virtualization, we
>>> are going to manage multiple stage 2 page tables for a single VM.
>>>
>>> This requires some pretty invasive changes to our data structures,
>>> which moves the vmid and pgd pointers into a separate structure and
>>> change pretty much all of our mmu code to operate on this structure
>>> instead.
>>>
>>> The new structure is called struct kvm_s2_mmu.
>>>
>>> There is no intended functional change by this patch alone.
>>> diff --git a/arch/arm64/include/asm/kvm_host.h 
>>> b/arch/arm64/include/asm/kvm_host.h
>>> index 7dd8fefa6aecd..664a5d92ae9b8 100644
>>> --- a/arch/arm64/include/asm/kvm_host.h
>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>> @@ -63,19 +63,32 @@ struct kvm_vmid {
>>> u32vmid;
>>>  };
>>>  
>>> -struct kvm_arch {
>>> +struct kvm_s2_mmu {
>>> struct kvm_vmid vmid;
>>>  
>>> -   /* stage2 entry level table */
>>> -   pgd_t *pgd;
>>> -   phys_addr_t pgd_phys;
>>> -
>>> -   /* VTCR_EL2 value for this VM */
>>> -   u64vtcr;
>>> +   /*
>>> +* stage2 entry level table
>>> +*
>>> +* Two kvm_s2_mmu structures in the same VM can point to the same pgd
>>> +* here.  This happens when running a non-VHE guest hypervisor which
>>> +* uses the canonical stage 2 page table for both vEL2 and for vEL1/0
>>> +* with vHCR_EL2.VM == 0.
>> It makes more sense to me to say that a non-VHE guest hypervisor will use the
>> canonical stage *1* page table when running at EL2
> Can KVM say anything about stage1? It's totally under the guest's control
> even at vEL2...

It is. My interpretation of the comment was that if the guest doesn't have
virtual stage 2 enabled (we're not running a guest of the L1 hypervisor), then
the L0 host can use the same L0 stage 2 tables because we're running the same
guest (the L1 VM), regardless of the actual exception level for the guest. If I
remember correctly, KVM assigns different vmids for guests running at vEL1/0
and vEL2 with vHCR_EL2.VM == 0 because the translation regimes are different,
but keeps the same translation tables.

>
>
>> (the "Non-secure EL2 translation regime" as ARM DDI 0487F.b calls it on page 
>> D5-2543).
>> I think that's
>> the only situation where vEL2 and vEL1&0 will use the same L0 stage 2 
>> tables. It's
>> been quite some time since I reviewed the initial version of the NV patches, 
>> did I
>> get that wrong?
>
>>> +*/
>>> +   pgd_t   *pgd;
>>> +   phys_addr_t pgd_phys;
>>>  
>>> /* The last vcpu id that ran on each physical CPU */
>>> int __percpu *last_vcpu_ran;
>> It makes sense for the other fields to be part of kvm_s2_mmu, but I'm
>> struggling to figure out why last_vcpu_ran is here. Would you mind sharing
>> the rationale? I don't see this change in v1 or v2 of the NV series.
> Marc may have a better rationale. My thinking was because kvm_vmid is in here 
> too.
>
> last_vcpu_ran exists to prevent KVM accidentally emulating CNP without the
> opt-in (we call it de facto CNP).
>
> The guest may expect to be able to use asid-4 with different page tables on 
> different

I'm afraid I don't know what asid-4 is.

> vCPUs, assuming the TLB isn't shared. But if KVM is switching between those
> vCPUs on one physical CPU, the TLB is shared, ... the VMID and ASID are the
> same, but the page tables are not. Not fun to debug!
>
>
> NV makes this problem per-stage2: because each stage2 has its own VMID, we
> need to track the vcpu_id that last ran this stage2 on this physical CPU. If
> it's not the same, we need to blow away this VMID's TLB entries.
>
> The workaround lives in virt/kvm/arm/arm.c::kvm_arch_vcpu_load()

Makes sense, thank you for explaining that.

Thanks,
Alex
>
>
>> More below.
> (lightly trimmed!)
>
> Thanks,
>
> James
>
>
>>>  
>>> +   struct kvm *kvm;
>>> +};
> [...]
>
>>> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
>>> index 53b3ba9173ba7..03f01fcfa2bd5 100644
>>> --- a/virt/kvm/arm/arm.c
>>> +++ b/virt/kvm/arm/arm.c
>> There's a comment that still mentions arch.vmid that you missed in this file:
>>
>> static bool need_new_vmid_gen(struct kvm_vmid *vmid)
>> {
>>     u64 current_vmid_gen = atomic64_read(_vmid_gen);
>>     smp_rmb(); /* Orders read of kvm_vmid_gen and kvm->arch.vmid */
>>
> [..]
>
>>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>>> index e3b9ee268823b..2f99749048285 100644
>>> --- a/virt/kvm/arm/mmu.c
>>> +++ b/virt/kvm/arm/mmu.c
>>> @@ -886,21 +898,23 @@ int create_hyp_exec_mappings(phys_addr_t phys_addr, 
>>> size_t size,
>>>  }
>>>  
>>>  /**
>>> - * kvm_alloc_stage2_pgd - allocate level-1 table for stage-2 translation.
>>> - * @kvm:   The 

Re: [PATCH 03/26] KVM: arm64: Factor out stage 2 page table data from struct kvm

2020-05-12 Thread James Morse
Hi Alex, Marc,

(just on this last_vcpu_ran thing...)

On 11/05/2020 17:38, Alexandru Elisei wrote:
> On 4/22/20 1:00 PM, Marc Zyngier wrote:
>> From: Christoffer Dall 
>>
>> As we are about to reuse our stage 2 page table manipulation code for
>> shadow stage 2 page tables in the context of nested virtualization, we
>> are going to manage multiple stage 2 page tables for a single VM.
>>
>> This requires some pretty invasive changes to our data structures,
>> which moves the vmid and pgd pointers into a separate structure and
>> change pretty much all of our mmu code to operate on this structure
>> instead.
>>
>> The new structure is called struct kvm_s2_mmu.
>>
>> There is no intended functional change by this patch alone.

>> diff --git a/arch/arm64/include/asm/kvm_host.h 
>> b/arch/arm64/include/asm/kvm_host.h
>> index 7dd8fefa6aecd..664a5d92ae9b8 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -63,19 +63,32 @@ struct kvm_vmid {
>>  u32vmid;
>>  };
>>  
>> -struct kvm_arch {
>> +struct kvm_s2_mmu {
>>  struct kvm_vmid vmid;
>>  
>> -/* stage2 entry level table */
>> -pgd_t *pgd;
>> -phys_addr_t pgd_phys;
>> -
>> -/* VTCR_EL2 value for this VM */
>> -u64vtcr;
>> +/*
>> + * stage2 entry level table
>> + *
>> + * Two kvm_s2_mmu structures in the same VM can point to the same pgd
>> + * here.  This happens when running a non-VHE guest hypervisor which
>> + * uses the canonical stage 2 page table for both vEL2 and for vEL1/0
>> + * with vHCR_EL2.VM == 0.
> 
> It makes more sense to me to say that a non-VHE guest hypervisor will use the
> canonical stage *1* page table when running at EL2

Can KVM say anything about stage1? It's totally under the guest's control
even at vEL2...


> (the "Non-secure EL2 translation regime" as ARM DDI 0487F.b calls it on page 
> D5-2543).

> I think that's the only situation where vEL2 and vEL1&0 will use the same L0
> stage 2 tables. It's been quite some time since I reviewed the initial
> version of the NV patches, did I get that wrong?


>> + */
>> +pgd_t   *pgd;
>> +phys_addr_t pgd_phys;
>>  
>>  /* The last vcpu id that ran on each physical CPU */
>>  int __percpu *last_vcpu_ran;
> 
> It makes sense for the other fields to be part of kvm_s2_mmu, but I'm
> struggling to figure out why last_vcpu_ran is here. Would you mind sharing
> the rationale? I don't see this change in v1 or v2 of the NV series.

Marc may have a better rationale. My thinking was because kvm_vmid is in here 
too.

last_vcpu_ran exists to prevent KVM accidentally emulating CNP without the
opt-in (we call it de facto CNP).

The guest may expect to be able to use asid-4 with different page tables on
different vCPUs, assuming the TLB isn't shared. But if KVM is switching between
those vCPUs on one physical CPU, the TLB is shared, ... the VMID and ASID are
the same, but the page tables are not. Not fun to debug!


NV makes this problem per-stage2: because each stage2 has its own VMID, we need
to track the vcpu_id that last ran this stage2 on this physical CPU. If it's
not the same, we need to blow away this VMID's TLB entries.

The workaround lives in virt/kvm/arm/arm.c::kvm_arch_vcpu_load()


> More below.

(lightly trimmed!)

Thanks,

James


>>  
>> +struct kvm *kvm;
>> +};

[...]

>> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
>> index 53b3ba9173ba7..03f01fcfa2bd5 100644
>> --- a/virt/kvm/arm/arm.c
>> +++ b/virt/kvm/arm/arm.c
> 
> There's a comment that still mentions arch.vmid that you missed in this file:
> 
> static bool need_new_vmid_gen(struct kvm_vmid *vmid)
> {
>     u64 current_vmid_gen = atomic64_read(_vmid_gen);
>     smp_rmb(); /* Orders read of kvm_vmid_gen and kvm->arch.vmid */
> 

[..]

>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>> index e3b9ee268823b..2f99749048285 100644
>> --- a/virt/kvm/arm/mmu.c
>> +++ b/virt/kvm/arm/mmu.c

>> @@ -886,21 +898,23 @@ int create_hyp_exec_mappings(phys_addr_t phys_addr, 
>> size_t size,
>>  }
>>  
>>  /**
>> - * kvm_alloc_stage2_pgd - allocate level-1 table for stage-2 translation.
>> - * @kvm:The KVM struct pointer for the VM.
>> + * kvm_init_stage2_mmu - Initialise an S2 MMU structure
>> + * @kvm:The pointer to the KVM structure
>> + * @mmu:The pointer to the s2 MMU structure
>>   *
>>   * Allocates only the stage-2 HW PGD level table(s) of size defined by
>> - * stage2_pgd_size(kvm).
>> + * stage2_pgd_size(mmu->kvm).
>>   *
>>   * Note we don't need locking here as this is only called when the VM is
>>   * created, which can only be done once.
>>   */
>> -int kvm_alloc_stage2_pgd(struct kvm *kvm)
>> +int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
>>  {
>>  phys_addr_t pgd_phys;
>>  pgd_t *pgd;
>> +int cpu;
>>  
>> -if (kvm->arch.pgd != NULL) {
>> +if (mmu->pgd != NULL) {
>>  kvm_err("kvm_arch already 

Re: [PATCH 03/26] KVM: arm64: Factor out stage 2 page table data from struct kvm

2020-05-11 Thread Alexandru Elisei
Hi,

On 4/22/20 1:00 PM, Marc Zyngier wrote:
> From: Christoffer Dall 
>
> As we are about to reuse our stage 2 page table manipulation code for
> shadow stage 2 page tables in the context of nested virtualization, we
> are going to manage multiple stage 2 page tables for a single VM.
>
> This requires some pretty invasive changes to our data structures,
> which moves the vmid and pgd pointers into a separate structure and
> change pretty much all of our mmu code to operate on this structure
> instead.
>
> The new structure is called struct kvm_s2_mmu.
>
> There is no intended functional change by this patch alone.
>
> [Designed data structure layout in collaboration]
> Signed-off-by: Christoffer Dall 
> Co-developed-by: Marc Zyngier 
> [maz: Moved the last_vcpu_ran down to the S2 MMU structure as well]
> Signed-off-by: Marc Zyngier 
> ---
>  arch/arm64/include/asm/kvm_asm.h  |   5 +-
>  arch/arm64/include/asm/kvm_host.h |  30 +++-
>  arch/arm64/include/asm/kvm_mmu.h  |  16 +-
>  arch/arm64/kvm/hyp/switch.c   |   8 +-
>  arch/arm64/kvm/hyp/tlb.c  |  48 +++---
>  virt/kvm/arm/arm.c|  32 +---
>  virt/kvm/arm/mmu.c| 266 +-
>  7 files changed, 219 insertions(+), 186 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_asm.h 
> b/arch/arm64/include/asm/kvm_asm.h
> index 7c7eeeaab9faa..5adf4e1a4c2c9 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -53,6 +53,7 @@
>  
>  struct kvm;
>  struct kvm_vcpu;
> +struct kvm_s2_mmu;
>  
>  extern char __kvm_hyp_init[];
>  extern char __kvm_hyp_init_end[];
> @@ -60,8 +61,8 @@ extern char __kvm_hyp_init_end[];
>  extern char __kvm_hyp_vector[];
>  
>  extern void __kvm_flush_vm_context(void);
> -extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
> -extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
> +extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t 
> ipa);
> +extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
>  extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
>  
>  extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high);
> diff --git a/arch/arm64/include/asm/kvm_host.h 
> b/arch/arm64/include/asm/kvm_host.h
> index 7dd8fefa6aecd..664a5d92ae9b8 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -63,19 +63,32 @@ struct kvm_vmid {
>   u32vmid;
>  };
>  
> -struct kvm_arch {
> +struct kvm_s2_mmu {
>   struct kvm_vmid vmid;
>  
> - /* stage2 entry level table */
> - pgd_t *pgd;
> - phys_addr_t pgd_phys;
> -
> - /* VTCR_EL2 value for this VM */
> - u64vtcr;
> + /*
> +  * stage2 entry level table
> +  *
> +  * Two kvm_s2_mmu structures in the same VM can point to the same pgd
> +  * here.  This happens when running a non-VHE guest hypervisor which
> +  * uses the canonical stage 2 page table for both vEL2 and for vEL1/0
> +  * with vHCR_EL2.VM == 0.

It makes more sense to me to say that a non-VHE guest hypervisor will use the
canonical stage *1* page table when running at EL2 (the "Non-secure EL2
translation regime" as ARM DDI 0487F.b calls it on page D5-2543). I think that's
the only situation where vEL2 and vEL1&0 will use the same L0 stage 2 tables.
It's been quite some time since I reviewed the initial version of the NV
patches, did I get that wrong?

> +  */
> + pgd_t   *pgd;
> + phys_addr_t pgd_phys;
>  
>   /* The last vcpu id that ran on each physical CPU */
>   int __percpu *last_vcpu_ran;

It makes sense for the other fields to be part of kvm_s2_mmu, but I'm struggling
to figure out why last_vcpu_ran is here. Would you mind sharing the rationale? I
don't see this change in v1 or v2 of the NV series.

More below.

>  
> + struct kvm *kvm;
> +};
> +
> +struct kvm_arch {
> + struct kvm_s2_mmu mmu;
> +
> + /* VTCR_EL2 value for this VM */
> + u64vtcr;
> +
>   /* The maximum number of vCPUs depends on the used GIC model */
>   int max_vcpus;
>  
> @@ -255,6 +268,9 @@ struct kvm_vcpu_arch {
>   void *sve_state;
>   unsigned int sve_max_vl;
>  
> + /* Stage 2 paging state used by the hardware on next switch */
> + struct kvm_s2_mmu *hw_mmu;
> +
>   /* HYP configuration */
>   u64 hcr_el2;
>   u32 mdcr_el2;
> diff --git a/arch/arm64/include/asm/kvm_mmu.h 
> b/arch/arm64/include/asm/kvm_mmu.h
> index 5ba1310639ec6..c6c8eee008d66 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -154,8 +154,8 @@ int create_hyp_exec_mappings(phys_addr_t phys_addr, 
> size_t size,
>  void free_hyp_pgds(void);
>  
>  void stage2_unmap_vm(struct kvm *kvm);
> -int kvm_alloc_stage2_pgd(struct kvm *kvm);
> -void kvm_free_stage2_pgd(struct kvm *kvm);
> +int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu);
> +void kvm_free_stage2_pgd(struct kvm_s2_mmu 

Re: [PATCH 03/26] KVM: arm64: Factor out stage 2 page table data from struct kvm

2020-05-06 Thread Marc Zyngier
On Tue, 05 May 2020 18:59:56 +0100
Marc Zyngier  wrote:

Hi James,

> > But accessing VTCR is why the stage2_dissolve_p?d() stuff still
> > needs the kvm pointer, hence the backreference... it might be neater
> > to push the vtcr properties into kvm_s2_mmu that way you could drop
> > the kvm backref, and only things that take vm-wide locks would need
> > the kvm pointer. But I don't think it matters.  
> 
> That's an interesting consideration. I'll have a look.

So I went back and forth on this (the joys of not sleeping), and decided
to keep the host's VTCR_EL2 where it is today (in the kvm structure).
Two reasons for this:

- This field is part of the host configuration. Moving it to the S2 MMU
  structure muddies the waters a bit once you start nesting, as this
  structure really describes an inner guest context. It has its own
  associated VTCR, which lives in the sysreg file, and it becomes a bit
  confusing to look at a kvm_s2_mmu structure in isolation and wonder
  whether this field is directly related to the PTs in this structure,
  or to something else.

- It duplicates state. If there is one thing I have learned over the
  past years, it is that you should keep a given state in one single
  place at all times. Granted, VTCR doesn't change over the lifetime of
  the guest, but still.

I guess the one thing that would push me to the other side of the
debate is if we can show that the amount of pointer chasing generated
by the mmu->kvm->vtcr dance is causing actual performance issues. So
far, I haven't measured such an impact.

Thanks,

M.
-- 
Jazz is not dead. It just smells funny...


Re: [PATCH 03/26] KVM: arm64: Factor out stage 2 page table data from struct kvm

2020-05-05 Thread Andrew Scull
> > > + /* VTCR_EL2 value for this VM */
> > > + u64vtcr;
> > 
> > VTCR seems quite strongly tied to the MMU config. Is it not controlled
> > independently for the nested MMUs and so remains in this struct?
> 
> This particular instance of VTCR_EL2 is the host's version. Which
> > means it describes the virtual HW for the EL1 guest. It constrains,
> among other things, the number of IPA bits for the guest, for example,
> and is configured by the VMM.
> 
> Once you start nesting, each vcpu has its own VTCR_EL2 which is still
> constrained by the main one (no nested guest can have a T0SZ bigger
> than the value imposed by userspace for this guest as a whole).
> 
> Does it make sense?

It does up to my ignorance of the spec in this regard.

Similar to James's question, should `vtcr` live inside the mmu struct
with the top level `kvm::mmu` field containing the host's version and
the nested mmus containing the nested version of vtcr to be applied to
the vCPU? I didn't notice there being a vtcr for the nested version in
the ~90-patch series, so maybe that just isn't something that needs
thinking about?


Re: [PATCH 03/26] KVM: arm64: Factor out stage 2 page table data from struct kvm

2020-05-05 Thread Andrew Scull
Having a go at reviewing. Might turn out to be more useful as a learning
exercise for me rather than useful feedback but we've got to start
somewhere..

> -struct kvm_arch {
> +struct kvm_s2_mmu {
>   struct kvm_vmid vmid;
>  
> - /* stage2 entry level table */
> - pgd_t *pgd;
> - phys_addr_t pgd_phys;
> -
> - /* VTCR_EL2 value for this VM */
> - u64vtcr;
> + /*
> +  * stage2 entry level table
> +  *
> +  * Two kvm_s2_mmu structures in the same VM can point to the same pgd
> +  * here.  This happens when running a non-VHE guest hypervisor which
> +  * uses the canonical stage 2 page table for both vEL2 and for vEL1/0
> +  * with vHCR_EL2.VM == 0.
> +  */
> + pgd_t   *pgd;
> + phys_addr_t pgd_phys;
>  
>   /* The last vcpu id that ran on each physical CPU */
>   int __percpu *last_vcpu_ran;
>  
> + struct kvm *kvm;
> +};
> +
> +struct kvm_arch {
> + struct kvm_s2_mmu mmu;
> +
> + /* VTCR_EL2 value for this VM */
> + u64vtcr;

VTCR seems quite strongly tied to the MMU config. Is it not controlled
independently for the nested MMUs and so remains in this struct?

> -static void stage2_dissolve_pmd(struct kvm *kvm, phys_addr_t addr, pmd_t 
> *pmd)
> +static void stage2_dissolve_pmd(struct kvm_s2_mmu *mmu, phys_addr_t addr, 
> pmd_t *pmd)

How strictly is the long line style rule enforced? checkpatch has 16
such warnings on this patch.

> -static void stage2_dissolve_pud(struct kvm *kvm, phys_addr_t addr, pud_t 
> *pudp)
> +static void stage2_dissolve_pud(struct kvm_s2_mmu *mmu, phys_addr_t addr, 
> pud_t *pudp)
>  {
> + struct kvm *kvm __maybe_unused = mmu->kvm;
> +
>   if (!stage2_pud_huge(kvm, *pudp))
>   return;

There're a couple of places with `__maybe_unused` on variables that are
then used soon after. Can they be dropped in these cases so as not to
hide legitimate warnings?


Re: [PATCH 03/26] KVM: arm64: Factor out stage 2 page table data from struct kvm

2020-05-05 Thread Marc Zyngier
On Tue, 05 May 2020 18:23:51 +0100,
Andrew Scull  wrote:
> 
> > > > +   /* VTCR_EL2 value for this VM */
> > > > +   u64vtcr;
> > > 
> > > VTCR seems quite strongly tied to the MMU config. Is it not controlled
> > > independently for the nested MMUs and so remains in this struct?
> > 
> > This particular instance of VTCR_EL2 is the host's version. Which
> > means it describes the virtual HW for the EL1 guest. It constrains,
> > among other things, the number of IPA bits for the guest,
> > and is configured by the VMM.
> > 
> > Once you start nesting, each vcpu has its own VTCR_EL2 which is still
> > constrained by the main one (no nested guest can have a T0SZ bigger
> > than the value imposed by userspace for this guest as a whole).
> > 
> > Does it make sense?
> 
> It does up to my ignorance of the spec in this regard.
> 
> Similar to James's question, should `vtcr` live inside the mmu struct
> with the top level `kvm::mmu` field containing the host's version and
> the nested mmus containing the nested version of vtcr to be applied to
> the vCPU? I didn't notice there being a vtcr for the nested version in
> the ~90-patch series so maybe that just isn't something that needs
> thinking about?

They serve two very different purposes. One defines the virtual HW,
the other one is the view that a guest hypervisor gives to its own
guests. The latter is also per-vcpu, and not per VM (yes, NV implies
the "de-facto CnP", for those who remember an intense whiteboard
session).  It thus lives in the system register file (being private to
each vcpu). Another reason is that the HW can directly access the
in-memory version of VTCR_EL2 when ARMv8.4-NV is present.

Thanks,

M.

-- 
Jazz is not dead, it just smells funny.


Re: [PATCH 03/26] KVM: arm64: Factor out stage 2 page table data from struct kvm

2020-05-05 Thread Marc Zyngier
Hi James,

On Tue, 05 May 2020 17:03:15 +0100,
James Morse  wrote:
> 
> Hi Marc,
> 
> On 22/04/2020 13:00, Marc Zyngier wrote:
> > From: Christoffer Dall 
> > 
> > As we are about to reuse our stage 2 page table manipulation code for
> > shadow stage 2 page tables in the context of nested virtualization, we
> > are going to manage multiple stage 2 page tables for a single VM.
> > 
> > This requires some pretty invasive changes to our data structures,
> > which moves the vmid and pgd pointers into a separate structure and
> > change pretty much all of our mmu code to operate on this structure
> > instead.
> > 
> > The new structure is called struct kvm_s2_mmu.
> > 
> > There is no intended functional change by this patch alone.
> 
> It's not obvious to me that VTCR_EL2.T0SZ is a per-vm thing; today
> the size of the IPA space comes from the VMM, it's not a
> hardware/compile-time property. Where does the vEL2's T0SZ go?
> ... but using this for nested guests would 'only' cause a
> translation fault, it would still need handling in the emulation
> code. So making it per-vm should be simpler.

My reasoning is that this VTCR defines the virtual HW, and the guest's
own VTCR_EL2 is just another guest system register. It is the role of
the NV code to compose the two in a way that makes sense (delivering
translation faults if the NV guest's S2 output range doesn't fit in
the host's view of the VM IPA range).

> But accessing VTCR is why the stage2_dissolve_p?d() stuff still
> needs the kvm pointer, hence the backreference... it might be neater
> to push the vtcr properties into kvm_s2_mmu that way you could drop
> the kvm backref, and only things that take vm-wide locks would need
> the kvm pointer. But I don't think it matters.

That's an interesting consideration. I'll have a look.

> I think I get it. I can't see anything that should be the other
> vm/vcpu pointer.
> 
> Reviewed-by: James Morse 

Thanks!

> Some boring fiddly stuff:
> 
> [...]
> 
> > @@ -125,24 +123,24 @@ static void __hyp_text 
> > __tlb_switch_to_host_nvhe(struct kvm *kvm,
> > }
> >  }
> >  
> > -static void __hyp_text __tlb_switch_to_host(struct kvm *kvm,
> > +static void __hyp_text __tlb_switch_to_host(struct kvm_s2_mmu *mmu,
> > struct tlb_inv_context *cxt)
> >  {
> > if (has_vhe())
> > -   __tlb_switch_to_host_vhe(kvm, cxt);
> > +   __tlb_switch_to_host_vhe(cxt);
> > else
> > -   __tlb_switch_to_host_nvhe(kvm, cxt);
> > +   __tlb_switch_to_host_nvhe(cxt);
> >  }
> 
> What does __tlb_switch_to_host() need the kvm_s2_mmu for?

Not much. Obviously mechanical conversion of kvm->kvm_s2_mmu, and not
finishing the job. I'll fix that.

> 
> [...]
> 
> 
> >  void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
> >  {
> > -   struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm);
> > +   struct kvm_s2_mmu *mmu = kern_hyp_va(kern_hyp_va(vcpu)->arch.hw_mmu);
> > struct tlb_inv_context cxt;
> >
> > /* Switch to requested VMID */
> > -   __tlb_switch_to_guest(kvm, );
> > +   __tlb_switch_to_guest(mmu, );
> >
> > __tlbi(vmalle1);
> > dsb(nsh);
> > isb();
> >
> > -   __tlb_switch_to_host(kvm, );
> > +   __tlb_switch_to_host(mmu, );
> >  }
> 
> Does this need the vcpu in the future?
> It's the odd one out, the other tlb functions here take the s2_mmu, or nothing.
> We only use the s2_mmu here.

I think this was done as a way not to impact the 32bit code (rest in
peace). Definitely a candidate for immediate cleanup.

> 
> [...]
> 
> 
> > diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> > index e3b9ee268823b..2f99749048285 100644
> > --- a/virt/kvm/arm/mmu.c
> > +++ b/virt/kvm/arm/mmu.c
> 
> > @@ -96,31 +96,33 @@ static bool kvm_is_device_pfn(unsigned long pfn)
> >   *
> >   * Function clears a PMD entry, flushes addr 1st and 2nd stage TLBs.
> >   */
> > -static void stage2_dissolve_pmd(struct kvm *kvm, phys_addr_t addr, pmd_t 
> > *pmd)
> > +static void stage2_dissolve_pmd(struct kvm_s2_mmu *mmu, phys_addr_t addr, 
> > pmd_t *pmd)
> 
> The comment above this function still has '@kvm:  pointer to kvm 
> structure.'
> 
> [...]
> 
> 
> > @@ -331,8 +339,9 @@ static void unmap_stage2_puds(struct kvm *kvm, pgd_t 
> > *pgd,
> >   * destroying the VM), otherwise another faulting VCPU may come in and mess
> >   * with things behind our backs.
> >   */
> > -static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 
> > size)
> > +static void unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, 
> > u64 size)
> 
> The comment above this function still has '@kvm:   The VM pointer'
> 
> [...]
> 
> > -static void stage2_flush_memslot(struct kvm *kvm,
> > +static void stage2_flush_memslot(struct kvm_s2_mmu *mmu,
> >  struct kvm_memory_slot *memslot)
> >  {
> 
> Wouldn't something manipulating a memslot have to mess with a set of
> kvm_s2_mmu once this is all assembled?  stage2_unmap_memslot() takes
> 

Re: [PATCH 03/26] KVM: arm64: Factor out stage 2 page table data from struct kvm

2020-05-05 Thread Marc Zyngier
Hi Andrew,

On Tue, 05 May 2020 16:26:48 +0100,
Andrew Scull  wrote:
> 
> Having a go at reviewing. Might turn out to be more useful as a learning
> exercise for me rather than useful feedback but we've got to start
> somewhere..

Thanks for making the effort. Asking questions is never a pointless
exercise, as it usually means that something isn't as crystal clear as
the author expects... ;-)

> 
> > -struct kvm_arch {
> > +struct kvm_s2_mmu {
> > struct kvm_vmid vmid;
> >  
> > -   /* stage2 entry level table */
> > -   pgd_t *pgd;
> > -   phys_addr_t pgd_phys;
> > -
> > -   /* VTCR_EL2 value for this VM */
> > -   u64vtcr;
> > +   /*
> > +* stage2 entry level table
> > +*
> > +* Two kvm_s2_mmu structures in the same VM can point to the same pgd
> > +* here.  This happens when running a non-VHE guest hypervisor which
> > +* uses the canonical stage 2 page table for both vEL2 and for vEL1/0
> > +* with vHCR_EL2.VM == 0.
> > +*/
> > +   pgd_t   *pgd;
> > +   phys_addr_t pgd_phys;
> >  
> > /* The last vcpu id that ran on each physical CPU */
> > int __percpu *last_vcpu_ran;
> >  
> > +   struct kvm *kvm;
> > +};
> > +
> > +struct kvm_arch {
> > +   struct kvm_s2_mmu mmu;
> > +
> > +   /* VTCR_EL2 value for this VM */
> > +   u64vtcr;
> 
> VTCR seems quite strongly tied to the MMU config. Is it not controlled
> independently for the nested MMUs and so remains in this struct?

This particular instance of VTCR_EL2 is the host's version. Which
means it describes the virtual HW for the EL1 guest. It constrains,
among other things, the number of IPA bits for the guest,
and is configured by the VMM.

Once you start nesting, each vcpu has its own VTCR_EL2 which is still
constrained by the main one (no nested guest can have a T0SZ bigger
than the value imposed by userspace for this guest as a whole).

Does it make sense?

> 
> > -static void stage2_dissolve_pmd(struct kvm *kvm, phys_addr_t addr, pmd_t 
> > *pmd)
> > +static void stage2_dissolve_pmd(struct kvm_s2_mmu *mmu, phys_addr_t addr, 
> > pmd_t *pmd)
> 
> How strictly is the long line style rule enforced? checkpatch has 16
> such warnings on this patch.

It isn't enforced at all for KVM/arm. I am perfectly happy with
longish lines (I stupidly gave away my vt100 a very long time ago).
In general, checkpatch warnings are to be looked into (it sometimes
brings interesting stuff up), but this falls into the *cosmetic*
department, and I cannot be bothered.

> 
> > -static void stage2_dissolve_pud(struct kvm *kvm, phys_addr_t addr, pud_t 
> > *pudp)
> > +static void stage2_dissolve_pud(struct kvm_s2_mmu *mmu, phys_addr_t addr, 
> > pud_t *pudp)
> >  {
> > +   struct kvm *kvm __maybe_unused = mmu->kvm;
> > +
> > if (!stage2_pud_huge(kvm, *pudp))
> > return;
> 
> There're a couple of places with `__maybe_unused` on variables that are
> then used soon after. Can they be dropped in these cases so as not to
> hide legitimate warnings?

Absolutely. I'll have a look.

Thanks for the review!

M.

-- 
Jazz is not dead, it just smells funny.


Re: [PATCH 03/26] KVM: arm64: Factor out stage 2 page table data from struct kvm

2020-05-05 Thread James Morse
Hi Marc,

On 22/04/2020 13:00, Marc Zyngier wrote:
> From: Christoffer Dall 
> 
> As we are about to reuse our stage 2 page table manipulation code for
> shadow stage 2 page tables in the context of nested virtualization, we
> are going to manage multiple stage 2 page tables for a single VM.
> 
> This requires some pretty invasive changes to our data structures,
> which moves the vmid and pgd pointers into a separate structure and
> change pretty much all of our mmu code to operate on this structure
> instead.
> 
> The new structure is called struct kvm_s2_mmu.
> 
> There is no intended functional change by this patch alone.

It's not obvious to me that VTCR_EL2.T0SZ is a per-vm thing; today the size of
the IPA space comes from the VMM, it's not a hardware/compile-time property.
Where does the vEL2's T0SZ go? ... but using this for nested guests would
'only' cause a translation fault, it would still need handling in the
emulation code. So making it per-vm should be simpler.

But accessing VTCR is why the stage2_dissolve_p?d() stuff still needs the kvm
pointer, hence the backreference... it might be neater to push the vtcr
properties into kvm_s2_mmu; that way you could drop the kvm backref, and only
things that take vm-wide locks would need the kvm pointer. But I don't think
it matters.


I think I get it. I can't see anything that should be the other vm/vcpu pointer.

Reviewed-by: James Morse 


Some boring fiddly stuff:

[...]

> @@ -125,24 +123,24 @@ static void __hyp_text __tlb_switch_to_host_nvhe(struct 
> kvm *kvm,
>   }
>  }
>  
> -static void __hyp_text __tlb_switch_to_host(struct kvm *kvm,
> +static void __hyp_text __tlb_switch_to_host(struct kvm_s2_mmu *mmu,
>   struct tlb_inv_context *cxt)
>  {
>   if (has_vhe())
> - __tlb_switch_to_host_vhe(kvm, cxt);
> + __tlb_switch_to_host_vhe(cxt);
>   else
> - __tlb_switch_to_host_nvhe(kvm, cxt);
> + __tlb_switch_to_host_nvhe(cxt);
>  }

What does __tlb_switch_to_host() need the kvm_s2_mmu for?

[...]


>  void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
>  {
> - struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm);
> + struct kvm_s2_mmu *mmu = kern_hyp_va(kern_hyp_va(vcpu)->arch.hw_mmu);
>   struct tlb_inv_context cxt;
>
>   /* Switch to requested VMID */
> - __tlb_switch_to_guest(kvm, );
> + __tlb_switch_to_guest(mmu, );
>
>   __tlbi(vmalle1);
>   dsb(nsh);
>   isb();
>
> - __tlb_switch_to_host(kvm, );
> + __tlb_switch_to_host(mmu, );
>  }

Does this need the vcpu in the future?
It's the odd one out, the other tlb functions here take the s2_mmu, or nothing.
We only use the s2_mmu here.

[...]


> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index e3b9ee268823b..2f99749048285 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c

> @@ -96,31 +96,33 @@ static bool kvm_is_device_pfn(unsigned long pfn)
>   *
>   * Function clears a PMD entry, flushes addr 1st and 2nd stage TLBs.
>   */
> -static void stage2_dissolve_pmd(struct kvm *kvm, phys_addr_t addr, pmd_t 
> *pmd)
> +static void stage2_dissolve_pmd(struct kvm_s2_mmu *mmu, phys_addr_t addr, 
> pmd_t *pmd)

The comment above this function still has '@kvm: pointer to kvm structure.'

[...]


> @@ -331,8 +339,9 @@ static void unmap_stage2_puds(struct kvm *kvm, pgd_t *pgd,
>   * destroying the VM), otherwise another faulting VCPU may come in and mess
>   * with things behind our backs.
>   */
> -static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
> +static void unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, 
> u64 size)

The comment above this function still has '@kvm:   The VM pointer'

[...]

> -static void stage2_flush_memslot(struct kvm *kvm,
> +static void stage2_flush_memslot(struct kvm_s2_mmu *mmu,
>struct kvm_memory_slot *memslot)
>  {

Wouldn't something manipulating a memslot have to mess with a set of
kvm_s2_mmu once this is all assembled? stage2_unmap_memslot() takes struct
kvm; it seems odd to pass one kvm_s2_mmu here.

[...]

> @@ -886,21 +898,23 @@ int create_hyp_exec_mappings(phys_addr_t phys_addr, 
> size_t size,

> -int kvm_alloc_stage2_pgd(struct kvm *kvm)
> +int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
>  {
>   phys_addr_t pgd_phys;
>   pgd_t *pgd;
> + int cpu;
>  
> - if (kvm->arch.pgd != NULL) {
> + if (mmu->pgd != NULL) {
>   kvm_err("kvm_arch already initialized?\n");

Does this error message still make sense?


>   return -EINVAL;
>   }

[...]

> @@ -1439,9 +1467,10 @@ static void stage2_wp_ptes(pmd_t *pmd, phys_addr_t 
> addr, phys_addr_t end)
>   * @addr:range start address
>   * @end: range end address
>   */
> -static void stage2_wp_pmds(struct kvm *kvm, pud_t *pud,
> +static void stage2_wp_pmds(struct kvm_s2_mmu *mmu, pud_t *pud,
>