Re: [PATCH 06/14] KVM: arm/arm64: Factor out VMID into struct kvm_vmid

2019-02-22 Thread Marc Zyngier
On Fri, 22 Feb 2019 11:42:46 +
Julien Grall  wrote:

> Hi Marc,
> 
> On 22/02/2019 09:18, Marc Zyngier wrote:
> > On Thu, 21 Feb 2019 11:02:56 +
> > Julien Grall  wrote:
> > 
> > Hi Julien,
> >   
> >> Hi Christoffer,
> >>
> >> On 24/01/2019 14:00, Christoffer Dall wrote:  
> >>> Note that to avoid mapping the kvm_vmid_bits variable into hyp, we
> >>> simply forego the masking of the vmid value in kvm_get_vttbr and rely on
> >>> update_vmid to always assign a valid vmid value (within the supported
> >>> range).  
> >>
> >> [...]
> >>  
> >>> - kvm->arch.vmid = kvm_next_vmid;
> >>> + vmid->vmid = kvm_next_vmid;
> >>>   kvm_next_vmid++;
> >>> - kvm_next_vmid &= (1 << kvm_vmid_bits) - 1;
> >>> -
> >>> - /* update vttbr to be used with the new vmid */
> >>> - pgd_phys = virt_to_phys(kvm->arch.pgd);
> >>> - BUG_ON(pgd_phys & ~kvm_vttbr_baddr_mask(kvm));
> >>> - vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits);
> >>> - kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid | cnp;
> >>> + kvm_next_vmid &= (1 << kvm_get_vmid_bits()) - 1;  
> >>
> >> The arm64 version of kvm_get_vmid_bits does not look cheap. Indeed it
> >> requires reading the sanitized value of SYS_ID_AA64MMFR1_EL1, which is
> >> implemented using bsearch.
> >>
> >> So wouldn't it be better to keep the kvm_vmid_bits variable for use in
> >> update_vttbr()?  
> > 
> > How often does this happen? Can you measure this overhead at all?
> > 
> > My understanding is that we hit this path on rollover only, having IPIed
> > all CPUs and invalidated all TLBs. I seriously doubt you can observe
> > any sort of overhead at all, given that it is so incredibly rare. But
> > feel free to prove me wrong!  
> 
> That would happen on roll-over and the first time you allocate a VMID for
> the VM.
> 
> I am planning to run some tests with 3-bit VMIDs and provide the results
> next week.

Sure, but who implements 3-bit VMIDs? I'm only interested in
performance on real HW, and the minimal implementation allowed is
8 bits.

So don't bother testing this with such contrived conditions. Test it
for real, on a system that can run enough stuff concurrently to quickly
exhaust its VMID space and provoke rollovers.

Alternatively, measure the time it takes to create a single VM that
exits immediately. On its own, that'd be a useful unit test for
extremely short-lived VMs.

Thanks,

M.
-- 
Without deviation from the norm, progress is not possible.
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH 06/14] KVM: arm/arm64: Factor out VMID into struct kvm_vmid

2019-02-22 Thread Julien Grall

Hi Marc,

On 22/02/2019 09:18, Marc Zyngier wrote:

On Thu, 21 Feb 2019 11:02:56 +
Julien Grall  wrote:

Hi Julien,


Hi Christoffer,

On 24/01/2019 14:00, Christoffer Dall wrote:

Note that to avoid mapping the kvm_vmid_bits variable into hyp, we
simply forego the masking of the vmid value in kvm_get_vttbr and rely on
update_vmid to always assign a valid vmid value (within the supported
range).


[...]


-   kvm->arch.vmid = kvm_next_vmid;
+   vmid->vmid = kvm_next_vmid;
kvm_next_vmid++;
-   kvm_next_vmid &= (1 << kvm_vmid_bits) - 1;
-
-   /* update vttbr to be used with the new vmid */
-   pgd_phys = virt_to_phys(kvm->arch.pgd);
-   BUG_ON(pgd_phys & ~kvm_vttbr_baddr_mask(kvm));
-   vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits);
-   kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid | cnp;
+   kvm_next_vmid &= (1 << kvm_get_vmid_bits()) - 1;


The arm64 version of kvm_get_vmid_bits does not look cheap. Indeed it requires
reading the sanitized value of SYS_ID_AA64MMFR1_EL1, which is implemented using
bsearch.

So wouldn't it be better to keep the kvm_vmid_bits variable for use in
update_vttbr()?


How often does this happen? Can you measure this overhead at all?

My understanding is that we hit this path on rollover only, having IPIed
all CPUs and invalidated all TLBs. I seriously doubt you can observe
any sort of overhead at all, given that it is so incredibly rare. But
feel free to prove me wrong!


That would happen on roll-over and the first time you allocate a VMID for the VM.

I am planning to run some tests with 3-bit VMIDs and provide the results next week.

Cheers,

--
Julien Grall


Re: [PATCH 06/14] KVM: arm/arm64: Factor out VMID into struct kvm_vmid

2019-02-22 Thread Marc Zyngier
On Thu, 21 Feb 2019 11:02:56 +
Julien Grall  wrote:

Hi Julien,

> Hi Christoffer,
> 
> On 24/01/2019 14:00, Christoffer Dall wrote:
> > Note that to avoid mapping the kvm_vmid_bits variable into hyp, we
> > simply forego the masking of the vmid value in kvm_get_vttbr and rely on
> > update_vmid to always assign a valid vmid value (within the supported
> > range).  
> 
> [...]
> 
> > -   kvm->arch.vmid = kvm_next_vmid;
> > +   vmid->vmid = kvm_next_vmid;
> > kvm_next_vmid++;
> > -   kvm_next_vmid &= (1 << kvm_vmid_bits) - 1;
> > -
> > -   /* update vttbr to be used with the new vmid */
> > -   pgd_phys = virt_to_phys(kvm->arch.pgd);
> > -   BUG_ON(pgd_phys & ~kvm_vttbr_baddr_mask(kvm));
> > -   vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits);
> > -   kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid | cnp;
> > +   kvm_next_vmid &= (1 << kvm_get_vmid_bits()) - 1;  
> 
> The arm64 version of kvm_get_vmid_bits does not look cheap. Indeed it
> requires reading the sanitized value of SYS_ID_AA64MMFR1_EL1, which is
> implemented using bsearch.
> 
> So wouldn't it be better to keep the kvm_vmid_bits variable for use in
> update_vttbr()?

How often does this happen? Can you measure this overhead at all?

My understanding is that we hit this path on rollover only, having IPIed
all CPUs and invalidated all TLBs. I seriously doubt you can observe
any sort of overhead at all, given that it is so incredibly rare. But
feel free to prove me wrong!

Thanks,

M.
-- 
Without deviation from the norm, progress is not possible.


Re: [PATCH 06/14] KVM: arm/arm64: Factor out VMID into struct kvm_vmid

2019-02-21 Thread Julien Grall
Hi Christoffer,

On 24/01/2019 14:00, Christoffer Dall wrote:
> Note that to avoid mapping the kvm_vmid_bits variable into hyp, we
> simply forego the masking of the vmid value in kvm_get_vttbr and rely on
> update_vmid to always assign a valid vmid value (within the supported
> range).

[...]

> - kvm->arch.vmid = kvm_next_vmid;
> + vmid->vmid = kvm_next_vmid;
>   kvm_next_vmid++;
> - kvm_next_vmid &= (1 << kvm_vmid_bits) - 1;
> -
> - /* update vttbr to be used with the new vmid */
> - pgd_phys = virt_to_phys(kvm->arch.pgd);
> - BUG_ON(pgd_phys & ~kvm_vttbr_baddr_mask(kvm));
> - vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits);
> - kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid | cnp;
> + kvm_next_vmid &= (1 << kvm_get_vmid_bits()) - 1;

The arm64 version of kvm_get_vmid_bits does not look cheap. Indeed it requires
reading the sanitized value of SYS_ID_AA64MMFR1_EL1, which is implemented using
bsearch.

So wouldn't it be better to keep the kvm_vmid_bits variable for use in
update_vttbr()?

Cheers,

-- 
Julien Grall


Re: [PATCH 06/14] KVM: arm/arm64: Factor out VMID into struct kvm_vmid

2019-01-31 Thread Marc Zyngier
On 25/01/2019 11:05, Julien Thierry wrote:
> Hi Christoffer,
> 
> On 24/01/2019 14:00, Christoffer Dall wrote:
>> In preparation for nested virtualization where we are going to have more
>> than a single VMID per VM, let's factor out the VMID data into a
>> separate VMID data structure and change the VMID allocator to operate on
>> this new structure instead of using a struct kvm.
>>
>> This also means that update_vttbr now becomes update_vmid, and that the
>> vttbr itself is generated on the fly based on the stage 2 page table
>> base address and the vmid.
>>
>> We cache the physical address of the pgd when allocating the pgd to
>> avoid doing the calculation on every entry to the guest and to avoid
>> calling into potentially non-hyp-mapped code from hyp/EL2.
>>
>> If we wanted to merge the VMID allocator with the arm64 ASID allocator
>> at some point in the future, it should actually become easier to do that
>> after this patch.
>>
>> Note that to avoid mapping the kvm_vmid_bits variable into hyp, we
>> simply forego the masking of the vmid value in kvm_get_vttbr and rely on
>> update_vmid to always assign a valid vmid value (within the supported
>> range).
>>
>> Signed-off-by: Christoffer Dall 
>> Reviewed-by: Marc Zyngier 
>> ---
>>  arch/arm/include/asm/kvm_host.h   | 13 ---
>>  arch/arm/include/asm/kvm_mmu.h| 11 ++
>>  arch/arm/kvm/hyp/switch.c |  2 +-
>>  arch/arm/kvm/hyp/tlb.c|  4 +--
>>  arch/arm64/include/asm/kvm_host.h |  9 +++--
>>  arch/arm64/include/asm/kvm_hyp.h  |  3 +-
>>  arch/arm64/include/asm/kvm_mmu.h  | 11 ++
>>  virt/kvm/arm/arm.c | 57 +++
>>  virt/kvm/arm/mmu.c |  2 ++
>>  9 files changed, 63 insertions(+), 49 deletions(-)
>>
> 
> [...]
> 
>> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
>> index 3a875fc1b63c..fadbd9ad3a90 100644
>> --- a/arch/arm/include/asm/kvm_mmu.h
>> +++ b/arch/arm/include/asm/kvm_mmu.h
>> @@ -426,6 +426,17 @@ static inline bool kvm_cpu_has_cnp(void)
>>  return false;
>>  }
>>  
>> +static __always_inline u64 kvm_get_vttbr(struct kvm *kvm)
>> +{
>> +struct kvm_vmid *vmid = &kvm->arch.vmid;
>> +u64 vmid_field, baddr;
>> +u64 cnp = kvm_cpu_has_cnp() ? VTTBR_CNP_BIT : 0;
>> +
> 
> Nit:
> As James pointed out, we're not merging this one with the 64-bit
> version. The question is, since we don't merge it, can't we simplify
> this one by removing the CNP related code from it? (we know CNP is
> always false for 32-bit ARM).

We certainly can.

> 
>> +baddr = kvm->arch.pgd_phys;
>> +vmid_field = (u64)vmid->vmid << VTTBR_VMID_SHIFT;
>> +return kvm_phys_to_vttbr(baddr) | vmid_field | cnp;
>> +}
>> +
>>  #endif  /* !__ASSEMBLY__ */
> 
> Otherwise, things look good to me:
> 
> Reviewed-by: Julien Thierry 

Thanks,

M.
-- 
Jazz is not dead. It just smells funny...


Re: [PATCH 06/14] KVM: arm/arm64: Factor out VMID into struct kvm_vmid

2019-01-25 Thread Julien Thierry
Hi Christoffer,

On 24/01/2019 14:00, Christoffer Dall wrote:
> In preparation for nested virtualization where we are going to have more
> than a single VMID per VM, let's factor out the VMID data into a
> separate VMID data structure and change the VMID allocator to operate on
> this new structure instead of using a struct kvm.
> 
> This also means that update_vttbr now becomes update_vmid, and that the
> vttbr itself is generated on the fly based on the stage 2 page table
> base address and the vmid.
> 
> We cache the physical address of the pgd when allocating the pgd to
> avoid doing the calculation on every entry to the guest and to avoid
> calling into potentially non-hyp-mapped code from hyp/EL2.
> 
> If we wanted to merge the VMID allocator with the arm64 ASID allocator
> at some point in the future, it should actually become easier to do that
> after this patch.
> 
> Note that to avoid mapping the kvm_vmid_bits variable into hyp, we
> simply forego the masking of the vmid value in kvm_get_vttbr and rely on
> update_vmid to always assign a valid vmid value (within the supported
> range).
> 
> Signed-off-by: Christoffer Dall 
> Reviewed-by: Marc Zyngier 
> ---
>  arch/arm/include/asm/kvm_host.h   | 13 ---
>  arch/arm/include/asm/kvm_mmu.h| 11 ++
>  arch/arm/kvm/hyp/switch.c |  2 +-
>  arch/arm/kvm/hyp/tlb.c|  4 +--
>  arch/arm64/include/asm/kvm_host.h |  9 +++--
>  arch/arm64/include/asm/kvm_hyp.h  |  3 +-
>  arch/arm64/include/asm/kvm_mmu.h  | 11 ++
>  virt/kvm/arm/arm.c | 57 +++
>  virt/kvm/arm/mmu.c |  2 ++
>  9 files changed, 63 insertions(+), 49 deletions(-)
> 

[...]

> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> index 3a875fc1b63c..fadbd9ad3a90 100644
> --- a/arch/arm/include/asm/kvm_mmu.h
> +++ b/arch/arm/include/asm/kvm_mmu.h
> @@ -426,6 +426,17 @@ static inline bool kvm_cpu_has_cnp(void)
>   return false;
>  }
>  
> +static __always_inline u64 kvm_get_vttbr(struct kvm *kvm)
> +{
> + struct kvm_vmid *vmid = &kvm->arch.vmid;
> + u64 vmid_field, baddr;
> + u64 cnp = kvm_cpu_has_cnp() ? VTTBR_CNP_BIT : 0;
> +

Nit:
As James pointed out, we're not merging this one with the 64-bit
version. The question is, since we don't merge it, can't we simplify
this one by removing the CNP related code from it? (we know CNP is
always false for 32-bit ARM).

> + baddr = kvm->arch.pgd_phys;
> + vmid_field = (u64)vmid->vmid << VTTBR_VMID_SHIFT;
> + return kvm_phys_to_vttbr(baddr) | vmid_field | cnp;
> +}
> +
>  #endif   /* !__ASSEMBLY__ */

Otherwise, things look good to me:

Reviewed-by: Julien Thierry 

Cheers,

-- 
Julien Thierry


Re: [PATCH 06/14] KVM: arm/arm64: Factor out VMID into struct kvm_vmid

2019-01-25 Thread Marc Zyngier
Hi James,

Thanks for looking into this.

On Thu, 24 Jan 2019 19:01:57 +,
James Morse  wrote:
> 
> Hi guys,
> 
> (CC: +Suzuki)
> 
> On 24/01/2019 14:00, Christoffer Dall wrote:
> > In preparation for nested virtualization where we are going to have more
> > than a single VMID per VM, let's factor out the VMID data into a
> > separate VMID data structure and change the VMID allocator to operate on
> > this new structure instead of using a struct kvm.
> > 
> > This also means that update_vttbr now becomes update_vmid, and that the
> > vttbr itself is generated on the fly based on the stage 2 page table
> > base address and the vmid.
> > 
> > We cache the physical address of the pgd when allocating the pgd to
> > avoid doing the calculation on every entry to the guest and to avoid
> > calling into potentially non-hyp-mapped code from hyp/EL2.
> > 
> > If we wanted to merge the VMID allocator with the arm64 ASID allocator
> > at some point in the future, it should actually become easier to do that
> > after this patch.
> 
> > Note that to avoid mapping the kvm_vmid_bits variable into hyp, we
> > simply forego the masking of the vmid value in kvm_get_vttbr and rely on
> > update_vmid to always assign a valid vmid value (within the supported
> > range).
> 
> 
> > diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> > index 8af4b1befa42..189d93461d33 100644
> > --- a/arch/arm64/include/asm/kvm_mmu.h
> > +++ b/arch/arm64/include/asm/kvm_mmu.h
> > @@ -596,5 +596,16 @@ static inline bool kvm_cpu_has_cnp(void)
> > return system_supports_cnp();
> >  }
> >  
> > +static __always_inline u64 kvm_get_vttbr(struct kvm *kvm)
> > +{
> > +   struct kvm_vmid *vmid = &kvm->arch.vmid;
> > +   u64 vmid_field, baddr;
> > +   u64 cnp = kvm_cpu_has_cnp() ? VTTBR_CNP_BIT : 0;
> > +
> > +   baddr = kvm->arch.pgd_phys;
> > +   vmid_field = (u64)vmid->vmid << VTTBR_VMID_SHIFT;
> > +   return kvm_phys_to_vttbr(baddr) | vmid_field | cnp;
> > +}
> 
> (The 32-bit version is the same ... but I guess there is nowhere to put it!)

Yes, that's the usual conundrum.

> 
> 
> > diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> > index 3dd240ea9e76..b77db673bb03 100644
> > --- a/virt/kvm/arm/arm.c
> > +++ b/virt/kvm/arm/arm.c
> > @@ -536,18 +529,12 @@ static void update_vttbr(struct kvm *kvm)
> > kvm_call_hyp(__kvm_flush_vm_context);
> > }
> >  
> > -   kvm->arch.vmid = kvm_next_vmid;
> > +   vmid->vmid = kvm_next_vmid;
> > kvm_next_vmid++;
> > -   kvm_next_vmid &= (1 << kvm_vmid_bits) - 1;
> > -
> > -   /* update vttbr to be used with the new vmid */
> > -   pgd_phys = virt_to_phys(kvm->arch.pgd);
> 
> > -   BUG_ON(pgd_phys & ~kvm_vttbr_baddr_mask(kvm));
> 
> Where did this go? (escaped during a turbulent rebase?)
> 
> This removes the only caller of kvm_vttbr_baddr_mask()... It looks
> like this is a safety check that the stage2-pgd is correctly aligned
> when the IPA size, and thus the size of the top-level entry, can be set
> by user-space.
> 
> ... or is it unnecessary if alloc_pages_exact() can only return naturally
> aligned groups of pages? (which I haven't checked)

The rationale is indeed that alloc_pages_exact allocates a power-of-2
number of pages, and then frees the unrequested tail pages. This means
that we're always correctly aligned.

> Keeping it sounds like a good thing to have in case we accidentally merge
> stage2/host-stage1 pgd's somewhere down the line.

This seems hard to achieve for this very reason, as level-0
concatenation gets in the way of unifying the two allocators.

> (Suzuki suggested it would make more sense in kvm_alloc_stage2_pgd(), where we
> could fail vm creation, instead of BUG()ing).
> 
> (this was added by e55cac5bf2a9c ("kvm: arm/arm64: Prepare for VM specific
> stage2 translations"), the bulk of the logic is in 595583306434c ("kvm: arm64:
> Dynamic configuration of VTTBR mask"))

We could bring it back if people have a strong feeling about this, but
certainly not as a BUG_ON(). Failing gracefully in
kvm_alloc_stage2_pgd seems a more palatable solution, but I can't
convince myself that we need it.

Thanks,

M.

-- 
Jazz is not dead, it just smells funny.


Re: [PATCH 06/14] KVM: arm/arm64: Factor out VMID into struct kvm_vmid

2019-01-24 Thread James Morse
Hi guys,

(CC: +Suzuki)

On 24/01/2019 14:00, Christoffer Dall wrote:
> In preparation for nested virtualization where we are going to have more
> than a single VMID per VM, let's factor out the VMID data into a
> separate VMID data structure and change the VMID allocator to operate on
> this new structure instead of using a struct kvm.
> 
> This also means that update_vttbr now becomes update_vmid, and that the
> vttbr itself is generated on the fly based on the stage 2 page table
> base address and the vmid.
> 
> We cache the physical address of the pgd when allocating the pgd to
> avoid doing the calculation on every entry to the guest and to avoid
> calling into potentially non-hyp-mapped code from hyp/EL2.
> 
> If we wanted to merge the VMID allocator with the arm64 ASID allocator
> at some point in the future, it should actually become easier to do that
> after this patch.

> Note that to avoid mapping the kvm_vmid_bits variable into hyp, we
> simply forego the masking of the vmid value in kvm_get_vttbr and rely on
> update_vmid to always assign a valid vmid value (within the supported
> range).


> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 8af4b1befa42..189d93461d33 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -596,5 +596,16 @@ static inline bool kvm_cpu_has_cnp(void)
>   return system_supports_cnp();
>  }
>  
> +static __always_inline u64 kvm_get_vttbr(struct kvm *kvm)
> +{
> + struct kvm_vmid *vmid = &kvm->arch.vmid;
> + u64 vmid_field, baddr;
> + u64 cnp = kvm_cpu_has_cnp() ? VTTBR_CNP_BIT : 0;
> +
> + baddr = kvm->arch.pgd_phys;
> + vmid_field = (u64)vmid->vmid << VTTBR_VMID_SHIFT;
> + return kvm_phys_to_vttbr(baddr) | vmid_field | cnp;
> +}

(The 32-bit version is the same ... but I guess there is nowhere to put it!)


> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 3dd240ea9e76..b77db673bb03 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -536,18 +529,12 @@ static void update_vttbr(struct kvm *kvm)
>   kvm_call_hyp(__kvm_flush_vm_context);
>   }
>  
> - kvm->arch.vmid = kvm_next_vmid;
> + vmid->vmid = kvm_next_vmid;
>   kvm_next_vmid++;
> - kvm_next_vmid &= (1 << kvm_vmid_bits) - 1;
> -
> - /* update vttbr to be used with the new vmid */
> - pgd_phys = virt_to_phys(kvm->arch.pgd);

> - BUG_ON(pgd_phys & ~kvm_vttbr_baddr_mask(kvm));

Where did this go? (escaped during a turbulent rebase?)

This removes the only caller of kvm_vttbr_baddr_mask()... It looks like this is
a safety check that the stage2-pgd is correctly aligned when the IPA size, and
thus the size of the top-level entry, can be set by user-space.

... or is it unnecessary if alloc_pages_exact() can only return naturally
aligned groups of pages? (which I haven't checked)

Keeping it sounds like a good thing to have in case we accidentally merge
stage2/host-stage1 pgd's somewhere down the line.

(Suzuki suggested it would make more sense in kvm_alloc_stage2_pgd(), where we
could fail vm creation, instead of BUG()ing).

(this was added by e55cac5bf2a9c ("kvm: arm/arm64: Prepare for VM specific
stage2 translations"), the bulk of the logic is in 595583306434c ("kvm: arm64:
Dynamic configuration of VTTBR mask"))


> - vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits);
> - kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid | cnp;
> + kvm_next_vmid &= (1 << kvm_get_vmid_bits()) - 1;
>  
>   smp_wmb();
> - WRITE_ONCE(kvm->arch.vmid_gen, atomic64_read(&kvm_vmid_gen));
> + WRITE_ONCE(vmid->vmid_gen, atomic64_read(&kvm_vmid_gen));
>  
>   spin_unlock(&kvm_vmid_lock);
>  }


Thanks,

James


[PATCH 06/14] KVM: arm/arm64: Factor out VMID into struct kvm_vmid

2019-01-24 Thread Christoffer Dall
In preparation for nested virtualization where we are going to have more
than a single VMID per VM, let's factor out the VMID data into a
separate VMID data structure and change the VMID allocator to operate on
this new structure instead of using a struct kvm.

This also means that update_vttbr now becomes update_vmid, and that the
vttbr itself is generated on the fly based on the stage 2 page table
base address and the vmid.

We cache the physical address of the pgd when allocating the pgd to
avoid doing the calculation on every entry to the guest and to avoid
calling into potentially non-hyp-mapped code from hyp/EL2.

If we wanted to merge the VMID allocator with the arm64 ASID allocator
at some point in the future, it should actually become easier to do that
after this patch.

Note that to avoid mapping the kvm_vmid_bits variable into hyp, we
simply forego the masking of the vmid value in kvm_get_vttbr and rely on
update_vmid to always assign a valid vmid value (within the supported
range).

Signed-off-by: Christoffer Dall 
Reviewed-by: Marc Zyngier 
---
 arch/arm/include/asm/kvm_host.h   | 13 ---
 arch/arm/include/asm/kvm_mmu.h| 11 ++
 arch/arm/kvm/hyp/switch.c |  2 +-
 arch/arm/kvm/hyp/tlb.c|  4 +--
 arch/arm64/include/asm/kvm_host.h |  9 +++--
 arch/arm64/include/asm/kvm_hyp.h  |  3 +-
 arch/arm64/include/asm/kvm_mmu.h  | 11 ++
 virt/kvm/arm/arm.c | 57 +++
 virt/kvm/arm/mmu.c |  2 ++
 9 files changed, 63 insertions(+), 49 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 43e343e00fb8..8073267dc4a0 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -57,10 +57,13 @@ int __attribute_const__ kvm_target_cpu(void);
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
 void kvm_reset_coprocs(struct kvm_vcpu *vcpu);
 
-struct kvm_arch {
-   /* VTTBR value associated with below pgd and vmid */
-   u64 vttbr;
+struct kvm_vmid {
+   /* The VMID generation used for the virt. memory system */
+   u64 vmid_gen;
+   u32 vmid;
+};
 
+struct kvm_arch {
/* The last vcpu id that ran on each physical CPU */
int __percpu *last_vcpu_ran;
 
@@ -70,11 +73,11 @@ struct kvm_arch {
 */
 
/* The VMID generation used for the virt. memory system */
-   u64 vmid_gen;
-   u32 vmid;
+   struct kvm_vmid vmid;
 
/* Stage-2 page table */
pgd_t *pgd;
+   phys_addr_t pgd_phys;
 
/* Interrupt controller */
    struct vgic_dist vgic;
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 3a875fc1b63c..fadbd9ad3a90 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -426,6 +426,17 @@ static inline bool kvm_cpu_has_cnp(void)
return false;
 }
 
+static __always_inline u64 kvm_get_vttbr(struct kvm *kvm)
+{
+   struct kvm_vmid *vmid = &kvm->arch.vmid;
+   u64 vmid_field, baddr;
+   u64 cnp = kvm_cpu_has_cnp() ? VTTBR_CNP_BIT : 0;
+
+   baddr = kvm->arch.pgd_phys;
+   vmid_field = (u64)vmid->vmid << VTTBR_VMID_SHIFT;
+   return kvm_phys_to_vttbr(baddr) | vmid_field | cnp;
+}
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ARM_KVM_MMU_H__ */
diff --git a/arch/arm/kvm/hyp/switch.c b/arch/arm/kvm/hyp/switch.c
index acf1c37fa49c..3b058a5d7c5f 100644
--- a/arch/arm/kvm/hyp/switch.c
+++ b/arch/arm/kvm/hyp/switch.c
@@ -77,7 +77,7 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
 static void __hyp_text __activate_vm(struct kvm_vcpu *vcpu)
 {
struct kvm *kvm = kern_hyp_va(vcpu->kvm);
-   write_sysreg(kvm->arch.vttbr, VTTBR);
+   write_sysreg(kvm_get_vttbr(kvm), VTTBR);
write_sysreg(vcpu->arch.midr, VPIDR);
 }
 
diff --git a/arch/arm/kvm/hyp/tlb.c b/arch/arm/kvm/hyp/tlb.c
index c0edd450e104..8e4afba73635 100644
--- a/arch/arm/kvm/hyp/tlb.c
+++ b/arch/arm/kvm/hyp/tlb.c
@@ -41,7 +41,7 @@ void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm)
 
/* Switch to requested VMID */
kvm = kern_hyp_va(kvm);
-   write_sysreg(kvm->arch.vttbr, VTTBR);
+   write_sysreg(kvm_get_vttbr(kvm), VTTBR);
isb();
 
write_sysreg(0, TLBIALLIS);
@@ -61,7 +61,7 @@ void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm);
 
/* Switch to requested VMID */
-   write_sysreg(kvm->arch.vttbr, VTTBR);
+   write_sysreg(kvm_get_vttbr(kvm), VTTBR);
isb();
 
write_sysreg(0, TLBIALL);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f497bb31031f..444dd1cb1958 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -57,16 +57,19 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
 int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext);
 void