Hi Marc,

On 22/02/2019 09:18, Marc Zyngier wrote:
> On Thu, 21 Feb 2019 11:02:56 +0000
> Julien Grall <julien.gr...@arm.com> wrote:
>
> Hi Julien,
>
>> Hi Christoffer,

>> On 24/01/2019 14:00, Christoffer Dall wrote:
>>> Note that to avoid mapping the kvm_vmid_bits variable into hyp, we
>>> simply forego the masking of the vmid value in kvm_get_vttbr and rely on
>>> update_vmid to always assign a valid vmid value (within the supported
>>> range).
>>
>> [...]

>>> -       kvm->arch.vmid = kvm_next_vmid;
>>> +       vmid->vmid = kvm_next_vmid;
>>>         kvm_next_vmid++;
>>> -       kvm_next_vmid &= (1 << kvm_vmid_bits) - 1;
>>> -
>>> -       /* update vttbr to be used with the new vmid */
>>> -       pgd_phys = virt_to_phys(kvm->arch.pgd);
>>> -       BUG_ON(pgd_phys & ~kvm_vttbr_baddr_mask(kvm));
>>> -       vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits);
>>> -       kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid | cnp;
>>> +       kvm_next_vmid &= (1 << kvm_get_vmid_bits()) - 1;

>> The arm64 version of kvm_get_vmid_bits does not look cheap. Indeed, it
>> requires reading the sanitized value of SYS_ID_AA64MMFR1_EL1, which is
>> implemented using bsearch.
>>
>> So wouldn't it be better to keep the kvm_vmid_bits variable for use in
>> update_vttbr()?

> How often does this happen? Can you measure this overhead at all?
>
> My understanding is that we hit this path on rollover only, having IPIed
> all CPUs and invalidated all TLBs. I seriously doubt you can observe
> any sort of overhead at all, given that it is so incredibly rare. But
> feel free to prove me wrong!

That would happen on roll-over and the first time you allocate a VMID for a VM.

I am planning to run some tests with 3-bit VMIDs and provide the results next week.

Cheers,

--
Julien Grall
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
