Re: [Android-virt] [PATCH v5 04/13] ARM: KVM: Memory virtualization setup
On 12/12/2011 04:49 PM, Avi Kivity wrote:
> On 12/12/2011 05:25 PM, Peter Maydell wrote:
>> On 12 December 2011 15:15, Avi Kivity a...@redhat.com wrote:
>>> We need to differentiate in how Linux-as-a-guest acts and how the cpu is supposed to work. A guest operating system can theoretically assign the ASID x to process A running on vcpu 0, and the same ASID x to process B running on vcpu 1
>>
>> That would be a guest bug. From the ARM ARM: "For a symmetric multiprocessor cluster where a single operating system is running on the set of processing elements, ARMv7 requires all ASID values to be assigned uniquely within any single Inner Shareable domain. In other words, each ASID value must have the same meaning to all processing elements in the system."
>
> Thanks. So per-vm vmids should work.

We were playing with a VMID recycling patch based on the assumption that per-cpu VMIDs were possible, which would have had the advantage of recycling VMIDs without much complicated locking (inspired by the KVM SVM implementation). However, we killed it with fire and hot plasma when it became clear it violated the ARM spec...

On the other hand, maybe we could do something with per-vcpu VMIDs, but with proper synchronization across physical CPUs in order to be compatible with the spec, while at the same time potentially allowing a buggy guest to run. Since in practice many CPUs will not share TLB (and instruction cache) structures, it is possible that there is software out there that violates the spec without having problems on real hardware.

Anyway, VMID reuse will be available soon, and the difference between a per-VM and a per-vcpu implementation is a couple of trivial lines of code.
--
To unsubscribe from this list: send the line "unsubscribe kvm" in the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [Android-virt] [PATCH v5 04/13] ARM: KVM: Memory virtualization setup
On Tue, Dec 13, 2011 at 12:10 PM, Antonios Motakis a.mota...@virtualopensystems.com wrote:
> On 12/12/2011 04:49 PM, Avi Kivity wrote:
>> On 12/12/2011 05:25 PM, Peter Maydell wrote:
>>> On 12 December 2011 15:15, Avi Kivity a...@redhat.com wrote:
>>>> We need to differentiate in how Linux-as-a-guest acts and how the cpu is supposed to work. A guest operating system can theoretically assign the ASID x to process A running on vcpu 0, and the same ASID x to process B running on vcpu 1
>>>
>>> That would be a guest bug. From the ARM ARM: "For a symmetric multiprocessor cluster where a single operating system is running on the set of processing elements, ARMv7 requires all ASID values to be assigned uniquely within any single Inner Shareable domain. In other words, each ASID value must have the same meaning to all processing elements in the system."
>>
>> Thanks. So per-vm vmids should work.
>
> We were playing with a VMID recycling patch based on the assumption that per-cpu VMIDs were possible, which would have had the advantage of recycling VMIDs without much complicated locking (inspired by the KVM SVM implementation). However, we killed it with fire and hot plasma when it became clear it violated the ARM spec...
>
> On the other hand, maybe we could do something with per-vcpu VMIDs, but with proper synchronization across physical CPUs in order to be compatible with the spec, while at the same time potentially allowing a buggy guest to run. Since in practice many CPUs will not share TLB (and instruction cache) structures, it is possible that there is software out there that violates the spec without having problems on real hardware.
>
> Anyway, VMID reuse will be available soon, and the difference between a per-VM and a per-vcpu implementation is a couple of trivial lines of code.

Yes, this is going to be a simple per-VM implementation that flushes TLBs on roll-over for the next patch series. Let's leave it at that for now!
Re: [PATCH v5 04/13] ARM: KVM: Memory virtualization setup
On 12/11/2011 12:24 PM, Christoffer Dall wrote:
> This commit introduces the framework for guest memory management through the use of 2nd stage translation. Each VM has a pointer to a level-1 table (the pgd field in struct kvm_arch) which is used for the 2nd stage translations. Entries are added when handling guest faults (later patch) and the table itself can be allocated and freed through the following functions implemented in arch/arm/kvm/arm_mmu.c:
>  - kvm_alloc_stage2_pgd(struct kvm *kvm);
>  - kvm_free_stage2_pgd(struct kvm *kvm);
>
> Further, each entry in TLBs and caches is tagged with a VMID identifier in addition to ASIDs. The VMIDs are managed using a bitmap and assigned when creating the VM in kvm_arch_init_vm() where the 2nd stage pgd is also allocated. The table is freed in kvm_arch_destroy_vm(). Both functions are called from the main KVM code.
>
>  struct kvm_arch {
> -	pgd_t *pgd;	/* 1-level 2nd stage table */
> +	u32    vmid;	/* The VMID used for the virt. memory system */
> +	pgd_t *pgd;	/* 1-level 2nd stage table */
> +	u64    vttbr;	/* VTTBR value associated with above pgd and vmid */
>  };

I can't say I have a solid grasp here, but my feeling is that vmid needs to be per-vcpu. Otherwise vcpu 1 can migrate to a cpu that previously ran vcpu 0, and reuse its tlb since they have the same vmid.

--
error compiling committee.c: too many arguments to function
Re: [Android-virt] [PATCH v5 04/13] ARM: KVM: Memory virtualization setup
On Mon, Dec 12, 2011 at 9:40 AM, Avi Kivity a...@redhat.com wrote:
> On 12/11/2011 12:24 PM, Christoffer Dall wrote:
>> This commit introduces the framework for guest memory management through the use of 2nd stage translation. Each VM has a pointer to a level-1 table (the pgd field in struct kvm_arch) which is used for the 2nd stage translations. Entries are added when handling guest faults (later patch) and the table itself can be allocated and freed through the following functions implemented in arch/arm/kvm/arm_mmu.c:
>>  - kvm_alloc_stage2_pgd(struct kvm *kvm);
>>  - kvm_free_stage2_pgd(struct kvm *kvm);
>>
>> Further, each entry in TLBs and caches is tagged with a VMID identifier in addition to ASIDs. The VMIDs are managed using a bitmap and assigned when creating the VM in kvm_arch_init_vm() where the 2nd stage pgd is also allocated. The table is freed in kvm_arch_destroy_vm(). Both functions are called from the main KVM code.
>>
>>  struct kvm_arch {
>> -	pgd_t *pgd;	/* 1-level 2nd stage table */
>> +	u32    vmid;	/* The VMID used for the virt. memory system */
>> +	pgd_t *pgd;	/* 1-level 2nd stage table */
>> +	u64    vttbr;	/* VTTBR value associated with above pgd and vmid */
>>  };
>
> I can't say I have a solid grasp here, but my feeling is that vmid needs to be per-vcpu. Otherwise vcpu 1 can migrate to a cpu that previously ran vcpu 0, and reuse its tlb since they have the same vmid.

According to the ARM guys, the same rules apply as for ASIDs, which results in one VMID per VM. The VMID is checked along with a guest-specific ASID, so two vcpus would only share TLB entries when running the same guest process, which I assume is fine. Actually desirable. But, granted, my SMP knowledge is limited so far, so if Marc or Catalin feels like chipping in here...
Re: [Android-virt] [PATCH v5 04/13] ARM: KVM: Memory virtualization setup
On 12/12/2011 05:09 PM, Christoffer Dall wrote:
> On Mon, Dec 12, 2011 at 9:40 AM, Avi Kivity a...@redhat.com wrote:
>> On 12/11/2011 12:24 PM, Christoffer Dall wrote:
>>> This commit introduces the framework for guest memory management through the use of 2nd stage translation. Each VM has a pointer to a level-1 table (the pgd field in struct kvm_arch) which is used for the 2nd stage translations. Entries are added when handling guest faults (later patch) and the table itself can be allocated and freed through the following functions implemented in arch/arm/kvm/arm_mmu.c:
>>>  - kvm_alloc_stage2_pgd(struct kvm *kvm);
>>>  - kvm_free_stage2_pgd(struct kvm *kvm);
>>>
>>> Further, each entry in TLBs and caches is tagged with a VMID identifier in addition to ASIDs. The VMIDs are managed using a bitmap and assigned when creating the VM in kvm_arch_init_vm() where the 2nd stage pgd is also allocated. The table is freed in kvm_arch_destroy_vm(). Both functions are called from the main KVM code.
>>>
>>>  struct kvm_arch {
>>> -	pgd_t *pgd;	/* 1-level 2nd stage table */
>>> +	u32    vmid;	/* The VMID used for the virt. memory system */
>>> +	pgd_t *pgd;	/* 1-level 2nd stage table */
>>> +	u64    vttbr;	/* VTTBR value associated with above pgd and vmid */
>>>  };
>>
>> I can't say I have a solid grasp here, but my feeling is that vmid needs to be per-vcpu. Otherwise vcpu 1 can migrate to a cpu that previously ran vcpu 0, and reuse its tlb since they have the same vmid.
>
> According to the ARM guys, the same rules apply as for ASIDs, which results in one VMID per VM. The VMID is checked along with a guest-specific ASID, so two vcpus would only share TLB entries when running the same guest process, which I assume is fine. Actually desirable. But, granted, my SMP knowledge is limited so far, so if Marc or Catalin feels like chipping in here...

We need to differentiate in how Linux-as-a-guest acts and how the cpu is supposed to work.

A guest operating system can theoretically assign the ASID x to process A running on vcpu 0, and the same ASID x to process B running on vcpu 1, and be sure that TLB entries don't leak from A to B since the TLB is cpu local (is that in fact correct?). Does the ARM arch allow this, or does it forbid running different contexts on different vcpus with the same ASID?
Re: [Android-virt] [PATCH v5 04/13] ARM: KVM: Memory virtualization setup
On 12 December 2011 15:15, Avi Kivity a...@redhat.com wrote:
> We need to differentiate in how Linux-as-a-guest acts and how the cpu is supposed to work. A guest operating system can theoretically assign the ASID x to process A running on vcpu 0, and the same ASID x to process B running on vcpu 1

That would be a guest bug. From the ARM ARM: "For a symmetric multiprocessor cluster where a single operating system is running on the set of processing elements, ARMv7 requires all ASID values to be assigned uniquely within any single Inner Shareable domain. In other words, each ASID value must have the same meaning to all processing elements in the system."

-- PMM
Re: [Android-virt] [PATCH v5 04/13] ARM: KVM: Memory virtualization setup
On 12/12/2011 05:25 PM, Peter Maydell wrote:
> On 12 December 2011 15:15, Avi Kivity a...@redhat.com wrote:
>> We need to differentiate in how Linux-as-a-guest acts and how the cpu is supposed to work. A guest operating system can theoretically assign the ASID x to process A running on vcpu 0, and the same ASID x to process B running on vcpu 1
>
> That would be a guest bug. From the ARM ARM: "For a symmetric multiprocessor cluster where a single operating system is running on the set of processing elements, ARMv7 requires all ASID values to be assigned uniquely within any single Inner Shareable domain. In other words, each ASID value must have the same meaning to all processing elements in the system."

Thanks. So per-vm vmids should work.
Re: [Android-virt] [PATCH v5 04/13] ARM: KVM: Memory virtualization setup
On Mon, Dec 12, 2011 at 10:49 AM, Avi Kivity a...@redhat.com wrote:
> On 12/12/2011 05:25 PM, Peter Maydell wrote:
>> On 12 December 2011 15:15, Avi Kivity a...@redhat.com wrote:
>>> We need to differentiate in how Linux-as-a-guest acts and how the cpu is supposed to work. A guest operating system can theoretically assign the ASID x to process A running on vcpu 0, and the same ASID x to process B running on vcpu 1
>>
>> That would be a guest bug. From the ARM ARM: "For a symmetric multiprocessor cluster where a single operating system is running on the set of processing elements, ARMv7 requires all ASID values to be assigned uniquely within any single Inner Shareable domain. In other words, each ASID value must have the same meaning to all processing elements in the system."
>
> Thanks. So per-vm vmids should work.

Yep.
[PATCH v5 04/13] ARM: KVM: Memory virtualization setup
This commit introduces the framework for guest memory management through the use of 2nd stage translation. Each VM has a pointer to a level-1 table (the pgd field in struct kvm_arch) which is used for the 2nd stage translations. Entries are added when handling guest faults (later patch) and the table itself can be allocated and freed through the following functions implemented in arch/arm/kvm/arm_mmu.c:
 - kvm_alloc_stage2_pgd(struct kvm *kvm);
 - kvm_free_stage2_pgd(struct kvm *kvm);

Further, each entry in TLBs and caches is tagged with a VMID identifier in addition to ASIDs. The VMIDs are managed using a bitmap and assigned when creating the VM in kvm_arch_init_vm() where the 2nd stage pgd is also allocated. The table is freed in kvm_arch_destroy_vm(). Both functions are called from the main KVM code.

Signed-off-by: Christoffer Dall c.d...@virtualopensystems.com
---
 arch/arm/include/asm/kvm_host.h |    4 ++
 arch/arm/include/asm/kvm_mmu.h  |    5 +++
 arch/arm/kvm/arm.c              |   59 +++--
 arch/arm/kvm/mmu.c              |   69 +++
 4 files changed, 132 insertions(+), 5 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 6a10467..06d1263 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -31,7 +31,9 @@ struct kvm_vcpu;
 u32 *kvm_vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num, u32 mode);
 
 struct kvm_arch {
-	pgd_t *pgd;	/* 1-level 2nd stage table */
+	u32    vmid;	/* The VMID used for the virt. memory system */
+	pgd_t *pgd;	/* 1-level 2nd stage table */
+	u64    vttbr;	/* VTTBR value associated with above pgd and vmid */
 };
 
 #define EXCEPTION_NONE 0
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 13fd8dc..9d7440c 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -32,4 +32,9 @@ extern pgd_t *kvm_hyp_pgd;
 int create_hyp_mappings(pgd_t *hyp_pgd, void *from, void *to);
 void free_hyp_pmds(pgd_t *hyp_pgd);
 
+int kvm_alloc_stage2_pgd(struct kvm *kvm);
+void kvm_free_stage2_pgd(struct kvm *kvm);
+
+int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run);
+
 #endif /* __ARM_KVM_MMU_H__ */
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index e6bdf50..89ba18d 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -94,15 +94,62 @@ void kvm_arch_sync_events(struct kvm *kvm)
 {
 }
 
+/**
+ * kvm_arch_init_vm - initializes a VM data structure
+ * @kvm:	pointer to the KVM struct
+ */
 int kvm_arch_init_vm(struct kvm *kvm)
 {
-	return 0;
+	int ret = 0;
+	phys_addr_t pgd_phys;
+	unsigned long vmid;
+
+	mutex_lock(&kvm_vmids_mutex);
+	vmid = find_first_zero_bit(kvm_vmids, VMID_SIZE);
+	if (vmid >= VMID_SIZE) {
+		mutex_unlock(&kvm_vmids_mutex);
+		return -EBUSY;
+	}
+	__set_bit(vmid, kvm_vmids);
+	kvm->arch.vmid = vmid;
+	mutex_unlock(&kvm_vmids_mutex);
+
+	ret = kvm_alloc_stage2_pgd(kvm);
+	if (ret)
+		goto out_fail_alloc;
+
+	pgd_phys = virt_to_phys(kvm->arch.pgd);
+	kvm->arch.vttbr = pgd_phys & ((1LLU << 40) - 1) & ~((2 << VTTBR_X) - 1);
+	kvm->arch.vttbr |= ((u64)vmid << 48);
+
+	ret = create_hyp_mappings(kvm_hyp_pgd, kvm, kvm + 1);
+	if (ret)
+		goto out_free_stage2_pgd;
+
+	return ret;
+out_free_stage2_pgd:
+	kvm_free_stage2_pgd(kvm);
+out_fail_alloc:
+	clear_bit(vmid, kvm_vmids);
+	return ret;
 }
 
+/**
+ * kvm_arch_destroy_vm - destroy the VM data structure
+ * @kvm:	pointer to the KVM struct
+ */
 void kvm_arch_destroy_vm(struct kvm *kvm)
 {
 	int i;
 
+	kvm_free_stage2_pgd(kvm);
+
+	if (kvm->arch.vmid != 0) {
+		mutex_lock(&kvm_vmids_mutex);
+		clear_bit(kvm->arch.vmid, kvm_vmids);
+		mutex_unlock(&kvm_vmids_mutex);
+	}
+
 	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
 		if (kvm->vcpus[i]) {
 			kvm_arch_vcpu_free(kvm->vcpus[i]);
@@ -178,6 +225,10 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id)
 	if (err)
 		goto free_vcpu;
 
+	err = create_hyp_mappings(kvm_hyp_pgd, vcpu, vcpu + 1);
+	if (err)
+		goto free_vcpu;
+
 	return vcpu;
 free_vcpu:
 	kmem_cache_free(kvm_vcpu_cache, vcpu);
@@ -187,7 +238,7 @@ out:
 
 void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
 {
-	KVMARM_NOT_IMPLEMENTED();
+	kmem_cache_free(kvm_vcpu_cache, vcpu);
 }
 
 void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
@@ -293,8 +344,8 @@ static int init_hyp_mode(void)
 
 	hyp_stack_ptr = (unsigned long)kvm_arm_hyp_stack_page + PAGE_SIZE;
 
-	init_phys_addr = virt_to_phys((void *)__kvm_hyp_init);
-	init_end_phys_addr = virt_to_phys((void *)__kvm_hyp_init_end);
+	init_phys_addr