[PATCH] mm: sparse: Skip no-map regions in memblocks_present
Do not mark regions that are marked with nomap as present, otherwise
these memblocks cause unnecessary allocation of metadata.

Cc: Andrew Morton
Cc: Pavel Tatashin
Cc: Oscar Salvador
Cc: Michal Hocko
Cc: Mike Rapoport
Cc: Baoquan He
Cc: Qian Cai
Cc: Wei Yang
Cc: Logan Gunthorpe
Cc: linux...@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: KarimAllah Ahmed
---
 mm/sparse.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/sparse.c b/mm/sparse.c
index fd13166..33810b6 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -256,6 +256,10 @@ void __init memblocks_present(void)
 	struct memblock_region *reg;
 
 	for_each_memblock(memory, reg) {
+
+		if (memblock_is_nomap(reg))
+			continue;
+
 		memory_present(memblock_get_region_node(reg),
 			       memblock_region_memory_base_pfn(reg),
 			       memblock_region_memory_end_pfn(reg));
--
2.7.4
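[Editor's note: for context, a hedged sketch of how a region typically becomes nomap before memblocks_present() runs. The base/size values are illustrative only; memblock_add(), memblock_mark_nomap() and memblock_is_nomap() are the existing memblock API.]

	/* Early boot code (e.g. firmware/DT parsing) registers RAM and
	 * then marks a carve-out that the kernel must not map: */
	memblock_add(0x80000000, SZ_1G);		/* system RAM */
	memblock_mark_nomap(0x90000000, SZ_16M);	/* firmware carve-out */

	/* With this patch, memblocks_present() no longer calls
	 * memory_present() for the 16M carve-out, so no sparsemem
	 * section metadata is allocated for it. */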
[PATCH] fdt: Properly handle "no-map" field in the memory region
Mark the memory region with the NOMAP flag instead of completely
removing it from the memory blocks. That makes the FDT handling
consistent with the EFI memory map handling.

Cc: Rob Herring
Cc: Frank Rowand
Cc: devicet...@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: KarimAllah Ahmed
---
 drivers/of/fdt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index de893c9..77982ae 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -1175,7 +1175,7 @@ int __init __weak early_init_dt_reserve_memory_arch(phys_addr_t base,
 					phys_addr_t size, bool nomap)
 {
 	if (nomap)
-		return memblock_remove(base, size);
+		return memblock_mark_nomap(base, size);
 	return memblock_reserve(base, size);
 }
--
2.7.4
[PATCH] arm: Extend the check for RAM in /dev/mem
Some valid RAM can live outside kernel control (e.g. when using the
mem= kernel command-line parameter). For these regions, pfn_valid would
return "false", causing system RAM to be mapped as uncached. Use
memblock instead to identify RAM.

Cc: Russell King
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Mike Rapoport
Cc: Andrew Morton
Cc: Anders Roxell
Cc: Enrico Weigelt
Cc: Thomas Gleixner
Cc: KarimAllah Ahmed
Cc: Mark Rutland
Cc: James Morse
Cc: Anshuman Khandual
Cc: Jun Yao
Cc: Yu Zhao
Cc: Robin Murphy
Cc: Ard Biesheuvel
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: KarimAllah Ahmed
---
 arch/arm/mm/mmu.c   | 2 +-
 arch/arm64/mm/mmu.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 1aa2586..492774b 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -705,7 +705,7 @@ static void __init build_mem_type_table(void)
 pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 			      unsigned long size, pgprot_t vma_prot)
 {
-	if (!pfn_valid(pfn))
+	if (!memblock_is_memory(__pfn_to_phys(pfn)))
 		return pgprot_noncached(vma_prot);
 	else if (file->f_flags & O_SYNC)
 		return pgprot_writecombine(vma_prot);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 3645f29..cdc3e8e 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -78,7 +78,7 @@ void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
 pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 			      unsigned long size, pgprot_t vma_prot)
 {
-	if (!pfn_valid(pfn))
+	if (!memblock_is_memory(__pfn_to_phys(pfn)))
 		return pgprot_noncached(vma_prot);
 	else if (file->f_flags & O_SYNC)
 		return pgprot_writecombine(vma_prot);
--
2.7.4
[PATCH v6 03/14] X86/KVM: Handle PFNs outside of kernel reach when touching GPTEs
From: Filippo Sironi

cmpxchg_gpte() calls get_user_pages_fast() to retrieve the number of
pages and the respective struct page to map in the kernel virtual
address space. This doesn't work if get_user_pages_fast() is invoked
with a userspace virtual address that's backed by PFNs outside of
kernel reach (e.g., when limiting the kernel memory with mem= in the
command line and using /dev/mem to map memory).

If get_user_pages_fast() fails, look up the VMA that backs the
userspace virtual address, compute the PFN and the physical address,
and map it in the kernel virtual address space with memremap().

Signed-off-by: Filippo Sironi
Signed-off-by: KarimAllah Ahmed
---
 arch/x86/kvm/paging_tmpl.h | 38 +++++++++++++++++++++++++++++---------
 1 file changed, 29 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 6bdca39..c40af67 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -141,15 +141,35 @@ static int FNAME(cmpxchg_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 	struct page *page;
 
 	npages = get_user_pages_fast((unsigned long)ptep_user, 1, 1, &page);
-	/* Check if the user is doing something meaningless. */
-	if (unlikely(npages != 1))
-		return -EFAULT;
-
-	table = kmap_atomic(page);
-	ret = CMPXCHG(&table[index], orig_pte, new_pte);
-	kunmap_atomic(table);
-
-	kvm_release_page_dirty(page);
+	if (likely(npages == 1)) {
+		table = kmap_atomic(page);
+		ret = CMPXCHG(&table[index], orig_pte, new_pte);
+		kunmap_atomic(table);
+
+		kvm_release_page_dirty(page);
+	} else {
+		struct vm_area_struct *vma;
+		unsigned long vaddr = (unsigned long)ptep_user & PAGE_MASK;
+		unsigned long pfn;
+		unsigned long paddr;
+
+		down_read(&current->mm->mmap_sem);
+		vma = find_vma_intersection(current->mm, vaddr, vaddr + PAGE_SIZE);
+		if (!vma || !(vma->vm_flags & VM_PFNMAP)) {
+			up_read(&current->mm->mmap_sem);
+			return -EFAULT;
+		}
+		pfn = ((vaddr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
+		paddr = pfn << PAGE_SHIFT;
+		table = memremap(paddr, PAGE_SIZE, MEMREMAP_WB);
+		if (!table) {
+			up_read(&current->mm->mmap_sem);
+			return -EFAULT;
+		}
+		ret = CMPXCHG(&table[index], orig_pte, new_pte);
+		memunmap(table);
+		up_read(&current->mm->mmap_sem);
+	}
 
 	return (ret != orig_pte);
 }
--
2.7.4
[PATCH v6 01/14] X86/nVMX: handle_vmon: Read 4 bytes from guest memory
Read the data directly from guest memory instead of the
map->read->unmap sequence. This also avoids using
kvm_vcpu_gpa_to_page() and kmap(), which assume that there is a
"struct page" for guest memory.

Suggested-by: Jim Mattson
Signed-off-by: KarimAllah Ahmed
Reviewed-by: Jim Mattson
Reviewed-by: David Hildenbrand
Reviewed-by: Konrad Rzeszutek Wilk
---
v1 -> v2:
- Massage commit message a bit.
---
 arch/x86/kvm/vmx/nested.c | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 8ff2052..11b44a9 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4192,7 +4192,7 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
 {
 	int ret;
 	gpa_t vmptr;
-	struct page *page;
+	uint32_t revision;
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	const u64 VMXON_NEEDED_FEATURES = FEATURE_CONTROL_LOCKED
 		| FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX;
@@ -4241,18 +4241,10 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
 	if (!PAGE_ALIGNED(vmptr) || (vmptr >> cpuid_maxphyaddr(vcpu)))
 		return nested_vmx_failInvalid(vcpu);
 
-	page = kvm_vcpu_gpa_to_page(vcpu, vmptr);
-	if (is_error_page(page))
+	if (kvm_read_guest(vcpu->kvm, vmptr, &revision, sizeof(revision)) ||
+	    revision != VMCS12_REVISION)
 		return nested_vmx_failInvalid(vcpu);
 
-	if (*(u32 *)kmap(page) != VMCS12_REVISION) {
-		kunmap(page);
-		kvm_release_page_clean(page);
-		return nested_vmx_failInvalid(vcpu);
-	}
-	kunmap(page);
-	kvm_release_page_clean(page);
-
 	vmx->nested.vmxon_ptr = vmptr;
 	ret = enter_vmx_operation(vcpu);
 	if (ret)
--
2.7.4
[PATCH v6 08/14] KVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt descriptor table
Use kvm_vcpu_map when mapping the posted interrupt descriptor table
since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest
memory that has a "struct page".

One additional semantic change is that the virtual host mapping
lifecycle has changed a bit. It now has the same lifetime as the
pinning of the interrupt descriptor table page on the host side.

Signed-off-by: KarimAllah Ahmed
Reviewed-by: Konrad Rzeszutek Wilk
---
v4 -> v5:
- unmap with dirty flag
v1 -> v2:
- Do not change the lifecycle of the mapping (pbonzini)
---
 arch/x86/kvm/vmx/nested.c | 43 ++++++++++++-------------------------------
 arch/x86/kvm/vmx/vmx.h    |  2 +-
 2 files changed, 13 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 31b352c..53b1063 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -230,12 +230,8 @@ static void free_nested(struct kvm_vcpu *vcpu)
 		vmx->nested.apic_access_page = NULL;
 	}
 	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
-	if (vmx->nested.pi_desc_page) {
-		kunmap(vmx->nested.pi_desc_page);
-		kvm_release_page_dirty(vmx->nested.pi_desc_page);
-		vmx->nested.pi_desc_page = NULL;
-		vmx->nested.pi_desc = NULL;
-	}
+	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
+	vmx->nested.pi_desc = NULL;
 
 	kvm_mmu_free_roots(vcpu, &vcpu->arch.guest_mmu, KVM_MMU_ROOTS_ALL);
 
@@ -2868,26 +2864,15 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
 	}
 
 	if (nested_cpu_has_posted_intr(vmcs12)) {
-		if (vmx->nested.pi_desc_page) { /* shouldn't happen */
-			kunmap(vmx->nested.pi_desc_page);
-			kvm_release_page_dirty(vmx->nested.pi_desc_page);
-			vmx->nested.pi_desc_page = NULL;
-			vmx->nested.pi_desc = NULL;
-			vmcs_write64(POSTED_INTR_DESC_ADDR, -1ull);
+		map = &vmx->nested.pi_desc_map;
+
+		if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->posted_intr_desc_addr), map)) {
+			vmx->nested.pi_desc =
+				(struct pi_desc *)(((void *)map->hva) +
+				offset_in_page(vmcs12->posted_intr_desc_addr));
+			vmcs_write64(POSTED_INTR_DESC_ADDR,
+				     pfn_to_hpa(map->pfn) + offset_in_page(vmcs12->posted_intr_desc_addr));
 		}
-		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->posted_intr_desc_addr);
-		if (is_error_page(page))
-			return;
-		vmx->nested.pi_desc_page = page;
-		vmx->nested.pi_desc = kmap(vmx->nested.pi_desc_page);
-		vmx->nested.pi_desc =
-			(struct pi_desc *)((void *)vmx->nested.pi_desc +
-			(unsigned long)(vmcs12->posted_intr_desc_addr &
-			(PAGE_SIZE - 1)));
-		vmcs_write64(POSTED_INTR_DESC_ADDR,
-			page_to_phys(vmx->nested.pi_desc_page) +
-			(unsigned long)(vmcs12->posted_intr_desc_addr &
-			(PAGE_SIZE - 1)));
 	}
 	if (nested_vmx_prepare_msr_bitmap(vcpu, vmcs12))
 		vmcs_set_bits(CPU_BASED_VM_EXEC_CONTROL,
@@ -3911,12 +3896,8 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 		vmx->nested.apic_access_page = NULL;
 	}
 	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
-	if (vmx->nested.pi_desc_page) {
-		kunmap(vmx->nested.pi_desc_page);
-		kvm_release_page_dirty(vmx->nested.pi_desc_page);
-		vmx->nested.pi_desc_page = NULL;
-		vmx->nested.pi_desc = NULL;
-	}
+	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
+	vmx->nested.pi_desc = NULL;
 
 	/*
 	 * We are now running in L2, mmu_notifier will force to reload the
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index f618f52..bd04725 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -143,7 +143,7 @@ struct nested_vmx {
 	 */
 	struct page *apic_access_page;
 	struct kvm_host_map virtual_apic_map;
-	struct page *pi_desc_page;
+	struct kvm_host_map pi_desc_map;
 
 	struct kvm_host_map msr_bitmap_map;
--
2.7.4
[PATCH v6 04/14] KVM: Introduce a new guest mapping API
In KVM, especially for nested guests, there is a dominant pattern of:

	=> map guest memory -> do_something -> unmap guest memory

In addition to all the unnecessary noise in the code due to this
boilerplate, most of the time the mapping function does not properly
handle memory that is not backed by "struct page". This new guest
mapping API encapsulates most of this boilerplate code and also handles
guest memory that is not backed by "struct page".

The current implementation of this API uses memremap for memory that is
not backed by a "struct page", which would lead to a huge slow-down if
it was used for high-frequency mapping operations. The API does not
have any effect on current setups where guest memory is backed by a
"struct page". Further patches are going to also introduce a pfn-cache
which would significantly improve the performance of the memremap case.

Signed-off-by: KarimAllah Ahmed
Reviewed-by: Konrad Rzeszutek Wilk
---
v5 -> v6:
- Added a helper function to check if the mapping is mapped or not.
- Added more comments on the struct.
- Setting ->page to NULL on unmap and to a poison ptr if unused during map.
- Checking for map ptr before using it.
- Change kvm_vcpu_unmap to also mark page dirty for LM. That requires
  passing the vCPU pointer again to this function.
v3 -> v4:
- Update the commit message.
v1 -> v2:
- Drop the caching optimization (pbonzini)
- Use 'hva' instead of 'kaddr' (pbonzini)
- Return 0/-EINVAL/-EFAULT instead of true/false. -EFAULT will be used
  for AMD patch (pbonzini)
- Introduce __kvm_map_gfn which accepts a memory slot and use it (pbonzini)
- Only clear map->hva instead of memsetting the whole structure.
- Drop kvm_vcpu_map_valid since it is no longer used.
- Fix EXPORT_MODULE naming.
---
 include/linux/kvm_host.h | 28 ++++++++++++++++
 virt/kvm/kvm_main.c      | 64 ++++++++++++++++++++++++++++++++++++
 2 files changed, 92 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c38cc5e..15879ed 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -205,6 +205,32 @@ enum {
 	READING_SHADOW_PAGE_TABLES,
 };
 
+#define KVM_UNMAPPED_PAGE	((void *) 0x500 + POISON_POINTER_DELTA)
+
+struct kvm_host_map {
+	/*
+	 * Only valid if the 'pfn' is managed by the host kernel (i.e. There is
+	 * a 'struct page' for it. When using mem= kernel parameter some memory
+	 * can be used as guest memory but they are not managed by host
+	 * kernel).
+	 * If 'pfn' is not managed by the host kernel, this field is
+	 * initialized to KVM_UNMAPPED_PAGE.
+	 */
+	struct page *page;
+	void *hva;
+	kvm_pfn_t pfn;
+	kvm_pfn_t gfn;
+};
+
+/*
+ * Used to check if the mapping is valid or not. Never use 'kvm_host_map'
+ * directly to check for that.
+ */
+static inline bool kvm_vcpu_mapped(struct kvm_host_map *map)
+{
+	return !!map->hva;
+}
+
 /*
  * Sometimes a large or cross-page mmio needs to be broken up into separate
  * exits for userspace servicing.
@@ -710,7 +736,9 @@ struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu);
 struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn);
 kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn);
 kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn);
+int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map);
 struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn);
+void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty);
 unsigned long kvm_vcpu_gfn_to_hva(struct kvm_vcpu *vcpu, gfn_t gfn);
 unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *writable);
 int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data, int offset,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 5ecea81..da3a8fc 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1738,6 +1738,70 @@ struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(gfn_to_page);
 
+static int __kvm_map_gfn(struct kvm_memory_slot *slot, gfn_t gfn,
+			 struct kvm_host_map *map)
+{
+	kvm_pfn_t pfn;
+	void *hva = NULL;
+	struct page *page = KVM_UNMAPPED_PAGE;
+
+	if (!map)
+		return -EINVAL;
+
+	pfn = gfn_to_pfn_memslot(slot, gfn);
+	if (is_error_noslot_pfn(pfn))
+		return -EINVAL;
+
+	if (pfn_valid(pfn)) {
+		page = pfn_to_page(pfn);
+		hva = kmap(page);
+	} else {
+		hva = memremap(pfn_to_hpa(pfn), PAGE_SIZE, MEMREMAP_WB);
+	}
+
+	if (!hva)
+		return -EFAULT;
+
+	map->page = page;
+	map->hva = hva;
+	map->pfn = pfn;
+	map->gfn = gfn;
+
+	return 0;
+}
+
+int kvm_vcpu_ma
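[Editor's note: to make the intended calling convention concrete, here is a minimal sketch of a hypothetical in-kernel caller of the new API; the gpa, data and len variables are illustrative and not part of the patch.]

	struct kvm_host_map map;

	/* Works for both struct-page backed and PFN-only guest memory. */
	if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map))
		return -EFAULT;

	/* Access guest memory through the host virtual address. */
	memcpy(map.hva + offset_in_page(gpa), data, len);

	/* 'true' marks the page dirty so live migration sees the write. */
	kvm_vcpu_unmap(vcpu, &map, true);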
[PATCH v6 06/14] KVM/nVMX: Use kvm_vcpu_map when mapping the L1 MSR bitmap
Use kvm_vcpu_map when mapping the L1 MSR bitmap since using
kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that
has a "struct page".

Signed-off-by: KarimAllah Ahmed
---
v4 -> v5:
- unmap with dirty flag
v1 -> v2:
- Do not change the lifecycle of the mapping (pbonzini)
---
 arch/x86/kvm/vmx/nested.c | 11 +++++------
 arch/x86/kvm/vmx/vmx.h    |  3 +++
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 8fc327f..1813211 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -507,9 +507,10 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 						 struct vmcs12 *vmcs12)
 {
 	int msr;
-	struct page *page;
 	unsigned long *msr_bitmap_l1;
 	unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.vmcs02.msr_bitmap;
+	struct kvm_host_map *map = &to_vmx(vcpu)->nested.msr_bitmap_map;
+
 	/*
 	 * pred_cmd & spec_ctrl are trying to verify two things:
 	 *
@@ -535,11 +536,10 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 	    !pred_cmd && !spec_ctrl)
 		return false;
 
-	page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->msr_bitmap);
-	if (is_error_page(page))
+	if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->msr_bitmap), map))
 		return false;
 
-	msr_bitmap_l1 = (unsigned long *)kmap(page);
+	msr_bitmap_l1 = (unsigned long *)map->hva;
 	if (nested_cpu_has_apic_reg_virt(vmcs12)) {
 		/*
 		 * L0 need not intercept reads for MSRs between 0x800 and 0x8ff, it
@@ -587,8 +587,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 					MSR_IA32_PRED_CMD,
 					MSR_TYPE_W);
 
-	kunmap(page);
-	kvm_release_page_clean(page);
+	kvm_vcpu_unmap(vcpu, &to_vmx(vcpu)->nested.msr_bitmap_map, false);
 
 	return true;
 }
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 9932895..6fb69d8 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -144,6 +144,9 @@ struct nested_vmx {
 	struct page *apic_access_page;
 	struct page *virtual_apic_page;
 	struct page *pi_desc_page;
+
+	struct kvm_host_map msr_bitmap_map;
+
 	struct pi_desc *pi_desc;
 	bool pi_pending;
 	u16 posted_intr_nv;
--
2.7.4
[PATCH v6 10/14] KVM/nSVM: Use the new mapping API for mapping guest memory
Use the new mapping API for mapping guest memory to avoid depending on
"struct page".

Signed-off-by: KarimAllah Ahmed
Reviewed-by: Konrad Rzeszutek Wilk
---
v4 -> v5:
- unmap with dirty flag
---
 arch/x86/kvm/svm.c | 97 +++++++++++++++++++++++++++---------------------------
 1 file changed, 49 insertions(+), 48 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index f13a3a2..d30a35b 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3062,32 +3062,6 @@ static inline bool nested_svm_nmi(struct vcpu_svm *svm)
 	return false;
 }
 
-static void *nested_svm_map(struct vcpu_svm *svm, u64 gpa, struct page **_page)
-{
-	struct page *page;
-
-	might_sleep();
-
-	page = kvm_vcpu_gfn_to_page(&svm->vcpu, gpa >> PAGE_SHIFT);
-	if (is_error_page(page))
-		goto error;
-
-	*_page = page;
-
-	return kmap(page);
-
-error:
-	kvm_inject_gp(&svm->vcpu, 0);
-
-	return NULL;
-}
-
-static void nested_svm_unmap(struct page *page)
-{
-	kunmap(page);
-	kvm_release_page_dirty(page);
-}
-
 static int nested_svm_intercept_ioio(struct vcpu_svm *svm)
 {
 	unsigned port, size, iopm_len;
@@ -3290,10 +3264,11 @@ static inline void copy_vmcb_control_area(struct vmcb *dst_vmcb, struct vmcb *from_vmcb)
 
 static int nested_svm_vmexit(struct vcpu_svm *svm)
 {
+	int rc;
 	struct vmcb *nested_vmcb;
 	struct vmcb *hsave = svm->nested.hsave;
 	struct vmcb *vmcb = svm->vmcb;
-	struct page *page;
+	struct kvm_host_map map;
 
 	trace_kvm_nested_vmexit_inject(vmcb->control.exit_code,
 				       vmcb->control.exit_info_1,
@@ -3302,9 +3277,14 @@ static int nested_svm_vmexit(struct vcpu_svm *svm)
 				       vmcb->control.exit_int_info_err,
 				       KVM_ISA_SVM);
 
-	nested_vmcb = nested_svm_map(svm, svm->nested.vmcb, &page);
-	if (!nested_vmcb)
+	rc = kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(svm->nested.vmcb), &map);
+	if (rc) {
+		if (rc == -EINVAL)
+			kvm_inject_gp(&svm->vcpu, 0);
 		return 1;
+	}
+
+	nested_vmcb = map.hva;
 
 	/* Exit Guest-Mode */
 	leave_guest_mode(&svm->vcpu);
@@ -3408,7 +3388,7 @@ static int nested_svm_vmexit(struct vcpu_svm *svm)
 
 	mark_all_dirty(svm->vmcb);
 
-	nested_svm_unmap(page);
+	kvm_vcpu_unmap(&svm->vcpu, &map, true);
 
 	nested_svm_uninit_mmu_context(&svm->vcpu);
 	kvm_mmu_reset_context(&svm->vcpu);
@@ -3474,7 +3454,7 @@ static bool nested_vmcb_checks(struct vmcb *vmcb)
 }
 
 static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
-				 struct vmcb *nested_vmcb, struct page *page)
+				 struct vmcb *nested_vmcb, struct kvm_host_map *map)
 {
 	if (kvm_get_rflags(&svm->vcpu) & X86_EFLAGS_IF)
 		svm->vcpu.arch.hflags |= HF_HIF_MASK;
@@ -3558,7 +3538,7 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
 	svm->vmcb->control.pause_filter_thresh =
 		nested_vmcb->control.pause_filter_thresh;
 
-	nested_svm_unmap(page);
+	kvm_vcpu_unmap(&svm->vcpu, map, true);
 
 	/* Enter Guest-Mode */
 	enter_guest_mode(&svm->vcpu);
@@ -3578,17 +3558,23 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
 
 static bool nested_svm_vmrun(struct vcpu_svm *svm)
 {
+	int rc;
 	struct vmcb *nested_vmcb;
 	struct vmcb *hsave = svm->nested.hsave;
 	struct vmcb *vmcb = svm->vmcb;
-	struct page *page;
+	struct kvm_host_map map;
 	u64 vmcb_gpa;
 
 	vmcb_gpa = svm->vmcb->save.rax;
 
-	nested_vmcb = nested_svm_map(svm, svm->vmcb->save.rax, &page);
-	if (!nested_vmcb)
+	rc = kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map);
+	if (rc) {
+		if (rc == -EINVAL)
+			kvm_inject_gp(&svm->vcpu, 0);
 		return false;
+	}
+
+	nested_vmcb = map.hva;
 
 	if (!nested_vmcb_checks(nested_vmcb)) {
 		nested_vmcb->control.exit_code = SVM_EXIT_ERR;
 		nested_vmcb->control.exit_info_1 = 0;
 		nested_vmcb->control.exit_info_2 = 0;
 
-		nested_svm_unmap(page);
+		kvm_vcpu_unmap(&svm->vcpu, &map, true);
 
 		return false;
 	}
@@ -3640,7 +3626,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
 
 	copy_vmcb_control_area(hsave, vmcb);
 
-	enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb, page);
+	enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb, &map);
 
 	return true;
 }
@@ -3664,21 +3650,26 @@ static void nested_svm_vmloadsave(struct vmcb *from_vmcb, struct vmcb *to_vmcb)
 
 static int vmload_interception(st
[PATCH v6 11/14] KVM/nVMX: Use kvm_vcpu_map for accessing the shadow VMCS
Use kvm_vcpu_map for accessing the shadow VMCS since using
kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that
has a "struct page".

Signed-off-by: KarimAllah Ahmed
Reviewed-by: Konrad Rzeszutek Wilk
---
v4 -> v5:
- unmap with dirty flag
---
 arch/x86/kvm/vmx/nested.c | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 53b1063..3c173b9 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -588,20 +588,20 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 static void nested_cache_shadow_vmcs12(struct kvm_vcpu *vcpu,
 				       struct vmcs12 *vmcs12)
 {
+	struct kvm_host_map map;
 	struct vmcs12 *shadow;
-	struct page *page;
 
 	if (!nested_cpu_has_shadow_vmcs(vmcs12) ||
 	    vmcs12->vmcs_link_pointer == -1ull)
 		return;
 
 	shadow = get_shadow_vmcs12(vcpu);
-	page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->vmcs_link_pointer);
 
-	memcpy(shadow, kmap(page), VMCS12_SIZE);
+	if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->vmcs_link_pointer), &map))
+		return;
 
-	kunmap(page);
-	kvm_release_page_clean(page);
+	memcpy(shadow, map.hva, VMCS12_SIZE);
+	kvm_vcpu_unmap(vcpu, &map, false);
 }
 
 static void nested_flush_cached_shadow_vmcs12(struct kvm_vcpu *vcpu,
@@ -2637,9 +2637,9 @@ static int nested_vmx_check_vmentry_prereqs(struct kvm_vcpu *vcpu,
 static int nested_vmx_check_vmcs_link_ptr(struct kvm_vcpu *vcpu,
 					  struct vmcs12 *vmcs12)
 {
-	int r;
-	struct page *page;
+	int r = 0;
 	struct vmcs12 *shadow;
+	struct kvm_host_map map;
 
 	if (vmcs12->vmcs_link_pointer == -1ull)
 		return 0;
@@ -2647,17 +2647,16 @@ static int nested_vmx_check_vmcs_link_ptr(struct kvm_vcpu *vcpu,
 	if (!page_address_valid(vcpu, vmcs12->vmcs_link_pointer))
 		return -EINVAL;
 
-	page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->vmcs_link_pointer);
-	if (is_error_page(page))
+	if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->vmcs_link_pointer), &map))
 		return -EINVAL;
 
-	r = 0;
-	shadow = kmap(page);
+	shadow = map.hva;
+
 	if (shadow->hdr.revision_id != VMCS12_REVISION ||
 	    shadow->hdr.shadow_vmcs != nested_cpu_has_shadow_vmcs(vmcs12))
 		r = -EINVAL;
-	kunmap(page);
-	kvm_release_page_clean(page);
+
+	kvm_vcpu_unmap(vcpu, &map, false);
 
 	return r;
 }
--
2.7.4
[PATCH v6 13/14] KVM/nVMX: Use page_address_valid in a few more locations
Use page_address_valid in a few more locations that are already
checking for a page-aligned address that does not cross the maximum
physical address.

Signed-off-by: KarimAllah Ahmed
---
 arch/x86/kvm/vmx/nested.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 60ba582..91e42f9 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4203,7 +4203,7 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
 	 * Note - IA32_VMX_BASIC[48] will never be 1 for the nested case;
 	 * which replaces physical address width with 32
 	 */
-	if (!PAGE_ALIGNED(vmptr) || (vmptr >> cpuid_maxphyaddr(vcpu)))
+	if (!page_address_valid(vcpu, vmptr))
 		return nested_vmx_failInvalid(vcpu);
 
 	if (kvm_read_guest(vcpu->kvm, vmptr, &revision, sizeof(revision)) ||
@@ -4266,7 +4266,7 @@ static int handle_vmclear(struct kvm_vcpu *vcpu)
 	if (nested_vmx_get_vmptr(vcpu, &vmptr))
 		return 1;
 
-	if (!PAGE_ALIGNED(vmptr) || (vmptr >> cpuid_maxphyaddr(vcpu)))
+	if (!page_address_valid(vcpu, vmptr))
 		return nested_vmx_failValid(vcpu,
 			VMXERR_VMCLEAR_INVALID_ADDRESS);
 
@@ -4473,7 +4473,7 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu)
 	if (nested_vmx_get_vmptr(vcpu, &vmptr))
 		return 1;
 
-	if (!PAGE_ALIGNED(vmptr) || (vmptr >> cpuid_maxphyaddr(vcpu)))
+	if (!page_address_valid(vcpu, vmptr))
 		return nested_vmx_failValid(vcpu,
 			VMXERR_VMPTRLD_INVALID_ADDRESS);
 
--
2.7.4
[PATCH v6 14/14] kvm, x86: Properly check whether a pfn is an MMIO or not
The pfn_valid check is not sufficient because it only checks whether a
page has a struct page or not; if "mem=" was passed to the kernel, some
valid pages won't have a struct page. This means that if guests were
assigned valid memory that lies after the mem= boundary, it will be
passed uncached to the guest no matter what the guest caching
attributes are for this memory.

Introduce a new function e820__mapped_raw_any which is equivalent to
e820__mapped_any but uses the original e820 table unmodified, and use
it to identify real *RAM*.

Signed-off-by: KarimAllah Ahmed
---
v1 -> v2:
- Introduce e820__mapped_raw_any
---
 arch/x86/include/asm/e820/api.h |  1 +
 arch/x86/kernel/e820.c          | 18 +++++++++++++++---
 arch/x86/kvm/mmu.c              |  5 ++++-
 3 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/e820/api.h b/arch/x86/include/asm/e820/api.h
index 4562062..b3f6e89 100644
--- a/arch/x86/include/asm/e820/api.h
+++ b/arch/x86/include/asm/e820/api.h
@@ -12,6 +12,7 @@ extern unsigned long e820_saved_max_low_pfn;
 
 extern unsigned long pci_mem_start;
 
+extern bool e820__mapped_raw_any(u64 start, u64 end, enum e820_type type);
 extern bool e820__mapped_any(u64 start, u64 end, enum e820_type type);
 extern bool e820__mapped_all(u64 start, u64 end, enum e820_type type);
 
diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
index ef914d0..3039659 100644
--- a/arch/x86/kernel/e820.c
+++ b/arch/x86/kernel/e820.c
@@ -75,12 +75,13 @@ EXPORT_SYMBOL(pci_mem_start);
  * This function checks if any part of the range is mapped
  * with type.
  */
-bool e820__mapped_any(u64 start, u64 end, enum e820_type type)
+static bool _e820__mapped_any(struct e820_table *table,
+			      u64 start, u64 end, enum e820_type type)
 {
 	int i;
 
-	for (i = 0; i < e820_table->nr_entries; i++) {
-		struct e820_entry *entry = &e820_table->entries[i];
+	for (i = 0; i < table->nr_entries; i++) {
+		struct e820_entry *entry = &table->entries[i];
 
 		if (type && entry->type != type)
 			continue;
@@ -90,6 +91,17 @@ bool e820__mapped_any(u64 start, u64 end, enum e820_type type)
 	}
 	return 0;
 }
+
+bool e820__mapped_raw_any(u64 start, u64 end, enum e820_type type)
+{
+	return _e820__mapped_any(e820_table_firmware, start, end, type);
+}
+EXPORT_SYMBOL_GPL(e820__mapped_raw_any);
+
+bool e820__mapped_any(u64 start, u64 end, enum e820_type type)
+{
+	return _e820__mapped_any(e820_table, start, end, type);
+}
 EXPORT_SYMBOL_GPL(e820__mapped_any);
 
 /*
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index da9c423..abf6a0d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -44,6 +44,7 @@
 #include
 #include
 #include
+#include <asm/e820/api.h>
 #include
 #include
 #include
@@ -2875,7 +2876,9 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
 			 */
 			(!pat_enabled() || pat_pfn_immune_to_uc_mtrr(pfn));
 
-	return true;
+	return !e820__mapped_raw_any(pfn_to_hpa(pfn),
+				     pfn_to_hpa(pfn + 1) - 1,
+				     E820_TYPE_RAM);
 }
 
 /* Bits which may be returned by set_spte() */
--
2.7.4
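[Editor's note: for clarity, the post-patch kvm_is_mmio_pfn() logic looks roughly like the following. This is a simplified sketch; the pfn_valid() branch shown is the pre-existing code, abbreviated from the hunk context above.]

	static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
	{
		if (pfn_valid(pfn))
			/* struct-page memory: decide based on the reserved
			 * flag and the host memory type. */
			return PageReserved(pfn_to_page(pfn)) &&
			       (!pat_enabled() ||
				pat_pfn_immune_to_uc_mtrr(pfn));

		/* No struct page: consult the unmodified firmware e820
		 * map, so RAM beyond the mem= boundary is still RAM. */
		return !e820__mapped_raw_any(pfn_to_hpa(pfn),
					     pfn_to_hpa(pfn + 1) - 1,
					     E820_TYPE_RAM);
	}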
[PATCH v6 09/14] KVM/X86: Use kvm_vcpu_map in emulator_cmpxchg_emulated
Use kvm_vcpu_map in emulator_cmpxchg_emulated since using
kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that
has a "struct page".

Signed-off-by: KarimAllah Ahmed
Reviewed-by: Konrad Rzeszutek Wilk
---
v5 -> v6:
- Remove explicit call to dirty the page. It is now done by kvm_vcpu_unmap
v4 -> v5:
- unmap with dirty flag
v1 -> v2:
- Update to match the new API return codes
---
 arch/x86/kvm/x86.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3d27206..f6800e0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5494,9 +5494,9 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt,
 				     unsigned int bytes,
 				     struct x86_exception *exception)
 {
+	struct kvm_host_map map;
 	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
 	gpa_t gpa;
-	struct page *page;
 	char *kaddr;
 	bool exchanged;
 
@@ -5513,12 +5513,11 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt,
 	if (((gpa + bytes - 1) & PAGE_MASK) != (gpa & PAGE_MASK))
 		goto emul_write;
 
-	page = kvm_vcpu_gfn_to_page(vcpu, gpa >> PAGE_SHIFT);
-	if (is_error_page(page))
+	if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map))
 		goto emul_write;
 
-	kaddr = kmap_atomic(page);
-	kaddr += offset_in_page(gpa);
+	kaddr = map.hva + offset_in_page(gpa);
+
 	switch (bytes) {
 	case 1:
 		exchanged = CMPXCHG_TYPE(u8, kaddr, old, new);
@@ -5535,13 +5534,12 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt,
 	default:
 		BUG();
 	}
-	kunmap_atomic(kaddr);
-	kvm_release_page_dirty(page);
+
+	kvm_vcpu_unmap(vcpu, &map, true);
 
 	if (!exchanged)
 		return X86EMUL_CMPXCHG_FAILED;
 
-	kvm_vcpu_mark_page_dirty(vcpu, gpa >> PAGE_SHIFT);
 	kvm_page_track_write(vcpu, gpa, new, bytes);
 
 	return X86EMUL_CONTINUE;
--
2.7.4
[PATCH v6 12/14] KVM/nVMX: Use kvm_vcpu_map for accessing the enlightened VMCS
Use kvm_vcpu_map for accessing the enlightened VMCS since using
kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that
has a "struct page".

Signed-off-by: KarimAllah Ahmed
Reviewed-by: Konrad Rzeszutek Wilk
---
v4 -> v5:
- s/enhanced/enlightened
- unmap with dirty flag
---
 arch/x86/kvm/vmx/nested.c | 14 +++++---------
 arch/x86/kvm/vmx/vmx.h    |  2 +-
 2 files changed, 6 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 3c173b9..60ba582 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -193,10 +193,8 @@ static inline void nested_release_evmcs(struct kvm_vcpu *vcpu)
 	if (!vmx->nested.hv_evmcs)
 		return;
 
-	kunmap(vmx->nested.hv_evmcs_page);
-	kvm_release_page_dirty(vmx->nested.hv_evmcs_page);
+	kvm_vcpu_unmap(vcpu, &vmx->nested.hv_evmcs_map, true);
 	vmx->nested.hv_evmcs_vmptr = -1ull;
-	vmx->nested.hv_evmcs_page = NULL;
 	vmx->nested.hv_evmcs = NULL;
 }
 
@@ -1769,13 +1767,11 @@ static int nested_vmx_handle_enlightened_vmptrld(struct kvm_vcpu *vcpu,
 
 	nested_release_evmcs(vcpu);
 
-	vmx->nested.hv_evmcs_page = kvm_vcpu_gpa_to_page(
-		vcpu, assist_page.current_nested_vmcs);
-
-	if (unlikely(is_error_page(vmx->nested.hv_evmcs_page)))
+	if (kvm_vcpu_map(vcpu, gpa_to_gfn(assist_page.current_nested_vmcs),
+			 &vmx->nested.hv_evmcs_map))
 		return 0;
 
-	vmx->nested.hv_evmcs = kmap(vmx->nested.hv_evmcs_page);
+	vmx->nested.hv_evmcs = vmx->nested.hv_evmcs_map.hva;
 
 	/*
 	 * Currently, KVM only supports eVMCS version 1
@@ -4278,7 +4274,7 @@ static int handle_vmclear(struct kvm_vcpu *vcpu)
 		return nested_vmx_failValid(vcpu,
 			VMXERR_VMCLEAR_VMXON_POINTER);
 
-	if (vmx->nested.hv_evmcs_page) {
+	if (vmx->nested.hv_evmcs_map.hva) {
 		if (vmptr == vmx->nested.hv_evmcs_vmptr)
 			nested_release_evmcs(vcpu);
 	} else {
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index bd04725..a130139 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -172,7 +172,7 @@ struct nested_vmx {
 	} smm;
 
 	gpa_t hv_evmcs_vmptr;
-	struct page *hv_evmcs_page;
+	struct kvm_host_map hv_evmcs_map;
 	struct hv_enlightened_vmcs *hv_evmcs;
 };
--
2.7.4
[PATCH v6 05/14] X86/nVMX: handle_vmptrld: Use kvm_vcpu_map when copying VMCS12 from guest memory
Use kvm_vcpu_map to map the VMCS12 from guest memory because
kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that
has a "struct page".

Signed-off-by: KarimAllah Ahmed
Reviewed-by: Konrad Rzeszutek Wilk
---
v4 -> v5:
- Switch to the new guest mapping API instead of reading directly from
  guest.
- unmap with dirty flag
v3 -> v4:
- Return VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID on failure (jmattson@)
v1 -> v2:
- Massage commit message a bit.
---
 arch/x86/kvm/vmx/nested.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 11b44a9..8fc327f 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4521,11 +4521,10 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu)
 		return 1;
 
 	if (vmx->nested.current_vmptr != vmptr) {
+		struct kvm_host_map map;
 		struct vmcs12 *new_vmcs12;
-		struct page *page;
 
-		page = kvm_vcpu_gpa_to_page(vcpu, vmptr);
-		if (is_error_page(page)) {
+		if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmptr), &map)) {
 			/*
 			 * Reads from an unbacked page return all 1s,
 			 * which means that the 32 bits located at the
@@ -4535,12 +4534,13 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu)
 			return nested_vmx_failValid(vcpu,
 				VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID);
 		}
-		new_vmcs12 = kmap(page);
+
+		new_vmcs12 = map.hva;
+
 		if (new_vmcs12->hdr.revision_id != VMCS12_REVISION ||
 		    (new_vmcs12->hdr.shadow_vmcs &&
 		     !nested_cpu_has_vmx_shadow_vmcs(vcpu))) {
-			kunmap(page);
-			kvm_release_page_clean(page);
+			kvm_vcpu_unmap(vcpu, &map, false);
 			return nested_vmx_failValid(vcpu,
 				VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID);
 		}
@@ -4552,8 +4552,7 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu)
 		 * cached.
 		 */
 		memcpy(vmx->nested.cached_vmcs12, new_vmcs12, VMCS12_SIZE);
-		kunmap(page);
-		kvm_release_page_clean(page);
+		kvm_vcpu_unmap(vcpu, &map, false);
 
 		set_current_vmptr(vmx, vmptr);
 	}
--
2.7.4
[PATCH v6 07/14] KVM/nVMX: Use kvm_vcpu_map when mapping the virtual APIC page
Use kvm_vcpu_map when mapping the virtual APIC page since using
kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that
has a "struct page".

One additional semantic change is that the virtual host mapping
lifecycle has changed a bit. It now has the same lifetime as the
pinning of the virtual APIC page on the host side.

Signed-off-by: KarimAllah Ahmed
Reviewed-by: Konrad Rzeszutek Wilk
---
v4 -> v5:
- unmap with dirty flag
v1 -> v2:
- Do not change the lifecycle of the mapping (pbonzini)
- Use pfn_to_hpa instead of gfn_to_gpa
---
 arch/x86/kvm/vmx/nested.c | 32 +++++++++++---------------------
 arch/x86/kvm/vmx/vmx.c    |  5 ++---
 arch/x86/kvm/vmx/vmx.h    |  2 +-
 3 files changed, 14 insertions(+), 25 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 1813211..31b352c 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -229,10 +229,7 @@ static void free_nested(struct kvm_vcpu *vcpu)
 		kvm_release_page_dirty(vmx->nested.apic_access_page);
 		vmx->nested.apic_access_page = NULL;
 	}
-	if (vmx->nested.virtual_apic_page) {
-		kvm_release_page_dirty(vmx->nested.virtual_apic_page);
-		vmx->nested.virtual_apic_page = NULL;
-	}
+	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
 	if (vmx->nested.pi_desc_page) {
 		kunmap(vmx->nested.pi_desc_page);
 		kvm_release_page_dirty(vmx->nested.pi_desc_page);
@@ -2817,6 +2814,7 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
 {
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	struct kvm_host_map *map;
 	struct page *page;
 	u64 hpa;
 
@@ -2849,11 +2847,7 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
 	}
 
 	if (nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW)) {
-		if (vmx->nested.virtual_apic_page) { /* shouldn't happen */
-			kvm_release_page_dirty(vmx->nested.virtual_apic_page);
-			vmx->nested.virtual_apic_page = NULL;
-		}
-		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->virtual_apic_page_addr);
+		map = &vmx->nested.virtual_apic_map;
 
 		/*
 		 * If translation failed, VM entry will fail because
@@ -2868,11 +2862,9 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
 		 * control.  But such a configuration is useless, so
 		 * let's keep the code simple.
 		 */
-		if (!is_error_page(page)) {
-			vmx->nested.virtual_apic_page = page;
-			hpa = page_to_phys(vmx->nested.virtual_apic_page);
-			vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, hpa);
-		}
+		if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->virtual_apic_page_addr), map))
+			vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, pfn_to_hpa(map->pfn));
+
 	}
 
 	if (nested_cpu_has_posted_intr(vmcs12)) {
@@ -3279,11 +3271,12 @@ static void vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu)
 
 	max_irr = find_last_bit((unsigned long *)vmx->nested.pi_desc->pir, 256);
 	if (max_irr != 256) {
-		vapic_page = kmap(vmx->nested.virtual_apic_page);
+		vapic_page = vmx->nested.virtual_apic_map.hva;
+		if (!vapic_page)
+			return;
+
 		__kvm_apic_update_irr(vmx->nested.pi_desc->pir,
 			vapic_page, &max_irr);
-		kunmap(vmx->nested.virtual_apic_page);
-
 		status = vmcs_read16(GUEST_INTR_STATUS);
 		if ((u8)max_irr > ((u8)status & 0xff)) {
 			status &= ~0xff;
@@ -3917,10 +3910,7 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 		kvm_release_page_dirty(vmx->nested.apic_access_page);
 		vmx->nested.apic_access_page = NULL;
 	}
-	if (vmx->nested.virtual_apic_page) {
-		kvm_release_page_dirty(vmx->nested.virtual_apic_page);
-		vmx->nested.virtual_apic_page = NULL;
-	}
+	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
 	if (vmx->nested.pi_desc_page) {
 		kunmap(vmx->nested.pi_desc_page);
 		kvm_release_page_dirty(vmx->nested.pi_desc_page);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index a3da75a..ea9cffc 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3627,14 +3627,13 @@ static bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
 
 	if (WARN_ON_ONCE(!is_guest_mode(vcpu)) ||
 	    !nested_cpu_has_vid(get_vmcs12(vcpu)) ||
-	    WARN_ON_ONCE(!vmx->nested.virtual_ap
[PATCH v6 00/14] KVM/X86: Introduce a new guest mapping interface
Guest memory can either be directly managed by the kernel (i.e. have a
"struct page") or it can simply live outside kernel control (i.e. not
have a "struct page"). KVM mostly supports these two modes, except in a
few places where the code seems to assume that guest memory must have a
"struct page".

This patchset introduces a new mapping interface to map guest memory
into host kernel memory which also supports PFN-based memory (i.e.
memory without a 'struct page'). It also converts all offending code to
this interface, or simply reads/writes directly from guest memory.
Patch 2 additionally fixes an incorrect page release and marks the page
as dirty (i.e. as a side-effect of using the helper function to write).

As far as I can see all offending code is now fixed except the
APIC-access page, which I will handle in a separate series along with
dropping kvm_vcpu_gfn_to_page and kvm_vcpu_gpa_to_page from the
internal KVM API.

The current implementation of the new API uses memremap to map memory
that does not have a "struct page". This proves to be very slow for
high-frequency mappings. Since this does not affect the normal use-case
where a "struct page" is available, the performance of this API will be
handled by a separate patch series.

So the simple way to use memory outside kernel control is:

1- Pass 'mem=' in the kernel command-line to limit the amount of memory
   managed by the kernel.
2- Map this physical memory you want to give to the guest with:
   mmap("/dev/mem", physical_address_offset, ..)
3- Use the user-space virtual address as the "userspace_addr" field in
   KVM_SET_USER_MEMORY_REGION ioctl.

v5 -> v6:
- Added one extra patch to ensure that support for this mem= case is
  complete for x86.
- Added a helper function to check if the mapping is mapped or not.
- Added more comments on the struct.
- Setting ->page to NULL on unmap and to a poison ptr if unused during map
- Checking for map ptr before using it.
- Change kvm_vcpu_unmap to also mark page dirty for LM. That requires
  passing the vCPU pointer again to this function.

v4 -> v5:
- Introduce a new parameter 'dirty' into kvm_vcpu_unmap
- A horrible rebase due to nested.c :)
- Dropped a couple of hyperv patches as the code was fixed already as a
  side-effect of another patch.
- Added a new trivial cleanup patch.

v3 -> v4:
- Rebase
- Add a new patch to also fix the newly introduced enlightened VMCS.

v2 -> v3:
- Rebase
- Add a new patch to also fix the newly introduced shadow VMCS.

Filippo Sironi (1):
  X86/KVM: Handle PFNs outside of kernel reach when touching GPTEs

KarimAllah Ahmed (13):
  X86/nVMX: handle_vmon: Read 4 bytes from guest memory
  X86/nVMX: Update the PML table without mapping and unmapping the page
  KVM: Introduce a new guest mapping API
  X86/nVMX: handle_vmptrld: Use kvm_vcpu_map when copying VMCS12 from
    guest memory
  KVM/nVMX: Use kvm_vcpu_map when mapping the L1 MSR bitmap
  KVM/nVMX: Use kvm_vcpu_map when mapping the virtual APIC page
  KVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt
    descriptor table
  KVM/X86: Use kvm_vcpu_map in emulator_cmpxchg_emulated
  KVM/nSVM: Use the new mapping API for mapping guest memory
  KVM/nVMX: Use kvm_vcpu_map for accessing the shadow VMCS
  KVM/nVMX: Use kvm_vcpu_map for accessing the enlightened VMCS
  KVM/nVMX: Use page_address_valid in a few more locations
  kvm, x86: Properly check whether a pfn is an MMIO or not

 arch/x86/include/asm/e820/api.h |   1 +
 arch/x86/kernel/e820.c          |  18 ++++-
 arch/x86/kvm/mmu.c              |   5 +-
 arch/x86/kvm/paging_tmpl.h      |  38 ++++++---
 arch/x86/kvm/svm.c              |  97 ++++++++++-----------
 arch/x86/kvm/vmx/nested.c       | 160 +++++++++++++++-------------------
 arch/x86/kvm/vmx/vmx.c          |  19 ++---
 arch/x86/kvm/vmx/vmx.h          |   9 ++-
 arch/x86/kvm/x86.c              |  14 ++--
 include/linux/kvm_host.h        |  28 ++++++
 virt/kvm/kvm_main.c             |  64 ++++++++++++++
 11 files changed, 267 insertions(+), 186 deletions(-)

--
2.7.4
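[Editor's note: a hedged userspace sketch of the three steps above. vmfd is assumed to be a KVM VM file descriptor created elsewhere; PHYS_BASE and MEM_SIZE are illustrative values that must lie past the mem= boundary.]

	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <linux/kvm.h>

	/* Step 2: map RAM that the kernel does not manage. */
	int memfd = open("/dev/mem", O_RDWR);
	void *guest_mem = mmap(NULL, MEM_SIZE, PROT_READ | PROT_WRITE,
			       MAP_SHARED, memfd, PHYS_BASE);

	/* Step 3: hand the mapping to KVM as guest memory. */
	struct kvm_userspace_memory_region region = {
		.slot            = 0,
		.guest_phys_addr = 0,
		.memory_size     = MEM_SIZE,
		.userspace_addr  = (unsigned long)guest_mem,
	};
	ioctl(vmfd, KVM_SET_USER_MEMORY_REGION, &region);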
[PATCH v6 02/14] X86/nVMX: Update the PML table without mapping and unmapping the page
Update the PML table without mapping and unmapping the page. This also
avoids using kvm_vcpu_gpa_to_page(..) which assumes that there is a
"struct page" for guest memory.

As a side-effect of using kvm_write_guest_page the page is also
properly marked as dirty.

Signed-off-by: KarimAllah Ahmed
Reviewed-by: David Hildenbrand
Reviewed-by: Konrad Rzeszutek Wilk
---
v1 -> v2:
- Use kvm_write_guest_page instead of kvm_write_guest (pbonzini)
- Do not use pointer arithmetic for pml_address (pbonzini)
---
 arch/x86/kvm/vmx/vmx.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 4341175..a3da75a 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7206,9 +7206,7 @@ static int vmx_write_pml_buffer(struct kvm_vcpu *vcpu)
 {
 	struct vmcs12 *vmcs12;
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	gpa_t gpa;
-	struct page *page = NULL;
-	u64 *pml_address;
+	gpa_t gpa, dst;
 
 	if (is_guest_mode(vcpu)) {
 		WARN_ON_ONCE(vmx->nested.pml_full);
@@ -7228,15 +7226,13 @@ static int vmx_write_pml_buffer(struct kvm_vcpu *vcpu)
 		}
 
 		gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS) & ~0xFFFull;
+		dst = vmcs12->pml_address + sizeof(u64) * vmcs12->guest_pml_index;
 
-		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->pml_address);
-		if (is_error_page(page))
+		if (kvm_write_guest_page(vcpu->kvm, gpa_to_gfn(dst), &gpa,
+					 offset_in_page(dst), sizeof(gpa)))
 			return 0;
 
-		pml_address = kmap(page);
-		pml_address[vmcs12->guest_pml_index--] = gpa;
-		kunmap(page);
-		kvm_release_page_clean(page);
+		vmcs12->guest_pml_index--;
 	}
 
 	return 0;
--
2.7.4
[PATCH v5 01/13] X86/nVMX: handle_vmon: Read 4 bytes from guest memory
Read the data directly from guest memory instead of the
map->read->unmap sequence. This also avoids using
kvm_vcpu_gpa_to_page() and kmap(), which assume that there is a
"struct page" for guest memory.

Suggested-by: Jim Mattson
Signed-off-by: KarimAllah Ahmed
Reviewed-by: Jim Mattson
Reviewed-by: David Hildenbrand
---
v1 -> v2:
- Massage commit message a bit.
---
 arch/x86/kvm/vmx/nested.c | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 3170e29..536468a 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4192,7 +4192,7 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
 {
 	int ret;
 	gpa_t vmptr;
-	struct page *page;
+	uint32_t revision;
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	const u64 VMXON_NEEDED_FEATURES = FEATURE_CONTROL_LOCKED
 		| FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX;
@@ -4241,18 +4241,10 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
 	if (!PAGE_ALIGNED(vmptr) || (vmptr >> cpuid_maxphyaddr(vcpu)))
 		return nested_vmx_failInvalid(vcpu);
 
-	page = kvm_vcpu_gpa_to_page(vcpu, vmptr);
-	if (is_error_page(page))
+	if (kvm_read_guest(vcpu->kvm, vmptr, &revision, sizeof(revision)) ||
+	    revision != VMCS12_REVISION)
 		return nested_vmx_failInvalid(vcpu);
 
-	if (*(u32 *)kmap(page) != VMCS12_REVISION) {
-		kunmap(page);
-		kvm_release_page_clean(page);
-		return nested_vmx_failInvalid(vcpu);
-	}
-	kunmap(page);
-	kvm_release_page_clean(page);
-
 	vmx->nested.vmxon_ptr = vmptr;
 	ret = enter_vmx_operation(vcpu);
 	if (ret)
--
2.7.4
[PATCH v5 00/13] KVM/X86: Introduce a new guest mapping interface
Guest memory can either be directly managed by the kernel (i.e. have a
"struct page") or it can simply live outside kernel control (i.e. not
have a "struct page"). KVM mostly supports these two modes, except in a
few places where the code seems to assume that guest memory must have a
"struct page".

This patchset introduces a new mapping interface to map guest memory
into host kernel memory which also supports PFN-based memory (i.e.
memory without a 'struct page'). It also converts all offending code to
this interface, or simply reads/writes directly from guest memory.
Patch 2 additionally fixes an incorrect page release and marks the page
as dirty (i.e. as a side-effect of using the helper function to write).

As far as I can see all offending code is now fixed except the
APIC-access page, which I will handle in a separate series along with
dropping kvm_vcpu_gfn_to_page and kvm_vcpu_gpa_to_page from the
internal KVM API.

The current implementation of the new API uses memremap to map memory
that does not have a "struct page". This proves to be very slow for
high-frequency mappings. Since this does not affect the normal use-case
where a "struct page" is available, the performance of this API will be
handled by a separate patch series.

v4 -> v5:
- Introduce a new parameter 'dirty' into kvm_vcpu_unmap
- A horrible rebase due to nested.c :)
- Dropped a couple of hyperv patches as the code was fixed already as a
  side-effect of another patch.
- Added a new trivial cleanup patch.

v3 -> v4:
- Rebase
- Add a new patch to also fix the newly introduced enlightened VMCS.

v2 -> v3:
- Rebase
- Add a new patch to also fix the newly introduced shadow VMCS.

Filippo Sironi (1):
  X86/KVM: Handle PFNs outside of kernel reach when touching GPTEs

KarimAllah Ahmed (12):
  X86/nVMX: handle_vmon: Read 4 bytes from guest memory
  X86/nVMX: Update the PML table without mapping and unmapping the page
  KVM: Introduce a new guest mapping API
  X86/nVMX: handle_vmptrld: Use kvm_vcpu_map when copying VMCS12 from
    guest memory
  KVM/nVMX: Use kvm_vcpu_map when mapping the L1 MSR bitmap
  KVM/nVMX: Use kvm_vcpu_map when mapping the virtual APIC page
  KVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt
    descriptor table
  KVM/X86: Use kvm_vcpu_map in emulator_cmpxchg_emulated
  KVM/nSVM: Use the new mapping API for mapping guest memory
  KVM/nVMX: Use kvm_vcpu_map for accessing the shadow VMCS
  KVM/nVMX: Use kvm_vcpu_map for accessing the enlightened VMCS
  KVM/nVMX: Use page_address_valid in a few more locations

 arch/x86/kvm/paging_tmpl.h |  38 +++++++---
 arch/x86/kvm/svm.c         |  97 ++++++++++-----------
 arch/x86/kvm/vmx/nested.c  | 160 +++++++++++++++++------------------
 arch/x86/kvm/vmx/vmx.c     |  19 ++---
 arch/x86/kvm/vmx/vmx.h     |   9 ++-
 arch/x86/kvm/x86.c         |  13 ++--
 include/linux/kvm_host.h   |   9 +++
 virt/kvm/kvm_main.c        |  53 ++++++++++++
 8 files changed, 217 insertions(+), 181 deletions(-)

--
2.7.4
[PATCH v5 12/13] KVM/nVMX: Use kvm_vcpu_map for accessing the enlightened VMCS
Use kvm_vcpu_map for accessing the enlightened VMCS since using
kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that
has a "struct page".

Signed-off-by: KarimAllah Ahmed
---
v4 -> v5:
- s/enhanced/enlightened
- unmap with dirty flag
---
 arch/x86/kvm/vmx/nested.c | 14 +++++---------
 arch/x86/kvm/vmx/vmx.h    |  2 +-
 2 files changed, 6 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 04a8b43..ccb3b63 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -193,10 +193,8 @@ static inline void nested_release_evmcs(struct kvm_vcpu *vcpu)
 	if (!vmx->nested.hv_evmcs)
 		return;
 
-	kunmap(vmx->nested.hv_evmcs_page);
-	kvm_release_page_dirty(vmx->nested.hv_evmcs_page);
+	kvm_vcpu_unmap(&vmx->nested.hv_evmcs_map, true);
 	vmx->nested.hv_evmcs_vmptr = -1ull;
-	vmx->nested.hv_evmcs_page = NULL;
 	vmx->nested.hv_evmcs = NULL;
 }
 
@@ -1769,13 +1767,11 @@ static int nested_vmx_handle_enlightened_vmptrld(struct kvm_vcpu *vcpu,
 
 	nested_release_evmcs(vcpu);
 
-	vmx->nested.hv_evmcs_page = kvm_vcpu_gpa_to_page(
-		vcpu, assist_page.current_nested_vmcs);
-
-	if (unlikely(is_error_page(vmx->nested.hv_evmcs_page)))
+	if (kvm_vcpu_map(vcpu, gpa_to_gfn(assist_page.current_nested_vmcs),
+			 &vmx->nested.hv_evmcs_map))
 		return 0;
 
-	vmx->nested.hv_evmcs = kmap(vmx->nested.hv_evmcs_page);
+	vmx->nested.hv_evmcs = vmx->nested.hv_evmcs_map.hva;
 
 	/*
 	 * Currently, KVM only supports eVMCS version 1
@@ -4278,7 +4274,7 @@ static int handle_vmclear(struct kvm_vcpu *vcpu)
 		return nested_vmx_failValid(vcpu,
 			VMXERR_VMCLEAR_VMXON_POINTER);
 
-	if (vmx->nested.hv_evmcs_page) {
+	if (vmx->nested.hv_evmcs_map.hva) {
 		if (vmptr == vmx->nested.hv_evmcs_vmptr)
 			nested_release_evmcs(vcpu);
 	} else {
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index bd04725..a130139 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -172,7 +172,7 @@ struct nested_vmx {
 	} smm;
 
 	gpa_t hv_evmcs_vmptr;
-	struct page *hv_evmcs_page;
+	struct kvm_host_map hv_evmcs_map;
 	struct hv_enlightened_vmcs *hv_evmcs;
 };
--
2.7.4
[PATCH v5 08/13] KVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt descriptor table
Use kvm_vcpu_map when mapping the posted interrupt descriptor table
since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest
memory that has a "struct page".

One additional semantic change is that the virtual host mapping
lifecycle has changed a bit. It now has the same lifetime as the
pinning of the interrupt descriptor table page on the host side.

Signed-off-by: KarimAllah Ahmed
---
v4 -> v5:
- unmap with dirty flag
v1 -> v2:
- Do not change the lifecycle of the mapping (pbonzini)
---
 arch/x86/kvm/vmx/nested.c | 43 ++++++++++++-------------------------------
 arch/x86/kvm/vmx/vmx.h    |  2 +-
 2 files changed, 13 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index dcff99d..b4230ce 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -230,12 +230,8 @@ static void free_nested(struct kvm_vcpu *vcpu)
 		vmx->nested.apic_access_page = NULL;
 	}
 	kvm_vcpu_unmap(&vmx->nested.virtual_apic_map, true);
-	if (vmx->nested.pi_desc_page) {
-		kunmap(vmx->nested.pi_desc_page);
-		kvm_release_page_dirty(vmx->nested.pi_desc_page);
-		vmx->nested.pi_desc_page = NULL;
-		vmx->nested.pi_desc = NULL;
-	}
+	kvm_vcpu_unmap(&vmx->nested.pi_desc_map, true);
+	vmx->nested.pi_desc = NULL;
 
 	kvm_mmu_free_roots(vcpu, &vcpu->arch.guest_mmu, KVM_MMU_ROOTS_ALL);
 
@@ -2868,26 +2864,15 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
 	}
 
 	if (nested_cpu_has_posted_intr(vmcs12)) {
-		if (vmx->nested.pi_desc_page) { /* shouldn't happen */
-			kunmap(vmx->nested.pi_desc_page);
-			kvm_release_page_dirty(vmx->nested.pi_desc_page);
-			vmx->nested.pi_desc_page = NULL;
-			vmx->nested.pi_desc = NULL;
-			vmcs_write64(POSTED_INTR_DESC_ADDR, -1ull);
+		map = &vmx->nested.pi_desc_map;
+
+		if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->posted_intr_desc_addr), map)) {
+			vmx->nested.pi_desc =
+				(struct pi_desc *)(((void *)map->hva) +
+				offset_in_page(vmcs12->posted_intr_desc_addr));
+			vmcs_write64(POSTED_INTR_DESC_ADDR,
+				     pfn_to_hpa(map->pfn) + offset_in_page(vmcs12->posted_intr_desc_addr));
 		}
-		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->posted_intr_desc_addr);
-		if (is_error_page(page))
-			return;
-		vmx->nested.pi_desc_page = page;
-		vmx->nested.pi_desc = kmap(vmx->nested.pi_desc_page);
-		vmx->nested.pi_desc =
-			(struct pi_desc *)((void *)vmx->nested.pi_desc +
-			(unsigned long)(vmcs12->posted_intr_desc_addr &
-			(PAGE_SIZE - 1)));
-		vmcs_write64(POSTED_INTR_DESC_ADDR,
-			page_to_phys(vmx->nested.pi_desc_page) +
-			(unsigned long)(vmcs12->posted_intr_desc_addr &
-			(PAGE_SIZE - 1)));
 	}
 	if (nested_vmx_prepare_msr_bitmap(vcpu, vmcs12))
 		vmcs_set_bits(CPU_BASED_VM_EXEC_CONTROL,
@@ -3911,12 +3896,8 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 		vmx->nested.apic_access_page = NULL;
 	}
 	kvm_vcpu_unmap(&vmx->nested.virtual_apic_map, true);
-	if (vmx->nested.pi_desc_page) {
-		kunmap(vmx->nested.pi_desc_page);
-		kvm_release_page_dirty(vmx->nested.pi_desc_page);
-		vmx->nested.pi_desc_page = NULL;
-		vmx->nested.pi_desc = NULL;
-	}
+	kvm_vcpu_unmap(&vmx->nested.pi_desc_map, true);
+	vmx->nested.pi_desc = NULL;
 
 	/*
 	 * We are now running in L2, mmu_notifier will force to reload the
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index f618f52..bd04725 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -143,7 +143,7 @@ struct nested_vmx {
 	 */
 	struct page *apic_access_page;
 	struct kvm_host_map virtual_apic_map;
-	struct page *pi_desc_page;
+	struct kvm_host_map pi_desc_map;
 
 	struct kvm_host_map msr_bitmap_map;
--
2.7.4
[PATCH v5 09/13] KVM/X86: Use kvm_vcpu_map in emulator_cmpxchg_emulated
Use kvm_vcpu_map in emulator_cmpxchg_emulated since using
kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that
has a "struct page".

Signed-off-by: KarimAllah Ahmed
---
v4 -> v5:
- unmap with dirty flag
v1 -> v2:
- Update to match the new API return codes
---
 arch/x86/kvm/x86.c | 13 +++++++------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 02c8e09..0c35cfc 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5492,9 +5492,9 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt,
 				     unsigned int bytes,
 				     struct x86_exception *exception)
 {
+	struct kvm_host_map map;
 	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
 	gpa_t gpa;
-	struct page *page;
 	char *kaddr;
 	bool exchanged;
 
@@ -5511,12 +5511,11 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt,
 	if (((gpa + bytes - 1) & PAGE_MASK) != (gpa & PAGE_MASK))
 		goto emul_write;
 
-	page = kvm_vcpu_gfn_to_page(vcpu, gpa >> PAGE_SHIFT);
-	if (is_error_page(page))
+	if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map))
 		goto emul_write;
 
-	kaddr = kmap_atomic(page);
-	kaddr += offset_in_page(gpa);
+	kaddr = map.hva + offset_in_page(gpa);
+
 	switch (bytes) {
 	case 1:
 		exchanged = CMPXCHG_TYPE(u8, kaddr, old, new);
@@ -5533,8 +5532,8 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt,
 	default:
 		BUG();
 	}
-	kunmap_atomic(kaddr);
-	kvm_release_page_dirty(page);
+
+	kvm_vcpu_unmap(&map, true);
 
 	if (!exchanged)
 		return X86EMUL_CMPXCHG_FAILED;
--
2.7.4
[PATCH v5 02/13] X86/nVMX: Update the PML table without mapping and unmapping the page
Update the PML table without mapping and unmapping the page. This also
avoids using kvm_vcpu_gpa_to_page(..) which assumes that there is a
"struct page" for guest memory.

As a side-effect of using kvm_write_guest_page the page is also
properly marked as dirty.

Signed-off-by: KarimAllah Ahmed
Reviewed-by: David Hildenbrand
---
v1 -> v2:
- Use kvm_write_guest_page instead of kvm_write_guest (pbonzini)
- Do not use pointer arithmetic for pml_address (pbonzini)
---
 arch/x86/kvm/vmx/vmx.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 4d39f73..71d88df 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7199,9 +7199,7 @@ static int vmx_write_pml_buffer(struct kvm_vcpu *vcpu)
 {
 	struct vmcs12 *vmcs12;
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	gpa_t gpa;
-	struct page *page = NULL;
-	u64 *pml_address;
+	gpa_t gpa, dst;
 
 	if (is_guest_mode(vcpu)) {
 		WARN_ON_ONCE(vmx->nested.pml_full);
@@ -7221,15 +7219,13 @@ static int vmx_write_pml_buffer(struct kvm_vcpu *vcpu)
 		}
 
 		gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS) & ~0xFFFull;
+		dst = vmcs12->pml_address + sizeof(u64) * vmcs12->guest_pml_index;
 
-		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->pml_address);
-		if (is_error_page(page))
+		if (kvm_write_guest_page(vcpu->kvm, gpa_to_gfn(dst), &gpa,
+					 offset_in_page(dst), sizeof(gpa)))
 			return 0;
 
-		pml_address = kmap(page);
-		pml_address[vmcs12->guest_pml_index--] = gpa;
-		kunmap(page);
-		kvm_release_page_clean(page);
+		vmcs12->guest_pml_index--;
 	}
 
 	return 0;
--
2.7.4
[PATCH v5 07/13] KVM/nVMX: Use kvm_vcpu_map when mapping the virtual APIC page
Use kvm_vcpu_map when mapping the virtual APIC page since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". One additional semantic change is that the virtual host mapping lifecycle has changed a bit. It now has the same lifetime of the pinning of the virtual APIC page on the host side. Signed-off-by: KarimAllah Ahmed --- v4 -> v5: - unmap with dirty flag v1 -> v2: - Do not change the lifecycle of the mapping (pbonzini) - Use pfn_to_hpa instead of gfn_to_gpa --- arch/x86/kvm/vmx/nested.c | 32 +++- arch/x86/kvm/vmx/vmx.c| 5 ++--- arch/x86/kvm/vmx/vmx.h| 2 +- 3 files changed, 14 insertions(+), 25 deletions(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 4127ad9..dcff99d 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -229,10 +229,7 @@ static void free_nested(struct kvm_vcpu *vcpu) kvm_release_page_dirty(vmx->nested.apic_access_page); vmx->nested.apic_access_page = NULL; } - if (vmx->nested.virtual_apic_page) { - kvm_release_page_dirty(vmx->nested.virtual_apic_page); - vmx->nested.virtual_apic_page = NULL; - } + kvm_vcpu_unmap(>nested.virtual_apic_map, true); if (vmx->nested.pi_desc_page) { kunmap(vmx->nested.pi_desc_page); kvm_release_page_dirty(vmx->nested.pi_desc_page); @@ -2817,6 +2814,7 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) { struct vmcs12 *vmcs12 = get_vmcs12(vcpu); struct vcpu_vmx *vmx = to_vmx(vcpu); + struct kvm_host_map *map; struct page *page; u64 hpa; @@ -2849,11 +2847,7 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) } if (nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW)) { - if (vmx->nested.virtual_apic_page) { /* shouldn't happen */ - kvm_release_page_dirty(vmx->nested.virtual_apic_page); - vmx->nested.virtual_apic_page = NULL; - } - page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->virtual_apic_page_addr); + map = >nested.virtual_apic_map; /* * If translation failed, VM entry will fail because @@ -2868,11 +2862,9 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) * control. But such a configuration is useless, so * let's keep the code simple. 
*/ - if (!is_error_page(page)) { - vmx->nested.virtual_apic_page = page; - hpa = page_to_phys(vmx->nested.virtual_apic_page); - vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, hpa); - } + if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->virtual_apic_page_addr), map)) + vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, pfn_to_hpa(map->pfn)); + } if (nested_cpu_has_posted_intr(vmcs12)) { @@ -3279,11 +3271,12 @@ static void vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu) max_irr = find_last_bit((unsigned long *)vmx->nested.pi_desc->pir, 256); if (max_irr != 256) { - vapic_page = kmap(vmx->nested.virtual_apic_page); + vapic_page = vmx->nested.virtual_apic_map.hva; + if (!vapic_page) + return; + __kvm_apic_update_irr(vmx->nested.pi_desc->pir, vapic_page, _irr); - kunmap(vmx->nested.virtual_apic_page); - status = vmcs_read16(GUEST_INTR_STATUS); if ((u8)max_irr > ((u8)status & 0xff)) { status &= ~0xff; @@ -3917,10 +3910,7 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason, kvm_release_page_dirty(vmx->nested.apic_access_page); vmx->nested.apic_access_page = NULL; } - if (vmx->nested.virtual_apic_page) { - kvm_release_page_dirty(vmx->nested.virtual_apic_page); - vmx->nested.virtual_apic_page = NULL; - } + kvm_vcpu_unmap(>nested.virtual_apic_map, true); if (vmx->nested.pi_desc_page) { kunmap(vmx->nested.pi_desc_page); kvm_release_page_dirty(vmx->nested.pi_desc_page); diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 71d88df..e13308e 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -3627,14 +3627,13 @@ static bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu) if (WARN_ON_ONCE(!is_guest_mode(vcpu)) || !nested_cpu_has_vid(get_vmcs12(vcpu)) || - WARN_ON_ONCE(!vmx->nested.virtual_apic_page)) + WARN_ON_ONCE(!vmx->nested.virtua
[PATCH v5 11/13] KVM/nVMX: Use kvm_vcpu_map for accessing the shadow VMCS
Use kvm_vcpu_map for accessing the shadow VMCS since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". Signed-off-by: KarimAllah Ahmed --- v4 -> v5: - unmap with dirty flag --- arch/x86/kvm/vmx/nested.c | 25 - 1 file changed, 12 insertions(+), 13 deletions(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index b4230ce..04a8b43 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -588,20 +588,20 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu, static void nested_cache_shadow_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12) { + struct kvm_host_map map; struct vmcs12 *shadow; - struct page *page; if (!nested_cpu_has_shadow_vmcs(vmcs12) || vmcs12->vmcs_link_pointer == -1ull) return; shadow = get_shadow_vmcs12(vcpu); - page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->vmcs_link_pointer); - memcpy(shadow, kmap(page), VMCS12_SIZE); + if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->vmcs_link_pointer), &map)) + return; - kunmap(page); - kvm_release_page_clean(page); + memcpy(shadow, map.hva, VMCS12_SIZE); + kvm_vcpu_unmap(&map, false); } static void nested_flush_cached_shadow_vmcs12(struct kvm_vcpu *vcpu, @@ -2637,9 +2637,9 @@ static int nested_vmx_check_vmentry_prereqs(struct kvm_vcpu *vcpu, static int nested_vmx_check_vmcs_link_ptr(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12) { - int r; - struct page *page; + int r = 0; struct vmcs12 *shadow; + struct kvm_host_map map; if (vmcs12->vmcs_link_pointer == -1ull) return 0; @@ -2647,17 +2647,16 @@ static int nested_vmx_check_vmcs_link_ptr(struct kvm_vcpu *vcpu, if (!page_address_valid(vcpu, vmcs12->vmcs_link_pointer)) return -EINVAL; - page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->vmcs_link_pointer); - if (is_error_page(page)) + if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->vmcs_link_pointer), &map)) return -EINVAL; - r = 0; - shadow = kmap(page); + shadow = map.hva; + if (shadow->hdr.revision_id != VMCS12_REVISION || shadow->hdr.shadow_vmcs != nested_cpu_has_shadow_vmcs(vmcs12)) r = -EINVAL; - kunmap(page); - kvm_release_page_clean(page); + + kvm_vcpu_unmap(&map, false); return r; } -- 2.7.4
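For read-only accesses like the two above, the mapping is released with dirty == false so the backing pfn is not spuriously marked dirty. The read-validate-unmap shape, reduced to a sketch:

	struct kvm_host_map map;
	struct vmcs12 *shadow;
	int r = 0;

	if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->vmcs_link_pointer), &map))
		return -EINVAL;

	shadow = map.hva;
	if (shadow->hdr.revision_id != VMCS12_REVISION ||
	    shadow->hdr.shadow_vmcs != nested_cpu_has_shadow_vmcs(vmcs12))
		r = -EINVAL;

	/* Read-only access: release the mapping without dirtying the page. */
	kvm_vcpu_unmap(&map, false);
	return r;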
[PATCH v5 10/13] KVM/nSVM: Use the new mapping API for mapping guest memory
Use the new mapping API for mapping guest memory to avoid depending on "struct page". Signed-off-by: KarimAllah Ahmed --- v4 -> v5: - unmap with dirty flag --- arch/x86/kvm/svm.c | 97 +++--- 1 file changed, 49 insertions(+), 48 deletions(-) diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c index 307e5bd..d886664 100644 --- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c @@ -3062,32 +3062,6 @@ static inline bool nested_svm_nmi(struct vcpu_svm *svm) return false; } -static void *nested_svm_map(struct vcpu_svm *svm, u64 gpa, struct page **_page) -{ - struct page *page; - - might_sleep(); - - page = kvm_vcpu_gfn_to_page(>vcpu, gpa >> PAGE_SHIFT); - if (is_error_page(page)) - goto error; - - *_page = page; - - return kmap(page); - -error: - kvm_inject_gp(>vcpu, 0); - - return NULL; -} - -static void nested_svm_unmap(struct page *page) -{ - kunmap(page); - kvm_release_page_dirty(page); -} - static int nested_svm_intercept_ioio(struct vcpu_svm *svm) { unsigned port, size, iopm_len; @@ -3290,10 +3264,11 @@ static inline void copy_vmcb_control_area(struct vmcb *dst_vmcb, struct vmcb *fr static int nested_svm_vmexit(struct vcpu_svm *svm) { + int rc; struct vmcb *nested_vmcb; struct vmcb *hsave = svm->nested.hsave; struct vmcb *vmcb = svm->vmcb; - struct page *page; + struct kvm_host_map map; trace_kvm_nested_vmexit_inject(vmcb->control.exit_code, vmcb->control.exit_info_1, @@ -3302,9 +3277,14 @@ static int nested_svm_vmexit(struct vcpu_svm *svm) vmcb->control.exit_int_info_err, KVM_ISA_SVM); - nested_vmcb = nested_svm_map(svm, svm->nested.vmcb, ); - if (!nested_vmcb) + rc = kvm_vcpu_map(>vcpu, gfn_to_gpa(svm->nested.vmcb), ); + if (rc) { + if (rc == -EINVAL) + kvm_inject_gp(>vcpu, 0); return 1; + } + + nested_vmcb = map.hva; /* Exit Guest-Mode */ leave_guest_mode(>vcpu); @@ -3408,7 +3388,7 @@ static int nested_svm_vmexit(struct vcpu_svm *svm) mark_all_dirty(svm->vmcb); - nested_svm_unmap(page); + kvm_vcpu_unmap(, true); nested_svm_uninit_mmu_context(>vcpu); kvm_mmu_reset_context(>vcpu); @@ -3466,7 +3446,7 @@ static bool nested_vmcb_checks(struct vmcb *vmcb) } static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, -struct vmcb *nested_vmcb, struct page *page) +struct vmcb *nested_vmcb, struct kvm_host_map *map) { if (kvm_get_rflags(>vcpu) & X86_EFLAGS_IF) svm->vcpu.arch.hflags |= HF_HIF_MASK; @@ -3550,7 +3530,7 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, svm->vmcb->control.pause_filter_thresh = nested_vmcb->control.pause_filter_thresh; - nested_svm_unmap(page); + kvm_vcpu_unmap(map, true); /* Enter Guest-Mode */ enter_guest_mode(>vcpu); @@ -3570,17 +3550,23 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, static bool nested_svm_vmrun(struct vcpu_svm *svm) { + int rc; struct vmcb *nested_vmcb; struct vmcb *hsave = svm->nested.hsave; struct vmcb *vmcb = svm->vmcb; - struct page *page; + struct kvm_host_map map; u64 vmcb_gpa; vmcb_gpa = svm->vmcb->save.rax; - nested_vmcb = nested_svm_map(svm, svm->vmcb->save.rax, ); - if (!nested_vmcb) + rc = kvm_vcpu_map(>vcpu, gfn_to_gpa(vmcb_gpa), ); + if (rc) { + if (rc == -EINVAL) + kvm_inject_gp(>vcpu, 0); return false; + } + + nested_vmcb = map.hva; if (!nested_vmcb_checks(nested_vmcb)) { nested_vmcb->control.exit_code= SVM_EXIT_ERR; @@ -3588,7 +3574,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm) nested_vmcb->control.exit_info_1 = 0; nested_vmcb->control.exit_info_2 = 0; - nested_svm_unmap(page); + kvm_vcpu_unmap(, true); return false; } @@ -3632,7 +3618,7 @@ static bool 
nested_svm_vmrun(struct vcpu_svm *svm) copy_vmcb_control_area(hsave, vmcb); - enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb, page); + enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb, ); return true; } @@ -3656,21 +3642,26 @@ static void nested_svm_vmloadsave(struct vmcb *from_vmcb, struct vmcb *to_vmcb) static int vmload_interception(struct vcpu_svm *svm) { struct vmcb *nested_vmcb; -
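The return codes of kvm_vcpu_map let the SVM code preserve the old nested_svm_map semantics: -EINVAL means the guest supplied a bad address and gets a #GP injected, while any other failure is simply propagated. A sketch of the pattern, written under the assumption that vmcb_gpa is a guest-physical address and therefore needs a gpa-to-gfn conversion before mapping:

	struct kvm_host_map map;
	struct vmcb *nested_vmcb;
	int rc;

	rc = kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map);
	if (rc) {
		/* -EINVAL: the guest passed an invalid address, inject #GP. */
		if (rc == -EINVAL)
			kvm_inject_gp(&svm->vcpu, 0);
		return false;
	}
	nested_vmcb = map.hva;
	/* ... use nested_vmcb ..., then release the mapping and dirty the page: */
	kvm_vcpu_unmap(&map, true);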
[PATCH v5 06/13] KVM/nVMX: Use kvm_vcpu_map when mapping the L1 MSR bitmap
Use kvm_vcpu_map when mapping the L1 MSR bitmap since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". Signed-off-by: KarimAllah Ahmed --- v4 -> v5: - unmap with dirty flag v1 -> v2: - Do not change the lifecycle of the mapping (pbonzini) --- arch/x86/kvm/vmx/nested.c | 11 +-- arch/x86/kvm/vmx/vmx.h | 3 +++ 2 files changed, 8 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 5602b0c..4127ad9 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -507,9 +507,10 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12) { int msr; - struct page *page; unsigned long *msr_bitmap_l1; unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.vmcs02.msr_bitmap; + struct kvm_host_map *map = &to_vmx(vcpu)->nested.msr_bitmap_map; + /* * pred_cmd & spec_ctrl are trying to verify two things: * @@ -535,11 +536,10 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu, !pred_cmd && !spec_ctrl) return false; - page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->msr_bitmap); - if (is_error_page(page)) + if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->msr_bitmap), map)) return false; - msr_bitmap_l1 = (unsigned long *)kmap(page); + msr_bitmap_l1 = (unsigned long *)map->hva; if (nested_cpu_has_apic_reg_virt(vmcs12)) { /* * L0 need not intercept reads for MSRs between 0x800 and 0x8ff, it @@ -587,8 +587,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu, MSR_IA32_PRED_CMD, MSR_TYPE_W); - kunmap(page); - kvm_release_page_clean(page); + kvm_vcpu_unmap(&to_vmx(vcpu)->nested.msr_bitmap_map, false); return true; } diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index 9932895..6fb69d8 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -144,6 +144,9 @@ struct nested_vmx { struct page *apic_access_page; struct page *virtual_apic_page; struct page *pi_desc_page; + + struct kvm_host_map msr_bitmap_map; + struct pi_desc *pi_desc; bool pi_pending; u16 posted_intr_nv; -- 2.7.4
[PATCH v5 04/13] KVM: Introduce a new guest mapping API
In KVM, especially for nested guests, there is a dominant pattern of: => map guest memory -> do_something -> unmap guest memory. In addition to all the unnecessary noise this boilerplate adds to the code, most of the time the mapping function does not properly handle memory that is not backed by a "struct page". This new guest mapping API encapsulates most of this boilerplate code and also handles guest memory that is not backed by a "struct page". The current implementation of this API uses memremap for memory that is not backed by a "struct page", which would lead to a huge slowdown if it were used for high-frequency mapping operations. The API does not have any effect on current setups where guest memory is backed by a "struct page". Further patches are going to also introduce a pfn-cache which would significantly improve the performance of the memremap case. Signed-off-by: KarimAllah Ahmed --- v3 -> v4: - Update the commit message. v1 -> v2: - Drop the caching optimization (pbonzini) - Use 'hva' instead of 'kaddr' (pbonzini) - Return 0/-EINVAL/-EFAULT instead of true/false. -EFAULT will be used for AMD patch (pbonzini) - Introduce __kvm_map_gfn which accepts a memory slot and use it (pbonzini) - Only clear map->hva instead of memsetting the whole structure. - Drop kvm_vcpu_map_valid since it is no longer used. - Fix EXPORT_MODULE naming. --- include/linux/kvm_host.h | 9 virt/kvm/kvm_main.c | 53 2 files changed, 62 insertions(+) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index c38cc5e..8a2f5fa 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -205,6 +205,13 @@ enum { READING_SHADOW_PAGE_TABLES, }; +struct kvm_host_map { + struct page *page; + void *hva; + kvm_pfn_t pfn; + kvm_pfn_t gfn; +}; + /* * Sometimes a large or cross-page mmio needs to be broken up into separate * exits for userspace servicing.
@@ -710,7 +717,9 @@ struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu); struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn); kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn); kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn); +int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map); struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn); +void kvm_vcpu_unmap(struct kvm_host_map *map, bool dirty); unsigned long kvm_vcpu_gfn_to_hva(struct kvm_vcpu *vcpu, gfn_t gfn); unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *writable); int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data, int offset, diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 1f888a1..4d8f2e3 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1733,6 +1733,59 @@ struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn) } EXPORT_SYMBOL_GPL(gfn_to_page); +static int __kvm_map_gfn(struct kvm_memory_slot *slot, gfn_t gfn, +struct kvm_host_map *map) +{ + kvm_pfn_t pfn; + void *hva = NULL; + struct page *page = NULL; + + pfn = gfn_to_pfn_memslot(slot, gfn); + if (is_error_noslot_pfn(pfn)) + return -EINVAL; + + if (pfn_valid(pfn)) { + page = pfn_to_page(pfn); + hva = kmap(page); + } else { + hva = memremap(pfn_to_hpa(pfn), PAGE_SIZE, MEMREMAP_WB); + } + + if (!hva) + return -EFAULT; + + map->page = page; + map->hva = hva; + map->pfn = pfn; + map->gfn = gfn; + + return 0; +} + +int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map) +{ + return __kvm_map_gfn(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn, map); +} +EXPORT_SYMBOL_GPL(kvm_vcpu_map); + +void kvm_vcpu_unmap(struct kvm_host_map *map, bool dirty) +{ + if (!map->hva) + return; + + if (map->page) + kunmap(map->page); + else + memunmap(map->hva); + + if (dirty) + kvm_release_pfn_dirty(map->pfn); + else + kvm_release_pfn_clean(map->pfn); + map->hva = NULL; +} +EXPORT_SYMBOL_GPL(kvm_vcpu_unmap); + struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn) { kvm_pfn_t pfn; -- 2.7.4
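Taken together, a minimal caller of the new API looks as follows. This is an illustrative sketch only (the helper name and the u32 write are made up, not code from the series):

static int write_guest_u32(struct kvm_vcpu *vcpu, gpa_t gpa, u32 val)
{
	struct kvm_host_map map;
	u32 *p;

	/* Works whether or not the gfn is backed by a "struct page". */
	if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map))
		return -EFAULT;

	/* map.hva stays valid until kvm_vcpu_unmap(). */
	p = (u32 *)((char *)map.hva + offset_in_page(gpa));
	*p = val;

	/* dirty == true releases the pfn with kvm_release_pfn_dirty(). */
	kvm_vcpu_unmap(&map, true);
	return 0;
}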
[PATCH v5 03/13] X86/KVM: Handle PFNs outside of kernel reach when touching GPTEs
From: Filippo Sironi cmpxchg_gpte() calls get_user_pages_fast() to retrieve the number of pages and the respective struct page to map in the kernel virtual address space. This doesn't work if get_user_pages_fast() is invoked with a userspace virtual address that's backed by PFNs outside of kernel reach (e.g., when limiting the kernel memory with mem= in the command line and using /dev/mem to map memory). If get_user_pages_fast() fails, look up the VMA that backs the userspace virtual address, compute the PFN and the physical address, and map it in the kernel virtual address space with memremap(). Signed-off-by: Filippo Sironi Signed-off-by: KarimAllah Ahmed --- arch/x86/kvm/paging_tmpl.h | 38 +- 1 file changed, 29 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h index 6bdca39..c40af67 100644 --- a/arch/x86/kvm/paging_tmpl.h +++ b/arch/x86/kvm/paging_tmpl.h @@ -141,15 +141,35 @@ static int FNAME(cmpxchg_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, struct page *page; npages = get_user_pages_fast((unsigned long)ptep_user, 1, 1, &page); - /* Check if the user is doing something meaningless. */ - if (unlikely(npages != 1)) - return -EFAULT; - - table = kmap_atomic(page); - ret = CMPXCHG(&table[index], orig_pte, new_pte); - kunmap_atomic(table); - - kvm_release_page_dirty(page); + if (likely(npages == 1)) { + table = kmap_atomic(page); + ret = CMPXCHG(&table[index], orig_pte, new_pte); + kunmap_atomic(table); + + kvm_release_page_dirty(page); + } else { + struct vm_area_struct *vma; + unsigned long vaddr = (unsigned long)ptep_user & PAGE_MASK; + unsigned long pfn; + unsigned long paddr; + + down_read(&current->mm->mmap_sem); + vma = find_vma_intersection(current->mm, vaddr, vaddr + PAGE_SIZE); + if (!vma || !(vma->vm_flags & VM_PFNMAP)) { + up_read(&current->mm->mmap_sem); + return -EFAULT; + } + pfn = ((vaddr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff; + paddr = pfn << PAGE_SHIFT; + table = memremap(paddr, PAGE_SIZE, MEMREMAP_WB); + if (!table) { + up_read(&current->mm->mmap_sem); + return -EFAULT; + } + ret = CMPXCHG(&table[index], orig_pte, new_pte); + memunmap(table); + up_read(&current->mm->mmap_sem); + } return (ret != orig_pte); } -- 2.7.4
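The fallback hinges on the layout of a VM_PFNMAP mapping: physical frames are mapped linearly starting at vma->vm_pgoff, so the PFN behind a user virtual address can be computed without any "struct page". The core translation, isolated from the locking and error handling of the hunk above:

	/* Page-aligned user address known to lie inside a VM_PFNMAP VMA. */
	unsigned long vaddr = (unsigned long)ptep_user & PAGE_MASK;
	unsigned long pfn, paddr;
	void *table;

	/* Offset into the VMA in pages, plus the VMA's first pfn. */
	pfn = ((vaddr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
	paddr = pfn << PAGE_SHIFT;

	/* Map the frame write-back cached; no "struct page" needed. */
	table = memremap(paddr, PAGE_SIZE, MEMREMAP_WB);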
[PATCH v5 13/13] KVM/nVMX: Use page_address_valid in a few more locations
Use page_address_valid() in a few more locations that already check for a page-aligned address that does not cross the maximum physical address. Signed-off-by: KarimAllah Ahmed --- arch/x86/kvm/vmx/nested.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index ccb3b63..77aad46 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -4203,7 +4203,7 @@ static int handle_vmon(struct kvm_vcpu *vcpu) * Note - IA32_VMX_BASIC[48] will never be 1 for the nested case; * which replaces physical address width with 32 */ - if (!PAGE_ALIGNED(vmptr) || (vmptr >> cpuid_maxphyaddr(vcpu))) + if (!page_address_valid(vcpu, vmptr)) return nested_vmx_failInvalid(vcpu); if (kvm_read_guest(vcpu->kvm, vmptr, &revision, sizeof(revision)) || @@ -4266,7 +4266,7 @@ static int handle_vmclear(struct kvm_vcpu *vcpu) if (nested_vmx_get_vmptr(vcpu, &vmptr)) return 1; - if (!PAGE_ALIGNED(vmptr) || (vmptr >> cpuid_maxphyaddr(vcpu))) + if (!page_address_valid(vcpu, vmptr)) return nested_vmx_failValid(vcpu, VMXERR_VMCLEAR_INVALID_ADDRESS); @@ -4473,7 +4473,7 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu) if (nested_vmx_get_vmptr(vcpu, &vmptr)) return 1; - if (!PAGE_ALIGNED(vmptr) || (vmptr >> cpuid_maxphyaddr(vcpu))) + if (!page_address_valid(vcpu, vmptr)) return nested_vmx_failValid(vcpu, VMXERR_VMPTRLD_INVALID_ADDRESS); -- 2.7.4
[PATCH v5 05/13] X86/nVMX: handle_vmptrld: Use kvm_vcpu_map when copying VMCS12 from guest memory
Use kvm_vcpu_map to map the VMCS12 from guest memory, because kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". Signed-off-by: KarimAllah Ahmed --- v4 -> v5: - Switch to the new guest mapping API instead of reading directly from guest. - unmap with dirty flag v3 -> v4: - Return VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID on failure (jmattson@) v1 -> v2: - Massage commit message a bit. --- arch/x86/kvm/vmx/nested.c | 15 +++ 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 536468a..5602b0c 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -4521,11 +4521,10 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu) return 1; if (vmx->nested.current_vmptr != vmptr) { + struct kvm_host_map map; struct vmcs12 *new_vmcs12; - struct page *page; - page = kvm_vcpu_gpa_to_page(vcpu, vmptr); - if (is_error_page(page)) { + if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmptr), &map)) { /* * Reads from an unbacked page return all 1s, * which means that the 32 bits located at the @@ -4536,12 +4535,13 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu) VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID); return kvm_skip_emulated_instruction(vcpu); } - new_vmcs12 = kmap(page); + + new_vmcs12 = map.hva; + if (new_vmcs12->hdr.revision_id != VMCS12_REVISION || (new_vmcs12->hdr.shadow_vmcs && !nested_cpu_has_vmx_shadow_vmcs(vcpu))) { - kunmap(page); - kvm_release_page_clean(page); + kvm_vcpu_unmap(&map, false); return nested_vmx_failValid(vcpu, VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID); } @@ -4553,8 +4553,7 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu) * cached. */ memcpy(vmx->nested.cached_vmcs12, new_vmcs12, VMCS12_SIZE); - kunmap(page); - kvm_release_page_clean(page); + kvm_vcpu_unmap(&map, false); set_current_vmptr(vmx, vmptr); } -- 2.7.4
[PATCH] KVM/nVMX: Stop mapping the "APIC-access address" page into the kernel
The "APIC-access address" is simply a token that the hypervisor puts into the PFN of a 4K EPTE (or PTE if using shadow paging) that triggers APIC virtualization whenever a page walk terminates with that PFN. This address has to be a legal address (i.e. within the physical address supported by the CPU), but it need not have WB memory behind it. In fact, it need not have anything at all behind it. When bit 31 ("activate secondary controls") of the primary processor-based VM-execution controls is set and bit 0 ("virtualize APIC accesses") of the secondary processor-based VM-execution controls is set, the PFN recorded in the VMCS "APIC-access address" field will never be touched. (Instead, the access triggers APIC virtualization, which may access the PFN recorded in the "Virtual-APIC address" field of the VMCS.) So stop mapping the "APIC-access address" page into the kernel and even drop the requirements to have a valid page backing it. Instead, just use some token that: 1) Not one of the valid guest pages. 2) Within the physical address supported by the CPU. Suggested-by: Jim Mattson Signed-off-by: KarimAllah Ahmed --- Thanks Jim for the commit message :) --- arch/x86/include/asm/kvm_host.h | 1 + arch/x86/kvm/mmu.c | 10 ++ arch/x86/kvm/vmx.c | 71 ++--- 3 files changed, 42 insertions(+), 40 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index fbda5a9..7e50196 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1077,6 +1077,7 @@ struct kvm_x86_ops { void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap); void (*set_virtual_apic_mode)(struct kvm_vcpu *vcpu); void (*set_apic_access_page_addr)(struct kvm_vcpu *vcpu, hpa_t hpa); + bool (*nested_apic_access_addr)(struct kvm_vcpu *vcpu, gpa_t gpa, hpa_t *hpa); void (*deliver_posted_interrupt)(struct kvm_vcpu *vcpu, int vector); int (*sync_pir_to_irr)(struct kvm_vcpu *vcpu); int (*set_tss_addr)(struct kvm *kvm, unsigned int addr); diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c index 7c03c0f..ae46a8d 100644 --- a/arch/x86/kvm/mmu.c +++ b/arch/x86/kvm/mmu.c @@ -3962,9 +3962,19 @@ bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu) static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn, gva_t gva, kvm_pfn_t *pfn, bool write, bool *writable) { + hpa_t hpa; struct kvm_memory_slot *slot; bool async; + if (is_guest_mode(vcpu) && + kvm_x86_ops->nested_apic_access_addr && + kvm_x86_ops->nested_apic_access_addr(vcpu, gfn_to_gpa(gfn), )) { + *pfn = hpa >> PAGE_SHIFT; + if (writable) + *writable = true; + return false; + } + /* * Don't expose private memslots to L2. */ diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 83a614f..340cf56 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -864,7 +864,6 @@ struct nested_vmx { * Guest pages referred to in the vmcs02 with host-physical * pointers, so we must keep them pinned while L2 runs. 
*/ - struct page *apic_access_page; struct kvm_host_map virtual_apic_map; struct kvm_host_map pi_desc_map; struct kvm_host_map msr_bitmap_map; @@ -8512,10 +8511,6 @@ static void free_nested(struct kvm_vcpu *vcpu) kfree(vmx->nested.cached_vmcs12); kfree(vmx->nested.cached_shadow_vmcs12); /* Unpin physical memory we referred to in the vmcs02 */ - if (vmx->nested.apic_access_page) { - kvm_release_page_dirty(vmx->nested.apic_access_page); - vmx->nested.apic_access_page = NULL; - } kvm_vcpu_unmap(>nested.virtual_apic_map); kvm_vcpu_unmap(>nested.pi_desc_map); vmx->nested.pi_desc = NULL; @@ -11901,41 +11896,27 @@ static void vmx_inject_page_fault_nested(struct kvm_vcpu *vcpu, static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12); +static hpa_t vmx_apic_access_addr(void) +{ + /* +* The physical address choosen here has to: +* 1) Never be an address that could be assigned to a guest. +* 2) Within the maximum physical limits of the CPU. +* +* So our choice below is completely random, but at least it follows +* these two rules. +*/ + return __pa_symbol(_text) & PAGE_MASK; +} + static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) { struct vmcs12 *vmcs12 = get_vmcs12(vcpu); struct vcpu_vmx *vmx = to_vmx(vcpu); struct kvm_host_map *map; -
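As in the patch, the token is derived from the kernel's own text section, which can never be handed to a guest and is guaranteed to lie within the CPU's physical address range:

static hpa_t vmx_apic_access_addr(void)
{
	/*
	 * _text is host kernel memory, so it can never be assigned to a
	 * guest, and it trivially lies within the CPU's physical limits.
	 */
	return __pa_symbol(_text) & PAGE_MASK;
}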
[PATCH v4 05/14] KVM: Introduce a new guest mapping API
In KVM, specially for nested guests, there is a dominant pattern of: => map guest memory -> do_something -> unmap guest memory In addition to all this unnecessarily noise in the code due to boiler plate code, most of the time the mapping function does not properly handle memory that is not backed by "struct page". This new guest mapping API encapsulate most of this boiler plate code and also handles guest memory that is not backed by "struct page". The current implementation of this API is using memremap for memory that is not backed by a "struct page" which would lead to a huge slow-down if it was used for high-frequency mapping operations. The API does not have any effect on current setups where guest memory is backed by a "struct page". Further patches are going to also introduce a pfn-cache which would significantly improve the performance of the memremap case. Signed-off-by: KarimAllah Ahmed --- v3 -> v4: - Update the commit message. v1 -> v2: - Drop the caching optimization (pbonzini) - Use 'hva' instead of 'kaddr' (pbonzini) - Return 0/-EINVAL/-EFAULT instead of true/false. -EFAULT will be used for AMD patch (pbonzini) - Introduce __kvm_map_gfn which accepts a memory slot and use it (pbonzini) - Only clear map->hva instead of memsetting the whole structure. - Drop kvm_vcpu_map_valid since it is no longer used. - Fix EXPORT_MODULE naming. --- include/linux/kvm_host.h | 9 + virt/kvm/kvm_main.c | 50 2 files changed, 59 insertions(+) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index c926698..59e56b8 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -205,6 +205,13 @@ enum { READING_SHADOW_PAGE_TABLES, }; +struct kvm_host_map { + struct page *page; + void *hva; + kvm_pfn_t pfn; + kvm_pfn_t gfn; +}; + /* * Sometimes a large or cross-page mmio needs to be broken up into separate * exits for userspace servicing. 
@@ -708,6 +715,8 @@ struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu); struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn); kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn); kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn); +int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map); +void kvm_vcpu_unmap(struct kvm_host_map *map); struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn); unsigned long kvm_vcpu_gfn_to_hva(struct kvm_vcpu *vcpu, gfn_t gfn); unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *writable); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 2679e47..ea7c82f 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1658,6 +1658,56 @@ struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn) } EXPORT_SYMBOL_GPL(gfn_to_page); +static int __kvm_map_gfn(struct kvm_memory_slot *slot, gfn_t gfn, +struct kvm_host_map *map) +{ + kvm_pfn_t pfn; + void *hva = NULL; + struct page *page = NULL; + + pfn = gfn_to_pfn_memslot(slot, gfn); + if (is_error_noslot_pfn(pfn)) + return -EINVAL; + + if (pfn_valid(pfn)) { + page = pfn_to_page(pfn); + hva = kmap(page); + } else { + hva = memremap(pfn_to_hpa(pfn), PAGE_SIZE, MEMREMAP_WB); + } + + if (!hva) + return -EFAULT; + + map->page = page; + map->hva = hva; + map->pfn = pfn; + map->gfn = gfn; + + return 0; +} + +int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map) +{ + return __kvm_map_gfn(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn, map); +} +EXPORT_SYMBOL_GPL(kvm_vcpu_map); + +void kvm_vcpu_unmap(struct kvm_host_map *map) +{ + if (!map->hva) + return; + + if (map->page) + kunmap(map->page); + else + memunmap(map->hva); + + kvm_release_pfn_dirty(map->pfn); + map->hva = NULL; +} +EXPORT_SYMBOL_GPL(kvm_vcpu_unmap); + struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn) { kvm_pfn_t pfn; -- 2.7.4
[PATCH v4 00/14] KVM/X86: Introduce a new guest mapping interface
Guest memory can either be directly managed by the kernel (i.e. have a "struct page") or it can simply live outside kernel control (i.e. not have a "struct page"). KVM mostly supports these two modes, except in a few places where the code seems to assume that guest memory must have a "struct page". This patchset introduces a new mapping interface to map guest memory into host kernel memory which also supports PFN-based memory (i.e. memory without a 'struct page'). It also converts all offending code to this interface or simply reads/writes directly from guest memory. As far as I can see, all offending code is now fixed except the APIC-access page, which I will handle in a separate series along with dropping kvm_vcpu_gfn_to_page and kvm_vcpu_gpa_to_page from the internal KVM API. The current implementation of the new API uses memremap to map memory that does not have a "struct page". This proves to be very slow for high-frequency mappings. Since this does not affect the normal use case where a "struct page" is available, the performance of this API will be handled by a separate patch series. v3 -> v4: - Rebase - Add a new patch to also fix the newly introduced enhanced VMCS. v2 -> v3: - Rebase - Add a new patch to also fix the newly introduced shadow VMCS. Filippo Sironi (1): X86/KVM: Handle PFNs outside of kernel reach when touching GPTEs KarimAllah Ahmed (13): X86/nVMX: handle_vmon: Read 4 bytes from guest memory X86/nVMX: handle_vmptrld: Copy the VMCS12 directly from guest memory X86/nVMX: Update the PML table without mapping and unmapping the page KVM: Introduce a new guest mapping API KVM/nVMX: Use kvm_vcpu_map when mapping the L1 MSR bitmap KVM/nVMX: Use kvm_vcpu_map when mapping the virtual APIC page KVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt descriptor table KVM/X86: Use kvm_vcpu_map in emulator_cmpxchg_emulated KVM/X86: hyperv: Use kvm_vcpu_map in synic_clear_sint_msg_pending KVM/X86: hyperv: Use kvm_vcpu_map in synic_deliver_msg KVM/nSVM: Use the new mapping API for mapping guest memory KVM/nVMX: Use kvm_vcpu_map for accessing the shadow VMCS KVM/nVMX: Use kvm_vcpu_map for accessing the enhanced VMCS arch/x86/kvm/hyperv.c | 28 +++ arch/x86/kvm/paging_tmpl.h | 38 ++--- arch/x86/kvm/svm.c | 97 +++ arch/x86/kvm/vmx.c | 189 + arch/x86/kvm/x86.c | 13 ++-- include/linux/kvm_host.h | 9 +++ virt/kvm/kvm_main.c | 50 7 files changed, 228 insertions(+), 196 deletions(-) -- 2.7.4
[PATCH v4 13/14] KVM/nVMX: Use kvm_vcpu_map for accessing the shadow VMCS
Use kvm_vcpu_map for accessing the shadow VMCS since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". Signed-off-by: KarimAllah Ahmed --- arch/x86/kvm/vmx.c | 25 - 1 file changed, 12 insertions(+), 13 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 390a971..a958700 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -12138,20 +12138,20 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu, static void nested_cache_shadow_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12) { + struct kvm_host_map map; struct vmcs12 *shadow; - struct page *page; if (!nested_cpu_has_shadow_vmcs(vmcs12) || vmcs12->vmcs_link_pointer == -1ull) return; shadow = get_shadow_vmcs12(vcpu); - page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->vmcs_link_pointer); - memcpy(shadow, kmap(page), VMCS12_SIZE); + if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->vmcs_link_pointer), )) + return; - kunmap(page); - kvm_release_page_clean(page); + memcpy(shadow, map.hva, VMCS12_SIZE); + kvm_vcpu_unmap(); } static void nested_flush_cached_shadow_vmcs12(struct kvm_vcpu *vcpu, @@ -13133,9 +13133,9 @@ static int check_vmentry_prereqs(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12) static int nested_vmx_check_vmcs_link_ptr(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12) { - int r; - struct page *page; + int r = 0; struct vmcs12 *shadow; + struct kvm_host_map map; if (vmcs12->vmcs_link_pointer == -1ull) return 0; @@ -13143,17 +13143,16 @@ static int nested_vmx_check_vmcs_link_ptr(struct kvm_vcpu *vcpu, if (!page_address_valid(vcpu, vmcs12->vmcs_link_pointer)) return -EINVAL; - page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->vmcs_link_pointer); - if (is_error_page(page)) + if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->vmcs_link_pointer), )) return -EINVAL; - r = 0; - shadow = kmap(page); + shadow = map.hva; + if (shadow->hdr.revision_id != VMCS12_REVISION || shadow->hdr.shadow_vmcs != nested_cpu_has_shadow_vmcs(vmcs12)) r = -EINVAL; - kunmap(page); - kvm_release_page_clean(page); + + kvm_vcpu_unmap(); return r; } -- 2.7.4
[PATCH v4 07/14] KVM/nVMX: Use kvm_vcpu_map when mapping the virtual APIC page
Use kvm_vcpu_map when mapping the virtual APIC page since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". One additional semantic change is that the virtual host mapping lifecycle has changed a bit. It now has the same lifetime of the pinning of the virtual APIC page on the host side. Signed-off-by: KarimAllah Ahmed --- v1 -> v2: - Do not change the lifecycle of the mapping (pbonzini) - Use pfn_to_hpa instead of gfn_to_gpa --- arch/x86/kvm/vmx.c | 39 +-- 1 file changed, 13 insertions(+), 26 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index cca3ba0..4a99289 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -865,9 +865,8 @@ struct nested_vmx { * pointers, so we must keep them pinned while L2 runs. */ struct page *apic_access_page; - struct page *virtual_apic_page; + struct kvm_host_map virtual_apic_map; struct page *pi_desc_page; - struct kvm_host_map msr_bitmap_map; struct pi_desc *pi_desc; @@ -6183,11 +6182,12 @@ static void vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu) max_irr = find_last_bit((unsigned long *)vmx->nested.pi_desc->pir, 256); if (max_irr != 256) { - vapic_page = kmap(vmx->nested.virtual_apic_page); + vapic_page = vmx->nested.virtual_apic_map.hva; + if (!vapic_page) + return; + __kvm_apic_update_irr(vmx->nested.pi_desc->pir, vapic_page, _irr); - kunmap(vmx->nested.virtual_apic_page); - status = vmcs_read16(GUEST_INTR_STATUS); if ((u8)max_irr > ((u8)status & 0xff)) { status &= ~0xff; @@ -6213,14 +6213,13 @@ static bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu) if (WARN_ON_ONCE(!is_guest_mode(vcpu)) || !nested_cpu_has_vid(get_vmcs12(vcpu)) || - WARN_ON_ONCE(!vmx->nested.virtual_apic_page)) + WARN_ON_ONCE(!vmx->nested.virtual_apic_map.gfn)) return false; rvi = vmx_get_rvi(); - vapic_page = kmap(vmx->nested.virtual_apic_page); + vapic_page = vmx->nested.virtual_apic_map.hva; vppr = *((u32 *)(vapic_page + APIC_PROCPRI)); - kunmap(vmx->nested.virtual_apic_page); return ((rvi & 0xf0) > (vppr & 0xf0)); } @@ -8519,10 +8518,7 @@ static void free_nested(struct kvm_vcpu *vcpu) kvm_release_page_dirty(vmx->nested.apic_access_page); vmx->nested.apic_access_page = NULL; } - if (vmx->nested.virtual_apic_page) { - kvm_release_page_dirty(vmx->nested.virtual_apic_page); - vmx->nested.virtual_apic_page = NULL; - } + kvm_vcpu_unmap(>nested.virtual_apic_map); if (vmx->nested.pi_desc_page) { kunmap(vmx->nested.pi_desc_page); kvm_release_page_dirty(vmx->nested.pi_desc_page); @@ -11917,6 +11913,7 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) { struct vmcs12 *vmcs12 = get_vmcs12(vcpu); struct vcpu_vmx *vmx = to_vmx(vcpu); + struct kvm_host_map *map; struct page *page; u64 hpa; @@ -11949,11 +11946,7 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) } if (nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW)) { - if (vmx->nested.virtual_apic_page) { /* shouldn't happen */ - kvm_release_page_dirty(vmx->nested.virtual_apic_page); - vmx->nested.virtual_apic_page = NULL; - } - page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->virtual_apic_page_addr); + map = >nested.virtual_apic_map; /* * If translation failed, VM entry will fail because @@ -11968,11 +11961,8 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) * control. But such a configuration is useless, so * let's keep the code simple. 
*/ - if (!is_error_page(page)) { - vmx->nested.virtual_apic_page = page; - hpa = page_to_phys(vmx->nested.virtual_apic_page); - vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, hpa); - } + if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->virtual_apic_page_addr), map)) + vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, pfn_to_hpa(map->pfn)); } if (nested_cpu_has_posted_intr(vmcs12)) { @@ -14228,10 +14218,7 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason, kvm_release_page_dirty(vmx->nested.apic_access_page); vmx-&g
[PATCH v4 08/14] KVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt descriptor table
Use kvm_vcpu_map when mapping the posted interrupt descriptor table since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". One additional semantic change is that the virtual host mapping lifecycle has changed a bit. It now has the same lifetime of the pinning of the interrupt descriptor table page on the host side. Signed-off-by: KarimAllah Ahmed --- v1 -> v2: - Do not change the lifecycle of the mapping (pbonzini) --- arch/x86/kvm/vmx.c | 45 +++-- 1 file changed, 15 insertions(+), 30 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 4a99289..390a971 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -866,7 +866,7 @@ struct nested_vmx { */ struct page *apic_access_page; struct kvm_host_map virtual_apic_map; - struct page *pi_desc_page; + struct kvm_host_map pi_desc_map; struct kvm_host_map msr_bitmap_map; struct pi_desc *pi_desc; @@ -8519,12 +8519,8 @@ static void free_nested(struct kvm_vcpu *vcpu) vmx->nested.apic_access_page = NULL; } kvm_vcpu_unmap(>nested.virtual_apic_map); - if (vmx->nested.pi_desc_page) { - kunmap(vmx->nested.pi_desc_page); - kvm_release_page_dirty(vmx->nested.pi_desc_page); - vmx->nested.pi_desc_page = NULL; - vmx->nested.pi_desc = NULL; - } + kvm_vcpu_unmap(>nested.pi_desc_map); + vmx->nested.pi_desc = NULL; kvm_mmu_free_roots(vcpu, >arch.guest_mmu, KVM_MMU_ROOTS_ALL); @@ -11966,24 +11962,16 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) } if (nested_cpu_has_posted_intr(vmcs12)) { - if (vmx->nested.pi_desc_page) { /* shouldn't happen */ - kunmap(vmx->nested.pi_desc_page); - kvm_release_page_dirty(vmx->nested.pi_desc_page); - vmx->nested.pi_desc_page = NULL; + map = >nested.pi_desc_map; + + if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->posted_intr_desc_addr), map)) { + vmx->nested.pi_desc = + (struct pi_desc *)(((void *)map->hva) + + offset_in_page(vmcs12->posted_intr_desc_addr)); + vmcs_write64(POSTED_INTR_DESC_ADDR, pfn_to_hpa(map->pfn) + + offset_in_page(vmcs12->posted_intr_desc_addr)); } - page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->posted_intr_desc_addr); - if (is_error_page(page)) - return; - vmx->nested.pi_desc_page = page; - vmx->nested.pi_desc = kmap(vmx->nested.pi_desc_page); - vmx->nested.pi_desc = - (struct pi_desc *)((void *)vmx->nested.pi_desc + - (unsigned long)(vmcs12->posted_intr_desc_addr & - (PAGE_SIZE - 1))); - vmcs_write64(POSTED_INTR_DESC_ADDR, - page_to_phys(vmx->nested.pi_desc_page) + - (unsigned long)(vmcs12->posted_intr_desc_addr & - (PAGE_SIZE - 1))); + } if (nested_vmx_prepare_msr_bitmap(vcpu, vmcs12)) vmcs_set_bits(CPU_BASED_VM_EXEC_CONTROL, @@ -14218,13 +14206,10 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason, kvm_release_page_dirty(vmx->nested.apic_access_page); vmx->nested.apic_access_page = NULL; } + kvm_vcpu_unmap(>nested.virtual_apic_map); - if (vmx->nested.pi_desc_page) { - kunmap(vmx->nested.pi_desc_page); - kvm_release_page_dirty(vmx->nested.pi_desc_page); - vmx->nested.pi_desc_page = NULL; - vmx->nested.pi_desc = NULL; - } + kvm_vcpu_unmap(>nested.pi_desc_map); + vmx->nested.pi_desc = NULL; /* * We are now running in L2, mmu_notifier will force to reload the -- 2.7.4
[PATCH v4 04/14] X86/KVM: Handle PFNs outside of kernel reach when touching GPTEs
From: Filippo Sironi cmpxchg_gpte() calls get_user_pages_fast() to retrieve the number of pages and the respective struct page to map in the kernel virtual address space. This doesn't work if get_user_pages_fast() is invoked with a userspace virtual address that's backed by PFNs outside of kernel reach (e.g., when limiting the kernel memory with mem= in the command line and using /dev/mem to map memory). If get_user_pages_fast() fails, look up the VMA that backs the userspace virtual address, compute the PFN and the physical address, and map it in the kernel virtual address space with memremap(). Signed-off-by: Filippo Sironi Signed-off-by: KarimAllah Ahmed --- arch/x86/kvm/paging_tmpl.h | 38 +- 1 file changed, 29 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h index 7cf2185..b953545 100644 --- a/arch/x86/kvm/paging_tmpl.h +++ b/arch/x86/kvm/paging_tmpl.h @@ -141,15 +141,35 @@ static int FNAME(cmpxchg_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, struct page *page; npages = get_user_pages_fast((unsigned long)ptep_user, 1, 1, ); - /* Check if the user is doing something meaningless. */ - if (unlikely(npages != 1)) - return -EFAULT; - - table = kmap_atomic(page); - ret = CMPXCHG([index], orig_pte, new_pte); - kunmap_atomic(table); - - kvm_release_page_dirty(page); + if (likely(npages == 1)) { + table = kmap_atomic(page); + ret = CMPXCHG([index], orig_pte, new_pte); + kunmap_atomic(table); + + kvm_release_page_dirty(page); + } else { + struct vm_area_struct *vma; + unsigned long vaddr = (unsigned long)ptep_user & PAGE_MASK; + unsigned long pfn; + unsigned long paddr; + + down_read(>mm->mmap_sem); + vma = find_vma_intersection(current->mm, vaddr, vaddr + PAGE_SIZE); + if (!vma || !(vma->vm_flags & VM_PFNMAP)) { + up_read(>mm->mmap_sem); + return -EFAULT; + } + pfn = ((vaddr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff; + paddr = pfn << PAGE_SHIFT; + table = memremap(paddr, PAGE_SIZE, MEMREMAP_WB); + if (!table) { + up_read(>mm->mmap_sem); + return -EFAULT; + } + ret = CMPXCHG([index], orig_pte, new_pte); + memunmap(table); + up_read(>mm->mmap_sem); + } return (ret != orig_pte); } -- 2.7.4
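For readability, the fallback branch can be thought of as the stand-alone helper sketched below. This is illustration only: the patch open-codes the logic inside cmpxchg_gpte(), map_pfnmap_page() is a made-up name, and the caller is assumed to hold current->mm->mmap_sem for read across the lifetime of the returned mapping.

#include <linux/io.h>
#include <linux/mm.h>

/*
 * Map one page of a VM_PFNMAP userspace mapping that has no backing
 * struct page. Returns a kernel pointer from memremap() or NULL; the
 * caller must memunmap() the result.
 */
static void *map_pfnmap_page(unsigned long vaddr)
{
        struct vm_area_struct *vma;
        unsigned long pfn;

        vaddr &= PAGE_MASK;
        vma = find_vma_intersection(current->mm, vaddr, vaddr + PAGE_SIZE);
        if (!vma || !(vma->vm_flags & VM_PFNMAP))
                return NULL;

        /* Linear VM_PFNMAP layout: pfn = vm_pgoff + page index within the VMA. */
        pfn = ((vaddr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;

        return memremap(pfn << PAGE_SHIFT, PAGE_SIZE, MEMREMAP_WB);
}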
[PATCH v4 09/14] KVM/X86: Use kvm_vcpu_map in emulator_cmpxchg_emulated
Use kvm_vcpu_map in emulator_cmpxchg_emulated since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". Signed-off-by: KarimAllah Ahmed --- v1 -> v2: - Update to match the new API return codes --- arch/x86/kvm/x86.c | 13 ++--- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index d029377..81d75af 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -5417,9 +5417,9 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt, unsigned int bytes, struct x86_exception *exception) { + struct kvm_host_map map; struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt); gpa_t gpa; - struct page *page; char *kaddr; bool exchanged; @@ -5436,12 +5436,11 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt, if (((gpa + bytes - 1) & PAGE_MASK) != (gpa & PAGE_MASK)) goto emul_write; - page = kvm_vcpu_gfn_to_page(vcpu, gpa >> PAGE_SHIFT); - if (is_error_page(page)) + if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), )) goto emul_write; - kaddr = kmap_atomic(page); - kaddr += offset_in_page(gpa); + kaddr = map.hva + offset_in_page(gpa); + switch (bytes) { case 1: exchanged = CMPXCHG_TYPE(u8, kaddr, old, new); @@ -5458,8 +5457,8 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt, default: BUG(); } - kunmap_atomic(kaddr); - kvm_release_page_dirty(page); + + kvm_vcpu_unmap(); if (!exchanged) return X86EMUL_CMPXCHG_FAILED; -- 2.7.4
[PATCH v4 10/14] KVM/X86: hyperv: Use kvm_vcpu_map in synic_clear_sint_msg_pending
Use kvm_vcpu_map in synic_clear_sint_msg_pending since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". Signed-off-by: KarimAllah Ahmed --- v1 -> v2: - Update to match the new API return codes --- arch/x86/kvm/hyperv.c | 16 ++-- 1 file changed, 6 insertions(+), 10 deletions(-) diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c index 4e80080..63941ac 100644 --- a/arch/x86/kvm/hyperv.c +++ b/arch/x86/kvm/hyperv.c @@ -162,26 +162,22 @@ static void synic_clear_sint_msg_pending(struct kvm_vcpu_hv_synic *synic, u32 sint) { struct kvm_vcpu *vcpu = synic_to_vcpu(synic); - struct page *page; - gpa_t gpa; + struct kvm_host_map map; struct hv_message *msg; struct hv_message_page *msg_page; - gpa = synic->msg_page & PAGE_MASK; - page = kvm_vcpu_gfn_to_page(vcpu, gpa >> PAGE_SHIFT); - if (is_error_page(page)) { + if (kvm_vcpu_map(vcpu, gpa_to_gfn(synic->msg_page), )) { vcpu_err(vcpu, "Hyper-V SynIC can't get msg page, gpa 0x%llx\n", -gpa); +synic->msg_page); return; } - msg_page = kmap_atomic(page); + msg_page = map.hva; msg = _page->sint_message[sint]; msg->header.message_flags.msg_pending = 0; - kunmap_atomic(msg_page); - kvm_release_page_dirty(page); - kvm_vcpu_mark_page_dirty(vcpu, gpa >> PAGE_SHIFT); + kvm_vcpu_unmap(); + kvm_vcpu_mark_page_dirty(vcpu, gpa_to_gfn(synic->msg_page)); } static void kvm_hv_notify_acked_sint(struct kvm_vcpu *vcpu, u32 sint) -- 2.7.4
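One subtlety in the conversion above: the old code masked synic->msg_page with PAGE_MASK before converting it to a frame number, while the new code feeds it to gpa_to_gfn() directly. That is equivalent because gpa_to_gfn() discards the in-page bits anyway; its standard KVM definition is essentially:

static inline gfn_t gpa_to_gfn(gpa_t gpa)
{
        return (gfn_t)(gpa >> PAGE_SHIFT);      /* in-page offset bits are dropped */
}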
[PATCH v4 11/14] KVM/X86: hyperv: Use kvm_vcpu_map in synic_deliver_msg
Use kvm_vcpu_map in synic_deliver_msg since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". Signed-off-by: KarimAllah Ahmed --- v1 -> v2: - Update to match the new API return codes --- arch/x86/kvm/hyperv.c | 12 ++-- 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c index 63941ac..af6bd18 100644 --- a/arch/x86/kvm/hyperv.c +++ b/arch/x86/kvm/hyperv.c @@ -585,7 +585,7 @@ static int synic_deliver_msg(struct kvm_vcpu_hv_synic *synic, u32 sint, struct hv_message *src_msg) { struct kvm_vcpu *vcpu = synic_to_vcpu(synic); - struct page *page; + struct kvm_host_map map; gpa_t gpa; struct hv_message *dst_msg; int r; @@ -595,11 +595,11 @@ static int synic_deliver_msg(struct kvm_vcpu_hv_synic *synic, u32 sint, return -ENOENT; gpa = synic->msg_page & PAGE_MASK; - page = kvm_vcpu_gfn_to_page(vcpu, gpa >> PAGE_SHIFT); - if (is_error_page(page)) + + if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), )) return -EFAULT; - msg_page = kmap_atomic(page); + msg_page = map.hva; dst_msg = _page->sint_message[sint]; if (sync_cmpxchg(_msg->header.message_type, HVMSG_NONE, src_msg->header.message_type) != HVMSG_NONE) { @@ -616,8 +616,8 @@ static int synic_deliver_msg(struct kvm_vcpu_hv_synic *synic, u32 sint, else if (r == 0) r = -EFAULT; } - kunmap_atomic(msg_page); - kvm_release_page_dirty(page); + + kvm_vcpu_unmap(); kvm_vcpu_mark_page_dirty(vcpu, gpa >> PAGE_SHIFT); return r; } -- 2.7.4
[PATCH v4 14/14] KVM/nVMX: Use kvm_vcpu_map for accessing the enhanced VMCS
Use kvm_vcpu_map for accessing the enhanced VMCS since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". Signed-off-by: KarimAllah Ahmed --- arch/x86/kvm/vmx.c | 16 ++-- 1 file changed, 6 insertions(+), 10 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index a958700..83a614f 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -894,7 +894,7 @@ struct nested_vmx { } smm; gpa_t hv_evmcs_vmptr; - struct page *hv_evmcs_page; + struct kvm_host_map hv_evmcs_map; struct hv_enlightened_vmcs *hv_evmcs; }; @@ -8456,10 +8456,8 @@ static inline void nested_release_evmcs(struct kvm_vcpu *vcpu) if (!vmx->nested.hv_evmcs) return; - kunmap(vmx->nested.hv_evmcs_page); - kvm_release_page_dirty(vmx->nested.hv_evmcs_page); + kvm_vcpu_unmap(>nested.hv_evmcs_map); vmx->nested.hv_evmcs_vmptr = -1ull; - vmx->nested.hv_evmcs_page = NULL; vmx->nested.hv_evmcs = NULL; } @@ -8559,7 +8557,7 @@ static int handle_vmclear(struct kvm_vcpu *vcpu) return nested_vmx_failValid(vcpu, VMXERR_VMCLEAR_VMXON_POINTER); - if (vmx->nested.hv_evmcs_page) { + if (vmx->nested.hv_evmcs_map.hva) { if (vmptr == vmx->nested.hv_evmcs_vmptr) nested_release_evmcs(vcpu); } else { @@ -9355,13 +9353,11 @@ static int nested_vmx_handle_enlightened_vmptrld(struct kvm_vcpu *vcpu, nested_release_evmcs(vcpu); - vmx->nested.hv_evmcs_page = kvm_vcpu_gpa_to_page( - vcpu, assist_page.current_nested_vmcs); - - if (unlikely(is_error_page(vmx->nested.hv_evmcs_page))) + if (kvm_vcpu_map(vcpu, gpa_to_gfn(assist_page.current_nested_vmcs), +>nested.hv_evmcs_map)) return 0; - vmx->nested.hv_evmcs = kmap(vmx->nested.hv_evmcs_page); + vmx->nested.hv_evmcs = vmx->nested.hv_evmcs_map.hva; /* * Currently, KVM only supports eVMCS version 1 -- 2.7.4
[PATCH v4 12/14] KVM/nSVM: Use the new mapping API for mapping guest memory
Use the new mapping API for mapping guest memory to avoid depending on "struct page". Signed-off-by: KarimAllah Ahmed --- arch/x86/kvm/svm.c | 97 +++--- 1 file changed, 49 insertions(+), 48 deletions(-) diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c index cc6467b..005cb2c 100644 --- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c @@ -3053,32 +3053,6 @@ static inline bool nested_svm_nmi(struct vcpu_svm *svm) return false; } -static void *nested_svm_map(struct vcpu_svm *svm, u64 gpa, struct page **_page) -{ - struct page *page; - - might_sleep(); - - page = kvm_vcpu_gfn_to_page(>vcpu, gpa >> PAGE_SHIFT); - if (is_error_page(page)) - goto error; - - *_page = page; - - return kmap(page); - -error: - kvm_inject_gp(>vcpu, 0); - - return NULL; -} - -static void nested_svm_unmap(struct page *page) -{ - kunmap(page); - kvm_release_page_dirty(page); -} - static int nested_svm_intercept_ioio(struct vcpu_svm *svm) { unsigned port, size, iopm_len; @@ -3279,10 +3253,11 @@ static inline void copy_vmcb_control_area(struct vmcb *dst_vmcb, struct vmcb *fr static int nested_svm_vmexit(struct vcpu_svm *svm) { + int rc; struct vmcb *nested_vmcb; struct vmcb *hsave = svm->nested.hsave; struct vmcb *vmcb = svm->vmcb; - struct page *page; + struct kvm_host_map map; trace_kvm_nested_vmexit_inject(vmcb->control.exit_code, vmcb->control.exit_info_1, @@ -3291,9 +3266,14 @@ static int nested_svm_vmexit(struct vcpu_svm *svm) vmcb->control.exit_int_info_err, KVM_ISA_SVM); - nested_vmcb = nested_svm_map(svm, svm->nested.vmcb, ); - if (!nested_vmcb) + rc = kvm_vcpu_map(>vcpu, gfn_to_gpa(svm->nested.vmcb), ); + if (rc) { + if (rc == -EINVAL) + kvm_inject_gp(>vcpu, 0); return 1; + } + + nested_vmcb = map.hva; /* Exit Guest-Mode */ leave_guest_mode(>vcpu); @@ -3392,7 +3372,7 @@ static int nested_svm_vmexit(struct vcpu_svm *svm) mark_all_dirty(svm->vmcb); - nested_svm_unmap(page); + kvm_vcpu_unmap(); nested_svm_uninit_mmu_context(>vcpu); kvm_mmu_reset_context(>vcpu); @@ -3450,7 +3430,7 @@ static bool nested_vmcb_checks(struct vmcb *vmcb) } static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, -struct vmcb *nested_vmcb, struct page *page) +struct vmcb *nested_vmcb, struct kvm_host_map *map) { if (kvm_get_rflags(>vcpu) & X86_EFLAGS_IF) svm->vcpu.arch.hflags |= HF_HIF_MASK; @@ -3530,7 +3510,7 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, svm->vmcb->control.event_inj = nested_vmcb->control.event_inj; svm->vmcb->control.event_inj_err = nested_vmcb->control.event_inj_err; - nested_svm_unmap(page); + kvm_vcpu_unmap(map); /* Enter Guest-Mode */ enter_guest_mode(>vcpu); @@ -3550,17 +3530,23 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, static bool nested_svm_vmrun(struct vcpu_svm *svm) { + int rc; struct vmcb *nested_vmcb; struct vmcb *hsave = svm->nested.hsave; struct vmcb *vmcb = svm->vmcb; - struct page *page; + struct kvm_host_map map; u64 vmcb_gpa; vmcb_gpa = svm->vmcb->save.rax; - nested_vmcb = nested_svm_map(svm, svm->vmcb->save.rax, ); - if (!nested_vmcb) + rc = kvm_vcpu_map(>vcpu, gfn_to_gpa(vmcb_gpa), ); + if (rc) { + if (rc == -EINVAL) + kvm_inject_gp(>vcpu, 0); return false; + } + + nested_vmcb = map.hva; if (!nested_vmcb_checks(nested_vmcb)) { nested_vmcb->control.exit_code= SVM_EXIT_ERR; @@ -3568,7 +3554,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm) nested_vmcb->control.exit_info_1 = 0; nested_vmcb->control.exit_info_2 = 0; - nested_svm_unmap(page); + kvm_vcpu_unmap(); return false; } @@ -3612,7 +3598,7 @@ static bool nested_svm_vmrun(struct 
vcpu_svm *svm) copy_vmcb_control_area(hsave, vmcb); - enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb, page); + enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb, ); return true; } @@ -3636,21 +3622,26 @@ static void nested_svm_vmloadsave(struct vmcb *from_vmcb, struct vmcb *to_vmcb) static int vmload_interception(struct vcpu_svm *svm) { struct vmcb *nested_vmcb
[PATCH v4 01/14] X86/nVMX: handle_vmon: Read 4 bytes from guest memory
Read the data directly from guest memory instead of the map->read->unmap sequence. This also avoids using kvm_vcpu_gpa_to_page() and kmap() which assumes that there is a "struct page" for guest memory. Suggested-by: Jim Mattson Signed-off-by: KarimAllah Ahmed Reviewed-by: Jim Mattson --- v1 -> v2: - Massage commit message a bit. --- arch/x86/kvm/vmx.c | 14 +++--- 1 file changed, 3 insertions(+), 11 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 02edd99..b84f230 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -8358,7 +8358,7 @@ static int handle_vmon(struct kvm_vcpu *vcpu) { int ret; gpa_t vmptr; - struct page *page; + uint32_t revision; struct vcpu_vmx *vmx = to_vmx(vcpu); const u64 VMXON_NEEDED_FEATURES = FEATURE_CONTROL_LOCKED | FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX; @@ -8407,18 +8407,10 @@ static int handle_vmon(struct kvm_vcpu *vcpu) if (!PAGE_ALIGNED(vmptr) || (vmptr >> cpuid_maxphyaddr(vcpu))) return nested_vmx_failInvalid(vcpu); - page = kvm_vcpu_gpa_to_page(vcpu, vmptr); - if (is_error_page(page)) + if (kvm_read_guest(vcpu->kvm, vmptr, , sizeof(revision)) || + revision != VMCS12_REVISION) return nested_vmx_failInvalid(vcpu); - if (*(u32 *)kmap(page) != VMCS12_REVISION) { - kunmap(page); - kvm_release_page_clean(page); - return nested_vmx_failInvalid(vcpu); - } - kunmap(page); - kvm_release_page_clean(page); - vmx->nested.vmxon_ptr = vmptr; ret = enter_vmx_operation(vcpu); if (ret) -- 2.7.4
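The replacement pattern in isolation: for a small, fixed-size field there is nothing to map, since kvm_read_guest() copies straight out of guest memory and works whether or not the backing memory has a "struct page". A minimal sketch (read_vmcs_revision() is a hypothetical name, not in the patch):

/* Read the 4-byte VMCS revision identifier stored at vmptr. */
static int read_vmcs_revision(struct kvm_vcpu *vcpu, gpa_t vmptr,
                              u32 *revision)
{
        if (kvm_read_guest(vcpu->kvm, vmptr, revision, sizeof(*revision)))
                return -EFAULT;         /* GPA not readable */
        return 0;
}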
[PATCH v4 06/14] KVM/nVMX: Use kvm_vcpu_map when mapping the L1 MSR bitmap
Use kvm_vcpu_map when mapping the L1 MSR bitmap since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". Signed-off-by: KarimAllah Ahmed --- v1 -> v2: - Do not change the lifecycle of the mapping (pbonzini) --- arch/x86/kvm/vmx.c | 14 -- 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 6d6dfa9..cca3ba0 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -867,6 +867,9 @@ struct nested_vmx { struct page *apic_access_page; struct page *virtual_apic_page; struct page *pi_desc_page; + + struct kvm_host_map msr_bitmap_map; + struct pi_desc *pi_desc; bool pi_pending; u16 posted_intr_nv; @@ -12069,9 +12072,10 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12) { int msr; - struct page *page; unsigned long *msr_bitmap_l1; unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.vmcs02.msr_bitmap; + struct kvm_host_map *map = _vmx(vcpu)->nested.msr_bitmap_map; + /* * pred_cmd & spec_ctrl are trying to verify two things: * @@ -12097,11 +12101,10 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu, !pred_cmd && !spec_ctrl) return false; - page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->msr_bitmap); - if (is_error_page(page)) + if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->msr_bitmap), map)) return false; - msr_bitmap_l1 = (unsigned long *)kmap(page); + msr_bitmap_l1 = (unsigned long *)map->hva; if (nested_cpu_has_apic_reg_virt(vmcs12)) { /* * L0 need not intercept reads for MSRs between 0x800 and 0x8ff, it @@ -12149,8 +12152,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu, MSR_IA32_PRED_CMD, MSR_TYPE_W); - kunmap(page); - kvm_release_page_clean(page); + kvm_vcpu_unmap(_vmx(vcpu)->nested.msr_bitmap_map); return true; } -- 2.7.4
[PATCH v4 02/14] X86/nVMX: handle_vmptrld: Copy the VMCS12 directly from guest memory
Copy the VMCS12 directly from guest memory instead of the map->copy->unmap sequence. This also avoids using kvm_vcpu_gpa_to_page() and kmap() which assumes that there is a "struct page" for guest memory. Signed-off-by: KarimAllah Ahmed --- v3 -> v4: - Return VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID on failure (jmattson@) v1 -> v2: - Massage commit message a bit. --- arch/x86/kvm/vmx.c | 24 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index b84f230..75817cb 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -9301,20 +9301,22 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu) return 1; if (vmx->nested.current_vmptr != vmptr) { - struct vmcs12 *new_vmcs12; - struct page *page; - page = kvm_vcpu_gpa_to_page(vcpu, vmptr); - if (is_error_page(page)) - return nested_vmx_failInvalid(vcpu); + struct vmcs12 *new_vmcs12 = (struct vmcs12 *)__get_free_page(GFP_KERNEL); + + if (!new_vmcs12 || + kvm_read_guest(vcpu->kvm, vmptr, new_vmcs12, + sizeof(*new_vmcs12))) { + free_page((unsigned long)new_vmcs12); + return nested_vmx_failValid(vcpu, + VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID); + } - new_vmcs12 = kmap(page); if (new_vmcs12->hdr.revision_id != VMCS12_REVISION || (new_vmcs12->hdr.shadow_vmcs && !nested_cpu_has_vmx_shadow_vmcs(vcpu))) { - kunmap(page); - kvm_release_page_clean(page); + free_page((unsigned long)new_vmcs12); return nested_vmx_failValid(vcpu, - VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID); + VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID); } nested_release_vmcs12(vcpu); @@ -9324,9 +9326,7 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu) * cached. */ memcpy(vmx->nested.cached_vmcs12, new_vmcs12, VMCS12_SIZE); - kunmap(page); - kvm_release_page_clean(page); - + free_page((unsigned long)new_vmcs12); set_current_vmptr(vmx, vmptr); } -- 2.7.4
[PATCH v4 03/14] X86/nVMX: Update the PML table without mapping and unmapping the page
Update the PML table without mapping and unmapping the page. This also avoids using kvm_vcpu_gpa_to_page(..) which assumes that there is a "struct page" for guest memory. Signed-off-by: KarimAllah Ahmed --- v1 -> v2: - Use kvm_write_guest_page instead of kvm_write_guest (pbonzini) - Do not use pointer arithmetic for pml_address (pbonzini) --- arch/x86/kvm/vmx.c | 14 +- 1 file changed, 5 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 75817cb..6d6dfa9 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -14427,9 +14427,7 @@ static int vmx_write_pml_buffer(struct kvm_vcpu *vcpu) { struct vmcs12 *vmcs12; struct vcpu_vmx *vmx = to_vmx(vcpu); - gpa_t gpa; - struct page *page = NULL; - u64 *pml_address; + gpa_t gpa, dst; if (is_guest_mode(vcpu)) { WARN_ON_ONCE(vmx->nested.pml_full); @@ -14449,15 +14447,13 @@ static int vmx_write_pml_buffer(struct kvm_vcpu *vcpu) } gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS) & ~0xFFFull; + dst = vmcs12->pml_address + sizeof(u64) * vmcs12->guest_pml_index; - page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->pml_address); - if (is_error_page(page)) + if (kvm_write_guest_page(vcpu->kvm, gpa_to_gfn(dst), , +offset_in_page(dst), sizeof(gpa))) return 0; - pml_address = kmap(page); - pml_address[vmcs12->guest_pml_index--] = gpa; - kunmap(page); - kvm_release_page_clean(page); + vmcs12->guest_pml_index--; } return 0; -- 2.7.4
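The address arithmetic used above, in isolation: the PML entry lives at pml_address + 8 * guest_pml_index, and kvm_write_guest_page() takes a frame number plus an offset into that frame, hence the gpa_to_gfn()/offset_in_page() split. A sketch under the same assumptions (write_pml_entry() is a hypothetical name):

/* Log one guest-physical address into the vmcs12 PML buffer. */
static int write_pml_entry(struct kvm_vcpu *vcpu, gpa_t pml_address,
                           u16 index, u64 gpa)
{
        gpa_t dst = pml_address + sizeof(u64) * index;

        return kvm_write_guest_page(vcpu->kvm, gpa_to_gfn(dst), &gpa,
                                    offset_in_page(dst), sizeof(gpa));
}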
[RESEND PATCH v3 07/13] KVM/nVMX: Use kvm_vcpu_map when mapping the virtual APIC page
Use kvm_vcpu_map when mapping the virtual APIC page since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". One additional semantic change is that the virtual host mapping lifecycle has changed a bit. It now has the same lifetime as the pinning of the virtual APIC page on the host side. Signed-off-by: KarimAllah Ahmed --- v1 -> v2: - Do not change the lifecycle of the mapping (pbonzini) - Use pfn_to_hpa instead of gfn_to_gpa --- arch/x86/kvm/vmx.c | 39 +-- 1 file changed, 13 insertions(+), 26 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 5b15ca2..9683923 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -845,9 +845,8 @@ struct nested_vmx { * pointers, so we must keep them pinned while L2 runs. */ struct page *apic_access_page; - struct page *virtual_apic_page; + struct kvm_host_map virtual_apic_map; struct page *pi_desc_page; - struct kvm_host_map msr_bitmap_map; struct pi_desc *pi_desc; @@ -6152,11 +6151,12 @@ static void vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu) max_irr = find_last_bit((unsigned long *)vmx->nested.pi_desc->pir, 256); if (max_irr != 256) { - vapic_page = kmap(vmx->nested.virtual_apic_page); + vapic_page = vmx->nested.virtual_apic_map.hva; + if (!vapic_page) + return; + __kvm_apic_update_irr(vmx->nested.pi_desc->pir, vapic_page, _irr); - kunmap(vmx->nested.virtual_apic_page); - status = vmcs_read16(GUEST_INTR_STATUS); if ((u8)max_irr > ((u8)status & 0xff)) { status &= ~0xff; @@ -6182,14 +6182,13 @@ static bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu) if (WARN_ON_ONCE(!is_guest_mode(vcpu)) || !nested_cpu_has_vid(get_vmcs12(vcpu)) || - WARN_ON_ONCE(!vmx->nested.virtual_apic_page)) + WARN_ON_ONCE(!vmx->nested.virtual_apic_map.gfn)) return false; rvi = vmx_get_rvi(); - vapic_page = kmap(vmx->nested.virtual_apic_page); + vapic_page = vmx->nested.virtual_apic_map.hva; vppr = *((u32 *)(vapic_page + APIC_PROCPRI)); - kunmap(vmx->nested.virtual_apic_page); return ((rvi & 0xf0) > (vppr & 0xf0)); } @@ -8468,10 +8467,7 @@ static void free_nested(struct vcpu_vmx *vmx) kvm_release_page_dirty(vmx->nested.apic_access_page); vmx->nested.apic_access_page = NULL; } - if (vmx->nested.virtual_apic_page) { - kvm_release_page_dirty(vmx->nested.virtual_apic_page); - vmx->nested.virtual_apic_page = NULL; - } + kvm_vcpu_unmap(>nested.virtual_apic_map); if (vmx->nested.pi_desc_page) { kunmap(vmx->nested.pi_desc_page); kvm_release_page_dirty(vmx->nested.pi_desc_page); @@ -11394,6 +11390,7 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) { struct vmcs12 *vmcs12 = get_vmcs12(vcpu); struct vcpu_vmx *vmx = to_vmx(vcpu); + struct kvm_host_map *map; struct page *page; u64 hpa; @@ -11426,11 +11423,7 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) } if (nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW)) { - if (vmx->nested.virtual_apic_page) { /* shouldn't happen */ - kvm_release_page_dirty(vmx->nested.virtual_apic_page); - vmx->nested.virtual_apic_page = NULL; - } - page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->virtual_apic_page_addr); + map = >nested.virtual_apic_map; /* * If translation failed, VM entry will fail because @@ -11445,11 +11438,8 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) * control. But such a configuration is useless, so * let's keep the code simple.
*/ - if (!is_error_page(page)) { - vmx->nested.virtual_apic_page = page; - hpa = page_to_phys(vmx->nested.virtual_apic_page); - vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, hpa); - } + if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->virtual_apic_page_addr), map)) + vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, pfn_to_hpa(map->pfn)); } if (nested_cpu_has_posted_intr(vmcs12)) { @@ -13353,10 +13343,7 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason, kvm_release_page_dirty(vmx->nested.apic_access_page); vmx->nested.apic_access_page = NULL; } - if (vmx->nested.virtual_apic_page) { - kvm_release_page_dirty(vmx->nested.virtual_apic_page); - vmx->nested.virtual_apic_page = NULL; - } + kvm_vcpu_unmap(>nested.virtual_apic_map); -- 2.7.4
[PATCH v2] KVM/nVMX: Do not validate that posted_intr_desc_addr is page aligned
The spec only requires the posted interrupt descriptor address to be 64-byte aligned (i.e. bits[0:5] == 0). Using page_address_valid also forces the address to be page aligned. Only validate that the address does not exceed the maximum physical address, without enforcing page alignment. v1 -> v2: - Add a missing parenthesis (dropped while merging!) Cc: Paolo Bonzini Cc: Radim Krčmář Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: H. Peter Anvin Cc: x...@kernel.org Cc: k...@vger.kernel.org Cc: linux-kernel@vger.kernel.org Fixes: 6de84e581c0 ("nVMX x86: check posted-interrupt descriptor addresss on vmentry of L2") Signed-off-by: KarimAllah Ahmed --- arch/x86/kvm/vmx.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 38f1a16..bb0fcdb 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -11667,7 +11667,7 @@ static int nested_vmx_check_apicv_controls(struct kvm_vcpu *vcpu, !nested_exit_intr_ack_set(vcpu) || (vmcs12->posted_intr_nv & 0xff00) || (vmcs12->posted_intr_desc_addr & 0x3f) || - (!page_address_valid(vcpu, vmcs12->posted_intr_desc_addr + (vmcs12->posted_intr_desc_addr >> cpuid_maxphyaddr(vcpu return -EINVAL; /* tpr shadow is needed by all apicv features. */ -- 2.7.4
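A concrete example of the difference: a descriptor at GPA 0x1040 is 64-byte aligned (0x1040 & 0x3f == 0) but not page aligned (0x1040 & 0xfff != 0), so the page_address_valid() check rejected a layout the SDM permits. The intended validation, sketched as a hypothetical helper:

static bool pi_desc_addr_ok(struct kvm_vcpu *vcpu, u64 addr)
{
        if (addr & 0x3f)                        /* bits [5:0] must be clear */
                return false;
        if (addr >> cpuid_maxphyaddr(vcpu))     /* must not exceed MAXPHYADDR */
                return false;
        return true;
}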
[PATCH v3 06/13] KVM/nVMX: Use kvm_vcpu_map when mapping the L1 MSR bitmap
Use kvm_vcpu_map when mapping the L1 MSR bitmap since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". Signed-off-by: KarimAllah Ahmed --- v1 -> v2: - Do not change the lifecycle of the mapping (pbonzini) --- arch/x86/kvm/vmx.c | 14 -- 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index d857401..5b15ca2 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -847,6 +847,9 @@ struct nested_vmx { struct page *apic_access_page; struct page *virtual_apic_page; struct page *pi_desc_page; + + struct kvm_host_map msr_bitmap_map; + struct pi_desc *pi_desc; bool pi_pending; u16 posted_intr_nv; @@ -11546,9 +11549,10 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12) { int msr; - struct page *page; unsigned long *msr_bitmap_l1; unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.vmcs02.msr_bitmap; + struct kvm_host_map *map = _vmx(vcpu)->nested.msr_bitmap_map; + /* * pred_cmd & spec_ctrl are trying to verify two things: * @@ -11574,11 +11578,10 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu, !pred_cmd && !spec_ctrl) return false; - page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->msr_bitmap); - if (is_error_page(page)) + if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->msr_bitmap), map)) return false; - msr_bitmap_l1 = (unsigned long *)kmap(page); + msr_bitmap_l1 = (unsigned long *)map->hva; if (nested_cpu_has_apic_reg_virt(vmcs12)) { /* * L0 need not intercept reads for MSRs between 0x800 and 0x8ff, it @@ -11626,8 +11629,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu, MSR_IA32_PRED_CMD, MSR_TYPE_W); - kunmap(page); - kvm_release_page_clean(page); + kvm_vcpu_unmap(_vmx(vcpu)->nested.msr_bitmap_map); return true; } -- 2.7.4
[PATCH v3 01/13] X86/nVMX: handle_vmon: Read 4 bytes from guest memory
Read the data directly from guest memory instead of the map->read->unmap sequence. This also avoids using kvm_vcpu_gpa_to_page() and kmap() which assumes that there is a "struct page" for guest memory. Suggested-by: Jim Mattson Signed-off-by: KarimAllah Ahmed --- v1 -> v2: - Massage commit message a bit. --- arch/x86/kvm/vmx.c | 14 +++--- 1 file changed, 3 insertions(+), 11 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 4f5d4bd..358759a 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -8321,7 +8321,7 @@ static int handle_vmon(struct kvm_vcpu *vcpu) { int ret; gpa_t vmptr; - struct page *page; + uint32_t revision; struct vcpu_vmx *vmx = to_vmx(vcpu); const u64 VMXON_NEEDED_FEATURES = FEATURE_CONTROL_LOCKED | FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX; @@ -8373,19 +8373,11 @@ static int handle_vmon(struct kvm_vcpu *vcpu) return kvm_skip_emulated_instruction(vcpu); } - page = kvm_vcpu_gpa_to_page(vcpu, vmptr); - if (is_error_page(page)) { + if (kvm_read_guest(vcpu->kvm, vmptr, , sizeof(revision)) || + revision != VMCS12_REVISION) { nested_vmx_failInvalid(vcpu); return kvm_skip_emulated_instruction(vcpu); } - if (*(u32 *)kmap(page) != VMCS12_REVISION) { - kunmap(page); - kvm_release_page_clean(page); - nested_vmx_failInvalid(vcpu); - return kvm_skip_emulated_instruction(vcpu); - } - kunmap(page); - kvm_release_page_clean(page); vmx->nested.vmxon_ptr = vmptr; ret = enter_vmx_operation(vcpu); -- 2.7.4
[PATCH v3 05/13] KVM: Introduce a new guest mapping API
In KVM, especially for nested guests, there is a dominant pattern of: => map guest memory -> do_something -> unmap guest memory In addition to all the unnecessary noise this boilerplate code adds, most of the time the mapping function does not properly handle memory that is not backed by "struct page". This new guest mapping API encapsulates most of this boilerplate code and also handles guest memory that is not backed by "struct page". Keep in mind that memremap is horribly slow, so this mapping API should not be used for high-frequency mapping operations, but rather for low-frequency mappings. Signed-off-by: KarimAllah Ahmed --- v1 -> v2: - Drop the caching optimization (pbonzini) - Use 'hva' instead of 'kaddr' (pbonzini) - Return 0/-EINVAL/-EFAULT instead of true/false. -EFAULT will be used for AMD patch (pbonzini) - Introduce __kvm_map_gfn which accepts a memory slot and use it (pbonzini) - Only clear map->hva instead of memsetting the whole structure. - Drop kvm_vcpu_map_valid since it is no longer used. - Fix EXPORT_MODULE naming. --- include/linux/kvm_host.h | 9 + virt/kvm/kvm_main.c | 50 2 files changed, 59 insertions(+) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index c926698..59e56b8 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -205,6 +205,13 @@ enum { READING_SHADOW_PAGE_TABLES, }; +struct kvm_host_map { + struct page *page; + void *hva; + kvm_pfn_t pfn; + kvm_pfn_t gfn; +}; + /* * Sometimes a large or cross-page mmio needs to be broken up into separate * exits for userspace servicing. */ @@ -708,6 +715,8 @@ struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu); struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn); kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn); kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn); +int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map); +void kvm_vcpu_unmap(struct kvm_host_map *map); struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn); unsigned long kvm_vcpu_gfn_to_hva(struct kvm_vcpu *vcpu, gfn_t gfn); unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *writable); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 94e931f..a79a3c4 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1652,6 +1652,56 @@ struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn) } EXPORT_SYMBOL_GPL(gfn_to_page); +static int __kvm_map_gfn(struct kvm_memory_slot *slot, gfn_t gfn, +struct kvm_host_map *map) +{ + kvm_pfn_t pfn; + void *hva = NULL; + struct page *page = NULL; + + pfn = gfn_to_pfn_memslot(slot, gfn); + if (is_error_noslot_pfn(pfn)) + return -EINVAL; + + if (pfn_valid(pfn)) { + page = pfn_to_page(pfn); + hva = kmap(page); + } else { + hva = memremap(pfn_to_hpa(pfn), PAGE_SIZE, MEMREMAP_WB); + } + + if (!hva) + return -EFAULT; + + map->page = page; + map->hva = hva; + map->pfn = pfn; + map->gfn = gfn; + + return 0; +} + +int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map) +{ + return __kvm_map_gfn(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn, map); +} +EXPORT_SYMBOL_GPL(kvm_vcpu_map); + +void kvm_vcpu_unmap(struct kvm_host_map *map) +{ + if (!map->hva) + return; + + if (map->page) + kunmap(map->page); + else + memunmap(map->hva); + + kvm_release_pfn_dirty(map->pfn); + map->hva = NULL; +} +EXPORT_SYMBOL_GPL(kvm_vcpu_unmap); + struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn) { kvm_pfn_t pfn; -- 2.7.4
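A minimal usage sketch of the new pair, under the definitions above (touch_guest_byte() is a made-up illustration, not part of the patch). Note that kvm_vcpu_unmap() already releases the pfn dirty, and that map.hva points at the start of the page, so callers re-add the in-page offset themselves:

static int touch_guest_byte(struct kvm_vcpu *vcpu, gpa_t gpa, u8 val)
{
        struct kvm_host_map map;

        /* Works for struct-page memory (kmap) and PFNMAP memory (memremap) alike. */
        if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map))
                return -EFAULT;

        *((u8 *)map.hva + offset_in_page(gpa)) = val;

        kvm_vcpu_unmap(&map);   /* kunmap()s or memunmap()s and releases the pfn */
        return 0;
}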
[PATCH v3 03/13] X86/nVMX: Update the PML table without mapping and unmapping the page
Update the PML table without mapping and unmapping the page. This also avoids using kvm_vcpu_gpa_to_page(..) which assumes that there is a "struct page" for guest memory. Signed-off-by: KarimAllah Ahmed --- v1 -> v2: - Use kvm_write_guest_page instead of kvm_write_guest (pbonzini) - Do not use pointer arithmetic for pml_address (pbonzini) --- arch/x86/kvm/vmx.c | 14 +- 1 file changed, 5 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index bc45347..d857401 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -13571,9 +13571,7 @@ static int vmx_write_pml_buffer(struct kvm_vcpu *vcpu) { struct vmcs12 *vmcs12; struct vcpu_vmx *vmx = to_vmx(vcpu); - gpa_t gpa; - struct page *page = NULL; - u64 *pml_address; + gpa_t gpa, dst; if (is_guest_mode(vcpu)) { WARN_ON_ONCE(vmx->nested.pml_full); @@ -13593,15 +13591,13 @@ static int vmx_write_pml_buffer(struct kvm_vcpu *vcpu) } gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS) & ~0xFFFull; + dst = vmcs12->pml_address + sizeof(u64) * vmcs12->guest_pml_index; - page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->pml_address); - if (is_error_page(page)) + if (kvm_write_guest_page(vcpu->kvm, gpa_to_gfn(dst), &gpa, +offset_in_page(dst), sizeof(gpa))) return 0; - pml_address = kmap(page); - pml_address[vmcs12->guest_pml_index--] = gpa; - kunmap(page); - kvm_release_page_clean(page); + vmcs12->guest_pml_index--; } return 0; -- 2.7.4
[PATCH v3 06/13] KVM/nVMX: Use kvm_vcpu_map when mapping the L1 MSR bitmap
Use kvm_vcpu_map when mapping the L1 MSR bitmap since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". Signed-off-by: KarimAllah Ahmed --- v1 -> v2: - Do not change the lifecycle of the mapping (pbonzini) --- arch/x86/kvm/vmx.c | 14 -- 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index d857401..5b15ca2 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -847,6 +847,9 @@ struct nested_vmx { struct page *apic_access_page; struct page *virtual_apic_page; struct page *pi_desc_page; + + struct kvm_host_map msr_bitmap_map; + struct pi_desc *pi_desc; bool pi_pending; u16 posted_intr_nv; @@ -11546,9 +11549,10 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12) { int msr; - struct page *page; unsigned long *msr_bitmap_l1; unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.vmcs02.msr_bitmap; + struct kvm_host_map *map = &to_vmx(vcpu)->nested.msr_bitmap_map; + /* * pred_cmd & spec_ctrl are trying to verify two things: * @@ -11574,11 +11578,10 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu, !pred_cmd && !spec_ctrl) return false; - page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->msr_bitmap); - if (is_error_page(page)) + if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->msr_bitmap), map)) return false; - msr_bitmap_l1 = (unsigned long *)kmap(page); + msr_bitmap_l1 = (unsigned long *)map->hva; if (nested_cpu_has_apic_reg_virt(vmcs12)) { /* * L0 need not intercept reads for MSRs between 0x800 and 0x8ff, it @@ -11626,8 +11629,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu, MSR_IA32_PRED_CMD, MSR_TYPE_W); - kunmap(page); - kvm_release_page_clean(page); + kvm_vcpu_unmap(&to_vmx(vcpu)->nested.msr_bitmap_map); return true; } -- 2.7.4
[PATCH v3 01/13] X86/nVMX: handle_vmon: Read 4 bytes from guest memory
Read the data directly from guest memory instead of the map->read->unmap sequence. This also avoids using kvm_vcpu_gpa_to_page() and kmap() which assume that there is a "struct page" for guest memory. Suggested-by: Jim Mattson Signed-off-by: KarimAllah Ahmed --- v1 -> v2: - Massage commit message a bit. --- arch/x86/kvm/vmx.c | 14 +++--- 1 file changed, 3 insertions(+), 11 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 4f5d4bd..358759a 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -8321,7 +8321,7 @@ static int handle_vmon(struct kvm_vcpu *vcpu) { int ret; gpa_t vmptr; - struct page *page; + uint32_t revision; struct vcpu_vmx *vmx = to_vmx(vcpu); const u64 VMXON_NEEDED_FEATURES = FEATURE_CONTROL_LOCKED | FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX; @@ -8373,19 +8373,11 @@ static int handle_vmon(struct kvm_vcpu *vcpu) return kvm_skip_emulated_instruction(vcpu); } - page = kvm_vcpu_gpa_to_page(vcpu, vmptr); - if (is_error_page(page)) { + if (kvm_read_guest(vcpu->kvm, vmptr, &revision, sizeof(revision)) || + revision != VMCS12_REVISION) { nested_vmx_failInvalid(vcpu); return kvm_skip_emulated_instruction(vcpu); } - if (*(u32 *)kmap(page) != VMCS12_REVISION) { - kunmap(page); - kvm_release_page_clean(page); - nested_vmx_failInvalid(vcpu); - return kvm_skip_emulated_instruction(vcpu); - } - kunmap(page); - kvm_release_page_clean(page); vmx->nested.vmxon_ptr = vmptr; ret = enter_vmx_operation(vcpu); -- 2.7.4
[PATCH v3 11/13] KVM/X86: hyperv: Use kvm_vcpu_map in synic_deliver_msg
Use kvm_vcpu_map in synic_deliver_msg since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". Signed-off-by: KarimAllah Ahmed --- v1 -> v2: - Update to match the new API return codes --- arch/x86/kvm/hyperv.c | 12 ++-- 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c index 8bdc78d..5310b8b 100644 --- a/arch/x86/kvm/hyperv.c +++ b/arch/x86/kvm/hyperv.c @@ -581,7 +581,7 @@ static int synic_deliver_msg(struct kvm_vcpu_hv_synic *synic, u32 sint, struct hv_message *src_msg) { struct kvm_vcpu *vcpu = synic_to_vcpu(synic); - struct page *page; + struct kvm_host_map map; gpa_t gpa; struct hv_message *dst_msg; int r; @@ -591,11 +591,11 @@ static int synic_deliver_msg(struct kvm_vcpu_hv_synic *synic, u32 sint, return -ENOENT; gpa = synic->msg_page & PAGE_MASK; - page = kvm_vcpu_gfn_to_page(vcpu, gpa >> PAGE_SHIFT); - if (is_error_page(page)) + + if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map)) return -EFAULT; - msg_page = kmap_atomic(page); + msg_page = map.hva; dst_msg = &msg_page->sint_message[sint]; if (sync_cmpxchg(&dst_msg->header.message_type, HVMSG_NONE, src_msg->header.message_type) != HVMSG_NONE) { @@ -612,8 +612,8 @@ static int synic_deliver_msg(struct kvm_vcpu_hv_synic *synic, u32 sint, else if (r == 0) r = -EFAULT; } - kunmap_atomic(msg_page); - kvm_release_page_dirty(page); + + kvm_vcpu_unmap(&map); kvm_vcpu_mark_page_dirty(vcpu, gpa >> PAGE_SHIFT); return r; } -- 2.7.4
[PATCH v3 10/13] KVM/X86: hyperv: Use kvm_vcpu_map in synic_clear_sint_msg_pending
Use kvm_vcpu_map in synic_clear_sint_msg_pending since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". Signed-off-by: KarimAllah Ahmed --- v1 -> v2: - Update to match the new API return codes --- arch/x86/kvm/hyperv.c | 16 ++-- 1 file changed, 6 insertions(+), 10 deletions(-) diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c index 01d209a..8bdc78d 100644 --- a/arch/x86/kvm/hyperv.c +++ b/arch/x86/kvm/hyperv.c @@ -158,26 +158,22 @@ static void synic_clear_sint_msg_pending(struct kvm_vcpu_hv_synic *synic, u32 sint) { struct kvm_vcpu *vcpu = synic_to_vcpu(synic); - struct page *page; - gpa_t gpa; + struct kvm_host_map map; struct hv_message *msg; struct hv_message_page *msg_page; - gpa = synic->msg_page & PAGE_MASK; - page = kvm_vcpu_gfn_to_page(vcpu, gpa >> PAGE_SHIFT); - if (is_error_page(page)) { + if (kvm_vcpu_map(vcpu, gpa_to_gfn(synic->msg_page), &map)) { vcpu_err(vcpu, "Hyper-V SynIC can't get msg page, gpa 0x%llx\n", -gpa); +synic->msg_page); return; } - msg_page = kmap_atomic(page); + msg_page = map.hva; msg = &msg_page->sint_message[sint]; msg->header.message_flags.msg_pending = 0; - kunmap_atomic(msg_page); - kvm_release_page_dirty(page); - kvm_vcpu_mark_page_dirty(vcpu, gpa >> PAGE_SHIFT); + kvm_vcpu_unmap(&map); + kvm_vcpu_mark_page_dirty(vcpu, gpa_to_gfn(synic->msg_page)); } static void kvm_hv_notify_acked_sint(struct kvm_vcpu *vcpu, u32 sint) -- 2.7.4
[PATCH v3 07/13] KVM/nVMX: Use kvm_vcpu_map when mapping the virtual APIC page
Use kvm_vcpu_map when mapping the virtual APIC page since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". One additional semantic change is that the virtual host mapping lifecycle has changed a bit. It now has the same lifetime as the pinning of the virtual APIC page on the host side. Signed-off-by: KarimAllah Ahmed --- v1 -> v2: - Do not change the lifecycle of the mapping (pbonzini) - Use pfn_to_hpa instead of gfn_to_gpa --- arch/x86/kvm/vmx.c | 34 +++--- 1 file changed, 11 insertions(+), 23 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 5b15ca2..83a5e95 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -845,9 +845,8 @@ struct nested_vmx { * pointers, so we must keep them pinned while L2 runs. */ struct page *apic_access_page; - struct page *virtual_apic_page; + struct kvm_host_map virtual_apic_map; struct page *pi_desc_page; - struct kvm_host_map msr_bitmap_map; struct pi_desc *pi_desc; @@ -6152,11 +6151,12 @@ static void vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu) max_irr = find_last_bit((unsigned long *)vmx->nested.pi_desc->pir, 256); if (max_irr != 256) { - vapic_page = kmap(vmx->nested.virtual_apic_page); + vapic_page = vmx->nested.virtual_apic_map.hva; + if (!vapic_page) + return; + __kvm_apic_update_irr(vmx->nested.pi_desc->pir, vapic_page, &max_irr); - kunmap(vmx->nested.virtual_apic_page); - status = vmcs_read16(GUEST_INTR_STATUS); if ((u8)max_irr > ((u8)status & 0xff)) { status &= ~0xff; @@ -8468,10 +8468,7 @@ static void free_nested(struct vcpu_vmx *vmx) kvm_release_page_dirty(vmx->nested.apic_access_page); vmx->nested.apic_access_page = NULL; } - if (vmx->nested.virtual_apic_page) { - kvm_release_page_dirty(vmx->nested.virtual_apic_page); - vmx->nested.virtual_apic_page = NULL; - } + kvm_vcpu_unmap(&vmx->nested.virtual_apic_map); if (vmx->nested.pi_desc_page) { kunmap(vmx->nested.pi_desc_page); kvm_release_page_dirty(vmx->nested.pi_desc_page); @@ -11394,6 +11391,7 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) { struct vmcs12 *vmcs12 = get_vmcs12(vcpu); struct vcpu_vmx *vmx = to_vmx(vcpu); + struct kvm_host_map *map; struct page *page; u64 hpa; @@ -11426,11 +11424,7 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) } if (nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW)) { - if (vmx->nested.virtual_apic_page) { /* shouldn't happen */ - kvm_release_page_dirty(vmx->nested.virtual_apic_page); - vmx->nested.virtual_apic_page = NULL; - } - page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->virtual_apic_page_addr); + map = &vmx->nested.virtual_apic_map; /* * If translation failed, VM entry will fail because @@ -11445,11 +11439,8 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) * control. But such a configuration is useless, so * let's keep the code simple. */ - if (!is_error_page(page)) { - vmx->nested.virtual_apic_page = page; - hpa = page_to_phys(vmx->nested.virtual_apic_page); - vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, hpa); - } + if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->virtual_apic_page_addr), map)) + vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, pfn_to_hpa(map->pfn)); } if (nested_cpu_has_posted_intr(vmcs12)) { @@ -13353,10 +13344,7 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason, kvm_release_page_dirty(vmx->nested.apic_access_page); vmx->nested.apic_access_page = NULL; } - if (vmx->nested.virtual_apic_page) { - kvm_release_page_dirty(vmx->nested.virtual_apic_page); - vmx->nested.virtual_apic_page = NULL; - } + kvm_vcpu_unmap(&vmx->nested.virtual_apic_map); if (vmx->nested.pi_desc_page) { kunmap(vmx->nested.pi_desc_page); kvm_release_page_dirty(vmx->nested.pi_desc_page); -- 2.7.4
[PATCH v3 09/13] KVM/X86: Use kvm_vcpu_map in emulator_cmpxchg_emulated
Use kvm_vcpu_map in emulator_cmpxchg_emulated since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". Signed-off-by: KarimAllah Ahmed --- v1 -> v2: - Update to match the new API return codes --- arch/x86/kvm/x86.c | 13 ++--- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index ca71773..f083e53 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -5280,9 +5280,9 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt, unsigned int bytes, struct x86_exception *exception) { + struct kvm_host_map map; struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt); gpa_t gpa; - struct page *page; char *kaddr; bool exchanged; @@ -5299,12 +5299,11 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt, if (((gpa + bytes - 1) & PAGE_MASK) != (gpa & PAGE_MASK)) goto emul_write; - page = kvm_vcpu_gfn_to_page(vcpu, gpa >> PAGE_SHIFT); - if (is_error_page(page)) + if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map)) goto emul_write; - kaddr = kmap_atomic(page); - kaddr += offset_in_page(gpa); + kaddr = map.hva + offset_in_page(gpa); + switch (bytes) { case 1: exchanged = CMPXCHG_TYPE(u8, kaddr, old, new); @@ -5321,8 +5320,8 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt, default: BUG(); } - kunmap_atomic(kaddr); - kvm_release_page_dirty(page); + + kvm_vcpu_unmap(&map); if (!exchanged) return X86EMUL_CMPXCHG_FAILED; -- 2.7.4
[PATCH v3 00/13] KVM/X86: Introduce a new guest mapping interface
Guest memory can either be directly managed by the kernel (i.e. have a "struct page") or it can simply live outside kernel control (i.e. not have a "struct page"). KVM mostly supports these two modes, except in a few places where the code seems to assume that guest memory must have a "struct page". This patchset introduces a new mapping interface to map guest memory into host kernel memory which also supports PFN-based memory (i.e. memory without a 'struct page'). It also converts all offending code to this interface or simply reads/writes directly from guest memory. As far as I can see all offending code is now fixed except the APIC-access page, which I will handle in a separate series along with dropping kvm_vcpu_gfn_to_page and kvm_vcpu_gpa_to_page from the internal KVM API. v2 -> v3: - rebase - Add a new patch to also fix the newly introduced shadow VMCS support for nested. Filippo Sironi (1): X86/KVM: Handle PFNs outside of kernel reach when touching GPTEs KarimAllah Ahmed (12): X86/nVMX: handle_vmon: Read 4 bytes from guest memory X86/nVMX: handle_vmptrld: Copy the VMCS12 directly from guest memory X86/nVMX: Update the PML table without mapping and unmapping the page KVM: Introduce a new guest mapping API KVM/nVMX: Use kvm_vcpu_map when mapping the L1 MSR bitmap KVM/nVMX: Use kvm_vcpu_map when mapping the virtual APIC page KVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt descriptor table KVM/X86: Use kvm_vcpu_map in emulator_cmpxchg_emulated KVM/X86: hyperv: Use kvm_vcpu_map in synic_clear_sint_msg_pending KVM/X86: hyperv: Use kvm_vcpu_map in synic_deliver_msg KVM/nSVM: Use the new mapping API for mapping guest memory KVM/nVMX: Use kvm_vcpu_map for accessing the shadow VMCS arch/x86/kvm/hyperv.c | 28 arch/x86/kvm/paging_tmpl.h | 38 --- arch/x86/kvm/svm.c | 97 +- arch/x86/kvm/vmx.c | 167 + arch/x86/kvm/x86.c | 13 ++-- include/linux/kvm_host.h | 9 +++ virt/kvm/kvm_main.c| 50 ++ 7 files changed, 217 insertions(+), 185 deletions(-) -- 2.7.4
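To make the converted pattern concrete, the series essentially turns code of the first shape below into the second; do_something() stands in for whatever the call site does with the mapping, while the rest is existing KVM API plus the kvm_vcpu_map/kvm_vcpu_unmap pair introduced in patch 05/13:

/* Before: only works when the gpa is backed by a struct page. */
struct page *page = kvm_vcpu_gpa_to_page(vcpu, gpa);
if (is_error_page(page))
	return -EFAULT;
ptr = kmap(page);
do_something(ptr);
kunmap(page);
kvm_release_page_dirty(page);

/* After: kvm_vcpu_map() falls back to memremap() for PFN-only memory. */
struct kvm_host_map map;
if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map))
	return -EFAULT;
do_something(map.hva);
kvm_vcpu_unmap(&map);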
[PATCH v3 13/13] KVM/nVMX: Use kvm_vcpu_map for accessing the shadow VMCS
Use kvm_vcpu_map for accessing the shadow VMCS since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". Signed-off-by: KarimAllah Ahmed --- arch/x86/kvm/vmx.c | 25 - 1 file changed, 12 insertions(+), 13 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 0ae25c3..2892770 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -11616,20 +11616,20 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu, static void nested_cache_shadow_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12) { + struct kvm_host_map map; struct vmcs12 *shadow; - struct page *page; if (!nested_cpu_has_shadow_vmcs(vmcs12) || vmcs12->vmcs_link_pointer == -1ull) return; shadow = get_shadow_vmcs12(vcpu); - page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->vmcs_link_pointer); - memcpy(shadow, kmap(page), VMCS12_SIZE); + if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->vmcs_link_pointer), &map)) + return; - kunmap(page); - kvm_release_page_clean(page); + memcpy(shadow, map.hva, VMCS12_SIZE); + kvm_vcpu_unmap(&map); } static void nested_flush_cached_shadow_vmcs12(struct kvm_vcpu *vcpu, @@ -12494,9 +12494,9 @@ static int check_vmentry_prereqs(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12) static int nested_vmx_check_vmcs_link_ptr(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12) { - int r; - struct page *page; + int r = 0; struct vmcs12 *shadow; + struct kvm_host_map map; if (vmcs12->vmcs_link_pointer == -1ull) return 0; @@ -12504,17 +12504,16 @@ static int nested_vmx_check_vmcs_link_ptr(struct kvm_vcpu *vcpu, if (!page_address_valid(vcpu, vmcs12->vmcs_link_pointer)) return -EINVAL; - page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->vmcs_link_pointer); - if (is_error_page(page)) + if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->vmcs_link_pointer), &map)) return -EINVAL; - r = 0; - shadow = kmap(page); + shadow = map.hva; + if (shadow->hdr.revision_id != VMCS12_REVISION || shadow->hdr.shadow_vmcs != nested_cpu_has_shadow_vmcs(vmcs12)) r = -EINVAL; - kunmap(page); - kvm_release_page_clean(page); + + kvm_vcpu_unmap(&map); return r; } -- 2.7.4
[PATCH v3 12/13] KVM/nSVM: Use the new mapping API for mapping guest memory
Use the new mapping API for mapping guest memory to avoid depending on "struct page". Signed-off-by: KarimAllah Ahmed --- arch/x86/kvm/svm.c | 97 +++--- 1 file changed, 49 insertions(+), 48 deletions(-) diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c index d96092b..911d853 100644 --- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c @@ -3036,32 +3036,6 @@ static inline bool nested_svm_nmi(struct vcpu_svm *svm) return false; } -static void *nested_svm_map(struct vcpu_svm *svm, u64 gpa, struct page **_page) -{ - struct page *page; - - might_sleep(); - - page = kvm_vcpu_gfn_to_page(&svm->vcpu, gpa >> PAGE_SHIFT); - if (is_error_page(page)) - goto error; - - *_page = page; - - return kmap(page); - -error: - kvm_inject_gp(&svm->vcpu, 0); - - return NULL; -} - -static void nested_svm_unmap(struct page *page) -{ - kunmap(page); - kvm_release_page_dirty(page); -} - static int nested_svm_intercept_ioio(struct vcpu_svm *svm) { unsigned port, size, iopm_len; @@ -3262,10 +3236,11 @@ static inline void copy_vmcb_control_area(struct vmcb *dst_vmcb, struct vmcb *from_vmcb) static int nested_svm_vmexit(struct vcpu_svm *svm) { + int rc; struct vmcb *nested_vmcb; struct vmcb *hsave = svm->nested.hsave; struct vmcb *vmcb = svm->vmcb; - struct page *page; + struct kvm_host_map map; trace_kvm_nested_vmexit_inject(vmcb->control.exit_code, vmcb->control.exit_info_1, @@ -3274,9 +3249,14 @@ static int nested_svm_vmexit(struct vcpu_svm *svm) vmcb->control.exit_int_info_err, KVM_ISA_SVM); - nested_vmcb = nested_svm_map(svm, svm->nested.vmcb, &page); - if (!nested_vmcb) + rc = kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(svm->nested.vmcb), &map); + if (rc) { + if (rc == -EINVAL) + kvm_inject_gp(&svm->vcpu, 0); return 1; + } + + nested_vmcb = map.hva; /* Exit Guest-Mode */ leave_guest_mode(&svm->vcpu); @@ -3375,7 +3355,7 @@ static int nested_svm_vmexit(struct vcpu_svm *svm) mark_all_dirty(svm->vmcb); - nested_svm_unmap(page); + kvm_vcpu_unmap(&map); nested_svm_uninit_mmu_context(&svm->vcpu); kvm_mmu_reset_context(&svm->vcpu); @@ -3433,7 +3413,7 @@ static bool nested_vmcb_checks(struct vmcb *vmcb) } static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, -struct vmcb *nested_vmcb, struct page *page) +struct vmcb *nested_vmcb, struct kvm_host_map *map) { if (kvm_get_rflags(&svm->vcpu) & X86_EFLAGS_IF) svm->vcpu.arch.hflags |= HF_HIF_MASK; @@ -3513,7 +3493,7 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, svm->vmcb->control.event_inj = nested_vmcb->control.event_inj; svm->vmcb->control.event_inj_err = nested_vmcb->control.event_inj_err; - nested_svm_unmap(page); + kvm_vcpu_unmap(map); /* Enter Guest-Mode */ enter_guest_mode(&svm->vcpu); @@ -3533,17 +3513,23 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, static bool nested_svm_vmrun(struct vcpu_svm *svm) { + int rc; struct vmcb *nested_vmcb; struct vmcb *hsave = svm->nested.hsave; struct vmcb *vmcb = svm->vmcb; - struct page *page; + struct kvm_host_map map; u64 vmcb_gpa; vmcb_gpa = svm->vmcb->save.rax; - nested_vmcb = nested_svm_map(svm, svm->vmcb->save.rax, &page); - if (!nested_vmcb) + rc = kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map); + if (rc) { + if (rc == -EINVAL) + kvm_inject_gp(&svm->vcpu, 0); return false; + } + + nested_vmcb = map.hva; if (!nested_vmcb_checks(nested_vmcb)) { nested_vmcb->control.exit_code = SVM_EXIT_ERR; nested_vmcb->control.exit_info_1 = 0; nested_vmcb->control.exit_info_2 = 0; - nested_svm_unmap(page); + kvm_vcpu_unmap(&map); return false; } @@ -3595,7 +3581,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm) copy_vmcb_control_area(hsave, vmcb); - enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb, page); + enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb, &map); return true; } @@ -3619,21 +3605,26 @@ static void nested_svm_vmloadsave(struct vmcb *from_vmcb, struct vmcb *to_vmcb) static int vmload_interception(struct vcpu_svm *svm) { struct vmcb *nested_vmcb
[PATCH v3 02/13] X86/nVMX: handle_vmptrld: Copy the VMCS12 directly from guest memory
Copy the VMCS12 directly from guest memory instead of the map->copy->unmap sequence. This also avoids using kvm_vcpu_gpa_to_page() and kmap() which assume that there is a "struct page" for guest memory. Signed-off-by: KarimAllah Ahmed --- v1 -> v2: - Massage commit message a bit. --- arch/x86/kvm/vmx.c | 23 +-- 1 file changed, 9 insertions(+), 14 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 358759a..bc45347 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -8879,33 +8879,28 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu) } if (vmx->nested.current_vmptr != vmptr) { - struct vmcs12 *new_vmcs12; - struct page *page; - page = kvm_vcpu_gpa_to_page(vcpu, vmptr); - if (is_error_page(page)) { + struct vmcs12 *new_vmcs12 = (struct vmcs12 *)__get_free_page(GFP_KERNEL); + + if (!new_vmcs12 || + kvm_read_guest(vcpu->kvm, vmptr, new_vmcs12, + sizeof(*new_vmcs12))) { + free_page((unsigned long)new_vmcs12); nested_vmx_failInvalid(vcpu); return kvm_skip_emulated_instruction(vcpu); } - new_vmcs12 = kmap(page); + if (new_vmcs12->hdr.revision_id != VMCS12_REVISION || (new_vmcs12->hdr.shadow_vmcs && !nested_cpu_has_vmx_shadow_vmcs(vcpu))) { - kunmap(page); - kvm_release_page_clean(page); + free_page((unsigned long)new_vmcs12); nested_vmx_failValid(vcpu, VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID); return kvm_skip_emulated_instruction(vcpu); } nested_release_vmcs12(vmx); - /* -* Load VMCS12 from guest memory since it is not already -* cached. -*/ memcpy(vmx->nested.cached_vmcs12, new_vmcs12, VMCS12_SIZE); - kunmap(page); - kvm_release_page_clean(page); - + free_page((unsigned long)new_vmcs12); set_current_vmptr(vmx, vmptr); } -- 2.7.4
[PATCH v3 08/13] KVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt descriptor table
Use kvm_vcpu_map when mapping the posted interrupt descriptor table since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has a "struct page". One additional semantic change is that the virtual host mapping lifecycle has changed a bit. It now has the same lifetime as the pinning of the interrupt descriptor table page on the host side. Signed-off-by: KarimAllah Ahmed --- v1 -> v2: - Do not change the lifecycle of the mapping (pbonzini) --- arch/x86/kvm/vmx.c | 45 +++-- 1 file changed, 15 insertions(+), 30 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 83a5e95..0ae25c3 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -846,7 +846,7 @@ struct nested_vmx { */ struct page *apic_access_page; struct kvm_host_map virtual_apic_map; - struct page *pi_desc_page; + struct kvm_host_map pi_desc_map; struct kvm_host_map msr_bitmap_map; struct pi_desc *pi_desc; @@ -8469,12 +8469,8 @@ static void free_nested(struct vcpu_vmx *vmx) vmx->nested.apic_access_page = NULL; } kvm_vcpu_unmap(&vmx->nested.virtual_apic_map); - if (vmx->nested.pi_desc_page) { - kunmap(vmx->nested.pi_desc_page); - kvm_release_page_dirty(vmx->nested.pi_desc_page); - vmx->nested.pi_desc_page = NULL; - vmx->nested.pi_desc = NULL; - } + kvm_vcpu_unmap(&vmx->nested.pi_desc_map); + vmx->nested.pi_desc = NULL; free_loaded_vmcs(&vmx->nested.vmcs02); } @@ -11444,24 +11440,16 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) } if (nested_cpu_has_posted_intr(vmcs12)) { - if (vmx->nested.pi_desc_page) { /* shouldn't happen */ - kunmap(vmx->nested.pi_desc_page); - kvm_release_page_dirty(vmx->nested.pi_desc_page); - vmx->nested.pi_desc_page = NULL; + map = &vmx->nested.pi_desc_map; + + if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->posted_intr_desc_addr), map)) { + vmx->nested.pi_desc = + (struct pi_desc *)(((void *)map->hva) + + offset_in_page(vmcs12->posted_intr_desc_addr)); + vmcs_write64(POSTED_INTR_DESC_ADDR, pfn_to_hpa(map->pfn) + + offset_in_page(vmcs12->posted_intr_desc_addr)); } - page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->posted_intr_desc_addr); - if (is_error_page(page)) - return; - vmx->nested.pi_desc_page = page; - vmx->nested.pi_desc = kmap(vmx->nested.pi_desc_page); - vmx->nested.pi_desc = - (struct pi_desc *)((void *)vmx->nested.pi_desc + - (unsigned long)(vmcs12->posted_intr_desc_addr & - (PAGE_SIZE - 1))); - vmcs_write64(POSTED_INTR_DESC_ADDR, - page_to_phys(vmx->nested.pi_desc_page) + - (unsigned long)(vmcs12->posted_intr_desc_addr & - (PAGE_SIZE - 1))); + } if (nested_vmx_prepare_msr_bitmap(vcpu, vmcs12)) vmcs_set_bits(CPU_BASED_VM_EXEC_CONTROL, @@ -13344,13 +13332,10 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason, kvm_release_page_dirty(vmx->nested.apic_access_page); vmx->nested.apic_access_page = NULL; } + kvm_vcpu_unmap(&vmx->nested.virtual_apic_map); - if (vmx->nested.pi_desc_page) { - kunmap(vmx->nested.pi_desc_page); - kvm_release_page_dirty(vmx->nested.pi_desc_page); - vmx->nested.pi_desc_page = NULL; - vmx->nested.pi_desc = NULL; - } + kvm_vcpu_unmap(&vmx->nested.pi_desc_map); + vmx->nested.pi_desc = NULL; /* * We are now running in L2, mmu_notifier will force to reload the -- 2.7.4
[PATCH v3 04/13] X86/KVM: Handle PFNs outside of kernel reach when touching GPTEs
From: Filippo Sironi cmpxchg_gpte() calls get_user_pages_fast() to retrieve the number of pages and the respective struct page to map in the kernel virtual address space. This doesn't work if get_user_pages_fast() is invoked with a userspace virtual address that's backed by PFNs outside of kernel reach (e.g., when limiting the kernel memory with mem= in the command line and using /dev/mem to map memory). If get_user_pages_fast() fails, look up the VMA that backs the userspace virtual address, compute the PFN and the physical address, and map it in the kernel virtual address space with memremap(). Signed-off-by: Filippo Sironi Signed-off-by: KarimAllah Ahmed --- arch/x86/kvm/paging_tmpl.h | 38 +- 1 file changed, 29 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h index 14ffd97..8f7bc8f 100644 --- a/arch/x86/kvm/paging_tmpl.h +++ b/arch/x86/kvm/paging_tmpl.h @@ -141,15 +141,35 @@ static int FNAME(cmpxchg_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, struct page *page; npages = get_user_pages_fast((unsigned long)ptep_user, 1, 1, &page); - /* Check if the user is doing something meaningless. */ - if (unlikely(npages != 1)) - return -EFAULT; - - table = kmap_atomic(page); - ret = CMPXCHG(&table[index], orig_pte, new_pte); - kunmap_atomic(table); - - kvm_release_page_dirty(page); + if (likely(npages == 1)) { + table = kmap_atomic(page); + ret = CMPXCHG(&table[index], orig_pte, new_pte); + kunmap_atomic(table); + + kvm_release_page_dirty(page); + } else { + struct vm_area_struct *vma; + unsigned long vaddr = (unsigned long)ptep_user & PAGE_MASK; + unsigned long pfn; + unsigned long paddr; + + down_read(&current->mm->mmap_sem); + vma = find_vma_intersection(current->mm, vaddr, vaddr + PAGE_SIZE); + if (!vma || !(vma->vm_flags & VM_PFNMAP)) { + up_read(&current->mm->mmap_sem); + return -EFAULT; + } + pfn = ((vaddr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff; + paddr = pfn << PAGE_SHIFT; + table = memremap(paddr, PAGE_SIZE, MEMREMAP_WB); + if (!table) { + up_read(&current->mm->mmap_sem); + return -EFAULT; + } + ret = CMPXCHG(&table[index], orig_pte, new_pte); + memunmap(table); + up_read(&current->mm->mmap_sem); + } return (ret != orig_pte); } -- 2.7.4
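The PFN arithmetic in the fallback path follows from how remap_pfn_range() sets up a VM_PFNMAP VMA: vma->vm_pgoff holds the first PFN of the mapping, so the PFN behind any user address inside the VMA is that base plus the address's page index within the VMA. A standalone restatement of just that step (the helper name is illustrative, not part of the patch):

/* Translate a user virtual address inside a VM_PFNMAP VMA to the PFN
 * backing it.  vma->vm_pgoff is the base PFN installed by
 * remap_pfn_range(); the caller must hold mmap_sem for read so the
 * VMA cannot change underneath us. */
static unsigned long pfnmap_vaddr_to_pfn(struct vm_area_struct *vma,
					 unsigned long vaddr)
{
	return ((vaddr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
}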
[PATCH] KVM/nVMX: Do not validate that posted_intr_desc_addr is page aligned
The spec only requires the posted interrupt descriptor address to be
64-byte aligned (i.e. bits[5:0] == 0). Using page_address_valid also
forces the address to be page aligned. Only validate that the address
does not exceed the maximum physical address, without enforcing page
alignment.

Cc: Paolo Bonzini
Cc: Radim Krčmář
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: H. Peter Anvin
Cc: x...@kernel.org
Cc: k...@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Fixes: 6de84e581c0 ("nVMX x86: check posted-interrupt descriptor addresss on vmentry of L2")
Signed-off-by: KarimAllah Ahmed
---
 arch/x86/kvm/vmx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 30bf860..47962f2 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -11668,7 +11668,7 @@ static int nested_vmx_check_apicv_controls(struct kvm_vcpu *vcpu,
 	    !nested_exit_intr_ack_set(vcpu) ||
 	    (vmcs12->posted_intr_nv & 0xff00) ||
 	    (vmcs12->posted_intr_desc_addr & 0x3f) ||
-	    (!page_address_valid(vcpu, vmcs12->posted_intr_desc_addr)))
+	    (vmcs12->posted_intr_desc_addr >> cpuid_maxphyaddr(vcpu)))
 		return -EINVAL;

 	/* tpr shadow is needed by all apicv features. */
--
2.7.4
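The relaxed check is easy to demonstrate standalone. A minimal sketch;
maxphyaddr is hardcoded to 36 here, whereas the kernel obtains it via
cpuid_maxphyaddr(vcpu), and the helper name is hypothetical:

	#include <stdint.h>
	#include <stdio.h>

	/* Reject only descriptors that are not 64-byte aligned or that
	 * do not fit within the guest's MAXPHYADDR, mirroring the
	 * patched condition. */
	static int pi_desc_addr_invalid(uint64_t addr, int maxphyaddr)
	{
		return (addr & 0x3f) ||			/* bits[5:0] must be zero */
		       (addr >> maxphyaddr) != 0;	/* beyond MAXPHYADDR */
	}

	int main(void)
	{
		printf("%d\n", pi_desc_addr_invalid(0x1000, 36)); /* 0: page aligned */
		printf("%d\n", pi_desc_addr_invalid(0x1040, 36)); /* 0: 64-byte aligned */
		printf("%d\n", pi_desc_addr_invalid(0x1020, 36)); /* 1: misaligned */
		return 0;
	}

The 0x1040 case is the point of the patch: it was rejected before,
because page_address_valid() additionally requires page alignment, even
though the spec permits any 64-byte aligned address.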
[PATCH] rcu: Benefit from expedited grace period in __wait_rcu_gp
When an expedited grace period is requested, both synchronize_sched and
synchronize_rcu_bh can be optimized to have a significantly lower
latency. Improve the __wait_rcu_gp handling to also account for
expedited grace periods. The downside is that __wait_rcu_gp will no
longer wait for all RCU variants concurrently when an expedited grace
period is in effect; given the improved latency, however, that hardly
matters.

Cc: Paul E. McKenney
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Cc: Lai Jiangshan
Cc: linux-kernel@vger.kernel.org
Signed-off-by: KarimAllah Ahmed
---
 kernel/rcu/update.c | 34 ++++++++++++++++++++++++++++------
 1 file changed, 28 insertions(+), 6 deletions(-)

diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index 68fa19a..44b8817 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -392,13 +392,27 @@ void __wait_rcu_gp(bool checktiny, int n, call_rcu_func_t *crcu_array,
 				might_sleep();
 			continue;
 		}
-		init_rcu_head_on_stack(&rs_array[i].head);
-		init_completion(&rs_array[i].completion);
+
 		for (j = 0; j < i; j++)
 			if (crcu_array[j] == crcu_array[i])
 				break;
-		if (j == i)
-			(crcu_array[i])(&rs_array[i].head, wakeme_after_rcu);
+		if (j != i)
+			continue;
+
+		if ((crcu_array[i] == call_rcu_sched ||
+		     crcu_array[i] == call_rcu_bh)
+		    && rcu_gp_is_expedited()) {
+			if (crcu_array[i] == call_rcu_sched)
+				synchronize_sched_expedited();
+			else
+				synchronize_rcu_bh_expedited();
+
+			continue;
+		}
+
+		init_rcu_head_on_stack(&rs_array[i].head);
+		init_completion(&rs_array[i].completion);
+		(crcu_array[i])(&rs_array[i].head, wakeme_after_rcu);
 	}

 	/* Wait for all callbacks to be invoked. */
@@ -407,11 +421,19 @@ void __wait_rcu_gp(bool checktiny, int n, call_rcu_func_t *crcu_array,
 		    (crcu_array[i] == call_rcu ||
 		     crcu_array[i] == call_rcu_bh))
 			continue;
+
+		if ((crcu_array[i] == call_rcu_sched ||
+		     crcu_array[i] == call_rcu_bh)
+		    && rcu_gp_is_expedited())
+			continue;
+
 		for (j = 0; j < i; j++)
 			if (crcu_array[j] == crcu_array[i])
 				break;
-		if (j == i)
-			wait_for_completion(&rs_array[i].completion);
+		if (j != i)
+			continue;
+
+		wait_for_completion(&rs_array[i].completion);
 		destroy_rcu_head_on_stack(&rs_array[i].head);
 	}
 }
--
2.7.4
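For context, a minimal sketch of the dispatch rule the first hunk adds;
the helper name expedite_directly is hypothetical, for illustration
only (kernel context assumed):

	/* A flavor is completed synchronously, instead of via a queued
	 * callback plus wait_for_completion(), when it is one of the
	 * sched/bh flavors and expediting is enabled. */
	static bool expedite_directly(call_rcu_func_t func)
	{
		return (func == call_rcu_sched || func == call_rcu_bh) &&
		       rcu_gp_is_expedited();
	}

Callers such as synchronize_rcu_mult(call_rcu_sched, call_rcu_bh) are
unaffected API-wise; only the waiting strategy changes, trading
concurrency across flavors for the much shorter expedited grace
periods.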