Allocate lm_root before the PAE roots so that the PAE roots aren't
leaked if the memory allocation for the lm_root happens to fail.
Note, KVM can _still_ leak PAE roots if mmu_check_root() fails on a
guest's PDPTR. That too will be fixed in a future commit.
Signed-off-by: Sean Christopherson
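A minimal sketch of the allocation-ordering idea, with made-up names and a plain calloc() standing in for KVM's page allocator: grab the single lm_root page first, so that a failure leaves nothing to unwind, and only then allocate the four PAE roots (unwinding them if one fails).

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical shape, not KVM's actual code: allocate lm_root before the
 * PAE roots so a failed lm_root allocation can't leak the PAE roots. */
static int alloc_roots(void **lm_root, void *pae_root[4])
{
	*lm_root = calloc(1, 4096);
	if (!*lm_root)
		return -1;		/* nothing else allocated yet: no leak */

	for (int i = 0; i < 4; i++) {
		pae_root[i] = calloc(1, 4096);
		if (!pae_root[i]) {
			while (i--)	/* unwind the roots allocated so far */
				free(pae_root[i]);
			free(*lm_root);
			return -1;
		}
	}
	return 0;
}
```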
the guest, in which case KVM uses a direct
mapped MMU even though TDP is disabled.
Fixes: d0ec49d4de90 ("kvm/x86/svm: Support Secure Memory Encryption within KVM")
Cc: sta...@vger.kernel.org
Cc: Brijesh Singh
Cc: Tom Lendacky
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu
being leaked, not to mention the above false
positive.
Opportunistically delete a warning on root_hpa being valid, there's
nothing special about 4/5-level shadow pages that warrants a WARN.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu.c | 8
1 file changed, 4 insertions
")
Cc: sta...@vger.kernel.org
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu.c | 11 ---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c462062d36aa..0987cc1d53eb 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++
Add a helper to consolidate boilerplate for nested VM-Exits that don't
provide any data in exit_info_*.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/nested.c | 55 +--
arch/x86/kvm/svm/svm.c | 6 +
arch/x86
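The consolidation can be sketched like so; the struct and helper names here are illustrative, not the actual nested.c symbols. Every no-data VM-Exit sets the exit code and zeros exit_info_1/exit_info_2, so one helper replaces the repeated three-line pattern.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, simplified VMCB exit fields. */
struct vmexit {
	uint32_t exit_code;
	uint64_t exit_info_1;
	uint64_t exit_info_2;
};

/* One helper for every nested VM-Exit that carries no payload. */
static void nested_exit_no_info(struct vmexit *e, uint32_t exit_code)
{
	e->exit_code   = exit_code;
	e->exit_info_1 = 0;	/* these exits provide no data */
	e->exit_info_2 = 0;
}
```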
of pae_root means bugs crash the host. Obviously, KVM could
unconditionally allocate pae_root, but that's arguably a worse failure
mode as it would potentially corrupt the guest instead of crashing it.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu.c | 11 +--
1 file changed, 9
For clarity, explicitly skip syncing roots if the MMU load failed
instead of relying on the !VALID_PAGE check in kvm_mmu_sync_roots().
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b
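The control-flow change reads roughly like this sketch (function names and the -ENOMEM value are mine, not the patch's): bail out explicitly when the load fails, rather than calling the sync routine and counting on its internal !VALID_PAGE check to no-op.

```c
#include <assert.h>
#include <stdbool.h>

static bool synced;

static int mmu_load(bool fail) { return fail ? -12 : 0; }
static void sync_roots(void)   { synced = true; }

static int load_and_sync(bool fail_load)
{
	int r = mmu_load(fail_load);

	if (r)
		return r;	/* explicit: never sync after a failed load */
	sync_roots();
	return 0;
}
```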
that
is _not_ intercepted by L1. E.g. if KVM is intercepting #GPs for the
VMware backdoor, a #GP that occurs in L2 while vectoring an injected #DF
will cause KVM to emulate triple fault.
Cc: Boris Ostrovsky
Cc: Jim Mattson
Cc: sta...@vger.kernel.org
Signed-off-by: Sean Christopherson
---
arch/x86/include/asm
On Tue, Mar 02, 2021, Paolo Bonzini wrote:
> On 02/03/21 18:45, Sean Christopherson wrote:
> > If KVM (L0) intercepts #GP, but L1 does not, then L2 can kill L1 by
> > triggering triple fault. On both VMX and SVM, if the CPU hits a fault
> > while vectoring an injected #DF (
;). x86.c and svm/nested.c conflict with kvm/master.
They are minor and straightforward, but let me know if you want me to post
a version based on kvm/master for easier inclusion into 5.12.
Sean Christopherson (2):
KVM: x86: Handle triple fault in L2 without killing L1
KVM: nSVM: Add helper to synthes
That last sentence is confusing. kvm_apic_set_state() already clears .pending,
by way of __start_apic_timer(). I think what you mean is:
When we cancel the timer and clear .pending during state restore, clear
expired_tscdeadline as well.
With that,
Reviewed-by: Sean Christopherson
Side topi
On Tue, Mar 02, 2021, Kai Huang wrote:
> On Mon, 2021-03-01 at 12:32 +0100, Borislav Petkov wrote:
> > On Tue, Mar 02, 2021 at 12:28:27AM +1300, Kai Huang wrote:
> > > I think some script can utilize /proc/cpuinfo. For instance, admin can
> > > have
> > > automation tool/script to deploy enclave
On Mon, Mar 01, 2021, Cathy Avery wrote:
> kvm_set_rflags(&svm->vcpu, vmcb12->save.rflags | X86_EFLAGS_FIXED);
> svm_set_efer(&svm->vcpu, vmcb12->save.efer);
> svm_set_cr0(&svm->vcpu, vmcb12->save.cr0);
> svm_set_cr4(&svm->vcpu, vmcb12->save.cr4);
Why not utilize VMCB_CR?
> -
On Thu, Feb 25, 2021, Jing Liu wrote:
> XCOMP_BV[63] field indicates that the save area is in the compacted
> format and XCOMP_BV[62:0] indicates the states that have space allocated
> in the save area, including both XCR0 and XSS bits enabled by the host
> kernel. Use xfeatures_mask_all for
On Wed, Feb 03, 2021, Like Xu wrote:
> @@ -348,10 +352,26 @@ static bool intel_pmu_handle_lbr_msrs_access(struct
> kvm_vcpu *vcpu,
> return true;
> }
>
> +/*
> + * Check if the requested depth values is supported
> + * based on the bits [0:7] of the guest cpuid.1c.eax.
> + */
> +static
On Mon, Mar 01, 2021, Woodhouse, David wrote:
> On Fri, 2021-02-26 at 06:57 -0500, Paolo Bonzini wrote:
> > + depends on KVM && IA32_FEAT_CTL
>
> Hm, why IA32_FEAT_CTL?
Ya, unless Xen support is Intel-only, that's a bug.
+Vitaly
On Thu, Feb 25, 2021, Yang Weijiang wrote:
> These fields are rarely updated by L1 QEMU/KVM, sync them when L1 is trying to
> read/write them and after they're changed. If CET guest entry-load bit is not
> set by L1 guest, migrate them to L2 manaully.
>
> Sug
CET,edx, feature_bit(IBT));
Ugh, what sadist put SHSTK and IBT in separate output registers.
Reviewed-by: Sean Christopherson
>
> #undef cr4_fixed1_update
> }
> --
> 2.26.2
>
On Mon, Mar 01, 2021, Kai Huang wrote:
> diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
> index 7449ef33f081..a7dc86e87a09 100644
> --- a/arch/x86/kernel/cpu/sgx/encl.c
> +++ b/arch/x86/kernel/cpu/sgx/encl.c
> @@ -381,6 +381,26 @@ const struct vm_operations_struct
On Mon, Mar 01, 2021, Kai Huang wrote:
> +static int handle_encls_ecreate(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_cpuid_entry2 *sgx_12_0, *sgx_12_1;
> + gva_t pageinfo_gva, secs_gva;
> + gva_t metadata_gva, contents_gva;
> + gpa_t metadata_gpa, contents_gpa, secs_gpa;
> +
On Mon, Mar 01, 2021, Kai Huang wrote:
> diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
> index 8c922e68274d..276220d0e4b5 100644
> --- a/arch/x86/kernel/cpu/sgx/main.c
> +++ b/arch/x86/kernel/cpu/sgx/main.c
> @@ -696,6 +696,21 @@ static bool __init
On Mon, Mar 01, 2021, Kai Huang wrote:
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 50810d471462..df8e338267aa 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -1570,12 +1570,18 @@ static int vmx_rtit_ctl_check(struct kvm_vcpu *vcpu,
> u64 data)
On Mon, Mar 01, 2021, Kai Huang wrote:
> And because they're architectural.
Heh, this snarky sentence can be dropped, it was a lot more clever when these
were being moved to sgx_arch.h.
> Signed-off-by: Sean Christopherson
> Acked-by: Dave Hansen
> Acked-by: Jarkko Sakkinen
On Mon, Mar 01, 2021, Kai Huang wrote:
> + /*
> + * SECS pages are "pinned" by child pages, an unpinned once all
s/an/and
> + * children have been EREMOVE'd. A child page in this instance
> + * may have pinned an SECS page encountered in an earlier release(),
> + *
On Wed, Feb 24, 2021, Xu, Like wrote:
> On 2021/2/24 1:15, Sean Christopherson wrote:
> > On Tue, Feb 23, 2021, Like Xu wrote:
> > > If lbr_desc->event is successfully created, the intel_pmu_create_
> > > guest_lbr_event() will return 0, otherwise it will return -ENOE
On Fri, Feb 26, 2021, syzbot wrote:
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit:a99163e9 Merge tag 'devicetree-for-5.12' of git://git.kern..
> git tree: upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=11564f12d0
> kernel config:
On Fri, Feb 26, 2021, Nadav Amit wrote:
>
> > On Feb 25, 2021, at 1:16 PM, Sean Christopherson wrote:
> > It's been literally years since I wrote this code, but I distinctly
> > remember the
> > addresses being relative to the base. I also remember testing multiple
+Will and Quentin (arm64)
Moving the non-KVM x86 folks to bcc, I don't think they care about KVM details at this
point.
On Fri, Feb 26, 2021, Ashish Kalra wrote:
> On Thu, Feb 25, 2021 at 02:59:27PM -0800, Steve Rutherford wrote:
> > On Thu, Feb 25, 2021 at 12:20 PM Ashish Kalra wrote:
> > Thanks for
the end GFN is unused.
No functional change intended.
Cc: Ben Gardon
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/tdp_mmu.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 020f2e573f44..9ce8d226b621
Add typedefs for the MMU handlers that are invoked when walking the MMU
SPTEs (rmaps in legacy MMU) to act on a host virtual address range.
No functional change intended.
Cc: Ben Gardon
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu.c | 27 ++-
arch
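The typedef idea, in a self-contained sketch (the type and handler names are invented for illustration): give all range handlers one function-pointer type so the walker code can take any of them uniformly.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t gfn_t;

/* Hypothetical handler type: act on SPTEs covering [start, end). */
typedef int (*hva_handler_t)(void *kvm, gfn_t start, gfn_t end, void *data);

/* Example handler matching the typedef. */
static int age_gfn_range(void *kvm, gfn_t start, gfn_t end, void *data)
{
	(void)kvm;
	(void)data;
	return end > start;	/* report whether the range is non-empty */
}
```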
WARN if set_tdp_spte() is invoked with multiple GFNs. It is specifically
a callback to handle a single host PTE being changed. Consuming the
@end parameter also eliminates the confusing 'unused' parameter.
Cc: Ben Gardon
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/tdp_mmu.c | 4
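The contract check can be sketched as below (hypothetical shape; the kernel would use WARN_ON rather than an error return): consuming @end both documents the single-PTE contract and retires the unused parameter.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t gfn_t;

/* Sketch: the callback handles exactly one host PTE, so reject (the real
 * code would WARN on) any range wider than a single GFN. */
static int set_tdp_spte(gfn_t start, gfn_t end)
{
	if (end != start + 1)
		return -1;	/* multiple GFNs: contract violation */
	return 0;
}
```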
Add a TDP MMU helper to handle a single HVA hook, the name is a nice
reminder that the flow in question is operating on a single HVA.
No functional change intended.
Cc: Ben Gardon
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/tdp_mmu.c | 16 +++-
1 file changed, 11
; this is handled by the post-loop flush.
Fixes: 1d8dd6b3f12b ("kvm: x86/mmu: Support changed pte notifier in tdp MMU")
Cc: Ben Gardon
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/tdp_mmu.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu/tdp_mm
Effectively belated code review of a few pieces of the TDP MMU.
Sean Christopherson (5):
KVM: x86/mmu: Remove spurious TLB flush from TDP MMU's change_pte()
hook
KVM: x86/mmu: WARN if TDP MMU's set_tdp_spte() sees multiple GFNs
KVM: x86/mmu: Use 'end' param in TDP MMU's test_age_gfn
ous anyways." It's more misplaced than flat out incorrect, e.g.
the alternative would be to hoist the comment above mmu_page_hash. I like
removing it though, IMO mmu_page_hash is the most obvious name out of the
various structures that track shadow pages.
With that tweak:
Reviewed-by: Sea
ase
> and fixup are wrong.
>
> Fix the calculations of the expected fault IP and new IP by adjusting
> the base after each entry.
>
> Cc: Andy Lutomirski
> Cc: Peter Zijlstra
> Cc: Sean Christopherson
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: Borislav Pe
Use the is_removed_spte() helper instead of open coding the check.
No functional change intended.
Cc: Ben Gardon
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/tdp_mmu.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm
Use low "available" bits to tag REMOVED SPTEs. Using a high bit is
moderately costly as it often causes the compiler to generate a 64-bit
immediate. More importantly, this makes it very clear REMOVED_SPTE is
a value, not a flag.
Cc: Ben Gardon
Signed-off-by: Sean Christopherson
---
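The codegen point can be illustrated with made-up constants (these are not KVM's actual REMOVED_SPTE encoding): a marker built from low software-available bits fits in a sign-extended 32-bit immediate on x86-64, while one using a high bit (say, bit 62) forces a movabs-style 64-bit immediate. And since REMOVED_SPTE is a full value, the check is an equality compare, not a bit test.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative values only. */
#define LOW_AVAIL_REMOVED	(0x5aULL << 5)	/* low bits: imm32 suffices */
#define HIGH_BIT_REMOVED	(1ULL << 62)	/* high bit: 64-bit immediate */

static inline int is_removed_low(uint64_t spte)
{
	/* REMOVED_SPTE is a value, not a flag: compare, don't test bits. */
	return spte == LOW_AVAIL_REMOVED;
}
```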
bit.
Doing that change with the current kvm_mmu_set_mask_ptes() would be an
absolute mess.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/x86/include/asm/kvm_host.h | 3 --
arch/x86/kvm/mmu.h | 1 +
arch/x86/kvm/mmu/spte.c | 60
by enabling this new WARN.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/spte.h | 18 +-
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 645e9bc2d4a2..2fad4ccd3679 100644
--- a/arch/x86/kvm/mmu/spte.h
e.g. drop_parent_pte().
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/spte.c | 8
arch/x86/kvm/mmu/spte.h | 11 ++-
2 files changed, 14 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index d12acf5eb871..e07aabb23b8a 100644
Debugging unexpected reserved bit page faults sucks. Dump the reserved
bits that (likely) caused the page fault to make debugging suck a little
less.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu.c | 7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch
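The core of the debugging aid is just masking the faulting SPTE against the MMU's reserved-bit mask; a sketch with a hypothetical helper name:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch, not KVM's actual helper: given a faulting SPTE and the MMU's
 * reserved-bit mask, return the offending bits so the dump is actionable. */
static uint64_t offending_rsvd_bits(uint64_t spte, uint64_t rsvd_mask)
{
	return spte & rsvd_mask;
}
```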
Make the location of the HOST_WRITABLE and MMU_WRITABLE configurable for
a given KVM instance. This will allow EPT to use high available bits,
which in turn will free up bit 11 for a constant MMU_PRESENT bit.
No functional change intended.
Signed-off-by: Sean Christopherson
---
Documentation
-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/spte.c | 3 +++
arch/x86/kvm/mmu/spte.h | 48 -
2 files changed, 36 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 2329ba60c67a..d12acf5eb871 100644
.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu.c | 25 -
arch/x86/kvm/mmu/spte.c | 19 +++
arch/x86/kvm/vmx/vmx.c | 17 ++---
3 files changed, 25 insertions(+), 36 deletions(-)
diff --git a/arch/x86
Move kvm_mmu_set_mask_ptes() into mmu.c as prep for future cleanup of the
mask initialization code.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu.c | 4
arch/x86/kvm/x86.c | 3 ---
2 files changed, 4 insertions(+), 3 deletions(-)
diff
the bits are not available as they're
used for the MMIO generation. For access tracked SPTEs, they are also
not available as bits 56:54 are used to store the original RX bits.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/spte.h | 8 +---
1 file
Use bits 53 and 52 for the MMIO generation now that they're not used to
identify MMIO SPTEs.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/spte.c | 1 -
arch/x86/kvm/mmu/spte.h | 8
2 files changed, 4 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu/spte.c b/arch
.
No functional change intended.
Signed-off-by: Sean Christopherson
---
Documentation/virt/kvm/locking.rst | 37 +++---
arch/x86/kvm/mmu/spte.c| 17 ++
arch/x86/kvm/mmu/spte.h| 34 ---
3 files changed, 56 insertions
Add a module param to disable MMIO caching so that it's possible to test
the related flows without access to the necessary hardware. Using shadow
paging with 64-bit KVM and 52 bits of physical address space must disable
MMIO caching as there are no reserved bits to be had.
Signed-off-by: Sean
the MMIO generation.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu.h | 2 +-
arch/x86/kvm/mmu/mmu.c | 2 +-
arch/x86/kvm/mmu/spte.c | 11 ++-
arch/x86/kvm/mmu/spte.h | 10 --
arch/x86/kvm/svm/svm.c | 2 +-
arch/x86/kvm/vmx/vmx.c | 3 ++-
6 files changed, 15
E.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu.c | 22 +-
1 file changed, 5 insertions(+), 17 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 37c68abc54b8..4a24beefff94 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm
sting trace points to TDP MMU")
Cc: Ben Gardon
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/tdp_mmu.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index f46972892a2d..782cae1eb5e1 100644
--- a/arch/x
The value returned by make_mmio_spte() is a SPTE, it is not a mask.
Name it accordingly.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu.c | 6 +++---
arch/x86/kvm/mmu/spte.c | 10 +-
2 files changed, 8 insertions(+), 8 deletions(-)
diff
If MMIO caching is disabled, e.g. when using shadow paging on CPUs with
52 bits of PA space, go straight to MMIO emulation and don't install an
MMIO SPTE. The SPTE will just generate a !PRESENT #PF, i.e. can't
actually accelerate future MMIO.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm
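The control flow boils down to an early-out, sketched here with invented names and return codes: when the MMIO SPTE value is zero (caching disabled), emulate immediately instead of installing a SPTE that could only ever produce a !PRESENT #PF.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static uint64_t shadow_mmio_value;	/* 0 == MMIO caching disabled */

enum fault_result { RET_PF_EMULATE, RET_PF_CONTINUE };

/* Hypothetical, simplified page-fault path. */
static enum fault_result handle_mmio_fault(bool is_mmio_access)
{
	if (is_mmio_access && !shadow_mmio_value)
		return RET_PF_EMULATE;	/* emulate now, don't install a SPTE */
	return RET_PF_CONTINUE;		/* normal path may cache an MMIO SPTE */
}
```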
the legacy MMU should allow such a
scenario, and closing this hole allows for additional cleanups.
Fixes: 2f2fad0897cb ("kvm: x86/mmu: Add functions to handle changed TDP SPTEs")
Cc: Ben Gardon
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu.c | 8
1 file changed, 8
that support SME and are susceptible to L1TF. But, closing the
hole is trivial.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/spte.c | 13 -
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index ef55f0bc4ccf
AND operation and remedy the issue.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d75524bc8423..93b0285e8b38 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
listically, this is all but guaranteed to be a benign bug. Fix it up
primarily so that a future patch can tweak the MMU_WARN_ON checking A/D
status to fire if the SPTE is not-present.
Fixes: f8e144971c68 ("kvm: x86/mmu: Add access tracking for tdp_mmu")
Cc: Ben Gardon
Signed-off-by:
without checking PML breaks NPT on 32-bit KVM.
Fixes: 1f4e5fc83a42 ("KVM: x86: fix nested guest live migration with PML")
Cc: sta...@vger.kernel.org
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu_internal.h | 16
1 file changed, 8 insertions(+), 8 deletion
broke.
Sean Christopherson (24):
KVM: x86/mmu: Set SPTE_AD_WRPROT_ONLY_MASK if and only if PML is
enabled
KVM: x86/mmu: Check for shadow-present SPTE before querying A/D status
KVM: x86/mmu: Bail from fast_page_fault() if SPTE is not
shadow-present
KVM: x86/mmu: Disable MMIO
On Thu, Feb 25, 2021, Dmitry Vyukov wrote:
> On Wed, Feb 24, 2021 at 7:08 PM 'Sean Christopherson' via
> syzkaller-bugs wrote:
> >
> > On Wed, Feb 24, 2021, Borislav Petkov wrote:
> > > Hi Dmitry,
> > >
> > > On Wed, Feb 24, 2021 at 06:12:57P
On Tue, Feb 23, 2021, Liu, Jing2 wrote:
> XCOMP_BV[63] field indicates that the save area is in the
> compacted format and XCOMP_BV[62:0] indicates the states that
> have space allocated in the save area, including both XCR0
> and XSS bits enable by the host kernel. Use xfeatures_mask_all
> for
On Wed, Feb 24, 2021, Ashish Kalra wrote:
> # Samples: 19K of event 'kvm:kvm_hypercall'
> # Event count (approx.): 19573
> #
> # Overhead Command Shared Object Symbol
> # ... .
> #
>100.00% qemu-system-x86
On Wed, Feb 24, 2021, Borislav Petkov wrote:
> Hi Dmitry,
>
> On Wed, Feb 24, 2021 at 06:12:57PM +0100, Dmitry Vyukov wrote:
> > Looking at the bisection log, the bisection was distracted by something
> > else.
>
> Meaning the bisection result:
>
> 167dcfc08b0b ("x86/mm: Increase pgt_buf size
o need the best->function
> == 0x7 assignment, because there is e->function == function in
s/assignment/check, here and in the shortlog.
> cpuid_entry2_find().
>
> Signed-off-by: Yejune Deng
With the shortlog and changelog cleaned up:
Reviewed-by: Sean Christopherson
> ---
On Wed, Feb 24, 2021, Nathan Tempelman wrote:
> static bool __sev_recycle_asids(int min_asid, int max_asid)
> {
> @@ -1124,6 +1129,10 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
> if (copy_from_user(&sev_cmd, argp, sizeof(struct kvm_sev_cmd)))
> return -EFAULT;
>
+Marc and Wooky
On Wed, Feb 24, 2021, Paolo Bonzini wrote:
> [CCing Nathaniel McCallum]
Ah, I assume Enarx can use this to share an asid across multiple workloads?
> On 24/02/21 09:59, Nathan Tempelman wrote:
> >
> > +7.23 KVM_CAP_VM_COPY_ENC_CONTEXT_TO
> >
ect return value causes KVM to exit to userspace without filling
the run state, e.g. QEMU logs "KVM: unknown exit, hardware reason 0".
Fixes: 14c2bf81fcd2 ("KVM: SVM: Fix #GP handling for doubly-nested
virtualization")
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/svm
On Tue, Feb 23, 2021, Jim Mattson wrote:
> On Tue, Feb 23, 2021 at 2:51 PM Sean Christopherson wrote:
> >
> > On Fri, Feb 19, 2021, David Edmondson wrote:
> > > If the VM entry/exit controls for loading/saving MSR_EFER are either
> > > not available (an older
On Fri, Feb 19, 2021, David Edmondson wrote:
> Show EFER and PAT based on their individual entry/exit controls.
>
> Signed-off-by: David Edmondson
> ---
> arch/x86/kvm/vmx/vmx.c | 19 ++-
> 1 file changed, 10 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c
On Fri, Feb 19, 2021, David Edmondson wrote:
> If the VM entry/exit controls for loading/saving MSR_EFER are either
> not available (an older processor or explicitly disabled) or not
> used (host and guest values are the same), reading GUEST_IA32_EFER
> from the VMCS returns an inaccurate value.
>
On Tue, Feb 23, 2021, Like Xu wrote:
> If lbr_desc->event is successfully created, the intel_pmu_create_
> guest_lbr_event() will return 0, otherwise it will return -ENOENT,
> and then jump to LBR msrs dummy handling.
>
> Fixes: 1b5ac3226a1a ("KVM: vmx/pmu: Pass-through LBR msrs when the guest
On Tue, Feb 23, 2021, Like Xu wrote:
> When the processor that support model-specific LBR generates a debug
> breakpoint event, it automatically clears the LBR flag. This action
> does not clear previously stored LBR stack MSRs. (Intel SDM 17.4.2)
>
> Signed-off-by: Like Xu
> ---
>
On Mon, Feb 22, 2021, David Stevens wrote:
> ---
> v3 -> v4:
> - Skip prefetch while invalidations are in progress
Oof, nice catch.
...
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 9ac0a727015d..f6aaac729667 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++
On Mon, Feb 22, 2021, Liu, Jing2 wrote:
>
> On 2/9/2021 1:24 AM, Sean Christopherson wrote:
> > On Mon, Feb 08, 2021, Dave Hansen wrote:
> > > On 2/8/21 8:16 AM, Jing Liu wrote:
> > > > -#define XSTATE_COMPACTION_ENABLED (1ULL << 63)
> > > >
On Wed, Feb 03, 2021, Will Deacon wrote:
> On Fri, Jan 08, 2021 at 12:15:14PM +, Quentin Perret wrote:
...
> > +static inline unsigned long hyp_s1_pgtable_size(void)
> > +{
...
> > + res += nr_pages << PAGE_SHIFT;
> > + }
> > +
> > + /* Allow 1 GiB for private mappings */
> >
On Fri, Jan 08, 2021, Quentin Perret wrote:
> [2]
> https://kvmforum2020.sched.com/event/eE24/virtualization-for-the-masses-exposing-kvm-on-android-will-deacon-google
I couldn't find any slides on the official KVM forum site linked above. I was
able to track down a mirror[1] and the recorded
On Thu, Feb 18, 2021, Mike Kravetz wrote:
> On 2/18/21 8:23 AM, Sean Christopherson wrote:
> > On Thu, Feb 18, 2021, Paolo Bonzini wrote:
> >> On 13/02/21 01:50, Sean Christopherson wrote:
> >>>
> >>> pfn = spte_to_pfn(iter.old_spte);
&
On Thu, Feb 18, 2021, Paolo Bonzini wrote:
> On 18/02/21 18:42, Sean Christopherson wrote:
> > > The bug is present since commit 06fc7772690d ("KVM: SVM: Activate nested
> > > state only when guest state is complete", 2010-04-25). Unfortunately,
> > > it i
On Thu, Feb 04, 2021, Ashish Kalra wrote:
> From: Ashish Kalra
...
> arch/x86/include/asm/mem_encrypt.h | 8 +
> arch/x86/kernel/kvm.c | 52 ++
> arch/x86/mm/mem_encrypt.c | 41 +++
> 3 files changed, 101 insertions(+)
On Thu, Feb 18, 2021, Kalra, Ashish wrote:
> From: Sean Christopherson
>
> On Thu, Feb 18, 2021, Kalra, Ashish wrote:
> > From: Sean Christopherson
> >
> > On Wed, Feb 17, 2021, Kalra, Ashish wrote:
> > >> From: Sean Christopherson On Thu, F
chable via the
world switch logic.
> [1]
> https://lore.kernel.org/kvm/1266493115-28386-1-git-send-email-joerg.roe...@amd.com/
>
> Signed-off-by: Paolo Bonzini
Reviewed-by: Sean Christopherson
> ---
> arch/x86/kvm/svm/nested.c | 2 +-
> 1 file changed, 1 insertion(+),
On Thu, Feb 18, 2021, Paolo Bonzini wrote:
> On 18/02/21 13:56, David Edmondson wrote:
> > On Thursday, 2021-02-18 at 12:54:52 +01, Paolo Bonzini wrote:
> >
> > > On 18/02/21 11:04, David Edmondson wrote:
> > > > When dumping the VMCS, retrieve the current guest value of EFER from
> > > > the
On Thu, Feb 18, 2021, Kalra, Ashish wrote:
> From: Sean Christopherson
>
> On Wed, Feb 17, 2021, Kalra, Ashish wrote:
> >> From: Sean Christopherson On Thu, Feb 04, 2021,
> >> Ashish Kalra wrote:
> >> > From: Brijesh Singh
> >> >
> >
On Thu, Feb 18, 2021, Paolo Bonzini wrote:
> On 13/02/21 01:50, Sean Christopherson wrote:
> >
> > pfn = spte_to_pfn(iter.old_spte);
> > if (kvm_is_reserved_pfn(pfn) ||
> > - (!PageTrans
On Thu, Feb 18, 2021, Paolo Bonzini wrote:
> On 13/02/21 01:50, Sean Christopherson wrote:
> >
> > -* Nothing to do for RO slots or CREATE/MOVE/DELETE of a slot.
> > -* See comments below.
> > +* Nothing to do for RO slots (which can't be dirtied and can't
On Fri, Feb 12, 2021, Sean Christopherson wrote:
> Paolo, this is more or less ready, but on final read-through before
> sending I realized it would be a good idea to WARN during VM destruction
> if cpu_dirty_logging_count is non-zero. I wanted to get you this before
> the 5.12
On Wed, Feb 17, 2021, Paolo Bonzini wrote:
> On 17/02/21 18:29, Sean Christopherson wrote:
> > All that being said, I'm pretty we can eliminate setting
> > inject_page_fault dynamically. I think that would yield more
> > maintainable code. Following these flows is a
On Wed, Feb 17, 2021, Maxim Levitsky wrote:
> Just like all other nested memory accesses, after a migration loading
> PDPTRs should be delayed to first VM entry to ensure
> that guest memory is fully initialized.
>
> Just move the call to nested_vmx_load_cr3 to nested_get_vmcs12_pages
> to
On Wed, Feb 17, 2021, Maxim Levitsky wrote:
> This fixes a (mostly theoretical) bug which can happen if ept=0
> on host and we run a nested guest which triggers a mmu context
> reset while running nested.
> In this case the .inject_page_fault callback will be lost.
>
> Signed-off-by: Maxim
URING_VMENTRY))
> > > kvm_machine_check();
> > >
> > > + if (likely(!vmx->exit_reason.failed_vmentry))
> > > + vmx->idt_vectoring_info = vmcs_read32(IDT_VECTORING_INFO_FIELD);
> > > +
> >
> > Any reason for the if?
>
> Sean
On Wed, Feb 17, 2021, Kalra, Ashish wrote:
> From: Sean Christopherson
> On Thu, Feb 04, 2021, Ashish Kalra wrote:
> > From: Brijesh Singh
> >
> > The ioctl is used to retrieve a guest's shared pages list.
>
> >What's the performance hit to boot time if
On Thu, Feb 04, 2021, Ashish Kalra wrote:
> From: Brijesh Singh
>
> The ioctl is used to retrieve a guest's shared pages list.
>
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: "H. Peter Anvin"
> Cc: Paolo Bonzini
> Cc: "Radim Krčmář"
AFAIK, Radim is no longer involved with KVM, and his
On Thu, Feb 04, 2021, Ashish Kalra wrote:
> From: Brijesh Singh
>
> The ioctl is used to retrieve a guest's shared pages list.
What's the performance hit to boot time if KVM_HC_PAGE_ENC_STATUS is passed
through to userspace? That way, userspace could manage the set of pages in
whatever data
On Thu, Feb 04, 2021, Ashish Kalra wrote:
> diff --git a/arch/x86/include/uapi/asm/kvm_para.h
> b/arch/x86/include/uapi/asm/kvm_para.h
> index 950afebfba88..f6bfa138874f 100644
> --- a/arch/x86/include/uapi/asm/kvm_para.h
> +++ b/arch/x86/include/uapi/asm/kvm_para.h
> @@ -33,6 +33,7 @@
> #define
On Sat, Feb 13, 2021, Andy Lutomirski wrote:
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index f923e14e87df..ec39073b4897 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -1467,12 +1467,8 @@ static void svm_vcpu_put(struct kvm_vcpu *vcpu)
> #ifdef
is rarely the desired behavior.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/kvm_cache_regs.h | 19 ---
arch/x86/kvm/svm/svm.c| 8
arch/x86/kvm/vmx/nested.c | 20 ++--
arch/x86/kvm/vmx/vmx.c| 12 ++--
arch/x86/kvm/x86.c
t;KVM: x86/xen: intercept xen hypercalls if enabled")
Cc: Joao Martins
Cc: David Woodhouse
Cc: sta...@vger.kernel.org
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/xen.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/x
is taken from ECX.
Fixes: ff092385e828 ("KVM: SVM: Implement INVLPGA")
Cc: sta...@vger.kernel.org
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/svm.c | 11 ---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/s