> index 1cceac5984daa..319460090a836 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -44,7 +44,7 @@
> #include
> #include
>
> -DEFINE_STATIC_KEY_FALSE(kvm_async_pf_enabled);
> +DEFINE_STATIC_KEY_FALSE_RO(kvm_async_pf_enabled);
>
> static int kvmapf = 1;
>
Reviewed-by: Maxim Levitsky
Best regards,
Maxim Levitsky
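For context on the DEFINE_STATIC_KEY_FALSE_RO change in the hunk above: the _RO variant places the key in .data..ro_after_init, so it can only be flipped during early init, before the rodata protection is applied (kvm_async_pf_enabled is presumably only toggled during guest setup, which is why _RO is safe here). A minimal sketch of the static-key pattern, with illustrative names that are not from the patch:

#include <linux/jump_label.h>

DEFINE_STATIC_KEY_FALSE_RO(example_key);	/* lands in .data..ro_after_init */

static int __init example_init(void)
{
	/* must happen at init time, before the section goes read-only */
	static_branch_enable(&example_key);
	return 0;
}

static bool example_hot_path(void)
{
	/* compiles to a patched jump, no memory load on the hot path */
	return static_branch_unlikely(&example_key);
}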
On Fri, 2021-04-02 at 19:38 +0200, Paolo Bonzini wrote:
> On 01/04/21 15:54, Maxim Levitsky wrote:
> > Hi!
> >
> > I would like to publish two debug features which were needed for other stuff
> > I work on.
> >
> > One is the reworked lx-symbols scri
On Fri, 2021-04-02 at 17:27 +0000, Sean Christopherson wrote:
> On Thu, Apr 01, 2021, Maxim Levitsky wrote:
> > Similar to the rest of guest page accesses after migration,
> > this should be delayed to KVM_REQ_GET_NESTED_STATE_PAGES
> > request.
>
> FWIW, I st
On Mon, 2021-04-05 at 17:01 +0000, Sean Christopherson wrote:
> On Thu, Apr 01, 2021, Maxim Levitsky wrote:
> > if new KVM_*_SREGS2 ioctls are used, the PDPTRs are
> > part of the migration state and thus are loaded
> > by those ioctls.
> >
> > Signed-off-by: Maxi
ted-by: Paolo Bonzini
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/nested.c | 40 +--
1 file changed, 22 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 8523f60adb92..ac5e3e17bda4 100644
--- a/arch/
Small refactoring that will be used in the next patch.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/kvm_cache_regs.h | 7 +++
arch/x86/kvm/svm/svm.c | 6 ++
2 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm
If the new KVM_*_SREGS2 ioctls are used, the PDPTRs are
part of the migration state and are thus loaded
by those ioctls.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/nested.c | 15 +--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch
If the new KVM_*_SREGS2 ioctls are used, the PDPTRs are
part of the migration state and are thus loaded
by those ioctls.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/vmx/nested.c | 12 +++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86
test currently fails on Intel (regardless of my patches).
Finally, patch 2 in this series fixes a rare L0 kernel oops,
which I can trigger by migrating a Hyper-V machine.
Best regards,
Maxim Levitsky
Maxim Levitsky (6):
KVM: nVMX: delay loading of PDPTRs
)
A new capability, KVM_CAP_SREGS2, is added to advertise
this ioctl to userspace.
Currently only implemented on x86.
Signed-off-by: Maxim Levitsky
---
Documentation/virt/kvm/api.rst | 43 ++
arch/x86/include/asm/kvm_host.h | 7 ++
arch/x86/include/uapi/asm/kvm.h | 13 +++
arch/x86/kvm
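To make the intended flow concrete, here is a hedged userspace sketch of how a VMM would use the proposed pair of ioctls during migration; the ioctl and struct names follow the series, and the struct is used opaquely so the sketch does not depend on its exact layout:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* sketch: migrate segment/control-register state, PDPTRs included */
static int migrate_sregs2(int src_vcpu_fd, int dst_vcpu_fd)
{
	struct kvm_sregs2 sregs2;

	if (ioctl(src_vcpu_fd, KVM_GET_SREGS2, &sregs2) < 0)
		return -1;

	/*
	 * Unlike KVM_SET_SREGS, this carries the PDPTRs explicitly, so the
	 * destination does not have to re-read them from guest memory.
	 */
	return ioctl(dst_vcpu_fd, KVM_SET_SREGS2, &sregs2);
}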
Similar to the rest of guest page accesses after migration,
this should be delayed to the KVM_REQ_GET_NESTED_STATE_PAGES
request.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/vmx/nested.c | 14 +-
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/vmx/nested.c
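The deferral pattern being applied here is the existing KVM request mechanism; roughly (a sketch, the exact call sites in the patch may differ):

/* at KVM_SET_NESTED_STATE time: don't touch guest memory yet */
kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);

/* on the first KVM_RUN after migration, in vcpu_enter_guest(): */
if (kvm_check_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu)) {
	if (unlikely(!kvm_x86_ops.nested_ops->get_nested_state_pages(vcpu))) {
		r = 0;	/* loading the nested state pages failed */
		goto out;
	}
}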
Split the check for having a vmexit handler to
svm_check_exit_valid, and make svm_handle_invalid_exit
only handle a vmexit that is already not valid.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/svm.c | 17 +
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git
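The shape of the split is roughly the following (a sketch based on the description above, not the verbatim patch):

static bool svm_check_exit_valid(u64 exit_code)
{
	return (exit_code < ARRAY_SIZE(svm_exit_handlers) &&
		svm_exit_handlers[exit_code]);
}

static int svm_handle_invalid_exit(struct kvm_vcpu *vcpu, u64 exit_code)
{
	/* only reached once svm_check_exit_valid() has already failed */
	vcpu_unimpl(vcpu, "svm: unsupported exit reason 0x%llx\n", exit_code);
	dump_vmcb(vcpu);
	vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
	vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_UNEXPECTED_EXIT_REASON;
	vcpu->run->internal.ndata = 2;
	vcpu->run->internal.data[0] = exit_code;
	vcpu->run->internal.data[1] = vcpu->arch.last_vmentry_cpu;
	return 0;
}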
Store the supported bits in the KVM_GUESTDBG_VALID_MASK
macro, similar to how arm does this.
Signed-off-by: Maxim Levitsky
---
arch/x86/include/asm/kvm_host.h | 9 +
arch/x86/kvm/x86.c | 2 ++
2 files changed, 11 insertions(+)
diff --git a/arch/x86/include/asm/kvm_host.h b
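A plausible form of the x86 definition (hedged sketch; the exact flag list is whatever the patch encodes, these are the pre-existing x86 KVM_GUESTDBG_* flags):

#define KVM_GUESTDBG_VALID_MASK \
	(KVM_GUESTDBG_ENABLE | \
	 KVM_GUESTDBG_SINGLESTEP | \
	 KVM_GUESTDBG_USE_SW_BP | \
	 KVM_GUESTDBG_USE_HW_BP | \
	 KVM_GUESTDBG_INJECT_BP | \
	 KVM_GUESTDBG_INJECT_DB)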
happen, but at least this eliminates the common
case.
Signed-off-by: Maxim Levitsky
---
Documentation/virt/kvm/api.rst | 1 +
arch/x86/include/asm/kvm_host.h | 3 ++-
arch/x86/include/uapi/asm/kvm.h | 1 +
arch/x86/kvm/x86.c | 4
4 files changed, 8 insertions(+), 1 deletion
Move KVM_GUESTDBG_VALID_MASK to kvm_host.h
and use it to return the value of this capability.
Compile tested only.
Signed-off-by: Maxim Levitsky
---
arch/arm64/include/asm/kvm_host.h | 4
arch/arm64/kvm/arm.c | 2 ++
arch/arm64/kvm/guest.c | 5 -
3 files changed
Currently #TS interception is only done once.
Also exception interception is not enabled for SEV guests.
Signed-off-by: Maxim Levitsky
---
arch/x86/include/asm/kvm_host.h | 2 +
arch/x86/kvm/svm/svm.c | 70 +
arch/x86/kvm/svm/svm.h | 6
)
Signed-off-by: Maxim Levitsky
---
kernel/module.c | 8 +-
scripts/gdb/linux/symbols.py | 203 +++
2 files changed, 143 insertions(+), 68 deletions(-)
diff --git a/kernel/module.c b/kernel/module.c
index 30479355ab85..ea81fc06ea1f 100644
--- a/kernel
.gd22...@pd.tnic/
CC: Borislav Petkov
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/x86.c | 3 +++
arch/x86/kvm/x86.h | 2 ++
2 files changed, 5 insertions(+)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3627ce8fe5bb..1a51031d64d8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm
Define KVM_GUESTDBG_VALID_MASK and use it to implement this capability.
Compile tested only.
Signed-off-by: Maxim Levitsky
---
arch/s390/include/asm/kvm_host.h | 4
arch/s390/kvm/kvm-s390.c | 3 +++
2 files changed, 7 insertions(+)
diff --git a/arch/s390/include/asm/kvm_host.h b
This capability will allow the user to know which KVM_GUESTDBG_* bits
are supported.
Signed-off-by: Maxim Levitsky
---
Documentation/virt/kvm/api.rst | 3 +++
include/uapi/linux/kvm.h | 1 +
2 files changed, 4 insertions(+)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt
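With the per-arch masks in place, the capability check plausibly reduces to returning the mask (sketch):

/* in each arch's kvm_vm_ioctl_check_extension(): */
case KVM_CAP_SET_GUEST_DEBUG2:
	return KVM_GUESTDBG_VALID_MASK;

Userspace can then probe the supported bits once via KVM_CHECK_EXTENSION instead of trying KVM_SET_GUEST_DEBUG flags one by one.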
,
Maxim Levitsky
Maxim Levitsky (9):
scripts/gdb: rework lx-symbols gdb script
KVM: introduce KVM_CAP_SET_GUEST_DEBUG2
KVM: x86: implement KVM_CAP_SET_GUEST_DEBUG2
KVM: aarch64: implement KVM_CAP_SET_GUEST_DEBUG2
KVM: s390x: implement KVM_CAP_SET_GUEST_DEBUG2
KVM: x86: implement
On Thu, 2021-04-01 at 14:16 +0300, Maxim Levitsky wrote:
> This is a result of a deep rabbit hole dive in regard to why
> currently the nested migration of 32 bit guests
> is totally broken on AMD.
Please ignore this patch series, I didn't update the patch version.
Best regards,
), then
virtual vmload/save is force disabled.
V2: incorporated review feedback from Paolo.
Best regards,
Maxim Levitsky
Maxim Levitsky (2):
KVM: x86: add guest_cpuid_is_intel
KVM: nSVM: improve SYSENTER emulation on AMD
arch/x86/kvm/cpuid.h | 8
arch/x86/kvm/svm/svm.c | 99
This is similar to existing 'guest_cpuid_is_amd_or_hygon'
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/cpuid.h | 8
1 file changed, 8 insertions(+)
diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index 2a0c5064497f..ded84d244f19 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch
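Mirroring guest_cpuid_is_amd_or_hygon, the helper plausibly looks like this (a sketch, not the verbatim patch):

static inline bool guest_cpuid_is_intel(struct kvm_vcpu *vcpu)
{
	struct kvm_cpuid_entry2 *best;

	/* leaf 0 holds the vendor string in ebx/edx/ecx */
	best = kvm_find_cpuid_entry(vcpu, 0, 0);
	return best && is_guest_vendor_intel(best->ebx, best->ecx, best->edx);
}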
TER msrs were stored in
the migration stream if L1 changed these msrs with
vmload prior to L2 entry.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/svm.c | 99 +++---
arch/x86/kvm/svm/svm.h | 6 +--
2 files changed, 68 insertions(+), 37 deletions(-)
diff --
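The idea, sketched on the MSR-read side (field names illustrate the approach and are not guaranteed to match the patch): keep the high 32 bits of the SYSENTER MSRs in software, and surface them only to guests that identify as Intel, since on AMD hardware these MSRs architecturally hold 32-bit values:

case MSR_IA32_SYSENTER_EIP:
	msr_info->data = (u32)svm->vmcb01.ptr->save.sysenter_eip;
	if (guest_cpuid_is_intel(vcpu))
		/* software-kept high half, invisible to AMD-flavored guests */
		msr_info->data |= (u64)svm->sysenter_eip_hi << 32;
	break;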
On Thu, 2021-04-01 at 19:05 +0200, Paolo Bonzini wrote:
> On 01/04/21 16:38, Maxim Levitsky wrote:
> > Injected interrupts/nmi should not block a pending exception,
> > but rather be either lost if nested hypervisor doesn't
> > intercept the pending exception (as in stock
clone of "kernel-starship-5.12.unstable"
Maxim Levitsky (4):
KVM: x86: pending exceptions must not be blocked by an injected event
KVM: x86: separate pending and injected exception
KVM: x86: correctly merge pending and injected exception
KVM: x86: remove tweaking of inject_
On Thu, 2021-04-01 at 16:44 +0200, Paolo Bonzini wrote:
> Just a quick review on the API:
>
> On 01/04/21 16:18, Maxim Levitsky wrote:
> > +struct kvm_sregs2 {
> > + /* out (KVM_GET_SREGS2) / in (KVM_SET_SREGS2) */
> > + struct kvm_segment cs, ds, es, fs, gs, ss;
>
using new nested callback
'deliver_exception_as_vmexit'
kvm_deliver_pending_exception is called after each VM exit
and prior to VM entry, which ensures that during userspace VM exits
only an injected exception can be in a raised state.
Signed-off-by: Maxim Levitsky
---
arch/x86/include/asm
This is no longer needed since page faults can now be
injected as regular exceptions in all cases.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/nested.c | 20
arch/x86/kvm/vmx/nested.c | 23 ---
2 files changed, 43 deletions(-)
diff --git
.
The only reason for an exception to be blocked is when a nested run
is pending (and that can't really happen currently,
but it is still worth checking for).
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/nested.c | 8 +++-
arch/x86/kvm/vmx/nested.c | 10 --
2 files changed, 15 insertions(+), 3
Use 'pending_exception' and 'injected_exception' fields
to store the pending and the injected exceptions.
After this patch, still only one is active at a time, but
in the next patch both could co-exist in some cases.
Signed-off-by: Maxim Levitsky
---
arch/x86/include/asm/kvm_host.h | 25 --
arch/x86
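Conceptually the split looks like this (field and type names here are hypothetical, for illustration only):

struct kvm_queued_exception {
	bool valid;
	u8 nr;			/* vector */
	bool has_error_code;
	u32 error_code;
};

/* in struct kvm_vcpu_arch: */
struct kvm_queued_exception pending_exception;	/* raised, not yet delivered */
struct kvm_queued_exception injected_exception;	/* already being delivered,
						 * e.g. re-injected after a
						 * VM exit interrupted it */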
On Thu, 2021-04-01 at 15:03 +0200, Vitaly Kuznetsov wrote:
> Maxim Levitsky writes:
>
> > Currently to support Intel->AMD migration, if CPU vendor is GenuineIntel,
> > we emulate the full 64-bit value for MSR_IA32_SYSENTER_{EIP|ESP}
> > msrs, and we also emulate the s
On Thu, 2021-03-18 at 16:35 +, Sean Christopherson wrote:
> On Thu, Mar 18, 2021, Joerg Roedel wrote:
> > On Thu, Mar 18, 2021 at 11:24:25AM +0200, Maxim Levitsky wrote:
> > > But again this is a debug feature, and it is intended to allow the user
> > > t
On Thu, 2021-03-18 at 10:19 +0100, Joerg Roedel wrote:
> On Tue, Mar 16, 2021 at 12:51:20PM +0200, Maxim Levitsky wrote:
> > I agree but what is wrong with that?
> > This is a debug feature, and it only can be enabled by the root,
> > and so someone might actually wan
On Tue, 2021-03-16 at 18:01 +0100, Jan Kiszka wrote:
> On 16.03.21 17:50, Sean Christopherson wrote:
> > On Tue, Mar 16, 2021, Maxim Levitsky wrote:
> > > On Tue, 2021-03-16 at 16:31 +0100, Jan Kiszka wrote:
> > > > Back then, when I was hacking on the gdb-stub and KV
On Tue, 2021-03-16 at 14:46 +0100, Jan Kiszka wrote:
> On 16.03.21 13:34, Maxim Levitsky wrote:
> > On Tue, 2021-03-16 at 12:27 +0100, Jan Kiszka wrote:
> > > On 16.03.21 11:59, Maxim Levitsky wrote:
> > > > On Tue, 2021-03-16 at 10:16 +0100, Jan Kiszka wrote:
>
On Tue, 2021-03-16 at 14:38 +0100, Jan Kiszka wrote:
> On 15.03.21 23:10, Maxim Levitsky wrote:
> > Fix several issues that are present in lx-symbols script:
> >
> > * Track module unloads by placing another software breakpoint at
> > 'free_module'
> > (force
On Tue, 2021-03-16 at 12:27 +0100, Jan Kiszka wrote:
> On 16.03.21 11:59, Maxim Levitsky wrote:
> > On Tue, 2021-03-16 at 10:16 +0100, Jan Kiszka wrote:
> > > On 16.03.21 00:37, Sean Christopherson wrote:
> > > > On Tue, Mar 16, 2021, Maxim Levitsky wrote:
>
On Tue, 2021-03-16 at 10:16 +0100, Jan Kiszka wrote:
> On 16.03.21 00:37, Sean Christopherson wrote:
> > On Tue, Mar 16, 2021, Maxim Levitsky wrote:
> > > This change greatly helps with two issues:
> > >
> > > * Resuming from a breakpoint is much more re
On Mon, 2021-03-15 at 16:37 -0700, Sean Christopherson wrote:
> On Tue, Mar 16, 2021, Maxim Levitsky wrote:
> > This change greatly helps with two issues:
> >
> > * Resuming from a breakpoint is much more reliable.
> >
> > When resuming execution from a br
On Tue, 2021-03-16 at 09:32 +0100, Joerg Roedel wrote:
> Hi Maxim,
>
> On Tue, Mar 16, 2021 at 12:10:20AM +0200, Maxim Levitsky wrote:
> > -static int (*const svm_exit_handlers[])(struct kvm_vcpu *vcpu) = {
> > +static int (*svm_exit_handlers[])(struct kvm_vcpu *vcpu)
On Tue, 2021-03-16 at 09:16 +0100, Paolo Bonzini wrote:
> On 15/03/21 19:19, Maxim Levitsky wrote:
> > On Mon, 2021-03-15 at 18:56 +0100, Paolo Bonzini wrote:
> > > On 15/03/21 18:43, Maxim Levitsky wrote:
> > > > +
on an idea first shown here:
https://patchwork.kernel.org/project/kvm/patch/20160301192822.gd22...@pd.tnic/
CC: Borislav Petkov
Signed-off-by: Maxim Levitsky
---
arch/x86/include/asm/kvm_host.h | 2 +
arch/x86/kvm/svm/svm.c | 77 -
arch/x86/kvm/svm/svm.h
active when guest is debugged, it won't affect
KVM running normal 'production' VMs.
Signed-off-by: Maxim Levitsky
Tested-by: Stefano Garzarella
---
arch/x86/kvm/x86.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a9d95f90a0487
this feature on Intel as well.
Best regards,
Maxim Levitsky
Maxim Levitsky (3):
scripts/gdb: rework lx-symbols gdb script
KVM: x86: guest debug: don't inject interrupts while single stepping
KVM: SVM: allow to intercept all exceptions for debug
arch/x86/include/asm/kvm_host.h | 2 +
arch
'
instruction and executes the garbage tail of the opcode on which
the breakpoint was placed.
Signed-off-by: Maxim Levitsky
---
kernel/module.c | 8 ++-
scripts/gdb/linux/symbols.py | 106 +--
2 files changed, 83 insertions(+), 31 deletions(-)
diff
On Mon, 2021-03-15 at 18:56 +0100, Paolo Bonzini wrote:
> On 15/03/21 18:43, Maxim Levitsky wrote:
> > + if (!guest_cpuid_is_intel(vcpu)) {
> > + /*
> > +* If hardware supports Virtual VMLOAD VMSAVE then enable it
> > +* in VMCB an
is
force disabled.
Best regards,
Maxim Levitsky
Maxim Levitsky (2):
KVM: x86: add guest_cpuid_is_intel
KVM: nSVM: improve SYSENTER emulation on AMD
arch/x86/kvm/cpuid.h | 8
arch/x86/kvm/svm/svm.c | 97 --
arch/x86/kvm/svm/svm.h | 7
This is similar to existing 'guest_cpuid_is_amd_or_hygon'
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/cpuid.h | 8
1 file changed, 8 insertions(+)
diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index 2a0c5064497f3..ded84d244f19f 100644
--- a/arch/x86/kvm/cpuid.h
+++ b
ted migration of 32 bit nested guests which was broken due
to incorrect cached values of these msrs being read if L1 changed these
msrs with vmload prior to L2 entry.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/svm.c | 97 --
arch/x86/kvm/svm/svm.h |
On Tue, 2021-03-09 at 14:12 +0100, Paolo Bonzini wrote:
> On 09/03/21 11:09, Maxim Levitsky wrote:
> > What happens if mmio generation overflows (e.g. if userspace keeps on
> > updating the memslots)?
> > In theory if we have a SPTE with a stale generation, it can become valid
18 bits) for the mmio generation:
What happens if mmio generation overflows (e.g. if userspace keeps on updating
the memslots)?
In theory if we have a SPTE with a stale generation, it can become valid, no?
I think that we should in the case of the overflow zap all mmio sptes.
What do you think?
Best regards,
Maxim Levitsky
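For reference, the wrap case is handled by zapping everything; a condensed sketch along the lines of kvm_mmu_invalidate_mmio_sptes(), details trimmed:

void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
{
	gen &= MMIO_SPTE_GEN_MASK;

	/*
	 * The very rare case: the MMIO generation number has wrapped,
	 * so zap all shadow pages to guarantee that no stale MMIO SPTE
	 * can ever match a future generation.
	 */
	if (unlikely(gen == 0)) {
		kvm_debug_ratelimited("zapping shadow pages for mmio generation wraparound\n");
		kvm_mmu_zap_all_fast(kvm);
	}
}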
On Mon, 2021-03-08 at 09:18 -0800, Sean Christopherson wrote:
> On Mon, Mar 08, 2021, Maxim Levitsky wrote:
> > On Thu, 2021-03-04 at 18:16 -0800, Sean Christopherson wrote:
> > > Directly connect the 'npt' param to the 'npt_enabled' variable so that
> > > runtime
ECTED_PT) {
if (write_fault)
ret = RET_PF_EMULATE;
It is a hack since it only happens to work because we eventually
unprotect the guest mmu pages when we detect write flooding to them.
Still, performance-wise, my win98 guest works very well with this
(with npt=0 on host)
On Thu, 2021-02-25 at 17:05 +0100, Paolo Bonzini wrote:
> On 25/02/21 16:41, Maxim Levitsky wrote:
> > Injected events should not block a pending exception, but rather,
> > should either be lost or be delivered to the nested hypervisor as part of
> > exitintinfo/IDT_VECTORIN
On Thu, 2021-02-25 at 17:41 +0200, Maxim Levitsky wrote:
> clone of "kernel-starship-5.11"
>
> Maxim Levitsky (4):
> KVM: x86: determine if an exception has an error code only when
> injecting it.
> KVM: x86: mmu: initialize fault.async_page_fault in wa
Signed-off-by: Maxim Levitsky
---
arch/x86/include/asm/kvm_host.h | 23 +-
arch/x86/include/uapi/asm/kvm.h | 14 +-
arch/x86/kvm/svm/nested.c | 62 +++---
arch/x86/kvm/svm/svm.c | 8 +-
arch/x86/kvm/vmx/nested.c | 114 +-
arch/x86/kvm/vmx/vmx.c | 14
Injected events should not block a pending exception, but rather
should either be lost or be delivered to the nested hypervisor as part of
exitintinfo/IDT_VECTORING_INFO
(if the nested hypervisor intercepts the pending exception).
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/nested.c | 7
This field was left uninitialized by a mistake.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/mmu/paging_tmpl.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index d9f66cc459e84..3dc9a25772bd8 100644
--- a/arch/x86/kvm/mmu
A page fault can be queued while the vCPU is in paged real mode on AMD, and
the AMD manual asks the user to always intercept it
(otherwise the result is undefined).
The resulting VM exit does have an error code.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/x86.c | 13 +
1 file changed, 9
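A sketch of the resulting logic: the decision about the error code is deferred to injection time, when the vCPU's current mode is definitively known (shape approximate, not the verbatim patch):

static void kvm_inject_exception(struct kvm_vcpu *vcpu)
{
	/*
	 * In real mode (including SVM's paged real mode) exceptions carry
	 * no error code, regardless of the mode at the time of queueing.
	 */
	if (vcpu->arch.exception.has_error_code && !is_protmode(vcpu))
		vcpu->arch.exception.has_error_code = false;

	kvm_x86_ops.queue_exception(vcpu);
}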
clone of "kernel-starship-5.11"
Maxim Levitsky (4):
KVM: x86: determine if an exception has an error code only when
injecting it.
KVM: x86: mmu: initialize fault.async_page_fault in walk_addr_generic
KVM: x86: pending exception must be injected even with an injected
e
se fails, but I haven't
> checked why].
I agree with all of this. I'll see why this code is needed (it is needed,
since I once removed it accidentally on VMX, and it broke nesting with ept=0,
in exactly the same way as it was broken on AMD).
I'll debug this a bit to see if I can make it work as you suggest.
Best regards,
Maxim Levitsky
>
>
> Paolo
>
On Wed, 2021-02-17 at 09:29 -0800, Sean Christopherson wrote:
> On Wed, Feb 17, 2021, Maxim Levitsky wrote:
> > This fixes a (mostly theoretical) bug which can happen if ept=0
> > on host and we run a nested guest which triggers a mmu context
> > reset while running nes
On Wed, 2021-02-17 at 17:06 +0100, Paolo Bonzini wrote:
> On 17/02/21 15:57, Maxim Levitsky wrote:
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index b3e36dc3f164..e428d69e21c0 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx
On Wed, 2021-02-17 at 16:57 +0200, Maxim Levitsky wrote:
> In case of npt=0 on host,
> nSVM needs the same .inject_page_fault tweak as VMX has,
> to make sure that shadow mmu faults are injected as vmexits.
>
> Signed-off-by: Maxim Levitsky
> ---
> arch/x86/
In case of npt=0 on the host,
nSVM needs the same .inject_page_fault tweak as VMX has,
to make sure that shadow MMU faults are injected as VM exits.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/nested.c | 18 ++
arch/x86/kvm/svm/svm.c | 5 -
arch/x86/kvm/svm/svm.h
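A hedged sketch of the tweak, mirroring the existing VMX logic: when L1 intercepts #PF and no nested run is pending, a fault generated by the shadow MMU is turned into a synthesized #PF VM exit instead of being injected into L2:

static void svm_inject_page_fault_nested(struct kvm_vcpu *vcpu,
					 struct x86_exception *fault)
{
	struct vcpu_svm *svm = to_svm(vcpu);

	WARN_ON(!is_guest_mode(vcpu));

	if (vmcb_is_intercept(&svm->nested.ctl,
			      INTERCEPT_EXCEPTION_OFFSET + PF_VECTOR) &&
	    !svm->nested.nested_run_pending) {
		/* reflect the fault to L1 as a #PF intercept VM exit */
		svm->vmcb->control.exit_code    = SVM_EXIT_EXCP_BASE + PF_VECTOR;
		svm->vmcb->control.exit_code_hi = 0;
		svm->vmcb->control.exit_info_1  = fault->error_code;
		svm->vmcb->control.exit_info_2  = fault->address;
		nested_svm_vmexit(svm);
	} else {
		kvm_inject_page_fault(vcpu, fault);
	}
}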
Just like all other nested memory accesses, loading the
PDPTRs after a migration should be delayed to the first VM entry,
to ensure that guest memory is fully initialized.
Just move the call to nested_vmx_load_cr3 to nested_get_vmcs12_pages
to implement this.
Signed-off-by: Maxim Levitsky
---
arch/x86
ini
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/nested.c | 40 +--
1 file changed, 22 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 53b9037259b5..ebc7dfaa9f13 100644
--- a/arch/x86/kvm/svm/neste
This way the trace will capture all nested mode entries
(including entries after migration, and from SMM)
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/nested.c | 26 ++
1 file changed, 14 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch
This fixes a (mostly theoretical) bug which can happen if ept=0
on the host and we run a nested guest which triggers an MMU context
reset while running nested.
In this case the .inject_page_fault callback will be lost.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/vmx/nested.c | 8 +---
arch
This callback will be used to tweak the MMU context
in arch-specific code after it is reset.
Signed-off-by: Maxim Levitsky
---
arch/x86/include/asm/kvm-x86-ops.h | 1 +
arch/x86/include/asm/kvm_host.h| 2 ++
arch/x86/kvm/mmu/mmu.c | 2 ++
arch/x86/kvm/svm/svm.c | 6
crashed but I strongly suspect a bug in shadow mmu,
which I track separately.
(see below for full explanation).
This patch series is based on kvm/queue branch.
Best regards,
Maxim Levitsky
PS: The shadow mmu bug which I spent most of this week on:
In my testing I am not able to boot win10
trace_kvm_exit prints this value (using vmx_get_exit_info)
so it makes sense to read it before the trace point.
Fixes: dcf068da7eb2 ("KVM: VMX: Introduce generic fastpath handler")
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/vmx/vmx.c | 4 +++-
1 file changed, 3 insertions(+),
ted-by: Paolo Bonzini
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/nested.c | 8
1 file changed, 8 insertions(+)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 519fe84f2100..c209f1232928 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/neste
For the case of nested on nested, we
> let the guest handle it.
>
> Co-developed-by: Bandan Das
> Signed-off-by: Bandan Das
> Signed-off-by: Wei Huang
> Tested-by: Maxim Levitsky
> Reviewed-by: Maxim Levitsky
> ---
> arch/x86/kvm/svm/svm.c | 20 ++--
>
gered before #GP. KVM doesn't need to intercept and emulate #GP
> faults as #GP is supposed to be triggered.
>
> Co-developed-by: Bandan Das
> Signed-off-by: Bandan Das
> Signed-off-by: Wei Huang
> Reviewed-by: Maxim Levitsky
> ---
> arch/x86/include/asm/cpufeatur
> + */
> + return kvm_emulate_instruction(vcpu,
> + EMULTYPE_VMWARE_GP | EMULTYPE_NO_DECODE);
> + } else
I would check svm_gp_erratum_intercept here, not do any emulation
if not set, and print a warning.
> + return emulate_svm_instr(vcpu, opcode);
> +
> +reinject:
> + kvm_queue_exception_e(vcpu, GP_VECTOR, error_code);
> + return 1;
> +}
> +
> void svm_set_gif(struct vcpu_svm *svm, bool value)
> {
> if (value) {
Best regards,
Maxim Levitsky
On Thu, 2021-01-21 at 14:40 -0800, Sean Christopherson wrote:
> On Thu, Jan 21, 2021, Maxim Levitsky wrote:
> > BTW, on unrelated note, currently the smap test is broken in kvm-unit tests.
> > I bisected it to commit 322cdd6405250a2a3e48db199f97a45ef519e226
> >
> >
On Thu, 2021-01-21 at 14:27 -0800, Sean Christopherson wrote:
> On Thu, Jan 21, 2021, Maxim Levitsky wrote:
> > This is very helpful to debug nested VMX issues.
> >
> > Signed-off-by: Maxim Levitsky
> > ---
> > arch/x86/kvm/trace.h | 30 +++
On Thu, 2021-01-14 at 16:14 -0800, Sean Christopherson wrote:
> On Thu, Jan 14, 2021, Maxim Levitsky wrote:
> > This is very helpful for debugging nested VMX issues.
> >
> > Signed-off-by: Maxim Levitsky
> > ---
> > arch/x86/kvm/trace.h | 30 +++
Since the fix for the bug in nested migration on VMX is
already merged by Paolo, these are the remaining
patches in this series.
I added a new patch to trace SVM nested entries from
SMM and nested state load as well.
Best regards,
Maxim Levitsky
Maxim Levitsky (2):
KVM: nSVM: move
This is very helpful to debug nested VMX issues.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/trace.h | 30 ++
arch/x86/kvm/vmx/nested.c | 5 +
arch/x86/kvm/x86.c | 3 ++-
3 files changed, 37 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm
This way the trace will capture all nested mode entries
(including entries after migration, and from SMM)
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/nested.c | 26 ++
1 file changed, 14 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch
On Thu, 2021-01-14 at 16:29 -0800, Sean Christopherson wrote:
> On Thu, Jan 14, 2021, Maxim Levitsky wrote:
> > This allows it to be printed correctly by the trace print
>
> It'd be helpful to explicitly say which tracepoint, and explain that the value
> is read by vmx_get_exit
robably same question for whether or not to
> prepend zeros. E.g. kvm_entry has "vcpu %u, rip 0x%lx" versus "rip: 0x%016llx
> vmcs: 0x%016llx". It bugs me that we're so inconsistent.
>
As I said, KVM tracing has a lot of things that can be improved,
and since it is often the only way to figure out complex bugs like the ones
I had to deal with recently,
I will do more improvements in this area as time permits.
Best regards,
Maxim Levitsky
ion.
In fact I will change the SVM tracepoint to behave the same way
in the next patch series (I'll move it to enter_svm_guest_mode).
(When I wrote this patch I somehow thought that this is what SVM already does).
Best regards,
Maxim Levitsky
On Thu, 2021-01-21 at 10:06 -0600, Wei Huang wrote:
>
> On 1/21/21 8:07 AM, Maxim Levitsky wrote:
> > On Thu, 2021-01-21 at 01:55 -0500, Wei Huang wrote:
> > > From: Bandan Das
> > >
> > > While running SVM related instructions (VMRUN/VMSAVE/VMLOAD)
ts that have and don't have that bit.
I hope that I understand this correctly.
Best regards,
Maxim Levitsky
>
> Dave
>
>
> > Co-developed-by: Bandan Das
> > Signed-off-by: Bandan Das
> > Signed-off-by: Wei Huang
> > ---
> > arch/x86/kvm/svm/svm.c
> + best = kvm_find_cpuid_entry(vcpu, 0x8000000A, 0);
> + best->edx |= (1 << 28);
> + }
> +
> /* For sev guests, the memory encryption bit is not reserved in CR3. */
> if (sev_guest(vcpu->kvm)) {
> best = kvm_find_cpuid_entry(vcpu, 0x8000001F, 0);
Tested-by: Maxim Levitsky
Reviewed-by: Maxim Levitsky
Best regards,
Maxim Levitsky
);
>
> return 0;
> }
> @@ -933,6 +934,9 @@ static __init void svm_set_cpu_caps(void)
> boot_cpu_has(X86_FEATURE_AMD_SSBD))
> kvm_cpu_cap_set(X86_FEATURE_VIRT_SSBD);
>
> + if (boot_cpu_has(X86_FEATURE_SVME_ADDR_CHK))
> + kvm_cpu_cap_set(X86_FEA
ss when kvm unit tests run in a guest.
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index fe97b0e41824a..4557fdc9c3e1b 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2243,7 +2243,7 @@ static int gp_interception(struct vcpu_svm *svm)
opcode = svm_instr_opco
ion(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> int emulation_type, void *insn, int insn_len)
> {
> @@ -7317,32 +7354,12 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu,
> gpa_t cr2_or_gpa,
>*/
> write_fault_to_spt = vcpu->arch.write_fault_to_shadow_pgtable;
guest fields only when
needed")
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/vmx/nested.c | 13 -
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 0fbb46990dfce..776688f9d1017 100644
--- a/arch/x86/kvm/vm
This allows it to be printed correctly by the trace print
that follows.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/vmx/vmx.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 2af05d3b05909..9b6e7dbf5e2bd 100644
This is very helpful for debugging nested VMX issues.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/trace.h | 30 ++
arch/x86/kvm/vmx/nested.c | 6 ++
arch/x86/kvm/x86.c | 1 +
3 files changed, 37 insertions(+)
diff --git a/arch/x86/kvm/trace.h b
which might have some stale fields
if that vmcs was already used to enter a guest, due to that optimization.
Plus I added two minor patches to improve the VMX tracepoints
a bit. There is still plenty of room for improvement.
Best regards,
Maxim Levitsky
Maxim Levitsky (3):
KVM: nVMX: Always