Signed-off-by: Sean Christopherson
---
drivers/crypto/ccp/sev-dev.c | 122 ++-
1 file changed, 47 insertions(+), 75 deletions(-)
diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
index 4aedbdaffe90..bb0d6de071e6 100644
--- a/drivers/crypto/ccp
Signed-off-by: Sean Christopherson
---
drivers/crypto/ccp/sev-dev.c | 28 +++-
drivers/crypto/ccp/sev-dev.h | 2 ++
2 files changed, 25 insertions(+), 5 deletions(-)
diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
index 47a372e07223..4aedbdaffe90
effect is a good thing from the
kernel's perspective.
Cc: Brijesh Singh
Cc: Borislav Petkov
Cc: Tom Lendacky
Signed-off-by: Sean Christopherson
---
drivers/crypto/ccp/sev-dev.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers
Leroy
Fixes: 200664d5237f ("crypto: ccp: Add Secure Encrypted Virtualization (SEV)
command support")
Signed-off-by: Sean Christopherson
---
drivers/crypto/ccp/sev-dev.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-d
Free the SEV device if later initialization fails. The memory isn't
technically leaked as it's tracked in the top-level device's devres
list, but unless the top-level device is removed, the memory won't be
freed and is effectively leaked.
Signed-off-by: Sean Christopherson
---
drivers/crypto
r/20210402233702.3291792-1-sea...@google.com
Sean Christopherson (8):
crypto: ccp: Free SEV device if SEV init fails
crypto: ccp: Detect and reject "invalid" addresses destined for PSP
crypto: ccp: Reject SEV commands with mismatching command buffer
crypto: ccp: Play nice with
On Tue, Mar 30, 2021, Wanpeng Li wrote:
> On Tue, 30 Mar 2021 at 01:15, Sean Christopherson wrote:
> >
> > +Thomas
> >
> > On Mon, Mar 29, 2021, Wanpeng Li wrote:
> > > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > > index 32cf828
On Tue, Apr 06, 2021, Steve Rutherford wrote:
> On Tue, Apr 6, 2021 at 9:08 AM Ashish Kalra wrote:
> > I see the following in Documentation/virt/kvm/api.rst :
> > ..
> > ..
> > /* KVM_EXIT_HYPERCALL */
> > struct {
> > __u64 nr;
> >
has bothered to implement support in SLOB.
Regardless, accounting vCPU allocations will not break SLOB+KVM+cgroup
users, if any exist.
Cc: Wanpeng Li
Signed-off-by: Sean Christopherson
---
v2: Drop the Fixes tag and rewrite the changelog since this is a nop when
using SLUB or SLAB
pping
> Cc: # 5.10.x: 33a3164161: KVM: x86/mmu: Don't allow
> TDP MMU to yield when recovering NX pages
> Cc:
> Signed-off-by: Paolo Bonzini
Reviewed-by: Sean Christopherson
> ---
> arch/x86/kvm/mmu/mmu.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>
Add a comment above the declaration of vcpu_svm.vmcb to call out that it
is simply a shorthand for current_vmcb->ptr. The myriad accesses to
svm->vmcb are quite confusing without this crucial detail.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm
unnecessary
newlines in the comment.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/svm.c | 9 -
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index f62c56adf7c9..afc275ba5d59 100644
--- a/ar
about using vmcb01 for VMLOAD/VMSAVE, at
first glance using vmcb01 instead of vmcb_pa looks wrong.
No functional change intended.
Cc: Maxim Levitsky
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/svm.c | 12 +---
arch/x86/kvm/svm/svm.h | 1 -
2 files changed, 9 insertions
physical cpu of the vmcb vmrun
through the vmcb")
Cc: Cathy Avery
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/svm.c | 8
1 file changed, 8 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 48b396f33bee..89619cc52cf4 100644
--- a/arch/x86/kvm/svm/
Belated code review for the vmcb changes that are queued for 5.13.
Sean Christopherson (4):
KVM: SVM: Don't set current_vmcb->cpu when switching vmcb
KVM: SVM: Drop vcpu_svm.vmcb_pa
KVM: SVM: Add a comment to clarify what vcpu_svm.vmcb points at
KVM: SVM: Enhance and clean up the v
On Tue, Apr 06, 2021, Dave Hansen wrote:
> On 4/6/21 9:31 AM, Kirill A. Shutemov wrote:
> > On Thu, Apr 01, 2021 at 02:01:15PM -0700, Dave Hansen wrote:
> >>> @@ -1999,7 +2006,8 @@ static int __set_memory_enc_dec(unsigned long addr,
> >>> int numpages, bool enc)
> >>> /*
> >>>* Before
On Tue, Apr 06, 2021, Vitaly Kuznetsov wrote:
> Emanuele Giuseppe Esposito writes:
>
> > When retrieving emulated CPUID entries, check for an insufficient array
> > size if and only if KVM is actually inserting an entry.
> > If userspace has a priori knowledge of the exact array size,
> >
On Tue, Apr 06, 2021, Emanuele Giuseppe Esposito wrote:
> When retrieving emulated CPUID entries, check for an insufficient array
> size if and only if KVM is actually inserting an entry.
> If userspace has a priori knowledge of the exact array size,
> KVM_GET_EMULATED_CPUID will incorrectly fail
On Mon, Apr 05, 2021, Ashish Kalra wrote:
> From: Ashish Kalra
...
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 3768819693e5..78284ebbbee7 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1352,6 +1352,8 @@
On Tue, Apr 06, 2021, Sean Christopherson wrote:
> On Tue, Apr 06, 2021, Ashish Kalra wrote:
> > On Mon, Apr 05, 2021 at 01:42:42PM -0700, Steve Rutherford wrote:
> > > On Mon, Apr 5, 2021 at 7:28 AM Ashish Kalra wrote:
> > > > diff --git a/arch/x86/kvm/x86.c b/ar
On Tue, Apr 06, 2021, Ashish Kalra wrote:
> On Mon, Apr 05, 2021 at 01:42:42PM -0700, Steve Rutherford wrote:
> > On Mon, Apr 5, 2021 at 7:28 AM Ashish Kalra wrote:
> > > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > > index f7d12fca397b..ef5c77d59651 100644
> > > ---
On Thu, Apr 01, 2021, Maxim Levitsky wrote:
> Small refactoring that will be used in the next patch.
>
> Signed-off-by: Maxim Levitsky
> ---
> arch/x86/kvm/kvm_cache_regs.h | 7 +++
> arch/x86/kvm/svm/svm.c| 6 ++
> 2 files changed, 9 insertions(+), 4 deletions(-)
>
> diff
On Thu, Apr 01, 2021, Maxim Levitsky wrote:
> if new KVM_*_SREGS2 ioctls are used, the PDPTRs are
> part of the migration state and thus are loaded
> by those ioctls.
>
> Signed-off-by: Maxim Levitsky
> ---
> arch/x86/kvm/svm/nested.c | 15 +--
> 1 file changed, 13 insertions(+), 2
On Mon, Apr 05, 2021, Tom Lendacky wrote:
> On 4/2/21 6:36 PM, Sean Christopherson wrote:
> > diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
> > index 6556d220713b..4c513318f16a 100644
> > --- a/drivers/crypto/ccp/sev-dev.c
> > +++ b/d
On Sun, Apr 04, 2021, Christophe Leroy wrote:
>
> Le 03/04/2021 à 01:37, Sean Christopherson a écrit :
> > @@ -152,11 +153,21 @@ static int __sev_do_cmd_locked(int cmd, void *data,
> > int *psp_ret)
> > sev = psp->sev_data;
> > buf_len = sev_cmd_buffer_l
allocation
is larger than the allocation itself.
Now that the PSP driver plays nice with vmalloc pointers, putting the
data on a virtually mapped stack (CONFIG_VMAP_STACK=y) will not cause
explosions.
Cc: Brijesh Singh
Cc: Tom Lendacky
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/s
Signed-off-by: Sean Christopherson
---
drivers/crypto/ccp/sev-dev.c | 122 ++-
1 file changed, 47 insertions(+), 75 deletions(-)
diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
index 6d5882290cfc..6a380d483fce 100644
--- a/drivers/crypto/ccp
incorporates this alignment.
Cc: Brijesh Singh
Cc: Borislav Petkov
Cc: Tom Lendacky
Signed-off-by: Sean Christopherson
---
drivers/crypto/ccp/sev-dev.c | 33 +++--
drivers/crypto/ccp/sev-dev.h | 7 +++
2 files changed, 34 insertions(+), 6 deletions(-)
diff --git
thing from the
kernel's perspective.
Cc: Brijesh Singh
Cc: Borislav Petkov
Cc: Tom Lendacky
Signed-off-by: Sean Christopherson
---
drivers/crypto/ccp/sev-dev.c | 11 ---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev
to give it the right input.
Based on kvm/queue, commit f96be2deac9b ("KVM: x86: Support KVM VMs
sharing SEV context") to avoid a minor conflict.
Sean Christopherson (5):
crypto: ccp: Detect and reject vmalloc addresses destined for PSP
crypto: ccp: Reject SEV commands with mismatchi
Signed-off-by: Sean Christopherson
---
drivers/crypto/ccp/sev-dev.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
index cb9b4c4e371e..6556d220713b 100644
--- a/drivers/crypto/ccp/sev-dev.c
+++ b/drivers/crypto/ccp/sev-dev.c
@@ -15
On Fri, Apr 02, 2021, Borislav Petkov wrote:
> On Fri, Apr 02, 2021 at 03:42:51PM +0000, Sean Christopherson wrote:
> > Nope! That's wrong, as sgx_epc_init() will not be called if sgx_drv_init()
> > succeeds. And writing it as "if (sgx_drv_init() || sgx_vepc_init())" is
On Tue, Mar 16, 2021, Nathan Tempelman wrote:
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 874ea309279f..b2c90c67a0d9 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -66,6 +66,11 @@ static int sev_flush_asids(void)
> return ret;
> }
>
>
On Thu, Apr 01, 2021, Maxim Levitsky wrote:
> Similar to the rest of guest page accesses after migration,
> this should be delayed to KVM_REQ_GET_NESTED_STATE_PAGES
> request.
FWIW, I still object to this approach, and this patch has a plethora of issues.
I'm not against deferring various state
rol is in locked mode, or not supported in the
> > hardware at all. This allows (non-Linux) guests that support non-LC
> > configurations to use SGX.
> >
> > Acked-by: Dave Hansen
> > Reviewed-by: Sean Christopherson
> > Signed-off-by: Kai Huang
> > ---
On Fri, Apr 02, 2021, Paolo Bonzini wrote:
> On 02/04/21 01:05, Sean Christopherson wrote:
> > >
> > > +struct kvm_queued_exception {
> > > + bool valid;
> > > + u8 nr;
> >
> > If we're refactoring all this code anyways, maybe change "n
On Fri, Apr 02, 2021, Paolo Bonzini wrote:
> On 02/04/21 02:56, Sean Christopherson wrote:
> > + .handler= (void *)kvm_null_fn,
> > + .on_lock= kvm_dec_notifier_count,
> > + .flush_on_ret = true,
>
> Doesn't really matte
On Fri, Apr 02, 2021, Paolo Bonzini wrote:
> On 02/04/21 02:56, Sean Christopherson wrote:
> > Avoid taking mmu_lock for unrelated .invalidate_range_{start,end}()
> > notifications. Because mmu_notifier_count must be modified while holding
> > mmu_lock for write, and must al
Let the TDP MMU yield when unmapping a range in response to a MMU
notification, if yielding is allowed by said notification. There is no
reason to disallow yielding in this case, and in theory the range being
invalidated could be quite large.
Cc: Ben Gardon
Signed-off-by: Sean Christopherson
Based heavily on code from Ben Gardon.
Suggested-by: Ben Gardon
Signed-off-by: Sean Christopherson
---
include/linux/kvm_host.h | 6 ++-
virt/kvm/kvm_main.c | 96 +++-
2 files changed, 80 insertions(+), 22 deletions(-)
diff --git a/include/linux/kvm_host.
rily taking mmu_lock each time means even a
single spurious sequence can be problematic.
Note, this optimizes only the unpaired callbacks. Optimizing the
.invalidate_range_{start,end}() pairs is more complex and will be done in
a future patch.
Suggested-by: Ben Gardon
Signed-off-by: Sean
the notifier count and sequence.
No functional change intended.
Signed-off-by: Sean Christopherson
---
Note, the WARN_ON_ONCE that asserts on_lock and handler aren't both null
is optimized out of all functions on recent gcc (for x86). I wanted to
make it a BUILD_BUG_ON, but older versions
Yank out the hva-based MMU notifier APIs now that all architectures that
use the notifiers have moved to the gfn-based APIs.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/arm64/include/asm/kvm_host.h | 1 -
arch/mips/include/asm/kvm_host.h| 1 -
arch
Signed-off-by: Sean Christopherson
---
arch/powerpc/include/asm/kvm_book3s.h | 12 ++--
arch/powerpc/include/asm/kvm_host.h| 1 +
arch/powerpc/include/asm/kvm_ppc.h | 9 ++-
arch/powerpc/kvm/book3s.c | 18 +++--
arch/powerpc/kvm/book3s.h | 10 ++-
arch/powerpc/kvm
ing into arch code.
Signed-off-by: Sean Christopherson
---
arch/mips/include/asm/kvm_host.h | 1 +
arch/mips/kvm/mmu.c | 97 ++--
2 files changed, 17 insertions(+), 81 deletions(-)
diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_hos
Move arm64 to the gfn-based MMU notifier APIs, which do the hva->gfn
lookup in common code.
No meaningful functional change intended, though the exact order of
operations is slightly different since the memslot lookups occur before
calling into arch code.
Signed-off-by: Sean Christopherson
off-by: Sean Christopherson
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/mmu/mmu.c | 127 +++--
arch/x86/kvm/mmu/tdp_mmu.c | 241
arch/x86/kvm/mmu/tdp_mmu.h | 14 +-
include/linux/kvm_host.h| 14 ++
virt/kvm/kvm_mai
ed to be reevaluated to justify
the added complexity and testing burden. Ripping out .change_pte()
entirely would be a lot easier.
Signed-off-by: Sean Christopherson
---
virt/kvm/kvm_main.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/
nal to avoid #ifdefs. [Paolo]
v1:
- https://lkml.kernel.org/r/20210326021957.1424875-1-sea...@google.com
Sean Christopherson (10):
KVM: Assert that notifier count is elevated in .change_pte()
KVM: Move x86's MMU notifier memslot walkers to generic code
KVM: arm64: Convert to the gfn-based M
On Thu, Apr 01, 2021, Maxim Levitsky wrote:
> Use 'pending_exception' and 'injected_exception' fields
> to store the pending and the injected exceptions.
>
> After this patch still only one is active, but
> in the next patch both could co-exist in some cases.
Please explain _why_.
>
On Thu, Apr 01, 2021, Paolo Bonzini wrote:
> On 01/04/21 16:38, Maxim Levitsky wrote:
> > +static int kvm_do_deliver_pending_exception(struct kvm_vcpu *vcpu)
> > +{
> > + int class1, class2, ret;
> > +
> > + /* try to deliver current pending exception as VM exit */
> > + if
On Thu, Apr 01, 2021, Dave Hansen wrote:
> On 2/5/21 3:38 PM, Kuppuswamy Sathyanarayanan wrote:
> > From: "Kirill A. Shutemov"
> >
> > Handle #VE due to MMIO operations. MMIO triggers #VE with EPT_VIOLATION
> > exit reason.
> >
> > > For now we only handle a subset of the instructions that the kernel uses
On Thu, Apr 01, 2021, Paolo Bonzini wrote:
> On 01/04/21 18:50, Ben Gardon wrote:
> > > retry:
> > > if (is_shadow_present_pte(iter.old_spte)) {
> > > if (is_large_pte(iter.old_spte)) {
> > > if
On Thu, Apr 01, 2021, Vitaly Kuznetsov wrote:
> Sean Christopherson writes:
>
> > On Wed, Mar 31, 2021, Yang Li wrote:
> >> Using __set_bit() to set a bit in an integer is not a good idea, since
> >> the function expects an unsigned long as argument, which ca
On Wed, Mar 31, 2021, Kuppuswamy, Sathyanarayanan wrote:
>
> On 3/31/21 3:11 PM, Dave Hansen wrote:
> > On 3/31/21 3:06 PM, Sean Christopherson wrote:
> > > I've no objection to a nice message in the #VE handler. What I'm
> > > objecting to
> > > is s
On Wed, Mar 31, 2021, Ben Gardon wrote:
> ---
> arch/x86/kvm/mmu/mmu.c | 6
> arch/x86/kvm/mmu/tdp_mmu.c | 74 +-
> arch/x86/kvm/mmu/tdp_mmu.h | 1 +
> 3 files changed, 80 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c
On Wed, Mar 31, 2021, Ben Gardon wrote:
> Provide a real mechanism for fast invalidation by marking roots as
> invalid so that their reference count will quickly fall to zero
> and they will be torn down.
>
> One negative side effect of this approach is that a vCPU thread will
> likely drop the
On Wed, Mar 31, 2021, Ben Gardon wrote:
> In order to parallelize more operations for the TDP MMU, make the
> refcount on TDP MMU roots atomic, so that a future patch can allow
> multiple threads to take a reference on the root concurrently, while
> holding the MMU lock in read mode.
>
>
On Wed, Mar 31, 2021, Dave Hansen wrote:
> On 3/31/21 2:53 PM, Sean Christopherson wrote:
> > On Wed, Mar 31, 2021, Kuppuswamy Sathyanarayanan wrote:
> >> Changes since v3:
> >> * WARN user if SEAM does not disable MONITOR/MWAIT instruction.
> > Why bother? T
On Wed, Mar 31, 2021, Kuppuswamy Sathyanarayanan wrote:
> Changes since v3:
> * WARN user if SEAM does not disable MONITOR/MWAIT instruction.
Why bother? There are a whole pile of features that are dictated by the TDX
module spec. MONITOR/MWAIT is about as uninteresting as it gets, e.g.
On Wed, Mar 31, 2021, Paolo Bonzini wrote:
> On 31/03/21 23:05, Sean Christopherson wrote:
> > > Wouldn't it be incorrect to lock a mutex (e.g. inside*another* MMU
> > > notifier's invalidate callback) while holding an rwlock_t? That makes
> > > sense
> > &
On Wed, Mar 31, 2021, Sean Christopherson wrote:
> On Wed, Mar 31, 2021, Paolo Bonzini wrote:
> > On 31/03/21 21:47, Sean Christopherson wrote:
> > I also thought of busy waiting on down_read_trylock if the MMU notifier
> > cannot block, but that would also be invalid fo
On Wed, Mar 31, 2021, Paolo Bonzini wrote:
> On 31/03/21 21:47, Sean Christopherson wrote:
> > Rereading things, a small chunk of the rwsem nastiness can go away. I
> > don't see
> > any reason to use rw_semaphore instead of rwlock_t.
>
> Wouldn't it be incorrect t
On Wed, Mar 31, 2021, Paolo Bonzini wrote:
> On 26/03/21 03:19, Sean Christopherson wrote:
> Also, related to the first part of the series, perhaps you could structure
> the series in a slightly different way:
>
> 1) introduce the HVA walking API in common code, complete with on_l
On Wed, Mar 31, 2021, Paolo Bonzini wrote:
> On 26/03/21 03:19, Sean Christopherson wrote:
> > + /*
> > +* Reset the lock used to prevent memslot updates between MMU notifier
> > +* range_start and range_end. At this point no more MMU notifiers will
> > +
On Wed, Mar 31, 2021, Kees Cook wrote:
> On Wed, Mar 24, 2021 at 10:45:36PM +0000, Sean Christopherson wrote:
> > On Tue, Mar 23, 2021, Sami Tolvanen wrote:
> > > On Tue, Mar 23, 2021 at 9:36 AM Sean Christopherson
> > > wrote:
> > > >
> >
On Wed, Mar 31, 2021, Paolo Bonzini wrote:
> On 31/03/21 18:41, Sean Christopherson wrote:
> > > That said, the easiest way to avoid this would be to always update
> > > mmu_notifier_count.
> > Updating mmu_notifier_count requires taking mmu_lock, which w
On Wed, Mar 31, 2021, Emanuele Giuseppe Esposito wrote:
> Calling the kvm KVM_GET_[SUPPORTED/EMULATED]_CPUID ioctl requires
> a nent field inside the kvm_cpuid2 struct to be big enough to contain
> all entries that will be set by kvm.
> Therefore if the nent field is too high, kvm will adjust it
On Wed, Mar 31, 2021, Yang Li wrote:
> Using __set_bit() to set a bit in an integer is not a good idea, since
> the function expects an unsigned long as argument, which can be 64bit wide.
> Coverity reports this problem as
>
> High:Out-of-bounds access(INCOMPATIBLE_CAST)
> CWE119: Out-of-bounds
On Wed, Mar 31, 2021, Paolo Bonzini wrote:
> On 26/03/21 03:19, Sean Christopherson wrote:
> > + /*
> > +* Reset the lock used to prevent memslot updates between MMU notifier
> > +* range_start and range_end. At this point no more MMU notifiers will
> > +
On Wed, Mar 31, 2021, Paolo Bonzini wrote:
> On 26/03/21 03:19, Sean Christopherson wrote:
> > +#ifdef KVM_ARCH_WANT_NEW_MMU_NOTIFIER_APIS
> > + kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn);
> > +#else
> > struct kvm *kvm = mmu_notifier_
On Wed, Mar 31, 2021, Wanpeng Li wrote:
> On Wed, 31 Mar 2021 at 10:32, Sean Christopherson wrote:
> >
> > Use GFP_KERNEL_ACCOUNT for the vCPU allocations, the vCPUs are very much
> > tied to a single task/VM. For x86, the allocations were accounted up
> > until th
support to launch and run an SEV-ES
guest")
Cc: sta...@vger.kernel.org
Cc: Brijesh Singh
Cc: Tom Lendacky
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/sev.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 97d42a007b
S
guest")
Cc: sta...@vger.kernel.org
Cc: Brijesh Singh
Cc: Tom Lendacky
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/sev.c | 29 -
1 file changed, 12 insertions(+), 17 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 6481d71657
Singh
Cc: Tom Lendacky
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/sev.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 83e00e524513..6481d7165701 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm
have an SEV-ES setup at this time.
Sean Christopherson (3):
KVM: SVM: Use online_vcpus, not created_vcpus, to iterate over vCPUs
KVM: SVM: Do not set sev->es_active until KVM_SEV_ES_INIT completes
KVM: SVM: Do not allow SEV/SEV-ES initialization after vCPUs are
created
arch/x86/kvm/
On Tue, Mar 30, 2021, Emanuele Giuseppe Esposito wrote:
> Calling the kvm KVM_GET_[SUPPORTED/EMULATED]_CPUID ioctl requires
> a nent field inside the kvm_cpuid2 struct to be big enough to contain
> all entries that will be set by kvm.
> Therefore if the nent field is too high, kvm will adjust it
Switch to GFP_KERNEL_ACCOUNT for a handful of allocations that are
clearly associated with a single task/VM.
Note, there are a several SEV allocations that aren't accounted, but
those can (hopefully) be fixed by using the local stack for memory.
Signed-off-by: Sean Christopherson
---
arch/x86
architectures lack accounting in general (for KVM).
Fixes: e529ef66e6b5 ("KVM: Move vcpu alloc and init invocation to common code")
Signed-off-by: Sean Christopherson
---
virt/kvm/kvm_main.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/virt/kvm/kvm_main.c
when running
with CONFIG_VMAP_STACK=y.
I have functional code that uses a scratch buffer as a bounce buffer to
cleanly handle vmalloc'd memory in the CCP driver. I'll hopefully get
that posted tomorrow.
Sean Christopherson (2):
KVM: Account memory allocations for 'struct kvm_vcpu'
KVM: x86
Two minor fixes/cleanups for the TDP MMU, found by inspection.
Sean Christopherson (2):
KVM: x86/mmu: Remove spurious clearing of dirty bit from TDP MMU SPTE
KVM: x86/mmu: Simplify code for aging SPTEs in TDP MMU
arch/x86/kvm/mmu/tdp_mmu.c | 6 ++
1 file changed, 2 insertions(+), 4
,%rax
0x00058bf1 <+145>: mov%rax,%r15
thus eliminating several memory accesses, including a locked access.
Cc: Ben Gardon
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/tdp_mmu.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu
the host PFN will be marked dirty, i.e. there is no potential for data
corruption.
Fixes: a6a0b05da9f3 ("kvm: x86/mmu: Support dirty logging for the TDP MMU")
Cc: Ben Gardon
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/tdp_mmu.c | 1 -
1 file changed, 1 deletion(-)
diff --git
On Tue, Mar 30, 2021, Ben Gardon wrote:
> On Thu, Mar 25, 2021 at 7:20 PM Sean Christopherson wrote:
> > Patch 10 moves x86's memslot walkers into common KVM. I chose x86 purely
> > because I could actually test it. All architectures use nearly identical
> > code, so I do
On Tue, Mar 30, 2021, Andy Lutomirski wrote:
>
> > On Mar 30, 2021, at 8:14 AM, Sean Christopherson wrote:
> >
> > On Mon, Mar 29, 2021, Andy Lutomirski wrote:
> >>
> >>>> On Mar 29, 2021, at 7:04 PM, Andi Kleen wrote:
> >>>
>
On Mon, Mar 29, 2021, Andy Lutomirski wrote:
>
> > On Mar 29, 2021, at 7:04 PM, Andi Kleen wrote:
> >
> >
> >>
> >>> No, if these instructions take a #VE then they were executed at CPL=0.
> >>> MONITOR
> >>> and MWAIT will #UD without VM-Exit->#VE. Same for WBINVD, s/#UD/#GP.
> >>
> >>
On Mon, Mar 29, 2021, Kuppuswamy, Sathyanarayanan wrote:
>
>
> On 3/29/21 4:23 PM, Andy Lutomirski wrote:
> >
> > > On Mar 29, 2021, at 4:17 PM, Kuppuswamy Sathyanarayanan
> > > wrote:
> > >
> > > In non-root TDX guest mode, MWAIT, MONITOR and WBINVD instructions
> > > are not supported. So
On Mon, Mar 29, 2021, Andy Lutomirski wrote:
>
> > On Mar 29, 2021, at 4:17 PM, Kuppuswamy Sathyanarayanan
> > wrote:
> >
> > In non-root TDX guest mode, MWAIT, MONITOR and WBINVD instructions
> > are not supported. So handle #VE due to these instructions
> > appropriately.
>
> Is there
explicit IRQ window for tick-based time accounting.
>
> Fixes: 87fa7f3e98a131 ("x86/kvm: Move context tracking where it belongs")
> Cc: Sean Christopherson
> Signed-off-by: Wanpeng Li
> ---
> arch/x86/kvm/svm/svm.c | 3 ++-
> arch/x86/kvm/vmx/vmx.c | 3 ++-
> ar
Let the TDP MMU yield when unmapping a range in response to a MMU
notification, if yielding is allowed by said notification. There is no
reason to disallow yielding in this case, and in theory the range being
invalidated could be quite large.
Cc: Ben Gardon
Signed-off-by: Sean Christopherson
the info.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu.c | 1 -
arch/x86/kvm/mmu/tdp_mmu.c | 2 --
include/trace/events/kvm.h | 24
3 files changed, 27 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2070c7a91fdd
ime is
not a scalability issue, and this is all more than complex enough.
Based heavily on code from Ben Gardon.
Suggested-by: Ben Gardon
Signed-off-by: Sean Christopherson
---
include/linux/kvm_host.h | 8 +-
virt/kvm/kvm_main.c | 174 ++-
2 files changed,
rily taking mmu_lock each time means even a
single spurious sequence can be problematic.
Note, this optimizes only the unpaired callbacks. Optimizing the
.invalidate_range_{start,end}() pairs is more complex and will be done in
a future patch.
Suggested-by: Ben Gardon
Signed-off-by: Sean
Yank out the hva-based MMU notifier APIs now that all architectures that
use the notifiers have moved to the gfn-based APIs.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/arm64/include/asm/kvm_host.h | 1 -
arch/mips/include/asm/kvm_host.h| 1 -
arch
ing into arch code.
Signed-off-by: Sean Christopherson
---
arch/mips/include/asm/kvm_host.h | 1 +
arch/mips/kvm/mmu.c | 97 ++--
2 files changed, 17 insertions(+), 81 deletions(-)
diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_hos
Signed-off-by: Sean Christopherson
---
arch/powerpc/include/asm/kvm_book3s.h | 12 ++--
arch/powerpc/include/asm/kvm_host.h| 1 +
arch/powerpc/include/asm/kvm_ppc.h | 9 ++-
arch/powerpc/kvm/book3s.c | 18 +++--
arch/powerpc/kvm/book3s.h | 10 ++-
arch/powerpc/kvm
Signed-off-by: Sean Christopherson
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/mmu/mmu.c| 127 -
arch/x86/kvm/mmu/tdp_mmu.c| 247 +++---
arch/x86/kvm/mmu/tdp_mmu.h| 14 +-
include/linux/kvm_hos
ndy for debug regardless of architecture.
Remove a completely redundant tracepoint from PPC e500.
Signed-off-by: Sean Christopherson
---
arch/arm64/kvm/mmu.c | 7 +---
arch/arm64/kvm/trace_arm.h | 66
arch/powerpc/kvm/e500_mmu_host.c | 2 -
arch/powe
est memslots.
Signed-off-by: Sean Christopherson
---
arch/arm64/include/asm/kvm_host.h | 1 +
arch/arm64/kvm/mmu.c | 117 --
2 files changed, 33 insertions(+), 85 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h
b/arch/arm64/include/asm/kvm_hos
Move the address space ID check that is performed when iterating over
roots into the macro helpers to consolidate code.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu_internal.h | 7 ++-
arch/x86/kvm/mmu/tdp_mmu.c | 99