userspace.
Cc: Brijesh Singh
Cc: Tom Lendacky
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/cpuid.c | 2 ++
arch/x86/kvm/cpuid.h | 1 +
2 files changed, 3 insertions(+)
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 13036cf0b912..b7618cdd06b5 100644
--- a/arch/x86/kvm
tions")
Cc: Tom Lendacky
Cc: sta...@vger.kernel.org
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/sev.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index c8ffdbc81709..0eeb6e1b803d 100644
--- a/arch/x86/kvm/
t
side of things has already laid claim to 'sev_enabled'.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/sev.c | 11 +++
arch/x86/kvm/svm/svm.c | 15 +--
arch/x86/kvm/svm/svm.h | 2 --
3 files changed, 12 insertions(+), 16 deletions(-)
diff --git a/arch/x8
supported features to userspace.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/x86/include/asm/cpufeature.h | 7 +--
arch/x86/include/asm/cpufeatures.h | 17 +++--
arch/x86/include/asm/disabled-features.h | 3 ++-
arch/x86
features solely on
whether or not the kernel wants to dedicate a word for 'em, and hash out what
to do with KVM at large in the SGX thread.
Sean Christopherson (13):
KVM: SVM: Free sev_asid_bitmap during init if SEV setup fails
KVM: SVM: Zero out the VMCB array used to track SEV ASID asso
On Thu, Jan 07, 2021, Steve Rutherford wrote:
> Supporting merging of consecutive entries (or not) is less important
> to get right since it doesn't change any of the APIs. If someone runs
> into performance issues, they can loop back and fix this then. I'm
> slightly concerned with the behavior fo
On Thu, Jan 07, 2021, Paolo Bonzini wrote:
> On 07/01/21 10:38, Maxim Levitsky wrote:
> > The code to store it on the migration exists, but no code was restoring it.
> >
> > One of the side effects of fixing this is that L1->L2 injected events
> > are no longer lost when migration happens with nes
On Thu, Jan 07, 2021, Ashish Kalra wrote:
> On Thu, Jan 07, 2021 at 09:26:25AM -0800, Sean Christopherson wrote:
> > On Thu, Jan 07, 2021, Ashish Kalra wrote:
> > > Hello Steve,
> > >
> > > On Wed, Jan 06, 2021 at 05:01:33PM -0800, Steve Rutherford wrote:
>
On Thu, Jan 07, 2021, Paolo Bonzini wrote:
> On 07/01/21 18:00, Sean Christopherson wrote:
> > Ugh, I assume this is due to one of the "premature"
> > nested_ops->check_events()
> > calls that are necessitated by the event mess? I'm guessing
>
> Fixes: 14881998566d ("kvm: x86/mmu: Support disabling dirty logging for the tdp MMU")
> Signed-off-by: Ben Gardon
Reviewed-by: Sean Christopherson
> ---
> arch/x86/kvm/mmu/tdp_mmu.c | 104 +
> 1 file changed, 48 insertions(+
On Thu, Jan 07, 2021, Ashish Kalra wrote:
> Hello Steve,
>
> On Wed, Jan 06, 2021 at 05:01:33PM -0800, Steve Rutherford wrote:
> > Avoiding an rbtree for such a small (but unstable) list seems correct.
> >
> > For the unencrypted region list strategy, the only questions that I
> > have are fairly
On Thu, Jan 07, 2021, Maxim Levitsky wrote:
> It is possible to exit the nested guest mode, entered by
> svm_set_nested_state prior to first vm entry to it (e.g due to pending event)
> if the nested run was not pending during the migration.
Ugh, I assume this is due to one of the "premature" neste
On Wed, Jan 06, 2021, Ben Gardon wrote:
> Many TDP MMU functions which need to perform some action on all TDP MMU
> roots hold a reference on that root so that they can safely drop the MMU
> lock in order to yield to other threads. However, when releasing the
> reference on the root, there is a bug
Use my @google.com address in MAINTAINERS; somehow only the .mailmap
entry was added when the original update patch was applied.
Fixes: c2b1209d852f ("MAINTAINERS: Update email address for Sean Christopherson")
Cc: k...@vger.kernel.org
Reported-by: Nathan Chancellor
Signed-of
On Wed, Jan 06, 2021, Maxim Levitsky wrote:
> If migration happens while L2 entry with an injected event to L2 is pending,
> we weren't including the event in the migration state and it would be
> lost leading to L2 hang.
But the injected event should still be in vmcs12 and
KVM_STATE_NESTED_RUN_P
On Wed, Jan 06, 2021, Maxim Levitsky wrote:
> This is VMX version of the same issue as I reproduced on SVM.
>
> Unlike SVM, this version has 2 pending issues to resolve.
>
> 1. This seems to break 'vmx' kvm-unit-test in
> 'error code <-> (!URG || prot_mode) [+]' case.
>
> The test basically trie
On Wed, Jan 06, 2021, Maxim Levitsky wrote:
> This should prevent bad things from happening if the user calls the
> KVM_SET_NESTED_STATE twice.
This doesn't exactly inspire confidence, nor does it provide much help to
readers that don't already know why KVM should "leave nested" before processing
On Wed, Jan 06, 2021, Maxim Levitsky wrote:
> The code to store it on the migration exists, but no code was restoring it.
>
> Signed-off-by: Maxim Levitsky
> ---
> arch/x86/kvm/svm/nested.c | 4
> 1 file changed, 4 insertions(+)
>
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm
On Wed, Jan 06, 2021, Vitaly Kuznetsov wrote:
> Nitesh Narayan Lal writes:
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 3f7c1fc7a3ce..3e17c9ffcad8 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -9023,18 +9023,7 @@ static int vcpu_enter_guest(struct kv
+tglx
On Tue, Jan 05, 2021, Nitesh Narayan Lal wrote:
> This reverts commit d7a08882a0a4b4e176691331ee3f492996579534.
>
> After the introduction of the patch:
>
> 87fa7f3e9: x86/kvm: Move context tracking where it belongs
>
> since we have moved guest_exit_irqoff closer to the VM-Exit, ex
On Mon, Jan 04, 2021, Like Xu wrote:
> When CPUID.01H:EDX.DS[21] is set, the IA32_DS_AREA MSR exists and
> points to the linear address of the first byte of the DS buffer
> management area, which is used to manage the PEBS records.
>
> When guest PEBS is enabled and the value is different from the
On Mon, Jan 04, 2021, Like Xu wrote:
> If IA32_PERF_CAPABILITIES.PEBS_BASELINE [bit 14] is set, the
> IA32_PEBS_ENABLE MSR exists and all architecturally enumerated fixed
> and general purpose counters have corresponding bits in IA32_PEBS_ENABLE
> that enable generation of PEBS records. The general
On Tue, Jan 05, 2021, Paolo Bonzini wrote:
> On 05/01/21 18:49, Ben Gardon wrote:
> > for_each_tdp_mmu_root(kvm, root) {
> > kvm_mmu_get_root(kvm, root);
> >
> > kvm_mmu_put_root(kvm, root);
> > }
> >
> > In these cases the get and put root calls are there to ensure tha
On Tue, Jan 05, 2021, Michael Roth wrote:
> @@ -3703,16 +3688,9 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu,
> if (sev_es_guest(svm->vcpu.kvm)) {
> __svm_sev_es_vcpu_run(svm->vmcb_pa);
> } else {
> - __svm_vcpu_run(svm->vmcb_pa, (unsigne
that invoked it from assembly code.
Cc: Uros Bizjak
Signed-off-by: Sean Christopherson
---
arch/x86/include/asm/kvm_host.h | 25 +
arch/x86/kvm/svm/sev.c | 2 --
arch/x86/kvm/svm/svm.c | 2 --
arch/x86/kvm/vmx/vmx_ops.h | 2 --
arch/x86/kvm/
From: Uros Bizjak
Move the declaration of kvm_spurious_fault() to KVM's "private" x86.h,
it should never be called by anything other than low level KVM code.
Cc: Paolo Bonzini
Cc: Sean Christopherson
Signed-off-by: Uros Bizjak
[sean: rebased to a seri
of optimizing the SVM context switching).
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/sev.c | 3 +-
arch/x86/kvm/svm/svm.c | 16 +--
arch/x86/kvm/svm/svm_ops.h | 59 ++
3 files changed, 62 insertions(+), 16 deletions(-)
create mode
d behavior, this should have no meaningful effects
as Intel PT behavior does not interact with CR4.VMXE.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/vmx/vmx.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/v
f the
__ex()/__kvm_handle_fault_on_reboot() macros, thus helping pave the way
toward dropping them entirely.
Signed-off-by: Sean Christopherson
---
arch/x86/include/asm/virtext.h | 7 ++-
arch/x86/kvm/vmx/vmx.c | 15 +++
2 files changed, 9 insertions(+), 13 deletions(-)
diff --git a/arch/x
Christopherson
Reviewed-and-tested-by: Sean Christopherson
Signed-off-by: Uros Bizjak
[sean: dropped versioning info from changelog]
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/vmx/nested.c | 32 +++-
arch/x86/kvm/vmx/vmenter.S | 2 +-
arch/x86/kvm/vmx/vmx.c | 2
arriers of their own, i.e.
VMXOFF can't get reordered after clearing CR4.VMXE, which is really
what's of interest.
Cc: Randy Dunlap
Signed-off-by: David P. Reed
[sean: rewrote changelog, dropped comment adjustments]
Signed-off-by: Sean Christopherson
---
arch/x86/include/asm/v
from being woken via INIT-SIPI-SIPI in the new kernel.
Fixes: d176720d34c7 ("x86: disable VMX on all CPUs on reboot")
Cc: sta...@vger.kernel.org
Suggested-by: Sean Christopherson
Signed-off-by: David P. Reed
[sean: reworked changelog and further tweaked comment]
Signed-off-by: Sean Chri
pu_vmxoff() inline function")
Reported-by: David P. Reed
Cc: sta...@vger.kernel.org
Signed-off-by: Sean Christopherson
---
arch/x86/include/asm/virtext.h | 17 -
1 file changed, 12 insertions(+), 5 deletions(-)
diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/incl
/lkml.kernel.org/r/20200704203809.76391-1-dpr...@deepplum.com
[2] https://lkml.kernel.org/r/20201029134145.107560-1-ubiz...@gmail.com
[3] https://lkml.kernel.org/r/20201221194800.46962-1-ubiz...@gmail.com
David P. Reed (1):
x86/virt: Mark flags and memory as clobbered by VMXOFF
Sean Christopherson (6):
x
On Wed, Dec 30, 2020, Borislav Petkov wrote:
> On Tue, Dec 22, 2020 at 04:31:55PM -0600, Babu Moger wrote:
> > @@ -2549,7 +2559,10 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> > !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
> >
On Mon, Dec 28, 2020, Sean Christopherson wrote:
> On Mon, Dec 21, 2020, Uros Bizjak wrote:
> > Merge __kvm_handle_fault_on_reboot with its sole user
> > and move the definition of __ex to a common include to be
> > shared between VMX and SVM.
> >
> > v
.
The v2, v3, ... vN patch history should go below the '---' so that it doesn't
need to be manually stripped when applying.
> Cc: Paolo Bonzini
> Cc: Sean Christopherson
> Signed-off-by: Uros Bizjak
> Reviewed-by: Krish Sadhukhan
> ---
vN stuff
On Mon, Dec 28, 2020, Borislav Petkov wrote:
> On Thu, Dec 17, 2020 at 09:19:13AM -0800, Sean Christopherson wrote:
> > On Wed, Dec 16, 2020, Peter Gonda wrote:
> > >
> > > The IN and OUT immediate instructions only use an 8-bit immediate. The
> > > curre
On Mon, Dec 28, 2020, Zhimin Feng wrote:
> The main motivation for this patch is to improve the performance of VM.
Do you have numbers that show when this improves performance, and by how much?
This adds hundreds of cycles of overhead (VMWRITEs, WRMSRs, RDMSRs, etc...) to
_every_ VM-Exit roundtri
On Fri, Dec 25, 2020, Borislav Petkov wrote:
> On Fri, Dec 25, 2020 at 06:50:33PM +0800, kernel test robot wrote:
> > If you fix the issue, kindly add following tag as appropriate
> > Reported-by: kernel test robot
> >
> > All warnings (new ones prefixed by >>):
> >
> > >> arch/x86/kernel/sev-es
On Mon, Dec 28, 2020, Borislav Petkov wrote:
> On Mon, Dec 28, 2020 at 09:59:48AM -0800, Sean Christopherson wrote:
> > Obvious and superfluous for people that are intimately familiar with the
> > code,
> > but explicit call stacks are extremely helpful when (re)learning
On Wed, Dec 23, 2020, Borislav Petkov wrote:
> From: Borislav Petkov
>
> Now that the different instruction-inspecting functions return a value,
> test that and return early from callers if an error has been encountered.
>
> While at it, do not call insn_get_modrm() when calling
> insn_get_displacem
On Tue, Dec 22, 2020, Borislav Petkov wrote:
> On Tue, Dec 22, 2020 at 10:59:22AM -0800, Sean Christopherson wrote:
> > On Tue, Dec 22, 2020, Borislav Petkov wrote:
> > > +Backtraces help document the call chain leading to a problem. However,
> > > +not all backtrac
On Wed, Dec 23, 2020, Borislav Petkov wrote:
> From: Borislav Petkov
>
> Rename insn_decode() to insn_decode_regs() to denote that it receives
> regs as param and free the name for a more generic version of the
> function.
Can we add a preposition in there, e.g. insn_decode_from_regs() or
insn_d
On Tue, Dec 22, 2020, Paolo Bonzini wrote:
> On 22/12/20 19:31, David Laight wrote:
> > > /*
> > >* Use 2ULL to incorporate the necessary +1 in the shift; adding +1 in
> > >* the shift count will overflow SHL's max shift of 63 if s=0 and e=63.
> > >*/
> > A comment of the desired outp
On Tue, Dec 22, 2020, Borislav Petkov wrote:
> Ok, here's the next one which I think, is also, not really controversial.
Heh, are you trying to jinx yourself?
> diff --git a/Documentation/process/submitting-patches.rst b/Documentation/process/submitting-patches.rst
> index 5ba54120bef7..0ffb21
On Tue, Dec 22, 2020, Paolo Bonzini wrote:
> Since we know that e >= s, we can reassociate the left shift,
> changing the shifted number from 1 to 2 in exchange for
> decreasing the right hand side by 1.
I assume the edge case is that this ends up as `(1ULL << 64) - 1` and overflows
SHL's max shif
On Tue, Dec 22, 2020, Babu Moger wrote:
>
> On 12/9/20 5:11 PM, Jim Mattson wrote:
> > On Wed, Dec 9, 2020 at 2:39 PM Babu Moger wrote:
> >>
> >> On 12/7/20 5:22 PM, Jim Mattson wrote:
> >>> On Mon, Dec 7, 2020 at 2:38 PM Babu Moger wrote:
> diff --git a/arch/x86/include/asm/cpufeatures.h
On Mon, Dec 21, 2020, Krish Sadhukhan wrote:
>
> On 12/21/20 11:48 AM, Uros Bizjak wrote:
> > diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> > index c5ee0f5ce0f1..5b16d2b5c3bc 100644
> > --- a/arch/x86/kvm/x86.h
> > +++ b/arch/x86/kvm/x86.h
> > @@ -8,6 +8,30 @@
> > #include "kvm_cache_re
On Mon, Dec 21, 2020, Uros Bizjak wrote:
> On Mon, Dec 21, 2020 at 7:19 PM Sean Christopherson wrote:
> >
> > On Sun, Dec 20, 2020, Uros Bizjak wrote:
> > > Merge __kvm_handle_fault_on_reboot with its sole user
> >
> > There's also a comment in vm
On Mon, Dec 21, 2020, Paolo Bonzini wrote:
> On 18/12/20 10:10, Vitaly Kuznetsov wrote:
> > > - int root = vcpu->arch.mmu->shadow_root_level;
> > > - int leaf;
> > > - int level;
> > > + int root, leaf, level;
> > > bool reserved = false;
> > Personal taste: I would've renamed 'root' to '
On Sun, Dec 20, 2020, Uros Bizjak wrote:
> Merge __kvm_handle_fault_on_reboot with its sole user
There's also a comment in vmx.c above kvm_cpu_vmxoff() that should be updated.
Alternatively, and probably preferable for me, what about keeping the long
__kvm_handle_fault_on_reboot() name for the mac
On Fri, Dec 18, 2020, Nathan Chancellor wrote:
> When using LLVM's integrated assembler (LLVM_IAS=1) while building
> x86_64_defconfig + CONFIG_KVM=y + CONFIG_KVM_AMD=y, the following build
> error occurs:
>
> $ make LLVM=1 LLVM_IAS=1 arch/x86/kvm/svm/sev.o
> arch/x86/kvm/svm/sev.c:2004:15: erro
On Fri, Dec 18, 2020, Vitaly Kuznetsov wrote:
> Sean Christopherson writes:
>
> > Return -1 from the get_walk() helpers if the shadow walk doesn't fill at
> > least one spte, which can theoretically happen if the walk hits a
> > not-present PTPDR. Returning the r
+Michael, as this will conflict with an in-progress series to use VMSAVE in the
common SVM run path.
https://lkml.kernel.org/r/20201214174127.1398114-1-michael.r...@amd.com
On Mon, Dec 21, 2020, Sean Christopherson wrote:
> On Fri, Dec 18, 2020, Nathan Chancellor wrote:
> > When usi
ot;)
Cc: Ben Gardon
Cc: sta...@vger.kernel.org
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu.c | 7 ++-
arch/x86/kvm/mmu/tdp_mmu.c | 2 +-
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7a6ae9e90bd7..a48cd12c01
PTEs.
This eliminates an extra check-and-branch in a relatively hot loop.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu.c | 20 +---
1 file changed, 13 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4798a4472066..769855f5f
happens to be on the stack.
Opportunistically nuke a few extra newlines.
Fixes: 95fb5b0258b7 ("kvm: x86/mmu: Support MMIO in the TDP MMU")
Reported-by: Richard Herbert
Cc: Ben Gardon
Cc: sta...@vger.kernel.org
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu.c | 15 +++
explicitly initialized; bumping its size is nothing more than
a superficial adjustment to the stack frame.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu.c | 15 +++
arch/x86/kvm/mmu/tdp_mmu.c | 2 +-
2 files changed, 8 insertions(+), 9 deletions(-)
diff --git a/arch/x86
Two fixes for bugs that were introduced along with the TDP MMU (though I
strongly suspect only the one reported by Richard, fixed in patch 2/4, is
hittable in practice). Two additional cleanup on top to try and make the
code a bit more readable and shave a few cycles.
Sean Christopherson (4
On Wed, Dec 16, 2020, Peter Gonda wrote:
>
> The IN and OUT immediate instructions only use an 8-bit immediate. The
> current VC handler uses the entire 32-bit immediate value. These
> instructions only set the first bytes.
>
> Tested with a loop back port with "outb %0,$0xe0". Before the port se
On Tue, Dec 15, 2020, Michael Roth wrote:
> Hi Sean,
>
> Sorry to reply out-of-thread, our mail server is having issues with
> certain email addresses at the moment so I only see your message via
> the archives atm. But regarding:
>
> >>> I think we can defer this until we're actually planning on
On Tue, Dec 15, 2020, Jarkko Sakkinen wrote:
> On Mon, Dec 14, 2020 at 11:01:32AM -0800, Sean Christopherson wrote:
> > Haitao reported the bug, and for all intents and purposes provided the fix.
> > I
> > just did the analysis to verify that there was a legiti
est.
>
> Cc: sta...@vger.kernel.org
> Fixes: a4ee1ca4a36e ("KVM: MMU: delay flush all tlbs on sync_page path")
> Signed-off-by: Lai Jiangshan
> ---
> Changed from V1:
> Update the patch and the changelog as Sean Christopherson suggested.
>
> virt/kvm/kvm_ma
On Mon, Dec 14, 2020, Michael Roth wrote:
> On Mon, Dec 14, 2020 at 11:38:23AM -0800, Sean Christopherson wrote:
> > > + asm volatile(__ex("vmsave")
> > > + : : "a" (page_to_pfn(sd->save_area) << PAGE_SHIFT)
On Mon, Dec 14, 2020, Tom Lendacky wrote:
> On 12/14/20 9:45 AM, Paolo Bonzini wrote:
> > On 10/12/20 18:09, Tom Lendacky wrote:
> >> @@ -3184,6 +3186,8 @@ static int svm_invoke_exit_handler(struct vcpu_svm
> >> *svm, u64 exit_code)
> >> return halt_interception(svm);
> >> else if (
+Andy, who provided a lot of feedback on v1.
On Mon, Dec 14, 2020, Michael Roth wrote:
Cc: Andy Lutomirski
> Suggested-by: Tom Lendacky
> Signed-off-by: Michael Roth
> ---
> v2:
> * rebase on latest kvm/next
> * move VMLOAD to just after vmexit so we can use it to handle all FS/GS
> host st
reclaimer")
> Cc: Borislav Petkov
> Cc: Dave Hansen
> Reported-by: Sean Christopherson
Haitao reported the bug, and for all intents and purposes provided the fix. I
just did the analysis to verify that there was a legitimate bug and that the
synchronization in sgx_encl_release()
On Sun, Dec 13, 2020, Lai Jiangshan wrote:
> From: Lai Jiangshan
>
> In kvm_mmu_notifier_invalidate_range_start(), tlbs_dirty is used as:
> need_tlb_flush |= kvm->tlbs_dirty;
> with need_tlb_flush's type being int and tlbs_dirty's type being long.
>
> It means that tlbs_dirty is always use
ta...@nongnu.org
I assume you want sta...@vger.kernel.org?
> [Reorganize macros so that everything is computed from the bit ranges. -
> Paolo]
> Signed-off-by: Paolo Bonzini
> ---
> Compared to v2 by Maciej, I chose to keep GEN_MASK's argument
> calculated,
Bo.
Michael, please reply to all so that everyone can read along and so that the
conversation gets recorded in the various mailing list archives.
If you are replying to all, then I think something funky is going on with AMD's
mail servers, as I'm not getting your responses (I double checked SPAM), nor ar
Shortlog should use "KVM: x86: ...", and probably s/for/in. It currently reads
like the kernel is exposing the flag to KVM for KVM's supported CPUID, e.g.:
KVM: x86: Expose AVX512_FP16 in supported CPUID
On Mon, Dec 07, 2020, Kyung Min Park wrote:
> From: Cathy Zhang
>
> AVX512_FP16 is suppo
On Sun, Dec 06, 2020, Paolo Bonzini wrote:
> On 05/12/20 01:48, Maciej S. Szmigiero wrote:
> > From: "Maciej S. Szmigiero"
> >
> > Commit cae7ed3c2cb0 ("KVM: x86: Refactor the MMIO SPTE generation handling")
> > cleaned up the computation of MMIO generation SPTE masks, however it
> > introduced a
On Mon, Dec 07, 2020, Babu Moger wrote:
> Newer AMD processors have a feature to virtualize the use of the
> SPEC_CTRL MSR. When supported, the SPEC_CTRL MSR is automatically
> virtualized and no longer requires hypervisor intervention.
Hrm, is MSR_AMD64_VIRT_SPEC_CTRL only for SSBD? Should that
On Sun, Dec 06, 2020, Paolo Bonzini wrote:
> On 03/12/20 01:34, Sean Christopherson wrote:
> > On Tue, Dec 01, 2020, Ashish Kalra wrote:
> > > From: Brijesh Singh
> > >
> > > KVM hypercall framework relies on alternative framework to patch the
> >
On Fri, Nov 20, 2020, Rick Edgecombe wrote:
> +struct perm_allocation {
> + struct page **pages;
> + virtual_perm cur_perm;
> + virtual_perm orig_perm;
> + struct vm_struct *area;
> + unsigned long offset;
> + unsigned long size;
> + void *writable;
> +};
> +
> +/*
> + *
On Fri, Dec 4, 2020 at 10:07 AM Ashish Kalra wrote:
>
> Yes i will post a fresh version of the live migration patches.
>
> Also, can you please check your email settings; we are only able to see your
> response on the mailing list, but we are not getting your direct responses.
Hrm, as in you do
On Thu, Dec 03, 2020, David Woodhouse wrote:
> On Wed, 2020-12-02 at 12:32 -0800, Ankur Arora wrote:
> > > On IRC, Paolo told me that permanent pinning causes problems for memory
> > > hotplug, and pointed me at the trick we do with an MMU notifier and
> > > kvm_vcpu_reload_apic_access_page().
> >
On Fri, Dec 04, 2020, Ashish Kalra wrote:
> An immediate response, actually the SEV live migration patches are preferred
> over the Page encryption bitmap patches, in other words, if SEV live
> migration patches are applied then we don't need the Page encryption bitmap
> patches and we prefer the l
On Thu, Dec 03, 2020, Paolo Bonzini wrote:
> Until commit e7c587da1252 ("x86/speculation: Use synthetic bits for
> IBRS/IBPB/STIBP",
> 2018-05-17), KVM was testing both Intel and AMD CPUID bits before allowing the
> guest to write MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD. Testing only Intel
> bi
On Fri, Dec 04, 2020, Paolo Bonzini wrote:
> I applied patches -13, this one a bit changed as follows.
Can we hold up on applying this series? Unless I'm misunderstanding things,
much of what you're applying is superseded by a much more recent series to add
only the page encryption bitmap[*]. I
mmu: Support zapping SPTEs in the TDP MMU")
> Signed-off-by: Rick Edgecombe
Dang, in hindsight it'd be nice if KVM_CAP_SMALLER_MAXPHYADDR allowed explicitly
setting the max MAXPHYADDR for an entire VM instead of being a simple toggle.
E.g. TDX and SEV-ES likely could also make use
On Tue, Dec 01, 2020, Ashish Kalra wrote:
> From: Brijesh Singh
>
> KVM hypercall framework relies on alternative framework to patch the
> VMCALL -> VMMCALL on AMD platform. If a hypercall is made before
> apply_alternative() is called then it defaults to VMCALL. The approach
> works fine on non
On Thu, Nov 26, 2020, Peng Hao (Richard) wrote:
> The return value of sev_asid_new is assigned to the variable asid, which
> should be returned directly if the asid is an error code.
>
> Fixes: 1654efcbc431 ("KVM: SVM: Add KVM_SEV_INIT command")
> Signed-off-by: Peng Hao
It's probably worth noting in t
On Mon, Nov 30, 2020, Paolo Bonzini wrote:
> On 16/09/20 00:44, Sean Christopherson wrote:
> > > KVM doesn't have control of them. They are part of the guest's encrypted
> > > state and that is what the guest uses. KVM can't alter the value that the
> >
+Isaku and Xiaoyao
On Mon, Nov 30, 2020, Paolo Bonzini wrote:
> On 30/11/20 19:14, Sean Christopherson wrote:
> > > > TDX also selectively blocks/skips portions of other ioctl()s so that the
> > > > TDX code itself can yell loudly if e.g. .get_cpl() is i
On Thu, Nov 26, 2020, Borislav Petkov wrote:
> On Thu, Nov 26, 2020 at 12:18:12AM +0000, Sean Christopherson wrote:
> > The SEAM module needs to be loaded during early boot, it can't be
> > deferred to a module, at least not without a lot more blood, sweat,
> > an
On Mon, Nov 30, 2020, Tom Lendacky wrote:
> On 11/30/20 9:31 AM, Paolo Bonzini wrote:
> > On 16/09/20 02:19, Sean Christopherson wrote:
> >>
> >> TDX also selectively blocks/skips portions of other ioctl()s so that the
> >> TDX code itself can yell loudly if e.
On Mon, Nov 30, 2020, Paolo Bonzini wrote:
> On 16/09/20 02:19, Sean Christopherson wrote:
> >
> > TDX also selectively blocks/skips portions of other ioctl()s so that the
> > TDX code itself can yell loudly if e.g. .get_cpl() is invoked. The event
> > injection res
On Sat, Nov 28, 2020, Lai Jiangshan wrote:
> On Sat, Nov 28, 2020 at 12:48 AM Paolo Bonzini wrote:
> >
> > On 26/11/20 01:05, Sean Christopherson wrote:
> > > On Fri, Nov 20, 2020, Lai Jiangshan wrote:
> > >> From: Lai Jiangshan
> > >>
>
The following commit has been merged into the x86/cleanups branch of tip:
Commit-ID: 8539d3f06710a9e91b9968fa736549d7c6b44206
Gitweb:
https://git.kernel.org/tip/8539d3f06710a9e91b9968fa736549d7c6b44206
Author:Sean Christopherson
AuthorDate:Tue, 27 Oct 2020 14:45:32 -07:00
On Wed, Nov 25, 2020, Borislav Petkov wrote:
> On Mon, Nov 16, 2020 at 10:25:48AM -0800, isaku.yamah...@intel.com wrote:
> > From: Zhang Chen
> >
> > Move get_builtin_firmware() to common.c so that it can be used to get
> > non-ucode firmware, e.g. Intel's SEAM modules, even if MICROCODE=n.
>
>
On Fri, Nov 20, 2020, Lai Jiangshan wrote:
> From: Lai Jiangshan
>
> Commit 41074d07c78b ("KVM: MMU: Fix inherited permissions for emulated
> guest pte updates") said role.access is common access permissions for
> all ptes in this shadow page, which is the inherited permissions from
> the parent
On Wed, Nov 25, 2020, Peng Hao (Richard) wrote:
> If the ldr value is read out as zero, avic_ldr_write is not called to update
> the virtual register, but the variable ldr_reg is updated.
Is there a failure associated with this? And/or can you elaborate on why
skipping the svm->ldr_reg is correct?
On Tue, Nov 24, 2020, Vipin Sharma wrote:
> On Tue, Nov 24, 2020 at 09:27:25PM +0000, Sean Christopherson wrote:
> > Is a root level stat file needed? Can't the infrastructure do .max -
> > .current
> > on the root cgroup to calculate the number of available ids
On Tue, Nov 24, 2020, Vipin Sharma wrote:
> On Tue, Nov 24, 2020 at 12:18:45PM -0800, David Rientjes wrote:
> > On Tue, 24 Nov 2020, Vipin Sharma wrote:
> >
> > > > > Looping Janosch and Christian back into the thread.
> > > > >
> > > > >
On Fri, Nov 13, 2020, David Rientjes wrote:
>
> On Mon, 2 Nov 2020, Sean Christopherson
On Mon, Nov 23, 2020, Tom Lendacky wrote:
> On 11/17/20 11:07 AM, Tom Lendacky wrote:
> > From: Tom Lendacky
> >
> > This patch series provides support for running SEV-ES guests under KVM.
>
> Any comments on this series?
I'm planning on doing a thorough review, but it'll probably take me a few
From: Sean Christopherson
Update my email address to one provided by my new benefactor.
Cc: Thomas Gleixner
Cc: Borislav Petkov
Cc: Jarkko Sakkinen
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Vitaly Kuznetsov
Cc: Wanpeng Li
Cc: Jim Mattson
Cc: Joerg Roedel
Cc: k...@vger.kernel.org
Signed
The following commit has been merged into the x86/sgx branch of tip:
Commit-ID: 84664369520170f48546c55cbc1f3fbde9b1e140
Gitweb:
https://git.kernel.org/tip/84664369520170f48546c55cbc1f3fbde9b1e140
Author:Sean Christopherson
AuthorDate:Fri, 13 Nov 2020 00:01:30 +02:00