handled by wrappers, split_lock_set_guest() and
split_lock_restore_host(), that will be used by KVM when virtualizing
split lock detection for guests in the future.
Signed-off-by: Xiaoyao Li
---
arch/x86/include/asm/cpu.h | 33 +
arch/x86/kernel/cpu/inte
://lkml.kernel.org/r/20200315050517.127446-1-xiaoyao...@intel.com
- Use X86_FEATURE_SPLIT_LOCK_DETECT flag in kvm to ensure split lock
detection is really supported.
- Add and export SLD-related helper functions in the KVM patches that
use them.
Xiaoyao Li (8):
x86/split_lock: Rename TIF_SLD
or
sld_fatal when handle_guest_split_lock() is called.
Signed-off-by: Xiaoyao Li
---
The alternative would be to remove the "SLD enabled" check from KVM so
that a truly unexpected/bogus #AC would generate a warning. It's not clear
whether or not calling handle_guest_split_lock() iff SLD
Unconditionally allow the guest to read and zero-write MSR TEST_CTRL.
This matches the fact that most Intel CPUs support MSR TEST_CTRL, and
it also reduces the effort needed to handle wrmsr/rdmsr when split lock
detection is exposed to the guest in a future patch.
Signed-off-by: Xiaoyao Li
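A minimal model of the policy described above (this is not KVM's actual code: the MSR is a plain field and "inject #GP" is modeled as a boolean failure return):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch: reads of the guest's TEST_CTRL always succeed,
 * and only a zero write is accepted; a non-zero write is rejected
 * (where KVM would inject #GP into the guest). */
struct vcpu {
    uint64_t msr_test_ctrl;
};

static bool test_ctrl_rdmsr(const struct vcpu *vcpu, uint64_t *data)
{
    *data = vcpu->msr_test_ctrl;    /* unconditionally readable */
    return true;
}

static bool test_ctrl_wrmsr(struct vcpu *vcpu, uint64_t data)
{
    if (data != 0)
        return false;               /* non-zero write -> #GP */
    vcpu->msr_test_ctrl = 0;
    return true;
}
```

This is only the "read and zero-write" baseline; exposing split lock detection to the guest would later allow non-zero values for the detect bit.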
-by: Thomas Gleixner
Signed-off-by: Xiaoyao Li
---
arch/x86/include/asm/thread_info.h | 6 +++---
arch/x86/kernel/cpu/intel.c| 6 +++---
arch/x86/kernel/process.c | 2 +-
3 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/arch/x86/include/asm/thread_info.h
b/arch/x86/include
-by: Xiaoyao Li
---
Documentation/virt/kvm/cpuid.rst | 29
arch/x86/include/uapi/asm/kvm_para.h | 8 +---
2 files changed, 26 insertions(+), 11 deletions(-)
diff --git a/Documentation/virt/kvm/cpuid.rst b/Documentation/virt/kvm/cpuid.rst
index 01b081f6e7ea
On 5/9/2020 7:37 AM, Sean Christopherson wrote:
Restore a guest CPUID update that was unintentional collateral damage
when the per-vCPU guest_xstate_size field was removed.
It's really unintentional. None of us noticed it. :(
It's good that you caught it!
Cc: Xiaoyao Li
Fixes: d87277414b851
On 4/29/2020 1:46 PM, Li RongQing wrote:
Guest kernel reports a fixed CPU frequency in /proc/cpuinfo, which is
confusing to users when turbo is enabled. aperf/mperf can be used to
show the current CPU frequency after commit 7d5905dc14a
("x86 / CPU: Always show current CPU frequency in /proc/cpuinfo"),
so we
vcpu->arch.guest_xstate_size lost its only user since commit df1daba7d1cb
("KVM: x86: support XSAVES usage in the host"), so clean it up.
Signed-off-by: Xiaoyao Li
---
arch/x86/include/asm/kvm_host.h | 1 -
arch/x86/kvm/cpuid.c| 8 ++--
arch/x86/kvm/x86.c
On 10/21/2019 9:09 PM, Paolo Bonzini wrote:
On 17/10/19 18:05, Sean Christopherson wrote:
On Wed, Oct 16, 2019 at 11:41:05AM +0200, Paolo Bonzini wrote:
On 16/10/19 09:48, Xiaoyao Li wrote:
BTW, could you have a look at the series I sent yesterday to refactor
the vcpu creation flow, which
There are no functional changes, just some cleanup and renaming to increase
readability.
Patch 1 is newly added from v2.
Patches 2 and 3 are separated out from Patch 4.
Xiaoyao Li (4):
KVM: VMX: Write VPID to vmcs when creating vcpu
KVM: VMX: Remove vmx->hv_deadline_tsc initialization f
Move the code that writes vmx->vpid to the vmcs from vmx_vcpu_reset() to
vmx_vcpu_setup(), because vmx->vpid is allocated when the vcpu is created
and never changes, so we don't need to update vmcs.vpid when resetting the
vcpu.
Signed-off-by: Xiaoyao Li
---
arch/x86/kvm/vmx/vmx.c | 6 +++---
Rename {vmx,nested_vmx}_vcpu_setup() to match what they really do.
Signed-off-by: Xiaoyao Li
---
Changes in v3:
- Move vmcs-unrelated changes into 2 separate patches.
- Refine the function name.
---
arch/x86/kvm/vmx/nested.c | 2 +-
arch/x86/kvm/vmx/nested.h | 2 +-
arch/x86/kvm/vmx/vmx.c
Move the initialization of vmx->guest_msrs[] from vmx_vcpu_setup() to
vmx_create_vcpu(), and put it right after its allocation.
This is also preparation for the next patch.
Signed-off-by: Xiaoyao Li
---
arch/x86/kvm/vmx/vmx.c | 34 --
1 file changed,
... It can be removed here because the same code is called later from
vmx_vcpu_reset(), in the following flow:
kvm_arch_vcpu_setup()
-> kvm_vcpu_reset()
-> vmx_vcpu_reset()
Signed-off-by: Xiaoyao Li
---
arch/x86/kvm/vmx/vmx.c | 1 -
1 file changed, 1 deletion(-)
diff --git
On 10/19/2019 1:27 AM, Sean Christopherson wrote:
On Fri, Oct 18, 2019 at 05:37:23PM +0800, Xiaoyao Li wrote:
Move the MSR bitmap capability check out of vmx_disable_intercept_for_msr()
and vmx_enable_intercept_for_msr(), so that we can do the check far
earlier, before we really want to touch
On 10/19/2019 1:09 AM, Sean Christopherson wrote:
On Fri, Oct 18, 2019 at 05:37:22PM +0800, Xiaoyao Li wrote:
Rename {vmx,nested_vmx}_vcpu_setup() to {vmx,nested_vmx}_vmcs_setup(),
to match what they really do.
Also move the vmcs-unrelated code to vmx_vcpu_create().
Do this in a separate
On 10/19/2019 12:57 AM, Sean Christopherson wrote:
On Fri, Oct 18, 2019 at 05:37:21PM +0800, Xiaoyao Li wrote:
Move vmcs-related code from vmx_vcpu_reset() into a new function,
vmx_vmcs_reset(), so that it is clearer which data is related to the
vmcs and can be held in it.
Suggested
On 10/18/2019 5:02 PM, Thomas Gleixner wrote:
On Fri, 18 Oct 2019, Xiaoyao Li wrote:
On 10/17/2019 8:29 PM, Thomas Gleixner wrote:
The more I look at this trainwreck, the less interested I am in merging any
of this at all.
The fact that it took Intel more than a year to figure out
Move vmcs-related code from vmx_vcpu_reset() into a new function,
vmx_vmcs_reset(), so that it is clearer which data is related to the
vmcs and can be held in it.
Suggested-by: Krish Sadhukhan
Signed-off-by: Xiaoyao Li
---
arch/x86/kvm/vmx/vmx.c | 65
-by: Xiaoyao Li
---
Changes in v2:
- Remove the check of cpu_has_vmx_msr_bitmap() from
vmx_{disable,enable}_intercept_for_msr (Krish)
---
arch/x86/kvm/vmx/vmx.c | 65 +-
1 file changed, 33 insertions(+), 32 deletions(-)
diff --git a/arch/x86/kvm/vmx/vmx.c
low:
kvm_arch_vcpu_setup()
-> kvm_vcpu_reset()
-> vmx_vcpu_reset()
Signed-off-by: Xiaoyao Li
---
Changes in v2:
- move out the vmcs unrelated codes
---
arch/x86/kvm/vmx/nested.c | 2 +-
arch/x86/kvm/vmx/nested.h | 2 +-
arch/x86/kvm/vmx/vmx.c| 45 +--
Remove the vcpu creation refactor and FPU allocation cleanup from v1, since
I need more time to investigate Sean's suggestion.
This series adds one patch to move the vmcs reset out of vcpu_reset, based on
Krish's suggestion.
Xiaoyao Li (3):
KVM: VMX: Move vmcs related resetting out of vmx_vcpu_reset
On 10/17/2019 8:29 PM, Thomas Gleixner wrote:
The more I look at this trainwreck, the less interested I am in merging any
of this at all.
The fact that it took Intel more than a year to figure out that the MSR is
per core and not per thread is yet another proof that this industry just
works by
On 10/17/2019 2:09 AM, Krish Sadhukhan wrote:
On 10/15/19 6:27 PM, Xiaoyao Li wrote:
On 10/16/2019 6:05 AM, Krish Sadhukhan wrote:
On 10/15/2019 09:40 AM, Xiaoyao Li wrote:
Rename {vmx,nested_vmx}_vcpu_setup to {vmx,nested_vmx}_vmcs_setup,
to match what they really do.
No functional
On 10/17/2019 1:42 AM, Sean Christopherson wrote:
On Wed, Oct 16, 2019 at 09:23:37AM -0700, Sean Christopherson wrote:
On Wed, Oct 16, 2019 at 05:43:53PM +0200, Paolo Bonzini wrote:
On 16/10/19 17:41, Sean Christopherson wrote:
On Wed, Oct 16, 2019 at 04:08:14PM +0200, Paolo Bonzini wrote:
On 10/16/2019 11:37 PM, Paolo Bonzini wrote:
On 16/10/19 16:43, Thomas Gleixner wrote:
N | #AC       | #AC enabled | SMT | Ctrl    | Guest | Action
R | available | on host     |     | exposed | #AC   |
--|-----------|-------------|-----|---------|-------|-------
| |
On 10/16/2019 7:58 PM, Paolo Bonzini wrote:
On 16/10/19 13:49, Thomas Gleixner wrote:
On Wed, 16 Oct 2019, Paolo Bonzini wrote:
Yes it does. But Sean's proposal, as I understand it, leads to the
guest receiving #AC when it wasn't expecting one. So for an old guest,
as soon as the guest
On 10/16/2019 7:26 PM, Paolo Bonzini wrote:
On 16/10/19 13:23, Xiaoyao Li wrote:
KVM always traps #AC, and only advertises split-lock detection to guest
when the global variable split_lock_detection_enabled in host is true.
- If guest enables #AC (CPL3 alignment check or split-lock detection
On 10/16/2019 6:16 PM, Paolo Bonzini wrote:
On 16/10/19 11:47, Thomas Gleixner wrote:
On Wed, 16 Oct 2019, Paolo Bonzini wrote:
Just never advertise split-lock
detection to guests. If the host has enabled split-lock detection,
trap #AC and forward it to the host handler---which would disable
On 10/16/2019 3:35 PM, Paolo Bonzini wrote:
On 16/10/19 03:52, Xiaoyao Li wrote:
user_fpu could be made percpu too... That would save a bit of memory
for each vCPU. I'm holding on Xiaoyao's patch because a lot of the code
he's touching would go away then.
Sorry, I don't get clear your
On 9/26/2019 2:09 AM, Sean Christopherson wrote:
On Wed, Jun 26, 2019 at 11:47:40PM +0200, Thomas Gleixner wrote:
So only one of the CPUs will win the cmpxchg race, set the variable to 1 and
warn; the other, and any subsequent #AC on any other CPU, will not warn
either. So you don't need
On 10/15/2019 5:28 PM, Paolo Bonzini wrote:
On 14/10/19 18:58, Vitaly Kuznetsov wrote:
Xiaoyao Li writes:
The code to create vcpu.arch.{user,guest}_fpu is duplicated in VMX
and SVM. Make it a set of common functions.
No functional change intended.
Would it rather make sense to move this code
On 10/16/2019 8:40 AM, Krish Sadhukhan wrote:
On 10/15/2019 09:40 AM, Xiaoyao Li wrote:
Move the MSR bitmap setup code to vmx_vmcs_setup() and set it up only
when the hardware has msr_bitmap capability.
Signed-off-by: Xiaoyao Li
---
arch/x86/kvm/vmx/vmx.c | 39
On 10/16/2019 6:05 AM, Krish Sadhukhan wrote:
On 10/15/2019 09:40 AM, Xiaoyao Li wrote:
Rename {vmx,nested_vmx}_vcpu_setup to {vmx,nested_vmx}_vmcs_setup,
to match what they really do.
No functional change.
Signed-off-by: Xiaoyao Li
---
arch/x86/kvm/vmx/nested.c | 2 +-
arch/x86/kvm/vmx
The code to create vcpu.arch.{user,guest}_fpu is duplicated in VMX
and SVM. Make it a set of common functions and delay it until a little
after .create_vcpu.
No functional change intended.
Signed-off-by: Xiaoyao Li
---
arch/x86/kvm/svm.c | 18 --
arch/x86/kvm/vmx/vmx.c | 18
Rename {vmx,nested_vmx}_vcpu_setup to {vmx,nested_vmx}_vmcs_setup,
to match what they really do.
No functional change.
Signed-off-by: Xiaoyao Li
---
arch/x86/kvm/vmx/nested.c | 2 +-
arch/x86/kvm/vmx/nested.h | 2 +-
arch/x86/kvm/vmx/vmx.c| 9 +++--
3 files changed, 5 insertions(+), 8
FPU allocation to generic x86 code (Patch 4).
This series intends no functional change. I just tested it with
kvm_unit_tests for vmx since I have no AMD machine at hand.
Xiaoyao Li (4):
KVM: VMX: rename {vmx,nested_vmx}_vcpu_setup functions
KVM: VMX: Setup MSR bitmap only when has
Move the MSR bitmap setup code to vmx_vmcs_setup() and set it up only
when the hardware has msr_bitmap capability.
Signed-off-by: Xiaoyao Li
---
arch/x86/kvm/vmx/vmx.c | 39 ---
1 file changed, 20 insertions(+), 19 deletions(-)
diff --git a/arch/x86/kvm/vmx
for vcpu's data structure
allocation and then calls vcpu_init related functions to initialize the
vcpu.
Signed-off-by: Xiaoyao Li
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/svm.c | 63 +-
arch/x86/kvm/vmx/vmx.c | 109
On 10/15/2019 2:37 AM, Sean Christopherson wrote:
On Mon, Oct 14, 2019 at 06:58:49PM +0200, Vitaly Kuznetsov wrote:
Xiaoyao Li writes:
The code to create vcpu.arch.{user,guest}_fpu is duplicated in VMX
and SVM. Make it a set of common functions.
No functional change intended.
Would it rather
The code to create vcpu.arch.{user,guest}_fpu is duplicated in VMX
and SVM. Make it a set of common functions.
No functional change intended.
Signed-off-by: Xiaoyao Li
---
arch/x86/kvm/svm.c | 20 +++-
arch/x86/kvm/vmx/vmx.c | 20 +++-
arch/x86/kvm/x86.h
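A minimal sketch of what such a shared helper might look like (the names and types here are illustrative stand-ins, not the actual KVM structures):

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-ins for the real kvm structures. */
struct fpu { unsigned char state[64]; };
struct kvm_vcpu_arch { struct fpu *user_fpu, *guest_fpu; };

/* One shared helper instead of duplicated allocation code in the
 * VMX and SVM vcpu-creation paths; frees partial state on failure. */
static int kvm_vcpu_alloc_fpu(struct kvm_vcpu_arch *arch)
{
    arch->user_fpu = calloc(1, sizeof(*arch->user_fpu));
    if (!arch->user_fpu)
        return -1;
    arch->guest_fpu = calloc(1, sizeof(*arch->guest_fpu));
    if (!arch->guest_fpu) {
        free(arch->user_fpu);
        arch->user_fpu = NULL;
        return -1;
    }
    return 0;
}

static void kvm_vcpu_free_fpu(struct kvm_vcpu_arch *arch)
{
    free(arch->guest_fpu);
    free(arch->user_fpu);
    arch->guest_fpu = arch->user_fpu = NULL;
}
```

Both vendor modules would then call the common pair instead of carrying their own copies.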
been set successfully.
Signed-off-by: Xiaoyao Li
---
v3:
refine the description based on Sean's comment.
v2:
elaborate the changelog and description of ioctl KVM_SET_MSRS based on
Sean's comments.
---
Documentation/virt/kvm/api.txt | 7 ++-
1 file changed, 6 insertions(+), 1 deletion
On 9/5/2019 1:41 AM, Sean Christopherson wrote:
On Wed, Sep 04, 2019 at 02:01:18PM +0800, Xiaoyao Li wrote:
Userspace can use ioctl KVM_SET_MSRS to update a set of MSRs of guest.
This ioctl sets specified MSRs one by one. Once it fails to set an MSR
due to setting reserved bits, the MSR
of MSRs have been set successfully.
Signed-off-by: Xiaoyao Li
---
v2:
elaborate the changelog and description of ioctl KVM_SET_MSRS based on
Sean's comments.
---
Documentation/virt/kvm/api.txt | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/Documentation/virt/kvm
On Thu, 2019-06-27 at 14:11 +0200, Thomas Gleixner wrote:
> On Thu, 27 Jun 2019, Xiaoyao Li wrote:
> > On 6/27/2019 3:12 PM, Thomas Gleixner wrote:
> > > The real interesting question is whether the #AC on split lock prevents
> > > the
> > > actual bus lo
Commit-ID: 2238246ff8d533a5f2327d1f953375876d8a013c
Gitweb: https://git.kernel.org/tip/2238246ff8d533a5f2327d1f953375876d8a013c
Author: Xiaoyao Li
AuthorDate: Thu, 27 Jun 2019 12:55:25 +0800
Committer: Ingo Molnar
CommitDate: Thu, 27 Jun 2019 10:56:11 +0200
x86/boot: Make the GDT 8
://daringfireball.net/2007/07/on_top
A: Yes
Q: Should I trim all irrelevant context?
Sorry about this.
Won't do it anymore.
On Thu, 27 Jun 2019, Xiaoyao Li wrote:
Do you have any comments on this one as the policy of how to expose split lock
detection (emulate TEST_CTL) for guest changed.
This patch makes
Commit-ID: 1c30fe6cbba6997ae4740bb46910036f8a4a9edb
Gitweb: https://git.kernel.org/tip/1c30fe6cbba6997ae4740bb46910036f8a4a9edb
Author: Xiaoyao Li
AuthorDate: Thu, 27 Jun 2019 12:55:25 +0800
Committer: Thomas Gleixner
CommitDate: Thu, 27 Jun 2019 09:40:41 +0200
x86/boot: Make gdt 8
Loading a segment descriptor implicitly uses a locked access. Align the
GDT here to avoid a potential split lock caused by a descriptor crossing
a cache line.
Signed-off-by: Xiaoyao Li
---
arch/x86/boot/compressed/head_64.S | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/boot/compressed/head_64.S
b/arch
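The actual patch is a one-line alignment directive in assembly; the same idea can be sketched in C (illustrative types, not the kernel's own):

```c
#include <assert.h>
#include <stdint.h>

/* A segment descriptor is 8 bytes. Forcing the table itself to 8-byte
 * alignment means each descriptor sits in a naturally aligned 8-byte
 * slot, so a locked access to one descriptor cannot straddle a cache
 * line and trigger a split lock. */
struct segment_descriptor {
    uint64_t raw;
};

static struct segment_descriptor gdt[4]
    __attribute__((aligned(8)));
```

The `.balign 8` directive in the boot assembly achieves the same guarantee for the real GDT.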
uest killed by host. If the #AC is from the guest kernel, the
guest kernel may clear its split lock bit in the TEST_CTRL MSR and
re-execute the instruction; then it goes into case 1, and the #AC will be
reported to host userspace, e.g., QEMU.
On 6/19/2019 6:41 AM, Fenghua Yu wrote:
From: Xiaoyao Li
A control bit (bi
> state, by default we don't expose it to kvm and enable it only when
> guest CPUID has it.
>
> Detailed information about user wait instructions can be found in the
> latest Intel 64 and IA-32 Architectures Software Developer's Manual.
>
> Co-developed-by: Jingqi Liu
>
On Thu, 2019-06-20 at 16:46 +0800, Tao Xu wrote:
> UMWAIT and TPAUSE instructions use IA32_UMWAIT_CONTROL at MSR index E1H
> to determine the maximum time in TSC-quanta that the processor can reside
> in either C0.1 or C0.2.
>
> This patch emulates MSR IA32_UMWAIT_CONTROL in guest and
On 6/20/2019 4:17 PM, Paolo Bonzini wrote:
On 20/06/19 08:46, Xiaoyao Li wrote:
It depends on whether or not processors support the 1-setting of
“enable XSAVES/XRSTORS” in the VM-execution control field. Anyway,
Yes, whether this field exists or not depends on whether processors
.
Co-developed-by: Xiaoyao Li
Signed-off-by: Xiaoyao Li
Signed-off-by: Tao Xu
---
arch/x86/kvm/vmx/vmx.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index b93e36ddee5e..935cf72439a9 100644
--- a/arch/x86/kvm/vmx
On 6/19/2019 3:01 PM, Tao Xu wrote:
On 6/19/2019 2:23 PM, Xiaoyao Li wrote:
On 6/19/2019 2:09 PM, Tao Xu wrote:
UMONITOR, UMWAIT and TPAUSE are a set of user wait instructions.
This patch adds support for user wait instructions in KVM. Availability
of the user wait instructions is indicated
On 6/19/2019 2:09 PM, Tao Xu wrote:
UMONITOR, UMWAIT and TPAUSE are a set of user wait instructions.
This patch adds support for user wait instructions in KVM. Availability
of the user wait instructions is indicated by the presence of the CPUID
feature flag WAITPKG CPUID.0x07.0x0:ECX[5]. User
On 6/17/2019 11:50 PM, Radim Krčmář wrote:
2019-06-17 14:31+0800, Xiaoyao Li:
On 6/17/2019 11:32 AM, Xiaoyao Li wrote:
On 6/16/2019 5:55 PM, Tao Xu wrote:
+ if (vmx->msr_ia32_umwait_control != host_umwait_control)
+ add_atomic_switch_msr(vmx, MSR_IA32_UMWAIT_CONT
On 6/17/2019 11:32 AM, Xiaoyao Li wrote:
On 6/16/2019 5:55 PM, Tao Xu wrote:
UMWAIT and TPAUSE instructions use IA32_UMWAIT_CONTROL at MSR index E1H
to determine the maximum time in TSC-quanta that the processor can
reside
in either C0.1 or C0.2.
This patch emulates MSR
On 6/16/2019 5:55 PM, Tao Xu wrote:
UMWAIT and TPAUSE instructions use IA32_UMWAIT_CONTROL at MSR index E1H
to determine the maximum time in TSC-quanta that the processor can reside
in either C0.1 or C0.2.
This patch emulates MSR IA32_UMWAIT_CONTROL in guest and differentiate
Ping.
On 4/19/2019 10:16 AM, Xiaoyao Li wrote:
1. Using X86_FEATURE_ARCH_CAPABILITIES to enumerate the existence of
MSR_IA32_ARCH_CAPABILITIES to avoid using rdmsrl_safe().
2. Since kvm_get_arch_capabilities() is only used in this file, make
it static.
Signed-off-by: Xiaoyao Li
---
arch
1F.
> >
> > Co-developed-by: Xiaoyao Li
> > Signed-off-by: Xiaoyao Li
> > Signed-off-by: Like Xu
> > ---
> >
> > ==changelog==
> > v2:
> > - Apply cpuid.1f check rule on Intel SDM page 3-222 Vol.2A
> > - Add comment to handle 0x1f and 0xb in
On Thu, 2019-04-25 at 23:33 +0800, Like Xu wrote:
> On 2019/4/25 22:19, Sean Christopherson wrote:
> > On Thu, Apr 25, 2019 at 03:07:35PM +0800, Like Xu wrote:
> > > On 2019/4/25 14:30, Xiaoyao Li wrote:
> > > > > > Besides, the problem of simply u
On Thu, 2019-04-25 at 14:02 +0800, Like Xu wrote:
> On 2019/4/25 12:18, Xiaoyao Li wrote:
> > On Thu, 2019-04-25 at 10:58 +0800, Like Xu wrote:
> > > On 2019/4/24 22:32, Sean Christopherson wrote:
> > > > Now that I understand how min() works...
> > > >
On Thu, 2019-04-25 at 10:58 +0800, Like Xu wrote:
> On 2019/4/24 22:32, Sean Christopherson wrote:
> > Now that I understand how min() works...
> >
> > On Mon, Apr 22, 2019 at 02:40:34PM +0800, Like Xu wrote:
> > > Expose Intel V2 Extended Topology Enumeration Leaf to guest only when
> > > host
1. Using X86_FEATURE_ARCH_CAPABILITIES to enumerate the existence of
MSR_IA32_ARCH_CAPABILITIES to avoid using rdmsrl_safe().
2. Since kvm_get_arch_capabilities() is only used in this file, make
it static.
Signed-off-by: Xiaoyao Li
---
arch/x86/include/asm/kvm_host.h | 1 -
arch/x86/kvm/x86
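As a rough sketch of the shape of that change (the feature check and MSR read are stubbed out with fake variables here; only the control flow mirrors the description above):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stubs standing in for kernel primitives, for illustration only. */
static bool fake_has_arch_capabilities = true;
static uint64_t fake_msr_value = 0x2a;

static bool boot_cpu_has_arch_capabilities(void)
{
    return fake_has_arch_capabilities;
}

static uint64_t rdmsrl_arch_capabilities(void)
{
    return fake_msr_value;
}

/* With the feature flag checked first, a plain MSR read suffices;
 * the rdmsrl_safe() fallback for a possibly non-existent MSR is no
 * longer needed. */
static uint64_t kvm_get_arch_capabilities(void)
{
    uint64_t data = 0;

    if (boot_cpu_has_arch_capabilities())
        data = rdmsrl_arch_capabilities();
    return data;
}
```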
to guest. MSR_MISC_FEATURES_ENABLES can be just
cleared to zero for guest when any of the features is enabled in host.
Signed-off-by: Xiaoyao Li
---
arch/x86/kernel/process.c | 1 +
arch/x86/kvm/vmx/vmx.c| 8
2 files changed, 9 insertions(+)
diff --git a/arch/x86/kernel/process.c b
exit,
it should be written to hardware on guest wrmsr when hardware cpuid
faulting is used for the guest.
Note that MSR_MISC_FEATURES_ENABLES only exists on Intel CPUs, so this
optimization is applied only to vmx.
Signed-off-by: Xiaoyao Li
---
arch/x86/include/asm/kvm_host.h | 2 ++
arch/x86/kvm/vmx
nstead of reading hardware msr, from Sean
Christopherson
- avoid WRMSR whenever possible, from Sean Christopherson.
v1->v2:
- move the save/restore of cpuid faulting bit to
vmx_prepare_switch_to_guest/vmx_prepare_switch_to_host to avoid every
vmentry RDMSR, based on Paolo's comment.
Xiaoyao Li (
On Mon, 2019-03-25 at 08:33 -0700, Sean Christopherson wrote:
> On Mon, Mar 25, 2019 at 04:06:49PM +0800, Xiaoyao Li wrote:
> > There are two defined bits in MSR_MISC_FEATURES_ENABLES, bit 0 for cpuid
> > faulting and bit 1 for ring3mwait.
> >
> > == cpuid Fau
to guest. MSR_MISC_FEATURES_ENABLES can be just
cleared to zero for guest when any of the features is enabled in host.
Signed-off-by: Xiaoyao Li
---
arch/x86/kernel/process.c | 1 +
arch/x86/kvm/vmx/vmx.c| 24
2 files changed, 25 insertions(+)
diff --git a/arch/x
CPUID vm exit,
it should be written to hardware on guest wrmsr when hardware cpuid
faulting is used for the guest.
Signed-off-by: Xiaoyao Li
---
arch/x86/include/asm/kvm_host.h | 2 ++
arch/x86/kvm/vmx/vmx.c | 13 ++---
arch/x86/kvm/x86.c | 15 ---
3 files
e save/restore of cpuid faulting bit to
vmx_prepare_switch_to_guest/vmx_prepare_switch_to_host to avoid every
vmentry RDMSR, based on Paolo's comment.
Xiaoyao Li (2):
kvm/vmx: Switch MSR_MISC_FEATURES_ENABLES between host and guest
x86/vmx: optimize MSR_MISC_FEATURES_ENABLES switch
arch/x86/inclu
On Tue, 2019-03-19 at 17:09 -0700, Sean Christopherson wrote:
> On Wed, Mar 20, 2019 at 01:51:28AM +0800, Xiaoyao Li wrote:
> > On Tue, 2019-03-19 at 07:28 -0700, Sean Christopherson wrote:
> > > On Tue, Mar 19, 2019 at 12:37:23PM +0800, Xiaoyao Li wrote:
> > > > On
On Tue, 2019-03-19 at 07:28 -0700, Sean Christopherson wrote:
> On Tue, Mar 19, 2019 at 12:37:23PM +0800, Xiaoyao Li wrote:
> > On Mon, 2019-03-18 at 09:38 -0700, Sean Christopherson wrote:
> > > On Mon, Mar 18, 2019 at 07:43:24PM +0800, Xiaoyao Li wrote:
> > > > C
On Mon, 2019-03-18 at 09:38 -0700, Sean Christopherson wrote:
> On Mon, Mar 18, 2019 at 07:43:24PM +0800, Xiaoyao Li wrote:
> > Current cpuid faulting of guest is purely emulated in kvm, which exploits
> > CPUID vm exit to inject #GP to guest. However, if host h
on Paolo's comment.
==previous version==
v1: https://patchwork.kernel.org/patch/10852253/
Xiaoyao Li (2):
kvm/vmx: avoid CPUID faulting leaking to guest
kvm/vmx: Using hardware cpuid faulting to avoid emulation overhead
arch/x86/include/asm/kvm_host.h | 2 ++
arch/x86/kvm/vmx/vmx.c | 45
priority over CPUID instruction vm
exit (Intel SDM vol3.25.1.1).
Since cpuid faulting only exists on some Intel CPUs, just apply this
optimization to vmx.
Signed-off-by: Xiaoyao Li
---
arch/x86/include/asm/kvm_host.h | 2 ++
arch/x86/kvm/vmx/vmx.c | 19 +++
arch/x86/kvm/x86.c
Because KVM provides the software emulation of cpuid faulting, we can
just clear the cpuid faulting bit in the hardware MSR when switching to
the guest.
Signed-off-by: Xiaoyao Li
---
Changes in v2:
- move the save/restore of cpuid faulting bit to
vmx_prepare_switch_to_guest/vmx_prepare_switch_to_host to avoid
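A rough model of the save/clear dance described in this thread (the MSR is modeled as a plain variable; bit 0 of MSR_MISC_FEATURES_ENABLES is the CPUID-faulting enable bit, and the function names here only echo the changelog, they are not the real KVM code):

```c
#include <assert.h>
#include <stdint.h>

#define CPUID_FAULT_ENABLE (1ULL << 0)

/* Model of the hardware MSR, for illustration only. */
static uint64_t misc_features_enables;

/* Clear the CPUID-faulting bit before entering the guest, so the
 * hardware feature cannot leak into guest execution; KVM's software
 * emulation handles the guest's view instead. */
static void prepare_switch_to_guest(void)
{
    misc_features_enables &= ~CPUID_FAULT_ENABLE;
}

/* Restore the host's saved value when switching back. */
static void prepare_switch_to_host(uint64_t host_val)
{
    misc_features_enables = host_val;
}
```

Doing this at the prepare-switch points rather than on every vmentry is what avoids the per-vmentry RDMSR the changelog mentions.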
On Thu, 2019-03-14 at 12:28 +0100, Paolo Bonzini wrote:
> On 14/03/19 07:38, Xiaoyao Li wrote:
> > CPUID Faulting is a feature about CPUID instruction. When CPUID Faulting is
> > enabled, all execution of the CPUID instruction outside system-management
> > mode (SMM) cause
On Thu, 2019-03-14 at 12:28 +0100, Paolo Bonzini wrote:
> On 14/03/19 07:38, Xiaoyao Li wrote:
> > CPUID Faulting is a feature about CPUID instruction. When CPUID Faulting is
> > enabled, all execution of the CPUID instruction outside system-management
> > mode (SMM) cause
On Thu, 2019-03-14 at 21:43 +1300, Kyle Huey wrote:
> On Thu, Mar 14, 2019 at 7:50 PM Xiaoyao Li wrote:
> >
> > CPUID Faulting is a feature about CPUID instruction. When CPUID Faulting is
> > enabled, all execution of the CPUID instruction outside system-management
> >
, thus cpuid faulting will be enabled by default after applying
Peter's patch. It will make the problem more obvious.
On Thu, 2019-03-14 at 14:38 +0800, Xiaoyao Li wrote:
> CPUID Faulting is a feature about CPUID instruction. When CPUID Faulting is
> enabled, all execution of the CPUID instruction o
o the kvm emulation path but uses the hardware
feature. It is also a benefit that we no longer need a VM exit to inject
#GP to emulate the cpuid faulting feature.
Intel SDM vol3.25.1.1 specifies the priority between cpuid faulting
and CPUID instruction.
Signed-off-by: Xiaoyao Li
---
arch/x86
On Mon, 2019-03-11 at 16:21 +0100, Paolo Bonzini wrote:
> On 11/03/19 16:10, Xiaoyao Li wrote:
> > On Mon, 2019-03-11 at 14:31 +0100, Paolo Bonzini wrote:
> > > On 09/03/19 03:31, Xiaoyao Li wrote:
> > > > Hi, Paolo,
> > > >
> > > > Do yo
On Mon, 2019-03-11 at 14:31 +0100, Paolo Bonzini wrote:
> On 09/03/19 03:31, Xiaoyao Li wrote:
> > Hi, Paolo,
> >
> > Do you have any comments on this patch?
> >
> > We are preparing v5 patches for split lock detection, if you have any
> > comments
Hi, Paolo,
Do you have any comments on this patch?
We are preparing v5 patches for split lock detection, if you have any comments
about this one, please let me know.
Thanks,
Xiaoyao
On Fri, 2019-03-01 at 18:45 -0800, Fenghua Yu wrote:
> From: Xiaoyao Li
>
> A control bit (bit 29) in
On Fri, 2019-03-08 at 08:54 +0100, Paolo Bonzini wrote:
> On 08/03/19 07:10, Xiaoyao Li wrote:
> > > so that non-virtualizable features are hidden and
> > >
> > > if (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
> > > data |= CORE_CAP_SPLIT_LOCK
Hi, Paolo
On Mon, 2019-03-04 at 09:42 +0100, Paolo Bonzini wrote:
> On 02/03/19 03:45, Fenghua Yu wrote:
> > From: Xiaoyao Li
> >
> > MSR IA32_CORE_CAPABILITY is a feature-enumerating MSR, bit 5 of which
> > reports the capability of enabling detection of split l
On Thu, 2019-03-07 at 19:15 +0100, Paolo Bonzini wrote:
> On 07/03/19 18:37, Sean Christopherson wrote:
> > On Thu, Mar 07, 2019 at 05:31:43PM +0800, Xiaoyao Li wrote:
> > > At present, we report F(ARCH_CAPABILITIES) for x86 arch(both vmx and svm)
> > > unconditi
to emulate it in svm. Thus this patch chooses to only emulate it
for vmx, and moves the related handling to vmx related files.
Signed-off-by: Xiaoyao Li
---
arch/x86/include/asm/kvm_host.h | 1 -
arch/x86/kvm/cpuid.c| 8 +---
arch/x86/kvm/vmx/vmx.c | 26
On Mon, 2019-03-04 at 12:14 +0100, Paolo Bonzini wrote:
> On 04/03/19 12:10, Xiaoyao Li wrote:
> > Like you said before, I think we don't need the condition judgment
> > "if(boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))", but to set
> > F(CORE_CAPABIL
On Mon, 2019-03-04 at 09:42 +0100, Paolo Bonzini wrote:
> On 02/03/19 03:45, Fenghua Yu wrote:
> > From: Xiaoyao Li
> >
> > MSR IA32_CORE_CAPABILITY is a feature-enumerating MSR, bit 5 of which
> > reports the capability of enabling detection of split locks (will
On Mon, 2019-03-04 at 12:14 +0100, Paolo Bonzini wrote:
> On 04/03/19 12:10, Xiaoyao Li wrote:
> > Like you said before, I think we don't need the condition judgment
> > "if(boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))", but to set
> > F(CORE_CAPABIL
On Mon, 2019-03-04 at 11:49 +0100, Paolo Bonzini wrote:
> On 04/03/19 11:47, Xiaoyao Li wrote:
> > On Mon, 2019-03-04 at 09:38 +0100, Paolo Bonzini wrote:
> > > On 02/03/19 03:45, Fenghua Yu wrote:
> > > > From: Xiaoyao Li
> > > >
> > > >
On Mon, 2019-03-04 at 09:38 +0100, Paolo Bonzini wrote:
> On 02/03/19 03:45, Fenghua Yu wrote:
> > From: Xiaoyao Li
> >
> > In the latest Intel SDM, CPUID.(EAX=7H,ECX=0):EDX[30] will enumerate
> > the presence of the IA32_CORE_CAPABILITY MSR.
> >
> &g
On Tue, 2019-02-26 at 15:57 +0800, Yang Weijiang wrote:
> On Tue, Feb 26, 2019 at 11:48:59AM -0800, Jim Mattson wrote:
> > On Mon, Feb 25, 2019 at 10:32 PM Yang Weijiang
> > wrote:
> > >
> > > Guest queries CET SHSTK and IBT support by CPUID.(EAX=0x7,ECX=0),
> > > in return, ECX[bit 7]
On Mon, 2019-02-18 at 16:26 +0800, linux.intel.com wrote:
> On Fri, 2019-02-15 at 11:46 -0500, Konrad Rzeszutek Wilk wrote:
> > On Thu, Feb 14, 2019 at 12:08:58PM +0800, Xiaoyao Li wrote:
> > > Commit ca83b4a7f2d068da79a0 ("x86/KVM/VMX: Add find_msr() helper
> >