On 15/01/21 08:00, Wei Huang wrote:
If the whole body inside the if-statement is moved out, do you expect the
interface of x86_emulate_decoded_instruction to be something like:
int x86_emulate_decoded_instruction(struct kvm_vcpu *vcpu,
gpa_t cr2_or_gpa,
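Purely as an illustration of the shape being asked about (the remaining parameters are an assumption here, not what the series settled on), such a helper might be declared along these lines:

int x86_emulate_decoded_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
                                    int emulation_type);

i.e. the decode step stays with the caller and only the post-decode half of x86_emulate_instruction() is re-entered.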
On 1/12/21 8:01 AM, Paolo Bonzini wrote:
> On 12/01/21 07:37, Wei Huang wrote:
>> static int gp_interception(struct vcpu_svm *svm)
>> {
>> 	struct kvm_vcpu *vcpu = &svm->vcpu;
>> 	u32 error_code = svm->vmcb->control.exit_info_1;
>> -
>> -	WARN_ON_ONCE(!enable_vmware_backdoor);
>
On Thu, Jan 14, 2021, Maxim Levitsky wrote:
> On Tue, 2021-01-12 at 15:00 -0500, Bandan Das wrote:
> > Sean Christopherson writes:
> > ...
> > > > - if ((emulation_type & EMULTYPE_VMWARE_GP) &&
> > > > - !is_vmware_backdoor_opcode(ctxt)) {
> > > > - kvm_queue_exceptio
On Tue, 2021-01-12 at 00:37 -0600, Wei Huang wrote:
> From: Bandan Das
>
> While running VM related instructions (VMRUN/VMSAVE/VMLOAD), some AMD
> CPUs check EAX against reserved memory regions (e.g. SMM memory on host)
> before checking VMCB's instruction intercept. If EAX falls into such
> memo
On Tue, 2021-01-12 at 15:00 -0500, Bandan Das wrote:
> Sean Christopherson writes:
> ...
> > > - if ((emulation_type & EMULTYPE_VMWARE_GP) &&
> > > - !is_vmware_backdoor_opcode(ctxt)) {
> > > - kvm_queue_exception_e(vcpu, GP_VECTOR, 0);
> > > - return 1;
> > > + if (emulation_t
On Tue, 2021-01-12 at 23:15 -0600, Wei Huang wrote:
>
> On 1/12/21 12:58 PM, Andy Lutomirski wrote:
> > Andrew Cooper points out that there may be a nicer workaround. Make
> > sure that the SMRAM and HT region (FFFD - ) are
> > marked as reserved in the guest, too.
>
> In the
On 12/01/21 18:59, Sean Christopherson wrote:
It would be very helpful to list exactly which CPUs are/aren't affected, even if
that just means stating something like "all CPUs before XYZ". Given patch 2/2,
I assume it's all CPUs without the new CPUID flag?
Ah, despite calling this an 'errata',
On 12/01/21 18:42, Sean Christopherson wrote:
On a related topic, it feels like nested should be disabled by default on SVM
until it's truly ready for primetime, with the patch tagged for stable. That
way we don't have to worry about crafting non-trivial fixes (like this one) to
make them backpo
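For context, the knob in question is the existing "nested" module parameter in arch/x86/kvm/svm/svm.c; assuming it is still declared as an int module_param, flipping the default would be a one-line change along these lines (a sketch, not a submitted patch):

/* allow nested virtualization in KVM/SVM -- default off in this sketch */
-static int nested = true;
+static int nested = false;
 module_param(nested, int, S_IRUGO);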
On 1/12/21 12:58 PM, Andy Lutomirski wrote:
Andrew Cooper points out that there may be a nicer workaround. Make
sure that the SMRAM and HT region (FFFD - ) are
marked as reserved in the guest, too.
In theory this proposed solution can avoid intercepting #GP. But in
real
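On the VMM side, "marked as reserved in the guest" would presumably translate into extra reserved E820 entries. In QEMU terms that is a call such as the following; the base/size macros are placeholders, since the exact ranges are elided above:

/* Hypothetical: reserve the problematic ranges in the guest's E820 map.
 * SMRAM_BASE/SMRAM_SIZE and HT_HOLE_BASE/HT_HOLE_SIZE are placeholders. */
e820_add_entry(SMRAM_BASE, SMRAM_SIZE, E820_RESERVED);
e820_add_entry(HT_HOLE_BASE, HT_HOLE_SIZE, E820_RESERVED);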
On 1/12/21 11:59 AM, Sean Christopherson wrote:
On Tue, Jan 12, 2021, Sean Christopherson wrote:
On Tue, Jan 12, 2021, Wei Huang wrote:
From: Bandan Das
While running VM related instructions (VMRUN/VMSAVE/VMLOAD), some AMD
CPUs check EAX against reserved memory regions (e.g. SMM memory on
On 1/12/21 11:56 AM, Sean Christopherson wrote:
On Tue, Jan 12, 2021, Andy Lutomirski wrote:
On Jan 12, 2021, at 7:46 AM, Bandan Das wrote:
Andy Lutomirski writes:
...
#endif
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6d16481aa29d..c5c4aaf01a1a 100644
--- a/arch
Sean Christopherson writes:
...
>> -	if ((emulation_type & EMULTYPE_VMWARE_GP) &&
>> -	    !is_vmware_backdoor_opcode(ctxt)) {
>> -		kvm_queue_exception_e(vcpu, GP_VECTOR, 0);
>> -		return 1;
>> +	if (emulation_type & EMULTYPE_PARAVIRT_GP) {
>> +		vminstr = i
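Reading between the truncation, the new branch presumably assigns the result of an "is this an SVM VM instruction?" opcode check before deciding whether to inject #GP. A rough sketch of how such a check could slot in; the helper name is hypothetical:

	if (emulation_type & EMULTYPE_PARAVIRT_GP) {
		/* is_vm_instr_opcode() is a hypothetical helper name */
		vminstr = is_vm_instr_opcode(ctxt);
		if (!vminstr && !is_vmware_backdoor_opcode(ctxt)) {
			kvm_queue_exception_e(vcpu, GP_VECTOR, 0);
			return 1;
		}
	}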
On 1/12/21 6:15 AM, Vitaly Kuznetsov wrote:
Wei Huang writes:
From: Bandan Das
While running VM related instructions (VMRUN/VMSAVE/VMLOAD), some AMD
CPUs check EAX against reserved memory regions (e.g. SMM memory on host)
before checking VMCB's instruction intercept. If EAX falls into suc
On 1/12/21 5:09 AM, Maxim Levitsky wrote:
On Tue, 2021-01-12 at 00:37 -0600, Wei Huang wrote:
From: Bandan Das
While running VM related instructions (VMRUN/VMSAVE/VMLOAD), some AMD
CPUs check EAX against reserved memory regions (e.g. SMM memory on host)
before checking VMCB's instruction in
On Tue, Jan 12, 2021, Wei Huang wrote:
> +/* Emulate SVM VM execution instructions */
> +static int svm_emulate_vm_instr(struct kvm_vcpu *vcpu, u8 modrm)
> +{
> +	struct vcpu_svm *svm = to_svm(vcpu);
> +
> +	switch (modrm) {
> +	case 0xd8: /* VMRUN */
> +		return vmrun_interc
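For readers without the AMD manual handy: VMRUN, VMLOAD and VMSAVE all live in the 0F 01 opcode group and are distinguished only by the ModRM byte, which is what the switch above keys on. A stand-alone illustration (not the kernel code):

#include <stdio.h>

/* Dispatch on the ModRM byte of the 0F 01 opcode group, the encoding
 * used by the SVM instructions (per the AMD APM opcode tables). */
static const char *svm_insn_name(unsigned char modrm)
{
	switch (modrm) {
	case 0xd8: return "VMRUN";   /* 0F 01 D8 */
	case 0xda: return "VMLOAD";  /* 0F 01 DA */
	case 0xdb: return "VMSAVE";  /* 0F 01 DB */
	default:   return "something else";
	}
}

int main(void)
{
	const unsigned char modrms[] = { 0xd8, 0xda, 0xdb, 0xd9 };

	for (unsigned int i = 0; i < sizeof(modrms); i++)
		printf("0f 01 %02x -> %s\n", modrms[i], svm_insn_name(modrms[i]));
	return 0;
}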
On Tue, Jan 12, 2021 at 9:59 AM Sean Christopherson wrote:
>
> On Tue, Jan 12, 2021, Sean Christopherson wrote:
> > On Tue, Jan 12, 2021, Wei Huang wrote:
> > > From: Bandan Das
> > >
> > > While running VM related instructions (VMRUN/VMSAVE/VMLOAD), some AMD
> > > CPUs check EAX against reserved
On Tue, Jan 12, 2021, Sean Christopherson wrote:
> On Tue, Jan 12, 2021, Wei Huang wrote:
> > From: Bandan Das
> >
> > While running VM related instructions (VMRUN/VMSAVE/VMLOAD), some AMD
> > CPUs check EAX against reserved memory regions (e.g. SMM memory on host)
> > before checking VMCB's inst
On Tue, Jan 12, 2021, Andy Lutomirski wrote:
>
> > On Jan 12, 2021, at 7:46 AM, Bandan Das wrote:
> >
> > Andy Lutomirski writes:
> > ...
> >> #endif
> >> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> >> index 6d16481aa29d..c5c4aaf01a1a 100644
> >> --- a/arch/x86/kvm/mm
On Tue, Jan 12, 2021, Paolo Bonzini wrote:
> On 12/01/21 07:37, Wei Huang wrote:
> > static int gp_interception(struct vcpu_svm *svm)
> > {
> > 	struct kvm_vcpu *vcpu = &svm->vcpu;
> > 	u32 error_code = svm->vmcb->control.exit_info_1;
> > -
> > -	WARN_ON_ONCE(!enable_vmware_backdoor);
>
On Tue, Jan 12, 2021, Wei Huang wrote:
> From: Bandan Das
>
> While running VM related instructions (VMRUN/VMSAVE/VMLOAD), some AMD
> CPUs check EAX against reserved memory regions (e.g. SMM memory on host)
> before checking VMCB's instruction intercept.
It would be very helpful to list exactly
> On Jan 12, 2021, at 7:46 AM, Bandan Das wrote:
>
> Andy Lutomirski writes:
> ...
>> #endif
>> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
>> index 6d16481aa29d..c5c4aaf01a1a 100644
>> --- a/arch/x86/kvm/mmu/mmu.c
>> +++ b/arch/x86/kvm/mmu/mmu.c
>> @@ -50,6 +50,7 @
Andy Lutomirski writes:
...
> #endif
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 6d16481aa29d..c5c4aaf01a1a 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -50,6 +50,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>
> On Jan 12, 2021, at 7:17 AM, Maxim Levitsky wrote:
>
> On Tue, 2021-01-12 at 07:11 -0800, Andy Lutomirski wrote:
On Jan 12, 2021, at 4:15 AM, Vitaly Kuznetsov wrote:
>>>
>>> Wei Huang writes:
>>>
From: Bandan Das
While running VM related instructions (VMRUN/VMSAVE
On Tue, 2021-01-12 at 07:11 -0800, Andy Lutomirski wrote:
> > On Jan 12, 2021, at 4:15 AM, Vitaly Kuznetsov wrote:
> >
> > Wei Huang writes:
> >
> > > From: Bandan Das
> > >
> > > While running VM related instructions (VMRUN/VMSAVE/VMLOAD), some AMD
> > > CPUs check EAX against reserved memo
> On Jan 12, 2021, at 4:15 AM, Vitaly Kuznetsov wrote:
>
> Wei Huang writes:
>
>> From: Bandan Das
>>
>> While running VM related instructions (VMRUN/VMSAVE/VMLOAD), some AMD
>> CPUs check EAX against reserved memory regions (e.g. SMM memory on host)
>> before checking VMCB's instruction i
On 12/01/21 07:37, Wei Huang wrote:
static int gp_interception(struct vcpu_svm *svm)
{
	struct kvm_vcpu *vcpu = &svm->vcpu;
	u32 error_code = svm->vmcb->control.exit_info_1;
-
-	WARN_ON_ONCE(!enable_vmware_backdoor);
+	int rc;
	/*
-	 * VMware backdoor emu
Wei Huang writes:
> From: Bandan Das
>
> While running VM related instructions (VMRUN/VMSAVE/VMLOAD), some AMD
> CPUs check EAX against reserved memory regions (e.g. SMM memory on host)
> before checking VMCB's instruction intercept. If EAX falls into such
> memory areas, #GP is triggered before
On Tue, 2021-01-12 at 00:37 -0600, Wei Huang wrote:
> From: Bandan Das
>
> While running VM related instructions (VMRUN/VMSAVE/VMLOAD), some AMD
> CPUs check EAX against reserved memory regions (e.g. SMM memory on host)
> before checking VMCB's instruction intercept. If EAX falls into such
> memo
From: Bandan Das
While running VM related instructions (VMRUN/VMSAVE/VMLOAD), some AMD
CPUs check EAX against reserved memory regions (e.g. SMM memory on host)
before checking VMCB's instruction intercept. If EAX falls into such
memory areas, #GP is triggered before VMEXIT. This causes problem un
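Putting the pieces of the thread together, the workaround being discussed amounts to routing a zero-error-code #GP through the instruction emulator so the SVM instructions can be recognised and handled as if their intercepts had fired. A minimal sketch, assuming the EMULTYPE_PARAVIRT_GP flag mentioned earlier in the thread (not the final form of the patch):

/* Sketch only: on affected CPUs a #GP with error code 0 may really be an
 * intercepted VMRUN/VMSAVE/VMLOAD, so give the emulator a chance to
 * recognise it instead of reflecting the #GP unconditionally. */
static int gp_interception(struct vcpu_svm *svm)
{
	struct kvm_vcpu *vcpu = &svm->vcpu;
	u32 error_code = svm->vmcb->control.exit_info_1;

	if (error_code) {
		kvm_queue_exception_e(vcpu, GP_VECTOR, error_code);
		return 1;
	}

	return kvm_emulate_instruction(vcpu, EMULTYPE_PARAVIRT_GP);
}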