Hi Gleb,

Another problem on AMD processors.

After each vm-exit, I need to check whether the vm-exit is due to an NMI.
For vmx.c, I added the check in vmx_complete_interrupts().

The code snippet is:

        if ((exit_intr_info & INTR_INFO_INTR_TYPE_MASK) == INTR_TYPE_NMI_INTR &&
            (exit_intr_info & INTR_INFO_VALID_MASK)) {

                printk(KERN_INFO "kvm-oprofile: vm exit due to NMI.\n");

                /* indicate vm-exit due to counter overflow */
                vcpu->vm_exit_on_cntr_overflow = 1;
        }

This works on Intel chips.

I added a similar check in svm_complete_interrupts():

2501 static void svm_complete_interrupts(struct vcpu_svm *svm)
2502 {
2503         u8 vector;
2504         int type;
2505         u32 exitintinfo = svm->vmcb->control.exit_int_info;
2506         struct kvm_vcpu *vcpu = &svm->vcpu;
2507
2508         if (svm->vcpu.arch.hflags & HF_IRET_MASK)
2509                 svm->vcpu.arch.hflags &= ~(HF_NMI_MASK | HF_IRET_MASK);
2510
2511         svm->vcpu.arch.nmi_injected = false;
2512         kvm_clear_exception_queue(&svm->vcpu);
2513         kvm_clear_interrupt_queue(&svm->vcpu);
2514
2515         if (!(exitintinfo & SVM_EXITINTINFO_VALID))
2516                 return;
2517
2518         vector = exitintinfo & SVM_EXITINTINFO_VEC_MASK;
2519         type = exitintinfo & SVM_EXITINTINFO_TYPE_MASK;
2520
2521         /* kvm-oprofile */
2522         if (type == SVM_EXITINTINFO_TYPE_NMI) {
2523
2524                 printk(KERN_INFO "kvm-oprofile: counter_overflowed & vm exit.\n");
2525                 vcpu->vm_exit_on_cntr_overflow = 1;
2526         }

However, this part (lines 2522 to 2526) never gets executed. Using the qemu
monitor, I managed to inject NMIs into the guests, but even after a vm-exit
due to an NMI, this check never fires.


Thanks,
Jiaqing

2009/7/30 Jiaqing Du <jiaq...@gmail.com>:
> Hi Gleb,
>
> My code works by setting "vcpu->arch.nmi_pending = 1;" inside
> vcpu_enter_guest().
>
>
> Thanks,
> Jiaqing
>
> 2009/7/27 Gleb Natapov <g...@redhat.com>:
>> On Sun, Jul 26, 2009 at 09:25:34PM +0200, Jiaqing Du wrote:
>>> Hi Gleb,
>>>
>>> Thanks for your reply.
>>>
>>> 2009/7/26 Gleb Natapov <g...@redhat.com>:
>>> > On Sat, Jul 25, 2009 at 10:46:39PM +0200, Jiaqing Du wrote:
>>> >> Hi list,
>>> >>
>>> >> I'm trying to extend OProfile to support guest profiling. One step of
>>> >> my work is to push an NMI to the guest(s) when a performance counter
>>> >> overflows. Please correct me if the following is not correct:
>>> >>
>>> >> counter overflow --> NMI to host --> VM exit --> "int $2" to handle
>>> >> NMI on host --> ...   --> VM entry --> NMI to guest
>>> >>
>>> > Correct except the last step (--> NMI to guest). Host nmi is not
>>> > propagated to guests.
>>>
>>> Yes. I need to add some code to propagate host NMI to guests.
>>> >
>>> >> On the path between VM-exit and VM-entry, I want to push an NMI to the
>>> >> guest. I tried to put the following code on the path, but never
>>> >> succeeded. Various weird things happened, such as KVM hangs, guest
>>> >> kernel oopses, and host hangs. I tried the code in both Linux 2.6.30
>>> >> and version 88.
>>> >>
>>> >> if (vmx_nmi_allowed())  { vmx_inject_nmi(); }
>>> >>
>>> >> Any suggestions? Where is the right place to push an NMI and what are
>>> >> the necessary checks?
>>> > Call kvm_inject_nmi(vcpu). And don't forget to vcpu_load(vcpu) before
>>> > doing it. See kvm_vcpu_ioctl_nmi().
>>>
>>> Based on the code with Linux 2.6.30, what kvm_inject_nmi(vcpu) does is
>>> just set vcpu->arch.nmi_pending to 1. kvm_vcpu_ioctl_nmi() puts
>>> vcpu_load() before the setting and vcpu_put() after it.
>>>
>>> I need to push host NMI to guests between a VM-exit and a VM-entry
>>> after that. The VM-exit is due to an NMI caused by performance counter
>>> overflow. The following code within vcpu_enter_guest(), which is
>>> surrounded by vcpu_load() and vcpu_put(), checks this
>>> vcpu->arch.nmi_pending and other related flags to decide whether an
>>> NMI should be pushed to guests.
>>>
>>>       if (vcpu->arch.exception.pending)
>>>               __queue_exception(vcpu);
>>>       else if (irqchip_in_kernel(vcpu->kvm))
>>>               kvm_x86_ops->inject_pending_irq(vcpu);
>>>       else
>>>               kvm_x86_ops->inject_pending_vectors(vcpu, kvm_run);
>>>
>>> What I did is given below:
>>>
>>> 3097 static int vcpu_enter_guest(struct kvm_vcpu *vcpu, struct kvm_run 
>>> *kvm_run)
>>> 3098 {
>>>                ... ...
>>>
>>> 3156         if (kvm_vm_exit_on_cnt_overflow) {
>>> 3157                 vcpu->arch.nmi_pending = 1;
>>> 3158         }
>>> 3159
>>> 3160         if (vcpu->arch.exception.pending)
>>> 3161                 __queue_exception(vcpu);
>>> 3162         else if (irqchip_in_kernel(vcpu->kvm))
>>> 3163                 kvm_x86_ops->inject_pending_irq(vcpu);
>>> 3164         else
>>> 3165                 kvm_x86_ops->inject_pending_vectors(vcpu, kvm_run);
>>>
>>>               ... ....
>>> 3236 }
>>>
>>> In vcpu_enter_guest(), before this part of code is reached,
>>> vcpu->arch.nmi_pending is set to 1 if the VM-exit is due to
>>> performance counter overflow. Still, no NMIs are seen by the guests. I
>>> also tried to put this "vcpu->arch.nmi_pending = 1;" somewhere else on
>>> the path between a VM-exit and VM-entry, but that does not seem to work
>>> either. Only vmx_inject_nmi() manages to push NMIs to guests, but
>>> without the right sanity checks it causes various weird host and guest
>>> behaviors.
>>>
>>> To inject NMIs on the path between a VM-exit and VM-entry, what's to try 
>>> next?
>>>
>> If you set vcpu->arch.nmi_pending here, then vmx_inject_nmi() will be
>> called inside kvm_x86_ops->inject_pending_irq(vcpu) (if there are no
>> pending exceptions or interrupts at that moment). So if the NMI is not
>> injected, either you have a bug somewhere (why is
>> kvm_vm_exit_on_cnt_overflow global?) or your guest ignores NMIs. Does
>> your guest react to an NMI if you send it via the qemu monitor (type
>> "nmi 0" in the qemu monitor)?
>>
>> Post your code here, maybe I'll see something.
>>
>> --
>>                        Gleb.
>>
>
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
