On Sun, Jul 26, 2009 at 09:25:34PM +0200, Jiaqing Du wrote:
> Hi Gleb,
> 
> Thanks for your reply.
> 
> 2009/7/26 Gleb Natapov <g...@redhat.com>:
> > On Sat, Jul 25, 2009 at 10:46:39PM +0200, Jiaqing Du wrote:
> >> Hi list,
> >>
> >> I'm trying to extend OProfile to support guest profiling. One step of
> >> my work is to push an NMI to the guest(s) when a performance counter
> >> overflows. Please correct me if the following is not correct:
> >>
> >> counter overflow --> NMI to host --> VM exit --> "int $2" to handle
> >> NMI on host --> ...   --> VM entry --> NMI to guest
> >>
> > Correct, except for the last step (--> NMI to guest). A host NMI is not
> > propagated to the guests.
> 
> Yes. I need to add some code to propagate host NMI to guests.
> >
> >> On the path between VM-exit and VM-entry, I want to push an NMI to the
> >> guest. I tried to put the following code on the path, but never
> >> succeeded. Various weird things happened, such as KVM hangs, guest
> >> kernel oopses, and host hangs. I tried this with both the code in
> >> Linux 2.6.30 and version 88.
> >>
> >> if (vmx_nmi_allowed())  { vmx_inject_nmi(); }
> >>
> >> Any suggestions? Where is the right place to push an NMI and what are
> >> the necessary checks?
> > Call kvm_inject_nmi(vcpu). And don't forget to vcpu_load(vcpu) before
> > doing it. See kvm_vcpu_ioctl_nmi().
> 
> Based on the code in Linux 2.6.30, what kvm_inject_nmi(vcpu) does is
> just set vcpu->arch.nmi_pending to 1. kvm_vcpu_ioctl_nmi() calls
> vcpu_load() before the setting and vcpu_put() after it.
> 
> I need to push a host NMI to guests between a VM-exit and a VM-entry
> after that. The VM-exit is due to an NMI caused by a performance counter
> overflow. The following code in vcpu_enter_guest(), which is
> surrounded by vcpu_load() and vcpu_put(), checks this
> vcpu->arch.nmi_pending and other related flags to decide whether an
> NMI should be pushed to guests.
> 
>       if (vcpu->arch.exception.pending)
>               __queue_exception(vcpu);
>       else if (irqchip_in_kernel(vcpu->kvm))
>               kvm_x86_ops->inject_pending_irq(vcpu);
>       else
>               kvm_x86_ops->inject_pending_vectors(vcpu, kvm_run);
> 
> What I did is given below:
> 
> static int vcpu_enter_guest(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
> {
>               ... ...
> 
>         if (kvm_vm_exit_on_cnt_overflow) {
>                 vcpu->arch.nmi_pending = 1;
>         }
> 
>         if (vcpu->arch.exception.pending)
>                 __queue_exception(vcpu);
>         else if (irqchip_in_kernel(vcpu->kvm))
>                 kvm_x86_ops->inject_pending_irq(vcpu);
>         else
>                 kvm_x86_ops->inject_pending_vectors(vcpu, kvm_run);
> 
>               ... ...
> }
> 
> In vcpu_enter_guest(), before this part of the code is reached,
> vcpu->arch.nmi_pending is set to 1 if the VM-exit is due to a
> performance counter overflow. Still, no NMIs are seen by the guests. I
> also tried to put this "vcpu->arch.nmi_pending = 1;" somewhere else on
> the path between a VM-exit and a VM-entry, but it does not seem to work
> either. Only calling vmx_inject_nmi() directly manages to push NMIs to
> guests, but without the right sanity checks it causes various weird
> host and guest behaviors.
> 
> To inject NMIs on the path between a VM-exit and a VM-entry, what
> should I try next?
> 
If you set vcpu->arch.nmi_pending here, then vmx_inject_nmi() will be
called inside kvm_x86_ops->inject_pending_irq(vcpu) (if there is no
pending exception or interrupt at that moment), so if the NMI is not
injected, either you have a bug somewhere (why is
kvm_vm_exit_on_cnt_overflow global?) or your guest ignores NMIs. Does
your guest react to an NMI if you send one via the qemu monitor (type
"nmi 0" in the qemu monitor)?

Post your code here; maybe I'll see something.

--
                        Gleb.
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html