On Thu, Apr 01, 2021, Paolo Bonzini wrote:
> On 01/04/21 16:38, Maxim Levitsky wrote:
> > +static int kvm_do_deliver_pending_exception(struct kvm_vcpu *vcpu)
> > +{
> > +   int class1, class2, ret;
> > +
> > +   /* try to deliver current pending exception as VM exit */
> > +   if (is_guest_mode(vcpu)) {
> > +           ret = kvm_x86_ops.nested_ops->deliver_exception_as_vmexit(vcpu);
> > +           if (ret || !vcpu->arch.pending_exception.valid)
> > +                   return ret;
> > +   }
> > +
> > +   /* No injected exception, so just deliver the payload and inject it */
> > +   if (!vcpu->arch.injected_exception.valid) {
> > +           trace_kvm_inj_exception(vcpu->arch.pending_exception.nr,
> > +                                   vcpu->arch.pending_exception.has_error_code,
> > +                                   vcpu->arch.pending_exception.error_code);
> > +queue:
> 
> If you move the queue label to the top of the function, you can "goto queue" 
> for #DF as well and you don't need to call kvm_do_deliver_pending_exception 
> again.  In fact you can merge this function and kvm_deliver_pending_exception 
> completely:
> 
> 
> static int kvm_deliver_pending_exception_as_vmexit(struct kvm_vcpu *vcpu)
> {
>       WARN_ON(!vcpu->arch.pending_exception.valid);
>       if (is_guest_mode(vcpu))
>               return kvm_x86_ops.nested_ops->deliver_exception_as_vmexit(vcpu);
>       else
>               return 0;
> }
> 
> static int kvm_merge_injected_exception(struct kvm_vcpu *vcpu)
> {
>       /*
>        * First check if the pending exception takes precedence
>        * over the injected one, which will be reported in the
>        * vmexit info.
>        */
>       ret = kvm_deliver_pending_exception_as_vmexit(vcpu);
>       if (ret || !vcpu->arch.pending_exception.valid)
>               return ret;
> 
>       if (vcpu->arch.injected_exception.nr == DF_VECTOR) {
>               ...
>               return 0;
>       }
>       ...
>       if ((class1 == EXCPT_CONTRIBUTORY && class2 == EXCPT_CONTRIBUTORY)
>           || (class1 == EXCPT_PF && class2 != EXCPT_BENIGN)) {
>               ...
>       }
>       vcpu->arch.injected_exception.valid = false;
> }
> 
> static int kvm_deliver_pending_exception(struct kvm_vcpu *vcpu)
> {
>       if (!vcpu->arch.pending_exception.valid)
>               return 0;
> 
>       if (vcpu->arch.injected_exception.valid)
>               kvm_merge_injected_exception(vcpu);
> 
>       ret = kvm_deliver_pending_exception_as_vmexit(vcpu);
>       if (ret || !vcpu->arch.pending_exception.valid)

I really don't like querying arch.pending_exception.valid to see if the exception
was morphed to a VM-Exit.  I also find kvm_deliver_pending_exception_as_vmexit()
to be misleading; to me, that reads as being a command, i.e. "deliver this
pending exception as a VM-Exit".
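
Purely as a sketch (all names below are made up, not from this series), I'd
rather the delivery path report what it did via an explicit return value, so
the caller never has to re-read vcpu->arch to infer that the exception was
turned into a VM-Exit:

enum exception_outcome {
	EXCEPTION_INJECTED,	/* queued for delivery at the current level */
	EXCEPTION_VMEXITED,	/* morphed into a VM-Exit to L1 */
	EXCEPTION_BLOCKED,	/* can't be delivered yet, retry later */
};

	...

	outcome = static_call(kvm_x86_deliver_exception)(vcpu);
	if (outcome == EXCEPTION_VMEXITED)
		return 0;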

It'd also be nice to make the helpers closer to pure functions, i.e. pass the
exception as a param instead of pulling it from vcpu->arch.
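
E.g. something like (again hypothetical, and assuming the nested op is also
reworked to take the exception info instead of digging it out of vcpu->arch):

static int kvm_deliver_exception_as_vmexit(struct kvm_vcpu *vcpu, u8 vector,
					   bool has_error_code, u32 error_code)
{
	/*
	 * Everything needed to decide on and build the VM-Exit comes in via
	 * the params; nothing is pulled out of vcpu->arch here.
	 */
	return kvm_x86_ops.nested_ops->deliver_exception_as_vmexit(vcpu, vector,
								   has_error_code,
								   error_code);
}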

Now that we have static_call, the number of calls into vendor code isn't a huge
issue.  Moving nested_run_pending to arch code would help, too.  What about
doing something like:

static bool kvm_l1_wants_exception_vmexit(struct kvm_vcpu *vcpu, u8 vector)
{
        return is_guest_mode(vcpu) && kvm_x86_l1_wants_exception(vcpu, vector);
}

        ...

        if (!kvm_x86_exception_allowed(vcpu))
                return -EBUSY;

        if (kvm_l1_wants_exception_vmexit(vcpu, vcpu->arch...))
                return kvm_x86_deliver_exception_as_vmexit(...);

>               return ret;
> 
>       trace_kvm_inj_exception(vcpu->arch.pending_exception.nr,
>                               vcpu->arch.pending_exception.has_error_code,
>                               vcpu->arch.pending_exception.error_code);
>       ...
> }
> 
