On Thu, 2020-08-20 at 11:58 +0200, Paolo Bonzini wrote:
> On 20/08/20 11:13, Maxim Levitsky wrote:
> > @@ -3912,6 +3914,14 @@ static int svm_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
> >     vmcb_gpa = GET_SMSTATE(u64, smstate, 0x7ee0);
> >  
> >     if (guest) {
> > +           /*
> > +            * This can happen if SVM was not enabled prior to #SMI,
> > +            * but the guest corrupted the SMM state save area and
> > +            * marked SVM as enabled there.
> > +            */
> > +           if (!svm->nested.initialized)
> > +                   return 1;
> > +
> >             if (kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map) == -EINVAL)
> >                     return 1;
> 
> This can also happen if you live migrate while in SMM (EFER.SVME=0).
> You need to check for the SVME bit in the SMM state save area, and:
> 
> 1) triple fault if it is clear
> 
> 2) call svm_allocate_nested if it is set.
> 
> Paolo
> 
Makes sense, will do.
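
Something along these lines, I guess (rough, untested sketch against this
patch; I'm assuming EFER is saved at offset 0x7ed0 in the SMM state save
area, the same slot that rsm_load_state_64 reads, and that
svm_allocate_nested() returns 0 on success):

	u64 saved_efer = GET_SMSTATE(u64, smstate, 0x7ed0); /* assumed EFER slot */

	vmcb_gpa = GET_SMSTATE(u64, smstate, 0x7ee0);

	if (guest) {
		/*
		 * The saved state claims we were in guest mode, but with
		 * EFER.SVME clear that is not a state we could have saved
		 * ourselves, so fail the RSM.
		 */
		if (!(saved_efer & EFER_SVME))
			return 1;

		/*
		 * After a live migration while in SMM the nested state is
		 * not allocated yet, so allocate it here instead of
		 * failing as this patch did.
		 */
		if (!svm->nested.initialized && svm_allocate_nested(svm))
			return 1;

		if (kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map) == -EINVAL)
			return 1;
		...
	}

I assume returning 1 here is enough for the triple fault part, since the
RSM emulation then fails.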

Best regards,
        Maxim Levitsky
