On Thu, Mar 17, 2022 at 12:07 PM Jan Beulich <jbeul...@suse.com> wrote:
>
> On 17.03.2022 16:59, Tamas K Lengyel wrote:
> > On Thu, Mar 17, 2022 at 11:06 AM Jan Beulich <jbeul...@suse.com> wrote:
> >>
> >> On 17.03.2022 15:43, Tamas K Lengyel wrote:
> >>> On Thu, Mar 17, 2022 at 9:56 AM Jan Beulich <jbeul...@suse.com> wrote:
> >>>> On 10.03.2022 19:44, Tamas K Lengyel wrote:
> >>>>> @@ -1155,6 +1154,8 @@ static int cf_check hvm_load_cpu_ctxt(struct 
> >>>>> domain *d, hvm_domain_context_t *h)
> >>>>>      v->arch.dr6   = ctxt.dr6;
> >>>>>      v->arch.dr7   = ctxt.dr7;
> >>>>>
> >>>>> +    hvm_set_interrupt_shadow(v, ctxt.interruptibility_info);
> >>>>
> >>>> Setting reserved bits as well as certain combinations of bits will
> >>>> cause VM entry to fail. I think it would be nice to report this as
> >>>> an error here rather than waiting for the VM entry failure.
> >>>
> >>> Not sure if this would be the right spot to do that since that's all
> >>> VMX specific and this is the common hvm code.
> >>
> >> Well, it would be the VMX hook to do the checking, with an error
> >> propagated back here.
> >
> > I'm actually against it because the overhead of that error checking
> > during VM forking would be significant with really no benefit. We are
> > copying the state from the parent where it was running fine before, so
> > doing that sanity checking thousands of times per second when we
> > already know it's fine is bad.
>
> I can see your point, but my concern is not forking but normal migration
> or restoring of guests, where the incoming data is of effectively
> unknown origin.

IMHO for that route the error checking is better performed at the
toolstack level that sends the data to Xen.

Tamas
