Hi Julien,

On Fri, Feb 13, 2026 at 1:48 PM Julien Grall <[email protected]> wrote:
>
> Hi Mykola,
>
> On 13/02/2026 11:36, Mykola Kvach wrote:
> > For this reason it may be worth conditionally recommending (or even
> > auto-selecting) `vwfi=native` when direct injection is enabled for a
> > vCPU, so measurements reflect the actual delivery fast-path rather than
> > exit/scheduling overhead.
>
> I don't think this is a straightforward answer. "vwfi=native" is
> beneficial when you have a single vCPU scheduled per pCPU. But if you
> have multiple vCPUs running, then you may impair the overall performance
> of the system as the scheduler will not be able to run another vCPU even
> if the current vCPU is doing nothing (it is waiting for an interrupt).
>
> As a data point, Xen didn't initially trap WFI/WFE. But we noticed a
> lot of slowdown during boot if all the vCPUs for a guest were running
> on the same pCPU. The difference was quite noticeable.
>
> So instead of recommending to always set "vwfi=native", I would consider
> an approach where Xen decides whether WFI/WFE is trapped based on the
> number of vCPUs that can be scheduled on a given pCPU. This could be
> adjusted on demand.

Thanks for the clarification. I agree: recommending vwfi=native
unconditionally is not correct.

What I meant was specifically for benchmarking direct injection in a
1:1 vCPU:pCPU setup (or with vCPUs pinned), where trapping WFI/WFE adds
extra exits and can hide the fast-path benefit.

For general setups with oversubscription, vwfi=trap is the right
default, because it lets Xen schedule another runnable vCPU instead of
leaving a pCPU effectively idle while the guest sits in WFI.

I like your suggestion: make WFI/WFE trapping adaptive based on whether
the current pCPU has other runnable vCPUs.
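As a rough illustration of the decision you describe (helper and parameter names are hypothetical, not actual Xen code; only the HCR_EL2.TWI/TWE bit positions come from the Arm ARM), the per-pCPU policy could look like:

```c
#include <stdbool.h>

/* Architectural trap bits in HCR_EL2 (bit positions per the Arm ARM). */
#define HCR_TWI (1UL << 13) /* trap WFI */
#define HCR_TWE (1UL << 14) /* trap WFE */

/*
 * Hypothetical helper: decide whether WFI/WFE should be trapped for the
 * vCPU about to run, based on how many runnable vCPUs share its pCPU.
 * With a single runnable vCPU, executing WFI natively avoids exit
 * overhead; with contention, trapping lets the scheduler run another
 * vCPU instead of leaving the pCPU effectively idle.
 */
static bool should_trap_wfx(unsigned int nr_runnable_on_pcpu)
{
    return nr_runnable_on_pcpu > 1;
}

/* Hypothetical: trap bits to merge into HCR_EL2 on context switch. */
static unsigned long wfx_trap_bits(unsigned int nr_runnable_on_pcpu)
{
    return should_trap_wfx(nr_runnable_on_pcpu) ? (HCR_TWI | HCR_TWE) : 0;
}
```

The interesting part would be when to re-evaluate this (vCPU wakeup, migration, context switch), since that determines how quickly the policy adapts on demand as you suggest.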


Best regards,
Mykola
