On 22/01/2026 4:49 pm, Alejandro Vallejo wrote:
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 40e4c71244..34e988ee61 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -797,8 +797,7 @@ static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
>      const struct cpu_policy *cp = v->domain->arch.cpu_policy;
>      int rc = 0;
>  
> -    if ( opt_hvm_fep ||
> -         (v->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor) )
> +    if ( opt_hvm_fep )
>          v->arch.hvm.vmx.exception_bitmap |= (1U << X86_EXC_UD);
>      else
>          v->arch.hvm.vmx.exception_bitmap &= ~(1U << X86_EXC_UD);
> @@ -4576,6 +4575,7 @@ void asmlinkage vmx_vmexit_handler(struct cpu_user_regs *regs)
>              /* Already handled above. */
>              break;
>          case X86_EXC_UD:
> +            BUG_ON(!IS_ENABLED(CONFIG_HVM_FEP));
>              TRACE(TRC_HVM_TRAP, vector);
>              hvm_ud_intercept(regs);
>              break;

Again, nested virt makes this more complicated than simply assuming it
can't happen.
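
For illustration only, the shape of the problem (merged_exception_bitmap()
is a made-up name here, not the real vvmx merge logic):

    /*
     * While an L2 guest runs, the exception bitmap the CPU uses is
     * conceptually the union of what Xen (L0) wants for itself and
     * what the L1 hypervisor programmed.  If L1 intercepts #UD, a #UD
     * vmexit can occur even though Xen never asked for it, so
     * "opt_hvm_fep is off, therefore no #UD exits" is not a safe
     * invariant to BUG_ON().
     */
    static unsigned int merged_exception_bitmap(unsigned int l0_bitmap,
                                                unsigned int l1_bitmap)
    {
        /* L0 has to intercept everything either level cares about. */
        return l0_bitmap | l1_bitmap;
    }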

Also, more often than I'd like, I enable #UD interception for other
reasons, and I'd prefer that doing so doesn't get any harder than it
already is.
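
By way of example, a purely hypothetical local tweak (nothing in tree;
"debug_trap_ud" is a made-up knob):

    /*
     * Force #UD interception so the guest's instruction stream can be
     * inspected while debugging.  With the proposed BUG_ON() in place,
     * this one-line change becomes a host BUG() on a !CONFIG_HVM_FEP
     * build as soon as the guest hits #UD.
     */
    static bool debug_trap_ud = true;

    if ( opt_hvm_fep || debug_trap_ud )
        v->arch.hvm.vmx.exception_bitmap |= (1U << X86_EXC_UD);
    else
        v->arch.hvm.vmx.exception_bitmap &= ~(1U << X86_EXC_UD);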

In an ideal world I'd have already upstreamed the logic to decompose
double/triple faults...

~Andrew
