On 07/06/18 12:05, Jan Beulich wrote:
>>>> On 06.06.18 at 16:16, <andrew.coop...@citrix.com> wrote:
>> On 06/06/18 14:50, Jan Beulich wrote:
>>>>>> On 04.06.18 at 15:59, <andrew.coop...@citrix.com> wrote:
>>>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>>>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>>>> @@ -3696,6 +3696,7 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>>>>               */
>>>>              __vmread(EXIT_QUALIFICATION, &exit_qualification);
>>>>              HVMTRACE_1D(TRAP_DEBUG, exit_qualification);
>>>> +            __restore_debug_registers(v);
>>>>              write_debugreg(6, exit_qualification | DR_STATUS_RESERVED_ONE);
>>> The change is certainly correct as is, but I'd still like to put out for
>>> discussion the alternative option:
>>>
>>>     if ( v->arch.hvm_vcpu.flag_dr_dirty )
>>>         write_debugreg(6, exit_qualification | DR_STATUS_RESERVED_ONE);
>>>     else
>>>         v->arch.debugreg[6] = exit_qualification | DR_STATUS_RESERVED_ONE;
>>>
>>> After all, the guest may know it's single stepping, and may not care to
>>> read DR6 at all.
>> All of this code changes across the series (so this specific suggestion
>> is incorrect), but to the recommendation in general...
> While I've not made it through the second half of the series yet,
> another consideration: To avoid the double DR6 write, yet still avoid
> an immediate further exit to restore debug registers, why not
>
>     if ( v->arch.hvm_vcpu.flag_dr_dirty )
>         write_debugreg(6, exit_qualification | DR_STATUS_RESERVED_ONE);
>     else
>     {
>         v->arch.debugreg[6] = exit_qualification | DR_STATUS_RESERVED_ONE;
>         __restore_debug_registers(v);
>     }

Do you really think the added complexity is worth shaving a few cycles
off an almost unused cold path?  The double %dr6 write (which isn't
serialising) happens at most once per vcpu context switch, and only ever
when the guest is debugging.
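
For reference, the lazy restore path in question looks roughly like this
(a simplified sketch only, using the names from the quoted hunk; the real
function in vmx.c differs in detail, and %dr7 handling is elided):

    /* Illustrative sketch, not the exact Xen implementation. */
    static void __restore_debug_registers(struct vcpu *v)
    {
        if ( v->arch.hvm_vcpu.flag_dr_dirty )
            return; /* Guest %dr values are already live in hardware. */

        v->arch.hvm_vcpu.flag_dr_dirty = 1;

        write_debugreg(0, v->arch.debugreg[0]);
        write_debugreg(1, v->arch.debugreg[1]);
        write_debugreg(2, v->arch.debugreg[2]);
        write_debugreg(3, v->arch.debugreg[3]);
        write_debugreg(6, v->arch.debugreg[6]);
        /* %dr7 is loaded from the VMCS and not touched here. */
    }

Calling this before the write_debugreg(6, ...) matters because, with
flag_dr_dirty still clear, the hardware %dr6 update is neither tracked nor
saved back on schedule-out, so the guest would end up seeing the stale
v->arch.debugreg[6] value.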

Also remember that this patch wants backporting to all the stable trees,
because the code has been broken for about 10 years.

~Andrew
