On 2017/3/31 6:00, Julien Grall wrote:
>
>
> On 30/03/2017 22:49, Stefano Stabellini wrote:
>> On Thu, 30 Mar 2017, Wei Chen wrote:
>>> +    /*
>>> +     * If the SErrors option is "FORWARD", we have to prevent forwarding
>>> +     * an SError to the wrong vCPU. So before the context switch, we use
>>> +     * synchronize_serror to guarantee that any pending SError is caught
>>> +     * by the current vCPU.
>>> +     *
>>> +     * SKIP_CTXT_SWITCH_SERROR_SYNC will be set in cpu_hwcaps when the
>>> +     * SErrors option is NOT "FORWARD".
>>> +     */
>>> +    asm volatile(ALTERNATIVE("bl synchronize_serror",
>>> +                             "nop",
>>> +                             SKIP_CTXT_SWITCH_SERROR_SYNC));
>>
>>
>> This is good, but here you need to add:
>>
>>   barrier();
>>
>> because the compiler is free to reorder even asm volatile instructions
>> (it could move the asm volatile after __context_switch theoretically).
>
> ... or it could be moved beforehand because there is no barrier... What
> you want to use is asm volatile(ALTERNATIVE(...) : : : "memory");
>

I will address this in the changes to patch #16 in the next version.
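
For clarity, a minimal sketch of how the revised hunk could look with the
"memory" clobber that Julien suggests, keeping the ALTERNATIVE patching and
the SKIP_CTXT_SWITCH_SERROR_SYNC capability bit from the patch above (the
exact form may still change in the next version):

    /*
     * Sketch only: same call site as above, with a "memory" clobber so
     * the compiler cannot reorder this asm across __context_switch().
     */
    asm volatile(ALTERNATIVE("bl synchronize_serror",
                             "nop",
                             SKIP_CTXT_SWITCH_SERROR_SYNC)
                 : : : "memory");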

> Cheers,
>


-- 
Regards,
Wei Chen

