On Aug 10, 2016 05:52, "Jan Beulich" wrote:
>
> >>> On 10.08.16 at 12:55, wrote:
> > On 08/10/2016 01:12 PM, Jan Beulich wrote:
> >>>>> On 10.08.16 at 09:35, wrote:
> >>> --- a/xen/common/vm_event.c
> >>> +++ b/xen/common/vm_event.c
> >>> @@ -388,6 +388,13 @@ void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
> >>>          v = d->vcpu[rsp.vcpu_id];
> >>>
> >>>          /*
> >>> +        * Make sure the
vm_event_vcpu_pause() needs to use vcpu_pause_nosync() so that the
current vCPU does not get stuck. A consequence of this is that the
custom vm_event response handlers will not always see the real vCPU
state in v->arch.user_regs. This patch makes sure that the state is
always synchronized in vm_event_resume().