On 15/11/2018 10:36, Julien Grall wrote:
> Hi Andrew,
>
> On 11/15/18 10:26 AM, Andrew Cooper wrote:
>> On 15/11/2018 10:13, Julien Grall wrote:
>>> (+ Andre)
>>>
>>> On 11/15/18 12:47 AM, Andrew Cooper wrote:
>>>> On 14/11/2018 12:49, Julien Grall wrote:
>>>>> Hi Mirela,
>>>>>
>>>>> On 14/11/2018 12:08, Mirela Simonovic wrote:
>>>>>>
>>>>>>
>>>>>> On 11/13/2018 09:32 AM, Andrew Cooper wrote:
>>>>>>> On 12/11/2018 19:56, Julien Grall wrote:
>>>>>>>> Hi Andrew,
>>>>>>>>
>>>>>>>> On 11/12/18 4:41 PM, Andrew Cooper wrote:
>>>>>>>>> On 12/11/18 16:35, Mirela Simonovic wrote:
>>>>>>>>>>>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>>>>>>>>>>>> index e594b48d81..7f8105465c 100644
>>>>>>>>>>>> --- a/xen/arch/arm/domain.c
>>>>>>>>>>>> +++ b/xen/arch/arm/domain.c
>>>>>>>>>>>> @@ -97,6 +97,11 @@ static void ctxt_switch_from(struct vcpu *p)
>>>>>>>>>>>>           if ( is_idle_vcpu(p) )
>>>>>>>>>>>>               return;
>>>>>>>>>>>>
>>>>>>>>>>>> +    /* VCPU's context should not be saved if its domain is suspended */
>>>>>>>>>>>> +    if ( p->domain->is_shut_down &&
>>>>>>>>>>>> +        (p->domain->shutdown_code == SHUTDOWN_suspend) )
>>>>>>>>>>>> +        return;
>>>>>>>>>>> SHUTDOWN_suspend is used in Xen for another purpose (see
>>>>>>>>>>> SCHEDOP_shutdown). The other user of that code relies on all
>>>>>>>>>>> the state being saved on suspend.
>>>>>>>>>>>
>>>>>>>>>> We just need a flag to mark a domain as suspended, and I do
>>>>>>>>>> believe SHUTDOWN_suspend is not used anywhere else.
>>>>>>>>>> Let's come back to this.
>>>>>>>>> SHUTDOWN_suspend is used for migration.  Grep for it through
>>>>>>>>> the Xen tree and you'll find several pieces of documentation,
>>>>>>>>> including the description of what this shutdown code means.
>>>>>>>>>
>>>>>>>>> What you are introducing here is not a shutdown - it is a
>>>>>>>>> suspend with the intent to resume executing later.  As such,
>>>>>>>>> it shouldn't use Xen's shutdown infrastructure, which exists
>>>>>>>>> mainly to communicate with the toolstack.
>>>>>>>> Would domain pause/unpause be a better solution?
>>>>>>> Actually yes - that sounds like a very neat solution.
>>>>>>
>>>>>> I believe domain pause will not work here - a domain cannot pause
>>>>>> itself, i.e. the current domain cannot be paused. This
>>>>>> functionality seems to assume that a domain is pausing another
>>>>>> domain. Or I missed the point.
>>>>>
>>>>> Yes, domain pause/unpause will not work. However, you can
>>>>> introduce a boolean to tell you whether the domain was suspended.
>>>>>
>>>>> I actually quite like how suspend works for x86 HVM. It is based on
>>>>> pause/unpause. Have a look at hvm_s3_{suspend/resume}.
>>>>
>>>> That only exists because the ACPI controller is/was implemented in
>>>> QEMU.  I wouldn't recommend copying it.
>>>
>>> May I ask why? I don't think the properties are very different from
>>> Arm here.
>>
>> If you observe, that can only be actioned by a hypercall from qemu.  It
>> can't be actioned by an ACPI controller emulated by Xen.
>
> How does the ACPI controller emulated by Xen deal with suspend/resume?
> Do you have any pointers?

It doesn't, and that is one of the outstanding issues for trying to make
it work sensibly.  In the end it wasn't merged, and is still on
someone’s TODO list.

>
>>
>>>
>>>>
>>>> Having spent some more time thinking about this problem, what
>>>> properties does the PSCI call have?
>>>>
>>>> I gather from other parts of this thread that there may be a partial
>>>> reset of state?  Beyond that, what else?
>>>>
>>>> I ask, because conceptually, domU suspend to RAM is literally just
>>>> "de-schedule until we want to wake up", and in this case, it appears
>>>> to be "wake up on any of the still-active interrupts".  We've
>>>> already got a scheduler API for that, and it's called vcpu_block().
>>>> This already exists with "wait until a new event occurs" semantics.
>>>>
>>>> Is there anything else I've missed?
>>>
>>> All vCPUs but the vCPU calling SYSTEM_SUSPEND should be off. Also,
>>> only events on that vCPU can trigger a resume. All the other events
>>> should not be taken into account.
>>>
>>> My worry with vcpu_block() is that we don't prevent the other vCPUs
>>> from running. Technically they should be off, but I would like some
>>> safety to avoid any potential corner case (i.e. another way to turn
>>> a vCPU on).
>>
>> Why not have the SYSTEM_SUSPEND check that all other vCPUs are DOWN
>> first, and fail the call if not?
>
> The code already checks for that. My concern is there might be (today
> or in the future) other bits of Xen that can potentially turn on the
> vCPU (e.g. the toolstack).

I wouldn't be concerned - there is nothing you can do about it.

Because of things like domain construction / migration / emulation /
etc, the toolstack(/control domain) has all the low level hooks into Xen
to do everything.  Your concern here about the toolstack onlining other
vcpus is just as problematic as the toolstack issuing an unpause
hypercall, or mapping part of the guest and splatting /dev/random
everywhere.

Any state you can arrange in Xen can be undone/altered by the toolstack,
and you've got to accept that the toolstack can do this but won't,
because it is a trusted piece of the system with a vested interest in
the guest not crashing.

Beyond that, you'll want to use the most appropriate interfaces within
Xen to implement this, and it very much sounds like vcpu_block() has the
semantics that you want.

>
>>
>> If instead of waiting for any event, you need to wait for a specific
>> event, there is also vcpu_poll(), which is a related scheduler API.
> I guess we need to agree on what kind of event can resume the guest
> from suspend. I am not convinced that all events should be treated
> equally here.
>
> By vcpu_poll, do you mean do_poll?

That is the hypercall, but there is a variation of it for internal use.
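
For completeness, the guest-visible side is SCHEDOP_poll, which waits
on a specific set of event channel ports rather than on any event.  A
minimal guest-side sketch, assuming the usual HYPERVISOR_sched_op()
wrapper is available in the guest's environment:

    /* Block until the given event channel port is pending. */
    static void wait_for_port(evtchn_port_t port)
    {
        struct sched_poll poll = {
            .nr_ports = 1,
            .timeout  = 0,          /* 0 == wait indefinitely */
        };

        set_xen_guest_handle(poll.ports, &port);
        HYPERVISOR_sched_op(SCHEDOP_poll, &poll);
    }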

~Andrew
