Re: [Xen-devel] [PATCH 00/10] PVH VCPU hotplug support

2016-11-07 Thread Boris Ostrovsky

 Boris Ostrovsky (10):
   x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
>>> Why is this necessary?  Given that a paravirtual hotplug mechanism
>>> already exists, why isn't its equivalent mechanism suitable?
>> PV guests register a xenstore watch and the toolstack updates the cpu's
>> "available" entry. The ACPI codepath (at least for Linux guests) is not
>> involved at all.
>>
>> I don't think we can use anything like that in the hypervisor.
> There must be something in the hypervisor; what currently prevents PV
> guests from ignoring xenstore and onlining CPUs themselves?
>
> Or do we currently have nothing... ?


I don't think we have anything. libxl__set_vcpuonline_xenstore() is the
only thing that the toolstack does.

HVM is *possibly* more strict in that onlining involves qemu, but I am
not sure even about that (especially qemu-trad, which also triggers
hotplug via a xenstore watch).


>
>>
   acpi: Define ACPI IO registers for PVH guests
>>> Can Xen use pm1b, or does there have to be a pm1a available to the guest?
>> pm1a is a required block (unlike pm1b). ACPICA, for example, always
>> first checks pm1a when handling an SCI.
>>
>> (And how would having only pm1b have helped?)
> For the HVM case, I think we are going to need one pm1 block
> belonging to qemu, and one belonging to Xen.

The only place we use the pm1 block in the hypervisor is for the pmtimer
(and I am actually not sure I see how qemu uses it for Xen guests).

>
   acpi: Make pmtimer optional in FADT
   acpi: PVH guests need _E02 method
>>> Patch 6 Reviewed-by: Andrew Cooper 
>>>
   pvh/ioreq: Install handlers for ACPI-related PVH IO accesses
>>> Do not make any assumptions about PVHness based on IOREQ servers.  It
>>> will not be true for usecases such as vGPU.
>> Is this comment related to the last patch or is it a general one?  If
>> it's the latter and we use XEN_X86_EMU_ACPI then I think this will not
>> be an issue.
> It was about that patch specifically, but XEN_X86_EMU_ACPI is definitely
> the better way to go.
>
> The only question is whether there are other ACPI things we might wish
> to emulate in the future (PCI hotplug by any chance?), in which case,
> should we be slightly more specific than just ACPI in the flag name?

The flag would be meant to say that no ACPI accesses are emulated by
qemu, and that would be true for any accesses by PVH guests --- either
CPU- or PCI-related.

But the name is somewhat misleading, even without considering other
ACPI-related things: we do emulate ACPI, but in the hypervisor and not
in qemu. (As ridiculous as it sounds, that actually was one of the
reasons why I didn't use a flag.) EMU_NO_DM? But that's the whole PVH thing.


-boris


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 00/10] PVH VCPU hotplug support

2016-11-07 Thread Andrew Cooper
On 07/11/16 14:19, Boris Ostrovsky wrote:
> On 11/07/2016 06:41 AM, Andrew Cooper wrote:
>> On 06/11/16 21:42, Boris Ostrovsky wrote:
>>> This series adds support for ACPI-based VCPU hotplug for unprivileged
>>> PVH guests.
>>>
>>> New XEN_DOMCTL_set_avail_vcpus is introduced and is called during
>>> guest creation and in response to 'xl vcpu-set' command. This domctl
>>> updates GPE0's status and enable registers and sends an SCI to the
>>> guest using (newly added) VIRQ_SCI.
>> Thank you for doing this.  Getting ACPI hotplug working has been a low
>> item on my TODO list for a while now.
>>
>> Some queries and comments however.
>>
>> This series is currently very PVH centric, to the point of making it
>> unusable for plain HVM guests.  While I won't insist on you implementing
>> this for HVM (there are some particularly awkward migration problems to
>> be considered), I do insist that its implementation isn't tied
>> implicitly to being PVH.
>>
>> The first part of this will be controlling the hypervisor emulation of
>> the PM1* blocks with an XEN_X86_EMU_* flag just like all other emulation.
> Something like XEN_X86_EMU_ACPI?

Sounds good.

>
> That would also eliminate the need for explicitly setting
> HVM_PARAM_NR_IOREQ_SERVER_PAGES to zero, which I used as an indication
> that we should have an IO handler in the hypervisor. Paul (copied)
> didn't like that.

Definitely an improvement.

>>> Boris Ostrovsky (10):
>>>   x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
>> Why is this necessary?  Given that a paravirtual hotplug mechanism
>> already exists, why isn't its equivalent mechanism suitable?
> PV guests register a xenstore watch and the toolstack updates the cpu's
> "available" entry. The ACPI codepath (at least for Linux guests) is not
> involved at all.
>
> I don't think we can use anything like that in the hypervisor.

There must be something in the hypervisor; what currently prevents PV
guests from ignoring xenstore and onlining CPUs themselves?

Or do we currently have nothing... ?

>
>
>>>   acpi: Define ACPI IO registers for PVH guests
>> Can Xen use pm1b, or does there have to be a pm1a available to the guest?
> pm1a is a required block (unlike pm1b). ACPICA, for example, always
> first checks pm1a when handling an SCI.
>
> (And how would having only pm1b have helped?)

For the HVM case, I think we are going to need one pm1 block
belonging to qemu, and one belonging to Xen.

>
>>>   pvh: Set online VCPU map to avail_vcpus
>>>   acpi: Power and Sleep ACPI buttons are not emulated
>> PVH might not want power/sleep, but you cannot assume that HVM guests
>> have a paravirtual mechanism for shutting down.
> AFAIK they don't rely on a button-initiated codepath. At least Linux
> doesn't.
>
> I don't know the Windows path though. I can add ACPI_HAS_BUTTONS.

Windows very definitely does respond to button presses (although not in
a helpful way).  Please keep them enabled by default for HVM guests,
even if we disallow their use with PVH.

>
>>>   acpi: Make pmtimer optional in FADT
>>>   acpi: PVH guests need _E02 method
>> Patch 6 Reviewed-by: Andrew Cooper 
>>
>>>   pvh/ioreq: Install handlers for ACPI-related PVH IO accesses
>> Do not make any assumptions about PVHness based on IOREQ servers.  It
>> will not be true for usecases such as vGPU.
> Is this comment related to the last patch or is it a general one?  If
> it's the latter and we use XEN_X86_EMU_ACPI then I think this will not
> be an issue.

It was about that patch specifically, but XEN_X86_EMU_ACPI is definitely
the better way to go.

The only question is whether there are other ACPI things we might wish
to emulate in the future (PCI hotplug by any chance?), in which case,
should we be slightly more specific than just ACPI in the flag name?

~Andrew



Re: [Xen-devel] [PATCH 00/10] PVH VCPU hotplug support

2016-11-07 Thread Boris Ostrovsky
On 11/07/2016 06:41 AM, Andrew Cooper wrote:
> On 06/11/16 21:42, Boris Ostrovsky wrote:
>> This series adds support for ACPI-based VCPU hotplug for unprivileged
>> PVH guests.
>>
>> New XEN_DOMCTL_set_avail_vcpus is introduced and is called during
>> guest creation and in response to 'xl vcpu-set' command. This domctl
>> updates GPE0's status and enable registers and sends an SCI to the
>> guest using (newly added) VIRQ_SCI.
> Thank you for doing this.  Getting ACPI hotplug working has been a low
> item on my TODO list for a while now.
>
> Some queries and comments however.
>
> This series is currently very PVH centric, to the point of making it
> unusable for plain HVM guests.  While I won't insist on you implementing
> this for HVM (there are some particularly awkward migration problems to
> be considered), I do insist that its implementation isn't tied
> implicitly to being PVH.
>
> The first part of this will be controlling the hypervisor emulation of
> the PM1* blocks with an XEN_X86_EMU_* flag just like all other emulation.

Something like XEN_X86_EMU_ACPI?

That would also eliminate the need for explicitly setting
HVM_PARAM_NR_IOREQ_SERVER_PAGES to zero, which I used as an indication
that we should have an IO handler in the hypervisor. Paul (copied)
didn't like that.


>
>>
>> Boris Ostrovsky (10):
>>   x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
> Why is this necessary?  Given that a paravirtual hotplug mechanism
> already exists, why isn't its equivalent mechanism suitable?

PV guests register a xenstore watch and the toolstack updates the cpu's
"available" entry. The ACPI codepath (at least for Linux guests) is not
involved at all.

I don't think we can use anything like that in the hypervisor.


>
>>   acpi: Define ACPI IO registers for PVH guests
> Can Xen use pm1b, or does there have to be a pm1a available to the guest?

pm1a is a required block (unlike pm1b). ACPICA, for example, always
first checks pm1a when handling an SCI.

(And how would having only pm1b have helped?)

>
>>   pvh: Set online VCPU map to avail_vcpus
>>   acpi: Power and Sleep ACPI buttons are not emulated
> PVH might not want power/sleep, but you cannot assume that HVM guests
> have a paravirtual mechanism for shutting down.

AFAIK they don't rely on a button-initiated codepath. At least Linux
doesn't.

I don't know the Windows path though. I can add ACPI_HAS_BUTTONS.

>
>>   acpi: Make pmtimer optional in FADT
>>   acpi: PVH guests need _E02 method
> Patch 6 Reviewed-by: Andrew Cooper 
>
>>   pvh/ioreq: Install handlers for ACPI-related PVH IO accesses
> Do not make any assumptions about PVHness based on IOREQ servers.  It
> will not be true for usecases such as vGPU.

Is this comment related to the last patch or is it a general one?  If
it's the latter and we use XEN_X86_EMU_ACPI then I think this will not
be an issue.

-boris





Re: [Xen-devel] [PATCH 00/10] PVH VCPU hotplug support

2016-11-07 Thread Andrew Cooper
On 06/11/16 21:42, Boris Ostrovsky wrote:
> This series adds support for ACPI-based VCPU hotplug for unprivileged
> PVH guests.
>
> New XEN_DOMCTL_set_avail_vcpus is introduced and is called during
> guest creation and in response to 'xl vcpu-set' command. This domctl
> updates GPE0's status and enable registers and sends an SCI to the
> guest using (newly added) VIRQ_SCI.

Thank you for doing this.  Getting ACPI hotplug working has been a low
item on my TODO list for a while now.

Some queries and comments however.

This series is currently very PVH centric, to the point of making it
unusable for plain HVM guests.  While I won't insist on you implementing
this for HVM (there are some particularly awkward migration problems to
be considered), I do insist that its implementation isn't tied
implicitly to being PVH.

The first part of this will be controlling the hypervisor emulation of
the PM1* blocks with an XEN_X86_EMU_* flag just like all other emulation.

>
>
> Boris Ostrovsky (10):
>   x86/domctl: Add XEN_DOMCTL_set_avail_vcpus

Why is this necessary?  Given that a paravirtual hotplug mechanism
already exists, why isn't its equivalent mechanism suitable?

>   acpi: Define ACPI IO registers for PVH guests

Can Xen use pm1b, or does there have to be a pm1a available to the guest?

>   pvh: Set online VCPU map to avail_vcpus
>   acpi: Power and Sleep ACPI buttons are not emulated

PVH might not want power/sleep, but you cannot assume that HVM guests
have a paravirtual mechanism for shutting down.

>   acpi: Make pmtimer optional in FADT
>   acpi: PVH guests need _E02 method

Patch 6 Reviewed-by: Andrew Cooper 

>   pvh/ioreq: Install handlers for ACPI-related PVH IO accesses

Do not make any assumptions about PVHness based on IOREQ servers.  It
will not be true for usecases such as vGPU.

~Andrew



[Xen-devel] [PATCH 00/10] PVH VCPU hotplug support

2016-11-06 Thread Boris Ostrovsky
This series adds support for ACPI-based VCPU hotplug for unprivileged
PVH guests.

New XEN_DOMCTL_set_avail_vcpus is introduced and is called during
guest creation and in response to 'xl vcpu-set' command. This domctl
updates GPE0's status and enable registers and sends an SCI to the
guest using (newly added) VIRQ_SCI.


Boris Ostrovsky (10):
  x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
  acpi: Define ACPI IO registers for PVH guests
  pvh: Set online VCPU map to avail_vcpus
  acpi: Power and Sleep ACPI buttons are not emulated
  acpi: Make pmtimer optional in FADT
  acpi: PVH guests need _E02 method
  pvh/ioreq: Install handlers for ACPI-related PVH IO accesses
  pvh/acpi: Handle ACPI accesses for PVH guests
  events/x86: Define SCI virtual interrupt
  pvh: Send an SCI on VCPU hotplug event

 tools/firmware/hvmloader/util.c   |  3 +-
 tools/flask/policy/modules/dom0.te|  2 +-
 tools/flask/policy/modules/xen.if |  4 +-
 tools/libacpi/build.c |  5 +++
 tools/libacpi/libacpi.h   |  1 +
 tools/libacpi/mk_dsdt.c   | 10 ++---
 tools/libacpi/static_tables.c | 31 ++---
 tools/libxc/include/xenctrl.h |  5 +++
 tools/libxc/xc_dom_x86.c  | 14 ++
 tools/libxl/libxl.c   | 10 -
 tools/libxl/libxl_arch.h  |  4 ++
 tools/libxl/libxl_arm.c   |  6 +++
 tools/libxl/libxl_dom.c   |  7 +++
 tools/libxl/libxl_x86.c   |  6 +++
 tools/libxl/libxl_x86_acpi.c  |  6 +--
 xen/arch/x86/domctl.c | 25 +++
 xen/arch/x86/hvm/hvm.c| 13 --
 xen/arch/x86/hvm/ioreq.c  | 83 +++
 xen/include/asm-x86/domain.h  |  6 +++
 xen/include/asm-x86/event.h   |  3 +-
 xen/include/asm-x86/hvm/domain.h  |  6 +++
 xen/include/asm-x86/hvm/ioreq.h   |  1 +
 xen/include/public/arch-x86/xen-mca.h |  2 -
 xen/include/public/arch-x86/xen.h |  3 ++
 xen/include/public/domctl.h   |  9 
 xen/include/public/hvm/ioreq.h|  3 ++
 xen/xsm/flask/hooks.c |  3 ++
 xen/xsm/flask/policy/access_vectors   |  2 +
 28 files changed, 235 insertions(+), 38 deletions(-)

-- 
2.7.4

