Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-12 Thread Wei Liu
On Mon, Jul 11, 2016 at 10:40:17AM -0400, Boris Ostrovsky wrote:
> On 07/11/2016 09:41 AM, Wei Liu wrote:
> > On Mon, Jul 11, 2016 at 09:33:21AM -0400, Boris Ostrovsky wrote:
> >> On 07/11/2016 06:47 AM, Wei Liu wrote:
> >>> On Fri, Jul 08, 2016 at 01:20:46PM -0400, Boris Ostrovsky wrote:
>  On 07/08/2016 12:07 PM, Wei Liu wrote:
> > On Fri, Jul 08, 2016 at 10:48:52AM -0400, Boris Ostrovsky wrote:
> >> On 07/08/2016 06:55 AM, Wei Liu wrote:
>  +
>  +/* Map page that will hold RSDP */
>  +extent = RSDP_ADDRESS >> ctxt.page_shift;
>  +rc = populate_acpi_pages(dom, , 1, );
>  +if (rc) {
>  +LOG(ERROR, "%s: populate_acpi_pages for rsdp failed with 
>  %d",
>  +__FUNCTION__, rc);
>  +goto out;
>  +}
>  +config.rsdp = (unsigned long)xc_map_foreign_range(xch, domid,
>  +  
>  ctxt.page_size,
>  +  PROT_READ | 
>  PROT_WRITE,
>  +  RSDP_ADDRESS 
>  >> ctxt.page_shift);
> >>> I think with Anthony's on-going work you should be more flexible for 
> >>> all
> >>> you tables.
> >> Not sure I understand what you mean here. You want the address
> >> (RSDP_ADDRESS) to be a variable as opposed to a macro?
> >>
> > I'm still trying to wrap my head around the possible interaction between
> > Anthony's work and your work.
> >
> > Anthony's work allows dynamically loading of firmware blobs. If you use
> > a fixed address, theoretically it can clash with firmware blobs among
> > other things libxc can load. The high address is a safe bet so that
> > probably won't happen in practice.
> >
> > Anthony's work allows loading arbitrary blobs actually. Can you take
> > advantage of that mechanism as well? That is, to specify all your tables
> > as modules and let libxc handle actually loading them  into guest
> > memory.
> >
> > Does this make sense?
> >
> > Also CC Anthony here.
>  My understanding of Anthony's series is that its goal was to provide an
>  interface to pass DSDT (and maybe some other tables) from the toolstack
>  to hvmloader.
> 
>  Here we don't have hvmloader, we are loading the tables directly into
>  guest's memory.
> 
> >>> Do you use the same hvm_start_info structure? I don't think that
> >>> structure is restricted to hvmloader.
> >>
> >> Yes, we do. However, in PVH(v2) case it will be seen next by the guest
> >> who will expect the tables to already be in memory. I.e. there is no
> >> intermediate Xen component, such as hvmloader, who can load the blobs.
> >>
> > Maybe you misunderstood. Anthony's work will load the blobs into guest
> > memory -- all fields in hvm_start_info points to guest physical address
> > IIRC. Hvmloader might want to relocate them, but that's a different
> > matter.
> 
> What's the status on Anthony's series? I can rebase on top of his tree
> (I might indeed be able to reuse some of his code) but if this is way
> off the dependencies become problematic (because Shannon's series would
> also be delayed).
> 

His series is very close to get merged. I believe toolstack in his
series only require cosmetic changes and hvmloader patches are all
acked.

> >
> >> Having said that, I wonder whether we (both x86 and ARM) could use
> >> Anthony's xc_dom_image.full_acpi_module instead 
> 
> 
> (no full_acpi_module anymore, I was looking at an earlier series
> version. I guess it's system_firmware_module now)
> 
> 
> >> of acpitables_blob that
> >> Shannon's series added. (even if we can't, I think
> >> xc_hvm_firmware_module is the right datastructure to store the blob
> >> since it has both toolstack's virtual and guest's physical addresses).
> >>
> > Yes, that's along the line I'm thinking about.
> 
> So I am confused about xc_hvm_firmware_mode: is guest_addr_out meant to
> be guest's physical or virtual?
> 
> One one hand it looks like virtual:
> 
> in
> https://lists.xenproject.org/archives/html/xen-devel/2016-06/msg02901.html
> +module->guest_addr_out = seg.vstart;
> 
> but then in
> https://lists.xenproject.org/archives/html/xen-devel/2016-06/msg02902.html
> 
> +modlist[index].paddr = module->guest_addr_out;
> 

It should be guest physical.

Wei.

> 
> -boris
> 
> 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-11 Thread Boris Ostrovsky
On 07/11/2016 09:41 AM, Wei Liu wrote:
> On Mon, Jul 11, 2016 at 09:33:21AM -0400, Boris Ostrovsky wrote:
>> On 07/11/2016 06:47 AM, Wei Liu wrote:
>>> On Fri, Jul 08, 2016 at 01:20:46PM -0400, Boris Ostrovsky wrote:
 On 07/08/2016 12:07 PM, Wei Liu wrote:
> On Fri, Jul 08, 2016 at 10:48:52AM -0400, Boris Ostrovsky wrote:
>> On 07/08/2016 06:55 AM, Wei Liu wrote:
 +
 +/* Map page that will hold RSDP */
 +extent = RSDP_ADDRESS >> ctxt.page_shift;
 +rc = populate_acpi_pages(dom, , 1, );
 +if (rc) {
 +LOG(ERROR, "%s: populate_acpi_pages for rsdp failed with %d",
 +__FUNCTION__, rc);
 +goto out;
 +}
 +config.rsdp = (unsigned long)xc_map_foreign_range(xch, domid,
 +  ctxt.page_size,
 +  PROT_READ | 
 PROT_WRITE,
 +  RSDP_ADDRESS >> 
 ctxt.page_shift);
>>> I think with Anthony's on-going work you should be more flexible for all
>>> you tables.
>> Not sure I understand what you mean here. You want the address
>> (RSDP_ADDRESS) to be a variable as opposed to a macro?
>>
> I'm still trying to wrap my head around the possible interaction between
> Anthony's work and your work.
>
> Anthony's work allows dynamically loading of firmware blobs. If you use
> a fixed address, theoretically it can clash with firmware blobs among
> other things libxc can load. The high address is a safe bet so that
> probably won't happen in practice.
>
> Anthony's work allows loading arbitrary blobs actually. Can you take
> advantage of that mechanism as well? That is, to specify all your tables
> as modules and let libxc handle actually loading them  into guest
> memory.
>
> Does this make sense?
>
> Also CC Anthony here.
 My understanding of Anthony's series is that its goal was to provide an
 interface to pass DSDT (and maybe some other tables) from the toolstack
 to hvmloader.

 Here we don't have hvmloader, we are loading the tables directly into
 guest's memory.

>>> Do you use the same hvm_start_info structure? I don't think that
>>> structure is restricted to hvmloader.
>>
>> Yes, we do. However, in PVH(v2) case it will be seen next by the guest
>> who will expect the tables to already be in memory. I.e. there is no
>> intermediate Xen component, such as hvmloader, who can load the blobs.
>>
> Maybe you misunderstood. Anthony's work will load the blobs into guest
> memory -- all fields in hvm_start_info points to guest physical address
> IIRC. Hvmloader might want to relocate them, but that's a different
> matter.

What's the status on Anthony's series? I can rebase on top of his tree
(I might indeed be able to reuse some of his code) but if this is way
off the dependencies become problematic (because Shannon's series would
also be delayed).

>
>> Having said that, I wonder whether we (both x86 and ARM) could use
>> Anthony's xc_dom_image.full_acpi_module instead 


(no full_acpi_module anymore, I was looking at an earlier series
version. I guess it's system_firmware_module now)


>> of acpitables_blob that
>> Shannon's series added. (even if we can't, I think
>> xc_hvm_firmware_module is the right datastructure to store the blob
>> since it has both toolstack's virtual and guest's physical addresses).
>>
> Yes, that's along the line I'm thinking about.

So I am confused about xc_hvm_firmware_mode: is guest_addr_out meant to
be guest's physical or virtual?

One one hand it looks like virtual:

in
https://lists.xenproject.org/archives/html/xen-devel/2016-06/msg02901.html
+module->guest_addr_out = seg.vstart;

but then in
https://lists.xenproject.org/archives/html/xen-devel/2016-06/msg02902.html

+modlist[index].paddr = module->guest_addr_out;


-boris



___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-11 Thread Anthony PERARD
On Mon, Jul 11, 2016 at 09:33:21AM -0400, Boris Ostrovsky wrote:
> On 07/11/2016 06:47 AM, Wei Liu wrote:
> > On Fri, Jul 08, 2016 at 01:20:46PM -0400, Boris Ostrovsky wrote:
> >> On 07/08/2016 12:07 PM, Wei Liu wrote:
> >>> On Fri, Jul 08, 2016 at 10:48:52AM -0400, Boris Ostrovsky wrote:
>  On 07/08/2016 06:55 AM, Wei Liu wrote:
> >> +
> >> +/* Map page that will hold RSDP */
> >> +extent = RSDP_ADDRESS >> ctxt.page_shift;
> >> +rc = populate_acpi_pages(dom, , 1, );
> >> +if (rc) {
> >> +LOG(ERROR, "%s: populate_acpi_pages for rsdp failed with %d",
> >> +__FUNCTION__, rc);
> >> +goto out;
> >> +}
> >> +config.rsdp = (unsigned long)xc_map_foreign_range(xch, domid,
> >> +  ctxt.page_size,
> >> +  PROT_READ | 
> >> PROT_WRITE,
> >> +  RSDP_ADDRESS >> 
> >> ctxt.page_shift);
> > I think with Anthony's on-going work you should be more flexible for all
> > you tables.

If the tool stack already knows where it can (or should) load the rsdp,
there is not much reason to be flexible.

>  Not sure I understand what you mean here. You want the address
>  (RSDP_ADDRESS) to be a variable as opposed to a macro?
> 
> >>> I'm still trying to wrap my head around the possible interaction between
> >>> Anthony's work and your work.
> >>>
> >>> Anthony's work allows dynamically loading of firmware blobs. If you use
> >>> a fixed address, theoretically it can clash with firmware blobs among
> >>> other things libxc can load. The high address is a safe bet so that
> >>> probably won't happen in practice.
> >>>
> >>> Anthony's work allows loading arbitrary blobs actually. Can you take
> >>> advantage of that mechanism as well? That is, to specify all your tables
> >>> as modules and let libxc handle actually loading them  into guest
> >>> memory.
> >>>
> >>> Does this make sense?
> >>>
> >>> Also CC Anthony here.
> >>
> >> My understanding of Anthony's series is that its goal was to provide an
> >> interface to pass DSDT (and maybe some other tables) from the toolstack
> >> to hvmloader.

Not anymore. The only new functionality provided by the patch series is
to load the BIOS (or OVMF) from the filesystem (instead of having this
blob embedded into hvmloader).

It does also change the way an extra acpi tables or a smbios is loaded
into guest memory for consumption by hvmloader. But that just make the
libxc code a bit cleaner.

> >> Here we don't have hvmloader, we are loading the tables directly into
> >> guest's memory.
> >>
> > Do you use the same hvm_start_info structure? I don't think that
> > structure is restricted to hvmloader.
> 
> 
> Yes, we do. However, in PVH(v2) case it will be seen next by the guest
> who will expect the tables to already be in memory. I.e. there is no
> intermediate Xen component, such as hvmloader, who can load the blobs.
> 
> Having said that, I wonder whether we (both x86 and ARM) could use
> Anthony's xc_dom_image.full_acpi_module instead of acpitables_blob that

I don't have full_acpi_module anymore in my patch series. But that does
not prevent you from introducing one.

> Shannon's series added. (even if we can't, I think
> xc_hvm_firmware_module is the right datastructure to store the blob
> since it has both toolstack's virtual and guest's physical addresses).

-- 
Anthony PERARD

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-11 Thread Julien Grall



On 11/07/16 14:42, Wei Liu wrote:

On Mon, Jul 11, 2016 at 02:39:05PM +0100, Julien Grall wrote:

Yes, we do. However, in PVH(v2) case it will be seen next by the guest
who will expect the tables to already be in memory. I.e. there is no
intermediate Xen component, such as hvmloader, who can load the blobs.

Having said that, I wonder whether we (both x86 and ARM) could use
Anthony's xc_dom_image.full_acpi_module instead of acpitables_blob that
Shannon's series added. (even if we can't, I think
xc_hvm_firmware_module is the right datastructure to store the blob
since it has both toolstack's virtual and guest's physical addresses).


In this case, xc_hvm_firmware_module would need to be renamed as ARM guests
are neither HVM nor PV.



That's trivial. It's an internal structure that we can rename at will.


FWIW, from the toolstack point of view, ARM guests is considered as PV
guest.


... while at the same time utilises HVM param...

Not complaining, just this makes me chuckle a bit. :-)


ARM guests is a combination of HVM and PV features. I agree it is a bit 
a mess, but there is code in the hypervisor/toolstack/Linux which rely 
on the type of guests (e.g LIBXL_DOMAIN_TYPE_* in libxl) and not a set 
of available features.


In the hypervisor, we are trying to move towards a set of features (i.e 
dropping is_pv_domain/is_hvm_domain in common code) as none suit ARM 
guests.


I think it will benefit for both ARM and x86 to move available features 
rather than type. However this is requiring a lot of rework which cannot 
be done quickly.


Regards,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-11 Thread Wei Liu
On Mon, Jul 11, 2016 at 02:39:05PM +0100, Julien Grall wrote:
> >Yes, we do. However, in PVH(v2) case it will be seen next by the guest
> >who will expect the tables to already be in memory. I.e. there is no
> >intermediate Xen component, such as hvmloader, who can load the blobs.
> >
> >Having said that, I wonder whether we (both x86 and ARM) could use
> >Anthony's xc_dom_image.full_acpi_module instead of acpitables_blob that
> >Shannon's series added. (even if we can't, I think
> >xc_hvm_firmware_module is the right datastructure to store the blob
> >since it has both toolstack's virtual and guest's physical addresses).
> 
> In this case, xc_hvm_firmware_module would need to be renamed as ARM guests
> are neither HVM nor PV.
> 

That's trivial. It's an internal structure that we can rename at will.

> FWIW, from the toolstack point of view, ARM guests is considered as PV
> guest.

... while at the same time utilises HVM param...

Not complaining, just this makes me chuckle a bit. :-)

Wei.

> 
> Regards,
> 
> -- 
> Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-11 Thread Wei Liu
On Mon, Jul 11, 2016 at 09:33:21AM -0400, Boris Ostrovsky wrote:
> On 07/11/2016 06:47 AM, Wei Liu wrote:
> > On Fri, Jul 08, 2016 at 01:20:46PM -0400, Boris Ostrovsky wrote:
> >> On 07/08/2016 12:07 PM, Wei Liu wrote:
> >>> On Fri, Jul 08, 2016 at 10:48:52AM -0400, Boris Ostrovsky wrote:
>  On 07/08/2016 06:55 AM, Wei Liu wrote:
> >> +
> >> +/* Map page that will hold RSDP */
> >> +extent = RSDP_ADDRESS >> ctxt.page_shift;
> >> +rc = populate_acpi_pages(dom, , 1, );
> >> +if (rc) {
> >> +LOG(ERROR, "%s: populate_acpi_pages for rsdp failed with %d",
> >> +__FUNCTION__, rc);
> >> +goto out;
> >> +}
> >> +config.rsdp = (unsigned long)xc_map_foreign_range(xch, domid,
> >> +  ctxt.page_size,
> >> +  PROT_READ | 
> >> PROT_WRITE,
> >> +  RSDP_ADDRESS >> 
> >> ctxt.page_shift);
> > I think with Anthony's on-going work you should be more flexible for all
> > you tables.
>  Not sure I understand what you mean here. You want the address
>  (RSDP_ADDRESS) to be a variable as opposed to a macro?
> 
> >>> I'm still trying to wrap my head around the possible interaction between
> >>> Anthony's work and your work.
> >>>
> >>> Anthony's work allows dynamically loading of firmware blobs. If you use
> >>> a fixed address, theoretically it can clash with firmware blobs among
> >>> other things libxc can load. The high address is a safe bet so that
> >>> probably won't happen in practice.
> >>>
> >>> Anthony's work allows loading arbitrary blobs actually. Can you take
> >>> advantage of that mechanism as well? That is, to specify all your tables
> >>> as modules and let libxc handle actually loading them  into guest
> >>> memory.
> >>>
> >>> Does this make sense?
> >>>
> >>> Also CC Anthony here.
> >>
> >> My understanding of Anthony's series is that its goal was to provide an
> >> interface to pass DSDT (and maybe some other tables) from the toolstack
> >> to hvmloader.
> >>
> >> Here we don't have hvmloader, we are loading the tables directly into
> >> guest's memory.
> >>
> > Do you use the same hvm_start_info structure? I don't think that
> > structure is restricted to hvmloader.
> 
> 
> Yes, we do. However, in PVH(v2) case it will be seen next by the guest
> who will expect the tables to already be in memory. I.e. there is no
> intermediate Xen component, such as hvmloader, who can load the blobs.
> 

Maybe you misunderstood. Anthony's work will load the blobs into guest
memory -- all fields in hvm_start_info points to guest physical address
IIRC. Hvmloader might want to relocate them, but that's a different
matter.

> Having said that, I wonder whether we (both x86 and ARM) could use
> Anthony's xc_dom_image.full_acpi_module instead of acpitables_blob that
> Shannon's series added. (even if we can't, I think
> xc_hvm_firmware_module is the right datastructure to store the blob
> since it has both toolstack's virtual and guest's physical addresses).
> 

Yes, that's along the line I'm thinking about.

Wei.

> -boris
> 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-11 Thread Julien Grall



On 11/07/16 14:33, Boris Ostrovsky wrote:

On 07/11/2016 06:47 AM, Wei Liu wrote:

On Fri, Jul 08, 2016 at 01:20:46PM -0400, Boris Ostrovsky wrote:

On 07/08/2016 12:07 PM, Wei Liu wrote:

On Fri, Jul 08, 2016 at 10:48:52AM -0400, Boris Ostrovsky wrote:

On 07/08/2016 06:55 AM, Wei Liu wrote:

+
+/* Map page that will hold RSDP */
+extent = RSDP_ADDRESS >> ctxt.page_shift;
+rc = populate_acpi_pages(dom, , 1, );
+if (rc) {
+LOG(ERROR, "%s: populate_acpi_pages for rsdp failed with %d",
+__FUNCTION__, rc);
+goto out;
+}
+config.rsdp = (unsigned long)xc_map_foreign_range(xch, domid,
+  ctxt.page_size,
+  PROT_READ | PROT_WRITE,
+  RSDP_ADDRESS >> 
ctxt.page_shift);

I think with Anthony's on-going work you should be more flexible for all
you tables.

Not sure I understand what you mean here. You want the address
(RSDP_ADDRESS) to be a variable as opposed to a macro?


I'm still trying to wrap my head around the possible interaction between
Anthony's work and your work.

Anthony's work allows dynamically loading of firmware blobs. If you use
a fixed address, theoretically it can clash with firmware blobs among
other things libxc can load. The high address is a safe bet so that
probably won't happen in practice.

Anthony's work allows loading arbitrary blobs actually. Can you take
advantage of that mechanism as well? That is, to specify all your tables
as modules and let libxc handle actually loading them  into guest
memory.

Does this make sense?

Also CC Anthony here.


My understanding of Anthony's series is that its goal was to provide an
interface to pass DSDT (and maybe some other tables) from the toolstack
to hvmloader.

Here we don't have hvmloader, we are loading the tables directly into
guest's memory.


Do you use the same hvm_start_info structure? I don't think that
structure is restricted to hvmloader.



Yes, we do. However, in PVH(v2) case it will be seen next by the guest
who will expect the tables to already be in memory. I.e. there is no
intermediate Xen component, such as hvmloader, who can load the blobs.

Having said that, I wonder whether we (both x86 and ARM) could use
Anthony's xc_dom_image.full_acpi_module instead of acpitables_blob that
Shannon's series added. (even if we can't, I think
xc_hvm_firmware_module is the right datastructure to store the blob
since it has both toolstack's virtual and guest's physical addresses).


In this case, xc_hvm_firmware_module would need to be renamed as ARM 
guests are neither HVM nor PV.


FWIW, from the toolstack point of view, ARM guests is considered as PV 
guest.


Regards,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-11 Thread Boris Ostrovsky
On 07/11/2016 06:47 AM, Wei Liu wrote:
> On Fri, Jul 08, 2016 at 01:20:46PM -0400, Boris Ostrovsky wrote:
>> On 07/08/2016 12:07 PM, Wei Liu wrote:
>>> On Fri, Jul 08, 2016 at 10:48:52AM -0400, Boris Ostrovsky wrote:
 On 07/08/2016 06:55 AM, Wei Liu wrote:
>> +
>> +/* Map page that will hold RSDP */
>> +extent = RSDP_ADDRESS >> ctxt.page_shift;
>> +rc = populate_acpi_pages(dom, , 1, );
>> +if (rc) {
>> +LOG(ERROR, "%s: populate_acpi_pages for rsdp failed with %d",
>> +__FUNCTION__, rc);
>> +goto out;
>> +}
>> +config.rsdp = (unsigned long)xc_map_foreign_range(xch, domid,
>> +  ctxt.page_size,
>> +  PROT_READ | 
>> PROT_WRITE,
>> +  RSDP_ADDRESS >> 
>> ctxt.page_shift);
> I think with Anthony's on-going work you should be more flexible for all
> you tables.
 Not sure I understand what you mean here. You want the address
 (RSDP_ADDRESS) to be a variable as opposed to a macro?

>>> I'm still trying to wrap my head around the possible interaction between
>>> Anthony's work and your work.
>>>
>>> Anthony's work allows dynamically loading of firmware blobs. If you use
>>> a fixed address, theoretically it can clash with firmware blobs among
>>> other things libxc can load. The high address is a safe bet so that
>>> probably won't happen in practice.
>>>
>>> Anthony's work allows loading arbitrary blobs actually. Can you take
>>> advantage of that mechanism as well? That is, to specify all your tables
>>> as modules and let libxc handle actually loading them  into guest
>>> memory.
>>>
>>> Does this make sense?
>>>
>>> Also CC Anthony here.
>>
>> My understanding of Anthony's series is that its goal was to provide an
>> interface to pass DSDT (and maybe some other tables) from the toolstack
>> to hvmloader.
>>
>> Here we don't have hvmloader, we are loading the tables directly into
>> guest's memory.
>>
> Do you use the same hvm_start_info structure? I don't think that
> structure is restricted to hvmloader.


Yes, we do. However, in PVH(v2) case it will be seen next by the guest
who will expect the tables to already be in memory. I.e. there is no
intermediate Xen component, such as hvmloader, who can load the blobs.

Having said that, I wonder whether we (both x86 and ARM) could use
Anthony's xc_dom_image.full_acpi_module instead of acpitables_blob that
Shannon's series added. (even if we can't, I think
xc_hvm_firmware_module is the right datastructure to store the blob
since it has both toolstack's virtual and guest's physical addresses).

-boris


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-11 Thread Wei Liu
On Fri, Jul 08, 2016 at 01:20:46PM -0400, Boris Ostrovsky wrote:
> On 07/08/2016 12:07 PM, Wei Liu wrote:
> > On Fri, Jul 08, 2016 at 10:48:52AM -0400, Boris Ostrovsky wrote:
> >> On 07/08/2016 06:55 AM, Wei Liu wrote:
>  +
>  +/* Map page that will hold RSDP */
>  +extent = RSDP_ADDRESS >> ctxt.page_shift;
>  +rc = populate_acpi_pages(dom, , 1, );
>  +if (rc) {
>  +LOG(ERROR, "%s: populate_acpi_pages for rsdp failed with %d",
>  +__FUNCTION__, rc);
>  +goto out;
>  +}
>  +config.rsdp = (unsigned long)xc_map_foreign_range(xch, domid,
>  +  ctxt.page_size,
>  +  PROT_READ | 
>  PROT_WRITE,
>  +  RSDP_ADDRESS >> 
>  ctxt.page_shift);
> >>> I think with Anthony's on-going work you should be more flexible for all
> >>> you tables.
> >> Not sure I understand what you mean here. You want the address
> >> (RSDP_ADDRESS) to be a variable as opposed to a macro?
> >>
> > I'm still trying to wrap my head around the possible interaction between
> > Anthony's work and your work.
> >
> > Anthony's work allows dynamically loading of firmware blobs. If you use
> > a fixed address, theoretically it can clash with firmware blobs among
> > other things libxc can load. The high address is a safe bet so that
> > probably won't happen in practice.
> >
> > Anthony's work allows loading arbitrary blobs actually. Can you take
> > advantage of that mechanism as well? That is, to specify all your tables
> > as modules and let libxc handle actually loading them  into guest
> > memory.
> >
> > Does this make sense?
> >
> > Also CC Anthony here.
> 
> 
> My understanding of Anthony's series is that its goal was to provide an
> interface to pass DSDT (and maybe some other tables) from the toolstack
> to hvmloader.
> 
> Here we don't have hvmloader, we are loading the tables directly into
> guest's memory.
> 

Do you use the same hvm_start_info structure? I don't think that
structure is restricted to hvmloader.

Wei.

> -boris
> 
> 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-08 Thread Boris Ostrovsky
On 07/08/2016 12:07 PM, Wei Liu wrote:
> On Fri, Jul 08, 2016 at 10:48:52AM -0400, Boris Ostrovsky wrote:
>> On 07/08/2016 06:55 AM, Wei Liu wrote:
 +
 +/* Map page that will hold RSDP */
 +extent = RSDP_ADDRESS >> ctxt.page_shift;
 +rc = populate_acpi_pages(dom, , 1, );
 +if (rc) {
 +LOG(ERROR, "%s: populate_acpi_pages for rsdp failed with %d",
 +__FUNCTION__, rc);
 +goto out;
 +}
 +config.rsdp = (unsigned long)xc_map_foreign_range(xch, domid,
 +  ctxt.page_size,
 +  PROT_READ | 
 PROT_WRITE,
 +  RSDP_ADDRESS >> 
 ctxt.page_shift);
>>> I think with Anthony's on-going work you should be more flexible for all
>>> you tables.
>> Not sure I understand what you mean here. You want the address
>> (RSDP_ADDRESS) to be a variable as opposed to a macro?
>>
> I'm still trying to wrap my head around the possible interaction between
> Anthony's work and your work.
>
> Anthony's work allows dynamically loading of firmware blobs. If you use
> a fixed address, theoretically it can clash with firmware blobs among
> other things libxc can load. The high address is a safe bet so that
> probably won't happen in practice.
>
> Anthony's work allows loading arbitrary blobs actually. Can you take
> advantage of that mechanism as well? That is, to specify all your tables
> as modules and let libxc handle actually loading them  into guest
> memory.
>
> Does this make sense?
>
> Also CC Anthony here.


My understanding of Anthony's series is that its goal was to provide an
interface to pass DSDT (and maybe some other tables) from the toolstack
to hvmloader.

Here we don't have hvmloader, we are loading the tables directly into
guest's memory.

-boris



___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-08 Thread Wei Liu
On Fri, Jul 08, 2016 at 10:48:52AM -0400, Boris Ostrovsky wrote:
> On 07/08/2016 06:55 AM, Wei Liu wrote:
> >
> >> +
> >> +/* Map page that will hold RSDP */
> >> +extent = RSDP_ADDRESS >> ctxt.page_shift;
> >> +rc = populate_acpi_pages(dom, , 1, );
> >> +if (rc) {
> >> +LOG(ERROR, "%s: populate_acpi_pages for rsdp failed with %d",
> >> +__FUNCTION__, rc);
> >> +goto out;
> >> +}
> >> +config.rsdp = (unsigned long)xc_map_foreign_range(xch, domid,
> >> +  ctxt.page_size,
> >> +  PROT_READ | 
> >> PROT_WRITE,
> >> +  RSDP_ADDRESS >> 
> >> ctxt.page_shift);
> > I think with Anthony's on-going work you should be more flexible for all
> > you tables.
> 
> Not sure I understand what you mean here. You want the address
> (RSDP_ADDRESS) to be a variable as opposed to a macro?
> 

I'm still trying to wrap my head around the possible interaction between
Anthony's work and your work.

Anthony's work allows dynamically loading of firmware blobs. If you use
a fixed address, theoretically it can clash with firmware blobs among
other things libxc can load. The high address is a safe bet so that
probably won't happen in practice.

Anthony's work allows loading arbitrary blobs actually. Can you take
advantage of that mechanism as well? That is, to specify all your tables
as modules and let libxc handle actually loading them  into guest
memory.

Does this make sense?

Also CC Anthony here.

> >
> > I haven't really looked into details for this patch. Let's sort out
> > the linkage issue between GPLv2 cod and LGPL code first.
> 
> 
> Ugh.. Yes, this one didn't even cross my mind until you brought it up
> yesterday.
> 
> What are the options? Can we dual-license those files as GPLv2/LGPL,
> assuming copyright holders --- Keir (or Citrix?) and Intel --- agree?
> 

I don't claim I know much about licenses. Let's discuss this issue in
the other thread instead of having two threads.

I think what Ian said makes sense FWIW.

Wei.

> -boris

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-08 Thread Boris Ostrovsky
On 07/08/2016 06:55 AM, Wei Liu wrote:
>
>> +
>> +/* Map page that will hold RSDP */
>> +extent = RSDP_ADDRESS >> ctxt.page_shift;
>> +rc = populate_acpi_pages(dom, , 1, );
>> +if (rc) {
>> +LOG(ERROR, "%s: populate_acpi_pages for rsdp failed with %d",
>> +__FUNCTION__, rc);
>> +goto out;
>> +}
>> +config.rsdp = (unsigned long)xc_map_foreign_range(xch, domid,
>> +  ctxt.page_size,
>> +  PROT_READ | 
>> PROT_WRITE,
>> +  RSDP_ADDRESS >> 
>> ctxt.page_shift);
> I think with Anthony's on-going work you should be more flexible for all
> you tables.

Not sure I understand what you mean here. You want the address
(RSDP_ADDRESS) to be a variable as opposed to a macro?

>
> I haven't really looked into details for this patch. Let's sort out
> the linkage issue between GPLv2 cod and LGPL code first.


Ugh.. Yes, this one didn't even cross my mind until you brought it up
yesterday.

What are the options? Can we dual-license those files as GPLv2/LGPL,
assuming copyright holders --- Keir (or Citrix?) and Intel --- agree?

-boris

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-08 Thread Wei Liu
On Tue, Jul 05, 2016 at 03:05:19PM -0400, Boris Ostrovsky wrote:
> Signed-off-by: Boris Ostrovsky 
> ---
> 
> Changes in v1:
> * Move to libxl
> * Added populate_acpi_pages()
> * Stash location/size of tables in xc_dom_image (to be used in constructing 
> e820 map)
> * Use libxl allocator
> * Only set XEN_X86_EMU_LAPIC flag if 'apic' option is set.
> * Make acpi_build_tables() return error code
> 
>  .gitignore   |4 +
>  tools/libacpi/build.c|7 +-
>  tools/libacpi/libacpi.h  |   15 ++-
>  tools/libxl/Makefile |   17 +++-
>  tools/libxl/libxl_arch.h |3 +
>  tools/libxl/libxl_dom.c  |1 +
>  tools/libxl/libxl_x86.c  |   29 +++--
>  tools/libxl/libxl_x86_acpi.c |  292 
> ++
>  tools/libxl/libxl_x86_acpi.h |   21 +++
>  9 files changed, 373 insertions(+), 16 deletions(-)
>  create mode 100644 tools/libxl/libxl_x86_acpi.c
>  create mode 100644 tools/libxl/libxl_x86_acpi.h
> 
> diff --git a/.gitignore b/.gitignore
> index 9dd2086..d4da37f 100644
> --- a/.gitignore
> +++ b/.gitignore
> @@ -179,6 +179,10 @@ tools/libxl/testenum.c
>  tools/libxl/tmp.*
>  tools/libxl/_libxl.api-for-check
>  tools/libxl/*.api-ok
> +tools/libxl/mk_dsdt
> +tools/libxl/dsdt*.c
> +tools/libxl/dsdt_*.asl
> +tools/libxl/ssdt_*.h

Please sort these alphabetically.

>  tools/misc/cpuperf/cpuperf-perfcntr
>  tools/misc/cpuperf/cpuperf-xen
>  tools/misc/xc_shadow
> diff --git a/tools/libacpi/build.c b/tools/libacpi/build.c
> index 290f005..a6ddf53 100644
> --- a/tools/libacpi/build.c
> +++ b/tools/libacpi/build.c
> @@ -23,6 +23,7 @@
>  #include "ssdt_tpm.h"
>  #include "ssdt_pm.h"
>  #include "x86.h"
> +#include 
>  #include 
>  #include 
>  
[...]
> +int libxl__dom_load_acpi(libxl__gc *gc,
> +  libxl_domain_build_info *info,
> +  struct xc_dom_image *dom);

Indentation.

>  #endif
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index ba3472e..7e4e289 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -657,6 +657,7 @@ static int libxl__build_dom(libxl__gc *gc, uint32_t domid,
>  LOGE(ERROR, "xc_dom_build_image failed");
>  goto out;
>  }
> +

Stray blank line.

>  if ( (ret = xc_dom_boot_image(dom)) != 0 ) {
>  LOGE(ERROR, "xc_dom_boot_image failed");
>  goto out;
> diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
> index 32ce1d2..7201dbb 100644
> --- a/tools/libxl/libxl_x86.c
> +++ b/tools/libxl/libxl_x86.c
[...]
> +int libxl__dom_load_acpi(libxl__gc *gc,
> +  libxl_domain_build_info *info,
> +  struct xc_dom_image *dom)
> +{
> +struct acpi_config config = {0};
> +struct acpi_ctxt ctxt;
> +uint32_t domid = dom->guest_domid;
> +xc_interface *xch = dom->xch;
> +int rc, i, acpi_pages_num;
> +xen_pfn_t extent, *extents;
> +void *acpi_pages, *guest_acpi_pages = NULL;
> +unsigned long page_mask;
> +
> +if ((info->type != LIBXL_DOMAIN_TYPE_HVM) ||
> +(info->device_model_version != LIBXL_DEVICE_MODEL_VERSION_NONE))
> +return 0;
> +
> +ctxt.page_size = XC_DOM_PAGE_SIZE(dom);
> +ctxt.page_shift = XC_DOM_PAGE_SHIFT(dom);
> +page_mask = (1UL << ctxt.page_shift) - 1;
> +
> +ctxt.mem_ops.alloc = mem_alloc;
> +ctxt.mem_ops.v2p = virt_to_phys;
> +
> +rc = init_acpi_config(gc, dom, info, );
> +if (rc) {
> +LOG(ERROR, "%s: init_acpi_config failed (rc=%d)", __FUNCTION__, rc);
> +return rc;
> +}
> +
> +/* Map page that will hold RSDP */
> +extent = RSDP_ADDRESS >> ctxt.page_shift;
> +rc = populate_acpi_pages(dom, , 1, );
> +if (rc) {
> +LOG(ERROR, "%s: populate_acpi_pages for rsdp failed with %d",
> +__FUNCTION__, rc);
> +goto out;
> +}
> +config.rsdp = (unsigned long)xc_map_foreign_range(xch, domid,
> +  ctxt.page_size,
> +  PROT_READ | PROT_WRITE,
> +  RSDP_ADDRESS >> 
> ctxt.page_shift);

I think with Anthony's on-going work you should be more flexible for all
you tables.

I haven't really looked into details for this patch. Let's sort out
the linkage issue between GPLv2 cod and LGPL code first.

Wei.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-07 Thread Jan Beulich
>>> On 07.07.16 at 17:12,  wrote:

> 
> On 07/07/16 16:08, Boris Ostrovsky wrote:
>> On 07/07/2016 04:38 AM, Jan Beulich wrote:
>> On 06.07.16 at 19:33,  wrote:
 On 07/06/2016 01:03 PM, Julien Grall wrote:
>
> On 06/07/16 17:30, Boris Ostrovsky wrote:
>> On 07/06/2016 12:04 PM, Julien Grall wrote:
>>> Hi Boris,
>>>
>>> On 06/07/16 16:50, Boris Ostrovsky wrote:
 On 07/06/2016 07:05 AM, Julien Grall wrote:
>> +static int populate_acpi_pages(struct xc_dom_image *dom,
>> +   xen_pfn_t *extents,
>> +   unsigned int num_pages,
>> +   struct acpi_ctxt *ctxt)
>> +{
>> +int rc;
>> +xc_interface *xch = dom->xch;
>> +uint32_t domid = dom->guest_domid;
>> +unsigned long idx, first_high_idx = (1ull << (32 -
>> ctxt->page_shift));
>> +
>> +for (; num_pages; num_pages--, extents++) {
>> +
>> +if (xc_domain_populate_physmap(xch, domid, 1, 0, 0,
>> extents)
>> == 1)
> It looks like this is working because libxl is setting the maximum
> size of the domain with some slack (1MB). You might get in trouble if
> the slack is reduced or used by someone or the ACPI blob is
> increasing.

 I saw your conversation about slack with Stefano and I am not sure I
 understood what it was about. If this was about padding guest's memory
 to be able to put there some additional data (such ACPI tables) then
 this is not my intention here: if I can't populate (because it is
 already populated, I guess) then I try to move memory around with code
 below. That's what mem_hole_populate_ram() does as well.
>>> The maximum amount of memory that could be assigned to a domain is
>>> fixed per-domain. This maximum amount does not take into account the
>>> size of the ACPI blob. So you may end up to fail because all the
>>> memory was assigned somewhere else.
>> Why shouldn't max amount of guest's memory include ACPI tables? I think
>> we should fail and not rely on this slack if no more memory is
>> available.
>>
>> ...
> Because, at least for ARM, the ACPI memory region is not part of the
> "real" RAM. So this value is not taken into account into the current
> max mem.

 Hmm.. I've always assumed it is part of memory but marked as a special
 type in e820 map on x86. Maybe Andrew or Jan can comment on this.
>>> I guess you both mean the same - note how Julien says _"real" RAM_.
>>> So from a hypervisor resource consumption view this ought to be
>>> considered RAM; from a guest view it wouldn't be.
>>
>> In which case we shouldn't pad maxmem specified by guest's config file.
>> Space to put ACPI tables must come from requested resources. If the
>> tables don't fit then more memory should have been asked for.
> 
> Why? In the case of ARM, the ACPI tables lives outside the guest RAM in 
> a special region. I would have expect to be the same in x86.

This is still RAM from a host resource accounting POV.

> If so, I don't think this should be part of the maxmem requested by the 
> user. IIRC, it is the same of the PCI ROM. The maxmem is incremented by 
> the toolstack.

For some parts iirc, and not for others. See also (for example) the
recent discussion George had with PGNet Dev
(https://lists.xenproject.org/archives/html/xen-devel/2016-07/msg00353.html).

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-07 Thread Julien Grall



On 07/07/16 16:08, Boris Ostrovsky wrote:

On 07/07/2016 04:38 AM, Jan Beulich wrote:

On 06.07.16 at 19:33,  wrote:

On 07/06/2016 01:03 PM, Julien Grall wrote:


On 06/07/16 17:30, Boris Ostrovsky wrote:

On 07/06/2016 12:04 PM, Julien Grall wrote:

Hi Boris,

On 06/07/16 16:50, Boris Ostrovsky wrote:

On 07/06/2016 07:05 AM, Julien Grall wrote:

+static int populate_acpi_pages(struct xc_dom_image *dom,
+   xen_pfn_t *extents,
+   unsigned int num_pages,
+   struct acpi_ctxt *ctxt)
+{
+int rc;
+xc_interface *xch = dom->xch;
+uint32_t domid = dom->guest_domid;
+unsigned long idx, first_high_idx = (1ull << (32 -
ctxt->page_shift));
+
+for (; num_pages; num_pages--, extents++) {
+
+if (xc_domain_populate_physmap(xch, domid, 1, 0, 0,
extents)
== 1)

It looks like this is working because libxl is setting the maximum
size of the domain with some slack (1MB). You might get in trouble if
the slack is reduced or used by someone or the ACPI blob is
increasing.


I saw your conversation about slack with Stefano and I am not sure I
understood what it was about. If this was about padding guest's memory
to be able to put there some additional data (such ACPI tables) then
this is not my intention here: if I can't populate (because it is
already populated, I guess) then I try to move memory around with code
below. That's what mem_hole_populate_ram() does as well.

The maximum amount of memory that could be assigned to a domain is
fixed per-domain. This maximum amount does not take into account the
size of the ACPI blob. So you may end up to fail because all the
memory was assigned somewhere else.

Why shouldn't max amount of guest's memory include ACPI tables? I think
we should fail and not rely on this slack if no more memory is
available.

...

Because, at least for ARM, the ACPI memory region is not part of the
"real" RAM. So this value is not taken into account into the current
max mem.


Hmm.. I've always assumed it is part of memory but marked as a special
type in e820 map on x86. Maybe Andrew or Jan can comment on this.

I guess you both mean the same - note how Julien says _"real" RAM_.
So from a hypervisor resource consumption view this ought to be
considered RAM; from a guest view it wouldn't be.


In which case we shouldn't pad maxmem specified by guest's config file.
Space to put ACPI tables must come from requested resources. If the
tables don't fit then more memory should have been asked for.


Why? In the case of ARM, the ACPI tables lives outside the guest RAM in 
a special region. I would have expect to be the same in x86.


If so, I don't think this should be part of the maxmem requested by the 
user. IIRC, it is the same of the PCI ROM. The maxmem is incremented by 
the toolstack.


Regards,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-07 Thread Boris Ostrovsky
On 07/07/2016 04:38 AM, Jan Beulich wrote:
 On 06.07.16 at 19:33,  wrote:
>> On 07/06/2016 01:03 PM, Julien Grall wrote:
>>>
>>> On 06/07/16 17:30, Boris Ostrovsky wrote:
 On 07/06/2016 12:04 PM, Julien Grall wrote:
> Hi Boris,
>
> On 06/07/16 16:50, Boris Ostrovsky wrote:
>> On 07/06/2016 07:05 AM, Julien Grall wrote:
 +static int populate_acpi_pages(struct xc_dom_image *dom,
 +   xen_pfn_t *extents,
 +   unsigned int num_pages,
 +   struct acpi_ctxt *ctxt)
 +{
 +int rc;
 +xc_interface *xch = dom->xch;
 +uint32_t domid = dom->guest_domid;
 +unsigned long idx, first_high_idx = (1ull << (32 -
 ctxt->page_shift));
 +
 +for (; num_pages; num_pages--, extents++) {
 +
 +if (xc_domain_populate_physmap(xch, domid, 1, 0, 0,
 extents)
 == 1)
>>> It looks like this is working because libxl is setting the maximum
>>> size of the domain with some slack (1MB). You might get in trouble if
>>> the slack is reduced or used by someone or the ACPI blob is
>>> increasing.
>>
>> I saw your conversation about slack with Stefano and I am not sure I
>> understood what it was about. If this was about padding guest's memory
>> to be able to put there some additional data (such ACPI tables) then
>> this is not my intention here: if I can't populate (because it is
>> already populated, I guess) then I try to move memory around with code
>> below. That's what mem_hole_populate_ram() does as well.
> The maximum amount of memory that could be assigned to a domain is
> fixed per-domain. This maximum amount does not take into account the
> size of the ACPI blob. So you may end up to fail because all the
> memory was assigned somewhere else.
 Why shouldn't max amount of guest's memory include ACPI tables? I think
 we should fail and not rely on this slack if no more memory is
 available.

 ...
>>> Because, at least for ARM, the ACPI memory region is not part of the
>>> "real" RAM. So this value is not taken into account into the current
>>> max mem.
>>
>> Hmm.. I've always assumed it is part of memory but marked as a special
>> type in e820 map on x86. Maybe Andrew or Jan can comment on this.
> I guess you both mean the same - note how Julien says _"real" RAM_.
> So from a hypervisor resource consumption view this ought to be
> considered RAM; from a guest view it wouldn't be.

In which case we shouldn't pad maxmem specified by guest's config file.
Space to put ACPI tables must come from requested resources. If the
tables don't fit then more memory should have been asked for.

-boris



___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-07 Thread Jan Beulich
>>> On 06.07.16 at 19:33,  wrote:
> On 07/06/2016 01:03 PM, Julien Grall wrote:
>>
>>
>> On 06/07/16 17:30, Boris Ostrovsky wrote:
>>> On 07/06/2016 12:04 PM, Julien Grall wrote:
 Hi Boris,

 On 06/07/16 16:50, Boris Ostrovsky wrote:
> On 07/06/2016 07:05 AM, Julien Grall wrote:
>>> +static int populate_acpi_pages(struct xc_dom_image *dom,
>>> +   xen_pfn_t *extents,
>>> +   unsigned int num_pages,
>>> +   struct acpi_ctxt *ctxt)
>>> +{
>>> +int rc;
>>> +xc_interface *xch = dom->xch;
>>> +uint32_t domid = dom->guest_domid;
>>> +unsigned long idx, first_high_idx = (1ull << (32 -
>>> ctxt->page_shift));
>>> +
>>> +for (; num_pages; num_pages--, extents++) {
>>> +
>>> +if (xc_domain_populate_physmap(xch, domid, 1, 0, 0,
>>> extents)
>>> == 1)
>>
>> It looks like this is working because libxl is setting the maximum
>> size of the domain with some slack (1MB). You might get in trouble if
>> the slack is reduced or used by someone or the ACPI blob is
>> increasing.
>
>
> I saw your conversation about slack with Stefano and I am not sure I
> understood what it was about. If this was about padding guest's memory
> to be able to put there some additional data (such ACPI tables) then
> this is not my intention here: if I can't populate (because it is
> already populated, I guess) then I try to move memory around with code
> below. That's what mem_hole_populate_ram() does as well.

 The maximum amount of memory that could be assigned to a domain is
 fixed per-domain. This maximum amount does not take into account the
 size of the ACPI blob. So you may end up to fail because all the
 memory was assigned somewhere else.
>>>
>>> Why shouldn't max amount of guest's memory include ACPI tables? I think
>>> we should fail and not rely on this slack if no more memory is
>>> available.
>>>
>>> ...
>>
>> Because, at least for ARM, the ACPI memory region is not part of the
>> "real" RAM. So this value is not taken into account into the current
>> max mem.
> 
> 
> Hmm.. I've always assumed it is part of memory but marked as a special
> type in e820 map on x86. Maybe Andrew or Jan can comment on this.

I guess you both mean the same - note how Julien says _"real" RAM_.
So from a hypervisor resource consumption view this ought to be
considered RAM; from a guest view it wouldn't be.

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-06 Thread Boris Ostrovsky
On 07/06/2016 01:03 PM, Julien Grall wrote:
>
>
> On 06/07/16 17:30, Boris Ostrovsky wrote:
>> On 07/06/2016 12:04 PM, Julien Grall wrote:
>>> Hi Boris,
>>>
>>> On 06/07/16 16:50, Boris Ostrovsky wrote:
 On 07/06/2016 07:05 AM, Julien Grall wrote:
>> +static int populate_acpi_pages(struct xc_dom_image *dom,
>> +   xen_pfn_t *extents,
>> +   unsigned int num_pages,
>> +   struct acpi_ctxt *ctxt)
>> +{
>> +int rc;
>> +xc_interface *xch = dom->xch;
>> +uint32_t domid = dom->guest_domid;
>> +unsigned long idx, first_high_idx = (1ull << (32 -
>> ctxt->page_shift));
>> +
>> +for (; num_pages; num_pages--, extents++) {
>> +
>> +if (xc_domain_populate_physmap(xch, domid, 1, 0, 0,
>> extents)
>> == 1)
>
> It looks like this is working because libxl is setting the maximum
> size of the domain with some slack (1MB). You might get in trouble if
> the slack is reduced or used by someone or the ACPI blob is
> increasing.


 I saw your conversation about slack with Stefano and I am not sure I
 understood what it was about. If this was about padding guest's memory
 to be able to put there some additional data (such ACPI tables) then
 this is not my intention here: if I can't populate (because it is
 already populated, I guess) then I try to move memory around with code
 below. That's what mem_hole_populate_ram() does as well.
>>>
>>> The maximum amount of memory that could be assigned to a domain is
>>> fixed per-domain. This maximum amount does not take into account the
>>> size of the ACPI blob. So you may end up to fail because all the
>>> memory was assigned somewhere else.
>>
>> Why shouldn't max amount of guest's memory include ACPI tables? I think
>> we should fail and not rely on this slack if no more memory is
>> available.
>>
>> ...
>
> Because, at least for ARM, the ACPI memory region is not part of the
> "real" RAM. So this value is not taken into account into the current
> max mem.


Hmm.. I've always assumed it is part of memory but marked as a special
type in e820 map on x86. Maybe Andrew or Jan can comment on this.


> My point is we need to add the size of the ACPI blob to max mem to
> avoid any error here.

So you (or Shannon) plan on doing this for ARM, right (it's not done
with the current version of patches)?

>
>>
>>>

> However, as mentioned in the ACPI thread [1], all the blobs are
> generally loaded by libxc and not libxl. This is more true on ARM
> because the guest address space is controlled by libxc (the position
> of all the blob are decided by it).

 The difference is that I not only load the tables here but also build
 them. Which may or may not be the right thing to do in libxc.

 I suppose I can defer loading (and then keep pointer to tables in
 acpitable_blob) but the then I need to keep RSDP descriptor somewhere
 else (it is not part of the blob since it lives in lower MB of the
 guest).
>>>
>>> The device tree for ARM are built in libxl and loaded for libxc. IHMO,
>>> it would be strange to have a different pattern for ACPI.
>>
>> Is RSDP part of the ACPI blob for ARM? If not, how do you load it?
>
> RSDP is part of the ACPI blob for ARM.


And it's not for x86. So to separate building and loading the tables
would require two blobs.

I suppose we cal loop over blobs in xc_dom_load_acpi in
https://lists.xenproject.org/archives/html/xen-devel/2016-07/msg00309.html


-boris




___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-06 Thread Julien Grall



On 06/07/16 17:30, Boris Ostrovsky wrote:

On 07/06/2016 12:04 PM, Julien Grall wrote:

Hi Boris,

On 06/07/16 16:50, Boris Ostrovsky wrote:

On 07/06/2016 07:05 AM, Julien Grall wrote:

+static int populate_acpi_pages(struct xc_dom_image *dom,
+   xen_pfn_t *extents,
+   unsigned int num_pages,
+   struct acpi_ctxt *ctxt)
+{
+int rc;
+xc_interface *xch = dom->xch;
+uint32_t domid = dom->guest_domid;
+unsigned long idx, first_high_idx = (1ull << (32 -
ctxt->page_shift));
+
+for (; num_pages; num_pages--, extents++) {
+
+if (xc_domain_populate_physmap(xch, domid, 1, 0, 0, extents)
== 1)


It looks like this is working because libxl is setting the maximum
size of the domain with some slack (1MB). You might get in trouble if
the slack is reduced or used by someone or the ACPI blob is increasing.



I saw your conversation about slack with Stefano and I am not sure I
understood what it was about. If this was about padding guest's memory
to be able to put there some additional data (such ACPI tables) then
this is not my intention here: if I can't populate (because it is
already populated, I guess) then I try to move memory around with code
below. That's what mem_hole_populate_ram() does as well.


The maximum amount of memory that could be assigned to a domain is
fixed per-domain. This maximum amount does not take into account the
size of the ACPI blob. So you may end up to fail because all the
memory was assigned somewhere else.


Why shouldn't max amount of guest's memory include ACPI tables? I think
we should fail and not rely on this slack if no more memory is available.

...


Because, at least for ARM, the ACPI memory region is not part of the 
"real" RAM. So this value is not taken into account into the current max 
mem. My point is we need to add the size of the ACPI blob to max mem to 
avoid any error here.









However, as mentioned in the ACPI thread [1], all the blobs are
generally loaded by libxc and not libxl. This is more true on ARM
because the guest address space is controlled by libxc (the position
of all the blob are decided by it).


The difference is that I not only load the tables here but also build
them. Which may or may not be the right thing to do in libxc.

I suppose I can defer loading (and then keep pointer to tables in
acpitable_blob) but the then I need to keep RSDP descriptor somewhere
else (it is not part of the blob since it lives in lower MB of the
guest).


The device tree for ARM are built in libxl and loaded for libxc. IHMO,
it would be strange to have a different pattern for ACPI.


Is RSDP part of the ACPI blob for ARM? If not, how do you load it?


RSDP is part of the ACPI blob for ARM.

Regards,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-06 Thread Boris Ostrovsky
On 07/06/2016 12:04 PM, Julien Grall wrote:
> Hi Boris,
>
> On 06/07/16 16:50, Boris Ostrovsky wrote:
>> On 07/06/2016 07:05 AM, Julien Grall wrote:
 +static int populate_acpi_pages(struct xc_dom_image *dom,
 +   xen_pfn_t *extents,
 +   unsigned int num_pages,
 +   struct acpi_ctxt *ctxt)
 +{
 +int rc;
 +xc_interface *xch = dom->xch;
 +uint32_t domid = dom->guest_domid;
 +unsigned long idx, first_high_idx = (1ull << (32 -
 ctxt->page_shift));
 +
 +for (; num_pages; num_pages--, extents++) {
 +
 +if (xc_domain_populate_physmap(xch, domid, 1, 0, 0, extents)
 == 1)
>>>
>>> It looks like this is working because libxl is setting the maximum
>>> size of the domain with some slack (1MB). You might get in trouble if
>>> the slack is reduced or used by someone or the ACPI blob is increasing.
>>
>>
>> I saw your conversation about slack with Stefano and I am not sure I
>> understood what it was about. If this was about padding guest's memory
>> to be able to put there some additional data (such ACPI tables) then
>> this is not my intention here: if I can't populate (because it is
>> already populated, I guess) then I try to move memory around with code
>> below. That's what mem_hole_populate_ram() does as well.
>
> The maximum amount of memory that could be assigned to a domain is
> fixed per-domain. This maximum amount does not take into account the
> size of the ACPI blob. So you may end up to fail because all the
> memory was assigned somewhere else.

Why shouldn't max amount of guest's memory include ACPI tables? I think
we should fail and not rely on this slack if no more memory is available.

...

>
>>
>>> However, as mentioned in the ACPI thread [1], all the blobs are
>>> generally loaded by libxc and not libxl. This is more true on ARM
>>> because the guest address space is controlled by libxc (the position
>>> of all the blob are decided by it).
>>
>> The difference is that I not only load the tables here but also build
>> them. Which may or may not be the right thing to do in libxc.
>>
>> I suppose I can defer loading (and then keep pointer to tables in
>> acpitable_blob) but the then I need to keep RSDP descriptor somewhere
>> else (it is not part of the blob since it lives in lower MB of the
>> guest).
>
> The device tree for ARM are built in libxl and loaded for libxc. IHMO,
> it would be strange to have a different pattern for ACPI. 

Is RSDP part of the ACPI blob for ARM? If not, how do you load it?

-boris


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-06 Thread Julien Grall

Hi Boris,

On 06/07/16 16:50, Boris Ostrovsky wrote:

On 07/06/2016 07:05 AM, Julien Grall wrote:

+static int populate_acpi_pages(struct xc_dom_image *dom,
+   xen_pfn_t *extents,
+   unsigned int num_pages,
+   struct acpi_ctxt *ctxt)
+{
+int rc;
+xc_interface *xch = dom->xch;
+uint32_t domid = dom->guest_domid;
+unsigned long idx, first_high_idx = (1ull << (32 -
ctxt->page_shift));
+
+for (; num_pages; num_pages--, extents++) {
+
+if (xc_domain_populate_physmap(xch, domid, 1, 0, 0, extents)
== 1)


It looks like this is working because libxl is setting the maximum
size of the domain with some slack (1MB). You might get in trouble if
the slack is reduced or used by someone or the ACPI blob is increasing.



I saw your conversation about slack with Stefano and I am not sure I
understood what it was about. If this was about padding guest's memory
to be able to put there some additional data (such ACPI tables) then
this is not my intention here: if I can't populate (because it is
already populated, I guess) then I try to move memory around with code
below. That's what mem_hole_populate_ram() does as well.


The maximum amount of memory that could be assigned to a domain is fixed 
per-domain. This maximum amount does not take into account the size of 
the ACPI blob. So you may end up to fail because all the memory was 
assigned somewhere else.







+continue;
+
+if (dom->highmem_end) {
+idx = --dom->highmem_end;
+if ( idx == first_high_idx )
+dom->highmem_end = 0;
+} else
+idx = --dom->lowmem_end;
+
+rc = xc_domain_add_to_physmap(xch, domid,
+  XENMAPSPACE_gmfn,
+  idx, *extents);
+if (rc)
+return rc;
+}
+
+return 0;
+}
+
+int libxl__dom_load_acpi(libxl__gc *gc,
+ libxl_domain_build_info *info,
+ struct xc_dom_image *dom)
+{
+struct acpi_config config = {0};
+struct acpi_ctxt ctxt;
+uint32_t domid = dom->guest_domid;
+xc_interface *xch = dom->xch;
+int rc, i, acpi_pages_num;
+xen_pfn_t extent, *extents;
+void *acpi_pages, *guest_acpi_pages = NULL;
+unsigned long page_mask;
+
+if ((info->type != LIBXL_DOMAIN_TYPE_HVM) ||
+(info->device_model_version !=
LIBXL_DEVICE_MODEL_VERSION_NONE))
+return 0;
+
+ctxt.page_size = XC_DOM_PAGE_SIZE(dom);
+ctxt.page_shift = XC_DOM_PAGE_SHIFT(dom);
+page_mask = (1UL << ctxt.page_shift) - 1;
+
+ctxt.mem_ops.alloc = mem_alloc;
+ctxt.mem_ops.v2p = virt_to_phys;
+
+rc = init_acpi_config(gc, dom, info, );
+if (rc) {
+LOG(ERROR, "%s: init_acpi_config failed (rc=%d)",
__FUNCTION__, rc);
+return rc;
+}
+
+/* Map page that will hold RSDP */
+extent = RSDP_ADDRESS >> ctxt.page_shift;
+rc = populate_acpi_pages(dom, , 1, );
+if (rc) {
+LOG(ERROR, "%s: populate_acpi_pages for rsdp failed with %d",
+__FUNCTION__, rc);
+goto out;
+}
+config.rsdp = (unsigned long)xc_map_foreign_range(xch, domid,
+  ctxt.page_size,
+  PROT_READ |
PROT_WRITE,
+  RSDP_ADDRESS

ctxt.page_shift);

+if (!config.rsdp) {
+LOG(ERROR, "%s: Can't map acpi_physical", __FUNCTION__);
+rc = -1;
+goto out;
+}
+
+/* Map acpi_info page */
+extent = ACPI_INFO_PHYSICAL_ADDRESS >> ctxt.page_shift;
+rc = populate_acpi_pages(dom, , 1, );
+if (rc) {
+LOG(ERROR, "%s: populate_acpi_pages for acpi_info failed
with %d",
+__FUNCTION__, rc);
+goto out;
+}
+
+config.ainfop = (unsigned long)xc_map_foreign_range(xch, domid,
+ctxt.page_size,
+PROT_READ |
PROT_WRITE,
+
ACPI_INFO_PHYSICAL_ADDRESS >> ctxt.page_shift);


Loading the ACPI blob on ARM will be very similar except for the base
address. So it would be nice to share some code with it.

However, as mentioned in the ACPI thread [1], all the blobs are
generally loaded by libxc and not libxl. This is more true on ARM
because the guest address space is controlled by libxc (the position
of all the blob are decided by it).


The difference is that I not only load the tables here but also build
them. Which may or may not be the right thing to do in libxc.

I suppose I can defer loading (and then keep pointer to tables in
acpitable_blob) but the then I need to keep RSDP descriptor somewhere
else (it is not part of the blob since it lives in lower MB of the guest).


The device tree for ARM are built in libxl and loaded for libxc. IHMO, 
it would be strange to have a different 

Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-06 Thread Boris Ostrovsky
On 07/06/2016 07:05 AM, Julien Grall wrote:
> (CC Stefano)
>
> Hi Boris,
>
> On 05/07/16 20:05, Boris Ostrovsky wrote:
>> diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
>> index 34a853c..7c6536b 100644
>> --- a/tools/libxl/libxl_arch.h
>> +++ b/tools/libxl/libxl_arch.h
>> @@ -62,4 +62,7 @@ int libxl__arch_domain_construct_memmap(libxl__gc *gc,
>>   uint32_t domid,
>>   struct xc_dom_image *dom);
>>
>> +int libxl__dom_load_acpi(libxl__gc *gc,
>> + libxl_domain_build_info *info,
>> + struct xc_dom_image *dom);
>
> This file contains arch specific function with are called by the
> generic libxl code.
>
> I don't think this is the right file for an x86 specific function
> which has a generic name. IHMO, this should go in libxl_x86_acpi.h
> with "x86" in the name and with '_hidden' attribute.

Right. I used to call this routine from libxl_dom.c. I moved the call
site to libxl_x86.c (as you suggested earlier) but forgot to move the
declaration.

>
>>   #endif
>
> [...]
>
>> +static int populate_acpi_pages(struct xc_dom_image *dom,
>> +   xen_pfn_t *extents,
>> +   unsigned int num_pages,
>> +   struct acpi_ctxt *ctxt)
>> +{
>> +int rc;
>> +xc_interface *xch = dom->xch;
>> +uint32_t domid = dom->guest_domid;
>> +unsigned long idx, first_high_idx = (1ull << (32 -
>> ctxt->page_shift));
>> +
>> +for (; num_pages; num_pages--, extents++) {
>> +
>> +if (xc_domain_populate_physmap(xch, domid, 1, 0, 0, extents)
>> == 1)
>
> It looks like this is working because libxl is setting the maximum
> size of the domain with some slack (1MB). You might get in trouble if
> the slack is reduced or used by someone or the ACPI blob is increasing.


I saw your conversation about slack with Stefano and I am not sure I
understood what it was about. If this was about padding guest's memory
to be able to put there some additional data (such ACPI tables) then
this is not my intention here: if I can't populate (because it is
already populated, I guess) then I try to move memory around with code
below. That's what mem_hole_populate_ram() does as well.


>
>> +continue;
>> +
>> +if (dom->highmem_end) {
>> +idx = --dom->highmem_end;
>> +if ( idx == first_high_idx )
>> +dom->highmem_end = 0;
>> +} else
>> +idx = --dom->lowmem_end;
>> +
>> +rc = xc_domain_add_to_physmap(xch, domid,
>> +  XENMAPSPACE_gmfn,
>> +  idx, *extents);
>> +if (rc)
>> +return rc;
>> +}
>> +
>> +return 0;
>> +}
>> +
>> +int libxl__dom_load_acpi(libxl__gc *gc,
>> + libxl_domain_build_info *info,
>> + struct xc_dom_image *dom)
>> +{
>> +struct acpi_config config = {0};
>> +struct acpi_ctxt ctxt;
>> +uint32_t domid = dom->guest_domid;
>> +xc_interface *xch = dom->xch;
>> +int rc, i, acpi_pages_num;
>> +xen_pfn_t extent, *extents;
>> +void *acpi_pages, *guest_acpi_pages = NULL;
>> +unsigned long page_mask;
>> +
>> +if ((info->type != LIBXL_DOMAIN_TYPE_HVM) ||
>> +(info->device_model_version !=
>> LIBXL_DEVICE_MODEL_VERSION_NONE))
>> +return 0;
>> +
>> +ctxt.page_size = XC_DOM_PAGE_SIZE(dom);
>> +ctxt.page_shift = XC_DOM_PAGE_SHIFT(dom);
>> +page_mask = (1UL << ctxt.page_shift) - 1;
>> +
>> +ctxt.mem_ops.alloc = mem_alloc;
>> +ctxt.mem_ops.v2p = virt_to_phys;
>> +
>> +rc = init_acpi_config(gc, dom, info, );
>> +if (rc) {
>> +LOG(ERROR, "%s: init_acpi_config failed (rc=%d)",
>> __FUNCTION__, rc);
>> +return rc;
>> +}
>> +
>> +/* Map page that will hold RSDP */
>> +extent = RSDP_ADDRESS >> ctxt.page_shift;
>> +rc = populate_acpi_pages(dom, , 1, );
>> +if (rc) {
>> +LOG(ERROR, "%s: populate_acpi_pages for rsdp failed with %d",
>> +__FUNCTION__, rc);
>> +goto out;
>> +}
>> +config.rsdp = (unsigned long)xc_map_foreign_range(xch, domid,
>> +  ctxt.page_size,
>> +  PROT_READ |
>> PROT_WRITE,
>> +  RSDP_ADDRESS
>> >> ctxt.page_shift);
>> +if (!config.rsdp) {
>> +LOG(ERROR, "%s: Can't map acpi_physical", __FUNCTION__);
>> +rc = -1;
>> +goto out;
>> +}
>> +
>> +/* Map acpi_info page */
>> +extent = ACPI_INFO_PHYSICAL_ADDRESS >> ctxt.page_shift;
>> +rc = populate_acpi_pages(dom, , 1, );
>> +if (rc) {
>> +LOG(ERROR, "%s: populate_acpi_pages for acpi_info failed
>> with %d",
>> +__FUNCTION__, rc);
>> +goto out;
>> +}
>> +
>> +config.ainfop = (unsigned 

Re: [Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-06 Thread Julien Grall

(CC Stefano)

Hi Boris,

On 05/07/16 20:05, Boris Ostrovsky wrote:

diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
index 34a853c..7c6536b 100644
--- a/tools/libxl/libxl_arch.h
+++ b/tools/libxl/libxl_arch.h
@@ -62,4 +62,7 @@ int libxl__arch_domain_construct_memmap(libxl__gc *gc,
  uint32_t domid,
  struct xc_dom_image *dom);

+int libxl__dom_load_acpi(libxl__gc *gc,
+libxl_domain_build_info *info,
+struct xc_dom_image *dom);


This file contains arch specific function with are called by the generic 
libxl code.


I don't think this is the right file for an x86 specific function which 
has a generic name. IHMO, this should go in libxl_x86_acpi.h with "x86" 
in the name and with '_hidden' attribute.



  #endif


[...]


+static int populate_acpi_pages(struct xc_dom_image *dom,
+   xen_pfn_t *extents,
+   unsigned int num_pages,
+   struct acpi_ctxt *ctxt)
+{
+int rc;
+xc_interface *xch = dom->xch;
+uint32_t domid = dom->guest_domid;
+unsigned long idx, first_high_idx = (1ull << (32 - ctxt->page_shift));
+
+for (; num_pages; num_pages--, extents++) {
+
+if (xc_domain_populate_physmap(xch, domid, 1, 0, 0, extents) == 1)


It looks like this is working because libxl is setting the maximum size 
of the domain with some slack (1MB). You might get in trouble if the 
slack is reduced or used by someone or the ACPI blob is increasing.



+continue;
+
+if (dom->highmem_end) {
+idx = --dom->highmem_end;
+if ( idx == first_high_idx )
+dom->highmem_end = 0;
+} else
+idx = --dom->lowmem_end;
+
+rc = xc_domain_add_to_physmap(xch, domid,
+  XENMAPSPACE_gmfn,
+  idx, *extents);
+if (rc)
+return rc;
+}
+
+return 0;
+}
+
+int libxl__dom_load_acpi(libxl__gc *gc,
+libxl_domain_build_info *info,
+struct xc_dom_image *dom)
+{
+struct acpi_config config = {0};
+struct acpi_ctxt ctxt;
+uint32_t domid = dom->guest_domid;
+xc_interface *xch = dom->xch;
+int rc, i, acpi_pages_num;
+xen_pfn_t extent, *extents;
+void *acpi_pages, *guest_acpi_pages = NULL;
+unsigned long page_mask;
+
+if ((info->type != LIBXL_DOMAIN_TYPE_HVM) ||
+(info->device_model_version != LIBXL_DEVICE_MODEL_VERSION_NONE))
+return 0;
+
+ctxt.page_size = XC_DOM_PAGE_SIZE(dom);
+ctxt.page_shift = XC_DOM_PAGE_SHIFT(dom);
+page_mask = (1UL << ctxt.page_shift) - 1;
+
+ctxt.mem_ops.alloc = mem_alloc;
+ctxt.mem_ops.v2p = virt_to_phys;
+
+rc = init_acpi_config(gc, dom, info, );
+if (rc) {
+LOG(ERROR, "%s: init_acpi_config failed (rc=%d)", __FUNCTION__, rc);
+return rc;
+}
+
+/* Map page that will hold RSDP */
+extent = RSDP_ADDRESS >> ctxt.page_shift;
+rc = populate_acpi_pages(dom, , 1, );
+if (rc) {
+LOG(ERROR, "%s: populate_acpi_pages for rsdp failed with %d",
+__FUNCTION__, rc);
+goto out;
+}
+config.rsdp = (unsigned long)xc_map_foreign_range(xch, domid,
+  ctxt.page_size,
+  PROT_READ | PROT_WRITE,
+  RSDP_ADDRESS >> 
ctxt.page_shift);
+if (!config.rsdp) {
+LOG(ERROR, "%s: Can't map acpi_physical", __FUNCTION__);
+rc = -1;
+goto out;
+}
+
+/* Map acpi_info page */
+extent = ACPI_INFO_PHYSICAL_ADDRESS >> ctxt.page_shift;
+rc = populate_acpi_pages(dom, , 1, );
+if (rc) {
+LOG(ERROR, "%s: populate_acpi_pages for acpi_info failed with %d",
+__FUNCTION__, rc);
+goto out;
+}
+
+config.ainfop = (unsigned long)xc_map_foreign_range(xch, domid,
+ctxt.page_size,
+PROT_READ | PROT_WRITE,
+ACPI_INFO_PHYSICAL_ADDRESS 
>> ctxt.page_shift);


Loading the ACPI blob on ARM will be very similar except for the base 
address. So it would be nice to share some code with it.


However, as mentioned in the ACPI thread [1], all the blobs are 
generally loaded by libxc and not libxl. This is more true on ARM 
because the guest address space is controlled by libxc (the position of 
all the blob are decided by it).


Regards,

[1] 
https://lists.xenproject.org/archives/html/xen-devel/2016-07/msg00040.html


--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v1 20/20] libxl/acpi: Build ACPI tables for HVMlite guests

2016-07-05 Thread Boris Ostrovsky
Signed-off-by: Boris Ostrovsky 
---

Changes in v1:
* Move to libxl
* Added populate_acpi_pages()
* Stash location/size of tables in xc_dom_image (to be used in constructing 
e820 map)
* Use libxl allocator
* Only set XEN_X86_EMU_LAPIC flag if 'apic' option is set.
* Make acpi_build_tables() return error code

 .gitignore   |4 +
 tools/libacpi/build.c|7 +-
 tools/libacpi/libacpi.h  |   15 ++-
 tools/libxl/Makefile |   17 +++-
 tools/libxl/libxl_arch.h |3 +
 tools/libxl/libxl_dom.c  |1 +
 tools/libxl/libxl_x86.c  |   29 +++--
 tools/libxl/libxl_x86_acpi.c |  292 ++
 tools/libxl/libxl_x86_acpi.h |   21 +++
 9 files changed, 373 insertions(+), 16 deletions(-)
 create mode 100644 tools/libxl/libxl_x86_acpi.c
 create mode 100644 tools/libxl/libxl_x86_acpi.h

diff --git a/.gitignore b/.gitignore
index 9dd2086..d4da37f 100644
--- a/.gitignore
+++ b/.gitignore
@@ -179,6 +179,10 @@ tools/libxl/testenum.c
 tools/libxl/tmp.*
 tools/libxl/_libxl.api-for-check
 tools/libxl/*.api-ok
+tools/libxl/mk_dsdt
+tools/libxl/dsdt*.c
+tools/libxl/dsdt_*.asl
+tools/libxl/ssdt_*.h
 tools/misc/cpuperf/cpuperf-perfcntr
 tools/misc/cpuperf/cpuperf-xen
 tools/misc/xc_shadow
diff --git a/tools/libacpi/build.c b/tools/libacpi/build.c
index 290f005..a6ddf53 100644
--- a/tools/libacpi/build.c
+++ b/tools/libacpi/build.c
@@ -23,6 +23,7 @@
 #include "ssdt_tpm.h"
 #include "ssdt_pm.h"
 #include "x86.h"
+#include 
 #include 
 #include 
 
@@ -473,7 +474,7 @@ static int new_vm_gid(struct acpi_ctxt *ctxt,
 return 1;
 }
 
-void acpi_build_tables(struct acpi_ctxt *ctxt, struct acpi_config *config)
+int acpi_build_tables(struct acpi_ctxt *ctxt, struct acpi_config *config)
 {
 struct acpi_20_rsdp *rsdp;
 struct acpi_20_rsdt *rsdt;
@@ -594,11 +595,11 @@ void acpi_build_tables(struct acpi_ctxt *ctxt, struct 
acpi_config *config)
 
 *(struct acpi_info *)config->ainfop = config->ainfo;
 
-return;
+return 0;
 
 oom:
 printf("unable to build ACPI tables: out of memory\n");
-
+return -1;
 }
 
 /*
diff --git a/tools/libacpi/libacpi.h b/tools/libacpi/libacpi.h
index 87a2937..791ebac 100644
--- a/tools/libacpi/libacpi.h
+++ b/tools/libacpi/libacpi.h
@@ -69,6 +69,15 @@ struct acpi_ctxt {
 void *(*alloc)(struct acpi_ctxt *ctxt, uint32_t size, uint32_t align);
 unsigned long (*v2p)(struct acpi_ctxt *ctxt, void *v);
 } mem_ops;
+
+unsigned int page_size;
+unsigned int page_shift;
+
+/* Memory allocator */
+unsigned long alloc_base_paddr;
+unsigned long alloc_base_vaddr;
+unsigned long alloc_currp;
+unsigned long alloc_end;
 };
 
 struct acpi_config {
@@ -98,13 +107,13 @@ struct acpi_config {
  * This must match the OperationRegion(BIOS, SystemMemory, )
  * definition in the DSDT
  */
-unsigned int ainfop;
+unsigned long ainfop;
 
 /* RSDP address */
-unsigned int rsdp;
+unsigned long rsdp;
 };
 
-void acpi_build_tables(struct acpi_ctxt *ctxt, struct acpi_config *config);
+int acpi_build_tables(struct acpi_ctxt *ctxt, struct acpi_config *config);
 
 #endif /* __LIBACPI_H__ */
 
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 3a2d64a..18be2e7 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -75,7 +75,20 @@ else
 LIBXL_OBJS-y += libxl_no_colo.o
 endif
 
-LIBXL_OBJS-$(CONFIG_X86) += libxl_cpuid.o libxl_x86.o libxl_psr.o
+ACPI_PATH  = $(XEN_ROOT)/tools/libacpi
+ACPI_FILES = dsdt_pvh.c build.c static_tables.c
+ACPI_OBJS  = $(patsubst %.c,%.o,$(ACPI_FILES))
+$(ACPI_FILES): acpi
+$(ACPI_OBJS): CFLAGS += -I. -DSTDUTILS=\"$(CURDIR)/libxl_x86_acpi.h\"
+vpath build.c $(ACPI_PATH)/
+vpath static_tables.c $(ACPI_PATH)/
+LIBXL_OBJS-$(CONFIG_X86) += $(ACPI_OBJS)
+
+.PHONY: acpi
+acpi:
+   $(MAKE) -C $(ACPI_PATH) ACPI_BUILD_DIR=$(shell pwd)
+
+LIBXL_OBJS-$(CONFIG_X86) += libxl_cpuid.o libxl_x86.o libxl_psr.o 
libxl_x86_acpi.o
 LIBXL_OBJS-$(CONFIG_ARM) += libxl_nocpuid.o libxl_arm.o libxl_libfdt_compat.o
 
 ifeq ($(CONFIG_NetBSD),y)
@@ -166,6 +179,7 @@ $(XL_OBJS): CFLAGS += $(CFLAGS_XL)
 $(XL_OBJS): CFLAGS += -include $(XEN_ROOT)/tools/config.h # libxl_json.h needs 
it.
 
 libxl_dom.o: CFLAGS += -I$(XEN_ROOT)/tools  # include libacpi/x86.h
+libxl_x86_acpi.o: CFLAGS += -I$(XEN_ROOT)/tools
 
 SAVE_HELPER_OBJS = libxl_save_helper.o _libxl_save_msgs_helper.o
 $(SAVE_HELPER_OBJS): CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenevtchn)
@@ -308,6 +322,7 @@ clean:
$(RM) -f testidl.c.new testidl.c *.api-ok
$(RM) -f xenlight.pc
$(RM) -f xlutil.pc
+   $(MAKE) -C $(ACPI_PATH) ACPI_BUILD_DIR=$(shell pwd) clean
 
 distclean: clean
$(RM) -f xenlight.pc.in xlutil.pc.in
diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
index 34a853c..7c6536b 100644
--- a/tools/libxl/libxl_arch.h
+++ b/tools/libxl/libxl_arch.h
@@ -62,4 +62,7 @@ int