Re: [Xen-devel] [Qemu-devel] [PATCH v5 05/24] hw: acpi: Implement XSDT support for RSDP

2018-11-23 Thread Samuel Ortiz
On Thu, Nov 22, 2018 at 05:26:52PM +0100, Igor Mammedov wrote:
> On Wed, 21 Nov 2018 15:42:11 +0100
> Samuel Ortiz  wrote:
> 
> > Hi Igor,
> > 
> > On Thu, Nov 08, 2018 at 03:16:23PM +0100, Igor Mammedov wrote:
> > > On Mon,  5 Nov 2018 02:40:28 +0100
> > > Samuel Ortiz  wrote:
> > >   
> > > > XSDT is the 64-bit version of the legacy ACPI RSDT (Root System
> > > > Description Table). The RSDT only allows for 32-bit addresses and has
> > > > thus been deprecated. Since ACPI version 2.0, RSDPs should point at
> > > > XSDTs and no longer at RSDTs, although RSDTs are still supported for
> > > > backward compatibility.
> > > > 
> > > > Since version 2.0, RSDPs should add an extended checksum, a complete
> > > > table length and a version field to the table.
> > > 
> > > This patch re-implements what the arm/virt board already does
> > > and fixes a checksum bug in the latter, and at the same time
> > > has no user (within the patch).
> > > 
> > > I'd suggest redoing it in a way similar to the FADT refactoring:
> > >   patch 1: fix checksum bug in virt/arm
> > >   patch 2: update reference tables in test
> > >   patch 3: introduce AcpiRsdpData similar to commit 937d1b587
> > >  (both arm and x86) which stores all data in host byte order
> > >   patch 4: convert arm's impl. to build_append_int_noprefix() API
> > >  (commit 5d7a334f7) ... move out to aml-build.c
> > >   patch 5: reuse generalized arm's build_rsdp() for x86, dropping the
> > >  x86 specific one, amending it to generate the rev1 variant
> > >  defined by revision in AcpiRsdpData (commit dd1b2037a)
> > I agree patches #1, #2 and #5 make sense. 3 and 4 as well, but here
> > you're asking about something that's out of scope of the current series.
> /me guilty of that, but I have excuses for doing so:
>   * that's the only way to get rid of the legacy approach given limited resources.
> So the task goes to whoever touches old code. /others and me included/
> I'd be glad if someone would volunteer and do clean-ups but in the absence
> of such, the victim is the interested party.
>   * a contributor to the ACPI part learns how to use the preferred approach,
> makes the code more robust and clear as it's not possible to make
> endianness mistakes, and it is very simple to review and notice mistakes
> as the end result practically matches, row by row, the table described in the spec.
I understand and agree with that. And to be clear: I'm happy to
contribute and work on that. But I'm also lucky to have an employer
that can afford to let me spend as much time as needed to do this kind
of refactoring/modernizing work. I just want to point out that other
potential newcomers to the project may not have that luxury.
I wonder (I sincerely do, I'm not making any assumptions) how much code
is left unmerged because the original submitter did not have the time or
budget to polish it up to the expected level.

Cheers,
Samuel.



Re: [Xen-devel] [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition

2018-11-21 Thread Samuel Ortiz
On Wed, Nov 21, 2018 at 03:15:26PM +0100, Igor Mammedov wrote:
> On Wed, 21 Nov 2018 07:35:47 -0500
> "Michael S. Tsirkin"  wrote:
> 
> > On Mon, Nov 19, 2018 at 04:31:10PM +0100, Igor Mammedov wrote:
> > > On Fri, 16 Nov 2018 17:37:54 +0100
> > > Paolo Bonzini  wrote:
> > >   
> > > > On 16/11/18 17:29, Igor Mammedov wrote:  
> > > > > General suggestions for this series:
> > > > >   1. Preferably don't do multiple changes within a patch
> > > > >  nor post huge patches (unless it's pure code movement).
> > > > >  (it's easy to squash patches later if necessary)
> > > > >   2. Start small, pick a table, generalize it and send it as
> > > > >  one small patchset. Tables are often independent
> > > > >  and it's much easier on both author/reviewer to agree upon
> > > > >  changes and rewrite them if necessary.
> > > > 
> > > > How would that be done?  This series is on the bigger side, agreed, but
> > > > most of it is really just code movement.  It's a starting point, having
> > > > a generic ACPI library is way beyond what this is trying to do.  
> > > I've tried to give suggestions on how to restructure the series
> > > on a per-patch basis. In my opinion it's quite possible to split
> > > the series into several smaller ones and it should really help with
> > > making the series cleaner and easier/faster to review/amend/merge
> > > vs what we have in v5.
> > > (it's more frustrating to rework a large series vs a smaller one)
> > > 
> > > If something isn't clear, it's easy to reach out to me here
> > > or directly (email/irc/github) for clarification/feedback.
> > 
> > I assume the #1 goal is to add reduced HW support.  So another
> > option to speed up merging is to just go ahead and duplicate a
> > bunch of code e.g. in pc_virt.c acpi/reduced.c or in any other
> > file.
> > This way it might be easier to see what's common code and what isn't.
> > And I think offline Igor said he might prefer that way. Right Igor?
> You probably mean 'x86 reduced hw' support.
That's what this is going to eventually look like, unfortunately.
And there's no technical reason why we could not have an arch-agnostic
hw-reduced support, so this should only be an intermediate step.

> That was what I
> already suggested for the PCI AML code during patch review. Just don't
> call it generic when it's not, and place the code in the hw/i386/ directory beside
> acpi-build.c. It might apply to some other tables (i.e. complex cases).
> 
> On per patch review I gave suggestions how to amend series to make
> it acceptable without doing complex refactoring and pointed out
> places we probably shouldn't refactor now and just duplicate as
> it's too complex or not clear how to generalize it yet.
I think I got the idea, and I will try to rework this series according
to these directions.


> The problem with duplication is that a random contributor is not
> around to clean the code up after a feature is merged, and we end up
> with a bunch of messy code.
I'd argue that the same could be said of a potential "x86 hw-reduced"
solution. The same random contributor may not be around to push it to
the next step and make it more generic. I'd also argue we're not
planning to be random contributors, dropping code to the mailing list
and leaving.


> A word to the contributors:
> Don't do refactoring in silence, keep discussing approaches here,
> suggest alternatives.
Practically speaking, a large chunk of the NEMU work relies on having a
generic hardware-reduced ACPI implementation. We could not have blocked
the project waiting for an upstream-acceptable solution for it and we
had to pick one route.
In retrospect, I think we should have gone the self-contained, fully
duplicated route and moved on with the rest of the NEMU work. Upstream
discussions could have then happened in parallel without much disruption
to the project.

Cheers,
Samuel.



Re: [Xen-devel] [Qemu-devel] [PATCH v5 20/24] hw: acpi: Define ACPI tables builder interface

2018-11-21 Thread Samuel Ortiz
On Fri, Nov 16, 2018 at 05:02:26PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:43 +0100
> Samuel Ortiz  wrote:
> 
> > In order to decouple ACPI APIs from specific machine types, we are
> > creating an ACPI builder interface that each ACPI platform can choose to
> > implement.
> > This way, a new machine type can re-use the high level ACPI APIs and
> > define some custom table build methods, without having to duplicate most
> > of the existing implementation only to add small variations to it.
> I'm not sure about the motivation behind such high-level APIs;
> what's obvious here is an extra level of indirection with no clear gain.
> 
> Yep, using table callbacks, one can attempt to generalize
> acpi_setup() and help boards decide which tables not to build
> (MCFG comes to mind). But I'm not convinced that acpi_setup()
> could be cleanly generalized as a whole (probably some parts but
> not everything)
It's more about generalizing acpi_build(), and then having one
acpi_setup() for non-hardware-reduced ACPI and an acpi_reduced_setup() for
hardware-reduced.

Right now there's no generalization at all but with this patch we could
already use the same acpi_reduced_setup() implementation for both arm
and i386/virt.
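
For illustration, here is roughly what that dispatch looks like with the
AcpiBuilder interface introduced later in this series (patch 20); this
is a sketch only, mirroring the hunks in patch 21, not new API:

    /* Generic build path: dispatch through the machine's registered
     * ACPI builder callbacks instead of calling build_foo() directly. */
    AcpiBuilder *ab = ACPI_BUILDER(machine);

    acpi_add_table(table_offsets, tables_blob);
    acpi_builder_madt(ab, tables_blob, tables->linker, machine, acpi_conf);

    if (acpi_conf->numa_nodes) {
        acpi_add_table(table_offsets, tables_blob);
        acpi_builder_srat(ab, tables_blob, tables->linker, machine, acpi_conf);
    }

The same sequence can then back either acpi_setup() or a hardware-reduced
acpi_reduced_setup(), with only the callback table changing per machine.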

> so it's a minor benefit for the extra headache of
> figuring out which callback will actually be called when reading the code.
This is the same complexity that already exists for essentially all
current interfaces.

> However if a board needs a slightly different table, it will have to
> duplicate an existing one and then modify it to suit its needs.
> 
> to me it pretty much looks the same as calling build_foo()
> as we do now, but with an extra indirection level and then
> duplicating the latter for usage in another board in a slightly
> different manner.
I believe what you're trying to say is that this abstraction may be
useful but you're arguing the granularity is not properly defined? Am I
getting this right?

Cheers,
Samuel.


Re: [Xen-devel] [Qemu-devel] [PATCH v5 15/24] hw: i386: Export the i386 ACPI SRAT build method

2018-11-21 Thread Samuel Ortiz
On Thu, Nov 15, 2018 at 02:28:54PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:38 +0100
> Samuel Ortiz  wrote:
> 
> > This is the standard way of building SRAT on x86 platforms. But future
> > machine types could decide to define their own custom SRAT build method
> > through the ACPI builder methods.
> > Moreover, we will also need to reach build_srat() from outside of
> > acpi-build in order to use it as the ACPI builder SRAT build method.
> SRAT is usually highly machine specific (memory holes, layout, guest OS
> specific quirks) so it's hard to generalize it.
Hence the need for an SRAT builder interface.

> I'd drop the SRAT-related patches from this series and introduce an
> i386/virt specific SRAT when you post patches for it.
virt uses the existing i386 build_srat() routine, there's nothing
special about it. So this would be purely duplicated code.

Cheers,
Samuel.


Re: [Xen-devel] [Qemu-devel] [PATCH v5 12/24] hw: acpi: Export the MCFG getter

2018-11-21 Thread Samuel Ortiz
Hi Igor,

On Thu, Nov 15, 2018 at 01:36:58PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:35 +0100
> Samuel Ortiz  wrote:
> 
> > From: Yang Zhong 
> > 
> > The ACPI MCFG getter is not x86 specific and could be called from
> > anywhere within generic ACPI API, so let's export it.
> So far it's an x86, or more exactly q35, specific thing,
It's property based, and it's using a generic PCIE property afaict.
So it's up to each machine type to define those properties.
I'm curious here: What's the idiomatic way to define a machine
setting/attribute/property, let each instance define it or not, and
make it available at run time?
Would you be getting the PCI host pointer from the ACPI build state and
getting that information back from there?
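
For context, the current getter is property based along these lines (a
sketch using the generic PCIE_HOST_MCFG_BASE/PCIE_HOST_MCFG_SIZE property
names from include/hw/pci/pcie_host.h and the acpi_get_pci_host() helper
quoted elsewhere in this thread; field names follow the AcpiMcfgInfo
struct from patch 11):

    #include "qemu/osdep.h"
    #include "qapi/qmp/qnum.h"
    #include "hw/pci/pcie_host.h"
    #include "hw/acpi/aml-build.h"

    static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
    {
        Object *pci_host = acpi_get_pci_host();
        QObject *o;

        o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_BASE, NULL);
        if (!o) {
            return false;       /* no ECAM window, skip the MCFG table */
        }
        mcfg->mcfg_base = qnum_get_uint(qobject_to(QNum, o));
        qobject_unref(o);

        o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_SIZE, NULL);
        assert(o);
        mcfg->mcfg_size = qnum_get_uint(qobject_to(QNum, o));
        qobject_unref(o);
        return true;
    }

So any machine type that exposes those two properties on its host bridge
gets MCFG handling for free, which is the point I'm trying to make above.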

Cheers,
Samuel.


Re: [Xen-devel] [Qemu-devel] [PATCH v5 11/24] hw: acpi: Export and generalize the PCI host AML API

2018-11-21 Thread Samuel Ortiz
Hi Igor,

On Wed, Nov 14, 2018 at 11:55:37AM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:34 +0100
> Samuel Ortiz  wrote:
> 
> > From: Yang Zhong 
> > 
> > The AML build routines for the PCI host bridge and the corresponding
> > DSDT addition are neither x86 nor PC machine type specific.
> > We can move them to the architecture agnostic hw/acpi folder, and by
> > carrying all the needed information through a new AcpiPciBus structure,
> > we can make them PC machine type independent.
> 
> I don't know anything about PCI, but the functional changes don't look
> correct to me.
>
> See more detailed comments below.
> 
> Marcel,
> could you take a look at this patch (in particular the main crs changes), please?
> 
> > 
> > Signed-off-by: Yang Zhong 
> > Signed-off-by: Rob Bradford 
> > Signed-off-by: Samuel Ortiz 
> > ---
> >  include/hw/acpi/aml-build.h |   8 ++
> >  hw/acpi/aml-build.c | 157 
> >  hw/i386/acpi-build.c| 115 ++
> >  3 files changed, 173 insertions(+), 107 deletions(-)
> > 
> > diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> > index fde2785b9a..1861e37ebf 100644
> > --- a/include/hw/acpi/aml-build.h
> > +++ b/include/hw/acpi/aml-build.h
> > @@ -229,6 +229,12 @@ typedef struct AcpiMcfgInfo {
> >  uint32_t mcfg_size;
> >  } AcpiMcfgInfo;
> >  
> > +typedef struct AcpiPciBus {
> > +PCIBus *pci_bus;
> > +Range *pci_hole;
> > +Range *pci_hole64;
> > +} AcpiPciBus;
> Again, this and all below is not aml-build material.
> Consider adding/using pci specific acpi file for it.
> 
> Also, even though the PCI AML in arm/virt is to a large degree a subset
> of the x86 target's and it would be much better to unify the ARM part with x86,
> it will probably be too big/complex of a change if we take it on in
> one go.
> 
> So not to derail you from the goal too much, we probably should
> generalize this a little bit less, limiting refactoring to x86
> target for now.
So keeping it under i386 means it won't be accessible through hw/acpi/,
which means we won't be able to have a generic hw/acpi/reduced.c
implementation. From our perspective, this is the problem with keeping
things under i386 because we're not sure yet how generic they are: they
still won't be shareable for a generic hardware-reduced ACPI
implementation, which means we'll temporarily have to carry yet another
hardware-reduced ACPI implementation, under hw/i386 this time.
I guess this is what Michael meant by keeping some parts of the code
duplicated for now.

I feel it'd be easier to move those APIs to a shareable location, to
make it easier for ARM to consume them even if they're not entirely generic yet.
But you guys are the maintainers, and if you think we should restrict the
generalization to x86 only for now, we can go for it.
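
To make the intent concrete, here is how a board would consume the
shared helpers once they live under hw/acpi/ (a sketch based on the
AcpiPciBus structure and the acpi_dsdt_add_pci_bus()/acpi_get_pci_holes()
declarations quoted in this thread, not the final code):

    /* Board side: gather the machine specific bits and hand them to the
     * generic DSDT helper; nothing in the helper needs PC internals. */
    Range pci_hole, pci_hole64;
    AcpiPciBus acpi_pci_host;

    acpi_get_pci_holes(&pci_hole, &pci_hole64);
    acpi_pci_host = (AcpiPciBus) {
        .pci_bus    = PCI_HOST_BRIDGE(acpi_get_pci_host())->bus,
        .pci_hole   = &pci_hole,
        .pci_hole64 = &pci_hole64,
    };
    acpi_dsdt_add_pci_bus(dsdt, &acpi_pci_host);

An i386/virt or ARM caller would only differ in how it fills AcpiPciBus.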

> For example, move generic x86 pci parts to hw/i386/acpi-pci.[hc],
> and structure it so that building blocks in acpi-pci.c could be
> reused for x86 reduced profile later.
> Once it's been done, it might be easier and less complex to
> unify a bit more generic code in i386/acpi-pci.c with corresponding
> ARM code.
> 
> The patch is too big and should be split into smaller logical chunks,
> and you should separate code movement vs the functional changes you're
> making here.
> 
> Once you split the patch properly, it should be easier to assess the
> changes.
> 
> >  typedef struct CrsRangeEntry {
> >  uint64_t base;
> >  uint64_t limit;
> > @@ -411,6 +417,8 @@ Aml *build_osc_method(uint32_t value);
> >  void build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo 
> > *info);
> >  Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi);
> >  Aml *build_prt(bool is_pci0_prt);
> > +void acpi_dsdt_add_pci_bus(Aml *dsdt, AcpiPciBus *pci_host);
> > +Aml *build_pci_host_bridge(Aml *table, AcpiPciBus *pci_host);
> >  void crs_range_set_init(CrsRangeSet *range_set);
> >  Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set);
> >  void crs_replace_with_free_ranges(GPtrArray *ranges,
> > diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> > index b8e32f15f7..869ed70db3 100644
> > --- a/hw/acpi/aml-build.c
> > +++ b/hw/acpi/aml-build.c
> > @@ -29,6 +29,19 @@
> >  #include "hw/pci/pci_bus.h"
> >  #include "qemu/range.h"
> >  #include "hw/pci/pci_bridge.h"
> > +#include "hw/i386/pc.h"
> > +#include "sysemu/tpm.h"
> > +#include "hw/acpi/tpm.h"
> > +
> > +#

Re: [Xen-devel] [Qemu-devel] [PATCH v5 10/24] hw: acpi: Export the PCI host and holes getters

2018-11-21 Thread Samuel Ortiz
On Tue, Nov 13, 2018 at 04:59:18PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:33 +0100
> Samuel Ortiz  wrote:
> 
> > This is going to be needed by the hardware reduced implementation, so
> > let's export it.
> > Once the ACPI builder methods and getters are implemented, the
> > acpi_get_pci_host() implementation will become hardware agnostic.
> > 
> > Signed-off-by: Samuel Ortiz 
> > ---
> >  include/hw/acpi/aml-build.h |  2 ++
> >  hw/acpi/aml-build.c | 43 +
> >  hw/i386/acpi-build.c| 47 ++---
> >  3 files changed, 47 insertions(+), 45 deletions(-)
> > 
> > diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> > index c27c0935ae..fde2785b9a 100644
> > --- a/include/hw/acpi/aml-build.h
> > +++ b/include/hw/acpi/aml-build.h
> > @@ -400,6 +400,8 @@ build_header(BIOSLinker *linker, GArray *table_data,
> >   const char *oem_id, const char *oem_table_id);
> >  void *acpi_data_push(GArray *table_data, unsigned size);
> >  unsigned acpi_data_len(GArray *table);
> > +Object *acpi_get_pci_host(void);
> > +void acpi_get_pci_holes(Range *hole, Range *hole64);
> >  /* Align AML blob size to a multiple of 'align' */
> >  void acpi_align_size(GArray *blob, unsigned align);
> >  void acpi_add_table(GArray *table_offsets, GArray *table_data);
> > diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> > index 2b9a636e75..b8e32f15f7 100644
> > --- a/hw/acpi/aml-build.c
> > +++ b/hw/acpi/aml-build.c
> > @@ -1601,6 +1601,49 @@ void acpi_build_tables_cleanup(AcpiBuildTables 
> > *tables, bool mfre)
> >  g_array_free(tables->vmgenid, mfre);
> >  }
> 
> > +/*
> > + * Because of the PXB hosts we cannot simply query TYPE_PCI_HOST_BRIDGE.
> > + */
> > +Object *acpi_get_pci_host(void)
> > +{
> > +PCIHostState *host;
> > +
> > +host = OBJECT_CHECK(PCIHostState,
> > +object_resolve_path("/machine/i440fx", NULL),
> > +TYPE_PCI_HOST_BRIDGE);
> > +if (!host) {
> > +host = OBJECT_CHECK(PCIHostState,
> > +object_resolve_path("/machine/q35", NULL),
> > +TYPE_PCI_HOST_BRIDGE);
> > +}
> > +
> > +return OBJECT(host);
> > +}
> in general aml-build.c is a place to put ACPI spec primitives,
> so I'd suggest moving it somewhere else.
> 
> Considering it's x86 code (so far), maybe move it to something like
> hw/i386/acpi-pci.c
> 
> Also it might be good to get rid of acpi_get_pci_host() and pass
> a pointer to pci_host as an acpi_setup() argument; since it's static
> for the life of the board we can keep it in AcpiBuildState, and reuse it for
> mcfg/pci_hole/pci bus accesses.
That's what I'm trying to do with patches #23 and 24, but through the
ACPI configuration structure. I could try using the build state instead,
as it's platform agnostic as well.
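
Something like this, if we go the build-state route (hypothetical field
and signature, just to illustrate Igor's suggestion, not an actual
patch):

    /* Keep the host bridge in AcpiBuildState once, at setup time, and
     * reuse it for MCFG, PCI hole and PCI bus accesses later on. */
    typedef struct AcpiBuildState {
        MemoryRegion *table_mr;
        MemoryRegion *rsdp_mr;
        MemoryRegion *linker_mr;
        bool patched;
        PCIHostState *pci_host;     /* assumed new field */
    } AcpiBuildState;

    void acpi_setup(MachineState *machine, PCIHostState *pci_host)
    {
        AcpiBuildState *build_state = g_new0(AcpiBuildState, 1);

        build_state->pci_host = pci_host;   /* set once for the board's life */
        /* ... existing table build / fw_cfg wiring unchanged ... */
    }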

Cheers,
Samuel.



Re: [Xen-devel] [Qemu-devel] [PATCH v5 07/24] hw: acpi: Generalize AML build routines

2018-11-21 Thread Samuel Ortiz
Hi Igor,

On Fri, Nov 09, 2018 at 02:37:33PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:30 +0100
> Samuel Ortiz  wrote:
> 
> > From: Yang Zhong 
> > 
> > Most of the AML build routines under acpi-build are not even
> > architecture specific. They can be moved to the more generic hw/acpi
> > folder where they could be shared across machine types and
> > architectures.
> 
> I'd prefer that we don't pull PCI specific headers into aml-build.
> I suggest creating hw/acpi/pci.c and moving the generic PCI related
> code there, with a corresponding header that would export the API
> (preferably without PCI dependencies in it)
> 
> 
> Also the patch is too big and does too much at a time.
> Here I'd suggest splitting it into smaller parts to make it more digestible:
> 
> 1. split it in 3 parts
> * MCFG
> * CRS
> * PRT
> 2. mcfg between x86 and ARM looks pretty much the same, with ARM
>open coding the bus number calculation and missing the migration hack
>* a patch to make the bus number calculation in ARM the same as x86
>* a patch to bring in the migration hack (dummy MCFG table in case it's disabled)
>  it's questionable if we actually need it in generic code,
>  we most likely need it for legacy machines that predate
>  resizable MemoryRegion, but we probably don't need it for
>  later machines as the problem doesn't exist there.
>  So it might be better to push the hack out from generic code
>  to a legacy caller and keep the generic MCFG clean.
>  (this patch might be better at the beginning of the series as
>   it might affect acpi test results, and might need an update to the
>   reference tables; I'm not really sure)
>* at this point the arm and x86 impl. would be the same, so
>  a patch to move the mcfg build routine to a generic place and replace
>  x86/arm with a single impl.
>* a patch to convert the mcfg build routine to the build_append_int_noprefix() API
>  and drop the AcpiTableMcfg structure
Ok, I'll build another patch series for that one then.

Cheers,
Samuel.


Re: [Xen-devel] [Qemu-devel] [PATCH v5 01/24] hw: i386: Decouple the ACPI build from the PC machine type

2018-11-21 Thread Samuel Ortiz
Hi Igor,

On Fri, Nov 09, 2018 at 03:23:16PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:24 +0100
> Samuel Ortiz  wrote:
> 
> > ACPI tables are platform and machine type and even architecture
> > agnostic, and as such we want to provide an internal ACPI API that
> > only depends on platform agnostic information.
> > 
> > For the x86 architecture, in order to build ACPI tables independently
> > from the PC or Q35 machine types, we are moving a few MachineState
> > structure fields into a machine type agnostic structure called
> > AcpiConfiguration. The structure fields we move are:
> 
> It's not obvious why a new structure is needed, especially at
> the beginning of the series. We should probably place this patch
> much later in the series (if we need it at all) and try to
> generalize as much as possible without using it.
Patch ordering aside, this new structure is needed to make the
existing API no longer completely bound to the PC machine type and to
"Decouple the ACPI build from the PC machine type".

It was either creating a structure to build ACPI tables in a machine
type independent fashion, or passing custom structures (or potentially long
lists of arguments) to the existing APIs. See below.


> And try to come up with an API that doesn't need a centralized collection
> of data somehow related to ACPI (most of the fields here are not generic
> and are applicable to a specific board/target).
> 
> For generic API, I'd prefer a separate building blocks
> like build_fadt()/... that take as an input only parameters
> necessary to compose a table/aml part with occasional board
> interface hooks instead of all encompassing AcpiConfiguration
> and board specific 'acpi_build' that would use them when/if needed.
Let's take build_madt as an example. With my patch we define:

void build_madt(GArray *table_data, BIOSLinker *linker,
MachineState *ms, AcpiConfiguration *conf);

And you're suggesting we'd define:

void build_madt(GArray *table_data, BIOSLinker *linker,
MachineState *ms, HotplugHandler *acpi_dev,
bool apic_xrupt_override);

instead. Is that correct?

The pro of the latter is that, as you said, we would not need to
define a centralized structure holding all possibly needed ACPI related
fields.
The pro of the former is defining a pointer to all needed ACPI
fields once and for all, and hiding the details of the API in the AML
building implementation.


> We probably should split series into several smaller
> (if possible independent) ones, so people won't be scared of
> its sheer size and run away from reviewing it.
I will try to split it in smaller chunks if that helps.

Cheers,
Samuel.


Re: [Xen-devel] [PATCH v5 02/24] hw: acpi: Export ACPI build alignment API

2018-11-21 Thread Samuel Ortiz
On Fri, Nov 09, 2018 at 03:27:16PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:25 +0100
> Samuel Ortiz  wrote:
> 
> > This is going to be needed by the Hardware-reduced ACPI routines.
> > 
> > Reviewed-by: Philippe Mathieu-Daudé 
> > Tested-by: Philippe Mathieu-Daudé 
> > Signed-off-by: Samuel Ortiz 
> the patch is probably misplaced within the series;
> if there is an external user within this series then this patch should
> be squashed there, otherwise it doesn't belong to this series.
hw/acpi/reduced.c needs it. I forgot to remove that patch when removing
the hardware-reduced code from the series. I will remove it.

Cheers,
Samuel.


Re: [Xen-devel] [Qemu-devel] [PATCH v5 05/24] hw: acpi: Implement XSDT support for RSDP

2018-11-21 Thread Samuel Ortiz
Hi Igor,

On Thu, Nov 08, 2018 at 03:16:23PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:28 +0100
> Samuel Ortiz  wrote:
> 
> > XSDT is the 64-bit version of the legacy ACPI RSDT (Root System
> > Description Table). The RSDT only allows for 32-bit addresses and has
> > thus been deprecated. Since ACPI version 2.0, RSDPs should point at
> > XSDTs and no longer at RSDTs, although RSDTs are still supported for
> > backward compatibility.
> > 
> > Since version 2.0, RSDPs should add an extended checksum, a complete table
> > length and a version field to the table.
> 
> This patch re-implements what the arm/virt board already does
> and fixes a checksum bug in the latter, and at the same time
> has no user (within the patch).
> 
> I'd suggest redoing it in a way similar to the FADT refactoring:
>   patch 1: fix checksum bug in virt/arm
>   patch 2: update reference tables in test
>   patch 3: introduce AcpiRsdpData similar to commit 937d1b587
>  (both arm and x86) which stores all data in host byte order
>   patch 4: convert arm's impl. to build_append_int_noprefix() API
>  (commit 5d7a334f7) ... move out to aml-build.c
>   patch 5: reuse generalized arm's build_rsdp() for x86, dropping the
>  x86 specific one, amending it to generate the rev1 variant
>  defined by revision in AcpiRsdpData (commit dd1b2037a)
I agree patches #1, #2 and #5 make sense. 3 and 4 as well, but here
you're asking about something that's out of scope of the current series.
I'll move those patches out of this series and build a new 6-patch series
as suggested.
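
For what it's worth, the spec-mirroring style the new series will use
looks roughly like this for the rev-2 RSDP (a sketch only; field order
and sizes come from the ACPI 2.0 RSDP definition, and the checksum and
address fields are left at zero for the BIOS linker to patch, which is
omitted here):

    static void build_rsdp_v2_sketch(GArray *tbl, const char *oem_id)
    {
        g_array_append_vals(tbl, "RSD PTR ", 8);    /* Signature */
        build_append_int_noprefix(tbl, 0, 1);       /* Checksum, linker patched */
        g_array_append_vals(tbl, oem_id, 6);        /* OEMID */
        build_append_int_noprefix(tbl, 2, 1);       /* Revision */
        build_append_int_noprefix(tbl, 0, 4);       /* RSDT Address, unused */
        build_append_int_noprefix(tbl, 36, 4);      /* Length */
        build_append_int_noprefix(tbl, 0, 8);       /* XSDT Address, linker patched */
        build_append_int_noprefix(tbl, 0, 1);       /* Extended Checksum */
        build_append_int_noprefix(tbl, 0, 3);       /* Reserved */
    }

Each call maps to one row of the RSDP table in the spec, which is
exactly the review-friendliness argument made earlier in this thread.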

Cheers,
Samuel.


Re: [Xen-devel] [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition

2018-11-21 Thread Samuel Ortiz
Igor,

On Wed, Nov 21, 2018 at 03:15:26PM +0100, Igor Mammedov wrote:
> On Wed, 21 Nov 2018 07:35:47 -0500
> "Michael S. Tsirkin"  wrote:
> 
> > On Mon, Nov 19, 2018 at 04:31:10PM +0100, Igor Mammedov wrote:
> > > On Fri, 16 Nov 2018 17:37:54 +0100
> > > Paolo Bonzini  wrote:
> > >   
> > > > On 16/11/18 17:29, Igor Mammedov wrote:  
> > > > > General suggestions for this series:
> > > > >   1. Preferably don't do multiple changes within a patch
> > > > >  nor post huge patches (unless it's pure code movement).
> > > > >  (it's easy to squash patches later if necessary)
> > > > >   2. Start small, pick a table, generalize it and send it as
> > > > >  one small patchset. Tables are often independent
> > > > >  and it's much easier on both author/reviewer to agree upon
> > > > >  changes and rewrite them if necessary.
> > > > 
> > > > How would that be done?  This series is on the bigger side, agreed, but
> > > > most of it is really just code movement.  It's a starting point, having
> > > > a generic ACPI library is way beyond what this is trying to do.  
> > > I've tried to give suggestions on how to restructure the series
> > > on a per-patch basis. In my opinion it's quite possible to split
> > > the series into several smaller ones and it should really help with
> > > making the series cleaner and easier/faster to review/amend/merge
> > > vs what we have in v5.
> > > (it's more frustrating to rework a large series vs a smaller one)
> > > 
> > > If something isn't clear, it's easy to reach out to me here
> > > or directly (email/irc/github) for clarification/feedback.
> > 
> > I assume the #1 goal is to add reduced HW support.  So another
> > option to speed up merging is to just go ahead and duplicate a
> > bunch of code e.g. in pc_virt.c acpi/reduced.c or in any other
> > file.
> > This way it might be easier to see what's common code and what isn't.
> > And I think offline Igor said he might prefer that way. Right Igor?
> You probably mean 'x86 reduced hw' support. That was what I
> already suggested for the PCI AML code during patch review. Just don't
> call it generic when it's not, and place the code in the hw/i386/ directory beside
> acpi-build.c. It might apply to some other tables (i.e. complex cases).
> 
> On per patch review I gave suggestions how to amend series to make
> it acceptable without doing complex refactoring and pointed out
> places we probably shouldn't refactor now and just duplicate as
> it's too complex or not clear how to generalize it yet.
> 
> The problem with duplication is that a random contributor is not
> around to clean the code up after a feature is merged, and we end up
> with a bunch of messy code.
> 
> A word to the contributors:
> Don't do refactoring in silence, keep discussing approaches here,
> suggest alternatives. That way it's easier to reach a compromise
> and merge it with fewer iterations. And if you do split it into smaller
> parts, the process should go even faster.
> 
> I'll send a small RSDP refactoring series for reference.
I was already working on the RSDP changes. Let me know if I should drop
that work too.

Cheers,
Samuel.


Re: [Xen-devel] [Qemu-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition

2018-11-21 Thread Samuel Ortiz
Hi Michael,

On Wed, Nov 21, 2018 at 07:35:47AM -0500, Michael S. Tsirkin wrote:
> On Mon, Nov 19, 2018 at 04:31:10PM +0100, Igor Mammedov wrote:
> > On Fri, 16 Nov 2018 17:37:54 +0100
> > Paolo Bonzini  wrote:
> > 
> > > On 16/11/18 17:29, Igor Mammedov wrote:
> > > > General suggestions for this series:
> > > >   1. Preferably don't do multiple changes within a patch
> > > >  nor post huge patches (unless it's pure code movement).
> > > >  (it's easy to squash patches later if necessary)
> > > >   2. Start small, pick a table, generalize it and send it as
> > > >  one small patchset. Tables are often independent
> > > >  and it's much easier on both author/reviewer to agree upon
> > > >  changes and rewrite them if necessary.
> > > 
> > > How would that be done?  This series is on the bigger side, agreed, but
> > > most of it is really just code movement.  It's a starting point, having
> > > a generic ACPI library is way beyond what this is trying to do.
> > I've tried to give suggestions on how to restructure the series
> > on a per-patch basis. In my opinion it's quite possible to split
> > the series into several smaller ones and it should really help with
> > making the series cleaner and easier/faster to review/amend/merge
> > vs what we have in v5.
> > (it's more frustrating to rework a large series vs a smaller one)
> > 
> > If something isn't clear, it's easy to reach out to me here
> > or directly (email/irc/github) for clarification/feedback.
> 
> I assume the #1 goal is to add reduced HW support.
From our perspective, yes. From the project's point of view, it's about
making the current ACPI code more generic and not bound to any specific
machine type.

> So another
> option to speed up merging is to just go ahead and duplicate a
> bunch of code e.g. in pc_virt.c acpi/reduced.c or in any other
> file.
It's precisely what we wanted to avoid in the very first place, and we
assumed this would be largely frowned upon by the community. It's also a
burden for everyone to maintain that amount of duplicated code. I also
suppose this would mean we'd eventually have to de-duplicate and
factor things back in.
Honestly I'd rather not rush things out and would work on code sharing first.
I'll answer Igor's numerous comments today and will start addressing
some of his concerns right away as well.

Cheers,
Samuel.


Re: [Xen-devel] [PATCH v5 05/24] hw: acpi: Implement XSDT support for RSDP

2018-11-08 Thread Samuel Ortiz
Hi Igor,

On Thu, Nov 08, 2018 at 03:16:23PM +0100, Igor Mammedov wrote:
> On Mon,  5 Nov 2018 02:40:28 +0100
> Samuel Ortiz  wrote:
> 
> > XSDT is the 64-bit version of the legacy ACPI RSDT (Root System
> > Description Table). The RSDT only allows for 32-bit addresses and has
> > thus been deprecated. Since ACPI version 2.0, RSDPs should point at
> > XSDTs and no longer at RSDTs, although RSDTs are still supported for
> > backward compatibility.
> > 
> > Since version 2.0, RSDPs should add an extended checksum, a complete table
> > length and a version field to the table.
> 
> This patch re-implements what the arm/virt board already does
> and fixes a checksum bug in the latter, and at the same time
> has no user (within the patch).
> 
> I'd suggest redoing it in a way similar to the FADT refactoring:
>   patch 1: fix checksum bug in virt/arm
>   patch 2: update reference tables in test
I now see what you meant with the ACPI reference tables, thanks.
I'll follow your advice.

Cheers,
Samuel.


Re: [Xen-devel] [PATCH v5 03/24] hw: acpi: The RSDP build API can return void

2018-11-06 Thread Samuel Ortiz
On Tue, Nov 06, 2018 at 11:23:39AM +0100, Paolo Bonzini wrote:
> On 05/11/2018 02:40, Samuel Ortiz wrote:
> >  /* RSDP */
> > -static GArray *
> > +static void
> >  build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned 
> > xsdt_tbl_offset)
> >  {
> >  AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
> > @@ -392,8 +392,6 @@ build_rsdp(GArray *rsdp_table, BIOSLinker *linker, 
> > unsigned xsdt_tbl_offset)
> >  bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
> >  (char *)rsdp - rsdp_table->data, sizeof *rsdp,
> >  (char *)&rsdp->checksum - rsdp_table->data);
> > -
> > -return rsdp_table;
> >  }
> >  
> 
> Better than v4. :)
Right, I followed Philippe's advice and it does make things clearer :)

Cheers,
Samuel.


[Xen-devel] [PATCH v5 23/24] hw: i386: Set ACPI configuration PCI host pointer

2018-11-04 Thread Samuel Ortiz
For both PC and Q35 machine types, we can set it at the PCI host
bridge creation time.

Signed-off-by: Samuel Ortiz 
---
 hw/i386/pc_piix.c | 1 +
 hw/i386/pc_q35.c  | 1 +
 2 files changed, 2 insertions(+)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index f5b139a3eb..f1f0de3585 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -216,6 +216,7 @@ static void pc_init1(MachineState *machine,
 no_hpet = 1;
 }
 isa_bus_irqs(isa_bus, pcms->gsi);
+acpi_conf->pci_host = pci_host;
 
 if (kvm_pic_in_kernel()) {
 i8259 = kvm_i8259_init(isa_bus);
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index cdde4a4beb..a8772e29a5 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -188,6 +188,7 @@ static void pc_q35_init(MachineState *machine)
 qdev_init_nofail(DEVICE(q35_host));
 phb = PCI_HOST_BRIDGE(q35_host);
 host_bus = phb->bus;
+acpi_conf->pci_host = phb;
 /* create ISA bus */
 lpc = pci_create_simple_multifunction(host_bus, PCI_DEVFN(ICH9_LPC_DEV,
   ICH9_LPC_FUNC), true,
-- 
2.19.1



[Xen-devel] [PATCH v5 19/24] hw: acpi: Retrieve the PCI bus from AcpiPciHpState

2018-11-04 Thread Samuel Ortiz
From: Sebastien Boeuf 

Instead of using the machine type specific method find_i440fx() to
retrieve the PCI bus, this commit aims to rely on the fact that the
PCI bus is known by the structure AcpiPciHpState.

When the structure is initialized through acpi_pcihp_init() call,
it saves the PCI bus, which means there is no need to invoke a
special function later on.

Based on the fact that find_i440fx() was only used there, this
patch also removes the function find_i440fx() itself from the
entire codebase.

Reviewed-by: Philippe Mathieu-Daudé 
Tested-by: Philippe Mathieu-Daudé 
Signed-off-by: Sebastien Boeuf 
Signed-off-by: Jing Liu 
---
 include/hw/i386/pc.h  |  1 -
 hw/acpi/pcihp.c   | 10 --
 hw/pci-host/piix.c|  8 
 stubs/pci-host-piix.c |  6 --
 stubs/Makefile.objs   |  1 -
 5 files changed, 4 insertions(+), 22 deletions(-)
 delete mode 100644 stubs/pci-host-piix.c

diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index 44cb6bf3f3..8e5f1464eb 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -255,7 +255,6 @@ PCIBus *i440fx_init(const char *host_type, const char 
*pci_type,
 MemoryRegion *pci_memory,
 MemoryRegion *ram_memory);
 
-PCIBus *find_i440fx(void);
 /* piix4.c */
 extern PCIDevice *piix4_dev;
 int piix4_init(PCIBus *bus, ISABus **isa_bus, int devfn);
diff --git a/hw/acpi/pcihp.c b/hw/acpi/pcihp.c
index 80d42e12ff..254b2e50ab 100644
--- a/hw/acpi/pcihp.c
+++ b/hw/acpi/pcihp.c
@@ -93,10 +93,9 @@ static void *acpi_set_bsel(PCIBus *bus, void *opaque)
 return bsel_alloc;
 }
 
-static void acpi_set_pci_info(void)
+static void acpi_set_pci_info(AcpiPciHpState *s)
 {
 static bool bsel_is_set;
-PCIBus *bus;
 unsigned bsel_alloc = ACPI_PCIHP_BSEL_DEFAULT;
 
 if (bsel_is_set) {
@@ -104,10 +103,9 @@ static void acpi_set_pci_info(void)
 }
 bsel_is_set = true;
 
-bus = find_i440fx(); /* TODO: Q35 support */
-if (bus) {
+if (s->root) {
 /* Scan all PCI buses. Set property to enable acpi based hotplug. */
-pci_for_each_bus_depth_first(bus, acpi_set_bsel, NULL, &bsel_alloc);
+pci_for_each_bus_depth_first(s->root, acpi_set_bsel, NULL, 
&bsel_alloc);
 }
 }
 
@@ -213,7 +211,7 @@ static void acpi_pcihp_update(AcpiPciHpState *s)
 
 void acpi_pcihp_reset(AcpiPciHpState *s)
 {
-acpi_set_pci_info();
+acpi_set_pci_info(s);
 acpi_pcihp_update(s);
 }
 
diff --git a/hw/pci-host/piix.c b/hw/pci-host/piix.c
index 47293a3915..658460264b 100644
--- a/hw/pci-host/piix.c
+++ b/hw/pci-host/piix.c
@@ -445,14 +445,6 @@ PCIBus *i440fx_init(const char *host_type, const char 
*pci_type,
 return b;
 }
 
-PCIBus *find_i440fx(void)
-{
-PCIHostState *s = OBJECT_CHECK(PCIHostState,
-   object_resolve_path("/machine/i440fx", 
NULL),
-   TYPE_PCI_HOST_BRIDGE);
-return s ? s->bus : NULL;
-}
-
 /* PIIX3 PCI to ISA bridge */
 static void piix3_set_irq_pic(PIIX3State *piix3, int pic_irq)
 {
diff --git a/stubs/pci-host-piix.c b/stubs/pci-host-piix.c
deleted file mode 100644
index 6ed81b1f21..00
--- a/stubs/pci-host-piix.c
+++ /dev/null
@@ -1,6 +0,0 @@
-#include "qemu/osdep.h"
-#include "hw/i386/pc.h"
-PCIBus *find_i440fx(void)
-{
-return NULL;
-}
diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
index 5dd0aeeec6..725f78bedc 100644
--- a/stubs/Makefile.objs
+++ b/stubs/Makefile.objs
@@ -41,6 +41,5 @@ stub-obj-y += pc_madt_cpu_entry.o
 stub-obj-y += vmgenid.o
 stub-obj-y += xen-common.o
 stub-obj-y += xen-hvm.o
-stub-obj-y += pci-host-piix.o
 stub-obj-y += ram-block.o
 stub-obj-y += ramfb.o
-- 
2.19.1



[Xen-devel] [PATCH v5 22/24] hw: pci-host: piix: Return PCI host pointer instead of PCI bus

2018-11-04 Thread Samuel Ortiz
For building the MCFG table, we need to track a given machine
type's PCI host pointer, and we can't get it from the bus pointer alone.
As piix returns a PCI bus pointer, we simply modify its builder to
return a PCI host pointer instead.

Signed-off-by: Samuel Ortiz 
---
 include/hw/i386/pc.h | 21 +++--
 hw/i386/pc_piix.c| 18 +++---
 hw/pci-host/piix.c   | 24 
 3 files changed, 34 insertions(+), 29 deletions(-)

diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index 8e5f1464eb..b6b79e146d 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -244,16 +244,17 @@ typedef struct PCII440FXState PCII440FXState;
  */
 #define RCR_IOPORT 0xcf9
 
-PCIBus *i440fx_init(const char *host_type, const char *pci_type,
-PCII440FXState **pi440fx_state, int *piix_devfn,
-ISABus **isa_bus, qemu_irq *pic,
-MemoryRegion *address_space_mem,
-MemoryRegion *address_space_io,
-ram_addr_t ram_size,
-ram_addr_t below_4g_mem_size,
-ram_addr_t above_4g_mem_size,
-MemoryRegion *pci_memory,
-MemoryRegion *ram_memory);
+struct PCIHostState *i440fx_init(const char *host_type, const char *pci_type,
+ PCII440FXState **pi440fx_state,
+ int *piix_devfn,
+ ISABus **isa_bus, qemu_irq *pic,
+ MemoryRegion *address_space_mem,
+ MemoryRegion *address_space_io,
+ ram_addr_t ram_size,
+ ram_addr_t below_4g_mem_size,
+ ram_addr_t above_4g_mem_size,
+ MemoryRegion *pci_memory,
+ MemoryRegion *ram_memory);
 
 /* piix4.c */
 extern PCIDevice *piix4_dev;
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 0620d10715..f5b139a3eb 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -32,6 +32,7 @@
 #include "hw/display/ramfb.h"
 #include "hw/smbios/smbios.h"
 #include "hw/pci/pci.h"
+#include "hw/pci/pci_host.h"
 #include "hw/pci/pci_ids.h"
 #include "hw/usb.h"
 #include "net/net.h"
@@ -75,6 +76,7 @@ static void pc_init1(MachineState *machine,
 MemoryRegion *system_memory = get_system_memory();
 MemoryRegion *system_io = get_system_io();
 int i;
+struct PCIHostState *pci_host;
 PCIBus *pci_bus;
 ISABus *isa_bus;
 PCII440FXState *i440fx_state;
@@ -196,15 +198,17 @@ static void pc_init1(MachineState *machine,
 }
 
 if (pcmc->pci_enabled) {
-pci_bus = i440fx_init(host_type,
-  pci_type,
-  &i440fx_state, &piix3_devfn, &isa_bus, pcms->gsi,
-  system_memory, system_io, machine->ram_size,
-  acpi_conf->below_4g_mem_size,
-  acpi_conf->above_4g_mem_size,
-  pci_memory, ram_memory);
+pci_host = i440fx_init(host_type,
+   pci_type,
+   &i440fx_state, &piix3_devfn, &isa_bus, 
pcms->gsi,
+   system_memory, system_io, machine->ram_size,
+   acpi_conf->below_4g_mem_size,
+   acpi_conf->above_4g_mem_size,
+   pci_memory, ram_memory);
+pci_bus = pci_host->bus;
 pcms->bus = pci_bus;
 } else {
+pci_host = NULL;
 pci_bus = NULL;
 i440fx_state = NULL;
 isa_bus = isa_bus_new(NULL, get_system_memory(), system_io,
diff --git a/hw/pci-host/piix.c b/hw/pci-host/piix.c
index 658460264b..4a412db44c 100644
--- a/hw/pci-host/piix.c
+++ b/hw/pci-host/piix.c
@@ -342,17 +342,17 @@ static void i440fx_realize(PCIDevice *dev, Error **errp)
 }
 }
 
-PCIBus *i440fx_init(const char *host_type, const char *pci_type,
-PCII440FXState **pi440fx_state,
-int *piix3_devfn,
-ISABus **isa_bus, qemu_irq *pic,
-MemoryRegion *address_space_mem,
-MemoryRegion *address_space_io,
-ram_addr_t ram_size,
-ram_addr_t below_4g_mem_size,
-ram_addr_t above_4g_mem_size,
-MemoryRegion *pci_address_space,
-MemoryRegion *ram_memory)
+struct PCIHostState *i440fx_init(const char *host_type, const char *pci_type,
+ PCII440FXState **pi440fx_state,
+ int *piix3_devfn,
+ ISABus **isa_b

[Xen-devel] [PATCH v5 20/24] hw: acpi: Define ACPI tables builder interface

2018-11-04 Thread Samuel Ortiz
In order to decouple ACPI APIs from specific machine types, we are
creating an ACPI builder interface that each ACPI platform can choose to
implement.
This way, a new machine type can re-use the high level ACPI APIs and
define some custom table build methods, without having to duplicate most
of the existing implementation only to add small variations to it.

Reviewed-by: Philippe Mathieu-Daudé 
Tested-by: Philippe Mathieu-Daudé 
Signed-off-by: Samuel Ortiz 
---
 include/hw/acpi/builder.h | 100 ++
 hw/acpi/builder.c |  97 
 hw/acpi/Makefile.objs |   1 +
 3 files changed, 198 insertions(+)
 create mode 100644 include/hw/acpi/builder.h
 create mode 100644 hw/acpi/builder.c

diff --git a/include/hw/acpi/builder.h b/include/hw/acpi/builder.h
new file mode 100644
index 00..a63b88ffe9
--- /dev/null
+++ b/include/hw/acpi/builder.h
@@ -0,0 +1,100 @@
+/*
+ *
+ * Copyright (c) 2018 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef ACPI_BUILDER_H
+#define ACPI_BUILDER_H
+
+#include "qemu/osdep.h"
+#include "hw/acpi/bios-linker-loader.h"
+#include "qom/object.h"
+
+#define TYPE_ACPI_BUILDER "acpi-builder"
+
+#define ACPI_BUILDER_METHODS(klass) \
+ OBJECT_CLASS_CHECK(AcpiBuilderMethods, (klass), TYPE_ACPI_BUILDER)
+#define ACPI_BUILDER_GET_METHODS(obj) \
+ OBJECT_GET_CLASS(AcpiBuilderMethods, (obj), TYPE_ACPI_BUILDER)
+#define ACPI_BUILDER(obj)   \
+ INTERFACE_CHECK(AcpiBuilder, (obj), TYPE_ACPI_BUILDER)
+
+typedef struct AcpiConfiguration AcpiConfiguration;
+typedef struct AcpiBuildState AcpiBuildState;
+typedef struct AcpiMcfgInfo AcpiMcfgInfo;
+
+typedef struct AcpiBuilder {
+/*< private >*/
+Object Parent;
+} AcpiBuilder;
+
+/**
+ * AcpiBuildMethods:
+ *
+ * Interface to be implemented by a machine type that needs to provide
+ * custom ACPI tables build method.
+ *
+ * @parent: Opaque parent interface.
+ * @rsdp: ACPI RSDP (Root System Description Pointer) table build callback.
+ * @madt: ACPI MADT (Multiple APIC Description Table) table build callback.
+ * @mcfg: ACPI MCFG table build callback.
+ * @srat: ACPI SRAT (System/Static Resource Affinity Table)
+ *table build callback.
+ * @slit: ACPI SLIT (System Locality System Information Table)
+ *table build callback.
+ * @configuration: ACPI configuration getter.
+ * This is used to query the machine instance for its
+ * AcpiConfiguration pointer.
+ */
+typedef struct AcpiBuilderMethods {
+/*< private >*/
+InterfaceClass parent;
+
+/*< public >*/
+void (*rsdp)(GArray *table_data, BIOSLinker *linker,
+ unsigned rsdt_tbl_offset);
+void (*madt)(GArray *table_data, BIOSLinker *linker,
+ MachineState *ms, AcpiConfiguration *conf);
+void (*mcfg)(GArray *table_data, BIOSLinker *linker,
+ AcpiMcfgInfo *info);
+void (*srat)(GArray *table_data, BIOSLinker *linker,
+ MachineState *machine, AcpiConfiguration *conf);
+void (*slit)(GArray *table_data, BIOSLinker *linker);
+
+AcpiConfiguration *(*configuration)(AcpiBuilder *builder);
+} AcpiBuilderMethods;
+
+void acpi_builder_rsdp(AcpiBuilder *builder,
+   GArray *table_data, BIOSLinker *linker,
+   unsigned rsdt_tbl_offset);
+
+void acpi_builder_madt(AcpiBuilder *builder,
+   GArray *table_data, BIOSLinker *linker,
+   MachineState *ms, AcpiConfiguration *conf);
+
+void acpi_builder_mcfg(AcpiBuilder *builder,
+   GArray *table_data, BIOSLinker *linker,
+   AcpiMcfgInfo *info);
+
+void acpi_builder_srat(AcpiBuilder *builder,
+   GArray *table_data, BIOSLinker *linker,
+   MachineState *machine, AcpiConfiguration *conf);
+
+void acpi_builder_slit(AcpiBuilder *builder,
+   GArray *table_data, BIOSLinker *linker);
+
+AcpiConfiguration *acpi_builder_configuration(AcpiBuilder *builder);
+
+#endif
diff --git a/hw/acpi/builder.c b/hw/acpi/builder.c
new file mode 100644
index 00..c29a614793
--- /dev/null
+++ b/hw/acpi/builder.c
@@ -0,0 +1,97 @@
+/*
+ *
+ * Copyright (c) 2018 Intel Corporation
+ *
+ * This program is free software; you can redistribu
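[...]

(The body of builder.c is cut off in the archive. From the header above
and its use in patch 21, the dispatch helpers are presumably of this
shape; shown here only as an illustration, not the actual patch text:)

    void acpi_builder_madt(AcpiBuilder *builder,
                           GArray *table_data, BIOSLinker *linker,
                           MachineState *ms, AcpiConfiguration *conf)
    {
        /* Look up the machine's AcpiBuilderMethods and forward to the
         * registered callback, if the machine provides one. */
        AcpiBuilderMethods *abm = ACPI_BUILDER_GET_METHODS(builder);

        if (abm && abm->madt) {
            abm->madt(table_data, linker, ms, conf);
        }
    }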

[Xen-devel] [PATCH v5 15/24] hw: i386: Export the i386 ACPI SRAT build method

2018-11-04 Thread Samuel Ortiz
This is the standard way of building SRAT on x86 platforms. But future
machine types could decide to define their own custom SRAT build method
through the ACPI builder methods.
Moreover, we will also need to reach build_srat() from outside of
acpi-build in order to use it as the ACPI builder SRAT build method.

Signed-off-by: Samuel Ortiz 
---
 hw/i386/acpi-build.h | 5 +
 hw/i386/acpi-build.c | 2 +-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/hw/i386/acpi-build.h b/hw/i386/acpi-build.h
index 065a1d8250..d73c41fe8f 100644
--- a/hw/i386/acpi-build.h
+++ b/hw/i386/acpi-build.h
@@ -4,6 +4,11 @@
 
 #include "hw/acpi/acpi.h"
 
+/* ACPI SRAT (Static Resource Affinity Table) build method for x86 */
+void
+build_srat(GArray *table_data, BIOSLinker *linker,
+   MachineState *machine, AcpiConfiguration *acpi_conf);
+
 void acpi_setup(MachineState *machine, AcpiConfiguration *acpi_conf);
 
 #endif
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 1ef1a38441..673c5dfafc 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -1615,7 +1615,7 @@ build_tpm2(GArray *table_data, BIOSLinker *linker, GArray 
*tcpalog)
 #define HOLE_640K_START  (640 * KiB)
 #define HOLE_640K_END   (1 * MiB)
 
-static void
+void
 build_srat(GArray *table_data, BIOSLinker *linker,
MachineState *machine, AcpiConfiguration *acpi_conf)
 {
-- 
2.19.1



[Xen-devel] [PATCH v5 16/24] hw: acpi: Fix memory hotplug AML generation error

2018-11-04 Thread Samuel Ortiz
From: Yang Zhong 

When using the generated memory hotplug AML, the iasl
compiler would give the following error:

dsdt.dsl 266: Return (MOST (_UID, Arg0, Arg1, Arg2))
Error 6080 - Called method returns no value ^

Signed-off-by: Yang Zhong 
---
 hw/acpi/memory_hotplug.c | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/hw/acpi/memory_hotplug.c b/hw/acpi/memory_hotplug.c
index db2c4df961..893fc2bd27 100644
--- a/hw/acpi/memory_hotplug.c
+++ b/hw/acpi/memory_hotplug.c
@@ -686,15 +686,15 @@ void build_memory_hotplug_aml(Aml *table, uint32_t nr_mem,
 
 method = aml_method("_OST", 3, AML_NOTSERIALIZED);
 s = MEMORY_SLOT_OST_METHOD;
-aml_append(method, aml_return(aml_call4(
-s, aml_name("_UID"), aml_arg(0), aml_arg(1), aml_arg(2)
-)));
+aml_append(method,
+   aml_call4(s, aml_name("_UID"), aml_arg(0),
+ aml_arg(1), aml_arg(2)));
 aml_append(dev, method);
 
 method = aml_method("_EJ0", 1, AML_NOTSERIALIZED);
 s = MEMORY_SLOT_EJECT_METHOD;
-aml_append(method, aml_return(aml_call2(
-   s, aml_name("_UID"), aml_arg(0))));
+aml_append(method,
+   aml_call2(s, aml_name("_UID"), aml_arg(0)));
 aml_append(dev, method);
 
 aml_append(dev_container, dev);
-- 
2.19.1



[Xen-devel] [PATCH v5 21/24] hw: i386: Implement the ACPI builder interface for PC

2018-11-04 Thread Samuel Ortiz
All PC machine type derivatives will use the same ACPI table build
methods. But with that change in place, any new x86 machine type will be
able to re-use the acpi-build API and customize part of it by defining
its own ACPI table build methods.

Reviewed-by: Philippe Mathieu-Daudé 
Tested-by: Philippe Mathieu-Daudé 
Signed-off-by: Samuel Ortiz 
---
 hw/i386/acpi-build.c | 14 +-
 hw/i386/pc.c | 19 +++
 2 files changed, 28 insertions(+), 5 deletions(-)

diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 4b1d8fbe3f..93d89b96f1 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -34,6 +34,7 @@
 #include "hw/acpi/acpi-defs.h"
 #include "hw/acpi/acpi.h"
 #include "hw/acpi/cpu.h"
+#include "hw/acpi/builder.h"
 #include "hw/nvram/fw_cfg.h"
 #include "hw/loader.h"
 #include "hw/isa/isa.h"
@@ -1683,6 +1684,7 @@ void acpi_build(AcpiBuildTables *tables,
 GArray *tables_blob = tables->table_data;
 AcpiSlicOem slic_oem = { .id = NULL, .table_id = NULL };
 Object *vmgenid_dev;
+AcpiBuilder *ab = ACPI_BUILDER(machine);
 
 acpi_get_pm_info(&pm);
 acpi_get_misc_info(&misc);
@@ -1732,7 +1734,8 @@ void acpi_build(AcpiBuildTables *tables,
 aml_len += tables_blob->len - fadt;
 
 acpi_add_table(table_offsets, tables_blob);
-build_madt(tables_blob, tables->linker, machine, acpi_conf);
+acpi_builder_madt(ab, tables_blob, tables->linker,
+  machine, acpi_conf);
 
 vmgenid_dev = find_vmgenid_dev();
 if (vmgenid_dev) {
@@ -1756,15 +1759,16 @@ void acpi_build(AcpiBuildTables *tables,
 }
 if (acpi_conf->numa_nodes) {
 acpi_add_table(table_offsets, tables_blob);
-build_srat(tables_blob, tables->linker, machine, acpi_conf);
+acpi_builder_srat(ab, tables_blob, tables->linker,
+  machine, acpi_conf);
 if (have_numa_distance) {
 acpi_add_table(table_offsets, tables_blob);
-build_slit(tables_blob, tables->linker);
+acpi_builder_slit(ab, tables_blob, tables->linker);
 }
 }
 if (acpi_get_mcfg(&mcfg)) {
 acpi_add_table(table_offsets, tables_blob);
-build_mcfg(tables_blob, tables->linker, &mcfg);
+acpi_builder_mcfg(ab, tables_blob, tables->linker, &mcfg);
 }
 if (x86_iommu_get_default()) {
 IommuType IOMMUType = x86_iommu_get_type();
@@ -1795,7 +1799,7 @@ void acpi_build(AcpiBuildTables *tables,
slic_oem.id, slic_oem.table_id);
 
 /* RSDP is in FSEG memory, so allocate it separately */
-build_rsdp_rsdt(tables->rsdp, tables->linker, rsdt);
+acpi_builder_rsdp(ab, tables->rsdp, tables->linker, rsdt);
 
 /* We'll expose it all to Guest so we want to reduce
  * chance of size changes.
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index c9ffc8cff6..53a3036066 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -64,6 +64,7 @@
 #include "qemu/option.h"
 #include "hw/acpi/acpi.h"
 #include "hw/acpi/cpu_hotplug.h"
+#include "hw/acpi/builder.h"
 #include "hw/boards.h"
 #include "acpi-build.h"
 #include "hw/mem/pc-dimm.h"
@@ -75,6 +76,7 @@
 #include "hw/nmi.h"
 #include "hw/i386/intel_iommu.h"
 #include "hw/net/ne2000-isa.h"
+#include "hw/i386/acpi.h"
 
 /* debug PC/ISA interrupts */
 //#define DEBUG_IRQ
@@ -2404,12 +2406,20 @@ static void x86_nmi(NMIState *n, int cpu_index, Error 
**errp)
 }
 }
 
+static AcpiConfiguration *pc_acpi_configuration(AcpiBuilder *builder)
+{
+PCMachineState *pcms = PC_MACHINE(builder);
+
+return &pcms->acpi_configuration;
+}
+
 static void pc_machine_class_init(ObjectClass *oc, void *data)
 {
 MachineClass *mc = MACHINE_CLASS(oc);
 PCMachineClass *pcmc = PC_MACHINE_CLASS(oc);
 HotplugHandlerClass *hc = HOTPLUG_HANDLER_CLASS(oc);
 NMIClass *nc = NMI_CLASS(oc);
+AcpiBuilderMethods *abm = ACPI_BUILDER_METHODS(oc);
 
 pcmc->pci_enabled = true;
 pcmc->has_acpi_build = true;
@@ -2444,6 +2454,14 @@ static void pc_machine_class_init(ObjectClass *oc, void 
*data)
 nc->nmi_monitor_handler = x86_nmi;
 mc->default_cpu_type = TARGET_DEFAULT_CPU_TYPE;
 
+/* ACPI building methods */
+abm->madt = build_madt;
+abm->rsdp = build_rsdp_rsdt;
+abm->mcfg = build_mcfg;
+abm->srat = build_srat;
+abm->slit = build_slit;
+abm->configuration = pc_acpi_configuration;
+
 object_class_property_add(oc, MEMORY_DEVICE_REGION_SIZE, "int",
 pc_machine_get_device_memory_region_size, NULL,
 NULL, NULL, &error_abort);
@@ -2495,6 +2513,7 @@ static const TypeInfo pc_machine_info = {
 .interfaces = (InterfaceInfo[]) {
  { TYPE_HOTPLUG_HANDLER },
  { TYPE_NMI },
+ { TYPE_ACPI_BUILDER },
  { }
 },
 };
-- 
2.19.1



[Xen-devel] [PATCH v5 17/24] hw: acpi: Export the PCI hotplug API

2018-11-04 Thread Samuel Ortiz
From: Sebastien Boeuf 

The ACPI hotplug support APIs for PCI devices are not x86 or even
machine type specific. In order for future machine types to be able to
re-use that code, we export it through the architecture agnostic
hw/acpi folder.

Reviewed-by: Philippe Mathieu-Daudé 
Tested-by: Philippe Mathieu-Daudé 
Signed-off-by: Sebastien Boeuf 
Signed-off-by: Jing Liu 
---
 include/hw/acpi/aml-build.h |   3 +
 hw/acpi/aml-build.c | 194 
 hw/i386/acpi-build.c| 192 +--
 3 files changed, 199 insertions(+), 190 deletions(-)

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index 64ea371656..6b0a9735c5 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -418,6 +418,9 @@ Aml *build_osc_method(uint32_t value);
 void build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info);
 Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi);
 Aml *build_prt(bool is_pci0_prt);
+void build_acpi_pcihp(Aml *scope);
+void build_append_pci_bus_devices(Aml *parent_scope, PCIBus *bus,
+  bool pcihp_bridge_en);
 void acpi_dsdt_add_pci_bus(Aml *dsdt, AcpiPciBus *pci_host);
 Aml *build_pci_host_bridge(Aml *table, AcpiPciBus *pci_host);
 void crs_range_set_init(CrsRangeSet *range_set);
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 2c5446ab23..6112cc2149 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -34,6 +34,7 @@
 #include "hw/acpi/tpm.h"
 #include "qom/qom-qobject.h"
 #include "qapi/qmp/qnum.h"
+#include "hw/acpi/pcihp.h"
 
 #define PCI_HOST_BRIDGE_CONFIG_ADDR        0xcf8
 #define PCI_HOST_BRIDGE_IO_0_MIN_ADDR      0x0000
@@ -2305,6 +2306,199 @@ Aml *build_pci_host_bridge(Aml *table, AcpiPciBus *pci_host)
 return scope;
 }
 
+void build_acpi_pcihp(Aml *scope)
+{
+Aml *field;
+Aml *method;
+
+aml_append(scope,
+aml_operation_region("PCST", AML_SYSTEM_IO, aml_int(0xae00), 0x08));
+field = aml_field("PCST", AML_DWORD_ACC, AML_NOLOCK, AML_WRITE_AS_ZEROS);
+aml_append(field, aml_named_field("PCIU", 32));
+aml_append(field, aml_named_field("PCID", 32));
+aml_append(scope, field);
+
+aml_append(scope,
+aml_operation_region("SEJ", AML_SYSTEM_IO, aml_int(0xae08), 0x04));
+field = aml_field("SEJ", AML_DWORD_ACC, AML_NOLOCK, AML_WRITE_AS_ZEROS);
+aml_append(field, aml_named_field("B0EJ", 32));
+aml_append(scope, field);
+
+aml_append(scope,
+aml_operation_region("BNMR", AML_SYSTEM_IO, aml_int(0xae10), 0x04));
+field = aml_field("BNMR", AML_DWORD_ACC, AML_NOLOCK, AML_WRITE_AS_ZEROS);
+aml_append(field, aml_named_field("BNUM", 32));
+aml_append(scope, field);
+
+aml_append(scope, aml_mutex("BLCK", 0));
+
+method = aml_method("PCEJ", 2, AML_NOTSERIALIZED);
+aml_append(method, aml_acquire(aml_name("BLCK"), 0xFFFF));
+aml_append(method, aml_store(aml_arg(0), aml_name("BNUM")));
+aml_append(method,
+aml_store(aml_shiftleft(aml_int(1), aml_arg(1)), aml_name("B0EJ")));
+aml_append(method, aml_release(aml_name("BLCK")));
+aml_append(method, aml_return(aml_int(0)));
+aml_append(scope, method);
+}
+
+static void build_append_pcihp_notify_entry(Aml *method, int slot)
+{
+Aml *if_ctx;
+int32_t devfn = PCI_DEVFN(slot, 0);
+
+if_ctx = aml_if(aml_and(aml_arg(0), aml_int(0x1U << slot), NULL));
+aml_append(if_ctx, aml_notify(aml_name("S%.02X", devfn), aml_arg(1)));
+aml_append(method, if_ctx);
+}
+
+void build_append_pci_bus_devices(Aml *parent_scope, PCIBus *bus,
+  bool pcihp_bridge_en)
+{
+Aml *dev, *notify_method = NULL, *method;
+QObject *bsel;
+PCIBus *sec;
+int i;
+
+bsel = object_property_get_qobject(OBJECT(bus), ACPI_PCIHP_PROP_BSEL, 
NULL);
+if (bsel) {
+uint64_t bsel_val = qnum_get_uint(qobject_to(QNum, bsel));
+
+aml_append(parent_scope, aml_name_decl("BSEL", aml_int(bsel_val)));
+notify_method = aml_method("DVNT", 2, AML_NOTSERIALIZED);
+}
+
+for (i = 0; i < ARRAY_SIZE(bus->devices); i += PCI_FUNC_MAX) {
+DeviceClass *dc;
+PCIDeviceClass *pc;
+PCIDevice *pdev = bus->devices[i];
+int slot = PCI_SLOT(i);
+bool hotplug_enabled_dev;
+bool bridge_in_acpi;
+
+if (!pdev) {
+if (bsel) { /* add hotplug slots for non present devices */
+dev = aml_device("S%.02X", PCI_DEVFN(slot, 0));
+aml_append(dev, aml_name_decl("_SUN", aml_int(slot)));
+aml_append(dev, aml_name_decl("_ADR", aml_int(slot << 16)));
+method = aml_method("_EJ0", 1, AML_NOTSERIALIZED);
+aml_append(method,
+aml_call2("PCEJ", aml_name("BSEL"), aml_name("_SUN"))
+);
+aml_append(dev, method);
+aml_append(parent_scope, dev);
+
+   

[Xen-devel] [PATCH v5 24/24] hw: i386: Refactor PCI host getter

2018-11-04 Thread Samuel Ortiz
From: Yang Zhong 

Now that the ACPI builder methods are added, we can reach the ACPI
configuration pointer from the MachineState pointer. From there we can
get to the PCI host pointer and return it.

This makes the PCI host getter an architecture-agnostic ACPI function.

Signed-off-by: Yang Zhong 
---
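For this getter to work, the machine init code must have stored its PCI
host bridge pointer in the ACPI configuration (see "hw: i386: Set ACPI
configuration PCI host pointer" later in this series). A minimal sketch
of the machine side, with a placeholder helper name:

    #include "hw/i386/pc.h"
    #include "hw/pci/pci_host.h"

    /* Sketch: what the generic acpi_get_pci_host() now relies on. */
    static void example_store_pci_host(PCMachineState *pcms,
                                       PCIHostState *host)
    {
        /* acpi_get_pci_host() will return OBJECT(acpi_conf->pci_host) */
        pcms->acpi_configuration.pci_host = host;
    }
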
 hw/acpi/aml-build.c | 20 +++-
 1 file changed, 7 insertions(+), 13 deletions(-)

diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 6112cc2149..b532817fb5 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -22,6 +22,8 @@
 #include "qemu/osdep.h"
 #include 
 #include "hw/acpi/aml-build.h"
+#include "hw/acpi/builder.h"
+#include "hw/mem/memory-device.h"
 #include "qemu/bswap.h"
 #include "qemu/bitops.h"
 #include "sysemu/numa.h"
@@ -1617,23 +1619,15 @@ void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre)
 g_array_free(tables->vmgenid, mfre);
 }
 
-/*
- * Because of the PXB hosts we cannot simply query TYPE_PCI_HOST_BRIDGE.
- */
 Object *acpi_get_pci_host(void)
 {
-PCIHostState *host;
+MachineState *ms = MACHINE(qdev_get_machine());
+AcpiBuilder *ab = ACPI_BUILDER(ms);
+AcpiConfiguration *acpi_conf;
 
-host = OBJECT_CHECK(PCIHostState,
-object_resolve_path("/machine/i440fx", NULL),
-TYPE_PCI_HOST_BRIDGE);
-if (!host) {
-host = OBJECT_CHECK(PCIHostState,
-object_resolve_path("/machine/q35", NULL),
-TYPE_PCI_HOST_BRIDGE);
-}
+acpi_conf = acpi_builder_configuration(ab);
 
-return OBJECT(host);
+return OBJECT(acpi_conf->pci_host);
 }
 
 
-- 
2.19.1



[Xen-devel] [PATCH v5 18/24] hw: i386: Export the MADT build method

2018-11-04 Thread Samuel Ortiz
It is going to be used by the PC machine type as the MADT table builder
method and thus needs to be exported outside of acpi-build.c.

Also, now that the generic build_madt() API is exported, we have to
rename ARM's static one in order to avoid build-time conflicts.

Reviewed-by: Philippe Mathieu-Daudé 
Tested-by: Philippe Mathieu-Daudé 
Signed-off-by: Samuel Ortiz 
---
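For illustration only, a sketch of a generic caller using the exported
prototype (the helper name is a placeholder; the arguments are whatever
the ACPI build path already has at hand):

    #include "hw/boards.h"
    #include "hw/acpi/aml-build.h"
    #include "hw/i386/acpi.h"

    /* Sketch: append a MADT to the tables blob through the exported API. */
    static void example_add_madt(GArray *table_offsets, GArray *tables_blob,
                                 BIOSLinker *linker, MachineState *ms,
                                 AcpiConfiguration *acpi_conf)
    {
        acpi_add_table(table_offsets, tables_blob);     /* record offset */
        build_madt(tables_blob, linker, ms, acpi_conf); /* emit table */
    }
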
 include/hw/i386/acpi.h   | 28 
 hw/arm/virt-acpi-build.c |  4 ++--
 hw/i386/acpi-build.c |  4 ++--
 3 files changed, 32 insertions(+), 4 deletions(-)
 create mode 100644 include/hw/i386/acpi.h

diff --git a/include/hw/i386/acpi.h b/include/hw/i386/acpi.h
new file mode 100644
index 00..b7a887111d
--- /dev/null
+++ b/include/hw/i386/acpi.h
@@ -0,0 +1,28 @@
+/*
+ *
+ * Copyright (c) 2018 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef HW_I386_ACPI_H
+#define HW_I386_ACPI_H
+
+#include "hw/acpi/acpi.h"
+#include "hw/acpi/bios-linker-loader.h"
+
+/* ACPI MADT (Multiple APIC Description Table) build method */
+void build_madt(GArray *table_data, BIOSLinker *linker,
+MachineState *ms, AcpiConfiguration *conf);
+
+#endif
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index b5e165543a..b0354c5f03 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -564,7 +564,7 @@ build_gtdt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
 
 /* MADT */
 static void
-build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
+virt_build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
 {
 VirtMachineClass *vmc = VIRT_MACHINE_GET_CLASS(vms);
 int madt_start = table_data->len;
@@ -745,7 +745,7 @@ void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
 build_fadt_rev5(tables_blob, tables->linker, vms, dsdt);
 
 acpi_add_table(table_offsets, tables_blob);
-build_madt(tables_blob, tables->linker, vms);
+virt_build_madt(tables_blob, tables->linker, vms);
 
 acpi_add_table(table_offsets, tables_blob);
 build_gtdt(tables_blob, tables->linker, vms);
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index bef5b23168..4b1d8fbe3f 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -35,7 +35,6 @@
 #include "hw/acpi/acpi.h"
 #include "hw/acpi/cpu.h"
 #include "hw/nvram/fw_cfg.h"
-#include "hw/acpi/bios-linker-loader.h"
 #include "hw/loader.h"
 #include "hw/isa/isa.h"
 #include "hw/block/fdc.h"
@@ -60,6 +59,7 @@
 #include "qom/qom-qobject.h"
 #include "hw/i386/amd_iommu.h"
 #include "hw/i386/intel_iommu.h"
+#include "hw/i386/acpi.h"
 
 #include "hw/acpi/ipmi.h"
 
@@ -279,7 +279,7 @@ void pc_madt_cpu_entry(AcpiDeviceIf *adev, int uid,
 }
 }
 
-static void
+void
 build_madt(GArray *table_data, BIOSLinker *linker,
MachineState *ms, AcpiConfiguration *acpi_conf)
 {
-- 
2.19.1



[Xen-devel] [PATCH v5 14/24] hw: i386: Make the hotpluggable memory size property more generic

2018-11-04 Thread Samuel Ortiz
This property is currently defined under i386/pc although it only
describes a region size that is eventually fetched by the ACPI table
build code.

We can make it more generic and shareable across machine types by moving
it to memory-device.h instead.

Signed-off-by: Samuel Ortiz 
---
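Once renamed, the property can be read back with the generic name from
any machine that exposes it; a minimal sketch, mirroring what
build_srat() does in the hunk below:

    #include "hw/boards.h"
    #include "hw/mem/memory-device.h"

    /* Sketch: read the hotpluggable memory region size back from any
     * machine that exposes the generic property. */
    static int64_t example_devmem_region_size(MachineState *machine)
    {
        return object_property_get_int(OBJECT(machine),
                                       MEMORY_DEVICE_REGION_SIZE, NULL);
    }
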
 include/hw/i386/pc.h   | 1 -
 include/hw/mem/memory-device.h | 2 ++
 hw/i386/acpi-build.c   | 2 +-
 hw/i386/pc.c   | 3 ++-
 4 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index bbbdb33ea3..44cb6bf3f3 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -62,7 +62,6 @@ struct PCMachineState {
 };
 
 #define PC_MACHINE_ACPI_DEVICE_PROP "acpi-device"
-#define PC_MACHINE_DEVMEM_REGION_SIZE "device-memory-region-size"
 #define PC_MACHINE_MAX_RAM_BELOW_4G "max-ram-below-4g"
 #define PC_MACHINE_VMPORT   "vmport"
 #define PC_MACHINE_SMM  "smm"
diff --git a/include/hw/mem/memory-device.h b/include/hw/mem/memory-device.h
index e904e194d5..d9a4fc7c3e 100644
--- a/include/hw/mem/memory-device.h
+++ b/include/hw/mem/memory-device.h
@@ -97,6 +97,8 @@ typedef struct MemoryDeviceClass {
  MemoryDeviceInfo *info);
 } MemoryDeviceClass;
 
+#define MEMORY_DEVICE_REGION_SIZE "memory-device-region-size"
+
 MemoryDeviceInfoList *qmp_memory_device_list(void);
 uint64_t get_plugged_memory_size(void);
 void memory_device_pre_plug(MemoryDeviceState *md, MachineState *ms,
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index d8bba16776..1ef1a38441 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -1628,7 +1628,7 @@ build_srat(GArray *table_data, BIOSLinker *linker,
 MachineClass *mc = MACHINE_GET_CLASS(machine);
 const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(machine);
 ram_addr_t hotplugabble_address_space_size =
-object_property_get_int(OBJECT(machine), PC_MACHINE_DEVMEM_REGION_SIZE,
+object_property_get_int(OBJECT(machine), MEMORY_DEVICE_REGION_SIZE,
 NULL);
 
 srat_start = table_data->len;
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 090f969933..c9ffc8cff6 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -67,6 +67,7 @@
 #include "hw/boards.h"
 #include "acpi-build.h"
 #include "hw/mem/pc-dimm.h"
+#include "hw/mem/memory-device.h"
 #include "qapi/error.h"
 #include "qapi/qapi-visit-common.h"
 #include "qapi/visitor.h"
@@ -2443,7 +2444,7 @@ static void pc_machine_class_init(ObjectClass *oc, void *data)
 nc->nmi_monitor_handler = x86_nmi;
 mc->default_cpu_type = TARGET_DEFAULT_CPU_TYPE;
 
-object_class_property_add(oc, PC_MACHINE_DEVMEM_REGION_SIZE, "int",
+object_class_property_add(oc, MEMORY_DEVICE_REGION_SIZE, "int",
 pc_machine_get_device_memory_region_size, NULL,
 NULL, NULL, &error_abort);
 
-- 
2.19.1



[Xen-devel] [PATCH v5 08/24] hw: acpi: Factorize _OSC AML across architectures

2018-11-04 Thread Samuel Ortiz
From: Yang Zhong 

The _OSC AML method is almost identical between the i386 Q35 and ARM
virt machine types. We can make it slightly more generic and share it
across all PCIe architectures.

Signed-off-by: Yang Zhong 
---
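As an illustration (sketch only), a PCIe root bridge device node would
now use the generalized method like this; granting all control bits is
equivalent to the previously hard-coded 0x1F mask:

    #include "hw/acpi/acpi-defs.h"
    #include "hw/acpi/aml-build.h"

    /* Sketch: attach _OSC to a PCIe root bridge AML device node. The
     * SUPP/CTRL name declarations are required by the shared method. */
    static void example_add_osc(Aml *pcie_root_dev)
    {
        aml_append(pcie_root_dev, aml_name_decl("SUPP", aml_int(0)));
        aml_append(pcie_root_dev, aml_name_decl("CTRL", aml_int(0)));
        aml_append(pcie_root_dev, build_osc_method(ACPI_OSC_CTRL_PCI_ALL));
    }
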
 include/hw/acpi/acpi-defs.h | 14 +++
 include/hw/acpi/aml-build.h |  2 +-
 hw/acpi/aml-build.c | 84 +++--
 hw/arm/virt-acpi-build.c| 45 ++--
 hw/i386/acpi-build.c|  6 ++-
 5 files changed, 66 insertions(+), 85 deletions(-)

diff --git a/include/hw/acpi/acpi-defs.h b/include/hw/acpi/acpi-defs.h
index af8e023968..6e1726e0a2 100644
--- a/include/hw/acpi/acpi-defs.h
+++ b/include/hw/acpi/acpi-defs.h
@@ -652,4 +652,18 @@ struct AcpiIortRC {
 } QEMU_PACKED;
 typedef struct AcpiIortRC AcpiIortRC;
 
+/* _OSC */
+
+#define ACPI_OSC_CTRL_PCIE_NATIVE_HP (1 << 0)
+#define ACPI_OSC_CTRL_SHPC_NATIVE_HP (1 << 1)
+#define ACPI_OSC_CTRL_PCIE_PM_EVT(1 << 2)
+#define ACPI_OSC_CTRL_PCIE_AER   (1 << 3)
+#define ACPI_OSC_CTRL_PCIE_CAP_CTRL  (1 << 4)
+#define ACPI_OSC_CTRL_PCI_ALL \
+(ACPI_OSC_CTRL_PCIE_NATIVE_HP | \
+ ACPI_OSC_CTRL_SHPC_NATIVE_HP | \
+ ACPI_OSC_CTRL_PCIE_PM_EVT |\
+ ACPI_OSC_CTRL_PCIE_AER |   \
+ ACPI_OSC_CTRL_PCIE_CAP_CTRL)
+
 #endif
diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index 4f678c45a5..c27c0935ae 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -405,7 +405,7 @@ void acpi_align_size(GArray *blob, unsigned align);
 void acpi_add_table(GArray *table_offsets, GArray *table_data);
 void acpi_build_tables_init(AcpiBuildTables *tables);
 void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre);
-Aml *build_osc_method(void);
+Aml *build_osc_method(uint32_t value);
 void build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info);
 Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi);
 Aml *build_prt(bool is_pci0_prt);
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index d3242c6b31..2b9a636e75 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -1869,51 +1869,55 @@ Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set)
 return crs;
 }
 
-Aml *build_osc_method(void)
+/*
+ * ctrl_mask is the _OSC capabilities buffer control field mask.
+ */
+Aml *build_osc_method(uint32_t ctrl_mask)
 {
-Aml *if_ctx;
-Aml *if_ctx2;
-Aml *else_ctx;
-Aml *method;
-Aml *a_cwd1 = aml_name("CDW1");
-Aml *a_ctrl = aml_local(0);
+Aml *ifctx, *ifctx1, *elsectx, *method, *UUID;
 
 method = aml_method("_OSC", 4, AML_NOTSERIALIZED);
-aml_append(method, aml_create_dword_field(aml_arg(3), aml_int(0), "CDW1"));
-
-if_ctx = aml_if(aml_equal(
-aml_arg(0), aml_touuid("33DB4D5B-1FF7-401C-9657-7441C03DD766")));
-aml_append(if_ctx, aml_create_dword_field(aml_arg(3), aml_int(4), "CDW2"));
-aml_append(if_ctx, aml_create_dword_field(aml_arg(3), aml_int(8), "CDW3"));
-
-aml_append(if_ctx, aml_store(aml_name("CDW3"), a_ctrl));
-
-/*
- * Always allow native PME, AER (no dependencies)
- * Allow SHPC (PCI bridges can have SHPC controller)
+aml_append(method,
+aml_create_dword_field(aml_arg(3), aml_int(0), "CDW1"));
+
+/* PCI Firmware Specification 3.0
+ * 4.5.1. _OSC Interface for PCI Host Bridge Devices
+ * The _OSC interface for a PCI/PCI-X/PCI Express hierarchy is
+ * identified by the Universal Unique IDentifier (UUID)
+ * 33DB4D5B-1FF7-401C-9657-7441C03DD766
  */
-aml_append(if_ctx, aml_and(a_ctrl, aml_int(0x1F), a_ctrl));
-
-if_ctx2 = aml_if(aml_lnot(aml_equal(aml_arg(1), aml_int(1))));
-/* Unknown revision */
-aml_append(if_ctx2, aml_or(a_cwd1, aml_int(0x08), a_cwd1));
-aml_append(if_ctx, if_ctx2);
-
-if_ctx2 = aml_if(aml_lnot(aml_equal(aml_name("CDW3"), a_ctrl)));
-/* Capabilities bits were masked */
-aml_append(if_ctx2, aml_or(a_cwd1, aml_int(0x10), a_cwd1));
-aml_append(if_ctx, if_ctx2);
-
-/* Update DWORD3 in the buffer */
-aml_append(if_ctx, aml_store(a_ctrl, aml_name("CDW3")));
-aml_append(method, if_ctx);
-
-else_ctx = aml_else();
-/* Unrecognized UUID */
-aml_append(else_ctx, aml_or(a_cwd1, aml_int(4), a_cwd1));
-aml_append(method, else_ctx);
+UUID = aml_touuid("33DB4D5B-1FF7-401C-9657-7441C03DD766");
+ifctx = aml_if(aml_equal(aml_arg(0), UUID));
+aml_append(ifctx,
+aml_create_dword_field(aml_arg(3), aml_int(4), "CDW2"));
+aml_append(ifctx,
+aml_create_dword_field(aml_arg(3), aml_int(8), "CDW3"));
+aml_append(ifctx, aml_store(aml_name("CDW2"), aml_name("SUPP")));
+aml_append(ifctx, aml_store(aml_name("CDW3"), aml_name("CTRL")));
+aml_append(ifctx, aml_store(aml_and(aml_name("CTRL"),
+aml_int(ctrl_mask), NULL),
+aml_name("CTRL")));
+
+ifctx1 = aml_if(aml_lnot(aml_equal(a

[Xen-devel] [PATCH v5 09/24] hw: i386: Move PCI host definitions to pci_host.h

2018-11-04 Thread Samuel Ortiz
The PCI hole properties are neither PC nor i386 specific. They belong
in the PCI host header instead.

Signed-off-by: Samuel Ortiz 
---
 include/hw/i386/pc.h  | 5 -
 include/hw/pci/pci_host.h | 6 ++
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index fed136fcdd..bbbdb33ea3 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -182,11 +182,6 @@ void pc_acpi_init(const char *default_dsdt);
 
 void pc_guest_info_init(PCMachineState *pcms);
 
-#define PCI_HOST_PROP_PCI_HOLE_START   "pci-hole-start"
-#define PCI_HOST_PROP_PCI_HOLE_END "pci-hole-end"
-#define PCI_HOST_PROP_PCI_HOLE64_START "pci-hole64-start"
-#define PCI_HOST_PROP_PCI_HOLE64_END   "pci-hole64-end"
-#define PCI_HOST_PROP_PCI_HOLE64_SIZE  "pci-hole64-size"
 #define PCI_HOST_BELOW_4G_MEM_SIZE "below-4g-mem-size"
 #define PCI_HOST_ABOVE_4G_MEM_SIZE "above-4g-mem-size"
 
diff --git a/include/hw/pci/pci_host.h b/include/hw/pci/pci_host.h
index ba31595fc7..e343f4d9ca 100644
--- a/include/hw/pci/pci_host.h
+++ b/include/hw/pci/pci_host.h
@@ -38,6 +38,12 @@
 #define PCI_HOST_BRIDGE_GET_CLASS(obj) \
  OBJECT_GET_CLASS(PCIHostBridgeClass, (obj), TYPE_PCI_HOST_BRIDGE)
 
+#define PCI_HOST_PROP_PCI_HOLE_START   "pci-hole-start"
+#define PCI_HOST_PROP_PCI_HOLE_END "pci-hole-end"
+#define PCI_HOST_PROP_PCI_HOLE64_START "pci-hole64-start"
+#define PCI_HOST_PROP_PCI_HOLE64_END   "pci-hole64-end"
+#define PCI_HOST_PROP_PCI_HOLE64_SIZE  "pci-hole64-size"
+
 struct PCIHostState {
 SysBusDevice busdev;
 
-- 
2.19.1



[Xen-devel] [PATCH v5 12/24] hw: acpi: Export the MCFG getter

2018-11-04 Thread Samuel Ortiz
From: Yang Zhong 

The ACPI MCFG getter is not x86 specific and could be called from
anywhere within the generic ACPI API, so let's export it.

Reviewed-by: Philippe Mathieu-Daudé 
Tested-by: Philippe Mathieu-Daudé 
Signed-off-by: Yang Zhong 
---
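With the getter exported, any ACPI build path can probe for a PCIe host
and only emit an MCFG when one is present. A minimal sketch with a
placeholder helper name:

    #include "hw/acpi/aml-build.h"

    /* Sketch: conditionally append an MCFG table. */
    static void example_maybe_add_mcfg(GArray *table_offsets,
                                       GArray *tables_blob,
                                       BIOSLinker *linker)
    {
        AcpiMcfgInfo mcfg;

        if (acpi_get_mcfg(&mcfg)) {
            acpi_add_table(table_offsets, tables_blob);
            build_mcfg(tables_blob, linker, &mcfg);
        }
    }
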
 include/hw/acpi/aml-build.h |  1 +
 hw/acpi/aml-build.c | 24 
 hw/i386/acpi-build.c| 22 --
 3 files changed, 25 insertions(+), 22 deletions(-)

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index 1861e37ebf..64ea371656 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -408,6 +408,7 @@ void *acpi_data_push(GArray *table_data, unsigned size);
 unsigned acpi_data_len(GArray *table);
 Object *acpi_get_pci_host(void);
 void acpi_get_pci_holes(Range *hole, Range *hole64);
+bool acpi_get_mcfg(AcpiMcfgInfo *mcfg);
 /* Align AML blob size to a multiple of 'align' */
 void acpi_align_size(GArray *blob, unsigned align);
 void acpi_add_table(GArray *table_offsets, GArray *table_data);
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 869ed70db3..2c5446ab23 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -32,6 +32,8 @@
 #include "hw/i386/pc.h"
 #include "sysemu/tpm.h"
 #include "hw/acpi/tpm.h"
+#include "qom/qom-qobject.h"
+#include "qapi/qmp/qnum.h"
 
 #define PCI_HOST_BRIDGE_CONFIG_ADDR        0xcf8
 #define PCI_HOST_BRIDGE_IO_0_MIN_ADDR      0x0000
@@ -1657,6 +1659,28 @@ void acpi_get_pci_holes(Range *hole, Range *hole64)
NULL));
 }
 
+bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
+{
+Object *pci_host;
+QObject *o;
+
+pci_host = acpi_get_pci_host();
+g_assert(pci_host);
+
+o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_BASE, NULL);
+if (!o) {
+return false;
+}
+mcfg->mcfg_base = qnum_get_uint(qobject_to(QNum, o));
+qobject_unref(o);
+
+o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_SIZE, NULL);
+assert(o);
+mcfg->mcfg_size = qnum_get_uint(qobject_to(QNum, o));
+qobject_unref(o);
+return true;
+}
+
 static void crs_range_insert(GPtrArray *ranges, uint64_t base, uint64_t limit)
 {
 CrsRangeEntry *entry;
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 14e2624d14..d8bba16776 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -1856,28 +1856,6 @@ build_amd_iommu(GArray *table_data, BIOSLinker *linker)
  "IVRS", table_data->len - iommu_start, 1, NULL, NULL);
 }
 
-static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
-{
-Object *pci_host;
-QObject *o;
-
-pci_host = acpi_get_pci_host();
-g_assert(pci_host);
-
-o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_BASE, NULL);
-if (!o) {
-return false;
-}
-mcfg->mcfg_base = qnum_get_uint(qobject_to(QNum, o));
-qobject_unref(o);
-
-o = object_property_get_qobject(pci_host, PCIE_HOST_MCFG_SIZE, NULL);
-assert(o);
-mcfg->mcfg_size = qnum_get_uint(qobject_to(QNum, o));
-qobject_unref(o);
-return true;
-}
-
 static
 void acpi_build(AcpiBuildTables *tables,
 MachineState *machine, AcpiConfiguration *acpi_conf)
-- 
2.19.1



[Xen-devel] [PATCH v5 06/24] hw: acpi: Factorize the RSDP build API implementation

2018-11-04 Thread Samuel Ortiz
We can now share the RSDP build between the ARM and x86 architectures.
Here we make the default RSDP build use XSDT and keep the existing x86
ACPI build implementation using the legacy RSDT version.

Signed-off-by: Samuel Ortiz 
---
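For clarity, the two entry points that remain after this patch, sketched
as one placeholder helper ('xsdt'/'rsdt' being the table offsets
returned by build_xsdt()/build_rsdt() earlier in the build):

    #include "hw/acpi/aml-build.h"

    /* Sketch: choose the RSDP flavour. */
    static void example_build_rsdp(AcpiBuildTables *tables,
                                   unsigned xsdt, unsigned rsdt, bool legacy)
    {
        if (legacy) {
            /* i386 pc/q35: keep the deprecated RSDT-based RSDP */
            build_rsdp_rsdt(tables->rsdp, tables->linker, rsdt);
        } else {
            /* default: ACPI 2.0+ RSDP pointing at an XSDT */
            build_rsdp(tables->rsdp, tables->linker, xsdt);
        }
    }
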
 include/hw/acpi/aml-build.h |  8 
 hw/acpi/aml-build.c | 28 +---
 hw/arm/virt-acpi-build.c| 28 
 hw/i386/acpi-build.c| 26 +-
 4 files changed, 30 insertions(+), 60 deletions(-)

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index 3580d0ce90..a2ef8b6f31 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -390,11 +390,11 @@ void acpi_add_table(GArray *table_offsets, GArray *table_data);
 void acpi_build_tables_init(AcpiBuildTables *tables);
 void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre);
 void
-build_rsdp(GArray *table_data,
-   BIOSLinker *linker, unsigned rsdt_tbl_offset);
+build_rsdp_rsdt(GArray *table_data,
+BIOSLinker *linker, unsigned rsdt_tbl_offset);
 void
-build_rsdp_xsdt(GArray *table_data,
-BIOSLinker *linker, unsigned xsdt_tbl_offset);
+build_rsdp(GArray *table_data,
+   BIOSLinker *linker, unsigned xsdt_tbl_offset);
 void
 build_rsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
const char *oem_id, const char *oem_table_id);
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index a030d40674..8c2388274c 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -1651,10 +1651,32 @@ build_xsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
  (void *)xsdt, "XSDT", xsdt_len, 1, oem_id, oem_table_id);
 }
 
+/* Legacy RSDP pointing at an RSDT. This is deprecated */
+void build_rsdp_rsdt(GArray *rsdp_table, BIOSLinker *linker, unsigned rsdt_tbl_offset)
+{
+AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
+unsigned rsdt_pa_size = sizeof(rsdp->rsdt_physical_address);
+unsigned rsdt_pa_offset =
+(char *)&rsdp->rsdt_physical_address - rsdp_table->data;
+
+bios_linker_loader_alloc(linker, ACPI_BUILD_RSDP_FILE, rsdp_table, 16,
+ true /* fseg memory */);
+
+memcpy(&rsdp->signature, "RSD PTR ", 8);
+memcpy(rsdp->oem_id, ACPI_BUILD_APPNAME6, 6);
+/* Address to be filled by Guest linker */
+bios_linker_loader_add_pointer(linker,
+ACPI_BUILD_RSDP_FILE, rsdt_pa_offset, rsdt_pa_size,
+ACPI_BUILD_TABLE_FILE, rsdt_tbl_offset);
+
+/* Checksum to be filled by Guest linker */
+bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
+(char *)rsdp - rsdp_table->data, sizeof *rsdp,
+(char *)&rsdp->checksum - rsdp_table->data);
+}
+
 /* RSDP pointing at an XSDT */
-void
-build_rsdp_xsdt(GArray *rsdp_table,
-BIOSLinker *linker, unsigned xsdt_tbl_offset)
+void build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned xsdt_tbl_offset)
 {
 AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
 unsigned xsdt_pa_size = sizeof(rsdp->xsdt_physical_address);
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index 623a6c4eac..261363e20c 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -366,34 +366,6 @@ static void acpi_dsdt_add_power_button(Aml *scope)
 aml_append(scope, dev);
 }
 
-/* RSDP */
-void
-build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned xsdt_tbl_offset)
-{
-AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
-unsigned xsdt_pa_size = sizeof(rsdp->xsdt_physical_address);
-unsigned xsdt_pa_offset =
-(char *)&rsdp->xsdt_physical_address - rsdp_table->data;
-
-bios_linker_loader_alloc(linker, ACPI_BUILD_RSDP_FILE, rsdp_table, 16,
- true /* fseg memory */);
-
-memcpy(&rsdp->signature, "RSD PTR ", sizeof(rsdp->signature));
-memcpy(rsdp->oem_id, ACPI_BUILD_APPNAME6, sizeof(rsdp->oem_id));
-rsdp->length = cpu_to_le32(sizeof(*rsdp));
-rsdp->revision = 0x02;
-
-/* Address to be filled by Guest linker */
-bios_linker_loader_add_pointer(linker,
-ACPI_BUILD_RSDP_FILE, xsdt_pa_offset, xsdt_pa_size,
-ACPI_BUILD_TABLE_FILE, xsdt_tbl_offset);
-
-/* Checksum to be filled by Guest linker */
-bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
-(char *)rsdp - rsdp_table->data, sizeof *rsdp,
-(char *)&rsdp->checksum - rsdp_table->data);
-}
-
 static void
 build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
 {
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index f7a67f5c9c..cfc2444d0d 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -2513,30 +2513,6 @@ build_amd_iommu(GArray *table_data, BIOSL

[Xen-devel] [PATCH v5 07/24] hw: acpi: Generalize AML build routines

2018-11-04 Thread Samuel Ortiz
From: Yang Zhong 

Most of the AML build routines under acpi-build are not even
architecture specific. They can be moved to the more generic hw/acpi
folder where they could be shared across machine types and
architectures.

Reviewed-by: Philippe Mathieu-Daudé 
Tested-by: Philippe Mathieu-Daudé 
Signed-off-by: Yang Zhong 
---
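As an example of what becomes shareable, the typical _CRS flow for a
host bridge device node can now be written from any machine type as
follows (sketch only; 'dev' and 'host' come from the caller):

    #include "hw/acpi/aml-build.h"
    #include "hw/pci/pci_host.h"

    /* Sketch: build and attach a _CRS for a PCI host bridge device node. */
    static void example_add_crs(Aml *dev, PCIHostState *host)
    {
        CrsRangeSet crs_range_set;
        Aml *crs;

        crs_range_set_init(&crs_range_set);
        crs = build_crs(host, &crs_range_set); /* walk devices on the bus */
        aml_append(dev, aml_name_decl("_CRS", crs));
        crs_range_set_free(&crs_range_set);
    }
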
 include/hw/acpi/aml-build.h |  25 ++
 hw/acpi/aml-build.c | 498 ++
 hw/arm/virt-acpi-build.c|   4 +-
 hw/i386/acpi-build.c| 518 +---
 4 files changed, 528 insertions(+), 517 deletions(-)

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index a2ef8b6f31..4f678c45a5 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -3,6 +3,7 @@
 
 #include "hw/acpi/acpi-defs.h"
 #include "hw/acpi/bios-linker-loader.h"
+#include "hw/pci/pcie_host.h"
 
 /* Reserve RAM space for tables: add another order of magnitude. */
 #define ACPI_BUILD_TABLE_MAX_SIZE 0x20
@@ -223,6 +224,21 @@ struct AcpiBuildTables {
 BIOSLinker *linker;
 } AcpiBuildTables;
 
+typedef struct AcpiMcfgInfo {
+uint64_t mcfg_base;
+uint32_t mcfg_size;
+} AcpiMcfgInfo;
+
+typedef struct CrsRangeEntry {
+uint64_t base;
+uint64_t limit;
+} CrsRangeEntry;
+
+typedef struct CrsRangeSet {
+GPtrArray *io_ranges;
+GPtrArray *mem_ranges;
+GPtrArray *mem_64bit_ranges;
+} CrsRangeSet;
 /**
  * init_aml_allocator:
  *
@@ -389,6 +405,15 @@ void acpi_align_size(GArray *blob, unsigned align);
 void acpi_add_table(GArray *table_offsets, GArray *table_data);
 void acpi_build_tables_init(AcpiBuildTables *tables);
 void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre);
+Aml *build_osc_method(void);
+void build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info);
+Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi);
+Aml *build_prt(bool is_pci0_prt);
+void crs_range_set_init(CrsRangeSet *range_set);
+Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set);
+void crs_replace_with_free_ranges(GPtrArray *ranges,
+  uint64_t start, uint64_t end);
+void crs_range_set_free(CrsRangeSet *range_set);
 void
 build_rsdp_rsdt(GArray *table_data,
 BIOSLinker *linker, unsigned rsdt_tbl_offset);
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 8c2388274c..d3242c6b31 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -25,6 +25,10 @@
 #include "qemu/bswap.h"
 #include "qemu/bitops.h"
 #include "sysemu/numa.h"
+#include "hw/pci/pci.h"
+#include "hw/pci/pci_bus.h"
+#include "qemu/range.h"
+#include "hw/pci/pci_bridge.h"
 
 static GArray *build_alloc_array(void)
 {
@@ -1597,6 +1601,500 @@ void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre)
 g_array_free(tables->vmgenid, mfre);
 }
 
+static void crs_range_insert(GPtrArray *ranges, uint64_t base, uint64_t limit)
+{
+CrsRangeEntry *entry;
+
+entry = g_malloc(sizeof(*entry));
+entry->base = base;
+entry->limit = limit;
+
+g_ptr_array_add(ranges, entry);
+}
+
+static void crs_range_free(gpointer data)
+{
+CrsRangeEntry *entry = (CrsRangeEntry *)data;
+g_free(entry);
+}
+
+void crs_range_set_init(CrsRangeSet *range_set)
+{
+range_set->io_ranges = g_ptr_array_new_with_free_func(crs_range_free);
+range_set->mem_ranges = g_ptr_array_new_with_free_func(crs_range_free);
+range_set->mem_64bit_ranges =
+g_ptr_array_new_with_free_func(crs_range_free);
+}
+
+void crs_range_set_free(CrsRangeSet *range_set)
+{
+g_ptr_array_free(range_set->io_ranges, true);
+g_ptr_array_free(range_set->mem_ranges, true);
+g_ptr_array_free(range_set->mem_64bit_ranges, true);
+}
+
+static gint crs_range_compare(gconstpointer a, gconstpointer b)
+{
+ CrsRangeEntry *entry_a = *(CrsRangeEntry **)a;
+ CrsRangeEntry *entry_b = *(CrsRangeEntry **)b;
+
+ return (int64_t)entry_a->base - (int64_t)entry_b->base;
+}
+
+/*
+ * crs_replace_with_free_ranges - given the 'used' ranges within [start - end]
+ * interval, computes the 'free' ranges from the same interval.
+ * Example: If the input array is { [a1 - a2],[b1 - b2] }, the function
+ * will return { [base - a1], [a2 - b1], [b2 - limit] }.
+ */
+void crs_replace_with_free_ranges(GPtrArray *ranges,
+ uint64_t start, uint64_t end)
+{
+GPtrArray *free_ranges = g_ptr_array_new();
+uint64_t free_base = start;
+int i;
+
+g_ptr_array_sort(ranges, crs_range_compare);
+for (i = 0; i < ranges->len; i++) {
+CrsRangeEntry *used = g_ptr_array_index(ranges, i);
+
+if (free_base < used->base) {
+crs_range_insert(free_ranges, free_base, used->base - 1);
+}
+
+free_base = used->limit + 1;
+}
+
+if (free_base < end) {
+crs_range_insert(free_ranges, free_base, end);
+}
+
+g_ptr_array_set_size(ranges, 0);
+for (i = 0; i < free_r

[Xen-devel] [PATCH v5 13/24] hw: acpi: Do not create hotplug method when handler is not defined

2018-11-04 Thread Samuel Ortiz
CPU and memory ACPI hotplug are not necessarily handled through SCI
events. For example, with Hardware-reduced ACPI, the GED device will
manage ACPI hotplug entirely.
As a consequence, we make the generation of the CPU and memory hotplug
event handler AML optional. The code is only added when the event
handler method name is not NULL.

Reviewed-by: Philippe Mathieu-Daudé 
Tested-by: Philippe Mathieu-Daudé 
Signed-off-by: Samuel Ortiz 
---
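A sketch of the two call styles this enables, assuming the current
four-argument build_memory_hotplug_aml() prototype ('dsdt' and 'nr_mem'
come from the caller; the helper name is a placeholder):

    #include "hw/acpi/memory_hotplug.h"

    /* Sketch: emit memory hotplug AML with or without an SCI handler. */
    static void example_memhp_aml(Aml *dsdt, uint32_t nr_mem, bool hw_reduced)
    {
        /* Hardware-reduced (GED) platforms pass NULL: no GPE method emitted */
        build_memory_hotplug_aml(dsdt, nr_mem, "\\_SB.PCI0",
                                 hw_reduced ? NULL : "\\_GPE._E03");
    }
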
 hw/acpi/cpu.c|  8 +---
 hw/acpi/memory_hotplug.c | 11 +++
 2 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
index f10b190019..cd41377b5a 100644
--- a/hw/acpi/cpu.c
+++ b/hw/acpi/cpu.c
@@ -569,9 +569,11 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
 aml_append(sb_scope, cpus_dev);
 aml_append(table, sb_scope);
 
-method = aml_method(event_handler_method, 0, AML_NOTSERIALIZED);
-aml_append(method, aml_call0("\\_SB.CPUS." CPU_SCAN_METHOD));
-aml_append(table, method);
+if (event_handler_method) {
+method = aml_method(event_handler_method, 0, AML_NOTSERIALIZED);
+aml_append(method, aml_call0("\\_SB.CPUS." CPU_SCAN_METHOD));
+aml_append(table, method);
+}
 
 g_free(cphp_res_path);
 }
diff --git a/hw/acpi/memory_hotplug.c b/hw/acpi/memory_hotplug.c
index 8c7c1013f3..db2c4df961 100644
--- a/hw/acpi/memory_hotplug.c
+++ b/hw/acpi/memory_hotplug.c
@@ -715,10 +715,13 @@ void build_memory_hotplug_aml(Aml *table, uint32_t nr_mem,
 }
 aml_append(table, dev_container);
 
-method = aml_method(event_handler_method, 0, AML_NOTSERIALIZED);
-aml_append(method,
-aml_call0(MEMORY_DEVICES_CONTAINER "." MEMORY_SLOT_SCAN_METHOD));
-aml_append(table, method);
+if (event_handler_method) {
+method = aml_method(event_handler_method, 0, AML_NOTSERIALIZED);
+aml_append(method,
+   aml_call0(MEMORY_DEVICES_CONTAINER "."
+ MEMORY_SLOT_SCAN_METHOD));
+aml_append(table, method);
+}
 
 g_free(mhp_res_path);
 }
-- 
2.19.1



[Xen-devel] [PATCH v5 05/24] hw: acpi: Implement XSDT support for RSDP

2018-11-04 Thread Samuel Ortiz
XSDT is the 64-bit version of the legacy ACPI RSDT (Root System
Description Table). The RSDT only allows for 32-bit addresses and has
thus been deprecated. Since ACPI version 2.0, RSDPs should point at
XSDTs rather than RSDTs, although RSDTs are still supported for
backward compatibility.

Since version 2.0, the RSDP also carries an extended checksum, a total
table length and a revision field.

Signed-off-by: Samuel Ortiz 
---
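For reference, the ACPI 2.0+ RSDP layout and the two ranges covered by
the checksums installed below: the legacy checksum spans the first 20
bytes (up to, but not including, the length field), while the extended
checksum spans the whole 36-byte table.

    /*
     * offset  0 : "RSD PTR " signature (8 bytes)
     * offset  8 : checksum (covers bytes 0-19)
     * offset  9 : OEM ID (6 bytes)
     * offset 15 : revision = 0x02
     * offset 16 : rsdt_physical_address (4 bytes)
     * offset 20 : length = 36 (whole table)
     * offset 24 : xsdt_physical_address (8 bytes)
     * offset 32 : extended_checksum (covers bytes 0-35)
     * offset 33 : reserved (3 bytes)
     */
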
 include/hw/acpi/aml-build.h |  3 +++
 hw/acpi/aml-build.c | 37 +
 2 files changed, 40 insertions(+)

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index c9bcb32d81..3580d0ce90 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -393,6 +393,9 @@ void
 build_rsdp(GArray *table_data,
BIOSLinker *linker, unsigned rsdt_tbl_offset);
 void
+build_rsdp_xsdt(GArray *table_data,
+BIOSLinker *linker, unsigned xsdt_tbl_offset);
+void
 build_rsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
const char *oem_id, const char *oem_table_id);
 void
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 51b608432f..a030d40674 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -1651,6 +1651,43 @@ build_xsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
  (void *)xsdt, "XSDT", xsdt_len, 1, oem_id, oem_table_id);
 }
 
+/* RSDP pointing at an XSDT */
+void
+build_rsdp_xsdt(GArray *rsdp_table,
+BIOSLinker *linker, unsigned xsdt_tbl_offset)
+{
+AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
+unsigned xsdt_pa_size = sizeof(rsdp->xsdt_physical_address);
+unsigned xsdt_pa_offset =
+(char *)&rsdp->xsdt_physical_address - rsdp_table->data;
+unsigned xsdt_offset =
+(char *)&rsdp->length - rsdp_table->data;
+
+bios_linker_loader_alloc(linker, ACPI_BUILD_RSDP_FILE, rsdp_table, 16,
+ true /* fseg memory */);
+
+memcpy(&rsdp->signature, "RSD PTR ", 8);
+memcpy(rsdp->oem_id, ACPI_BUILD_APPNAME6, 6);
+rsdp->length = cpu_to_le32(sizeof(*rsdp));
+/* version 2, we will use the XSDT pointer */
+rsdp->revision = 0x02;
+
+/* Address to be filled by Guest linker */
+bios_linker_loader_add_pointer(linker,
+ACPI_BUILD_RSDP_FILE, xsdt_pa_offset, xsdt_pa_size,
+ACPI_BUILD_TABLE_FILE, xsdt_tbl_offset);
+
+/* Legacy checksum to be filled by Guest linker */
+bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
+(char *)rsdp - rsdp_table->data, xsdt_offset,
+(char *)&rsdp->checksum - rsdp_table->data);
+
+/* Extended checksum to be filled by Guest linker */
+bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
+(char *)rsdp - rsdp_table->data, sizeof *rsdp,
+(char *)&rsdp->extended_checksum - rsdp_table->data);
+}
+
 void build_srat_memory(AcpiSratMemoryAffinity *numamem, uint64_t base,
uint64_t len, int node, MemoryAffinityFlags flags)
 {
-- 
2.19.1



[Xen-devel] [PATCH v5 01/24] hw: i386: Decouple the ACPI build from the PC machine type

2018-11-04 Thread Samuel Ortiz
ACPI tables are platform, machine type and even architecture agnostic,
and as such we want to provide an internal ACPI API that only depends
on platform-agnostic information.

For the x86 architecture, in order to build ACPI tables independently
from the PC or Q35 machine types, we are moving a few MachineState
structure fields into a machine type agnostic structure called
AcpiConfiguration. The structure fields we move are:

   HotplugHandler *acpi_dev
   AcpiNVDIMMState acpi_nvdimm_state;
   FWCfgState *fw_cfg
   ram_addr_t below_4g_mem_size, above_4g_mem_size
   bool apic_xrupt_override
   unsigned apic_id_limit
   uint64_t numa_nodes
   uint64_t *node_mem

Signed-off-by: Samuel Ortiz 
---
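A sketch of the machine-side flow after this change (the helper name is
a placeholder and the values stand for whatever the machine computed
during its init):

    #include "hw/boards.h"
    #include "hw/nvram/fw_cfg.h"
    #include "hw/acpi/acpi.h"
    #include "hw/i386/acpi-build.h"

    /* Sketch: fill the machine-agnostic configuration, then build tables. */
    static void example_setup_acpi(MachineState *machine,
                                   AcpiConfiguration *acpi_conf,
                                   FWCfgState *fw_cfg,
                                   HotplugHandler *acpi_dev)
    {
        acpi_conf->fw_cfg = fw_cfg;
        acpi_conf->acpi_dev = acpi_dev;
        acpi_conf->rsdp_in_ram = true;

        acpi_setup(machine, acpi_conf);
    }
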
 hw/i386/acpi-build.h |   4 +-
 include/hw/acpi/acpi.h   |  44 ++
 include/hw/i386/pc.h |  19 ++---
 hw/acpi/cpu_hotplug.c|   9 +-
 hw/arm/virt-acpi-build.c |  10 ---
 hw/i386/acpi-build.c | 136 ++
 hw/i386/pc.c | 176 ---
 hw/i386/pc_piix.c|  21 ++---
 hw/i386/pc_q35.c |  21 ++---
 hw/i386/xen/xen-hvm.c|  19 +++--
 10 files changed, 257 insertions(+), 202 deletions(-)

diff --git a/hw/i386/acpi-build.h b/hw/i386/acpi-build.h
index 007332e51c..065a1d8250 100644
--- a/hw/i386/acpi-build.h
+++ b/hw/i386/acpi-build.h
@@ -2,6 +2,8 @@
 #ifndef HW_I386_ACPI_BUILD_H
 #define HW_I386_ACPI_BUILD_H
 
-void acpi_setup(void);
+#include "hw/acpi/acpi.h"
+
+void acpi_setup(MachineState *machine, AcpiConfiguration *acpi_conf);
 
 #endif
diff --git a/include/hw/acpi/acpi.h b/include/hw/acpi/acpi.h
index c20ace0d0b..254c8d0cfc 100644
--- a/include/hw/acpi/acpi.h
+++ b/include/hw/acpi/acpi.h
@@ -24,6 +24,8 @@
 #include "exec/memory.h"
 #include "hw/irq.h"
 #include "hw/acpi/acpi_dev_interface.h"
+#include "hw/hotplug.h"
+#include "hw/mem/nvdimm.h"
 
 /*
  * current device naming scheme supports up to 256 memory devices
@@ -186,6 +188,48 @@ extern int acpi_enabled;
 extern char unsigned *acpi_tables;
 extern size_t acpi_tables_len;
 
+typedef
+struct AcpiBuildState {
+/* Copy of table in RAM (for patching). */
+MemoryRegion *table_mr;
+/* Is table patched? */
+bool patched;
+void *rsdp;
+MemoryRegion *rsdp_mr;
+MemoryRegion *linker_mr;
+} AcpiBuildState;
+
+typedef
+struct AcpiConfiguration {
+/* Machine class ACPI settings */
+int legacy_acpi_table_size;
+bool rsdp_in_ram;
+unsigned acpi_data_size;
+
+/* Machine state ACPI settings */
+HotplugHandler *acpi_dev;
+AcpiNVDIMMState acpi_nvdimm_state;
+
+/*
+ * The fields below are machine settings that
+ * are not ACPI specific. However they are needed
+ * for building ACPI tables and as such should be
+ * carried through the ACPI configuration structure.
+ */
+bool legacy_cpu_hotplug;
+bool linuxboot_dma_enabled;
+FWCfgState *fw_cfg;
+ram_addr_t below_4g_mem_size, above_4g_mem_size;
+uint64_t numa_nodes;
+uint64_t *node_mem;
+bool apic_xrupt_override;
+unsigned apic_id_limit;
+PCIHostState *pci_host;
+
+/* Build state */
+AcpiBuildState *build_state;
+} AcpiConfiguration;
+
 uint8_t *acpi_table_first(void);
 uint8_t *acpi_table_next(uint8_t *current);
 unsigned acpi_table_len(void *current);
diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index 136fe497b6..fed136fcdd 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -12,6 +12,7 @@
 #include "qemu/range.h"
 #include "qemu/bitmap.h"
 #include "sysemu/sysemu.h"
+#include "hw/acpi/acpi.h"
 #include "hw/pci/pci.h"
 #include "hw/compat.h"
 #include "hw/mem/pc-dimm.h"
@@ -35,10 +36,8 @@ struct PCMachineState {
 Notifier machine_done;
 
 /* Pointers to devices and objects: */
-HotplugHandler *acpi_dev;
 ISADevice *rtc;
 PCIBus *bus;
-FWCfgState *fw_cfg;
 qemu_irq *gsi;
 
 /* Configuration options: */
@@ -46,28 +45,20 @@ struct PCMachineState {
 OnOffAuto vmport;
 OnOffAuto smm;
 
-AcpiNVDIMMState acpi_nvdimm_state;
-
 bool acpi_build_enabled;
 bool smbus;
 bool sata;
 bool pit;
 
-/* RAM information (sizes, addresses, configuration): */
-ram_addr_t below_4g_mem_size, above_4g_mem_size;
-
-/* CPU and apic information: */
-bool apic_xrupt_override;
-unsigned apic_id_limit;
+/* CPU information */
 uint16_t boot_cpus;
 
-/* NUMA information: */
-uint64_t numa_nodes;
-uint64_t *node_mem;
-
 /* Address space used by IOAPIC device. All IOAPIC interrupts
  * will be translated to MSI messages in the address space. */
 AddressSpace *ioapic_as;
+
+/* ACPI configuration */
+AcpiConfiguration acpi_configuration;
 };
 
 #define PC_MACHINE_ACPI_DEVICE_PROP "acpi-device"
diff --git a/hw/acpi/cpu_hotplug.c b/hw/acpi/cpu_hotplug.c
index 524

[Xen-devel] [PATCH v5 11/24] hw: acpi: Export and generalize the PCI host AML API

2018-11-04 Thread Samuel Ortiz
From: Yang Zhong 

The AML build routines for the PCI host bridge and the corresponding
DSDT addition are neither x86 nor PC machine type specific.
We can move them to the architecture agnostic hw/acpi folder, and by
carrying all the needed information through a new AcpiPciBus structure,
we can make them PC machine type independent.

Signed-off-by: Yang Zhong 
Signed-off-by: Rob Bradford 
Signed-off-by: Samuel Ortiz 
---
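A sketch (placeholder helper name) of how a machine-agnostic DSDT build
path can now describe its PCI host; the bus and the hole ranges are
whatever the caller derived from its host bridge:

    #include "hw/acpi/aml-build.h"
    #include "hw/pci/pci.h"
    #include "qemu/range.h"

    /* Sketch: describe the PCI host bus in the DSDT. */
    static void example_dsdt_add_pci(Aml *dsdt, PCIBus *bus,
                                     Range *pci_hole, Range *pci_hole64)
    {
        AcpiPciBus acpi_pci_host = {
            .pci_bus    = bus,
            .pci_hole   = pci_hole,
            .pci_hole64 = pci_hole64,
        };

        acpi_dsdt_add_pci_bus(dsdt, &acpi_pci_host);
    }
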
 include/hw/acpi/aml-build.h |   8 ++
 hw/acpi/aml-build.c | 157 
 hw/i386/acpi-build.c| 115 ++
 3 files changed, 173 insertions(+), 107 deletions(-)

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index fde2785b9a..1861e37ebf 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -229,6 +229,12 @@ typedef struct AcpiMcfgInfo {
 uint32_t mcfg_size;
 } AcpiMcfgInfo;
 
+typedef struct AcpiPciBus {
+PCIBus *pci_bus;
+Range *pci_hole;
+Range *pci_hole64;
+} AcpiPciBus;
+
 typedef struct CrsRangeEntry {
 uint64_t base;
 uint64_t limit;
@@ -411,6 +417,8 @@ Aml *build_osc_method(uint32_t value);
 void build_mcfg(GArray *table_data, BIOSLinker *linker, AcpiMcfgInfo *info);
 Aml *build_gsi_link_dev(const char *name, uint8_t uid, uint8_t gsi);
 Aml *build_prt(bool is_pci0_prt);
+void acpi_dsdt_add_pci_bus(Aml *dsdt, AcpiPciBus *pci_host);
+Aml *build_pci_host_bridge(Aml *table, AcpiPciBus *pci_host);
 void crs_range_set_init(CrsRangeSet *range_set);
 Aml *build_crs(PCIHostState *host, CrsRangeSet *range_set);
 void crs_replace_with_free_ranges(GPtrArray *ranges,
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index b8e32f15f7..869ed70db3 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -29,6 +29,19 @@
 #include "hw/pci/pci_bus.h"
 #include "qemu/range.h"
 #include "hw/pci/pci_bridge.h"
+#include "hw/i386/pc.h"
+#include "sysemu/tpm.h"
+#include "hw/acpi/tpm.h"
+
+#define PCI_HOST_BRIDGE_CONFIG_ADDR        0xcf8
+#define PCI_HOST_BRIDGE_IO_0_MIN_ADDR      0x0000
+#define PCI_HOST_BRIDGE_IO_0_MAX_ADDR      0x0cf7
+#define PCI_HOST_BRIDGE_IO_1_MIN_ADDR      0x0d00
+#define PCI_HOST_BRIDGE_IO_1_MAX_ADDR      0xffff
+#define PCI_VGA_MEM_BASE_ADDR              0x000a0000
+#define PCI_VGA_MEM_MAX_ADDR               0x000bffff
+#define IO_0_LEN                           0xcf8
+#define VGA_MEM_LEN                        0x20000
 
 static GArray *build_alloc_array(void)
 {
@@ -2142,6 +2155,150 @@ Aml *build_prt(bool is_pci0_prt)
 return method;
 }
 
+Aml *build_pci_host_bridge(Aml *table, AcpiPciBus *pci_host)
+{
+CrsRangeEntry *entry;
+Aml *scope, *dev, *crs;
+CrsRangeSet crs_range_set;
+Range *pci_hole = NULL;
+Range *pci_hole64 = NULL;
+PCIBus *bus = NULL;
+int root_bus_limit = 0xFF;
+int i;
+
+bus = pci_host->pci_bus;
+assert(bus);
+pci_hole = pci_host->pci_hole;
+pci_hole64 = pci_host->pci_hole64;
+
+crs_range_set_init(&crs_range_set);
+QLIST_FOREACH(bus, &bus->child, sibling) {
+uint8_t bus_num = pci_bus_num(bus);
+uint8_t numa_node = pci_bus_numa_node(bus);
+
+/* look only for expander root buses */
+if (!pci_bus_is_root(bus)) {
+continue;
+}
+
+if (bus_num < root_bus_limit) {
+root_bus_limit = bus_num - 1;
+}
+
+scope = aml_scope("\\_SB");
+dev = aml_device("PC%.02X", bus_num);
+aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
+aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A03")));
+aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
+if (pci_bus_is_express(bus)) {
+aml_append(dev, aml_name_decl("SUPP", aml_int(0)));
+aml_append(dev, aml_name_decl("CTRL", aml_int(0)));
+aml_append(dev, build_osc_method(0x1F));
+}
+if (numa_node != NUMA_NODE_UNASSIGNED) {
+aml_append(dev, aml_name_decl("_PXM", aml_int(numa_node)));
+}
+
+aml_append(dev, build_prt(false));
+crs = build_crs(PCI_HOST_BRIDGE(BUS(bus)->parent), &crs_range_set);
+aml_append(dev, aml_name_decl("_CRS", crs));
+aml_append(scope, dev);
+aml_append(table, scope);
+}
+scope = aml_scope("\\_SB.PCI0");
+/* build PCI0._CRS */
+crs = aml_resource_template();
+/* set the pcie bus num */
+aml_append(crs,
+aml_word_bus_number(AML_MIN_FIXED, AML_MAX_FIXED, AML_POS_DECODE,
+0x0000, 0x0, root_bus_limit,
+0x0000, root_bus_limit + 1));
+aml_append(crs, aml_io(AML_DECODE16, PCI_HOST_BRIDGE_CONFIG_ADDR,
+   PCI_HOST_BRIDGE_CONFIG_ADDR, 0x01, 0x08));
+   

[Xen-devel] [PATCH v5 10/24] hw: acpi: Export the PCI host and holes getters

2018-11-04 Thread Samuel Ortiz
This is going to be needed by the hardware-reduced implementation, so
let's export it.
Once the ACPI builder methods and getters are implemented, the
acpi_get_pci_host() implementation will become hardware agnostic.

Signed-off-by: Samuel Ortiz 
---
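A minimal sketch of the intended consumers of the exported getters:

    #include "hw/acpi/aml-build.h"
    #include "qemu/range.h"

    /* Sketch: fetch the PCI host object and the PCI hole ranges. */
    static Object *example_get_pci_info(Range *hole, Range *hole64)
    {
        acpi_get_pci_holes(hole, hole64); /* 32-bit and 64-bit PCI holes */
        return acpi_get_pci_host();       /* host bridge object */
    }
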
 include/hw/acpi/aml-build.h |  2 ++
 hw/acpi/aml-build.c | 43 +
 hw/i386/acpi-build.c| 47 ++---
 3 files changed, 47 insertions(+), 45 deletions(-)

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index c27c0935ae..fde2785b9a 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -400,6 +400,8 @@ build_header(BIOSLinker *linker, GArray *table_data,
  const char *oem_id, const char *oem_table_id);
 void *acpi_data_push(GArray *table_data, unsigned size);
 unsigned acpi_data_len(GArray *table);
+Object *acpi_get_pci_host(void);
+void acpi_get_pci_holes(Range *hole, Range *hole64);
 /* Align AML blob size to a multiple of 'align' */
 void acpi_align_size(GArray *blob, unsigned align);
 void acpi_add_table(GArray *table_offsets, GArray *table_data);
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 2b9a636e75..b8e32f15f7 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -1601,6 +1601,49 @@ void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre)
 g_array_free(tables->vmgenid, mfre);
 }
 
+/*
+ * Because of the PXB hosts we cannot simply query TYPE_PCI_HOST_BRIDGE.
+ */
+Object *acpi_get_pci_host(void)
+{
+PCIHostState *host;
+
+host = OBJECT_CHECK(PCIHostState,
+object_resolve_path("/machine/i440fx", NULL),
+TYPE_PCI_HOST_BRIDGE);
+if (!host) {
+host = OBJECT_CHECK(PCIHostState,
+object_resolve_path("/machine/q35", NULL),
+TYPE_PCI_HOST_BRIDGE);
+}
+
+return OBJECT(host);
+}
+
+
+void acpi_get_pci_holes(Range *hole, Range *hole64)
+{
+Object *pci_host;
+
+pci_host = acpi_get_pci_host();
+g_assert(pci_host);
+
+range_set_bounds1(hole,
+  object_property_get_uint(pci_host,
+   PCI_HOST_PROP_PCI_HOLE_START,
+   NULL),
+  object_property_get_uint(pci_host,
+   PCI_HOST_PROP_PCI_HOLE_END,
+   NULL));
+range_set_bounds1(hole64,
+  object_property_get_uint(pci_host,
+   PCI_HOST_PROP_PCI_HOLE64_START,
+   NULL),
+  object_property_get_uint(pci_host,
+   PCI_HOST_PROP_PCI_HOLE64_END,
+   NULL));
+}
+
 static void crs_range_insert(GPtrArray *ranges, uint64_t base, uint64_t limit)
 {
 CrsRangeEntry *entry;
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index bd147a6bd2..a5f5f83500 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -232,49 +232,6 @@ static void acpi_get_misc_info(AcpiMiscInfo *info)
 info->applesmc_io_base = applesmc_port();
 }
 
-/*
- * Because of the PXB hosts we cannot simply query TYPE_PCI_HOST_BRIDGE.
- * On i386 arch we only have two pci hosts, so we can look only for them.
- */
-static Object *acpi_get_i386_pci_host(void)
-{
-PCIHostState *host;
-
-host = OBJECT_CHECK(PCIHostState,
-object_resolve_path("/machine/i440fx", NULL),
-TYPE_PCI_HOST_BRIDGE);
-if (!host) {
-host = OBJECT_CHECK(PCIHostState,
-object_resolve_path("/machine/q35", NULL),
-TYPE_PCI_HOST_BRIDGE);
-}
-
-return OBJECT(host);
-}
-
-static void acpi_get_pci_holes(Range *hole, Range *hole64)
-{
-Object *pci_host;
-
-pci_host = acpi_get_i386_pci_host();
-g_assert(pci_host);
-
-range_set_bounds1(hole,
-  object_property_get_uint(pci_host,
-   PCI_HOST_PROP_PCI_HOLE_START,
-   NULL),
-  object_property_get_uint(pci_host,
-   PCI_HOST_PROP_PCI_HOLE_END,
-   NULL));
-range_set_bounds1(hole64,
-  object_property_get_uint(pci_host,
-   PCI_HOST_PROP_PCI_HOLE64_START,
-   NULL),
-  object_property_get_uint(pci_host,
-   PCI_HOST_PROP_PCI_HOLE64_END,
-   NULL));
-}
-
 /* FACS */
 s

[Xen-devel] [PATCH v5 04/24] hw: acpi: Export the RSDP build API

2018-11-04 Thread Samuel Ortiz
The hardware-reduced API will need to build RSDP as well, so we should
export this routine.

Signed-off-by: Samuel Ortiz 
---
 include/hw/acpi/aml-build.h | 3 +++
 hw/arm/virt-acpi-build.c| 2 +-
 hw/i386/acpi-build.c| 2 +-
 3 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index 73fc6659f2..c9bcb32d81 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -390,6 +390,9 @@ void acpi_add_table(GArray *table_offsets, GArray *table_data);
 void acpi_build_tables_init(AcpiBuildTables *tables);
 void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre);
 void
+build_rsdp(GArray *table_data,
+   BIOSLinker *linker, unsigned rsdt_tbl_offset);
+void
 build_rsdt(GArray *table_data, BIOSLinker *linker, GArray *table_offsets,
const char *oem_id, const char *oem_table_id);
 void
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index fc59cce769..623a6c4eac 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -367,7 +367,7 @@ static void acpi_dsdt_add_power_button(Aml *scope)
 }
 
 /* RSDP */
-static void
+void
 build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned xsdt_tbl_offset)
 {
 AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 74419d0663..f7a67f5c9c 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -2513,7 +2513,7 @@ build_amd_iommu(GArray *table_data, BIOSLinker *linker)
  "IVRS", table_data->len - iommu_start, 1, NULL, NULL);
 }
 
-static void
+void
 build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned rsdt_tbl_offset)
 {
 AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
-- 
2.19.1



[Xen-devel] [PATCH v5 02/24] hw: acpi: Export ACPI build alignment API

2018-11-04 Thread Samuel Ortiz
This is going to be needed by the Hardware-reduced ACPI routines.

Reviewed-by: Philippe Mathieu-Daudé 
Tested-by: Philippe Mathieu-Daudé 
Signed-off-by: Samuel Ortiz 
---
 include/hw/acpi/aml-build.h | 2 ++
 hw/acpi/aml-build.c | 8 
 hw/i386/acpi-build.c| 8 
 3 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index 6c36903c0a..73fc6659f2 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -384,6 +384,8 @@ build_header(BIOSLinker *linker, GArray *table_data,
  const char *oem_id, const char *oem_table_id);
 void *acpi_data_push(GArray *table_data, unsigned size);
 unsigned acpi_data_len(GArray *table);
+/* Align AML blob size to a multiple of 'align' */
+void acpi_align_size(GArray *blob, unsigned align);
 void acpi_add_table(GArray *table_offsets, GArray *table_data);
 void acpi_build_tables_init(AcpiBuildTables *tables);
 void acpi_build_tables_cleanup(AcpiBuildTables *tables, bool mfre);
diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 1e43cd736d..51b608432f 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -1565,6 +1565,14 @@ unsigned acpi_data_len(GArray *table)
 return table->len;
 }
 
+void acpi_align_size(GArray *blob, unsigned align)
+{
+/* Align size to multiple of given size. This reduces the chance
+ * we need to change size in the future (breaking cross version migration).
+ */
+g_array_set_size(blob, ROUND_UP(acpi_data_len(blob), align));
+}
+
 void acpi_add_table(GArray *table_offsets, GArray *table_data)
 {
 uint32_t offset = table_data->len;
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index d0362e1382..81d98fa34f 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -282,14 +282,6 @@ static void acpi_get_pci_holes(Range *hole, Range *hole64)
NULL));
 }
 
-static void acpi_align_size(GArray *blob, unsigned align)
-{
-/* Align size to multiple of given size. This reduces the chance
- * we need to change size in the future (breaking cross version migration).
- */
-g_array_set_size(blob, ROUND_UP(acpi_data_len(blob), align));
-}
-
 /* FACS */
 static void
 build_facs(GArray *table_data, BIOSLinker *linker)
-- 
2.19.1



[Xen-devel] [PATCH v5 00/24] ACPI reorganization for hardware-reduced API addition

2018-11-04 Thread Samuel Ortiz
This patch set provides an ACPI code reorganization in preparation for
adding a shared hardware-reduced ACPI API to QEMU.

The changes are coming from the NEMU [1] project where we're defining
a new x86 machine type: i386/virt. This is an EFI only, ACPI
hardware-reduced platform that is built on top of a generic
hardware-reduced ACPI API [2]. This API was initially based off the
generic parts of the arm/virt-acpi-build.c implementation, and the goal
is for both i386/virt and arm/virt to duplicate as little code as
possible by using this new, shared API.

As a preliminary for adding this hardware-reduced ACPI API to QEMU, we did
some ACPI code reorganization with the following goals:

* Share as much as possible of the current ACPI build APIs between
  legacy and hardware-reduced ACPI.
* Share the ACPI build code across machine types and architectures and
  remove the typical PC machine type dependency.

The patches are also available in their own git branch [3].

[1] https://github.com/intel/nemu
[2] https://github.com/intel/nemu/blob/topic/virt-x86/hw/acpi/reduced.c
[3] https://github.com/intel/nemu/tree/topic/upstream/acpi

v1 -> v2:
   * Drop the hardware-reduced implementation for now. Our next patch set
 will add hardware-reduced and convert arm/virt to it.
   * Implement the ACPI build methods as a QOM Interface Class and convert
 the PC machine type to it.
   * acpi_conf_pc_init() uses a PCMachineState pointer and not a
 MachineState one as its argument.

v2 -> v3:
   * Cc all relevant maintainers, no functional changes.

v3 -> v4:
   * Renamed all AcpiConfiguration pointers from conf to acpi_conf.
   * Removed the ACPI_BUILD_ALIGN_SIZE export.
   * Temporarily updated the arm virt build_rsdp() prototype for
 bisectability purposes.
   * Removed unneeded pci headers from acpi-build.c.
   * Refactor the acpi PCI host getter so that it truly is architecture
 agnostic, by carrying the PCI host pointer through the
 AcpiConfiguration structure.
   * Splitted the PCI host AML builder API export patch from the PCI
 host and holes getter one.
   * Reduced the build_srat() export scope to hw/i386 instead of the
 broader hw/acpi. SRAT builders are truly architecture specific
 and can hardly be generalized.
   * Completed the ACPI builder documentation.

v4 -> v5:
   * Reorganize the ACPI RSDP export and XSDT implementation into 3
 patches.
   * Fix the hw/i386/acpi header inclusions.

Samuel Ortiz (16):
  hw: i386: Decouple the ACPI build from the PC machine type
  hw: acpi: Export ACPI build alignment API
  hw: acpi: The RSDP build API can return void
  hw: acpi: Export the RSDP build API
  hw: acpi: Implement XSDT support for RSDP
  hw: acpi: Factorize the RSDP build API implementation
  hw: i386: Move PCI host definitions to pci_host.h
  hw: acpi: Export the PCI host and holes getters
  hw: acpi: Do not create hotplug method when handler is not defined
  hw: i386: Make the hotpluggable memory size property more generic
  hw: i386: Export the i386 ACPI SRAT build method
  hw: i386: Export the MADT build method
  hw: acpi: Define ACPI tables builder interface
  hw: i386: Implement the ACPI builder interface for PC
  hw: pci-host: piix: Return PCI host pointer instead of PCI bus
  hw: i386: Set ACPI configuration PCI host pointer

Sebastien Boeuf (2):
  hw: acpi: Export the PCI hotplug API
  hw: acpi: Retrieve the PCI bus from AcpiPciHpState

Yang Zhong (6):
  hw: acpi: Generalize AML build routines
  hw: acpi: Factorize _OSC AML across architectures
  hw: acpi: Export and generalize the PCI host AML API
  hw: acpi: Export the MCFG getter
  hw: acpi: Fix memory hotplug AML generation error
  hw: i386: Refactor PCI host getter

 hw/i386/acpi-build.h   |9 +-
 include/hw/acpi/acpi-defs.h|   14 +
 include/hw/acpi/acpi.h |   44 ++
 include/hw/acpi/aml-build.h|   47 ++
 include/hw/acpi/builder.h  |  100 +++
 include/hw/i386/acpi.h |   28 +
 include/hw/i386/pc.h   |   49 +-
 include/hw/mem/memory-device.h |2 +
 include/hw/pci/pci_host.h  |6 +
 hw/acpi/aml-build.c|  981 +
 hw/acpi/builder.c  |   97 +++
 hw/acpi/cpu.c  |8 +-
 hw/acpi/cpu_hotplug.c  |9 +-
 hw/acpi/memory_hotplug.c   |   21 +-
 hw/acpi/pcihp.c|   10 +-
 hw/arm/virt-acpi-build.c   |   93 +--
 hw/i386/acpi-build.c   | 1072 +++-
 hw/i386/pc.c   |  198 +++---
 hw/i386/pc_piix.c  |   36 +-
 hw/i386/pc_q35.c   |   22 +-
 hw/i386/xen/xen-hvm.c  |   19 +-
 hw/pci-host/piix.c |   32 +-
 stubs/pci-host-piix.c  |6 -
 hw/acpi/Makefile.objs  |1 +
 stubs/Makefile.objs|1 -
 25 files changed, 1644 insertions(+), 1261 deletions(-)
 create mode 100644 include/hw/acpi/builder.h
 create mode 100644 inclu

[Xen-devel] [PATCH v5 03/24] hw: acpi: The RSDP build API can return void

2018-11-04 Thread Samuel Ortiz
For both x86 and ARM architectures, the internal RSDP build API can
return void as the current return value is unused.

Signed-off-by: Samuel Ortiz 
---
 hw/arm/virt-acpi-build.c | 4 +---
 hw/i386/acpi-build.c | 4 +---
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index f28a2faa53..fc59cce769 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -367,7 +367,7 @@ static void acpi_dsdt_add_power_button(Aml *scope)
 }
 
 /* RSDP */
-static GArray *
+static void
 build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned xsdt_tbl_offset)
 {
 AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
@@ -392,8 +392,6 @@ build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned xsdt_tbl_offset)
 bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
 (char *)rsdp - rsdp_table->data, sizeof *rsdp,
 (char *)&rsdp->checksum - rsdp_table->data);
-
-return rsdp_table;
 }
 
 static void
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 81d98fa34f..74419d0663 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -2513,7 +2513,7 @@ build_amd_iommu(GArray *table_data, BIOSLinker *linker)
  "IVRS", table_data->len - iommu_start, 1, NULL, NULL);
 }
 
-static GArray *
+static void
 build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned rsdt_tbl_offset)
 {
 AcpiRsdpDescriptor *rsdp = acpi_data_push(rsdp_table, sizeof *rsdp);
@@ -2535,8 +2535,6 @@ build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned rsdt_tbl_offset)
 bios_linker_loader_add_checksum(linker, ACPI_BUILD_RSDP_FILE,
 (char *)rsdp - rsdp_table->data, sizeof *rsdp,
 (char *)&rsdp->checksum - rsdp_table->data);
-
-return rsdp_table;
 }
 
 static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
-- 
2.19.1



[Xen-devel] [PATCH v4 01/23] hw: i386: Decouple the ACPI build from the PC machine type

2018-11-01 Thread Samuel Ortiz
ACPI tables are platform, machine type, and even architecture
agnostic, and as such we want to provide an internal ACPI API that
depends only on platform-agnostic information.

For the x86 architecture, in order to build ACPI tables independently
from the PC or Q35 machine types, we are moving a few MachineState
structure fields into a machine type agnostic structure called
AcpiConfiguration. The structure fields we move are:

   HotplugHandler *acpi_dev
   AcpiNVDIMMState acpi_nvdimm_state;
   FWCfgState *fw_cfg
   ram_addr_t below_4g_mem_size, above_4g_mem_size
   bool apic_xrupt_override
   unsigned apic_id_limit
   uint64_t numa_nodes
   uint64_t numa_mem
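
In rough, self-contained C, the container described above could look
like the sketch below; the stand-in typedefs replace the real QEMU
types, and the actual definition the series adds to
include/hw/acpi/acpi.h may differ in detail:

#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for the QEMU types named in the field list above. */
typedef struct HotplugHandler HotplugHandler;
typedef struct FWCfgState FWCfgState;
typedef struct { bool is_enabled; } AcpiNVDIMMState;  /* simplified stand-in */
typedef uint64_t ram_addr_t;                          /* stand-in */

/* Machine type agnostic ACPI configuration, mirroring the list above. */
typedef struct AcpiConfiguration {
    HotplugHandler *acpi_dev;
    AcpiNVDIMMState acpi_nvdimm_state;
    FWCfgState *fw_cfg;
    ram_addr_t below_4g_mem_size;
    ram_addr_t above_4g_mem_size;
    bool apic_xrupt_override;
    unsigned apic_id_limit;
    uint64_t numa_nodes;
    uint64_t numa_mem;
} AcpiConfiguration;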

Signed-off-by: Samuel Ortiz 
---
 hw/acpi/cpu_hotplug.c|   9 +-
 hw/arm/virt-acpi-build.c |  10 ---
 hw/i386/acpi-build.c | 136 ++
 hw/i386/acpi-build.h |   4 +-
 hw/i386/pc.c | 176 ---
 hw/i386/pc_piix.c|  21 ++---
 hw/i386/pc_q35.c |  21 ++---
 hw/i386/xen/xen-hvm.c|  19 +++--
 include/hw/acpi/acpi.h   |  44 ++
 include/hw/i386/pc.h |  19 ++---
 10 files changed, 257 insertions(+), 202 deletions(-)

diff --git a/hw/acpi/cpu_hotplug.c b/hw/acpi/cpu_hotplug.c
index 5243918125..634dc3b846 100644
--- a/hw/acpi/cpu_hotplug.c
+++ b/hw/acpi/cpu_hotplug.c
@@ -237,9 +237,9 @@ void build_legacy_cpu_hotplug_aml(Aml *ctx, MachineState *machine,
 /* The current AML generator can cover the APIC ID range [0..255],
  * inclusive, for VCPU hotplug. */
 QEMU_BUILD_BUG_ON(ACPI_CPU_HOTPLUG_ID_LIMIT > 256);
-if (pcms->apic_id_limit > ACPI_CPU_HOTPLUG_ID_LIMIT) {
+if (pcms->acpi_configuration.apic_id_limit > ACPI_CPU_HOTPLUG_ID_LIMIT) {
 error_report("max_cpus is too large. APIC ID of last CPU is %u",
- pcms->apic_id_limit - 1);
+ pcms->acpi_configuration.apic_id_limit - 1);
 exit(1);
 }
 
@@ -316,8 +316,9 @@ void build_legacy_cpu_hotplug_aml(Aml *ctx, MachineState *machine,
  * with up to 255 elements. Windows guests up to win2k8 fail when
  * VarPackageOp is used.
  */
-pkg = pcms->apic_id_limit <= 255 ? aml_package(pcms->apic_id_limit) :
-   aml_varpackage(pcms->apic_id_limit);
+pkg = pcms->acpi_configuration.apic_id_limit <= 255 ?
+aml_package(pcms->acpi_configuration.apic_id_limit) :
+aml_varpackage(pcms->acpi_configuration.apic_id_limit);
 
 for (i = 0, apic_idx = 0; i < apic_ids->len; i++) {
 int apic_id = apic_ids->cpus[i].arch_id;
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index 5785fb697c..f28a2faa53 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -790,16 +790,6 @@ build_dsdt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
 free_aml_allocator();
 }
 
-typedef
-struct AcpiBuildState {
-/* Copy of table in RAM (for patching). */
-MemoryRegion *table_mr;
-MemoryRegion *rsdp_mr;
-MemoryRegion *linker_mr;
-/* Is table patched? */
-bool patched;
-} AcpiBuildState;
-
 static
 void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
 {
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 1599caa7c5..d0362e1382 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -338,13 +338,14 @@ void pc_madt_cpu_entry(AcpiDeviceIf *adev, int uid,
 }
 
 static void
-build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
+build_madt(GArray *table_data, BIOSLinker *linker,
+   MachineState *ms, AcpiConfiguration *acpi_conf)
 {
-MachineClass *mc = MACHINE_GET_CLASS(pcms);
-const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(MACHINE(pcms));
+MachineClass *mc = MACHINE_GET_CLASS(ms);
+const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(ms);
 int madt_start = table_data->len;
-AcpiDeviceIfClass *adevc = ACPI_DEVICE_IF_GET_CLASS(pcms->acpi_dev);
-AcpiDeviceIf *adev = ACPI_DEVICE_IF(pcms->acpi_dev);
+AcpiDeviceIfClass *adevc = ACPI_DEVICE_IF_GET_CLASS(acpi_conf->acpi_dev);
+AcpiDeviceIf *adev = ACPI_DEVICE_IF(acpi_conf->acpi_dev);
 bool x2apic_mode = false;
 
 AcpiMultipleApicTable *madt;
@@ -370,7 +371,7 @@ build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
 io_apic->address = cpu_to_le32(IO_APIC_DEFAULT_ADDRESS);
 io_apic->interrupt = cpu_to_le32(0);
 
-if (pcms->apic_xrupt_override) {
+if (acpi_conf->apic_xrupt_override) {
 intsrcovr = acpi_data_push(table_data, sizeof *intsrcovr);
 intsrcovr->type   = ACPI_APIC_XRUPT_OVERRIDE;
 intsrcovr->length = sizeof(*intsrcovr);
@@ -1786,13 +1787,12 @@ static Aml *build_q35_osc_method(void)
 static void
 build_dsdt(GArray *table_data, BIOSLinker *linker,
AcpiPmInfo *pm, AcpiMiscInfo *m

[Xen-devel] [PATCH v3 01/19] hw: i386: Decouple the ACPI build from the PC machine type

2018-10-29 Thread Samuel Ortiz
ACPI tables are platform, machine type, and even architecture
agnostic, and as such we want to provide an internal ACPI API that
depends only on platform-agnostic information.

For the x86 architecture, in order to build ACPI tables independently
from the PC or Q35 machine types, we are moving a few MachineState
structure fields into a machine type agnostic structure called
AcpiConfiguration. The structure fields we move are:

   HotplugHandler *acpi_dev
   AcpiNVDIMMState acpi_nvdimm_state;
   FWCfgState *fw_cfg
   ram_addr_t below_4g_mem_size, above_4g_mem_size
   bool apic_xrupt_override
   unsigned apic_id_limit
   uint64_t numa_nodes
   uint64_t numa_mem
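
The call-site pattern this change moves toward can be pictured with
the rough stand-alone sketch below; the stub types and stub build_madt()
are assumptions for illustration, and only the shape of the calls mirrors
the diff that follows, not the real QEMU declarations:

#include <stdbool.h>

/* Opaque stand-ins used only through pointers. */
typedef struct GArray GArray;
typedef struct BIOSLinker BIOSLinker;
typedef struct MachineState MachineState;
typedef struct HotplugHandler HotplugHandler;

typedef struct AcpiConfiguration {
    HotplugHandler *acpi_dev;
    bool apic_xrupt_override;
    unsigned apic_id_limit;
    /* ... remaining fields from the list above ... */
} AcpiConfiguration;

/* Stub builder; the real one emits the MADT into table_data. */
static void build_madt(GArray *table_data, BIOSLinker *linker,
                       MachineState *ms, AcpiConfiguration *conf)
{
    (void)table_data; (void)linker; (void)ms; (void)conf;
}

/* The machine type fills the configuration once... */
static void machine_fill_acpi_conf(AcpiConfiguration *conf,
                                   HotplugHandler *acpi_dev,
                                   unsigned apic_id_limit)
{
    conf->acpi_dev = acpi_dev;
    conf->apic_id_limit = apic_id_limit;
    conf->apic_xrupt_override = false;
}

/* ...and the generic ACPI code only ever sees (ms, conf), never
 * PCMachineState. */
static void acpi_build_example(GArray *tables, BIOSLinker *linker,
                               MachineState *ms, AcpiConfiguration *conf)
{
    build_madt(tables, linker, ms, conf);
}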

Signed-off-by: Samuel Ortiz 
---
 hw/acpi/cpu_hotplug.c|   9 +-
 hw/arm/virt-acpi-build.c |  10 ---
 hw/i386/acpi-build.c | 135 ++
 hw/i386/acpi-build.h |   4 +-
 hw/i386/pc.c | 175 ---
 hw/i386/pc_piix.c|  21 ++---
 hw/i386/pc_q35.c |  21 ++---
 hw/i386/xen/xen-hvm.c|  19 +++--
 include/hw/acpi/acpi.h   |  43 ++
 include/hw/i386/pc.h |  19 ++---
 10 files changed, 254 insertions(+), 202 deletions(-)

diff --git a/hw/acpi/cpu_hotplug.c b/hw/acpi/cpu_hotplug.c
index 5243918125..634dc3b846 100644
--- a/hw/acpi/cpu_hotplug.c
+++ b/hw/acpi/cpu_hotplug.c
@@ -237,9 +237,9 @@ void build_legacy_cpu_hotplug_aml(Aml *ctx, MachineState *machine,
 /* The current AML generator can cover the APIC ID range [0..255],
  * inclusive, for VCPU hotplug. */
 QEMU_BUILD_BUG_ON(ACPI_CPU_HOTPLUG_ID_LIMIT > 256);
-if (pcms->apic_id_limit > ACPI_CPU_HOTPLUG_ID_LIMIT) {
+if (pcms->acpi_configuration.apic_id_limit > ACPI_CPU_HOTPLUG_ID_LIMIT) {
 error_report("max_cpus is too large. APIC ID of last CPU is %u",
- pcms->apic_id_limit - 1);
+ pcms->acpi_configuration.apic_id_limit - 1);
 exit(1);
 }
 
@@ -316,8 +316,9 @@ void build_legacy_cpu_hotplug_aml(Aml *ctx, MachineState *machine,
  * with up to 255 elements. Windows guests up to win2k8 fail when
  * VarPackageOp is used.
  */
-pkg = pcms->apic_id_limit <= 255 ? aml_package(pcms->apic_id_limit) :
-   aml_varpackage(pcms->apic_id_limit);
+pkg = pcms->acpi_configuration.apic_id_limit <= 255 ?
+aml_package(pcms->acpi_configuration.apic_id_limit) :
+aml_varpackage(pcms->acpi_configuration.apic_id_limit);
 
 for (i = 0, apic_idx = 0; i < apic_ids->len; i++) {
 int apic_id = apic_ids->cpus[i].arch_id;
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index 5785fb697c..f28a2faa53 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -790,16 +790,6 @@ build_dsdt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
 free_aml_allocator();
 }
 
-typedef
-struct AcpiBuildState {
-/* Copy of table in RAM (for patching). */
-MemoryRegion *table_mr;
-MemoryRegion *rsdp_mr;
-MemoryRegion *linker_mr;
-/* Is table patched? */
-bool patched;
-} AcpiBuildState;
-
 static
 void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
 {
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 1599caa7c5..c8545238c4 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -338,13 +338,14 @@ void pc_madt_cpu_entry(AcpiDeviceIf *adev, int uid,
 }
 
 static void
-build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
+build_madt(GArray *table_data, BIOSLinker *linker,
+   MachineState *ms, AcpiConfiguration *conf)
 {
-MachineClass *mc = MACHINE_GET_CLASS(pcms);
-const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(MACHINE(pcms));
+MachineClass *mc = MACHINE_GET_CLASS(ms);
+const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(ms);
 int madt_start = table_data->len;
-AcpiDeviceIfClass *adevc = ACPI_DEVICE_IF_GET_CLASS(pcms->acpi_dev);
-AcpiDeviceIf *adev = ACPI_DEVICE_IF(pcms->acpi_dev);
+AcpiDeviceIfClass *adevc = ACPI_DEVICE_IF_GET_CLASS(conf->acpi_dev);
+AcpiDeviceIf *adev = ACPI_DEVICE_IF(conf->acpi_dev);
 bool x2apic_mode = false;
 
 AcpiMultipleApicTable *madt;
@@ -370,7 +371,7 @@ build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
 io_apic->address = cpu_to_le32(IO_APIC_DEFAULT_ADDRESS);
 io_apic->interrupt = cpu_to_le32(0);
 
-if (pcms->apic_xrupt_override) {
+if (conf->apic_xrupt_override) {
 intsrcovr = acpi_data_push(table_data, sizeof *intsrcovr);
 intsrcovr->type   = ACPI_APIC_XRUPT_OVERRIDE;
 intsrcovr->length = sizeof(*intsrcovr);
@@ -1786,13 +1787,12 @@ static Aml *build_q35_osc_method(void)
 static void
 build_dsdt(GArray *table_data, BIOSLinker *linker,
AcpiPmInfo *pm, AcpiMiscInfo *misc,
-   Ra