Re: [PATCH] tests: increase timeout per instance of bios-tables-test

2024-07-22 Thread Igor Mammedov
On Mon, 22 Jul 2024 09:35:17 +0200
Thomas Huth  wrote:

> On 16/07/2024 14.59, Igor Mammedov wrote:
> > CI often fails the 'cross-i686-tci' job due to runner slowness.
> > The log shows that the tests were almost complete, with a few remaining,
> > when the bios-tables-test timeout hit:
> > 
> >19/270 qemu:qtest+qtest-aarch64 / qtest-aarch64/bios-tables-test
> >  TIMEOUT610.02s   killed by signal 15 SIGTERM
> >...
> >stderr:
> >TAP parsing error: Too few tests run (expected 8, got 7)
> > 
> > At the same time, the overall job running time is only ~30 min out of the 1 hr allowed.
> > 
> > Increase the per-instance bios-tables-test timeout by 5 min as a fix
> > for slow CI runners.
> > 
> > Signed-off-by: Igor Mammedov 
> > ---
> >   tests/qtest/meson.build | 2 +-
> >   1 file changed, 1 insertion(+), 1 deletion(-)  
> 
> Since we're entering the freeze period this week, I'm going to pick up this 
> patch for my next pull request in the hope that it will help to get this job 
> green again during the freeze period. But in the long run, it would be 
> really good if someone familiar with the bios-tables-test could analyze why 
> the run time increased so much in recent times for this test and provide a 
> better fix for the problem.

Running a UEFI guest under TCG has always been slow (more so for aarch64),
but we keep adding more sub-cases to bios-tables-test (each running
firmware), and that takes extra time.
The overall time to run bios-tables-test naturally increases,
so we have to raise the timeout eventually, regardless of everything else.
(However, no new tests have been added since 9.0.)

In the cases I've looked at, meson timed out at the end of bios-tables-test,
when almost all sub-cases had executed except the last one.
Given the flakiness of the failure and the changes to sub-cases, a few things
could lead to it:
  * a performance regression when executing aarch64 tests (in either QEMU or
the firmware), not much but enough to push the last sub-test past the
10 min boundary more often than not (running a few 9.0 jobs and comparing
them to current master might show whether the guest became slower)
  * CI job runners became slower (we can't control that, if I'm not wrong).
As was suggested in another thread, running fewer tests in parallel
might help. But I won't bet on it, since at the end (by the time we get
close to the timeout) only a few tests are still running, so there might
not be much contention on CPU resources.

> 
>   Thanks,
>Thomas
> 
> 




Re: [PATCH V15 0/7] Add architecture agnostic code to support vCPU Hotplug

2024-07-16 Thread Igor Mammedov
On Tue, 16 Jul 2024 11:43:00 +
Salil Mehta  wrote:

> Hi Igor,
> 
> >  From: Igor Mammedov 
> >  Sent: Tuesday, July 16, 2024 10:52 AM
> >  To: Salil Mehta 
> >  
> >  On Tue, 16 Jul 2024 03:38:29 +
> >  Salil Mehta  wrote:
> >
> >  > Hi Igor,
> >  >
> >  > On 15/07/2024 15:11, Igor Mammedov wrote:  
> >  > > On Mon, 15 Jul 2024 14:19:12 +
> >  > > Salil Mehta  wrote:
> >  > >  
> >  > >>>   From: qemu-arm-bounces+salil.mehta=huawei@nongnu.org On Behalf Of
> >  > >>>   Salil Mehta via
> >  > >>>   Sent: Monday, July 15, 2024 3:14 PM
> >  > >>>   To: Igor Mammedov 
> >  > >>>
> >  > >>>   Hi Igor,
> >  > >>>  
> >  > >>>   >  From: Igor Mammedov 
> >  > >>>   >  Sent: Monday, July 15, 2024 2:55 PM
> >  > >>>   >  To: Salil Mehta 
> >  > >>>   >
> >  > >>>   >  On Sat, 13 Jul 2024 19:25:09 +0100
> >  > >>>   >  Salil Mehta  wrote:
> >  > >>>   >  
> >  > >>>   >  > [Note: References are present at the end, after the revision
> >  > >>>   >  > history]
> >  > >>>   >  >
> >  > >>>   >  > Virtual CPU hotplug support is being added across various
> >  > >>>   >  > architectures [1][3]. This series adds various code bits common
> >  > >>>   >  > across all architectures:
> >  > >>>   >  >
> >  > >>>   >  > 1. vCPU creation and Parking code refactor [Patch 1]
> >  > >>>   >  > 2. Update ACPI GED framework to support vCPU Hotplug [Patch 2,3]
> >  > >>>   >  > 3. ACPI CPUs AML code change [Patch 4,5]
> >  > >>>   >  > 4. Helper functions to support unrealization of CPU objects [Patch 6,7]
> >  > >>>   >
> >  > >>>   >  with patches 1 and 3 fixed, this should be good to go.
> >  > >>>   >
> >  > >>>   >  Salil,
> >  > >>>   >  Can you remind me what happened to the migration part of this?
> >  > >>>   >  Ideally it should be a part of this series, as it should be common
> >  > >>>   >  for everything that uses GED and should be a conditional part of
> >  > >>>   >  GED's VMSTATE.
> >  > >>>   >
> >  > >>>   >  If this series is just a common base and no actual hotplug on top
> >  > >>>   >  of it is merged in this release (provided patch 13 is fixed), I'm
> >  > >>>   >  fine with the migration bits being a separate series on top.
> >  > >>>   >
> >  > >>>   >  However, if some machine would be introducing CPU hotplug in the
> >  > >>>   >  same release, then the migration part should be merged before it
> >  > >>>   >  or be a part of that CPU hotplug series.
> >  > >>>
> >  > >>>   We have tested Live/Pseudo Migration and it seems to work with the
> >  > >>>   changes that are part of the architecture-specific patch-set.
> >  > >
> >  > > have you tested migration from new QEMU to an older one (that
> >  > > doesn't have CPU hotplug built in)?
> >  >
> >  >
> >  > Just curious, how can we detect at the source QEMU what version the
> >  > destination QEMU is running? We require some sort of compatibility
> >  > check, but then this is a problem not specific to CPU hotplug?
> >  
> >  it's usually managed by versioned machine types + compat settings for
> >  machine/device.
> 
> Ok. It looks to be static checking at the source. I'm sure there must be
> a way to do the same dynamically by negotiating features, i.e. only
> enabling the common subset at the destination. I quickly skimmed the
> migration code and I cannot find anything like this being done as of now.
> And this problem looks like Pandora's box to me.
No dynamic negotiation, as far as I'm aware.

We've managed to survive so far with static compat knobs
(with an occasional disaster along the way)

...
> 
> Thanks
> Salil.
> 




Re: [PATCH V16 3/7] hw/acpi: Update ACPI GED framework to support vCPU Hotplug

2024-07-16 Thread Igor Mammedov
On Tue, 16 Jul 2024 12:14:58 +0100
Salil Mehta  wrote:

> ACPI GED (as described in the ACPI 6.4 spec) uses an interrupt listed in the
> _CRS object of GED to notify OSPM about an event. OSPM then demultiplexes the
> notified event by evaluating the ACPI _EVT method to learn the event type. Use
> ACPI GED to also notify the guest kernel about any CPU hot(un)plug events.
> 
> Note, the GED interface is used for many hotplug events, like memory hotplug
> and NVDIMM hotplug, and for non-hotplug events like the system power down
> event. Each of these can be selected using a bit in the 32-bit GED IO
> interface. A bit has been reserved for the CPU hotplug event.
> 
> ACPI CPU hotplug related initialization should only happen if ACPI_CPU_HOTPLUG
> support has been enabled for the particular architecture. Add a
> cpu_hotplug_hw_init() stub to avoid a compilation break.
> 
> Co-developed-by: Keqian Zhu 
> Signed-off-by: Keqian Zhu 
> Signed-off-by: Salil Mehta 
> Reviewed-by: Jonathan Cameron 
> Reviewed-by: Gavin Shan 
> Reviewed-by: David Hildenbrand 
> Reviewed-by: Shaoqin Huang 
> Tested-by: Vishnu Pajjuri 
> Tested-by: Xianglai Li 
> Tested-by: Miguel Luis 
> Reviewed-by: Vishnu Pajjuri 
> Tested-by: Zhao Liu 
> Reviewed-by: Zhao Liu 

I haven't tested it, but it looks fine to me.
It's missing the migration bits, but as long as there are no actual users
in this release, that could be a patch on top later on.

Acked-by: Igor Mammedov 

> ---
>  docs/specs/acpi_hw_reduced_hotplug.rst |  3 +-
>  hw/acpi/acpi-cpu-hotplug-stub.c|  6 
>  hw/acpi/generic_event_device.c | 47 ++
>  include/hw/acpi/generic_event_device.h |  4 +++
>  4 files changed, 59 insertions(+), 1 deletion(-)
> 
> diff --git a/docs/specs/acpi_hw_reduced_hotplug.rst 
> b/docs/specs/acpi_hw_reduced_hotplug.rst
> index 0bd3f9399f..3acd6fcd8b 100644
> --- a/docs/specs/acpi_hw_reduced_hotplug.rst
> +++ b/docs/specs/acpi_hw_reduced_hotplug.rst
> @@ -64,7 +64,8 @@ GED IO interface (4 byte access)
> 0: Memory hotplug event
> 1: System power down event
> 2: NVDIMM hotplug event
> -3-31: Reserved
> +   3: CPU hotplug event
> +4-31: Reserved
>  
>  **write_access:**
>  
> diff --git a/hw/acpi/acpi-cpu-hotplug-stub.c b/hw/acpi/acpi-cpu-hotplug-stub.c
> index 3fc4b14c26..c6c61bb9cd 100644
> --- a/hw/acpi/acpi-cpu-hotplug-stub.c
> +++ b/hw/acpi/acpi-cpu-hotplug-stub.c
> @@ -19,6 +19,12 @@ void legacy_acpi_cpu_hotplug_init(MemoryRegion *parent, 
> Object *owner,
>  return;
>  }
>  
> +void cpu_hotplug_hw_init(MemoryRegion *as, Object *owner,
> + CPUHotplugState *state, hwaddr base_addr)
> +{
> +return;
> +}
> +
>  void acpi_cpu_ospm_status(CPUHotplugState *cpu_st, ACPIOSTInfoList ***list)
>  {
>  return;
> diff --git a/hw/acpi/generic_event_device.c b/hw/acpi/generic_event_device.c
> index 2d6e91b124..4641933a0f 100644
> --- a/hw/acpi/generic_event_device.c
> +++ b/hw/acpi/generic_event_device.c
> @@ -25,6 +25,7 @@ static const uint32_t ged_supported_events[] = {
>  ACPI_GED_MEM_HOTPLUG_EVT,
>  ACPI_GED_PWR_DOWN_EVT,
>  ACPI_GED_NVDIMM_HOTPLUG_EVT,
> +ACPI_GED_CPU_HOTPLUG_EVT,
>  };
>  
>  /*
> @@ -234,6 +235,8 @@ static void acpi_ged_device_plug_cb(HotplugHandler 
> *hotplug_dev,
>  } else {
>  acpi_memory_plug_cb(hotplug_dev, >memhp_state, dev, errp);
>  }
> +} else if (object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
> +acpi_cpu_plug_cb(hotplug_dev, >cpuhp_state, dev, errp);
>  } else {
>  error_setg(errp, "virt: device plug request for unsupported device"
> " type: %s", object_get_typename(OBJECT(dev)));
> @@ -248,6 +251,8 @@ static void acpi_ged_unplug_request_cb(HotplugHandler 
> *hotplug_dev,
>  if ((object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM) &&
> !(object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM {
>  acpi_memory_unplug_request_cb(hotplug_dev, >memhp_state, dev, 
> errp);
> +} else if (object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
> +acpi_cpu_unplug_request_cb(hotplug_dev, >cpuhp_state, dev, errp);
>  } else {
>  error_setg(errp, "acpi: device unplug request for unsupported device"
> " type: %s", object_get_typename(OBJECT(dev)));
> @@ -261,6 +266,8 @@ static void acpi_ged_unplug_cb(HotplugHandler 
> *hotplug_dev,
>  
>  if (object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM)) {
>  acpi_memory_unplug_cb(>memhp_state, dev, errp);
> +} else if (object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
> +ac

Re: [PATCH] tests: increase timeout per instance of bios-tables-test

2024-07-16 Thread Igor Mammedov
On Tue, 16 Jul 2024 09:06:59 -0400
"Michael S. Tsirkin"  wrote:

> On Tue, Jul 16, 2024 at 02:59:30PM +0200, Igor Mammedov wrote:
> > CI often fails the 'cross-i686-tci' job due to runner slowness.
> > The log shows that the tests were almost complete, with a few remaining,
> > when the bios-tables-test timeout hit:
> > 
> >   19/270 qemu:qtest+qtest-aarch64 / qtest-aarch64/bios-tables-test
> > TIMEOUT610.02s   killed by signal 15 SIGTERM
> >   ...
> >   stderr:
> >   TAP parsing error: Too few tests run (expected 8, got 7)
> > 
> > At the same time, the overall job running time is only ~30 min out of the 1 hr allowed.
> > 
> > Increase the per-instance bios-tables-test timeout by 5 min as a fix
> > for slow CI runners.
> > 
> > Signed-off-by: Igor Mammedov   
> 
> We can't just keep increasing the timeout.
in this case I'm following precedent:
https://gitlab.com/qemu-project/qemu/-/commit/a1f5a47b60d119859d974bed4d66db745448aac6
I'm not saying that's the right approach (though it seems to work for now).

> The issue is checking wall time on a busy host,
> isn't it? Let's check CPU time instead.
It likely won't help, as we are still racing with the wall-clock overall
job timeout (which sometimes triggers a failure too; I guess it depends
on the alignment of the stars and the load on the host).

Anyway, I don't have the meson know-how to do more than this patch.

With this patch the 'cross-i686-tci' job passes for me,
but we have msys2-64bit failing atm due to timeouts as well
(seems to be limited to sparc tests).

> 
> > ---
> >  tests/qtest/meson.build | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
> > index 6508bfb1a2..ff9200f882 100644
> > --- a/tests/qtest/meson.build
> > +++ b/tests/qtest/meson.build
> > @@ -1,6 +1,6 @@
> >  slow_qtests = {
> >'aspeed_smc-test': 360,
> > -  'bios-tables-test' : 610,
> > +  'bios-tables-test' : 910,
> >'cdrom-test' : 610,
> >'device-introspect-test' : 720,
> >'migration-test' : 480,
> > -- 
> > 2.43.0  
> 




Re: [PATCH v2 0/9] RISC-V: ACPI: Namespace updates

2024-07-16 Thread Igor Mammedov
On Tue, 16 Jul 2024 16:28:07 +0200
Igor Mammedov  wrote:

> On Tue, 16 Jul 2024 17:56:11 +0530
> Sunil V L  wrote:
> 
> > On Mon, Jul 15, 2024 at 02:43:52PM +0200, Igor Mammedov wrote:  
> > > On Sun, 14 Jul 2024 03:46:36 -0400
> > > "Michael S. Tsirkin"  wrote:
> > > 
> > > > On Fri, Jul 12, 2024 at 03:50:10PM +0200, Igor Mammedov wrote:
> > > > > On Fri, 12 Jul 2024 13:51:04 +0100
> > > > > Daniel P. Berrangé  wrote:
> > > > >   
> > > > > > On Fri, Jul 12, 2024 at 02:43:19PM +0200, Igor Mammedov wrote:  
> > > > > > > On Mon,  8 Jul 2024 17:17:32 +0530
> > > > > > > Sunil V L  wrote:
> > > > > > > 
> > > > > > > > This series adds a few updates to the RISC-V ACPI namespace for
> > > > > > > > the virt platform. Additionally, it has patches to enable ACPI
> > > > > > > > table testing for RISC-V.
> > > > > > > > 
> > > > > > > > 1) PCI Link devices need to be created outside the scope of the 
> > > > > > > > PCI root
> > > > > > > > complex to ensure correct probe ordering by the OS. This 
> > > > > > > > matches the
> > > > > > > > example given in ACPI spec as well.
> > > > > > > > 
> > > > > > > > 2) Add PLIC and APLIC as platform devices as well to ensure 
> > > > > > > > probing
> > > > > > > > order as per BRS spec [1] requirement.
> > > > > > > > 
> > > > > > > > 3) BRS spec requires RISC-V to use new ACPI ID for the generic 
> > > > > > > > UART. So,
> > > > > > > > update the HID of the UART.
> > > > > > > > 
> > > > > > > > 4) Enabled ACPI tables tests for RISC-V which were originally 
> > > > > > > > part of
> > > > > > > > [2] but couldn't get merged due to updates required in the 
> > > > > > > > expected AML
> > > > > > > > files. I think combining those patches with this series makes 
> > > > > > > > it easier
> > > > > > > > to merge since expected AML files are updated.
> > > > > > > > 
> > > > > > > > [1] - https://github.com/riscv-non-isa/riscv-brs
> > > > > > > > [2] - 
> > > > > > > > https://lists.gnu.org/archive/html/qemu-devel/2024-06/msg04734.html
> > > > > > > > 
> > > > > > > 
> > > > > > > btw: CI is not happy about the series, see:
> > > > > > >  https://gitlab.com/imammedo/qemu/-/pipelines/1371119552
> > > > > > > Also, the 'cross-i686-tci' job routinely times out on bios-tables-test,
> > > > > > > but we still keep adding more tests to it.
> > > > > > > We should either bump the timeout to account for slowness or
> > > > > > > disable bios-tables-test for that job.
> > > > > > 
> > > > > > Assuming the test is functionally correct, and not hanging, then
> > > > > > bumping the timeout is the right answer. You can do this in the
> > > > > > meson.build file
> > > > > 
> > > > > I think the test is fine, since once in a while it passes (I guess it
> > > > > depends on the runner host/load)
> > > > > 
> > > > > The overall job timeout is 1 h, but that's not what fails.
> > > > > What I see is that the test aborts after a 10 min timeout;
> > > > > it's likely we hit the boot_sector_test()/acpi_find_rsdp_address_uefi()
> > > > > timeout.
> > > > > That's what we should try to bump.
> > > > > 
> > > > > PS:
> > > > > I've just started the job with a 5 min bump, let's see if it is enough.
> > > > >
> > > > 
> > > > Because we should wait for 5 min of CPU time, not wall time.
> > > > Why don't we do that?
> > > > Something like getrusage should work I think.
> > > > 
> > > 
> > > It turned out to be a meson timeout that's set individually per test file.
> > > I'll send a patch later on.
> > > 
> > Hi Igor,
> > 
> > I am unable to get msys2-64bit test in CI to pass. I tried including
> > your change in meson as well but no luck. I can't guess how enabling
> > bios-tables-test for RISC-V is affecting this particular test. Does this
> > pass for you? 
> > 
> > https://gitlab.com/vlsunil/qemu/-/jobs/7343701148  
> 
> it doesn't pass for me either,
> but bios-tables-test is not among those that timed out,
> so I'd ignore the failure in this case

As in your case, it was the sparc target tests that timed out:
https://gitlab.com/imammedo/qemu/-/jobs/7352989984

CCing the sparc folks as well

> 
> > 
> > Thanks!
> > Sunil
> >   
> 




[PATCH] tests: increase timeout per instance of bios-tables-test

2024-07-16 Thread Igor Mammedov
CI often fails the 'cross-i686-tci' job due to runner slowness.
The log shows that the tests were almost complete, with a few remaining,
when the bios-tables-test timeout hit:

  19/270 qemu:qtest+qtest-aarch64 / qtest-aarch64/bios-tables-test
TIMEOUT610.02s   killed by signal 15 SIGTERM
  ...
  stderr:
  TAP parsing error: Too few tests run (expected 8, got 7)

At the same time, the overall job running time is only ~30 min out of the 1 hr allowed.

Increase the per-instance bios-tables-test timeout by 5 min as a fix
for slow CI runners.

Signed-off-by: Igor Mammedov 
---
 tests/qtest/meson.build | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index 6508bfb1a2..ff9200f882 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -1,6 +1,6 @@
 slow_qtests = {
   'aspeed_smc-test': 360,
-  'bios-tables-test' : 610,
+  'bios-tables-test' : 910,
   'cdrom-test' : 610,
   'device-introspect-test' : 720,
   'migration-test' : 480,
-- 
2.43.0
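For context, the slow_qtests table above is consumed when each qtest binary is registered with meson, roughly like this (a sketch; the exact test() invocation and keyword arguments in QEMU's tests/qtest/meson.build differ in detail):

```meson
# Sketch: the per-test timeout falls back to a small default when a test
# is not listed in slow_qtests (names and default value are illustrative).
test('qtest-aarch64/bios-tables-test', qtest_executable,
     env: qtest_env,
     timeout: slow_qtests.get('bios-tables-test', 60))
```

So bumping the dictionary entry is enough; no other plumbing is needed for the new value to take effect.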




Re: [PATCH v3 4/9] acpi/gpex: Create PCI link devices outside PCI root bridge

2024-07-16 Thread Igor Mammedov
On Mon, 15 Jul 2024 22:41:24 +0530
Sunil V L  wrote:

> Currently, PCI link devices (PNP0C0F) are always created within the
> scope of the PCI root bridge. However, RISC-V needs these link devices
> to be created outside to ensure the probing order in the OS. This
> matches the example given in the ACPI specification [1] as well. Hence,
> create these link devices directly under _SB instead of under the PCI
> root bridge.
> 
> To keep these link device names unique for multiple PCI bridges, change
> the device name from GSIx to LXXY format where XX is the PCI bus number
> and Y is the INTx.
> 
> GPEX is currently used by riscv, aarch64/virt and x86/microvm machines.
> So, this change will alter the DSDT for those systems.
> 
> [1] - ACPI 5.1: 6.2.13.1 Example: Using _PRT to Describe PCI IRQ Routing
> 
> Signed-off-by: Sunil V L 

Acked-by: Igor Mammedov 

> ---
>  hw/pci-host/gpex-acpi.c | 13 +++--
>  1 file changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/hw/pci-host/gpex-acpi.c b/hw/pci-host/gpex-acpi.c
> index f69413ea2c..391fabb8a8 100644
> --- a/hw/pci-host/gpex-acpi.c
> +++ b/hw/pci-host/gpex-acpi.c
> @@ -7,7 +7,8 @@
>  #include "hw/pci/pcie_host.h"
>  #include "hw/acpi/cxl.h"
>  
> -static void acpi_dsdt_add_pci_route_table(Aml *dev, uint32_t irq)
> +static void acpi_dsdt_add_pci_route_table(Aml *dev, uint32_t irq,
> +  Aml *scope, uint8_t bus_num)
>  {
>  Aml *method, *crs;
>  int i, slot_no;
> @@ -20,7 +21,7 @@ static void acpi_dsdt_add_pci_route_table(Aml *dev, 
> uint32_t irq)
>  Aml *pkg = aml_package(4);
>  aml_append(pkg, aml_int((slot_no << 16) | 0x));
>  aml_append(pkg, aml_int(i));
> -aml_append(pkg, aml_name("GSI%d", gsi));
> +aml_append(pkg, aml_name("L%.02X%X", bus_num, gsi));
>  aml_append(pkg, aml_int(0));
>  aml_append(rt_pkg, pkg);
>  }
> @@ -30,7 +31,7 @@ static void acpi_dsdt_add_pci_route_table(Aml *dev, 
> uint32_t irq)
>  /* Create GSI link device */
>  for (i = 0; i < PCI_NUM_PINS; i++) {
>  uint32_t irqs = irq + i;
> -Aml *dev_gsi = aml_device("GSI%d", i);
> +Aml *dev_gsi = aml_device("L%.02X%X", bus_num, i);
>  aml_append(dev_gsi, aml_name_decl("_HID", aml_string("PNP0C0F")));
>  aml_append(dev_gsi, aml_name_decl("_UID", aml_int(i)));
>  crs = aml_resource_template();
> @@ -45,7 +46,7 @@ static void acpi_dsdt_add_pci_route_table(Aml *dev, 
> uint32_t irq)
>  aml_append(dev_gsi, aml_name_decl("_CRS", crs));
>  method = aml_method("_SRS", 1, AML_NOTSERIALIZED);
>  aml_append(dev_gsi, method);
> -aml_append(dev, dev_gsi);
> +aml_append(scope, dev_gsi);
>  }
>  }
>  
> @@ -174,7 +175,7 @@ void acpi_dsdt_add_gpex(Aml *scope, struct GPEXConfig 
> *cfg)
>  aml_append(dev, aml_name_decl("_PXM", aml_int(numa_node)));
>  }
>  
> -acpi_dsdt_add_pci_route_table(dev, cfg->irq);
> +acpi_dsdt_add_pci_route_table(dev, cfg->irq, scope, bus_num);
>  
>  /*
>   * Resources defined for PXBs are composed of the following 
> parts:
> @@ -205,7 +206,7 @@ void acpi_dsdt_add_gpex(Aml *scope, struct GPEXConfig 
> *cfg)
>  aml_append(dev, aml_name_decl("_STR", aml_unicode("PCIe 0 Device")));
>  aml_append(dev, aml_name_decl("_CCA", aml_int(1)));
>  
> -acpi_dsdt_add_pci_route_table(dev, cfg->irq);
> +acpi_dsdt_add_pci_route_table(dev, cfg->irq, scope, 0);
>  
>  method = aml_method("_CBA", 0, AML_NOTSERIALIZED);
>  aml_append(method, aml_return(aml_int(cfg->ecam.base)));




Re: [PATCH v3 2/9] hw/riscv/virt-acpi-build.c: Update the HID of RISC-V UART

2024-07-16 Thread Igor Mammedov
On Mon, 15 Jul 2024 22:41:22 +0530
Sunil V L  wrote:

> The requirement ACPI_060 in the RISC-V BRS specification [1], requires
> NS16550 compatible UART to have the HID RSCV0003. So, update the HID for
> the UART.
> 
> [1] - https://github.com/riscv-non-isa/riscv-brs/commits/main/acpi.adoc
This should point to the text, like in the previous patch, and not to a commit.

>   (commit: 7bfa87e86ad5658283731207dbfc8ab3744d3265)
> 
> Signed-off-by: Sunil V L 
> Acked-by: Alistair Francis 

with above fixed:
Reviewed-by: Igor Mammedov 

> ---
>  hw/riscv/virt-acpi-build.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/hw/riscv/virt-acpi-build.c b/hw/riscv/virt-acpi-build.c
> index 5f5082a35b..36d6a3a412 100644
> --- a/hw/riscv/virt-acpi-build.c
> +++ b/hw/riscv/virt-acpi-build.c
> @@ -170,7 +170,7 @@ acpi_dsdt_add_uart(Aml *scope, const MemMapEntry 
> *uart_memmap,
>  uint32_t uart_irq)
>  {
>  Aml *dev = aml_device("COM0");
> -aml_append(dev, aml_name_decl("_HID", aml_string("PNP0501")));
> +aml_append(dev, aml_name_decl("_HID", aml_string("RSCV0003")));
>  aml_append(dev, aml_name_decl("_UID", aml_int(0)));
>  
>  Aml *crs = aml_resource_template();




Re: [PATCH v3 1/9] hw/riscv/virt-acpi-build.c: Add namespace devices for PLIC and APLIC

2024-07-16 Thread Igor Mammedov
On Mon, 15 Jul 2024 22:41:21 +0530
Sunil V L  wrote:

> As per the requirement ACPI_080 in the RISC-V Boot and Runtime Services
> (BRS) specification [1],  PLIC and APLIC should be in namespace as well.
> So, add them using the defined HID.
> 
> [1] - https://github.com/riscv-non-isa/riscv-brs/blob/main/acpi.adoc
>   (commit : 241575b3189c5d9e60b5e55e78cf0443092713bf)

In the spec, the links 'See RVI ACPI IDs' and, right below it, 'additional
guidance' lead nowhere, hence do not clarify anything.

> 
> Signed-off-by: Sunil V L 
> Acked-by: Alistair Francis 

Acked-by: Igor Mammedov 

> ---
>  hw/riscv/virt-acpi-build.c | 32 
>  1 file changed, 32 insertions(+)
> 
> diff --git a/hw/riscv/virt-acpi-build.c b/hw/riscv/virt-acpi-build.c
> index 0925528160..5f5082a35b 100644
> --- a/hw/riscv/virt-acpi-build.c
> +++ b/hw/riscv/virt-acpi-build.c
> @@ -141,6 +141,30 @@ static void acpi_dsdt_add_cpus(Aml *scope, 
> RISCVVirtState *s)
>  }
>  }
>  
> +static void acpi_dsdt_add_plic_aplic(Aml *scope, uint8_t socket_count,
> + uint64_t mmio_base, uint64_t mmio_size,
> + const char *hid)
> +{
> +uint64_t plic_aplic_addr;
> +uint32_t gsi_base;
> +uint8_t  socket;
> +
> +for (socket = 0; socket < socket_count; socket++) {
> +plic_aplic_addr = mmio_base + mmio_size * socket;
> +gsi_base = VIRT_IRQCHIP_NUM_SOURCES * socket;
> +Aml *dev = aml_device("IC%.02X", socket);
> +aml_append(dev, aml_name_decl("_HID", aml_string("%s", hid)));
> +aml_append(dev, aml_name_decl("_UID", aml_int(socket)));
> +aml_append(dev, aml_name_decl("_GSB", aml_int(gsi_base)));
> +
> +Aml *crs = aml_resource_template();
> +aml_append(crs, aml_memory32_fixed(plic_aplic_addr, mmio_size,
> +   AML_READ_WRITE));
> +aml_append(dev, aml_name_decl("_CRS", crs));
> +aml_append(scope, dev);
> +}
> +}
> +
>  static void
>  acpi_dsdt_add_uart(Aml *scope, const MemMapEntry *uart_memmap,
>  uint32_t uart_irq)
> @@ -411,6 +435,14 @@ static void build_dsdt(GArray *table_data,
>  
>  socket_count = riscv_socket_count(ms);
>  
> +if (s->aia_type == VIRT_AIA_TYPE_NONE) {
> +acpi_dsdt_add_plic_aplic(scope, socket_count, memmap[VIRT_PLIC].base,
> + memmap[VIRT_PLIC].size, "RSCV0001");
> +} else {
> +acpi_dsdt_add_plic_aplic(scope, socket_count, 
> memmap[VIRT_APLIC_S].base,
> + memmap[VIRT_APLIC_S].size, "RSCV0002");
> +}
> +
>  acpi_dsdt_add_uart(scope, [VIRT_UART0], UART0_IRQ);
>  
>  if (socket_count == 1) {




Re: [PATCH V15 0/7] Add architecture agnostic code to support vCPU Hotplug

2024-07-16 Thread Igor Mammedov
On Tue, 16 Jul 2024 03:38:29 +
Salil Mehta  wrote:

> Hi Igor,
> 
> On 15/07/2024 15:11, Igor Mammedov wrote:
> > On Mon, 15 Jul 2024 14:19:12 +
> > Salil Mehta  wrote:
> >   
> >>>   From: qemu-arm-bounces+salil.mehta=huawei@nongnu.org On Behalf Of
> >>>   Salil Mehta via
> >>>   Sent: Monday, July 15, 2024 3:14 PM
> >>>   To: Igor Mammedov 
> >>>   
> >>>   Hi Igor,
> >>>   
> >>>   >  From: Igor Mammedov 
> >>>   >  Sent: Monday, July 15, 2024 2:55 PM
> >>>   >  To: Salil Mehta 
> >>>   >
> >>>   >  On Sat, 13 Jul 2024 19:25:09 +0100
> >>>   >  Salil Mehta  wrote:
> >>>   >  
> >>>   >  > [Note: References are present at the end, after the revision
> >>>   >  > history]
> >>>   >  >
> >>>   >  > Virtual CPU hotplug support is being added across various
> >>>   >  > architectures [1][3]. This series adds various code bits common
> >>>   >  > across all architectures:
> >>>   >  >
> >>>   >  > 1. vCPU creation and Parking code refactor [Patch 1]
> >>>   >  > 2. Update ACPI GED framework to support vCPU Hotplug [Patch 2,3]
> >>>   >  > 3. ACPI CPUs AML code change [Patch 4,5]
> >>>   >  > 4. Helper functions to support unrealization of CPU objects [Patch 6,7]
> >>>   >
> >>>   >  with patches 1 and 3 fixed, this should be good to go.
> >>>   >
> >>>   >  Salil,
> >>>   >  Can you remind me what happened to the migration part of this?
> >>>   >  Ideally it should be a part of this series, as it should be common
> >>>   >  for everything that uses GED and should be a conditional part of
> >>>   >  GED's VMSTATE.
> >>>   >
> >>>   >  If this series is just a common base and no actual hotplug on top
> >>>   >  of it is merged in this release (provided patch 13 is fixed), I'm
> >>>   >  fine with the migration bits being a separate series on top.
> >>>   >
> >>>   >  However, if some machine would be introducing CPU hotplug in the
> >>>   >  same release, then the migration part should be merged before it
> >>>   >  or be a part of that CPU hotplug series.
> >>>
> >>>   We have tested Live/Pseudo Migration and it seems to work with the
> >>>   changes that are part of the architecture-specific patch-set.
> > 
> > have you tested migration from new QEMU to an older one (that doesn't
> > have CPU hotplug built in)?
> 
> 
> Just curious, how can we detect at the source QEMU what version the
> destination QEMU is running? We require some sort of compatibility check,
> but then this is a problem not specific to CPU hotplug?

it's usually managed by versioned machine types + compat settings for
machine/device.

> We are not initializing the CPU Hotplug VMSD in this patch-set. I was
> wondering, then, how a new machine could attempt to migrate VMSD state
> from new QEMU to older QEMU.

If I'm not mistaken, without a VMSD it shouldn't explode, since the CPUHP
code shouldn't create memory regions that are migrated.
(If I recall correctly, MMIO regions don't go into the migration stream.)

> ARM vCPU Hotplug patches will be on top of this later in next Qemu cycle.
then it's fine to introduce the VMSD later on, just make sure others
adding cpu hotplug elsewhere are also aware of it and pick up the same patch.

> 
> 
> >   
> >>>   
> >>>   Ampere: https://lore.kernel.org/all/e17e28ac-28c7-496f-b212-
> >>>   2c9b552db...@amperemail.onmicrosoft.com/
> >>>   Oracle: https://lore.kernel.org/all/46D74D30-EE54-4AD2-8F0E-
> >>>   ba5627faa...@oracle.com/
> >>>   
> >>>   
> >>>   For ARM, please check below patch part of RFC V3 for changes related to
> >>>   migration:
> >>>   https://lore.kernel.org/qemu-devel/20240613233639.202896-15-
> >>>   salil.me...@huawei.com/  
> >>
> >>
> >> Do you wish to move the below change into this patch-set and make it
> >> common to all instead?  
> > 
> > it would be best to include this here.
> >   
> >>
> >>
> >> diff --git a/hw/acpi/generic_event_device.c 
> >> b/hw/acpi/generic_event_device.c
> >> index 63226b0040..e92ce07955 100644
> >> --- a/hw/acpi/generic_event_device.c
> >> +++ b/hw/acpi/generic_event_device.c

Re: [PATCH V2 01/11] machine: alloc-anon option

2024-07-16 Thread Igor Mammedov
On Sun, 30 Jun 2024 12:40:24 -0700
Steve Sistare  wrote:

> Allocate anonymous memory using mmap MAP_ANON or memfd_create depending
> on the value of the anon-alloc machine property.  This affects
> memory-backend-ram objects, guest RAM created with the global -m option
> but without an associated memory-backend object and without the -mem-path
> option
nowadays, all machines have been converted to use a memory backend for VM RAM,
so the -m option implicitly creates a memory-backend object,
which will be MEMORY_BACKEND_FILE if -mem-path is present,
or MEMORY_BACKEND_RAM otherwise.
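
Concretely, on such QEMU versions the following two invocations end up with
equivalently backed guest RAM (illustrative command lines; the backend id
matches the machine's default RAM id, e.g. pc.ram for the PC machine — treat
the exact names as assumptions, not something taken from the patch):

```sh
qemu-system-x86_64 -M pc -m 4G

# is effectively the same as spelling the implicit backend out:
qemu-system-x86_64 -M pc \
    -object memory-backend-ram,id=pc.ram,size=4G \
    -machine memory-backend=pc.ram
```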


> To access the same memory in the old and new QEMU processes, the memory
> must be mapped shared.  Therefore, the implementation always sets

> RAM_SHARED if alloc-anon=memfd, except for memory-backend-ram, where the
> user must explicitly specify the share option.  In lieu of defining a new
so the statement at the top that memory-backend-ram is affected is not
really valid? 

> RAM flag, at the lowest level the implementation uses RAM_SHARED with fd=-1
> as the condition for calling memfd_create.

In general I dislike adding yet another option that affects
guest RAM allocation (memory-backends should be sufficient).

However I do see that you need memfd for device memory (vram, roms, ...).
Can we just use memfd/shared unconditionally for those and
avoid introducing a new confusing option?
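
At the lowest level the two choices being discussed reduce to the following
(a minimal Linux-only sketch under stated assumptions — the function name,
signature, and error handling are illustrative, not QEMU's implementation):

```c
#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Illustrative sketch of the two anonymous-RAM allocation paths.
 * use_memfd != 0 corresponds to anon-alloc=memfd: the region is backed
 * by a memfd and mapped MAP_SHARED, so another process (e.g. the new
 * QEMU in a live update) can map the same pages through the fd.
 * Otherwise it is plain private anonymous memory with fd = -1. */
void *alloc_anon_ram(size_t size, int use_memfd, int *fd_out)
{
    if (use_memfd) {
        int fd = memfd_create("guest-ram", 0);

        if (fd < 0 || ftruncate(fd, (off_t)size) < 0) {
            return MAP_FAILED;
        }
        *fd_out = fd;
        return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    }
    /* anon-alloc=mmap: MAP_ANON with no backing fd */
    *fd_out = -1;
    return mmap(NULL, size, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
}
```

The difference that matters for live update is that the memfd-backed mapping
has a file descriptor that can be passed to and remapped by the new process.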


> Signed-off-by: Steve Sistare 
> ---
>  hw/core/machine.c   | 24 
>  include/hw/boards.h |  1 +
>  qapi/machine.json   | 14 ++
>  qemu-options.hx | 13 +
>  system/memory.c | 12 +---
>  system/physmem.c| 38 +-
>  system/trace-events |  3 +++
>  7 files changed, 101 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/core/machine.c b/hw/core/machine.c
> index 655d75c..7ca2ad0 100644
> --- a/hw/core/machine.c
> +++ b/hw/core/machine.c
> @@ -454,6 +454,20 @@ static void machine_set_mem_merge(Object *obj, bool 
> value, Error **errp)
>  ms->mem_merge = value;
>  }
>  
> +static int machine_get_anon_alloc(Object *obj, Error **errp)
> +{
> +MachineState *ms = MACHINE(obj);
> +
> +return ms->anon_alloc;
> +}
> +
> +static void machine_set_anon_alloc(Object *obj, int value, Error **errp)
> +{
> +MachineState *ms = MACHINE(obj);
> +
> +ms->anon_alloc = value;
> +}
> +
>  static bool machine_get_usb(Object *obj, Error **errp)
>  {
>  MachineState *ms = MACHINE(obj);
> @@ -1066,6 +1080,11 @@ static void machine_class_init(ObjectClass *oc, void 
> *data)
>  object_class_property_set_description(oc, "mem-merge",
>  "Enable/disable memory merge support");
>  
> +object_class_property_add_enum(oc, "anon-alloc", "AnonAllocOption",
> +   _lookup,
> +   machine_get_anon_alloc,
> +   machine_set_anon_alloc);
> +
>  object_class_property_add_bool(oc, "usb",
>  machine_get_usb, machine_set_usb);
>  object_class_property_set_description(oc, "usb",
> @@ -1416,6 +1435,11 @@ static bool create_default_memdev(MachineState *ms, 
> const char *path, Error **er
>  if (!object_property_set_int(obj, "size", ms->ram_size, errp)) {
>  goto out;
>  }
> +if (!object_property_set_bool(obj, "share",
> +  ms->anon_alloc == ANON_ALLOC_OPTION_MEMFD,
> +  errp)) {
> +goto out;
> +}
>  object_property_add_child(object_get_objects_root(), mc->default_ram_id,
>obj);
>  /* Ensure backend's memory region name is equal to mc->default_ram_id */
> diff --git a/include/hw/boards.h b/include/hw/boards.h
> index 73ad319..77f16ad 100644
> --- a/include/hw/boards.h
> +++ b/include/hw/boards.h
> @@ -383,6 +383,7 @@ struct MachineState {
>  bool enable_graphics;
>  ConfidentialGuestSupport *cgs;
>  HostMemoryBackend *memdev;
> +AnonAllocOption anon_alloc;
>  /*
>   * convenience alias to ram_memdev_id backend memory region
>   * or to numa container memory region
> diff --git a/qapi/machine.json b/qapi/machine.json
> index 2fd3e9c..9173953 100644
> --- a/qapi/machine.json
> +++ b/qapi/machine.json
> @@ -1881,3 +1881,17 @@
>  { 'command': 'x-query-interrupt-controllers',
>'returns': 'HumanReadableText',
>'features': [ 'unstable' ]}
> +
> +##
> +# @AnonAllocOption:
> +#
> +# An enumeration of the options for allocating anonymous guest memory.
> +#
> +# @mmap: allocate using mmap MAP_ANON
> +#
> +# @memfd: allocate using memfd_create
> +#
> +# Since: 9.1
> +##
> +{ 'enum': 'AnonAllocOption',
> +  'data': [ 'mmap', 'memfd' ] }
> diff --git a/qemu-options.hx b/qemu-options.hx
> index 8ca7f34..595b693 100644
> --- a/qemu-options.hx
> +++ b/qemu-options.hx
> @@ -38,6 +38,7 @@ DEF("machine", HAS_ARG, QEMU_OPTION_machine, \
>  "nvdimm=on|off controls NVDIMM support 

Re: [PATCH V15 0/7] Add architecture agnostic code to support vCPU Hotplug

2024-07-15 Thread Igor Mammedov
On Mon, 15 Jul 2024 14:19:12 +
Salil Mehta  wrote:

> >  From: qemu-arm-bounces+salil.mehta=huawei@nongnu.org On Behalf Of Salil
> >  Mehta via
> >  Sent: Monday, July 15, 2024 3:14 PM
> >  To: Igor Mammedov 
> >  
> >  Hi Igor,
> >
> >  >  From: Igor Mammedov 
> >  >  Sent: Monday, July 15, 2024 2:55 PM
> >  >  To: Salil Mehta 
> >  >
> >  >  On Sat, 13 Jul 2024 19:25:09 +0100
> >  >  Salil Mehta  wrote:
> >  >  
> >  >  > [Note: References are present at the last after the revision  
> >  > history]  >  > Virtual CPU hotplug support is being added across
> >  > various architectures  [1][3].  
> >  >  > This series adds various code bits common across all architectures:
> >  >  >
> >  >  > 1. vCPU creation and Parking code refactor [Patch 1] 2. Update ACPI
> >  > > GED framework to support vCPU Hotplug [Patch 2,3] 3. ACPI CPUs AML
> >  > > code change [Patch 4,5] 4. Helper functions to support unrealization
> >  > > of CPU objects [Patch 6,7]  
> >  >
> >  >  with patch 1 and 3 fixed should be good to go.
> >  >
> >  >  Salil,
> >  >  Can you remind me what happened to migration part of this?
> >  >  Ideally it should be a part of this series as it should be common
> >  > for  everything that uses GED and should be a conditional part of
> >  > GED's  VMSTATE.
> >  >
> >  >  If this series is just a common base and no actual hotplug on top of
> >  > it is  merged in this release (provided patch 13 is fixed), I'm fine
> >  > with migration  bits being a separate series on top.
> >  >
> >  >  However if some machine would be introducing cpu hotplug in the same
> >  > release, then the migration part should be merged before it or be a
> >  > part of that cpu hotplug series.  
> >  
> >  We have tested Live/Pseudo Migration and it seems to work with the
> >  changes that are part of the architecture specific patch-set.

have you tested migration from a new QEMU to an older one (that doesn't have 
cpu hotplug built in)?

> >  
> >  Ampere: https://lore.kernel.org/all/e17e28ac-28c7-496f-b212-
> >  2c9b552db...@amperemail.onmicrosoft.com/
> >  Oracle: https://lore.kernel.org/all/46D74D30-EE54-4AD2-8F0E-
> >  ba5627faa...@oracle.com/
> >  
> >  
> >  For ARM, please check below patch part of RFC V3 for changes related to
> >  migration:
> >  https://lore.kernel.org/qemu-devel/20240613233639.202896-15-
> >  salil.me...@huawei.com/  
> 
> 
> Do you wish to move the below change into this patch-set and make it
> common to all instead?

it would be best to include this here.

> 
> 
> diff --git a/hw/acpi/generic_event_device.c b/hw/acpi/generic_event_device.c
> index 63226b0040..e92ce07955 100644
> --- a/hw/acpi/generic_event_device.c
> +++ b/hw/acpi/generic_event_device.c
> @@ -333,6 +333,16 @@ static const VMStateDescription vmstate_memhp_state = {
>  }
>  };
>  
> +static const VMStateDescription vmstate_cpuhp_state = {
> +.name = "acpi-ged/cpuhp",
> +.version_id = 1,
> +.minimum_version_id = 1,
> +.fields  = (VMStateField[]) {
> +VMSTATE_CPU_HOTPLUG(cpuhp_state, AcpiGedState),
> +VMSTATE_END_OF_LIST()
> +}
> +};
> +
>  static const VMStateDescription vmstate_ged_state = {
>  .name = "acpi-ged-state",
>  .version_id = 1,
> @@ -381,6 +391,7 @@ static const VMStateDescription vmstate_acpi_ged = {
>  },
>  .subsections = (const VMStateDescription * const []) {
>  _memhp_state,
> +_cpuhp_state,

I'm not a migration guru, but I believe this should be conditional
to avoid breaking cross-version migration.
See commit 679dd1a957d, the '.needed = vmstate_test_use_cpuhp' part.

CCing Peter

>  _ghes_state,
>  NULL
>  }
> 
> Maybe I can add a separate patch for this at the end? Please confirm.
> 
> Thanks
> Salil.
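
The conditional subsection Igor refers to follows the pattern from commit
679dd1a957d: a '.needed' callback attached to vmstate_cpuhp_state so the
subsection is emitted only when CPU hotplug is actually in use. A
self-contained sketch of the gating logic (the stub types and field names
are assumptions for illustration, not QEMU's real structures):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-ins for QEMU's types, only to illustrate the
 * subsection-gating pattern; the real definitions live in
 * include/migration/vmstate.h and hw/acpi/generic_event_device.h. */
typedef struct {
    unsigned dev_count;   /* nonzero once CPU hotplug is configured */
} CPUHotplugState;

typedef struct {
    CPUHotplugState cpuhp_state;
} AcpiGedState;

/* The .needed callback: when it returns false, the whole
 * "acpi-ged/cpuhp" subsection is left out of the outgoing migration
 * stream, so an older destination QEMU that has never heard of the
 * subsection can still accept the migration. */
static bool vmstate_test_use_cpuhp(void *opaque)
{
    AcpiGedState *s = opaque;

    return s->cpuhp_state.dev_count != 0;
}

/* Small wrapper to exercise the gating logic directly. */
bool ged_migrates_cpuhp(unsigned dev_count)
{
    AcpiGedState s = { .cpuhp_state = { .dev_count = dev_count } };

    return vmstate_test_use_cpuhp(&s);
}
```

In the real patch this would amount to adding '.needed = vmstate_test_use_cpuhp'
to vmstate_cpuhp_state, with the predicate checking whatever marks CPU hotplug
as enabled on the GED device.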




Re: [PATCH v5 10/13] hw/acpi: Generic Port Affinity Structure support

2024-07-15 Thread Igor Mammedov
On Fri, 12 Jul 2024 12:08:14 +0100
Jonathan Cameron  wrote:

> These are very similar to the recently added Generic Initiators
> but instead of representing an initiator of memory traffic they
> represent an edge point beyond which may lie either targets or
> initiators.  Here we add these ports such that they may
> be targets of hmat_lb records to describe the latency and
> bandwidth from host side initiators to the port.  A discoverable
> mechanism such as UEFI CDAT read from CXL devices and switches
> is used to discover the remainder of the path, and the OS can build
> up full latency and bandwidth numbers as need for work and data
> placement decisions.
> 
> Acked-by: Markus Armbruster 
> Tested-by: "Huang, Ying" 
> Signed-off-by: Jonathan Cameron 

ACPI tables generation LGTM
As for the rest, my review is mostly perfunctory.

> ---
> v5: Push the definition of TYPE_ACPI_GENERIC_PORT down into the
> c file (similar to TYPE_ACPI_GENERIC_INITIATOR in earlier patch)
> ---
>  qapi/qom.json   |  34 +
>  include/hw/acpi/aml-build.h |   4 +
>  include/hw/acpi/pci.h   |   2 +-
>  include/hw/pci/pci_bridge.h |   1 +
>  hw/acpi/aml-build.c |  40 ++
>  hw/acpi/pci.c   | 112 +++-
>  hw/arm/virt-acpi-build.c|   2 +-
>  hw/i386/acpi-build.c|   2 +-
>  hw/pci-bridge/pci_expander_bridge.c |   1 -
>  9 files changed, 193 insertions(+), 5 deletions(-)
> 
> diff --git a/qapi/qom.json b/qapi/qom.json
> index 8e75a419c3..b97c031b73 100644
> --- a/qapi/qom.json
> +++ b/qapi/qom.json
> @@ -838,6 +838,38 @@
>'data': { 'pci-dev': 'str',
>  'node': 'uint32' } }
>  
> +##
> +# @AcpiGenericPortProperties:
> +#
> +# Properties for acpi-generic-port objects.
> +#
> +# @pci-bus: QOM path of the PCI bus of the hostbridge associated with
> +# this SRAT Generic Port Affinity Structure.  This is the same as
> +# the bus parameter for the root ports attached to this host
> +# bridge.  The resulting SRAT Generic Port Affinity Structure will
> +# refer to the ACPI object in DSDT that represents the host bridge
> +# (e.g.  ACPI0016 for CXL host bridges).  See ACPI 6.5 Section
> +# 5.2.16.7 for more information.
> +#

> +# @node: Similar to a NUMA node ID, but instead of providing a
> +# reference point used for defining NUMA distances and access
> +# characteristics to memory or from an initiator (e.g. CPU), this
> +# node defines the boundary point between non-discoverable system
> +# buses which must be described by firmware, and a discoverable
> +# bus.  NUMA distances and access characteristics are defined to
> +# and from that point.  For system software to establish full
> +# initiator to target characteristics this information must be
> +# combined with information retrieved from the discoverable part
> +# of the path.  An example would use CDAT (see UEFI.org)
> +# information read from devices and switches in conjunction with
> +# link characteristics read from PCIe Configuration space.

you lost me here (even reading this several times doesn't help).
Perhaps I lack specific domain knowledge, but is there a way to make it
more comprehensible for a layman?

> +#
> +# Since: 9.1
> +##
> +{ 'struct': 'AcpiGenericPortProperties',
> +  'data': { 'pci-bus': 'str',
> +'node': 'uint32' } }
> +
>  ##
>  # @RngProperties:
>  #
> @@ -1031,6 +1063,7 @@
>  { 'enum': 'ObjectType',
>'data': [
>  'acpi-generic-initiator',
> +'acpi-generic-port',
>  'authz-list',
>  'authz-listfile',
>  'authz-pam',
> @@ -1106,6 +1139,7 @@
>'discriminator': 'qom-type',
>'data': {
>'acpi-generic-initiator': 'AcpiGenericInitiatorProperties',
> +  'acpi-generic-port':  'AcpiGenericPortProperties',
>'authz-list': 'AuthZListProperties',
>'authz-listfile': 'AuthZListFileProperties',
>'authz-pam':  'AuthZPAMProperties',
> diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> index 33eef85791..9e30c735bb 100644
> --- a/include/hw/acpi/aml-build.h
> +++ b/include/hw/acpi/aml-build.h
> @@ -490,6 +490,10 @@ void build_srat_pci_generic_initiator(GArray 
> *table_data, int node,
>uint16_t segment, uint8_t bus,
>uint8_t devfn);
>  
> +void build_srat_acpi_generic_port(GArray *table_data, int node,
> +  const char *hid,
> +  uint32_t uid);
> +
>  void build_slit(GArray *table_data, BIOSLinker *linker, MachineState *ms,
>  const char *oem_id, const char *oem_table_id);
>  
> diff --git a/include/hw/acpi/pci.h b/include/hw/acpi/pci.h
> index 3015a8171c..6359d574fd 100644
> --- a/include/hw/acpi/pci.h
> +++ b/include/hw/acpi/pci.h
> @@ -41,6 

Re: [PATCH v5 09/13] hw/pci-host/gpex-acpi: Use acpi_uid property.

2024-07-15 Thread Igor Mammedov
On Fri, 12 Jul 2024 12:08:13 +0100
Jonathan Cameron  wrote:

> Reduce the direct use of PCI internals inside ACPI table creation.
> 
> Suggested-by: Igor Mammedov 
> Tested-by: "Huang, Ying" 
> Signed-off-by: Jonathan Cameron 

Reviewed-by: Igor Mammedov 

> ---
> v5: Similar to previous, use bus number, not uid in ACPI device naming so
> that uid can be 32 bits and we don't need checks to ensure it is only
> 8 bits.  Not change to the actual numbers as the UID == bus_num
> ---
>  hw/pci-host/gpex-acpi.c | 5 -
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/pci-host/gpex-acpi.c b/hw/pci-host/gpex-acpi.c
> index f69413ea2c..f271817ef5 100644
> --- a/hw/pci-host/gpex-acpi.c
> +++ b/hw/pci-host/gpex-acpi.c
> @@ -140,6 +140,7 @@ void acpi_dsdt_add_gpex(Aml *scope, struct GPEXConfig 
> *cfg)
>  QLIST_FOREACH(bus, >child, sibling) {
>  uint8_t bus_num = pci_bus_num(bus);
>  uint8_t numa_node = pci_bus_numa_node(bus);
> +uint32_t uid;
>  bool is_cxl = pci_bus_is_cxl(bus);
>  
>  if (!pci_bus_is_root(bus)) {
> @@ -155,6 +156,8 @@ void acpi_dsdt_add_gpex(Aml *scope, struct GPEXConfig 
> *cfg)
>  nr_pcie_buses = bus_num;
>  }
>  
> +uid = object_property_get_uint(OBJECT(bus), "acpi_uid",
> +   _fatal);
>  dev = aml_device("PC%.02X", bus_num);
>  if (is_cxl) {
>  struct Aml *pkg = aml_package(2);
> @@ -167,7 +170,7 @@ void acpi_dsdt_add_gpex(Aml *scope, struct GPEXConfig 
> *cfg)
>  aml_append(dev, aml_name_decl("_CID", 
> aml_string("PNP0A03")));
>  }
>  aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
> -aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
> +aml_append(dev, aml_name_decl("_UID", aml_int(uid)));
>  aml_append(dev, aml_name_decl("_STR", aml_unicode("pxb 
> Device")));
>  aml_append(dev, aml_name_decl("_CCA", aml_int(1)));
>  if (numa_node != NUMA_NODE_UNASSIGNED) {




Re: [PATCH v5 08/13] hw/i386/acpi: Use TYPE_PXB_BUS property acpi_uid for DSDT

2024-07-15 Thread Igor Mammedov
On Fri, 12 Jul 2024 12:08:12 +0100
Jonathan Cameron  wrote:

> Rather than relying on PCI internals, use the new acpi_property
> to obtain the ACPI _UID values.  These are still the same
> as the PCI Bus numbers so no functional change.
> 
> Suggested-by: Igor Mammedov 
> Tested-by: "Huang, Ying" 
> Signed-off-by: Jonathan Cameron 

Reviewed-by: Igor Mammedov 

> ---
> v5: Leave the device naming as using bus_num so that we can
> relax assumption of the UID being only 8 bits (it is but
> we don't need to assume that)
> ---
>  hw/i386/acpi-build.c | 5 -
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index ee92783836..2eaa4c9203 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -1550,6 +1550,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>  QLIST_FOREACH(bus, >child, sibling) {
>  uint8_t bus_num = pci_bus_num(bus);
>  uint8_t numa_node = pci_bus_numa_node(bus);
> +uint32_t uid;
>  
>  /* look only for expander root buses */
>  if (!pci_bus_is_root(bus)) {
> @@ -1560,6 +1561,8 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>  root_bus_limit = bus_num - 1;
>  }
>  
> +uid = object_property_get_uint(OBJECT(bus), "acpi_uid",
> +   _fatal);
>  scope = aml_scope("\\_SB");
>  
>  if (pci_bus_is_cxl(bus)) {
> @@ -1567,7 +1570,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>  } else {
>  dev = aml_device("PC%.02X", bus_num);
>  }
> -aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
> +aml_append(dev, aml_name_decl("_UID", aml_int(uid)));
>  aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
>  if (pci_bus_is_cxl(bus)) {
>  struct Aml *aml_pkg = aml_package(2);




Re: [PATCH v5 07/13] hw/pci-bridge: Add acpi_uid property to TYPE_PXB_BUS

2024-07-15 Thread Igor Mammedov
On Fri, 12 Jul 2024 12:08:11 +0100
Jonathan Cameron  wrote:

> Enable ACPI table creation for PCI Expander Bridges to be independent
> of PCI internals.  Note that the UID is currently the PCI bus number.
> This is motivated by the forthcoming ACPI Generic Port SRAT entries
> which can be made completely independent of PCI internals.
> 
> Suggested-by: Igor Mammedov 
> Tested-by: "Huang, Ying" 
> Signed-off-by: Jonathan Cameron 
> 

Reviewed-by: Igor Mammedov 

> ---
> v5: Add missing property description.
> ---
>  hw/pci-bridge/pci_expander_bridge.c | 13 +
>  1 file changed, 13 insertions(+)
> 
> diff --git a/hw/pci-bridge/pci_expander_bridge.c 
> b/hw/pci-bridge/pci_expander_bridge.c
> index 0411ad31ea..b94cb85cfb 100644
> --- a/hw/pci-bridge/pci_expander_bridge.c
> +++ b/hw/pci-bridge/pci_expander_bridge.c
> @@ -85,12 +85,25 @@ static uint16_t pxb_bus_numa_node(PCIBus *bus)
>  return pxb->numa_node;
>  }
>  
> +static void prop_pxb_uid_get(Object *obj, Visitor *v, const char *name,
> + void *opaque, Error **errp)
> +{
> +uint32_t uid = pci_bus_num(PCI_BUS(obj));
> +
> +visit_type_uint32(v, name, , errp);
> +}
> +
>  static void pxb_bus_class_init(ObjectClass *class, void *data)
>  {
>  PCIBusClass *pbc = PCI_BUS_CLASS(class);
>  
>  pbc->bus_num = pxb_bus_num;
>  pbc->numa_node = pxb_bus_numa_node;
> +
> +object_class_property_add(class, "acpi_uid", "uint32",
> +  prop_pxb_uid_get, NULL, NULL, NULL);
> +object_class_property_set_description(class, "acpi_uid",
> +"ACPI Unique ID used to distinguish this PCI Host Bridge / 
> ACPI00016");
>  }
>  
>  static const TypeInfo pxb_bus_info = {




Re: [PATCH v5 06/13] acpi/pci: Move Generic Initiator object handling into acpi/pci.*

2024-07-15 Thread Igor Mammedov
On Fri, 12 Jul 2024 12:08:10 +0100
Jonathan Cameron  wrote:

> Whilst ACPI SRAT Generic Initiator Affinity Structures are able to refer to
> both PCI and ACPI Device Handles, the QEMU implementation only implements
> the PCI Device Handle case.  For now move the code into the existing
> hw/acpi/pci.c file and header.  If support for ACPI Device Handles is
> added in the future, perhaps this will be moved again.
> 
> Also push the struct AcpiGenericInitiator down into the c file as not
> used outside pci.c.
> 
> Suggested-by: Igor Mammedov 
> Tested-by: "Huang, Ying" 
> Signed-off-by: Jonathan Cameron 

Reviewed-by: Igor Mammedov 

> 
> ---
> v5: Carry forward changes from previous patch.
> Move the TYPE_ACPI_GENERIC_INTIATOR define down into the c file
> along with include qom/object_interfaces.h
> ---
>  include/hw/acpi/acpi_generic_initiator.h |  24 -
>  include/hw/acpi/pci.h|   3 +
>  hw/acpi/acpi_generic_initiator.c | 120 --
>  hw/acpi/pci.c| 124 +++
>  hw/arm/virt-acpi-build.c |   1 -
>  hw/i386/acpi-build.c |   1 -
>  hw/acpi/meson.build  |   1 -
>  7 files changed, 127 insertions(+), 147 deletions(-)
> 
> diff --git a/include/hw/acpi/acpi_generic_initiator.h 
> b/include/hw/acpi/acpi_generic_initiator.h
> deleted file mode 100644
> index 7b98676713..00
> --- a/include/hw/acpi/acpi_generic_initiator.h
> +++ /dev/null
> @@ -1,24 +0,0 @@
> -// SPDX-License-Identifier: GPL-2.0-only
> -/*
> - * Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved
> - */
> -
> -#ifndef ACPI_GENERIC_INITIATOR_H
> -#define ACPI_GENERIC_INITIATOR_H
> -
> -#include "qom/object_interfaces.h"
> -
> -#define TYPE_ACPI_GENERIC_INITIATOR "acpi-generic-initiator"
> -
> -typedef struct AcpiGenericInitiator {
> -/* private */
> -Object parent;
> -
> -/* public */
> -char *pci_dev;
> -uint16_t node;
> -} AcpiGenericInitiator;
> -
> -void build_srat_generic_pci_initiator(GArray *table_data);
> -
> -#endif
> diff --git a/include/hw/acpi/pci.h b/include/hw/acpi/pci.h
> index 467a99461c..3015a8171c 100644
> --- a/include/hw/acpi/pci.h
> +++ b/include/hw/acpi/pci.h
> @@ -40,4 +40,7 @@ Aml *aml_pci_device_dsm(void);
>  
>  void build_append_pci_bus_devices(Aml *parent_scope, PCIBus *bus);
>  void build_pci_bridge_aml(AcpiDevAmlIf *adev, Aml *scope);
> +
> +void build_srat_generic_pci_initiator(GArray *table_data);
> +
>  #endif
> diff --git a/hw/acpi/acpi_generic_initiator.c 
> b/hw/acpi/acpi_generic_initiator.c
> deleted file mode 100644
> index 365feb527f..00
> --- a/hw/acpi/acpi_generic_initiator.c
> +++ /dev/null
> @@ -1,120 +0,0 @@
> -// SPDX-License-Identifier: GPL-2.0-only
> -/*
> - * Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved
> - */
> -
> -#include "qemu/osdep.h"
> -#include "hw/acpi/acpi_generic_initiator.h"
> -#include "hw/acpi/aml-build.h"
> -#include "hw/boards.h"
> -#include "hw/pci/pci_device.h"
> -#include "qemu/error-report.h"
> -#include "qapi/error.h"
> -
> -typedef struct AcpiGenericInitiatorClass {
> -ObjectClass parent_class;
> -} AcpiGenericInitiatorClass;
> -
> -OBJECT_DEFINE_TYPE_WITH_INTERFACES(AcpiGenericInitiator, 
> acpi_generic_initiator,
> -   ACPI_GENERIC_INITIATOR, OBJECT,
> -   { TYPE_USER_CREATABLE },
> -   { NULL })
> -
> -OBJECT_DECLARE_SIMPLE_TYPE(AcpiGenericInitiator, ACPI_GENERIC_INITIATOR)
> -
> -static void acpi_generic_initiator_init(Object *obj)
> -{
> -AcpiGenericInitiator *gi = ACPI_GENERIC_INITIATOR(obj);
> -
> -gi->node = MAX_NODES;
> -gi->pci_dev = NULL;
> -}
> -
> -static void acpi_generic_initiator_finalize(Object *obj)
> -{
> -AcpiGenericInitiator *gi = ACPI_GENERIC_INITIATOR(obj);
> -
> -g_free(gi->pci_dev);
> -}
> -
> -static void acpi_generic_initiator_set_pci_device(Object *obj, const char 
> *val,
> -  Error **errp)
> -{
> -AcpiGenericInitiator *gi = ACPI_GENERIC_INITIATOR(obj);
> -
> -gi->pci_dev = g_strdup(val);
> -}
> -
> -static void acpi_generic_initiator_set_node(Object *obj, Visitor *v,
> -const char *name, void *opaque,
> -Error **errp)
> -{
> -AcpiGenericInitiator *gi = ACPI_GENERIC_INITIATOR(ob

Re: [PATCH 5/7] backends/hostmem-epc: Get rid of qemu_open_old()

2024-07-15 Thread Igor Mammedov
On Mon, 15 Jul 2024 16:21:53 +0800
Zhao Liu  wrote:

> For qemu_open_old(), osdep.h said:
> 
> > Don't introduce new usage of this function, prefer the following
> > qemu_open/qemu_create that take an "Error **errp".  
> 
> So replace qemu_open_old() with qemu_open().
> 
> Cc: David Hildenbrand 
> Cc: Igor Mammedov 
> Signed-off-by: Zhao Liu 

Reviewed-by: Igor Mammedov 

> ---
>  backends/hostmem-epc.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/backends/hostmem-epc.c b/backends/hostmem-epc.c
> index f58fcf00a10b..6c024d6217d2 100644
> --- a/backends/hostmem-epc.c
> +++ b/backends/hostmem-epc.c
> @@ -29,10 +29,8 @@ sgx_epc_backend_memory_alloc(HostMemoryBackend *backend, 
> Error **errp)
>  return false;
>  }
>  
> -fd = qemu_open_old("/dev/sgx_vepc", O_RDWR);
> +fd = qemu_open("/dev/sgx_vepc", O_RDWR, errp);
>  if (fd < 0) {
> -error_setg_errno(errp, errno,
> - "failed to open /dev/sgx_vepc to alloc SGX EPC");
>  return false;
>  }
>  




Re: [PATCH V15 0/7] Add architecture agnostic code to support vCPU Hotplug

2024-07-15 Thread Igor Mammedov
On Mon, 15 Jul 2024 11:27:57 +
Salil Mehta  wrote:

> Hi Michael,
> 
> >  From: Michael S. Tsirkin 
> >  Sent: Monday, July 15, 2024 12:13 PM
> >  To: Salil Mehta 
> >  
> >  On Sat, Jul 13, 2024 at 07:25:09PM +0100, Salil Mehta wrote:  
> >  > [Note: References are present at the last after the revision history]  
> >  
> >  Igor any comments before I merge this?  
> 
> Hi Michael,
> 
> Assuming there are no last-minute surprises, and if you decide to merge this
> series, could I kindly request that you collect all the tags (XXX-bys),
> including Igor's pending Reviewed/Acked-by tags for the entire series, so
> that I won't have to churn out another version (V16)?

v16 might be necessary, see cover letter.

> 
> Many thanks!
> 
> Best regards
> Salil
> 
> 
> >  
> >  --
> >  MST
> >
> 




Re: [PATCH V15 0/7] Add architecture agnostic code to support vCPU Hotplug

2024-07-15 Thread Igor Mammedov
On Sat, 13 Jul 2024 19:25:09 +0100
Salil Mehta  wrote:

> [Note: References are present at the end, after the revision history]
> 
> Virtual CPU hotplug support is being added across various architectures 
> [1][3].
> This series adds various code bits common across all architectures:
> 
> 1. vCPU creation and Parking code refactor [Patch 1]
> 2. Update ACPI GED framework to support vCPU Hotplug [Patch 2,3]
> 3. ACPI CPUs AML code change [Patch 4,5]
> 4. Helper functions to support unrealization of CPU objects [Patch 6,7]

with patches 1 and 3 fixed, this should be good to go.

Salil,
Can you remind me what happened to the migration part of this?
Ideally it should be a part of this series, as it should be common
for everything that uses GED and should be a conditional part
of GED's VMSTATE.

If this series is just a common base and no actual hotplug
on top of it is merged in this release (provided patch 13 is fixed),
I'm fine with migration bits being a separate series on top.

However, if some machine introduces cpu hotplug in
the same release, then the migration part should be merged before
it or be a part of that cpu hotplug series. 
 
> Repository:
> 
> [*] Architecture *Agnostic* Patch-set (This series)
>V14: https://github.com/salil-mehta/qemu.git 
> virt-cpuhp-armv8/rfc-v3.arch.agnostic.v15
> 
>NOTE: This series is meant to work in conjunction with the 
> architecture-specific
>patch-set. For ARM, a combined patch-set (architecture agnostic + 
> specific) was
>earlier pushed as RFC V2 [1]. Later, RFC V2 was split into the ARM 
> Architecture
>specific patch-set RFC V3 [4] (a subset of RFC V2) and the architecture 
> agnostic
>patch-set. Patch-set V14 is the latest version in that series. This series
>works in conjunction with RFC V4-rc2, present at the following link.
> 
> [*] ARM Architecture *Specific* Patch-set
>RFC V3 [4]: https://github.com/salil-mehta/qemu.git virt-cpuhp-armv8/rfc-v3
>RFC V4-rc2: https://github.com/salil-mehta/qemu.git 
> virt-cpuhp-armv8/rfc-v4-rc2 (combined)
> 
> 
> Revision History:
> 
> Patch-set V14 -> V15
> 1. Addressed comment from Igor Mammedov on [PATCH V14 4/7]
>- Removed ACPI_CPU_SCAN_METHOD
>- Introduced AML_GED_EVT_CPU_SCAN_METHOD ("\\_SB.GED.CPSCN") macro
> 2. Fixed the stray change of "assert (" in [PATCH V14 3/7]
> Link: 
> https://lore.kernel.org/qemu-devel/20240712134201.214699-4-salil.me...@huawei.com/
> 
> Patch-set V13 -> V14
> 1. Addressed Igor Mammedov's following review comments
>- Mentioned the new external APIs in the header note of [PATCH 1/7]
>- Merged Doc [PATCH V13 8/8] with [PATCH V14 3/7]
>- Introduced GED realize function for various CPU Hotplug regions 
> initializations
>- Added back event handler method to indirectly expose \\_SB.CPUS.CSCN to 
> GED
>  _EVT. Like for ARM, it would be through \\_SB.GED.CSCN event handler 
> method
>- Collected the Ack given for [Patch V13 6/8]
>- Added back the gfree'ing of GDB regs in common finalize and made it 
> conditional
>- Updated the header notes of [PATCH V13 3/8,4/8,5/8] to reflect the 
> changes
> 
> Patch-set  V12 -> V13
> 1. Added Reviewed-by Tag of Harsh Prateek Bora's (IBM) [PATCH V12 1/8]
> 2. Moved the kvm_{create,park,unpark}_vcpu prototypes from 
> accel/kvm/kvm-cpus.h
>to include/sysemu/kvm.h. These can later be exported through AccelOps.
> Link: 
> https://lore.kernel.org/qemu-devel/62f55169-1796-4d8e-a35d-7f003a172...@linux.ibm.com/
> 
> Patch-set  V11 -> V12
> 1. Addressed Harsh Prateek Bora's (IBM) comment
>- Changed @cpu to @vcpu_id in the kvm_unpark_vcpu prototype header
> 2. Added Zhao Liu's (Intel) Tested-by for whole series
>- Qtest does not break on Intel platforms now.
> 3. Added Zhao Liu's (Intel) Reviewed-by for [PATCH V11 {1/8 - 3/8}]
> Link: https://lore.kernel.org/qemu-devel/zlrspujgbgyeu...@intel.com/
> Link: 
> https://lore.kernel.org/qemu-devel/a5f3d78e-cfed-441f-9c56-e3e78fa5e...@linux.ibm.com/
> 
> Patch-set  V10 -> V11
> 1. Addressed Nicholas Piggin's (IBM) comment
>- moved the traces in kvm_unpark_vcpu and kvm_create_vcpu at the end
>- Added the Reviewed-by Tag for [PATCH V10 1/8]
> 2.  Addressed Alex Bennée's (Linaro) comments
>- Added a note explaining dependency of the [PATCH V10 7/8] on Arch 
> specific patch-set
> Link: 
> https://lore.kernel.org/qemu-devel/d1fs5goofwwk.2pnrivl0v6...@gmail.com/ 
> Link: https://lore.kernel.org/qemu-devel/87frubi402@draig.linaro.org/
> 
> Patch-set  V9 -> V10
> 1. Addressed Nicholas Piggin's (IBM) & Philippe Mathieu-Daudé (Linaro) 
> comments
>- carved out kvm_unpark_vcpu and added its trace
>- Widened the scope of the kvm_unpark_vcpu so that it can be used by 
> generic framework
>  being thought out
> Link: 
> https://lore.kernel.org/qemu-devel/20240519210620.228342-1-salil.me...@huawei.com/
> Link: 
> https://lore.kernel.org/qemu-devel/e94b0e14-efee-4050-9c9f-08382a36b...@linaro.org/
> 
> Patch-set  V8 -> V9
> 

Re: [PATCH V15 7/7] gdbstub: Add helper function to unregister GDB register space

2024-07-15 Thread Igor Mammedov
On Sat, 13 Jul 2024 19:25:16 +0100
Salil Mehta  wrote:

> Add a common function to help unregister the GDB register space. This shall
> be done in the context of CPU unrealization.
> 
> Note: These are common functions exported to arch specific code. For example,
> for ARM this code is being referred in associated arch specific patch-set:
> 
> Link: 
> https://lore.kernel.org/qemu-devel/20230926103654.34424-1-salil.me...@huawei.com/
> 
> Signed-off-by: Salil Mehta 
> Tested-by: Vishnu Pajjuri 
> Reviewed-by: Gavin Shan 
> Tested-by: Xianglai Li 
> Tested-by: Miguel Luis 
> Reviewed-by: Shaoqin Huang 
> Reviewed-by: Vishnu Pajjuri 
> Tested-by: Zhao Liu 

Acked-by: Igor Mammedov 

> ---
>  gdbstub/gdbstub.c  | 13 +
>  hw/core/cpu-common.c   |  4 +++-
>  include/exec/gdbstub.h |  6 ++
>  3 files changed, 22 insertions(+), 1 deletion(-)
> 
> diff --git a/gdbstub/gdbstub.c b/gdbstub/gdbstub.c
> index b9ad0a063e..5da17d6530 100644
> --- a/gdbstub/gdbstub.c
> +++ b/gdbstub/gdbstub.c
> @@ -618,6 +618,19 @@ void gdb_register_coprocessor(CPUState *cpu,
>  }
>  }
>  
> +void gdb_unregister_coprocessor_all(CPUState *cpu)
> +{
> +/*
> + * Safe to nuke everything. GDBRegisterState::xml is static const char so
> + * it won't be freed
> + */
> +g_array_free(cpu->gdb_regs, true);
> +
> +cpu->gdb_regs = NULL;
> +cpu->gdb_num_regs = 0;
> +cpu->gdb_num_g_regs = 0;
> +}
> +
>  static void gdb_process_breakpoint_remove_all(GDBProcess *p)
>  {
>  CPUState *cpu = gdb_get_first_cpu_in_process(p);
> diff --git a/hw/core/cpu-common.c b/hw/core/cpu-common.c
> index b19e1fdacf..fe5383b4f9 100644
> --- a/hw/core/cpu-common.c
> +++ b/hw/core/cpu-common.c
> @@ -281,7 +281,9 @@ static void cpu_common_finalize(Object *obj)
>  g_free(cpu->plugin_state);
>  }
>  #endif
> -g_array_free(cpu->gdb_regs, TRUE);
> +/* If cleanup didn't happen in context to gdb_unregister_coprocessor_all 
> */
> +if (cpu->gdb_regs)
> +g_array_free(cpu->gdb_regs, TRUE);
>  qemu_lockcnt_destroy(&cpu->in_ioctl_lock);
>  qemu_mutex_destroy(&cpu->work_mutex);
>  qemu_cond_destroy(cpu->halt_cond);
> diff --git a/include/exec/gdbstub.h b/include/exec/gdbstub.h
> index 1bd2c4ec2a..d73f424f56 100644
> --- a/include/exec/gdbstub.h
> +++ b/include/exec/gdbstub.h
> @@ -40,6 +40,12 @@ void gdb_register_coprocessor(CPUState *cpu,
>gdb_get_reg_cb get_reg, gdb_set_reg_cb set_reg,
>const GDBFeature *feature, int g_pos);
>  
> +/**
> + * gdb_unregister_coprocessor_all() - unregisters supplemental set of 
> registers
> + * @cpu - the CPU associated with registers
> + */
> +void gdb_unregister_coprocessor_all(CPUState *cpu);
> +
>  /**
>   * gdbserver_start: start the gdb server
>   * @port_or_device: connection spec for gdb




Re: [PATCH V15 5/7] hw/acpi: Update CPUs AML with cpu-(ctrl)dev change

2024-07-15 Thread Igor Mammedov
On Sat, 13 Jul 2024 19:25:14 +0100
Salil Mehta  wrote:

> The CPUs control device (\\_SB.PCI0) register interface for the x86 arch is
> IO-port based, and the existing CPUs AML code assumes _CRS objects evaluate
> to a system resource describing an IO port address. But on the ARM arch the
> CPUs control device (\\_SB.PRES) register interface is memory-mapped, hence
> the _CRS object should evaluate to a system resource describing a
> memory-mapped base address. Update the build CPUs AML function to accept both
> IO/MEMORY region spaces and update the _CRS object accordingly.
> 
> Co-developed-by: Keqian Zhu 
> Signed-off-by: Keqian Zhu 
> Signed-off-by: Salil Mehta 
> Reviewed-by: Gavin Shan 
> Tested-by: Vishnu Pajjuri 
> Reviewed-by: Jonathan Cameron 
> Tested-by: Xianglai Li 
> Tested-by: Miguel Luis 
> Reviewed-by: Shaoqin Huang 
> Tested-by: Zhao Liu 

Reviewed-by: Igor Mammedov 

> ---
>  hw/acpi/cpu.c | 17 +
>  hw/i386/acpi-build.c  |  3 ++-
>  include/hw/acpi/cpu.h |  5 +++--
>  3 files changed, 18 insertions(+), 7 deletions(-)
> 
> diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
> index cf5e9183e4..5cb60ca8bc 100644
> --- a/hw/acpi/cpu.c
> +++ b/hw/acpi/cpu.c
> @@ -338,9 +338,10 @@ const VMStateDescription vmstate_cpu_hotplug = {
>  #define CPU_FW_EJECT_EVENT "CEJF"
>  
>  void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures 
> opts,
> -build_madt_cpu_fn build_madt_cpu, hwaddr io_base,
> +build_madt_cpu_fn build_madt_cpu, hwaddr base_addr,
>  const char *res_root,
> -const char *event_handler_method)
> +const char *event_handler_method,
> +AmlRegionSpace rs)
>  {
>  Aml *ifctx;
>  Aml *field;
> @@ -364,14 +365,22 @@ void build_cpus_aml(Aml *table, MachineState *machine, 
> CPUHotplugFeatures opts,
>  aml_name_decl("_UID", aml_string("CPU Hotplug resources")));
>  aml_append(cpu_ctrl_dev, aml_mutex(CPU_LOCK, 0));
>  
> +assert((rs == AML_SYSTEM_IO) || (rs == AML_SYSTEM_MEMORY));
> +
>  crs = aml_resource_template();
> -aml_append(crs, aml_io(AML_DECODE16, io_base, io_base, 1,
> +if (rs == AML_SYSTEM_IO) {
> +aml_append(crs, aml_io(AML_DECODE16, base_addr, base_addr, 1,
> ACPI_CPU_HOTPLUG_REG_LEN));
> +} else if (rs == AML_SYSTEM_MEMORY) {
> +aml_append(crs, aml_memory32_fixed(base_addr,
> +   ACPI_CPU_HOTPLUG_REG_LEN, AML_READ_WRITE));
> +}
> +
>  aml_append(cpu_ctrl_dev, aml_name_decl("_CRS", crs));
>  
>  /* declare CPU hotplug MMIO region with related access fields */
>  aml_append(cpu_ctrl_dev,
> -aml_operation_region("PRST", AML_SYSTEM_IO, aml_int(io_base),
> +aml_operation_region("PRST", rs, aml_int(base_addr),
>   ACPI_CPU_HOTPLUG_REG_LEN));
>  
>  field = aml_field("PRST", AML_BYTE_ACC, AML_NOLOCK,
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index f4e366f64f..5d4bd2b710 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -1536,7 +1536,8 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>  .fw_unplugs_cpu = pm->smi_on_cpu_unplug,
>  };
>  build_cpus_aml(dsdt, machine, opts, pc_madt_cpu_entry,
> -   pm->cpu_hp_io_base, "\\_SB.PCI0", "\\_GPE._E02");
> +   pm->cpu_hp_io_base, "\\_SB.PCI0", "\\_GPE._E02",
> +   AML_SYSTEM_IO);
>  }
>  
>  if (pcms->memhp_io_base && nr_mem) {
> diff --git a/include/hw/acpi/cpu.h b/include/hw/acpi/cpu.h
> index df87b15997..32654dc274 100644
> --- a/include/hw/acpi/cpu.h
> +++ b/include/hw/acpi/cpu.h
> @@ -63,9 +63,10 @@ typedef void (*build_madt_cpu_fn)(int uid, const 
> CPUArchIdList *apic_ids,
>GArray *entry, bool force_enabled);
>  
>  void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures 
> opts,
> -build_madt_cpu_fn build_madt_cpu, hwaddr io_base,
> +build_madt_cpu_fn build_madt_cpu, hwaddr base_addr,
>  const char *res_root,
> -const char *event_handler_method);
> +const char *event_handler_method,
> +AmlRegionSpace rs);
>  
>  void acpi_cpu_ospm_status(CPUHotplugState *cpu_st, ACPIOSTInfoList ***list);
>  




Re: [PATCH V15 1/7] accel/kvm: Extract common KVM vCPU {creation,parking} code

2024-07-15 Thread Igor Mammedov
On Mon, 15 Jul 2024 14:49:25 +0200
Igor Mammedov  wrote:

> On Sat, 13 Jul 2024 19:25:10 +0100
> Salil Mehta  wrote:
> 
> > KVM vCPU creation is done once during the vCPU realization when Qemu vCPU 
> > thread
> > is spawned. This is common to all the architectures as of now.
> > 
> > Hot-unplug of vCPU results in destruction of the vCPU object in QOM but the
> > corresponding KVM vCPU object in the Host KVM is not destroyed as KVM 
> > doesn't
> > support vCPU removal. Therefore, its representative KVM vCPU object/context 
> > in
> > Qemu is parked.
> > 
> > Refactor architecture common logic so that some APIs could be reused by vCPU
> > Hotplug code of some architectures like ARM, Loongson, etc. Update new/old 
> > APIs
> > with trace events. New APIs qemu_{create,park,unpark}_vcpu() can be 
> > externally
> > called. No functional change is intended here.
> > 
> > Signed-off-by: Salil Mehta 
> > Reviewed-by: Gavin Shan 
> > Tested-by: Vishnu Pajjuri 
> > Reviewed-by: Jonathan Cameron 
> > Tested-by: Xianglai Li 
> > Tested-by: Miguel Luis 
> > Reviewed-by: Shaoqin Huang 
> > Reviewed-by: Vishnu Pajjuri 
> > Reviewed-by: Nicholas Piggin 
> > Tested-by: Zhao Liu 
> > Reviewed-by: Zhao Liu 
> > Reviewed-by: Harsh Prateek Bora   
> 
> Reviewed-by: Igor Mammedov 

this needs fixing, to make checkpatch happy

Checking 0001-accel-kvm-Extract-common-KVM-vCPU-creation-parking-c.patch...
WARNING: line over 80 characters
#120: FILE: accel/kvm/kvm-all.c:368:
+trace_kvm_unpark_vcpu(vcpu_id, kvm_fd > 0 ? "unparked" : "not found 
parked");

total: 0 errors, 1 warnings, 183 lines checked

> 
> > ---
> >  accel/kvm/kvm-all.c| 95 --
> >  accel/kvm/kvm-cpus.h   |  1 -
> >  accel/kvm/trace-events |  5 ++-
> >  include/sysemu/kvm.h   | 25 +++
> >  4 files changed, 92 insertions(+), 34 deletions(-)
> > 
> > diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> > index 2b4ab89679..e446d18944 100644
> > --- a/accel/kvm/kvm-all.c
> > +++ b/accel/kvm/kvm-all.c
> > @@ -340,14 +340,71 @@ err:
> >  return ret;
> >  }
> >  
> > +void kvm_park_vcpu(CPUState *cpu)
> > +{
> > +struct KVMParkedVcpu *vcpu;
> > +
> > +trace_kvm_park_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
> > +
> > +vcpu = g_malloc0(sizeof(*vcpu));
> > +vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
> > +vcpu->kvm_fd = cpu->kvm_fd;
> > +QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
> > +}
> > +
> > +int kvm_unpark_vcpu(KVMState *s, unsigned long vcpu_id)
> > +{
> > +struct KVMParkedVcpu *cpu;
> > +int kvm_fd = -ENOENT;
> > +
> > +QLIST_FOREACH(cpu, &s->kvm_parked_vcpus, node) {
> > +if (cpu->vcpu_id == vcpu_id) {
> > +QLIST_REMOVE(cpu, node);
> > +kvm_fd = cpu->kvm_fd;
> > +g_free(cpu);
> > +}
> > +}
> > +
> > +trace_kvm_unpark_vcpu(vcpu_id, kvm_fd > 0 ? "unparked" : "not found 
> > parked");
> > +
> > +return kvm_fd;
> > +}
> > +
> > +int kvm_create_vcpu(CPUState *cpu)
> > +{
> > +unsigned long vcpu_id = kvm_arch_vcpu_id(cpu);
> > +KVMState *s = kvm_state;
> > +int kvm_fd;
> > +
> > +/* check if the KVM vCPU already exist but is parked */
> > +kvm_fd = kvm_unpark_vcpu(s, vcpu_id);
> > +if (kvm_fd < 0) {
> > +/* vCPU not parked: create a new KVM vCPU */
> > +kvm_fd = kvm_vm_ioctl(s, KVM_CREATE_VCPU, vcpu_id);
> > +if (kvm_fd < 0) {
> > +error_report("KVM_CREATE_VCPU IOCTL failed for vCPU %lu", 
> > vcpu_id);
> > +return kvm_fd;
> > +}
> > +}
> > +
> > +cpu->kvm_fd = kvm_fd;
> > +cpu->kvm_state = s;
> > +cpu->vcpu_dirty = true;
> > +cpu->dirty_pages = 0;
> > +cpu->throttle_us_per_full = 0;
> > +
> > +trace_kvm_create_vcpu(cpu->cpu_index, vcpu_id, kvm_fd);
> > +
> > +return 0;
> > +}
> > +
> >  static int do_kvm_destroy_vcpu(CPUState *cpu)
> >  {
> >  KVMState *s = kvm_state;
> >  long mmap_size;
> > -struct KVMParkedVcpu *vcpu = NULL;
> >  int ret = 0;
> >  
> > -trace_kvm_destroy_vcpu();
> > +trace_kvm_destroy_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));

Re: [PATCH V15 3/7] hw/acpi: Update ACPI GED framework to support vCPU Hotplug

2024-07-15 Thread Igor Mammedov
On Sat, 13 Jul 2024 19:25:12 +0100
Salil Mehta  wrote:

> ACPI GED (as described in the ACPI 6.4 spec) uses an interrupt listed in the
> _CRS object of GED to intimate OSPM about an event. OSPM then demultiplexes
> the notified event by evaluating the ACPI _EVT method to learn the type of
> event. Use ACPI GED to also notify the guest kernel about any CPU
> hot(un)plug events.
> 
> Note, GED interface is used by many hotplug events like memory hotplug, NVDIMM
> hotplug and non-hotplug events like system power down event. Each of these can
> be selected using a bit in the 32 bit GED IO interface. A bit has been 
> reserved
> for the CPU hotplug event.

> ACPI CPU hotplug related initialization should only happen if ACPI_CPU_HOTPLUG
> support has been enabled for particular architecture. Add 
> cpu_hotplug_hw_init()
> stub to avoid compilation break.

So any target (and the machines in it) that has ACPI_CPU_HOTPLUG enabled will
have all the CPU hotplug machinery built in, which is fine.

However, any machine that uses GED but does not opt in to CPU hotplug
will still have the CPU hotplug registers/memory regions enabled/mapped.

It's not much of a concern for upstream, as migration from new to older QEMU
is not supported; however, it will break migration downstream (arm/virt), as
new QEMU will try to migrate memory regions/state that do not exist
in older QEMU. See below for a suggestion.

> 
> Co-developed-by: Keqian Zhu 
> Signed-off-by: Keqian Zhu 
> Signed-off-by: Salil Mehta 
> Reviewed-by: Jonathan Cameron 
> Reviewed-by: Gavin Shan 
> Reviewed-by: David Hildenbrand 
> Reviewed-by: Shaoqin Huang 
> Tested-by: Vishnu Pajjuri 
> Tested-by: Xianglai Li 
> Tested-by: Miguel Luis 
> Reviewed-by: Vishnu Pajjuri 
> Tested-by: Zhao Liu 
> Reviewed-by: Zhao Liu 
> ---
>  docs/specs/acpi_hw_reduced_hotplug.rst |  3 ++-
>  hw/acpi/acpi-cpu-hotplug-stub.c|  6 ++
>  hw/acpi/generic_event_device.c | 24 
>  include/hw/acpi/generic_event_device.h |  4 
>  4 files changed, 36 insertions(+), 1 deletion(-)
> 
> diff --git a/docs/specs/acpi_hw_reduced_hotplug.rst 
> b/docs/specs/acpi_hw_reduced_hotplug.rst
> index 0bd3f9399f..3acd6fcd8b 100644
> --- a/docs/specs/acpi_hw_reduced_hotplug.rst
> +++ b/docs/specs/acpi_hw_reduced_hotplug.rst
> @@ -64,7 +64,8 @@ GED IO interface (4 byte access)
> 0: Memory hotplug event
> 1: System power down event
> 2: NVDIMM hotplug event
> -3-31: Reserved
> +   3: CPU hotplug event
> +4-31: Reserved
>  
>  **write_access:**
>  
> diff --git a/hw/acpi/acpi-cpu-hotplug-stub.c b/hw/acpi/acpi-cpu-hotplug-stub.c
> index 3fc4b14c26..c6c61bb9cd 100644
> --- a/hw/acpi/acpi-cpu-hotplug-stub.c
> +++ b/hw/acpi/acpi-cpu-hotplug-stub.c
> @@ -19,6 +19,12 @@ void legacy_acpi_cpu_hotplug_init(MemoryRegion *parent, 
> Object *owner,
>  return;
>  }
>  
> +void cpu_hotplug_hw_init(MemoryRegion *as, Object *owner,
> + CPUHotplugState *state, hwaddr base_addr)
> +{
> +return;
> +}
> +
>  void acpi_cpu_ospm_status(CPUHotplugState *cpu_st, ACPIOSTInfoList ***list)
>  {
>  return;
> diff --git a/hw/acpi/generic_event_device.c b/hw/acpi/generic_event_device.c
> index 2d6e91b124..1b31d633ba 100644
> --- a/hw/acpi/generic_event_device.c
> +++ b/hw/acpi/generic_event_device.c
> @@ -25,6 +25,7 @@ static const uint32_t ged_supported_events[] = {
>  ACPI_GED_MEM_HOTPLUG_EVT,
>  ACPI_GED_PWR_DOWN_EVT,
>  ACPI_GED_NVDIMM_HOTPLUG_EVT,
> +ACPI_GED_CPU_HOTPLUG_EVT,
>  };
>  
>  /*
> @@ -234,6 +235,8 @@ static void acpi_ged_device_plug_cb(HotplugHandler 
> *hotplug_dev,
>  } else {
>  acpi_memory_plug_cb(hotplug_dev, &s->memhp_state, dev, errp);
>  }
> +} else if (object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
> +acpi_cpu_plug_cb(hotplug_dev, &s->cpuhp_state, dev, errp);
>  } else {
>  error_setg(errp, "virt: device plug request for unsupported device"
> " type: %s", object_get_typename(OBJECT(dev)));
> @@ -248,6 +251,8 @@ static void acpi_ged_unplug_request_cb(HotplugHandler 
> *hotplug_dev,
>  if ((object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM) &&
> !(object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM {
> +acpi_memory_unplug_request_cb(hotplug_dev, &s->memhp_state, dev, 
> errp);
> +} else if (object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
> +acpi_cpu_unplug_request_cb(hotplug_dev, &s->cpuhp_state, dev, errp);
>  } else {
>  error_setg(errp, "acpi: device unplug request for unsupported device"
> " type: %s", object_get_typename(OBJECT(dev)));
> @@ -261,6 +266,8 @@ static void acpi_ged_unplug_cb(HotplugHandler 
> *hotplug_dev,
>  
>  if (object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM)) {
>  acpi_memory_unplug_cb(&s->memhp_state, dev, errp);
> +} else if (object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
> +acpi_cpu_unplug_cb(&s->cpuhp_state, dev, errp);
>  } else 

Re: [PATCH V15 4/7] hw/acpi: Update GED _EVT method AML with CPU scan

2024-07-15 Thread Igor Mammedov
On Sat, 13 Jul 2024 19:25:13 +0100
Salil Mehta  wrote:

> OSPM evaluates the _EVT method to map the event. The CPU hotplug event
> eventually results in the start of the CPU scan. The scan figures out the CPU
> and the kind of event (plug/unplug) and notifies it back to the guest. Update
> the GED AML _EVT method with a call to method \\_SB.CPUS.CSCN (via
> \\_SB.GED.CSCN).
> 
> Architecture specific code [1] might initialize its CPUs AML code by calling
> common function build_cpus_aml() like below for ARM:
> 
> build_cpus_aml(scope, ms, opts, xx_madt_cpu_entry, 
> memmap[VIRT_CPUHP_ACPI].base,
>"\\_SB", "\\_SB.GED.CSCN", AML_SYSTEM_MEMORY);
> 
> [1] 
> https://lore.kernel.org/qemu-devel/20240613233639.202896-13-salil.me...@huawei.com/
> 
> Co-developed-by: Keqian Zhu 
> Signed-off-by: Keqian Zhu 
> Signed-off-by: Salil Mehta 
> Reviewed-by: Jonathan Cameron 
> Reviewed-by: Gavin Shan 
> Tested-by: Vishnu Pajjuri 
> Tested-by: Xianglai Li 
> Tested-by: Miguel Luis 
> Reviewed-by: Shaoqin Huang 
> Tested-by: Zhao Liu 

Reviewed-by: Igor Mammedov 

> ---
>  hw/acpi/generic_event_device.c | 3 +++
>  include/hw/acpi/generic_event_device.h | 1 +
>  2 files changed, 4 insertions(+)
> 
> diff --git a/hw/acpi/generic_event_device.c b/hw/acpi/generic_event_device.c
> index 1b31d633ba..15ffa12cb2 100644
> --- a/hw/acpi/generic_event_device.c
> +++ b/hw/acpi/generic_event_device.c
> @@ -108,6 +108,9 @@ void build_ged_aml(Aml *table, const char *name, 
> HotplugHandler *hotplug_dev,
>  aml_append(if_ctx, aml_call0(MEMORY_DEVICES_CONTAINER "."
>   MEMORY_SLOT_SCAN_METHOD));
>  break;
> +case ACPI_GED_CPU_HOTPLUG_EVT:
> +aml_append(if_ctx, aml_call0(AML_GED_EVT_CPU_SCAN_METHOD));
> +break;
>  case ACPI_GED_PWR_DOWN_EVT:
>  aml_append(if_ctx,
> aml_notify(aml_name(ACPI_POWER_BUTTON_DEVICE),
> diff --git a/include/hw/acpi/generic_event_device.h 
> b/include/hw/acpi/generic_event_device.h
> index e091ac2108..40af3550b5 100644
> --- a/include/hw/acpi/generic_event_device.h
> +++ b/include/hw/acpi/generic_event_device.h
> @@ -87,6 +87,7 @@ OBJECT_DECLARE_SIMPLE_TYPE(AcpiGedState, ACPI_GED)
>  #define GED_DEVICE  "GED"
>  #define AML_GED_EVT_REG "EREG"
>  #define AML_GED_EVT_SEL "ESEL"
> +#define AML_GED_EVT_CPU_SCAN_METHOD "\\_SB.GED.CSCN"
>  
>  /*
>   * Platforms need to specify the GED event bitmap




Re: [PATCH V15 2/7] hw/acpi: Move CPU ctrl-dev MMIO region len macro to common header file

2024-07-15 Thread Igor Mammedov
On Sat, 13 Jul 2024 19:25:11 +0100
Salil Mehta  wrote:

> CPU ctrl-dev MMIO region length could be used in ACPI GED and various other
> architecture specific places. Move ACPI_CPU_HOTPLUG_REG_LEN macro to more
> appropriate common header file.
> 
> Signed-off-by: Salil Mehta 
> Reviewed-by: Alex Bennée 
> Reviewed-by: Jonathan Cameron 
> Reviewed-by: Gavin Shan 
> Reviewed-by: David Hildenbrand 
> Reviewed-by: Shaoqin Huang 
> Tested-by: Vishnu Pajjuri 
> Tested-by: Xianglai Li 
> Tested-by: Miguel Luis 
> Tested-by: Zhao Liu 
> Reviewed-by: Zhao Liu 

Reviewed-by: Igor Mammedov 

> ---
>  hw/acpi/cpu.c | 1 -
>  include/hw/acpi/cpu.h | 2 ++
>  2 files changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
> index 2d81c1e790..cf5e9183e4 100644
> --- a/hw/acpi/cpu.c
> +++ b/hw/acpi/cpu.c
> @@ -7,7 +7,6 @@
>  #include "trace.h"
>  #include "sysemu/numa.h"
>  
> -#define ACPI_CPU_HOTPLUG_REG_LEN 12
>  #define ACPI_CPU_SELECTOR_OFFSET_WR 0
>  #define ACPI_CPU_FLAGS_OFFSET_RW 4
>  #define ACPI_CPU_CMD_OFFSET_WR 5
> diff --git a/include/hw/acpi/cpu.h b/include/hw/acpi/cpu.h
> index e6e1a9ef59..df87b15997 100644
> --- a/include/hw/acpi/cpu.h
> +++ b/include/hw/acpi/cpu.h
> @@ -19,6 +19,8 @@
>  #include "hw/boards.h"
>  #include "hw/hotplug.h"
>  
> +#define ACPI_CPU_HOTPLUG_REG_LEN 12
> +
>  typedef struct AcpiCpuStatus {
>  CPUState *cpu;
>  uint64_t arch_id;




Re: [PATCH V15 1/7] accel/kvm: Extract common KVM vCPU {creation,parking} code

2024-07-15 Thread Igor Mammedov
On Sat, 13 Jul 2024 19:25:10 +0100
Salil Mehta  wrote:

> KVM vCPU creation is done once during the vCPU realization when Qemu vCPU 
> thread
> is spawned. This is common to all the architectures as of now.
> 
> Hot-unplug of vCPU results in destruction of the vCPU object in QOM but the
> corresponding KVM vCPU object in the Host KVM is not destroyed as KVM doesn't
> support vCPU removal. Therefore, its representative KVM vCPU object/context in
> Qemu is parked.
> 
> Refactor architecture common logic so that some APIs could be reused by vCPU
> Hotplug code of some architectures like ARM, Loongson, etc. Update new/old 
> APIs
> with trace events. New APIs qemu_{create,park,unpark}_vcpu() can be externally
> called. No functional change is intended here.
> 
> Signed-off-by: Salil Mehta 
> Reviewed-by: Gavin Shan 
> Tested-by: Vishnu Pajjuri 
> Reviewed-by: Jonathan Cameron 
> Tested-by: Xianglai Li 
> Tested-by: Miguel Luis 
> Reviewed-by: Shaoqin Huang 
> Reviewed-by: Vishnu Pajjuri 
> Reviewed-by: Nicholas Piggin 
> Tested-by: Zhao Liu 
> Reviewed-by: Zhao Liu 
> Reviewed-by: Harsh Prateek Bora 

Reviewed-by: Igor Mammedov 

> ---
>  accel/kvm/kvm-all.c| 95 --
>  accel/kvm/kvm-cpus.h   |  1 -
>  accel/kvm/trace-events |  5 ++-
>  include/sysemu/kvm.h   | 25 +++
>  4 files changed, 92 insertions(+), 34 deletions(-)
> 
> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> index 2b4ab89679..e446d18944 100644
> --- a/accel/kvm/kvm-all.c
> +++ b/accel/kvm/kvm-all.c
> @@ -340,14 +340,71 @@ err:
>  return ret;
>  }
>  
> +void kvm_park_vcpu(CPUState *cpu)
> +{
> +struct KVMParkedVcpu *vcpu;
> +
> +trace_kvm_park_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
> +
> +vcpu = g_malloc0(sizeof(*vcpu));
> +vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
> +vcpu->kvm_fd = cpu->kvm_fd;
> +QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
> +}
> +
> +int kvm_unpark_vcpu(KVMState *s, unsigned long vcpu_id)
> +{
> +struct KVMParkedVcpu *cpu;
> +int kvm_fd = -ENOENT;
> +
> +QLIST_FOREACH(cpu, &s->kvm_parked_vcpus, node) {
> +if (cpu->vcpu_id == vcpu_id) {
> +QLIST_REMOVE(cpu, node);
> +kvm_fd = cpu->kvm_fd;
> +g_free(cpu);
> +}
> +}
> +
> +trace_kvm_unpark_vcpu(vcpu_id, kvm_fd > 0 ? "unparked" : "not found 
> parked");
> +
> +return kvm_fd;
> +}
> +
> +int kvm_create_vcpu(CPUState *cpu)
> +{
> +unsigned long vcpu_id = kvm_arch_vcpu_id(cpu);
> +KVMState *s = kvm_state;
> +int kvm_fd;
> +
> +/* check if the KVM vCPU already exist but is parked */
> +kvm_fd = kvm_unpark_vcpu(s, vcpu_id);
> +if (kvm_fd < 0) {
> +/* vCPU not parked: create a new KVM vCPU */
> +kvm_fd = kvm_vm_ioctl(s, KVM_CREATE_VCPU, vcpu_id);
> +if (kvm_fd < 0) {
> +error_report("KVM_CREATE_VCPU IOCTL failed for vCPU %lu", 
> vcpu_id);
> +return kvm_fd;
> +}
> +}
> +
> +cpu->kvm_fd = kvm_fd;
> +cpu->kvm_state = s;
> +cpu->vcpu_dirty = true;
> +cpu->dirty_pages = 0;
> +cpu->throttle_us_per_full = 0;
> +
> +trace_kvm_create_vcpu(cpu->cpu_index, vcpu_id, kvm_fd);
> +
> +return 0;
> +}
> +
>  static int do_kvm_destroy_vcpu(CPUState *cpu)
>  {
>  KVMState *s = kvm_state;
>  long mmap_size;
> -struct KVMParkedVcpu *vcpu = NULL;
>  int ret = 0;
>  
> -trace_kvm_destroy_vcpu();
> +trace_kvm_destroy_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
>  
>  ret = kvm_arch_destroy_vcpu(cpu);
>  if (ret < 0) {
> @@ -373,10 +430,7 @@ static int do_kvm_destroy_vcpu(CPUState *cpu)
>  }
>  }
>  
> -vcpu = g_malloc0(sizeof(*vcpu));
> -vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
> -vcpu->kvm_fd = cpu->kvm_fd;
> +QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
> +kvm_park_vcpu(cpu);
>  err:
>  return ret;
>  }
> @@ -389,24 +443,6 @@ void kvm_destroy_vcpu(CPUState *cpu)
>  }
>  }
>  
> -static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id)
> -{
> -struct KVMParkedVcpu *cpu;
> -
> -QLIST_FOREACH(cpu, &s->kvm_parked_vcpus, node) {
> -if (cpu->vcpu_id == vcpu_id) {
> -int kvm_fd;
> -
> -QLIST_REMOVE(cpu, node);
> -kvm_fd = cpu->kvm_fd;
> -g_free(cpu);
> -return kvm_fd;
> -}
> -}
> -
> -return kvm_vm_ioctl(s, KVM_CREATE

Re: [PATCH v2 0/9] RISC-V: ACPI: Namespace updates

2024-07-15 Thread Igor Mammedov
On Sun, 14 Jul 2024 03:46:36 -0400
"Michael S. Tsirkin"  wrote:

> On Fri, Jul 12, 2024 at 03:50:10PM +0200, Igor Mammedov wrote:
> > On Fri, 12 Jul 2024 13:51:04 +0100
> > Daniel P. Berrangé  wrote:
> >   
> > > On Fri, Jul 12, 2024 at 02:43:19PM +0200, Igor Mammedov wrote:  
> > > > On Mon,  8 Jul 2024 17:17:32 +0530
> > > > Sunil V L  wrote:
> > > > 
> > > > > This series adds few updates to RISC-V ACPI namespace for virt 
> > > > > platform.
> > > > > Additionally, it has patches to enable ACPI table testing for RISC-V.
> > > > > 
> > > > > 1) PCI Link devices need to be created outside the scope of the PCI 
> > > > > root
> > > > > complex to ensure correct probe ordering by the OS. This matches the
> > > > > example given in ACPI spec as well.
> > > > > 
> > > > > 2) Add PLIC and APLIC as platform devices as well to ensure probing
> > > > > order as per BRS spec [1] requirement.
> > > > > 
> > > > > 3) BRS spec requires RISC-V to use new ACPI ID for the generic UART. 
> > > > > So,
> > > > > update the HID of the UART.
> > > > > 
> > > > > 4) Enabled ACPI tables tests for RISC-V which were originally part of
> > > > > [2] but couldn't get merged due to updates required in the expected 
> > > > > AML
> > > > > files. I think combining those patches with this series makes it 
> > > > > easier
> > > > > to merge since expected AML files are updated.
> > > > > 
> > > > > [1] - https://github.com/riscv-non-isa/riscv-brs
> > > > > [2] - 
> > > > > https://lists.gnu.org/archive/html/qemu-devel/2024-06/msg04734.html   
> > > > >  
> > > > 
> > > > btw: CI is not happy about series, see:
> > > >  https://gitlab.com/imammedo/qemu/-/pipelines/1371119552
> > > > also 'cross-i686-tci' job routinely timeouts on bios-tables-test
> > > > but we still keep adding more tests to it.
> > > > We should either bump timeout to account for slowness or
> > > > disable bios-tables-test for that job.
> > > 
> > > Asumming the test is functionally correct, and not hanging, then bumping
> > > the timeout is the right answer. You can do this in the meson.build
> > > file  
> > 
> > I think test is fine, since once in a while it passes (I guess it depends 
> > on runner host/load)
> > 
> > Overall job timeout is 1h, but that's not what fails.
> > What I see is, the test aborts after 10min timeout.
> > it's likely we hit boot_sector_test()/acpi_find_rsdp_address_uefi() timeout.
> > That's what we should try to bump.
> > 
> > PS:
> > I've just started the job with 5min bump, lets see if it is enough.  
> 
> Because we should wait for 5min CPU time, not wall time.
> Why don't we do that?
> Something like getrusage should work I think.
> 

It turned out to be a meson timeout that's set individually per test file.
I'll send a patch later on.
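
For reference, meson applies per-test timeouts via the `timeout:` keyword when
tests are registered, and QEMU's tests/qtest/meson.build keeps the slow tests
in a per-test dictionary. A hedged sketch of the kind of change being
described (names and values are illustrative, derived from the 610s timeout in
the log plus the 5-minute bump; the actual tree may differ):

```meson
# tests/qtest/meson.build (illustrative fragment, not the exact patch)
slow_qtests = {
  'bios-tables-test' : 910,   # was 610s; +5 min for slow CI runners
}

# ... later, when each qtest is registered, something like:
#   test(..., timeout: slow_qtests.get(test, 60))
```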

> 
> > > We should never disable tests only in CI, because non-CI users
> > > are just as likely to hit timeouts.
> > > 
> > > 
> > > With regards,
> > > Daniel  
> 




[PATCH v2] smbios: make memory device size configurable per Machine

2024-07-15 Thread Igor Mammedov
Currently QEMU describes initial[1] RAM* in SMBIOS as a series of
virtual DIMMs (capped at 16GiB each) using type 17 structure entries.

Which is fine for most cases. However, when starting a guest
with terabytes of RAM, this leads to too many memory device
structures, which eventually upsets the Linux kernel, as it reserves
only 64K for these entries and runs out of reserved memory once
that limit is crossed.

Instead of partitioning initial RAM into 16GiB DIMMs, use the maximum
possible chunk size that the SMBIOS spec allows[2]. This lets RAM be
encoded in the lower 31 bits of a 32-bit field (which amounts to up to
2047TiB per DIMM).
As a result, initial RAM will generate only one type 17 structure
until hosts/guests become able to use more RAM in the future.

Compat changes:
We can't unconditionally change the chunk size, as that would break
the QEMU<->guest ABI (and migration). Thus introduce a new machine
class field that lets older versioned machines use the legacy 16GiB
chunks, while new(er) machine type[s] use the maximum possible
chunk size.

PS:
While it might seem risky to raise the max entry size this large
(much beyond what current physical RAM modules support),
I'd not expect it to cause many issues, modulo uncovering bugs
in software running within the guest. And those should be fixed
on the guest side to handle the SMBIOS spec properly, especially if
the guest is expected to support such huge RAM configs.

In the worst case, QEMU can reduce the chunk size later if we care
enough about introducing a workaround for some 'unfixable'
guest OS, either by fixing up the next machine type or
giving users a CLI option to customize it.

1) Initial RAM - RAM configured with the '-m SIZE' CLI option or
   implicitly defined by the machine. It doesn't include memory
   configured with '-device' option[s] (pcdimm,nvdimm,...)
2) SMBIOS 3.1.0 7.18.5 Memory Device — Extended Size

PS:
* tested on 8Tb host with RHEL6 guest, which seems to parse
  type 17 SMBIOS table entries correctly (according to 'dmidecode').

Signed-off-by: Igor Mammedov 
---
v2:
  * add comment in the code describing where 2047Tb comes from (mst)
  * rephrase commit message a bit and clarify what RAM it applies.
---
 include/hw/boards.h |  4 
 hw/arm/virt.c   |  1 +
 hw/core/machine.c   |  6 ++
 hw/i386/pc_piix.c   |  1 +
 hw/i386/pc_q35.c|  1 +
 hw/smbios/smbios.c  | 11 ++-
 6 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/include/hw/boards.h b/include/hw/boards.h
index ef6f18f2c1..48ff6d8b93 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -237,6 +237,9 @@ typedef struct {
  *purposes only.
  *Applies only to default memory backend, i.e., explicit memory backend
  *wasn't used.
+ * @smbios_memory_device_size:
+ *Default size of memory device,
+ *SMBIOS 3.1.0 "7.18 Memory Device (Type 17)"
  */
 struct MachineClass {
 /*< private >*/
@@ -304,6 +307,7 @@ struct MachineClass {
 const CPUArchIdList *(*possible_cpu_arch_ids)(MachineState *machine);
 int64_t (*get_default_cpu_node_id)(const MachineState *ms, int idx);
 ram_addr_t (*fixup_ram_size)(ram_addr_t size);
+uint64_t smbios_memory_device_size;
 };
 
 /**
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index b0c68d66a3..719e83e6a1 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -3308,6 +3308,7 @@ DEFINE_VIRT_MACHINE_AS_LATEST(9, 1)
 static void virt_machine_9_0_options(MachineClass *mc)
 {
 virt_machine_9_1_options(mc);
+mc->smbios_memory_device_size = 16 * GiB;
 compat_props_add(mc->compat_props, hw_compat_9_0, hw_compat_9_0_len);
 }
 DEFINE_VIRT_MACHINE(9, 0)
diff --git a/hw/core/machine.c b/hw/core/machine.c
index bc38cad7f2..ac30544e7f 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -1004,6 +1004,12 @@ static void machine_class_init(ObjectClass *oc, void *data)
 /* Default 128 MB as guest ram size */
 mc->default_ram_size = 128 * MiB;
 mc->rom_file_has_mr = true;
+/*
+ * SMBIOS 3.1.0 7.18.5 Memory Device — Extended Size
+ * use max possible value that could be encoded into
+ * 'Extended Size' field (2047Tb).
+ */
+mc->smbios_memory_device_size = 2047 * TiB;
 
 /* numa node memory size aligned on 8MB by default.
  * On Linux, each node's border has to be 8MB aligned
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 9445b07b4f..d9e69243b4 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -495,6 +495,7 @@ static void pc_i440fx_machine_9_0_options(MachineClass *m)
 pc_i440fx_machine_9_1_options(m);
 m->alias = NULL;
 m->is_default = false;
+m->smbios_memory_device_size = 16 * GiB;
 
 compat_props_add(m->compat_props, hw_compat_9_0, hw_compat_9_0_len);
 compat_props_add(m->compat_props, pc_compat_9_0, pc_compat_9_0_len);
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index 71d3c6d122..9d108b194e 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -374,6 +374,7 @@ st

Re: [PATCH v2 0/9] RISC-V: ACPI: Namespace updates

2024-07-12 Thread Igor Mammedov
On Fri, 12 Jul 2024 13:51:04 +0100
Daniel P. Berrangé  wrote:

> On Fri, Jul 12, 2024 at 02:43:19PM +0200, Igor Mammedov wrote:
> > On Mon,  8 Jul 2024 17:17:32 +0530
> > Sunil V L  wrote:
> >   
> > > This series adds a few updates to the RISC-V ACPI namespace for the virt platform.
> > > Additionally, it has patches to enable ACPI table testing for RISC-V.
> > > 
> > > 1) PCI Link devices need to be created outside the scope of the PCI root
> > > complex to ensure correct probe ordering by the OS. This matches the
> > > example given in ACPI spec as well.
> > > 
> > > 2) Add PLIC and APLIC as platform devices as well to ensure probing
> > > order as per BRS spec [1] requirement.
> > > 
> > > 3) BRS spec requires RISC-V to use new ACPI ID for the generic UART. So,
> > > update the HID of the UART.
> > > 
> > > 4) Enabled ACPI tables tests for RISC-V which were originally part of
> > > [2] but couldn't get merged due to updates required in the expected AML
> > > files. I think combining those patches with this series makes it easier
> > > to merge since expected AML files are updated.
> > > 
> > > [1] - https://github.com/riscv-non-isa/riscv-brs
> > > [2] - https://lists.gnu.org/archive/html/qemu-devel/2024-06/msg04734.html 
> > >  
> > 
> > btw: CI is not happy about series, see:
> >  https://gitlab.com/imammedo/qemu/-/pipelines/1371119552
> > also 'cross-i686-tci' job routinely timeouts on bios-tables-test
> > but we still keep adding more tests to it.
> > We should either bump timeout to account for slowness or
> > disable bios-tables-test for that job.  
> 
> Assuming the test is functionally correct, and not hanging, then bumping
> the timeout is the right answer. You can do this in the meson.build
> file.

I think the test is fine, since once in a while it passes (I guess it depends
on the runner host/load).

The overall job timeout is 1h, but that's not what fails.
What I see is that the test aborts after a 10min timeout.
It's likely we hit the boot_sector_test()/acpi_find_rsdp_address_uefi() timeout.
That's what we should try to bump.
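As a sketch of the kind of per-test timeout bump under discussion — assuming the `slow_qtests` dictionary in tests/qtest/meson.build; the exact entries and values may differ from what is shown here:

```meson
# Hedged sketch only: the real dictionary lives in tests/qtest/meson.build
# and maps test names to per-instance timeouts in seconds.
slow_qtests = {
  'bios-tables-test' : 910,   # was 610s; +5 min headroom for slow TCG runners
}
```

The meson test harness then picks this value up as the per-instance timeout, so the 1h overall job limit is unaffected.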

PS:
I've just started the job with a 5min bump, let's see if it is enough.

> We should never disable tests only in CI, because non-CI users
> are just as likely to hit timeouts.
> 
> 
> With regards,
> Daniel




Re: [PATCH v2 0/9] RISC-V: ACPI: Namespace updates

2024-07-12 Thread Igor Mammedov
On Mon,  8 Jul 2024 17:17:32 +0530
Sunil V L  wrote:

> This series adds a few updates to the RISC-V ACPI namespace for the virt platform.
> Additionally, it has patches to enable ACPI table testing for RISC-V.
> 
> 1) PCI Link devices need to be created outside the scope of the PCI root
> complex to ensure correct probe ordering by the OS. This matches the
> example given in ACPI spec as well.
> 
> 2) Add PLIC and APLIC as platform devices as well to ensure probing
> order as per BRS spec [1] requirement.
> 
> 3) BRS spec requires RISC-V to use new ACPI ID for the generic UART. So,
> update the HID of the UART.
> 
> 4) Enabled ACPI tables tests for RISC-V which were originally part of
> [2] but couldn't get merged due to updates required in the expected AML
> files. I think combining those patches with this series makes it easier
> to merge since expected AML files are updated.
> 
> [1] - https://github.com/riscv-non-isa/riscv-brs
> [2] - https://lists.gnu.org/archive/html/qemu-devel/2024-06/msg04734.html

btw: CI is not happy about series, see:
 https://gitlab.com/imammedo/qemu/-/pipelines/1371119552
also 'cross-i686-tci' job routinely timeouts on bios-tables-test
but we still keep adding more tests to it.
We should either bump timeout to account for slowness or
disable bios-tables-test for that job.


> Changes since v1:
>   1) Made changes in gpex-acpi.c generic as per feedback from
>  Michael. This changes the DSDT for aarch64/virt and microvm
>  machines. Hence, few patches are added to update the expected
>  DSDT files for those machine so that CI tests don't fail.
>   2) Added patches to enable ACPI tables tests for RISC-V
>  including a patch to remove the fallback path to
>  search for expected AML files.
>   3) Rebased and added tags.
> 
> Sunil V L (9):
>   hw/riscv/virt-acpi-build.c: Add namespace devices for PLIC and APLIC
>   hw/riscv/virt-acpi-build.c: Update the HID of RISC-V UART
>   tests/acpi: Allow DSDT acpi table changes for aarch64
>   acpi/gpex: Create PCI link devices outside PCI root bridge
>   tests/acpi: update expected DSDT blob for aarch64 and  microvm
>   tests/qtest/bios-tables-test.c: Remove the fall back path
>   tests/acpi: Add empty ACPI data files for RISC-V
>   tests/qtest/bios-tables-test.c: Enable basic testing for RISC-V
>   tests/acpi: Add expected ACPI AML files for RISC-V
> 
>  hw/pci-host/gpex-acpi.c   |  13 ++---
>  hw/riscv/virt-acpi-build.c|  49 +-
>  tests/data/acpi/aarch64/virt/DSDT | Bin 5196 -> 5196 bytes
>  .../data/acpi/aarch64/virt/DSDT.acpihmatvirt  | Bin 5282 -> 5282 bytes
>  tests/data/acpi/aarch64/virt/DSDT.memhp   | Bin 6557 -> 6557 bytes
>  tests/data/acpi/aarch64/virt/DSDT.pxb | Bin 7679 -> 7679 bytes
>  tests/data/acpi/aarch64/virt/DSDT.topology| Bin 5398 -> 5398 bytes
>  tests/data/acpi/riscv64/virt/APIC | Bin 0 -> 116 bytes
>  tests/data/acpi/riscv64/virt/DSDT | Bin 0 -> 3576 bytes
>  tests/data/acpi/riscv64/virt/FACP | Bin 0 -> 276 bytes
>  tests/data/acpi/riscv64/virt/MCFG | Bin 0 -> 60 bytes
>  tests/data/acpi/riscv64/virt/RHCT | Bin 0 -> 332 bytes
>  tests/data/acpi/riscv64/virt/SPCR | Bin 0 -> 80 bytes
>  tests/data/acpi/x86/microvm/DSDT.pcie | Bin 3023 -> 3023 bytes
>  tests/qtest/bios-tables-test.c|  40 +-
>  15 files changed, 81 insertions(+), 21 deletions(-)
>  create mode 100644 tests/data/acpi/riscv64/virt/APIC
>  create mode 100644 tests/data/acpi/riscv64/virt/DSDT
>  create mode 100644 tests/data/acpi/riscv64/virt/FACP
>  create mode 100644 tests/data/acpi/riscv64/virt/MCFG
>  create mode 100644 tests/data/acpi/riscv64/virt/RHCT
>  create mode 100644 tests/data/acpi/riscv64/virt/SPCR
> 




Re: [PATCH v2 6/9] tests/qtest/bios-tables-test.c: Remove the fall back path

2024-07-11 Thread Igor Mammedov
On Mon,  8 Jul 2024 17:17:38 +0530
Sunil V L  wrote:

> The expected ACPI AML files are now moved under the ${arch}/${machine} path.
> Hence, there is no need to search in the old path, which didn't have ${arch}.
> Remove the code that searches for the expected AML files under the old path
> as well.
> 
> Suggested-by: Igor Mammedov 
> Signed-off-by: Sunil V L 

Reviewed-by: Igor Mammedov 

> ---
>  tests/qtest/bios-tables-test.c | 14 --
>  1 file changed, 14 deletions(-)
> 
> diff --git a/tests/qtest/bios-tables-test.c b/tests/qtest/bios-tables-test.c
> index f4c4704bab..498e0e35d9 100644
> --- a/tests/qtest/bios-tables-test.c
> +++ b/tests/qtest/bios-tables-test.c
> @@ -267,15 +267,6 @@ static void dump_aml_files(test_data *data, bool rebuild)
> data->arch, data->machine,
> sdt->aml, ext);
>  
> -/*
> - * To keep test cases not failing before the DATA files are 
> moved to
> - * ${arch}/${machine} folder, add this check as well.
> - */
> -if (!g_file_test(aml_file, G_FILE_TEST_EXISTS)) {
> -aml_file = g_strdup_printf("%s/%s/%.4s%s", data_dir,
> -   data->machine, sdt->aml, ext);
> -}
> -
>  if (!g_file_test(aml_file, G_FILE_TEST_EXISTS) &&
>  sdt->aml_len == exp_sdt->aml_len &&
>  !memcmp(sdt->aml, exp_sdt->aml, sdt->aml_len)) {
> @@ -412,11 +403,6 @@ static GArray *load_expected_aml(test_data *data)
>  try_again:
>  aml_file = g_strdup_printf("%s/%s/%s/%.4s%s", data_dir, data->arch,
> data->machine, sdt->aml, ext);
> -if (!g_file_test(aml_file, G_FILE_TEST_EXISTS)) {
> -aml_file = g_strdup_printf("%s/%s/%.4s%s", data_dir, 
> data->machine,
> -   sdt->aml, ext);
> -}
> -
>  if (verbosity_level >= 2) {
>  fprintf(stderr, "Looking for expected file '%s'\n", aml_file);
>  }




Re: [PATCH v2 3/9] tests/acpi: Allow DSDT acpi table changes for aarch64

2024-07-11 Thread Igor Mammedov
On Mon,  8 Jul 2024 17:17:35 +0530
Sunil V L  wrote:

> so that CI tests don't fail when those ACPI tables are updated in the
> next patch. This is as per the documentation in bios-tables-test.c.
> 
> Signed-off-by: Sunil V L 

Reviewed-by: Igor Mammedov 

> ---
>  tests/qtest/bios-tables-test-allowed-diff.h | 6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/tests/qtest/bios-tables-test-allowed-diff.h 
> b/tests/qtest/bios-tables-test-allowed-diff.h
> index dfb8523c8b..9282ea0fb2 100644
> --- a/tests/qtest/bios-tables-test-allowed-diff.h
> +++ b/tests/qtest/bios-tables-test-allowed-diff.h
> @@ -1 +1,7 @@
>  /* List of comma-separated changed AML files to ignore */
> +"tests/data/acpi/aarch64/virt/DSDT",
> +"tests/data/acpi/aarch64/virt/DSDT.memhp",
> +"tests/data/acpi/aarch64/virt/DSDT.topology",
> +"tests/data/acpi/aarch64/virt/DSDT.acpihmatvirt",
> +"tests/data/acpi/aarch64/virt/DSDT.pxb",
> +"tests/data/acpi/x86/microvm/DSDT.pcie",




Re: [PATCH v2 4/9] acpi/gpex: Create PCI link devices outside PCI root bridge

2024-07-11 Thread Igor Mammedov
On Mon,  8 Jul 2024 17:17:36 +0530
Sunil V L  wrote:

> Currently, PCI link devices (PNP0C0F) are always created within the
> scope of the PCI root bridge. However, RISC-V needs these link devices
> to be created outside to ensure the probing order in the OS. This
> matches the example given in the ACPI specification [1] as well. Hence,
> create these link devices directly under _SB instead of under the PCI
> root bridge.
> 
> To keep these link device names unique for multiple PCI bridges, change
> the device name from GSIx to LXXY format where XX is the PCI bus number
> and Y is the INTx.
> 
> GPEX is currently used by riscv, aarch64/virt and x86/microvm machines.
> So, this change will alter the DSDT for those systems.
> 
> [1] - ACPI 5.1: 6.2.13.1 Example: Using _PRT to Describe PCI IRQ Routing
> 
> Signed-off-by: Sunil V L 
> ---
>  hw/pci-host/gpex-acpi.c | 13 +++--
>  1 file changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/hw/pci-host/gpex-acpi.c b/hw/pci-host/gpex-acpi.c
> index f69413ea2c..a93b55c991 100644
> --- a/hw/pci-host/gpex-acpi.c
> +++ b/hw/pci-host/gpex-acpi.c
> @@ -7,7 +7,8 @@
>  #include "hw/pci/pcie_host.h"
>  #include "hw/acpi/cxl.h"
>  
> -static void acpi_dsdt_add_pci_route_table(Aml *dev, uint32_t irq)
> +static void acpi_dsdt_add_pci_route_table(Aml *dev, uint32_t irq,
> +  Aml *scope, uint8_t bus_num)
>  {
>  Aml *method, *crs;
>  int i, slot_no;
> @@ -20,7 +21,7 @@ static void acpi_dsdt_add_pci_route_table(Aml *dev, 
> uint32_t irq)
>  Aml *pkg = aml_package(4);
> -aml_append(pkg, aml_int((slot_no << 16) | 0xFFFF));
>  aml_append(pkg, aml_int(i));
> -aml_append(pkg, aml_name("GSI%d", gsi));
> +aml_append(pkg, aml_name("L%.02X%d", bus_num, gsi));
 Instead of mixing hex and decimal here, make gsi hex as well, to be consistent?


>  aml_append(pkg, aml_int(0));
>  aml_append(rt_pkg, pkg);
>  }
> @@ -30,7 +31,7 @@ static void acpi_dsdt_add_pci_route_table(Aml *dev, 
> uint32_t irq)
>  /* Create GSI link device */
>  for (i = 0; i < PCI_NUM_PINS; i++) {
>  uint32_t irqs = irq + i;
> -Aml *dev_gsi = aml_device("GSI%d", i);
> +Aml *dev_gsi = aml_device("L%.02X%d", bus_num, i);
ditto

>  aml_append(dev_gsi, aml_name_decl("_HID", aml_string("PNP0C0F")));
>  aml_append(dev_gsi, aml_name_decl("_UID", aml_int(i)));
>  crs = aml_resource_template();
> @@ -45,7 +46,7 @@ static void acpi_dsdt_add_pci_route_table(Aml *dev, 
> uint32_t irq)
>  aml_append(dev_gsi, aml_name_decl("_CRS", crs));
>  method = aml_method("_SRS", 1, AML_NOTSERIALIZED);
>  aml_append(dev_gsi, method);
> -aml_append(dev, dev_gsi);
> +aml_append(scope, dev_gsi);
>  }
>  }
>  
> @@ -174,7 +175,7 @@ void acpi_dsdt_add_gpex(Aml *scope, struct GPEXConfig 
> *cfg)
>  aml_append(dev, aml_name_decl("_PXM", aml_int(numa_node)));
>  }
>  
> -acpi_dsdt_add_pci_route_table(dev, cfg->irq);
> +acpi_dsdt_add_pci_route_table(dev, cfg->irq, scope, bus_num);
>  
>  /*
>   * Resources defined for PXBs are composed of the following 
> parts:
> @@ -205,7 +206,7 @@ void acpi_dsdt_add_gpex(Aml *scope, struct GPEXConfig 
> *cfg)
>  aml_append(dev, aml_name_decl("_STR", aml_unicode("PCIe 0 Device")));
>  aml_append(dev, aml_name_decl("_CCA", aml_int(1)));
>  
> -acpi_dsdt_add_pci_route_table(dev, cfg->irq);
> +acpi_dsdt_add_pci_route_table(dev, cfg->irq, scope, 0);
>  
>  method = aml_method("_CBA", 0, AML_NOTSERIALIZED);
>  aml_append(method, aml_return(aml_int(cfg->ecam.base)));




Re: [PATCH v2 2/9] hw/riscv/virt-acpi-build.c: Update the HID of RISC-V UART

2024-07-11 Thread Igor Mammedov
On Mon,  8 Jul 2024 17:17:34 +0530
Sunil V L  wrote:

> The RISC-V BRS specification [1] requires NS16550 compatible UART to
> have the HID RSCV0003. So, update the HID for the UART.
> 
> [1] - https://github.com/riscv-non-isa/riscv-brs

It points to a repo with a bunch of files;
please make it easier for the reader to find,
i.e. point to the concrete document + title (for when the link goes stale)
and chapter (similar to what we do when documenting ACPI code).

> 
> Signed-off-by: Sunil V L 
> Acked-by: Alistair Francis 
> ---
>  hw/riscv/virt-acpi-build.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/hw/riscv/virt-acpi-build.c b/hw/riscv/virt-acpi-build.c
> index 87fe882af0..939f951e45 100644
> --- a/hw/riscv/virt-acpi-build.c
> +++ b/hw/riscv/virt-acpi-build.c
> @@ -192,7 +192,7 @@ acpi_dsdt_add_uart(Aml *scope, const MemMapEntry 
> *uart_memmap,
>  uint32_t uart_irq)
>  {
>  Aml *dev = aml_device("COM0");
> -aml_append(dev, aml_name_decl("_HID", aml_string("PNP0501")));
> +aml_append(dev, aml_name_decl("_HID", aml_string("RSCV0003")));
>  aml_append(dev, aml_name_decl("_UID", aml_int(0)));
>  
>  Aml *crs = aml_resource_template();




Re: [PATCH v2 1/9] hw/riscv/virt-acpi-build.c: Add namespace devices for PLIC and APLIC

2024-07-11 Thread Igor Mammedov
On Mon,  8 Jul 2024 17:17:33 +0530
Sunil V L  wrote:

> PLIC and APLIC should be in namespace as well. So, add them using the
> defined HID.
> 
> Signed-off-by: Sunil V L 
> Acked-by: Alistair Francis 
> ---
>  hw/riscv/virt-acpi-build.c | 47 ++
>  1 file changed, 47 insertions(+)
> 
> diff --git a/hw/riscv/virt-acpi-build.c b/hw/riscv/virt-acpi-build.c
> index 0925528160..87fe882af0 100644
> --- a/hw/riscv/virt-acpi-build.c
> +++ b/hw/riscv/virt-acpi-build.c
> @@ -141,6 +141,52 @@ static void acpi_dsdt_add_cpus(Aml *scope, 
> RISCVVirtState *s)
>  }
>  }
>  
> +static void acpi_dsdt_add_plic_aplic(Aml *scope, RISCVVirtState *s)
> +{
> +MachineState *ms = MACHINE(s);
> +uint64_t plic_aplic_addr;
> +uint32_t gsi_base;
> +uint8_t  socket;
> +
> +if (s->aia_type == VIRT_AIA_TYPE_NONE) {
> +/* PLICs */
> +for (socket = 0; socket < riscv_socket_count(ms); socket++) {

You have socket_count in the caller already; pass it as an argument and
drop the MachineState *ms = MACHINE(s) above.


> +plic_aplic_addr = s->memmap[VIRT_PLIC].base +
> + s->memmap[VIRT_PLIC].size * socket;
> +gsi_base = VIRT_IRQCHIP_NUM_SOURCES * socket;
> +Aml *dev = aml_device("IC%.02X", socket);
> +aml_append(dev, aml_name_decl("_HID", aml_string("RSCV0001")));
> +aml_append(dev, aml_name_decl("_UID", aml_int(socket)));
> +aml_append(dev, aml_name_decl("_GSB", aml_int(gsi_base)));
> +
> +Aml *crs = aml_resource_template();
> +aml_append(crs, aml_memory32_fixed(plic_aplic_addr,
> +   s->memmap[VIRT_PLIC].size,
> +   AML_READ_WRITE));
> +aml_append(dev, aml_name_decl("_CRS", crs));
> +aml_append(scope, dev);
> +}
> +} else {
> +/* APLICs */
> +for (socket = 0; socket < riscv_socket_count(ms); socket++) {
> +plic_aplic_addr = s->memmap[VIRT_APLIC_S].base +
> + s->memmap[VIRT_APLIC_S].size * socket;
> +gsi_base = VIRT_IRQCHIP_NUM_SOURCES * socket;
> +Aml *dev = aml_device("IC%.02X", socket);
> +aml_append(dev, aml_name_decl("_HID", aml_string("RSCV0002")));
> +aml_append(dev, aml_name_decl("_UID", aml_int(socket)));
> +aml_append(dev, aml_name_decl("_GSB", aml_int(gsi_base)));
> +
> +Aml *crs = aml_resource_template();
> +aml_append(crs, aml_memory32_fixed(plic_aplic_addr,
> +   s->memmap[VIRT_APLIC_S].size,
> +   AML_READ_WRITE));
> +aml_append(dev, aml_name_decl("_CRS", crs));
> +aml_append(scope, dev);
> +}
> +}
> +}
> +
>  static void
>  acpi_dsdt_add_uart(Aml *scope, const MemMapEntry *uart_memmap,
>  uint32_t uart_irq)
> @@ -411,6 +457,7 @@ static void build_dsdt(GArray *table_data,
>  
>  socket_count = riscv_socket_count(ms);
>  
> +acpi_dsdt_add_plic_aplic(scope, s);
Perhaps do the same for memmap/RISCVVirtState.

>  acpi_dsdt_add_uart(scope, &s->memmap[VIRT_UART0], UART0_IRQ);


>  
>  if (socket_count == 1) {




Re: [PATCH v2 1/9] hw/riscv/virt-acpi-build.c: Add namespace devices for PLIC and APLIC

2024-07-11 Thread Igor Mammedov
On Mon,  8 Jul 2024 17:17:33 +0530
Sunil V L  wrote:

> PLIC and APLIC should be in namespace as well. So, add them using the
> defined HID.

Defined where? The reader shouldn't be forced to search all over the web to find
the source. Cite it here.

> 
> Signed-off-by: Sunil V L 
> Acked-by: Alistair Francis 
> ---
>  hw/riscv/virt-acpi-build.c | 47 ++
>  1 file changed, 47 insertions(+)
> 
> diff --git a/hw/riscv/virt-acpi-build.c b/hw/riscv/virt-acpi-build.c
> index 0925528160..87fe882af0 100644
> --- a/hw/riscv/virt-acpi-build.c
> +++ b/hw/riscv/virt-acpi-build.c
> @@ -141,6 +141,52 @@ static void acpi_dsdt_add_cpus(Aml *scope, 
> RISCVVirtState *s)
>  }
>  }
>  
> +static void acpi_dsdt_add_plic_aplic(Aml *scope, RISCVVirtState *s)
> +{
> +MachineState *ms = MACHINE(s);
> +uint64_t plic_aplic_addr;
> +uint32_t gsi_base;
> +uint8_t  socket;
> +
> +if (s->aia_type == VIRT_AIA_TYPE_NONE) {
> +/* PLICs */
> +for (socket = 0; socket < riscv_socket_count(ms); socket++) {
> +plic_aplic_addr = s->memmap[VIRT_PLIC].base +
> + s->memmap[VIRT_PLIC].size * socket;
> +gsi_base = VIRT_IRQCHIP_NUM_SOURCES * socket;
> +Aml *dev = aml_device("IC%.02X", socket);
> +aml_append(dev, aml_name_decl("_HID", aml_string("RSCV0001")));
> +aml_append(dev, aml_name_decl("_UID", aml_int(socket)));
> +aml_append(dev, aml_name_decl("_GSB", aml_int(gsi_base)));
> +
> +Aml *crs = aml_resource_template();
> +aml_append(crs, aml_memory32_fixed(plic_aplic_addr,
> +   s->memmap[VIRT_PLIC].size,
> +   AML_READ_WRITE));
> +aml_append(dev, aml_name_decl("_CRS", crs));
> +aml_append(scope, dev);
> +}
> +} else {
> +/* APLICs */
> +for (socket = 0; socket < riscv_socket_count(ms); socket++) {
> +plic_aplic_addr = s->memmap[VIRT_APLIC_S].base +
> + s->memmap[VIRT_APLIC_S].size * socket;
> +gsi_base = VIRT_IRQCHIP_NUM_SOURCES * socket;
> +Aml *dev = aml_device("IC%.02X", socket);
> +aml_append(dev, aml_name_decl("_HID", aml_string("RSCV0002")));
> +aml_append(dev, aml_name_decl("_UID", aml_int(socket)));
> +aml_append(dev, aml_name_decl("_GSB", aml_int(gsi_base)));
> +
> +Aml *crs = aml_resource_template();
> +aml_append(crs, aml_memory32_fixed(plic_aplic_addr,
> +   s->memmap[VIRT_APLIC_S].size,
> +   AML_READ_WRITE));
> +aml_append(dev, aml_name_decl("_CRS", crs));
> +aml_append(scope, dev);
> +}
> +}
> +}
> +
>  static void
>  acpi_dsdt_add_uart(Aml *scope, const MemMapEntry *uart_memmap,
>  uint32_t uart_irq)
> @@ -411,6 +457,7 @@ static void build_dsdt(GArray *table_data,
>  
>  socket_count = riscv_socket_count(ms);
>  
> +acpi_dsdt_add_plic_aplic(scope, s);
>  acpi_dsdt_add_uart(scope, &s->memmap[VIRT_UART0], UART0_IRQ);
>  
>  if (socket_count == 1) {




Re: [PATCH] smbios: make memory device size configurable per Machine

2024-07-11 Thread Igor Mammedov
On Thu, 11 Jul 2024 07:13:27 -0400
"Michael S. Tsirkin"  wrote:

> On Thu, Jul 11, 2024 at 09:48:22AM +0200, Igor Mammedov wrote:
> > Currently SMBIOS maximum memory device chunk is capped at 16Gb,
> > which is fine for the most cases (QEMU uses it to describe initial
> > RAM (type 17 SMBIOS table entries)).
> > However when starting a guest with terabytes of RAM, this leads to
> > too many memory device structures, which eventually upsets the linux
> > kernel, as it reserves only 64K for these entries; when that
> > border is crossed, it runs out of reserved memory.
> > 
> > Instead of partitioning initial RAM into 16Gb chunks, use the maximum
> > possible chunk size that the SMBIOS spec allows[1], which lets us
> > encode RAM in Mb units in the lower 31 bits of a 32-bit field (up to 2047Tb).
> > As a result, initial RAM will generate only one type 17 structure
> > until hosts/guests gain the ability to use more RAM in the future.
> > 
> > Compat changes:
> > We can't unconditionally change the chunk size, as that would break
> > the QEMU<->guest ABI (and migration). Thus introduce a new machine class
> > field that lets older versioned machines use 16Gb chunks
> > while new machine types use the maximum possible chunk size.
> > 
> > While it might seem risky to raise the max entry size this much
> > (far beyond what current physical RAM modules support),
> > I don't expect it to cause many issues, modulo uncovering bugs
> > in software running within guests. And those should be fixed
> > on the guest side to handle the SMBIOS spec properly, especially if
> > the guest is expected to support such huge RAM configs.
> > In the worst case, QEMU can reduce the chunk size later if we
> > care enough about introducing a workaround for some 'unfixable'
> > guest OS, either by fixing up the next machine type or
> > giving users a CLI option to customize it.
> > 
> > 1) SMBIOS 3.1.0 7.18.5 Memory Device — Extended Size
> > 
> > PS:
> > * tested on 8Tb host with RHEL6 guest, which seems to parse
> >   type 17 SMBIOS table entries correctly (according to 'dmidecode').
> > 
> > Signed-off-by: Igor Mammedov 
> > ---
> >  include/hw/boards.h |  4 
> >  hw/arm/virt.c   |  1 +
> >  hw/core/machine.c   |  1 +
> >  hw/i386/pc_piix.c   |  1 +
> >  hw/i386/pc_q35.c|  1 +
> >  hw/smbios/smbios.c  | 11 ++-
> >  6 files changed, 14 insertions(+), 5 deletions(-)
> > 
> > diff --git a/include/hw/boards.h b/include/hw/boards.h
> > index ef6f18f2c1..48ff6d8b93 100644
> > --- a/include/hw/boards.h
> > +++ b/include/hw/boards.h
> > @@ -237,6 +237,9 @@ typedef struct {
> >   *purposes only.
> >   *Applies only to default memory backend, i.e., explicit memory backend
> >   *wasn't used.
> > + * @smbios_memory_device_size:
> > + *Default size of memory device,
> > + *SMBIOS 3.1.0 "7.18 Memory Device (Type 17)"  
> 
> Maybe it would be better to just make this a boolean,
> and put the spec related logic in smbios.c ?
> WDYT?

Using a bool here seems awkward to me,
i.e. the semantics aren't clear, and compat handling would be
complicated as well.

And if we have to expose it someday to users,
it would be logical to make it a machine property.
Given it's used not only by x86, having it as a value
here lets each machine customize it if necessary,
using a well established pattern (incl. compat machinery).

> >   */
> >  struct MachineClass {
> >  /*< private >*/
> > @@ -304,6 +307,7 @@ struct MachineClass {
> >  const CPUArchIdList *(*possible_cpu_arch_ids)(MachineState *machine);
> >  int64_t (*get_default_cpu_node_id)(const MachineState *ms, int idx);
> >  ram_addr_t (*fixup_ram_size)(ram_addr_t size);
> > +uint64_t smbios_memory_device_size;
> >  };
> >  
> >  /**
> > diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> > index b0c68d66a3..719e83e6a1 100644
> > --- a/hw/arm/virt.c
> > +++ b/hw/arm/virt.c
> > @@ -3308,6 +3308,7 @@ DEFINE_VIRT_MACHINE_AS_LATEST(9, 1)
> >  static void virt_machine_9_0_options(MachineClass *mc)
> >  {
> >  virt_machine_9_1_options(mc);
> > +mc->smbios_memory_device_size = 16 * GiB;
> >  compat_props_add(mc->compat_props, hw_compat_9_0, hw_compat_9_0_len);
> >  }
> >  DEFINE_VIRT_MACHINE(9, 0)
> > diff --git a/hw/core/machine.c b/hw/core/machine.c
> > index bc38cad7f2..3cfdaec65d 100644
> > --- a/hw/core/machine.c
> > +++ b/hw/core/machine.c
> > @@ -1004,6 +1004,7 @@ static void machine_class_init(ObjectClass *oc, void *data)
> >  /* 

Re: [PATCH v4 08/13] hw/i386/acpi: Use TYPE_PXB_BUS property acpi_uid for DSDT

2024-07-11 Thread Igor Mammedov
On Tue, 2 Jul 2024 14:14:13 +0100
Jonathan Cameron  wrote:

> Rather than relying on PCI internals, use the new acpi_property
> to obtain the ACPI _UID values.  These are still the same
> as the PCI Bus numbers so no functional change.
> 
> Suggested-by: Igor Mammedov 
> Signed-off-by: Jonathan Cameron 
> ---
> v4: New patch.
> ---
>  hw/i386/acpi-build.c | 9 ++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index ee92783836..cc32f1e6d4 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -1550,6 +1550,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>  QLIST_FOREACH(bus, >child, sibling) {
>  uint8_t bus_num = pci_bus_num(bus);
>  uint8_t numa_node = pci_bus_numa_node(bus);
> +uint8_t uid;
>  
>  /* look only for expander root buses */
>  if (!pci_bus_is_root(bus)) {
> @@ -1560,14 +1561,16 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>  root_bus_limit = bus_num - 1;
>  }
>  
> +uid = object_property_get_uint(OBJECT(bus), "acpi_uid",
> +   &error_fatal);

Theoretically acpi_uid is 32bit, so if we are expecting
only 256 buses here, then having an assert to catch truncation
would be good.
Alternatively, if this UID can't ever be more than 8bit, I'd use
visit_type_uint8() in the previous patch to make sure a too-large value
won't be silently ignored.

>  scope = aml_scope("\\_SB");
>  
>  if (pci_bus_is_cxl(bus)) {
> -dev = aml_device("CL%.02X", bus_num);
> +dev = aml_device("CL%.02X", uid);
>  } else {
> -dev = aml_device("PC%.02X", bus_num);
> +dev = aml_device("PC%.02X", uid);
>  }
> -aml_append(dev, aml_name_decl("_UID", aml_int(bus_num)));
> +aml_append(dev, aml_name_decl("_UID", aml_int(uid)));
>  aml_append(dev, aml_name_decl("_BBN", aml_int(bus_num)));
>  if (pci_bus_is_cxl(bus)) {
>  struct Aml *aml_pkg = aml_package(2);




Re: [PATCH v4 07/13] hw/pci-bridge: Add acpi_uid property to TYPE_PXB_BUS

2024-07-11 Thread Igor Mammedov
On Tue, 2 Jul 2024 14:14:12 +0100
Jonathan Cameron  wrote:

> Enable ACPI table creation for PCI Expander Bridges to be independent
> of PCI internals.  Note that the UID is currently the PCI bus number.
> This is motivated by the forthcoming ACPI Generic Port SRAT entries
> which can be made completely independent of PCI internals.
> 
> Suggested-by: Igor Mammedov 
> Signed-off-by: Jonathan Cameron 
> 
> ---
> v4: Generalize to all TYPE_PXB_BUS.  The handling for primary root
> bridges is separate and doesn't overlap with this change.
> ---
>  hw/pci-bridge/pci_expander_bridge.c | 11 +++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/hw/pci-bridge/pci_expander_bridge.c 
> b/hw/pci-bridge/pci_expander_bridge.c
> index 0411ad31ea..d71eb4b175 100644
> --- a/hw/pci-bridge/pci_expander_bridge.c
> +++ b/hw/pci-bridge/pci_expander_bridge.c
> @@ -85,12 +85,23 @@ static uint16_t pxb_bus_numa_node(PCIBus *bus)
>  return pxb->numa_node;
>  }
>  
> +static void prop_pxb_uid_get(Object *obj, Visitor *v, const char *name,
> + void *opaque, Error **errp)
> +{
> +uint32_t uid = pci_bus_num(PCI_BUS(obj));
> +
> > +visit_type_uint32(v, name, &uid, errp);
> +}
> +
>  static void pxb_bus_class_init(ObjectClass *class, void *data)
>  {
>  PCIBusClass *pbc = PCI_BUS_CLASS(class);
>  
>  pbc->bus_num = pxb_bus_num;
>  pbc->numa_node = pxb_bus_numa_node;
> +
> +object_class_property_add(class, "acpi_uid", "uint32",
> +  prop_pxb_uid_get, NULL, NULL, NULL);

missing related object_class_property_set_description()

>  }
>  
>  static const TypeInfo pxb_bus_info = {




Re: [PATCH v4 05/13] hw/pci: Add a busnr property to pci_props and use for acpi/gi

2024-07-11 Thread Igor Mammedov
On Thu, 11 Jul 2024 13:53:31 +0200
Igor Mammedov  wrote:

> On Tue, 2 Jul 2024 14:14:10 +0100
> Jonathan Cameron  wrote:
> 
> > Using a property allows us to hide the internal details of the PCI device
> > from the code to build a SRAT Generic Initiator Affinity Structure with
> > PCI Device Handle.
> > 
> > Suggested-by: Igor Mammedov 
> > Signed-off-by: Jonathan Cameron 
> > 
> > ---
> > V4: Avoid confusion with device creation parameter bus but renaming to
> > busnr
> > ---
> >  hw/acpi/acpi_generic_initiator.c | 11 ++-
> >  hw/pci/pci.c | 14 ++
> >  2 files changed, 20 insertions(+), 5 deletions(-)
> > 
> > diff --git a/hw/acpi/acpi_generic_initiator.c 
> > b/hw/acpi/acpi_generic_initiator.c
> > index 73bafaaaea..f2711c91ef 100644
> > --- a/hw/acpi/acpi_generic_initiator.c
> > +++ b/hw/acpi/acpi_generic_initiator.c
> > @@ -9,6 +9,7 @@
> >  #include "hw/boards.h"
> >  #include "hw/pci/pci_device.h"
> >  #include "qemu/error-report.h"
> > +#include "qapi/error.h"
> >  
> >  typedef struct AcpiGenericInitiatorClass {
> >  ObjectClass parent_class;
> > @@ -79,7 +80,7 @@ static int build_acpi_generic_initiator(Object *obj, void 
> > *opaque)
> >  MachineState *ms = MACHINE(qdev_get_machine());
> >  AcpiGenericInitiator *gi;
> >  GArray *table_data = opaque;
> > -PCIDevice *pci_dev;
> > +uint8_t bus, devfn;
> >  Object *o;
> >  
> >  if (!object_dynamic_cast(obj, TYPE_ACPI_GENERIC_INITIATOR)) {
> > @@ -100,10 +101,10 @@ static int build_acpi_generic_initiator(Object *obj, 
> > void *opaque)
> >  exit(1);
> >  }
> >  
> > -pci_dev = PCI_DEVICE(o);
> > -build_srat_pci_generic_initiator(table_data, gi->node, 0,
> > - pci_bus_num(pci_get_bus(pci_dev)),
> > - pci_dev->devfn);
> > +bus = object_property_get_uint(o, "busnr", &error_fatal);
> > +devfn = object_property_get_uint(o, "addr", &error_fatal);  
> 
> devfn in PCI code is 32bit, while here it's declared as uint8_t,
> which seems wrong.
> It would likely work in the case of PCIe root ports/switches, where the slot is 0,
> but should quickly break elsewhere as soon as the slot is more than 0.
> 
> If it's intentional, there should be a fat comment here about why it is this way,
> and an assert to catch silent cropping of the value.

Ignore that; obviously the rest of QEMU does not care about this downcast.

Maybe add an assert anyway to catch a too-big devfn being returned,
though that is unlikely ever to happen.

anyways:

Reviewed-by: Igor Mammedov 

> 
> > +
> > +build_srat_pci_generic_initiator(table_data, gi->node, 0, bus, devfn);
> >  
> >  return 0;
> >  }
> > diff --git a/hw/pci/pci.c b/hw/pci/pci.c
> > index 50b86d5790..29d4852c21 100644
> > --- a/hw/pci/pci.c
> > +++ b/hw/pci/pci.c
> > @@ -67,6 +67,19 @@ static char *pcibus_get_fw_dev_path(DeviceState *dev);
> >  static void pcibus_reset_hold(Object *obj, ResetType type);
> >  static bool pcie_has_upstream_port(PCIDevice *dev);
> >  
> > +static void prop_pci_busnr_get(Object *obj, Visitor *v, const char *name,
> > + void *opaque, Error **errp)
> > +{
> > +uint8_t busnr = pci_dev_bus_num(PCI_DEVICE(obj));
> > +
> > +visit_type_uint8(v, name, &busnr, errp);
> > +}
> > +
> > +static const PropertyInfo prop_pci_busnr = {
> > +.name = "busnr",
> > +.get = prop_pci_busnr_get,
> > +};
> > +
> >  static Property pci_props[] = {
> >  DEFINE_PROP_PCI_DEVFN("addr", PCIDevice, devfn, -1),
> >  DEFINE_PROP_STRING("romfile", PCIDevice, romfile),
> > @@ -85,6 +98,7 @@ static Property pci_props[] = {
> >  QEMU_PCIE_ERR_UNC_MASK_BITNR, true),
> >  DEFINE_PROP_BIT("x-pcie-ari-nextfn-1", PCIDevice, cap_present,
> >  QEMU_PCIE_ARI_NEXTFN_1_BITNR, false),
> > +{ .name = "busnr", .info = &prop_pci_busnr },
> >  DEFINE_PROP_END_OF_LIST()
> >  };
> >
> 




Re: [PATCH v4 06/13] acpi/pci: Move Generic Initiator object handling into acpi/pci.*

2024-07-11 Thread Igor Mammedov
On Tue, 2 Jul 2024 14:14:11 +0100
Jonathan Cameron  wrote:

> Whilst ACPI SRAT Generic Initiator Affinity Structures are able to refer to
> both PCI and ACPI Device Handles, the QEMU implementation only implements
> the PCI Device Handle case.  For now move the code into the existing
> hw/acpi/pci.c file and header.  If support for ACPI Device Handles is
> added in the future, perhaps this will be moved again.
> 
> Also push the struct AcpiGenericInitiator down into the C file, as it is
> not used outside pci.c.
> 
> Suggested-by: Igor Mammedov 
> Signed-off-by: Jonathan Cameron 
> 
> ---
> v4: Update busnr naming
> ---
>  include/hw/acpi/acpi_generic_initiator.h |  24 -
>  include/hw/acpi/pci.h|   6 ++
>  hw/acpi/acpi_generic_initiator.c | 117 --
>  hw/acpi/pci.c| 118 +++
>  hw/arm/virt-acpi-build.c |   1 -
>  hw/i386/acpi-build.c |   1 -
>  hw/acpi/meson.build  |   1 -
>  7 files changed, 124 insertions(+), 144 deletions(-)
> 
> diff --git a/include/hw/acpi/acpi_generic_initiator.h 
> b/include/hw/acpi/acpi_generic_initiator.h
> deleted file mode 100644
> index 7b98676713..00
> --- a/include/hw/acpi/acpi_generic_initiator.h
> +++ /dev/null
> @@ -1,24 +0,0 @@
> -// SPDX-License-Identifier: GPL-2.0-only
> -/*
> - * Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved
> - */
> -
> -#ifndef ACPI_GENERIC_INITIATOR_H
> -#define ACPI_GENERIC_INITIATOR_H
> -
> -#include "qom/object_interfaces.h"
> -
> -#define TYPE_ACPI_GENERIC_INITIATOR "acpi-generic-initiator"
> -
> -typedef struct AcpiGenericInitiator {
> -/* private */
> -Object parent;
> -
> -/* public */
> -char *pci_dev;
> -uint16_t node;
> -} AcpiGenericInitiator;
> -
> -void build_srat_generic_pci_initiator(GArray *table_data);
> -
> -#endif
> diff --git a/include/hw/acpi/pci.h b/include/hw/acpi/pci.h
> index 467a99461c..9adf1887da 100644
> --- a/include/hw/acpi/pci.h
> +++ b/include/hw/acpi/pci.h
> @@ -28,6 +28,7 @@
>  
>  #include "hw/acpi/bios-linker-loader.h"
>  #include "hw/acpi/acpi_aml_interface.h"
> +#include "qom/object_interfaces.h"
...
> +
> +#define TYPE_ACPI_GENERIC_INITIATOR "acpi-generic-initiator"

Why are object_interfaces.h and the type name in the header?
At this point I don't see them used anywhere besides the single C file.
If they must be exported, then mention in the commit message where they
will be used.

> +
> +void build_srat_generic_pci_initiator(GArray *table_data);
> +
>  #endif
> diff --git a/hw/acpi/acpi_generic_initiator.c 
> b/hw/acpi/acpi_generic_initiator.c
> deleted file mode 100644
> index f2711c91ef..00
> --- a/hw/acpi/acpi_generic_initiator.c
> +++ /dev/null
> @@ -1,117 +0,0 @@
> -// SPDX-License-Identifier: GPL-2.0-only
> -/*
> - * Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved
> - */
> -
> -#include "qemu/osdep.h"
> -#include "hw/acpi/acpi_generic_initiator.h"
> -#include "hw/acpi/aml-build.h"
> -#include "hw/boards.h"
> -#include "hw/pci/pci_device.h"
> -#include "qemu/error-report.h"
> -#include "qapi/error.h"
> -
> -typedef struct AcpiGenericInitiatorClass {
> -ObjectClass parent_class;
> -} AcpiGenericInitiatorClass;
> -
> -OBJECT_DEFINE_TYPE_WITH_INTERFACES(AcpiGenericInitiator, 
> acpi_generic_initiator,
> -   ACPI_GENERIC_INITIATOR, OBJECT,
> -   { TYPE_USER_CREATABLE },
> -   { NULL })
> -
> -OBJECT_DECLARE_SIMPLE_TYPE(AcpiGenericInitiator, ACPI_GENERIC_INITIATOR)
> -
> -static void acpi_generic_initiator_init(Object *obj)
> -{
> -AcpiGenericInitiator *gi = ACPI_GENERIC_INITIATOR(obj);
> -
> -gi->node = MAX_NODES;
> -gi->pci_dev = NULL;
> -}
> -
> -static void acpi_generic_initiator_finalize(Object *obj)
> -{
> -AcpiGenericInitiator *gi = ACPI_GENERIC_INITIATOR(obj);
> -
> -g_free(gi->pci_dev);
> -}
> -
> -static void acpi_generic_initiator_set_pci_device(Object *obj, const char 
> *val,
> -  Error **errp)
> -{
> -AcpiGenericInitiator *gi = ACPI_GENERIC_INITIATOR(obj);
> -
> -gi->pci_dev = g_strdup(val);
> -}
> -
> -static void acpi_generic_initiator_set_node(Object *obj, Visitor *v,
> -const char *name, void *opaque,
> -Error **errp)
> -{
>

Re: [PATCH v4 05/13] hw/pci: Add a busnr property to pci_props and use for acpi/gi

2024-07-11 Thread Igor Mammedov
On Tue, 2 Jul 2024 14:14:10 +0100
Jonathan Cameron  wrote:

> Using a property allows us to hide the internal details of the PCI device
> from the code to build a SRAT Generic Initiator Affinity Structure with
> PCI Device Handle.
> 
> Suggested-by: Igor Mammedov 
> Signed-off-by: Jonathan Cameron 
> 
> ---
> V4: Avoid confusion with device creation parameter bus by renaming to
> busnr
> ---
>  hw/acpi/acpi_generic_initiator.c | 11 ++-
>  hw/pci/pci.c | 14 ++
>  2 files changed, 20 insertions(+), 5 deletions(-)
> 
> diff --git a/hw/acpi/acpi_generic_initiator.c 
> b/hw/acpi/acpi_generic_initiator.c
> index 73bafaaaea..f2711c91ef 100644
> --- a/hw/acpi/acpi_generic_initiator.c
> +++ b/hw/acpi/acpi_generic_initiator.c
> @@ -9,6 +9,7 @@
>  #include "hw/boards.h"
>  #include "hw/pci/pci_device.h"
>  #include "qemu/error-report.h"
> +#include "qapi/error.h"
>  
>  typedef struct AcpiGenericInitiatorClass {
>  ObjectClass parent_class;
> @@ -79,7 +80,7 @@ static int build_acpi_generic_initiator(Object *obj, void 
> *opaque)
>  MachineState *ms = MACHINE(qdev_get_machine());
>  AcpiGenericInitiator *gi;
>  GArray *table_data = opaque;
> -PCIDevice *pci_dev;
> +uint8_t bus, devfn;
>  Object *o;
>  
>  if (!object_dynamic_cast(obj, TYPE_ACPI_GENERIC_INITIATOR)) {
> @@ -100,10 +101,10 @@ static int build_acpi_generic_initiator(Object *obj, 
> void *opaque)
>  exit(1);
>  }
>  
> -pci_dev = PCI_DEVICE(o);
> -build_srat_pci_generic_initiator(table_data, gi->node, 0,
> - pci_bus_num(pci_get_bus(pci_dev)),
> - pci_dev->devfn);
> +bus = object_property_get_uint(o, "busnr", &error_fatal);
> +devfn = object_property_get_uint(o, "addr", &error_fatal);

devfn in PCI code is 32-bit, while here it's declared as uint8_t,
which seems wrong.
It would likely work in the case of PCIe root ports/switches where the
slot is 0, but should quickly break elsewhere as soon as the slot is
more than 0.

If it's intentional, there should be a fat comment here about why it is
this way and an assert to catch silent cropping of the value. 

> +
> +build_srat_pci_generic_initiator(table_data, gi->node, 0, bus, devfn);
>  
>  return 0;
>  }
> diff --git a/hw/pci/pci.c b/hw/pci/pci.c
> index 50b86d5790..29d4852c21 100644
> --- a/hw/pci/pci.c
> +++ b/hw/pci/pci.c
> @@ -67,6 +67,19 @@ static char *pcibus_get_fw_dev_path(DeviceState *dev);
>  static void pcibus_reset_hold(Object *obj, ResetType type);
>  static bool pcie_has_upstream_port(PCIDevice *dev);
>  
> +static void prop_pci_busnr_get(Object *obj, Visitor *v, const char *name,
> + void *opaque, Error **errp)
> +{
> +uint8_t busnr = pci_dev_bus_num(PCI_DEVICE(obj));
> +
> +visit_type_uint8(v, name, &busnr, errp);
> +}
> +
> +static const PropertyInfo prop_pci_busnr = {
> +.name = "busnr",
> +.get = prop_pci_busnr_get,
> +};
> +
>  static Property pci_props[] = {
>  DEFINE_PROP_PCI_DEVFN("addr", PCIDevice, devfn, -1),
>  DEFINE_PROP_STRING("romfile", PCIDevice, romfile),
> @@ -85,6 +98,7 @@ static Property pci_props[] = {
>  QEMU_PCIE_ERR_UNC_MASK_BITNR, true),
>  DEFINE_PROP_BIT("x-pcie-ari-nextfn-1", PCIDevice, cap_present,
>  QEMU_PCIE_ARI_NEXTFN_1_BITNR, false),
> +{ .name = "busnr", .info = &prop_pci_busnr },
>  DEFINE_PROP_END_OF_LIST()
>  };
>  




Re: [PATCH] smbios: make memory device size configurable per Machine

2024-07-11 Thread Igor Mammedov
On Thu, 11 Jul 2024 09:43:46 +0100
Daniel P. Berrangé  wrote:

> On Thu, Jul 11, 2024 at 09:48:22AM +0200, Igor Mammedov wrote:
> > Currently the SMBIOS maximum memory device chunk is capped at 16Gb,
> > which is fine for most cases (QEMU uses it to describe initial
> > RAM (type 17 SMBIOS table entries)).
> > However, when starting a guest with terabytes of RAM, this leads to
> > too many memory device structures, which eventually upsets the Linux
> > kernel, as it reserves only 64K for these entries and runs out of
> > reserved memory when that border is crossed.
> > 
> > Instead of partitioning initial RAM into 16Gb chunks, use the maximum
> > possible chunk size that the SMBIOS spec allows[1], which lets us
> > encode RAM in Mb units in a uint32_t-1 field (up to 2047Tb).
> > As a result, initial RAM will generate only one type 17 structure
> > until host/guest gain the ability to use more RAM in the future.
> > 
> > Compat changes:
> > We can't unconditionally change the chunk size, as that would break the
> > QEMU<->guest ABI (and migration). Thus introduce a new machine class
> > field that lets older versioned machines use 16Gb chunks
> > while new machine types can use the maximum possible chunk size.
> > 
> > While it might seem risky to raise the max entry size this much
> > (far beyond what current physical RAM modules support),
> > I'd not expect it to cause many issues, modulo uncovering bugs
> > in software running within the guest. And those should be fixed
> > on the guest side to handle the SMBIOS spec properly, especially if
> > the guest is expected to support such huge RAM configs.
> > In the worst case, QEMU can reduce the chunk size later if we care
> > enough about introducing a workaround for some 'unfixable'
> > guest OS, either by fixing up the next machine type or
> > giving users a CLI option to customize it.  
> 
> I was wondering what real hardware does, since the best way to
> avoid guest OS surprises is to align with real world behaviour.
> IIUC, there is usually one Type 17 structure per physical
> DIMM.
> 
> Most QEMU configs don't express DIMMs as a concept so in that
> case, we can presume 1 virtual DIMM, and thus having one type
> 17 structure is a match for physical hw practices.


> What about when the QEMU config has used nvdimm, pc-dimm,
> or virtio-mem devices though ? It feels like the best practice
> would be to have a type 17 structure for each instance of one
> of those devices.

QEMU doesn't expose any memory besides the initial one in SMBIOS.
So from a guest introspection PoV, when using only SMBIOS,
those do not exist.

On a tangent:
I think exposing those with hotplug in place makes
it messy, especially with migration in mind (we would need to
move SMBIOS table creation to reset time and enumerate all
supported memory devices at that time to get a somewhat reliable
picture, which would reflect the machine config _only_ at boot time).

Also, it would help to model initial RAM as DIMM device(s) to
avoid faking the RAM entry, and to do it consistently with DIMM devices.

(but yeah, nobody asked for anything like that so far).


> > 1) SMBIOS 3.1.0 7.18.5 Memory Device — Extended Size
> > 
> > PS:
> > * tested on 8Tb host with RHEL6 guest, which seems to parse
> >   type 17 SMBIOS table entries correctly (according to 'dmidecode').
> > 
> > Signed-off-by: Igor Mammedov 
> > ---
> >  include/hw/boards.h |  4 
> >  hw/arm/virt.c   |  1 +
> >  hw/core/machine.c   |  1 +
> >  hw/i386/pc_piix.c   |  1 +
> >  hw/i386/pc_q35.c|  1 +
> >  hw/smbios/smbios.c  | 11 ++-
> >  6 files changed, 14 insertions(+), 5 deletions(-)
> > 
> > diff --git a/include/hw/boards.h b/include/hw/boards.h
> > index ef6f18f2c1..48ff6d8b93 100644
> > --- a/include/hw/boards.h
> > +++ b/include/hw/boards.h
> > @@ -237,6 +237,9 @@ typedef struct {
> >   *purposes only.
> >   *Applies only to default memory backend, i.e., explicit memory backend
> >   *wasn't used.
> > + * @smbios_memory_device_size:
> > + *Default size of memory device,
> > + *SMBIOS 3.1.0 "7.18 Memory Device (Type 17)"
> >   */
> >  struct MachineClass {
> >  /*< private >*/
> > @@ -304,6 +307,7 @@ struct MachineClass {
> >  const CPUArchIdList *(*possible_cpu_arch_ids)(MachineState *machine);
> >  int64_t (*get_default_cpu_node_id)(const MachineState *ms, int idx);
> >  ram_addr_t (*fixup_ram_size)(ram_addr_t size);
> > +uint64_t smbios_memory_device_size;
> >  };
> >  
> >  /**
> > diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> > index b0c68d66a3..719e83e6a1 1

Re: [PATCH] smbios: make memory device size configurable per Machine

2024-07-11 Thread Igor Mammedov
On Thu, 11 Jul 2024 10:19:27 +0200
Philippe Mathieu-Daudé  wrote:

> Hi Igor,
> 
> On 11/7/24 09:48, Igor Mammedov wrote:
> > Currently the SMBIOS maximum memory device chunk is capped at 16Gb,
> > which is fine for most cases (QEMU uses it to describe initial
> > RAM (type 17 SMBIOS table entries)).
> > However, when starting a guest with terabytes of RAM, this leads to
> > too many memory device structures, which eventually upsets the Linux
> > kernel, as it reserves only 64K for these entries and runs out of
> > reserved memory when that border is crossed.
> > 
> > Instead of partitioning initial RAM into 16Gb chunks, use the maximum
> > possible chunk size that the SMBIOS spec allows[1], which lets us
> > encode RAM in Mb units in a uint32_t-1 field (up to 2047Tb).
> > As a result, initial RAM will generate only one type 17 structure
> > until host/guest gain the ability to use more RAM in the future.
> > 
> > Compat changes:
> > We can't unconditionally change the chunk size, as that would break the
> > QEMU<->guest ABI (and migration). Thus introduce a new machine class
> > field that lets older versioned machines use 16Gb chunks
> > while new machine types can use the maximum possible chunk size.
> > 
> > While it might seem risky to raise the max entry size this much
> > (far beyond what current physical RAM modules support),
> > I'd not expect it to cause many issues, modulo uncovering bugs
> > in software running within the guest. And those should be fixed
> > on the guest side to handle the SMBIOS spec properly, especially if
> > the guest is expected to support such huge RAM configs.
> > In the worst case, QEMU can reduce the chunk size later if we care
> > enough about introducing a workaround for some 'unfixable'
> > guest OS, either by fixing up the next machine type or
> > giving users a CLI option to customize it.
> > 
> > 1) SMBIOS 3.1.0 7.18.5 Memory Device — Extended Size
> > 
> > PS:
> > * tested on 8Tb host with RHEL6 guest, which seems to parse
> >type 17 SMBIOS table entries correctly (according to 'dmidecode').
> > 
> > Signed-off-by: Igor Mammedov 
> > ---
> >   include/hw/boards.h |  4 
> >   hw/arm/virt.c   |  1 +
> >   hw/core/machine.c   |  1 +
> >   hw/i386/pc_piix.c   |  1 +
> >   hw/i386/pc_q35.c|  1 +
> >   hw/smbios/smbios.c  | 11 ++-
> >   6 files changed, 14 insertions(+), 5 deletions(-)
> > 
> > diff --git a/include/hw/boards.h b/include/hw/boards.h
> > index ef6f18f2c1..48ff6d8b93 100644
> > --- a/include/hw/boards.h
> > +++ b/include/hw/boards.h
> > @@ -237,6 +237,9 @@ typedef struct {
> >*purposes only.
> >*Applies only to default memory backend, i.e., explicit memory 
> > backend
> >*wasn't used.
> > + * @smbios_memory_device_size:
> > + *Default size of memory device,
> > + *SMBIOS 3.1.0 "7.18 Memory Device (Type 17)"
> >*/
> >   struct MachineClass {
> >   /*< private >*/
> > @@ -304,6 +307,7 @@ struct MachineClass {
> >   const CPUArchIdList *(*possible_cpu_arch_ids)(MachineState *machine);
> >   int64_t (*get_default_cpu_node_id)(const MachineState *ms, int idx);
> >   ram_addr_t (*fixup_ram_size)(ram_addr_t size);
> > +uint64_t smbios_memory_device_size;  
> 
> Quick notes since I'm on holidays (not meant to block this patch):
> 
> - How will this machine class property evolve in the context of
>a heterogeneous machine (i.e. x86_64 cores and 1 riscv32 one)?

I'm not aware of an SMBIOS spec (3.x) that cares about that heterogeneous
setup yet. Does anything in that area exist yet?

> - Should this become a SmbiosProviderInterface later?
if/when SMBIOS does get there (heterogeneous machines), introducing
an interface might make sense.

> 
> >   };
> >   
> >   /**
> > diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> > index b0c68d66a3..719e83e6a1 100644
> > --- a/hw/arm/virt.c
> > +++ b/hw/arm/virt.c
> > @@ -3308,6 +3308,7 @@ DEFINE_VIRT_MACHINE_AS_LATEST(9, 1)
> >   static void virt_machine_9_0_options(MachineClass *mc)
> >   {
> >   virt_machine_9_1_options(mc);
> > +mc->smbios_memory_device_size = 16 * GiB;
> >   compat_props_add(mc->compat_props, hw_compat_9_0, hw_compat_9_0_len);
> >   }
> >   DEFINE_VIRT_MACHINE(9, 0)  
> 
> [...]
> 




Re: [PATCH V13 4/8] hw/acpi: Update GED _EVT method AML with CPU scan

2024-07-11 Thread Igor Mammedov
On Thu, 11 Jul 2024 03:29:40 +
Salil Mehta  wrote:

> Hi Igor,
> 
> 
> On 06/07/2024 14:28, Igor Mammedov wrote:
> > On Fri, 7 Jun 2024 12:56:45 +0100
> > Salil Mehta  wrote:
> >  
> >> OSPM evaluates _EVT method to map the event. The CPU hotplug event 
> >> eventually
> >> results in start of the CPU scan. Scan figures out the CPU and the kind of
> >> event(plug/unplug) and notifies it back to the guest. Update the GED AML 
> >> _EVT
> >> method with the call to \\_SB.CPUS.CSCN
> >>
> >> Also, macro CPU_SCAN_METHOD might be referred in other places like during 
> >> GED
> >> initialization so it makes sense to have its definition placed in some 
> >> common
> >> header file like cpu_hotplug.h. But doing this can cause compilation break
> >> because of the conflicting macro definitions present in cpu.c and 
> >> cpu_hotplug.c  
> > one of the reasons is that you reusing legacy hw/acpi/cpu_hotplug.h,
> > see below for suggestion.
> >  
> >> and because both these files get compiled due to historic reasons of x86 
> >> world
> >> i.e. decision to use legacy(GPE.2)/modern(GED) CPU hotplug interface 
> >> happens
> >> during runtime [1]. To mitigate above, for now, declare a new common macro
> >> ACPI_CPU_SCAN_METHOD for CPU scan method instead.
> >> (This needs a separate discussion later on for clean-up)
> >>
> >> Reference:
> >> [1] 
> >> https://lore.kernel.org/qemu-devel/1463496205-251412-24-git-send-email-imamm...@redhat.com/
> >>
> >> Co-developed-by: Keqian Zhu 
> >> Signed-off-by: Keqian Zhu 
> >> Signed-off-by: Salil Mehta 
> >> Reviewed-by: Jonathan Cameron 
> >> Reviewed-by: Gavin Shan 
> >> Tested-by: Vishnu Pajjuri 
> >> Tested-by: Xianglai Li 
> >> Tested-by: Miguel Luis 
> >> Reviewed-by: Shaoqin Huang 
> >> Tested-by: Zhao Liu 
> >> ---
> >>   hw/acpi/cpu.c  | 2 +-
> >>   hw/acpi/generic_event_device.c | 4 
> >>   include/hw/acpi/cpu_hotplug.h  | 2 ++
> >>   3 files changed, 7 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
> >> index 473b37ba88..af2b6655d2 100644
> >> --- a/hw/acpi/cpu.c
> >> +++ b/hw/acpi/cpu.c
> >> @@ -327,7 +327,7 @@ const VMStateDescription vmstate_cpu_hotplug = {
> >>   #define CPUHP_RES_DEVICE  "PRES"
> >>   #define CPU_LOCK  "CPLK"
> >>   #define CPU_STS_METHOD"CSTA"
> >> -#define CPU_SCAN_METHOD   "CSCN"
> >> +#define CPU_SCAN_METHOD   ACPI_CPU_SCAN_METHOD
> >>   #define CPU_NOTIFY_METHOD "CTFY"
> >>   #define CPU_EJECT_METHOD  "CEJ0"
> >>   #define CPU_OST_METHOD"COST"
> >> diff --git a/hw/acpi/generic_event_device.c 
> >> b/hw/acpi/generic_event_device.c
> >> index 54d3b4bf9d..63226b0040 100644
> >> --- a/hw/acpi/generic_event_device.c
> >> +++ b/hw/acpi/generic_event_device.c
> >> @@ -109,6 +109,10 @@ void build_ged_aml(Aml *table, const char *name, 
> >> HotplugHandler *hotplug_dev,
> >>   aml_append(if_ctx, aml_call0(MEMORY_DEVICES_CONTAINER "."
> >>MEMORY_SLOT_SCAN_METHOD));
> >>   break;
> >> +case ACPI_GED_CPU_HOTPLUG_EVT:
> >> +aml_append(if_ctx, aml_call0(ACPI_CPU_CONTAINER "."
> >> + ACPI_CPU_SCAN_METHOD));  
> > I don't particularly like exposing cpu hotplug internals for outside code
> > and then making that code do plumbing hoping that nothing will explode
> > in the future.
> >
> > build_cpus_aml() takes event_handler_method to create a method that
> > can be called by the platform. What I suggest is to call that method
> > here instead of trying to expose CPU hotplug internals and manually
> > building the call path here.
> > aka:
> >build_cpus_aml(event_handler_method = PATH_TO_GED_DEVICE.CSCN)
> > and then call here
> >aml_append(if_ctx, aml_call0(CSCN));
> > which will call CSCN in the GED scope, populated by
> > build_cpus_aml() to do the cpu scan properly, without needing to
> > expose cpu hotplug internal names and then having to fix up conflicts
> > caused by that.
> >
> > PS:
> > we should do the same for memory hotplug, as we see in the context above  
> 
> In the x86 w

[PATCH] smbios: make memory device size configurable per Machine

2024-07-11 Thread Igor Mammedov
Currently the SMBIOS maximum memory device chunk is capped at 16Gb,
which is fine for most cases (QEMU uses it to describe initial
RAM (type 17 SMBIOS table entries)).
However, when starting a guest with terabytes of RAM, this leads to
too many memory device structures, which eventually upsets the Linux
kernel, as it reserves only 64K for these entries and runs out of
reserved memory when that border is crossed.

Instead of partitioning initial RAM into 16Gb chunks, use the maximum
possible chunk size that the SMBIOS spec allows[1], which lets us
encode RAM in Mb units in a uint32_t-1 field (up to 2047Tb).
As a result, initial RAM will generate only one type 17 structure
until host/guest gain the ability to use more RAM in the future.

Compat changes:
We can't unconditionally change the chunk size, as that would break the
QEMU<->guest ABI (and migration). Thus introduce a new machine class
field that lets older versioned machines use 16Gb chunks
while new machine types can use the maximum possible chunk size.

While it might seem risky to raise the max entry size this much
(far beyond what current physical RAM modules support),
I'd not expect it to cause many issues, modulo uncovering bugs
in software running within the guest. And those should be fixed
on the guest side to handle the SMBIOS spec properly, especially if
the guest is expected to support such huge RAM configs.
In the worst case, QEMU can reduce the chunk size later if we care
enough about introducing a workaround for some 'unfixable'
guest OS, either by fixing up the next machine type or
giving users a CLI option to customize it.

1) SMBIOS 3.1.0 7.18.5 Memory Device — Extended Size

PS:
* tested on 8Tb host with RHEL6 guest, which seems to parse
  type 17 SMBIOS table entries correctly (according to 'dmidecode').

Signed-off-by: Igor Mammedov 
---
 include/hw/boards.h |  4 
 hw/arm/virt.c   |  1 +
 hw/core/machine.c   |  1 +
 hw/i386/pc_piix.c   |  1 +
 hw/i386/pc_q35.c|  1 +
 hw/smbios/smbios.c  | 11 ++-
 6 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/include/hw/boards.h b/include/hw/boards.h
index ef6f18f2c1..48ff6d8b93 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -237,6 +237,9 @@ typedef struct {
  *purposes only.
  *Applies only to default memory backend, i.e., explicit memory backend
  *wasn't used.
+ * @smbios_memory_device_size:
+ *Default size of memory device,
+ *SMBIOS 3.1.0 "7.18 Memory Device (Type 17)"
  */
 struct MachineClass {
 /*< private >*/
@@ -304,6 +307,7 @@ struct MachineClass {
 const CPUArchIdList *(*possible_cpu_arch_ids)(MachineState *machine);
 int64_t (*get_default_cpu_node_id)(const MachineState *ms, int idx);
 ram_addr_t (*fixup_ram_size)(ram_addr_t size);
+uint64_t smbios_memory_device_size;
 };
 
 /**
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index b0c68d66a3..719e83e6a1 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -3308,6 +3308,7 @@ DEFINE_VIRT_MACHINE_AS_LATEST(9, 1)
 static void virt_machine_9_0_options(MachineClass *mc)
 {
 virt_machine_9_1_options(mc);
+mc->smbios_memory_device_size = 16 * GiB;
 compat_props_add(mc->compat_props, hw_compat_9_0, hw_compat_9_0_len);
 }
 DEFINE_VIRT_MACHINE(9, 0)
diff --git a/hw/core/machine.c b/hw/core/machine.c
index bc38cad7f2..3cfdaec65d 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -1004,6 +1004,7 @@ static void machine_class_init(ObjectClass *oc, void 
*data)
 /* Default 128 MB as guest ram size */
 mc->default_ram_size = 128 * MiB;
 mc->rom_file_has_mr = true;
+mc->smbios_memory_device_size = 2047 * TiB;
 
 /* numa node memory size aligned on 8MB by default.
  * On Linux, each node's border has to be 8MB aligned
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 9445b07b4f..d9e69243b4 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -495,6 +495,7 @@ static void pc_i440fx_machine_9_0_options(MachineClass *m)
 pc_i440fx_machine_9_1_options(m);
 m->alias = NULL;
 m->is_default = false;
+m->smbios_memory_device_size = 16 * GiB;
 
 compat_props_add(m->compat_props, hw_compat_9_0, hw_compat_9_0_len);
 compat_props_add(m->compat_props, pc_compat_9_0, pc_compat_9_0_len);
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index 71d3c6d122..9d108b194e 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -374,6 +374,7 @@ static void pc_q35_machine_9_0_options(MachineClass *m)
 PCMachineClass *pcmc = PC_MACHINE_CLASS(m);
 pc_q35_machine_9_1_options(m);
 m->alias = NULL;
+m->smbios_memory_device_size = 16 * GiB;
 compat_props_add(m->compat_props, hw_compat_9_0, hw_compat_9_0_len);
 compat_props_add(m->compat_props, pc_compat_9_0, pc_compat_9_0_len);
 pcmc->isa_bios_alias = false;
diff --git a/hw/smbios/smbios.c b/hw/smbios/smbios.c
index 3b7703489d..a394514264 100644
--- a/hw/smbios/smbios.c
+++ b/hw/smbios/smbios.c
@@ -

Re: [PATCH V13 1/8] accel/kvm: Extract common KVM vCPU {creation,parking} code

2024-07-09 Thread Igor Mammedov
On Mon, 8 Jul 2024 23:30:01 +
Salil Mehta  wrote:

> Hi Igor,
> 
> On 08/07/2024 13:32, Igor Mammedov wrote:
> > On Sat, 6 Jul 2024 15:43:01 +
> > Salil Mehta  wrote:
> >  
> >> Hi Igor,
> >> Thanks for taking out time to review.
> >>
> >> On Sat, Jul 6, 2024 at 1:12 PM Igor Mammedov  wrote:
> >>  
> >>> On Fri, 7 Jun 2024 12:56:42 +0100
> >>> Salil Mehta  wrote:
> >>> 
> >>>> KVM vCPU creation is done once during the vCPU realization when Qemu  
> >>> vCPU thread  
> >>>> is spawned. This is common to all the architectures as of now.
> >>>>
> >>>> Hot-unplug of vCPU results in destruction of the vCPU object in QOM but  
> >>> the  
> >>>> corresponding KVM vCPU object in the Host KVM is not destroyed as KVM  
> >>> doesn't  
> >>>> support vCPU removal. Therefore, its representative KVM vCPU  
> >>> object/context in  
> >>>> Qemu is parked.
> >>>>
> >>>> Refactor architecture common logic so that some APIs could be reused by  
> >>> vCPU  
> >>>> Hotplug code of some architectures likes ARM, Loongson etc. Update  
> >>> new/old APIs  
> >>>> with trace events. No functional change is intended here.
> >>>>
> >>>> Signed-off-by: Salil Mehta 
> >>>> Reviewed-by: Gavin Shan 
> >>>> Tested-by: Vishnu Pajjuri 
> >>>> Reviewed-by: Jonathan Cameron 
> >>>> Tested-by: Xianglai Li 
> >>>> Tested-by: Miguel Luis 
> >>>> Reviewed-by: Shaoqin Huang 
> >>>> Reviewed-by: Vishnu Pajjuri 
> >>>> Reviewed-by: Nicholas Piggin 
> >>>> Tested-by: Zhao Liu 
> >>>> Reviewed-by: Zhao Liu 
> >>>> Reviewed-by: Harsh Prateek Bora 
> >>>> ---
> >>>>   accel/kvm/kvm-all.c| 95 --
> >>>>   accel/kvm/kvm-cpus.h   |  1 -
> >>>>   accel/kvm/trace-events |  5 ++-
> >>>>   include/sysemu/kvm.h   | 25 +++
> >>>>   4 files changed, 92 insertions(+), 34 deletions(-)
> >>>>
> >>>> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> >>>> index c0be9f5eed..8f9128bb92 100644
> >>>> --- a/accel/kvm/kvm-all.c
> >>>> +++ b/accel/kvm/kvm-all.c
> >>>> @@ -340,14 +340,71 @@ err:
> >>>>   return ret;
> >>>>   }
> >>>>
> >>>> +void kvm_park_vcpu(CPUState *cpu)
> >>>> +{
> >>>> +struct KVMParkedVcpu *vcpu;
> >>>> +
> >>>> +trace_kvm_park_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
> >>>> +
> >>>> +vcpu = g_malloc0(sizeof(*vcpu));
> >>>> +vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
> >>>> +vcpu->kvm_fd = cpu->kvm_fd;
> >>>> +QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
> >>>> +}
> >>>> +
> >>>> +int kvm_unpark_vcpu(KVMState *s, unsigned long vcpu_id)
> >>>> +{
> >>>> +struct KVMParkedVcpu *cpu;
> >>>> +int kvm_fd = -ENOENT;
> >>>> +
> >>>> +QLIST_FOREACH(cpu, &s->kvm_parked_vcpus, node) {
> >>>> +if (cpu->vcpu_id == vcpu_id) {
> >>>> +QLIST_REMOVE(cpu, node);
> >>>> +kvm_fd = cpu->kvm_fd;
> >>>> +g_free(cpu);
> >>>> +}
> >>>> +}
> >>>> +
> >>>> +trace_kvm_unpark_vcpu(vcpu_id, kvm_fd > 0 ? "unparked" : "not found parked");
> >>>> +
> >>>> +return kvm_fd;
> >>>> +}
> >>>> +
> >>>> +int kvm_create_vcpu(CPUState *cpu)
> >>>> +{
> >>>> +unsigned long vcpu_id = kvm_arch_vcpu_id(cpu);
> >>>> +KVMState *s = kvm_state;
> >>>> +int kvm_fd;
> >>>> +
> >>>> +/* check if the KVM vCPU already exist but is parked */
> >>>> +kvm_fd = kvm_unpark_vcpu(s, vcpu_id);
> >>>> +if (kvm_fd < 0) {
> >>>> +/* vCPU not parked: create a new KVM vCPU */
> >>>> +kvm_fd = kvm_vm_ioctl(s, KVM_CREATE_VCPU, vcpu_id);
> >&g

Re: [PATCH V13 8/8] docs/specs/acpi_hw_reduced_hotplug: Add the CPU Hotplug Event Bit

2024-07-08 Thread Igor Mammedov
On Mon, 8 Jul 2024 05:32:28 +
Salil Mehta  wrote:

> On 06/07/2024 14:45, Igor Mammedov wrote:
> > On Fri, 7 Jun 2024 12:56:49 +0100
> > Salil Mehta  wrote:
> >  
> >> GED interface is used by many hotplug events like memory hotplug, NVDIMM 
> >> hotplug
> >> and non-hotplug events like system power down event. Each of these can be
> >> selected using a bit in the 32 bit GED IO interface. A bit has been 
> >> reserved for
> >> the CPU hotplug event.
> >>
> >> Signed-off-by: Salil Mehta 
> >> Reviewed-by: Gavin Shan 
> >> Tested-by: Zhao Liu   
> > suggest to squash this into the patch that introduces this bit [3/8]  
> 
> I thought we were introducing a change common to all architectures?

hw-reduced hotplug implies GED, so including the doc change
in the patch that introduces the bit in the code is the better
option.

It is also easier on the folks who come later and find the doc
and code in the same commit (which is easier to follow
than looking for separate commits in the git log).

> > Best, Salil.
> 
> >> ---
> >>   docs/specs/acpi_hw_reduced_hotplug.rst | 3 ++-
> >>   1 file changed, 2 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/docs/specs/acpi_hw_reduced_hotplug.rst 
> >> b/docs/specs/acpi_hw_reduced_hotplug.rst
> >> index 0bd3f9399f..3acd6fcd8b 100644
> >> --- a/docs/specs/acpi_hw_reduced_hotplug.rst
> >> +++ b/docs/specs/acpi_hw_reduced_hotplug.rst
> >> @@ -64,7 +64,8 @@ GED IO interface (4 byte access)
> >>  0: Memory hotplug event
> >>  1: System power down event
> >>  2: NVDIMM hotplug event
> >> -3-31: Reserved
> >> +   3: CPU hotplug event
> >> +4-31: Reserved
> >>   
> >>   **write_access:**
> >> 
> > :
> >  
> 




Re: [PATCH V13 5/8] hw/acpi: Update CPUs AML with cpu-(ctrl)dev change

2024-07-08 Thread Igor Mammedov
On Mon, 8 Jul 2024 05:26:00 +
Salil Mehta  wrote:

> On 06/07/2024 14:35, Igor Mammedov wrote:
> > On Fri, 7 Jun 2024 12:56:46 +0100
> > Salil Mehta  wrote:
> >  
> >> CPUs Control device(\\_SB.PCI0) register interface for the x86 arch is IO 
> >> port
> >> based and existing CPUs AML code assumes _CRS objects would evaluate to a 
> >> system
> >> resource which describes IO Port address. But on ARM arch CPUs control
> >> device(\\_SB.PRES) register interface is memory-mapped hence _CRS object 
> >> should
> >> evaluate to system resource which describes memory-mapped base address. 
> >> Update
> >> build CPUs AML function to accept both IO/MEMORY region spaces and 
> >> accordingly
> >> update the _CRS object.  
> > ack for above change  
> Thanks
> >
> >
> > but below part is one too many different changes within one patch.  
> > anyways, GPE part probably won't be needed if you follow suggestion made
> > on previous patch.  
> 
> The change mentioned in the earlier patches might end up creating
> noise for this patch-set as one will have to touch the Memory Hotplug
> part as well. I'm willing to do that change but I think it is a noise for
> this patch-set, really.

you don't have to touch memory hotplug,
but fixing it up (as a separate patch of course) to be consistent
with cpu hotplug would be nice.

> 
> > 
> >> On x86, CPU Hotplug uses Generic ACPI GPE Block Bit 2 (GPE.2) event 
> >> handler to
> >> notify OSPM about any CPU hot(un)plug events. Latest CPU Hotplug is based 
> >> on
> >> ACPI Generic Event Device framework and uses ACPI GED device for the same. 
> >> Not
> >> all architectures support GPE based CPU Hotplug event handler. Hence, make 
> >> AML
> >> for GPE.2 event handler conditional.
> >>
> >> Co-developed-by: Keqian Zhu 
> >> Signed-off-by: Keqian Zhu 
> >> Signed-off-by: Salil Mehta 
> >> Reviewed-by: Gavin Shan 
> >> Tested-by: Vishnu Pajjuri 
> >> Reviewed-by: Jonathan Cameron 
> >> Tested-by: Xianglai Li 
> >> Tested-by: Miguel Luis 
> >> Reviewed-by: Shaoqin Huang 
> >> Tested-by: Zhao Liu 
> >> ---
> >>   hw/acpi/cpu.c | 23 ---
> >>   hw/i386/acpi-build.c  |  3 ++-
> >>   include/hw/acpi/cpu.h |  5 +++--
> >>   3 files changed, 21 insertions(+), 10 deletions(-)
> >>
> >> diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
> >> index af2b6655d2..4c63514b16 100644
> >> --- a/hw/acpi/cpu.c
> >> +++ b/hw/acpi/cpu.c
> >> @@ -343,9 +343,10 @@ const VMStateDescription vmstate_cpu_hotplug = {
> >>   #define CPU_FW_EJECT_EVENT "CEJF"
> >>   
> >>   void build_cpus_aml(Aml *table, MachineState *machine, 
> >> CPUHotplugFeatures opts,
> >> -build_madt_cpu_fn build_madt_cpu, hwaddr io_base,
> >> +build_madt_cpu_fn build_madt_cpu, hwaddr base_addr,
> >>   const char *res_root,
> >> -const char *event_handler_method)
> >> +const char *event_handler_method,
> >> +AmlRegionSpace rs)
> >>   {
> >>   Aml *ifctx;
> >>   Aml *field;
> >> @@ -370,13 +371,19 @@ void build_cpus_aml(Aml *table, MachineState 
> >> *machine, CPUHotplugFeatures opts,
> >>   aml_append(cpu_ctrl_dev, aml_mutex(CPU_LOCK, 0));
> >>   
> >>   crs = aml_resource_template();
> >> -aml_append(crs, aml_io(AML_DECODE16, io_base, io_base, 1,
> >> +if (rs == AML_SYSTEM_IO) {
> >> +aml_append(crs, aml_io(AML_DECODE16, base_addr, base_addr, 1,
> >>  ACPI_CPU_HOTPLUG_REG_LEN));
> >> +} else {  
> > else
> >   if (rs == your type)  
> >> +aml_append(crs, aml_memory32_fixed(base_addr,
> >> +   ACPI_CPU_HOTPLUG_REG_LEN, AML_READ_WRITE));
> >> +}  
> > else assert on not supported input  
> 
> Sure, no problem. I can incorporate the change.
> 
> Thanks, Salil.
> 
> >  
> >> +
> >>   aml_append(cpu_ctrl_dev, aml_name_decl("_CRS", crs));
> >>   
> >>   /* declare CPU hotplug MMIO region with related access fields */
> >>   aml_append(cpu_ctrl_dev,
> >> -aml_operation_region("PRST", AML_SYSTEM_IO,

Re: [PATCH V13 4/8] hw/acpi: Update GED _EVT method AML with CPU scan

2024-07-08 Thread Igor Mammedov
On Mon, 8 Jul 2024 05:21:06 +
Salil Mehta  wrote:

> Hi Igor,
> 
> On 06/07/2024 14:28, Igor Mammedov wrote:
> > On Fri, 7 Jun 2024 12:56:45 +0100
> > Salil Mehta  wrote:
> >  
> >> OSPM evaluates _EVT method to map the event. The CPU hotplug event 
> >> eventually
> >> results in start of the CPU scan. Scan figures out the CPU and the kind of
> >> event(plug/unplug) and notifies it back to the guest. Update the GED AML 
> >> _EVT
> >> method with the call to \\_SB.CPUS.CSCN
> >>
> >> Also, macro CPU_SCAN_METHOD might be referred in other places like during 
> >> GED
> >> initialization so it makes sense to have its definition placed in some 
> >> common
> >> header file like cpu_hotplug.h. But doing this can cause compilation break
> >> because of the conflicting macro definitions present in cpu.c and 
> >> cpu_hotplug.c  
> > one of the reasons is that you reusing legacy hw/acpi/cpu_hotplug.h,
> > see below for suggestion.  
> ok
> >  
> >> and because both these files get compiled due to historic reasons of x86 
> >> world
> >> i.e. decision to use legacy(GPE.2)/modern(GED) CPU hotplug interface 
> >> happens
> >> during runtime [1]. To mitigate above, for now, declare a new common macro
> >> ACPI_CPU_SCAN_METHOD for CPU scan method instead.
> >> (This needs a separate discussion later on for clean-up)
> >>
> >> Reference:
> >> [1] 
> >> https://lore.kernel.org/qemu-devel/1463496205-251412-24-git-send-email-imamm...@redhat.com/
> >>
> >> Co-developed-by: Keqian Zhu 
> >> Signed-off-by: Keqian Zhu 
> >> Signed-off-by: Salil Mehta 
> >> Reviewed-by: Jonathan Cameron 
> >> Reviewed-by: Gavin Shan 
> >> Tested-by: Vishnu Pajjuri 
> >> Tested-by: Xianglai Li 
> >> Tested-by: Miguel Luis 
> >> Reviewed-by: Shaoqin Huang 
> >> Tested-by: Zhao Liu 
> >> ---
> >>   hw/acpi/cpu.c  | 2 +-
> >>   hw/acpi/generic_event_device.c | 4 
> >>   include/hw/acpi/cpu_hotplug.h  | 2 ++
> >>   3 files changed, 7 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
> >> index 473b37ba88..af2b6655d2 100644
> >> --- a/hw/acpi/cpu.c
> >> +++ b/hw/acpi/cpu.c
> >> @@ -327,7 +327,7 @@ const VMStateDescription vmstate_cpu_hotplug = {
> >>   #define CPUHP_RES_DEVICE  "PRES"
> >>   #define CPU_LOCK  "CPLK"
> >>   #define CPU_STS_METHOD"CSTA"
> >> -#define CPU_SCAN_METHOD   "CSCN"
> >> +#define CPU_SCAN_METHOD   ACPI_CPU_SCAN_METHOD
> >>   #define CPU_NOTIFY_METHOD "CTFY"
> >>   #define CPU_EJECT_METHOD  "CEJ0"
> >>   #define CPU_OST_METHOD"COST"
> >> diff --git a/hw/acpi/generic_event_device.c 
> >> b/hw/acpi/generic_event_device.c
> >> index 54d3b4bf9d..63226b0040 100644
> >> --- a/hw/acpi/generic_event_device.c
> >> +++ b/hw/acpi/generic_event_device.c
> >> @@ -109,6 +109,10 @@ void build_ged_aml(Aml *table, const char *name, 
> >> HotplugHandler *hotplug_dev,
> >>   aml_append(if_ctx, aml_call0(MEMORY_DEVICES_CONTAINER "."
> >>MEMORY_SLOT_SCAN_METHOD));
> >>   break;
> >> +case ACPI_GED_CPU_HOTPLUG_EVT:
> >> +aml_append(if_ctx, aml_call0(ACPI_CPU_CONTAINER "."
> >> + ACPI_CPU_SCAN_METHOD));  
> > I don't particularly like exposing cpu hotplug internals for outside code
> > and then making that code do plumbing hoping that nothing will explode
> > in the future.  
> 
> I understand your point but I've followed what was already existing.
> 
> For example,
> 
> build_dsdt()
> {
>     [...]
>      acpi_dsdt_add_uart(scope, &memmap[VIRT_UART],
>     (irqmap[VIRT_UART] + ARM_SPI_BASE));
>      if (vmc->acpi_expose_flash) {
>      acpi_dsdt_add_flash(scope, &memmap[VIRT_FLASH]);
>      }
>      fw_cfg_acpi_dsdt_add(scope, &memmap[VIRT_FW_CFG]);
>      virtio_acpi_dsdt_add(scope, memmap[VIRT_MMIO].base, 
> memmap[VIRT_MMIO].size,
>   (irqmap[VIRT_MMIO] + ARM_SPI_BASE),
>   0, NUM_VIRTIO_TRANSPORTS);
>      acpi_dsdt_add_pci(scope, memmap, irqmap[VIRT_PCIE] + ARM_SPI_BASE, 
> vms);
>      if (vms->acpi_dev) {
>      build_g

Re: [PATCH V13 3/8] hw/acpi: Update ACPI GED framework to support vCPU Hotplug

2024-07-08 Thread Igor Mammedov
On Mon, 8 Jul 2024 05:12:48 +
Salil Mehta  wrote:

> On 06/07/2024 13:46, Igor Mammedov wrote:
> > On Fri, 7 Jun 2024 12:56:44 +0100
> > Salil Mehta  wrote:
> >  
> >> ACPI GED (as described in the ACPI 6.4 spec) uses an interrupt listed in 
> >> the
> >> _CRS object of GED to intimate OSPM about an event. Later then 
> >> demultiplexes the
> >> notified event by evaluating ACPI _EVT method to know the type of event. 
> >> Use
> >> ACPI GED to also notify the guest kernel about any CPU hot(un)plug events.
> >>
> >> ACPI CPU hotplug related initialization should only happen if 
> >> ACPI_CPU_HOTPLUG
> >> support has been enabled for particular architecture. Add 
> >> cpu_hotplug_hw_init()
> >> stub to avoid compilation break.
> >>
> >> Co-developed-by: Keqian Zhu 
> >> Signed-off-by: Keqian Zhu 
> >> Signed-off-by: Salil Mehta 
> >> Reviewed-by: Jonathan Cameron 
> >> Reviewed-by: Gavin Shan 
> >> Reviewed-by: David Hildenbrand 
> >> Reviewed-by: Shaoqin Huang 
> >> Tested-by: Vishnu Pajjuri 
> >> Tested-by: Xianglai Li 
> >> Tested-by: Miguel Luis 
> >> Reviewed-by: Vishnu Pajjuri 
> >> Tested-by: Zhao Liu 
> >> Reviewed-by: Zhao Liu 
> >> ---
> >>   hw/acpi/acpi-cpu-hotplug-stub.c|  6 ++
> >>   hw/acpi/cpu.c  |  6 +-
> >>   hw/acpi/generic_event_device.c | 17 +
> >>   include/hw/acpi/generic_event_device.h |  4 
> >>   4 files changed, 32 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/hw/acpi/acpi-cpu-hotplug-stub.c 
> >> b/hw/acpi/acpi-cpu-hotplug-stub.c
> >> index 3fc4b14c26..c6c61bb9cd 100644
> >> --- a/hw/acpi/acpi-cpu-hotplug-stub.c
> >> +++ b/hw/acpi/acpi-cpu-hotplug-stub.c
> >> @@ -19,6 +19,12 @@ void legacy_acpi_cpu_hotplug_init(MemoryRegion *parent, 
> >> Object *owner,
> >>   return;
> >>   }
> >>   
> >> +void cpu_hotplug_hw_init(MemoryRegion *as, Object *owner,
> >> + CPUHotplugState *state, hwaddr base_addr)
> >> +{
> >> +return;
> >> +}
> >> +
> >>   void acpi_cpu_ospm_status(CPUHotplugState *cpu_st, ACPIOSTInfoList 
> >> ***list)
> >>   {
> >>   return;
> >> diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
> >> index 69aaa563db..473b37ba88 100644
> >> --- a/hw/acpi/cpu.c
> >> +++ b/hw/acpi/cpu.c
> >> @@ -221,7 +221,11 @@ void cpu_hotplug_hw_init(MemoryRegion *as, Object 
> >> *owner,
> >>   const CPUArchIdList *id_list;
> >>   int i;
> >>   
> >> -assert(mc->possible_cpu_arch_ids);
> >> +/* hotplug might not be available for all types like x86/microvm etc. 
> >> */
> >> +if (!mc->possible_cpu_arch_ids) {
> >> +return;
> >> +}  
> > if hotplug is not supported, this function shouldn't be called at all.  
> 
> True. But none the less this gets called for Intel/microvm and causes 
> qtest to fail.
> 
> I think, we've had this discussion before last year as well. Please 
> check below:
> 
> https://lore.kernel.org/qemu-devel/15e70616-6abb-63a4-17d0-820f4a254...@opnsrc.net/

And I see that I had the same objection, 
'
cpu_hotplug_hw_init() should not be called at initfn time,
but rather at realize time.
'


> >
> > [...]  
> >> @@ -400,6 +411,12 @@ static void acpi_ged_initfn(Object *obj)
> >>   memory_region_init_io(&ged_st->regs, obj, &ged_regs_ops, ged_st,
> >> TYPE_ACPI_GED "-regs", ACPI_GED_REG_COUNT);
> >>   sysbus_init_mmio(sbd, &ged_st->regs);
> >> +
> >> +memory_region_init(&s->container_cpuhp, OBJECT(dev), "cpuhp container",
> >> +   ACPI_CPU_HOTPLUG_REG_LEN);
> >> +sysbus_init_mmio(sbd, &s->container_cpuhp);
> >> +cpu_hotplug_hw_init(&s->container_cpuhp, OBJECT(dev),
> >> +&s->cpuhp_state, 0);  

> > suggest to move this call to realize time, and gate it on
> > ACPI_GED_CPU_HOTPLUG_EVT being set.
> > Platform that supports cpu hotplug must optin, setting 
> > ACPI_GED_CPU_HOTPLUG_EVT,
> > while for the rest it will be ignored.

which I've just suggested again ^^^.

> >
> > for example: create_acpi_ged() : event |= ACPI_GED_NVDIMM_HOTPLUG_EVT; 
 
> 
> Similar case applies to the Memory hotplug as well and 

Re: [PATCH V13 1/8] accel/kvm: Extract common KVM vCPU {creation,parking} code

2024-07-08 Thread Igor Mammedov
On Sat, 6 Jul 2024 15:43:01 +
Salil Mehta  wrote:

> Hi Igor,
> Thanks for taking out time to review.
> 
> On Sat, Jul 6, 2024 at 1:12 PM Igor Mammedov  wrote:
> 
> > On Fri, 7 Jun 2024 12:56:42 +0100
> > Salil Mehta  wrote:
> >  
> > > KVM vCPU creation is done once during the vCPU realization when Qemu  
> > vCPU thread  
> > > is spawned. This is common to all the architectures as of now.
> > >
> > > Hot-unplug of vCPU results in destruction of the vCPU object in QOM but  
> > the  
> > > corresponding KVM vCPU object in the Host KVM is not destroyed as KVM  
> > doesn't  
> > > support vCPU removal. Therefore, its representative KVM vCPU  
> > object/context in  
> > > Qemu is parked.
> > >
> > > Refactor architecture common logic so that some APIs could be reused by  
> > vCPU  
> > > Hotplug code of some architectures likes ARM, Loongson etc. Update  
> > new/old APIs  
> > > with trace events. No functional change is intended here.
> > >
> > > Signed-off-by: Salil Mehta 
> > > Reviewed-by: Gavin Shan 
> > > Tested-by: Vishnu Pajjuri 
> > > Reviewed-by: Jonathan Cameron 
> > > Tested-by: Xianglai Li 
> > > Tested-by: Miguel Luis 
> > > Reviewed-by: Shaoqin Huang 
> > > Reviewed-by: Vishnu Pajjuri 
> > > Reviewed-by: Nicholas Piggin 
> > > Tested-by: Zhao Liu 
> > > Reviewed-by: Zhao Liu 
> > > Reviewed-by: Harsh Prateek Bora 
> > > ---
> > >  accel/kvm/kvm-all.c| 95 --
> > >  accel/kvm/kvm-cpus.h   |  1 -
> > >  accel/kvm/trace-events |  5 ++-
> > >  include/sysemu/kvm.h   | 25 +++
> > >  4 files changed, 92 insertions(+), 34 deletions(-)
> > >
> > > diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> > > index c0be9f5eed..8f9128bb92 100644
> > > --- a/accel/kvm/kvm-all.c
> > > +++ b/accel/kvm/kvm-all.c
> > > @@ -340,14 +340,71 @@ err:
> > >  return ret;
> > >  }
> > >
> > > +void kvm_park_vcpu(CPUState *cpu)
> > > +{
> > > +struct KVMParkedVcpu *vcpu;
> > > +
> > > +trace_kvm_park_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
> > > +
> > > +vcpu = g_malloc0(sizeof(*vcpu));
> > > +vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
> > > +vcpu->kvm_fd = cpu->kvm_fd;
> > > +QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
> > > +}
> > > +
> > > +int kvm_unpark_vcpu(KVMState *s, unsigned long vcpu_id)
> > > +{
> > > +struct KVMParkedVcpu *cpu;
> > > +int kvm_fd = -ENOENT;
> > > +
> > > +QLIST_FOREACH(cpu, &s->kvm_parked_vcpus, node) {
> > > +if (cpu->vcpu_id == vcpu_id) {
> > > +QLIST_REMOVE(cpu, node);
> > > +kvm_fd = cpu->kvm_fd;
> > > +g_free(cpu);
> > > +}
> > > +}
> > > +
> > > +trace_kvm_unpark_vcpu(vcpu_id, kvm_fd > 0 ? "unparked" : "not found  
> > parked");  
> > > +
> > > +return kvm_fd;
> > > +}
> > > +
> > > +int kvm_create_vcpu(CPUState *cpu)
> > > +{
> > > +unsigned long vcpu_id = kvm_arch_vcpu_id(cpu);
> > > +KVMState *s = kvm_state;
> > > +int kvm_fd;
> > > +
> > > +/* check if the KVM vCPU already exist but is parked */
> > > +kvm_fd = kvm_unpark_vcpu(s, vcpu_id);
> > > +if (kvm_fd < 0) {
> > > +/* vCPU not parked: create a new KVM vCPU */
> > > +kvm_fd = kvm_vm_ioctl(s, KVM_CREATE_VCPU, vcpu_id);
> > > +if (kvm_fd < 0) {
> > > +error_report("KVM_CREATE_VCPU IOCTL failed for vCPU %lu",  
> > vcpu_id);  
> > > +return kvm_fd;
> > > +}
> > > +}
> > > +
> > > +cpu->kvm_fd = kvm_fd;
> > > +cpu->kvm_state = s;
> > > +cpu->vcpu_dirty = true;
> > > +cpu->dirty_pages = 0;
> > > +cpu->throttle_us_per_full = 0;
> > > +
> > > +trace_kvm_create_vcpu(cpu->cpu_index, vcpu_id, kvm_fd);
> > > +
> > > +return 0;
> > > +}  
> >
> > Is there any reason why you are embedding/hiding kvm_state in new API
> > instead of passing it as argument (all callers have it defined, so why not
> > reuse that)?
> >  
>

Re: [PATCH] i386/cpu: Drop the check of phys_bits in host_cpu_realizefn()

2024-07-06 Thread Igor Mammedov
On Thu,  4 Jul 2024 07:12:31 -0400
Xiaoyao Li  wrote:

> The check of cpu->phys_bits to be in range between
> [32, TARGET_PHYS_ADDR_SPACE_BITS] in host_cpu_realizefn()
> is duplicated with the check in x86_cpu_realizefn().
> 
> Since the check in x86_cpu_realizefn() is called later and can cover all
> the x86 cases, remove the one in host_cpu_realizefn().
> 
> Signed-off-by: Xiaoyao Li 

Reviewed-by: Igor Mammedov 

> ---
>  target/i386/host-cpu.c | 12 +---
>  1 file changed, 1 insertion(+), 11 deletions(-)
> 
> diff --git a/target/i386/host-cpu.c b/target/i386/host-cpu.c
> index 8b8bf5afeccf..b109c1a2221f 100644
> --- a/target/i386/host-cpu.c
> +++ b/target/i386/host-cpu.c
> @@ -75,17 +75,7 @@ bool host_cpu_realizefn(CPUState *cs, Error **errp)
>  CPUX86State *env = &cpu->env;
>  
>  if (env->features[FEAT_8000_0001_EDX] & CPUID_EXT2_LM) {
> -uint32_t phys_bits = host_cpu_adjust_phys_bits(cpu);
> -
> -if (phys_bits &&
> -(phys_bits > TARGET_PHYS_ADDR_SPACE_BITS ||
> - phys_bits < 32)) {
> -error_setg(errp, "phys-bits should be between 32 and %u "
> -   " (but is %u)",
> -   TARGET_PHYS_ADDR_SPACE_BITS, phys_bits);
> -return false;
> -}
> -cpu->phys_bits = phys_bits;
> +cpu->phys_bits = host_cpu_adjust_phys_bits(cpu);
>  }
>  return true;
>  }




Re: [PATCH V13 8/8] docs/specs/acpi_hw_reduced_hotplug: Add the CPU Hotplug Event Bit

2024-07-06 Thread Igor Mammedov
On Fri, 7 Jun 2024 12:56:49 +0100
Salil Mehta  wrote:

> GED interface is used by many hotplug events like memory hotplug, NVDIMM 
> hotplug
> and non-hotplug events like system power down event. Each of these can be
> selected using a bit in the 32 bit GED IO interface. A bit has been reserved 
> for
> the CPU hotplug event.
> 
> Signed-off-by: Salil Mehta 
> Reviewed-by: Gavin Shan 
> Tested-by: Zhao Liu 

suggest to squash this into the patch that introduces this bit [3/8]


> ---
>  docs/specs/acpi_hw_reduced_hotplug.rst | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/docs/specs/acpi_hw_reduced_hotplug.rst 
> b/docs/specs/acpi_hw_reduced_hotplug.rst
> index 0bd3f9399f..3acd6fcd8b 100644
> --- a/docs/specs/acpi_hw_reduced_hotplug.rst
> +++ b/docs/specs/acpi_hw_reduced_hotplug.rst
> @@ -64,7 +64,8 @@ GED IO interface (4 byte access)
> 0: Memory hotplug event
> 1: System power down event
> 2: NVDIMM hotplug event
> -3-31: Reserved
> +   3: CPU hotplug event
> +4-31: Reserved
>  
>  **write_access:**
>  





Re: [PATCH V13 7/8] gdbstub: Add helper function to unregister GDB register space

2024-07-06 Thread Igor Mammedov
On Fri, 7 Jun 2024 12:56:48 +0100
Salil Mehta  wrote:

> Add common function to help unregister the GDB register space. This shall be
> done in context to the CPU unrealization.
> 
> Note: These are common functions exported to arch specific code. For example,
> for ARM this code is being referred in associated arch specific patch-set:
> 
> Link: 
> https://lore.kernel.org/qemu-devel/20230926103654.34424-1-salil.me...@huawei.com/
> 
> Signed-off-by: Salil Mehta 
> Tested-by: Vishnu Pajjuri 
> Reviewed-by: Gavin Shan 
> Tested-by: Xianglai Li 
> Tested-by: Miguel Luis 
> Reviewed-by: Shaoqin Huang 
> Reviewed-by: Vishnu Pajjuri 
> Tested-by: Zhao Liu 
> ---
>  gdbstub/gdbstub.c  | 13 +
>  hw/core/cpu-common.c   |  1 -
>  include/exec/gdbstub.h |  6 ++
>  3 files changed, 19 insertions(+), 1 deletion(-)
> 
> diff --git a/gdbstub/gdbstub.c b/gdbstub/gdbstub.c
> index b3574997ea..1949b09240 100644
> --- a/gdbstub/gdbstub.c
> +++ b/gdbstub/gdbstub.c
> @@ -617,6 +617,19 @@ void gdb_register_coprocessor(CPUState *cpu,
>  }
>  }
>  
> +void gdb_unregister_coprocessor_all(CPUState *cpu)
> +{
> +/*
> + * Safe to nuke everything. GDBRegisterState::xml is static const char so
> + * it won't be freed
> + */
> +g_array_free(cpu->gdb_regs, true);
> +
> +cpu->gdb_regs = NULL;
> +cpu->gdb_num_regs = 0;
> +cpu->gdb_num_g_regs = 0;
> +}
> +
>  static void gdb_process_breakpoint_remove_all(GDBProcess *p)
>  {
>  CPUState *cpu = gdb_get_first_cpu_in_process(p);
> diff --git a/hw/core/cpu-common.c b/hw/core/cpu-common.c
> index 0f0a247f56..e5140b4bc1 100644
> --- a/hw/core/cpu-common.c
> +++ b/hw/core/cpu-common.c
> @@ -274,7 +274,6 @@ static void cpu_common_finalize(Object *obj)
>  {
>  CPUState *cpu = CPU(obj);
>  
> -g_array_free(cpu->gdb_regs, TRUE);

so free() is gone but the new gdb_unregister_coprocessor_all() ain't called,
are we starting to leak some memory here?

>  qemu_lockcnt_destroy(&cpu->in_ioctl_lock);
>  qemu_mutex_destroy(&cpu->work_mutex);
>  }
> diff --git a/include/exec/gdbstub.h b/include/exec/gdbstub.h
> index eb14b91139..249d4d4bc8 100644
> --- a/include/exec/gdbstub.h
> +++ b/include/exec/gdbstub.h
> @@ -49,6 +49,12 @@ void gdb_register_coprocessor(CPUState *cpu,
>gdb_get_reg_cb get_reg, gdb_set_reg_cb set_reg,
>const GDBFeature *feature, int g_pos);
>  
> +/**
> + * gdb_unregister_coprocessor_all() - unregisters supplemental set of 
> registers
> + * @cpu - the CPU associated with registers
> + */
> +void gdb_unregister_coprocessor_all(CPUState *cpu);
> +
>  /**
>   * gdbserver_start: start the gdb server
>   * @port_or_device: connection spec for gdb




Re: [PATCH V13 6/8] physmem: Add helper function to destroy CPU AddressSpace

2024-07-06 Thread Igor Mammedov
On Fri, 7 Jun 2024 12:56:47 +0100
Salil Mehta  wrote:

> Virtual CPU Hot-unplug leads to unrealization of a CPU object. This also
> involves destruction of the CPU AddressSpace. Add common function to help
> destroy the CPU AddressSpace.
> 
> Signed-off-by: Salil Mehta 
> Tested-by: Vishnu Pajjuri 
> Reviewed-by: Gavin Shan 
> Tested-by: Xianglai Li 
> Tested-by: Miguel Luis 
> Reviewed-by: Shaoqin Huang 
> Tested-by: Zhao Liu 


Acked-by: Igor Mammedov 

> ---
>  include/exec/cpu-common.h |  8 
>  include/hw/core/cpu.h |  1 +
>  system/physmem.c  | 29 +
>  3 files changed, 38 insertions(+)
> 
> diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
> index 815342d043..240ee04369 100644
> --- a/include/exec/cpu-common.h
> +++ b/include/exec/cpu-common.h
> @@ -129,6 +129,14 @@ size_t qemu_ram_pagesize_largest(void);
>   */
>  void cpu_address_space_init(CPUState *cpu, int asidx,
>  const char *prefix, MemoryRegion *mr);
> +/**
> + * cpu_address_space_destroy:
> + * @cpu: CPU for which address space needs to be destroyed
> + * @asidx: integer index of this address space
> + *
> + * Note that with KVM only one address space is supported.
> + */
> +void cpu_address_space_destroy(CPUState *cpu, int asidx);
>  
>  void cpu_physical_memory_rw(hwaddr addr, void *buf,
>  hwaddr len, bool is_write);
> diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
> index bb398e8237..60b160d0b4 100644
> --- a/include/hw/core/cpu.h
> +++ b/include/hw/core/cpu.h
> @@ -486,6 +486,7 @@ struct CPUState {
>  QSIMPLEQ_HEAD(, qemu_work_item) work_list;
>  
>  struct CPUAddressSpace *cpu_ases;
> +int cpu_ases_count;
>  int num_ases;
>  AddressSpace *as;
>  MemoryRegion *memory;
> diff --git a/system/physmem.c b/system/physmem.c
> index 342b7a8fd4..146f17826a 100644
> --- a/system/physmem.c
> +++ b/system/physmem.c
> @@ -763,6 +763,7 @@ void cpu_address_space_init(CPUState *cpu, int asidx,
>  
>  if (!cpu->cpu_ases) {
>  cpu->cpu_ases = g_new0(CPUAddressSpace, cpu->num_ases);
> +cpu->cpu_ases_count = cpu->num_ases;
>  }
>  
>  newas = &cpu->cpu_ases[asidx];
> @@ -776,6 +777,34 @@ void cpu_address_space_init(CPUState *cpu, int asidx,
>  }
>  }
>  
> +void cpu_address_space_destroy(CPUState *cpu, int asidx)
> +{
> +CPUAddressSpace *cpuas;
> +
> +assert(cpu->cpu_ases);
> +assert(asidx >= 0 && asidx < cpu->num_ases);
> +/* KVM cannot currently support multiple address spaces. */
> +assert(asidx == 0 || !kvm_enabled());
> +
> +cpuas = &cpu->cpu_ases[asidx];
> +if (tcg_enabled()) {
> +memory_listener_unregister(&cpuas->tcg_as_listener);
> +}
> +
> +address_space_destroy(cpuas->as);
> +g_free_rcu(cpuas->as, rcu);
> +
> +if (asidx == 0) {
> +/* reset the convenience alias for address space 0 */
> +cpu->as = NULL;
> +}
> +
> +if (--cpu->cpu_ases_count == 0) {
> +g_free(cpu->cpu_ases);
> +cpu->cpu_ases = NULL;
> +}
> +}
> +
>  AddressSpace *cpu_get_address_space(CPUState *cpu, int asidx)
>  {
>  /* Return the AddressSpace corresponding to the specified index */




Re: [PATCH V13 5/8] hw/acpi: Update CPUs AML with cpu-(ctrl)dev change

2024-07-06 Thread Igor Mammedov
On Fri, 7 Jun 2024 12:56:46 +0100
Salil Mehta  wrote:

> CPUs Control device(\\_SB.PCI0) register interface for the x86 arch is IO port
> based and existing CPUs AML code assumes _CRS objects would evaluate to a 
> system
> resource which describes IO Port address. But on ARM arch CPUs control
> device(\\_SB.PRES) register interface is memory-mapped hence _CRS object 
> should
> evaluate to system resource which describes memory-mapped base address. Update
> build CPUs AML function to accept both IO/MEMORY region spaces and accordingly
> update the _CRS object.
ack for above change


but below part is one too many different changes within one patch.
anyways, GPE part probably won't be needed if you follow suggestion made
on previous patch.
 
> On x86, CPU Hotplug uses Generic ACPI GPE Block Bit 2 (GPE.2) event handler to
> notify OSPM about any CPU hot(un)plug events. Latest CPU Hotplug is based on
> ACPI Generic Event Device framework and uses ACPI GED device for the same. Not
> all architectures support GPE based CPU Hotplug event handler. Hence, make AML
> for GPE.2 event handler conditional.
> 
> Co-developed-by: Keqian Zhu 
> Signed-off-by: Keqian Zhu 
> Signed-off-by: Salil Mehta 
> Reviewed-by: Gavin Shan 
> Tested-by: Vishnu Pajjuri 
> Reviewed-by: Jonathan Cameron 
> Tested-by: Xianglai Li 
> Tested-by: Miguel Luis 
> Reviewed-by: Shaoqin Huang 
> Tested-by: Zhao Liu 
> ---
>  hw/acpi/cpu.c | 23 ---
>  hw/i386/acpi-build.c  |  3 ++-
>  include/hw/acpi/cpu.h |  5 +++--
>  3 files changed, 21 insertions(+), 10 deletions(-)
> 
> diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
> index af2b6655d2..4c63514b16 100644
> --- a/hw/acpi/cpu.c
> +++ b/hw/acpi/cpu.c
> @@ -343,9 +343,10 @@ const VMStateDescription vmstate_cpu_hotplug = {
>  #define CPU_FW_EJECT_EVENT "CEJF"
>  
>  void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures 
> opts,
> -build_madt_cpu_fn build_madt_cpu, hwaddr io_base,
> +build_madt_cpu_fn build_madt_cpu, hwaddr base_addr,
>  const char *res_root,
> -const char *event_handler_method)
> +const char *event_handler_method,
> +AmlRegionSpace rs)
>  {
>  Aml *ifctx;
>  Aml *field;
> @@ -370,13 +371,19 @@ void build_cpus_aml(Aml *table, MachineState *machine, 
> CPUHotplugFeatures opts,
>  aml_append(cpu_ctrl_dev, aml_mutex(CPU_LOCK, 0));
>  
>  crs = aml_resource_template();
> -aml_append(crs, aml_io(AML_DECODE16, io_base, io_base, 1,
> +if (rs == AML_SYSTEM_IO) {
> +aml_append(crs, aml_io(AML_DECODE16, base_addr, base_addr, 1,
> ACPI_CPU_HOTPLUG_REG_LEN));
> +} else {

else
 if (rs == your type)
> +aml_append(crs, aml_memory32_fixed(base_addr,
> +   ACPI_CPU_HOTPLUG_REG_LEN, AML_READ_WRITE));
> +}
else assert on not supported input

> +
>  aml_append(cpu_ctrl_dev, aml_name_decl("_CRS", crs));
>  
>  /* declare CPU hotplug MMIO region with related access fields */
>  aml_append(cpu_ctrl_dev,
> -aml_operation_region("PRST", AML_SYSTEM_IO, aml_int(io_base),
> +aml_operation_region("PRST", rs, aml_int(base_addr),
>   ACPI_CPU_HOTPLUG_REG_LEN));
>  
>  field = aml_field("PRST", AML_BYTE_ACC, AML_NOLOCK,
> @@ -700,9 +707,11 @@ void build_cpus_aml(Aml *table, MachineState *machine, 
> CPUHotplugFeatures opts,
>  aml_append(sb_scope, cpus_dev);
>  aml_append(table, sb_scope);
>  
> -method = aml_method(event_handler_method, 0, AML_NOTSERIALIZED);
> -aml_append(method, aml_call0("\\_SB.CPUS." CPU_SCAN_METHOD));
> -aml_append(table, method);
> +if (event_handler_method) {
> +method = aml_method(event_handler_method, 0, AML_NOTSERIALIZED);
> +aml_append(method, aml_call0("\\_SB.CPUS." CPU_SCAN_METHOD));
> +aml_append(table, method);
> +}
>  
>  g_free(cphp_res_path);
>  }
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index 53f804ac16..b73b136605 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -1537,7 +1537,8 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>  .fw_unplugs_cpu = pm->smi_on_cpu_unplug,
>  };
>  build_cpus_aml(dsdt, machine, opts, pc_madt_cpu_entry,
> -   pm->cpu_hp_io_base, "\\_SB.PCI0", "\\_GPE._E02");
> +   pm->cpu_hp_io_base, "\\_SB.PCI0", "\\_GPE._E02",
> +   AML_SYSTEM_IO);
>  }
>  
>  if (pcms->memhp_io_base && nr_mem) {
> diff --git a/include/hw/acpi/cpu.h b/include/hw/acpi/cpu.h
> index e6e1a9ef59..48cded697c 100644
> --- a/include/hw/acpi/cpu.h
> +++ b/include/hw/acpi/cpu.h
> @@ -61,9 +61,10 @@ typedef void (*build_madt_cpu_fn)(int uid, const 
> CPUArchIdList *apic_ids,
> 

Re: [PATCH V13 4/8] hw/acpi: Update GED _EVT method AML with CPU scan

2024-07-06 Thread Igor Mammedov
On Fri, 7 Jun 2024 12:56:45 +0100
Salil Mehta  wrote:

> OSPM evaluates _EVT method to map the event. The CPU hotplug event eventually
> results in start of the CPU scan. Scan figures out the CPU and the kind of
> event(plug/unplug) and notifies it back to the guest. Update the GED AML _EVT
> method with the call to \\_SB.CPUS.CSCN
> 
> Also, macro CPU_SCAN_METHOD might be referred in other places like during GED
> intialization so it makes sense to have its definition placed in some common
> header file like cpu_hotplug.h. But doing this can cause compilation break
> because of the conflicting macro definitions present in cpu.c and 
> cpu_hotplug.c

one of the reasons is that you reusing legacy hw/acpi/cpu_hotplug.h,
see below for suggestion.

> and because both these files get compiled due to historic reasons of x86 world
> i.e. decision to use legacy(GPE.2)/modern(GED) CPU hotplug interface happens
> during runtime [1]. To mitigate above, for now, declare a new common macro
> ACPI_CPU_SCAN_METHOD for CPU scan method instead.
> (This needs a separate discussion later on for clean-up)
> 
> Reference:
> [1] 
> https://lore.kernel.org/qemu-devel/1463496205-251412-24-git-send-email-imamm...@redhat.com/
> 
> Co-developed-by: Keqian Zhu 
> Signed-off-by: Keqian Zhu 
> Signed-off-by: Salil Mehta 
> Reviewed-by: Jonathan Cameron 
> Reviewed-by: Gavin Shan 
> Tested-by: Vishnu Pajjuri 
> Tested-by: Xianglai Li 
> Tested-by: Miguel Luis 
> Reviewed-by: Shaoqin Huang 
> Tested-by: Zhao Liu 
> ---
>  hw/acpi/cpu.c  | 2 +-
>  hw/acpi/generic_event_device.c | 4 
>  include/hw/acpi/cpu_hotplug.h  | 2 ++
>  3 files changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
> index 473b37ba88..af2b6655d2 100644
> --- a/hw/acpi/cpu.c
> +++ b/hw/acpi/cpu.c
> @@ -327,7 +327,7 @@ const VMStateDescription vmstate_cpu_hotplug = {
>  #define CPUHP_RES_DEVICE  "PRES"
>  #define CPU_LOCK  "CPLK"
>  #define CPU_STS_METHOD"CSTA"
> -#define CPU_SCAN_METHOD   "CSCN"
> +#define CPU_SCAN_METHOD   ACPI_CPU_SCAN_METHOD
>  #define CPU_NOTIFY_METHOD "CTFY"
>  #define CPU_EJECT_METHOD  "CEJ0"
>  #define CPU_OST_METHOD"COST"
> diff --git a/hw/acpi/generic_event_device.c b/hw/acpi/generic_event_device.c
> index 54d3b4bf9d..63226b0040 100644
> --- a/hw/acpi/generic_event_device.c
> +++ b/hw/acpi/generic_event_device.c
> @@ -109,6 +109,10 @@ void build_ged_aml(Aml *table, const char *name, 
> HotplugHandler *hotplug_dev,
>  aml_append(if_ctx, aml_call0(MEMORY_DEVICES_CONTAINER "."
>   MEMORY_SLOT_SCAN_METHOD));
>  break;
> +case ACPI_GED_CPU_HOTPLUG_EVT:
> +aml_append(if_ctx, aml_call0(ACPI_CPU_CONTAINER "."
> + ACPI_CPU_SCAN_METHOD));

I don't particularly like exposing cpu hotplug internals to outside code
and then making that code do plumbing, hoping that nothing will explode
in the future.

build_cpus_aml() takes event_handler_method to create a method that
can be called by the platform. What I suggest is to call that method here
instead of trying to expose CPU hotplug internals and manually building
the call path here.
aka:
  build_cpus_aml(event_handler_method = PATH_TO_GED_DEVICE.CSCN)
and then call here 
  aml_append(if_ctx, aml_call0(CSCN));
which will call CSCN in the GED scope, that was populated by
build_cpus_aml() to do the cpu scan properly, without needing to expose
cpu hotplug internal names and then trying to fix up conflicts caused by that.

PS:
we should do the same for memory hotplug, as we see in the context above


> +break;
>  case ACPI_GED_PWR_DOWN_EVT:
>  aml_append(if_ctx,
> aml_notify(aml_name(ACPI_POWER_BUTTON_DEVICE),
> diff --git a/include/hw/acpi/cpu_hotplug.h b/include/hw/acpi/cpu_hotplug.h
> index 48b291e45e..ef631750b4 100644
> --- a/include/hw/acpi/cpu_hotplug.h
> +++ b/include/hw/acpi/cpu_hotplug.h
> @@ -20,6 +20,8 @@
>  #include "hw/acpi/cpu.h"
>  
>  #define ACPI_CPU_HOTPLUG_REG_LEN 12
> +#define ACPI_CPU_SCAN_METHOD "CSCN"
> +#define ACPI_CPU_CONTAINER "\\_SB.CPUS"
>  
>  typedef struct AcpiCpuHotplug {
>  Object *device;




Re: [PATCH V13 3/8] hw/acpi: Update ACPI GED framework to support vCPU Hotplug

2024-07-06 Thread Igor Mammedov
On Fri, 7 Jun 2024 12:56:44 +0100
Salil Mehta  wrote:

> ACPI GED (as described in the ACPI 6.4 spec) uses an interrupt listed in the
> _CRS object of GED to intimate OSPM about an event. OSPM then demultiplexes
> the notified event by evaluating the ACPI _EVT method to know the type of
> event. Use ACPI GED to also notify the guest kernel about any CPU
> hot(un)plug events.
> 
> ACPI CPU hotplug related initialization should only happen if ACPI_CPU_HOTPLUG
> support has been enabled for a particular architecture. Add a
> cpu_hotplug_hw_init() stub to avoid a compilation break.
> 
> Co-developed-by: Keqian Zhu 
> Signed-off-by: Keqian Zhu 
> Signed-off-by: Salil Mehta 
> Reviewed-by: Jonathan Cameron 
> Reviewed-by: Gavin Shan 
> Reviewed-by: David Hildenbrand 
> Reviewed-by: Shaoqin Huang 
> Tested-by: Vishnu Pajjuri 
> Tested-by: Xianglai Li 
> Tested-by: Miguel Luis 
> Reviewed-by: Vishnu Pajjuri 
> Tested-by: Zhao Liu 
> Reviewed-by: Zhao Liu 
> ---
>  hw/acpi/acpi-cpu-hotplug-stub.c|  6 ++
>  hw/acpi/cpu.c  |  6 +-
>  hw/acpi/generic_event_device.c | 17 +
>  include/hw/acpi/generic_event_device.h |  4 
>  4 files changed, 32 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/acpi/acpi-cpu-hotplug-stub.c b/hw/acpi/acpi-cpu-hotplug-stub.c
> index 3fc4b14c26..c6c61bb9cd 100644
> --- a/hw/acpi/acpi-cpu-hotplug-stub.c
> +++ b/hw/acpi/acpi-cpu-hotplug-stub.c
> @@ -19,6 +19,12 @@ void legacy_acpi_cpu_hotplug_init(MemoryRegion *parent, 
> Object *owner,
>  return;
>  }
>  
> +void cpu_hotplug_hw_init(MemoryRegion *as, Object *owner,
> + CPUHotplugState *state, hwaddr base_addr)
> +{
> +return;
> +}
> +
>  void acpi_cpu_ospm_status(CPUHotplugState *cpu_st, ACPIOSTInfoList ***list)
>  {
>  return;
> diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
> index 69aaa563db..473b37ba88 100644
> --- a/hw/acpi/cpu.c
> +++ b/hw/acpi/cpu.c
> @@ -221,7 +221,11 @@ void cpu_hotplug_hw_init(MemoryRegion *as, Object *owner,
>  const CPUArchIdList *id_list;
>  int i;
>  
> -assert(mc->possible_cpu_arch_ids);
> +/* hotplug might not be available for all types like x86/microvm etc. */
> +if (!mc->possible_cpu_arch_ids) {
> +return;
> +}

if hotplug is not supported, this function shouldn't be called at all.

[...]
> @@ -400,6 +411,12 @@ static void acpi_ged_initfn(Object *obj)
>  memory_region_init_io(&ged_st->regs, obj, &ged_regs_ops, ged_st,
>TYPE_ACPI_GED "-regs", ACPI_GED_REG_COUNT);
>  sysbus_init_mmio(sbd, &ged_st->regs);
> +
> +memory_region_init(&s->container_cpuhp, OBJECT(dev), "cpuhp container",
> +   ACPI_CPU_HOTPLUG_REG_LEN);
> +sysbus_init_mmio(sbd, &s->container_cpuhp);

> +cpu_hotplug_hw_init(&s->container_cpuhp, OBJECT(dev),
> +&s->cpuhp_state, 0);

I suggest moving this call to realize time and gating it on
ACPI_GED_CPU_HOTPLUG_EVT being set.
A platform that supports CPU hotplug must opt in by setting
ACPI_GED_CPU_HOTPLUG_EVT, while for the rest it will be ignored.

for example: create_acpi_ged() : event |= ACPI_GED_NVDIMM_HOTPLUG_EVT;

>  }
>  
>  static void acpi_ged_class_init(ObjectClass *class, void *data)
> diff --git a/include/hw/acpi/generic_event_device.h 
> b/include/hw/acpi/generic_event_device.h
> index ba84ce0214..90fc41cbb8 100644
> --- a/include/hw/acpi/generic_event_device.h
> +++ b/include/hw/acpi/generic_event_device.h
> @@ -60,6 +60,7 @@
>  #define HW_ACPI_GENERIC_EVENT_DEVICE_H
>  
>  #include "hw/sysbus.h"
> +#include "hw/acpi/cpu_hotplug.h"
>  #include "hw/acpi/memory_hotplug.h"
>  #include "hw/acpi/ghes.h"
>  #include "qom/object.h"
> @@ -95,6 +96,7 @@ OBJECT_DECLARE_SIMPLE_TYPE(AcpiGedState, ACPI_GED)
>  #define ACPI_GED_MEM_HOTPLUG_EVT   0x1
>  #define ACPI_GED_PWR_DOWN_EVT  0x2
>  #define ACPI_GED_NVDIMM_HOTPLUG_EVT 0x4
> +#define ACPI_GED_CPU_HOTPLUG_EVT0x8
>  
>  typedef struct GEDState {
>  MemoryRegion evt;
> @@ -106,6 +108,8 @@ struct AcpiGedState {
>  SysBusDevice parent_obj;
>  MemHotplugState memhp_state;
>  MemoryRegion container_memhp;
> +CPUHotplugState cpuhp_state;
> +MemoryRegion container_cpuhp;
>  GEDState ged_state;
>  uint32_t ged_event_bitmap;
>  qemu_irq irq;




Re: [PATCH V13 2/8] hw/acpi: Move CPU ctrl-dev MMIO region len macro to common header file

2024-07-06 Thread Igor Mammedov
On Fri, 7 Jun 2024 12:56:43 +0100
Salil Mehta  wrote:

> CPU ctrl-dev MMIO region length could be used in ACPI GED and various other
> architecture specific places. Move ACPI_CPU_HOTPLUG_REG_LEN macro to more
> appropriate common header file.
> 
> Signed-off-by: Salil Mehta 
> Reviewed-by: Alex Bennée 
> Reviewed-by: Jonathan Cameron 
> Reviewed-by: Gavin Shan 
> Reviewed-by: David Hildenbrand 
> Reviewed-by: Shaoqin Huang 
> Tested-by: Vishnu Pajjuri 
> Tested-by: Xianglai Li 
> Tested-by: Miguel Luis 
> Tested-by: Zhao Liu 
> Reviewed-by: Zhao Liu 
> ---
>  hw/acpi/cpu.c | 2 +-
>  include/hw/acpi/cpu_hotplug.h | 2 ++
>  2 files changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
> index 2d81c1e790..69aaa563db 100644
> --- a/hw/acpi/cpu.c
> +++ b/hw/acpi/cpu.c
> @@ -1,13 +1,13 @@
>  #include "qemu/osdep.h"
>  #include "migration/vmstate.h"
>  #include "hw/acpi/cpu.h"
> +#include "hw/acpi/cpu_hotplug.h"
>  #include "hw/core/cpu.h"
>  #include "qapi/error.h"
>  #include "qapi/qapi-events-acpi.h"
>  #include "trace.h"
>  #include "sysemu/numa.h"
>  
> -#define ACPI_CPU_HOTPLUG_REG_LEN 12
>  #define ACPI_CPU_SELECTOR_OFFSET_WR 0
>  #define ACPI_CPU_FLAGS_OFFSET_RW 4
>  #define ACPI_CPU_CMD_OFFSET_WR 5
> diff --git a/include/hw/acpi/cpu_hotplug.h b/include/hw/acpi/cpu_hotplug.h
> index 3b932a..48b291e45e 100644
> --- a/include/hw/acpi/cpu_hotplug.h
> +++ b/include/hw/acpi/cpu_hotplug.h

this file has deps on x86 machine and contains mainly
legacy CPU hotplug API for x86. 

> @@ -19,6 +19,8 @@
>  #include "hw/hotplug.h"
>  #include "hw/acpi/cpu.h"
>  
> +#define ACPI_CPU_HOTPLUG_REG_LEN 12

the better place for it would be include/hw/acpi/cpu.h

>  typedef struct AcpiCpuHotplug {
>  Object *device;
>  MemoryRegion io;




Re: [PATCH V13 1/8] accel/kvm: Extract common KVM vCPU {creation,parking} code

2024-07-06 Thread Igor Mammedov
On Fri, 7 Jun 2024 12:56:42 +0100
Salil Mehta  wrote:

> KVM vCPU creation is done once during the vCPU realization when Qemu vCPU 
> thread
> is spawned. This is common to all the architectures as of now.
> 
> Hot-unplug of vCPU results in destruction of the vCPU object in QOM but the
> corresponding KVM vCPU object in the Host KVM is not destroyed as KVM doesn't
> support vCPU removal. Therefore, its representative KVM vCPU object/context in
> Qemu is parked.
> 
> Refactor architecture common logic so that some APIs could be reused by vCPU
> Hotplug code of some architectures like ARM, Loongson etc. Update new/old
> APIs with trace events. No functional change is intended here.
> 
> Signed-off-by: Salil Mehta 
> Reviewed-by: Gavin Shan 
> Tested-by: Vishnu Pajjuri 
> Reviewed-by: Jonathan Cameron 
> Tested-by: Xianglai Li 
> Tested-by: Miguel Luis 
> Reviewed-by: Shaoqin Huang 
> Reviewed-by: Vishnu Pajjuri 
> Reviewed-by: Nicholas Piggin 
> Tested-by: Zhao Liu 
> Reviewed-by: Zhao Liu 
> Reviewed-by: Harsh Prateek Bora 
> ---
>  accel/kvm/kvm-all.c| 95 --
>  accel/kvm/kvm-cpus.h   |  1 -
>  accel/kvm/trace-events |  5 ++-
>  include/sysemu/kvm.h   | 25 +++
>  4 files changed, 92 insertions(+), 34 deletions(-)
> 
> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> index c0be9f5eed..8f9128bb92 100644
> --- a/accel/kvm/kvm-all.c
> +++ b/accel/kvm/kvm-all.c
> @@ -340,14 +340,71 @@ err:
>  return ret;
>  }
>  
> +void kvm_park_vcpu(CPUState *cpu)
> +{
> +struct KVMParkedVcpu *vcpu;
> +
> +trace_kvm_park_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
> +
> +vcpu = g_malloc0(sizeof(*vcpu));
> +vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
> +vcpu->kvm_fd = cpu->kvm_fd;
> +QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
> +}
> +
> +int kvm_unpark_vcpu(KVMState *s, unsigned long vcpu_id)
> +{
> +struct KVMParkedVcpu *cpu;
> +int kvm_fd = -ENOENT;
> +
> +QLIST_FOREACH(cpu, &s->kvm_parked_vcpus, node) {
> +if (cpu->vcpu_id == vcpu_id) {
> +QLIST_REMOVE(cpu, node);
> +kvm_fd = cpu->kvm_fd;
> +g_free(cpu);
> +}
> +}
> +
> +trace_kvm_unpark_vcpu(vcpu_id, kvm_fd > 0 ? "unparked" : "not found 
> parked");
> +
> +return kvm_fd;
> +}
> +
> +int kvm_create_vcpu(CPUState *cpu)
> +{
> +unsigned long vcpu_id = kvm_arch_vcpu_id(cpu);
> +KVMState *s = kvm_state;
> +int kvm_fd;
> +
> +/* check if the KVM vCPU already exist but is parked */
> +kvm_fd = kvm_unpark_vcpu(s, vcpu_id);
> +if (kvm_fd < 0) {
> +/* vCPU not parked: create a new KVM vCPU */
> +kvm_fd = kvm_vm_ioctl(s, KVM_CREATE_VCPU, vcpu_id);
> +if (kvm_fd < 0) {
> +error_report("KVM_CREATE_VCPU IOCTL failed for vCPU %lu", 
> vcpu_id);
> +return kvm_fd;
> +}
> +}
> +
> +cpu->kvm_fd = kvm_fd;
> +cpu->kvm_state = s;
> +cpu->vcpu_dirty = true;
> +cpu->dirty_pages = 0;
> +cpu->throttle_us_per_full = 0;
> +
> +trace_kvm_create_vcpu(cpu->cpu_index, vcpu_id, kvm_fd);
> +
> +return 0;
> +}

Is there any reason why you are embedding/hiding kvm_state in the new API
instead of passing it as an argument (all callers have it defined, so why
not reuse that)?

otherwise patch lgtm 

> +
>  static int do_kvm_destroy_vcpu(CPUState *cpu)
>  {
>  KVMState *s = kvm_state;
>  long mmap_size;
> -struct KVMParkedVcpu *vcpu = NULL;
>  int ret = 0;
>  
> -trace_kvm_destroy_vcpu();
> +trace_kvm_destroy_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
>  
>  ret = kvm_arch_destroy_vcpu(cpu);
>  if (ret < 0) {
> @@ -373,10 +430,7 @@ static int do_kvm_destroy_vcpu(CPUState *cpu)
>  }
>  }
>  
> -vcpu = g_malloc0(sizeof(*vcpu));
> -vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
> -vcpu->kvm_fd = cpu->kvm_fd;
> -QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
> +kvm_park_vcpu(cpu);
>  err:
>  return ret;
>  }
> @@ -389,24 +443,6 @@ void kvm_destroy_vcpu(CPUState *cpu)
>  }
>  }
>  
> -static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id)
> -{
> -struct KVMParkedVcpu *cpu;
> -
> -QLIST_FOREACH(cpu, &s->kvm_parked_vcpus, node) {
> -if (cpu->vcpu_id == vcpu_id) {
> -int kvm_fd;
> -
> -QLIST_REMOVE(cpu, node);
> -kvm_fd = cpu->kvm_fd;
> -g_free(cpu);
> -return kvm_fd;
> -}
> -}
> -
> -return kvm_vm_ioctl(s, KVM_CREATE_VCPU, (void *)vcpu_id);
> -}
> -
>  int kvm_init_vcpu(CPUState *cpu, Error **errp)
>  {
>  KVMState *s = kvm_state;
> @@ -415,19 +451,14 @@ int kvm_init_vcpu(CPUState *cpu, Error **errp)
>  
>  trace_kvm_init_vcpu(cpu->cpu_index, kvm_arch_vcpu_id(cpu));
>  
> -ret = kvm_get_vcpu(s, kvm_arch_vcpu_id(cpu));
> +ret = kvm_create_vcpu(cpu);
>  if (ret < 0) {
> -error_setg_errno(errp, -ret, "kvm_init_vcpu: kvm_get_vcpu failed 
> (%lu)",
> +  

Re: [PATCH v4 06/16] tests/qtest/bios-tables-test.c: Add support for arch in path

2024-07-02 Thread Igor Mammedov
On Tue, 25 Jun 2024 20:38:29 +0530
Sunil V L  wrote:

> Since machine name can be common for multiple architectures (ex: virt),
> add "arch" in the path to search for expected AML files. Since the AML
> files are still under old path, add support for searching with and
> without arch in the path.

we should probably remove the fallback path lookup after the series is merged.
It's fine to do that as a follow-up patch.

> 
> Signed-off-by: Sunil V L 
> Acked-by: Alistair Francis 
> Reviewed-by: Igor Mammedov 




Re: [v2 1/1] hw/i386/acpi-build: add OSHP method support for SHPC driver load

2024-07-02 Thread Igor Mammedov
On Mon, 1 Jul 2024 14:27:50 +
"Gao,Shiyuan"  wrote:

> > > > > If I want to use ACPI PCI hotplug in the pxb bridge, what else need 
> > > > > to be done?  
> > > >
> > > > does it have to be hotplug directly into pxb or
> > > > would be it be sufficient to have hotplug support
> > > > on pci-bridge attached to a pxb?  
> > >
> > > It's sufficient to hotplug support on pci-bridge attached to a pxb.  
> >
> > ... but I guess using this instead would be better anyway?  
> 
> https://lore.kernel.org/all/20220422135101.65796...@redhat.com/t/#r831d589f243c24334a09995620b74408847a87a0
> 
> According to this message, it seems that the current QEMU does not support
> it yet.
> I tried to hotplug on pci-bridge attached to a pxb, no device found in the 
> guest.

SHPC works for q35, which provides _OSC.

It is broken for the pc machine though, since that machine has neither
_OSC nor OSHP. Theoretically SHPC should still work for hotplugged bridges
(i.e. with ACPI hotplug enabled, when one hotplugs a bridge into pci.0),
but I haven't tried that lately.

I'm still not sure if we should make OSHP global, or put it only
under bridges that have shpc=on && don't have ACPI hotplug.
The latter seems cleaner though.

> >
> > take care,
> >   Gerd  
> 




Re: [PATCH 02/23] target/i386: fix gen_prepare_size_nz condition

2024-07-01 Thread Igor Mammedov
On Fri, 28 Jun 2024 15:34:58 +0100
Alex Bennée  wrote:

> Alex Bennée  writes:
> 
> > Incorrect brace positions causes an unintended overflow on 32 bit
> > builds and shenanigans result.
> >
> > Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2413
> > Suggested-by: Mark Cave-Ayland 
> > Signed-off-by: Alex Bennée   
> 
> This seems to trigger regressions in:
> 
>   qtest-x86_64/bios-tables-test
>   qtest-x86_64/pxe-test
>   qtest-x86_64/vmgenid-test
> 
> Could that be down to generated test data?

Without context, I'd guess that the guest doesn't boot/get to the rendezvous
point that the tests are waiting for, and then it just times out => fails.

> 
> > ---
> >  target/i386/tcg/translate.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/target/i386/tcg/translate.c b/target/i386/tcg/translate.c
> > index ad1819815a..94f13541c3 100644
> > --- a/target/i386/tcg/translate.c
> > +++ b/target/i386/tcg/translate.c
> > @@ -877,7 +877,7 @@ static CCPrepare gen_prepare_sign_nz(TCGv src, MemOp 
> > size)
> >  return (CCPrepare) { .cond = TCG_COND_LT, .reg = src };
> >  } else {
> >  return (CCPrepare) { .cond = TCG_COND_TSTNE, .reg = src,
> > - .imm = 1ull << ((8 << size) - 1) };
> > + .imm = (1ull << (8 << size)) - 1 };
> >  }
> >  }  
> 




Re: [PATCH v3 08/11] hw/acpi: Generic Port Affinity Structure support

2024-07-01 Thread Igor Mammedov
On Thu, 20 Jun 2024 17:03:16 +0100
Jonathan Cameron  wrote:

> These are very similar to the recently added Generic Initiators
> but instead of representing an initiator of memory traffic they
> represent an edge point beyond which may lie either targets or
> initiators.  Here we add these ports such that they may
> be targets of hmat_lb records to describe the latency and
> bandwidth from host side initiators to the port.  A discoverable
> mechanism such as UEFI CDAT read from CXL devices and switches
> is used to discover the remainder of the path, and the OS can build
> up full latency and bandwidth numbers as need for work and data
> placement decisions.
> 
> Acked-by: Markus Armbruster 
> Signed-off-by: Jonathan Cameron 
> ---
> v3: Move to hw/acpi/pci.c
> Rename the funciton to actually registers both types
> of generic nodes to reflect it isn't GI only.
> Note that the qom part is unchanged and other changes are mostly
> code movement so I've kept Markus' Ack.
> ---
>  qapi/qom.json|  34 
>  include/hw/acpi/acpi_generic_initiator.h |  35 
>  include/hw/acpi/aml-build.h  |   4 +
>  include/hw/acpi/pci.h|   3 +-
>  include/hw/pci/pci_bridge.h  |   1 +
>  hw/acpi/acpi_generic_initiator.c | 216 +++
>  hw/acpi/aml-build.c  |  40 +
>  hw/acpi/pci.c| 110 +++-
>  hw/arm/virt-acpi-build.c |   2 +-
>  hw/i386/acpi-build.c |   2 +-
>  hw/pci-bridge/pci_expander_bridge.c  |   1 -
>  11 files changed, 443 insertions(+), 5 deletions(-)

this is quite a large patch; is it possible to split it into
a set of smaller patches?

> diff --git a/qapi/qom.json b/qapi/qom.json
> index 8bd299265e..8fa6bbd9a7 100644
> --- a/qapi/qom.json
> +++ b/qapi/qom.json
> @@ -826,6 +826,38 @@
>'data': { 'pci-dev': 'str',
>  'node': 'uint32' } }
>  
> +##
> +# @AcpiGenericPortProperties:
> +#
> +# Properties for acpi-generic-port objects.
> +#
> +# @pci-bus: QOM path of the PCI bus of the hostbridge associated with
> +# this SRAT Generic Port Affinity Structure.  This is the same as
> +# the bus parameter for the root ports attached to this host
> +# bridge.  The resulting SRAT Generic Port Affinity Structure will
> +# refer to the ACPI object in DSDT that represents the host bridge
> +# (e.g.  ACPI0016 for CXL host bridges).  See ACPI 6.5 Section
> +# 5.2.16.7 for more information.
> +#
> +# @node: Similar to a NUMA node ID, but instead of providing a
> +# reference point used for defining NUMA distances and access
> +# characteristics to memory or from an initiator (e.g. CPU), this
> +# node defines the boundary point between non-discoverable system
> +# buses which must be described by firmware, and a discoverable
> +# bus.  NUMA distances and access characteristics are defined to
> +# and from that point.  For system software to establish full
> +# initiator to target characteristics this information must be
> +# combined with information retrieved from the discoverable part
> +# of the path.  An example would use CDAT (see UEFI.org)
> +# information read from devices and switches in conjunction with
> +# link characteristics read from PCIe Configuration space.
> +#
> +# Since: 9.1
> +##
> +{ 'struct': 'AcpiGenericPortProperties',
> +  'data': { 'pci-bus': 'str',
> +'node': 'uint32' } }
> +
>  ##
>  # @RngProperties:
>  #
> @@ -1019,6 +1051,7 @@
>  { 'enum': 'ObjectType',
>'data': [
>  'acpi-generic-initiator',
> +'acpi-generic-port',
>  'authz-list',
>  'authz-listfile',
>  'authz-pam',
> @@ -1092,6 +1125,7 @@
>'discriminator': 'qom-type',
>'data': {
>'acpi-generic-initiator': 'AcpiGenericInitiatorProperties',
> +  'acpi-generic-port':  'AcpiGenericPortProperties',
>'authz-list': 'AuthZListProperties',
>'authz-listfile': 'AuthZListFileProperties',
>'authz-pam':  'AuthZPAMProperties',
> diff --git a/include/hw/acpi/acpi_generic_initiator.h 
> b/include/hw/acpi/acpi_generic_initiator.h
> new file mode 100644
> index 00..92a39ad303
> --- /dev/null
> +++ b/include/hw/acpi/acpi_generic_initiator.h
> @@ -0,0 +1,35 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved
> + */
> +
> +#ifndef ACPI_GENERIC_INITIATOR_H
> +#define ACPI_GENERIC_INITIATOR_H
> +
> +#include "qom/object_interfaces.h"
> +
> +#define TYPE_ACPI_GENERIC_INITIATOR "acpi-generic-initiator"
> +
> +typedef struct AcpiGenericInitiator {
> +/* private */
> +Object parent;
> +
> +/* public */
> +char *pci_dev;
> +uint16_t node;
> +} AcpiGenericInitiator;
> +
> +#define TYPE_ACPI_GENERIC_PORT "acpi-generic-port"
> +
> +typedef struct 

Re: [PATCH V12 0/8] Add architecture agnostic code to support vCPU Hotplug

2024-07-01 Thread Igor Mammedov
On Wed, 26 Jun 2024 17:53:52 +
Salil Mehta  wrote:

> Hi Gavin,
> 
> >  From: Gavin Shan 
> >  Sent: Wednesday, June 26, 2024 5:13 AM
> >  To: Salil Mehta ; Igor Mammedov
> >  
> >  
> >  Hi Salil and Igor,
> >  
> >  On 6/26/24 9:51 AM, Salil Mehta wrote:  
> >  > On Wed, Jun 5, 2024 at 3:03 PM Igor Mammedov  
> >  mailto:imamm...@redhat.com>> wrote:  
> >  > On Sun, 2 Jun 2024 18:03:05 -0400
> >  > "Michael S. Tsirkin" mailto:m...@redhat.com>>  
> >  wrote:  
> >  >  
> >  >  > On Thu, May 30, 2024 at 12:42:33AM +0100, Salil Mehta wrote:  
> >  >  > > Virtual CPU hotplug support is being added across various 
> > architectures[1][3].
> >  >  > > This series adds various code bits common across all 
> > architectures:
> >  >  > >
> >  >  > > 1. vCPU creation and Parking code refactor [Patch 1]
> >  >  > > 2. Update ACPI GED framework to support vCPU Hotplug [Patch 2,3]
> >  >  > > 3. ACPI CPUs AML code change [Patch 4,5]
> >  >  > > 4. Helper functions to support unrealization of CPU objects 
> > [Patch  6,7]
> >  >  > > 5. Docs [Patch 8]
> >  >  > >
> >  >  > >
> >  >  > > Repository:
> >  >  > >
> >  >  > > [*] https://github.com/salil-mehta/qemu.git   
> > <https://github.com/salil-mehta/qemu.git> virt-cpuhp-armv8/rfc- 
> > v3.arch.agnostic.v12
> >  >  > >
> >  >  > > NOTE: This series is meant to work in conjunction with 
> > Architecture specific patch-set.
> >  >  > > For ARM, this will work in combination of the architecture 
> > specific part based on
> >  >  > > RFC V2 [1]. This architecture specific patch-set RFC V3 shall 
> > be floated soon and is
> >  >  > > present at below location
> >  >  > >
> >  >  > > [*] 
> > https://github.com/salil-mehta/qemu/tree/virt-cpuhp-armv8/rfc-v3-rc1 
> > <https://github.com/salil-mehta/qemu/tree/virt-cpuhp-armv8/rfc-v3-rc1>
> >  >  > >  
> >  >  >
> >  >  >
> >  >  > Igor plan to take a look?  
> >  >
> >  > Yep, I plan to review it
> >  >
> >  >
> >  > A gentle reminder on this.
> >  >  
> >  
> >  Since the latest revision for this series is v13, so I guess Igor needs to 
> > do the
> >  final screening on v13 instead?
> >  
> >  v13: https://lists.nongnu.org/archive/html/qemu-arm/2024-06/msg00129.html  
> 
> 
> Yes, thanks for pointing this. 

I have v13 tagged.
Sorry for the delay, I'll try to review it this week.

> 
> 
> >  
> >  Thanks,
> >  Gavin
> >
> 




Re: [v2 1/1] hw/i386/acpi-build: add OSHP method support for SHPC driver load

2024-07-01 Thread Igor Mammedov
On Fri, 28 Jun 2024 03:04:28 +
"Gao,Shiyuan"  wrote:

> > > that OS cannot get control of SHPC hotplug and hotplug device to
> > > the PCI bridge will fail when we use SHPC Native type:
> > >
> > >   [3.336059] shpchp :00:03.0: Requesting control of SHPC hotplug via 
> > >OSHP (\_SB_.PCI0.S28_)
> > >   [3.337408] shpchp :00:03.0: Requesting control of SHPC hotplug via 
> > >OSHP (\_SB_.PCI0)
> > >   [3.338710] shpchp :00:03.0: Cannot get control of SHPC hotplug
> > >
> > > Add OSHP method support for transfer control to the operating system,
> > > after this SHPC driver will be loaded success and the hotplug device to
> > > the PCI bridge will success when we use SHPC Native type.
> > >
> > >   [1.703975] shpchp :00:03.0: Requesting control of SHPC hotplug via 
> > >OSHP (\_SB_.PCI0.S18_)
> > >   [1.704934] shpchp :00:03.0: Requesting control of SHPC hotplug via 
> > >OSHP (\_SB_.PCI0)
> > >   [1.705855] shpchp :00:03.0: Gained control of SHPC hotplug 
> > >(\_SB_.PCI0)
> > >   [1.707054] shpchp :00:03.0: HPC vendor_id 1b36 device_id 1 ss_vid 0 
> > >ss_did 0  
> >
> > please describe in commit message reproducer
> > (aka QEMU CLI and guest OS and if necessary other details)  
> 
> qemu-system-x86_64 -machine pc-q35-9.0
> ...
> -global PIIX4_PM.acpi-pci-hotplug-with-bridge-support=off

please use the full QEMU CLI and the follow-up steps needed to trigger the issue.

From the above it's not obvious what you are trying to hotplug, or where.
 
> guest OS: centos7/ubuntu22.04
> 
> I will add it in the next version.
> 
> > > +/*
> > > + * PCI Firmware Specification 3.0
> > > + * 4.8. The OSHP Control Method
> > > + */
> > > +static Aml *build_oshp_method(void)
> > > +{
> > > +    Aml *method;
> > > +
> > > +    /*
> > > + * We don't use ACPI to control the SHPC, so just returning
> > > + * success is enough.
> > > + */
> > > +    method = aml_method("OSHP", 0, AML_NOTSERIALIZED);
> > > +    aml_append(method, aml_return(aml_int(0x0)));
> > > +    return method;
> > > +}
> > > +
> > >  static void
> > >  build_dsdt(GArray *table_data, BIOSLinker *linker,
> > > AcpiPmInfo *pm, AcpiMiscInfo *misc,
> > > @@ -1452,6 +1469,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
> > >  aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A03")));
> > >  aml_append(dev, aml_name_decl("_UID", 
> > >aml_int(pcmc->pci_root_uid)));
> > >  aml_append(dev, aml_pci_edsm());
> > > +    aml_append(dev, build_oshp_method());  
> >
> > it's global and what will happen if we have ACPI PCI hotplug enabled
> > and guest calls this NOP method?  
> 
> ths OS get the control of SHPC hotplug and SHPC driver load fail later.
> 
> [6.170345] shpchp :00:03.0: Requesting control of SHPC hotplug via 
> OSHP (\_SB_.PCI0.S18_)
> [6.171962] shpchp :00:03.0: Requesting control of SHPC hotplug via 
> OSHP (\_SB_.PCI0)
> [6.173556] shpchp :00:03.0: Gained control of SHPC hotplug 
> (\_SB_.PCI0)
> [6.175144] shpchp :00:03.0: HPC vendor_id 1b36 device_id 1 ss_vid 0 
> ss_did 0
> [6.196153] shpchp :00:03.0: irq 24 for MSI/MSI-X
> [6.197211] shpchp :00:03.0: pci_hp_register failed with error -16
> [6.198272] shpchp :00:03.0: Slot initialization failed
> 
> this looks more suitable.
> 
> +if (!pm->pcihp_bridge_en) {
> +aml_append(dev, build_i440fx_oshp_method());
> +}

we also have
 PIIX4_PM.acpi-root-pci-hotplug (default true)
though it seems that ACPI hotplug takes precedence over SHPC if both are enabled.
So I'd take this approach; OSHP seems simpler than adding _OSC to do the same.

> 
> >  
> > >  aml_append(sb_scope, dev);
> > >  aml_append(dsdt, sb_scope);
> > >
> > > @@ -1586,6 +1604,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
> > >  aml_append(dev, build_q35_osc_method(true));
> > >  } else {
> > >  aml_append(dev, aml_name_decl("_HID", 
> > >aml_eisaid("PNP0A03")));
> > > +    aml_append(dev, build_oshp_method());
> > >  }
> > >
> > >  if (numa_node != NUMA_NODE_UNASSIGNED) {  
> 
> Hot plug/unplug a device using SHPC will take more time than ACPI PCI 
> hotplug, because
> after pressing the button, it can be cancelled within 5 seconds in SHPC 
> driver. 

for SHPC on PXB, see
commit d10dda2d60 ("hw/pci-bridge: disable SHPC in PXB")

it seems that enabling SHPC on a PXB in QEMU is not enough;
UEFI needs to support that as well
(CCing Gerd to check whether it is possible at all)

> If I want to use ACPI PCI hotplug in the pxb bridge, what else need to be 
> done?

does it have to be hotplug directly into the pxb, or
would it be sufficient to have hotplug support
on a pci-bridge attached to a pxb?

I particularly do not like spreading ACPI hotplug
to arbitrary host bridges, as it's quite complicated
code.

Michael,
Are there any reasons why we don't have hotplug directly
on PXBs enabled from PCI spec point of 

Re: [PATCH v3 05/11] hw/pci: Add a bus property to pci_props and use for acpi/gi

2024-06-28 Thread Igor Mammedov
On Thu, 27 Jun 2024 15:09:12 +0200
Igor Mammedov  wrote:

> On Thu, 20 Jun 2024 17:03:13 +0100
> Jonathan Cameron  wrote:
> 
> > Using a property allows us to hide the internal details of the PCI device
> > from the code to build a SRAT Generic Initiator Affinity Structure with
> > PCI Device Handle.
> > 
> > Suggested-by: Igor Mammedov 
> > Signed-off-by: Jonathan Cameron 
> > 
> > ---
> > V3: New patch
> > ---
> >  hw/acpi/acpi_generic_initiator.c | 11 ++-
> >  hw/pci/pci.c | 14 ++
> >  2 files changed, 20 insertions(+), 5 deletions(-)
> > 
> > diff --git a/hw/acpi/acpi_generic_initiator.c 
> > b/hw/acpi/acpi_generic_initiator.c
> > index 73bafaaaea..34284359f0 100644
> > --- a/hw/acpi/acpi_generic_initiator.c
> > +++ b/hw/acpi/acpi_generic_initiator.c
> > @@ -9,6 +9,7 @@
> >  #include "hw/boards.h"
> >  #include "hw/pci/pci_device.h"
> >  #include "qemu/error-report.h"
> > +#include "qapi/error.h"
> >  
> >  typedef struct AcpiGenericInitiatorClass {
> >  ObjectClass parent_class;
> > @@ -79,7 +80,7 @@ static int build_acpi_generic_initiator(Object *obj, void 
> > *opaque)
> >  MachineState *ms = MACHINE(qdev_get_machine());
> >  AcpiGenericInitiator *gi;
> >  GArray *table_data = opaque;
> > -PCIDevice *pci_dev;
> > +uint8_t bus, devfn;
> >  Object *o;
> >  
> >  if (!object_dynamic_cast(obj, TYPE_ACPI_GENERIC_INITIATOR)) {
> > @@ -100,10 +101,10 @@ static int build_acpi_generic_initiator(Object *obj, 
> > void *opaque)
> >  exit(1);
> >  }
> >  
> > -pci_dev = PCI_DEVICE(o);
> > -build_srat_pci_generic_initiator(table_data, gi->node, 0,
> > - pci_bus_num(pci_get_bus(pci_dev)),
> > - pci_dev->devfn);
> > +bus = object_property_get_uint(o, "bus", &error_fatal);
> > +devfn = object_property_get_uint(o, "addr", &error_fatal);
> > +
> > +build_srat_pci_generic_initiator(table_data, gi->node, 0, bus, devfn);
> >  
> >  return 0;
> >  }
> > diff --git a/hw/pci/pci.c b/hw/pci/pci.c
> > index 324c1302d2..b4b499b172 100644
> > --- a/hw/pci/pci.c
> > +++ b/hw/pci/pci.c
> > @@ -67,6 +67,19 @@ static char *pcibus_get_fw_dev_path(DeviceState *dev);
> >  static void pcibus_reset_hold(Object *obj, ResetType type);
> >  static bool pcie_has_upstream_port(PCIDevice *dev);
> >  
> > +static void prop_pci_bus_get(Object *obj, Visitor *v, const char *name,
> > + void *opaque, Error **errp)
> > +{
> > +uint8_t bus = pci_dev_bus_num(PCI_DEVICE(obj));
> > +
> > +visit_type_uint8(v, name, &bus, errp);
> > +}
> > +
> > +static const PropertyInfo prop_pci_bus = {
> > +.name = "bus",  
> 
> /me confused,
> didn't we have 'bus' property for PCI devices already?
> 
> i.e. I can add PCI device like this
>   -device e1000,bus=pci.0,addr=0x6,...

to avoid confusion, I'd suggest renaming it to 'busnr'
(or being more specific, (primary|secondary)_busnr, if applicable)

>   
> 
> > +.get = prop_pci_bus_get,
> > +};
> > +
> >  static Property pci_props[] = {
> >  DEFINE_PROP_PCI_DEVFN("addr", PCIDevice, devfn, -1),
> >  DEFINE_PROP_STRING("romfile", PCIDevice, romfile),
> > @@ -85,6 +98,7 @@ static Property pci_props[] = {
> >  QEMU_PCIE_ERR_UNC_MASK_BITNR, true),
> >  DEFINE_PROP_BIT("x-pcie-ari-nextfn-1", PCIDevice, cap_present,
> >  QEMU_PCIE_ARI_NEXTFN_1_BITNR, false),
> > +{ .name = "bus", .info = &prop_pci_bus },
> >  DEFINE_PROP_END_OF_LIST()
> >  };
> >
> 




Re: [PATCH v3 07/11] hw/pci-bridge: Add acpi_uid property to CXL PXB

2024-06-28 Thread Igor Mammedov
On Thu, 27 Jun 2024 14:46:14 +0100
Jonathan Cameron  wrote:

> On Thu, 27 Jun 2024 15:27:58 +0200
> Igor Mammedov  wrote:
> 
> > On Thu, 20 Jun 2024 17:03:15 +0100
> > Jonathan Cameron  wrote:
> >   
> > > This allows the ACPI SRAT Generic Port Affinity Structure
> > > creation to be independent of PCI internals. Note that
> > > the UID is currently the PCI bus number.
> > > 
> > > Suggested-by: Igor Mammedov 
> > > Signed-off-by: Jonathan Cameron 
> > > 
> > > ---
> > > v3: New patch
> > > ---
> > >  hw/pci-bridge/pci_expander_bridge.c | 17 -
> > >  1 file changed, 16 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/hw/pci-bridge/pci_expander_bridge.c 
> > > b/hw/pci-bridge/pci_expander_bridge.c
> > > index 0411ad31ea..92d39b917a 100644
> > > --- a/hw/pci-bridge/pci_expander_bridge.c
> > > +++ b/hw/pci-bridge/pci_expander_bridge.c
> > > @@ -93,6 +93,21 @@ static void pxb_bus_class_init(ObjectClass *class, 
> > > void *data)
> > >  pbc->numa_node = pxb_bus_numa_node;
> > >  }
> > >  
> > > +static void prop_pxb_cxl_uid_get(Object *obj, Visitor *v, const char 
> > > *name,
> > > + void *opaque, Error **errp)
> > > +{
> > > +uint32_t uid = pci_bus_num(PCI_BUS(obj));
> > > +
> > > +visit_type_uint32(v, name, &uid, errp);
> > > +}
> > > +
> > > +static void pxb_cxl_bus_class_init(ObjectClass *class, void *data)
> > > +{
> > > +pxb_bus_class_init(class, data);
> > > +object_class_property_add(class, "acpi_uid", "uint32",
> > > +  prop_pxb_cxl_uid_get, NULL, NULL, NULL);
> > > +}
> > > +
> > >  static const TypeInfo pxb_bus_info = {
> > >  .name  = TYPE_PXB_BUS,
> > >  .parent= TYPE_PCI_BUS,
> > > @@ -111,7 +126,7 @@ static const TypeInfo pxb_cxl_bus_info = {
> > >  .name  = TYPE_PXB_CXL_BUS,
> > >  .parent= TYPE_CXL_BUS,
> > >  .instance_size = sizeof(PXBBus),
> > > -.class_init= pxb_bus_class_init,
> > > +.class_init= pxb_cxl_bus_class_init,
> > 
> > why is it CXL only? don't the same UID rules apply to other PCI buses?  
> 
> In principle, yes.  My nervousness is that we can only test anything
> using this infrastructure today with CXL root bridges.
> 
> So I was thinking we should keep it limited and broaden the scope
> if anyone ever cares.  I don't mind broadening it from the start though.

Then I'd use it everywhere and clean up the ACPI code to use it as well,
just to be consistent.
 
> Jonathan
> 
> 
> > >  };
> > >  
> > >  static const char *pxb_host_root_bus_path(PCIHostState *host_bridge,
> > 
> >   
> 




Re: [PATCH v3 1/3] tests/acpi: pc: allow DSDT acpi table changes

2024-06-28 Thread Igor Mammedov
On Fri,  7 Jun 2024 14:17:24 +
Ricardo Ribalda  wrote:

> Signed-off-by: Ricardo Ribalda 
> ---
>  tests/qtest/bios-tables-test-allowed-diff.h | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/tests/qtest/bios-tables-test-allowed-diff.h 
> b/tests/qtest/bios-tables-test-allowed-diff.h
> index dfb8523c8b..b2c2c10cbc 100644
> --- a/tests/qtest/bios-tables-test-allowed-diff.h
> +++ b/tests/qtest/bios-tables-test-allowed-diff.h
> @@ -1 +1,2 @@
>  /* List of comma-separated changed AML files to ignore */
> +"tests/data/acpi/pc/DSDT",

that's not enough; a lot more expected table blobs are affected by
the next patch.

before posting, make sure that 'make check-qtest' passes fine




Re: [PATCH v3 2/3] hw/i386/acpi-build: Return a pre-computed _PRT table

2024-06-28 Thread Igor Mammedov
On Fri,  7 Jun 2024 14:17:25 +
Ricardo Ribalda  wrote:

> When QEMU runs without KVM acceleration, ACPI method execution takes a great
> amount of time. If it takes more than the default timeout (30 sec), the
> ACPI calls fail and the system might not behave correctly.
> 
> Currently the _PRT table is computed on the fly. We can drastically reduce
> the execution time of the _PRT method if we return a pre-computed table.
> 
> Without this patch:
> [   51.343484] ACPI Error: Aborting method \_SB.PCI0._PRT due to previous 
> error (AE_AML_LOOP_TIMEOUT) (20230628/psparse-529)
> [   51.527032] ACPI Error: Method execution failed \_SB.PCI0._PRT due to 
> previous error (AE_AML_LOOP_TIMEOUT) (20230628/uteval-68)
> [   51.530049] virtio-pci :00:02.0: can't derive routing for PCI INT A
> [   51.530797] virtio-pci :00:02.0: PCI INT A: no GSI
> [   81.922901] ACPI Error: Aborting method \_SB.PCI0._PRT due to previous 
> error (AE_AML_LOOP_TIMEOUT) (20230628/psparse-529)
> [   82.103534] ACPI Error: Method execution failed \_SB.PCI0._PRT due to 
> previous error (AE_AML_LOOP_TIMEOUT) (20230628/uteval-68)
> [   82.106088] virtio-pci :00:04.0: can't derive routing for PCI INT A
> [   82.106761] virtio-pci :00:04.0: PCI INT A: no GSI
> [  112.192568] ACPI Error: Aborting method \_SB.PCI0._PRT due to previous 
> error (AE_AML_LOOP_TIMEOUT) (20230628/psparse-529)
> [  112.486687] ACPI Error: Method execution failed \_SB.PCI0._PRT due to 
> previous error (AE_AML_LOOP_TIMEOUT) (20230628/uteval-68)
> [  112.489554] virtio-pci :00:05.0: can't derive routing for PCI INT A
> [  112.490027] virtio-pci :00:05.0: PCI INT A: no GSI
> [  142.559448] ACPI Error: Aborting method \_SB.PCI0._PRT due to previous 
> error (AE_AML_LOOP_TIMEOUT) (20230628/psparse-529)
> [  142.718596] ACPI Error: Method execution failed \_SB.PCI0._PRT due to 
> previous error (AE_AML_LOOP_TIMEOUT) (20230628/uteval-68)
> [  142.722889] virtio-pci :00:06.0: can't derive routing for PCI INT A
> [  142.724578] virtio-pci :00:06.0: PCI INT A: no GSI
> 
> With this patch:
> [   22.938076] ACPI: \_SB_.LNKB: Enabled at IRQ 10
> [   24.214002] ACPI: \_SB_.LNKD: Enabled at IRQ 11
> [   25.465170] ACPI: \_SB_.LNKA: Enabled at IRQ 10
> [   27.944920] ACPI: \_SB_.LNKC: Enabled at IRQ 11
> 
> ACPI disassembly:
> Scope (PCI0)
> {
> Method (_PRT, 0, NotSerialized)  // _PRT: PCI Routing Table
> {
> Return (Package (0x80)
> {
> Package (0x04)
> {
> 0x,
> Zero,
> LNKD,
> Zero
> },
> 
> Package (0x04)
> {
> 0x,
> One,
> LNKA,
> Zero
> },
> 
> Package (0x04)
> {
> 0x,
> 0x02,
> LNKB,
> Zero
> },
> 
> Package (0x04)
> {
> 0x,
> 0x03,
> LNKC,
> Zero
> },
> 
> Package (0x04)
> {
> 0x0001,
> Zero,
>     LNKS,
> Zero
> },
> Context: 
> https://lore.kernel.org/virtualization/20240417145544.38d7b...@imammedo.users.ipa.redhat.com/T/#t
> 
> Signed-off-by: Ricardo Ribalda 

Reviewed-by: Igor Mammedov 

> ---
>  hw/i386/acpi-build.c | 120 ---
>  1 file changed, 22 insertions(+), 98 deletions(-)
> 
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index 53f804ac16..03216a6f29 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -725,120 +725,44 @@ static Aml *aml_pci_pdsm(void)
>  return method;
>  }
>  
> -/**
> - * build_prt_entry:
> - * @link_name: link name for PCI route entry
> - *
> - * build AML package containing a PCI route entry for @link_name
> - */
> -static Aml *build_prt_entry(const char *link_name)
> -{
> -Aml *a_zero = aml_int(0);
> -Aml *pkg = aml_package(4);
> -aml_append(pkg, a_zero);
> -aml_append(pkg, a_zero);
> -aml_append(pkg, aml_name("%s", link_name));
> -aml_append(pkg, a_zero);
> -return pkg;
> -}
> -
>  /*
> - * initialize_rout

Re: [PATCH v3 09/11] bios-tables-test: Allow for new acpihmat-generic-x test data.

2024-06-27 Thread Igor Mammedov
On Thu, 27 Jun 2024 14:51:55 +0200
Igor Mammedov  wrote:

> On Thu, 20 Jun 2024 17:03:17 +0100
> Jonathan Cameron  wrote:
> 
> > The test to be added exercises many corners of the SRAT and HMAT table  
>    did you mean 'corner cases'?
> > generation.

another issue is that this and later patches will conflict with
the RISC-V ACPI tests, which along the way change the directory structure
of the expected tables.

Perhaps it's better to rebase this series on top of that one.


> > 
> > Signed-off-by: Jonathan Cameron 
> > ---
> > v3: No change
> > ---
> >  tests/qtest/bios-tables-test-allowed-diff.h | 5 +
> >  tests/data/acpi/q35/APIC.acpihmat-generic-x | 0
> >  tests/data/acpi/q35/CEDT.acpihmat-generic-x | 0
> >  tests/data/acpi/q35/DSDT.acpihmat-generic-x | 0
> >  tests/data/acpi/q35/HMAT.acpihmat-generic-x | 0
> >  tests/data/acpi/q35/SRAT.acpihmat-generic-x | 0
> >  6 files changed, 5 insertions(+)
> > 
> > diff --git a/tests/qtest/bios-tables-test-allowed-diff.h 
> > b/tests/qtest/bios-tables-test-allowed-diff.h
> > index dfb8523c8b..a5aa801c99 100644
> > --- a/tests/qtest/bios-tables-test-allowed-diff.h
> > +++ b/tests/qtest/bios-tables-test-allowed-diff.h
> > @@ -1 +1,6 @@
> >  /* List of comma-separated changed AML files to ignore */
> > +"tests/data/acpi/q35/APIC.acpihmat-generic-x",
> > +"tests/data/acpi/q35/CEDT.acpihmat-generic-x",
> > +"tests/data/acpi/q35/DSDT.acpihmat-generic-x",
> > +"tests/data/acpi/q35/HMAT.acpihmat-generic-x",
> > +"tests/data/acpi/q35/SRAT.acpihmat-generic-x",
> > diff --git a/tests/data/acpi/q35/APIC.acpihmat-generic-x 
> > b/tests/data/acpi/q35/APIC.acpihmat-generic-x
> > new file mode 100644
> > index 00..e69de29bb2
> > diff --git a/tests/data/acpi/q35/CEDT.acpihmat-generic-x 
> > b/tests/data/acpi/q35/CEDT.acpihmat-generic-x
> > new file mode 100644
> > index 00..e69de29bb2
> > diff --git a/tests/data/acpi/q35/DSDT.acpihmat-generic-x 
> > b/tests/data/acpi/q35/DSDT.acpihmat-generic-x
> > new file mode 100644
> > index 00..e69de29bb2
> > diff --git a/tests/data/acpi/q35/HMAT.acpihmat-generic-x 
> > b/tests/data/acpi/q35/HMAT.acpihmat-generic-x
> > new file mode 100644
> > index 00..e69de29bb2
> > diff --git a/tests/data/acpi/q35/SRAT.acpihmat-generic-x 
> > b/tests/data/acpi/q35/SRAT.acpihmat-generic-x
> > new file mode 100644
> > index 00..e69de29bb2  
> 




Re: [v2 1/1] hw/i386/acpi-build: add OSHP method support for SHPC driver load

2024-06-27 Thread Igor Mammedov
On Tue, 25 Jun 2024 11:52:24 +0800
Shiyuan Gao  wrote:

> The SHPC driver fails to load in the i440fx machine; dmesg shows
> that the OS cannot get control of SHPC hotplug, and hotplugging a device
> to the PCI bridge fails when we use the SHPC Native type:
> 
>   [3.336059] shpchp :00:03.0: Requesting control of SHPC hotplug via OSHP 
> (\_SB_.PCI0.S28_)
>   [3.337408] shpchp :00:03.0: Requesting control of SHPC hotplug via OSHP 
> (\_SB_.PCI0)
>   [3.338710] shpchp :00:03.0: Cannot get control of SHPC hotplug
> 
> Add OSHP method support to transfer control to the operating system;
> after this the SHPC driver loads successfully and hotplugging a device
> to the PCI bridge succeeds when we use the SHPC Native type.
> 
>   [1.703975] shpchp :00:03.0: Requesting control of SHPC hotplug via OSHP 
> (\_SB_.PCI0.S18_)
>   [1.704934] shpchp :00:03.0: Requesting control of SHPC hotplug via OSHP 
> (\_SB_.PCI0)
>   [1.705855] shpchp :00:03.0: Gained control of SHPC hotplug (\_SB_.PCI0)
>   [1.707054] shpchp :00:03.0: HPC vendor_id 1b36 device_id 1 ss_vid 0 
> ss_did 0

please describe the reproducer in the commit message
(i.e. QEMU CLI, guest OS and, if necessary, other details)


> Signed-off-by: Shiyuan Gao 
> ---
> v1 -> v2:
> * add quote PCI firmware spec 3.0
> * explain why an empty method is enough
> ---
> 
>  hw/i386/acpi-build.c | 19 +++
>  1 file changed, 19 insertions(+)
> 
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index f4e366f64f..00f8abedf6 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -1412,6 +1412,23 @@ static void build_acpi0017(Aml *table)
>  aml_append(table, scope);
>  }
>  
> +/*
> + * PCI Firmware Specification 3.0
> + * 4.8. The OSHP Control Method
> + */
> +static Aml *build_oshp_method(void)
> +{
> +Aml *method;
> +
> +/*
> + * We don't use ACPI to control the SHPC, so just returning
> + * success is enough.
> + */
> +method = aml_method("OSHP", 0, AML_NOTSERIALIZED);
> +aml_append(method, aml_return(aml_int(0x0)));
> +return method;
> +}
> +
>  static void
>  build_dsdt(GArray *table_data, BIOSLinker *linker,
> AcpiPmInfo *pm, AcpiMiscInfo *misc,
> @@ -1452,6 +1469,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>  aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0A03")));
>  aml_append(dev, aml_name_decl("_UID", aml_int(pcmc->pci_root_uid)));
>  aml_append(dev, aml_pci_edsm());
> +aml_append(dev, build_oshp_method());

it's global; what will happen if we have ACPI PCI hotplug enabled
and the guest calls this NOP method?

>  aml_append(sb_scope, dev);
>  aml_append(dsdt, sb_scope);
>  
> @@ -1586,6 +1604,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
>  aml_append(dev, build_q35_osc_method(true));
>  } else {
>  aml_append(dev, aml_name_decl("_HID", 
> aml_eisaid("PNP0A03")));
> +aml_append(dev, build_oshp_method());
>  }
>  
>  if (numa_node != NUMA_NODE_UNASSIGNED) {
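For reference, the OSHP method built by this patch should disassemble to roughly the following ASL (a sketch; actual iASL output may differ in detail):

```asl
Method (OSHP, 0, NotSerialized)  // OSHP: hand SHPC control to the OS
{
    Return (Zero)  // ACPI does not drive the SHPC, so success is all we need
}
```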




Re: [PATCH v3 07/11] hw/pci-bridge: Add acpi_uid property to CXL PXB

2024-06-27 Thread Igor Mammedov
On Thu, 20 Jun 2024 17:03:15 +0100
Jonathan Cameron  wrote:

> This allows the ACPI SRAT Generic Port Affinity Structure
> creation to be independent of PCI internals. Note that
> the UID is currently the PCI bus number.
> 
> Suggested-by: Igor Mammedov 
> Signed-off-by: Jonathan Cameron 
> 
> ---
> v3: New patch
> ---
>  hw/pci-bridge/pci_expander_bridge.c | 17 -
>  1 file changed, 16 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/pci-bridge/pci_expander_bridge.c 
> b/hw/pci-bridge/pci_expander_bridge.c
> index 0411ad31ea..92d39b917a 100644
> --- a/hw/pci-bridge/pci_expander_bridge.c
> +++ b/hw/pci-bridge/pci_expander_bridge.c
> @@ -93,6 +93,21 @@ static void pxb_bus_class_init(ObjectClass *class, void 
> *data)
>  pbc->numa_node = pxb_bus_numa_node;
>  }
>  
> +static void prop_pxb_cxl_uid_get(Object *obj, Visitor *v, const char *name,
> + void *opaque, Error **errp)
> +{
> +uint32_t uid = pci_bus_num(PCI_BUS(obj));
> +
> +visit_type_uint32(v, name, &uid, errp);
> +}
> +
> +static void pxb_cxl_bus_class_init(ObjectClass *class, void *data)
> +{
> +pxb_bus_class_init(class, data);
> +object_class_property_add(class, "acpi_uid", "uint32",
> +  prop_pxb_cxl_uid_get, NULL, NULL, NULL);
> +}
> +
>  static const TypeInfo pxb_bus_info = {
>  .name  = TYPE_PXB_BUS,
>  .parent= TYPE_PCI_BUS,
> @@ -111,7 +126,7 @@ static const TypeInfo pxb_cxl_bus_info = {
>  .name  = TYPE_PXB_CXL_BUS,
>  .parent= TYPE_CXL_BUS,
>  .instance_size = sizeof(PXBBus),
> -.class_init= pxb_bus_class_init,
> +.class_init= pxb_cxl_bus_class_init,

why is it CXL-only? Don't the same UID rules apply to other PCI buses?
>  };
>  
>  static const char *pxb_host_root_bus_path(PCIHostState *host_bridge,




Re: [PATCH v3 05/11] hw/pci: Add a bus property to pci_props and use for acpi/gi

2024-06-27 Thread Igor Mammedov
On Thu, 20 Jun 2024 17:03:13 +0100
Jonathan Cameron  wrote:

> Using a property allows us to hide the internal details of the PCI device
> from the code to build a SRAT Generic Initiator Affinity Structure with
> PCI Device Handle.
> 
> Suggested-by: Igor Mammedov 
> Signed-off-by: Jonathan Cameron 
> 
> ---
> V3: New patch
> ---
>  hw/acpi/acpi_generic_initiator.c | 11 ++-
>  hw/pci/pci.c | 14 ++
>  2 files changed, 20 insertions(+), 5 deletions(-)
> 
> diff --git a/hw/acpi/acpi_generic_initiator.c 
> b/hw/acpi/acpi_generic_initiator.c
> index 73bafaaaea..34284359f0 100644
> --- a/hw/acpi/acpi_generic_initiator.c
> +++ b/hw/acpi/acpi_generic_initiator.c
> @@ -9,6 +9,7 @@
>  #include "hw/boards.h"
>  #include "hw/pci/pci_device.h"
>  #include "qemu/error-report.h"
> +#include "qapi/error.h"
>  
>  typedef struct AcpiGenericInitiatorClass {
>  ObjectClass parent_class;
> @@ -79,7 +80,7 @@ static int build_acpi_generic_initiator(Object *obj, void 
> *opaque)
>  MachineState *ms = MACHINE(qdev_get_machine());
>  AcpiGenericInitiator *gi;
>  GArray *table_data = opaque;
> -PCIDevice *pci_dev;
> +uint8_t bus, devfn;
>  Object *o;
>  
>  if (!object_dynamic_cast(obj, TYPE_ACPI_GENERIC_INITIATOR)) {
> @@ -100,10 +101,10 @@ static int build_acpi_generic_initiator(Object *obj, 
> void *opaque)
>  exit(1);
>  }
>  
> -pci_dev = PCI_DEVICE(o);
> -build_srat_pci_generic_initiator(table_data, gi->node, 0,
> - pci_bus_num(pci_get_bus(pci_dev)),
> - pci_dev->devfn);
> +bus = object_property_get_uint(o, "bus", &error_fatal);
> +devfn = object_property_get_uint(o, "addr", &error_fatal);
> +
> +build_srat_pci_generic_initiator(table_data, gi->node, 0, bus, devfn);
>  
>  return 0;
>  }
> diff --git a/hw/pci/pci.c b/hw/pci/pci.c
> index 324c1302d2..b4b499b172 100644
> --- a/hw/pci/pci.c
> +++ b/hw/pci/pci.c
> @@ -67,6 +67,19 @@ static char *pcibus_get_fw_dev_path(DeviceState *dev);
>  static void pcibus_reset_hold(Object *obj, ResetType type);
>  static bool pcie_has_upstream_port(PCIDevice *dev);
>  
> +static void prop_pci_bus_get(Object *obj, Visitor *v, const char *name,
> + void *opaque, Error **errp)
> +{
> +uint8_t bus = pci_dev_bus_num(PCI_DEVICE(obj));
> +
> +visit_type_uint8(v, name, &bus, errp);
> +}
> +
> +static const PropertyInfo prop_pci_bus = {
> +.name = "bus",

/me confused,
didn't we already have a 'bus' property for PCI devices?

i.e. I can add PCI device like this
  -device e1000,bus=pci.0,addr=0x6,...
  

> +.get = prop_pci_bus_get,
> +};
> +
>  static Property pci_props[] = {
>  DEFINE_PROP_PCI_DEVFN("addr", PCIDevice, devfn, -1),
>  DEFINE_PROP_STRING("romfile", PCIDevice, romfile),
> @@ -85,6 +98,7 @@ static Property pci_props[] = {
>  QEMU_PCIE_ERR_UNC_MASK_BITNR, true),
>  DEFINE_PROP_BIT("x-pcie-ari-nextfn-1", PCIDevice, cap_present,
>  QEMU_PCIE_ARI_NEXTFN_1_BITNR, false),
> +{ .name = "bus", .info = &prop_pci_bus },
>  DEFINE_PROP_END_OF_LIST()
>  };
>  




Re: [PATCH v3 04/11] hw/acpi: Rename build_all_acpi_generic_initiators() to build_acpi_generic_initiator()

2024-06-27 Thread Igor Mammedov
On Thu, 20 Jun 2024 17:03:12 +0100
Jonathan Cameron  wrote:

> Igor noted that this function only builds one instance, so was rather
> misleadingly named. Fix that.
> 
> Suggested-by: Igor Mammedov 
> Signed-off-by: Jonathan Cameron 

Reviewed-by: Igor Mammedov 

> 
> ---
> v3: New patch
> ---
>  hw/acpi/acpi_generic_initiator.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/acpi/acpi_generic_initiator.c 
> b/hw/acpi/acpi_generic_initiator.c
> index 7665b16107..73bafaaaea 100644
> --- a/hw/acpi/acpi_generic_initiator.c
> +++ b/hw/acpi/acpi_generic_initiator.c
> @@ -74,7 +74,7 @@ static void acpi_generic_initiator_class_init(ObjectClass 
> *oc, void *data)
>  acpi_generic_initiator_set_node, NULL, NULL);
>  }
>  
> -static int build_all_acpi_generic_initiators(Object *obj, void *opaque)
> +static int build_acpi_generic_initiator(Object *obj, void *opaque)
>  {
>  MachineState *ms = MACHINE(qdev_get_machine());
>  AcpiGenericInitiator *gi;
> @@ -111,6 +111,6 @@ static int build_all_acpi_generic_initiators(Object *obj, 
> void *opaque)
>  void build_srat_generic_pci_initiator(GArray *table_data)
>  {
>  object_child_foreach_recursive(object_get_root(),
> -   build_all_acpi_generic_initiators,
> +   build_acpi_generic_initiator,
> table_data);
>  }




Re: [PATCH v3 09/11] bios-tables-test: Allow for new acpihmat-generic-x test data.

2024-06-27 Thread Igor Mammedov
On Thu, 20 Jun 2024 17:03:17 +0100
Jonathan Cameron  wrote:

> The test to be added exercises many corners of the SRAT and HMAT table
   did you mean 'corner cases'?
> generation.
> 
> Signed-off-by: Jonathan Cameron 
> ---
> v3: No change
> ---
>  tests/qtest/bios-tables-test-allowed-diff.h | 5 +
>  tests/data/acpi/q35/APIC.acpihmat-generic-x | 0
>  tests/data/acpi/q35/CEDT.acpihmat-generic-x | 0
>  tests/data/acpi/q35/DSDT.acpihmat-generic-x | 0
>  tests/data/acpi/q35/HMAT.acpihmat-generic-x | 0
>  tests/data/acpi/q35/SRAT.acpihmat-generic-x | 0
>  6 files changed, 5 insertions(+)
> 
> diff --git a/tests/qtest/bios-tables-test-allowed-diff.h 
> b/tests/qtest/bios-tables-test-allowed-diff.h
> index dfb8523c8b..a5aa801c99 100644
> --- a/tests/qtest/bios-tables-test-allowed-diff.h
> +++ b/tests/qtest/bios-tables-test-allowed-diff.h
> @@ -1 +1,6 @@
>  /* List of comma-separated changed AML files to ignore */
> +"tests/data/acpi/q35/APIC.acpihmat-generic-x",
> +"tests/data/acpi/q35/CEDT.acpihmat-generic-x",
> +"tests/data/acpi/q35/DSDT.acpihmat-generic-x",
> +"tests/data/acpi/q35/HMAT.acpihmat-generic-x",
> +"tests/data/acpi/q35/SRAT.acpihmat-generic-x",
> diff --git a/tests/data/acpi/q35/APIC.acpihmat-generic-x 
> b/tests/data/acpi/q35/APIC.acpihmat-generic-x
> new file mode 100644
> index 00..e69de29bb2
> diff --git a/tests/data/acpi/q35/CEDT.acpihmat-generic-x 
> b/tests/data/acpi/q35/CEDT.acpihmat-generic-x
> new file mode 100644
> index 00..e69de29bb2
> diff --git a/tests/data/acpi/q35/DSDT.acpihmat-generic-x 
> b/tests/data/acpi/q35/DSDT.acpihmat-generic-x
> new file mode 100644
> index 00..e69de29bb2
> diff --git a/tests/data/acpi/q35/HMAT.acpihmat-generic-x 
> b/tests/data/acpi/q35/HMAT.acpihmat-generic-x
> new file mode 100644
> index 00..e69de29bb2
> diff --git a/tests/data/acpi/q35/SRAT.acpihmat-generic-x 
> b/tests/data/acpi/q35/SRAT.acpihmat-generic-x
> new file mode 100644
> index 00..e69de29bb2




Re: [PATCH v3 03/11] hw/acpi: Move AML building code for Generic Initiators to aml_build.c

2024-06-27 Thread Igor Mammedov
On Thu, 27 Jun 2024 08:44:14 -0400
"Michael S. Tsirkin"  wrote:

> On Thu, Jun 27, 2024 at 02:42:44PM +0200, Igor Mammedov wrote:
> > On Thu, 20 Jun 2024 17:03:11 +0100
> > Jonathan Cameron  wrote:
> >   
> > > Rather than attempting to create a generic function with mess of the two
> > > different device handle types, use a PCI handle specific variant.  If the
> > > ACPI handle form is needed then that can be introduced alongside this
> > > with little duplicated code.
> > > 
> > > Drop the PCIDeviceHandle in favor of just passing the bus, devfn
> > > and segment directly.  devfn kept as a single byte because ARI means
> > > that in cases this is just an 8 bit function number.
> > > 
> > > Suggested-by: Igor Mammedov 
> > > Link: 
> > > https://lore.kernel.org/qemu-devel/20240618142333.102be...@imammedo.users.ipa.redhat.com/
> > > Signed-off-by: Jonathan Cameron   
> > 
> > with typo fixed  
> 
> typo being "in cases"?
> 
> > Reviewed-by: Igor Mammedov 
> >   
> 

nope, I highlighted it in the patch:
  > +void build_srat_pci_generic_initiator(GArray * table_date, int node,  
  s/table_date/table_data/




Re: [PATCH v3 01/11] hw/acpi: Fix ordering of BDF in Generic Initiator PCI Device Handle.

2024-06-27 Thread Igor Mammedov
On Thu, 20 Jun 2024 17:03:09 +0100
Jonathan Cameron  wrote:

> The ordering in ACPI specification [1] has bus number in the lowest byte.
> As ACPI tables are little endian this is the reverse of the ordering
> used by PCI_BUILD_BDF().  As a minimal fix split the QEMU BDF up
> into bus and devfn and write them as single bytes in the correct
> order.
> 
> [1] ACPI Spec 6.3, Table 5.80
> 
> Fixes: 0a5b5acdf2d8 ("hw/acpi: Implement the SRAT GI affinity structure")
> Signed-off-by: Jonathan Cameron 

Reviewed-by: Igor Mammedov 

> 
> ---
> v3: New patch.  Note this code will go away, so this is intended for
> backporting purposes
> ---
>  hw/acpi/acpi_generic_initiator.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/acpi/acpi_generic_initiator.c 
> b/hw/acpi/acpi_generic_initiator.c
> index 17b9a052f5..3d2b567999 100644
> --- a/hw/acpi/acpi_generic_initiator.c
> +++ b/hw/acpi/acpi_generic_initiator.c
> @@ -92,7 +92,8 @@ build_srat_generic_pci_initiator_affinity(GArray 
> *table_data, int node,
>  
>  /* Device Handle - PCI */
>  build_append_int_noprefix(table_data, handle->segment, 2);
> -build_append_int_noprefix(table_data, handle->bdf, 2);
> +build_append_int_noprefix(table_data, PCI_BUS_NUM(handle->bdf), 1);
> +build_append_int_noprefix(table_data, PCI_BDF_TO_DEVFN(handle->bdf), 1);
>  for (index = 0; index < 12; index++) {
>  build_append_int_noprefix(table_data, 0, 1);
>  }




Re: [PATCH v3 02/11] hw/acpi/GI: Fix trivial parameter alignment issue.

2024-06-27 Thread Igor Mammedov
On Thu, 20 Jun 2024 17:03:10 +0100
Jonathan Cameron  wrote:

> Before making additional modification, tidy up this misleading indentation.
> 
> Reviewed-by: Ankit Agrawal 
> Signed-off-by: Jonathan Cameron 

Reviewed-by: Igor Mammedov 

> ---
> v3: Unchanged
> ---
>  hw/acpi/acpi_generic_initiator.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/hw/acpi/acpi_generic_initiator.c 
> b/hw/acpi/acpi_generic_initiator.c
> index 3d2b567999..4a02c19468 100644
> --- a/hw/acpi/acpi_generic_initiator.c
> +++ b/hw/acpi/acpi_generic_initiator.c
> @@ -133,7 +133,7 @@ static int build_all_acpi_generic_initiators(Object *obj, 
> void *opaque)
>  
>  dev_handle.segment = 0;
>  dev_handle.bdf = PCI_BUILD_BDF(pci_bus_num(pci_get_bus(pci_dev)),
> -   pci_dev->devfn);
> +   pci_dev->devfn);
>  
>  build_srat_generic_pci_initiator_affinity(table_data,
>                                               gi->node, &dev_handle);




Re: [PATCH v3 03/11] hw/acpi: Move AML building code for Generic Initiators to aml_build.c

2024-06-27 Thread Igor Mammedov
On Thu, 20 Jun 2024 17:03:11 +0100
Jonathan Cameron  wrote:

> Rather than attempting to create a generic function with mess of the two
> different device handle types, use a PCI handle specific variant.  If the
> ACPI handle form is needed then that can be introduced alongside this
> with little duplicated code.
> 
> Drop the PCIDeviceHandle in favor of just passing the bus, devfn
> and segment directly.  devfn kept as a single byte because ARI means
> that in cases this is just an 8 bit function number.
> 
> Suggested-by: Igor Mammedov 
> Link: 
> https://lore.kernel.org/qemu-devel/20240618142333.102be...@imammedo.users.ipa.redhat.com/
> Signed-off-by: Jonathan Cameron 

with typo fixed

Reviewed-by: Igor Mammedov 

> 
> ---
> v3: New patch based on Igor's comments on the endian fix.
> ---
>  include/hw/acpi/acpi_generic_initiator.h | 23 -
>  include/hw/acpi/aml-build.h  |  4 +++
>  hw/acpi/acpi_generic_initiator.c | 39 ++---
>  hw/acpi/aml-build.c  | 44 
>  4 files changed, 51 insertions(+), 59 deletions(-)
> 
> diff --git a/include/hw/acpi/acpi_generic_initiator.h 
> b/include/hw/acpi/acpi_generic_initiator.h
> index a304bad73e..7b98676713 100644
> --- a/include/hw/acpi/acpi_generic_initiator.h
> +++ b/include/hw/acpi/acpi_generic_initiator.h
> @@ -19,29 +19,6 @@ typedef struct AcpiGenericInitiator {
>  uint16_t node;
>  } AcpiGenericInitiator;
>  
> -/*
> - * ACPI 6.3:
> - * Table 5-81 Flags – Generic Initiator Affinity Structure
> - */
> -typedef enum {
> -/*
> - * If clear, the OSPM ignores the contents of the Generic
> - * Initiator/Port Affinity Structure. This allows system firmware
> - * to populate the SRAT with a static number of structures, but only
> - * enable them as necessary.
> - */
> -GEN_AFFINITY_ENABLED = (1 << 0),
> -} GenericAffinityFlags;
> -
> -/*
> - * ACPI 6.3:
> - * Table 5-80 Device Handle - PCI
> - */
> -typedef struct PCIDeviceHandle {
> -uint16_t segment;
> -uint16_t bdf;
> -} PCIDeviceHandle;
> -
>  void build_srat_generic_pci_initiator(GArray *table_data);
>  
>  #endif
> diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> index a3784155cb..9ba3a21c13 100644
> --- a/include/hw/acpi/aml-build.h
> +++ b/include/hw/acpi/aml-build.h
> @@ -486,6 +486,10 @@ Aml *build_crs(PCIHostState *host, CrsRangeSet 
> *range_set, uint32_t io_offset,
>  void build_srat_memory(GArray *table_data, uint64_t base,
> uint64_t len, int node, MemoryAffinityFlags flags);
>  
> +void build_srat_pci_generic_initiator(GArray * table_date, int node,

s/table_date/table_data/

> +  uint16_t segment, uint8_t bus,
> +  uint8_t devfn);
> +
>  void build_slit(GArray *table_data, BIOSLinker *linker, MachineState *ms,
>  const char *oem_id, const char *oem_table_id);
>  
> diff --git a/hw/acpi/acpi_generic_initiator.c 
> b/hw/acpi/acpi_generic_initiator.c
> index 4a02c19468..7665b16107 100644
> --- a/hw/acpi/acpi_generic_initiator.c
> +++ b/hw/acpi/acpi_generic_initiator.c
> @@ -74,40 +74,11 @@ static void acpi_generic_initiator_class_init(ObjectClass 
> *oc, void *data)
>  acpi_generic_initiator_set_node, NULL, NULL);
>  }
>  
> -/*
> - * ACPI 6.3:
> - * Table 5-78 Generic Initiator Affinity Structure
> - */
> -static void
> -build_srat_generic_pci_initiator_affinity(GArray *table_data, int node,
> -  PCIDeviceHandle *handle)
> -{
> -uint8_t index;
> -
> -build_append_int_noprefix(table_data, 5, 1);  /* Type */
> -build_append_int_noprefix(table_data, 32, 1); /* Length */
> -build_append_int_noprefix(table_data, 0, 1);  /* Reserved */
> -build_append_int_noprefix(table_data, 1, 1);  /* Device Handle Type: PCI 
> */
> -build_append_int_noprefix(table_data, node, 4);  /* Proximity Domain */
> -
> -/* Device Handle - PCI */
> -build_append_int_noprefix(table_data, handle->segment, 2);
> -build_append_int_noprefix(table_data, PCI_BUS_NUM(handle->bdf), 1);
> -build_append_int_noprefix(table_data, PCI_BDF_TO_DEVFN(handle->bdf), 1);
> -for (index = 0; index < 12; index++) {
> -build_append_int_noprefix(table_data, 0, 1);
> -}
> -
> -build_append_int_noprefix(table_data, GEN_AFFINITY_ENABLED, 4); /* Flags 
> */
> -build_append_int_noprefix(table_data, 0, 4); /* Reserved */
> -}
> -
>  static int build_all_acpi_generic_initiators(Object *obj, void *opaque)
>  {
>  MachineS

Re: [PATCH v4 16/16] tests/qtest/bios-tables-test: Add expected ACPI data files for RISC-V

2024-06-27 Thread Igor Mammedov
On Tue, 25 Jun 2024 20:38:39 +0530
Sunil V L  wrote:

> As per the step 5 in the process documented in bios-tables-test.c,
> generate the expected ACPI AML data files for RISC-V using the
> rebuild-expected-aml.sh script and update the
> bios-tables-test-allowed-diff.h.
> 
> These are all new files being added for the first time. Hence, iASL diff
> output is not added.
> 
> Signed-off-by: Sunil V L 
> Acked-by: Alistair Francis 
> Acked-by: Igor Mammedov 

Michael,
can it go via the risc-v tree, or
do you plan to merge it via your tree?

> ---
>  tests/data/acpi/riscv64/virt/APIC   | Bin 0 -> 116 bytes
>  tests/data/acpi/riscv64/virt/DSDT   | Bin 0 -> 3518 bytes
>  tests/data/acpi/riscv64/virt/FACP   | Bin 0 -> 276 bytes
>  tests/data/acpi/riscv64/virt/MCFG   | Bin 0 -> 60 bytes
>  tests/data/acpi/riscv64/virt/RHCT   | Bin 0 -> 314 bytes
>  tests/data/acpi/riscv64/virt/SPCR   | Bin 0 -> 80 bytes
>  tests/qtest/bios-tables-test-allowed-diff.h |   6 --
>  7 files changed, 6 deletions(-)
> 
> diff --git a/tests/data/acpi/riscv64/virt/APIC 
> b/tests/data/acpi/riscv64/virt/APIC
> index e69de29bb2..66a25dfd2d 100644
> GIT binary patch
> literal 116
> [116 bytes of base85-encoded binary data omitted]
> 
> diff --git a/tests/data/acpi/riscv64/virt/DSDT 
> b/tests/data/acpi/riscv64/virt/DSDT
> index e69de29bb2..0fb2d5e0e3 100644
> GIT binary patch
> literal 3518
> [3518 bytes of base85-encoded binary data omitted]
> 
> diff --git a/tests/data/acpi/riscv64/virt/FACP 
> b/tests/data/acpi/riscv64/virt/FACP
> index e69de29bb2..a5276b65ea 100644
> GIT binary patch
> literal 276
> [276 bytes of base85-encoded binary data omitted]
> 
> diff --git a/tests/data/acpi/riscv64/virt/MCFG 
> b/tests/data/acpi/riscv64/virt/MCFG
> index e69de29bb2..37eb923a93 100644
> GIT binary patch
> literal 60
> [60 bytes of base85-encoded binary data omitted]
> 
> diff --git a/tests/data/acpi/riscv64/virt/RHCT 
> b/tests/data/acpi/riscv64/virt/RHCT
> index e69de29bb2..beaa961bbf 100644
> GIT binary patch
> literal 314
> [314 bytes of base85-encoded binary data omitted]
> 
> diff --git a/tests/data/acpi/riscv64/virt/SPCR 
> b/tests/data/acpi/riscv64/virt/SPCR
> index e69de29bb2..4da9daf65f 100644
> GIT binary patch
> literal 80
> [80 bytes of base85-encoded binary data omitted]
> 
> diff --git a/tests/qtest/bios-tables-test-allowed-diff.h 
> b/tests/qtest/bios-tables-test-allowed-diff.h
> index 70474a097f..dfb8523c8b 100644
> --- a/tests/qtest/bios-tables-test-allowed-diff.h
> +++ b/tests/qtest/bios-tables-test-allowed-diff.h
> @@ -1,7 +1 @@
>  /* List of comma-separated changed AML files to ignore */
> -"tests/data/acpi/riscv64/virt/APIC",
> -"tests/data/acpi/riscv64/virt/DSDT",
> -"tests/data/acpi/riscv64/virt/FACP",
> -"tests/data/acpi/riscv64/virt/MCFG",
> -"tests/data/acpi/riscv64/virt/RHCT",
> -"tests/data/acpi/riscv64/virt/SPCR",




Re: [PATCH v3 14/15] tests/qtest/bios-tables-test.c: Enable basic testing for RISC-V

2024-06-25 Thread Igor Mammedov
On Tue, 25 Jun 2024 17:59:33 +0530
Sunil V L  wrote:

> On Tue, Jun 25, 2024 at 02:05:58PM +0200, Igor Mammedov wrote:
> > On Tue, 25 Jun 2024 13:19:59 +0200
> > Igor Mammedov  wrote:
> >   
> > > On Fri, 21 Jun 2024 17:29:05 +0530
> > > Sunil V L  wrote:
> > >   
> > > > Add basic ACPI table test case for RISC-V.
> > > > 
> > > > Signed-off-by: Sunil V L 
> > > > Reviewed-by: Alistair Francis 
> > > 
> > > Reviewed-by: Igor Mammedov   
> > 
> > I'm taking my ack back for now, since the patch is most likely to cause
> > failures on weaker test hosts (CI infra).
> > 
> > The test case never finishes and times out on my x86 host while consuming 100% CPU.
> >   
> Hi Igor,
> 
> Many thanks for your kind review! I think you are missing the patch [1]
> (which I mentioned in cover letter as well). This patch became a
> dependency since your suggestion to use -cdrom option needed this fix.
> 
> gitlab CI tests also passed for me with that patch included.
> 
> [1] - https://mail.gnu.org/archive/html/qemu-devel/2024-06/msg03683.html

OK, keep my R-b, but respin the series with that patch included to make it complete.
(There is no harm if it gets merged first through another tree, but it makes the
life of reviewers/maintainers easier.)

>  
> Thanks,
> Sunil
> 




Re: [PATCH v3 14/15] tests/qtest/bios-tables-test.c: Enable basic testing for RISC-V

2024-06-25 Thread Igor Mammedov
On Tue, 25 Jun 2024 13:19:59 +0200
Igor Mammedov  wrote:

> On Fri, 21 Jun 2024 17:29:05 +0530
> Sunil V L  wrote:
> 
> > Add basic ACPI table test case for RISC-V.
> > 
> > Signed-off-by: Sunil V L 
> > Reviewed-by: Alistair Francis   
> 
> Reviewed-by: Igor Mammedov 

I'm taking my ack back for now, since the patch is most likely to cause failures
on weaker test hosts (CI infra).

The test case never finishes and times out on my x86 host while consuming 100% CPU:

==
QTEST_QEMU_BINARY=./qemu-system-riscv64 
/tmp/qemu_build/tests/qtest/bios-tables-test
# random seed: R02Sd870403ff62b08e48122105b2700f660
# starting QEMU: exec ./qemu-system-riscv64 -qtest unix:/tmp/qtest-2873960.sock 
-qtest-log /dev/null -chardev socket,path=/tmp/qtest-2873960.qmp,id=char0 -mon 
chardev=char0,mode=control -display none -audio none -machine none -accel qtest
1..1
# Start of riscv64 tests
# Start of acpi tests
# starting QEMU: exec ./qemu-system-riscv64 -qtest unix:/tmp/qtest-2873960.sock 
-qtest-log /dev/null -chardev socket,path=/tmp/qtest-2873960.qmp,id=char0 -mon 
chardev=char0,mode=control -display none -audio none -machine virt  -accel tcg 
-nodefaults -nographic -drive 
if=pflash,format=raw,file=pc-bios/edk2-riscv-code.fd,readonly=on -drive 
if=pflash,format=raw,file=pc-bios/edk2-riscv-vars.fd,snapshot=on -cdrom 
tests/data/uefi-boot-images/bios-tables-test.riscv64.iso.qcow2 -cpu rva22s64  
-accel qtest



**
ERROR:../../builds/imammedo/qemu/tests/qtest/acpi-utils.c:158:acpi_find_rsdp_address_uefi:
 code should not be reached
Bail out! 
ERROR:../../builds/imammedo/qemu/tests/qtest/acpi-utils.c:158:acpi_find_rsdp_address_uefi:
 code should not be reached
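
The error above comes from the RSDP scan in tests/qtest/acpi-utils.c: the test
walks guest RAM starting at test_data.ram_start, over scan_len bytes, looking
for the ACPI RSDP signature, and aborts when the scan window holds no RSDP
(here, because the guest firmware never got far enough to publish the tables).
A minimal Python model of that scan — illustrative only; the real implementation
is C in acpi-utils.c, and `find_rsdp` is a hypothetical name:

```python
def find_rsdp(mem: bytes, base: int = 0) -> int:
    """Scan a guest-RAM window for the ACPI RSDP.

    Per the ACPI spec, the signature is the 8 bytes b"RSD PTR "
    (trailing space included) on a 16-byte boundary.  Returns the
    guest-physical address of the table, or -1 when the window
    holds no RSDP -- the condition that makes the qtest bail out.
    """
    sig = b"RSD PTR "
    for off in range(0, len(mem) - len(sig) + 1, 16):
        if mem[off:off + len(sig)] == sig:
            return base + off
    return -1
```
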



 
> 
> > ---
> >  tests/qtest/bios-tables-test.c | 26 ++
> >  1 file changed, 26 insertions(+)
> > 
> > diff --git a/tests/qtest/bios-tables-test.c b/tests/qtest/bios-tables-test.c
> > index f4c4704bab..0f9c654e96 100644
> > --- a/tests/qtest/bios-tables-test.c
> > +++ b/tests/qtest/bios-tables-test.c
> > @@ -1977,6 +1977,28 @@ static void test_acpi_microvm_acpi_erst(void)
> >  }
> >  #endif /* CONFIG_POSIX */
> >  
> > +static void test_acpi_riscv64_virt_tcg(void)
> > +{
> > +test_data data = {
> > +.machine = "virt",
> > +.arch = "riscv64",
> > +.tcg_only = true,
> > +.uefi_fl1 = "pc-bios/edk2-riscv-code.fd",
> > +.uefi_fl2 = "pc-bios/edk2-riscv-vars.fd",
> > +.cd = 
> > "tests/data/uefi-boot-images/bios-tables-test.riscv64.iso.qcow2",
> > +.ram_start = 0x8000ULL,
> > +.scan_len = 128ULL * 1024 * 1024,
> > +};
> > +
> > +/*
> > + * RHCT will have ISA string encoded. To reduce the effort
> > + * of updating expected AML file for any new default ISA extension,
> > + * use the profile rva22s64.
> > + */
> > +test_acpi_one("-cpu rva22s64 ", &data);
> > +free_test_data();
> > +}
> > +
> >  static void test_acpi_aarch64_virt_tcg(void)
> >  {
> >  test_data data = {
> > @@ -2455,6 +2477,10 @@ int main(int argc, char *argv[])
> >  qtest_add_func("acpi/virt/viot", 
> > test_acpi_aarch64_virt_viot);
> >  }
> >  }
> > +} else if (strcmp(arch, "riscv64") == 0) {
> > +if (has_tcg && qtest_has_device("virtio-blk-pci")) {
> > +qtest_add_func("acpi/virt", test_acpi_riscv64_virt_tcg);
> > +}
> >  }
> >  ret = g_test_run();
> >  boot_sector_cleanup(disk);  
> 




Re: [PATCH v3 14/15] tests/qtest/bios-tables-test.c: Enable basic testing for RISC-V

2024-06-25 Thread Igor Mammedov
On Fri, 21 Jun 2024 17:29:05 +0530
Sunil V L  wrote:

> Add basic ACPI table test case for RISC-V.
> 
> Signed-off-by: Sunil V L 
> Reviewed-by: Alistair Francis 

Reviewed-by: Igor Mammedov 

> ---
>  tests/qtest/bios-tables-test.c | 26 ++
>  1 file changed, 26 insertions(+)
> 
> diff --git a/tests/qtest/bios-tables-test.c b/tests/qtest/bios-tables-test.c
> index f4c4704bab..0f9c654e96 100644
> --- a/tests/qtest/bios-tables-test.c
> +++ b/tests/qtest/bios-tables-test.c
> @@ -1977,6 +1977,28 @@ static void test_acpi_microvm_acpi_erst(void)
>  }
>  #endif /* CONFIG_POSIX */
>  
> +static void test_acpi_riscv64_virt_tcg(void)
> +{
> +test_data data = {
> +.machine = "virt",
> +.arch = "riscv64",
> +.tcg_only = true,
> +.uefi_fl1 = "pc-bios/edk2-riscv-code.fd",
> +.uefi_fl2 = "pc-bios/edk2-riscv-vars.fd",
> +.cd = 
> "tests/data/uefi-boot-images/bios-tables-test.riscv64.iso.qcow2",
> +.ram_start = 0x8000ULL,
> +.scan_len = 128ULL * 1024 * 1024,
> +};
> +
> +/*
> + * RHCT will have ISA string encoded. To reduce the effort
> + * of updating expected AML file for any new default ISA extension,
> + * use the profile rva22s64.
> + */
> +test_acpi_one("-cpu rva22s64 ", &data);
> +free_test_data();
> +}
> +
>  static void test_acpi_aarch64_virt_tcg(void)
>  {
>  test_data data = {
> @@ -2455,6 +2477,10 @@ int main(int argc, char *argv[])
>  qtest_add_func("acpi/virt/viot", 
> test_acpi_aarch64_virt_viot);
>  }
>  }
> +} else if (strcmp(arch, "riscv64") == 0) {
> +if (has_tcg && qtest_has_device("virtio-blk-pci")) {
> +qtest_add_func("acpi/virt", test_acpi_riscv64_virt_tcg);
> +}
>  }
>  ret = g_test_run();
>  boot_sector_cleanup(disk);




Re: [PATCH v3 08/15] tests/data/acpi: Move x86 ACPI tables under x86/${machine} path

2024-06-25 Thread Igor Mammedov
On Fri, 21 Jun 2024 17:28:59 +0530
Sunil V L  wrote:

> To support multiple architectures using same machine name, create x86
> folder and move all x86 related AML files for each machine type inside.
> 
> Signed-off-by: Sunil V L 

Reviewed-by: Igor Mammedov 

> ---
>  tests/data/acpi/{ => x86}/microvm/APIC  | Bin
>  tests/data/acpi/{ => x86}/microvm/APIC.ioapic2  | Bin
>  tests/data/acpi/{ => x86}/microvm/APIC.pcie | Bin
>  tests/data/acpi/{ => x86}/microvm/DSDT  | Bin
>  tests/data/acpi/{ => x86}/microvm/DSDT.ioapic2  | Bin
>  tests/data/acpi/{ => x86}/microvm/DSDT.pcie | Bin
>  tests/data/acpi/{ => x86}/microvm/DSDT.rtc  | Bin
>  tests/data/acpi/{ => x86}/microvm/DSDT.usb  | Bin
>  tests/data/acpi/{ => x86}/microvm/ERST.pcie | Bin
>  tests/data/acpi/{ => x86}/microvm/FACP  | Bin
>  tests/data/acpi/{ => x86}/pc/APIC   | Bin
>  tests/data/acpi/{ => x86}/pc/APIC.acpihmat  | Bin
>  tests/data/acpi/{ => x86}/pc/APIC.cphp  | Bin
>  tests/data/acpi/{ => x86}/pc/APIC.dimmpxm   | Bin
>  tests/data/acpi/{ => x86}/pc/DSDT   | Bin
>  tests/data/acpi/{ => x86}/pc/DSDT.acpierst  | Bin
>  tests/data/acpi/{ => x86}/pc/DSDT.acpihmat  | Bin
>  tests/data/acpi/{ => x86}/pc/DSDT.bridge| Bin
>  tests/data/acpi/{ => x86}/pc/DSDT.cphp  | Bin
>  tests/data/acpi/{ => x86}/pc/DSDT.dimmpxm   | Bin
>  tests/data/acpi/{ => x86}/pc/DSDT.hpbridge  | Bin
>  tests/data/acpi/{ => x86}/pc/DSDT.hpbrroot  | Bin
>  tests/data/acpi/{ => x86}/pc/DSDT.ipmikcs   | Bin
>  tests/data/acpi/{ => x86}/pc/DSDT.memhp | Bin
>  tests/data/acpi/{ => x86}/pc/DSDT.nohpet| Bin
>  tests/data/acpi/{ => x86}/pc/DSDT.numamem   | Bin
>  tests/data/acpi/{ => x86}/pc/DSDT.roothp| Bin
>  tests/data/acpi/{ => x86}/pc/ERST.acpierst  | Bin
>  tests/data/acpi/{ => x86}/pc/FACP   | Bin
>  tests/data/acpi/{ => x86}/pc/FACP.nosmm | Bin
>  tests/data/acpi/{ => x86}/pc/FACS   | Bin
>  tests/data/acpi/{ => x86}/pc/HMAT.acpihmat  | Bin
>  tests/data/acpi/{ => x86}/pc/HPET   | Bin
>  tests/data/acpi/{ => x86}/pc/NFIT.dimmpxm   | Bin
>  tests/data/acpi/{ => x86}/pc/SLIT.cphp  | Bin
>  tests/data/acpi/{ => x86}/pc/SLIT.memhp | Bin
>  tests/data/acpi/{ => x86}/pc/SRAT.acpihmat  | Bin
>  tests/data/acpi/{ => x86}/pc/SRAT.cphp  | Bin
>  tests/data/acpi/{ => x86}/pc/SRAT.dimmpxm   | Bin
>  tests/data/acpi/{ => x86}/pc/SRAT.memhp | Bin
>  tests/data/acpi/{ => x86}/pc/SRAT.numamem   | Bin
>  tests/data/acpi/{ => x86}/pc/SSDT.dimmpxm   | Bin
>  tests/data/acpi/{ => x86}/pc/WAET   | Bin
>  tests/data/acpi/{ => x86}/q35/APIC  | Bin
>  tests/data/acpi/{ => x86}/q35/APIC.acpihmat | Bin
>  .../acpi/{ => x86}/q35/APIC.acpihmat-noinitiator| Bin
>  tests/data/acpi/{ => x86}/q35/APIC.core-count   | Bin
>  tests/data/acpi/{ => x86}/q35/APIC.core-count2  | Bin
>  tests/data/acpi/{ => x86}/q35/APIC.cphp | Bin
>  tests/data/acpi/{ => x86}/q35/APIC.dimmpxm  | Bin
>  tests/data/acpi/{ => x86}/q35/APIC.thread-count | Bin
>  tests/data/acpi/{ => x86}/q35/APIC.thread-count2| Bin
>  tests/data/acpi/{ => x86}/q35/APIC.type4-count  | Bin
>  tests/data/acpi/{ => x86}/q35/APIC.xapic| Bin
>  tests/data/acpi/{ => x86}/q35/CEDT.cxl  | Bin
>  tests/data/acpi/{ => x86}/q35/DMAR.dmar | Bin
>  tests/data/acpi/{ => x86}/q35/DSDT  | Bin
>  tests/data/acpi/{ => x86}/q35/DSDT.acpierst | Bin
>  tests/data/acpi/{ => x86}/q35/DSDT.acpihmat | Bin
>  .../acpi/{ => x86}/q35/DSDT.acpihmat-noinitiator| Bin
>  tests/data/acpi/{ => x86}/q35/DSDT.applesmc | Bin
>  tests/data/acpi/{ => x86}/q35/DSDT.bridge   | Bin
>  tests/data/acpi/{ => x86}/q35/DSDT.core-count   | Bin
>  tests/data/acpi/{ => x86}/q35/DSDT.core-count2  | Bin
>  tests/data/acpi/{ => x86}/q35/DSDT.cphp | Bin
>  tests/data/acpi/{ => x86}/q35/DSDT.cxl  | Bin
>  tests/data/acpi/{ => x86}/q35/DSDT.dimmpxm  | Bin
>  tests/data/acpi/{ => x86}/q35/DSDT.ipmibt   | Bin
>  tests/data/acpi/{ => x86}/q35/DSDT.ipmismbus| Bin
>  tests/data/acpi/{ => x86}/q35/DSDT.ivrs | Bin
>  tests/data/acpi/{ 

Re: [PATCH v3 09/15] tests/data/acpi/virt: Move ARM64 ACPI tables under aarch64/${machine} path

2024-06-25 Thread Igor Mammedov
On Fri, 21 Jun 2024 17:29:00 +0530
Sunil V L  wrote:

> Same machine name can be used by different architectures. Hence, create
> aarch64 folder and move all aarch64 related AML files for virt machine
> inside.
> 
> Signed-off-by: Sunil V L 

Reviewed-by: Igor Mammedov 

> ---
>  tests/data/acpi/{ => aarch64}/virt/APIC | Bin
>  .../data/acpi/{ => aarch64}/virt/APIC.acpihmatvirt  | Bin
>  tests/data/acpi/{ => aarch64}/virt/APIC.topology| Bin
>  tests/data/acpi/{ => aarch64}/virt/DBG2 | Bin
>  tests/data/acpi/{ => aarch64}/virt/DSDT | Bin
>  .../data/acpi/{ => aarch64}/virt/DSDT.acpihmatvirt  | Bin
>  tests/data/acpi/{ => aarch64}/virt/DSDT.memhp   | Bin
>  tests/data/acpi/{ => aarch64}/virt/DSDT.pxb | Bin
>  tests/data/acpi/{ => aarch64}/virt/DSDT.topology| Bin
>  tests/data/acpi/{ => aarch64}/virt/FACP | Bin
>  tests/data/acpi/{ => aarch64}/virt/GTDT | Bin
>  .../data/acpi/{ => aarch64}/virt/HMAT.acpihmatvirt  | Bin
>  tests/data/acpi/{ => aarch64}/virt/IORT | Bin
>  tests/data/acpi/{ => aarch64}/virt/MCFG | Bin
>  tests/data/acpi/{ => aarch64}/virt/NFIT.memhp   | Bin
>  tests/data/acpi/{ => aarch64}/virt/PPTT | Bin
>  .../data/acpi/{ => aarch64}/virt/PPTT.acpihmatvirt  | Bin
>  tests/data/acpi/{ => aarch64}/virt/PPTT.topology| Bin
>  tests/data/acpi/{ => aarch64}/virt/SLIT.memhp   | Bin
>  tests/data/acpi/{ => aarch64}/virt/SPCR | Bin
>  .../data/acpi/{ => aarch64}/virt/SRAT.acpihmatvirt  | Bin
>  tests/data/acpi/{ => aarch64}/virt/SRAT.memhp   | Bin
>  tests/data/acpi/{ => aarch64}/virt/SRAT.numamem | Bin
>  tests/data/acpi/{ => aarch64}/virt/SSDT.memhp   | Bin
>  tests/data/acpi/{ => aarch64}/virt/VIOT | Bin
>  25 files changed, 0 insertions(+), 0 deletions(-)
>  rename tests/data/acpi/{ => aarch64}/virt/APIC (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/APIC.acpihmatvirt (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/APIC.topology (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/DBG2 (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/DSDT (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/DSDT.acpihmatvirt (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/DSDT.memhp (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/DSDT.pxb (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/DSDT.topology (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/FACP (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/GTDT (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/HMAT.acpihmatvirt (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/IORT (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/MCFG (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/NFIT.memhp (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/PPTT (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/PPTT.acpihmatvirt (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/PPTT.topology (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/SLIT.memhp (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/SPCR (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/SRAT.acpihmatvirt (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/SRAT.memhp (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/SRAT.numamem (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/SSDT.memhp (100%)
>  rename tests/data/acpi/{ => aarch64}/virt/VIOT (100%)
> 
> diff --git a/tests/data/acpi/virt/APIC b/tests/data/acpi/aarch64/virt/APIC
> similarity index 100%
> rename from tests/data/acpi/virt/APIC
> rename to tests/data/acpi/aarch64/virt/APIC
> diff --git a/tests/data/acpi/virt/APIC.acpihmatvirt 
> b/tests/data/acpi/aarch64/virt/APIC.acpihmatvirt
> similarity index 100%
> rename from tests/data/acpi/virt/APIC.acpihmatvirt
> rename to tests/data/acpi/aarch64/virt/APIC.acpihmatvirt
> diff --git a/tests/data/acpi/virt/APIC.topology 
> b/tests/data/acpi/aarch64/virt/APIC.topology
> similarity index 100%
> rename from tests/data/acpi/virt/APIC.topology
> rename to tests/data/acpi/aarch64/virt/APIC.topology
> diff --git a/tests/data/acpi/virt/DBG2 b/tests/data/acpi/aarch64/virt/DBG2
> similarity index 100%
> rename from tests/data/acpi/virt/DBG2
> rename to tests/data/acpi/aarch64/virt/DBG2
> diff --git a/tests/data/acpi/virt/DSDT b/tests/data/acpi/aarch64/virt/DSDT
> similarity index 100%
> rename from tests/data/acpi/virt/DSDT
> rename to tests/data/acpi/aarch64/virt/DSDT
> diff --git a/tests/data/acpi/virt/DSDT.acpihmatvirt 
> b/tests/data/acpi/aarch64/virt/DSDT.acpihmatvirt
> similarity index 100%
> rename fr

Re: [PATCH v3 07/15] tests/qtest/bios-tables-test.c: Set "arch" for x86 tests

2024-06-25 Thread Igor Mammedov
On Fri, 21 Jun 2024 17:28:58 +0530
Sunil V L  wrote:

> To search for expected AML files under ${arch}/${machine} path, set this
> field for X86 related test cases.
> 
> Signed-off-by: Sunil V L 

Reviewed-by: Igor Mammedov 

> ---
>  tests/qtest/bios-tables-test.c | 77 --
>  1 file changed, 64 insertions(+), 13 deletions(-)
> 
> diff --git a/tests/qtest/bios-tables-test.c b/tests/qtest/bios-tables-test.c
> index 007c281c9a..f4c4704bab 100644
> --- a/tests/qtest/bios-tables-test.c
> +++ b/tests/qtest/bios-tables-test.c
> @@ -933,6 +933,7 @@ static void test_acpi_piix4_tcg(void)
>   * This is to make guest actually run.
>   */
>  data.machine = MACHINE_PC;
> +data.arch= "x86";
>  data.required_struct_types = base_required_struct_types;
>  data.required_struct_types_len = ARRAY_SIZE(base_required_struct_types);
>  test_acpi_one(NULL, &data);
> @@ -944,6 +945,7 @@ static void test_acpi_piix4_tcg_bridge(void)
>  test_data data = {};
>  
>  data.machine = MACHINE_PC;
> +data.arch= "x86";
>  data.variant = ".bridge";
>  data.required_struct_types = base_required_struct_types;
>  data.required_struct_types_len = ARRAY_SIZE(base_required_struct_types);
> @@ -981,6 +983,7 @@ static void test_acpi_piix4_no_root_hotplug(void)
>  test_data data = {};
>  
>  data.machine = MACHINE_PC;
> +data.arch= "x86";
>  data.variant = ".roothp";
>  data.required_struct_types = base_required_struct_types;
>  data.required_struct_types_len = ARRAY_SIZE(base_required_struct_types);
> @@ -997,6 +1000,7 @@ static void test_acpi_piix4_no_bridge_hotplug(void)
>  test_data data = {};
>  
>  data.machine = MACHINE_PC;
> +data.arch= "x86";
>  data.variant = ".hpbridge";
>  data.required_struct_types = base_required_struct_types;
>  data.required_struct_types_len = ARRAY_SIZE(base_required_struct_types);
> @@ -1013,6 +1017,7 @@ static void test_acpi_piix4_no_acpi_pci_hotplug(void)
>  test_data data = {};
>  
>  data.machine = MACHINE_PC;
> +data.arch= "x86";
>  data.variant = ".hpbrroot";
>  data.required_struct_types = base_required_struct_types;
>  data.required_struct_types_len = ARRAY_SIZE(base_required_struct_types);
> @@ -1034,6 +1039,7 @@ static void test_acpi_q35_tcg(void)
>  test_data data = {};
>  
>  data.machine = MACHINE_Q35;
> +data.arch = "x86";
>  data.required_struct_types = base_required_struct_types;
>  data.required_struct_types_len = ARRAY_SIZE(base_required_struct_types);
>  test_acpi_one(NULL, &data);
> @@ -1049,6 +1055,7 @@ static void test_acpi_q35_kvm_type4_count(void)
>  {
>  test_data data = {
>  .machine = MACHINE_Q35,
> +.arch= "x86",
>  .variant = ".type4-count",
>  .required_struct_types = base_required_struct_types,
>  .required_struct_types_len = ARRAY_SIZE(base_required_struct_types),
> @@ -1065,6 +1072,7 @@ static void test_acpi_q35_kvm_core_count(void)
>  {
>  test_data data = {
>  .machine = MACHINE_Q35,
> +.arch= "x86",
>  .variant = ".core-count",
>  .required_struct_types = base_required_struct_types,
>  .required_struct_types_len = ARRAY_SIZE(base_required_struct_types),
> @@ -1082,6 +1090,7 @@ static void test_acpi_q35_kvm_core_count2(void)
>  {
>  test_data data = {
>  .machine = MACHINE_Q35,
> +.arch= "x86",
>  .variant = ".core-count2",
>  .required_struct_types = base_required_struct_types,
>  .required_struct_types_len = ARRAY_SIZE(base_required_struct_types),
> @@ -1099,6 +1108,7 @@ static void test_acpi_q35_kvm_thread_count(void)
>  {
>  test_data data = {
>  .machine = MACHINE_Q35,
> +.arch= "x86",
>  .variant = ".thread-count",
>  .required_struct_types = base_required_struct_types,
>  .required_struct_types_len = ARRAY_SIZE(base_required_struct_types),
> @@ -1116,6 +1126,7 @@ static void test_acpi_q35_kvm_thread_count2(void)
>  {
>  test_data data = {
>  .machine = MACHINE_Q35,
> +.arch= "x86",
>  .variant = ".thread-count2",
>  .required_struct_types = base_required_struct_types,
>  .required_struct_types_len = ARRAY_SIZE(base_required_struct_types),
> @@ -1134,6 +1145,7 @@ static void test_acpi_q35_tcg_bridge(void)
>  test_data data = {};
>  
&

Re: [PATCH v3 06/15] tests/qtest/bios-tables-test.c: Set "arch" for aarch64 tests

2024-06-25 Thread Igor Mammedov
On Fri, 21 Jun 2024 17:28:57 +0530
Sunil V L  wrote:

> To search for expected AML files under ${arch}/${machine} path, set this
> field for AARCH64 related test cases.
> 
> Signed-off-by: Sunil V L 

Reviewed-by: Igor Mammedov 

> ---
>  tests/qtest/bios-tables-test.c | 8 
>  1 file changed, 8 insertions(+)
> 
> diff --git a/tests/qtest/bios-tables-test.c b/tests/qtest/bios-tables-test.c
> index 29c52952f4..007c281c9a 100644
> --- a/tests/qtest/bios-tables-test.c
> +++ b/tests/qtest/bios-tables-test.c
> @@ -1591,6 +1591,7 @@ static void test_acpi_aarch64_virt_tcg_memhp(void)
>  {
>  test_data data = {
>  .machine = "virt",
> +.arch = "aarch64",
>  .tcg_only = true,
>  .uefi_fl1 = "pc-bios/edk2-aarch64-code.fd",
>  .uefi_fl2 = "pc-bios/edk2-arm-vars.fd",
> @@ -1684,6 +1685,7 @@ static void test_acpi_aarch64_virt_tcg_numamem(void)
>  {
>  test_data data = {
>  .machine = "virt",
> +.arch = "aarch64",
>  .tcg_only = true,
>  .uefi_fl1 = "pc-bios/edk2-aarch64-code.fd",
>  .uefi_fl2 = "pc-bios/edk2-arm-vars.fd",
> @@ -1706,6 +1708,7 @@ static void test_acpi_aarch64_virt_tcg_pxb(void)
>  {
>  test_data data = {
>  .machine = "virt",
> +.arch = "aarch64",
>  .tcg_only = true,
>  .uefi_fl1 = "pc-bios/edk2-aarch64-code.fd",
>  .uefi_fl2 = "pc-bios/edk2-arm-vars.fd",
> @@ -1779,6 +1782,7 @@ static void test_acpi_aarch64_virt_tcg_acpi_hmat(void)
>  {
>  test_data data = {
>  .machine = "virt",
> +.arch = "aarch64",
>  .tcg_only = true,
>  .uefi_fl1 = "pc-bios/edk2-aarch64-code.fd",
>  .uefi_fl2 = "pc-bios/edk2-arm-vars.fd",
> @@ -1935,6 +1939,7 @@ static void test_acpi_aarch64_virt_tcg(void)
>  {
>  test_data data = {
>  .machine = "virt",
> +.arch = "aarch64",
>  .tcg_only = true,
>  .uefi_fl1 = "pc-bios/edk2-aarch64-code.fd",
>  .uefi_fl2 = "pc-bios/edk2-arm-vars.fd",
> @@ -1954,6 +1959,7 @@ static void test_acpi_aarch64_virt_tcg_topology(void)
>  {
>  test_data data = {
>  .machine = "virt",
> +.arch = "aarch64",
>  .variant = ".topology",
>  .tcg_only = true,
>  .uefi_fl1 = "pc-bios/edk2-aarch64-code.fd",
> @@ -2037,6 +2043,7 @@ static void test_acpi_aarch64_virt_viot(void)
>  {
>  test_data data = {
>  .machine = "virt",
> +.arch = "aarch64",
>  .tcg_only = true,
>  .uefi_fl1 = "pc-bios/edk2-aarch64-code.fd",
>  .uefi_fl2 = "pc-bios/edk2-arm-vars.fd",
> @@ -2213,6 +2220,7 @@ static void test_acpi_aarch64_virt_oem_fields(void)
>  {
>  test_data data = {
>  .machine = "virt",
> +.arch = "aarch64",
>  .tcg_only = true,
>  .uefi_fl1 = "pc-bios/edk2-aarch64-code.fd",
>  .uefi_fl2 = "pc-bios/edk2-arm-vars.fd",




Re: [PATCH v3 05/15] tests/qtest/bios-tables-test.c: Add support for arch in path

2024-06-25 Thread Igor Mammedov
On Fri, 21 Jun 2024 17:28:56 +0530
Sunil V L  wrote:

> Since machine name can be common for multiple architectures (ex: virt),
> add "arch" in the path to search for expected AML files. Since the AML
> files are still under old path, add support for searching with and
> without arch in the path.
> 
> Signed-off-by: Sunil V L 

Reviewed-by: Igor Mammedov 

> ---
>  tests/qtest/bios-tables-test.c | 23 ---
>  1 file changed, 20 insertions(+), 3 deletions(-)
> 
> diff --git a/tests/qtest/bios-tables-test.c b/tests/qtest/bios-tables-test.c
> index c4a4d1c7bf..29c52952f4 100644
> --- a/tests/qtest/bios-tables-test.c
> +++ b/tests/qtest/bios-tables-test.c
> @@ -78,6 +78,7 @@
>  typedef struct {
>  bool tcg_only;
>  const char *machine;
> +const char *arch;
>  const char *machine_param;
>  const char *variant;
>  const char *uefi_fl1;
> @@ -262,8 +263,19 @@ static void dump_aml_files(test_data *data, bool rebuild)
>  g_assert(exp_sdt->aml);
>  
>  if (rebuild) {
> -aml_file = g_strdup_printf("%s/%s/%.4s%s", data_dir, 
> data->machine,
> +aml_file = g_strdup_printf("%s/%s/%s/%.4s%s", data_dir,
> +   data->arch, data->machine,
> sdt->aml, ext);
> +
> +/*
> + * To keep test cases not failing before the DATA files are 
> moved to
> + * ${arch}/${machine} folder, add this check as well.
> + */
> +if (!g_file_test(aml_file, G_FILE_TEST_EXISTS)) {
> +aml_file = g_strdup_printf("%s/%s/%.4s%s", data_dir,
> +   data->machine, sdt->aml, ext);
> +}
> +
>  if (!g_file_test(aml_file, G_FILE_TEST_EXISTS) &&
>  sdt->aml_len == exp_sdt->aml_len &&
>  !memcmp(sdt->aml, exp_sdt->aml, sdt->aml_len)) {
> @@ -398,8 +410,13 @@ static GArray *load_expected_aml(test_data *data)
>  memset(&exp_sdt, 0, sizeof(exp_sdt));
>  
>  try_again:
> -aml_file = g_strdup_printf("%s/%s/%.4s%s", data_dir, data->machine,
> -   sdt->aml, ext);
> +aml_file = g_strdup_printf("%s/%s/%s/%.4s%s", data_dir, data->arch,
> +   data->machine, sdt->aml, ext);
> +if (!g_file_test(aml_file, G_FILE_TEST_EXISTS)) {
> +aml_file = g_strdup_printf("%s/%s/%.4s%s", data_dir, 
> data->machine,
> +   sdt->aml, ext);
> +}
> +
>  if (verbosity_level >= 2) {
>  fprintf(stderr, "Looking for expected file '%s'\n", aml_file);
>  }
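
The lookup logic the patch adds can be modeled as follows — a hedged sketch,
where `expected_aml_path` and its parameters are illustrative names rather than
the test's actual API: prefer the new ${arch}/${machine} layout, and fall back
to the legacy ${machine}-only layout while the data files have not yet been
moved.

```python
import os

def expected_aml_path(data_dir: str, arch: str, machine: str,
                      name: str, ext: str = "") -> str:
    """Prefer ${data_dir}/${arch}/${machine}/${name}${ext}; fall back
    to the legacy ${data_dir}/${machine}/${name}${ext} layout when the
    per-arch file does not exist yet."""
    path = os.path.join(data_dir, arch, machine, name + ext)
    if not os.path.exists(path):
        path = os.path.join(data_dir, machine, name + ext)
    return path
```

Once all blobs are moved under ${arch}/${machine}, the fallback branch becomes
dead and can be dropped.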




Re: [PATCH v3 02/15] uefi-test-tools: Add support for python based build script

2024-06-25 Thread Igor Mammedov
On Fri, 21 Jun 2024 17:28:53 +0530
Sunil V L  wrote:

> edk2-funcs.sh which is used in this Makefile, was removed in the commit
> c28a2891f3 ("edk2: update build script"). It is replaced with a python
> based script. So, update the Makefile and add the configuration file as
> required to support the python based build script.
> 
> Signed-off-by: Sunil V L 

Acked-by: Igor Mammedov 

> ---
>  tests/uefi-test-tools/Makefile   | 19 +++
>  tests/uefi-test-tools/uefi-test-build.config | 52 
>  2 files changed, 59 insertions(+), 12 deletions(-)
>  create mode 100644 tests/uefi-test-tools/uefi-test-build.config
> 
> diff --git a/tests/uefi-test-tools/Makefile b/tests/uefi-test-tools/Makefile
> index 0c003f2877..f4eaebd8ff 100644
> --- a/tests/uefi-test-tools/Makefile
> +++ b/tests/uefi-test-tools/Makefile
> @@ -12,7 +12,7 @@
>  
>  edk2_dir  := ../../roms/edk2
>  images_dir:= ../data/uefi-boot-images
> -emulation_targets := arm aarch64 i386 x86_64
> +emulation_targets := arm aarch64 i386 x86_64 riscv64
>  uefi_binaries := bios-tables-test
>  intermediate_suffixes := .efi .fat .iso.raw
>  
> @@ -56,7 +56,8 @@ Build/%.iso.raw: Build/%.fat
>  # stripped from, the argument.
>  map_arm_to_uefi = $(subst arm,ARM,$(1))
>  map_aarch64_to_uefi = $(subst aarch64,AA64,$(call map_arm_to_uefi,$(1)))
> -map_i386_to_uefi= $(subst i386,IA32,$(call map_aarch64_to_uefi,$(1)))
> +map_riscv64_to_uefi = $(subst riscv64,RISCV64,$(call 
> map_aarch64_to_uefi,$(1)))
> +map_i386_to_uefi= $(subst i386,IA32,$(call map_riscv64_to_uefi,$(1)))
>  map_x86_64_to_uefi  = $(subst x86_64,X64,$(call map_i386_to_uefi,$(1)))
>  map_to_uefi = $(subst .,,$(call map_x86_64_to_uefi,$(1)))
>  
> @@ -70,7 +71,7 @@ Build/%.fat: Build/%.efi
>   uefi_bin_b=$$(stat --format=%s -- $<) && \
>   uefi_fat_kb=$$(( (uefi_bin_b * 11 / 10 + 1023) / 1024 )) && \
>   uefi_fat_kb=$$(( uefi_fat_kb >= 64 ? uefi_fat_kb : 64 )) && \
> - mkdosfs -C $@ -n $(basename $(@F)) -- $$uefi_fat_kb
> + mkdosfs -C $@ -n "bios-test" -- $$uefi_fat_kb
>   MTOOLS_SKIP_CHECK=1 mmd -i $@ ::EFI
>   MTOOLS_SKIP_CHECK=1 mmd -i $@ ::EFI/BOOT
>   MTOOLS_SKIP_CHECK=1 mcopy -i $@ -- $< \
> @@ -95,15 +96,9 @@ Build/%.fat: Build/%.efi
>  # we must mark the recipe manually as recursive, by using the "+" indicator.
>  # This way, when the inner "make" starts a parallel build of the target edk2
>  # module, it can communicate with the outer "make"'s job server.
> -Build/bios-tables-test.%.efi: build-edk2-tools
> - +./build.sh $(edk2_dir) BiosTablesTest $* $@
> -
> -build-edk2-tools:
> - cd $(edk2_dir)/BaseTools && git submodule update --init --force
> - $(MAKE) -C $(edk2_dir)/BaseTools \
> - PYTHON_COMMAND=$${EDK2_PYTHON_COMMAND:-python3} \
> - EXTRA_OPTFLAGS='$(EDK2_BASETOOLS_OPTFLAGS)' \
> - EXTRA_LDFLAGS='$(EDK2_BASETOOLS_LDFLAGS)'
> +Build/bios-tables-test.%.efi:
> + $(PYTHON) ../../roms/edk2-build.py --config uefi-test-build.config \
> + --match $*
>  
>  clean:
>   rm -rf Build Conf log
> diff --git a/tests/uefi-test-tools/uefi-test-build.config 
> b/tests/uefi-test-tools/uefi-test-build.config
> new file mode 100644
> index 00..1f389ae541
> --- /dev/null
> +++ b/tests/uefi-test-tools/uefi-test-build.config
> @@ -0,0 +1,52 @@
> +[global]
> +core = ../../roms/edk2
> +
> +
> +# arm
> +
> +[build.arm]
> +conf = UefiTestToolsPkg/UefiTestToolsPkg.dsc
> +plat = UefiTestTools
> +dest = ./Build
> +arch = ARM
> +cpy1 = ARM/BiosTablesTest.efi  bios-tables-test.arm.efi
> +
> +
> +# aarch64
> +
> +[build.aarch64]
> +conf = UefiTestToolsPkg/UefiTestToolsPkg.dsc
> +plat = UefiTestTools
> +dest = ./Build
> +arch = AARCH64
> +cpy1 = AARCH64/BiosTablesTest.efi  bios-tables-test.aarch64.efi
> +
> +
> +# riscv64
> +
> +[build.riscv]
> +conf = UefiTestToolsPkg/UefiTestToolsPkg.dsc
> +plat = UefiTestTools
> +dest = ./Build
> +arch = RISCV64
> +cpy1 = RISCV64/BiosTablesTest.efi  bios-tables-test.riscv64.efi
> +
> +
> +# ia32
> +
> +[build.ia32]
> +conf = UefiTestToolsPkg/UefiTestToolsPkg.dsc
> +plat = UefiTestTools
> +dest = ./Build
> +arch = IA32
> +cpy1 = IA32/BiosTablesTest.efi  bios-tables-test.i386.efi
> +
> +
> +# x64
> +
> +[build.x64]
> +conf = UefiTestToolsPkg/UefiTestToolsPkg.dsc
> +plat = UefiTestTools
> +dest = ./Build
> +arch = X64
> +cpy1 = X64/BiosTablesTest.efi  bios-tables-test.x86_64.efi
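
The FAT image sizing in the Makefile hunk above reserves roughly 110% of the
EFI binary size, rounds up to whole KiB, and enforces a 64 KiB floor. The same
arithmetic expressed in Python — a sketch for illustration; `uefi_fat_kb` is a
shell variable in the Makefile, not a function it defines:

```python
def uefi_fat_kb(uefi_bin_b: int) -> int:
    """Size (in KiB) of the FAT image holding one UEFI binary:
    110% of the binary size, rounded up to a whole KiB, with a
    64 KiB minimum -- mirroring the shell arithmetic in the
    Makefile's Build/%.fat rule."""
    kb = (uefi_bin_b * 11 // 10 + 1023) // 1024
    return kb if kb >= 64 else 64
```
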




Re: [PATCH v3 01/15] uefi-test-tools/UefiTestToolsPkg: Add RISC-V support

2024-06-25 Thread Igor Mammedov
On Fri, 21 Jun 2024 17:28:52 +0530
Sunil V L  wrote:

> Enable building the test application for RISC-V with appropriate
> dependencies updated.
> 
> Signed-off-by: Sunil V L 
> Acked-by: Gerd Hoffmann 
> Acked-by: Alistair Francis 

Acked-by: Igor Mammedov 

> ---
>  tests/uefi-test-tools/UefiTestToolsPkg/UefiTestToolsPkg.dsc | 6 +-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/tests/uefi-test-tools/UefiTestToolsPkg/UefiTestToolsPkg.dsc 
> b/tests/uefi-test-tools/UefiTestToolsPkg/UefiTestToolsPkg.dsc
> index c8511cd732..0902fd3c73 100644
> --- a/tests/uefi-test-tools/UefiTestToolsPkg/UefiTestToolsPkg.dsc
> +++ b/tests/uefi-test-tools/UefiTestToolsPkg/UefiTestToolsPkg.dsc
> @@ -19,7 +19,7 @@
>PLATFORM_VERSION= 0.1
>PLATFORM_NAME   = UefiTestTools
>SKUID_IDENTIFIER= DEFAULT
> -  SUPPORTED_ARCHITECTURES = ARM|AARCH64|IA32|X64
> +  SUPPORTED_ARCHITECTURES = ARM|AARCH64|IA32|X64|RISCV64
>BUILD_TARGETS   = DEBUG
>  
>  [BuildOptions.IA32]
> @@ -60,6 +60,10 @@
>  
>  [LibraryClasses.IA32, LibraryClasses.X64]
>BaseMemoryLib|MdePkg/Library/BaseMemoryLibRepStr/BaseMemoryLibRepStr.inf
> +  
> RegisterFilterLib|MdePkg/Library/RegisterFilterLibNull/RegisterFilterLibNull.inf
> +
> +[LibraryClasses.RISCV64]
> +  BaseMemoryLib|MdePkg/Library/BaseMemoryLib/BaseMemoryLib.inf
>  
>  [PcdsFixedAtBuild]
>gEfiMdePkgTokenSpaceGuid.PcdDebugPrintErrorLevel|0x8040004F




Re: [PATCH v2 06/12] tests/data/acpi/virt: Move ACPI tables under aarch64

2024-06-20 Thread Igor Mammedov
On Wed, 19 Jun 2024 23:30:35 +0530
Sunil V L  wrote:

> On Wed, Jun 19, 2024 at 05:20:50AM -0400, Michael S. Tsirkin wrote:
> > On Wed, Jun 19, 2024 at 11:17:43AM +0200, Igor Mammedov wrote:  
> > > On Mon, 27 May 2024 20:46:29 +0530
> > > Sunil V L  wrote:
> > >   
> > > > On Mon, May 27, 2024 at 12:12:10PM +0200, Philippe Mathieu-Daudé wrote: 
> > > >  
> > > > > Hi Sunil,
> > > > > 
> > > > > On 24/5/24 08:14, Sunil V L wrote:
> > > > > > Since virt is a common machine name across architectures like ARM64 
> > > > > > and
> > > > > > RISC-V, move existing ARM64 ACPI tables under aarch64 folder so that
> > > > > > RISC-V tables can be added under riscv64 folder in future.
> > > > > > 
> > > > > > Signed-off-by: Sunil V L 
> > > > > > Reviewed-by: Alistair Francis 
> > > > > > ---
> > > > > >   tests/data/acpi/virt/{ => aarch64}/APIC | Bin
> > > > > 
> > > > > The usual pattern is {target}/{machine}, so instead of:
> > > > > 
> > > > >   microvm/
> > > > >   pc/
> > > > >   q35/
> > > > >   virt/aarch64/
> > > > >   virt/riscv64/
> > > > > 
> > > > > (which is odd because q35 is the x86 'virt'), I'd rather see:
> > > > > 
> > > > >   x86/microvm/
> > > > >   x86/pc/
> > > > >   x86/q35/
> > > > >   aarch64/virt/
> > > > >   riscv64/virt/
> > > > > 
> > > > > Anyhow just my 2 cents, up to the ACPI maintainers :)
> > > > > 
> > > > Hi Phil,
> > > > 
> > > > Your suggestion does make sense to me. Let me wait for feedback from
> > > > ARM/ACPI maintainers.  
> > > 
> > > I'd prefer  {target}/{machine} hierarchy like Philippe suggests  
> > 
> > Agreed.
> >   
> Thanks for the confirmation!. Let me send the updated version soon.
> 
> Moving pc/q35/microvm also under the new x86 folder would need many changes in
> bios-tables-test.c. So, the question is: are you OK to combine the x86
> changes as well in this series, or would you prefer to do it later in a separate series?

it should be fine to include the x86 changes here as well.

I'd basically split the previous patch into a path-altering part, a 2nd
patch adding
 .arch = "aarch64"

and then a 3rd doing the same for x86.

As for this patch, I'd include all of the blob movement here.
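For the blob movement itself, a rough sketch of the reshuffle into the
{target}/{machine} layout discussed above (hypothetical: a scratch tree
stands in for tests/data/acpi, and in the real repository you would use
`git mv` on the actual blobs so rename history is preserved):

```shell
# Scratch tree standing in for tests/data/acpi; the real change would
# run "git mv" on the checked-in blobs instead of plain mv.
set -e
acpi=$(mktemp -d)
mkdir -p "$acpi/virt" "$acpi/pc" "$acpi/q35" "$acpi/microvm"
touch "$acpi/virt/APIC" "$acpi/q35/DSDT"

cd "$acpi"
mkdir -p aarch64 x86
mv virt aarch64/virt            # aarch64 'virt' blobs
for m in pc q35 microvm; do     # x86 machines move under x86/
    mv "$m" "x86/$m"
done
```

After the move, a riscv64/virt directory can be added alongside
aarch64/virt without any name clash.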

> 
> Thanks,
> Sunil
> 




Re: [PATCH v2 12/12] tests/qtest/bios-tables-test: Add expected ACPI data files for RISC-V

2024-06-19 Thread Igor Mammedov
On Fri, 24 May 2024 11:44:11 +0530
Sunil V L  wrote:

> As per the step 5 in the process documented in bios-tables-test.c,
> generate the expected ACPI AML data files for RISC-V using the
> rebuild-expected-aml.sh script and update the
> bios-tables-test-allowed-diff.h.
> 
> These are all new files being added for the first time. Hence, iASL diff
> output is not added.
> 
> Signed-off-by: Sunil V L 

Acked-by: Igor Mammedov 

> ---
>  tests/data/acpi/virt/riscv64/APIC   | Bin 0 -> 116 bytes
>  tests/data/acpi/virt/riscv64/DSDT   | Bin 0 -> 3518 bytes
>  tests/data/acpi/virt/riscv64/FACP   | Bin 0 -> 276 bytes
>  tests/data/acpi/virt/riscv64/MCFG   | Bin 0 -> 60 bytes
>  tests/data/acpi/virt/riscv64/RHCT   | Bin 0 -> 314 bytes
>  tests/data/acpi/virt/riscv64/SPCR   | Bin 0 -> 80 bytes
>  tests/qtest/bios-tables-test-allowed-diff.h |   6 --
>  7 files changed, 6 deletions(-)
> 
> diff --git a/tests/data/acpi/virt/riscv64/APIC 
> b/tests/data/acpi/virt/riscv64/APIC
> index 
> e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..66a25dfd2d6ea2b607c024722b2eab95873a01e9
>  100644
> GIT binary patch
> literal 116
> zcmZ<^@N_O=U|?X|;^gn_5v<@85#X!<1dKp25F13pfP@Mo12P{Zj?R|`s)2!c7=s}J
> I#NvT*0o0BN0RR91
> 
> literal 0
> HcmV?d1
> 
> diff --git a/tests/data/acpi/virt/riscv64/DSDT 
> b/tests/data/acpi/virt/riscv64/DSDT
> index 
> e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0fb2d5e0e389541209b765d5092d0706f40298f6
>  100644
> GIT binary patch
> literal 3518
> zcmZvf%WvaU6vnR;w@IBxlQexl(t(j!ppl%0(ryq zgcxeCJ*}oI`D?w!iChKA+$9t$orAn%(bn
> zN+n)t?0eh6a^of6TgGN7rRbcFh1Tyc_k%{iceZVNuIr}z+pT7 z>)qm0`a{a^Ra*0X8Qm}E3gQ zVE91w@#@A>YkTOF1_DRa)WNl^t#{A^TNmZNji{btZCtt5E!+b  
> zecnEQ^xc;~?0jvN=B?69B6sx0n@1-LMCh z|GVk1+UV8=+`16nn$W3iZBaGE(t=R0Su8V)L_|(iti)M3i8v3Jc_g_ zZ0_+)tcM-v;WLjB?y(x{F%swTD)SiS9?!;ljK+DKGLIDZSc~;Y#d$nr9_i3y=NsQ^
> zu@zZ&*ReP}{EyK3th+T@*_*eqZ#4FX%O>b{iWO(USDtFAW3{YY{55g*p1P}!a8zWX
> z7lz;IPVBzpJS=7G%wV8y2Q62ba|`EHRm#%1lYm%>L=vK=N;x|_7+?*WxKL3R0`umY
> z>MhKe$y(1g;N2-TU8l! zP=$;-)Haz>@sOMoiwl`i1tW@cj+o4-cu3BPCB-VhD+4MD9hIDroD#Oi8OIy2%-
> zNlr-4iRFXLXr|LTGn$gLp$V}cX!M^n3=p)tt`$vN>NG_kr`M{qil6Owag1ZPHY
> zW+W#h=gbPutl-Q_PDsv)?-Htwo@Y*Q<|HR1=gbSvyx`1BPDsup=eXpA zv*(GAkEvZhm4f7i zLUPVY!8s{7CnYB&=bRFpQ-X6!azb*>X~8)yIHx5iB zS;+~>Ip+lDoZy_3oRFMzUU1F}$vGDU=Yrr|kera5b5U?E3eH8z3CTH^1m}|A  
> zT#}rSoU5@qiM`L8Qmx@>rXnqyVu6bqy3;0
> zSfN$e$O$X-aop-gjFlN1TJ2C(VM8aZsGs9rPsDhcG3gaHcG3%d9rt=N#> zYt+>h-rEXOMpLn!a_)bcQwbVUYCt>d6Z~go(OKwiV=x$e6rJOWm8FJLZ)jL(gSOQ9  
> z(=101Q%{N90rg{iGreXyIPiUy_PU*2Ro)uw?+2cJexkhQVfAu5b@3W?^1b$-wSOuL
> z8($pWumAYmuXoN*92)^EIHqx|osu9QI;oM>2efl4w7)DozPM|Bh$~ecUA>%od=bT&  
> z;R0PerC=JrI{7MZ#_1;2tCR9A{Hkc%mp4o`zpVZISFrki`_c5@?b)Ba_T|{c>*}hQ  
> pv@F`;cR<_jYzAT_(hnb+|8pBtRxTSGr9{slF`>K_0A
> 
> literal 0
> HcmV?d1
> 
> diff --git a/tests/data/acpi/virt/riscv64/FACP 
> b/tests/data/acpi/virt/riscv64/FACP
> index 
> e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..a5276b65ea8ce46cc9b40d96d98f0669c9089ed4
>  100644
> GIT binary patch
> literal 276
> zcmZ>BbPf< A0RR91
> 
> literal 0
> HcmV?d1
> 
> diff --git a/tests/data/acpi/virt/riscv64/MCFG 
> b/tests/data/acpi/virt/riscv64/MCFG
> index 
> e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..37eb923a9320f5573c0c2cdb90bd98409cc7eb6f
>  100644
> GIT binary patch
> literal 60
> rcmeZuc5}C3U|?Y6aq@Te2v%^42yj*a0!E-1hz+8VfB}^KA4CHH3`GY4
> 
> literal 0
> HcmV?d1
> 
> diff --git a/tests/data/acpi/virt/riscv64/RHCT 
> b/tests/data/acpi/virt/riscv64/RHCT
> index 
> e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..beaa961bbf0f0486c0dee25f543377c928354f84
>  100644
> GIT binary patch
> literal 314
> zcmXAlu}%Xq42FGxD#XNyI`tt=C`A2-wkXNvbP-K1O46&8iKjrk6)SI3ey5h~
> z@3-SPa`wCa{iPvl)b_RC9X8vKw|)adiC8n)zP^7d?+~A>`lE(^DK1@Wog4=(iq&1K
> z7;1J`gewX|OE=3Z>{xM3wM)ljIQKa+635YaZ7jrOeGc+eJEnks*|jl=GEUBVQ8WhX  
> zK@ Pp1|9>GjINg;u`)Bd);9H  
> 
> literal 0
> HcmV?d1
> 
> diff --git a/tests/data/acpi/virt/riscv64/SPCR 
> b/tests/data/acpi/virt/riscv64/SPCR
> index 
> e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..4da9daf65f71a13ac2b488d4e9728f194b569a43
>  100644
> GIT binary patch
> literal 80
> zcmWFza1IJ!U|?X{>E!S15v<@85#X!<1dKp25F12;fdT`FDF9*%FmM4$c8~z`e;@#f  
> G!2kgKJqrN<
> 
> literal 0
> HcmV?d1
> 
> diff --git a/tests/qtest/bios-tables-test-allowed-diff.h 
> b/tests/qtest/bios-tables-test-allowed-diff.h
> index d8610c8d72..dfb8523c8b 100644
> --- a/tests/qtest/bios-tables-test-allowed-diff.h
> +++ b/tests/qtest/bios-tables-test-allowed-diff.h
> @@ -1,7 +1 @@
>  /* List of comma-separated changed AML files to ignore */
> -"tests/data/acpi/virt/riscv64/APIC",
> -"tests/data/acpi/virt/riscv64/DSDT",
> -"tests/data/acpi/virt/riscv64/FACP",
> -"tests/data/acpi/virt/riscv64/MCFG",
> -"tests/data/acpi/virt/riscv64/RHCT",
> -"tests/data/acpi/virt/riscv64/SPCR",




Re: [PATCH v2 11/12] tests/qtest/bios-tables-test.c: Enable basic testing for RISC-V

2024-06-19 Thread Igor Mammedov
On Fri, 24 May 2024 11:44:10 +0530
Sunil V L  wrote:

> Add basic ACPI table test case for RISC-V.
> 
> Signed-off-by: Sunil V L 
> ---
>  tests/qtest/bios-tables-test.c | 27 +++
>  1 file changed, 27 insertions(+)
> 
> diff --git a/tests/qtest/bios-tables-test.c b/tests/qtest/bios-tables-test.c
> index c73174ad00..880435a5fa 100644
> --- a/tests/qtest/bios-tables-test.c
> +++ b/tests/qtest/bios-tables-test.c
> @@ -1935,6 +1935,29 @@ static void test_acpi_microvm_acpi_erst(void)
>  }
>  #endif /* CONFIG_POSIX */
>  
> +static void test_acpi_riscv64_virt_tcg(void)
> +{
> +test_data data = {
> +.machine = "virt",
> +.arch = "riscv64",
> +.tcg_only = true,
> +.uefi_fl1 = "pc-bios/edk2-riscv-code.fd",
> +.uefi_fl2 = "pc-bios/edk2-riscv-vars.fd",
> +.ram_start = 0x8000ULL,
> +.scan_len = 128ULL * 1024 * 1024,
> +};
> +
> +/*
> + * RHCT will have ISA string encoded. To reduce the effort
> + * of updating expected AML file for any new default ISA extension,
> + * use the profile rva22s64.
> + */
> +test_acpi_one("-cpu rva22s64 -device virtio-blk-device,drive=hd0 "
> +  "-drive 
> file=tests/data/uefi-boot-images/bios-tables-test.riscv64.iso.qcow2,id=hd0",

Can you reuse test_data->cd instead of specifying the disk here?

> +  &data);
> +free_test_data();
> +}
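A rough sketch of what the suggested change might look like
(hypothetical and untested; it assumes test_data has a .cd member that
test_acpi_one() turns into a drive, as it does for other boot-image
tests):

```diff
 static void test_acpi_riscv64_virt_tcg(void)
 {
     test_data data = {
         .machine = "virt",
         .arch = "riscv64",
         .tcg_only = true,
         .uefi_fl1 = "pc-bios/edk2-riscv-code.fd",
         .uefi_fl2 = "pc-bios/edk2-riscv-vars.fd",
+        .cd = "tests/data/uefi-boot-images/bios-tables-test.riscv64.iso.qcow2",
         ...
     };
     ...
-    test_acpi_one("-cpu rva22s64 -device virtio-blk-device,drive=hd0 "
-                  "-drive file=...,id=hd0", &data);
+    test_acpi_one("-cpu rva22s64", &data);
 }
```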
> +
>  static void test_acpi_aarch64_virt_tcg(void)
>  {
>  test_data data = {
> @@ -2404,6 +2427,10 @@ int main(int argc, char *argv[])
>  qtest_add_func("acpi/virt/viot", 
> test_acpi_aarch64_virt_viot);
>  }
>  }
> +} else if (strcmp(arch, "riscv64") == 0) {
> +if (has_tcg && qtest_has_device("virtio-blk-pci")) {
> +qtest_add_func("acpi/virt", test_acpi_riscv64_virt_tcg);
> +}
>  }
>  ret = g_test_run();
>  boot_sector_cleanup(disk);




Re: [PATCH v2 10/12] tests/qtest/bios-tables-test: Add empty ACPI data files for RISC-V

2024-06-19 Thread Igor Mammedov
On Fri, 24 May 2024 11:44:09 +0530
Sunil V L  wrote:

> As per process documented (steps 1-3) in bios-tables-test.c, add empty
> AML data files for RISC-V ACPI tables and add the entries in
> bios-tables-test-allowed-diff.h.
> 
> Signed-off-by: Sunil V L 

Reviewed-by: Igor Mammedov 

> ---
>  tests/data/acpi/virt/riscv64/APIC   | 0
>  tests/data/acpi/virt/riscv64/DSDT   | 0
>  tests/data/acpi/virt/riscv64/FACP   | 0
>  tests/data/acpi/virt/riscv64/MCFG   | 0
>  tests/data/acpi/virt/riscv64/RHCT   | 0
>  tests/data/acpi/virt/riscv64/SPCR   | 0
>  tests/qtest/bios-tables-test-allowed-diff.h | 6 ++
>  7 files changed, 6 insertions(+)
>  create mode 100644 tests/data/acpi/virt/riscv64/APIC
>  create mode 100644 tests/data/acpi/virt/riscv64/DSDT
>  create mode 100644 tests/data/acpi/virt/riscv64/FACP
>  create mode 100644 tests/data/acpi/virt/riscv64/MCFG
>  create mode 100644 tests/data/acpi/virt/riscv64/RHCT
>  create mode 100644 tests/data/acpi/virt/riscv64/SPCR
> 
> diff --git a/tests/data/acpi/virt/riscv64/APIC 
> b/tests/data/acpi/virt/riscv64/APIC
> new file mode 100644
> index 00..e69de29bb2
> diff --git a/tests/data/acpi/virt/riscv64/DSDT 
> b/tests/data/acpi/virt/riscv64/DSDT
> new file mode 100644
> index 00..e69de29bb2
> diff --git a/tests/data/acpi/virt/riscv64/FACP 
> b/tests/data/acpi/virt/riscv64/FACP
> new file mode 100644
> index 00..e69de29bb2
> diff --git a/tests/data/acpi/virt/riscv64/MCFG 
> b/tests/data/acpi/virt/riscv64/MCFG
> new file mode 100644
> index 00..e69de29bb2
> diff --git a/tests/data/acpi/virt/riscv64/RHCT 
> b/tests/data/acpi/virt/riscv64/RHCT
> new file mode 100644
> index 00..e69de29bb2
> diff --git a/tests/data/acpi/virt/riscv64/SPCR 
> b/tests/data/acpi/virt/riscv64/SPCR
> new file mode 100644
> index 00..e69de29bb2
> diff --git a/tests/qtest/bios-tables-test-allowed-diff.h 
> b/tests/qtest/bios-tables-test-allowed-diff.h
> index dfb8523c8b..d8610c8d72 100644
> --- a/tests/qtest/bios-tables-test-allowed-diff.h
> +++ b/tests/qtest/bios-tables-test-allowed-diff.h
> @@ -1 +1,7 @@
>  /* List of comma-separated changed AML files to ignore */
> +"tests/data/acpi/virt/riscv64/APIC",
> +"tests/data/acpi/virt/riscv64/DSDT",
> +"tests/data/acpi/virt/riscv64/FACP",
> +"tests/data/acpi/virt/riscv64/MCFG",
> +"tests/data/acpi/virt/riscv64/RHCT",
> +"tests/data/acpi/virt/riscv64/SPCR",
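The steps above can be sketched as follows (a scratch tree is used so
the commands are self-contained; in the real checkout the paths are
relative to the QEMU source tree and the files are added with git):

```shell
# Steps 1-2 of the documented process: create empty expected blobs and
# list them in bios-tables-test-allowed-diff.h so the test tolerates
# the mismatch until real blobs are generated.
set -e
tree=$(mktemp -d)
mkdir -p "$tree/tests/data/acpi/virt/riscv64" "$tree/tests/qtest"
diff_h="$tree/tests/qtest/bios-tables-test-allowed-diff.h"

echo '/* List of comma-separated changed AML files to ignore */' > "$diff_h"
for t in APIC DSDT FACP MCFG RHCT SPCR; do
    : > "$tree/tests/data/acpi/virt/riscv64/$t"              # empty blob
    echo "\"tests/data/acpi/virt/riscv64/$t\"," >> "$diff_h" # allow diff
done
```

Step 3 is then committing the result; the real blobs are filled in
later by rebuild-expected-aml.sh (step 5).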




Re: [PATCH v2 09/12] tests/data/acpi/rebuild-expected-aml.sh: Add RISC-V

2024-06-19 Thread Igor Mammedov
On Fri, 24 May 2024 11:44:08 +0530
Sunil V L  wrote:

> Update the list of supported architectures to include RISC-V.
> 
> Signed-off-by: Sunil V L 
> Reviewed-by: Alistair Francis 

Reviewed-by: Igor Mammedov 

> ---
>  tests/data/acpi/rebuild-expected-aml.sh | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/tests/data/acpi/rebuild-expected-aml.sh 
> b/tests/data/acpi/rebuild-expected-aml.sh
> index dcf2e2f221..c1092fb8ba 100755
> --- a/tests/data/acpi/rebuild-expected-aml.sh
> +++ b/tests/data/acpi/rebuild-expected-aml.sh
> @@ -12,7 +12,7 @@
>  # This work is licensed under the terms of the GNU GPLv2.
>  # See the COPYING.LIB file in the top-level directory.
>  
> -qemu_arches="x86_64 aarch64"
> +qemu_arches="x86_64 aarch64 riscv64"
>  
>  if [ ! -e "tests/qtest/bios-tables-test" ]; then
>  echo "Test: bios-tables-test is required! Run make check before this 
> script."
> @@ -36,7 +36,8 @@ fi
>  if [ -z "$qemu_bins" ]; then
>  echo "Only the following architectures are currently supported: 
> $qemu_arches"
>  echo "None of these configured!"
> -echo "To fix, run configure --target-list=x86_64-softmmu,aarch64-softmmu"
> +echo "To fix, run configure \
> + --target-list=x86_64-softmmu,aarch64-softmmu,riscv64-softmmu"
>  exit 1;
>  fi
>  



