Re: [PATCH v6 1/2] qom: new object to associate device to numa node

2024-01-16 Thread Ankit Agrawal
>> >
>> > Given that, an alternative proposal that I think would work
>> > for you would be to add a 'placeholder' memory node definition
>> > in SRAT (so allow 0 size explicitly - might need a new SRAT
>> > entry to avoid backwards compat issues).
>>
>> Putting all the PCI/GI/... complexity aside, I'll just raise again that
>> for virtio-mem something simple like that might be helpful as well, IIUC.
>>
>>   -numa node,nodeid=2 \
>>   ...
>>   -device virtio-mem-pci,node=2,... \
>>
>> All we need is the OS to prepare for an empty node that will get
>> populated with memory later.
>>
>> So if that's what a "placeholder" node definition in srat could achieve
>> as well, even without all of the other acpi-generic-initiator stuff,
>> that would be great.
>
> Please no "placeholder" definitions in SRAT. One of the main thrusts of
> CXL is to move away from static ACPI tables describing vendor-specific
> memory topology, towards an industry standard device enumeration.

So I suppose we go with the original suggestion that aligns with the
current spec description pointed to by Jonathan, which is the following:

A separate acpi-generic-initiator object that links only one node to the
device. For each such association, a new object would be created.

A previously mentioned example from Jonathan:
  -object acpi-generic-initiator,id=gi1,pci-dev=dev1,nodeid=10
  -object acpi-generic-initiator,id=gi2,pci-dev=dev1,nodeid=11
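For a device that needs N nodes, this pattern simply repeats, one object per node. A small, purely illustrative helper to generate the per-node object arguments (the helper name and id scheme are hypothetical, not part of the patch):

```python
def expand_gi_objects(dev_id, nodes):
    """Emit one acpi-generic-initiator -object argument per NUMA node,
    following the interim 1:1 object-per-node scheme (illustrative only)."""
    return [
        f"-object acpi-generic-initiator,id=gi_{dev_id}_{n},"
        f"pci-dev={dev_id},nodeid={n}"
        for n in nodes
    ]

# e.g. eight nodes for one device, as in the MIG use case
args = expand_gi_objects("dev1", range(10, 18))
```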

> It is strictly OS policy about how many NUMA nodes it imagines it wants
> to define within that playground. The current OS policy is one node per
> "window". If a solution believes Linux should be creating more than that
> I submit that's a discussion with OS policy developers, not a trip to
> the BIOS team to please sprinkle in more placeholders. Linux can fully
> own the policy here. The painful bit is just that it never had to
> before.

Whilst I agree that a Linux kernel solution would be nice in the long term,
such a change could be quite involved and intrusive.


Re: [PATCH v6 1/2] qom: new object to associate device to numa node

2024-01-10 Thread Michael S. Tsirkin
On Wed, Jan 10, 2024 at 03:19:05PM -0800, Dan Williams wrote:
> David Hildenbrand wrote:
> > On 09.01.24 17:52, Jonathan Cameron wrote:
> > > On Thu, 4 Jan 2024 10:39:41 -0700
> > > Alex Williamson  wrote:
> > > 
> > >> On Thu, 4 Jan 2024 16:40:39 +
> > >> Ankit Agrawal  wrote:
> > >>
> > >>> Had a discussion with RH folks, summary follows:
> > >>>
> > >>> 1. To align with the current spec description pointed to by Jonathan,
> > >>>we first do a separate object instance per GI node as suggested by
> > >>>Jonathan, i.e. an acpi-generic-initiator would only link one node
> > >>>to the device. To associate a set of nodes, that number of object
> > >>>instances should be created.
> > >>> 2. In parallel, we work to get the spec updated. After the update, we
> > >>>switch to the current implementation to link a PCI device with a
> > >>>set of NUMA nodes.
> > >>>
> > >>> Alex/Jonathan, does this sound fine?
> > >>>
> > >>
> > >> Yes, as I understand Jonathan's comments, the acpi-generic-initiator
> > >> object should currently define a single device:node relationship to
> > >> match the ACPI definition.
> > > 
> > > Doesn't matter for this, but it's a many_device:single_node
> > > relationship as currently defined. We should be able to support that
> > > in any new interfaces for QEMU.
> > > 
> > >>   Separately a clarification of the spec
> > >> could be pursued that could allow us to reinstate a node list option
> > >> for the acpi-generic-initiator object.  In the interim, a user can
> > >> define multiple 1:1 objects to create the 1:N relationship that's
> > >> ultimately required here.  Thanks,
> > > 
> > > Yes, a spec clarification would work, probably needs some text
> > > to say a GI might not be an initiator as well - my worry is
> > > theoretical backwards compatibility with a (probably
> > > nonexistent) OS that assumes the N:1 mapping. So you may be in
> > > new SRAT entry territory.
> > > 
> > > Given that, an alternative proposal that I think would work
> > > for you would be to add a 'placeholder' memory node definition
> > > in SRAT (so allow 0 size explicitly - might need a new SRAT
> > > entry to avoid backwards compat issues).
> > 
> > Putting all the PCI/GI/... complexity aside, I'll just raise again that 
> > for virtio-mem something simple like that might be helpful as well, IIUC.
> > 
> > -numa node,nodeid=2 \
> > ...
> > -device virtio-mem-pci,node=2,... \
> > 
> > All we need is the OS to prepare for an empty node that will get 
> > populated with memory later.
> > 
> > So if that's what a "placeholder" node definition in srat could achieve 
> > as well, even without all of the other acpi-generic-initiator stuff, 
> > that would be great.
> 
> Please no "placeholder" definitions in SRAT. One of the main thrusts of
> CXL is to move away from static ACPI tables describing vendor-specific
> memory topology, towards an industry standard device enumeration.
> 
> Platform firmware enumerates the platform CXL "windows" (ACPI CEDT
> CFMWS) and the relative performance of the CPU access a CXL port (ACPI
> HMAT Generic Port), everything else is CXL standard enumeration.

I assume memory topology and so on still apply, right?  E.g. PMTT etc.
Just making sure.


> It is strictly OS policy about how many NUMA nodes it imagines it wants
> to define within that playground. The current OS policy is one node per
> "window". If a solution believes Linux should be creating more than that
> I submit that's a discussion with OS policy developers, not a trip to
> the BIOS team to please sprinkle in more placeholders. Linux can fully
> own the policy here. The painful bit is just that it never had to
> before.




Re: [PATCH v6 1/2] qom: new object to associate device to numa node

2024-01-09 Thread Jason Gunthorpe
On Tue, Jan 09, 2024 at 11:36:03AM -0800, Dan Williams wrote:
> Jason Gunthorpe wrote:
> > On Tue, Jan 09, 2024 at 06:02:03PM +0100, David Hildenbrand wrote:
> > > > Given that, an alternative proposal that I think would work
> > > > for you would be to add a 'placeholder' memory node definition
> > > > in SRAT (so allow 0 size explicitly - might need a new SRAT
> > > > entry to avoid backwards compat issues).
> > > 
> > > Putting all the PCI/GI/... complexity aside, I'll just raise again that
> > > for virtio-mem something simple like that might be helpful as well, IIUC.
> > > 
> > >   -numa node,nodeid=2 \
> > >   ...
> > >   -device virtio-mem-pci,node=2,... \
> > > 
> > > All we need is the OS to prepare for an empty node that will get populated
> > > with memory later.
> > 
> > That is all this is doing too; the NUMA relationship of the actual
> > memory is described already by the PCI device since it is a BAR on the
> > device.
> > 
> > The only purpose is to get the empty nodes into Linux :(
> > 
> > > So if that's what a "placeholder" node definition in srat could achieve
> > > as well, even without all of the other acpi-generic-initiator stuff,
> > > that would be great.
> > 
> > Seems like there are two quite similar use cases.. virtio-mem is going
> > to be calling the same family of kernel APIs, I suspect :)
> 
> It seems sad that we, as an industry, went through all of this trouble
> to define a dynamically enumerable CXL device model only to turn around
> and require static ACPI tables to tell us how to enumerate it.
> 
> A similar problem exists on the memory target side and the approach
> taken there was to have Linux statically reserve at least enough numa
> node numbers for all the platform CXL memory ranges (defined in the
> ACPI.CEDT.CFMWS), but with the promise to come back and broach the
> dynamic node creation problem "if the need arises".
> 
> This initiator-node enumeration case seems like that occasion where the
> need has arisen to get Linux out of the mode of needing to declare all
> possible numa nodes early in boot. Allow for nodes to be discoverable
> post NUMA-init.
> 
> One strawman scheme that comes to mind is instead of "add nodes early" in
> boot, "delete unused nodes late" in boot after the device topology has
> been enumerated. Otherwise, requiring static ACPI tables to further
> enumerate an industry-standard dynamically enumerated bus seems to be
> going in the wrong direction.

Fully agree, and I think this will get increasingly painful as we go
down the CXL road.

Jason



Re: [PATCH v6 1/2] qom: new object to associate device to numa node

2024-01-09 Thread Dan Williams
Jason Gunthorpe wrote:
> On Tue, Jan 09, 2024 at 06:02:03PM +0100, David Hildenbrand wrote:
> > > Given that, an alternative proposal that I think would work
> > > for you would be to add a 'placeholder' memory node definition
> > > in SRAT (so allow 0 size explicitly - might need a new SRAT
> > > entry to avoid backwards compat issues).
> > 
> > Putting all the PCI/GI/... complexity aside, I'll just raise again that for
> > virtio-mem something simple like that might be helpful as well, IIUC.
> > 
> > -numa node,nodeid=2 \
> > ...
> > -device virtio-mem-pci,node=2,... \
> > 
> > All we need is the OS to prepare for an empty node that will get populated
> > with memory later.
> 
> That is all this is doing too; the NUMA relationship of the actual
> memory is described already by the PCI device since it is a BAR on the
> device.
> 
> The only purpose is to get the empty nodes into Linux :(
> 
> > So if that's what a "placeholder" node definition in srat could achieve as
> > well, even without all of the other acpi-generic-initiator stuff, that would
> > be great.
> 
> Seems like there are two quite similar use cases.. virtio-mem is going
> to be calling the same family of kernel APIs, I suspect :)

It seems sad that we, as an industry, went through all of this trouble
to define a dynamically enumerable CXL device model only to turn around
and require static ACPI tables to tell us how to enumerate it.

A similar problem exists on the memory target side and the approach
taken there was to have Linux statically reserve at least enough numa
node numbers for all the platform CXL memory ranges (defined in the
ACPI.CEDT.CFMWS), but with the promise to come back and broach the
dynamic node creation problem "if the need arises".

This initiator-node enumeration case seems like that occasion where the
need has arisen to get Linux out of the mode of needing to declare all
possible numa nodes early in boot. Allow for nodes to be discoverable
post NUMA-init.

One strawman scheme that comes to mind is instead of "add nodes early" in
boot, "delete unused nodes late" in boot after the device topology has
been enumerated. Otherwise, requiring static ACPI tables to further
enumerate an industry-standard dynamically enumerated bus seems to be
going in the wrong direction.
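The "delete unused nodes late" strawman can be sketched as a late-boot pruning pass. This is purely illustrative pseudologic under the assumptions stated in the comments, not kernel code:

```python
def prune_unused_nodes(reserved, in_use):
    """Late-boot pruning pass: keep only the reserved node ids that an
    enumerated device or memory range actually bound to (illustrative)."""
    return sorted(set(reserved) & set(in_use))

# Reserve one node per ACPI CEDT.CFMWS window early in boot...
reserved = [2, 3, 4, 5]
# ...then, after CXL device enumeration, only some nodes end up used.
keep = prune_unused_nodes(reserved, {3, 5})
```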



Re: [PATCH v6 1/2] qom: new object to associate device to numa node

2024-01-09 Thread Jason Gunthorpe
On Tue, Jan 09, 2024 at 06:02:03PM +0100, David Hildenbrand wrote:
> > Given that, an alternative proposal that I think would work
> > for you would be to add a 'placeholder' memory node definition
> > in SRAT (so allow 0 size explicitly - might need a new SRAT
> > entry to avoid backwards compat issues).
> 
> Putting all the PCI/GI/... complexity aside, I'll just raise again that for
> virtio-mem something simple like that might be helpful as well, IIUC.
> 
>   -numa node,nodeid=2 \
>   ...
>   -device virtio-mem-pci,node=2,... \
> 
> All we need is the OS to prepare for an empty node that will get populated
> with memory later.

That is all this is doing too; the NUMA relationship of the actual
memory is described already by the PCI device since it is a BAR on the
device.

The only purpose is to get the empty nodes into Linux :(

> So if that's what a "placeholder" node definition in srat could achieve as
> well, even without all of the other acpi-generic-initiator stuff, that would
> be great.

Seems like there are two quite similar use cases.. virtio-mem is going
to be calling the same family of kernel APIs, I suspect :)

Jason



Re: [PATCH v6 1/2] qom: new object to associate device to numa node

2024-01-09 Thread David Hildenbrand

On 09.01.24 17:52, Jonathan Cameron wrote:
> On Thu, 4 Jan 2024 10:39:41 -0700
> Alex Williamson  wrote:
>
>> On Thu, 4 Jan 2024 16:40:39 +
>> Ankit Agrawal  wrote:
>>
>>> Had a discussion with RH folks, summary follows:
>>>
>>> 1. To align with the current spec description pointed to by Jonathan,
>>>we first do a separate object instance per GI node as suggested by
>>>Jonathan, i.e. an acpi-generic-initiator would only link one node
>>>to the device. To associate a set of nodes, that number of object
>>>instances should be created.
>>> 2. In parallel, we work to get the spec updated. After the update, we
>>>switch to the current implementation to link a PCI device with a
>>>set of NUMA nodes.
>>>
>>> Alex/Jonathan, does this sound fine?
>>
>> Yes, as I understand Jonathan's comments, the acpi-generic-initiator
>> object should currently define a single device:node relationship to
>> match the ACPI definition.
>
> Doesn't matter for this, but it's a many_device:single_node
> relationship as currently defined. We should be able to support that
> in any new interfaces for QEMU.
>
>> Separately a clarification of the spec could be pursued that could
>> allow us to reinstate a node list option for the acpi-generic-initiator
>> object.  In the interim, a user can define multiple 1:1 objects to
>> create the 1:N relationship that's ultimately required here.  Thanks,
>
> Yes, a spec clarification would work, probably needs some text
> to say a GI might not be an initiator as well - my worry is
> theoretical backwards compatibility with a (probably
> nonexistent) OS that assumes the N:1 mapping. So you may be in
> new SRAT entry territory.
>
> Given that, an alternative proposal that I think would work
> for you would be to add a 'placeholder' memory node definition
> in SRAT (so allow 0 size explicitly - might need a new SRAT
> entry to avoid backwards compat issues).

Putting all the PCI/GI/... complexity aside, I'll just raise again that
for virtio-mem something simple like that might be helpful as well, IIUC.

  -numa node,nodeid=2 \
  ...
  -device virtio-mem-pci,node=2,... \

All we need is the OS to prepare for an empty node that will get
populated with memory later.

So if that's what a "placeholder" node definition in srat could achieve
as well, even without all of the other acpi-generic-initiator stuff,
that would be great.


--
Cheers,

David / dhildenb




Re: [PATCH v6 1/2] qom: new object to associate device to numa node

2024-01-09 Thread Jonathan Cameron via
On Thu, 4 Jan 2024 10:39:41 -0700
Alex Williamson  wrote:

> On Thu, 4 Jan 2024 16:40:39 +
> Ankit Agrawal  wrote:
> 
> > Had a discussion with RH folks, summary follows:
> > 
> > 1. To align with the current spec description pointed to by Jonathan,
> >we first do a separate object instance per GI node as suggested by
> >Jonathan, i.e. an acpi-generic-initiator would only link one node
> >to the device. To associate a set of nodes, that number of object
> >instances should be created.
> > 2. In parallel, we work to get the spec updated. After the update, we
> >switch to the current implementation to link a PCI device with a
> >set of NUMA nodes.
> > 
> > Alex/Jonathan, does this sound fine?
> >   
> 
> Yes, as I understand Jonathan's comments, the acpi-generic-initiator
> object should currently define a single device:node relationship to
> match the ACPI definition.

Doesn't matter for this, but it's a many_device:single_node
relationship as currently defined. We should be able to support that
in any new interfaces for QEMU.

>  Separately a clarification of the spec
> could be pursued that could allow us to reinstate a node list option
> for the acpi-generic-initiator object.  In the interim, a user can
> define multiple 1:1 objects to create the 1:N relationship that's
> ultimately required here.  Thanks,

Yes, a spec clarification would work, probably needs some text
to say a GI might not be an initiator as well - my worry is
theoretical backwards compatibility with a (probably
nonexistent) OS that assumes the N:1 mapping. So you may be in 
new SRAT entry territory.

Given that, an alternative proposal that I think would work
for you would be to add a 'placeholder' memory node definition
in SRAT (so allow 0 size explicitly - might need a new SRAT
entry to avoid backwards compat issues). Then put the GPU
initiator part in a GI node and use the HMAT Memory Proximity
Domain Attributes magic linkage entry "Proximity Domain for
the Attached Initiator" to associate the placeholder memory
nodes with the GI / GPU.

I'd go to ASWG with a big diagram and ask 'how do I do this!'

If you do it code first I'm happy to help out with refining
the proposal. I just don't like the time of ASWG calls so tend
to not make them in person.

Or just emulate UEFI's CDAT (from CXL, but not CXL specific)
from your GPU and make it a driver problem ;)

Jonathan


> 
> Alex
> 




Re: [PATCH v6 1/2] qom: new object to associate device to numa node

2024-01-09 Thread Jonathan Cameron via
On Thu, 4 Jan 2024 03:36:06 +
Ankit Agrawal  wrote:

> Thanks Jonathan for the review.
> 
> > As per reply to the cover letter I definitely want to see SRAT table
> > dumps in here though so we can easily see what this is actually building.
> 
> Ack.
> 
> > I worry that some OS might make the assumption that it's one GI node
> > per PCI device though. The language in the ACPI specification is:
> > 
> > "The Generic Initiator Affinity Structure provides the association
> > between _a_ generic initiator and _the_ proximity domain to which the
> > initiator belongs".
> > 
> > The use of _a_ and _the_ in there makes it pretty explicitly an N:1
> > relationship (multiple devices can be in the same proximity domain, but
> > a device may only be in one). To avoid that confusion you will need an
> > ACPI spec change.  I'd be happy to support
> 
> Yeah, that's a good point. It won't hurt to make the spec change to allow
> the possibility of associating a device with multiple domains.
> 
> > The reason you can get away with this in Linux today is that I only
> > implemented very minimal support for GIs, with the mappings being
> > provided the other way around (_PXM in a PCIe node in DSDT).  If we
> > finish that support off I'd assume
> 
> Not sure if I understand this. Can you provide a reference to this
> DSDT-related change?

You need to add the PCI tree down to the device, which is a bit fiddly if
there are switches etc. I'm also not sure I ever followed up on getting the
PCI fix in after we finally dealt with the issue this triggered on old AMD
boxes (they had devices that claimed to be in nonexistent proximity domains
:( -- later at least one path to hit that was closed down; I'm not sure all
of them were).

Anyhow, the fix for PCI includes an example where the EP has a different PXM
to the root bridge.  In this example 0x02 is the GI node.

https://lore.kernel.org/all/20180912152140.3676-2-jonathan.came...@huawei.com/

>   Device (PCI2)
>   {
> Name (_HID, "PNP0A08") // PCI Express Root Bridge
> Name (_CID, "PNP0A03") // Compatible PCI Root Bridge
> Name(_SEG, 2) // Segment of this Root complex
> Name(_BBN, 0xF8) // Base Bus Number
> Name(_CCA, 1)
> Method (_PXM, 0, NotSerialized) {
>   Return(0x00)
> }
> 
> ...
> Device (BRI0) {
>   Name (_HID, "19E51610")
>   Name (_ADR, 0)
>   Name (_BBN, 0xF9)
>   Device (CAR0) {
> Name (_HID, "97109912")
> Name (_ADR, 0)
> Method (_PXM, 0, NotSerialized) {
>   Return(0x02)
> }
>   }
> }
>   }

Without that PCI fix, you'll only see correct GI mappings in Linux
for platform devices.

Sorry for the slow reply - I missed the rest of this thread until I was
brandishing it as an argument in another discussion on GIs and noticed
it had carried on without me.

Jonathan





Re: [PATCH v6 1/2] qom: new object to associate device to numa node

2024-01-08 Thread Markus Armbruster
Ankit Agrawal  writes:

>>> +##
>>> +# @AcpiGenericInitiatorProperties:
>>> +#
>>> +# Properties for acpi-generic-initiator objects.
>>> +#
>>> +# @pci-dev: PCI device ID to be associated with the node
>>> +#
>>> +# @host-nodes: numa node list associated with the PCI device.
>>
>> NUMA
>>
>> Suggest "list of NUMA nodes associated with ..."
>
> Ack, will make the change.
>
>>> @@ -981,6 +997,7 @@
>>>  'id': 'str' },
>>>    'discriminator': 'qom-type',
>>>    'data': {
>>> +  'acpi-generic-initiator': 'AcpiGenericInitiatorProperties',
>>>    'authz-list': 'AuthZListProperties',
>>>    'authz-listfile': 'AuthZListFileProperties',
>>>    'authz-pam':  'AuthZPAMProperties',
>>
>> I'm holding my Acked-by until the interface design issues raised by
>> Jason have been resolved.
>
> I suppose you meant Jonathan here?

Yes.  Going too fast.  My apologies!


Re: [PATCH v6 1/2] qom: new object to associate device to numa node

2024-01-08 Thread Ankit Agrawal

>> > However, I'll leave it up to those more familiar with the QEMU numa
>> > control interface design to comment on whether this approach is preferable
>> > to making the gi part of the numa node entry or doing it like hmat.
>>
>> > -numa srat-gi,node-id=10,gi-pci-dev=dev1
>>
>> The current way of acpi-generic-initiator object usage came out of the
>> discussion on v1 to essentially link all the device NUMA nodes to the
>> device.
>> (https://lore.kernel.org/all/20230926131427.1e441670.alex.william...@redhat.com/)
>>
>> Can Alex or David comment on which is preferable (the current mechanism
>> vs the 1:1 mapping per object as suggested by Jonathan)?
>
> I imagine there are ways that either could work, but specifying a
> gi-pci-dev in the numa node declaration appears to get a bit messy if we
> have multiple gi-pci-dev devices to associate to the node whereas
> creating an acpi-generic-initiator object per individual device:node
> relationship feels a bit easier to iterate.
>
> Also if we do extend the ACPI spec to more explicitly allow a device to
> associate to multiple nodes, we could re-instate the list behavior of
> the acpi-generic-initiator whereas I don't see a representation of the
> association at the numa object that makes sense.  Thanks,

Ack, making the change to create an individual acpi-generic-initiator object
per device:node.




Re: [PATCH v6 1/2] qom: new object to associate device to numa node

2024-01-08 Thread Ankit Agrawal

>> +##
>> +# @AcpiGenericInitiatorProperties:
>> +#
>> +# Properties for acpi-generic-initiator objects.
>> +#
>> +# @pci-dev: PCI device ID to be associated with the node
>> +#
>> +# @host-nodes: numa node list associated with the PCI device.
>
> NUMA
>
> Suggest "list of NUMA nodes associated with ..."

Ack, will make the change.

>> @@ -981,6 +997,7 @@
>>  'id': 'str' },
>>    'discriminator': 'qom-type',
>>    'data': {
>> +  'acpi-generic-initiator': 'AcpiGenericInitiatorProperties',
>>    'authz-list': 'AuthZListProperties',
>>    'authz-listfile': 'AuthZListFileProperties',
>>    'authz-pam':  'AuthZPAMProperties',
>
> I'm holding my Acked-by until the interface design issues raised by
> Jason have been resolved.

I suppose you meant Jonathan here?


Re: [PATCH v6 1/2] qom: new object to associate device to numa node

2024-01-08 Thread Markus Armbruster
 writes:

> From: Ankit Agrawal 
>
> NVIDIA GPUs support the MIG (Multi-Instance GPU) feature [1], which allows
> partitioning of the GPU device resources (including device memory) into
> several (up to 8) isolated instances. Each of the partitioned memories needs
> a dedicated NUMA node to operate. The partitions are not fixed and can be
> created/deleted at runtime.
>
> Unfortunately the Linux OS does not provide a means to dynamically
> create/destroy NUMA nodes, and implementing such a feature is not expected
> to be trivial. The nodes that the OS discovers at boot time while parsing
> SRAT remain fixed. So we utilize the Generic Initiator Affinity structures,
> which allow an association between nodes and devices. Multiple GI structures
> per BDF are possible, allowing creation of multiple nodes by exposing a
> unique PXM in each of these structures.
>
> Introduce a new acpi-generic-initiator object to let the host admin provide
> the device and the corresponding NUMA nodes. QEMU maintains this association
> and uses this object to build the requisite GI Affinity Structure. On a
> multi-device system, each device supporting the feature needs a unique
> acpi-generic-initiator object with its own set of NUMA nodes associated
> with it.
>
> An admin can provide the range of nodes through a uint16 array host-nodes
> and link it to a device by providing its id. Currently, only PCI devices
> are supported. The following sample creates 8 nodes per PCI device for a
> VM with 2 PCI devices and links them to the respective PCI device using
> acpi-generic-initiator objects:
>
> -numa node,nodeid=2 -numa node,nodeid=3 -numa node,nodeid=4 \
> -numa node,nodeid=5 -numa node,nodeid=6 -numa node,nodeid=7 \
> -numa node,nodeid=8 -numa node,nodeid=9 \
> -device vfio-pci-nohotplug,host=0009:01:00.0,bus=pcie.0,addr=04.0,rombar=0,id=dev0 \
> -object acpi-generic-initiator,id=gi0,pci-dev=dev0,host-nodes=2-9 \
>
> -numa node,nodeid=10 -numa node,nodeid=11 -numa node,nodeid=12 \
> -numa node,nodeid=13 -numa node,nodeid=14 -numa node,nodeid=15 \
> -numa node,nodeid=16 -numa node,nodeid=17 \
> -device vfio-pci-nohotplug,host=0009:01:01.0,bus=pcie.0,addr=05.0,rombar=0,id=dev1 \
> -object acpi-generic-initiator,id=gi1,pci-dev=dev1,host-nodes=10-17 \
>
> [1] https://www.nvidia.com/en-in/technologies/multi-instance-gpu
>
> Signed-off-by: Ankit Agrawal 

Appreciate the improved commit message.

[...]

> diff --git a/qapi/qom.json b/qapi/qom.json
> index c53ef978ff..7b33d4a53c 100644
> --- a/qapi/qom.json
> +++ b/qapi/qom.json
> @@ -794,6 +794,21 @@
>  { 'struct': 'VfioUserServerProperties',
>'data': { 'socket': 'SocketAddress', 'device': 'str' } }
>  
> +##
> +# @AcpiGenericInitiatorProperties:
> +#
> +# Properties for acpi-generic-initiator objects.
> +#
> +# @pci-dev: PCI device ID to be associated with the node
> +#
> +# @host-nodes: numa node list associated with the PCI device.

NUMA

Suggest "list of NUMA nodes associated with ..."

> +#
> +# Since: 9.0
> +##
> +{ 'struct': 'AcpiGenericInitiatorProperties',
> +  'data': { 'pci-dev': 'str',
> +'host-nodes': ['uint16'] } }
> +
>  ##
>  # @RngProperties:
>  #
> @@ -911,6 +926,7 @@
>  ##
>  { 'enum': 'ObjectType',
>'data': [
> +'acpi-generic-initiator',
>  'authz-list',
>  'authz-listfile',
>  'authz-pam',
> @@ -981,6 +997,7 @@
>  'id': 'str' },
>'discriminator': 'qom-type',
>'data': {
> +  'acpi-generic-initiator': 'AcpiGenericInitiatorProperties',
>'authz-list': 'AuthZListProperties',
>'authz-listfile': 'AuthZListFileProperties',
>'authz-pam':  'AuthZPAMProperties',

I'm holding my Acked-by until the interface design issues raised by
Jason have been resolved.




Re: [PATCH v6 1/2] qom: new object to associate device to numa node

2024-01-04 Thread Alex Williamson
On Thu, 4 Jan 2024 16:40:39 +
Ankit Agrawal  wrote:

> Had a discussion with RH folks, summary follows:
> 
> 1. To align with the current spec description pointed to by Jonathan, we
>first do a separate object instance per GI node as suggested by
>Jonathan, i.e. an acpi-generic-initiator would only link one node to
>the device. To associate a set of nodes, that number of object
>instances should be created.
> 2. In parallel, we work to get the spec updated. After the update, we
>switch to the current implementation to link a PCI device with a set
>of NUMA nodes.
> 
> Alex/Jonathan, does this sound fine?
> 

Yes, as I understand Jonathan's comments, the acpi-generic-initiator
object should currently define a single device:node relationship to
match the ACPI definition.  Separately a clarification of the spec
could be pursued that could allow us to reinstate a node list option
for the acpi-generic-initiator object.  In the interim, a user can
define multiple 1:1 objects to create the 1:N relationship that's
ultimately required here.  Thanks,

Alex




Re: [PATCH v6 1/2] qom: new object to associate device to numa node

2024-01-04 Thread Alex Williamson
On Thu, 4 Jan 2024 03:36:06 +
Ankit Agrawal  wrote:

> Thanks Jonathan for the review.
> 
> > As per reply to the cover letter I definitely want to see SRAT table
> > dumps in here though so we can easily see what this is actually building.
> 
> Ack.
> 
> > I worry that some OS might make the assumption that it's one GI node
> > per PCI device though. The language in the ACPI specification is:
> > 
> > "The Generic Initiator Affinity Structure provides the association
> > between _a_ generic initiator and _the_ proximity domain to which the
> > initiator belongs".
> > 
> > The use of _a_ and _the_ in there makes it pretty explicitly an N:1
> > relationship (multiple devices can be in the same proximity domain, but
> > a device may only be in one). To avoid that confusion you will need an
> > ACPI spec change.  I'd be happy to support
> 
> Yeah, that's a good point. It won't hurt to make the spec change to allow
> the possibility of associating a device with multiple domains.
> 
> > The reason you can get away with this in Linux today is that I only
> > implemented very minimal support for GIs, with the mappings being
> > provided the other way around (_PXM in a PCIe node in DSDT).  If we
> > finish that support off I'd assume
> 
> Not sure if I understand this. Can you provide a reference to this
> DSDT-related change?
> 
> > Also, this effectively creates a bunch of separate generic initiator nodes
> > and lumping that under one object seems to imply they are in general 
> > connected
> > to each other.
> > 
> > I'd be happier with a separate instance per GI node
> > 
> >  -object acpi-generic-initiator,id=gi1,pci-dev=dev1,nodeid=10
> >  -object acpi-generic-initiator,id=gi2,pci-dev=dev1,nodeid=11
> > etc with the proviso that anyone using this on a system that assumes a one
> > to one mapping for PCI
> >
> > However, I'll leave it up to those more familiar with the QEMU numa
> > control interface design to comment on whether this approach is preferable
> > to making the gi part of the numa node entry or doing it like hmat.  
> 
> > -numa srat-gi,node-id=10,gi-pci-dev=dev1  
> 
> The current way of acpi-generic-initiator object usage came out of the
> discussion on v1 to essentially link all the device NUMA nodes to the
> device.
> (https://lore.kernel.org/all/20230926131427.1e441670.alex.william...@redhat.com/)
> 
> Can Alex or David comment on which is preferable (the current mechanism
> vs the 1:1 mapping per object as suggested by Jonathan)?

I imagine there are ways that either could work, but specifying a
gi-pci-dev in the numa node declaration appears to get a bit messy if we
have multiple gi-pci-dev devices to associate to the node whereas
creating an acpi-generic-initiator object per individual device:node
relationship feels a bit easier to iterate.

Also if we do extend the ACPI spec to more explicitly allow a device to
associate to multiple nodes, we could re-instate the list behavior of
the acpi-generic-initiator whereas I don't see a representation of the
association at the numa object that makes sense.  Thanks,

Alex




Re: [PATCH v6 1/2] qom: new object to associate device to numa node

2024-01-04 Thread Ankit Agrawal
Had a discussion with RH folks; summary follows:

1. To align with the current spec description pointed out by Jonathan, we first
 do a separate object instance per GI node as suggested by Jonathan, i.e.
 an acpi-generic-initiator would link only one node to the device. To
 associate a set of nodes, that many object instances should be
 created.
2. In parallel, we work to get the spec updated. After the update, we switch
to the current implementation to link a PCI device with a set of NUMA
nodes.

Alex/Jonathan, does this sound fine?
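For clarity, the per-node form agreed in point 1 can be generated mechanically
from a node range. A small illustrative sketch (the helper name is made up and
this is not QEMU code; it just expands a range into the per-node -object
arguments from Jonathan's example):

```python
def expand_gi_objects(pci_dev: str, nodes: range, first_id: int = 0) -> list[str]:
    """Return one -object argument per device:node association."""
    return [
        f"acpi-generic-initiator,id=gi{first_id + i},pci-dev={pci_dev},nodeid={node}"
        for i, node in enumerate(nodes)
    ]

# Two nodes for dev1, matching the example earlier in the thread:
args = expand_gi_objects("dev1", range(10, 12), first_id=1)
for a in args:
    print("-object", a)
# -object acpi-generic-initiator,id=gi1,pci-dev=dev1,nodeid=10
# -object acpi-generic-initiator,id=gi2,pci-dev=dev1,nodeid=11
```

The point being that the per-node form costs nothing but verbosity on the
command line, which management tooling can trivially generate.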


Re: [PATCH v6 1/2] qom: new object to associate device to numa node

2024-01-04 Thread Ankit Agrawal
>> However, I'll leave it up to those more familiar with the QEMU numa
>> control interface design to comment on whether this approach is preferable
>> to making the gi part of the numa node entry or doing it like hmat.
>> -numa srat-gi,node-id=10,gi-pci-dev=dev1
>
> The current way of acpi-generic-initiator object usage came out of the 
> discussion
> on v1 to essentially link all the device NUMA nodes to the device.
> (https://lore.kernel.org/all/20230926131427.1e441670.alex.william...@redhat.com/)

> Can Alex or David comment on which is preferable (the current mechanism vs 1:1
> mapping per object as suggested by Jonathan)?

Just to add, IMO a single QEMU object to tie the nodes to the device is
better, as the nodes are kind of a pool. Having several objects may be
overkill?

Plus this is a QEMU object; eventually we populate one SRAT GI structure to
expose each PXM-to-device link.
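For reference, each such link lands in SRAT as a 32-byte Generic Initiator
Affinity Structure (type 5, introduced in ACPI 6.3). A minimal packing sketch,
assuming the field offsets as given in the spec; this is an illustration, not
QEMU's actual table-building code:

```python
import struct

GI_AFFINITY_TYPE = 5     # SRAT Generic Initiator Affinity Structure
DEVICE_HANDLE_PCI = 1    # Device Handle Type: PCI
GI_ENABLED = 1           # Flags bit 0: Enabled

def build_gi_affinity(segment: int, bus: int, devfn: int, pxm: int) -> bytes:
    """Pack one 32-byte GI affinity entry linking a PCI BDF to a proximity domain."""
    # PCI device handle: segment (2 bytes), BDF (2 bytes), 12 reserved bytes
    handle = struct.pack("<HH12x", segment, (bus << 8) | devfn)
    return struct.pack(
        "<BBxBI16sI4x",
        GI_AFFINITY_TYPE,    # Type
        32,                  # Length
        DEVICE_HANDLE_PCI,   # Device Handle Type (byte 2 is reserved)
        pxm,                 # Proximity Domain
        handle,              # Device Handle
        GI_ENABLED,          # Flags
    )

# One device, two proximity domains -- one structure per link:
entries = [build_gi_affinity(0x9, 0x01, 0x00, pxm) for pxm in (10, 11)]
assert all(len(e) == 32 for e in entries)
```

Whichever command-line form wins, the table output is the same: N objects (or
one object with N nodes) end up as N such entries sharing a BDF but carrying
distinct proximity domains.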


Re: [PATCH v6 1/2] qom: new object to associate device to numa node

2024-01-03 Thread Ankit Agrawal
Thanks Jonathan for the review.

> As per reply to the cover letter I definitely want to see SRAT table dumps
> in here though so we can easily see what this is actually building.

Ack.

> I worry that some OS might make the assumption that it's one GI node
> per PCI device though. The language in the ACPI specification is:
> 
> "The Generic Initiator Affinity Structure provides the association between _a_
> generic initiator and _the_ proximity domain to which the initiator belongs".
> 
> The use of _a_ and _the_ in there makes it pretty explicitly a N:1 
> relationship
> (multiple devices can be in same proximity domain, but a device may only be 
> in one).
> To avoid that confusion you will need an ACPI spec change.  I'd be happy to
> support

Yeah, that's a good point. It won't hurt to make the spec change to explicitly
allow the association of a device with multiple proximity domains.

> The reason you can get away with this in Linux today is that I only 
> implemented
> a very minimal support for GIs with the mappings being provided the other way
> around (_PXM in a PCIe node in DSDT).  If we finish that support off I'd 
> assume

Not sure if I understand this. Can you provide a reference to this DSDT-related
change?

> Also, this effectively creates a bunch of separate generic initiator nodes
> and lumping that under one object seems to imply they are in general connected
> to each other.
> 
> I'd be happier with a separate instance per GI node
> 
>  -object acpi-generic-initiator,id=gi1,pci-dev=dev1,nodeid=10
>  -object acpi-generic-initiator,id=gi2,pci-dev=dev1,nodeid=11
> etc with the proviso that anyone using this on a system that assumes a one
> to one mapping for PCI
>
> However, I'll leave it up to those more familiar with the QEMU numa
> control interface design to comment on whether this approach is preferable
> to making the gi part of the numa node entry or doing it like hmat.

> -numa srat-gi,node-id=10,gi-pci-dev=dev1

The current way of acpi-generic-initiator object usage came out of the 
discussion
on v1 to essentially link all the device NUMA nodes to the device.
(https://lore.kernel.org/all/20230926131427.1e441670.alex.william...@redhat.com/)

Can Alex or David comment on which is preferable (the current mechanism vs 1:1
mapping per object as suggested by Jonathan)?



Re: [PATCH v6 1/2] qom: new object to associate device to numa node

2024-01-02 Thread Jonathan Cameron via
On Mon, 25 Dec 2023 10:26:02 +0530
 wrote:

> From: Ankit Agrawal 
> 
> NVIDIA GPUs support the MIG (Multi-Instance GPU) feature [1], which allows
> partitioning of the GPU device resources (including device memory) into
> several (up to 8) isolated instances. Each of the partitioned memory regions
> needs a dedicated NUMA node to operate. The partitions are not fixed and they
> can be created/deleted at runtime.
> 
> Unfortunately the Linux OS does not provide a means to dynamically
> create/destroy NUMA nodes, and implementing such a feature is not expected to
> be trivial. The nodes that the OS discovers at boot time while parsing SRAT
> remain fixed. So we utilize the Generic Initiator Affinity structures, which
> allow association between nodes and devices. Multiple GI structures per BDF
> are possible, allowing creation of multiple nodes by exposing a unique PXM in
> each of these structures.
> 
> Introduce a new acpi-generic-initiator object to allow the host admin to
> provide the device and the corresponding NUMA nodes. QEMU maintains this
> association and uses this object to build the requisite GI Affinity
> Structures. On a multi-device system, each device supporting the feature
> needs a unique acpi-generic-initiator object with its own set of NUMA nodes
> associated with it.
> 
> An admin can provide the range of nodes through a uint16 array host-nodes
> and link it to a device by providing its id. Currently, only PCI devices are
> supported. The following sample creates 8 nodes per PCI device for a VM
> with 2 PCI devices and links them to the respective PCI device using
> acpi-generic-initiator objects:
> 
> -numa node,nodeid=2 -numa node,nodeid=3 -numa node,nodeid=4 \
> -numa node,nodeid=5 -numa node,nodeid=6 -numa node,nodeid=7 \
> -numa node,nodeid=8 -numa node,nodeid=9 \
> -device 
> vfio-pci-nohotplug,host=0009:01:00.0,bus=pcie.0,addr=04.0,rombar=0,id=dev0 \
> -object acpi-generic-initiator,id=gi0,pci-dev=dev0,host-nodes=2-9 \
> 
> -numa node,nodeid=10 -numa node,nodeid=11 -numa node,nodeid=12 \
> -numa node,nodeid=13 -numa node,nodeid=14 -numa node,nodeid=15 \
> -numa node,nodeid=16 -numa node,nodeid=17 \
> -device 
> vfio-pci-nohotplug,host=0009:01:01.0,bus=pcie.0,addr=05.0,rombar=0,id=dev1 \
> -object acpi-generic-initiator,id=gi1,pci-dev=dev1,host-nodes=10-17 \

Hi Ankit,

Whilst I'm still not particularly keen on this use of GI nodes, the
infrastructure is now generic enough that it covers more normal use cases
so I'm almost fine with it going into QEMU. If you want to use it for unusual
things that's up to you ;)  Note that the following is about QEMU allowing
you to potentially shoot yourself in the foot rather than necessarily saying
the interface shouldn't allow a PCI dev to map to multiple GI nodes.

As per reply to the cover letter I definitely want to see SRAT table dumps
in here though so we can easily see what this is actually building.

I worry that some OS might make the assumption that it's one GI node
per PCI device though. The language in the ACPI specification is:

"The Generic Initiator Affinity Structure provides the association between _a_
generic initiator and _the_ proximity domain to which the initiator belongs".

The use of _a_ and _the_ in there makes it pretty explicitly a N:1 relationship
(multiple devices can be in same proximity domain, but a device may only be in 
one).
To avoid that confusion you will need an ACPI spec change.  I'd be happy to
support 

The reason you can get away with this in Linux today is that I only implemented
very minimal support for GIs, with the mappings being provided the other way
around (_PXM in a PCIe node in DSDT).  If we finish that support off, I'd assume
the multiple mappings here will result in a firmware bug warning in at least
some cases.  Note the reason support for the mapping the other way isn't yet
in Linux is that we never resolved the mess that a PCI re-enumeration would
cause (it requires a pre-enumeration pass of what is configured by firmware,
and caching of the paths that let you access all the PCIe devices, so that we
can reconstruct the mapping post enumeration).

Also, this effectively creates a bunch of separate generic initiator nodes
and lumping that under one object seems to imply they are in general connected
to each other.

I'd be happier with a separate instance per GI node

  -object acpi-generic-initiator,id=gi1,pci-dev=dev1,nodeid=10
  -object acpi-generic-initiator,id=gi2,pci-dev=dev1,nodeid=11
etc with the proviso that anyone using this on a system that assumes a one
to one mapping for PCI

However, I'll leave it up to those more familiar with the QEMU numa
control interface design to comment on whether this approach is preferable
to making the gi part of the numa node entry or doing it like hmat.

-numa srat-gi,node-id=10,gi-pci-dev=dev1

etc

> 
> [1] https://www.nvidia.com/en-in/technologies/multi-instance-gpu
> 
> Signed-off-by: Ankit Agrawal 
> ---
>  hw/acpi/acpi-generic-initiator.c | 70 
>