On Wed, Jul 17, 2024 at 04:02:58PM +0100, Jonathan Cameron wrote:
> On Mon, 15 Jul 2024 16:48:41 +0200
> Igor Mammedov <imamm...@redhat.com> wrote:
> 
> > On Fri, 12 Jul 2024 12:08:14 +0100
> > Jonathan Cameron <jonathan.came...@huawei.com> wrote:
> > 
> > > These are very similar to the recently added Generic Initiators
> > > but instead of representing an initiator of memory traffic they
> > > represent an edge point beyond which may lie either targets or
> > > initiators.  Here we add these ports such that they may
> > > be targets of hmat_lb records to describe the latency and
> > > bandwidth from host side initiators to the port.  A discoverable
> > > mechanism such as UEFI CDAT read from CXL devices and switches
> > > is used to discover the remainder of the path, and the OS can build
> > > up full latency and bandwidth numbers as needed for workload and
> > > data placement decisions.
> > > 
> > > Acked-by: Markus Armbruster <arm...@redhat.com>
> > > Tested-by: "Huang, Ying" <ying.hu...@intel.com>
> > > Signed-off-by: Jonathan Cameron <jonathan.came...@huawei.com>  
> > 
> > ACPI tables generation LGTM
> > As for the rest my review is perfunctory mostly.
> 
> The node type points and the missing descriptor apply equally to generic
> initiators.  I'll add a couple of patches cleaning that up as well as
> fixing them up for generic ports.
> 
> For the exit(1), that was copying other similar locations.  I don't
> mind changing it, though, if something else is preferred.
> 
> Given tight timescales (and I was away for a few days which didn't
> help), I'll send out a v6 with changes as below.
> 
> Jonathan
> 

I'm working on a pull and going offline for a week, guys; what's not in
by then will be in the next release.  Sorry.

> > 
> > > ---
> > > v5: Push the definition of TYPE_ACPI_GENERIC_PORT down into the
> > >     c file (similar to TYPE_ACPI_GENERIC_INITIATOR in earlier patch)
> > > ---
> > >  qapi/qom.json                       |  34 +++++++++
> > >  include/hw/acpi/aml-build.h         |   4 +
> > >  include/hw/acpi/pci.h               |   2 +-
> > >  include/hw/pci/pci_bridge.h         |   1 +
> > >  hw/acpi/aml-build.c                 |  40 ++++++++++
> > >  hw/acpi/pci.c                       | 112 +++++++++++++++++++++++++++-
> > >  hw/arm/virt-acpi-build.c            |   2 +-
> > >  hw/i386/acpi-build.c                |   2 +-
> > >  hw/pci-bridge/pci_expander_bridge.c |   1 -
> > >  9 files changed, 193 insertions(+), 5 deletions(-)
> > > 
> > > diff --git a/qapi/qom.json b/qapi/qom.json
> > > index 8e75a419c3..b97c031b73 100644
> > > --- a/qapi/qom.json
> > > +++ b/qapi/qom.json
> > > @@ -838,6 +838,38 @@
> > >    'data': { 'pci-dev': 'str',
> > >              'node': 'uint32' } }
> > >  
> > > +##
> > > +# @AcpiGenericPortProperties:
> > > +#
> > > +# Properties for acpi-generic-port objects.
> > > +#
> > > +# @pci-bus: QOM path of the PCI bus of the host bridge associated with
> > > +#     this SRAT Generic Port Affinity Structure.  This is the same as
> > > +#     the bus parameter for the root ports attached to this host
> > > +#     bridge.  The resulting SRAT Generic Port Affinity Structure will
> > > +#     refer to the ACPI object in DSDT that represents the host bridge
> > > +#     (e.g.  ACPI0016 for CXL host bridges).  See ACPI 6.5 Section
> > > +#     5.2.16.7 for more information.
> > > +#  
> > 
> > > +# @node: Similar to a NUMA node ID, but instead of providing a
> > > +#     reference point used for defining NUMA distances and access
> > > +#     characteristics to memory or from an initiator (e.g. CPU), this
> > > +#     node defines the boundary point between non-discoverable system
> > > +#     buses which must be described by firmware, and a discoverable
> > > +#     bus.  NUMA distances and access characteristics are defined to
> > > +#     and from that point.  For system software to establish full
> > > +#     initiator to target characteristics this information must be
> > > +#     combined with information retrieved from the discoverable part
> > > +#     of the path.  An example would use CDAT (see UEFI.org)
> > > +#     information read from devices and switches in conjunction with
> > > +#     link characteristics read from PCIe Configuration space.  
> > 
> > you lost me here (even reading this several times doesn't help).
> > Perhaps I lack specific domain knowledge, but is there a way to make it
> > more comprehensible for a layman?
> 
> This is far from the first version (which Markus really didn't like ;)
> It is really easy to draw as a sequence of diagrams and really tricky
> to put in text!  Not so easy to get the kernel code right either
> as it turns out but that's another story.
> 
> Perhaps if I add something to the end to say what you do with it
> that might help?
> 
> "To get the full path latency, from CPU to CXL attached DRAM on a type 3
>  CXL device:  Add the latency from CPU to Generic Port (from HMAT indexed
>  via the node ID in this SRAT structure) to that for CXL bus links, the
>  latency across intermediate switches and from the EP port to the
>  actual memory.  Bandwidth is more complex as there may be interleaving
>  across multiple devices and shared links in the path."
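[Editor's note: the composition described above can be sketched as simple arithmetic.  This is a minimal illustration with made-up numbers; the function name and all latency values are invented for the example, not taken from any real HMAT, CDAT, or platform.]

```python
# Sketch: how an OS might compose full-path latency for CPU -> CXL type 3
# memory.  All values are illustrative only.

def full_path_latency_ns(hmat_cpu_to_port, hop_latencies, cdat_device_latency):
    """Sum the CPU-to-Generic-Port latency (from HMAT, indexed via the
    SRAT node ID), the per-hop link/switch latencies discovered on the
    CXL path, and the endpoint-internal latency (from the device's CDAT)."""
    return hmat_cpu_to_port + sum(hop_latencies) + cdat_device_latency

# 10 ns CPU -> generic port (HMAT), two links plus a switch crossing
# (20 + 15 + 20 ns), and 50 ns from the EP port to the actual media (CDAT).
total = full_path_latency_ns(10, [20, 15, 20], 50)
print(total)  # 115
```

Bandwidth, as the text notes, does not compose by simple summation: interleaving and shared links mean the minimum along each path, weighted by interleave ways, has to be considered instead.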
> 
> > 
> > > +#
> > > +# Since: 9.1
> > > +##
> > > +{ 'struct': 'AcpiGenericPortProperties',
> > > +  'data': { 'pci-bus': 'str',
> > > +            'node': 'uint32' } }
> > > +
> 


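[Editor's note: for readers unfamiliar with QOM object syntax, a plausible invocation matching the AcpiGenericPortProperties schema above might look as follows.  The object id, bus name, and node number are illustrative, and the elided options (machine type, CXL topology setup) are omitted deliberately.]

```shell
# Hypothetical sketch: attach a generic port to CXL host bridge "cxl.1",
# with NUMA-node-like ID 2 as the target of hmat-lb records.
# (IDs and topology are made up for illustration.)
qemu-system-x86_64 ... \
    -object acpi-generic-port,id=gp0,pci-bus=cxl.1,node=2
```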