On Tue, May 7, 2019 at 11:32 PM Tao Xu <tao3...@intel.com> wrote:
>
> This series of patches will build Heterogeneous Memory Attribute Table (HMAT)
> according to the command line. The ACPI HMAT describes the memory attributes,
> such as memory side cache attributes and bandwidth and latency details,
> related to the System Physical Address (SPA) Memory Ranges. The software is
> expected to use this information as hint for optimization.
>
> OSPM evaluates HMAT only during system initialization. Any changes to the
> HMAT state at runtime or information regarding HMAT for hot plug are
> communicated using the _HMA method.
[..]
Hi,

I gave these patches a try while developing support for the new EFI v2.8
Specific Purpose Memory attribute [1]. I have a gap / feature request to
note so that this implementation can emulate current shipping platform BIOS
implementations for persistent memory platforms.

The NUMA configuration I tested was:

    -numa node,mem=4G,cpus=0-19,nodeid=0
    -numa node,mem=4G,cpus=20-39,nodeid=1
    -numa node,mem=4G,nodeid=2
    -numa node,mem=4G,nodeid=3

...and it produced an entry like the following for proximity domain 2:

[0C8h 0200   2]                Structure Type : 0000 [Memory Proximity Domain Attributes]
[0CAh 0202   2]                      Reserved : 0000
[0CCh 0204   4]                        Length : 00000028
[0D0h 0208   2]         Flags (decoded below) : 0002
              Processor Proximity Domain Valid : 0
[0D2h 0210   2]                     Reserved1 : 0000
[0D4h 0212   4]    Processor Proximity Domain : 00000002
[0D8h 0216   4]       Memory Proximity Domain : 00000002
[0DCh 0220   4]                     Reserved2 : 00000000
[0E0h 0224   8]                     Reserved3 : 0000000240000000
[0E8h 0232   8]                     Reserved4 : 0000000100000000

Notice that the "Processor Proximity Domain Valid" bit is clear. I
understand that the implementation is keying off of whether CPUs are
defined for that same node or not, but that's not how current persistent
memory platforms implement "Processor Proximity Domain". On these platforms
persistent memory indeed has its own proximity domain, but the Processor
Proximity Domain is expected to be assigned to the domain that houses the
memory controller for that persistent memory.

So, to emulate that configuration, it would be useful to have a way to
specify "Processor Proximity Domain" without needing to define CPUs in that
domain. Something like:

    -numa node,mem=4G,cpus=0-19,nodeid=0
    -numa node,mem=4G,cpus=20-39,nodeid=1
    -numa node,mem=4G,nodeid=2,localnodeid=0
    -numa node,mem=4G,nodeid=3,localnodeid=1

...to specify that node2 memory is connected / local to node0 and node3
memory is connected / local to node1.
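For anyone following along, the "Processor Proximity Domain Valid"
indication in the dump above is just bit 0 of the 2-byte Flags field in the
HMAT type-0 (Memory Proximity Domain Attributes) structure. A minimal
Python sketch of decoding that structure from its raw bytes (the byte
values below are reconstructed from the dump above for illustration, not
captured from a live table):

```python
import struct

def decode_hmat_mpda(raw: bytes) -> dict:
    """Decode an ACPI HMAT Memory Proximity Domain Attributes
    structure (type 0) from its raw 40-byte little-endian encoding."""
    # Type(2) Reserved(2) Length(4) Flags(2) Reserved1(2)
    # Processor PXM(4) Memory PXM(4); trailing reserved fields ignored.
    (stype, _rsvd, length, flags, _rsvd1,
     proc_pxm, mem_pxm) = struct.unpack_from("<HHIHHII", raw)
    assert stype == 0 and length == 0x28
    return {
        "processor_pxm_valid": bool(flags & 0x1),  # Flags bit 0
        "processor_pxm": proc_pxm,
        "memory_pxm": mem_pxm,
    }

# Bytes matching the proximity-domain-2 entry above: Flags = 0x0002,
# so the Processor Proximity Domain field is not valid.
raw = struct.pack("<HHIHHII", 0, 0, 0x28, 0x0002, 0, 2, 2) + bytes(20)
info = decode_hmat_mpda(raw)
```

With the proposed localnodeid option, the expectation would be that this
entry instead shows Flags bit 0 set and Processor Proximity Domain = 0.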
In general, HMAT specifies that all performance-differentiated memory
ranges have their own proximity domain, but those are expected to still be
associated with a local / host / home-socket memory controller.

[1]: https://lists.01.org/pipermail/linux-nvdimm/2019-May/021668.html

_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm