Jonathan Cameron wrote:
> On Thu, 23 Jun 2022 21:19:49 -0700
> Dan Williams <[email protected]> wrote:
>
> > Be careful to only disable cxl_pmem objects related to a given
> > cxl_nvdimm_bridge. Otherwise, offline_nvdimm_bus() reaches across CXL
> > domains and disables more than is expected.
> >
> > Signed-off-by: Dan Williams <[email protected]>
> Fix, but not fixes tag? Probably wants a comment (I'm guessing
> it didn't matter until now?)
I'll add:
Fixes: 21083f51521f ("cxl/pmem: Register 'pmem' / cxl_nvdimm devices")
To date this has been a benign side effect since it only affects
cxl_test, but as cxl_test gets wider deployment it needs to meet the
expectation that any cxl_test operations have no effect on the
production stack. It might also be important if Device Tree adds
incremental CXL platform topology support.
> By Domains, what do you mean? I don't think we have that
> well defined as a term.
By "domain" I meant a CXL topology hierarchy that a given memdev
attaches. In the userspace cxl-cli tool terms this is a "bus":
# cxl list -M -b "ACPI.CXL" | jq .[0]
{
"memdev": "mem0",
"pmem_size": 536870912,
"ram_size": 0,
"serial": 0,
"host": "0000:35:00.0"
}
# cxl list -M -b "cxl_test" | jq .[0]
{
"memdev": "mem2",
"pmem_size": 1073741824,
"ram_size": 1073741824,
"serial": 1,
"numa_node": 1,
"host": "cxl_mem.1"
}
...where "-b" filters by the "bus" provider. This shows that mem0
derives its CXL.mem connectivity from the typical ACPI hierarchy, and
mem2 is in the "cxl_test" domain. I did not use the "bus" term in the
changelog because "bus" means something different to the kernel as both
of those devices are registered on @cxl_bus_type.
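
For reference, here is a minimal, hypothetical sketch (not the patch
itself) of how a walk over @cxl_bus_type can be constrained to devices
belonging to a single bridge. is_cxl_nvdimm(), to_cxl_nvdimm(),
bus_for_each_dev() and device_release_driver() are existing kernel
helpers; the ->bridge back-pointer and both function names below are
made up purely for illustration:

/* Hypothetical sketch, not the applied fix; assumes drivers/cxl/ context */
#include <linux/device.h>
#include "cxl.h"

static int disable_one_cxl_nvdimm(struct device *dev, void *data)
{
	struct cxl_nvdimm_bridge *cxl_nvb = data;
	struct cxl_nvdimm *cxl_nvd;

	/* skip anything on @cxl_bus_type that is not a cxl_nvdimm */
	if (!is_cxl_nvdimm(dev))
		return 0;

	cxl_nvd = to_cxl_nvdimm(dev);
	/* only touch objects in the same "domain" as @cxl_nvb (assumed back-pointer) */
	if (cxl_nvd->bridge != cxl_nvb)
		return 0;

	device_release_driver(dev);
	return 0;
}

static void disable_bridge_nvdimms(struct cxl_nvdimm_bridge *cxl_nvb)
{
	/* walk @cxl_bus_type, but filter by the given bridge */
	bus_for_each_dev(&cxl_bus_type, NULL, cxl_nvb,
			 disable_one_cxl_nvdimm);
}

The point of the sketch is only that the bridge is passed down as the
match criterion, so an ACPI-rooted bridge never reaches into the
cxl_test hierarchy and vice versa.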