Now that I've had some time to look at the spec, the DVSEC CXL Capability
register (8.1.3.1 in the 3.0 spec) only supports enabling two HDM ranges at
the moment, which to me means we should implement

memdev0=..., memdev1=...

Yesterday I pushed a patch proposal that separated the regions into
persistent and volatile, but I discovered that there isn't a good way
(presently) to determine whether a memory backend object is both file-type
AND has pmem=true, meaning we'd have to either expose that interface
(this seems dubious to me) or do the following:

memdev0=mem0,memdev0-pmem=true,memdev1=mem1,memdev1-pmem=false

and then simply store a bit for each hostmem region accordingly to report
pmem/volatile.  This would allow the flexibility for the backing device to
be whatever the user wants while still being able to mark the behavior of
the type-3 device as pmem or volatile.  Alternatively, we could add
`memdev0-type=` and allow for new types in the future, I suppose.
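To make the shape of that concrete, here is a purely illustrative sketch of
the invocation.  The memdevN and memdevN-pmem properties are the proposal,
not existing cxl-type3 options, and the backend names are made up:

```shell
# Hypothetical: two backends on one type-3 device, one tagged pmem and
# one volatile.  memdev0/memdev1 and the *-pmem flags do not exist in
# QEMU today; this only illustrates the proposed syntax.
-object memory-backend-file,id=mem0,mem-path=/tmp/cxl-mem0,size=256M \
-object memory-backend-ram,id=mem1,size=256M \
-device cxl-type3,bus=rp0,memdev0=mem0,memdev0-pmem=true,memdev1=mem1,memdev1-pmem=false,id=cxl-dev0
```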

One thing I'm a bit confused about: the Media_Type and Memory_Class fields
in the DVSEC CXL Range registers (8.1.3.8.2).  Right now they're set to
010b = "Memory Characteristics are communicated via CDAT".  Do the devices
presently emulate this?  I'm finding it hard to pick apart the code to
identify it.
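One sanity check that may help, from inside the guest: the CDAT is fetched
over a DOE mailbox, so if QEMU is emulating it, the type-3 device should
expose a DOE extended capability.  A rough probe (the BDF 35:00.0 is from
my setup; a reasonably recent pciutils is needed for lspci to decode the
capability by name, older versions just print the raw capability ID):

```shell
# Look for a DOE extended capability on the emulated type-3 device.
# 35:00.0 is the device's BDF in my guest; adjust as needed.
lspci -s 35:00.0 -vvv | grep -iE 'doe|data object'
```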

On Thu, Oct 6, 2022 at 1:30 PM Gregory Price <gourry.memve...@gmail.com>
wrote:

> On Thu, Oct 06, 2022 at 05:42:14PM +0100, Jonathan Cameron wrote:
> > >
> > > 1) The PCI device type is set prior to realize/attributes, and is
> > > currently still set to PCI_CLASS_STORAGE_EXPRESS.  Should this instead
> > > be PCI_CLASS_MEMORY_CXL when presenting as a simple memory expander?
> >
> > We override this later in realize but indeed a bit odd that we first
> > set it to the wrong thing.  I guess that is really old code.  Nice
> > thing to clean up.  I just tried it, and setting it right in the first
> > place + dropping where we change it later works fine.
> >
>
> I'll add it as a pullout patch ahead of my next revision.
>
> /**** snip - skipping ahead for the sake of brevity ****/
>
>
> I was unaware that an SLD could be composed of multiple regions
> of both persistent and volatile memory.  I was under the impression that
> it could only be one type of memory.  Of course, that makes sense in the
> case of a memory expander that simply lets you plug DIMMs in *facepalm*
>
> I see the reason to have separate backends in this case.
>
> The reason to allow an array of backing devices is if we believe each
> individual DIMM plugged into a memory expander is likely to show up as
> (configurably?) individual NUMA nodes, or if it's likely to get
> classified as one NUMA node.
>
> Maybe we should consider two new options:
> --persistent-memdevs=pm1 pm2 pm3
> --volatile-memdevs=vm1 vm2 vm3
>
> etc, and deprecate --memdev, and go with your array of memdevs idea.
>
> I think I could probably whip that up in a day or two.  Thoughts?
>
>
>
> > >
> > > 2) EDK2 sets the memory area as reserved, and the memory is not
> > > configured by the system as RAM.  I'm fairly sure EDK2 just doesn't
> > > support this yet, but there's a chicken/egg problem.  If the device
> > > isn't there, there's nothing to test against... if there's nothing to
> > > test against, no one will write the support.  So I figure we should
> > > kick-start the process (probably by getting it wrong on the first go
> > > around!)
> >
> > Yup, if the BIOS left it alone, OS drivers need to treat it the same
> > as they would deal with hotplugged memory.  Note my strong suspicion is
> > there will be host vendors who won't ever handle volatile CXL memory in
> > firmware.  They will just let the OS bring it up after boot.  As long
> > as you have DDR as well on the system that will be fine.  Means there
> > is one code path to verify rather than two.  Not everyone will care
> > about legacy OS support.
> >
>
> Presently I'm failing to bring up a region of memory even when this is
> set to persistent (even on the upstream configuration).  The kernel is
> presently failing set_size because the region is in use.
>
> I can't tell if this is a driver error or because EDK2 is marking the
> region as reserved.
>
> Relevant boot output:
>
> [    0.000000] BIOS-e820: [mem 0x0000000290000000-0x000000029fffffff] reserved
> [    1.229097] acpi ACPI0016:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
> [    1.244082] acpi ACPI0016:00: _OSC: OS supports [CXL20PortDevRegAccess CXLProtocolErrorReporting CXLNativeHotPlug]
> [    1.261245] acpi ACPI0016:00: _OSC: platform does not support [LTR DPC]
> [    1.272347] acpi ACPI0016:00: _OSC: OS now controls [PCIeHotplug SHPCHotplug PME AER PCIeCapability]
> [    1.286092] acpi ACPI0016:00: _OSC: OS now controls [CXLMemErrorReporting]
>
> The device is otherwise available for use.
>
> CLI output:
> # cxl list
> [
>   {
>     "memdev":"mem0",
>     "pmem_size":268435456,
>     "ram_size":0,
>     "serial":0,
>     "host":"0000:35:00.0"
>   }
> ]
>
> but it fails to set up correctly:
>
> cxl create-region -m -d decoder0.0 -w 1 -g 256 mem0
> cxl region: create_region: region0: set_size failed: Numerical result out of range
> cxl region: cmd_create_region: created 0 regions
>
> I tracked this down to this part of the kernel:
>
> kernel/resource.c
>
> static struct resource *get_free_mem_region(...)
> {
>         ... snip ...
>         enumerate regions, fail to find a usable region
>         ... snip ...
>         return ERR_PTR(-ERANGE);
> }
>
> but I'm not sure what to do with this info.  We have some proof
> that real hardware works with this no problem, and the only difference
> is that the EFI/BIOS/firmware is setting the memory regions as `usable`
> or `soft reserved`, which would imply that EDK2 is the blocker here
> regardless of the OS driver status.
>
> But I'd seen elsewhere that you had gotten some of this working, and I'm
> failing to get anything working at the moment.  If you have any input I
> would greatly appreciate the help.
>
> QEMU config:
>
> /opt/qemu-cxl2/bin/qemu-system-x86_64 \
> -drive file=/var/lib/libvirt/images/cxl.qcow2,format=qcow2,index=0,media=disk \
> -m 2G,slots=4,maxmem=4G \
> -smp 4 \
> -machine type=q35,accel=kvm,cxl=on \
> -enable-kvm \
> -nographic \
> -device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 \
> -device cxl-rp,id=rp0,bus=cxl.0,chassis=0,slot=0 \
> -object memory-backend-file,id=cxl-mem0,mem-path=/tmp/cxl-mem0,size=256M \
> -object memory-backend-file,id=lsa0,mem-path=/tmp/cxl-lsa0,size=256M \
> -device cxl-type3,bus=rp0,pmem=true,memdev=cxl-mem0,lsa=lsa0,id=cxl-pmem0 \
> -M cxl-fmw.0.targets.0=cxl.0,cxl-fmw.0.size=256M
>
> I'd seen on the lists that you had seen issues with single-RP setups,
> but no combination of configurations I've tried (including all the ones
> in the docs and tests) leads to a successful region creation with
> `cxl create-region`.
>
> > >
> > > 3) Upstream Linux drivers haven't touched RAM configurations yet.  I
> > > just confirmed this with Dan Williams yesterday on IRC.  My
> > > understanding is that it's been worked on but nothing has been
> > > upstreamed, in part because there are only a very small set of
> > > devices available to developers at the moment.
> >
> > There was an offer of similar volatile-memory QEMU emulation in the
> > session on QEMU CXL at Linux Plumbers.  That will look something like
> > you have here, and maybe reflects that someone has hardware as well...
> >
>
> I saw that, and I figured I'd start the conversation by pushing
> something :].
>
