Re: CXL volatile memory is not listed

2023-08-17 Thread Maverickk 78
Hi Jonathan,

Any CXL switch use case will always need some sort of management
agent + FM configuring the attached CXL memory.

In most cases it would be a BMC configuring MLDs/MHDs to hosts, and in
very rare scenarios it may be one of the hosts interacting with the FM
firmware inside the switch which would do the trick.

Another use case is a static, hardcoded mapping between CXL memory and
hosts built into the CXL switch.

There is no scenario where one of the hosts' BIOS can push selected
CXL memory to itself.


Is my understanding correct?



On Fri, 11 Aug 2023 at 19:25, Jonathan Cameron
 wrote:
>
> On Fri, 11 Aug 2023 08:04:26 +0530
> Maverickk 78  wrote:
>
> > Jonathan,
> >
> > > More generally for the flow that would bring the memory up as system ram
> > > you would typically need the bios to have done the CXL enumeration or
> > > a bunch of scripts in the kernel to have done it.  In general it can't
> > > be fully automated, because there are policy decisions to make on things
> > > like interleaving.
> >
> > BIOS CXL enumeration? Is the CEDT not enough, or does the BIOS further
> > need to create an entry in the e820 table?
> On Intel platforms 'maybe' :)  I know how it works on those that just
> use the nice standard EFI tables - less familiar with the e820 stuff :)
>
> CEDT says where to find the various bits of system-related CXL stuff.
> Nothing in there on the configuration that should be used, such as
> interleaving, as that depends on what the administrator wants. Or on
> what the BIOS has decided the users should have.
>
> >
> > >
> > > I'm not aware of any open source BIOSs that do it yet.  So you have
> > > to rely on the same kernel paths as for persistent memory - manual
> > > configuration etc in the kernel.
> > >
> > Manual works with "cxl create-region"  :)
> Great.
>
> Jonathan
>
> >
> > On Thu, 10 Aug 2023 at 16:05, Jonathan Cameron
> >  wrote:
> > >
> > > On Wed, 9 Aug 2023 04:21:47 +0530
> > > Maverickk 78  wrote:
> > >
> > > > Hello,
> > > >
> > > > I am running qemu-system-x86_64
> > > >
> > > > qemu-system-x86_64 --version
> > > > QEMU emulator version 8.0.92 (v8.1.0-rc2-80-g0450cf0897)
> > > >
> > > +Cc linux-cxl as the answer is more to do with linux than qemu.
> > >
> > > > qemu-system-x86_64 \
> > > > -m 2G,slots=4,maxmem=4G \
> > > > -smp 4 \
> > > > -machine type=q35,accel=kvm,cxl=on \
> > > > -enable-kvm \
> > > > -nographic \
> > > > -device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 \
> > > > -device cxl-rp,id=rp0,bus=cxl.0,chassis=0,port=0,slot=0 \
> > > > -object memory-backend-file,id=mem0,mem-path=/tmp/mem0,size=1G,share=true \
> > > > -device cxl-type3,bus=rp0,volatile-memdev=mem0,id=cxl-mem0 \
> > > > -M cxl-fmw.0.targets.0=cxl.0,cxl-fmw.0.size=1G
> > >
> > > There are some problems upstream at the moment (probably not cxl related
> > > but I'm digging). So today I can't boot an x86 machine. (goody)
> > >
> > >
> > > More generally for the flow that would bring the memory up as system ram
> > > you would typically need the bios to have done the CXL enumeration or
> > > a bunch of scripts in the kernel to have done it.  In general it can't
> > > be fully automated, because there are policy decisions to make on things
> > > like interleaving.
> > >
> > > I'm not aware of any open source BIOSs that do it yet.  So you have
> > > to rely on the same kernel paths as for persistent memory - manual
> > > configuration etc in the kernel.
> > >
> > > There is support in ndctl for those enabling flows, so I'd look there
> > > for more information.
> > >
> > > Jonathan
> > >
> > >
> > > >
> > > >
> > > > I was expecting the CXL memory to be listed in "System RAM", but lsmem
> > > > shows only the 2G of System RAM; it's not listing the CXL memory.
> > > >
> > > > Do I need to pass any particular parameter in the kernel command line?
> > > >
> > > > Is there any documentation available? I followed the inputs provided in
> > > >
> > > > https://lore.kernel.org/linux-mm/y+csoehvlkudn...@kroah.com/T/
> > > >
> > > > Is there any documentation/blog that covers this?
> > >
>



Re: CXL volatile memory is not listed

2023-08-17 Thread Maverickk 78
Hi Fan

Awesome, thanks for the info!
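
For anyone reading this later: the flow Fan describes below worked here,
roughly as follows (the node id is illustrative - check "numactl -H" for
the actual CPU-less node backing the CXL region, and app_name is a
placeholder):

  # the CXL region shows up as a memory-only NUMA node
  numactl -H
  # bind an application's allocations to that node (node 1 is a guess)
  numactl --membind=1 ./app_name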

On Fri, 11 Aug 2023 at 22:19, Fan Ni  wrote:
>
> On Fri, Aug 11, 2023 at 07:52:25AM +0530, Maverickk 78 wrote:
> > Thanks Fan,
> >
> > cxl create-region works like a charm :)
> >
> > Since this gets listed as "System RAM (kmem)", I guess the kernel
> > treats it as regular memory and allocates it to applications when
> > needed? Or is there extra effort needed to make it available to
> > applications on the host?
> >
>
> Yes. Once it is onlined, you can use it as regular memory.
> CXL memory will serve as a zero-CPU memory-only NUMA node.
> You can check it with numactl -H.
>
> To use the cxl memory with an app, you can use
> numactl --membind=numa_id app_name
> #numa_id is the dedicated numa node where cxl memory sits.
>
> One thing to note: KVM will not work correctly with QEMU emulation when
> you try to use CXL memory for an application, so do not enable KVM.
>
> Fan
>
> > On Thu, 10 Aug 2023 at 22:03, Fan Ni  wrote:
> > >
> > > On Wed, Aug 09, 2023 at 04:21:47AM +0530, Maverickk 78 wrote:
> > > > Hello,
> > > >
> > > > I am running qemu-system-x86_64
> > > >
> > > > qemu-system-x86_64 --version
> > > > QEMU emulator version 8.0.92 (v8.1.0-rc2-80-g0450cf0897)
> > > >
> > > > qemu-system-x86_64 \
> > > > -m 2G,slots=4,maxmem=4G \
> > > > -smp 4 \
> > > > -machine type=q35,accel=kvm,cxl=on \
> > > > -enable-kvm \
> > > > -nographic \
> > > > -device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 \
> > > > -device cxl-rp,id=rp0,bus=cxl.0,chassis=0,port=0,slot=0 \
> > > > -object memory-backend-file,id=mem0,mem-path=/tmp/mem0,size=1G,share=true \
> > > > -device cxl-type3,bus=rp0,volatile-memdev=mem0,id=cxl-mem0 \
> > > > -M cxl-fmw.0.targets.0=cxl.0,cxl-fmw.0.size=1G
> > > >
> > > >
> > > > I was expecting the CXL memory to be listed in "System RAM", but lsmem
> > > > shows only the 2G of System RAM; it's not listing the CXL memory.
> > > >
> > > > Do I need to pass any particular parameter in the kernel command line?
> > > >
> > > > Is there any documentation available? I followed the inputs provided in
> > > >
> > > > https://lore.kernel.org/linux-mm/y+csoehvlkudn...@kroah.com/T/
> > > >
> > > > Is there any documentation/blog that covers this?
> > >
> > > If I remember it correctly, for volatile cxl memory, we need to create a
> > > region and then it will be discovered as system memory and show up.
> > >
> > > Try to create a region with "cxl create-region".
> > >
> > > Fan
> > > >



Re: CXL volatile memory is not listed

2023-08-10 Thread Maverickk 78
Thanks Phil, David and Fan

Looks like it was an error on my side due to lack of information;
cxl create-region works :)
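
For reference, the rough sequence that worked here (decoder/memdev/dax
names are illustrative - take the real ones from "cxl list" - and exact
options vary between cxl-cli/ndctl versions):

  # inspect the memdevs and root decoders the kernel discovered
  cxl list -M -D
  # create a volatile (ram) region from one memdev, x1 interleave
  cxl create-region -m -d decoder0.0 -w 1 -t ram mem0
  # if the resulting dax device is not auto-onlined as system-ram
  daxctl reconfigure-device --mode=system-ram dax0.0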


On Thu, 10 Aug 2023 at 16:29, Philippe Mathieu-Daudé  wrote:
>
> Hi,
>
> Cc'ing Igor and David.
>
> On 9/8/23 00:51, Maverickk 78 wrote:
> > Hello,
> >
> > I am running qemu-system-x86_64
> >
> > qemu-system-x86_64 --version
> > QEMU emulator version 8.0.92 (v8.1.0-rc2-80-g0450cf0897)
> >
> > qemu-system-x86_64 \
> > -m 2G,slots=4,maxmem=4G \
> > -smp 4 \
> > -machine type=q35,accel=kvm,cxl=on \
> > -enable-kvm \
> > -nographic \
> > -device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 \
> > -device cxl-rp,id=rp0,bus=cxl.0,chassis=0,port=0,slot=0 \
> > -object memory-backend-file,id=mem0,mem-path=/tmp/mem0,size=1G,share=true \
> > -device cxl-type3,bus=rp0,volatile-memdev=mem0,id=cxl-mem0 \
> > -M cxl-fmw.0.targets.0=cxl.0,cxl-fmw.0.size=1G
> >
> >
> > I was expecting the CXL memory to be listed in "System RAM", but lsmem
> > shows only the 2G of System RAM; it's not listing the CXL memory.
>
> Sounds like a bug. Do you mind reporting at
> https://gitlab.com/qemu-project/qemu/-/issues?
>
> Thanks,
>
> Phil.
>
> > Do I need to pass any particular parameter in the kernel command line?
> >
> > Is there any documentation available? I followed the inputs provided in
> >
> > https://lore.kernel.org/linux-mm/y+csoehvlkudn...@kroah.com/T/
> >
> > Is there any documentation/blog that covers this?
> >
>



Re: CXL volatile memory is not listed

2023-08-10 Thread Maverickk 78
Jonathan,

> More generally for the flow that would bring the memory up as system ram
> you would typically need the bios to have done the CXL enumeration or
> a bunch of scripts in the kernel to have done it.  In general it can't
> be fully automated, because there are policy decisions to make on things like
> interleaving.

BIOS CXL enumeration? Is the CEDT not enough, or does the BIOS further
need to create an entry in the e820 table?

>
> I'm not aware of any open source BIOSs that do it yet.  So you have
> to rely on the same kernel paths as for persistent memory - manual
> configuration etc in the kernel.
>
Manual works with "cxl create-region"  :)

On Thu, 10 Aug 2023 at 16:05, Jonathan Cameron
 wrote:
>
> On Wed, 9 Aug 2023 04:21:47 +0530
> Maverickk 78  wrote:
>
> > Hello,
> >
> > I am running qemu-system-x86_64
> >
> > qemu-system-x86_64 --version
> > QEMU emulator version 8.0.92 (v8.1.0-rc2-80-g0450cf0897)
> >
> +Cc linux-cxl as the answer is more to do with linux than qemu.
>
> > qemu-system-x86_64 \
> > -m 2G,slots=4,maxmem=4G \
> > -smp 4 \
> > -machine type=q35,accel=kvm,cxl=on \
> > -enable-kvm \
> > -nographic \
> > -device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 \
> > -device cxl-rp,id=rp0,bus=cxl.0,chassis=0,port=0,slot=0 \
> > -object memory-backend-file,id=mem0,mem-path=/tmp/mem0,size=1G,share=true \
> > -device cxl-type3,bus=rp0,volatile-memdev=mem0,id=cxl-mem0 \
> > -M cxl-fmw.0.targets.0=cxl.0,cxl-fmw.0.size=1G
>
> There are some problems upstream at the moment (probably not cxl related but
> I'm digging). So today I can't boot an x86 machine. (goody)
>
>
> More generally for the flow that would bring the memory up as system ram
> you would typically need the bios to have done the CXL enumeration or
> a bunch of scripts in the kernel to have done it.  In general it can't
> be fully automated, because there are policy decisions to make on things like
> interleaving.
>
> I'm not aware of any open source BIOSs that do it yet.  So you have
> to rely on the same kernel paths as for persistent memory - manual
> configuration etc in the kernel.
>
> There is support in ndctl for those enabling flows, so I'd look there
> for more information.
>
> Jonathan
>
>
> >
> >
> > I was expecting the CXL memory to be listed in "System RAM", but lsmem
> > shows only the 2G of System RAM; it's not listing the CXL memory.
> >
> > Do I need to pass any particular parameter in the kernel command line?
> >
> > Is there any documentation available? I followed the inputs provided in
> >
> > https://lore.kernel.org/linux-mm/y+csoehvlkudn...@kroah.com/T/
> >
> > Is there any documentation/blog that covers this?
>



Re: CXL volatile memory is not listed

2023-08-10 Thread Maverickk 78
Thanks Fan,

cxl create-region works like a charm :)

Since this gets listed as "System RAM (kmem)", I guess the kernel
treats it as regular memory and allocates it to applications when
needed? Or is there extra effort needed to make it available to
applications on the host?

On Thu, 10 Aug 2023 at 22:03, Fan Ni  wrote:
>
> On Wed, Aug 09, 2023 at 04:21:47AM +0530, Maverickk 78 wrote:
> > Hello,
> >
> > I am running qemu-system-x86_64
> >
> > qemu-system-x86_64 --version
> > QEMU emulator version 8.0.92 (v8.1.0-rc2-80-g0450cf0897)
> >
> > qemu-system-x86_64 \
> > -m 2G,slots=4,maxmem=4G \
> > -smp 4 \
> > -machine type=q35,accel=kvm,cxl=on \
> > -enable-kvm \
> > -nographic \
> > -device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 \
> > -device cxl-rp,id=rp0,bus=cxl.0,chassis=0,port=0,slot=0 \
> > -object memory-backend-file,id=mem0,mem-path=/tmp/mem0,size=1G,share=true \
> > -device cxl-type3,bus=rp0,volatile-memdev=mem0,id=cxl-mem0 \
> > -M cxl-fmw.0.targets.0=cxl.0,cxl-fmw.0.size=1G
> >
> >
> > I was expecting the CXL memory to be listed in "System RAM", but lsmem
> > shows only the 2G of System RAM; it's not listing the CXL memory.
> >
> > Do I need to pass any particular parameter in the kernel command line?
> >
> > Is there any documentation available? I followed the inputs provided in
> >
> > https://lore.kernel.org/linux-mm/y+csoehvlkudn...@kroah.com/T/
> >
> > Is there any documentation/blog that covers this?
>
> If I remember it correctly, for volatile cxl memory, we need to create a
> region and then it will be discovered as system memory and show up.
>
> Try to create a region with "cxl create-region".
>
> Fan
> >



CXL volatile memory is not listed

2023-08-08 Thread Maverickk 78
Hello,

I am running qemu-system-x86_64

qemu-system-x86_64 --version
QEMU emulator version 8.0.92 (v8.1.0-rc2-80-g0450cf0897)

qemu-system-x86_64 \
-m 2G,slots=4,maxmem=4G \
-smp 4 \
-machine type=q35,accel=kvm,cxl=on \
-enable-kvm \
-nographic \
-device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 \
-device cxl-rp,id=rp0,bus=cxl.0,chassis=0,port=0,slot=0 \
-object memory-backend-file,id=mem0,mem-path=/tmp/mem0,size=1G,share=true \
-device cxl-type3,bus=rp0,volatile-memdev=mem0,id=cxl-mem0 \
-M cxl-fmw.0.targets.0=cxl.0,cxl-fmw.0.size=1G


I was expecting the CXL memory to be listed in "System RAM", but lsmem
shows only the 2G of System RAM; it's not listing the CXL memory.

Do I need to pass any particular parameter in the kernel command line?

Is there any documentation available? I followed the inputs provided in

https://lore.kernel.org/linux-mm/y+csoehvlkudn...@kroah.com/T/

Is there any documentation/blog that covers this?



Re: property 'cxl-type3.size' not found

2023-04-03 Thread Maverickk 78
Hi Jonathan

Do you want me to modify the doc (remove size)? I can do that.
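
For anyone else hitting this: per Jonathan's reply below, just drop the
size property from the cxl-type3 lines, e.g. (a sketch, other parameters
unchanged):

  -device cxl-type3,bus=swport0,memdev=cxl-mem0,lsa=cxl-lsa0,id=cxl-pmem0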

Regards
Raghu

On Mon, 3 Apr 2023, 15:12 Jonathan Cameron, 
wrote:

> On Mon, 3 Apr 2023 14:34:33 +0530
> Maverickk 78  wrote:
>
> > Hello,
> >
> > I am trying qemu-system-aarch64 & cxl configuration listed in
> >
> > https://www.qemu.org/docs/master/system/devices/cxl.html
> >
> > qemu-system-aarch64 -M virt,gic-version=3,cxl=on -m 4g,maxmem=8G,slots=8 -cpu max \
> > ...
> > -object memory-backend-file,id=cxl-mem0,share=on,mem-path=/tmp/cxltest.raw,size=256M \
> > -object memory-backend-file,id=cxl-mem1,share=on,mem-path=/tmp/cxltest1.raw,size=256M \
> > -object memory-backend-file,id=cxl-mem2,share=on,mem-path=/tmp/cxltest2.raw,size=256M \
> > -object memory-backend-file,id=cxl-mem3,share=on,mem-path=/tmp/cxltest3.raw,size=256M \
> > -object memory-backend-file,id=cxl-lsa0,share=on,mem-path=/tmp/lsa0.raw,size=256M \
> > -object memory-backend-file,id=cxl-lsa1,share=on,mem-path=/tmp/lsa1.raw,size=256M \
> > -object memory-backend-file,id=cxl-lsa2,share=on,mem-path=/tmp/lsa2.raw,size=256M \
> > -object memory-backend-file,id=cxl-lsa3,share=on,mem-path=/tmp/lsa3.raw,size=256M \
> > -device pxb-cxl,bus_nr=12,bus=pcie.0,id=cxl.1 \
> > -device cxl-rp,port=0,bus=cxl.1,id=root_port0,chassis=0,slot=0 \
> > -device cxl-rp,port=1,bus=cxl.1,id=root_port1,chassis=0,slot=1 \
> > -device cxl-upstream,bus=root_port0,id=us0 \
> > -device cxl-downstream,port=0,bus=us0,id=swport0,chassis=0,slot=4 \
> > -device cxl-type3,bus=swport0,memdev=cxl-mem0,lsa=cxl-lsa0,id=cxl-pmem0,size=256M \
> > -device cxl-downstream,port=1,bus=us0,id=swport1,chassis=0,slot=5 \
> > -device cxl-type3,bus=swport1,memdev=cxl-mem1,lsa=cxl-lsa1,id=cxl-pmem1,size=256M \
> > -device cxl-downstream,port=2,bus=us0,id=swport2,chassis=0,slot=6 \
> > -device cxl-type3,bus=swport2,memdev=cxl-mem2,lsa=cxl-lsa2,id=cxl-pmem2,size=256M \
> > -device cxl-downstream,port=3,bus=us0,id=swport3,chassis=0,slot=7 \
> > -device cxl-type3,bus=swport3,memdev=cxl-mem3,lsa=cxl-lsa3,id=cxl-pmem3,size=256M \
> > -M cxl-fmw.0.targets.0=cxl.1,cxl-fmw.0.size=4G,cxl-fmw.0.interleave-granularity=4k
> >
> >
> >
> > I hit the following error:
> > qemu-system-aarch64: -device cxl-type3,bus=swport0,memdev=cxl-mem0,lsa=cxl-lsa0,id=cxl-pmem0,size=256M: property 'cxl-type3.size' not found
> >
> >
> > Any clue if I am missing something?
>
> Looks like the docs have slipped behind the current state. Size isn't
> needed for the memdev any more, as it can be established from the memory
> backend, and there isn't a reason why they'd ever be different (there
> was in a much earlier version).
>
> There is a known bigger issue with those docs, which is that they got
> cherry-picked from a series that included ARM support, but ARM support
> hasn't landed yet (and will be a while due to the need for DT support).
>
> I'll look at fixing both issues up. Or if you want to send a patch,
> Raghu, that would be even better!
>
> Jonathan
>
> >
> >
> > Regards
> >
>
>


property 'cxl-type3.size' not found

2023-04-03 Thread Maverickk 78
Hello,

I am trying qemu-system-aarch64 & cxl configuration listed in

https://www.qemu.org/docs/master/system/devices/cxl.html

qemu-system-aarch64 -M virt,gic-version=3,cxl=on -m 4g,maxmem=8G,slots=8 -cpu max \
...
-object memory-backend-file,id=cxl-mem0,share=on,mem-path=/tmp/cxltest.raw,size=256M \
-object memory-backend-file,id=cxl-mem1,share=on,mem-path=/tmp/cxltest1.raw,size=256M \
-object memory-backend-file,id=cxl-mem2,share=on,mem-path=/tmp/cxltest2.raw,size=256M \
-object memory-backend-file,id=cxl-mem3,share=on,mem-path=/tmp/cxltest3.raw,size=256M \
-object memory-backend-file,id=cxl-lsa0,share=on,mem-path=/tmp/lsa0.raw,size=256M \
-object memory-backend-file,id=cxl-lsa1,share=on,mem-path=/tmp/lsa1.raw,size=256M \
-object memory-backend-file,id=cxl-lsa2,share=on,mem-path=/tmp/lsa2.raw,size=256M \
-object memory-backend-file,id=cxl-lsa3,share=on,mem-path=/tmp/lsa3.raw,size=256M \
-device pxb-cxl,bus_nr=12,bus=pcie.0,id=cxl.1 \
-device cxl-rp,port=0,bus=cxl.1,id=root_port0,chassis=0,slot=0 \
-device cxl-rp,port=1,bus=cxl.1,id=root_port1,chassis=0,slot=1 \
-device cxl-upstream,bus=root_port0,id=us0 \
-device cxl-downstream,port=0,bus=us0,id=swport0,chassis=0,slot=4 \
-device cxl-type3,bus=swport0,memdev=cxl-mem0,lsa=cxl-lsa0,id=cxl-pmem0,size=256M \
-device cxl-downstream,port=1,bus=us0,id=swport1,chassis=0,slot=5 \
-device cxl-type3,bus=swport1,memdev=cxl-mem1,lsa=cxl-lsa1,id=cxl-pmem1,size=256M \
-device cxl-downstream,port=2,bus=us0,id=swport2,chassis=0,slot=6 \
-device cxl-type3,bus=swport2,memdev=cxl-mem2,lsa=cxl-lsa2,id=cxl-pmem2,size=256M \
-device cxl-downstream,port=3,bus=us0,id=swport3,chassis=0,slot=7 \
-device cxl-type3,bus=swport3,memdev=cxl-mem3,lsa=cxl-lsa3,id=cxl-pmem3,size=256M \
-M cxl-fmw.0.targets.0=cxl.1,cxl-fmw.0.size=4G,cxl-fmw.0.interleave-granularity=4k



I hit the following error:
qemu-system-aarch64: -device cxl-type3,bus=swport0,memdev=cxl-mem0,lsa=cxl-lsa0,id=cxl-pmem0,size=256M: property 'cxl-type3.size' not found


Any clue if I am missing something?


Regards


Re: Cxl devel!

2023-03-31 Thread Maverickk 78
Hi Jonathan,

Thanks for the response, and for the effort and time you spent listing
the TODOs in the CXL space.

I started learning CXL 2.0 six weeks ago; I'm part of a startup
developing a CXL 2.0 switch to build composable architecture.

As part of that I have built QEMU and configured it with CXL devices as
documented in
https://stevescargall.com/blog/2022/01/20/how-to-emulate-cxl-devices-using-kvm-and-qemu/

And used your PoC code to understand the FMAPI & MCTP message flow.

Going forward I will ramp up on the existing support in QEMU, especially
the points you listed, and get used to the development/debug/test
workflow; I probably need 2-3 weeks to process all the information you
provided.

Any cheat sheets from your side would be helpful and would let me catch
up sooner.

Looking forward to working with you.

Regards
Raghu



On Tue, 28 Mar 2023 at 18:29, Jonathan Cameron
 wrote:
>
> On Fri, 24 Mar 2023 04:32:52 +0530
> Maverickk 78  wrote:
>
> > Hello Jonathan
> >
> > Raghu here. I've been going over your CXL patches for the past few
> > days; they're very impressive.
> >
> > I want to get involved and contribute to your endeavor, maybe bits &
> > pieces to start.
> >
> > If you have a specific trivial task (cxl/pcie/fm) in mind, please let me know.
> >
> > Regards
> > Raghu
> >
>
> Hi Raghu,
>
> Great that you are interested in getting involved.
>
> As to suggestions for what to do, it depends on what interests you.
> I'll list some broad categories and hopefully we can focus in on stuff.
>
> The following is brainstorming on the spot, so I've probably forgotten
> lots of things.  There is an out-of-date todo at:
> https://gitlab.com/jic23/qemu/-/wikis/TODO%20list
>
> Smallish tasks.
> 1) Increase fidelity of emulation.  In many places we take short cuts in
>    the interests of supporting 'enough' to be able to test kernel code
>    against.  A classic example of this is we don't perform any of the
>    checks we should be on HDM decoders.  Tightening those restrictions
>    up would be great.  Typically that involves tweaking the kernel code
>    to try and do 'wrong' things.  There are some other examples of this
>    on gitlab.com/jic23/qemu around locking of registers.  This is rarely
>    as high a priority as 'new features' but we will want to tidy up all
>    these loose corners eventually.
> 2) Missing features.  An example of this is the security related stuff
>    that went into the kernel recently.  Whilst that is fairly easy to
>    check using the cxl mocking driver in the kernel, I'd also like to
>    see a QEMU implementation.  Some of the big features don't interact
>    as they should.  For instance we don't report poison list overflow
>    via the event log yet.  It would be great to get this all working
>    rather than requiring injection of poison and the event as currently
>    needed (not all upstream yet).
> 3) Cleanup some of the existing emulation that we haven't upstreamed yet.
>    - CPMU. Main challenge with this is finding the balance between
>      insane commandlines and flexibility.  Right now the code on
>      gitlab.com/jic23/qemu (cxl-) provides a fairly random set of
>      counters that were handy for testing corners of the driver that's
>      at v3 on the kernel mailing lists.
>    - Review and testing of the stuff that is on my tree (all been on
>      list I think) but not yet at the top. Fixing up problems with that
>      in advance will save us time when proposing them for upstream.
>    - SPDM / CMA.  Right now this relies on a connection to SPDM-emu.
>      I'd like to explore if we can use libspdm as a library instead.
>      Last time I checked this looked non-trivial but the dmtf tools team
>      are keen to help.
>
>
> Bigger stuff - note that people are already looking at some of these but they
> may be interested in some help.
> 1) An example type 2 device.  We'd probably have to invent something
>    along the lines of a simple copy offload engine.  The intent being
>    to prove out that the kernel code works.  Dan has some stuff on the
>    git.kernel.org tree to support type 2 devices.
> 2) Tests.  So far we test the bios table generation and that we can
>    start qemu with different topologies. I'd love to see a test that
>    actually brings up a region and tests some reading and writing +
>    ideally looks at the result in memory devices to check everything
>    worked.
> 3) Dynamic Capacity Devices - some stuff ongoing related to this, but
>    there is a lot to do.  Main focus today is on MHDs.  Perhaps look at
>    the very early code posted for switch CCIs.  We have a lot of work
>    to do in the kernel for this stuff as well.
> 4) MCTP CCI.  I posted a PoC for this a long time back.  It works but
>    we'd need to figure out how to wire it up sensibly.
>
> Jonathan
>



Cxl devel!

2023-03-23 Thread Maverickk 78
Hello Jonathan

Raghu here. I've been going over your CXL patches for the past few days;
they're very impressive.

I want to get involved and contribute to your endeavor, maybe bits &
pieces to start.

If you have a specific trivial task (cxl/pcie/fm) in mind, please let me know.

Regards
Raghu


Re: Call failed: MCTP Endpoint did not respond: Qemu CXL switch with mctp-1.0

2023-03-17 Thread Maverickk 78
Hi Jonathan,

Thanks for the quick response, this patch works!


Regards
Raghu

On Fri, 17 Mar 2023 at 23:42, Jonathan Cameron
 wrote:
>
> On Fri, 17 Mar 2023 16:37:20 +
> Jonathan Cameron via  wrote:
>
> > On Fri, 17 Mar 2023 00:11:10 +0530
> > Maverickk 78  wrote:
> >
> > > Hi
> > >
> > > I am trying mctp & mctpd with aspeed + buildroot (master) + linux v6.2
> > > with QEMU 7.2.
> > >
> > >
> > > I have added the necessary FMAPI-related patches into QEMU to support
> > > CXL switch emulation:
> > >
> > > RFC-1-2-misc-i2c_mctp_cxl_fmapi-Initial-device-emulation.diff
> > >
> > > RFC-2-3-hw-i2c-add-mctp-core.diff
> > >
> > > RFC-4-4-hw-misc-add-a-toy-i2c-echo-device.diff
> > >
> > > RFC-2-2-arm-virt-Add-aspeed-i2c-controller-and-MCTP-EP-to-enable-MCTP-testing.diff
> > >
> > > RFC-3-3-hw-nvme-add-nvme-management-interface-model.diff
> > >
> > >
> > > Executed the following mctp commands to set up the binding:
> > >
> > > mctp link set mctpi2c15 up
> > >
> > > mctp addr add 50 dev mctpi2c15
> > >
> > > mctp link set mctpi2c15 net 11
> > >
> > > systemctl restart mctpd.service
> > >
> > > busctl call xyz.openbmc_project.MCTP /xyz/openbmc_project/mctp
> > > au.com.CodeConstruct.MCTP AssignEndpoint say mctpi2c15 1 0x4d
> > >
> > >
> > > The above busctl call reaches the FMAPI patch and sets up the
> > > endpoint id, but then mctpd fails with the following log after a timeout:
> > >
> > > Call failed: MCTP Endpoint did not respond
> > >
> > > Any clue what's going on?
> >
> > Hi Raghu,
> >
> > Yikes. Didn't think anyone would still use that series.
> > Not even sure I still have a tree with it on.
> >
> > I'll try and bring up again and get back to you. Might be a little
> > while though.
>
> It is Friday and this was more interesting than what I was planning to do. :)
>
> I think the breakage comes from the async send i2c series that was a month
> or so after the PoC was posted. Issue was it was only entering the _bh once.
>
> The following hack works for me on current mainline (+ CXL patches that
> shouldn't affect this).
>
>
>
>
> From c8d819835faaec2b2a4755eb891284fe21c0747d Mon Sep 17 00:00:00 2001
> From: Jonathan Cameron 
> Date: Fri, 17 Mar 2023 18:07:08 +
> Subject: [PATCH] misc/i2c_mctp_fmapi: Hack
>
> Signed-off-by: Jonathan Cameron 
> ---
>  hw/misc/i2c_mctp_cxl_fmapi.c | 9 -
>  1 file changed, 4 insertions(+), 5 deletions(-)
>
> diff --git a/hw/misc/i2c_mctp_cxl_fmapi.c b/hw/misc/i2c_mctp_cxl_fmapi.c
> index 219e30bfd5..2e2da80264 100644
> --- a/hw/misc/i2c_mctp_cxl_fmapi.c
> +++ b/hw/misc/i2c_mctp_cxl_fmapi.c
> @@ -330,7 +330,7 @@ static int i2c_mctp_cxl_switch_event(I2CSlave *i2c, enum i2c_event event)
>  case I2C_FINISH:
>  s->len = 0;
>  s->state = MCTP_I2C_PROCESS_REQUEST;
> -qemu_bh_schedule(s->bh);
> +i2c_bus_master(s->bus, s->bh);
>  return 0;
>  case I2C_NACK:
>  default:
> @@ -671,12 +671,11 @@ static void mctp_bh(void *opaque)
>
>  switch (s->state) {
>  case MCTP_I2C_PROCESS_REQUEST:
> -i2c_bus_master(s->bus, s->bh);
>  s->state = MCTP_I2C_START_SEND;
> -return;
> -
> +//return;
> +//fallthrough
>  case MCTP_I2C_START_SEND:
> -i2c_start_send(s->bus, s->source_slave_addr);
> +i2c_start_send_async(s->bus, s->source_slave_addr);
>  s->send_buf[s->len] = s->source_slave_addr << 1;
>  s->len++;
>  s->state = MCTP_I2C_ACK;
> --
> 2.37.2
>
>
> >
> > Jonathan
> >
> >
> > >
> > >
> > > Regards
> > > Raghu
> >
> >
> >
>



Call failed: MCTP Endpoint did not respond: Qemu CXL switch with mctp-1.0

2023-03-16 Thread Maverickk 78
Hi

I am trying mctp & mctpd with aspeed + buildroot (master) + linux v6.2
with QEMU 7.2.


I have added the necessary FMAPI-related patches into QEMU to support
CXL switch emulation:

RFC-1-2-misc-i2c_mctp_cxl_fmapi-Initial-device-emulation.diff

RFC-2-3-hw-i2c-add-mctp-core.diff

RFC-4-4-hw-misc-add-a-toy-i2c-echo-device.diff

RFC-2-2-arm-virt-Add-aspeed-i2c-controller-and-MCTP-EP-to-enable-MCTP-testing.diff

RFC-3-3-hw-nvme-add-nvme-management-interface-model.diff


Executed the following mctp commands to set up the binding:

mctp link set mctpi2c15 up

mctp addr add 50 dev mctpi2c15

mctp link set mctpi2c15 net 11

systemctl restart mctpd.service

busctl call xyz.openbmc_project.MCTP /xyz/openbmc_project/mctp
au.com.CodeConstruct.MCTP AssignEndpoint say mctpi2c15 1 0x4d


The above busctl call reaches the FMAPI patch and sets up the endpoint
id, but then mctpd fails with the following log after a timeout:

Call failed: MCTP Endpoint did not respond

Any clue what's going on?


Regards
Raghu