Hello Jonathan,

On 5/6/22 16:07, Jonathan Cameron wrote:
On Thu, 31 Mar 2022 18:57:33 +0200
Klaus Jensen <i...@irrelevant.dk> wrote:

From: Klaus Jensen <k.jen...@samsung.com>

Hi all,

This RFC series adds I2C "slave mode" support for the Aspeed I2C
controller as well as the necessary infrastructure in the i2c core to
support this.

Background
~~~~~~~~~~
We are working on an emulated NVM Express Management Interface[1] for
testing and validation purposes. NVMe-MI is based on the MCTP
protocol[2] which may use a variety of underlying transports. The one we
are interested in is I2C[3].

The first tricky part here is that all MCTP transactions are based
on the SMBus Block Write bus protocol[4]. This means that the slave must
be able to master the bus in order to communicate. As you know,
hw/i2c/core.c currently does not support this use case.

The second issue is how to interact with these mastering devices. Jeremy
and Matt (CC'ed) have been working on an MCTP stack for the Linux Kernel
(already upstream) and an I2C binding driver[5] is currently under
review. This binding driver relies on I2C slave mode support in the I2C
controller.

Hi Klaus,

Just thought I'd mention I'm also interested in MCTP over I2C emulation
for a couple of projects:

1) DMTF SPDM - mostly as a second transport for the kernel stack alongside
    PCI DOE.
2) CXL FM-API - adding support for the Fabric Manager interfaces
    on emulated CXL switches, which is also typically carried over
    MCTP.

I was thinking of emulating an MCTP over PCI VDM transport, but this has
saved me going to the effort of doing that for now at least :)

I have hacked a really basic MCTP device together using this
series and it all seems to be working with the kernel stack (subject to a
few kernel driver bugs that I'll report / send fixes for next week).
I'm cheating all over the place so far (lots of hard-coded values), but
would be interested in a more flexible solution that might perhaps
share infrastructure with your NVMe-MI work.

Klaus is working on a v2:

http://patchwork.ozlabs.org/project/qemu-devel/patch/20220503225925.1798324-2-p...@fb.com/

Thanks,

C.
