Gustavo Romero <gustavo.rom...@linaro.org> writes:

> Hi Markus,
>
> Thanks for your interest in the ivshmem-flat device.
>
> Bill Mills (cc:ed) is the best person to answer your question,
> so please find his answer below.
>
> On 2/28/24 3:29 AM, Markus Armbruster wrote:
>> Gustavo Romero <gustavo.rom...@linaro.org> writes:
>> 
>> [...]
>> 
>>> This patchset introduces a new device, ivshmem-flat, which is similar
>>> to the current ivshmem device but does not require a PCI bus. It
>>> implements the ivshmem status and control registers as MMRs and the
>>> shared memory as a directly accessible memory region in the VM memory
>>> layout. It's meant to be used on machines like those with Cortex-M
>>> MCUs, which usually lack a PCI bus, e.g., lm3s6965evb and mps2-an385.
>>> Additionally, it has the benefit of requiring a tiny 'device driver',
>>> which is helpful on some RTOSes, like Zephyr, that run on
>>> resource-constrained targets.
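>>>
>>> For illustration, a minimal sketch of what such a guest-side 'driver'
>>> could look like. It assumes the standard ivshmem register layout
>>> (IVPosition at offset 0x08, Doorbell at 0x0C); the base address
>>> IVSHMEM_MMR_BASE is purely hypothetical:
>>>
>>>   #include <stdint.h>
>>>
>>>   #define IVSHMEM_MMR_BASE  0x400FF000u  /* hypothetical MMR base */
>>>   #define IVSHMEM_IVPOS    (*(volatile uint32_t *)(IVSHMEM_MMR_BASE + 0x08))
>>>   #define IVSHMEM_DOORBELL (*(volatile uint32_t *)(IVSHMEM_MMR_BASE + 0x0C))
>>>
>>>   /* Ring a peer's doorbell: the upper 16 bits select the peer ID,
>>>    * the lower 16 bits the interrupt vector. */
>>>   static void ivshmem_notify(uint16_t peer, uint16_t vector)
>>>   {
>>>       IVSHMEM_DOORBELL = ((uint32_t)peer << 16) | vector;
>>>   }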
>>>
>>> The patchset includes a QTest for the ivshmem-flat device; however,
>>> it's also possible to experiment with it in two ways:
>>>
>>> (a) using two Cortex-M VMs running Zephyr; or
>>> (b) using one aarch64 VM running Linux with the ivshmem PCI device
>>>     and another arm (Cortex-M) VM running Zephyr with the new
>>>     ivshmem-flat device.
>>>
>>> Please note that for running the ivshmem-flat QTests the following
>>> patch, which is not committed to the tree yet, must be applied:
>>>
>>> https://lists.nongnu.org/archive/html/qemu-devel/2023-11/msg03176.html
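>>>
>>> With that patch applied, the test should be runnable through the
>>> usual qtest targets, e.g. (assuming it is wired into the Arm qtest
>>> set):
>>>
>>>   make check-qtest-arm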
>> 
>> What problem are you trying to solve with ivshmem?
>> 
>> Shared memory is not a solution to any communication problem; it's
>> merely a building block for such solutions: you invariably have to
>> layer some protocol on top.  What do you intend to put on top of
>> ivshmem?
>
> Actually, ivshmem is shared memory plus bi-directional notifications
> (in this case a doorbell register and an IRQ).

Yes, ivshmem-doorbell supports interrupts.  Doesn't change my argument.

> This is the fundamental requirement for many types of communication,
> but our interest here is the OpenAMP project [1].
>
> All the OpenAMP project's communication is based on shared memory and
> bi-directional notifications. Often this is on an AMP SoC with
> Cortex-A cores and Cortex-M or Cortex-R cores. However, we are now
> expanding into PCIe-based AMP. One example of this is an x86 host
> computer and a PCIe card with an Arm SoC. Other examples include two
> systems, each with a PCIe root complex, connected via a
> non-transparent bridge.
>
> The existing PCI-based ivshmem lets us model these types of systems in
> a simple, generic way without worrying about the details of the RC/EP
> relationship or the details of a specific non-transparent bridge. In
> fact, to the two (or more) systems ivshmem looks like a non-transparent
> bridge with its own memory (and no other memory access allowed).
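>
> For reference, the doorbell flavor is wired up through the stock
> ivshmem-server; roughly like this (socket path, memory size, and
> vector count are arbitrary):
>
>   ivshmem-server -S /tmp/ivshmem_socket -l 1M -n 2
>   qemu-system-aarch64 ... \
>     -chardev socket,path=/tmp/ivshmem_socket,id=ivsh \
>     -device ivshmem-doorbell,chardev=ivsh,vectors=2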
>
> Right now we are testing this with RPMsg between two QEMU systems,
> where both systems are Cortex-A53 and both run Zephyr. [2]
>
> We will expand this by switching one of the QEMU systems to either arm64 
> Linux or x86 Linux.

So you want to simulate a heterogeneous machine by connecting multiple
qemu-system-FOO processes via ivshmem, correct?

> We (and others) are also working on a generic virtio transport that will work 
> between any two systems as long as they have shared memory and bi-directional 
> notifications.

On top of or adjacent to ivshmem?

> Now for ivshmem-flat. We want to expand this model to include MCU-like
> CPUs and RTOSes that don't have PCIe. We focus on Cortex-M because
> every open-source RTOS already has a port for one of the Cortex-M
> machines in QEMU. However, they don't normally all pick the same one.
> If we added our own custom machine for this, the QEMU project would
> push back, and even if it were accepted we would have to do a port for
> each RTOS. This would mean we would not test as many RTOSes.
>
> The ivshmem-flat device is actually a good model for what a
> Cortex-M-based PCIe card would look like. The host system would see
> the connection as PCIe, but to the Cortex-M it would just appear as
> memory, MMRs for the doorbell, and an IRQ.
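>
> Concretely (addresses purely illustrative), the Cortex-M side could
> see something like:
>
>   0x400FF000  ivshmem MMRs (IntrMask, IntrStatus, IVPosition, Doorbell)
>   0x40100000  shared memory window
>   IRQ n       doorbell notification from the peer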
>
> So even after we have a "roll your own machine definition from a
> file" capability, I expect ivshmem and ivshmem-flat to still be very
> useful.
>
> [1] https://www.openampproject.org/
> [2] Work in progress here: 
> https://github.com/OpenAMP/openamp-system-reference/tree/main/examples/zephyr/dual_qemu_ivshmem

