On 30.04.20 13:11, Srivatsa Vaddagiri wrote:
> * Will Deacon <w...@kernel.org> [2020-04-30 11:41:50]:
>
>> On Thu, Apr 30, 2020 at 04:04:46PM +0530, Srivatsa Vaddagiri wrote:
>>> If CONFIG_VIRTIO_MMIO_OPS is defined, then I expect this to be
>>> unconditionally set to 'magic_qcom_ops', which uses a hypervisor-supported
>>> interface for IO (for example, message_queue_send() and
>>> message_queue_receive() hypercalls).
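
For illustration, a rough sketch of what such an ops indirection might look
like. Everything here -- struct virtio_mmio_ops, the message-queue hypercall
signatures, VIRTIO_MSGQ_ID -- is invented for this sketch, not an existing
interface:

#include <linux/io.h>
#include <linux/types.h>

struct virtio_mmio_ops {
	u32  (*read)(void __iomem *addr);
	void (*write)(u32 val, void __iomem *addr);
};

/* Assumed hypercall wrappers; real signatures would come from the
 * hypervisor support code. */
int message_queue_send(u32 queue, const void *msg, size_t len);
int message_queue_receive(u32 queue, void *msg, size_t len);

struct virtio_msg {
	u64 addr;	/* register offset being accessed */
	u32 val;	/* value to write, or value returned on read */
	u32 is_write;
};

#define VIRTIO_MSGQ_ID	0	/* made-up queue id */

static u32 qcom_virtio_read(void __iomem *addr)
{
	struct virtio_msg msg = { .addr = (unsigned long)addr };

	/* Replace the trapped MMIO read with a message exchange. */
	message_queue_send(VIRTIO_MSGQ_ID, &msg, sizeof(msg));
	message_queue_receive(VIRTIO_MSGQ_ID, &msg, sizeof(msg));
	return msg.val;
}

static void qcom_virtio_write(u32 val, void __iomem *addr)
{
	struct virtio_msg msg = {
		.addr = (unsigned long)addr,
		.val = val,
		.is_write = 1,
	};

	message_queue_send(VIRTIO_MSGQ_ID, &msg, sizeof(msg));
}

static const struct virtio_mmio_ops magic_qcom_ops = {
	.read	= qcom_virtio_read,
	.write	= qcom_virtio_write,
};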

>> Hmm, but then how would such a kernel work as a guest under all the
>> spec-compliant hypervisors out there?
>
> Ok, I see your point. Yes, for better binary compatibility, the ops have
> to be set based on runtime detection of hypervisor capabilities.
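
I.e., picking the ops at probe time, something like the sketch below. The
detection helper and the default ops name are made up; real detection might
key off a DT compatible string or a hypervisor identification hypercall:

static const struct virtio_mmio_ops *virtio_mmio_select_ops(void)
{
	/* hypervisor_has_msgq() stands in for whatever runtime
	 * capability detection the platform provides. */
	if (hypervisor_has_msgq())
		return &magic_qcom_ops;

	/* Otherwise stay spec-compliant: plain trapped MMIO. */
	return &virtio_mmio_trap_ops;
}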

>>> Ok. I guess the other option is to standardize on a new virtio
>>> transport (like ivshmem2-virtio)?
>>
>> I haven't looked at that, but I suppose it depends on what your
>> hypervisor folks are willing to accommodate.

> I believe ivshmem2-virtio requires the hypervisor to support PCI device
> emulation (for life-cycle management of VMs), which our hypervisor may
> not support. A simple shared-memory and doorbell- or message-queue-based
> transport will work for us.
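
FWIW, the guest-side hot path of such a transport can be tiny. A sketch,
with shm_virtio_dev and the doorbell layout invented for illustration:

#include <linux/io.h>
#include <linux/virtio.h>

struct shm_virtio_dev {
	struct virtio_device vdev;
	void __iomem *doorbell;	/* single write-to-kick register */
	void *shm;		/* shared-memory region holding the rings */
};

static bool shm_virtio_notify(struct virtqueue *vq)
{
	struct shm_virtio_dev *dev =
		container_of(vq->vdev, struct shm_virtio_dev, vdev);

	/* Rings and descriptors live in the shared-memory region; only
	 * the kick needs a hypervisor-visible side channel. */
	writel(vq->index, dev->doorbell);
	return true;
}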

As written in our private conversation, mapping the ivshmem2 device
discovery to a platform mechanism (device tree etc.), and maybe even the
register accesses for doorbell and life-cycle management to something
hypercall-like, would be imaginable. What counts more from the virtio
perspective is a common mapping onto a shared-memory transport.
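
On the discovery side, the platform mapping could reduce to a trivial
driver along these lines (sketch only; the compatible string and the
resource layout are invented here):

#include <linux/err.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/platform_device.h>

static int shm_virtio_probe(struct platform_device *pdev)
{
	struct resource *mem;
	void __iomem *shm;
	int irq;

	/* Shared-memory window standing in for the ivshmem2 PCI BAR. */
	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	shm = devm_ioremap_resource(&pdev->dev, mem);
	if (IS_ERR(shm))
		return PTR_ERR(shm);

	/* Peer notification interrupt, in place of PCI MSI-X. */
	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return irq;

	/* Doorbell and life-cycle management would go through
	 * hypercall-like accessors instead of a BAR -- omitted here. */
	return 0;
}

static const struct of_device_id shm_virtio_of_match[] = {
	{ .compatible = "example,shm-virtio" },	/* invented binding */
	{ }
};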

That said, I also warned about all the features that PCI already defines
(such as message-based interrupts) and that you may have to re-add when
going a different way for the shared-memory device.

Jan

--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux