Stefan Hajnoczi <[email protected]> writes:

> On Tue, Sep 16, 2025 at 2:01 AM Jürgen Groß <[email protected]> wrote:
>>
>> Today virtio backends mostly live on the host. At KVM Forum 2025
>> Stefano gave a presentation [1] in which he mentioned the idea of having
>> virtio devices between a CoCo guest and the associated SVSM. One problem
>> is finding a simple way to connect the virtio devices' frontends and
>> backends.
>>
>> A similar problem exists when using virtio in a Xen environment:
>> Xen allows the use of driver domains, so the backends can live in a mostly
>> unprivileged guest (though this guest will probably need access to a
>> physical device like a network interface).
>>
>> With Xen it is possible to use Xenstore to communicate the configuration
>> of a virtio device: the Xen toolstack writes the configuration-related
>> data to backend- and frontend-specific paths in Xenstore, and the
>> affected guests pick it up from there.
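
Just to make the Xenstore flow above concrete: the frontend side boils
down to a few reads from its device directory. A minimal sketch in C
using libxenstore, where the "device/virtio/0/..." path and the key
names are purely illustrative (only the xs_open()/xs_read() calls are
real API):

/* Sketch: a frontend picking up hypothetical virtio config keys that
 * the toolstack wrote to Xenstore.  Path and key names are made up for
 * illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <xenstore.h>

int main(void)
{
    struct xs_handle *xsh = xs_open(0);
    unsigned int len;
    char *type, *backend;

    if (!xsh)
        return 1;

    type    = xs_read(xsh, XBT_NULL, "device/virtio/0/type", &len);
    backend = xs_read(xsh, XBT_NULL, "device/virtio/0/backend", &len);

    printf("virtio device type: %s, backend path: %s\n",
           type ? type : "?", backend ? backend : "?");

    free(type);
    free(backend);
    xs_close(xsh);
    return 0;
}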
>>
>> With SVSM it would be possible to communicate the configuration via
>> SVSM calls, but I believe we can do better.
>>
>> I believe it would be interesting to add the concept of driver guests
>> to KVM, similar to Xen's driver domains. This would add another
>> scenario where virtio parameters need to be communicated to guests.
>> Hotplug (on both sides, frontend and backend) would need to be
>> considered here, too.
>>
>> With the introduction of a virtio config device most requirements could
>> be satisfied: it could enumerate the available virtio devices, return the
>> config parameters of a device (backend and frontend side), and signal
>> hotplugging of new devices.
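
To make the config device idea a bit more concrete, its config space
could look something like the sketch below. This is purely hypothetical
(nothing like it is specified anywhere); all names and the layout are
made up, just to illustrate enumerating devices and signalling hotplug:

/* Hypothetical config space of a "virtio config device".  All names
 * and fields are invented for illustration. */
#include <stdint.h>

struct virtio_cfgdev_entry {
    uint32_t device_id;    /* virtio device type of the described device */
    uint32_t role;         /* 0 = frontend, 1 = backend */
    uint64_t params_addr;  /* where the device-specific parameters live */
    uint64_t params_len;   /* length of those parameters */
};

struct virtio_cfgdev_config {
    uint32_t num_devices;  /* number of valid entries */
    uint32_t generation;   /* bumped on hotplug/unplug, together with a
                            * config-changed notification to the guest */
    struct virtio_cfgdev_entry entries[];
};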
>>
>> For the driver guest concept, those guests would need a way to access
>> the I/O buffers of the frontend side. Xen uses grants for this purpose,
>> which are similar to a pv-IOMMU. Under KVM it would be natural to use
>> the virtio-iommu device for this, probably with some extensions for
>> non-static use cases.
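
The mapping primitive arguably exists already in virtio-iommu: a MAP
request attaches a range of frontend memory to an IOMMU domain, which is
roughly what a Xen grant does. Below is a sketch of filling in such a
request, using the structures from include/uapi/linux/virtio_iommu.h as
I remember them (worth double-checking against the spec); how these
requests would travel between frontend guest and driver guest is exactly
the open question, so everything around them is hypothetical:

/* Sketch: express a grant-like mapping as a virtio-iommu MAP request.
 * Struct and flag names are from include/uapi/linux/virtio_iommu.h;
 * the helper itself is illustrative only. */
#include <endian.h>
#include <stdint.h>
#include <string.h>
#include <linux/virtio_iommu.h>

static void fill_map_req(struct virtio_iommu_req_map *req,
                         uint32_t domain, uint64_t iova,
                         uint64_t gpa, uint64_t len)
{
    memset(req, 0, sizeof(*req));
    req->head.type  = VIRTIO_IOMMU_T_MAP;
    req->domain     = htole32(domain);
    req->virt_start = htole64(iova);
    req->virt_end   = htole64(iova + len - 1);
    req->phys_start = htole64(gpa);
    req->flags      = htole32(VIRTIO_IOMMU_MAP_F_READ |
                              VIRTIO_IOMMU_MAP_F_WRITE);
}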
>>
>> This is only a rough outline of the general idea. I'd be interested in any
>> feedback. If there is interest in this concept, I'd be happy to start
>> working on a prototype for driver guests.
>
> Hi Jürgen,
> virtio-vhost-user extends vhost-user into the guest, allowing a guest
> to act as a VIRTIO device:
> https://wiki.qemu.org/Features/VirtioVhostUser
>
> I think this solves what you are describing, although vhost-user
> doesn't have an enforcing IOMMU. The device can access any memory that
> was given to it (typically all of guest RAM).
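
Right - and the reason is visible in the protocol itself: with
VHOST_USER_SET_MEM_TABLE the frontend hands the backend a set of memory
regions plus file descriptors, and the backend simply mmap()s them, so
everything inside those regions is readable and writable by the backend.
A rough sketch of what the backend receives per region (modelled on
QEMU's VhostUserMemoryRegion; the mmap details vary between
implementations and are only illustrative here):

/* Per-region description a vhost-user backend receives with
 * VHOST_USER_SET_MEM_TABLE (modelled on QEMU's VhostUserMemoryRegion).
 * The backend maps each region from the fd passed with the message,
 * so it can touch all of that memory - no IOMMU narrows it down. */
#include <stdint.h>
#include <sys/mman.h>

struct vhost_user_memory_region {
    uint64_t guest_phys_addr;  /* guest-physical base of the region */
    uint64_t memory_size;      /* length of the region */
    uint64_t userspace_addr;   /* frontend's virtual address */
    uint64_t mmap_offset;      /* offset of the region within the fd */
};

/* Illustration only: map one region; real backends handle alignment
 * and the mmap_offset more carefully. */
static void *map_region(int fd, const struct vhost_user_memory_region *r)
{
    return mmap(NULL, r->mmap_offset + r->memory_size,
                PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}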

I caught up with Tyler recently, who has done some preliminary work on
limiting which memory is exposed to vhost-user backends, but hasn't yet
had a chance to take it further:

https://gitlab.com/tylerfanelli/qemu/-/tree/vu-mem-isolation
https://wiki.qemu.org/Internships/ProjectIdeas/VhostUserMemoryIsolation

> virtio-vhost-user is not part of the VIRTIO spec or merged in QEMU
> because no one has needed this functionality enough to spend time
> getting it upstream.
>
> Alyssa mentioned a similar use case recently and that the VIRTIO
> message transport that's under development could be part of an
> alternative solution:
> https://lwn.net/ml/all/[email protected]/
