On Wed, May 08, 2024 at 02:37:52PM +0200, Paolo Bonzini wrote:
> On 5/8/24 11:38, Stefano Garzarella wrote:
> > On Wed, May 08, 2024 at 01:13:09PM GMT, Marc-André Lureau wrote:
> > > Hi
> > > 
> > > On Wed, May 8, 2024 at 11:50 AM Stefano Garzarella
> > > <sgarz...@redhat.com> wrote:
> > > > 
> > > > Hi Roman,
> > > > 
> > > > On Tue, May 07, 2024 at 11:20:50PM GMT, Roman Kiryanov wrote:
> > > > >Hi Stefano,
> > > > >
> > > > >On Tue, May 7, 2024 at 1:10 AM Stefano Garzarella
> > > > <sgarz...@redhat.com> wrote:
> > > > >> I have no experience with Windows, but what we need for
> > > > vhost-user is:
> > > > >>
> > > > >> - AF_UNIX and be able to send file descriptors using ancillary data
> > > > >>    (i.e. SCM_RIGHTS)
> > > > >
> > > > >As far as I understand, Windows does NOT support SCM_RIGHTS
> > > > over AF_UNIX.
> > > > 
> > > > Thank you for the information. This is unfortunate and does not allow
> > > > us to use vhost-user as it is on Windows.
> > > > 
> > > 
> > > fwiw, Windows has other mechanisms to share resources between processes.
> > > 
> > > To share/pass sockets, you can use WSADuplicateSocket. For shared
> > > memory and other resources, DuplicateHandle API.
> > 
> > Cool, thanks for sharing that. So it could be done, but I think we need
> > to extend the vhost-user protocol to work with Windows.
> 
> It would be possible to implement the memfd backend for Windows, using the
> CreateFileMapping() API.
> 
> However, the vhost-user protocol's VHOST_USER_SET_MEM_TABLE requests do not
> have any padding that can be used to pass the handle to the target. An
> extended version would be necessary.
> 
> One difference between Unix and Windows is that, if the vhost-user server
> mishandles messages from the socket and therefore does not close
> the handle, the handle is leaked forever.  This is not a huge deal per se, but I
> think it means that QEMU is not allowed to "open" a privileged vhost-user
> server process with PROCESS_DUP_HANDLE rights (translation: QEMU cannot
> provide duplicate handles to a privileged vhost-user server process).
> 
> Also I'm not sure what the cost of DuplicateHandle() is, and whether it's a
> good idea to do it for every region on every VHOST_USER_SET_MEM_TABLE
> request.  But VHOST_USER_SET_MEM_TABLE is not a fast path, so perhaps it's
> okay.
> 
> I think a virtio-vsock implementation in QEMU would be easier, absent
> another use case for vhost-user on Windows.
> 
> The main design question is whether multiple virtio-vsock devices for the
> same guest should share the CID space or not (I think they should, but I'm not
> 100% sure).  To connect host<->guest you could have a QOM object, here I am
> naming it vsock-forward as an example:

Design-wise, a native VSOCK backend in QEMU really should implement the
same approach defined by firecracker, so that we have interoperability
with systemd, firecracker and cloud-hypervisor. See

  https://gitlab.com/qemu-project/qemu/-/issues/2095
  https://github.com/firecracker-microvm/firecracker/blob/main/docs/vsock.md#firecracker-virtio-vsock-design

This involves multiple UNIX sockets on the host

  1 * /some/path   - QEMU listens on this, and accepts connections
                     from other host processes. The client sends
                     "CONNECT <num>" to indicate which guest port it
                     is connecting to

  n * /some/path_$PORT - QEMU connects to this for outgoing connections
                         from the guest. Other host processes need
                         to listen on whichever path_$PORT needs to be
                         serviced
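To make the host-initiated side concrete, here is a rough Python sketch of
the handshake the firecracker document describes, with the VMM end stubbed
out by a plain AF_UNIX listener. The socket path and port number are made
up for illustration, and the "CONNECT <port>" / "OK <port>" strings follow
the firecracker spec linked above, not anything QEMU implements today:

```python
import os
import socket
import threading

SOCK_PATH = "/tmp/example_vsock.sock"   # hypothetical stand-in for /some/path
GUEST_PORT = 1234                       # hypothetical guest port

if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)

# Stand-in for the VMM side: listen on the UNIX socket, accept one
# connection, parse the "CONNECT <port>" greeting, answer "OK <port>".
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(SOCK_PATH)
srv.listen(1)

def vmm_accept():
    conn, _ = srv.accept()
    line = conn.makefile().readline().strip()   # e.g. "CONNECT 1234"
    port = int(line.split()[1])
    conn.sendall(f"OK {port}\n".encode())       # firecracker-style reply
    conn.close()

t = threading.Thread(target=vmm_accept)
t.start()

# Host process connecting *into* the guest through the VMM's socket.
c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
c.connect(SOCK_PATH)
c.sendall(f"CONNECT {GUEST_PORT}\n".encode())
reply = c.makefile().readline().strip()
c.close()

t.join()
srv.close()
os.unlink(SOCK_PATH)
print(reply)   # → "OK 1234"
```

After the "OK" reply the stream simply carries the payload bytes in both
directions, which is what makes the scheme easy to interoperate with from
systemd, firecracker and cloud-hypervisor.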

IOW, from a CLI pov, QEMU should need nothing more than

    -object vsock-forward,prefix=/some/path


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

