On Fri, Jan 16, 2026 at 11:20:25AM +0100, Albert Esteve wrote:
> On Tue, Nov 11, 2025 at 10:11 AM Albert Esteve <[email protected]> wrote:
> >
> > Add GET_SHMEM_CONFIG vhost-user frontend
> > message to the spec documentation.
> >
> > Reviewed-by: Alyssa Ross <[email protected]>
> > Reviewed-by: Stefan Hajnoczi <[email protected]>
> > Signed-off-by: Albert Esteve <[email protected]>
> > ---
> >  docs/interop/vhost-user.rst | 39 +++++++++++++++++++++++++++++++++++++
> >  1 file changed, 39 insertions(+)
> >
> > diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
> > index 6c1d66d7d3..6a1ecd7f48 100644
> > --- a/docs/interop/vhost-user.rst
> > +++ b/docs/interop/vhost-user.rst
> > @@ -371,6 +371,20 @@ MMAP request
> >    - 0: Pages are mapped read-only
> >    - 1: Pages are mapped read-write
> >
> > +VIRTIO Shared Memory Region configuration
> > +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > +
> > ++-------------+---------+------------+----+--------------+
> > +| num regions | padding | mem size 0 | .. | mem size 255 |
> > ++-------------+---------+------------+----+--------------+
> > +
> > +:num regions: a 32-bit number of regions
> > +
> > +:padding: 32-bit
> > +
> > +:mem size: contains ``num regions`` 64-bit fields representing the size of each
> > +           VIRTIO Shared Memory Region
> > +
> 
> When implementing this for rust-vmm, the mem size field turned out to be
> a bit confusing. In the last patch (7/7) of this series, the
> implementation uses `num regions` as a count of the valid regions (thus
> accounting for gaps in the shmem region mapping). So the statement that
> `mem size` contains `num regions` fields is misleading. It should say it
> contains 256 fields (the message is only sent once during
> initialization, so there is no need to save bytes here), of which only
> `num regions` are valid (i.e., greater than 0). Maybe it could even drop
> the `num regions` field and send only the full array.
> Thoughts?

Let's discuss the exact wording here.
I'm not sure why we would need to pad the message with unused fields,
though. Waste not, want not?
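
To make sure we are talking about the same layout, here is a rough
sketch of the fixed-array variant (type and field names are my own
guesses for illustration, not necessarily what patch 7/7 uses):

  #include <stdint.h>

  /* Sketch only: a fixed 256-entry array, indexed by shmid, with
   * unused slots carrying a size of 0. */
  #define VHOST_USER_SHMEM_MAX_REGIONS 256

  typedef struct VhostUserShMemConfig {
      uint32_t nregions;   /* number of valid (non-zero) entries below */
      uint32_t padding;
      uint64_t memory_sizes[VHOST_USER_SHMEM_MAX_REGIONS];
  } VhostUserShMemConfig;

  /* A front-end would then walk the whole array and skip the gaps: */
  static void setup_shmem_regions(const VhostUserShMemConfig *cfg)
  {
      for (uint32_t shmid = 0; shmid < VHOST_USER_SHMEM_MAX_REGIONS; shmid++) {
          if (cfg->memory_sizes[shmid] == 0) {
              continue; /* unused shmid, e.g. optional feature not offered */
          }
          /* ... create a VIRTIO Shared Memory Region of this size ... */
      }
  }

In this layout the shmid is implied purely by the array index, which is
what the "size may be 0 if the region is unused" rule relies on, so
whatever wording we settle on should spell that out.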

> As much as I wanted this series merged, this deserves a clarification.
> So I can either send a new version of the series or split the last
> three patches into a different series. Hopefully it only requires one
> more version though.
> 
> 
> >  C structure
> >  -----------
> >
> > @@ -397,6 +411,7 @@ In QEMU the vhost-user message is implemented with the following struct:
> >            VhostUserShared object;
> >            VhostUserTransferDeviceState transfer_state;
> >            VhostUserMMap mmap;
> > +          VhostUserShMemConfig shmem;
> >        };
> >    } QEMU_PACKED VhostUserMsg;
> >
> > @@ -1761,6 +1776,30 @@ Front-end message types
> >    Using this function requires prior negotiation of the
> >    ``VHOST_USER_PROTOCOL_F_DEVICE_STATE`` feature.
> >
> > +``VHOST_USER_GET_SHMEM_CONFIG``
> > +  :id: 44
> > +  :equivalent ioctl: N/A
> > +  :request payload: N/A
> > +  :reply payload: ``struct VhostUserShMemConfig``
> > +
> > +  When the ``VHOST_USER_PROTOCOL_F_SHMEM`` protocol feature has been
> > +  successfully negotiated, this message can be submitted by the front-end
> > +  to gather the VIRTIO Shared Memory Region configuration. The back-end will
> > +  respond with the number of VIRTIO Shared Memory Regions it requires, and
> > +  each shared memory region size in an array. The shared memory IDs are
> > +  represented by the array index. The information returned shall comply
> > +  with the following rules:
> > +
> > +  * The shared information will remain valid and unchanged for the entire
> > +    lifetime of the connection.
> > +
> > +  * The Shared Memory Region size must be a multiple of the page size
> > +    supported by mmap(2).
> > +
> > +  * The size may be 0 if the region is unused. This can happen when the
> > +    device does not support an optional feature but does support a feature
> > +    that uses a higher shmid.
> > +
> >  Back-end message types
> >  ----------------------
> >
> > --
> > 2.49.0
> >

