Hi,

On Wednesday, August 7, 2024 9:52:10 PM GMT+5:30 Eugenio Perez Martin wrote:
> On Fri, Aug 2, 2024 at 1:22 PM Sahil Siddiq <icegambi...@gmail.com> wrote:
> > [...]
> > @@ -726,17 +738,30 @@ void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
> >      svq->vring.num = virtio_queue_get_num(vdev, virtio_get_queue_index(vq));
> >      svq->num_free = svq->vring.num;
> > 
> > -    svq->vring.desc = mmap(NULL, vhost_svq_driver_area_size(svq),
> > -                           PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
> > -                           -1, 0);
> > -    desc_size = sizeof(vring_desc_t) * svq->vring.num;
> > -    svq->vring.avail = (void *)((char *)svq->vring.desc + desc_size);
> > -    svq->vring.used = mmap(NULL, vhost_svq_device_area_size(svq),
> > -                           PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
> > -                           -1, 0);
> > -    svq->desc_state = g_new0(SVQDescState, svq->vring.num);
> > -    svq->desc_next = g_new0(uint16_t, svq->vring.num);
> > -    for (unsigned i = 0; i < svq->vring.num - 1; i++) {
> > +    svq->is_packed = virtio_vdev_has_feature(svq->vdev, VIRTIO_F_RING_PACKED);
> > +
> > +    if (virtio_vdev_has_feature(svq->vdev, VIRTIO_F_RING_PACKED)) {
> > +        svq->vring_packed.vring.desc = mmap(NULL, vhost_svq_memory_packed(svq),
> > +                                            PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
> > +                                            -1, 0);
> > +        desc_size = sizeof(struct vring_packed_desc) * svq->vring.num;
> > +        svq->vring_packed.vring.driver = (void *)((char *)svq->vring_packed.vring.desc +
> > +                                                  desc_size);
> > +        svq->vring_packed.vring.device = (void *)((char *)svq->vring_packed.vring.driver +
> > +                                                  sizeof(struct vring_packed_desc_event));
>
> This is a great start, but it will be problematic when you start
> mapping the areas to the vdpa device. The driver area should be
> read-only for the device, but it is placed in the same page as a RW one.
>
> More on this later.
> 
> > +    } else {
> > +        svq->vring.desc = mmap(NULL, vhost_svq_driver_area_size(svq),
> > +                               PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
> > +                               -1, 0);
> > +        desc_size = sizeof(vring_desc_t) * svq->vring.num;
> > +        svq->vring.avail = (void *)((char *)svq->vring.desc + desc_size);
> > +        svq->vring.used = mmap(NULL, vhost_svq_device_area_size(svq),
> > +                               PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
> > +                               -1, 0);
> > +    }
> 
> I think it will be beneficial to avoid "if (packed)" conditionals in
> the exposed functions that give information about the memory maps.
> These need to be replicated at
> hw/virtio/vhost-vdpa.c:vhost_vdpa_svq_map_rings.
> 
> However, the current code depends on the driver area living in the
> same page as the descriptor area, so it is not suitable for this.

I haven't really understood this.

In split vqs, the descriptor, driver, and device areas are all mapped to RW
pages in "vhost_svq_start". In hw/virtio/vhost-vdpa.c:vhost_vdpa_svq_map_rings,
each region is then mapped with the appropriate "perm" field that sets its R/W
permissions in the DMAMap object. Is this problematic for the split vq format
because the avail ring is mapped to an RW page in "vhost_svq_start" anyway?
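
For reference, this is roughly how I read the split vq case. This is a
paraphrased sketch of hw/virtio/vhost-vdpa.c:vhost_vdpa_svq_map_rings as I
understand it, not the verbatim code:

    /* Sketch: split vq mapping as I read vhost_vdpa_svq_map_rings().
     * desc and avail share one mmap'ed region, but both are read-only
     * for the device, so mapping the whole driver area IOMMU_RO is
     * safe even though the two rings share host pages. The used ring
     * lives in its own mmap and is mapped IOMMU_RW. */
    struct vhost_vring_addr svq_addr;
    vhost_svq_get_vring_addr(svq, &svq_addr);

    DMAMap driver_region = {
        .translated_addr = svq_addr.desc_user_addr,
        .size = vhost_svq_driver_area_size(svq) - 1, /* desc + avail */
        .perm = IOMMU_RO,
    };
    DMAMap device_region = {
        .translated_addr = svq_addr.used_user_addr,
        .size = vhost_svq_device_area_size(svq) - 1, /* used ring */
        .perm = IOMMU_RW,
    };

If that reading is right, the split layout works out because nothing in the
driver area's pages ever needs to be writable by the device.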

For packed vqs, the "Driver Event Suppression" structure should be read-only
for the device. As with split vqs, it is mapped to an RW page in
"vhost_svq_start", but it is then mapped through a DMAMap object with
read-only perms in "vhost_vdpa_svq_map_rings".
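
My best guess is that the problem is page granularity: the IOMMU maps
permissions per page, and in the single-mmap packed layout above all three
structures land in the same page. Writing it out with concrete numbers (my
assumptions: 4 KiB pages and vring.num = 128):

    /* Hypothetical offset check for the single packed mapping above,
     * assuming 4 KiB pages and svq->vring.num = 128. */
    size_t desc_bytes = 128 * sizeof(struct vring_packed_desc);   /* 2048 */
    size_t driver_off = desc_bytes;                               /* 2048 */
    size_t device_off = driver_off +
                        sizeof(struct vring_packed_desc_event);   /* 2052 */
    /* The descriptor ring (device-writable, since the device writes
     * used descriptors back into it), the driver event suppression
     * struct (device read-only) and the device event suppression
     * struct (device-writable) all fall in host page 0, so no
     * page-granular mapping can make the driver area RO while its
     * neighbours stay RW. */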

I am a little confused about whether that is actually where the issue lies.
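
In case it is, would something along these lines address the concern? This is
only a sketch of one possible layout, not tested, and I am assuming ROUND_UP
and qemu_real_host_page_size() from osdep.h are acceptable here. The idea is
that the structures the device may write share one mapping, while the driver
event suppression area gets its own page so it can be mapped read-only:

    if (svq->is_packed) {
        size_t desc_bytes = sizeof(struct vring_packed_desc) * svq->vring.num;

        /* Device-writable: descriptor ring + device event suppression. */
        svq->vring_packed.vring.desc =
            mmap(NULL,
                 ROUND_UP(desc_bytes + sizeof(struct vring_packed_desc_event),
                          qemu_real_host_page_size()),
                 PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        svq->vring_packed.vring.device =
            (void *)((char *)svq->vring_packed.vring.desc + desc_bytes);

        /* Device read-only: driver event suppression on its own page(s). */
        svq->vring_packed.vring.driver =
            mmap(NULL, ROUND_UP(sizeof(struct vring_packed_desc_event),
                                qemu_real_host_page_size()),
                 PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    }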

Thanks,
Sahil