On Tue, Oct 19, 2021 at 4:40 PM Eugenio Perez Martin
<epere...@redhat.com> wrote:
>
> On Fri, Oct 15, 2021 at 6:42 AM Jason Wang <jasow...@redhat.com> wrote:
> >
> >
> > 在 2021/10/15 上午12:39, Eugenio Perez Martin 写道:
> > > On Wed, Oct 13, 2021 at 5:47 AM Jason Wang <jasow...@redhat.com> wrote:
> > >>
> > >> 在 2021/10/1 下午3:05, Eugenio Pérez 写道:
> > >>> This will make qemu aware of the device's used buffers, allowing it
> > >>> to write their contents to guest memory if needed.
> > >>>
> > >>> Since the use of vhost_virtqueue_start can unmask and discard call
> > >>> events, vhost_virtqueue_start should be modified in one of these ways:
> > >>> * Split it in two: one that uses all the logic to start a queue with
> > >>>     no side effects for the guest, and another one that actually
> > >>>     assumes that the guest has just started the device. Vdpa should
> > >>>     use just the former.
> > >>> * Actually store and check if the guest notifier is masked, and do it
> > >>>     conditionally.
> > >>> * Leave it as it is, and duplicate all the logic in vhost-vdpa.
> > >>>
> > >>> Signed-off-by: Eugenio Pérez <epere...@redhat.com>
> > >>> ---
> > >>>    hw/virtio/vhost-shadow-virtqueue.c | 19 +++++++++++++++
> > >>>    hw/virtio/vhost-vdpa.c             | 38 +++++++++++++++++++++++++++++-
> > >>>    2 files changed, 56 insertions(+), 1 deletion(-)
> > >>>
> > >>> diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> > >>> index 21dc99ab5d..3fe129cf63 100644
> > >>> --- a/hw/virtio/vhost-shadow-virtqueue.c
> > >>> +++ b/hw/virtio/vhost-shadow-virtqueue.c
> > >>> @@ -53,6 +53,22 @@ static void vhost_handle_guest_kick(EventNotifier *n)
> > >>>        event_notifier_set(&svq->kick_notifier);
> > >>>    }
> > >>>
> > >>> +/* Forward vhost notifications */
> > >>> +static void vhost_svq_handle_call_no_test(EventNotifier *n)
> > >>> +{
> > >>> +    VhostShadowVirtqueue *svq = container_of(n, VhostShadowVirtqueue,
> > >>> +                                             call_notifier);
> > >>> +
> > >>> +    event_notifier_set(&svq->guest_call_notifier);
> > >>> +}
> > >>> +
> > >>> +static void vhost_svq_handle_call(EventNotifier *n)
> > >>> +{
> > >>> +    if (likely(event_notifier_test_and_clear(n))) {
> > >>> +        vhost_svq_handle_call_no_test(n);
> > >>> +    }
> > >>> +}
> > >>> +
> > >>>    /*
> > >>>     * Obtain the SVQ call notifier, where vhost device notifies SVQ that there
> > >>>     * exists pending used buffers.
> > >>> @@ -180,6 +196,8 @@ VhostShadowVirtqueue *vhost_svq_new(struct vhost_dev *dev, int idx)
> > >>>        }
> > >>>
> > >>>        svq->vq = virtio_get_queue(dev->vdev, vq_idx);
> > >>> +    event_notifier_set_handler(&svq->call_notifier,
> > >>> +                               vhost_svq_handle_call);
> > >>>        return g_steal_pointer(&svq);
> > >>>
> > >>>    err_init_call_notifier:
> > >>> @@ -195,6 +213,7 @@ err_init_kick_notifier:
> > >>>    void vhost_svq_free(VhostShadowVirtqueue *vq)
> > >>>    {
> > >>>        event_notifier_cleanup(&vq->kick_notifier);
> > >>> +    event_notifier_set_handler(&vq->call_notifier, NULL);
> > >>>        event_notifier_cleanup(&vq->call_notifier);
> > >>>        g_free(vq);
> > >>>    }
> > >>> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> > >>> index bc34de2439..6c5f4c98b8 100644
> > >>> --- a/hw/virtio/vhost-vdpa.c
> > >>> +++ b/hw/virtio/vhost-vdpa.c
> > >>> @@ -712,13 +712,40 @@ static bool vhost_vdpa_svq_start_vq(struct vhost_dev *dev, unsigned idx)
> > >>>    {
> > >>>        struct vhost_vdpa *v = dev->opaque;
> > >>>        VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, idx);
> > >>> -    return vhost_svq_start(dev, idx, svq);
> > >>> +    EventNotifier *vhost_call_notifier = vhost_svq_get_svq_call_notifier(svq);
> > >>> +    struct vhost_vring_file vhost_call_file = {
> > >>> +        .index = idx + dev->vq_index,
> > >>> +        .fd = event_notifier_get_fd(vhost_call_notifier),
> > >>> +    };
> > >>> +    int r;
> > >>> +    bool b;
> > >>> +
> > >>> +    /* Set shadow vq -> guest notifier */
> > >>> +    assert(v->call_fd[idx]);
> > >>
> > >> We need to avoid the assert() here. In which case can we hit this?
> > >>
> > > I would say that there is no way we can actually hit it, so let's remove 
> > > it.
> > >
> > >>> +    vhost_svq_set_guest_call_notifier(svq, v->call_fd[idx]);
> > >>> +
> > >>> +    b = vhost_svq_start(dev, idx, svq);
> > >>> +    if (unlikely(!b)) {
> > >>> +        return false;
> > >>> +    }
> > >>> +
> > >>> +    /* Set device -> SVQ notifier */
> > >>> +    r = vhost_vdpa_set_vring_dev_call(dev, &vhost_call_file);
> > >>> +    if (unlikely(r)) {
> > >>> +        error_report("vhost_vdpa_set_vring_call for shadow vq failed");
> > >>> +        return false;
> > >>> +    }
> > >>
> > >> Similar to kick, do we need to set_vring_call() before vhost_svq_start()?
> > >>
> > > It should not matter at this moment, because the device should not be
> > > started at this point and device calls should not run
> > > vhost_svq_handle_call until the BQL is released.
> >
> >
> > Yes, we stop the virtqueue before that.
> >
> >
> > >
> > > The "logic" of doing it after is to make clear that svq must be fully
> > > initialized before processing device calls, even in the case that we
> > > extract SVQ in its own iothread or similar. But this could be done
> > > before vhost_svq_start for sure.
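> > >
> > > To be explicit, the reordered version would look something like this
> > > (a rough sketch, reusing the names already in the patch):
> > >
> > >     /*
> > >      * Set the device -> SVQ notifier first. The device cannot run
> > >      * vhost_svq_handle_call until the BQL is released, so SVQ is
> > >      * fully started before any call is actually handled.
> > >      */
> > >     r = vhost_vdpa_set_vring_dev_call(dev, &vhost_call_file);
> > >     if (unlikely(r)) {
> > >         error_report("vhost_vdpa_set_vring_call for shadow vq failed");
> > >         return false;
> > >     }
> > >
> > >     b = vhost_svq_start(dev, idx, svq);
> > >     if (unlikely(!b)) {
> > >         return false;
> > >     }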
> > >
> > >>> +
> > >>> +    /* Check for pending calls */
> > >>> +    event_notifier_set(vhost_call_notifier);
> > >>
> > >> Interesting, can this result in a spurious interrupt?
> > >>
> > > This actually "queues" a vhost_svq_handle_call after the BQL release,
> > > by which point the device should be fully reset. In that regard, if
> > > there are no used descriptors, no irq will be raised to the guest.
> > > Does that answer the question? Or have I missed something?
> >
> >
> > Yes, please explain this in the comment.
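> > Maybe something along these lines (just a suggestion for the wording,
> > based on your explanation above):
> >
> >     /*
> >      * This only queues vhost_svq_handle_call() to run after the BQL
> >      * is released. If there are no used descriptors pending at that
> >      * point, no irq is raised to the guest, so setting the notifier
> >      * here cannot cause a spurious interrupt.
> >      */
> >     event_notifier_set(vhost_call_notifier);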
> >
>
> I'm reviewing this again, and actually I think I was wrong in how I
> solved the issue.
>
> Since the device is still being configured at this point, there is no
> chance that we missed a call notification here: a previous kick is
> needed for the device to generate any calls, and kicks cannot be
> processed yet.
>
> What is not solved in this series is that we could have pending used
> buffers in the vdpa device when stopping SVQ, but queuing a check for
> that is not going to solve anything, since the SVQ vring would already
> be destroyed:
>
> * The vdpa device marks N > 0 buffers as used, and calls.
> * Before processing them, SVQ stop is called. SVQ has not processed
> them yet, and cleans them up, making this event_notifier_set useless.
>
> So this would require a few changes. Mainly, instead of queueing a
> check for used buffers, they need to be checked before SVQ cleanup.
> After that, obtain the VQ state (it is not obtained on stop at the
> moment, trusting the guest's used idx) and run a last
> vhost_svq_handle_call_no_test while the device is paused.

It looks to me that what's really important is that SVQ needs to
drain/forward used buffers after vdpa is stopped. Then we should be
fine.
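
Roughly something like this on the stop path, I think (just a sketch;
vhost_svq_forward_used() is a hypothetical helper that forwards all
pending used buffers to the guest):

    /* The device is paused here, so the used idx can no longer move */
    vhost_svq_forward_used(svq);   /* hypothetical: drain pending used */
    /* Final guest notification for whatever was just forwarded */
    vhost_svq_handle_call_no_test(&svq->call_notifier);
    vhost_svq_stop(hdev, n, svq);  /* only now tear down the SVQ vring */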

>
> Thanks!
>
> >
> > >
> > >>> +    return true;
> > >>>    }
> > >>>
> > >>>    static unsigned vhost_vdpa_enable_svq(struct vhost_vdpa *v, bool enable)
> > >>>    {
> > >>>        struct vhost_dev *hdev = v->dev;
> > >>>        unsigned n;
> > >>> +    int r;
> > >>>
> > >>>        if (enable == v->shadow_vqs_enabled) {
> > >>>            return hdev->nvqs;
> > >>> @@ -752,9 +779,18 @@ static unsigned vhost_vdpa_enable_svq(struct vhost_vdpa *v, bool enable)
> > >>>        if (!enable) {
> > >>>            /* Disable all queues or clean up failed start */
> > >>>            for (n = 0; n < v->shadow_vqs->len; ++n) {
> > >>> +            struct vhost_vring_file file = {
> > >>> +                .index = vhost_vdpa_get_vq_index(hdev, n),
> > >>> +                .fd = v->call_fd[n],
> > >>> +            };
> > >>> +
> > >>> +            r = vhost_vdpa_set_vring_call(hdev, &file);
> > >>> +            assert(r == 0);
> > >>> +
> > >>>                unsigned vq_idx = vhost_vdpa_get_vq_index(hdev, n);
> > >>>                VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, n);
> > >>>                vhost_svq_stop(hdev, n, svq);
> > >>> +            /* TODO: This can unmask or override call fd! */
> > >>
> > >> I don't get this comment. Does this mean the current code can't work
> > >> with mask_notifiers? If so, this is something we need to fix.
> > >>
> > > Yes, but it will be addressed in the next series. I should have
> > > explained it better here, sorry :).
> >
> >
> > Ok.
> >
> > Thanks
> >
> >
> > >
> > > Thanks!
> > >
> > >> Thanks
> > >>
> > >>
> > >>>                vhost_virtqueue_start(hdev, hdev->vdev, &hdev->vqs[n], vq_idx);
> > >>>            }
> > >>>
> >
>

