On Mon, 30 Jan 2023 16:40:08 +0800, Jason Wang <jasow...@redhat.com> wrote:
> On Mon, Jan 30, 2023 at 4:03 PM Xuan Zhuo <xuanz...@linux.alibaba.com> wrote:
> >
> > On Mon, 30 Jan 2023 15:49:36 +0800, Jason Wang <jasow...@redhat.com> wrote:
> > > On Mon, Jan 30, 2023 at 1:32 PM Michael S. Tsirkin <m...@redhat.com> wrote:
> > > >
> > > > On Mon, Jan 30, 2023 at 10:15:12AM +0800, Xuan Zhuo wrote:
> > > > > On Sun, 29 Jan 2023 07:15:47 -0500, "Michael S. Tsirkin" <m...@redhat.com> wrote:
> > > > > > On Sun, Jan 29, 2023 at 08:03:42PM +0800, Xuan Zhuo wrote:
> > > > > > > On Sun, 29 Jan 2023 06:57:29 -0500, "Michael S. Tsirkin" <m...@redhat.com> wrote:
> > > > > > > > On Sun, Jan 29, 2023 at 04:23:08PM +0800, Xuan Zhuo wrote:
> > > > > > > > > On Sun, 29 Jan 2023 03:12:12 -0500, "Michael S. Tsirkin" <m...@redhat.com> wrote:
> > > > > > > > > > On Sun, Jan 29, 2023 at 03:28:28PM +0800, Xuan Zhuo wrote:
> > > > > > > > > > > On Sun, 29 Jan 2023 02:25:43 -0500, "Michael S. Tsirkin" <m...@redhat.com> wrote:
> > > > > > > > > > > > On Sun, Jan 29, 2023 at 10:51:50AM +0800, Xuan Zhuo wrote:
> > > > > > > > > > > > > Check whether it is per-queue reset state in virtio_net_flush_tx().
> > > > > > > > > > > > >
> > > > > > > > > > > > > Before per-queue reset, we need to recover async tx resources. At this
> > > > > > > > > > > > > time, virtio_net_flush_tx() is called, but we should not try to send
> > > > > > > > > > > > > new packets, so virtio_net_flush_tx() should check the current
> > > > > > > > > > > > > per-queue reset state.
> > > > > > > > > > > >
> > > > > > > > > > > > What does "at this time" mean here?
> > > > > > > > > > > > Do you in fact mean it's called from flush_or_purge_queued_packets?
> > > > > > > > > > >
> > > > > > > > > > > Yes
> > > > > > > > > > >
> > > > > > > > > > > virtio_queue_reset
> > > > > > > > > > >     k->queue_reset
> > > > > > > > > > >         virtio_net_queue_reset
> > > > > > > > > > >             flush_or_purge_queued_packets
> > > > > > > > > > >                 qemu_flush_or_purge_queued_packets
> > > > > > > > > > >                     .....
> > > > > > > > > > >                     (callback) virtio_net_tx_complete
> > > > > > > > > > >                         virtio_net_flush_tx <-- here we send new packets. We need to stop it.
> > > > > > > > > > >
> > > > > > > > > > > Because it is inside the callback, I can't pass information through the stack. I
> > > > > > > > > > > originally thought it was a general situation, so I wanted to put it in
> > > > > > > > > > > struct VirtQueue.
> > > > > > > > > > >
> > > > > > > > > > > If it is not very suitable, it may be better to put it in VirtIONetQueue.
> > > > > > > > > > >
> > > > > > > > > > > Thanks.
> > > > > > > > > >
> > > > > > > > > > Hmm, maybe. Another idea: isn't virtio_net_tx_complete called
> > > > > > > > > > with length 0 here? Are there other cases where length is 0?
> > > > > > > > > >
> > > > > > > > > > What does the call stack look like?
> > > > > > > > > > > >
> > > > > > > > > > > > If yes, introducing a vq state just so virtio_net_flush_tx
> > > > > > > > > > > > knows we are in the process of reset would be a bad idea.
> > > > > > > > > > > > We want something much more local, ideally on stack even ...
> > > > > > > > > > > > >
> > > > > > > > > > > > > Fixes: 7dc6be52 ("virtio-net: support queue reset")
> > > > > > > > > > > > > Fixes: https://gitlab.com/qemu-project/qemu/-/issues/1451
> > > > > > > > > > > > > Reported-by: Alexander Bulekov <alx...@bu.edu>
> > > > > > > > > > > > > Signed-off-by: Xuan Zhuo <xuanz...@linux.alibaba.com>
> > > > > > > > > > > > > ---
> > > > > > > > > > > > >  hw/net/virtio-net.c | 3 ++-
> > > > > > > > > > > > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > > > > > > > > > > > >
> > > > > > > > > > > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > > > > > > > > > > index 3ae909041a..fba6451a50 100644
> > > > > > > > > > > > > --- a/hw/net/virtio-net.c
> > > > > > > > > > > > > +++ b/hw/net/virtio-net.c
> > > > > > > > > > > > > @@ -2627,7 +2627,8 @@ static int32_t virtio_net_flush_tx(VirtIONetQueue *q)
> > > > > > > > > > > > >      VirtQueueElement *elem;
> > > > > > > > > > > > >      int32_t num_packets = 0;
> > > > > > > > > > > > >      int queue_index = vq2q(virtio_get_queue_index(q->tx_vq));
> > > > > > > > > > > > > -    if (!(vdev->status & VIRTIO_CONFIG_S_DRIVER_OK)) {
> > > > > > > > > > > > > +    if (!(vdev->status & VIRTIO_CONFIG_S_DRIVER_OK) ||
> > > > > > > > > > > > > +        virtio_queue_reset_state(q->tx_vq)) {
> > > > > > > > > >
> > > > > > > > > > btw this sounds like you are asking it to reset some state.
> > > > > > > > > >
> > > > > > > > > > > > >          return num_packets;
> > > > > > > > > >
> > > > > > > > > > and then
> > > > > > > > > >
> > > > > > > > > >     ret = virtio_net_flush_tx(q);
> > > > > > > > > >     if (ret >= n->tx_burst)
> > > > > > > > > >
> > > > > > > > > > will reschedule automatically, won't it?
> > > > > > > > > >
> > > > > > > > > > also, why check in virtio_net_flush_tx and not virtio_net_tx_complete?
> > > > > > > > >
> > > > > > > > > virtio_net_flush_tx may be called by a timer.
> > >
> > > We stop timer/bh during device reset, do we need to do the same with vq
> > > reset?
> > > > > > > > >
> > > > > > > > > Thanks.
> > > > > > > >
> > > > > > > > timer won't run while flush_or_purge_queued_packets is in progress.
> > > > > > >
> > > > > > > Is the timer not executed during the VMEXIT process? Otherwise, we still
> > > > > > > have to consider the window after flush_or_purge_queued_packets and
> > > > > > > before the structure is cleared.
> > > > > >
> > > > > > void virtio_queue_reset(VirtIODevice *vdev, uint32_t queue_index)
> > > > > > {
> > > > > >     VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
> > > > > >
> > > > > >     if (k->queue_reset) {
> > > > > >         k->queue_reset(vdev, queue_index);
> > > > > >     }
> > > > > >
> > > > > >     __virtio_queue_reset(vdev, queue_index);
> > > > > > }
> > > > > >
> > > > > > No, timers do not run between k->queue_reset and __virtio_queue_reset.
> > > > > > >
> > > > > > > Even if it can be processed in virtio_net_tx_complete, is there a good way?
> > > > > > > This is a callback, so it is not convenient to pass parameters.
> > > > > > >
> > > > > > > Thanks
> > > > > >
> > > > > > How about checking that length is 0?
> > > > >
> > > > > I think that checking the length is not a good way. It modifies the semantics of 0.
> > > >
> > > > 0 seems to mean "purge" and
> > > >
> > > > > It is not friendly to future maintenance. On the other hand,
> > > > > qemu_net_queue_purge() will pass 0, and that function is called from many places.
> > >
> > > That's exactly what we want actually; when we do a purge we don't need a flush?
> >
> > Yes, but I'm not sure. If we stop the flush, will there be any other effects?
>
> So we did:
>
> virtio_net_queue_reset():
>     nc = qemu_get_subqueue(n->nic, vq2q(queue_index));
>     flush_or_purge_queued_packets(nc);
>         qemu_flush_or_purge_queued_packets(nc->peer, true); // [1]
>             if (qemu_net_queue_flush(nc->incoming_queue)) {
>                 ....
>             } else if (purge) {
>                 qemu_net_queue_purge(nc->incoming_queue, nc->peer);
>                     packet->send_cb()
>                         virtio_net_tx_complete()
>                             virtio_net_flush_tx()
>                                 qemu_sendv_packet_async() // [2]
>             }
>
> We try to flush the tap's incoming queue, and if we fail we purge
> in [1]. But the sent_cb() tries to send more packets, which could be
> queued to the tap incoming queue [2]. This breaks the semantics of
> qemu_flush_or_purge_queued_packets().
Sounds like good news, and I think so too.

> >
> > On the other hand, if we use "0" as a judgment condition, do you mean only the
> > implementation of the purge in flush_or_purge_queued_packets()?
>
> It should be all the users of qemu_net_queue_purge(). The rest of the users
> all seem fine:
>
> virtio_net_vhost_status(): if we do a flush, it may end up touching the
> vring while vhost is running.
> filters: all do a flush before.
>
> > > > > > How about we add an API in queue.c to replace the sent_cb callback on
> > > > > > the queue?
> > > > > >
> > > > > > Thanks.
> > > >
> > > > OK I guess. Jason?
> > >
> > > Not sure, is that anything different from adding a check in
> > > virtio_net_tx_complete()? (assuming bh and timer are cancelled or
> > > deleted).
> >
> > We replaced the sent_cb with a function that does not flush.
>
> I meant it won't be different from adding a
>
>     if (virtio_queue_is_reset())
>
> somewhere in virtio_net_tx_complete()?

It only modifies things on the stack, without using a variable like disabled_by_reset.

Thanks.

> Thanks
>
> > Thanks.
> > >
> > > Thanks
> > > > > > > > > > > > > }
> > > > > > > > > > > > >
> > > > > > > > > > > > > --
> > > > > > > > > > > > > 2.32.0.3.g01195cf9f