On Tue, Apr 09, 2024 at 12:41:39PM -0400, Michael S. Tsirkin wrote:
> On Tue, Apr 09, 2024 at 07:37:04PM +0300, Alexey Dobriyan wrote:
> > On Tue, Apr 09, 2024 at 02:51:38AM -0400, Michael S. Tsirkin wrote:
> > > On Fri, Apr 05, 2024 at 02:20:15PM +0300, Alexey Dobriyan wrote:
> > > > Don't send zero length packets in virtio_net_flush_tx().
> > > > 
> > > > Reproducer from https://gitlab.com/qemu-project/qemu/-/issues/1451
> > > > creates a small packet (1 segment, len = 10 == n->guest_hdr_len),
> > > > then destroys the queue.
> > > > 
> > > > The "if (n->host_hdr_len != n->guest_hdr_len)" branch is taken, and
> > > > its body creates a zero-length/zero-segment packet, because there is
> > > > nothing after the guest header.
> > > > 
> > > > qemu_sendv_packet_async() tries to send it.
> > > > 
> > > > slirp discards it because it is smaller than the Ethernet header,
> > > > but returns 0.
> > > 
> > > won't the same issue trigger with a 1 byte packet
> > > as opposed to a 0 byte packet though?
> > 
> > I don't think so. Only "return 0" is problematic from 
> > qemu_sendv_packet_async().
> > ->deliver hooks are supposed to return total length, so 1 should be fine.
> 
> 
> But you said the problem is that it's smaller than an Ethernet header?

No, the problem is that an iov_cnt=0 packet is generated and passed to
qemu_sendv_packet_async(). It is discarded inside the slirp driver, but
net_slirp_receive() returns 0, the length of the meaningless packet, and
that 0 is propagated upwards.

> > > > The 0 length is propagated upwards and interpreted as "packet has
> > > > been sent", which is terrible because the queue is being destroyed,
> > > > nothing has been sent, nobody is waiting for TX to complete, and the
> > > > assert is triggered.
> > > > 
> > > > Signed-off-by: Alexey Dobriyan <adobri...@yandex-team.ru>
> > > > ---
> > > >  hw/net/virtio-net.c | 18 ++++++++++++------
> > > >  1 file changed, 12 insertions(+), 6 deletions(-)
> > > > 
> > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > index 58014a92ad..258633f885 100644
> > > > --- a/hw/net/virtio-net.c
> > > > +++ b/hw/net/virtio-net.c
> > > > @@ -2765,18 +2765,14 @@ static int32_t virtio_net_flush_tx(VirtIONetQueue *q)
> > > >          out_sg = elem->out_sg;
> > > >          if (out_num < 1) {
> > > >              virtio_error(vdev, "virtio-net header not in first element");
> > > > -            virtqueue_detach_element(q->tx_vq, elem, 0);
> > > > -            g_free(elem);
> > > > -            return -EINVAL;
> > > > +            goto detach;
> > > >          }
> > > >  
> > > >          if (n->has_vnet_hdr) {
> > > >              if (iov_to_buf(out_sg, out_num, 0, &vhdr, n->guest_hdr_len) <
> > > >                  n->guest_hdr_len) {
> > > >                  virtio_error(vdev, "virtio-net header incorrect");
> > > > -                virtqueue_detach_element(q->tx_vq, elem, 0);
> > > > -                g_free(elem);
> > > > -                return -EINVAL;
> > > > +                goto detach;
> > > >              }
> > > >              if (n->needs_vnet_hdr_swap) {
> > > >                  virtio_net_hdr_swap(vdev, (void *) &vhdr);
> > > > @@ -2807,6 +2803,11 @@ static int32_t virtio_net_flush_tx(VirtIONetQueue *q)
> > > >                               n->guest_hdr_len, -1);
> > > >              out_num = sg_num;
> > > >              out_sg = sg;
> > > > +
> > > > +            if (iov_size(out_sg, out_num) == 0) {
> > > > +                virtio_error(vdev, "virtio-net nothing to send");
> > > > +                goto detach;
> > > > +            }
> > > >          }
> > > >  
> > > >          ret = qemu_sendv_packet_async(qemu_get_subqueue(n->nic, queue_index),
> > > > @@ -2827,6 +2828,11 @@ drop:
> > > >          }
> > > >      }
> > > >      return num_packets;
> > > > +
> > > > +detach:
> > > > +    virtqueue_detach_element(q->tx_vq, elem, 0);
> > > > +    g_free(elem);
> > > > +    return -EINVAL;
> > > >  }
> > > >  
> > > >  static void virtio_net_tx_timer(void *opaque);
> > > > -- 
> > > > 2.34.1
> > > 
> 
