https://bugs.dpdk.org/show_bug.cgi?id=383
Bug ID: 383
Summary: dpdk virtio_user lack of notifications makes
vhost_net+napi stop tx buffers
Product: DPDK
Version: unspecified
Hardware: All
OS: Linux
Status: UNCONFIRMED
Severity: normal
Priority: Normal
Component: vhost/virtio
Assignee: [email protected]
Reporter: [email protected]
Target Milestone: ---
Using the current testpmd as the vhost-user backend:
./app/testpmd -l 6,7,8 --vdev='net_vhost1,iface=/tmp/vhost-user1'
--vdev='net_vhost2,iface=/tmp/vhost-user2' -- -a -i --rxq=1 --txq=1 --txd=1024
--forward-mode=rxonly
And starting qemu using packed=on on the interface:
-netdev vhost-user,chardev=charnet1,id=hostnet1 -device
virtio-net-pci,rx_queue_size=256,...,packed=on
And starting to tx in the guest using:
./dpdk/build/app/testpmd -l 1,2 --vdev=eth_af_packet0,iface=eth0 -- \
--forward-mode=txonly --txq=1 --txd=256 --auto-start --txpkts 1500 \
--stats-period 1
After the first burst of packets (512 or a little more), sendto() starts to
return EBUSY. The kernel NAPI refuses to send more packets to the virtio_net
device until it can free the old skbs.
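For context on the guest side, the stall happens in the virtio_net driver's
xmit path. Roughly paraphrased from drivers/net/virtio_net.c (simplified,
helper names only approximate, not meant to build standalone), it stops the
tx queue when it cannot reclaim enough used descriptors:

/* Simplified paraphrase of the guest virtio_net xmit path. */
static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
{
        struct virtnet_info *vi = netdev_priv(dev);
        int qnum = skb_get_queue_mapping(skb);
        struct send_queue *sq = &vi->sq[qnum];

        /* Reclaim tx buffers the host has already marked as used. */
        free_old_xmit_skbs(sq);

        /* Post the new skb on the tx virtqueue and kick the host. */
        xmit_skb(sq, skb);
        virtqueue_kick(sq->vq);

        /* Too few free descriptors left: stop the queue. It is only
         * restarted once the host returns used buffers and notifies the
         * guest, which in this scenario does not happen. */
        if (sq->vq->num_free < 2 + MAX_SKB_FRAGS)
                netif_stop_subqueue(dev, qnum);

        return NETDEV_TX_OK;
}

Once the queue is stopped, the af_packet socket in the guest can no longer
hand packets to the device, which testpmd sees as sendto() failing with
EBUSY.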
However, the virtio_net driver is unable to free the old buffers, since the
host does not return them in `vhost_flush_dequeue_packed` until the shadow
used queue is full except for MAX_PKT_BURST (32) entries.
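The flush condition in question looks roughly like the sketch below. It is
paraphrased from the packed-ring dequeue path in lib/librte_vhost/virtio_net.c,
with the virtqueue struct stubbed down to the relevant fields, so treat it as
an approximation rather than the exact upstream code:

#include <stdbool.h>
#include <stdint.h>

#define MAX_PKT_BURST 32

/* Stand-in for the few struct vhost_virtqueue fields that matter here
 * (the real structure lives in lib/librte_vhost/vhost.h). */
struct vq_stub {
        uint32_t size;                 /* virtqueue size, e.g. 256 */
        uint16_t last_used_idx;
        uint16_t shadow_last_used_idx; /* last_used_idx at the last flush */
        uint16_t shadow_used_idx;      /* entries pending in the shadow ring */
};

/* Approximation of the check in vhost_flush_dequeue_packed(): the shadow
 * used ring is flushed (and the guest notified) only once it holds at
 * least size - MAX_PKT_BURST entries. With a 256-entry virtqueue that is
 * 224 used descriptors, so a guest that stops transmitting before then
 * never gets its tx buffers back and never restarts its queue. */
static bool would_flush_and_notify(const struct vq_stub *vq)
{
        int shadow_count;

        if (!vq->shadow_used_idx)
                return false;

        shadow_count = vq->last_used_idx - vq->shadow_last_used_idx;
        if (shadow_count <= 0)
                shadow_count += vq->size;

        return (uint32_t)shadow_count >= vq->size - MAX_PKT_BURST;
}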
Sometimes we are lucky and reach that point, or the packets are small enough
to fill the queue and trigger a flush, but if the packets and the virtqueue
are big enough, we are not able to tx anymore.