On Fri, May 08, 2026 at 05:53:07AM -0400, Michael S. Tsirkin wrote:
> On Fri, May 08, 2026 at 11:23:30AM +0200, Stefano Garzarella wrote:
> > From: Stefano Garzarella <[email protected]>
> > 
> > After commit 059b7dbd20a6 ("vsock/virtio: fix potential unbounded skb
> > queue"), virtio_transport_inc_rx_pkt() subtracts per-skb overhead from
> > buf_alloc when checking whether a new packet fits. This reduces the
> > effective receive buffer below what the user configured via
> > SO_VM_SOCKETS_BUFFER_SIZE, causing legitimate data packets to be
> > silently dropped and applications that rely on the full buffer size
> > to deadlock.
> > 
> > Also, the reduced space is not communicated to the remote peer, so
> > its credit calculation accounts for more credit than the receiver will
> > actually accept, causing data loss (there is no retransmission).
> > 
> > This also causes failures in tools/testing/vsock/vsock_test.c.
> > Test 18 sometimes fails, while test 22 always fails in this way:
> > 
> >   18 - SOCK_STREAM MSG_ZEROCOPY...hash mismatch
> >   22 - SOCK_STREAM virtio credit update + SO_RCVLOWAT...send failed:
> >        Resource temporarily unavailable
> > 
> > Fix this by introducing virtio_transport_rx_buf_size() to calculate the
> > size of the RX buffer based on the overhead, and use it in the acceptance
> > check, the advertised buf_alloc, and the credit update decision.
> > 
> > Use buf_alloc * 2 as the total budget (payload + overhead), similar to
> > how SO_RCVBUF is doubled to reserve space for sk_buff metadata.
> > The function returns buf_alloc as long as the overhead fits within the
> > reservation, then gradually reduces toward 0 as the overhead exceeds
> > buf_alloc (e.g. under small-packet flooding), informing the peer to
> > slow down.
> > 
> > Fixes: 059b7dbd20a6 ("vsock/virtio: fix potential unbounded skb queue")
> > Signed-off-by: Stefano Garzarella <[email protected]>
> unfortunately, this is a bit of a spec violation and there is no guarantee
> it helps.
Losing data like we are doing in 059b7dbd20a6 is even worse, IMHO.
> a spec violation because the spec says:
> 
>     Only payload bytes are counted and header bytes are not
>     included
> 
> and the implication is that a side can not reduce its own buf_alloc.
> 
> no guarantee because the other side is not required to process your
> packets, so it might not see your buf alloc reduction.
> 
> as designed in the current spec, you can only increase your buf alloc,
> not decrease it.
We never enforced this: currently a user can reduce it via
SO_VM_SOCKETS_BUFFER_SIZE, and we haven't blocked that since virtio-vsock
was introduced. Should we update the spec?
> what can be done:
> - more efficient storage for small packets (poc i posted)
> - reduce buf alloc ahead of time
That's basically what I'm doing here: I'm using twice the size of
`buf_alloc` (just like `SO_RCVBUF` does for other socket types) and
advertising only `buf_alloc` to the other peer.

But then, somehow, we have to let the other peer know when we're
running out of space. With this patch that only happens when the other
peer isn't behaving properly, sending so many small packets that the
overhead exceeds `buf_alloc`.
Stefano
---
net/vmw_vsock/virtio_transport_common.c | 31 +++++++++++++++++++++----
1 file changed, 27 insertions(+), 4 deletions(-)
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index 9b8014516f4f..94a4beb8fd61 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -444,12 +444,32 @@ static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
 	return ret;
 }
 
+/* vvs->rx_lock held by the caller */
+static u32 virtio_transport_rx_buf_size(struct virtio_vsock_sock *vvs)
+{
+	u64 skb_overhead = (skb_queue_len(&vvs->rx_queue) + 1) * SKB_TRUESIZE(0);
+	/* Use buf_alloc * 2 as total budget (payload + overhead), similar to
+	 * how SO_RCVBUF is doubled to reserve space for sk_buff metadata.
+	 */
+	u64 total_budget = (u64)vvs->buf_alloc * 2;
+
+	/* Overhead within buf_alloc: full buf_alloc available for payload */
+	if (skb_overhead < vvs->buf_alloc)
+		return vvs->buf_alloc;
+
+	/* Overhead exceeded buf_alloc: gradually reduce to bound skb queue */
+	if (skb_overhead < total_budget)
+		return total_budget - skb_overhead;
+
+	return 0;
+}
+
 static bool virtio_transport_inc_rx_pkt(struct virtio_vsock_sock *vvs,
 					u32 len)
 {
-	u64 skb_overhead = (skb_queue_len(&vvs->rx_queue) + 1) * SKB_TRUESIZE(0);
+	u32 rx_buf_size = virtio_transport_rx_buf_size(vvs);
 
-	if (skb_overhead + vvs->buf_used + len > vvs->buf_alloc)
+	if (!rx_buf_size || vvs->buf_used + len > rx_buf_size)
 		return false;
 
 	vvs->rx_bytes += len;
@@ -472,7 +492,7 @@ void virtio_transport_inc_tx_pkt(struct virtio_vsock_sock *vvs, struct sk_buff *
 	spin_lock_bh(&vvs->rx_lock);
 	vvs->last_fwd_cnt = vvs->fwd_cnt;
 	hdr->fwd_cnt = cpu_to_le32(vvs->fwd_cnt);
-	hdr->buf_alloc = cpu_to_le32(vvs->buf_alloc);
+	hdr->buf_alloc = cpu_to_le32(virtio_transport_rx_buf_size(vvs));
 	spin_unlock_bh(&vvs->rx_lock);
 }
 EXPORT_SYMBOL_GPL(virtio_transport_inc_tx_pkt);
@@ -594,6 +614,7 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
 	bool low_rx_bytes;
 	int err = -EFAULT;
 	size_t total = 0;
+	u32 rx_buf_size;
 	u32 free_space;
 
 	spin_lock_bh(&vvs->rx_lock);
@@ -639,7 +660,9 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
 	}
 
 	fwd_cnt_delta = vvs->fwd_cnt - vvs->last_fwd_cnt;
-	free_space = vvs->buf_alloc - fwd_cnt_delta;
+	rx_buf_size = virtio_transport_rx_buf_size(vvs);
+	free_space = rx_buf_size > fwd_cnt_delta ?
+		     rx_buf_size - fwd_cnt_delta : 0;
 
 	low_rx_bytes = (vvs->rx_bytes <
 			sock_rcvlowat(sk_vsock(vsk), 0, INT_MAX));
--
2.54.0