On 4/30/25 11:16, Christoph Petrausch wrote:
Sorry, my mail client mangled the formatting of the commands to reproduce the issue. Here is a corrected version.

On 4/30/25 10:59, Christoph Petrausch wrote:

We can't reproduce the problem on kernel 5.15, but have seen it on v5.17, v5.18, v6.1, v6.2, v6.6.85, v6.8, and v6.15-rc4-42-gb6ea1680d0ac. I'm in the process of git bisecting to find the commit that introduced this broken behaviour.
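For reference, a bisect run between the last known-good and first known-bad releases might look like this (a sketch, assuming a mainline tree and the reproducer below as the test at each step):

git bisect start
git bisect bad v5.17    # earliest release where we saw the drops
git bisect good v5.15   # last release known to recover
# build and boot each suggested commit, run the reproducer, then mark it:
git bisect good         # or: git bisect bad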

Thank you for the report, the commands, and bisecting efforts!
We will also try to dig deeper on our own.
(side note: CCing IWL ML typically yields faster reply times)


On kernel 5.15, jumbo frames are received normally after the memory pressure is gone.


To reproduce, we currently use 2 servers (server-rx, server-tx) with an Intel E810-XXV NIC. To generate network traffic, we run 2 iperf3 processes with 100 threads each on the load-generating server server-tx:

iperf3 -c server-rx -P 100 -t 3000 -p 5201
iperf3 -c server-rx -P 100 -t 3000 -p 5202

On the receiving server server-rx, we set up two iperf3 servers:

iperf3 -s -p 5201
iperf3 -s -p 5202
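(For completeness: the jumbo-frame path assumes an MTU above 1500 on both NICs; on our systems that would be something like the following, with ens1f0 as a placeholder interface name.)

ip link set dev ens1f0 mtu 9000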

To generate memory pressure, we start stress-ng on server-rx (the awk expression extracts the MemFree value in GiB from free's single-line output):
stress-ng --vm 1000 --vm-bytes $(free -g -L | awk '{ print $8 }')G --vm-keep --timeout 1200s

This consumes all the currently free memory. As soon as the PFMemallocDrop counter increases, we stop stress-ng. At that point we see plenty of free memory again, but the counter keeps increasing, and new TCP sessions run into problems as soon as their packet size exceeds 1500 bytes.
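(To watch the counter, something like the following should work, assuming this is the PFMemallocDrop counter exported under TcpExt in /proc/net/netstat:)

nstat -az TcpExtPFMemallocDrop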

Does the faulty state then persist indefinitely (say, for at least a few minutes)?



Best regards,
Christoph Petrausch


