Hi everyone,

I've got a stand-alone executable that auto-loads PMDs with the usual 
rte_eal_init() call, but the performance is horrible at higher speeds. It 
looks like my call to rte_eth_tx_burst() is being handled by a different 
thread running on the same core, so I end up losing lots of packets as the 
downstream rx buffer fills up faster than it can be emptied.

rte_lcore_count() reports just one core, so is there some way to get either my 
main code or the downstream PMD (vhost, by the way) to run on a separate core?
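In case it helps, this is roughly what I was expecting to be able to do once a 
second lcore is available (simplified sketch, not my real code; tx_loop is a 
placeholder, and I'm assuming EAL arguments like "-l 0-1" would provide the 
extra lcore):

```c
/* Sketch: push the TX path onto its own lcore.
 * Assumes the EAL was started with at least two lcores, e.g. "-l 0-1".
 * tx_loop is a placeholder for the real burst loop. */
#include <stdio.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_debug.h>

static int
tx_loop(void *arg)
{
    (void)arg;
    /* real code would call rte_eth_tx_burst() in a loop here */
    printf("tx loop running on lcore %u\n", rte_lcore_id());
    return 0;
}

int
main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    /* find the first worker lcore (skip the main lcore) */
    unsigned lcore = rte_get_next_lcore(-1, 1, 0);
    if (lcore == RTE_MAX_LCORE)
        rte_exit(EXIT_FAILURE, "need at least 2 lcores (e.g. -l 0-1)\n");

    rte_eal_remote_launch(tx_loop, NULL, lcore);

    /* main lcore would keep doing the ROS2 -> mbuf copy here ... */

    rte_eal_mp_wait_lcore();
    return 0;
}
```

But with rte_lcore_count() == 1 there is no worker lcore to launch onto, which 
is the part I can't figure out.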

I'm sending iperf3-generated TCP traffic wrapped as ROS2 messages, so I just 
have simple code that copies the message bytes from ROS2 format into mbufs, 
but at about 10 Gbps with 1400-byte packets I'm losing about 10,000 packets 
per second to rte_eth_tx_burst() errors! If I comment out that function call, 
the code can keep up, so I don't believe the bottleneck is my upstream code.
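For reference, the TX side is roughly the following shape (heavily simplified 
for this post; "mbuf_pool", PORT_ID, and the payload arrays are placeholders, 
not my actual identifiers):

```c
/* Simplified sketch of the ROS2 -> mbuf -> tx_burst path.
 * "mbuf_pool", PORT_ID and the payload/len arrays are placeholders. */
#include <string.h>
#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_ethdev.h>

#define PORT_ID 0   /* placeholder port id */
#define BURST   32

static void
send_msgs(struct rte_mempool *mbuf_pool, const uint8_t *payload[],
          const uint16_t len[], uint16_t n)
{
    struct rte_mbuf *bufs[BURST];
    uint16_t i;

    for (i = 0; i < n; i++) {
        bufs[i] = rte_pktmbuf_alloc(mbuf_pool);
        if (bufs[i] == NULL)
            break;                          /* pool exhausted */

        /* copy the ROS2 message bytes into the mbuf data area */
        char *dst = rte_pktmbuf_append(bufs[i], len[i]);
        if (dst == NULL) {                  /* not enough tailroom */
            rte_pktmbuf_free(bufs[i]);
            break;
        }
        memcpy(dst, payload[i], len[i]);
    }

    uint16_t sent = rte_eth_tx_burst(PORT_ID, 0, bufs, i);

    /* anything the PMD could not queue has to be freed (or retried) */
    while (sent < i)
        rte_pktmbuf_free(bufs[sent++]);
}
```

At the moment I'm freeing (i.e. dropping) whatever rte_eth_tx_burst() can't 
queue, which is where the 10,000 packets/second are going.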

Any hints on how to fix this would be helpful.

Thanks,
Ken
