Hi all,

When working with RFNoC at 200 MSps on the X310 over 10GbE, I
experience overruns when using fewer than 512 samples per packet (spp).
A simple flow graph [RFNoC Radio] -> [RFNoC FIFO] -> [Null sink] with
the spp stream arg set on the RFNoC Radio block shows the following
network utilization:

 spp | throughput [Gbps]
------------------------
1024 | 6.49
 512 | 6.58
 256 | 3.60
  64 | 0.70

Although I understand that the total load increases slightly for
smaller packets due to the added header overhead (as seen going from
spp=1024 to spp=512), I find it confusing that so many packets are
dropped for spp <= 256.

Total goodput should be 200 MSps * 4 bytes per sample (sc16) = 800 MB/s
= 6.4 Gbps.
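
As a rough check of the packet rate rather than the bit rate, dividing
the sample rate by the spp (e.g. in a shell) gives:

echo $((200000000 / 512))   # 390625 packets/s
echo $((200000000 / 64))    # 3125000 packets/s

so at spp=64 the host has to keep up with roughly 3.1 million packets
per second.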

Is RFNoC somehow limited to a certain number of packets per second
(regardless of their size)?
Could this be resolved by increasing the STR_SINK_FIFOSIZE noc_shell
parameter of any of the blocks connected to the RFNoC Radio?

I would like to use spp=64 because that is the size of the RFNoC FFT I
want to use. I am using UHD 4.0.0.rfnoc-devel-409-gec9138eb.

Any help or ideas appreciated!

Best,
Sebastian

This is almost certainly an interrupt-rate issue having to do with your
Ethernet controller, and nothing to do with RFNoC per se.

If you're on Linux, try:

ethtool --coalesce <device-name-here> adaptive-rx on
ethtool --coalesce <device-name-here> adaptive-tx on
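
You can confirm whether the adapter accepted the settings with the
corresponding query form of the command:

ethtool --show-coalesce <device-name-here>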

Thanks Marcus for your quick response. Unfortunately, that did not
help. Also, `ethtool -c enp1s0f0` still reports "Adaptive RX: off TX:
off" afterwards. I also tried changing `rx-usecs` (see the example
commands below), which was reported back correctly but did not help
either. I am using an Intel 82599ES 10-Gigabit SFI/SFP+ controller with
the ixgbe driver (version 5.1.0-k) on Ubuntu 16.04.
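
For reference, the commands I ran were along these lines (the rx-usecs
value shown here is only an example):

sudo ethtool -C enp1s0f0 adaptive-rx on adaptive-tx on   # suggested coalescing settings
sudo ethtool -C enp1s0f0 rx-usecs 100                    # example value for rx-usecs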

Do you know anything else I could try?

Thanks,
Sebastian
