On 08/12/17 12:55, Saku Ytti wrote:
> On 12 August 2017 at 18:21, Raymond Burkholder <r...@oneunified.net> wrote:

>> I have successfully run iperf bidirectionally in TCP as well as UDP and hit
>> link limits, even on smaller, lower-capacity Linux-based boxes.

> On what packet sizes? What link speeds? Linux UDP socket performance

Large packet sizes. My goal with that testing has been rough bandwidth measurement through provider WAN links. It ferrets out basic provider MTU, duplex, and 'noise' issues.
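As a rough sketch of that kind of run (the hostname is a placeholder, and this assumes plain iperf2 defaults), something like:

  # far end
  iperf -s

  # near end: TCP, bidirectional (-d), 30 seconds, report every 5
  iperf -c remote.example.net -d -t 30 -i 5

  # path-MTU sanity check: 1472-byte payload + 28 bytes of ICMP/IP headers = 1500, DF set
  ping -M do -s 1472 remote.example.net

The ping with DF set is a quick way to spot a provider MTU problem separately from the throughput numbers.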

> is terrible; even with recvmmsg/sendmmsg, which iperf does not
> utilise, the performance is bad. A Xeon-grade server CPU won't congest
> 1GE in a single direction (1.48 Mpps) without loss.
> If you don't care about/look at loss, or use low pps, it's different.

This is where A/B testing comes in. If I test back to back, then I know how the boxes themselves will perform: I can confirm their maximum pps rate, and whether there are any kernel-based drops or limitations. Then, during WAN link testing, I can see how things diverge from that baseline.
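To make the "kernel-based drops" part concrete, a minimal sketch of what I look at around a back-to-back run (the interface name is a placeholder):

  # host UDP counters: look for 'packet receive errors' / 'receive buffer errors'
  netstat -su

  # NIC-level drop/discard counters (statistic names vary by driver)
  ethtool -S eth0 | grep -iE 'drop|discard'

If those counters move during a back-to-back run, the box itself is the limit, not the WAN link.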

In my particular application, I'm not really testing the forwarding performance of any one box.

And as an aid to the original post, I was providing some baseline info, suggesting that iperf, broken as it may be, does give acceptable baseline numbers, depending upon the reason for the test.


> If you use TCP you're measuring the host stack's TCP implementation,
> and you have no visibility into network quality, because packet loss is
> hidden from you.

I agree with that, and do perform UDP-based testing. iperf reports better post-test statistics (jitter and datagram loss) when UDP testing is performed.
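For reference, the UDP run that produces those statistics is along these lines (hostname and rate are placeholders); the server-side summary reports jitter and lost/total datagrams, which are the numbers I care about:

  # server
  iperf -s -u -i 1

  # client: 500 Mbit/s of UDP, 1400-byte payloads, 30 seconds
  iperf -c remote.example.net -u -b 500M -l 1400 -t 30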

But another interesting test scenario, for non-quantitative results, is to run iperf in TCP mode and run tcpdump at the same time, to see whether more than just ACKs are flowing, and what sort of loss is being corrected by retransmission.
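A minimal sketch of that capture, assuming a stock iperf2 TCP run on the default port (interface and hostname are placeholders):

  # capture alongside the iperf TCP run; a 96-byte snaplen is enough for the headers
  tcpdump -i eth0 -s 96 -w iperf-run.pcap host remote.example.net and port 5001

  # afterwards, look for retransmissions and duplicate ACKs, e.g. in Wireshark with
  #   tcp.analysis.retransmission   and   tcp.analysis.duplicate_ack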



--
Raymond Burkholder
https://blog.raymond.burkholder.net/


