On 14-07-20 04:57 PM, Raimundo Santos wrote:
> On 19 July 2014 21:22, Sean Kamath <kam...@moltingpenguin.com> wrote:
>> Are you counting all those zeros to make sure they all came through?
>>
>> 'cause TCP is guaranteed delivery, in order.  UDP guarantees nothing.
>
> Hello Sean!
>
> Why counting?
>
> My guess, and therefore the start of my reasoning and later questioning
> here, is that all those zeroes inside a UDP stream could flood the virtual
> network structure.
>
> Maybe you are confusing nc(1) with wc(1).

No, what he meant was that using nc -u can produce false results.
The sender can push out packets as fast as its CPU can generate them; even if 99.9% of those packets get dropped on the way to (or at) the receiver, the sender still thinks it "successfully" sent a bazillion bytes per second, even though that number is meaningless.
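
One way to convince yourself of that with nothing but base tools (the host
name and port below are made up): dump a known amount of zeros through nc -u
and see how much actually arrives.

On the receiver:

    $ nc -u -l 9999 > received.out      # ^C this once the sender is done
    $ wc -c received.out

On the sender:

    $ dd if=/dev/zero bs=1400 count=100000 | nc -u receiver.example.net 9999

dd will report roughly 140MB written no matter what happens downstream; the
size of received.out is the only number that reflects what actually made it
across, and on a lossy or oversubscribed path it will come up short.
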
I didn't know tcpbench(1) was in base, either... I always install and use iperf. I would expect both tcpbench and iperf to return very similar results. (Note that the results probably won't be perfectly identical; that's normal.)
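
If you want to sanity-check one tool against the other, plain TCP mode on
both should land within a few percent ("serverhost" is a placeholder):

    # tcpbench: server first, then client
    server$ tcpbench -s
    client$ tcpbench serverhost

    # iperf: server first, then client
    server$ iperf -s
    client$ iperf -c serverhost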

Using the -u flag to tcpbench(1) over a ~20Mbps radio link, the client reports a throughput of 181Mbps, which is impossible. The server, at the same time, reports 26Mbps. Both numbers can't be true at once, right? Except they can - the client really is sending 181Mbps of traffic, and the server really is receiving 26Mbps of traffic. What happened to the other 155Mbps? Dropped on the floor, probably by the radio. That's why you should run TCP benchmarks, or else be very careful interpreting UDP benchmarks... Remember, too, that any out-of-order packets will kill real-world performance, and UDP has no guarantees about those, either.
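
For anyone reproducing that kind of test, the UDP run amounts to the stock
invocations ("receiverhost" is a placeholder), and the numbers worth
believing are the ones printed by the -s side:

    receiver$ tcpbench -s -u
    sender$   tcpbench -u receiverhost

The sender's periodic output only tells you how fast it can hand datagrams to
its own kernel; the receiver's output is the actual goodput.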

FWIW, you're almost certainly going to be CPU-bound. I can't get more than ~200Mbps on an emulated em(4) interface under ProxmoxVE (KVM 1.7.1) between two VMs running on the same host. Granted, the CPUs are slowish (2.2GHz Xeon L5520). I get better throughput using vio(4) but then I have to reboot the VMs once every 2 or 3 days to prevent them from locking up hard.

Previous testing with VMware produced similar results circa OpenBSD v5.0. Some other guests were able to get ~2Gbps on the same VM stack, at the time. It is - almost by definition - impossible to "flood" the virtual network infrastructure without running out of CPU cycles in the guests first. It might be possible if the vSwitch is single-threaded, and you're running on a many-core CPU with each VM pegging its core(s) doing network I/O... but even then, remember that the vSwitch doesn't have to do any layer-3 processing, so it has much less work to do than the guest network stack.

Where you do have to worry about running out of bandwidth is the switch handling traffic between your hypervisors, or more realistically, the network interface(s) leading from your host to said switch. Load-balancing (VMware ESXi < v5.1) and LAGs (everywhere/everything else) are your friends here, unless you have the budget to install 10Gbps switches...
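
If the box doing the aggregation happens to be running OpenBSD, a LAG is just
trunk(4); a minimal sketch along the lines of the man page example, with the
interface names and address made up:

    # /etc/hostname.em0
    up

    # /etc/hostname.em1
    up

    # /etc/hostname.trunk0
    trunkproto lacp trunkport em0 trunkport em1
    inet 192.0.2.10 255.255.255.0

The ESXi/Proxmox side obviously has its own knobs for the equivalent.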

--
-Adam Thompson
 athom...@athompso.net
