So after a little digging and documentation reading, I think what's happening 
here is that the VM thinks it has a 10GbE NIC, but in reality the underlying 
hardware is only 1GbE. Yes, the host hardware has multiple NICs, but each VM 
gets pinned to one of them. (So it's VM-level traffic distribution, not 
flow-level.)

My limited understanding of the netperf docs says the output should be 
interpreted as follows:

MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
NetPerfServerHost () port 0 AF_INET
Socket  Message  Elapsed      Messages                
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

212992    1024   10.00     11789372      0    9657.81
124928           10.00      203530            166.73

The first line (begins with 212992) gives the SENDER'S socket size (that is, 
the VPN client): 212992 bytes.
The message size (payload) is 1024 bytes, and the test ran for 10 seconds. The 
sender put 11,789,372 messages on the wire in those 10 seconds, for 9657.81 
Mbit/sec of ATTEMPTED traffic.
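
To sanity-check that arithmetic (a quick Python sketch; the numbers are just 
copied from the output above):

    # Sender side: 11,789,372 messages of 1024 bytes in 10 seconds
    msgs, msg_size, secs = 11_789_372, 1024, 10
    attempted_mbps = msgs * msg_size * 8 / secs / 1e6  # 10^6 bits/sec, netperf's units
    print(f"attempted: {attempted_mbps:.2f} Mbit/sec")
    # -> ~9657.85; netperf reports 9657.81, presumably because the real
    #    elapsed time is slightly over the rounded 10.00 seconds shown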

The second line (begins with 124928) gives the RECEIVER'S socket size (the 
netperf target): 124928 bytes.

Of the 11,789,372 messages sent, only 203,530 arrived, giving an actual 
network throughput of 166.73 Mbit/sec of UDP traffic.
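
Same arithmetic for the receive side, plus the delivery rate (again, numbers 
copied from the output above):

    # Receiver side: only 203,530 of those messages actually arrived
    received, sent, msg_size, secs = 203_530, 11_789_372, 1024, 10
    actual_mbps = received * msg_size * 8 / secs / 1e6
    print(f"delivered: {actual_mbps:.2f} Mbit/sec")  # ~166.73
    print(f"delivery rate: {received / sent:.1%}")   # ~1.7% of sent messages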

I suppose it's important to clarify that the last test case (no VPN) was 
actually a bare-metal server (not a VM), but it was using the same network 
infrastructure.
