On 08/13/2013 05:34 AM, Alexey Stoyanov wrote:
>> I am not an expert when it comes to setting up rate limiting.  What you
>> would need to do is set up a qdisc and configure it to limit your
>> outgoing traffic.  You can probably find more information on how to do
>> that on the web as what you are seeing could be buffer bloat.  A good
>> test for latency would be to try sending pings while running your
>> throughput test.  You may see a significant increase in latency and
>> dropped packets with the pings if the issue is something such as buffer
>> bloat.
> I can't get any latency increase or dropped packets in this case. I
> will read about rate limiting, but on the other hand isn't that the
> job of the congestion control system? To decrease the rate (typically
> by lowering the TCP window) when packets are lost or latency
> increases. I can't predict the bandwidth to every place on the
> internet and tune the flow speed for each destination, right?
>
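
For reference, if you do want to experiment with rate limiting, a
minimal sketch would be a token-bucket qdisc on the egress interface.
The interface name and the rate below are only placeholders for your
setup:

  # cap egress on the interface under test to ~500 Mbit/s
  # (eth0 and the rate are placeholders; burst/latency may need tuning)
  tc qdisc add dev eth0 root tbf rate 500mbit burst 300kb latency 50ms
  # remove the cap again afterwards
  tc qdisc del dev eth0 root

And the buffer bloat check is just a ping left running alongside the
throughput test, watching whether the RTT climbs or replies get lost:

  ping y.y.74.11               # leave running in one terminal
  iperf -c y.y.74.11 -t 20     # run the test in another
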
>> Other than that the only other thing I can really recommend would be to
>> double check your test protocol.  How are you testing the 82574/e1000e
>> interface versus the 82599/ixgbe interface?  I notice they both have
>> IPs on the same subnet.
> Yes, I bind the .133 IP to the 82574 card and the .135 IP to the
> 82599 card. The routes are managed via iproute2 rt_tables.
>
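
Just to make sure I follow, I assume the policy routing looks roughly
like this (the table names, device names and gateway here are only
placeholders on my side):

  # /etc/iproute2/rt_tables contains e.g. "100 t574" and "101 t599"
  ip route add default via <gateway> dev eth0 table t574
  ip rule add from x.x.185.133 lookup t574
  ip route add default via <gateway> dev eth2 table t599
  ip rule add from x.x.185.135 lookup t599

It is worth double checking both with "ip rule show" and
"ip route show table <name>"; a mismatch there could look exactly like
a per-card throughput problem.
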
>> Are the cards in different systems or
>> in the same system?
> The cards are in the same system. I prefer to be sure that they are
> driven at the same time by the same kernel, the same sysctls, the
> same TCP stack, etc.
>
>>  If they are in the same system are you making
>> certain to disconnect the port that is not under test?
> Today I even tried leaving just the optical 82599 configured with the
> .135 IP, rebooted, tested - and got the same 80-100 Mbit/s per flow.
>
> TCP TEST
> ------------------------------------------------------------
> Client connecting to yy.yy.74.11, TCP port 5001
> TCP window size: 64.0 KByte (default)
> ------------------------------------------------------------
> [  3] local x.x.185.135 port 60091 connected with y.y.74.11 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-20.0 sec   282 MBytes   118 Mbits/sec
>
> UDP TEST at a 500 Mbit/s target rate
> ------------------------------------------------------------
> Client connecting to y.y.74.11, UDP port 5001
> Sending 1470 byte datagrams
> UDP buffer size:  208 KByte (default)
> ------------------------------------------------------------
> [  3] local x.x.185.135 port 49505 connected with y.y.74.11 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-20.0 sec  1.19 GBytes   511 Mbits/sec
> [  3] Sent 869257 datagrams
> [  3] Server Report:
> [  3]  0.0-20.0 sec  1.18 GBytes   506 Mbits/sec   0.031 ms 8450/869256 (0.97%)
> [  3]  0.0-20.0 sec  1 datagrams received out-of-order
>
> UDP TEST at an 800 Mbit/s target rate
> ------------------------------------------------------------
> Client connecting to y.y.74.11, UDP port 5001
> Sending 1470 byte datagrams
> UDP buffer size:  208 KByte (default)
> ------------------------------------------------------------
> [  3] local x.x.185.135 port 46931 connected with y.y.74.11 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-20.0 sec  1.89 GBytes   812 Mbits/sec
> [  3] Sent 1381506 datagrams
> [  3] Server Report:
> [  3]  0.0-20.0 sec  1.70 GBytes   731 Mbits/sec   0.023 ms 137588/1381505 (10%)
> [  3]  0.0-20.0 sec  1 datagrams received out-of-order
>
> Btw, the UDP tests show interesting results: the network is able to
> carry a 400-500 Mbit/s flow with <1% loss over UDP.
>
> Then I set up the .133 IP on the 82574 card and rebooted the server
> with just the 82574 card configured and up.
>
> TCP TEST with the 82574 (almost 5 times faster with TCP compared to the 82599)
> ------------------------------------------------------------
> Client connecting to y.y.74.11, TCP port 5001
> TCP window size: 64.0 KByte (default)
> ------------------------------------------------------------
> [  3] local x.x.185.133 port 46783 connected with y.y.74.11 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-20.1 sec  1.26 GBytes   538 Mbits/sec
>
> UDP TEST with the 82574 at a 500 Mbit/s target rate
> ------------------------------------------------------------
> Client connecting to y.y.74.11, UDP port 5001
> Sending 1470 byte datagrams
> UDP buffer size:  208 KByte (default)
> ------------------------------------------------------------
> [  3] local x.x.185.133 port 39406 connected with y.y.74.11 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-20.0 sec  1.19 GBytes   511 Mbits/sec
> [  3] Sent 869070 datagrams
> [  3] Server Report:
> [  3]  0.0-20.0 sec  1.19 GBytes   509 Mbits/sec   0.040 ms 2722/869069 (0.31%)
> [  3]  0.0-20.0 sec  1 datagrams received out-of-order
>
> UDP TEST with the 82574 at an 800 Mbit/s target rate
> ------------------------------------------------------------
> Client connecting to y.y.74.11, UDP port 5001
> Sending 1470 byte datagrams
> UDP buffer size:  208 KByte (default)
> ------------------------------------------------------------
> [  3] local x.x.185.133 port 39797 connected with y.y.74.11 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-20.0 sec  1.89 GBytes   810 Mbits/sec
> [  3] Sent 1378241 datagrams
> [  3] Server Report:
> [  3]  0.0-20.0 sec  1.79 GBytes   769 Mbits/sec   0.022 ms 71147/1378240 (5.2%)
> [  3]  0.0-20.0 sec  1 datagrams received out-of-order
>
>
>> Do they both
>> show the same ping time or different ping times?
> They both show the same ping time.
>
>>  Also what kind of
>> results do you see if you don't go through the internet, but instead
>> just send traffic to a local port on the same switch?
> I don't have another 10G port in the same switch - but I can easily
> fill 940 Mbit/s to another 1 Gbit/s server in the same switch from
> both the 82574 and the 82599 cards. On the LAN everything works
> perfectly. The same goes for nearby datacenters: I have a datacenter
> here in Kiev with 11 ms latency, and I can easily fill 930-940 Mbit/s
> to it from both cards too.
>
> The issue looks like it is related to the TCP window and congestion
> control, which affect the WAN traffic.
>
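
Before anything else it might be worth confirming what the stack is
actually doing on the 10G path while the transfer runs; something along
these lines (the destination is just your iperf server):

  sysctl net.ipv4.tcp_congestion_control net.ipv4.tcp_window_scaling
  sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
  # per-connection cwnd, rtt and retransmits during the test
  ss -ti dst y.y.74.11

Comparing that output between an 82574 run and an 82599 run should show
whether the window really is being held down on the 10G card.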

I'm beginning to wonder if this might be some sort of QoS issue on the
WAN instead of something local.  There are a couple of things you could try.

The first would be to try running the UDP test in both directions; you
could probably try the same for the TCP test as well.  I am wondering
if there is an issue in one direction or the other, and running the
test in both directions would verify that both paths are good.  My
concern here is that TCP needs ACKs, and if the ACK flow is somehow
delayed or dropped, that could cause some issues.
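
With iperf (your output looks like iperf2) the -r option runs the test
in each direction, one after the other, so something like this should
cover both paths; the bandwidth target is just an example:

  # on the remote end
  iperf -s -u        # plus a plain "iperf -s" for the TCP run
  # on your end: -r repeats each test in the reverse direction
  iperf -c y.y.74.11 -u -b 500M -t 20 -r
  iperf -c y.y.74.11 -t 20 -r

If one direction is clean and the other is not, that tells us which
path to look at.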

The other thing to try would be to swap the MAC addresses and the IP
addresses between the 82574 and the 82599, so that the 82599 gets the
82574's MAC and IP addresses and the 82574 gets the 82599's.  That
way, if there is a rule somewhere on the path favoring one card over
the other, the swap should rule it out.
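
Roughly like this (eth0 stands for the 82574 and eth2 for the 82599
here; the device names, prefix length and MAC values are placeholders,
and it is safest to do this from the console so you don't lock yourself
out):

  ip link set eth0 down && ip link set eth2 down
  # give each card the other one's MAC address
  ip link set eth0 address <MAC of the 82599>
  ip link set eth2 address <MAC of the 82574>
  # and swap the IP addresses as well
  ip addr flush dev eth0 && ip addr flush dev eth2
  ip addr add x.x.185.135/24 dev eth0
  ip addr add x.x.185.133/24 dev eth2
  ip link set eth0 up && ip link set eth2 up

Remember to adjust the ip rules / routing tables to match, and if you
can, clear the ARP entries for the swapped IPs on the gateway so stale
neighbor state doesn't skew the result.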

Thanks,

Alex
