On 08/12/2013 03:28 PM, Alexey Stoyanov wrote:
I did a reload of ixgbe with MQ=0,0 and RSS=1,1.
There is still no luck with the speed.
[ 3] local xxx.xxx.185.135 port 5001 connected with yy.yy.74.11 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-20.0 sec 151 MBytes 63.1 Mbits/sec
[ 3] local xxx.xxx.185.133 port 5001 connected with
One other thing that separates the 82574 and the 82599 is that the 82599 is
a multiqueue interface. Try loading the driver with RSS=1,1 to see if
this issue might somehow be related to multiqueue.
Other than that, the only other thing I can think of would be to start
rate limiting the ixgbe port itself.
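For reference, a minimal sketch of what that could look like, assuming the out-of-tree ixgbe driver (the one that accepts the RSS/MQ module parameters) and a hypothetical interface name eth2; exact parameters and device names depend on your setup:

  rmmod ixgbe
  modprobe ixgbe RSS=1,1                 # one RSS queue per port
  tc qdisc add dev eth2 root tbf rate 1gbit burst 256kb latency 50ms   # crude egress rate limit for testing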
One important thing that I did not write from the start: this is the real
internet, so this is not a LAN but a WAN.
I have an average latency of 27 ms between the hosts.
--- yy.yy.74.11 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9012ms
rtt min/avg/max/mdev = 27.203/27.444/27.791/0.23
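For what it is worth, a latency like that can by itself explain numbers in this range for a single TCP stream. A rough, purely illustrative bandwidth-delay calculation (the window sizes below are my own arithmetic, not values from this thread):

  throughput <= window / RTT
  63 Mbit/s ~= 7.9 MB/s, so the implied effective window is ~7.9 MB/s * 0.027 s ~= 213 KB
  filling 10 Gbit/s at 27 ms RTT would need roughly 10 Gbit/s * 0.027 s ~= 34 MB of window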
Based on the info you provided I would say one possible red flag would
be the flow control bits in the statistics. Specifically:
> tx_flow_control_xon: 0
> rx_flow_control_xon: 164
> tx_flow_control_xoff: 0
> rx_flow_control_xoff: 164
> rx_csum_offload_errors: 1
The fact
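If flow control does turn out to be the suspect, the usual way to inspect and toggle it is ethtool; a minimal sketch, assuming a hypothetical interface name eth2:

  ethtool -a eth2                   # show current pause (flow control) settings
  ethtool -S eth2 | grep -i flow    # watch the xon/xoff counters
  ethtool -A eth2 rx off tx off     # disable pause frames for a test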
03:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit
SFI/SFP+ Network Connection (rev 01)
Subsystem: Intel Corporation Ethernet Server Adapter X520-2
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop-
ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Ca
On 08/12/2013 12:09 PM, Alexey Stoyanov wrote:
Hello
I've got one issue, and it seems I need help from the driver developers.
I have some servers located in different datacenters around
Russia; we use mostly 82575/82576 Intel NICs managed by the e1000e and
igb drivers. When I test speed with iperf from one 82576 card to
another, everything works well, i
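For anyone trying to reproduce this, the output earlier in the thread looks like classic iperf2 (default port 5001); a minimal sketch of that kind of test, with hypothetical host names and an explicitly enlarged TCP window to rule out window limits on a high-latency path:

  server$ iperf -s -w 8M
  client$ iperf -c server.example -t 20 -w 8M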