Andrew Harvey wrote:
Hi, thanks for highlighting that.

I made the changes to /kernel/drv/e1000g.conf and did an 'update_drv e1000g' + 
reboot. Unfortunately it hasn't really solved the performance problem - the 
figures I am getting now are mixed: 29.4MB/s down (2MB/s better) and 74.9MB/s 
up (13MB/s worse).
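(For anyone following the thread, the kind of edit involved looks roughly like this; the property names come from the stock e1000g.conf, and the values are illustrative only, not necessarily the ones actually used here:

# excerpt from /kernel/drv/e1000g.conf -- the stock file carries one
# comma-separated value per driver instance; a single value is shown here
NumTxDescriptors=2048;
NumRxDescriptors=2048;
FlowControl=3;          # 0=none, 1=receive, 2=transmit, 3=both

followed by 'update_drv e1000g' and a reboot, as above.)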

However UDP bandwidth has increased somewhat. Here is my UDP test, with the 
server on the receiving end:

aeon:~ andrewharvey$ iperf -c 192.168.1.100 -fM -u -t 10 -b 1100M
...
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1399 MBytes    140 MBytes/sec
[  3] Sent 997947 datagrams
[  3] Server Report:
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[  3]  0.0-10.2 sec    493 MBytes  48.1 MBytes/sec  14.946 ms 646189/997947 
(65%)

That's 65% packet loss compared to 79% earlier, and a 26MB/s improvement.
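For completeness, the receiving end in these UDP runs is just a plain iperf UDP server, something along the lines of:

iperf -s -u -fM

(Bumping the UDP receive socket buffer with -w could also be worth trying.)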

Now with the server sending UDP packets:

r...@seraph:~# iperf -c Aeon.lan -fM -u -t 10 -b 1100M
------------------------------------------------------------
Client connecting to Aeon.lan, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 0.05 MByte (default)
------------------------------------------------------------
[  4] local 192.168.1.100 port 46760 connected with 192.168.1.67 port 5001
[  4]  0.0-10.0 sec    814 MBytes  81.3 MBytes/sec
[  4] Sent 580566 datagrams
[  4] Server Report:
[  4]  0.0-10.0 sec    809 MBytes  80.8 MBytes/sec  0.316 ms 3720/580565 (0.64%)
[  4]  0.0-10.0 sec  1 datagrams received out-of-order

That's 0.64% packet loss compared to 24% from earlier, and a 20MB/s improvement.
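As a rough sanity check (assuming a gigabit link and the standard 1500-byte MTU), the best case for 1470-byte datagrams works out to:

  1470 B payload + 28 B UDP/IP + 38 B Ethernet framing = 1536 B on the wire
  1,000,000,000 bit/s / (1536 * 8 bit) = ~81,400 datagrams/s
  81,400 * 1470 B = ~114 MBytes/s (in iperf's MByte units)

so 80.8 MB/s is roughly 70% of line rate, and 48.1 MB/s in the other direction is well under half of it.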

However, the performance is still asymmetric and not good enough; there is clearly still a problem somewhere.

I don't think the issue is related to TCP Segmentation Offload: the previously reported bug in that area has been resolved, and that bug should affect TCP traffic only.

r...@seraph:~# kstat -m e1000g -i 0
...
        XOFFs_Recvd                     106
        XOFFs_Xmitd                     0
        XONs_Recvd                      2
        XONs_Xmitd                      0
        Xmit_TCP_Seg_Contexts           0
        Xmit_TCP_Seg_Contexts_Fail      0
        Xmit_with_No_CRS                0

I am afraid I am not a TCP/IP expert - does this look better?

Anyway, we have at least eliminated one thing and are perhaps one step closer to finding a solution.

This point (the received XOFF pause frames) is suspicious. Could you try "dladm set-linkprop -p flowctrl=no e1000g0"? And if you don't have a spare switch to swap in, could you try connecting the client directly to the Solaris server?
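To verify the change (and what the link actually negotiated), something like this should show it, and re-checking the XOFF counter during a transfer would tell whether pause frames are still coming in:

dladm show-linkprop -p flowctrl e1000g0
kstat -m e1000g -i 0 -s XOFFs_Recvd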

Thanks,

Andrew


