Hi there, I hope someone here can help. Yesterday I upgraded my file server from Nexenta 2.0 (b104+) to the new 3.0 Beta 2 release (b134). Unfortunately my gigabit network performance is now abysmal, with asymmetric TCP speeds of just 27 MB/s down and 88 MB/s up. Before the upgrade - just an hour earlier - it had been running at 117 MB/s in both directions, almost link capacity.
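In case it is useful, here is the arithmetic behind the "almost link capacity" claim. It does not touch the server at all; it just converts the iperf MB/s figures to a rough fraction of gigabit line rate (treating 1 MB/s as 8 Mbit/s, so slightly underestimating, since iperf's -fM uses 2^20-byte MBytes):

```shell
# Convert the iperf figures (MB/s) to approximate line rate.
# 27 and 88 MB/s are the post-upgrade down/up speeds; 117 MB/s
# is what the link did before the upgrade.
for mbs in 27 88 117; do
    awk -v m="$mbs" 'BEGIN {
        printf "%3d MB/s ~= %4d Mbit/s (%2.0f%% of gigabit)\n", m, m*8, m*8/10
    }'
done
```

So receive throughput has dropped from roughly 94% of line rate to roughly 22%.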
My network adapter is an Intel Pro/1000 PT (single port, EXPI9400PTBLK) (see http://www.intel.com/products/server/adapters/pro1000pt/pro1000pt-overview.htm) with the Intel 82571GB controller. The adapter is listed as being compatible on the OpenSolaris HCL (see http://www.sun.com/bigadmin/hcl/data/components/details/2798.html). Below is some more info about my setup. I have also attached the output from 'prtconf -pv' and 'dmesg' for anyone who wishes to wade through them.

r...@seraph:~# uname -a
SunOS seraph 5.11 NexentaOS_134b i86pc i386 i86pc Solaris

The network driver is e1000g:

r...@seraph:~# dmesg
...
Apr 14 11:25:37 seraph pcplusmp: [ID 805372 kern.info] pcplusmp: pciex8086,107d (e1000g) instance 0 irq 0x18 vector 0x60 ioapic 0xff intin 0xff is bound to cpu 1
Apr 14 11:25:38 seraph mac: [ID 469746 kern.info] NOTICE: e1000g0 registered
Apr 14 11:25:38 seraph e1000g: [ID 766679 kern.info] Intel(R) PRO/1000 Network Connection, Driver Ver. 5.3.22
...

r...@seraph:~# ifconfig e1000g0
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255
        ether 0:15:17:be:2a:7c

r...@seraph:~# dladm show-linkprop
LINK     PROPERTY        PERM VALUE     DEFAULT   POSSIBLE
e1000g0  speed           r-   1000      1000      --
e1000g0  autopush        --   --        --        --
e1000g0  zone            rw   --        --        --
e1000g0  duplex          r-   full      full      half,full
e1000g0  state           r-   up        up        up,down
e1000g0  adv_autoneg_cap rw   1         1         1,0
e1000g0  mtu             r-   1500      1500      1500-9216
e1000g0  flowctrl        rw   bi        bi        no,tx,rx,bi
e1000g0  adv_1000fdx_cap r-   1         1         1,0
e1000g0  en_1000fdx_cap  rw   1         1         1,0
e1000g0  adv_1000hdx_cap r-   0         0         1,0
e1000g0  en_1000hdx_cap  r-   0         0         1,0
e1000g0  adv_100fdx_cap  r-   1         1         1,0
e1000g0  en_100fdx_cap   rw   1         1         1,0
e1000g0  adv_100hdx_cap  r-   1         1         1,0
e1000g0  en_100hdx_cap   rw   1         1         1,0
e1000g0  adv_10fdx_cap   r-   1         1         1,0
e1000g0  en_10fdx_cap    rw   1         1         1,0
e1000g0  adv_10hdx_cap   r-   1         1         1,0
e1000g0  en_10hdx_cap    rw   1         1         1,0
e1000g0  maxbw           rw   --        --        --
e1000g0  cpus            rw   --        --        --
e1000g0  priority        rw   high      high      low,medium,high
e1000g0  tagmode         rw   vlanonly  vlanonly  normal,vlanonly
e1000g0  forward         rw   1         1         1,0
e1000g0  default_tag     rw   1         1         --
e1000g0  learn_limit     rw   1000      1000      --
e1000g0  learn_decay     rw   200       200       --
e1000g0  stp             rw   1         1         1,0
e1000g0  stp_priority    rw   128       128       --
e1000g0  stp_cost        rw   auto      auto      --
e1000g0  stp_edge        rw   1         1         1,0
e1000g0  stp_p2p         rw   auto      auto      true,false,auto
e1000g0  stp_mcheck      rw   0         0         1,0
e1000g0  protection      rw   --        --        mac-nospoof, ip-nospoof, restricted
e1000g0  allowed-ips     rw   --        --        --

I have been benchmarking network performance using Iperf. The file server is called 'seraph', and my client 'aeon' (a late-2008 unibody MacBook Pro). My switch is a 5-port Cisco/Linksys SD2005. As I mentioned at the outset, this setup was running fine until the moment I upgraded.

Invoking the test from Aeon (testing server's receive bandwidth):

aeon:~ andrewharvey$ iperf -c 192.168.1.100 -fM -t 10
------------------------------------------------------------
Client connecting to 192.168.1.100, TCP port 5001
TCP window size: 0.13 MByte (default)
------------------------------------------------------------
[  3] local 192.168.1.67 port 56673 connected with 192.168.1.100 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  270 MBytes  27.0 MBytes/sec

r...@seraph:~# iperf -s -fM
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 0.12 MByte (default)
------------------------------------------------------------
[  4] local 192.168.1.100 port 5001 connected with 192.168.1.67 port 56673
[  4]  0.0-10.0 sec  270 MBytes  27.0 MBytes/sec

Doing it from the other direction, invoking the same test from Seraph (testing send bandwidth):

r...@seraph:~# iperf -c Aeon.lan -fM -t 10
------------------------------------------------------------
Client connecting to Aeon.lan, TCP port 5001
TCP window size: 0.05 MByte (default)
------------------------------------------------------------
[  4] local 192.168.1.100 port 52761 connected with 192.168.1.67 port 5001
[  4]  0.0-10.0 sec  883 MBytes  88.2 MBytes/sec

aeon:~ andrewharvey$ iperf -s -fM
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 0.25 MByte (default)
------------------------------------------------------------
[  4] local 192.168.1.67 port 5001 connected with 192.168.1.100 port 52761
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  883 MBytes  88.2 MBytes/sec

Here is a UDP test invoked from the client Aeon. Note the jitter and packet loss.

aeon:~ andrewharvey$ iperf -c 192.168.1.100 -fM -u -t 10 -b 1100M
------------------------------------------------------------
Client connecting to 192.168.1.100, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 0.01 MByte (default)
------------------------------------------------------------
[  3] local 192.168.1.67 port 57997 connected with 192.168.1.100 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1099 MBytes  110 MBytes/sec
[  3] Sent 783664 datagrams
[  3] Server Report:
[ ID] Interval       Transfer     Bandwidth       Jitter     Lost/Total Datagrams
[  3]  0.0-10.3 sec  230 MBytes  22.4 MBytes/sec  15.270 ms  619898/783662 (79%)

And the same UDP test, invoked from the server. The jitter and packet loss are much less.

r...@seraph:~# iperf -c Aeon.lan -fM -u -t 10 -b 1100M
------------------------------------------------------------
Client connecting to Aeon.lan, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 0.05 MByte (default)
------------------------------------------------------------
[  4] local 192.168.1.100 port 36509 connected with 192.168.1.67 port 5001
[  4]  0.0-10.0 sec  805 MBytes  80.5 MBytes/sec
[  4] Sent 574392 datagrams
[  4] Server Report:
[  4]  0.0-10.0 sec  614 MBytes  61.4 MBytes/sec  0.206 ms  136275/574391 (24%)
[  4]  0.0-10.0 sec  1 datagrams received out-of-order

Finally, here are some network stats from the server and client respectively.
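One more note on the UDP tests before the netstat output: to double-check the loss percentages iperf reported, I redid the arithmetic on the Lost/Total columns from the two server reports above. It confirms the asymmetry - the server drops roughly four out of five datagrams when receiving, versus about one in four going the other way:

```shell
# Recompute the UDP loss percentages from the Lost/Total datagram
# counts quoted in the two iperf server reports above.
awk 'BEGIN {
    printf "aeon -> seraph: %d/%d lost = %.0f%%\n", 619898, 783662, 619898 * 100 / 783662
    printf "seraph -> aeon: %d/%d lost = %.0f%%\n", 136275, 574391, 136275 * 100 / 574391
}'
```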
There do not appear to be any errors or collisions:

r...@seraph:~# netstat -i -I e1000g0
Name     Mtu   Net/Dest        Address         Ipkts    Ierrs  Opkts    Oerrs  Collis  Queue
e1000g0  1500  Seraph-NAS.lan  Seraph-NAS.lan  1298495  0      1315912  0      0       0

aeon:~ andrewharvey$ netstat -i -I en0
Name  Mtu   Network     Address            Ipkts     Ierrs  Opkts     Oerrs  Coll
en0   1500  <Link#4>    00:23:32:d4:c7:d8  26202857  0      37206489  0      0
en0   1500  aeon.local  fe80:4::223:32ff:  26202857  -      37206489  -      -
en0   1500  192.168.1   aeon.lan           26202857  -      37206489  -      -

Obviously I have tried Googling for a solution, but almost all of the reported problems regarding the e1000g driver appear to be old and now fixed. I would be grateful if someone could shed some light here, and perhaps suggest a few more things I could try.

Thanks,
Andrew
-- 
This message posted from opensolaris.org
_______________________________________________
networking-discuss mailing list
[email protected]
