On 12/22/05, Gleb Smirnoff <[EMAIL PROTECTED]> wrote:
> On Thu, Dec 22, 2005 at 12:37:53PM +0200, Danny Braniss wrote:
> D> > On Thu, Dec 22, 2005 at 12:24:42PM +0200, Danny Braniss wrote:
> D> > D> ------------------------------------------------------------
> D> > D> Server listening on TCP port 5001
> D> > D> TCP window size: 64.0 KByte (default)
> D> > D> ------------------------------------------------------------
> D> > D> [  4] local 132.65.16.100 port 5001 connected with [6.0/SE7501WV2] port 58122
> D> > D> (intel westvill)
> D> > D> [ ID] Interval       Transfer     Bandwidth
> D> > D> [  4]  0.0-10.0 sec  1.01 GBytes   867 Mbits/sec
> D> > D> [  4] local 132.65.16.100 port 5001 connected with [5.4/SE7501WV2] port 55269
> D> > D> (intel westvill)
> D> > D> [ ID] Interval       Transfer     Bandwidth
> D> > D> [  4]  0.0-10.0 sec   967 MBytes   811 Mbits/sec
> D> > D> [  5] local 132.65.16.100 port 5001 connected with [6.0/SR1435VP2] port 58363
> D> > D> (intel dual xeon/emt64)
> D> > D> [ ID] Interval       Transfer     Bandwidth
> D> > D> [  5]  0.0-10.0 sec   578 MBytes   485 Mbits/sec
> D> > D>
> D> > D> I've run this several times, and the results are very similar.
> D> > D> I also tried i386, with the same bad results.
> D> > D> All hosts are connected at 1 Gbit to the same switch.
> D> >
> D> > So we see a big performance drop between the SE7501WV2 and the
> D> > SR1435VP2. Let's compare the NIC hardware. Can you please show
> D> > pciconf -lv | grep -A3 ^em on both motherboards?
> D>
> D> on a SE7501WV2:
> D> [EMAIL PROTECTED]:7:0:   class=0x020000 card=0x341a8086 chip=0x10108086 rev=0x01 hdr=0x00
> D>     vendor   = 'Intel Corporation'
> D>     device   = '82546EB Dual Port Gigabit Ethernet Controller (Copper)'
> D>     class    = network
> D>
> D> on a SR1435VP2:
> D> [EMAIL PROTECTED]:3:0:   class=0x020000 card=0x34668086 chip=0x10768086 rev=0x05 hdr=0x00
> D>     vendor   = 'Intel Corporation'
> D>     device   = '82547EI Gigabit Ethernet Controller'
> D>     class    = network
>
> The first one, the 82546EB, is attached to the fast PCI-X bus, while the
> 82547EI is on the CSA bus. CSA is twice as fast as the old PCI bus and can
> handle about 266 MByte/s. I'm not sure, but it may have the same ~50%
> overhead as the old PCI bus.
>
> Probably our em(4) driver is not optimized enough and makes too many
> accesses to the PCI bus, thus using more bus bandwidth than is needed to
> move the traffic itself. In that case a NIC on the slower bus (nominally
> still enough for Gigabit) ends up much slower than a NIC on the faster bus.
> (This paragraph is my own theory; it may be complete bullshit.)

CSA bus? I've never heard of it.

To get the best gigabit performance you really want the NIC on PCI Express;
there I see around 930 Mbit/s. I'm not really familiar with this motherboard/LOM.
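
Taking Gleb's 266 MByte/s CSA figure at face value (rough arithmetic on an
assumption I can't verify): line-rate gigabit is about 125 MByte/s of payload
in one direction, so that link only has roughly 2x headroom to begin with. If
descriptor reads/writes, register accesses and bus protocol overhead eat a big
chunk of it, the achievable rate can fall well below line rate, while the
82546EB on PCI-X has enough headroom that the same overhead never shows up.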

You say you run iperf -s on the server side, but what parameters are you
using on the client end of the test?
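
(For comparison, the sort of client invocation I'd use here, just as an
example and not necessarily what you ran:

    iperf -c 132.65.16.100 -t 10

optionally with -w 64k to match the server's 64 KByte window, or -P 4 to see
whether parallel streams change the picture.)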

Jack
