Hi Auke,

Based on the discussion in this thread, I am inclined to believe that
lack of PCI-e bus bandwidth is NOT the issue.  The theory is that the
extra packet handling associated with TCP acknowledgements is pushing
the PCI-e x1 bus past its limits.  However, the evidence seems to show
otherwise:

(1) Bill Fink has reported the same problem on a NIC with a 133 MHz
64-bit PCI connection.  That connection can transfer data at 8Gb/s.
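A quick back-of-the-envelope check (my own numbers, not from the thread) of the raw bus figures being compared, assuming PCIe gen1 signaling for the x1 slot:

```python
# Theoretical peak bandwidth of the two buses under discussion.
# Assumptions: the PCI-X slot runs at 64 bits / 133 MHz, and the
# PCI-e x1 slot is gen1 (2.5 GT/s per lane, 8b/10b encoding).

pcix_peak_gbps = 133e6 * 64 / 1e9      # 64-bit bus at 133 MHz -> ~8.5 Gb/s raw
pcie_x1_gbps = 2.5 * 8 / 10            # 2.5 GT/s minus 8b/10b overhead -> 2.0 Gb/s

print(f"PCI-X 64/133 peak:  {pcix_peak_gbps:.1f} Gb/s")
print(f"PCIe x1 gen1 peak:  {pcie_x1_gbps:.1f} Gb/s")
```

Either way, both buses have headroom well above gigabit wire speed, which is the point being made.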

> That was even a PCI-X connection, which is known to have extremely good
> latency numbers, IIRC better than PCI-e? (?) which could account for a
> lot of the latency-induced lower performance...

> also, 82573's are _not_ a server part and were not designed for this
> usage. 82546's are, and that really does make a difference.

I'm confused. It DOESN'T make a difference! Using 'server grade' 82546's
on a PCI-X bus, Bill Fink reports the SAME loss of throughput with TCP
full duplex that we see on a 'consumer grade' 82573 attached to a
PCI-e x1 bus.

Just like us, when Bill goes from TCP to UDP, he gets wire speed back.

Cheers,
        Bruce
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html