Well, I managed to concoct an updated test, this time with 1Gs going into a 10G: a 2.6.23-rc8 kernel on a system with four dual-port 82546GBs, connected to an HP ProCurve 3500 series switch with a 10G link to a system running 2.6.18-8.el5 (I was having difficulty getting cxgb3 going on my [...]).
I was perusing Documentation/networking/bonding.txt in a 2.6.23-rc5 tree
and came across the following passage discussing round-robin scheduling:
Note that this out of order delivery occurs when both the
sending and receiving systems are utilizing a multiple
interface bond.
Rick Jones [EMAIL PROTECTED] wrote:
[...]
Note that this out of order delivery occurs when both the
sending and receiving systems are utilizing a multiple
interface bond. Consider a configuration in which a
balance-rr bond feeds into a single higher capacity link [...]
That said, it's certainly plausible that, for a given set of N
ethernets all enslaved to a single balance-rr bond, the individual
ethernets could get out of sync, as it were (e.g., one running a fuller
tx ring, and thus running behind the others).
That is the scenario of which I was thinking.
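As a toy illustration of that skew (my own sketch, not the bonding driver's code): stripe packet sequence numbers round-robin across four hypothetical slaves, give one slave a deeper initial tx backlog, and watch its packets reach the wire late. In Python:

# Toy model of balance-rr striping -- an illustration only, not the
# bonding driver.  Slave 2's initial backlog is a made-up stand-in
# for a fuller tx ring (a slave running behind the others).

NUM_SLAVES = 4
SERVICE_TIME = 1.0                # time to serialize one packet
backlog = [0.0, 0.0, 3.0, 0.0]    # slave 2 starts 3 packet-times behind

next_free = backlog[:]            # when each slave's queue next drains
arrivals = []                     # (wire time, packet sequence number)

for seq in range(16):
    slave = seq % NUM_SLAVES      # round-robin slave selection
    depart = next_free[slave] + SERVICE_TIME
    next_free[slave] = depart
    arrivals.append((depart, seq))

print([seq for _, seq in sorted(arrivals)])
# -> [0, 1, 3, 4, 5, 7, 8, 9, 11, 2, 12, 13, 15, 6, 10, 14]
# Slave 2's packets (2, 6, 10, 14) land late: out-of-order delivery
# even though each individual slave preserves order.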
Rick Jones [EMAIL PROTECTED] wrote:
[...]
If bonding is the only feeder of the devices, then for a continuous
flow of traffic, all the slaves will generally receive packets (from
the kernel, for transmission) at pretty much the same rate, and so
they won't tend to get ahead or behind.
I could [...]
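For contrast, running the same toy model with no initial backlog and equal service rates shows why a bond that is the sole feeder tends to stay in order: the slaves never get ahead of or behind one another, so the round robin reassembles in sequence.

# Same toy model as above, but every slave drains at the same rate
# with no initial backlog -- the "bonding is the only feeder" case.

NUM_SLAVES = 4
SERVICE_TIME = 1.0
next_free = [0.0] * NUM_SLAVES

arrivals = []
for seq in range(16):
    slave = seq % NUM_SLAVES
    depart = next_free[slave] + SERVICE_TIME
    next_free[slave] = depart
    arrivals.append((depart, seq))

print([seq for _, seq in sorted(arrivals)])
# -> [0, 1, 2, ..., 15]: in order, as long as the slaves stay in step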