It's load-balanced via a hash over the specified fields, so if you only
have two flows there's a 50% chance that they both end up on the same link.
Your real traffic will have many more flows, so it will even out.
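As a toy sketch of what the switch is doing (the ProCurve's actual hash
function isn't documented here, so the hash below is just illustrative):

```python
# Toy model of L4-based trunk load balancing: hash the flow's
# addresses and ports, take the result modulo the number of links.
# With only two flows and two links, a collision is a coin flip.
import hashlib

def pick_link(src_ip, dst_ip, src_port, dst_port, n_links=2):
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_links

# Two iperf flows differing only in source port: they may or may not
# land on the same link - that's the 50% chance.
a = pick_link("10.0.0.1", "10.0.0.2", 50000, 5201)
b = pick_link("10.0.0.1", "10.0.0.2", 50002, 5201)
print(a, b)
```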
Paul
2018-06-21 21:48 GMT+02:00 Jacob DeGlopper:
OK, that was a search-and-replace error in the original quote. This is
still something with your layer 3/4 load balancing.
iperf2 does not support setting the source port, but iperf3 does - that
might be worth a try.
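Something along these lines, assuming an iperf3 recent enough to have
the --cport option (check `iperf3 --help`); addresses and ports here
are placeholders:

```shell
# Run two streams with explicitly chosen, mixed odd/even source
# ports, so the L4 hash sees different inputs per flow.
iperf3 -c 10.0.0.2 -p 5201 --cport 50001 -t 10 &
iperf3 -c 10.0.0.2 -p 5202 --cport 50002 -t 10 &
wait
```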
-- jacob
On 06/21/2018 03:37 PM, mj wrote:
Hi Jacob,
Thanks for your reply. But I'm not sure I completely understand it. :-)
On 06/21/2018 09:09 PM, Jacob DeGlopper wrote:
In your example, where you see one link being used, I see an even source
IP paired with an odd destination port number for both transfers, or is
that a search and replace?
Consider trying some variation in source and destination IP addresses
and port numbers - unless you force it, iperf3 at least tends to pick
only even port numbers for the ephemeral source port, which leads to all
traffic being balanced to one link.
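To make that concrete, here is a toy version of the kind of XOR-based L4
hash some switches use (whether the 5412 hashes exactly this way is an
assumption; the port numbers are made up):

```python
# If the link is chosen from (src_port XOR dst_port) mod 2, then
# all-even ephemeral source ports against one fixed destination port
# produce a constant low bit - every flow maps to the same link.
def xor_link(src_port, dst_port, n_links=2):
    return (src_port ^ dst_port) % n_links

even_sources = [50000, 50002, 50004, 50006]
links = {xor_link(p, 5201) for p in even_sources}
print(links)  # a single link for every even source port

# An odd source port flips the low bit and lands on the other link.
print(xor_link(50001, 5201))
```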
Pff, the layout for the switch configuration was messed up. Sorry.
Here it is again, hopefully better this time:
Procurve chassis(config)# show trunk
Load Balancing Method: L4-based
Port | Name                 Type      | Group Type
---- + -------------------- --------- + ----- -----
Hi,
I'm trying out bonding to improve ceph performance on our cluster.
(currently in a test setup, using 1G NICs, instead of 10G)
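On the Linux side, a bond matching an L4-hashing switch trunk might look
roughly like this (a Debian-style /etc/network/interfaces sketch; the
interface names and address are placeholders, not from this cluster):

```
auto bond0
iface bond0 inet static
    address 192.168.1.10/24
    bond-slaves eno1 eno2
    # 802.3ad (LACP) to pair with the switch trunk
    bond-mode 802.3ad
    # layer3+4 hashing to match the switch's L4-based balancing
    bond-xmit-hash-policy layer3+4
    bond-miimon 100
```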
Setup like this on the ProCurve 5412 chassis:
Procurve chassis(config)# show trunk
Load Balancing Method: L4-based
Port | Name Type