>To: "linux-cluster@redhat.com"
>Sent: Wednesday, June 6, 2012 9:12 PM
>Subject: [Linux-cluster] Bonding Interfaces: Active Load Balancing & LACP
>
>
>I'm currently using the HP Procurve 2824 24-port Gigabit Ethernet switch to
>form a backside network for synchronizing file systems [...]

As an aside: I was using the DGS-3100 switches stacked. The new generation
of DGS-3120 switches I also used stacked, and they are a *marked*
improvement over the 3100 series. I've not gone back to re-test the other
bond modes on these switches, as I must live within Red Hat's supported
configuration.
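
For reference, a minimal sketch of that supported active-backup setup on a
RHEL 6-era node; the interface names (eth0/eth1), the address, and the
miimon value below are placeholders, not anything from a tested config:

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  BOOTPROTO=none
  ONBOOT=yes
  IPADDR=192.168.10.11              # address on the sync network (assumed)
  NETMASK=255.255.255.0
  BONDING_OPTS="mode=1 miimon=100"  # active-backup; poll link every 100 ms

  # /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise for eth1)
  DEVICE=eth0
  BOOTPROTO=none
  ONBOOT=yes
  MASTER=bond0
  SLAVE=yes
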
I also experimented with D-Link DGS-3xxx switches and the bonding driver,
but in a quite strange configuration: two distinct switches without any
"knowledge" of each other, and with each server having NIC #1 connected to
one switch and NIC #2 to the other. In my case, the bonding driver actually
spl[...]
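
One note on that split-switch layout: with no link between the two switches,
MII monitoring only sees the server's own carrier, so it can miss an
upstream failure. The bonding driver's ARP monitor is the usual alternative;
a sketch, with the probe target IP purely an assumption:

  # In ifcfg-bond0, probe a host reachable through both switches instead
  # of relying on local carrier state:
  BONDING_OPTS="mode=1 arp_interval=200 arp_ip_target=192.168.10.1"

You can then watch which slave is active with:

  grep "Currently Active Slave" /proc/net/bonding/bond0
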
On Wed, 6 Jun 2012 21:12:13 -0700 (PDT), Eric wrote:
> I'm currently using the HP Procurve 2824 24-port Gigabit Ethernet switch
> to form a backside network for synchronizing file systems between the
> nodes in the group. Each host has 4 Gigabit NICs and the goal is to bond
> two of the Gigabit NICs together [...]

I know that the only *supported* bond mode is Active/Passive (mode=1), which
of course provides no performance benefit.

I tested all of the modes using more modest D-Link DGS-3100 switches, and
all of the other modes failed at some point in failure and recovery testing.
If you want to experiment, I'd suggest twe[...]
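
If you do run that kind of failure and recovery testing yourself, a crude
but effective check (interface names assumed) is to drop the active slave
and watch the bond fail over and come back:

  grep "Currently Active Slave" /proc/net/bonding/bond0
  ip link set eth0 down    # simulate losing the active link
  grep "Currently Active Slave" /proc/net/bonding/bond0   # expect eth1 now
  ip link set eth0 up      # recovery; pulling cables at the switch is a
                           # more honest test than downing the interface
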
I'm currently using the HP Procurve 2824 24-port Gigabit Ethernet switch to
form a backside network for synchronizing file systems between the nodes in
the group. Each host has 4 Gigabit NICs and the goal is to bond two of the
Gigabit NICs together to create a 2 Gbps link from any host to any other
host.
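
Since the subject line mentions LACP: the bonding mode for an aggregated
2 Gbps link is 802.3ad (mode=4), with the matching ProCurve ports configured
as an LACP trunk. The Linux side would look something like the sketch below;
the option values are common starting points, not a recipe tested on this
hardware:

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  BONDING_OPTS="mode=4 miimon=100 lacp_rate=1 xmit_hash_policy=layer3+4"

Bear in mind that 802.3ad hashes per flow, so a single TCP stream between
two nodes still tops out at 1 Gbps; the extra capacity only shows up across
multiple concurrent streams. And as noted above, mode=1 remains the only
bond mode Red Hat supports for cluster traffic.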