Re: [Linux-cluster] Bonding Interfaces: Active Load Balancing & LACP

2012-06-12 Thread Eric
To: "linux-cluster@redhat.com" > Sent: Wednesday, June 6, 2012 9:12 PM > Subject: [Linux-cluster] Bonding Interfaces: Active Load Balancing & LACP > > > I'm currently using the HP Procurve 2824 24-port Gigabit Ethernet switch for a backside network for synchronizing

Re: [Linux-cluster] Bonding Interfaces: Active Load Balancing & LACP

2012-06-07 Thread Digimer
As an aside: I was using the DGS-3100 switches stacked. I also used the new generation of DGS-3120 switches stacked, and they are a *marked* improvement over the 3100 series. I've not gone back to re-test the other bond modes on these switches, as I must live within Red Hat's supported configuration

Re: [Linux-cluster] Bonding Interfaces: Active Load Balancing & LACP

2012-06-07 Thread Radu Rendec
I also experimented with D-Link DGS-3xxx switches and the bonding driver, but in a rather strange configuration: 2 distinct switches without any "knowledge" of each other, and with each server having NIC #1 connected to one switch and NIC #2 to the other. In my case, the bonding driver actually split
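
With two unstacked switches like this, 802.3ad/LACP cannot span both of them, so the bond has to run in a mode that needs no cooperation from the switches (active-backup, or one of the balance-* modes). A minimal way to see what the driver actually did with each slave, assuming the bond is named bond0 (the name is illustrative):

    # Shows the bonding mode in use, the currently active slave, and
    # per-slave MII status and link failure counts
    cat /proc/net/bonding/bond0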

Re: [Linux-cluster] Bonding Interfaces: Active Load Balancing & LACP

2012-06-07 Thread Kaloyan Kovachev
On Wed, 6 Jun 2012 21:12:13 -0700 (PDT), Eric wrote: > I'm currently using the HP Procurve 2824 24-port Gigabit Ethernet switch > for a backside network for synchronizing file systems between the nodes > in the group. Each host has 4 Gigabit NICs and the goal is to bond two of > the Gigabit NICs

Re: [Linux-cluster] Bonding Interfaces: Active Load Balancing & LACP

2012-06-06 Thread Digimer
I know that the only *supported* bond is Active/Passive (mode=1), which of course provides no performance benefit. I tested all types, using more modest D-Link DGS-3100 switches and all other modes failed at some point in failure and recovery testing. If you want to experiment, I'd suggest twe
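
For reference, a minimal sketch of the supported active-backup (mode=1) setup on a RHEL-style host, assuming eth0 and eth1 are the two NICs being bonded (device names and the address are illustrative):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.168.10.11
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none
    # mode=1 is active-backup; miimon=100 polls link state every 100 ms
    BONDING_OPTS="mode=1 miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (ifcfg-eth1 is identical apart from DEVICE)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

Only one slave carries traffic at a time, which is why this mode gives redundancy but no extra throughput.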

[Linux-cluster] Bonding Interfaces: Active Load Balancing & LACP

2012-06-06 Thread Eric
I'm currently using the HP Procurve 2824 24-port Gigabit Ethernet switch for a backside network for synchronizing file systems between the nodes in the group. Each host has 4 Gigabit NICs and the goal is to bond two of the Gigabit NICs together to create a 2 Gbps link from any host to any
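
A minimal sketch of the LACP (802.3ad, mode=4) variant on the Linux side, assuming eth2 and eth3 are the two NICs set aside for the backside network and that the two ProCurve ports they plug into are configured as an LACP trunk (device names, the address, and the hash policy are illustrative):

    # /etc/sysconfig/network-scripts/ifcfg-bond1
    DEVICE=bond1
    IPADDR=10.0.0.11
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none
    # 802.3ad negotiates the aggregate with the switch; layer3+4 hashing
    # spreads different flows across the two slaves
    BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4"

    # /etc/sysconfig/network-scripts/ifcfg-eth2 (ifcfg-eth3 is identical apart from DEVICE)
    DEVICE=eth2
    MASTER=bond1
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

Even with a working 802.3ad aggregate, each individual connection is hashed onto a single slave, so one TCP stream between two hosts still tops out at 1 Gbps; the 2 Gbps figure is only reachable in aggregate across multiple flows.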