Re: [LARTC] Re: gateway failover with linux

2007-07-19 Thread Mohan Sundaram

Abhijit Menon-Sen wrote:

> Hi Grant.
>
> At 2007-07-19 16:15:01 -0500, [EMAIL PROTECTED] wrote:
> >
> > I'm a bit confused, are you wanting a single Linux firewall /
> > router to have redundant internet connections, or to route
> > traffic to redundant systems behind it and intelligently
> > handle the failure of one or more of said redundant systems?
>
> Neither.
>
> I just want a hot standby for a single Linux firewall, such that clients
> behind it are not affected by a hardware failure on the firewall. If my
> configuration would allow me to someday promote the backup and run both
> firewall machines in a load-balancing configuration, so much the better.
>
> The following example looks very much like what I want:
>
> http://people.netfilter.org/pablo/conntrack-tools/testcase.html
>
> (Can anyone comment on whether I should stick with keepalived as
> described above, or try out ucarp?)
>
> > Will you please clarify what you are really wanting to do per
> > above and I'll be more than happy to try to point you in the
> > right direction.
>
> Thanks, I'd appreciate any advice you can give me.
>
> -- ams
In case your firewall is a proxy for some service, those connections 
will fail though - unless you can use a virtual interface with the same 
IP as the source for such connections.

I guess you'll use VRRP in conjunction with this for failover. It would 
make sense to use vrrpd with status tracking of the WAN gateway, but 
AFAIK no such feature exists as yet.


Mohan
___
LARTC mailing list
LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc


[LARTC] Re: gateway failover with linux

2007-07-19 Thread Abhijit Menon-Sen
Hi Grant.

At 2007-07-19 16:15:01 -0500, [EMAIL PROTECTED] wrote:
>
> I'm a bit confused, are you wanting a single Linux firewall /
> router to have redundant internet connections, or to route
> traffic to redundant systems behind it and intelligently
> handle the failure of one or more of said redundant systems?

Neither.

I just want a hot standby for a single Linux firewall, such that clients
behind it are not affected by a hardware failure on the firewall. If my
configuration would allow me to someday promote the backup and run both
firewall machines in a load-balancing configuration, so much the better.

The following example looks very much like what I want:

http://people.netfilter.org/pablo/conntrack-tools/testcase.html

(Can anyone comment on whether I should stick with keepalived as
described above, or try out ucarp?)
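
For reference, the keepalived side of such a setup might look roughly like
the following sketch (the interface name, virtual_router_id, VIP, and
notify-script path are assumptions, not taken from the test case):

```
# /etc/keepalived/keepalived.conf -- minimal VRRP failover sketch.
# The notify scripts would tell conntrackd to assume the primary or
# backup role; the path used here is an assumption.
vrrp_sync_group G1 {
    group {
        VI_1
    }
    notify_master "/etc/conntrackd/primary-backup.sh primary"
    notify_backup "/etc/conntrackd/primary-backup.sh backup"
    notify_fault  "/etc/conntrackd/primary-backup.sh fault"
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 61
    priority 80
    advert_int 1
    virtual_ipaddress {
        192.168.0.100/24 dev eth0
    }
}
```

ucarp can do the same address takeover with its --upscript/--downscript
hooks, but keepalived's sync groups and notify scripts make the conntrackd
integration more direct.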

> Will you please clarify what you are really wanting to do per
> above and I'll be more than happy to try to point you in the
> right direction.

Thanks, I'd appreciate any advice you can give me.

-- ams


Re: [LARTC] gateway failover with linux

2007-07-19 Thread Grant Taylor

On 07/19/07 12:02, Abhijit Menon-Sen wrote:
> I'm wondering if there's a good way to configure a Linux firewall box 
> to failover to a single backup server, while preserving connection 
> state.


I'm a bit confused, are you wanting a single Linux firewall / router to 
have redundant internet connections, or to route traffic to redundant 
systems behind it and intelligently handle the failure of one or more of 
said redundant systems?  I'm also not sure how conntrackd (comparable to 
OpenBSD's pfsync) comes into play here.  Or is there more than one 
Linux firewall / router that you are wanting to synchronize?  Or are you 
wanting connection tracking between the multiple systems behind the 
Linux firewall / router?  I think that all of these are possible to 
various degrees, though each uses a different method to achieve it.


> This question has been asked before, but the latest reference I can 
> find is from 2004, at which time Linux had no equivalent of OpenBSD's 
> pfsync, though Harald was said to be working on one.


*nod*  Conntrackd is the tool that you want to use to synchronize 
connection-tracking metadata between two systems; it is the closest thing 
to pfsync that Linux presently has (that I'm aware of).
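
A minimal conntrackd configuration for two such firewalls might look
roughly like this (a sketch only; the multicast group, local address, and
dedicated sync interface are assumptions):

```
# /etc/conntrackd/conntrackd.conf -- state-sync sketch between two
# firewalls over a dedicated link (eth2 here, an assumption).
Sync {
    Mode FTFW {
    }
    Multicast {
        IPv4_address 225.0.0.50
        Group 3780
        IPv4_interface 192.168.100.1
        Interface eth2
        Checksum on
    }
}

General {
    HashSize 8192
    HashLimit 65535
    LockFile /var/lock/conntrack.lock
    UNIX {
        Path /var/run/conntrackd.ctl
        Backlog 20
    }
}
```

FTFW mode gives reliable (acknowledged) replication; the example
configurations shipped with conntrack-tools cover the remaining details.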


> Did anything come of those efforts? Or is there now another 
> alternative?


Yes, conntrackd.


> Any examples or advice would be appreciated.


Will you please clarify what you are really wanting to do per above and 
I'll be more than happy to try to point you in the right direction.





Grant. . . .


Re: [LARTC] How to check an inactive slave in a bond?

2007-07-19 Thread Jay Vosburgh
olivier arsac <[EMAIL PROTECTED]> wrote:

[...]
>Scenario:
>your bond0 is running fine. it uses eth0 as active slave and eth2 as inactive
>slave (different cards/ different driver to be safe)
>some bozo reconfigures the switch port where your eth2 is plugged in and you
>don't notice it (the crucial point here)
>later on, your eth0 dies (or is unplugged by the brother of the first bozo)
>and bamm... your nice HA node is off-line.
>note: eth2 is still plugged and fine at the mii level.
>even an arp would return OK so going from mii to arp for the bond is not the
>right option.
>
>So. How could I check that an IP packet sent via eth2 would really reach its
>vlan?

One obvious remedy is better bozo control, but for purposes of
discussion, let's look at this as simply a question about assurance of
the inactive path.

The short answer to your question is that you can't do what
you're trying to do.  The bonding driver itself, as you've noted, is
reasonably good at detecting link state, and connectivity to local
network peers (via the ARP monitor), but doesn't provide full end-to-end
path validation for all active and inactive slaves.

The long answer is that end-to-end validation of the inactive
path is fairly complicated, and can be tricky to do correctly.  If an
inactive slave transmits something, it may cause updating of forwarding
tables on the network (either because ARP probes have this effect, or
because many switches snoop traffic to determine which destination is
reachable via which port), which is undesirable.  Inactive slaves, at
least in recent versions of bonding, also drop all incoming traffic to
prevent the same packet from being delivered multiple times (if, e.g., a
switch is flooding traffic to all ports of the bond for whatever
reason).

The ideal case is to issue, e.g., a ping (ICMP Echo Request) to
some IP address on the desired destination.  An IP-level probe is better
in the grand scheme of things because an IP packet is routable and can
reach off the local network (which an ARP cannot).  If we move up to
IPv6, this becomes more complicated, as the "inactive" slave would have
to participate in the IPv6 stateless address autoconfiguration
independently from the master, which also causes headaches for IPv6
snooping switches.

So, to achieve an actual end to end test from the "inactive"
slave to the peer of choice, it's necessary to isolate this traffic so
that it properly returns to the "inactive" slave (and isn't routed back
to the master).  This separate communication needs to take place on a
logically discrete network (which may also be physically discrete, as
appears to be the case in your situation).  If the "probe" network isn't
separate from the "real" network, then intermediate routers may send the
probes or replies over the wrong path, or improperly update forwarding
tables and so on.

The bonding driver today doesn't support this type of
"independent" slave activity; all slaves are considered to be minions of
the master, and aren't allowed to operate independently.

I will point out that the ARP monitor, for a subset of cases,
can come close to what you want.  The current versions of the bonding
driver support an "arp_validate" option, and can validate that the ARP
probes (which are broadcasts) sent out over the active slave actually
reach the inactive slave.  It doesn't validate the ARP replies, as those
are only received by the active slave, and it doesn't attempt to
transmit on the inactive slave.
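
As a concrete sketch, enabling the ARP monitor with validation might look
like this (the probe interval and target address are assumptions; use your
own gateway):

```
# Load bonding in active-backup mode with ARP monitoring instead of MII:
# probe arp_ip_target every arp_interval milliseconds, and validate the
# probes/replies observed on the slaves.
modprobe bonding mode=active-backup arp_interval=1000 \
    arp_ip_target=192.168.1.1 arp_validate=all
```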

>I tried a probably naive thing:
>ifenslave -d bond0 eth2
>ifconfig eth2 $ip netmask 255.255.255.255
>route add -host $target eth2
>ping $target
>(target is the gateway, ip is a reserved IP used only on this server to do that
>check)
>but it does not work as I hoped. Sometimes the ping is OK (but goes thru
>bond0), sometimes it blocks...
>The real question is how to do it properly (rather than how to fix my naive
>try).

I would hazard to guess that your problem here is likely one of
routing.  Maybe on the sender end, maybe on the reply end, maybe both.
On the sender end, you can force ping to use a particular interface via
the "-I" option, which will assure you that you're using the eth2 for
the transmission.  The question is going to be which path the reply
packet takes to get back to the sender.

The other problem, of course, is that you've removed your backup
link from the bond, so if the primary should fail while you've got it
running this ping test, you'll lose all connectivity.

One perhaps cheap and easy, but not 100% reliable, method would
be for you to periodically manually fail over the bond to whichever link
is inactive (via ifenslave -c bond0 ethX, cycling X through your set of
slave interfaces).
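
That periodic failover could be scripted along these lines (a sketch; the
bond and slave names are assumptions, and the cron interval is up to you):

```shell
#!/bin/sh
# Periodically rotate the active slave of a bond so that every path gets
# exercised.  next_slave is pure string logic: given a space-separated
# slave list and the current active slave, print the next one (wrapping
# around at the end of the list).
next_slave() {
    list="$1"; current="$2"; first=""; prev=""
    for s in $list; do
        [ -z "$first" ] && first="$s"
        if [ "$prev" = "$current" ]; then
            echo "$s"
            return 0
        fi
        prev="$s"
    done
    echo "$first"    # wrap around (or current slave not found)
}

# Example wiring, assuming bond0 with slaves eth0 and eth2 (run from
# cron every few hours):
#   active=$(cat /sys/class/net/bond0/bonding/active_slave)
#   ifenslave -c bond0 "$(next_slave "eth0 eth2" "$active")"
```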

The upside of this is that you'll exercise both paths
regularly, so any bozo-induced nonsense should become visible sooner
rather than later; the precise interval for "sooner" depending upon h

[LARTC] Re: gateway failover with linux

2007-07-19 Thread Abhijit Menon-Sen
At 2007-07-19 22:32:51 +0530, [EMAIL PROTECTED] wrote:
>
> I'm wondering if there's a good way to configure a Linux firewall box
> to failover to a single backup server, while preserving connection
> state.

Looks like this is it:

http://people.netfilter.org/pablo/conntrack-tools/

-- ams


[LARTC] gateway failover with linux

2007-07-19 Thread Abhijit Menon-Sen
Hi.

I'm wondering if there's a good way to configure a Linux firewall box to
failover to a single backup server, while preserving connection state.

This question has been asked before, but the latest reference I can find
is from 2004, at which time Linux had no equivalent of OpenBSD's pfsync,
though Harald was said to be working on one.

Did anything come of those efforts? Or is there now another alternative?

Any examples or advice would be appreciated.

Thank you.

-- ams


[LARTC] How to check an inactive slave in a bond?

2007-07-19 Thread olivier arsac
I'm using bonding in active-backup mode to guarantee maximum 
availability on some critical servers.
MII monitoring is active, so I can detect things like a dead card and/or 
an unplugged cable even on the inactive slave.
But how do I check that the inactive slave is properly 
configured/connected to the switch/vlan?
I ask this question because it has just bitten me in a part I'll keep 
undisclosed.


Scenario:
Your bond0 is running fine. It uses eth0 as the active slave and eth2 as 
the inactive slave (different cards / different drivers, to be safe).
Some bozo reconfigures the switch port where your eth2 is plugged in, and 
you don't notice it (the crucial point here).

Later on, your eth0 dies (or is unplugged by the brother of the first bozo)
and bamm... your nice HA node is off-line.
Note: eth2 is still plugged in and fine at the MII level.
Even an ARP would return OK, so going from MII to ARP monitoring for the 
bond is not the right option.


So. How could I check that an IP packet sent via eth2 would really reach 
its vlan?

I tried a probably naive thing:
ifenslave -d bond0 eth2
ifconfig eth2 $ip netmask 255.255.255.255
route add -host $target eth2
ping $target
(target is the gateway, ip is a reserved IP used only on this server to 
do that check)
but it does not work as I hoped. Sometimes the ping is OK (but goes 
thru bond0), sometimes it blocks...
The real question is how to do it properly (rather than how to fix my 
naive try).


Thank you for your help.

   Olivier








[LARTC] tc qdisc TEQL limited to two interfaces? [ 1.8Gbps ]

2007-07-19 Thread Leroy van Logchem

I'm using the following script to aggregate the bandwidth of one quad-port
gigabit ethernet controller (PCI Express).

#!/bin/bash
sysctl -w net.ipv4.tcp_reordering=30
ifconfig eth1 up
ifconfig eth2 up
ifconfig eth3 up
ifconfig eth4 up
modprobe sch_teql
tc qdisc add dev eth1 root teql0
tc qdisc add dev eth2 root teql0
tc qdisc del dev eth3 root teql0
tc qdisc del dev eth4 root teql0
ip link set dev teql0 up
ip addr flush dev eth1
ip addr flush dev eth2
ip addr flush dev eth3
ip addr flush dev eth4
ip addr flush dev teql0
ip addr add dev eth1 10.0.0.3/31
ip addr add dev eth2 10.0.0.5/31
#ip addr add dev eth3 10.0.0.7/31
#ip addr add dev eth4 10.0.0.9/31
ip addr add dev teql0 10.0.0.11/31
echo 0 > /proc/sys/net/ipv4/conf/eth1/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth2/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth3/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth4/rp_filter
route add -host 10.0.0.2 gw 10.0.0.10
route add -host 10.0.0.4 gw 10.0.0.10
route del -host 10.0.0.6 gw 10.0.0.10
route del -host 10.0.0.8 gw 10.0.0.10

This setup, using just two interfaces, gives a nice and stable iperf
bandwidth of 1.85Gbit/s (231MB/s).
But when I configure all four interfaces, the bandwidth drops below
1Gbit/s. Why?
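
For comparison, the four-interface configuration presumably flips the del
lines to add and uncomments the matching addresses and routes, along these
lines (a sketch that simply mirrors the /31 addressing scheme above):

```
# Four-interface variant: attach eth3/eth4 to teql0 rather than
# detaching them, and give them the same point-to-point scheme.
tc qdisc add dev eth3 root teql0
tc qdisc add dev eth4 root teql0
ip addr add dev eth3 10.0.0.7/31
ip addr add dev eth4 10.0.0.9/31
route add -host 10.0.0.6 gw 10.0.0.10
route add -host 10.0.0.8 gw 10.0.0.10
```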

Any tips or ideas are welcome!


--
Leroy
