On Thu, 19 Sep 2013, Florian Crouzat wrote:
On 18/09/2013 20:34, Jeff Weber wrote:
I am looking to create a 2-node Active/Passive firewall cluster. I am
an experienced Linux user, but new to HA clusters. I have scanned
"Clusters From Scratch" and "Pacemaker Explained". I found these docs
helpful, but a bit overwhelming, being new to HA clusters.
My goals:
* create 2-node Active/Passive firewall cluster
* Each FW node has an external, and internal interface
* Cluster software presents external, internal VIPs
* VIPs must be co-located on same node
* One node is preferred for VIP locations
* If any interface fails on the node currently hosting the VIPs, the VIPs
move to the other node
For simplicity's sake, I'll start by creating the VIPs and add the firewall
plumbing in the future.
My config:
CentOS-6.3 based distro +
corosync-1.4.1-1
pacemaker-1.1.8-1
pcs-0.9.26-1
resource-agents-3.9.2-12
and all required dependencies
My questions:
This sounds like a common use case, but I could not find an
example/HOWTO. Did I miss it?
Do I have the correct HA cluster packages, versions to start work?
Do I also need the cman and ccs packages?
How many interfaces should each cluster node have?
2 interfaces: internal, external
or
3 interfaces: internal, external, monitor
Do I need to configure corosync.conf/totem/interface/bindnetaddr, and if
so, which network should it bind to?
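For reference, a minimal totem interface stanza for corosync 1.x might look
like the sketch below. The addresses are placeholders; bindnetaddr takes the
network address (not a host IP) of the subnet corosync should bind to,
typically your internal or a dedicated cluster subnet:

    totem {
        version: 2
        interface {
            ringnumber: 0
            bindnetaddr: 192.168.10.0    # network address, not a host IP
            mcastaddr: 239.255.1.1       # multicast group for cluster traffic
            mcastport: 5405
        }
    }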
$1M question:
How do I configure the cluster to monitor all internal and external cluster
interfaces and perform failover? Here's my estimate (a pcs sketch follows
the list):
* create the external VIP as an IPaddr2 resource bound to the external interface
* create the internal VIP as an IPaddr2 resource bound to the internal interface
* co-locate both VIPs together
* specify a location constraint for preferred node
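A minimal sketch of that estimate in pcs syntax. All IPs, netmasks, NIC
names, and the node name fw1 are placeholders, and pcs syntax has changed
between releases, so check "pcs help" on your version:

    # external and internal VIPs, each tied to its interface
    pcs resource create vip-ext ocf:heartbeat:IPaddr2 \
        ip=203.0.113.10 cidr_netmask=24 nic=eth0 op monitor interval=30s
    pcs resource create vip-int ocf:heartbeat:IPaddr2 \
        ip=192.168.10.10 cidr_netmask=24 nic=eth1 op monitor interval=30s
    # keep both VIPs on the same node, external first
    pcs constraint colocation add vip-int with vip-ext INFINITY
    pcs constraint order vip-ext then vip-int
    # prefer one node while it is healthy
    pcs constraint location vip-ext prefers fw1=100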
Any help would be appreciated,
thanks
Jeff
I have several two-node firewall clusters running pacemaker+cman (since
EL6.4) and they work perfectly. My setup is as follows:
Both nodes boot into a "passive" firewall state (via chkconfig). In this
state, only corosync traffic is allowed between the nodes (plus admin access
on non-VIP IPs). From that state, they both start cman+pacemaker, and via a
location preference plus 3 ping nodes, the node with the best score starts
the resources.
Resources are a group of 30+ IPaddr2 addresses, iptables rules, and custom
daemons such as bind, postfix, ldirectord, etc. All resources are colocated
and ordered so they all run on the same node and start in the correct order
(first I get the VIPs, then I start the firewall, then I bind the daemons,
etc.).
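As a rough sketch, that kind of stack can be expressed as a pcs group, since
a group implies both colocation and start order among its members. The
resource names here are hypothetical and must have been created beforehand:

    # started left to right, stopped right to left, all on one node
    pcs resource group add fw-stack vip-ext vip-int fw-rules named postfix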
The VIPs are not really monitored, as pacemaker doesn't really do that: the
monitor action just checks that the IP is still present, in some sort of
"sudo ip addr ls | fgrep <ip>". If you unplug the network cable, it won't
see it. That's where you define your ping nodes wisely, so that you can
monitor connectivity to a given subnet/gateway from all nodes and decide
which one is the best connected in case of incident (see the sketch below).
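A sketch of that ping-node setup in pcs, under these assumptions: the
gateway IPs are placeholders; ocf:pacemaker:ping publishes a node attribute
(pingd by default) scored as reachable hosts times multiplier; and the rule
pushes the VIPs off any node that can no longer reach any ping target. Rule
syntax may differ on older pcs releases:

    pcs resource create ping ocf:pacemaker:ping \
        host_list="192.168.10.1 203.0.113.1" multiplier=1000 \
        op monitor interval=15s --clone
    # ban the VIPs from nodes with no connectivity
    pcs constraint location vip-ext rule score=-INFINITY \
        pingd lt 1000 or not_defined pingd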
If you like, I can paste configuration files (cluster.conf + CIB)
I've been running active/failover firewall clusters with heartbeat since
about 2000, and one suggestion that I would make is this: if you can leave
all the daemons running all the time, the failover process is far more
robust (and faster, since you don't have daemons to start). If you set
net.ipv4.ip_nonlocal_bind, you can even have the daemons bind at startup to
VIP addresses that don't yet exist on the node.
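For example, a minimal sketch of enabling that, both persistently and for
the running system:

    # allow daemons to bind() to addresses not currently configured locally
    echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf
    sysctl -w net.ipv4.ip_nonlocal_bind=1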
If you do not have to have the daemons bound to the VIP, the fact that they
are always running on the backup box gives you a quick way to check whether
a failover would solve the problem: have a client connect directly to the
second box. The drawback is that someone may configure something to point
directly at a box rather than at a VIP, and you won't detect it (without log
analysis) until the box they point at actually goes down.
David Lang