Re: [Linux-HA] NFS active-active failover OCF RA.

2010-03-24 Thread Ben Timby
Done. I am subscribed to linux-ha-dev. Do you want me to repeat my posting there?

On Wed, Mar 24, 2010 at 2:37 PM, Florian Haas wrote:
> Awesome. I've wanted someone to write this for a while. :)
>
> Can you please subscribe to linux-ha-dev so we can do a proper patch
> review there?

[Linux-HA] NFS active-active failover OCF RA.

2010-03-24 Thread Ben Timby
I would like some opinions on the OCF RA I wrote. I needed to support an active-active setup for NFS, and googling found me no working solution, so I put one together. I have read these list archives and various resources around the 'net when putting this together. My testing is favorable so far, b…

Re: [Linux-HA] Duplicate ping

2009-03-24 Thread Ben Timby
A multicast MAC is used for the common IP. The switch will repeat the packet on all ports the mcast MAC is seen on.

On Tue, Mar 24, 2009 at 4:55 PM, Les Mikesell wrote:
> How does something like this relate to switches that want to learn addresses
> and limit delivery to the correct port?

Re: [Linux-HA] Duplicate ping

2009-03-24 Thread Ben Timby
You can have the same IP on more than one node if you are using CLUSTERIP. This module for iptables will look at each incoming packet and apply a hash algorithm to it. It then decides if the packet was meant for the local node or not. Each node has an identity. For example, you provide a --total-n…
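The mechanism described above can be sketched in a few lines: every node sees every packet (the shared IP answers ARP with a multicast MAC), each node hashes the source address, and exactly one node claims the packet. This is a simplified illustration only; the real iptables CLUSTERIP target uses the kernel's jhash and is configured with `--total-nodes` and `--local-node`, while `zlib.crc32` here is just a stand-in hash.

```python
# Simplified model of CLUSTERIP-style packet claiming (illustration only;
# the real iptables target uses the kernel's jhash, not crc32).
import zlib

TOTAL_NODES = 3  # corresponds to iptables' --total-nodes


def responsible_node(src_ip: str, total_nodes: int = TOTAL_NODES) -> int:
    """Return the node number (1..total_nodes) that should accept a
    packet from this source IP."""
    return zlib.crc32(src_ip.encode()) % total_nodes + 1


def accepts(local_node: int, src_ip: str) -> bool:
    """Would a node configured with this --local-node value accept
    the packet? All other nodes silently drop it."""
    return responsible_node(src_ip) == local_node


# Every packet is claimed by exactly one node, and the mapping is
# deterministic, so a given client always reaches the same node.
for ip in (f"192.168.1.{i}" for i in range(1, 101)):
    claimed_by = [n for n in range(1, TOTAL_NODES + 1) if accepts(n, ip)]
    assert len(claimed_by) == 1
```

Because the hash is computed independently on each node from the same inputs, no coordination is needed per packet; the nodes only have to agree on the node count and their own identities.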

[Linux-HA] DRBD + NFS toggle mount.

2009-03-20 Thread Ben Timby
Given two nodes that share a DRBD volume. The active node exports the volume over NFS. The passive node mounts the volume via NFS. The primary node mounts the volume directly. Upon failure of the primary node, one would want the secondary node to take control and mount the volume directly. How wo…

Re: [Linux-HA] dhcp problem in heartbeat

2009-03-12 Thread Ben Timby
By default dhcpd will listen on the first address on the network that it is serving. This means that it is bound to the machine's IP and not the shared IP. You can use the following setting in dhcpd.conf... local-address 10.3.254.113; This will force dhcpd to use the specified (shared) address r…
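In context, the directive mentioned above would sit at the top level of dhcpd.conf, alongside the subnet declaration. A minimal sketch (the 10.3.254.113 shared address comes from the message above; the subnet, range, and router values are hypothetical placeholders):

```conf
# /etc/dhcp/dhcpd.conf (fragment)
# Bind replies to the shared (cluster) IP instead of the first
# interface address dhcpd would otherwise pick.
local-address 10.3.254.113;

# Hypothetical subnet declaration for the served network.
subnet 10.3.254.0 netmask 255.255.255.0 {
    range 10.3.254.50 10.3.254.99;
    option routers 10.3.254.1;
}
```

With the shared address bound, clients keep talking to the same server IP after a failover, so leases issued by either node look consistent to them.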

Re: [Linux-HA] Set-up help

2009-03-11 Thread Ben Timby
You got it. Both servers serve the same site. If one dies the other takes over for it. To answer your question about database data, use the same database server from both nodes. That database server could be yet another failover cluster. The most popular way to do failover database (mysql, postgre…

Re: [Linux-HA] Set-up help

2009-03-11 Thread Ben Timby
Yes, you should be able to see the web page. The first thing you want to do is make sure that httpd is configured properly without HA. HA just handles the starting and stopping. With the cluster completely down, start httpd on one of the nodes and make sure httpd works and that you can view the p…

Re: [Linux-HA] Set-up help

2009-03-10 Thread Ben Timby
I found it useful to use hb_gui. I did the following.
1. Install heartbeat via RPMs.
2. Configure heartbeat ha.cf and authkeys. I set crm to yes in ha.cf, so I did not need an haresources file.
3. usermod -G haclient hacluster
4. passwd hacluster
5. Start heartbeat on one node.
6. If you are runni…

Re: [Linux-HA] Set-up help

2009-03-10 Thread Ben Timby
Yes, it should be on the same subnet. eth0 is optional, as it is the default; you could specify eth1 if you wanted to override that.

On Tue, Mar 10, 2009 at 1:55 PM, Dimitri Yioulos wrote:
> As to the IP address in haresources, it must be of the same subnet as the two
> nodes, right?
> As in:
> …

Re: [Linux-HA] Mounting NTFS Drives

2009-03-09 Thread Ben Timby
Sorry, RAs are in /usr/lib/ocf/resource.d/; I forgot the resource.d in my previous message. Corrected below.

mkdir /usr/lib/ocf/resource.d/
cp /usr/lib/ocf/resource.d/heartbeat/Filesystem /usr/lib/ocf/resource.d/

On Mon, Mar 9, 2009 at 11:18 PM, Ben Timby wrote:
> mkdir /usr/lib/ocf/
> …

Re: [Linux-HA] Mounting NTFS Drives

2009-03-09 Thread Ben Timby
If you look at the Filesystem RA, inside Filesystem_start() you will see where you are running into issues:

if [ "X${HOSTOS}" != "XOpenBSD" ];then
    # Insert SCSI module
    # TODO: This probably should go away. Why should the filesystem
    # RA ma…

Re: [Linux-HA] ipt_CLUSTERIP

2009-03-07 Thread Ben Timby
On Sat, Mar 7, 2009 at 12:58 AM, Michael Schwartzkopff wrote:
> Hi,
>
> by the way, CLUSTERIP is a quite experimental target of iptables. For a
> production cluster think about using Linux Virtual Server. It also
> integrates nicely into heartbeat.
>
> Michael.

Thank you for the reply. I have loa…

[Linux-HA] ipt_CLUSTERIP

2009-03-06 Thread Ben Timby
I am working with a 3 node cluster and using an IPaddr2 resource. The resource is a clone, which implies the use of iptables CLUSTERIP for load sharing. However, with three nodes it seems impossible to get even load distribution on failure. Let me explain. If I use three clones, when a node fails, …
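The distribution problem hinted at above can be made concrete with a little arithmetic. This is a hedged sketch under a simple assumption (not actual CLUSTERIP internals): traffic is split into one hash bucket per clone instance, and a failed node's buckets are re-assigned as evenly as possible among the survivors. With one clone per node, the single orphaned bucket cannot be split, so one survivor ends up with twice the load of the other; with two clones per node, the two orphaned buckets split evenly.

```python
# Hypothetical bucket model of clone-based load sharing: each clone
# instance owns one hash bucket; on failure the dead node's buckets
# are handed round-robin to the survivors.
from fractions import Fraction


def shares_after_failure(buckets_per_node: int, nodes: int = 3):
    """Fraction of total traffic each surviving node handles after
    one of `nodes` nodes fails."""
    total = buckets_per_node * nodes
    survivors = nodes - 1
    orphaned = buckets_per_node  # buckets owned by the failed node
    shares = [
        buckets_per_node
        + orphaned // survivors
        + (1 if i < orphaned % survivors else 0)
        for i in range(survivors)
    ]
    return [Fraction(s, total) for s in shares]


# One clone per node: one survivor takes the whole orphaned bucket,
# ending up with 2/3 of the traffic while the other keeps 1/3.
print(shares_after_failure(1))  # [Fraction(2, 3), Fraction(1, 3)]

# Two clones per node: the two orphaned buckets split evenly, 1/2 each.
print(shares_after_failure(2))  # [Fraction(1, 2), Fraction(1, 2)]
```

In this model, running a number of clone instances per node that is divisible by every possible survivor count keeps the post-failure load even, which is one way to read the problem the message above is raising.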