On Tuesday 17 April 2007 20:39, Alan Robertson wrote:
> > But the problem remains.
> > So:
> > Is it possible to do this NFS failover without shared storage? If it's
> > not possible, what is the best approach besides using shared storage?
> > Originally it was only an FTP server cluster, and it worked very well,
> > but now they want the files to be accessible via an NFS export too.
> >
> > These are my specs and settings:
> > CentOS 4.3 with no updates
> > heartbeat-stonith-2.0.7-1.c4
> > heartbeat-pils-2.0.7-1.c4
> > heartbeat-2.0.7-1.c4
> >
> > /etc/hosts:
> > 127.0.0.1               ftp1.fuji.local ftp1 localhost.localdomain localhost
> > 192.168.0.201           ftp2.fuji.local ftp2
> > 192.168.0.200           ftp1.fuji.local ftp1
> > 10.0.0.201              hb2.fuji.local hb2
> > 10.0.0.200              hb1.fuji.local hb1
> >
> > /etc/exports:
> > /home/pub       *(rw,fsid=888)
> >
> > /etc/init.d/nfslock:
> > daemon rpc.statd "$STATDARG" -n ftp1.fuji.local
> >
> > /etc/ha.d/ha.cf:
> > logfacility daemon
> > serial /dev/ttyS0
> > watchdog /dev/watchdog
> > bcast eth1
> > keepalive 2
> > warntime 5
> > deadtime 20
> > initdead 100
> > baud 19200
> > udpport 694
> > auto_failback on
> > node ftp1.fuji.local ftp2.fuji.local
> > #respawn userid cmd
> > #respawn hacluster /usr/lib/heartbeat/ccm
> > respawn hacluster /usr/lib/heartbeat/ipfail
> > ping 192.168.0.254
> > #ping_group ftpcluster 192.168.1.70 192.168.1.80
> > use_logd yes
> > #crm on
> > #apiauth mgmtd uid=hacluster
> > #respawn root /usr/lib/heartbeat/mgmtd -t
> >
> > /etc/ha.d/haresources:
> > ftp1.fuji.local 192.168.0.203 httpd nfs pure-ftpd rsync2
>
> The two underlying filesystems have to have _exactly_ the same content,
> the same inode numbers for every file, etc.
>
> We recommend using DRBD or something similar for keeping the two sides
> in sync.
>
> If the two sides are read-only and you don't want to set up DRBD, then
> you _could_ dd the filesystem from one machine to the other.  But then
> you can't ever update it, so that would not be very maintainable.
>
> But, don't misunderstand.  You need something like DRBD or an identical
> disk image between the two machines.
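
For reference, a block-level copy like the one described above is what makes
the inode numbers match by construction; it would be something along these
lines, run with the partition unmounted (or mounted read-only), where
/dev/sda3 is only a placeholder for whatever device actually holds /home/pub:

  # on ftp1, push a raw copy of the partition to ftp2
  dd if=/dev/sda3 bs=1M | ssh ftp2 'dd of=/dev/sda3 bs=1M'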

Sorry for the late reply,
I was at the site implementing the above-mentioned cluster. So far the
client can accept those conditions (they have to remount the NFS export
after a failover). Regarding data replication between the two nodes, I set
up an rsync script that makes sure the data on both machines stays in sync.
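The script boils down to a cron job along these lines (the paths and the
ftp2 hostname match the setup above; treat the exact rsync options as a
sketch):

  #!/bin/sh
  # push /home/pub from the active node to the standby over ssh;
  # -H preserves hard links, --delete removes files gone from the source
  rsync -avH --delete -e ssh /home/pub/ ftp2:/home/pub/

One thing worth noting: rsync keeps the contents identical, but the inode
numbers on the two nodes still differ, which is exactly why the clients
have to remount after a failover.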

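The remount itself is cheap; on a client it amounts to something like this,
where /mnt/pub is only an example mount point (the 192.168.0.203 service IP
and the /home/pub export come from the configs above):

  umount -f /mnt/pub
  mount -t nfs 192.168.0.203:/home/pub /mnt/pub
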
I'll explore DRBD.
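From a first look at the docs, a minimal /etc/drbd.conf resource for this
pair might be something like the sketch below; the hostnames and the
10.0.0.x addresses are the dedicated hb1/hb2 link from the hosts file
above, but the backing disk (/dev/sda3) is only a guess for our hardware:

  resource r0 {
    protocol C;                    # synchronous replication
    on ftp1.fuji.local {
      device    /dev/drbd0;
      disk      /dev/sda3;         # placeholder backing partition
      address   10.0.0.200:7788;
      meta-disk internal;
    }
    on ftp2.fuji.local {
      device    /dev/drbd0;
      disk      /dev/sda3;         # placeholder backing partition
      address   10.0.0.201:7788;
      meta-disk internal;
    }
  }

If that works out, the haresources line would presumably grow a
drbddisk::r0 and a Filesystem mount in front of the nfs entry.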
Thank you very much.
-- 
Fajar Priyanto | Reg'd Linux User #327841 | Linux tutorial 
http://linux2.arinet.org
11:24am up 1:40, 2.6.18.2-34-default GNU/Linux 
Let's use OpenOffice. http://www.openoffice.org
