I don't want to deter you from using GFS2, but if the applications are poorly
behaved, you can see unnecessary locking. I wouldn't mention it except that
I've seen it happen; well-behaved applications won't have issues. GFS2 is also
limited to 16 nodes.

As far as failover of read-only shares, you have three options:
1. NFS active-passive on RHCS/GFS2 (though you might just want to use GFS2
directly)
2. NFS + cachefs, ideal for read-only
3. Automounter with multiple mounts, ideal for read-only

Option #2 can tolerate a server's downtime. I haven't tested this
extensively, but cachefs is well suited to read-only data because, once the
cache is warm, it avoids accessing the server at all.
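
For what it's worth, on Linux the cachefs role is played by FS-Cache. A
minimal sketch of wiring it up, assuming a kernel with FS-Cache support and
the cachefilesd package; the server, export, and mount point names here are
made up:

    # start the cache daemon that backs FS-Cache
    service cachefilesd start

    # /etc/fstab -- the 'fsc' option enables FS-Cache for this NFS mount
    nfsserver:/export/binaries  /opt/app  nfs  ro,fsc  0 0

Once the cache is warm, repeated reads are served from local disk, which is
what makes this attractive for read-only binaries.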

Option #3 is my favorite. If your NFS share is read-only, you can use the
automounter to mount any available NFS share from a replicated list. I've
used this for years on both Linux and Solaris: I have a compute cluster or
other shared set of resources, and a few of those systems share out binaries
that any other system can mount.
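
As a sketch, a replicated autofs map entry looks something like this (the
hostnames and paths are made up); the automounter probes the listed servers
and mounts one that responds:

    # /etc/auto.master
    /share  /etc/auto.share

    # /etc/auto.share -- replicated read-only entry; any of the
    # listed servers can satisfy the mount
    binaries  -ro,soft  srv1,srv2,srv3:/export/binaries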

You'd be surprised how well Option #3 actually works for most situations,
especially when you have lots of systems. I had 40 systems at one time:
nodes 1-9 mounted node 10 as their first option, nodes 11-19 mounted node 20
as their first option, and so on, with the other servers listed as backups.
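
That kind of preference ordering can be expressed with autofs server weights,
where a lower weight is preferred. Again just a sketch with made-up hostnames;
this would be the map entry on nodes 1-9:

    # /etc/auto.share on nodes 1-9: prefer node10, fall back to node20/node30
    binaries  -ro  node10(1),node20(2),node30(3):/export/binaries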



----- Original Message ----
From: Win Htin <[email protected]>

Thanks for the various answers and suggestions.

I don't want to go the NFS route since the NFS server is a single point of failure.

I haven't looked into NFS + cachefs.

I'm actually thinking of going GFS2.

The bottom line is, I want to have only ONE app binary partition so that:
1. I'm 100% sure the servers are always running the same code.
2. Less maintenance; just upgrade the app files once when there is a new
version of the app to be installed.
3. No single point of failure; I can't afford the servers hanging due to
the NFS server going south.
4. Only ONE partition to back up/restore.

Bottom line, I'm trying to come up with a robust and low-maintenance
design. If there are better solutions than a GFS2-based cluster file
system, I'm all ears and flexible. Thanks in advance.
