From: Win Htin <[email protected]>

> ANSWER: Not possible because the app is provided by a third party
> vendor in tgz format and the actual app install is done by the app guys.

Lobby your vendor to produce an RPM.

I have done this very recently.  A division at my client repeatedly deployed 
an ISV product in quite a broken state.  The install script was incomplete, 
required manual procedures afterward (documented by the ISV), and then the 
department had their own procedures on top of that.  It only took a few hours 
to put about 20 lines in the RPM SPEC file to properly deploy the correct 
permissions, location, etc.  Once done, the app deployed perfectly, 
configuring and starting its services, ready-to-go after install.
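
For a flavor of it, here's a rough sketch of what those ~20 lines can look 
like.  The package name, paths, and init script below are invented for 
illustration -- not the actual product:

    # Hypothetical SPEC sketch wrapping a vendor tgz -- names/paths invented
    Name:      isvapp
    Version:   1.0
    Release:   1%{?dist}
    Summary:   ISV application repackaged from the vendor tarball
    License:   Proprietary
    Source0:   isvapp-%{version}.tgz
    BuildRoot: %{_tmppath}/%{name}-%{version}-root

    %description
    Vendor application deployed as a proper RPM.

    %prep
    %setup -q -c

    %install
    rm -rf %{buildroot}
    mkdir -p %{buildroot}/opt/isvapp
    cp -a . %{buildroot}/opt/isvapp/

    %post
    # register and start the service on install (assumes a SysV init script)
    /sbin/chkconfig --add isvapp
    /sbin/service isvapp start >/dev/null 2>&1 || :

    %files
    # fix ownership/permissions here instead of manual post-install steps
    %defattr(0644,root,root,0755)
    %attr(0755,root,root) /opt/isvapp/bin/*
    /opt/isvapp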

I provided the SPEC back to the ISV, along with suggestions on how they should 
put their tarball together so those SPEC lines could be dropped (like setting 
proper permissions on the files in the tarball in the first place).
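
For instance, GNU tar can bake sane ownership into the tarball at creation 
time (file names invented):

    # ISV side: set ownership when building the tarball (GNU tar)
    tar czf isvapp-1.0.tgz --owner=root --group=root isvapp-1.0/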

> ANSWER: Will look into this. Thanks.

I would argue OCFS2/ASM is really for Oracle RAC, but that's my opinion.  I 
don't like to say anything that could be taken "against" other vendors on one 
vendor's list, but remember where the focus is.  I.e., Red Hat develops for 
_many_ ISVs and solutions for things other than clustered DBs, or one DB for 
that matter.

I.e., if this were OCFS2/ASM for Oracle RAC, I wouldn't say anything either 
way.  But for something else ... ;)  And yes, I know some people are using 
OCFS2 for other things.  YMMV.

> ANSWER: Due to the nature of my HW (Blades), DRBD seems a bit complicated.

I don't recommend DRBD when one has a SAN and RHCS/GFS2 as an option.  That's 
_nothing_ against DRBD, but if you have the hardware, I'd recommend the 
solution for it.  In all honesty, I still prefer RHCS/GFS2 with iSCSI as well, 
but that's more of a personal preference.  I find it more supportable.
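
For reference, attaching a shared LUN over iSCSI and mounting GFS2 is only a 
few commands on RHEL 5.  The portal address, IQN, and device names below are 
invented:

    # discover and log in to the target
    iscsiadm -m discovery -t sendtargets -p 192.168.0.10
    iscsiadm -m node -T iqn.2009-01.com.example:san.lun1 -l
    # then mount the clustered file system as usual
    mount -t gfs2 /dev/vg_san/lv_app /app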

> ANSWER: Bryan, do you mean it is not a good idea to have both NFS and
> GFS2 running at the same time? e.g. /app partition mounted through
> GFS2 file system and /home through NFS? Is it better going GFS2 for
> both /app and /home?

No, the opposite.  I was saying -- for some reason I still don't understand -- 
that people are out there saying you must use only one file system option.  
That's wholly untrue.  With the exception of the few hundred KB (maybe MB) in 
kernel module(s), there's really no impact.  Of course GFS2 requires RHCS, but 
RHCS gives you so many other things too.
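
E.g., there's nothing wrong with /etc/fstab entries like these side-by-side 
(device and server names invented):

    /dev/vg_san/lv_app      /app   gfs2  defaults,noatime  0 0
    nfsserver:/export/home  /home  nfs   defaults          0 0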

> A rather crazy question but if the consensus is that GFS2 is not up
> to snuff for production, what is it good for?

GFS/GFS2 are quite stable.  Any problems with them are similar to those seen 
with NFS -- people assuming the file system is only managed by one kernel, and 
everything (like meta-data) is in memory at all times.  Coherency is always 
the main detail with distributed file systems -- at least if you care about 
having a consistent, non-corrupted file system.

I.e., you can't cache everything without checking with other nodes (or the NFS 
server in the case of NFS).  However, if you make an NFS share read-only, you 
can usually and safely deploy cachefs.
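
On RHEL 5 that's the cachefilesd daemon plus the "fsc" mount option, assuming 
the cachefilesd package is installed (server/export names invented):

    # client-side caching for a read-only NFS export
    service cachefilesd start
    mount -t nfs -o ro,fsc nfsserver:/export/app /app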

NOTE:  I am currently involved with a large fraction of a petabyte of data in 
GFS/GFS2 file systems (and have been for the past year and a half at my 
current client).

> I currently have a shared disk group on the SAN and out of my N+1
> servers, N number of servers mount the partition as Read-only and the
> remaining server mounts it as Read-write. Any time updates are
> required, it is done through that server.

This sounds like a solid solution.  If it's working for you with GFS2, don't 
change it.  My suggestion of looking at NFS+cachefs or NFS+automounter was only 
if you don't already have something working.  GFS2 is a very stable and proven 
technology.
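
As a sketch of that layout (device name invented; "spectator" is GFS2's 
journal-less read-only mount, plain "ro" works too):

    # on the one read-write node
    mount -t gfs2 /dev/vg_san/lv_app /app
    # on the N read-only nodes
    mount -t gfs2 -o spectator /dev/vg_san/lv_app /app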



-- 
Bryan J  Smith             Professional, Technical Annoyance 
------------------------------------------------------------ 
"Now if you own an automatic ... sell it!
You are totally missing out on the coolest part of driving"
-- Johnny O'Connell
