R P Herrold wrote:
> On Sat, 2 Apr 2011, Kyle Gonzales wrote:
> 
>> Interesting.  Default RHEL install since v4 puts all those on LVM2. 
>> The advice given might be a little paranoid and the cause of needless
>> complexity.
> 
>>> Thomson: "Note: It is not recommended to put the following
>>> directories in an LVM2
>>> partition: /etc, /lib, /mnt, /proc, /sbin, /dev, and /root. This way,
> 
> You forgot /boot/, which should not live on a RAID or LVM device, in
> the interest of reducing potential points of failure
> 
> Not really surprising, nor all that interesting -- Red Hat wants to
> simplify an install with anaconda, and assumes one will have available
> a local 'rescue CD image', which can reconstruct some, but not all,
> broken LVM configurations
> 
> A cautious sysadmin may decide it is more important to be able to
> access the (usually statically linked) content in /bin/ and /sbin/
> without needing to get LVM up and running first. As floppy and CD
> drives are increasingly a rarity on server-grade hardware, this seems
> reasonable; accessing a rescue image 'across the wire' is fairly
> challenging, and it is not often done by most admins

I call BS on that.  In fact, my previous email described exactly how
sysadmins make a rescue environment easily available to their servers
using the hardware features they already have.
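
For what it's worth, one common way to put a rescue environment 'on the
wire' -- not necessarily the approach from that earlier message, which
is not quoted here -- is PXE-booting a rescue kernel and initrd from a
local boot server. A hypothetical pxelinux menu entry (every path,
label, and address below is made up for illustration):

```
# pxelinux.cfg/default fragment -- all names are hypothetical
LABEL rescue
  MENU LABEL Boot rescue environment
  KERNEL rescue/vmlinuz
  APPEND initrd=rescue/initrd.img rescue method=http://10.0.0.1/rhel/
```

With something like this in place, a box with a dead disk needs only a
working NIC and a BIOS that can PXE-boot, no driver floppy or CD drive.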

> I speak here, 'wearing the tee-shirt', having spent time doing just
> such a recovery, where an end customer took the Red Hat approach to
> LVM'ing and ended up with a non-bootable system -- the hardware had an
> Intel motherboard calling for a hardware RAID driver 'not yet in
> anaconda', and thus not on the rescue disk image either. Anaconda
> called for the driver disk [it could 'see' no drives until such was
> used], but that disk was long forgotten by the time the recovery was
> needed

You are blaming LVM for an issue with the customer's DR planning.

> Another common mistake, to my thinking, is to have /boot/ on RAID
> rather than on a native partition.  If one fails to update all
> members, and the array fails and cannot 'bootstrap' itself together
> (think Linux's MD software RAID), one is again 'up a creek'.  In
> ancient times under SunOS, a cautious sysadmin would have an 'A' '/'
> partition on one spindle and a 'B' '/' partition on a separate
> spindle, each containing the initial boot image.  When updating the
> kernel, one would update only the 'A' side, and test; once satisfied,
> one would then transfer that image over to the B side as well

You can do the same with Linux today.  In fact, when /boot sits on
software RAID1, that is effectively how it is updated: a write to the
mirror lands on every member at once.
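
The A/B scheme described above also maps directly onto two native
/boot partitions on separate spindles. A minimal sketch of the
'transfer once satisfied' step, using throwaway directories in place of
real mount points (every path here is a stand-in, not an actual device
layout):

```shell
# Stand-ins for /boot on spindle A and spindle B (hypothetical paths;
# on real hardware these would be two mounted native partitions).
BOOT_A=$(mktemp -d)
BOOT_B=$(mktemp -d)

# Step 1: update side A only (here, a fake kernel image stands in for
# a real kernel package install), then test-boot from it.
echo "new kernel" > "$BOOT_A/vmlinuz"

# Step 2: once satisfied, mirror side A onto side B, so that either
# spindle can bring the machine up on its own.
cp -a "$BOOT_A/." "$BOOT_B/"

# On real hardware one would also reinstall the boot loader on the
# second disk's device node so it can boot unaided.
cat "$BOOT_B/vmlinuz"
```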

> These are, of course, matters of sysadmin taste, but in an
> 'enterprise' environment, I think most would prefer a faster mean time
> to repair, and a dull and predictable path to recovery when a failure
> occurs.  Oh yes, and a recent level 0 backup in hand, just in case ;)

Interesting, then, that many of the enterprise environments I see use LVM.

-- 
Kyle Gonzales
[email protected]
GPG Key #0x566B435B

Read My Tech Blog:
http://techiebloggiethingie.blogspot.com/


---------------------------------------------------------------------
Archive      http://marc.info/?l=jaxlug-list&r=1&w=2
RSS Feed     http://www.mail-archive.com/[email protected]/maillist.xml
Unsubscribe  [email protected]
