On 2009-04-09T13:55:47, Alan Robertson  <al...@unix.sh> wrote:

> ------------------------------------------------------------------------------
>      * return anything else for failed resource instances
>   If your resource is an LVM resource (or possibly another OS-level
> resource), then there is yet another common reason for this occurrence:
>   
> -  * By default your OS (Linux for example) automatically activates all 
> logical volumes at boot time.
> -  The simplest cure is to ensure the volume groups are not automatically 
> started via the appropriate boot config.
> +  * By default your OS (Linux for example) automatically activates all 
> logical volumes at boot time.  The simplest cure is to ensure that the 
> volume groups are not automatically started, via the appropriate boot 
> config.  Unfortunately, with LVM2, this does not appear to be possible, 
> so a less satisfactory workaround is necessary.  Here are the issues I'm 
> aware of:
> +   * There is no way to keep LVM2 from activating a volume at boot time 
> while still being able to activate the volume later.  All documented 
> techniques for making a volume not activate at boot time render it 
> permanently inaccessible.  LVM1 had such mechanisms; it appears that 
> unless you use the Red Hat clustering stack, there is no "nice" way to 
> fix this in LVM2.
> +   * Having a volume active on both sides doesn't have any particularly 
> harmful effects ''by itself''.  It puts some volume entries in the OS 
> cache, which have no effect as long as you don't resize the volume on 
> another machine.
> +   * Stopping the volume removes those entries from the OS cache, 
> eliminating the possibility of harmful effects even if the volume is 
> resized.  Although this isn't an elegant or perfect solution, it is a 
> reasonable workaround and causes no harm.  One way to accomplish this is 
> with a "stopstart" script, which Linux-HA executes at cluster startup 
> time.
> +   * Another option is to avoid making LVM a resource.  This is an 
> inferior workaround, as it leaves the possibility of harmful effects if 
> the volume is resized.  Eliminating the LVM resource also means that you 
> can't monitor the volume itself and have to rely on monitoring the 
> filesystem on top of it, which doesn't catch problems accessing the 
> volume itself, or at least not very quickly.  In summary, it's more 
> dangerous and less able to detect errors.  Without documentation of the 
> first method using stopstart, many people have no doubt taken this more 
> dangerous approach.
>   
> + Of course, if someone has a proven method for keeping an LVM2 volume from 
> being activated at boot time without eliminating the possibility of 
> activating it later, that would be a better approach.  Please feel free to 
> replace this text with detailed and tested descriptions of how to accomplish 
> this in LVM2.

Hi Alan,

I had no idea that ceckhoff was you, sorry about that.

Anyway, the right way to fix that - at least on SUSE - is
/etc/sysconfig/lvm -> LVM_VGS_ACTIVATED_ON_BOOT, which allows one to
specify which VGs get activated automatically. VGs which are managed by
the cluster would obviously not be listed there.
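
Something like this sketch (the VG names are made up; say vg_local
holds the node's own filesystems and vg_shared is the VG the cluster
manages):

  # /etc/sysconfig/lvm
  # Activate only vg_local at boot.  vg_shared is deliberately left
  # out, so it stays inactive until the cluster's LVM resource
  # activates it on whichever node should own it.
  LVM_VGS_ACTIVATED_ON_BOOT="vg_local"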

(The stopstart mechanism doesn't work with the SUSE packages either.)

I would assume RHT has a similar mechanism?
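
If not, one candidate - purely an assumption on my part, I haven't
checked what their packages actually do - would be the volume_list
setting in lvm.conf's activation section, combined with VG tags.
Roughly, with "vg_root" and "node1" as example names:

  # /etc/lvm/lvm.conf
  activation {
      # Only the local root VG, plus any VG tagged with this node's
      # name, may be activated; everything else stays inactive at
      # boot.  The cluster side would tag a VG with the owning
      # node's name before activating it.
      volume_list = [ "vg_root", "@node1" ]
  }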

The harmful effects could occur for RAIDed LVs, at least I think so -
the two nodes might step on each other's toes during resync, and there
could be problems with snapshots as well.

FWIW, the clvm code is integrated with pacemaker on SLE11 HA as well
(based on openais), but that is indeed not a solution for heartbeat
v2.1.x.

In general, I'd hate to _document_ such work-arounds. The right way to
go about this would be to file bugs against the distributions.


Regards,
    Lars

-- 
Teamlead Kernel, SuSE Labs, Research and Development
SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde
