On Thu, 2012-03-22 at 15:06 +0100, Florian Haas wrote:
> On Thu, Mar 22, 2012 at 10:34 AM, Lars Ellenberg
> <lars.ellenb...@linbit.com> wrote:
> >> order o_nfs_before_vz 0: cl_fs_nfs cl_vz
> >> order o_vz_before_ve992 0: cl_vz ve992
> >
> > a score of "0" is roughly equivalent to
> > "if you happen do plan to do both operations
> >  in the same transition, would you please consider
> >  to do them in this order, pretty please, if you see fit"
> 
> Lars beat me to this, as the post turned out to be a little more
> elaborate than expected, but here's a bit of background info for
> additional clarification:
> 
> http://www.hastexo.com/resources/hints-and-kinks/mandatory-and-advisory-ordering-pacemaker

Thanks for this article; that is valuable information.

> >> This is on / with:
> >> * Debian 6.0.4
> >> * corosync 1.4.2 (from Debian backports)
> >> * pacemaker 1.0.9.1 (from Debian backports)
> 
> squeeze-backports is on 1.1.6, and you _really_ want to upgrade.

Oh... that's embarrassing. I wasn't actually testing with the backports
version all along!

I have upgraded now and things look more promising. I can run
'/etc/init.d/corosync restart' and all resources migrate to the
remaining node and then back to their original node without failure.
However, it seems I need to stick with an order score of 0 for this and
similar constraints:

---
order o_vz_before_ve992 0: cl_vz ve992
---

When I set the score to 'inf' and the second node comes back, not only
the OpenVZ containers that are supposed to migrate are restarted, but
all containers. I was told on IRC in #linux-ha that some Pacemaker
versions suffer from a problem where cloned resources (in my case the
vz service [lsb:vz] and the NFS mount [ocf:heartbeat:Filesystem]) are
restarted too often, and I was advised to use an advisory score. This
now works correctly for me: when the offline node comes back, only the
migrated containers are restarted.
The drawback is that the containers are not stopped when I stop the
filesystem resource. But since I never need to do that, it is not a
problem.
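
For reference, here is a sketch of the two variants side by side in crm
shell syntax, using the resource names from my configuration (the
behavior described in the comments is what I observed on 1.1.6):

---
# Advisory ordering (score 0): honored only if both actions happen
# to be scheduled in the same transition.
order o_vz_before_ve992 0: cl_vz ve992

# Mandatory ordering (score inf): a stop/restart of cl_vz forces
# ve992 to restart as well -- this is what triggered the unwanted
# restarts of all containers for me.
order o_vz_before_ve992 inf: cl_vz ve992
---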

Roman


_______________________________________________
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
