On Mon, Jun 7, 2010 at 10:20 AM, Dejan Muhamedagic <deja...@fastmail.fm> wrote:

> Hi,
>
> On Fri, Jun 04, 2010 at 02:25:44PM -0700, Tony Hunter wrote:
> > On Thu, Jun 03, 2010 at 07:18:16PM -0300, Diego Woitasen wrote:
> > > On Wed, Jun 2, 2010 at 7:43 AM, Andrew Beekhof <and...@beekhof.net>
> wrote:
> > >
> > > > On Sat, May 29, 2010 at 3:54 AM, Diego Woitasen <
> dieg...@xtech.com.ar>
> > > > wrote:
> > > > > Hi,
> > > > >  * I have three nodes: "ha1", "ha2" and "ha3".
> > > > >  * Three resources: "sfex", "xfs_fs", "ip".
> > > > >  * "sfex" and "xfs_fs" are members of a group called "xfs_grp".
> > > > >  * "xfs_grp" can run on any node but "ip" resource can run on "ha1"
> or
> > > > > "ha2" only.
> > > > >  * When "xfs_grp" is running on "ha1" or "ha2", "ip" must run on
> the same
> > > > > node.
> > > > >  * One last thing, I need manual failback.
> > > > >
> > > > > My current configuration works except for the "manual failback"
> > > > > (a.k.a. auto_failback off).
> > > > >
> > > > > node $id="0ace77ab-600a-4541-a682-ab0534bb3fc4" ha3
> > > > > node $id="3d1f07b5-a79b-478f-b07c-02a7a5c5106c" ha2
> > > > > node $id="c44a3a26-35d4-476e-a1e6-49f03f068f12" ha1
> > > > > primitive ip ocf:heartbeat:IPaddr \
> > > > >        params ip="192.168.1.147"
> > > > > primitive sfex ocf:heartbeat:sfex \
> > > > >        params device="/dev/sdb1" \
> > > > >        op monitor interval="10" timeout="10" depth="0"
> > > > > primitive xfs_fs ocf:heartbeat:Filesystem \
> > > > >        params device="/dev/sdb2" directory="/shared" fstype="xfs" \
> > > > >        op monitor interval="20" timeout="40" depth="0"
> > > > > group xfs_grp sfex xfs_fs
> > > > > location srv_loc ip -inf: ha3
> > > > > colocation srv_col inf: ip xfs_grp
> > > > > property $id="cib-bootstrap-options" \
> > > > >        no-quorum-policy="ignore" \
>
> You shouldn't disable quorum except in very special configurations.
>
> > > > >        expected-quorum-votes="1" \
>
> Why is this set to 1 when you have three nodes? It should be set to 3.
>
> > > > >        stonith-enabled="0" \
> > > > >        default-resource-stickiness="INFINITY"
> > > > >
> > > > > When "xfs_grp" is running in "ha3" and "ha1" or "ha2" are alive
> again,
> > > > the
> > > > > resources ("xfs_grp" and "ip") move to any of them.
> > > > >
> > > > > Any ideas?
> > > >
> > > > Not really, I don't understand what the problem is.
> > > > ip can only run on ha1 or ha2, so it's not surprising that it gets
> > > > stopped occasionally (i.e. when you shut down one node and make the
> > > > other standby) while the group remains running.
> > >
> > > Maybe my explanation was wrong.
> > >
> > > xfs_grp can run on ha1, ha2 or ha3.
> > > ip can run on ha1 or ha2.
> > >
> > > If I shut down ha1 and ha2, xfs_grp moves to ha3 without "ip". If ha1
> > > (or ha2) comes back, xfs_grp moves to ha1 and "ip" is started. I have
> > > default-resource-stickiness="INFINITY", so I think xfs_grp should stay
> > > on ha3 until manual failback.
> >
> > I'm fairly new to pacemaker/corosync but I haven't seen a reply to your
> > mail, so I'll take a shot. It seems your colocation rule below prevents
> > xfs_grp from running on ha3 unless ip is also running there:
> > colocation srv_col inf: ip xfs_grp
> >
> > And this rule seems to suggest the ip resource should _never_ run on ha3.
> > location srv_loc ip -inf: ha3
> >
> > So, as far as I can see, the cluster is behaving as configured: ha1 or
> > ha2 takes over ip when one of them comes back online, since ip is not
> > running anywhere. And of course xfs_grp is migrated because of the
> > colocation constraint.
>
> Just to add that -inf + inf = -inf
>
> Thanks,
>
> Dejan
>
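To spell out the arithmetic behind Dejan's note: because the colocation is
mandatory (inf), ip's -INFINITY location score on ha3 feeds back into
xfs_grp's placement, so the group's INFINITY stickiness on ha3 is cancelled
as soon as ha1 or ha2 returns. A rough sketch in crm syntax; the 1000 score
and the idea of an advisory colocation are my own untested assumptions, not
something confirmed in this thread:

# With the current constraint:
#   xfs_grp on ha3 = stickiness (INFINITY) + ip's score on ha3 (-INFINITY)
#                  = -INFINITY, so the group migrates despite stickiness.
# One direction that might be worth testing: a finite (advisory) score,
# so ip's -INFINITY on ha3 can no longer drag the group off that node.
colocation srv_col 1000: ip xfs_grp

Whether ip should then be allowed to start on ha1 or ha2 while the group
stays on ha3 is a separate question that would need testing.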

My config is simple. I don't need quorum logic here.
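That said, for reference, the quorum settings Dejan is suggesting would look
roughly like this (a sketch only; "stop" is the default no-quorum-policy, and
3 matches the number of nodes):

property $id="cib-bootstrap-options" \
        no-quorum-policy="stop" \
        expected-quorum-votes="3" \
        stonith-enabled="0" \
        default-resource-stickiness="INFINITY"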

-- 
Diego Woitasen
XTECH
_______________________________________________
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
