Done:

http://bugs.clusterlabs.org/show_bug.cgi?id=5216

Best regards,
Christian


2014-05-27 22:51 GMT+02:00 Andrew Beekhof <and...@beekhof.net>:

>
> On 27 May 2014, at 7:20 pm, Christian Ciach <derein...@gmail.com> wrote:
>
> >
> >
> >
> > 2014-05-27 7:34 GMT+02:00 Andrew Beekhof <and...@beekhof.net>:
> >
> > On 27 May 2014, at 3:12 pm, Gao,Yan <y...@suse.com> wrote:
> >
> > > On 05/27/14 08:07, Andrew Beekhof wrote:
> > >>
> > >> On 26 May 2014, at 10:47 pm, Christian Ciach <derein...@gmail.com> wrote:
> > >>
> > >>> I am sorry to get back to this topic, but I'm genuinely curious:
> > >>>
> > >>> Why is "demote" an option for the ticket "loss-policy" for
> > >>> multi-site clusters but not for the normal "no-quorum-policy" of
> > >>> local clusters? This seems like a missing feature to me.
> > >>
> > >> Or one feature too many.
> > >> Perhaps Yan can explain why he wanted demote as an option for the
> > >> loss-policy.
> > > Loss-policy="demote" is a kind of natural default if the "Master" mode
> > > of a resource requires a ticket like:
> > > <rsc_ticket rsc="ms1" rsc-role="Master" ticket="ticketA"/>
> > >
> > > The idea is to run stateful resource instances across clusters, and
> > > loss-policy="demote" makes it possible to keep running the resource
> > > in slave mode when losing the ticket, if there's a need to, rather
> > > than stopping it or fencing the node hosting it.
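> > >
> > > For example, spelling the policy out explicitly on the constraint
> > > above would look like this (the id is illustrative):
> > >
> > > <rsc_ticket id="ms1-req-ticketA" rsc="ms1" rsc-role="Master"
> > >             ticket="ticketA" loss-policy="demote"/>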
> >
> > I guess the same logic applies to the single-cluster use case too, and
> > we should allow no-quorum-policy=demote.
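> >
> > Presumably it would then be set like any other cluster option, e.g.
> > (a sketch only, this is not implemented yet):
> >
> > <nvpair id="cib-bootstrap-options-no-quorum-policy"
> >         name="no-quorum-policy" value="demote"/>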
> >
> >
> > Thank you for mentioning this. This was my thought as well.
> >
> > At the moment we "simulate" this behaviour by using a primitive
> > resource where "started" means "master" and "stopped" means "slave".
> > This way we can use "no-quorum-policy=stop" to actually switch the
> > resource to slave on quorum loss. This seems hacky, so I would
> > appreciate it if this could be done in a proper way some time in the
> > future.
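> >
> > For reference, the hack looks roughly like this (resource and agent
> > names are placeholders): a plain primitive whose start/stop actions
> > really mean promote/demote, combined with the default stop policy:
> >
> > <primitive id="myapp-active" class="ocf" provider="custom"
> >            type="MyAppActive"/>
> >
> > plus no-quorum-policy="stop" in the cluster options.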
>
> Could you file a bug for that in bugs.clusterlabs.org so we don't lose
> track of it?
>
> >
> > One question though... do we still stop non-master/slave resources for
> > loss-policy=demote?
> >
> > >
> > > Regards,
> > >  Yan
> > >
> > >>
> > >>>
> > >>> Best regards
> > >>> Christian
> > >>>
> > >>>
> > >>> 2014-04-07 9:54 GMT+02:00 Christian Ciach <derein...@gmail.com>:
> > >>> Hello,
> > >>>
> > >>> I am using Corosync 2.0 with Pacemaker 1.1 on Ubuntu Server 14.04
> > >>> (daily builds until the final release).
> > >>>
> > >>> My problem is as follows: I have a 2-node cluster (plus a quorum
> > >>> node) to manage a multistate resource. One node should be the
> > >>> master and the other one the slave. Two masters at the same time
> > >>> are absolutely not allowed. To prevent a split-brain situation, I
> > >>> am also using a third node as a quorum-only node (set to standby).
> > >>> There is no redundant connection because the nodes are connected
> > >>> over the internet.
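> > >>>
> > >>> The quorum-only node is simply kept in standby; in the CIB that is
> > >>> just (node id and name are placeholders):
> > >>>
> > >>> <node id="3" uname="quorum-node">
> > >>>   <instance_attributes id="nodes-3">
> > >>>     <nvpair id="nodes-3-standby" name="standby" value="on"/>
> > >>>   </instance_attributes>
> > >>> </node>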
> > >>>
> > >>> If one of the two nodes managing the resource becomes
> > >>> disconnected, it loses quorum. In this case, I want the resource
> > >>> to become a slave, but it should never be stopped completely! This
> > >>> leaves me with a problem: "no-quorum-policy=stop" will stop the
> > >>> resource, while "no-quorum-policy=ignore" will keep it in the
> > >>> master state. I already tried to demote the resource manually
> > >>> inside the monitor action of the OCF agent, but Pacemaker will
> > >>> immediately promote it again.
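> > >>>
> > >>> For context, the resource is an ordinary master/slave clone along
> > >>> these lines (resource and agent names are placeholders):
> > >>>
> > >>> <master id="ms-myapp">
> > >>>   <meta_attributes id="ms-myapp-meta">
> > >>>     <nvpair id="ms-myapp-master-max" name="master-max" value="1"/>
> > >>>   </meta_attributes>
> > >>>   <primitive id="myapp" class="ocf" provider="custom" type="MyApp"/>
> > >>> </master>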
> > >>>
> > >>> I am aware that I am trying to manage a multi-site cluster and
> > >>> that there is something like the booth daemon, which sounds like
> > >>> the solution to my problem. But unfortunately I need Pacemaker's
> > >>> location constraints based on the score of the OCF agent. As far
> > >>> as I know, location constraints are not possible when using booth,
> > >>> because the 2-node cluster is essentially split into two 1-node
> > >>> clusters. Is this correct?
> > >>>
> > >>> To conclude: Is it possible to demote a resource on quorum loss
> > >>> instead of stopping it? Is booth an option if I need to manage the
> > >>> location of the master based on the score returned by the OCF
> > >>> agent?
> > >>>
> > >>>
> > >>
> > >
> > > --
> > > Gao,Yan <y...@suse.com>
> > > Software Engineer
> > > China Server Team, SUSE.
> > >
> >
> >
_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
