On 27/06/2013, at 3:40 AM, andreas graeper <agrae...@googlemail.com> wrote:

> thanks for your answer. 
> but one question is still open.
> 
> when i switch off the active node: even though i do this reliably, the 
> still-passive node wants to know for sure and will kill the (already dead) 
> former active node.
> i have no stonith hardware (and so far i could not find stonith:null, which 
> would help as a first step), so i'm still asking whether there is a clean 
> way to change roles? 
> but maybe the message from Andrew has already answered this. he said: no! 
> (if i understand correctly) 

More recent versions include "stonith_admin --confirm name" to say that a node 
is safely down after a manual inspection.
But otherwise, no. There is no way to automatically recover the drbd resource 
_safely_ without fencing.
You'd just be hoping that the other node really is dead.
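For example (a minimal sketch; the node name node2 is illustrative, not taken
from this thread), after physically verifying that the peer really is powered
off, you would run on the survivor:

    # tell the cluster that node2 is known to be safely down,
    # so the pending fencing action can be treated as completed
    stonith_admin --confirm node2

Once the fencing is acknowledged, the survivor is free to promote the drbd
resource.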

> 
> i thought of something like 
>   crm resource demote m_drbd    # on the active node, once all the resources 
> that depend on drbd:master have been stopped
>   crm resource promote m_drbd   # on the passive node, after the active node 
> was stopped and !!! the passive node was informed that the active node is no 
> longer active and does not need to be killed  
> 
> if pacemaker actively stops all the resources and demotes drbd, then it knows 
> about this condition? at least if all stops and demotes succeed. 
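(A sketch of how a planned switchover is usually done while both nodes are
still up, assuming crmsh and the names m_drbd/n1/n2 used in this thread; this
is not prescribed anywhere above. Putting the active node in standby lets
pacemaker do the demote and the dependent stops in the right order:)

    # run from either node: evacuate n1, pacemaker demotes drbd there,
    # stops the dependent resources, then promotes on n2
    crm node standby n1

    # later, bring n1 back as the passive peer
    crm node online n1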
> 
> and again about that location constraint set up by the drbd handler: 
> if it says that drbd:master must not be started on any node other than n1, it 
> could still get started on n1 if n1 knew for sure that n2 is gone. 
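(For reference, the constraint such a handler creates usually looks roughly
like this in crm syntax; the constraint id is illustrative:)

    location drbd-fence-by-handler-m_drbd m_drbd \
            rule $role="Master" -inf: #uname ne n1

The rule forbids (score -inf) the Master role on any node whose #uname is not
n1.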
> 
> 
> 2013/6/26 Michael Schwartzkopff <mi...@clusterbau.com>
> On Wednesday, 26 June 2013 at 16:36:43, andreas graeper wrote:
> > hi and thanks.
> > a primitive can be moved to another node. how can i move (change the role
> > of) drbd:master to the other node?
>  
> Switch off the other node.
>  
>  
> > and will all the dependent resources follow?
>  
> Depends on your constraints (ordering/colocation).
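(To illustrate: a typical pair of constraints that makes a dependent resource
follow the drbd Master, sketched in crm shell syntax with an illustrative
Filesystem primitive fs_data and the master/slave resource m_drbd:)

    colocation fs_with_drbd_master inf: fs_data m_drbd:Master
    order fs_after_drbd_promote inf: m_drbd:promote fs_data:start

With both in place, moving the Master role also stops and restarts fs_data on
the new master.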
>  
>  
> > then i can stop (and start again) corosync on the passive node without
> > problems (in my limited experience).
>  
> Yes.
>  
>  
> > another question: i found a location constraint (i think it was set by one
> > of the handlers in the drbd config):
> >
> > location drbd:master -INF #uname ne n1
> > does this mean: drbd must not get promoted on any node other than n1?!
>  
> Yes. See: Resource fencing in DRBD.
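(For context, resource-level fencing in DRBD 8.x is normally enabled with a
stanza like the following in the DRBD resource configuration; this is a sketch
and the resource name r0 is illustrative:)

    resource r0 {
      disk {
        fencing resource-only;
      }
      handlers {
        # adds the -INF location constraint on the Master role
        # when the peer becomes unreachable ...
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # ... and removes it again after a successful resync
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
      }
    }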
>  
>  
> > i found this in a situation where n2 had gone and drbd could not get
> > promoted on n1, so all the other resources did not start.
> > but the problem was not drbd itself, because i could manually set it to
> > primary.
>  
> Yes, because this location constraint is only valid inside the cluster 
> software. Not outside of it.
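(That is, a manual promotion goes straight to drbd and never consults the CIB;
a sketch, with an illustrative resource name r0:)

    # run directly on n1; drbd itself knows nothing about
    # pacemaker's location constraints
    drbdadm primary r0

Pacemaker only enforces the constraint for actions it initiates itself, which
is why the manual promote succeeded.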
>  
> --
> Dr. Michael Schwartzkopff
> Guardinistr. 63
> 81375 München
>  
> Tel: (0163) 172 50 98
> 


_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
