(In addition:) n1 (the former active/master node) is still stopped.
On n2: corosync stop + start. Now DRBD is slave on n2 and nothing else starts.
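When the stack stalls like this, it helps to look at the roles and scores Pacemaker actually computed. A minimal sketch (the master/slave resource name ms_drbd_r0 is taken from the constraint quoted below):

    # resource status as the cluster sees it
    crm_mon -1
    # DRBD's own view of role, connection and disk state
    cat /proc/drbd
    # allocation and promotion scores from the live CIB;
    # shows why the Master role is not being placed anywhere
    crm_simulate -sL

A promotion score of -INFINITY for ms_drbd_r0 on both nodes in the crm_simulate output would explain why DRBD stays slave everywhere.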
There is a location constraint:

    <rsc_location rsc="ms_drbd_r0" id="drbd-fence-by-handler-r0-ms_drbd_r0">
      <rule role="Master" score="-INFINITY" id="drbd-fence-by-handler-r0-rule-ms_drbd_r0">
        <expression attribute="#uname" operation="ne" value="lisel1" id="drbd-fence-by-handler-r0-expr-ms_drbd_r0"/>
      </rule>
    </rsc_location>

Master, -INFINITY, #uname ne "lisel1". I would interpret that as: the Master role must not run on any node whose node name is not lisel1, i.e. it may only be promoted on lisel1. But this node (n2) is 'lisel1', so the constraint by itself should not prevent promotion here.

I deleted that location constraint and did corosync stop + start again; it did not help. (Where such constraints come from and how to remove them cleanly is sketched after the quoted message below.)

2013/6/25 andreas graeper <agrae...@googlemail.com>

> hi,
> maybe again and again the same question, please excuse.
>
> Two nodes (n1 active / n2 passive), and `service corosync stop` on the active one.
> Does the node that is going down tell the other that it has gone before it
> actually disconnects, so that there is no reason for n2 to kill n1?
>
> On n2, after n1's corosync stop:
>
> drbd:promote OK
> lvm:start OK
> filesystem:start OK
> but ipaddr2 is still stopped?
>
> n1::drbd:demote works?! So I would expect that all the depending resources
> were stopped successfully?!
> And if not, why? Why should ipaddr2:stop fail?
> And if it did fail, could filesystem:stop, lvm:stop and drbd:demote succeed?
>
> How can I find some hint in the logs about why ipaddr fails to start?
>
> thanks
> andreas
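For reference: constraints whose id starts with drbd-fence-by-handler are created by DRBD's crm-fence-peer.sh handler when the peer becomes unreachable. A minimal sketch of the drbd.conf pieces that enable this (assuming the stock handler paths from the drbd-utils package; the resource name r0 matches the constraint id above):

    resource r0 {
        disk {
            # outdate the peer via a Pacemaker constraint instead of a full STONITH
            fencing resource-only;
        }
        handlers {
            # places the drbd-fence-by-handler-* constraint
            fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
            # removes it again after a successful resync
            after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
        }
    }

If the constraint has to go manually, the crm shell can delete it by id:

    crm configure delete drbd-fence-by-handler-r0-ms_drbd_r0

Removing it only helps if the data on the surviving node really is up to date; the handler created the constraint precisely because DRBD could no longer tell.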
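On the log question in the quoted message: the lrmd logs every start/stop it runs, and the OCF resource agents log through it, so grepping syslog for the agent name usually turns up the reason. A sketch, assuming logs land in /var/log/messages (on Debian-based systems use /var/log/syslog):

    # lrmd activity and IPaddr2 agent output around the failed start
    grep -iE 'ipaddr|lrmd' /var/log/messages | tail -n 50
    # fail counts: distinguishes "start was tried and failed"
    # from "start was never attempted" (constraint/ordering problem)
    crm_mon -1 -f

If the fail count for the IP resource is zero, the start was never attempted, which points at ordering/colocation constraints or scores rather than at the agent itself.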