On 02.05.2018 05:52, 范国腾 wrote:
> Hi,
> The cluster has three nodes: one master and two slaves. We run “pcs 
> cluster stop --all” to stop all of the nodes, then run “pcs cluster 
> start” on the master node. The cluster fails to come up: the stonith 
> resource cannot be started, so none of the other resources can be 
> started either.
> 
> We tested this case in two cluster systems, and the result is the same:
> 
> - If we start all three nodes, the stonith resource starts. If we then 
>   stop one node, the stonith resource migrates to another node and the 
>   cluster keeps working.
> 
> - If we start only one or two of the nodes, the stonith resource cannot 
>   be started.
> 
> 
> (1)   We create the stonith resources using this method in one system:
> pcs stonith create ipmi_node1 fence_ipmilan ipaddr="192.168.100.202" 
> login="ADMIN" passwd="ADMIN" pcmk_host_list="node1"
> pcs stonith create ipmi_node2 fence_ipmilan ipaddr="192.168.100.203" 
> login="ADMIN" passwd="ADMIN" pcmk_host_list="node2"
> pcs stonith create ipmi_node3 fence_ipmilan ipaddr="192.168.100.204" 
> login="ADMIN" passwd="ADMIN" pcmk_host_list="node3"
> 
> 
> (2)   We create the stonith resource using this method in another system:
> 
> pcs stonith create scsi-stonith-device fence_scsi devices=/dev/mapper/fence 
> pcmk_monitor_action=metadata pcmk_reboot_action=off pcmk_host_list="node1 
> node2 node3 node4" meta provides=unfencing;
> 
> 
> The log is in the attachment.
> What prevents the stonith resource from starting when only some of the 
> nodes are started?

The log says it quite clearly:

May  1 22:02:09 node3 pengine[17997]:  notice: Cannot fence unclean
nodes until quorum is attained (or no-quorum-policy is set to ignore)
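
A three-node cluster needs a majority of votes (two of three) for
quorum. A node that starts alone is inquorate, and Pacemaker will not
fence the two peers it still considers unclean, so the stonith
resources, and everything that depends on fencing, stay stopped.
Starting at least two nodes (or all of them with "pcs cluster start
--all") restores quorum and lets fencing proceed.

A quick way to confirm this, assuming a stock corosync 2.x / pcs setup
(command names may differ slightly between pcs versions):

  # show the vote and quorum state as corosync sees it
  corosync-quorumtool -s

  # show the current cluster-wide quorum policy (the default is "stop")
  pcs property show no-quorum-policy

  # let the cluster proceed without quorum; risky with fencing enabled,
  # since an isolated node may fence its peers and take over resources
  # during a split brain
  pcs property set no-quorum-policy=ignore

For a two-node cluster, corosync's two_node setting (which enables
wait_for_all) is the usual answer rather than ignore; with three or
more nodes, starting a quorate majority is the safer fix.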