Hi Ken,
I've tried with and without colocation. The rule was:
colocation bla2 inf: VM_VM1 AA_Filesystem_CDrive1
In both cases VM_VM1 tries to live-migrate back to the node returning from
standby while the cloned AA_Filesystem_CDrive1 isn't up on it yet.
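One pattern that is often suggested for this situation (a sketch, not taken from the thread; the constraint id "ord_bla2" is invented): a colocation constraint only ties placement together, so an order constraint is additionally needed to make the VM wait for the clone instance on that node.

```
# Assumed crm-shell sketch: add an order constraint alongside the existing
# colocation, so VM_VM1 is only started/migrated after the clone
# AA_Filesystem_CDrive1 is up on the target node.
# The id "ord_bla2" is invented for illustration.
order ord_bla2 inf: AA_Filesystem_CDrive1 VM_VM1
colocation bla2 inf: VM_VM1 AA_Filesystem_CDrive1
```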
Same result with Pacemaker 1.1.14-rc2.
Regards,
Hi Ulrich,
This is only the part of the config that concerns the problem.
Even with dummy resources the behaviour is identical, so I don't think
that the dlm/clvmd resource config will help solve the problem.
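A minimal dummy-resource reproduction along those lines might look like this (a sketch under assumptions: the resource and constraint ids are invented, only the colocation pattern is taken from the thread):

```
# Hypothetical reproduction with ocf:pacemaker:Dummy resources:
primitive d_fs ocf:pacemaker:Dummy
clone cl_d_fs d_fs
primitive d_vm ocf:pacemaker:Dummy
# Same colocation pattern as with VM_VM1 / AA_Filesystem_CDrive1:
colocation col_dummy inf: d_vm cl_d_fs
```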
Regards,
Klecho
On 17.12.2015 08:19:43 Ulrich Windl wrote:
> >>> Klechomir
Here is what pacemaker says right after node1 comes back after standby:
Dec 16 16:11:41 [4512] CLUSTER-2 pengine: debug: native_assign_node:
All nodes for resource VM_VM1 are unavailable, unclean or shutting down
(CLUSTER-1: 1, -100)
Dec 16 16:11:41 [4512] CLUSTER-2 pengine:
I have a customer (running SLE 11 SP4 HAE) who is seeing the following
stonith behavior running the ipmi stonith plugin.
Dec 15 14:21:43 test4 pengine[24002]: warning: pe_fence_node: Node
test3 will be fenced because termination was requested
Dec 15 14:21:43 test4 pengine[24002]: warning:
On 12/17/2015 10:32 AM, Ron Kerry wrote:
> I have a customer (running SLE 11 SP4 HAE) who is seeing the following
> stonith behavior running the ipmi stonith plugin.
>
> Dec 15 14:21:43 test4 pengine[24002]: warning: pe_fence_node: Node
> test3 will be fenced because termination was requested
>
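For context on the "termination was requested" message: as far as I know, pe_fence_node reports this when fencing has been asked for explicitly, e.g. via the transient "terminate" node attribute. As an assumption (not shown in the thread), something like the following would trigger that log line:

```
# Hypothetical example: requesting fencing of test3 by setting the
# transient "terminate" node attribute in the status section, which the
# pengine reports as "will be fenced because termination was requested".
crm_attribute --type status --node test3 --name terminate --update true
```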