On Thu, Dec 17, 2020 at 11:11 AM Gabriele Bulfon <gbul...@sonicle.com> wrote:
>
> Yes, sorry took same bash by mistake...here are the correct logs.
>
> Yes, xstha1 has a delay of 10s so that I'm giving it precedence; xstha2 has a
> delay of 1s and will be fenced earlier.
> During the short time before xstha2 got powered off, I saw it had time to
> turn on the NFS IP (I saw a duplicated IP on xstha1).

Again - please write so that others can understand you. How should we
know what "NFS IP" is supposed to be? You have two resources that
look like they are IP-related, and neither of them has NFS in its
name: xstha1_san0_IP, xstha2_san0_IP. And even if they did have NFS in
their names - which of the two resources are you talking about?

According to the logs from xstha1, it started to activate resources only
after stonith was confirmed:

Dec 16 15:08:12 [708] stonith-ng:   notice: log_operation:
Operation 'off' [1273] (call 4 from crmd.712) for host 'xstha2' with
device 'xstha2-stonith' returned: 0 (OK)
Dec 16 15:08:12 [708] stonith-ng:   notice: remote_op_done:
Operation 'off' targeting xstha2 on xstha1 for
crmd.712@xstha1.e487e7cc: OK

It is possible that your IPMI/BMC/whatever implementation responds
with success before it actually completes this action. I have seen at
least some such delays in the past. There is not really much that can be
done here except adding an artificial delay to the stonith resource agent.
You need to test IPMI functionality before using it in pacemaker.
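As a sketch only (the actual stonith device definition is not shown in this thread, so the agent name, BMC address and credentials below are assumptions), with a fence_ipmilan-style agent the power_wait parameter makes the agent wait after issuing the power action before reporting success:

```shell
# Hypothetical device definition; ipaddr/login/passwd are placeholders.
# power_wait=10 delays the agent's "OK" by 10s after the power action;
# method=onoff requests a real power-off instead of a power-cycle.
crm configure primitive xstha2-stonith stonith:fence_ipmilan \
    params pcmk_host_list="xstha2" ipaddr="192.0.2.12" \
           login="admin" passwd="secret" \
           method="onoff" power_wait="10"

# Exercise the fencing path outside of a real failure first, e.g.:
#   stonith_admin --reboot xstha2 --verbose
# and watch the BMC/console until the node is really powered off.
```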

In this case xstha1 may have configured the xstha2_san0_IP resource before
xstha2 was actually down. This would explain the duplicated IP.
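A duplicated address can be confirmed from either node with arping's duplicate address detection mode; the interface and address below are placeholders, since the real VIP comes from the xstha2_san0_IP resource definition:

```shell
# iputils arping in Duplicate Address Detection mode (-D):
# exit status 0 means no other host answered for the address.
arping -D -c 2 -I san0 192.168.1.100 \
    && echo "address free" \
    || echo "duplicate: another host still holds this IP"
```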

> And because the configuration has "order zpool_data_order inf: zpool_data (
> xstha1_san0_IP )", that means xstha2 had imported the zpool for a short time
> before being fenced, and this must never happen.

There is no indication in the logs that pacemaker started or attempted to
start either xstha1_san0_IP or zpool_data on xstha2.

>
> What suggests to me that resources were started on xstha2 (and the duplicated
> IP is an effect) are these log portions from xstha2.

The resource xstha2_san0_IP *remained* started on xstha2. pacemaker
did not try to stop it at all; it had no reason to do so.

> These tell me it could not turn off resources on xstha1 (correct, it
> couldn't contact xstha1):
>
> Dec 16 15:08:56 [667]    pengine:  warning: custom_action:      Action 
> xstha1_san0_IP_stop_0 on xstha1 is unrunnable (offline)
> Dec 16 15:08:56 [667]    pengine:  warning: custom_action:      Action 
> zpool_data_stop_0 on xstha1 is unrunnable (offline)
> Dec 16 15:08:56 [667]    pengine:  warning: custom_action:      Action 
> xstha2-stonith_stop_0 on xstha1 is unrunnable (offline)
> Dec 16 15:08:56 [667]    pengine:  warning: custom_action:      Action 
> xstha2-stonith_stop_0 on xstha1 is unrunnable (offline)
>
> These tell me xstha2 took control of resources that were actually running
> on xstha1:
>
> Dec 16 15:08:56 [667]    pengine:   notice: LogAction:   * Move       
> xstha1_san0_IP     ( xstha1 -> xstha2 )
> Dec 16 15:08:56 [667]    pengine:     info: LogActions: Leave   
> xstha2_san0_IP  (Started xstha2)
> Dec 16 15:08:56 [667]    pengine:   notice: LogAction:   * Move       
> zpool_data         ( xstha1 -> xstha2 )
> Dec 16 15:08:56 [667]    pengine:     info: LogActions: Leave   
> xstha1-stonith  (Started xstha2)
> Dec 16 15:08:56 [667]    pengine:   notice: LogAction:   * Stop       
> xstha2-stonith     (           xstha1 )   due to node availability
>

These lines are only the action plan: what pacemaker *will* do, not
what it has already done.
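If you want to see the planned transition without waiting for a real failure, crm_simulate can replay it; for example, against the live CIB:

```shell
# Show the transition pacemaker would execute for the current cluster state.
crm_simulate --simulate --live-check

# Show allocation scores as well, to see why a resource lands on a node.
crm_simulate --show-scores --live-check
```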