Re: [ClusterLabs] 2 node clusters - ipmi fencing

2020-02-21 Thread Ricardo Esteves
Hi,

I'm trying to understand the objective of these constraints: what is the
difference between running each fencing device on the opposite node, on
its own node, or all on the same node? Can you explain?


On 21/02/2020 02:35, Ondrej wrote:
> Hello Ricardo,
>
> On 2/21/20 9:59 AM, Dan Swartzendruber wrote:
>> I believe you in fact want each fence agent to run on the other node,
>> yes.
>>
>> On February 20, 2020, at 6:23 PM, Ricardo Esteves  wrote:
>>
>> Hi,
>>
>> I have a question regarding fencing. I have two physical servers, node01
>> and node02, and each one has an IPMI card,
>>
>> so I created two fence devices:
>>
>> fence_ipmi_node01 (with the IP of node01's IPMI card) - with a
>> constraint to prefer running on node01
>> fence_ipmi_node02 (with the IP of node02's IPMI card) - with a
>> constraint to prefer running on node02 - also configured a 20s delay on
>> this one
>
> with the 20s delay: make sure to test that this behaves well with your
> cluster. This value depends heavily on the hardware (IPMI device
> responsiveness), so proper testing is essential.
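A static delay like this is usually set on the fence device itself; a minimal sketch with pcs, assuming the device name from this thread (the option is `delay` on the classic agents, but some newer Pacemaker/fence-agents combinations use `pcmk_delay_base` instead, so verify against your versions):

```shell
# Assumed device name from this thread; 'delay' may need to be
# 'pcmk_delay_base' depending on the fence-agents version.
pcs stonith update fence_ipmi_node02 delay=20

# Confirm the device picked up the option:
pcs stonith show fence_ipmi_node02
```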
>
>> Is this the best practice?
>> Like this, node01 can only fence itself, right? And node02 can also only
>> fence itself, right?
> Not exactly:
> - node2 will use node1's IPMI device to fence node1
> - node1 will use node2's IPMI device to fence node2
> Both of the above should happen regardless of constraints.
> A node should never "fence itself" with an IPMI device, because fencing
> typically follows this procedure:
> - 1. check state
> - 2. turn off
> - 3. check state
> - 4. turn on
> If a node "fenced itself", there would be no one left to continue after
> step 2. :) All actions during fencing with fence_ipmilan or similar take
> place on one node. (Agents that have unfencing, like fence_scsi, are a
> different story.)
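The status/off/status/on sequence can also be exercised by hand before trusting the cluster with it; a sketch assuming fence_ipmilan, with placeholder IPMI address and credentials:

```shell
# Placeholder address and credentials; substitute your own.
# -o status only queries power state, it does not fence anything.
fence_ipmilan -a 192.0.2.102 -l admin -p secret -o status

# Or let the cluster run the full off/on sequence
# (warning: this really powers off node02):
pcs stonith fence node02
```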
>
>> Shouldn't I configure the fence_ipmi_node01 location constraint to place
>> it on node02, and the fence_ipmi_node02 location constraint to place it
>> on node01, so that node01 can fence node02 and vice versa?
> Having node1's IPMI device prefer node2, and node2's IPMI device prefer
> node1, might be considered good practice. What is important to understand
> is that the constraint controls where the cluster runs the 'monitor'
> check of the IPMI device. That means you will be monitoring node1's
> readiness to use node2's IPMI device, and node2's readiness to use
> node1's IPMI device.
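Put together, this layout could be sketched with pcs roughly as follows (IPMI addresses and credentials are placeholders, and parameter names vary slightly between fence-agents versions, e.g. older releases use ipaddr/login/passwd):

```shell
# One fence device per node, each knowing only its own node's IPMI card:
pcs stonith create fence_ipmi_node01 fence_ipmilan \
    ip=192.0.2.101 username=admin password=secret \
    pcmk_host_list=node01
pcs stonith create fence_ipmi_node02 fence_ipmilan \
    ip=192.0.2.102 username=admin password=secret \
    pcmk_host_list=node02 delay=20

# Run each device's monitor on the node that would actually use it:
pcs constraint location fence_ipmi_node01 prefers node02
pcs constraint location fence_ipmi_node02 prefers node01
```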
>
> -- 
> Ondrej Famera
> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> ClusterLabs home: https://www.clusterlabs.org/



[ClusterLabs] 2 node clusters - ipmi fencing

2020-02-20 Thread Ricardo Esteves
Hi,

I have a question regarding fencing. I have two physical servers, node01
and node02, and each one has an IPMI card,

so I created two fence devices:

fence_ipmi_node01 (with the IP of node01's IPMI card) - with a
constraint to prefer running on node01
fence_ipmi_node02 (with the IP of node02's IPMI card) - with a
constraint to prefer running on node02 - also configured a 20s delay on
this one

Is this the best practice?
Like this, node01 can only fence itself, right? And node02 can also only
fence itself, right?
Shouldn't I configure the fence_ipmi_node01 location constraint to place
it on node02, and the fence_ipmi_node02 location constraint to place it
on node01, so that node01 can fence node02 and vice versa?


Re: [ClusterLabs] Fedora 31 - systemd based resources don't start

2020-02-17 Thread Ricardo Esteves
Hi,

Yes, I also don't understand why it is trying to stop them first.

SELinux is disabled:

# getenforce
Disabled

All systemd services controlled by the cluster are disabled from
starting at boot:

# systemctl is-enabled httpd
disabled

# systemctl is-enabled openvpn-server@01-server
disabled


On 17/02/2020 20:28, Ken Gaillot wrote:
> On Mon, 2020-02-17 at 17:35 +, Maverick wrote:
>> Hi,
>>
>> When I start my cluster, most of my systemd resources won't start:
>>
>> Failed Resource Actions:
>>   * apache_stop_0 on boss1 'OCF_TIMEOUT' (198): call=82,
>> status='Timed Out', exitreason='', last-rc-change='1970-01-01
>> 01:00:54 +01:00', queued=29ms, exec=197799ms
>>   * openvpn_stop_0 on boss1 'OCF_TIMEOUT' (198): call=61,
>> status='Timed Out', exitreason='', last-rc-change='1970-01-01
>> 01:00:54 +01:00', queued=1805ms, exec=198841ms
> These show that attempts to stop failed, rather than start.
>
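If the stops are failing simply because these services legitimately take longer than the configured 100s to shut down, one avenue is raising the stop timeouts; a hedged sketch with pcs, using the resource names from this thread (the failures above show roughly 198s of actual stop time):

```shell
# Raise the stop timeout above the observed ~198s stop duration:
pcs resource update apache op stop interval=0s timeout=240s
pcs resource update openvpn op stop interval=0s timeout=240s
```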
>> So every time I reboot my node, I need to start the resources manually
>> using systemd, for example:
>>
>> systemctl start httpd
>>
>> and then pcs resource cleanup
>>
>> Resources configuration:
>>
>> Clone: apache-clone
>>   Meta Attrs: maintenance=false
>>   Resource: apache (class=systemd type=httpd)
>>    Meta Attrs: maintenance=false
>>    Operations: monitor interval=60 timeout=100 (apache-monitor-interval-60)
>>                start interval=0s timeout=100 (apache-start-interval-0s)
>>                stop interval=0s timeout=100 (apache-stop-interval-0s)
>>
>> Resource: openvpn (class=systemd type=openvpn-server@01-server)
>>    Meta Attrs: maintenance=false
>>    Operations: monitor interval=60 timeout=100 (openvpn-monitor-interval-60)
>>                start interval=0s timeout=100 (openvpn-start-interval-0s)
>>                stop interval=0s timeout=100 (openvpn-stop-interval-0s)
>>
>>
>>
>> Btw, if I try a debug-start / debug-stop, the mentioned resources
>> start and stop OK.
> Based on that, my first guess would be SELinux. Check the SELinux logs
> for denials.
>
> Also, make sure your systemd services are not enabled in systemd itself
> (e.g. via systemctl enable). Clustered systemd services should be
> managed by the cluster only.
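Both checks can be run from a shell; a sketch using the service names from this thread (ausearch assumes auditd is running, and with SELinux disabled it will simply find nothing):

```shell
# Look for recent SELinux denials (no matches means no denials):
ausearch -m avc -ts recent

# Verify systemd itself will not start cluster-managed services:
systemctl is-enabled httpd openvpn-server@01-server
systemctl disable httpd openvpn-server@01-server
```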
