On Thu, 2021-02-11 at 11:48 +0300, Ben .T.George wrote:
> HI
> 
> yes, that's what I meant, both resources on the same node.
> 
> On Thu, Feb 11, 2021 at 11:39 AM Ulrich Windl <
> ulrich.wi...@rz.uni-regensburg.de> wrote:
> > >>> "Ben .T.George" <bentech4...@gmail.com> wrote on 10.02.2021 at 20:28 in
> > message
> > <ca+c_govfvm10jgyuouezpatensozjbfrhyuyycwjf8hsj88...@mail.gmail.com>:
> > > HI
> > >
> > > i have 2 resources and i would like configure in such a way that both
> > > should always run from same node,
> >
> > "from" == "on"?
> >
> > see "colocation" constraints.
> >
> > > also is it safe to give below values for 2 node cluster:
> > >
> > > pcs resource defaults migration-threshold=1
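Note that since both resources are already members of the group ems_rg in
the config quoted later in the thread, the group itself already keeps them
on the same node (a group implies colocation and ordering of its members).
For two ungrouped resources, an explicit colocation constraint would look
something like this (resource names taken from that config; a sketch, not
a tested command line):

    # Keep ems_app on whatever node is running ems_vip:
    pcs constraint colocation add ems_app with ems_vip INFINITY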
You can safely set migration-threshold to whatever you want. 1 means that
any failure will result in failover to the other node. The resource will
not be able to run again on the node where the failure occurred until you
clear the failure (either manually, or with a failure-timeout).

> > > pcs property set no-quorum-policy=ignore

If you are using corosync 3, it's not necessary to set
no-quorum-policy=ignore. pcs should set "two_node: 1" in corosync.conf,
which will make corosync do something equivalent to
no-quorum-policy=ignore at that level. This will also enable wait-for-all,
which means the two nodes must see each other once at cluster start-up
before corosync will enable quorum. Fencing is necessary to prevent
split-brain.

> > > below is my pcs config:
> > > ---------------------------------------------
> > > Cluster Name: EMS
> > > Corosync Nodes:
> > >  zkwemsapp01.example.com zkwemsapp02.example.com
> > > Pacemaker Nodes:
> > >  zkwemsapp01.example.com zkwemsapp02.example.com
> > >
> > > Resources:
> > >  Group: ems_rg
> > >   Resource: ems_vip (class=ocf provider=heartbeat type=IPaddr2)
> > >    Attributes: cidr_netmask=24 ip=10.96.11.39
> > >    Meta Attrs: resource-stickiness=1
> > >    Operations: monitor interval=30s (ems_vip-monitor-interval-30s)
> > >                start interval=0s timeout=20s (ems_vip-start-interval-0s)
> > >                stop interval=0s timeout=20s (ems_vip-stop-interval-0s)
> > >   Resource: ems_app (class=systemd type=ems-app)
> > >    Meta Attrs: resource-stickiness=1
> > >    Operations: monitor interval=60 timeout=100 (ems_app-monitor-interval-60)
> > >                start interval=0s timeout=40s (ems_app-start-interval-0s)
> > >                stop interval=0s timeout=40s (ems_app-stop-interval-0s)
> > >
> > > Stonith Devices:
> > >  Resource: ems_vmware_fence (class=stonith type=fence_vmware_soap)
> > >   Attributes: ip=10.151.37.110 password=gfghfghfghfghfgh
> > >    pcmk_host_map=zkwemsapp01.example.com:ZKWEMSAPP01;zkwemsapp02.example.com:ZKWEMSAPP02
> > >    ssl_insecure=1 username=domain\redhat.fadmin
> > >   Operations: monitor interval=60s (ems_vmware_fence-monitor-interval-60s)
> > > Fencing Levels:
> > >  Target: zkwemsapp01.example.com
> > >    Level 1 - ems_vmware_fence
> > >  Target: zkwemsapp02.example.com
> > >    Level 1 - ems_vmware_fence
> > >
> > > Location Constraints:
> > > Ordering Constraints:
> > > Colocation Constraints:
> > > Ticket Constraints:
> > >
> > > Alerts:
> > >  No alerts defined
> > >
> > > Resources Defaults:
> > >  resource-stickiness=1000
> > > Operations Defaults:
> > >  No defaults set
> > >
> > > Cluster Properties:
> > >  cluster-infrastructure: corosync
> > >  cluster-name: EMS
> > >  dc-version: 2.0.2-3.el8-744a30d655
> > >  have-watchdog: false
> > >  last-lrm-refresh: 1612951127
> > >  symmetric-cluster: true
> > >
> > > Quorum:
> > >   Options:
> > > ----------------------------------------------
> > >
> > > Regards,
> > > Ben

-- 
Ken Gaillot <kgail...@redhat.com>

_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/
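P.S. For reference, clearing a failure as described above could look like
this (using the ems_app resource from the quoted config; commands are a
sketch, adjust the resource name and timeout to your setup):

    # Clear recorded failures manually, so the resource is again
    # allowed to run on the node where it previously failed:
    pcs resource cleanup ems_app

    # Or have Pacemaker expire failures automatically by setting the
    # failure-timeout meta attribute on the resource:
    pcs resource meta ems_app failure-timeout=10min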