[ClusterLabs] Antw: [EXT] Re: Order set troubles

2021-03-24 Thread Ulrich Windl
>>> Ken Gaillot  wrote on 24.03.2021 at 18:56 in message
<5bffded9c6e614919981dcc7d0b2903220bae19d.ca...@redhat.com>:
> On Wed, 2021-03-24 at 09:27 +, Strahil Nikolov wrote:
>> Hello All,
>>  
>> I have trouble creating an order set.
>> The end goal is to create a 2-node cluster where nodeA will mount
>> nfsA, while nodeB will mount nfsB. On top of that a dependent cloned
>> resource should start on a node only if nfsA or nfsB has started
>> locally.

This looks like an odd design to me, and I wonder: what is the use case?
(We have been using "NFS loop-mounts" for many years, where the cluster needs
the NFS service it provides, but that's a different design.)

Regards,
Ulrich


>>  
>> A prototype code would be something like:
>> pcs constraint order start (nfsA or nfsB) then start resource-clone
>>  
>> I tried to create a set like this, but it works only on nodeB:
>> pcs constraint order set nfsA nfsB resource-clone
>> 
>> Any idea how to implement that order constraint?
>> Thanks in advance.
>> 
>> Best Regards,
>> Strahil Nikolov
> 
> Basically you want two sets, one with nfsA and nfsB with no ordering
> between them, and a second set with just resource-clone, ordered after
> the first set.
> 
> I believe the pcs syntax is:
> 
> pcs constraint order set nfsA nfsB sequential=false require-all=false
> set resource-clone
> 
> sequential=false says nfsA and nfsB have no ordering between them, and
> require-all=false says that resource-clone only needs one of them.
> 
> (I don't remember for sure the order of the sets in the command, i.e.
> whether it's the primary set first or the dependent set first, but I
> think that's right.)
> -- 
> Ken Gaillot 
> 





Re: [ClusterLabs] Order set troubles

2021-03-24 Thread Andrei Borzenkov
On 24.03.2021 20:56, Ken Gaillot wrote:
> On Wed, 2021-03-24 at 09:27 +, Strahil Nikolov wrote:
>> Hello All,
>>  
>> I have trouble creating an order set.
>> The end goal is to create a 2-node cluster where nodeA will mount
>> nfsA, while nodeB will mount nfsB. On top of that a dependent cloned
>> resource should start on a node only if nfsA or nfsB has started
>> locally.
>>  
>> A prototype code would be something like:
>> pcs constraint order start (nfsA or nfsB) then start resource-clone
>>  
>> I tried to create a set like this, but it works only on nodeB:
>> pcs constraint order set nfsA nfsB resource-clone
>>
>> Any idea how to implement that order constraint?
>> Thanks in advance.
>>
>> Best Regards,
>> Strahil Nikolov
> 
> Basically you want two sets, one with nfsA and nfsB with no ordering
> between them, and a second set with just resource-clone, ordered after
> the first set.
> 
> I believe the pcs syntax is:
> 
> pcs constraint order set nfsA nfsB sequential=false require-all=false
> set resource-clone
> 
> sequential=false says nfsA and nfsB have no ordering between them, and
> require-all=false says that resource-clone only needs one of them.
> 

Won't that start clone instances on all nodes when either nfsA or nfsB
is active? He wants to start the clone instance on A only if nfsA was
started on A, and the clone instance on B only if nfsB was started on B.

The closest I can come up with is making nfsA/B a clone and
ordering/colocating the clones. That works during a brief test. If the
nfs instance cannot be started on a node, the resource-clone instance is
not started there either, but both are started on the other node.
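
For illustration, a rough sketch of that clone-based approach; the resource
name and Filesystem parameters below are examples only, not taken from
Strahil's configuration:

    # one cloned NFS mount instead of separate nfsA/nfsB primitives
    pcs resource create nfs ocf:heartbeat:Filesystem \
        device="nfsserver:/export/data" directory="/mnt/data" fstype="nfs" \
        op monitor interval=20s clone

    # order and colocate the dependent clone with the local nfs instance
    pcs constraint order start nfs-clone then start resource-clone
    pcs constraint colocation add resource-clone with nfs-clone

With those constraints, a node where the nfs instance fails to start will not
run the resource-clone instance either, matching the behaviour described above.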

> (I don't remember for sure the order of the sets in the command, i.e.
> whether it's the primary set first or the dependent set first, but I
> think that's right.)
> 



Re: [ClusterLabs] Order set troubles

2021-03-24 Thread Ken Gaillot
On Wed, 2021-03-24 at 09:27 +, Strahil Nikolov wrote:
> Hello All,
>  
> I have trouble creating an order set.
> The end goal is to create a 2-node cluster where nodeA will mount
> nfsA, while nodeB will mount nfsB. On top of that a dependent cloned
> resource should start on a node only if nfsA or nfsB has started
> locally.
>  
> A prototype code would be something like:
> pcs constraint order start (nfsA or nfsB) then start resource-clone
>  
> I tried to create a set like this, but it works only on nodeB:
> pcs constraint order set nfsA nfsB resource-clone
> 
> Any idea how to implement that order constraint?
> Thanks in advance.
> 
> Best Regards,
> Strahil Nikolov

Basically you want two sets, one with nfsA and nfsB with no ordering
between them, and a second set with just resource-clone, ordered after
the first set.

I believe the pcs syntax is:

pcs constraint order set nfsA nfsB sequential=false require-all=false
set resource-clone

sequential=false says nfsA and nfsB have no ordering between them, and
require-all=false says that resource-clone only needs one of them.

(I don't remember for sure the order of the sets in the command, i.e.
whether it's the primary set first or the dependent set first, but I
think that's right.)
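
For what it's worth, one way to resolve that doubt is to create the constraint
and then inspect what pcs actually wrote; the exact flags may vary a little
between pcs versions:

    pcs constraint order set nfsA nfsB sequential=false require-all=false \
        set resource-clone
    pcs constraint --full
    pcs cluster cib | grep -A4 '<rsc_order'

In the resulting rsc_order element, the resource_set listed first is the one
that has to be satisfied first, so nfsA/nfsB should appear in the first set
and resource-clone in the second.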
-- 
Ken Gaillot 



Re: [ClusterLabs] WebSite_start_0 on node2 'error' (1): call=6, status='complete', exitreason='Failed to access httpd status page.'

2021-03-24 Thread Ken Gaillot
On Wed, 2021-03-24 at 10:50 +, Jason Long wrote:
> Thank you.
> From node1 and node2, I can ping the floating IP address
> (192.168.56.9).
> I stopped node1:
> 
> # pcs cluster stop node1
> node1: Stopping Cluster (pacemaker)...
> node1: Stopping Cluster (corosync)...
> 
> And from both machines, I can ping the floating IP address:
> 
> [root@node1 ~]# ping 192.168.56.9
> PING 192.168.56.9 (192.168.56.9) 56(84) bytes of data.
> 64 bytes from 192.168.56.9: icmp_seq=1 ttl=64 time=0.504 ms
> 64 bytes from 192.168.56.9: icmp_seq=2 ttl=64 time=0.750 ms
> ...
> 
> [root@node2 ~]# ping 192.168.56.9
> PING 192.168.56.9 (192.168.56.9) 56(84) bytes of data.
> 64 bytes from 192.168.56.9: icmp_seq=1 ttl=64 time=0.423 ms
> 64 bytes from 192.168.56.9: icmp_seq=2 ttl=64 time=0.096 ms
> ...
> 
> 
> So?

Now you can proceed with the "Add Apache HTTP" section. Once apache is
set up as a cluster resource, you should be able to contact the web
server at the floating IP (or more realistically whatever name you've
associated with that IP), and have the cluster fail over both the IP
address and web server as needed.
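
As a rough sketch of that step, roughly following the "Clusters from Scratch"
guide (the config file path and status URL are the guide's defaults and assume
the server-status location is enabled; adjust for your setup):

    pcs resource create WebSite ocf:heartbeat:apache \
        configfile=/etc/httpd/conf/httpd.conf \
        statusurl="http://localhost/server-status" \
        op monitor interval=1min

    # keep the web server with the floating IP, and start the IP first
    pcs constraint colocation add WebSite with ClusterIP INFINITY
    pcs constraint order ClusterIP then WebSite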


> On Wednesday, March 24, 2021, 02:41:44 AM GMT+4:30, Ken Gaillot <
> kgail...@redhat.com> wrote: 
> 
> 
> 
> 
> 
> On Tue, 2021-03-23 at 20:15 +, Jason Long wrote:
> > Thanks.
> > The floating IP address must not be used by other machines. I have
> > two VMs that are using "192.168.57.6" and "192.168.57.7". Could the
> > floating IP address be "192.168.57.8"?
> 
> Yes, if it's in the same subnet and not already in use by some other
> machine.
> 
> > Which part of my configuration is wrong? Why, when I disconnect
> > node1, doesn't node2 replace it?
> 
> The first thing I would do is configure and test fencing. Once you're
> confident fencing is working, add the floating IP address. Make sure
> you can ping the floating IP address from some other machine. Then
> test fail-over and ensure you can still ping the floating IP. From
> there it should be straightforward.
> 
> 
> > 
> > 
> > 
> > 
> > 
> > On Wednesday, March 24, 2021, 12:33:53 AM GMT+4:30, Ken Gaillot <
> > kgail...@redhat.com> wrote: 
> > 
> > 
> > 
> > 
> > 
> > On Tue, 2021-03-23 at 19:07 +, Jason Long wrote:
> > > Thanks, but I want to have a cluster with two nodes and nothing
> > > more!
> > 
> > The end result is to have 2 nodes with 3 IP addresses:
> > 
> > * The first node has a permanently assigned IP address that it
> > brings up when it boots; this address is not managed by the cluster
> > 
> > * The second node also has a permanent address not managed by the
> > cluster
> > 
> > * A third, unused IP address from the same subnet is used as a
> > "floating" IP address, which means the cluster can sometimes run it
> > on the first node and sometimes on the second node. This IP address
> > is the one that users will use to contact the service.
> > 
> > That way, users always have a single address that they use, no
> > matter which node is providing the service.
> > 
> > > 
> > > 
> > > 
> > > 
> > > 
> > > On Tuesday, March 23, 2021, 07:59:57 PM GMT+4:30, Klaus Wenninger
> > > <
> > > kwenn...@redhat.com> wrote: 
> > > 
> > > 
> > > 
> > > 
> > > 
> > > On 3/23/21 4:07 PM, Jason Long wrote:
> > > > Thank you.
> > > > Thus, where must I define my node2 IP address? When node1 is
> > > > disconnected, I want node2 to replace it.
> > > > 
> > > 
> > > You just need a single IP address that you are assigning to the
> > > virtual IP resource.
> > > And pacemaker is gonna move that IP address - along with the
> > > web-proxy - between the 2 nodes.
> > > Of course node1 & node2 have IP addresses that are being used for
> > > cluster-communication, but they are totally independent (well,
> > > maybe in the same subnet for a simple setup) from the IP address
> > > your web-proxy is reachable at.
> > > 
> > > Klaus
> > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > On Tuesday, March 23, 2021, 01:03:39 PM GMT+4:30, Klaus
> > > > Wenninger
> > > > <
> > > > kwenn...@redhat.com> wrote:
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > On 3/23/21 9:13 AM, Jason Long wrote:
> > > > > Thank you.
> > > > > But: 
> > > > > https://www.clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/ch06.html
> > > > > ?
> > > > > 
> > > > > The floating IP address is: 
> > > > > https://www.clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/_add_a_resource.html
> > > > > In the "Warning" it is written: "The chosen address must not
> > > > > already be in use on the network. Do not reuse an IP address
> > > > > one of the nodes already has configured." What does it mean?
> > > > 
> > > > It means that if you would be using an IP that is already in
> > > > use on your network - by one of your cluster-nodes or something
> > > > else - pacemaker would possibly activate that IP and you would
> > > > have a duplicate IP in your network.

Re: [ClusterLabs] WebSite_start_0 on node2 'error' (1): call=6, status='complete', exitreason='Failed to access httpd status page.'

2021-03-24 Thread Jason Long
Thank you.
From node1 and node2, I can ping the floating IP address (192.168.56.9).
I stopped node1:

# pcs cluster stop node1
node1: Stopping Cluster (pacemaker)...
node1: Stopping Cluster (corosync)...

And from both machines, I can ping the floating IP address:

[root@node1 ~]# ping 192.168.56.9
PING 192.168.56.9 (192.168.56.9) 56(84) bytes of data.
64 bytes from 192.168.56.9: icmp_seq=1 ttl=64 time=0.504 ms
64 bytes from 192.168.56.9: icmp_seq=2 ttl=64 time=0.750 ms
...

[root@node2 ~]# ping 192.168.56.9
PING 192.168.56.9 (192.168.56.9) 56(84) bytes of data.
64 bytes from 192.168.56.9: icmp_seq=1 ttl=64 time=0.423 ms
64 bytes from 192.168.56.9: icmp_seq=2 ttl=64 time=0.096 ms
...


So?


On Wednesday, March 24, 2021, 02:41:44 AM GMT+4:30, Ken Gaillot 
 wrote: 





On Tue, 2021-03-23 at 20:15 +, Jason Long wrote:
> Thanks.
> The floating IP address must not be used by other machines. I have two
> VMs that are using "192.168.57.6" and "192.168.57.7". Could the
> floating IP address be "192.168.57.8"?

Yes, if it's in the same subnet and not already in use by some other
machine.

> Which part of my configuration is wrong? Why, when I disconnect
> node1, doesn't node2 replace it?

The first thing I would do is configure and test fencing. Once you're
confident fencing is working, add the floating IP address. Make sure
you can ping the floating IP address from some other machine. Then test
fail-over and ensure you can still ping the floating IP. From there it
should be straightforward.
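
As a sketch only (the fence agent and its options depend entirely on your
hardware or hypervisor; fence_ipmilan and the placeholder values here are just
an example):

    pcs stonith create fence-node1 fence_ipmilan ip=<bmc-of-node1> \
        username=<user> password=<pass> pcmk_host_list=node1
    pcs stonith create fence-node2 fence_ipmilan ip=<bmc-of-node2> \
        username=<user> password=<pass> pcmk_host_list=node2
    pcs stonith fence node2        # verify fencing actually works

    # then test fail-over of the floating IP, e.g. by putting a node in
    # standby (older pcs versions use "pcs cluster standby" instead)
    pcs node standby node1
    ping 192.168.56.9              # should still answer, served by node2
    pcs node unstandby node1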


> 
> 
> 
> 
> 
> On Wednesday, March 24, 2021, 12:33:53 AM GMT+4:30, Ken Gaillot <
> kgail...@redhat.com> wrote: 
> 
> 
> 
> 
> 
> On Tue, 2021-03-23 at 19:07 +, Jason Long wrote:
> > Thanks, but I want to have a cluster with two nodes and nothing
> > more!
> 
> The end result is to have 2 nodes with 3 IP addresses:
> 
> * The first node has a permanently assigned IP address that it brings
> up when it boots; this address is not managed by the cluster
> 
> * The second node also has a permanent address not managed by the
> cluster
> 
> * A third, unused IP address from the same subnet is used as a
> "floating" IP address, which means the cluster can sometimes run it
> on the first node and sometimes on the second node. This IP address
> is the one that users will use to contact the service.
> 
> That way, users always have a single address that they use, no matter
> which node is providing the service.
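
A minimal sketch of creating such a floating address as a cluster resource,
using the 192.168.56.9 address mentioned earlier in this thread (pick the
netmask that matches your subnet):

    pcs resource create floating_ip ocf:heartbeat:IPaddr2 \
        ip=192.168.56.9 cidr_netmask=24 op monitor interval=30s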
> 
> > 
> > 
> > 
> > 
> > 
> > On Tuesday, March 23, 2021, 07:59:57 PM GMT+4:30, Klaus Wenninger <
> > kwenn...@redhat.com> wrote: 
> > 
> > 
> > 
> > 
> > 
> > On 3/23/21 4:07 PM, Jason Long wrote:
> > > Thank you.
> > > Thus, where must I define my node2 IP address? When node1 is
> > > disconnected, I want node2 to replace it.
> > > 
> > 
> > You just need a single IP address that you are assigning to the
> > virtual IP resource.
> > And pacemaker is gonna move that IP address - along with the
> > web-proxy - between the 2 nodes.
> > Of course node1 & node2 have IP addresses that are being used for
> > cluster-communication, but they are totally independent (well,
> > maybe in the same subnet for a simple setup) from the IP address
> > your web-proxy is reachable at.
> > 
> > Klaus
> > 
> > > 
> > > 
> > > 
> > > 
> > > On Tuesday, March 23, 2021, 01:03:39 PM GMT+4:30, Klaus Wenninger
> > > <
> > > kwenn...@redhat.com> wrote:
> > > 
> > > 
> > > 
> > > 
> > > 
> > > On 3/23/21 9:13 AM, Jason Long wrote:
> > > > Thank you.
> > > > But: 
> > > > https://www.clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/ch06.html
> > > > ?
> > > > 
> > > > The floating IP address is: 
> > > > https://www.clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/_add_a_resource.html
> > > > In the "Warning" it is written: "The chosen address must not
> > > > already be in use on the network. Do not reuse an IP address
> > > > one of the nodes already has configured." What does it mean?
> > > 
> > > It means that if you would be using an IP that is already in use
> > > on your network - by one of your cluster-nodes or something else -
> > > pacemaker would possibly activate that IP and you would have
> > > a duplicate IP in your network.
> > > Thus for the question below: Don't use the IP of node2 for
> > > your floating IP.
> > > 
> > > Klaus
> > > 
> > > > In the below command, "IP" is the IP address of my node2?
> > > > # pcs resource create ClusterIP
> > > > ocf:heartbeat:IPaddr2 ip=192.168.122.120 cidr_netmask=32 op
> > > > monitor interval=30s
> > > > 
> > > > If yes, then I must update it with below command?
> > > > 
> > > > # pcs resource update floating_ip ocf:heartbeat:IPaddr2
> > > > ip="Node2
> > > > IP" cidr_netmask=32 op monitor interval=30s
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > On Tuesday, March 23, 2021, 12:02:15 AM GMT+4:30, Ken Gaillot <
> > > > kgail...@redhat.com> wrote:
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> 

[ClusterLabs] Order set troubles

2021-03-24 Thread Strahil Nikolov

Hello All,

 

I have trouble creating an order set.

The end goal is to create a 2-node cluster where nodeA will mount nfsA, while
nodeB will mount nfsB. On top of that a dependent cloned resource should start
on a node only if nfsA or nfsB has started locally.

 

A prototype code would be something like:
pcs constraint order start (nfsA or nfsB) then start resource-clone

 

I tried to create a set like this, but it works only on nodeB:

pcs constraint order set nfsA nfsB resource-clone




Any idea how to implement that order constraint?

Thanks in advance.




Best Regards,

Strahil Nikolov


[ClusterLabs] resource-agents v4.8.0

2021-03-24 Thread Oyvind Albrigtsen

ClusterLabs is happy to announce resource-agents v4.8.0.

Source code is available at:
https://github.com/ClusterLabs/resource-agents/releases/tag/v4.8.0

The most significant enhancements in this release are:
- bugfixes and enhancements:
 - awsvip: don't partially match similar IPs during monitor-action
 - aws agents: don't spam log files when getting token
 - galera/rabbitmq-cluster/redis: run crm_mon without performing validation
   to solve pcmk version mismatch issues between host and container(s)
 - podman: return OCF_NOT_RUNNING when monitor cmd fails (not running)
 - Filesystem: change force_unmount default to safe for RHEL9+
 - Route: return OCF_NOT_RUNNING status if iface doesn't exist.
 - VirtualDomain: fix pid_status() on EL8 (and other distros with newer 
versions of qemu) (#1614)
 - anything: only write PID to pidfile (when sh prints message(s))
 - azure-lb: redirect stdout and stderr to /dev/null to avoid nc dying with 
EPIPE error
 - configure: don't use OCF_ROOT_DIR from glue.h
 - docker-compose: use -f $YML in all calls to avoid issues when not using 
default YML file
 - gcp-vpc-move-route, gcp-vpc-move-vip: add project ID parameter
 - gcp-vpc-move-route: fix stop-action when route stopped, and fix 
check_conflicting_routes()
 - gcp-vpc-move-route: make "vpc_network" optional
 - gcp-vpc-move-vip: correctly return error when no instances are returned
 - ldirectord: added real servers threshold settings
 - mysql-common: check datadir permissions
 - nfsclient: fix stop-action when export not present
 - nfsserver: error-check unbind_tree
 - pgsql: make wal receiver check compatible with PostgreSQL >= 11
 - spec: add BuildRequires for google lib

The full list of changes for resource-agents is available at:
https://github.com/ClusterLabs/resource-agents/blob/v4.8.0/ChangeLog

Everyone is encouraged to download and test the new release.
We do many regression tests and simulations, but we can't cover all
possible use cases, so your feedback is important and appreciated.

Many thanks to all the contributors to this release.


Best,
The resource-agents maintainers



Re: [ClusterLabs] Q: Is there any plan for pcs to support corosync-notifyd?

2021-03-24 Thread 井上和徳
On Thu, Mar 18, 2021 at 6:31 PM Jehan-Guillaume de Rorthais
 wrote:
>
> On Thu, 18 Mar 2021 17:29:59 +0900
> 井上和徳  wrote:
>
> > On Tue, Mar 16, 2021 at 10:23 PM Jehan-Guillaume de Rorthais
> >  wrote:
> > >
> > > > On Tue, 16 Mar 2021, 09:58 井上和徳,  wrote:
> > > >
> > > > > Hi!
> > > > >
> > > > > Cluster (corosync and pacemaker) can be started with pcs,
> > > > > but corosync-notifyd needs to be started separately with systemctl,
> > > > > which is not easy to use.
> > >
> > > Maybe you can add to the [Install] section of corosync-notifyd a
> > > dependency on corosync? E.g.:
> > >
> > >   WantedBy=corosync.service
> > >
> > > (use systemctl edit corosync-notifyd)
> > >
> > > Then re-enable the service (without starting it by hands).
> >
> > I appreciate your proposal. Learning how to use WantedBy was helpful!
> > However, since I want to start the cluster (corosync, pacemaker) only
> > manually, it is unacceptable to start corosync along with
> > corosync-notifyd at OS boot time.
>
> This is perfectly fine.
>
> I suppose corosync-notifyd is starting because the default service config has:
>
>   [Install]
>   WantedBy=multi-user.target
>
> If you want corosync-notifyd to be enabled ONLY on corosync startup, but not
> on system startup, you have to remove this startup dependency on the
> "multi-user" target. So, your drop-in setup of corosync-notifyd should be
> (remove leading spaces):
>
>   cat <<EOF
>   [Install]
>   WantedBy=
>   WantedBy=corosync.service
>   EOF
>

Oh, that makes sense!
With this setting, it seems that the purpose can be achieved.
Thank you.

> The first empty WantedBy= removes any pre-existing dependency.
>
> Then disable/enable corosync-notifyd again to install the new dependency and
> remove old ones. It should only create ONE link in
> "/etc/systemd/system/corosync.service.wants/corosync-notifyd.service",
> but NOT in "/etc/systemd/system/multi-user.target.wants/".
>
> Regards,
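
For reference, one way to apply the drop-in described above (using systemctl
edit is an assumption here; any method that creates the drop-in file works):

    systemctl edit corosync-notifyd    # paste the [Install] lines shown above
    systemctl disable corosync-notifyd
    systemctl enable corosync-notifyd

    # verify: the link should now exist only under corosync.service.wants
    ls /etc/systemd/system/corosync.service.wants/
    ls /etc/systemd/system/multi-user.target.wants/ | grep corosync-notifyd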


___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/