Re: [ClusterLabs] getting "Totem is unable to form a cluster" error
On Mon, Apr 11, 2016 at 08:23:03AM +0200, Jan Friesse wrote:
...
>>> bond0: mtu 1500 qdisc noqueue state UP
>>>     link/ether 74:e6:e2:73:e5:61 brd ff:ff:ff:ff:ff:ff
>>>     inet 10.150.20.91/24 brd 10.150.20.55 scope global bond0
>>>     inet 192.168.150.12/22 brd 192.168.151.255 scope global bond0:cluster
>>>     inet6 fe80::76e6:e2ff:fe73:e561/64 scope link
>>>        valid_lft forever preferred_lft forever
>>
>> This is ifconfig output? I'm just wondering how you were able to set
>> two ipv4 addresses (in this format, I would expect another interface
>> like bond0:1 or nothing at all)?

...

No, it is "ip addr show" output.

> RHEL 6:
>
> # tunctl -p
> Set 'tap0' persistent and owned by uid 0
>
> # ip addr add 192.168.7.1/24 dev tap0
> # ip addr add 192.168.8.1/24 dev tap0
> # ifconfig tap0
> tap0      Link encap:Ethernet  HWaddr 22:95:B1:85:67:3F
>           inet addr:192.168.7.1  Bcast:0.0.0.0  Mask:255.255.255.0
>           BROADCAST MULTICAST  MTU:1500  Metric:1
>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:500
>           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

# ip addr add 192.168.7.1/24 dev tap0
# ip addr add 192.168.8.1/24 dev tap0 label tap0:jan
# ip addr show dev tap0

And as long as you actually use those "label"s, you then can even see
these with "ifconfig tap0:jan".

--
: Lars Ellenberg
: LINBIT | Keeping the Digital World Running
: DRBD -- Heartbeat -- Corosync -- Pacemaker
: R&D, Integration, Ops, Consulting, Support

DRBD® and LINBIT® are registered trademarks of LINBIT

_______________________________________________
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
Re: [ClusterLabs] getting "Totem is unable to form a cluster" error
08.04.2016 17:51, Jan Friesse wrote:
On 04/08/16 13:01, Jan Friesse wrote:
>> pacemaker 1.1.12-11.12
>> openais 1.1.4-5.24.5
>> corosync 1.4.7-0.23.5
>>
>> It's a two-node active/passive cluster and we just upgraded SLES 11
>> SP 3 to SLES 11 SP 4 (nothing else), but when we try to start the
>> cluster service we get the following error:
>>
>> "Totem is unable to form a cluster because of an operating system or
>> network fault."
>>
>> Firewall is stopped and disabled on both the nodes. Both nodes can
>> ping/ssh/vnc each other.
>
> Hard to help. First of all, I would recommend to ask SUSE support because
> I don't really have access to the source code of the corosync 1.4.7-0.23.5
> package, so I really don't know what patches are added.

Yup, ticket opened with SUSE Support.

>> /var/log/messages:
>> Apr 6 17:51:49 prd1 corosync[8672]: [MAIN  ] Corosync Cluster Engine
>> ('1.4.7'): started and ready to provide service.
>> Apr 6 17:51:49 prd1 corosync[8672]: [MAIN  ] Corosync built-in
>> features: nss
>> Apr 6 17:51:49 prd1 corosync[8672]: [MAIN  ] Successfully configured
>> openais services to load
>> Apr 6 17:51:49 prd1 corosync[8672]: [MAIN  ] Successfully read main
>> configuration file '/etc/corosync/corosync.conf'.
>> Apr 6 17:51:49 prd1 corosync[8672]: [TOTEM ] Initializing transport
>> (UDP/IP Unicast).
>> Apr 6 17:51:49 prd1 corosync[8672]: [TOTEM ] Initializing
>> transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
>> Apr 6 17:51:49 prd1 corosync[8672]: [TOTEM ] The network interface is
>> down.
>
> ^^^ This is the important line. It means corosync was unable to find an
> interface for bindnetaddr 192.168.150.0. Make sure an interface with this
> network address exists.
>>> this machine has two IP addresses assigned on interface bond0
>>>
>>> bond0: mtu 1500 qdisc noqueue state UP
>>>     link/ether 74:e6:e2:73:e5:61 brd ff:ff:ff:ff:ff:ff
>>>     inet 10.150.20.91/24 brd 10.150.20.55 scope global bond0
>>>     inet 192.168.150.12/22 brd 192.168.151.255 scope global bond0:cluster
>>>     inet6 fe80::76e6:e2ff:fe73:e561/64 scope link
>>>        valid_lft forever preferred_lft forever
>>
>> This is ifconfig output? I'm just wondering how you were able to set
>> two ipv4 addresses (in this format, I would expect another interface
>> like bond0:1 or nothing at all)?
>
> That is how the Linux stack has worked for the last 10 or 15 years. The
> bond0:1 is legacy emulation for ifconfig addicts.
>
> ip addr add 10.150.20.91/24 dev bond0

Hmm.

RHEL 6:

# tunctl -p
Set 'tap0' persistent and owned by uid 0

# ip addr add 192.168.7.1/24 dev tap0
# ip addr add 192.168.8.1/24 dev tap0
# ifconfig tap0
tap0      Link encap:Ethernet  HWaddr 22:95:B1:85:67:3F
          inet addr:192.168.7.1  Bcast:0.0.0.0  Mask:255.255.255.0
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

RHEL 7:

# ip tuntap add dev tap0 mode tap
# ip addr add 192.168.7.1/24 dev tap0
# ip addr add 192.168.8.1/24 dev tap0
# ifconfig tap0
tap0: flags=4098  mtu 1500
        inet 192.168.7.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 36:02:5c:ff:29:ea  txqueuelen 500  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

So where do you see 192.168.8.1 in ifconfig output?

Anyway, I was trying to create a bonding interface and set a second ipv4
address (via ip addr), and corosync (flatiron, which is 1.4.8 plus 4 patches
completely unrelated to your problem) was able to detect it without any
problem.
I can recommend you to try:
- Set bindnetaddr to the IP address of the given node (so you have to
  change bindnetaddr on both nodes)
- Try upstream corosync 1.4.8/flatiron

Regards,
  Honza

>>> And I can ping 192.168.150.12 from this machine and from other machines
>>> on the network
>>>
>>> --
>>> Regards,
>>> Muhammad Sharfuddin
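[Editor's note: Honza's first suggestion amounts to an interface stanza like the following sketch, using the addresses from this thread; it is an untested illustration, and the second node's file would name its own address, 192.168.150.13, as bindnetaddr.]

```
totem {
    interface {
        ringnumber: 0
        # node's own IP instead of a network address; differs per node
        bindnetaddr: 192.168.150.12
        member {
            memberaddr: 192.168.150.12
        }
        member {
            memberaddr: 192.168.150.13
        }
        mcastport: 5405
    }
}
```

When bindnetaddr has host bits set, corosync 1.x treats it as a node address rather than a network address, which sidesteps the netmask mismatch entirely.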
Re: [ClusterLabs] getting "Totem is unable to form a cluster" error
On 04/08/16 19:51, Jan Friesse wrote:
>>>> /var/log/messages:
>>>> Apr 6 17:51:49 prd1 corosync[8672]: [TOTEM ] Initializing transport
>>>> (UDP/IP Unicast).
>>>> Apr 6 17:51:49 prd1 corosync[8672]: [TOTEM ] Initializing
>>>> transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
>>>> Apr 6 17:51:49 prd1 corosync[8672]: [TOTEM ] The network interface is
>>>> down.
>>>
>>> ^^^ This is the important line. It means corosync was unable to find an
>>> interface for bindnetaddr 192.168.150.0. Make sure an interface with
>>> this network address exists.
>>
>> this machine has two IP addresses assigned on interface bond0
>>
>> bond0: mtu 1500 qdisc noqueue state UP
>>     link/ether 74:e6:e2:73:e5:61 brd ff:ff:ff:ff:ff:ff
>>     inet 10.150.20.91/24 brd 10.150.20.55 scope global bond0
>>     inet 192.168.150.12/22 brd 192.168.151.255 scope global bond0:cluster
>>     inet6 fe80::76e6:e2ff:fe73:e561/64 scope link
>>        valid_lft forever preferred_lft forever
>
> This is ifconfig output?

No, this is the output of "ip a s bond0".

> I'm just wondering how you were able to set two ipv4 addresses (in this
> format, I would expect another interface like bond0:1 or nothing at all)?

These IPs are physical, i.e. assigned via the configuration file of the
bond0 interface:

cat /etc/sysconfig/network/ifcfg-bond0
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=active-backup miimon=100'
BONDING_SLAVE0='em1'
BONDING_SLAVE1='em2'
BOOTPROTO='static'
IPADDR='10.150.20.91/24'
IPADDR_0='192.168.150.12/22'
LABEL_0='cluster'

> Anyway, I was trying to create a bonding interface and set a second ipv4
> address (via ip addr), and corosync (flatiron, which is 1.4.8 plus 4
> patches completely unrelated to your problem) was able to detect it
> without any problem.
> I can recommend you to try:
> - Set bindnetaddr to the IP address of the given node (so you have to
>   change bindnetaddr on both nodes)

Thanks a lot. I got similar advice from SUSE Support, and changing the
bindnetaddr from 192.168.150.0 to 192.168.148.0 (since the netmask we are
using on this bond is a /22; the bond0:cluster IP address is
192.168.150.12/22) fixed the issue.

> - Try upstream corosync 1.4.8/flatiron

Not required.

> Regards,
>   Honza

--
Regards,
Muhammad Sharfuddin
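[Editor's note: the arithmetic behind the fix is worth spelling out. 192.168.150.12 with a /22 netmask lives in network 192.168.148.0/22, so the original bindnetaddr of 192.168.150.0 named no local network. A quick check with Python's standard library, verifying only the numbers from this thread:]

```python
import ipaddress

# bond0:cluster address from the thread, with its /22 prefix
iface = ipaddress.ip_interface("192.168.150.12/22")

# The network address is what bindnetaddr must name for a network-style match.
print(iface.network)                  # 192.168.148.0/22
print(iface.network.network_address)  # 192.168.148.0

# The originally configured bindnetaddr is NOT this network's address:
print(ipaddress.ip_address("192.168.150.0") == iface.network.network_address)  # False

# ...though 192.168.150.0 would have been correct for a /24 prefix:
print(ipaddress.ip_interface("192.168.150.12/24").network)  # 192.168.150.0/24
```

This is why the same bindnetaddr that worked before the netmask was widened to /22 stopped matching after the upgrade.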
Re: [ClusterLabs] getting "Totem is unable to form a cluster" error
08.04.2016 17:51, Jan Friesse wrote:
>> On 04/08/16 13:01, Jan Friesse wrote:
>>>> pacemaker 1.1.12-11.12
>>>> openais 1.1.4-5.24.5
>>>> corosync 1.4.7-0.23.5
>>>>
>>>> It's a two-node active/passive cluster and we just upgraded SLES 11
>>>> SP 3 to SLES 11 SP 4 (nothing else), but when we try to start the
>>>> cluster service we get the following error:
>>>>
>>>> "Totem is unable to form a cluster because of an operating system or
>>>> network fault."
>>>>
>>>> Firewall is stopped and disabled on both the nodes. Both nodes can
>>>> ping/ssh/vnc each other.
>>>
>>> Hard to help. First of all, I would recommend to ask SUSE support
>>> because I don't really have access to the source code of the corosync
>>> 1.4.7-0.23.5 package, so I really don't know what patches are added.
>>
>> Yup, ticket opened with SUSE Support.
>>
>>>> /var/log/messages:
>>>> Apr 6 17:51:49 prd1 corosync[8672]: [MAIN  ] Corosync Cluster Engine
>>>> ('1.4.7'): started and ready to provide service.
>>>> Apr 6 17:51:49 prd1 corosync[8672]: [MAIN  ] Corosync built-in
>>>> features: nss
>>>> Apr 6 17:51:49 prd1 corosync[8672]: [MAIN  ] Successfully configured
>>>> openais services to load
>>>> Apr 6 17:51:49 prd1 corosync[8672]: [MAIN  ] Successfully read main
>>>> configuration file '/etc/corosync/corosync.conf'.
>>>> Apr 6 17:51:49 prd1 corosync[8672]: [TOTEM ] Initializing transport
>>>> (UDP/IP Unicast).
>>>> Apr 6 17:51:49 prd1 corosync[8672]: [TOTEM ] Initializing
>>>> transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
>>>> Apr 6 17:51:49 prd1 corosync[8672]: [TOTEM ] The network interface is
>>>> down.
>>>
>>> ^^^ This is the important line. It means corosync was unable to find an
>>> interface for bindnetaddr 192.168.150.0. Make sure an interface with
>>> this network address exists.
>> this machine has two IP addresses assigned on interface bond0
>>
>> bond0: mtu 1500 qdisc noqueue state UP
>>     link/ether 74:e6:e2:73:e5:61 brd ff:ff:ff:ff:ff:ff
>>     inet 10.150.20.91/24 brd 10.150.20.55 scope global bond0
>>     inet 192.168.150.12/22 brd 192.168.151.255 scope global bond0:cluster
>>     inet6 fe80::76e6:e2ff:fe73:e561/64 scope link
>>        valid_lft forever preferred_lft forever
>
> This is ifconfig output? I'm just wondering how you were able to set two
> ipv4 addresses (in this format, I would expect another interface like
> bond0:1 or nothing at all)?

That is how the Linux stack has worked for the last 10 or 15 years. The
bond0:1 is legacy emulation for ifconfig addicts.

ip addr add 10.150.20.91/24 dev bond0

> Anyway, I was trying to create a bonding interface and set a second ipv4
> address (via ip addr), and corosync (flatiron, which is 1.4.8 plus 4
> patches completely unrelated to your problem) was able to detect it
> without any problem.
>
> I can recommend you to try:
> - Set bindnetaddr to the IP address of the given node (so you have to
>   change bindnetaddr on both nodes)
> - Try upstream corosync 1.4.8/flatiron
>
> Regards,
>   Honza
>
>> And I can ping 192.168.150.12 from this machine and from other machines
>> on the network
>>
>> --
>> Regards,
>> Muhammad Sharfuddin
Re: [ClusterLabs] getting "Totem is unable to form a cluster" error
On 04/08/16 13:01, Jan Friesse wrote:
>> pacemaker 1.1.12-11.12
>> openais 1.1.4-5.24.5
>> corosync 1.4.7-0.23.5
>>
>> It's a two-node active/passive cluster and we just upgraded SLES 11
>> SP 3 to SLES 11 SP 4 (nothing else), but when we try to start the
>> cluster service we get the following error:
>>
>> "Totem is unable to form a cluster because of an operating system or
>> network fault."
>>
>> Firewall is stopped and disabled on both the nodes. Both nodes can
>> ping/ssh/vnc each other.
>
> Hard to help. First of all, I would recommend to ask SUSE support because
> I don't really have access to the source code of the corosync 1.4.7-0.23.5
> package, so I really don't know what patches are added.

Yup, ticket opened with SUSE Support.

>> /var/log/messages:
>> Apr 6 17:51:49 prd1 corosync[8672]: [MAIN  ] Corosync Cluster Engine
>> ('1.4.7'): started and ready to provide service.
>> Apr 6 17:51:49 prd1 corosync[8672]: [MAIN  ] Corosync built-in
>> features: nss
>> Apr 6 17:51:49 prd1 corosync[8672]: [MAIN  ] Successfully configured
>> openais services to load
>> Apr 6 17:51:49 prd1 corosync[8672]: [MAIN  ] Successfully read main
>> configuration file '/etc/corosync/corosync.conf'.
>> Apr 6 17:51:49 prd1 corosync[8672]: [TOTEM ] Initializing transport
>> (UDP/IP Unicast).
>> Apr 6 17:51:49 prd1 corosync[8672]: [TOTEM ] Initializing
>> transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
>> Apr 6 17:51:49 prd1 corosync[8672]: [TOTEM ] The network interface is
>> down.
>
> ^^^ This is the important line. It means corosync was unable to find an
> interface for bindnetaddr 192.168.150.0. Make sure an interface with this
> network address exists.
this machine has two IP addresses assigned on interface bond0:

bond0: mtu 1500 qdisc noqueue state UP
    link/ether 74:e6:e2:73:e5:61 brd ff:ff:ff:ff:ff:ff
    inet 10.150.20.91/24 brd 10.150.20.55 scope global bond0
    inet 192.168.150.12/22 brd 192.168.151.255 scope global bond0:cluster
    inet6 fe80::76e6:e2ff:fe73:e561/64 scope link
       valid_lft forever preferred_lft forever

And I can ping 192.168.150.12 from this machine and from other machines
on the network.

--
Regards,
Muhammad Sharfuddin
Re: [ClusterLabs] getting "Totem is unable to form a cluster" error
> pacemaker 1.1.12-11.12
> openais 1.1.4-5.24.5
> corosync 1.4.7-0.23.5
>
> It's a two-node active/passive cluster and we just upgraded SLES 11
> SP 3 to SLES 11 SP 4 (nothing else), but when we try to start the
> cluster service we get the following error:
>
> "Totem is unable to form a cluster because of an operating system or
> network fault."
>
> Firewall is stopped and disabled on both the nodes. Both nodes can
> ping/ssh/vnc each other.

Hard to help. First of all, I would recommend to ask SUSE support because
I don't really have access to the source code of the corosync 1.4.7-0.23.5
package, so I really don't know what patches are added.

> corosync.conf:
>
> aisexec {
>     group: root
>     user: root
> }
> service {
>     use_mgmtd: yes
>     use_logd: yes
>     ver: 0
>     name: pacemaker
> }
> totem {
>     rrp_mode: none
>     join: 60
>     max_messages: 20
>     vsftype: none
>     token: 5000
>     consensus: 6000
>     interface {
>         bindnetaddr: 192.168.150.0
>         member {
>             memberaddr: 192.168.150.12
>         }
>         member {
>             memberaddr: 192.168.150.13
>         }
>         mcastport: 5405
>         ringnumber: 0
>     }
>     secauth: off
>     version: 2
>     transport: udpu
>     token_retransmits_before_loss_const: 10
>     clear_node_high_bit: new
> }
> logging {
>     to_logfile: no
>     to_syslog: yes
>     debug: off
>     timestamp: off
>     to_stderr: no
>     fileline: off
>     syslog_facility: daemon
> }
> amf {
>     mode: disable
> }
>
> /var/log/messages:
> Apr 6 17:51:49 prd1 corosync[8672]: [MAIN  ] Corosync Cluster Engine
> ('1.4.7'): started and ready to provide service.
> Apr 6 17:51:49 prd1 corosync[8672]: [MAIN  ] Corosync built-in
> features: nss
> Apr 6 17:51:49 prd1 corosync[8672]: [MAIN  ] Successfully configured
> openais services to load
> Apr 6 17:51:49 prd1 corosync[8672]: [MAIN  ] Successfully read main
> configuration file '/etc/corosync/corosync.conf'.
> Apr 6 17:51:49 prd1 corosync[8672]: [TOTEM ] Initializing transport
> (UDP/IP Unicast).
> Apr 6 17:51:49 prd1 corosync[8672]: [TOTEM ] Initializing
> transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
> Apr 6 17:51:49 prd1 corosync[8672]: [TOTEM ] The network interface is
> down.

^^^ This is the important line.
It means corosync was unable to find an interface for bindnetaddr
192.168.150.0. Make sure an interface with this network address exists.

Regards,
  Honza
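[Editor's note: Honza's diagnosis follows from how corosync 1.x resolves bindnetaddr: it walks the local addresses and binds to the one whose network (address masked by its own netmask) equals bindnetaddr, also accepting an exact node-address match. The following is a simplified Python illustration of that matching rule, not corosync's actual C code; `find_matching_interface` is a hypothetical name for this sketch.]

```python
import ipaddress

def find_matching_interface(bindnetaddr, local_interfaces):
    """Return the first local address whose network (or exact address)
    matches bindnetaddr, else None.

    local_interfaces: list of 'addr/prefix' strings as shown by `ip addr`.
    Simplified sketch of corosync's interface selection.
    """
    target = ipaddress.ip_address(bindnetaddr)
    for cidr in local_interfaces:
        iface = ipaddress.ip_interface(cidr)
        # exact node-address match, or network-address match
        if iface.ip == target or iface.network.network_address == target:
            return cidr
    return None  # no match: corosync logs "The network interface is down."

# Addresses carried by bond0 in this thread:
bond0 = ["10.150.20.91/24", "192.168.150.12/22"]

print(find_matching_interface("192.168.150.0", bond0))   # None -> "down"
print(find_matching_interface("192.168.148.0", bond0))   # 192.168.150.12/22
print(find_matching_interface("192.168.150.12", bond0))  # 192.168.150.12/22
```

With the /22 netmask, 192.168.150.12's network is 192.168.148.0, so the configured 192.168.150.0 matches nothing; either the correct network address or the node's own IP resolves it.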
[ClusterLabs] getting "Totem is unable to form a cluster" error
pacemaker 1.1.12-11.12
openais 1.1.4-5.24.5
corosync 1.4.7-0.23.5

It's a two-node active/passive cluster and we just upgraded SLES 11 SP 3
to SLES 11 SP 4 (nothing else), but when we try to start the cluster
service we get the following error:

"Totem is unable to form a cluster because of an operating system or
network fault."

Firewall is stopped and disabled on both the nodes. Both nodes can
ping/ssh/vnc each other.

corosync.conf:

aisexec {
    group: root
    user: root
}
service {
    use_mgmtd: yes
    use_logd: yes
    ver: 0
    name: pacemaker
}
totem {
    rrp_mode: none
    join: 60
    max_messages: 20
    vsftype: none
    token: 5000
    consensus: 6000
    interface {
        bindnetaddr: 192.168.150.0
        member {
            memberaddr: 192.168.150.12
        }
        member {
            memberaddr: 192.168.150.13
        }
        mcastport: 5405
        ringnumber: 0
    }
    secauth: off
    version: 2
    transport: udpu
    token_retransmits_before_loss_const: 10
    clear_node_high_bit: new
}
logging {
    to_logfile: no
    to_syslog: yes
    debug: off
    timestamp: off
    to_stderr: no
    fileline: off
    syslog_facility: daemon
}
amf {
    mode: disable
}

/var/log/messages:
Apr 6 17:51:49 prd1 corosync[8672]: [MAIN  ] Corosync Cluster Engine
('1.4.7'): started and ready to provide service.
Apr 6 17:51:49 prd1 corosync[8672]: [MAIN  ] Corosync built-in
features: nss
Apr 6 17:51:49 prd1 corosync[8672]: [MAIN  ] Successfully configured
openais services to load
Apr 6 17:51:49 prd1 corosync[8672]: [MAIN  ] Successfully read main
configuration file '/etc/corosync/corosync.conf'.
Apr 6 17:51:49 prd1 corosync[8672]: [TOTEM ] Initializing transport
(UDP/IP Unicast).
Apr 6 17:51:49 prd1 corosync[8672]: [TOTEM ] Initializing
transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Apr 6 17:51:49 prd1 corosync[8672]: [TOTEM ] The network interface is
down.
Apr 6 17:51:49 prd1 corosync[8672]: [SERV  ] Service engine loaded: openais cluster membership service B.01.01
Apr 6 17:51:49 prd1 corosync[8672]: [SERV  ] Service engine loaded: openais event service B.01.01
Apr 6 17:51:49 prd1 corosync[8672]: [SERV  ] Service engine loaded: openais checkpoint service B.01.01
Apr 6 17:51:49 prd1 corosync[8672]: [SERV  ] Service engine loaded: openais availability management framework B.01.01
Apr 6 17:51:49 prd1 corosync[8672]: [SERV  ] Service engine loaded: openais message service B.03.01
Apr 6 17:51:49 prd1 corosync[8672]: [SERV  ] Service engine loaded: openais distributed locking service B.03.01
Apr 6 17:51:49 prd1 corosync[8672]: [SERV  ] Service engine loaded: openais timer service A.01.01
Apr 6 17:51:49 prd1 corosync[8672]: [pcmk  ] info: process_ais_conf: Reading configure
Apr 6 17:51:49 prd1 corosync[8672]: [pcmk  ] info: config_find_init: Local handle: 7685269064754659330 for logging
Apr 6 17:51:49 prd1 corosync[8672]: [pcmk  ] info: config_find_next: Processing additional logging options...
Apr 6 17:51:49 prd1 corosync[8672]: [pcmk  ] info: get_config_opt: Found 'off' for option: debug
Apr 6 17:51:49 prd1 corosync[8672]: [pcmk  ] info: get_config_opt: Found 'no' for option: to_logfile
Apr 6 17:51:49 prd1 corosync[8672]: [pcmk  ] info: get_config_opt: Found 'yes' for option: to_syslog
Apr 6 17:51:49 prd1 corosync[8672]: [pcmk  ] info: get_config_opt: Found 'daemon' for option: syslog_facility
Apr 6 17:51:49 prd1 corosync[8672]: [pcmk  ] info: config_find_init: Local handle: 8535092201842016259 for quorum
Apr 6 17:51:49 prd1 corosync[8672]: [pcmk  ] info: config_find_next: No additional configuration supplied for: quorum
Apr 6 17:51:49 prd1 corosync[8672]: [pcmk  ] info: get_config_opt: No default for option: provider
Apr 6 17:51:49 prd1 corosync[8672]: [pcmk  ] info: config_find_init: Local handle: 8054506479773810692 for service
Apr 6 17:51:49 prd1 corosync[8672]: [pcmk  ] info: config_find_next: Processing additional service options...
Apr 6 17:51:49 prd1 corosync[8672]: [pcmk  ] info: config_find_next: Processing additional service options...
Apr 6 17:51:49 prd1 corosync[8672]: [pcmk  ] info: config_find_next: Processing additional service options...
Apr 6 17:51:49 prd1 corosync[8672]: [pcmk  ] info: config_find_next: Processing additional service options...
Apr 6 17:51:49 prd1 corosync[8672]: [pcmk  ] info: config_find_next: Processing additional service options...
Apr 6 17:51:49 prd1 corosync[8672]: [pcmk  ] info: config_find_next: Processing additional service options...
Apr 6 17:51:49 prd1 corosync[8672]: [pcmk  ] info: config_find_next: Processing additional service options...
Apr 6 17:51:49 prd1 corosync[8672]: [pcmk  ] info: config_find_next: Processing additional service options...
Apr 6 17:51:49 prd1 corosync[8672]: [pcmk  ] info: get_config_opt: Found