Hello Digimer,

As an idea, might it be some settings in sysctl.conf?

Slava.
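[For illustration only: sysctl settings that sometimes matter for a multi-homed cluster node like this one. The interface name comes from the thread; the specific keys and values below are an assumption and untested here:

# /etc/sysctl.conf candidates to check on both nodes
# reverse-path filtering can drop traffic on multi-homed hosts
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.eth1.rp_filter = 0
# per-socket cap on multicast group memberships (20 is the usual default)
net.ipv4.igmp_max_memberships = 20
# apply with: sysctl -p
]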
----- Original Message -----
From: "Slava Bendersky" <[email protected]>
To: "Digimer" <[email protected]>
Cc: [email protected]
Sent: Saturday, November 23, 2013 10:27:22 PM
Subject: Re: [corosync] information request

Hello Digimer,

Yes, I set it to passive, and selinux is disabled.

[root@eusipgw01 ~]# sestatus
SELinux status:                 disabled

[root@eusipgw01 ~]# cat /etc/corosync/corosync.conf
totem {
        version: 2
        token: 160
        token_retransmits_before_loss_const: 3
        join: 250
        consensus: 300
        vsftype: none
        max_messages: 20
        threads: 0
        nodeid: 2
        rrp_mode: passive
        interface {
                ringnumber: 0
                bindnetaddr: 10.10.10.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
}

logging {
        fileline: off
        to_stderr: yes
        to_logfile: yes
        to_syslog: off
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

Slava.

----- Original Message -----
From: "Digimer" <[email protected]>
To: "Slava Bendersky" <[email protected]>
Cc: "Steven Dake" <[email protected]>, [email protected]
Sent: Saturday, November 23, 2013 7:04:43 PM
Subject: Re: [corosync] information request

First up, I'm not Steven. Secondly, did you follow Steven's
recommendation to not use active RRP? Does the cluster form with no RRP
at all? Is selinux enabled?

On 23/11/13 18:29, Slava Bendersky wrote:
> Hello Steven,
> With multicast, the log fills up with this message:
>
> Nov 24 00:26:28 corosync [TOTEM ] A processor failed, forming new
> configuration.
> Nov 24 00:26:28 corosync [TOTEM ] A processor joined or left the
> membership and a new membership was formed.
> Nov 24 00:26:31 corosync [CPG ] chosen downlist: sender r(0)
> ip(10.10.10.1) ; members(old:2 left:0)
> Nov 24 00:26:31 corosync [MAIN ] Completed service synchronization,
> ready to provide service.
>
> With udpu it is not working at all.
>
> Slava.
>
> ------------------------------------------------------------------------
> *From: *"Digimer" <[email protected]>
> *To: *"Slava Bendersky" <[email protected]>
> *Cc: *"Steven Dake" <[email protected]>, [email protected]
> *Sent: *Saturday, November 23, 2013 6:05:56 PM
> *Subject: *Re: [corosync] information request
>
> So multicast works with the firewall disabled?
>
> On 23/11/13 17:28, Slava Bendersky wrote:
>> Hello Steven,
>> I disabled iptables and there is no difference, the error message is
>> the same, but at least with multicast it wasn't generating the error.
>>
>> Slava.
>>
>> ------------------------------------------------------------------------
>> *From: *"Digimer" <[email protected]>
>> *To: *"Slava Bendersky" <[email protected]>, "Steven Dake"
>> <[email protected]>
>> *Cc: *[email protected]
>> *Sent: *Saturday, November 23, 2013 4:37:36 PM
>> *Subject: *Re: [corosync] information request
>>
>> Does either mcast or unicast work if you disable the firewall? If so,
>> then at least you know for sure that iptables is the problem.
>>
>> The link here shows the iptables rules I use (for corosync in mcast and
>> other apps):
>>
>> https://alteeve.ca/w/AN!Cluster_Tutorial_2#Configuring_iptables
>>
>> digimer
>>
>> On 23/11/13 16:12, Slava Bendersky wrote:
>>> Hello Steven,
>>> This is what I see when it is set up with UDPU:
>>>
>>> Nov 23 22:08:13 corosync [MAIN ] Compatibility mode set to whitetank.
>>> Using V1 and V2 of the synchronization engine.
>>> Nov 23 22:08:13 corosync [TOTEM ] adding new UDPU member {10.10.10.1}
>>> Nov 23 22:08:16 corosync [MAIN ] Totem is unable to form a cluster
>>> because of an operating system or network fault. The most common cause
>>> of this message is that the local firewall is configured improperly.
>>>
>>> Might be missing some firewall rules? I allowed unicast.
>>>
>>> Slava.
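[For illustration only: a rule set along these lines is roughly what corosync on eth1 would need. The interface name and port range are taken from the rules quoted further down; the explicit -j ACCEPT targets and the rest are an assumption on my part and untested against this setup:

# corosync totem/cpg traffic on the cluster interface (needed on both nodes)
-A INPUT -i eth1 -p udp -m state --state NEW -m udp --dport 5404:5407 -j ACCEPT
# multicast and IGMP, only needed when the multicast transport is used
-A INPUT -i eth1 -m pkttype --pkt-type multicast -j ACCEPT
-A INPUT -i eth1 -p igmp -j ACCEPT
]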
>>> ------------------------------------------------------------------------
>>> *From: *"Steven Dake" <[email protected]>
>>> *To: *"Slava Bendersky" <[email protected]>
>>> *Cc: *[email protected]
>>> *Sent: *Saturday, November 23, 2013 10:33:31 AM
>>> *Subject: *Re: [corosync] information request
>>>
>>> On 11/23/2013 08:23 AM, Slava Bendersky wrote:
>>>
>>> Hello Steven,
>>>
>>> My setup:
>>>
>>> 10.10.10.1 primary server ----- EoIP tunnel VPN IPsec ----- DR server
>>> 10.10.10.2
>>>
>>> Both servers have 2 interfaces: eth0, which is the default gateway
>>> out, and eth1, where corosync lives.
>>>
>>> Iptables:
>>>
>>> -A INPUT -i eth1 -p udp -m state --state NEW -m udp --dport 5404:5407
>>> -A INPUT -i eth1 -m pkttype --pkt-type multicast
>>> -A INPUT -i eth1 -p igmp
>>>
>>> Corosync.conf:
>>>
>>> totem {
>>>         version: 2
>>>         token: 160
>>>         token_retransmits_before_loss_const: 3
>>>         join: 250
>>>         consensus: 300
>>>         vsftype: none
>>>         max_messages: 20
>>>         threads: 0
>>>         nodeid: 2
>>>         rrp_mode: active
>>>         interface {
>>>                 ringnumber: 0
>>>                 bindnetaddr: 10.10.10.0
>>>                 mcastaddr: 226.94.1.1
>>>                 mcastport: 5405
>>>         }
>>> }
>>>
>>> Join message:
>>>
>>> [root@eusipgw01 ~]# corosync-objctl | grep member
>>> runtime.totem.pg.mrp.srp.members.2.ip=r(0) ip(10.10.10.2)
>>> runtime.totem.pg.mrp.srp.members.2.join_count=1
>>> runtime.totem.pg.mrp.srp.members.2.status=joined
>>> runtime.totem.pg.mrp.srp.members.1.ip=r(0) ip(10.10.10.1)
>>> runtime.totem.pg.mrp.srp.members.1.join_count=254
>>> runtime.totem.pg.mrp.srp.members.1.status=joined
>>>
>>> Is it possible that the ping is sent out of the wrong interface?
>>>
>>> Slava,
>>>
>>> I wouldn't expect so.
>>>
>>> Which version?
>>>
>>> Have you tried udpu instead? If not, it is preferable to multicast
>>> unless you want absolute performance on cpg groups. In most cases the
>>> performance difference is very small and not worth the trouble of
>>> setting up multicast in your network.
>>>
>>> Fabio had indicated rrp active mode is broken. I don't know the
>>> details, but try passive RRP - it is actually better than active,
>>> IMNSHO :)
>>>
>>> Regards
>>> -steve
>>>
>>> Slava.
>>>
>>> ------------------------------------------------------------------------
>>> *From: *"Steven Dake" <[email protected]>
>>> *To: *"Slava Bendersky" <[email protected]>, [email protected]
>>> *Sent: *Saturday, November 23, 2013 6:01:11 AM
>>> *Subject: *Re: [corosync] information request
>>>
>>> On 11/23/2013 12:29 AM, Slava Bendersky wrote:
>>>
>>> Hello Everyone,
>>> Corosync runs on a box with 2 Ethernet interfaces.
>>> I am getting this message:
>>> CPG mcast failed (6)
>>>
>>> Any information, thank you in advance.
>>>
>>> https://github.com/corosync/corosync/blob/master/include/corosync/corotypes.h#L84
>>>
>>> This can occur because:
>>> a) the firewall is enabled - there should be something in the logs
>>> telling you to properly configure the firewall
>>> b) a config change is in progress - this is a normal response, and
>>> you should try the request again
>>> c) a bug in the synchronization code is resulting in a blocked,
>>> unsynced cluster
>>>
>>> c is very unlikely at this point.
>>>
>>> 2 ethernet interfaces = rrp mode, bonding, or something else?
>>>
>>> Digimer needs moar infos :)
>>>
>>> Regards
>>> -steve
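[For illustration only: a totem section along these lines is one way to sketch the udpu + passive RRP suggestion above, using the corosync 1.x configuration syntax. The addresses and port come from the thread; the member list, the exact directives, and everything else are an assumption and unverified against this cluster:

totem {
        version: 2
        # unicast UDP transport, no multicast needed
        transport: udpu
        # passive RRP only takes effect once a second interface block (ringnumber: 1) is added
        rrp_mode: passive
        nodeid: 2
        interface {
                ringnumber: 0
                bindnetaddr: 10.10.10.0
                mcastport: 5405
                member {
                        memberaddr: 10.10.10.1
                }
                member {
                        memberaddr: 10.10.10.2
                }
        }
}
]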
--
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?
_______________________________________________
discuss mailing list
[email protected]
http://lists.corosync.org/mailman/listinfo/discuss
