Eric,

Well, I finally got around to trying out the <altname> tag, and I managed to get
two rings running, but the first ring is not obeying my address and port directives.

Here's my cluster.conf:

<cluster config_version="9" name="ha10ab">
   <fence_daemon/>
   <clusternodes>
     <clusternode name="ha10a" nodeid="1">
       <multicast addr="239.255.5.1" port="4000"/>
       <altname name="ha10a-cl" port="4000" mcast="239.255.5.2"/>
       <fence>
         <method name="pcmk-method">
           <device name="pcmk-redirect" port="ha10a"/>
         </method>
       </fence>
     </clusternode>
     <clusternode name="ha10b" nodeid="2">
       <multicast addr="239.255.5.1" port="4000"/>
       <altname name="ha10b-cl" port="4000" mcast="239.255.5.2"/>
       <fence>
         <method name="pcmk-method">
           <device name="pcmk-redirect" port="ha10b"/>
         </method>
       </fence>
     </clusternode>
   </clusternodes>
   <cman broadcast="no" expected_votes="1" transport="udp" two_node="1"/>
   <fencedevices>
     <fencedevice agent="fence_pcmk" name="pcmk-redirect"/>
   </fencedevices>
   <rm>
     <failoverdomains/>
     <resources/>
   </rm>
</cluster>

The rings are up...

[root@ha10b ~]# corosync-cfgtool -s
Printing ring status.
Local node ID 2
RING ID 0
         id      = 192.168.10.61
         status  = ring 0 active with no faults
RING ID 1
         id      = 198.51.100.61
         status  = ring 1 active with no faults

HOWEVER, when I run tcpdump, I can see that ring 1 is running on the appropriate
multicast address and port, but ring 0 is running on the default address and
port...

[root@ha10b ~]# tcpdump -nn -i bond0 net 239.192.0.0/16
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on bond0, link-type EN10MB (Ethernet), capture size 65535 bytes
22:54:36.738395 IP 192.168.10.60.5404 > 239.192.170.111.5405: UDP, length 119
22:54:40.547048 IP 192.168.10.60.5404 > 239.192.170.111.5405: UDP, length 119

How do I get ring 0 running on my desired address of 239.255.5.1 and port
4000?

I'm not sure the "mcast" attribute on every node is really needed.

Try the last example from:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-rrp-cli-ca


<cman>
   <multicast addr="239.192.99.73" port="666" ttl="2"/>
   <altmulticast addr="239.192.99.88" port="888" ttl="3"/>
</cman>
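
Untested, but applied to your addresses it would presumably look something like
this (the per-node <multicast> lines would come out, and the cluster-wide
multicast settings move under <cman>; I've kept your port 4000 for both rings):

```xml
<cman broadcast="no" expected_votes="1" transport="udp" two_node="1">
   <!-- ring 0: replaces the per-node <multicast> elements -->
   <multicast addr="239.255.5.1" port="4000"/>
   <!-- ring 1: pairs with the per-node <altname> elements -->
   <altmulticast addr="239.255.5.2" port="4000"/>
</cman>
```

The per-node <altname name="..."/> elements stay, since they tell each node
which interface to use for the second ring; only the mcast/port attributes
become redundant.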

Honza


--Eric

_______________________________________________
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
