Ziqing Zhuang napsal(a):
>
> I am trying to configure Corosync here. At first, I tried to use unicast,
> here is part of my corosync.conf file (I did not change others):
> interface {
>     # The following values need to be set based on your environment
>     member {
>         memberaddr: 192.168.254.131
>     }
>     member {
>         memberaddr: 192.168.254.134
>     }
>     ringnumber: 0
>     bindnetaddr: 192.168.254.0
>     mcastport: 5405
> }
> transport: udpu
> Then I run crm_mon -1, and everything works fine.
> It says 2 nodes configured.
> But if I use multicast, by changing this interface part to the following:
>
> interface {
>     # The following values need to be set based on your environment
>     ringnumber: 0
>     bindnetaddr: 192.168.254.0
>     mcastaddr: 227.94.1.2
>     mcastport: 5405
> }
> Then the system doesn't recognize any nodes; it just says 0 nodes configured.
> So I use
> root@devenv1: corosync-cfgtool -s
> Printing ring status.
> Local node ID 117352640
> Could not get the ring status, the error is: 6
> I am assuming mcastaddr is not correct, so I tried several different
> addresses, but it still doesn't work.
> Any advice?
Ziqing,
the mcast address should be ok (at least it is a multicast address, though
we recommend using the 239.255.x.x range). Error 6 is the corosync
equivalent of the EAGAIN error. This usually happens with a badly
configured firewall (just try disabling the firewall completely for a
while) or with a switch whose multicast configuration is wrong (or,
again, a firewall).
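For example, a minimal sketch of the interface section using an address
from the recommended range (your bindnetaddr and port are kept; the
mcastaddr value is only an example):

    interface {
        ringnumber: 0
        bindnetaddr: 192.168.254.0
        mcastaddr: 239.255.1.2
        mcastport: 5405
    }

To rule the firewall out for a moment (on RHEL/CentOS-like systems, for
example; remember to start it again afterwards):

    service iptables stop

And if the omping utility is available, you can check that multicast
really passes between the nodes by running on each of them (the
addresses are the node addresses from your udpu config):

    omping 192.168.254.131 192.168.254.134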
Take a look at /var/log/messages (or /var/log/cluster/corosync.log) to
see if you can find anything useful there.
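For example, something like this usually pulls out the relevant totem
messages (the exact path depends on your logging configuration):

    grep -i -e TOTEM -e corosync /var/log/messages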
Regards,
Honza
_______________________________________________
discuss mailing list
[email protected]
http://lists.corosync.org/mailman/listinfo/discuss