Hi Dan
 
Thanks
 
That resolved it.
 
Anoop

________________________________

From: Dan Frincu [mailto:dfri...@streamwide.ro] 
Sent: Wednesday, July 21, 2010 3:13 AM
To: Rajkumar, Anoop
Cc: linux-cluster@redhat.com
Subject: Re: Linux-cluster Digest, Vol 75, Issue 21


From the man page:

Multicast network configuration 

Cman can be configured to use multicast instead of broadcast (broadcast
is used by default if no multicast parameters are given.) To configure
multicast add one line under the <cman> section and another under the
<clusternode> section:

<cman> <multicast addr="224.0.0.1"/> </cman>

<clusternode name="nd1"> <multicast addr="224.0.0.1" interface="eth0"/>
</clusternode>

The multicast addresses must match and the address must be usable on the
interface name given for the node.
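
Applied to this cluster, the fragments might look roughly like this (a sketch
only: the node name is taken from the cman_tool output further down, and eth0
is assumed to be whichever interface carries the private 192.168.0.x address
on that node):

<cman>
    <multicast addr="239.192.104.2"/>
</cman>

<clusternode name="usrylxap235-1.merck.com">
    <multicast addr="239.192.104.2" interface="eth0"/>
</clusternode>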

When running netstat -gn, you should see something like this:

IPv6/IPv4 Group Memberships
Interface       RefCnt Group
--------------- ------ ---------------------
lo              1      224.0.0.1
eth0            1      224.0.0.1
br0             1      224.0.0.1
tap0            1      224.0.0.1

On the third node, check for a matching pair between eth0 and
239.192.104.2 in the output of netstat -gn. If you don't see it, or if the
group is bound to another interface such as eth5, change the configuration
according to the man page reference above.
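
For example, a quick check might be (assuming eth0 is the private interface
on that node):

netstat -gn | grep 239.192

which on the third node should show the group against the private interface,
e.g. "eth0            1      239.192.104.2".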

Regards,
Dan.

Rajkumar, Anoop wrote: 

        You are right. The third node did not have the multicast group when I
        did netstat -gn. I added the config below to cluster.conf and
        restarted the cluster, and now I have the multicast group on all
        nodes. The problem is that packets are being received on the public
        network of the third node, unlike the other two nodes, where they are
        received on the private network IP. Any input on how to change that?
        
        Added to cluster.conf
        
        <multicast addr="239.192.104.2"/>
                        <cman/>
        
        From the third node:
        
        [r...@usrylxap235 ~]# ip addr list|grep eth0
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
            inet 192.168.0.7/24 brd 192.168.0.255 scope global eth0
        [r...@usrylxap235 ~]# ip addr list|grep eth5
        7: eth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
            inet 54.3.254.235/24 brd 54.3.254.255 scope global eth5
        
        [r...@usrylxap235 ~]# tcpdump -i eth0 icmp
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
        
        0 packets captured
        1 packets received by filter
        0 packets dropped by kernel
        
        [r...@usrylxap235 ~]# tcpdump -i eth5 icmp
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on eth5, link-type EN10MB (Ethernet), capture size 96 bytes
        17:48:19.292089 IP usrylxap237.merck.com > 239.192.104.2: ICMP echo request, id 26924, seq 23, length 64
        22 packets captured
        22 packets received by filter
        0 packets dropped by kernel
        
        Appreciate your help.
        
        Anoop
        
        -----Original Message-----
        From: linux-cluster-boun...@redhat.com
        [mailto:linux-cluster-boun...@redhat.com] On Behalf Of
        linux-cluster-requ...@redhat.com
        Sent: Tuesday, July 20, 2010 12:00 PM
        To: linux-cluster@redhat.com
        Subject: Linux-cluster Digest, Vol 75, Issue 21
        
        
        Message: 1
        Date: Tue, 20 Jul 2010 11:57:13 +0300
        From: Dan Frincu <dfri...@streamwide.ro>
        To: linux clustering <linux-cluster@redhat.com>
        Subject: Re: [Linux-cluster] Linux-cluster Digest, Vol 75, Issue 19
        Message-ID: <4c4564e9.1010...@streamwide.ro>
        Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
        
        
        Rajkumar, Anoop wrote:
          

                 
                Hi
                
                The cluster is using the private subnet address as the node
                name, and cman shows the following information on the third
                node.
                
                [r...@usrylxap235 ~]# cman_tool status
                Version: 6.1.0
                Config Version: 44
                Cluster Name: cluster1
                Cluster Id: 26777
                Cluster Member: Yes
                Cluster Generation: 88
                Membership state: Cluster-Member
                Nodes: 1
                Expected votes: 3
                Total votes: 1
                Quorum: 2 Activity blocked
                Active subsystems: 5
                Flags:
                Ports Bound: 0
                Node name: usrylxap235-1.merck.com
                Node ID: 3
                Multicast addresses: 239.192.104.2
                Node addresses: 192.168.0.7
                
                Two active nodes have following info.
                
                [r...@usrylxap237 net]# cman_tool status
                Version: 6.1.0
                Config Version: 44
                Cluster Name: cluster1
                Cluster Id: 26777
                Cluster Member: Yes
                Cluster Generation: 296
                Membership state: Cluster-Member
                Nodes: 2
                Expected votes: 3
                Total votes: 2
                Quorum: 2
                Active subsystems: 8
                Flags: Dirty
                Ports Bound: 0 177
                Node name: usrylxap237-1.merck.com
                Node ID: 1
                Multicast addresses: 239.192.104.2
                Node addresses: 192.168.0.2
                
                All the nodes have the private network connected to a Foundry
                switch. How do I make sure that multicast traffic is going
                through the private network, which is eth5 for the third node
                and eth0 for the first and second nodes?
                  
                    

        netstat -gn shows multicast groups and the interfaces they're linked
        to. Ping the multicast address from one node and run "tcpdump -ni
        ethX icmp" on the other node to see if multicast traffic is reaching
        the node. Then repeat the process in the other direction.
        
        ICMP traffic must be allowed between the nodes first. If you don't
        want to test with ICMP and only want to test TCP/UDP, use netcat.
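        
        As a rough sketch of such a test (assuming 239.192.104.2 is the
        cluster multicast address, eth0 carries the private 192.168.0.x
        network on the receiving node, and 192.168.0.7 is that node's
        private IP; adjust interface names per node):
        
        # on node A: send ICMP to the multicast group out the private interface
        ping -I eth0 -c 5 239.192.104.2
        
        # on node B: watch the private interface
        tcpdump -ni eth0 icmp
        
        # TCP/UDP alternative with netcat, e.g. UDP port 5405 (openais/cman):
        #   node B: nc -u -l 5405          (some netcat builds want: nc -u -l -p 5405)
        #   node A: echo test | nc -u -w1 192.168.0.7 5405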
        
        Regards,
        Dan.
          

                Thanks
                Anoop 
                -----Original Message-----
                From: linux-cluster-boun...@redhat.com
                [mailto:linux-cluster-boun...@redhat.com] On Behalf Of
                linux-cluster-requ...@redhat.com
                Sent: Sunday, July 18, 2010 12:00 PM
                To: linux-cluster@redhat.com
                Subject: Linux-cluster Digest, Vol 75, Issue 19
                
                
                Message: 1
                Date: Sat, 17 Jul 2010 22:10:52 -0500
                From: "Marti, Robert" <rjm...@shsu.edu>
                To: linux clustering <linux-cluster@redhat.com>
                Subject: Re: [Linux-cluster] Adding third node to existing cluster
                Message-ID: <8fac1e47484e43469aa28dbf35c955e4bde0b47...@exmbx.shsu.edu>
                Content-Type: text/plain; charset="us-ascii"
                
                
                  
                    

                        On Fri, Jul 16, 2010 at 2:06 AM, Rajkumar, Anoop
                        <anoop_rajku...@merck.com> wrote:
                        Hi
                        
                        Here are the firewall settings
                        
                        #more /etc/sysconfig/iptables
                        # Generated by iptables-save v1.3.5 on Wed Jul 14 10:13:12 2010
                        *filter
                        :INPUT ACCEPT [2:186]
                        :FORWARD ACCEPT [0:0]
                        :OUTPUT ACCEPT [4:486]
                        -A INPUT -i eth5 -p udp -m udp --dport 5405 -j ACCEPT
                        -A INPUT -i eth5 -p udp -m udp --sport 5405 -j ACCEPT
                        -A INPUT -i eth0 -p tcp -m tcp --dport 14567 -j ACCEPT
                        -A INPUT -i eth0 -p tcp -m tcp --sport 14567 -j ACCEPT
                        -A INPUT -i eth0 -p tcp -m tcp --dport 16851 -j ACCEPT
                        -A INPUT -i eth0 -p tcp -m tcp --sport 16851 -j ACCEPT
                        -A INPUT -i eth5 -p udp -m udp --dport 50007 -j ACCEPT
                        -A INPUT -i eth5 -p udp -m udp --sport 50007 -j ACCEPT
                        -A INPUT -i eth5 -p tcp -m tcp --dport 11111 -j ACCEPT
                        -A INPUT -i eth5 -p tcp -m tcp --sport 11111 -j ACCEPT
                        -A INPUT -i eth5 -p tcp -m tcp --dport 21064 -j ACCEPT
                        -A INPUT -i eth5 -p tcp -m tcp --sport 21064 -j ACCEPT
                        -A INPUT -i eth5 -p tcp -m tcp --dport 50009 -j ACCEPT
                        -A INPUT -i eth5 -p tcp -m tcp --sport 50009 -j ACCEPT
                        -A INPUT -i eth5 -p tcp -m tcp --dport 50008 -j ACCEPT
                        -A INPUT -i eth5 -p tcp -m tcp --sport 50008 -j ACCEPT
                        -A INPUT -i eth5 -p tcp -m tcp --dport 50006 -j ACCEPT
                        -A INPUT -i eth5 -p tcp -m tcp --sport 50006 -j ACCEPT
                        -A INPUT -i eth5 -p tcp -m tcp --dport 41969 -j ACCEPT
                        -A INPUT -i eth5 -p tcp -m tcp --sport 41969 -j ACCEPT
                        -A INPUT -i eth5 -p tcp -m tcp --dport 41968 -j ACCEPT
                        -A INPUT -i eth5 -p tcp -m tcp --sport 41968 -j ACCEPT
                        -A INPUT -i eth5 -p tcp -m tcp --dport 41967 -j ACCEPT
                        -A INPUT -i eth5 -p tcp -m tcp --sport 41967 -j ACCEPT
                        -A INPUT -i eth5 -p tcp -m tcp --dport 41966 -j ACCEPT
                        -A INPUT -i eth5 -p tcp -m tcp --sport 41966 -j ACCEPT
                        -A OUTPUT -o eth5 -p udp -m udp --sport 5405 -j ACCEPT
                        -A OUTPUT -o eth5 -p udp -m udp --dport 5405 -j ACCEPT
                        -A OUTPUT -o eth0 -p tcp -m tcp --sport 14567 -j ACCEPT
                        -A OUTPUT -o eth0 -p tcp -m tcp --dport 14567 -j ACCEPT
                        -A OUTPUT -o eth0 -p tcp -m tcp --sport 16851 -j ACCEPT
                        -A OUTPUT -o eth0 -p tcp -m tcp --dport 16851 -j ACCEPT
                        -A OUTPUT -o eth5 -p udp -m udp --sport 50007 -j ACCEPT
                        -A OUTPUT -o eth5 -p udp -m udp --dport 50007 -j ACCEPT
                        -A OUTPUT -o eth5 -p tcp -m tcp --sport 11111 -j ACCEPT
                        -A OUTPUT -o eth5 -p tcp -m tcp --dport 11111 -j ACCEPT
                        -A OUTPUT -o eth5 -p tcp -m tcp --sport 21064 -j ACCEPT
                        -A OUTPUT -o eth5 -p tcp -m tcp --dport 21064 -j ACCEPT
                        -A OUTPUT -o eth5 -p tcp -m tcp --sport 50009 -j ACCEPT
                        -A OUTPUT -o eth5 -p tcp -m tcp --dport 50009 -j ACCEPT
                        -A OUTPUT -o eth5 -p tcp -m tcp --sport 50008 -j ACCEPT
                        -A OUTPUT -o eth5 -p tcp -m tcp --dport 50008 -j ACCEPT
                        -A OUTPUT -o eth5 -p tcp -m tcp --sport 50006 -j ACCEPT
                        -A OUTPUT -o eth5 -p tcp -m tcp --dport 50006 -j ACCEPT
                        -A OUTPUT -o eth5 -p tcp -m tcp --sport 41969 -j ACCEPT
                        -A OUTPUT -o eth5 -p tcp -m tcp --dport 41969 -j ACCEPT
                        -A OUTPUT -o eth5 -p tcp -m tcp --sport 41968 -j ACCEPT
                        -A OUTPUT -o eth5 -p tcp -m tcp --dport 41968 -j ACCEPT
                        -A OUTPUT -o eth5 -p tcp -m tcp --sport 41967 -j ACCEPT
                        -A OUTPUT -o eth5 -p tcp -m tcp --dport 41967 -j ACCEPT
                        -A OUTPUT -o eth5 -p tcp -m tcp --sport 41966 -j ACCEPT
                        -A OUTPUT -o eth5 -p tcp -m tcp --dport 41966 -j ACCEPT
                        COMMIT
                        
                        Here are the IPs on the system. I am using eth5
                        (which is on the private network with the other two
                        nodes, connected to the switch).
                        
                        [r...@usrylxap235 ~]# ip addr list
                        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
                            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
                            inet 127.0.0.1/8 scope host lo
                            inet6 ::1/128 scope host
                               valid_lft forever preferred_lft forever
                        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
                            link/ether 00:1c:c4:f0:bd:d8 brd ff:ff:ff:ff:ff:ff
                            inet 54.3.254.235/24 brd 54.3.254.255 scope global eth0
                            inet6 fe80::21c:c4ff:fef0:bdd8/64 scope link
                               valid_lft forever preferred_lft forever
                        3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
                            link/ether 00:1c:c4:f0:bd:da brd ff:ff:ff:ff:ff:ff
                        4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
                            link/ether 00:1c:c4:5e:f8:d8 brd ff:ff:ff:ff:ff:ff
                        5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
                            link/ether 00:1c:c4:5e:f8:da brd ff:ff:ff:ff:ff:ff
                        6: eth4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
                            link/ether 00:1e:0b:71:ac:6c brd ff:ff:ff:ff:ff:ff
                        7: eth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
                            link/ether 00:1e:0b:71:ac:6e brd ff:ff:ff:ff:ff:ff
                            inet 192.168.0.7/24 brd 192.168.0.255 scope global eth5
                            inet6 fe80::21e:bff:fe71:ac6e/64 scope link
                               valid_lft forever preferred_lft forever
                        8: sit0: <NOARP> mtu 1480 qdisc noop
                            link/sit 0.0.0.0 brd 0.0.0.0
                        
                        Thanks
                        Anoop
                            
                              

                Hi,
                
                Let's see if we can rule out security or network issues
                here. Are you able to allow all traffic for eth5 on all
                nodes? In addition, you may want to add a static multicast
                route to ensure the multicast traffic is going through eth5
                on all nodes.
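                
                A minimal sketch of both suggestions (assuming eth5 is the
                private interface on every node and 239.192.104.2 is the
                cluster multicast address):
                
                # open up the private interface completely, for testing
                iptables -I INPUT -i eth5 -j ACCEPT
                iptables -I OUTPUT -o eth5 -j ACCEPT
                
                # static route so multicast leaves via eth5
                ip route add 224.0.0.0/4 dev eth5
                # (older tools: route add -net 224.0.0.0 netmask 240.0.0.0 dev eth5)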
                
                Regards,
                Bernard
                
                
                    

        
-----------------------------------------------------------------------
          

                He's essentially running a wide open firewall with those
                rules.  Default ACCEPT on all chains, and no DROPs or
                REJECTs.  It's not a firewall issue.  It may be multicast
                not going out the right interface, however.
                
                ------------------------------
                
                Message: 2
                Date: Sun, 18 Jul 2010 11:28:53 +0200
                From: Volker Dormeyer <vol...@ixolution.de>
                To: linux clustering <linux-cluster@redhat.com>
                Subject: Re: [Linux-cluster] How do i stop VM's on a failed node?
                Message-ID: <20100718092853.ga4...@dijkstra>
                Content-Type: text/plain; charset=us-ascii
                
                Hi,
                
                On Fri, Jul 16, 2010 at 01:21:57PM -0400,
                Nathan Lager <lag...@lafayette.edu> wrote:
                  
                    

                        I have a 4-node cluster, running KVM, and a host of VM's.
                        
                        One of the nodes failed unexpectedly.  I'm having two
                        issues.  First, the VM's which were on that node are
                        still reported as up and running on that node in
                        clustat, and second, I'm unable to interact with the
                        cluster using clusvcadm.  Every command hangs.  I've
                        tried disabling these VM's, and the commands hang.
                        
                        How can I clear out this dead node, without having to
                        forcibly restart my entire cluster?
                            
                              

                To me, it sounds like fencing is not configured properly.
                
                What has been logged on the remaining nodes? I would assume
                they tried to fence the failed node, but were not able to.
                
                Log snippets and the config would be helpful.
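                
                For instance, something along these lines on each remaining
                node (assuming the standard RHEL 5 cluster tools):
                
                grep -i fence /var/log/messages     # fencing attempts and failures
                cman_tool status; cman_tool nodes   # each node's view of membership
                clustat                             # reported service/VM states
                cat /etc/cluster/cluster.conf       # fence device configuration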
                
                Regards,
                Volker
                
                
                

-- 
Dan FRINCU
Systems Engineer
CCNA, RHCE
Streamwide Romania
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
