Hi Everton,

I followed your directions exactly and ran the rebuilt quagga on the
nodes. However, I don't see any difference in behavior. Is there
anything in particular you're looking for after these changes?
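
For reference, this is roughly the procedure I followed (a sketch: the sed edit is demonstrated on a scratch copy of the pimd/Makefile.am line so the commands can be tried safely, and the build steps are the ones from your mail):

```shell
# Comment out the sanity-check define (shown here on a scratch copy
# of the relevant pimd/Makefile.am line):
mkdir -p /tmp/pimd-demo
printf 'PIM_DEFS += -DPIM_CHECK_RECV_IFINDEX_SANITY\n' > /tmp/pimd-demo/Makefile.am
sed -i 's/^PIM_DEFS += -DPIM_CHECK_RECV_IFINDEX_SANITY/# &/' /tmp/pimd-demo/Makefile.am
cat /tmp/pimd-demo/Makefile.am    # the define is now commented out
# In the real tree, the same edit is followed by:
#   autoreconf -i --force
#   ./configure && make
```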
Below is the output from pimd running on both nodes:


Trying 192.168.1.1...

Connected to 192.168.1.1.

Escape character is '^]'.

Hello, this is Quagga 0.99.15 pimd 0.158

Copyright 1996-2005 Kunihiro Ishiguro, et al.

 User Access Verification

Password:

node1> enable

Password:

node1# show ip pim neighbor

Recv flags: H=holdtime L=lan_prune_delay P=dr_priority G=generation_id
            A=address_list T=can_disable_join_suppression

Interface Address         Neighbor        Uptime Timer Holdt DrPri GenId Recv

node1# show ip pim hello

Interface Address         Period Timer StatStart Recv Rfail Send Sfail
ra_ap0    192.168.4.20     00:30 00:16  00:10:19   20    20   21     0
ra_sta0   192.168.3.20     00:30 00:14  00:10:19   20    20   21     0

node1# q

Connection closed by foreign host.

Trying 192.168.3.10...

Connected to 192.168.3.10.

Escape character is '^]'.

Hello, this is Quagga 0.99.15 pimd 0.158

Copyright 1996-2005 Kunihiro Ishiguro, et al.

 User Access Verification

Password:

node2> enable

Password:

node2# show ip pim neighbor

Recv flags: H=holdtime L=lan_prune_delay P=dr_priority G=generation_id
            A=address_list T=can_disable_join_suppression

Interface Address         Neighbor        Uptime Timer Holdt DrPri GenId Recv

node2# show ip pim hello

Interface Address         Period Timer StatStart Recv Rfail Send Sfail
ra_ap0    192.168.5.10     00:30 00:08  00:11:26   23    23   23     0
ra_sta0   192.168.3.10     00:30 00:05  00:11:26   23    23   23     0

node2# q

Connection closed by foreign host.
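
As a quick way to eyeball those counters, I filtered the "show ip pim hello" rows for interfaces where Rfail keeps pace with Recv (column order taken from the table header; that Rfail counts hellos whose receive processing failed is my assumption):

```shell
# Columns per the header:
# Interface Address Period Timer StatStart Recv Rfail Send Sfail
awk '$6 != 0 && $6 == $7 { print $1 ": Rfail matches Recv (" $6 ")" }' <<'EOF'
ra_ap0    192.168.4.20 00:30 00:16 00:10:19 20 20 21 0
ra_sta0   192.168.3.20 00:30 00:14 00:10:19 20 20 21 0
EOF
```

Both interfaces get flagged, which would line up with the neighbor table staying empty.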



Thanks,

Yoda



On Fri, Nov 13, 2009 at 4:16 AM, Everton Marques
<[email protected]> wrote:

> Hi Yoda,
>
> Based on the Rfail counter you spotted, I suspect the code under
> PIM_CHECK_RECV_IFINDEX_SANITY may be discarding hello packets.
>
> Can you experiment with commenting out the following line:
>
> PIM_DEFS += -DPIM_CHECK_RECV_IFINDEX_SANITY
>
> from pimd/Makefile.am ?
>
> Then you will need to bootstrap autotools with:
>
> autoreconf -i --force
>
> And finally to rebuild quagga.
>
> I know this test may be cumbersome since it requires the whole autotools
> suite to be present on your system, but it could help identify why pimd
> is missing the hello packets.
>
> Thanks,
> Everton
>
>
> On Fri, Nov 13, 2009 at 7:30 AM, Yoda geek <[email protected]>
> wrote:
> > Hi Everton,
> >
> > Below are the answers :
> >
> > 1) "ip pim ssm" is enabled on node1 ra_sta0 as well as node2 ra_sta0.
> >
> > 2) I do see in wireshark trace that ra_sta0 on both nodes 1 and 2 are
> > receiving PIMv2 "Hello" packets however they are addressed to 224.0.0.13.
> >
> > 3) Don't see any error logs on nodes 1 and 2. Below is the output of
> > "show ip pim hello" on both nodes 1 and 2. Please notice the "Rfail"
> > counters.
> >
> > node1# show ip pim hello
> > Interface Address         Period Timer StatStart Recv Rfail Send Sfail
> > ra_ap0    192.168.4.20     00:30 00:05  29:57:50    0  3496 3595     0
> > ra_sta0   192.168.3.20     00:30 00:04  29:57:50 3496  3496 3595     0
> > node1#
> >
> > node2# show ip pim hello
> > Interface Address         Period Timer StatStart Recv Rfail Send Sfail
> > ra_ap0    192.168.5.10     00:30 00:04  29:56:48    0  3590 3593     0
> > ra_sta0   192.168.3.10     00:30 00:07  29:56:48 3590  3590 3593     0
> > node2#
> >
> >
> > Thanks,
> >
> > On Wed, Nov 11, 2009 at 6:04 AM, Everton Marques
> > <[email protected]> wrote:
> >>
> >> Hi,
> >>
> >> I think the problem is node2 fails to bring up the node1 as pim neighbor
> >> on ra_sta0, since node1 is missing from node2 "show ip pim neighbor".
> >>
> >> Can you please double check the following?
> >>
> >> 1) "ip pim ssm" is enabled on node1 ra_sta0 ?
> >> 2) node2 is receiving pim hello packets from node1 on ra_sta0 ?
> >> 3) node2 pimd is logging any error/warning ? look for messages about
> >> packets from node1, specially hello packets.
> >>
> >> Thanks,
> >> Everton
> >>
> >> On Wed, Nov 11, 2009 at 4:48 AM, Yoda geek <[email protected]>
> >> wrote:
> >> > Below is the output as requested
> >> >
> >> >
> >> > User Access Verification
> >> >
> >> > Password:
> >> >
> >> > node2> enable
> >> >
> >> > Password:
> >> >
> >> > node2# show ip igmp interface
> >> >
> >> > Interface Address      ifIndex Socket Uptime   Multi Broad MLoop AllMu Prmsc Del
> >> > ra_ap0    192.168.5.10       5      9 00:34:40   yes   yes   yes    no    no  no
> >> >
> >> > node2# show ip igmp group
> >> >
> >> > Interface Address      Group           Mode  Timer    Srcs V Uptime
> >> > ra_ap0    192.168.5.10 224.0.0.13      EXCL  00:03:55    0 3 00:34:48
> >> > ra_ap0    192.168.5.10 224.0.0.22      EXCL  00:03:55    0 3 00:34:48
> >> > ra_ap0    192.168.5.10 239.255.255.250 EXCL  00:03:59    0 3 00:02:17
> >> >
> >> > node2# show ip igmp sources
> >> >
> >> > Interface Address Group Source Timer Fwd Uptime
> >> >
> >> > node2# show ip pim designated-router
> >> >
> >> > NonPri: Number of neighbors missing DR Priority hello option
> >> >
> >> > Interface Address      DR           Uptime   Elections NonPri
> >> > ra_ap0    192.168.5.10 192.168.5.10 00:35:16         1      0
> >> > ra_sta0   192.168.3.10 192.168.3.10 00:35:16         1      0
> >> >
> >> > node2# show ip pim hello
> >> >
> >> > Interface Address      Period Timer StatStart Recv Rfail Send Sfail
> >> > ra_ap0    192.168.5.10  00:30 00:08  00:35:23    0    70   71     0
> >> > ra_sta0   192.168.3.10  00:30 00:10  00:35:23   70    70   71     0
> >> >
> >> > node2# show ip pim interface
> >> >
> >> > Interface Address      ifIndex Socket Uptime   Multi Broad MLoop AllMu Prmsc Del
> >> > ra_ap0    192.168.5.10       5     10 00:35:30   yes   yes    no    no    no  no
> >> > ra_sta0   192.168.3.10       6     11 00:35:30   yes   yes    no    no    no  no
> >> >
> >> > node2# show ip pim local-membership
> >> >
> >> > Interface Address Source Group Membership
> >> >
> >> > node2# show ip pim join
> >> >
> >> > Interface Address Source Group State Uptime Expire Prune
> >> >
> >> > node2# show ip pim neighbor
> >> >
> >> > Recv flags: H=holdtime L=lan_prune_delay P=dr_priority G=generation_id
> >> >             A=address_list T=can_disable_join_suppression
> >> >
> >> > Interface Address         Neighbor        Uptime Timer Holdt DrPri GenId Recv
> >> >
> >> > node2# show ip pim rpf
> >> >
> >> > RPF Cache Refresh Delay: 10000 msecs
> >> >
> >> > RPF Cache Refresh Timer: 0 msecs
> >> >
> >> > RPF Cache Refresh Requests: 6
> >> >
> >> > RPF Cache Refresh Events: 3
> >> >
> >> > RPF Cache Refresh Last: 00:34:24
> >> >
> >> > Source Group RpfIface RpfAddress RibNextHop Metric Pref
> >> >
> >> > node2# show ip pim upstream
> >> >
> >> > Source Group State Uptime JoinTimer RefCnt
> >> >
> >> > node2# show ip pim upstream-join-desired
> >> >
> >> > Interface Source Group LostAssert Joins PimInclude JoinDesired EvalJD
> >> >
> >> > node2# show ip pim upstream-rpf
> >> >
> >> > Source Group RpfIface RibNextHop RpfAddress
> >> >
> >> > node2# show ip route 192.168.4.60
> >> >
> >> > Address      NextHop      Interface Metric Preference
> >> > 192.168.4.60 192.168.3.20 ra_sta0        1          0
> >> >
> >> > node2# q
> >> >
> >> > On Tue, Nov 3, 2009 at 7:51 AM, Everton Marques
> >> > <[email protected]>
> >> > wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> Can you send the following commands from node2 ?
> >> >>
> >> >> show ip igmp interface
> >> >> show ip igmp group
> >> >> show ip igmp sources
> >> >> show ip pim designated-router
> >> >> show ip pim hello
> >> >> show ip pim interface
> >> >> show ip pim local-membership
> >> >> show ip pim join
> >> >> show ip pim neighbor
> >> >> show ip pim rpf
> >> >> show ip pim upstream
> >> >> show ip pim upstream-join-desired
> >> >> show ip pim upstream-rpf
> >> >> show ip route 192.168.4.60
> >> >>
> >> >> Thanks,
> >> >> Everton
> >> >>
> >> >> On Mon, Nov 2, 2009 at 5:44 AM, Yoda geek <[email protected]>
> >> >> wrote:
> >> >> > Hi Everton,
> >> >> >
> >> >> > I added the entry "ip pim ssm" on ra_ap0 as you suggested. I still
> >> >> > don't see the join request coming into the source. Below is what
> >> >> > the configuration looks like on the individual nodes:
> >> >> >
> >> >> > Node 1 pimd.conf
> >> >> > -------------------------
> >> >> > !
> >> >> > ! Zebra configuration saved from vty
> >> >> > ! 2009/08/08 05:03:23
> >> >> > !
> >> >> > hostname node1
> >> >> > password zebra
> >> >> > enable password zebra
> >> >> > log stdout
> >> >> > !
> >> >> > interface eth0
> >> >> > !
> >> >> > interface eth1
> >> >> > !
> >> >> > interface lo
> >> >> > !
> >> >> > interface ra_ap0
> >> >> > ip pim ssm
> >> >> > ip igmp query-interval 125
> >> >> > ip igmp query-max-response-time-dsec 100
> >> >> > !
> >> >> > interface ra_sta0
> >> >> > ip pim ssm
> >> >> > ip igmp query-interval 125
> >> >> > ip igmp query-max-response-time-dsec 100
> >> >> > !
> >> >> > !
> >> >> > ip multicast-routing
> >> >> > !
> >> >> > line vty
> >> >> > !
> >> >> >
> >> >> >
> >> >> > Node 2 pimd.conf
> >> >> > -------------------------
> >> >> > !
> >> >> > ! Zebra configuration saved from vty
> >> >> > ! 2009/08/09 22:38:12
> >> >> > !
> >> >> > hostname node2
> >> >> > password zebra
> >> >> > enable password zebra
> >> >> > log stdout
> >> >> > !
> >> >> > interface br-lan
> >> >> > !
> >> >> > interface eth0
> >> >> > !
> >> >> > interface eth1
> >> >> > !
> >> >> > interface lo
> >> >> > !
> >> >> > interface ra_ap0
> >> >> > ip pim ssm
> >> >> > ip igmp
> >> >> > ip igmp query-interval 125
> >> >> > ip igmp query-max-response-time-dsec 100
> >> >> > ip igmp join 239.255.255.250 192.168.4.60
> >> >> > !
> >> >> > interface ra_sta0
> >> >> > ip pim ssm
> >> >> > ip igmp query-interval 125
> >> >> > ip igmp query-max-response-time-dsec 100
> >> >> > !
> >> >> > !
> >> >> > ip multicast-routing
> >> >> > !
> >> >> > line vty
> >> >> > !
> >> >> > On Sun, Nov 1, 2009 at 12:44 PM, Everton Marques
> >> >> > <[email protected]>
> >> >> > wrote:
> >> >> >>
> >> >> >> Hi,
> >> >> >>
> >> >> >> Yes, pimd should route the join request towards the source.
> >> >> >>
> >> >> >> However, you need to enable "ip pim ssm" on ra_ap0 as well.
> >> >> >> If you enable only "ip igmp" on an interface, pimd won't inject
> >> >> >> IGMP-learnt membership into the pim protocol.
> >> >> >>
> >> >> >> Cheers,
> >> >> >> Everton
> >> >> >>
> >> >> >> On Sun, Nov 1, 2009 at 7:02 AM, Yoda geek
> >> >> >> <[email protected]> wrote:
> >> >> >> > Hi Everton,
> >> >> >> >
> >> >> >> > Thanks for the suggestions. I made the changes to the config
> >> >> >> > files on both nodes as you suggested. Since it is not possible
> >> >> >> > for me to force the client to do a source-specific join, I added
> >> >> >> > the following line at interface ra_ap0 on node 2, where the
> >> >> >> > client is attached:
> >> >> >> >
> >> >> >> > interface ra_ap0
> >> >> >> > ip igmp
> >> >> >> > ip igmp query-interval 125
> >> >> >> > ip igmp query-max-response-time-dsec 100
> >> >> >> > ip igmp join 239.255.255.250 192.168.4.60
> >> >> >> >
> >> >> >> > I do see the source-specific IGMPv3 join for group
> >> >> >> > 239.255.255.250, source 192.168.4.60, addressed to 224.0.0.22 on
> >> >> >> > the node2 side. However, this join request never makes it to
> >> >> >> > node 1, where the source is located on ra_ap0.
> >> >> >> > Shouldn't pimd route this join request to the node where the
> >> >> >> > source is attached?
> >> >> >> >
> >> >> >> > Thanks,
> >> >> >> >
> >> >> >> >
> >> >> >> >
> >> >> >> >
> >> >> >> > On Mon, Oct 26, 2009 at 6:44 AM, Everton Marques
> >> >> >> > <[email protected]>
> >> >> >> > wrote:
> >> >> >> >>
> >> >> >> >> Hi,
> >> >> >> >>
> >> >> >> >> You did not mention whether you got a source-specific IGMPv3
> >> >> >> >> join to the channel (S,G)=(192.168.4.60,239.255.255.250).
> >> >> >> >> Please notice qpimd is unable to program the multicast
> >> >> >> >> forwarding cache with non-source-specific groups. Usually the
> >> >> >> >> key issue is to instruct the receiver application to join the
> >> >> >> >> source-specific channel (S,G).
> >> >> >> >>
> >> >> >> >> Regarding the config, the basic rule is:
> >> >> >> >> 1) Enable "ip pim ssm" everywhere (on every interface that
> >> >> >> >>    should pass mcast).
> >> >> >> >> 2) Enable both "ip pim ssm" and "ip igmp" on interfaces
> >> >> >> >>    attached to the receivers (IGMPv3 hosts).
> >> >> >> >>
> >> >> >> >> An even simpler config rule to remember is to enable both
> >> >> >> >> commands everywhere. They should not cause any harm.
> >> >> >> >>
> >> >> >> >> Hence, if your mcast receiver is attached to Node 2 at ra_ap0,
> >> >> >> >> I think you will need at least the following config:
> >> >> >> >>
> >> >> >> >> !
> >> >> >> >> ! Node 1
> >> >> >> >> !
> >> >> >> >> interface ra_ap0
> >> >> >> >>  ip pim ssm
> >> >> >> >> interface ra_sta0
> >> >> >> >>  ip pim ssm
> >> >> >> >>
> >> >> >> >> !
> >> >> >> >> ! Node 2
> >> >> >> >> !
> >> >> >> >> interface ra_ap0
> >> >> >> >>  ip pim ssm
> >> >> >> >>  ip igmp
> >> >> >> >> interface ra_sta0
> >> >> >> >>  ip pim ssm
> >> >> >> >>
> >> >> >> >> Hope this helps,
> >> >> >> >> Everton
> >> >> >> >>
> >> >> >> >> On Mon, Oct 26, 2009 at 4:42 AM, Yoda geek
> >> >> >> >> <[email protected]>
> >> >> >> >> wrote:
> >> >> >> >> > Hi Everton & Fellow  qpimd users,
> >> >> >> >> >
> >> >> >> >> > We're trying to stream multicast video traffic between a
> >> >> >> >> > Tversity server and a multicast client separated by 2 nodes
> >> >> >> >> > (node1 and node2). Each node is running the quagga suite
> >> >> >> >> > (version 0.99.15) along with qpimd (version 0.158) on top of
> >> >> >> >> > Linux 2.6.26.
> >> >> >> >> > Node 1 has 3 network interfaces - eth0, ra_ap0 and ra_sta0.
> >> >> >> >> > Node 2 has 2 network interfaces - ra_sta0 and ra_ap0.
> >> >> >> >> > The Tversity server talks to interface ra_ap0 on Node 1 and
> >> >> >> >> > the multicast client talks to interface ra_ap0 on Node 2.
> >> >> >> >> > Nodes 1 and 2 talk with each other over their ra_sta0
> >> >> >> >> > interfaces.
> >> >> >> >> >
> >> >> >> >> > Below is a graphical depiction:
> >> >> >> >> >
> >> >> >> >> > Tversity server --> [ra_ap0] Node 1 [ra_sta0] <--> [ra_sta0] Node 2 [ra_ap0] --> Video Client
> >> >> >> >> >
> >> >> >> >> >
> >> >> >> >> > Node 1 pimd.conf file
> >> >> >> >> > ==================
> >> >> >> >> > !
> >> >> >> >> > ! Zebra configuration saved from vty
> >> >> >> >> > ! 2009/08/01 20:26:06
> >> >> >> >> > !
> >> >> >> >> > hostname node1
> >> >> >> >> > password zebra
> >> >> >> >> > enable password zebra
> >> >> >> >> > log stdout
> >> >> >> >> > !
> >> >> >> >> > interface eth0
> >> >> >> >> > !
> >> >> >> >> > interface eth1
> >> >> >> >> > !
> >> >> >> >> > interface lo
> >> >> >> >> > !
> >> >> >> >> > interface ra_ap0
> >> >> >> >> > ip pim ssm
> >> >> >> >> > ip igmp
> >> >> >> >> > ip igmp query-interval 125
> >> >> >> >> > ip igmp query-max-response-time-dsec 100
> >> >> >> >> > ip igmp join 239.255.255.250 192.168.4.60
> >> >> >> >> > !
> >> >> >> >> > interface ra_sta0
> >> >> >> >> > ip igmp
> >> >> >> >> > ip igmp query-interval 125
> >> >> >> >> > ip igmp query-max-response-time-dsec 100
> >> >> >> >> > !
> >> >> >> >> > !
> >> >> >> >> > ip multicast-routing
> >> >> >> >> > !
> >> >> >> >> > line vty
> >> >> >> >> > !
> >> >> >> >> >
> >> >> >> >> > Node 2 pimd.conf configuration file
> >> >> >> >> > ============================
> >> >> >> >> > !
> >> >> >> >> > ! Zebra configuration saved from vty
> >> >> >> >> > ! 2009/08/02 21:54:14
> >> >> >> >> > !
> >> >> >> >> > hostname node2
> >> >> >> >> > password zebra
> >> >> >> >> > enable password zebra
> >> >> >> >> > log stdout
> >> >> >> >> > !
> >> >> >> >> > interface eth0
> >> >> >> >> > !
> >> >> >> >> > interface eth1
> >> >> >> >> > !
> >> >> >> >> > interface lo
> >> >> >> >> > !
> >> >> >> >> > interface ra_ap0
> >> >> >> >> > ip igmp
> >> >> >> >> > ip igmp query-interval 125
> >> >> >> >> > ip igmp query-max-response-time-dsec 100
> >> >> >> >> > ip igmp join 239.255.255.250 192.168.4.60
> >> >> >> >> > !
> >> >> >> >> > interface ra_sta0
> >> >> >> >> > ip igmp
> >> >> >> >> > ip igmp query-interval 125
> >> >> >> >> > ip igmp query-max-response-time-dsec 100
> >> >> >> >> > !
> >> >> >> >> > !
> >> >> >> >> > ip multicast-routing
> >> >> >> >> > !
> >> >> >> >> > line vty
> >> >> >> >> > !
> >> >> >> >> >
> >> >> >> >> > From the above configuration you can see that interface
> >> >> >> >> > ra_ap0 on node 1 is configured to be the multicast source
> >> >> >> >> > ("ip pim ssm").
> >> >> >> >> > We do see some multicast join requests in wireshark from
> >> >> >> >> > both the server and the client, however no data flows.
> >> >> >> >> > Initially we started qpimd without the "igmp join ..." entry
> >> >> >> >> > on either the client-side node or the server-side node.
> >> >> >> >> > Looking at the node 1 configuration through "show ip igmp
> >> >> >> >> > groups" we didn't see the group membership for
> >> >> >> >> > "239.255.255.250", while this group membership was observed
> >> >> >> >> > on node 2. I put this group membership on both nodes to force
> >> >> >> >> > them to join this multicast group - however without success.
> >> >> >> >> >
> >> >> >> >> > Just to give you some background - when both client and
> >> >> >> >> > server are talking to the same node - say node 2, on the same
> >> >> >> >> > interface ra_ap0 (without qpimd running) - multicast video
> >> >> >> >> > gets served flawlessly from the Tversity server to the client
> >> >> >> >> > through the node. But with the 2-node setup we aren't able to
> >> >> >> >> > see the video streams go through to the client.
> >> >> >> >> >
> >> >> >> >> > Could you please review the above configuration for errors,
> >> >> >> >> > or do you have any suggestions to resolve this issue? Any
> >> >> >> >> > help would be greatly appreciated.
> >> >> >> >> >
> >> >> >> >> > Thanks,
> >> >> >> >> >
> >> >> >> >> >
> >> >> >> >
> >> >> >> >
> >> >> >
> >> >> >
> >> >
> >> >
> >
> >
>
