Yep, it should.  In IOS 12.4, it complains if PIM is not enabled on your RP
interface.  I took that config from IOS 15.1 (I was checking to see whether it
was different in IOS 15), which apparently doesn't require that.

I added 'ip pim sparse-mode' to the loopback, and am getting the same
results.
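For reference, here's what R1's loopback looks like with that change (same
addressing as in my original post, only the sparse-mode line added):

*** R1 ***
interface Loopback0
 ip address 1.1.1.1 255.255.255.0
 ip pim sparse-mode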

Good catch, though!
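In case anyone wants to reproduce this, the behavior I described shows up with
these commands on R1 (watch for the (S,G) prune arriving from 10.0.0.2 right
after the first echo reply, and the OIL going null on the (S,G) entry):

R1#show ip mroute 229.0.0.2
R1#debug ip pim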

Keller Giacomarro
[email protected]


On Mon, Nov 12, 2012 at 3:53 AM, Samir Idris <[email protected]> wrote:

> Shouldn't pim be enabled on R1's loopback?
>
>
> On Monday, November 12, 2012, Keller Giacomarro <[email protected]>
> wrote:
> > Okay, I must be totally missing the boat here, but I can't get Multicast
> > over NBMA to work AT ALL.
> >
> > R2-----\
> >         -------- R1
> > R3-----/
> >
> > All interfaces are physical interfaces with static IPv4 mappings.  R1 has
> > DLCIs to both spoke routers, and the spoke routers only have DLCIs to R1.
> > This is as simple as I know how to get it.
> >
> > *** R1 ***
> > interface Serial1/0
> >  ip address 10.0.0.1 255.255.255.0
> >  ip pim dr-priority 1000
> >  ip pim nbma-mode
> >  ip pim sparse-mode
> >  encapsulation frame-relay
> >  frame-relay map ip 10.0.0.3 103 broadcast
> >  frame-relay map ip 10.0.0.2 102 broadcast
> >  no frame-relay inverse-arp
> > !
> > interface Loopback0
> >  ip address 1.1.1.1 255.255.255.0
> > !
> > ip pim rp-address 1.1.1.1
> >
> > *** R2 ***
> > interface Serial1/0
> >  ip address 10.0.0.2 255.255.255.0
> >  ip pim sparse-mode
> >  encapsulation frame-relay
> >  frame-relay map ip 10.0.0.3 201
> >  frame-relay map ip 10.0.0.1 201 broadcast
> > !
> > interface Loopback0
> >  ip address 2.2.2.2 255.255.255.255
> >  ip pim sparse-mode
> >  ip igmp join-group 229.0.0.2
> > !
> > ip route 1.1.1.1 255.255.255.255 10.0.0.1
> > ip pim rp-address 1.1.1.1
> >
> > *** R3 ***
> > interface Serial1/0
> >  ip address 10.0.0.3 255.255.255.0
> >  ip pim sparse-mode
> >  encapsulation frame-relay
> >  frame-relay map ip 10.0.0.2 301
> >  frame-relay map ip 10.0.0.1 301 broadcast
> > !
> > ip route 1.1.1.1 255.255.255.255 10.0.0.1
> > ip pim rp-address 1.1.1.1
> >
> > *** Testing ***
> > Ping is from R3 to 229.0.0.2, which is joined on R2.  The first ping goes
> > through fine, all others drop until the mroute times out on R1.
> >
> > ---
> > R3(config)#do ping 229.0.0.2 re 10
> > Type escape sequence to abort.
> > Sending 10, 100-byte ICMP Echos to 229.0.0.2, timeout is 2 seconds:
> >
> > Reply to request 0 from 2.2.2.2, 48 ms.........
> > R3(config)#
> > ---
> >
> > Debugs indicate that R2 (subscriber router) is sending a PIM Prune to R1
> > (the hub/RP) as soon as the first packet is received.  R2 retains the
> > (S,G) mapping with an incoming interface of s1/0, but the prune message
> > causes R1 to remove S1/0 from the OIL.  Any packets after the first are
> > dropped on R1 due to the olist being null.
> >
> > I don't understand why the PIM Prune is being generated on R2 for R1 --
> > isn't that the router that's sending the stream?  Most of all, I don't
> > understand why something that seems so simple isn't working!
> >
> > In conclusion, I hate multicast!
> >
> > Appreciate any help you might be able to provide. =)
> >
> > Keller Giacomarro
> > [email protected]
> > _______________________________________________
> > For more information regarding industry leading CCIE Lab training,
> > please visit www.ipexpert.com
> >
> > Are you a CCNP or CCIE and looking for a job? Check out
> > www.PlatinumPlacement.com
> >
> > http://onlinestudylist.com/mailman/listinfo/ccie_rs
> >
>
> --
> Samir Idris
>