Hi nagp,

The RPF-ID should be assigned to an LSP’s path, and to the mroute, at the tail,
not at the head.
IP multicast forwarding requires RPF checks to prevent loops, so at the tail we
need to RPF - this is done against the interface on which the packet arrives.
But in this case the only interface is the physical one. Now technically we
could use that physical interface to RPF against, but the physical belongs to
the core ‘underlay’ and we are talking about sources, receivers and routes in
the VPN ‘overlay’, and to mix the two would be unfortunate. So instead, the
scheme is to use the LSP on which the packet arrives as the ‘interface’ against
which to RPF. But in this case the LSP has no associated SW interface. We then
have two choices: 1) create a SW interface, which would not scale too well, or
2) pretend we have one and call it an RPF-ID. So at the tail, as packets egress
the LSP, they are tagged as having ingressed with that RPF-ID, and the tag is
checked in the subsequent mfib forwarding (see the sketch below).

The RPF-ID value is assigned and used only at the tail, so no head-to-tail 
signalling thereof is required.
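
As an illustration of the scheme, here’s a toy model (my sketch, not VPP source
code - all the type and function names in it are made up for illustration) of
the tag-at-disposition / check-in-mfib flow:

--
/* Toy model of the RPF-ID scheme -- not VPP source. At the tail, the MPLS
 * disposition step tags the packet metadata with the RPF-ID assigned to the
 * LSP's path; the subsequent mfib forwarding then RPF-checks that tag against
 * the RPF-ID configured on the mroute, instead of checking an accepting SW
 * interface. */
#include <stdint.h>
#include <stdio.h>

typedef struct {         /* per-packet metadata, akin to vnet_buffer(b)->ip */
    uint32_t rpf_id;
} pkt_meta_t;

typedef struct {         /* the (S,G) mroute state at the tail */
    uint32_t rpf_id;     /* RPF-ID the route expects; 0 = unset */
} mroute_t;

/* tail: packet egresses the LSP; tag it with the path's RPF-ID */
static void
mpls_disposition (pkt_meta_t *meta, uint32_t lsp_rpf_id)
{
    meta->rpf_id = lsp_rpf_id;
}

/* subsequent mfib forwarding: RPF check against the tag, not an interface */
static int
mfib_rpf_check (const mroute_t *route, const pkt_meta_t *meta)
{
    return (route->rpf_id != 0 && route->rpf_id == meta->rpf_id);
}

int
main (void)
{
    mroute_t   route = { .rpf_id = 55 };
    pkt_meta_t meta  = { .rpf_id = 0 };  /* as received, before disposition */

    mpls_disposition (&meta, 55);        /* the RPF-ID assigned to the LSP path */
    printf ("RPF %s\n", mfib_rpf_check (&route, &meta) ? "pass" : "drop");
    return 0;
}
--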

Hth,
neale



From: Nagaprabhanjan Bellari <nagp.li...@gmail.com>
Date: Sunday, 9 July 2017 at 03:55
To: "Neale Ranns (nranns)" <nra...@cisco.com>
Cc: vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] A few questions regarding mcast fib

Hi Neale,
Sure, I will push this and other multicast options in the CLIs shortly. 
Meanwhile, here is the output from gdb:

--
(gdb) p mpls_disp_dpo_pool[0]
$1 = {mdd_dpo = {dpoi_type = 27, dpoi_proto = DPO_PROTO_IP4, dpoi_next_node = 
1, dpoi_index = 4}, mdd_payload_proto = DPO_PROTO_IP4,
  mdd_rpf_id = 0, mdd_locks = 1}
--
I am still not able to understand what the rpf_id on the tail node has to do
with the rpf_id assigned to an interface on the head node. :-\
Thanks,
-nagp

On Sat, Jul 8, 2017 at 11:20 PM, Neale Ranns (nranns) <nra...@cisco.com> wrote:

Hi nagp,

We need to find out the value of the RPF-ID that’s stored in the
mpls-disposition DPO. That’s not displayed below. So, two options:

1) We can tell from the output that it’s index #0, so hook up gdb and do:
‘print mpls_disp_dpo_pool[0]’

2) Modify format_mpls_disp_dpo to also print mdd->mdd_rpf_id if it’s non-zero -
a sketch follows below. It’d be nice if this patch was up-streamed ☺
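
Roughly like this (a sketch only - the body of the existing formatter is
paraphrased from memory, so treat everything other than the added mdd_rpf_id
check as an approximation of the real code in mpls_disposition.c):

--
/* sketch: extend the mpls-disposition DPO formatter to show the RPF-ID */
static u8 *
format_mpls_disp_dpo (u8 * s, va_list * args)
{
    index_t index = va_arg (*args, index_t);
    CLIB_UNUSED (u32 indent) = va_arg (*args, u32);
    mpls_disp_dpo_t *mdd;

    mdd = mpls_disp_dpo_get (index);

    /* existing output, e.g. "mpls-disposition:[0]:[ip4]" */
    s = format (s, "mpls-disposition:[%d]:[%U]",
                index, format_dpo_proto, mdd->mdd_payload_proto);

    /* new: also print the RPF-ID when one is set */
    if (0 != mdd->mdd_rpf_id)
        s = format (s, " rpf-id:%d", mdd->mdd_rpf_id);

    return (s);
}
--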

Thanks
/neale



From: Nagaprabhanjan Bellari <nagp.li...@gmail.com>
Date: Saturday, 8 July 2017 at 17:55
To: "Neale Ranns (nranns)" <nra...@cisco.com>
Cc: vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] A few questions regarding mcast fib

Hi Neale,
Here is the output of "show mpls fib 501" on the tail node (encapsulation is
happening at the head node, where rpf_id is set to 0, JFYI):

--
501:eos/21 fib:0 index:61 locks:2
  src:API  refs:1 flags:attached,multicast,
    index:78 locks:2 flags:shared, uPRF-list:62 len:0 itfs:[]
      index:122 pl-index:78 ipv4 weight=1 deag:  oper-flags:resolved, 
cfg-flags:attached,rpf-id,
       [@0]: dst-address,multicast lookup in ipv4-VRF:1

 forwarding:   mpls-eos-chain
  [@0]: dpo-replicate: [index:16 buckets:1 to:[0:0]]
    [0] [@1]: mpls-disposition:[0]:[ip4]
        [@1]: dst-address,multicast lookup in ipv4-VRF:1
--
Would be glad to provide any other information.

Thanks,
-nagp

On Sat, Jul 8, 2017 at 6:55 PM, Neale Ranns (nranns) <nra...@cisco.com> wrote:
Hi nagp,

vnet_buffer(b0)->ip.rpf_id is set in mpls_label_disposition_inline.
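
For reference, the assignment in question looks roughly like this (paraphrased
from memory, not a verbatim excerpt; mddi0 and b0 are the per-packet DPO index
and buffer in that node):

--
/* mpls_label_disposition_inline (paraphrased): stamp the buffer with the
 * RPF-ID carried in the disposition DPO, so that the later mfib RPF check
 * can match it against the RPF-ID configured on the mroute */
mdd0 = mpls_disp_dpo_get (mddi0);
vnet_buffer (b0)->ip.rpf_id = mdd0->mdd_rpf_id;
--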
Can you show me the MPLS route at the tail again: ‘sh mpls fib 501’

/neale

From: Nagaprabhanjan Bellari <nagp.li...@gmail.com>
Date: Saturday, 8 July 2017 at 14:05

To: "Neale Ranns (nranns)" <nra...@cisco.com<mailto:nra...@cisco.com>>
Cc: vpp-dev <vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Subject: Re: [vpp-dev] A few questions regarding mcast fib

Hi Neale! Sorry for the late reply.
You are right, the DELETED flag does not seem to have any impact w.r.t.
forwarding. It goes through fine, i.e. the multicast packets get encapsulated
and sent across.
I am not able to see where vnet_buffer(b0)->ip.rpf_id is assigned. The rpf_id
associated with the route does not match the incoming packet's
vnet_buffer(b0)->ip.rpf_id (which is always zero), and because of that the
packets are getting dropped. I have worked around it by setting the "accept all
interfaces" flag on the route for now, but I am sure that's not the right way
to do it.
Many thanks!
-nagp


