Hi nagp,

As the packets egress the LSP, they ‘acquire’ the RPF-ID specified in the MPLS 
route’s path. During the mFIB lookup this RPF-ID must match the route’s RPF-ID.

To update the mroute’s RPF-ID do:

    mfib_table_entry_update (fib_index, &pfx,
                             MFIB_SOURCE_XXX,
                             <RPF_ID_VALUE>,
                             MFIB_ENTRY_FLAG_NONE);

If the mroute has an associated RPF-ID it does not need an accepting interface. 
In other words, the RPF-ID acts as the accepting interface.
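For reference, a fuller sketch (the prefix, the table id and MFIB_SOURCE_API 
are illustrative placeholders, not taken from your configuration):

    /* Sketch: give the (1.1.1.1, 239.1.1.1) mroute in table 3 an RPF-ID of 44 */
    const mfib_prefix_t pfx = {
        .fp_proto = FIB_PROTOCOL_IP4,
        .fp_len = 64, /* an (S,G): source + group */
        .fp_grp_addr = {
            .ip4.as_u32 = clib_host_to_net_u32(0xef010101), /* 239.1.1.1 */
        },
        .fp_src_addr = {
            .ip4.as_u32 = clib_host_to_net_u32(0x01010101), /* 1.1.1.1 */
        },
    };
    u32 fib_index = mfib_table_find(FIB_PROTOCOL_IP4, 3);

    mfib_table_entry_update(fib_index, &pfx,
                            MFIB_SOURCE_API,
                            44, /* the RPF-ID */
                            MFIB_ENTRY_FLAG_NONE);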

Hth,
Neale


From: Nagaprabhanjan Bellari <nagp.li...@gmail.com>
Date: Thursday, 6 July 2017 at 11:24
To: "Neale Ranns (nranns)" <nra...@cisco.com>
Cc: vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] A few questions regarding mcast fib

Thanks for the reply Neale, I see that we can set frp_rpf_id in the rpath to 
the RPF-ID and set FIB_ROUTE_PATH_RPF_ID in frp_flags while adding a label 
whose lookup is done in the multicast table.
But how do I associate the same RPF-ID when doing an "ip mroute add"? Do I have 
to set frp_rpf_id to the RPF-ID, set MFIB_ITF_FLAG_ACCEPT and do an 
mfib_entry_path_update? And then set frp_sw_if_index to the correct interface 
and re-add the route with the MFIB_ITF_FLAG_FORWARD flag? Can you please shed 
some light on this?
Thanks,
-nagp

On Tue, Jul 4, 2017 at 8:00 PM, Neale Ranns (nranns) 
<nra...@cisco.com> wrote:
Hi,

The lookup DPO the route’s path links to needs to be created with:

      lookup_dpo_add_or_lock_w_fib_index(fib_index,
                                         DPO_PROTO_IP4,
                                         LOOKUP_MULTICAST,   <<<<
                                         etc…);

This is done in fib_path_resolve if the path is flagged as RPF_ID, which yours 
is not…
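(You would not normally call this yourself – fib_path_resolve does it – but for 
reference, a minimal sketch with the remaining arguments filled in, assuming 
the enum names from lookup_dpo.h:)

    dpo_id_t dpo = DPO_INVALID;

    /* a multicast destination-address lookup in a configured table */
    lookup_dpo_add_or_lock_w_fib_index(fib_index,
                                       DPO_PROTO_IP4,
                                       LOOKUP_MULTICAST,
                                       LOOKUP_INPUT_DST_ADDR,
                                       LOOKUP_TABLE_FROM_CONFIG,
                                       &dpo);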

/neale

From: Nagaprabhanjan Bellari <nagp.li...@gmail.com>
Date: Tuesday, 4 July 2017 at 14:18

To: "Neale Ranns (nranns)" <nra...@cisco.com<mailto:nra...@cisco.com>>
Cc: vpp-dev <vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Subject: Re: [vpp-dev] A few questions regarding mcast fib

Hi Neale,
I have added an MPLS local-label 501 with eos 1 that does a lookup in the 
multicast table with table_id 3.
Basically, I am setting FIB_ENTRY_FLAG_MULTICAST while adding the MPLS route. 
It is displayed as follows in "show mpls fib":
--
501:eos/21 fib:0 index:60 locks:2
  src:API  refs:1 flags:multicast,
    index:77 locks:12 flags:shared, uPRF-list:61 len:0 itfs:[]
      index:121 pl-index:77 ipv4 weight=1 deag:  oper-flags:resolved,
       [@0]: dst-address,unicast lookup in ipv4-VRF:1

 forwarding:   mpls-eos-chain
  [@0]: dpo-replicate: [index:16 buckets:1 to:[0:0]]
    [0] [@1]: mpls-disposition:[0]:[ip4]
        [@1]: dst-address,unicast lookup in ipv4-VRF:1
--
The flags show multicast, but the destination-address lookup goes to the 
unicast table. Why is that so? Am I missing something?
Thanks,
-nagp

On Fri, Jun 30, 2017 at 7:55 AM, Nagaprabhanjan Bellari 
<nagp.li...@gmail.com> wrote:
Got it, thanks!

On Thu, Jun 29, 2017 at 10:56 PM, Neale Ranns (nranns) 
<nra...@cisco.com> wrote:
Hi,

No. If we did allow that, then every receiver on that interface would receive 
packets encapsulated with that label, which is equivalent to the label being 
upstream assigned and probably not what you want. An MPLS tunnel can itself be 
P2MP, so the same argument applies.
If you have one or more unicast MPLS tunnels in the multicast replication list, 
then you can create each MPLS tunnel with multiple output labels.
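For example, something like this (a sketch in the debug-CLI style used 
elsewhere in this thread; whether your build’s CLI accepts repeated out-label 
tokens is an assumption – via the API it is one tunnel path whose 
frp_label_stack holds both labels):

    mpls tunnel add via 10.10.10.10 Eth0 out-label 55 out-label 66
    ip mroute add 1.1.1.1 239.1.1.1 via mpls-tunnel0 Forward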

Hth,
neale

From: Nagaprabhanjan Bellari <nagp.li...@gmail.com>
Date: Thursday, 29 June 2017 at 13:01

To: "Neale Ranns (nranns)" <nra...@cisco.com<mailto:nra...@cisco.com>>
Cc: vpp-dev <vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Subject: Re: [vpp-dev] A few questions regarding mcast fib

Hi Neale,
For the IP-to-MPLS case, can we specify another label in rpath.frp_label_stack 
(the vc label) in addition to specifying the sw_if_index of an MPLS tunnel 
(which will carry the tunnel label)?
Thanks,
-nagp

On Tue, Jun 13, 2017 at 5:28 PM, Neale Ranns (nranns) 
<nra...@cisco.com> wrote:

Quite a few of the multicast-related options are not available via the CLI. I 
don’t use the debug CLI much for testing; it’s all done in the python UT 
through the API. If you can see your way to adding the CLI options, I’d be 
grateful.

Thanks,
neale

From: Nagaprabhanjan Bellari <nagp.li...@gmail.com>
Date: Tuesday, 13 June 2017 at 12:37
To: "Neale Ranns (nranns)" <nra...@cisco.com<mailto:nra...@cisco.com>>
Cc: vpp-dev <vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Subject: Re: [vpp-dev] A few questions regarding mcast fib

One question: I don’t see a "multicast" option with the "mpls local-label" 
command in the latest VPP – am I missing something?
Thanks,
-nagp

On Tue, Jun 13, 2017 at 4:40 PM, Nagaprabhanjan Bellari 
<nagp.li...@gmail.com> wrote:
Thank you! As always, a great answer!
This itself is good enough documentation to get going. :-)
-nagp

On Tue, Jun 13, 2017 at 4:34 PM, Neale Ranns (nranns) 
<nra...@cisco.com> wrote:
Hi nagp,

VPP does support those scenarios. Sadly I’ve not written any documentation, so 
I’ll write some instructions here… When I wrote the mfib/mpls code I did little 
testing via the debug CLI, so some of the options below may not be available 
via the CLI; I’ve put those options in []. I’ll be cheeky and leave the 
implementation as an exercise for the reader… but see test_mpls.py and the 
test_mcast* functions therein for the various head, midpoint and tail 
programming via the API. There are also examples of how to set up the 
fib_route_path_t for each scenario in [m]fib_test.c.

IP-to-IP:

Adding routes to the mFIB, and adding replications for those routes, is similar 
to adding ECMP paths to routes in the unicast FIB (they even share the same 
fib_path[_list] code).
The prefix in the mFIB can be:

1)       (*,G/m)
         ip mroute add 232.1.0.0/16 via Eth0 Forward

2)       (*,G)
         ip mroute add 232.1.1.1 via Eth0 Forward

3)       (S,G)
         ip mroute add 1.1.1.1 239.1.1.1 via Eth0 Forward

Adding more replications is achieved by adding more paths:

ip mroute add 1.1.1.1 239.1.1.1 via Eth1 Forward

ip mroute add 1.1.1.1 239.1.1.1 via Eth2 Forward

ip mroute add 1.1.1.1 239.1.1.1 via Eth3 Forward

You’ll note the path is “attached”, i.e. only the interface is given and not 
the next-hop. For mcast routes this is the expected case. The path will resolve 
via the special multicast adjacency on the interface, which will perform the 
MAC address fixup at switch time. If you want to specify a next-hop in the 
path, and so have a unicast MAC address applied, then you’ll need: 
https://gerrit.fd.io/r/#/c/6974/

The flag ‘Forward’ is required to specify this as a replication, i.e. a path 
used for forwarding. This distinction is required because for multicast one 
must also specify the RPF constraints. RPF can be specified in several ways (a 
sketch of the equivalent API calls follows the list):

1)       A particular [set of] interfaces;
         ip mroute add 1.1.1.1 239.1.1.1 via Eth10 Accept

2)       An RPF-ID (more on this later). There is no CLI for this.

3)       Accept from any interface – this can be thought of as an RPF ‘don’t 
         care’ – there are no loops, I personally guarantee it :/
         ip mroute 1.1.1.1 239.1.1.1 AA
         The AA stands for accept-any-interface – it is a property of the mFIB 
         entry, not one of its paths.
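The API-side sketch promised above, assuming the 17.x signature of 
mfib_table_entry_path_update; fib_index and pfx are the entry’s mFIB index and 
mfib_prefix_t, and the interface handles and MFIB_SOURCE_API are illustrative:

    /* one Forward replication out of Eth1, one Accept (RPF) interface Eth10 */
    fib_route_path_t rpath = {
        .frp_proto = DPO_PROTO_IP4,
        .frp_sw_if_index = eth1_sw_if_index, /* your interface’s handle */
        .frp_weight = 1,
    };

    /* a path used for forwarding, i.e. a replication */
    mfib_table_entry_path_update(fib_index, &pfx, MFIB_SOURCE_API,
                                 &rpath, MFIB_ITF_FLAG_FORWARD);

    /* a path used only to satisfy the RPF constraint */
    rpath.frp_sw_if_index = eth10_sw_if_index;
    mfib_table_entry_path_update(fib_index, &pfx, MFIB_SOURCE_API,
                                 &rpath, MFIB_ITF_FLAG_ACCEPT);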

At this point we do not support PIM register tunnels.

IP-to-MPLS: the head-end

VPP supports point-2-multi-point (P2MP) MPLS tunnels (the sort of tunnels one 
gets via e.g. RSVP P2MP-TE) and P2MP LSPs (e.g. those one gets from mLDP) – the 
difference is essentially the presence (for the former) or absence (for the 
latter) of an interface associated with the LSP.

At the head end one has choices:

1)       Add replications to the mroute over multiple unicast tunnels, e.g.

ip mroute add 1.1.1.1 239.1.1.1 via mpls-tunnel1 Forward

ip mroute add 1.1.1.1 239.1.1.1 via mpls-tunnel2 Forward

ip mroute add 1.1.1.1 239.1.1.1 via mpls-tunnel3 Forward

but note that we do not support having an ‘out-label’ specification here, hence 
the tunnels must be dedicated to the mroute – which is not so scalable.

2)       Define one MPLS P2MP tunnel with replications
         mpls tunnel [multicast] add via 10.10.10.10 Eth0 out-label 55
         mpls tunnel <SW_IF_HANDLE> add via 11.11.11.11 Eth1 out-label 66
and then point the mroute via this tunnel:
         ip mroute 1.1.1.1 239.1.1.1 via mpls-tunnelXXX Forward
The tunnel can be shared or dedicated to the mroute (i.e. a default or data 
MDT).

Option 2 is the typical/recommended way, since it reflects the control plane 
model; BGP gives the paths via the core-trees (i.e. replications for the mroute 
via tunnels) and mLDP/RSVP-TE builds the LSPs (adds/removes replications for 
the P2MP MPLS tunnel).
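A sketch of option 2 via the API, assuming the mpls_tunnel.h functions of that 
era (vnet_mpls_tunnel_create / vnet_mpls_tunnel_path_add); treat the exact 
signatures as assumptions and check mpls_tunnel.h in your tree:

    /* a P2MP tunnel: each path added to it is another replication */
    u32 mti = vnet_mpls_tunnel_create(0 /* l2_only */, 1 /* is_multicast */);

    fib_route_path_t rpath = {
        .frp_proto = DPO_PROTO_IP4,
        .frp_addr = {
            .ip4.as_u32 = clib_host_to_net_u32(0x0a0a0a0a), /* 10.10.10.10 */
        },
        .frp_sw_if_index = eth0_sw_if_index,
        .frp_weight = 1,
    };
    vec_add1(rpath.frp_label_stack, 55); /* out-label 55 */
    vnet_mpls_tunnel_path_add(mti, &rpath);

mti is the tunnel’s sw_if_index; the mroute’s Forward path then points at it.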

If there are also IP replications to make, then combinations can be used, e.g.:
ip mroute 1.1.1.1 239.1.1.1 via mpls-tunnelXXX Forward
ip mroute 1.1.1.1 239.1.1.1 via Eth0 Forward
This gives 2-stage replication: first to all the tunnels and all the attached 
peers, then to each next-hop within the tunnel.


MPLS-to-MPLS: the mid-point.


At the mid-point we build MPLS entries that perform replication:
   mpls local-label [multicast] 33 eos via 10.10.10.10 eth0 out-label 77
   mpls local-label [multicast] 33 eos via 11.11.11.11 eth1 out-label 77
Each path is unicast, i.e. the packet is sent out with a unicast MAC address to 
the peer in the path; no RPF check is applied at the mid-point, the lookup is 
on the label alone.
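The same replication via the API, as a sketch; I’m assuming 
fib_table_entry_path_add2 takes a vector of paths (as the calls in fib_test.c 
suggest) and the 17.x fib_route_path_t field names:

    /* local-label 33 (eos): replicate to peer 10.10.10.10 out of eth0,
     * swapping to out-label 77 */
    fib_prefix_t pfx = {
        .fp_proto = FIB_PROTOCOL_MPLS,
        .fp_len = 21,
        .fp_label = 33,
        .fp_eos = MPLS_EOS,
        .fp_payload_proto = DPO_PROTO_IP4,
    };
    fib_route_path_t *rpaths = NULL, rpath = {
        .frp_proto = DPO_PROTO_IP4,
        .frp_addr = {
            .ip4.as_u32 = clib_host_to_net_u32(0x0a0a0a0a), /* 10.10.10.10 */
        },
        .frp_sw_if_index = eth0_sw_if_index,
        .frp_weight = 1,
    };
    vec_add1(rpath.frp_label_stack, 77); /* out-label */
    vec_add1(rpaths, rpath);

    fib_table_entry_path_add2(fib_table_find(FIB_PROTOCOL_MPLS, 0), &pfx,
                              FIB_SOURCE_API,
                              FIB_ENTRY_FLAG_MULTICAST, /* the [multicast] */
                              rpaths);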

MPLS-to-IP: the tail


The use of upstream assigned labels has not been tested; though I think it 
could easily be made to work, let’s discount this use-case as it’s rarely, if 
ever, used. Consequently, any VRF context must be recovered from the transport 
LSP – i.e. P2MP LSPs cannot PHP.

At the tail we then need to extract two pieces of information from the label 
the packet arrives with: firstly the VRF, and secondly the RPF information. For 
an LSP with an associated interface (i.e. RSVP-TE) this is easier, since the 
interface is bound to a VRF and the mroutes in that VRF can have RPF 
constraints specified against that interface. The MPLS local-label programming 
can then simply say: if the packet arrives with label X, pop it, now it’s ip4, 
and make it look like it came in on interface Y.
     mpls local-label 77 eos via rx-ip4 mpls-tunnel3
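Via the API this is, I believe, the FIB_ROUTE_PATH_INTF_RX path flag – a sketch 
in which pfx77 is a label-77 MPLS prefix built as in the mid-point sketch, 
mpls_fib_index is that table’s index and tunnel_sw_if_index is mpls-tunnel3’s 
handle:

    fib_route_path_t *rpaths = NULL, rpath = {
        .frp_proto = DPO_PROTO_IP4,            /* the payload after the pop */
        .frp_sw_if_index = tunnel_sw_if_index,
        .frp_flags = FIB_ROUTE_PATH_INTF_RX,   /* ‘make it look received on’ */
        .frp_weight = 1,
    };
    vec_add1(rpaths, rpath);

    fib_table_entry_path_add2(mpls_fib_index, &pfx77,
                              FIB_SOURCE_API, FIB_ENTRY_FLAG_NONE, rpaths);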

If there is no interface associated with the LSP, then we need another way to 
specify the RPF constraints – this is where the RPF-ID comes in. Think of it as 
equivalent to specifying a virtual interface, but with the [scaling] benefit of 
not having to create an interface representation. We associate the LSP’s path 
with an RPF-ID and indicate that it is a pop-and-lookup in an mFIB. The RPF-ID 
is associated with the path and not with the label, so we can support extranets 
– i.e. replications into multiple VRFs with a different RPF-ID in each [this is 
untested though].

   mpls local-label [multicast] 77 eos via [rpf-id 44] 
[ip4-mcast-lookup-in-table] 3

where table 3 is the VRF in question. Mroutes in VRF 3 then also need to be 
associated with RPF-ID 44 so that the RPF check passes for packets egressing 
the LSP;
  ip mroute table 3 1.1.1.1 239.1.1.1 [rpf-id 44]

If the RPF-ID the packet ‘acquires’ as it egresses the LSP does not match the 
RPF-ID of the mroute, the RPF check fails and the packet is dropped.
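Pulling the two halves together via the API (this is essentially the recipe at 
the top of this thread) – a sketch in which pfx77, pfx_sg and table3_index are 
illustrative names from the earlier sketches, with the caveat that whether 
frp_fib_index wants the FIB or mFIB index of table 3 is best checked in 
fib_path_resolve:

    /* 1: the MPLS route’s path carries the RPF-ID and does a multicast
     *    lookup in table 3 */
    fib_route_path_t *rpaths = NULL, rpath = {
        .frp_proto = DPO_PROTO_IP4,
        .frp_sw_if_index = ~0,          /* no interface: a lookup (deag) path */
        .frp_fib_index = table3_index,
        .frp_rpf_id = 44,
        .frp_flags = FIB_ROUTE_PATH_RPF_ID,
        .frp_weight = 1,
    };
    vec_add1(rpaths, rpath);
    fib_table_entry_path_add2(mpls_fib_index, &pfx77, FIB_SOURCE_API,
                              FIB_ENTRY_FLAG_MULTICAST, rpaths);

    /* 2: the mroute in VRF 3 gets the same RPF-ID in place of an
     *    accepting interface */
    mfib_table_entry_update(mfib_table_find(FIB_PROTOCOL_IP4, 3), &pfx_sg,
                            MFIB_SOURCE_API, 44, MFIB_ENTRY_FLAG_NONE);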

Bud Nodes

A bud node is a combination of a mid-point and a tail, i.e. MPLS replications 
will be made further down the core tree/LSP and a copy will be made locally, to 
be forwarded within the IP VRF. This is done simply by specifying both types of 
replication against the LSP’s local-label:
   mpls local-label [multicast] 77 eos via [rpf-id 44] [ip4-mcast-lookup-in-table] 3
   mpls local-label [multicast] 77 eos via 10.10.10.10 eth0 out-label 88



hth,
neale


From: <vpp-dev-boun...@lists.fd.io> on behalf of Nagaprabhanjan Bellari 
<nagp.li...@gmail.com>
Date: Tuesday, 13 June 2017 at 07:31
To: vpp-dev <vpp-dev@lists.fd.io>
Subject: [vpp-dev] A few questions regarding mcast fib

Hi,
Just wanted to understand if the following things are achievable with VPP 
w.r.t. IP multicast:
1. IP multicast routing – check (S,G) and forward to one or more listeners
2. IP multicast MPLS tunnel origination – check (S,G) and replicate to one or 
more MPLS tunnels (with different labels)
3. MPLS termination and IP multicast routing – terminate MPLS, look up the 
(S,G) of the packet and forward to one or more listeners.
If there is any document that details how IP mcast is usable in VPP, that would 
be great!

Thanks,
-nagp





