[vpp-dev] A few questions regarding mcast fib

2017-06-12 Thread Nagaprabhanjan Bellari
Hi,

Just wanted to understand if the following things are achievable with VPP
w.r.t IP multicast:

1. IP multicast routing - look up (S,G) and forward to one or more listeners
2. IP multicast MPLS tunnel origination - look up (S,G) and replicate to one
or more MPLS tunnels (with different labels)
3. MPLS termination and IP multicast routing - terminate MPLS, look up the
packet's (S,G) and forward to one or more listeners.

If there is any document that details how IP mcast can be used in VPP, that
would be great!

Thanks,
-nagp

Re: [vpp-dev] A few questions regarding mcast fib

2017-06-13 Thread Neale Ranns (nranns)
Hi nagp,

VPP does support those scenarios. I've not written any documentation sadly, so 
I'll write some instructions here… When I wrote the mfib/mpls code I did little 
testing via the debug CLI, so some of the options below may not be available 
via the CLI; I've put those options in []. I'll be cheeky and leave the 
implementation as an exercise to the reader… but see test_mpls.py and the 
test_mcast* functions therein for the various head, midpoint and tail 
programming via the API. There are also examples of how to set up the 
fib_route_path_t for each scenario in [m]fib_test.c.

IP-to-IP:

Adding routes to the mFIB, and adding replications for those routes, is similar 
to adding ECMP paths to routes in the unicast FIB (they even share the same 
fib_path[_list] code).
The prefix in the mFIB can be:

1)   (*,G/m)
   ip mroute add 232.1.0.0/16 via Eth0 Forward
2)   (*,G)
   ip mroute add 232.1.1.1 via Eth0 Forward
3)   (S,G)
   ip mroute add 1.1.1.1 239.1.1.1 via Eth0 Forward

Adding more replications is achieved by adding more paths:
   ip mroute add 1.1.1.1 239.1.1.1 via Eth1 Forward
   ip mroute add 1.1.1.1 239.1.1.1 via Eth2 Forward
   ip mroute add 1.1.1.1 239.1.1.1 via Eth3 Forward

You'll note the path is "attached", i.e. only the interface is given and not 
the next-hop. For mcast routes this is the expected case. The path will resolve 
via the special multicast adjacency on the interface, which will perform the 
MAC address fixup at switch time. If you want to specify a next-hop in the 
path, and so have a unicast MAC address applied, then you'll need: 
https://gerrit.fd.io/r/#/c/6974/

The flag 'Forward' is required to specify that this is a replication, i.e. a 
path used for forwarding. This distinction is required because for multicast 
one must also specify the RPF constraints. RPF can be specified in several 
ways:

1)   a particular [set of] interfaces;
   ip mroute add 1.1.1.1 239.1.1.1 via Eth10 Accept
2)   an RPF-ID (more on this later). No CLI for this.
3)   accept from any interface – this can be thought of as an RPF 'don't 
care' – there are no loops, I personally guarantee it :/
   ip mroute 1.1.1.1 239.1.1.1 AA
The AA stands for accept-any-interface – it is a property of the mFIB entry, 
not one of its paths.

At this point we do not support PIM register tunnels.
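For the API route, a minimal sketch of the (S,G) case, assuming the 2017-era 
mfib calls referenced later in this thread (mfib_table_entry_path_update, 
MFIB_ITF_FLAG_*); field names and signatures are illustrative, not definitive:

   /* Sketch: (1.1.1.1, 239.1.1.1) with one Accept (RPF) interface and
    * one Forward replication - the API analogue of the CLI above.    */
   mfib_prefix_t pfx = {
       .fp_proto = FIB_PROTOCOL_IP4,
       .fp_len = 64,                              /* (S,G): 32 + 32   */
       .fp_grp_addr = { .ip4.as_u32 = clib_host_to_net_u32(0xEF010101) },
       .fp_src_addr = { .ip4.as_u32 = clib_host_to_net_u32(0x01010101) },
   };
   fib_route_path_t path = {
       .frp_proto = DPO_PROTO_IP4,      /* fib_protocol_t in older trees */
       .frp_sw_if_index = eth10_sw_if_index,      /* attached: no next-hop */
       .frp_weight = 1,
   };
   mfib_table_entry_path_update(fib_index, &pfx, MFIB_SOURCE_API,
                                &path, MFIB_ITF_FLAG_ACCEPT);
   path.frp_sw_if_index = eth0_sw_if_index;
   mfib_table_entry_path_update(fib_index, &pfx, MFIB_SOURCE_API,
                                &path, MFIB_ITF_FLAG_FORWARD);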
IP-to-MPLS: the head-end

VPP supports point-to-multipoint (P2MP) MPLS tunnels (the sort of tunnels one 
gets via e.g. RSVP P2MP-TE) and P2MP LSPs (e.g. those one gets from mLDP) – the 
difference is essentially the presence (for the former) or absence (for the 
latter) of an interface associated with the LSP.

At the head end one has choices:

1)   Add replications to the mroute over multiple unicast tunnels, e.g.
   ip mroute add 1.1.1.1 239.1.1.1 via mpls-tunnel1 Forward
   ip mroute add 1.1.1.1 239.1.1.1 via mpls-tunnel2 Forward
   ip mroute add 1.1.1.1 239.1.1.1 via mpls-tunnel3 Forward
but note that we do not support an 'out-label' specification here, hence the 
tunnels must be dedicated to the mroute – which is not so scalable.

2)   Define one MPLS P2MP tunnel with replications
   mpls tunnel [multicast] add via 10.10.10.10 Eth0 out-label 55
   mpls tunnel add via 11.11.11.11 Eth1 out-label 66
and then point the mroute via this tunnel
   ip mroute 1.1.1.1 239.1.1.1 via mpls-tunnelXXX Forward
The tunnel can be shared or dedicated to the mroute (i.e. a default or data 
MDT).

Option 2 is the typical/recommended way, since it reflects the control-plane 
model: BGP gives paths via the core-trees (i.e. replications for the mroute via 
tunnels) and mLDP/RSVP-TE builds the LSPs (adds/removes replications for the 
P2MP MPLS tunnel).

If there are also IP replications to make, then combinations can be used, e.g.;
   ip mroute 1.1.1.1 239.1.1.1 via mpls-tunnelXXX
   ip mroute 1.1.1.1 239.1.1.1 via Eth0 Forward
This gives 2-stage replication: first via all tunnels and all attached peers, 
then via each next-hop in the tunnel.
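A sketch of option 2 via the API – vnet_mpls_tunnel_create() and 
vnet_mpls_tunnel_path_add() are my best guess at the mpls_tunnel.h names of 
that era, with is_multicast giving the P2MP behaviour; verify the signatures 
against your tree:

   /* Sketch: a multicast (P2MP) MPLS tunnel with one replication leg;
    * add further legs with more vnet_mpls_tunnel_path_add() calls.   */
   u32 mt_sw_if_index = vnet_mpls_tunnel_create(0 /* l2_only */,
                                                1 /* is_multicast */);
   fib_route_path_t *rpaths = NULL;
   fib_route_path_t leg = {
       .frp_sw_if_index = eth0_sw_if_index,
       .frp_addr = { .ip4.as_u32 = clib_host_to_net_u32(0x0A0A0A0A) },
       .frp_weight = 1,                           /* peer 10.10.10.10 */
   };
   vec_add1(leg.frp_label_stack, 55);             /* out-label 55 */
   vec_add1(rpaths, leg);
   vnet_mpls_tunnel_path_add(mt_sw_if_index, rpaths);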
MPLS-to-MPLS: the mid-point

At the mid-point we build MPLS entries that perform replication:
   mpls local-label [multicast] 33 eos via 10.10.10.10 eth0 out-label 77
   mpls local-label [multicast] 33 eos via 11.11.11.11 eth1 out-label 77
Each path is unicast, i.e. the packet is sent out with a unicast MAC to the 
peer in the path; consequently RPF is necessary.
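The same mid-point replication sketched via the FIB API – 
fib_table_entry_path_add2() and FIB_ENTRY_FLAG_MULTICAST are the names used 
later in this thread; treat the remaining field names as illustrative:

   /* Sketch: local-label 33 (eos), flagged multicast, with one
    * label-swap replication to 10.10.10.10 out of eth0; add the
    * second path the same way to get the replication.              */
   fib_prefix_t pfx = {
       .fp_proto = FIB_PROTOCOL_MPLS,
       .fp_label = 33,
       .fp_eos = MPLS_EOS,
       .fp_payload_proto = DPO_PROTO_MPLS,
   };
   fib_route_path_t path = {
       .frp_sw_if_index = eth0_sw_if_index,
       .frp_addr = { .ip4.as_u32 = clib_host_to_net_u32(0x0A0A0A0A) },
       .frp_weight = 1,
   };
   vec_add1(path.frp_label_stack, 77);            /* out-label 77 */
   fib_table_entry_path_add2(0 /* MPLS FIB */, &pfx, FIB_SOURCE_API,
                             FIB_ENTRY_FLAG_MULTICAST, &path);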
MPLS-to-IP: the tail

The use of upstream-assigned labels has not been tested; it could easily be 
made to work, I think, but let's discount this use-case as it's rarely, if 
ever, used. Consequently, any VRF context must be recovered from the transport 
LSP – i.e. P2MP LSPs cannot PHP.

At the tail, we then need to extract two pieces of information from the label 
the packet arrives with: firstly the VRF and secondly the RPF information. For 
an LSP with an associated interface (i.e. RSVP-TE) this is easier, since the 
interface is bound to a VRF and the mroutes in that VRF can have RPF 
constraints specified against that interface. The MPLS local-label programming 
can then simply say: if the packet arrives with label X, pop it, now it's ip4, 
and make it look like it came in on interface Y.
 mpls local-label 77 eos via rx-ip4 mpls-tunnel3

If there is no interface associated with the LSP, then we need another way to 
specify RPF constraints – this is where the RPF-ID comes in. Think of it as 
equivalent to specifying a virtual interface, but with the [scaling] benefit of 
not having to create an interface representation. We associate the LSP's path 
with an RPF-ID and indicate it is a pop-and-lookup in an mFIB. The RPF-ID is 
associated with the path and not with the label, so we can support extranets – 
i.e. replications into multiple VRFs with a different RPF-ID in each [this is 
untested though].

   mpls local-label [multicast] 77 eos via [rpf-id 44] 
[ip4-mcast-lookup-in-table] 3

where table 3 is the VRF in question. Mroutes in VRF 3 then also need to be 
associated with RPF-ID 44 so that the RPF check passes for packets egressing 
the LSP;
   ip mroute table 3 1.1.1.1 239.1.1.1 [rpf-id 44]

If the RPF-ID the packet 'acquires' as it egresses the LSP does not match the 
RPF-ID of the mroute, the RPF check fails and the packet is dropped.

Bud Nodes

A bud node is a combination of a mid-point and a tail: MPLS replications 
will be made further down the core tree/LSP and a copy will be made locally to 
be forwarded within the IP VRF. This is done simply by specifying both types of 
replication against the LSP's local-label;
   mpls local-label [multicast] 77 eos via [rpf-id 44] [ip4-mcast-lookup-in-table] 3
   mpls local-label [multicast] 77 eos via 10.10.10.10 eth0 out-label 88



hth,
neale



Re: [vpp-dev] A few questions regarding mcast fib

2017-06-13 Thread Nagaprabhanjan Bellari
Thank you! As always, a great answer!
This itself is good enough documentation to get going. :-)

-nagp

Re: [vpp-dev] A few questions regarding mcast fib

2017-06-13 Thread Nagaprabhanjan Bellari
One question: I don't see a "multicast" option with the "mpls local-label" 
command with the latest VPP – am I missing something?

Thanks,
-nagp

Re: [vpp-dev] A few questions regarding mcast fib

2017-06-13 Thread Neale Ranns (nranns)

Quite a few of the multicast-related options are not available via the CLI. I 
don't use the debug CLI much for testing; it's all done in the Python unit 
tests through the API. If you can see your way to adding CLI options, I'd be 
grateful.

Thanks,
neale


Re: [vpp-dev] A few questions regarding mcast fib

2017-06-29 Thread Nagaprabhanjan Bellari
Hi Neale,

For the IP-to-MPLS case, can we specify another label in
rpath.frp_label_stack (the VC label) in addition to specifying the
sw_if_index of an MPLS tunnel (which will carry the tunnel label)?

Thanks,
-nagp


Re: [vpp-dev] A few questions regarding mcast fib

2017-06-29 Thread Neale Ranns (nranns)
Hi,

No. If we allowed that, then every receiver on that interface would receive 
packets encapsulated with that label, which is equivalent to the label being 
upstream assigned and probably not what you want. An MPLS tunnel can itself be 
P2MP, so the same argument applies.
If you have one or more unicast MPLS tunnels in the multicast replication list, 
then you can create each MPLS tunnel with multiple output labels.
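A sketch of the two-label case against the path structure nagp asks about: 
frp_label_stack is a vector, so a tunnel leg can carry both the tunnel and the 
VC label. Names follow this thread's usage of fib_route_path_t; the exact 
signatures are unverified:

   /* Sketch: one unicast MPLS tunnel leg that pushes two output
    * labels (transport + VC) via frp_label_stack; the leg is then
    * added to the tunnel, e.g. with vnet_mpls_tunnel_path_add().  */
   fib_route_path_t leg = {
       .frp_sw_if_index = eth0_sw_if_index,
       .frp_addr = { .ip4.as_u32 = clib_host_to_net_u32(0x0A0A0A0A) },
       .frp_weight = 1,
   };
   vec_add1(leg.frp_label_stack, 201);            /* transport label */
   vec_add1(leg.frp_label_stack, 501);            /* VC label        */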

Hth,
neale


Re: [vpp-dev] A few questions regarding mcast fib

2017-06-29 Thread Nagaprabhanjan Bellari
Got it, thanks!


Re: [vpp-dev] A few questions regarding mcast fib

2017-07-04 Thread Nagaprabhanjan Bellari
Hi Neale,

I have added an MPLS local-label 501 with eos 1 that should look up in the
multicast table of table_id 3.

Basically, I am setting FIB_ENTRY_FLAG_MULTICAST while adding the MPLS
route. It is displayed as follows in "show mpls fib":
--
501:eos/21 fib:0 index:60 locks:2
  src:API  refs:1 flags:multicast,
index:77 locks:12 flags:shared, uPRF-list:61 len:0 itfs:[]
  index:121 pl-index:77 ipv4 weight=1 deag:  oper-flags:resolved,
   [@0]: dst-address,unicast lookup in ipv4-VRF:1

 forwarding:   mpls-eos-chain
  [@0]: dpo-replicate: [index:16 buckets:1 to:[0:0]]
[0] [@1]: mpls-disposition:[0]:[ip4]
[@1]: dst-address,unicast lookup in ipv4-VRF:1
--

The flags show multicast, but the destination-address lookup is in the unicast
table. Why is that? Am I missing something?

Thanks,
-nagp


Re: [vpp-dev] A few questions regarding mcast fib

2017-07-04 Thread Neale Ranns (nranns)
Hi,

The lookup DPO that the route's path links to needs to be created with:

   lookup_dpo_add_or_lock_w_fib_index(fib_index,
                                      DPO_PROTO_IP4,
                                      LOOKUP_MULTICAST,   <<<<
                                      Etc…);

This is done in fib_path_resolve if the path is flagged as RPF_ID, which yours 
is not…
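Sketched with the flag that triggers that – setting FIB_ROUTE_PATH_RPF_ID and 
frp_rpf_id on the MPLS route's path, the names nagp confirms in the next 
message; the other fields are illustrative:

   /* Sketch: label 501 (eos) pops and looks up in the ip4 mFIB of
    * table 3, tagging packets with RPF-ID 44 on the way.          */
   fib_route_path_t path = {
       .frp_proto = DPO_PROTO_IP4,                /* ip4 payload         */
       .frp_fib_index = 3,                        /* mcast lookup table  */
       .frp_rpf_id = 44,
       .frp_flags = FIB_ROUTE_PATH_RPF_ID,        /* -> LOOKUP_MULTICAST */
   };
   fib_table_entry_path_add2(mpls_fib_index, &pfx_501_eos, FIB_SOURCE_API,
                             FIB_ENTRY_FLAG_MULTICAST, &path);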

/neale


Re: [vpp-dev] A few questions regarding mcast fib

2017-07-06 Thread Nagaprabhanjan Bellari
Thanks for the reply, Neale. I see that we can set frp_rpf_id in the rpath to
point to the RPF-ID and set FIB_ROUTE_PATH_RPF_ID in frp_flags while adding a
label route that looks up the multicast table.

But how do I associate the same RPF-ID when doing an "ip mroute add"? Do I
have to set frp_rpf_id to the rpf_id, set MFIB_ITF_FLAG_ACCEPT and do a
mfib_entry_path_update? And then set frp_sw_if_index to the correct
interface and re-add the route with the MFIB_ITF_FLAG_FORWARD flag? Can you
please throw some light on this?

Thanks,
-nagp


Re: [vpp-dev] A few questions regarding mcast fib

2017-07-06 Thread Neale Ranns (nranns)
Hi nagp,

As the packets egress the LSP, they ‘acquire’ the RPF-ID specified in the MPLS 
route’s path. During the mFIB lookup this RPF-ID must match the route’s RPF-ID.

To update the mroute's RPF-ID do:

   mfib_table_entry_update(fib_index, &pfx,
                           MFIB_SOURCE_XXX,
                           rpf_id,
                           MFIB_ENTRY_FLAG_NONE);

If the mroute has an associated RPF-ID it does not need an accepting interface. 
In other words, the RPF-ID acts as the accepting interface.
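Putting nagp's two steps together, a sketch, assuming the mfib API names used 
elsewhere in this thread (signatures unverified): the RPF-ID replaces the 
Accept interface, and the Forward replication is added as a separate path.

   /* Sketch: mroute in VRF/table 3 whose RPF check is satisfied by
    * RPF-ID 44 (no Accept interface), plus one Forward replication. */
   mfib_table_entry_update(fib_index, &pfx,
                           MFIB_SOURCE_API,
                           44 /* rpf_id */,
                           MFIB_ENTRY_FLAG_NONE);
   fib_route_path_t fwd = {
       .frp_proto = DPO_PROTO_IP4,
       .frp_sw_if_index = eth0_sw_if_index,
       .frp_weight = 1,
   };
   mfib_table_entry_path_update(fib_index, &pfx, MFIB_SOURCE_API,
                                &fwd, MFIB_ITF_FLAG_FORWARD);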

Hth,
Neale



Re: [vpp-dev] A few questions regarding mcast fib

2017-07-06 Thread Nagaprabhanjan Bellari
Thanks much Neale! It worked.

One last question for the day :-)

an (S,G) route in VPP looks as follows:

--
(31.1.1.1, 239.2.1.1/32):
  Interfaces:
   mpls-tunnel0: Forward,
  RPF-ID:1
  multicast-ip4-chain
  [@1]: dpo-replicate: [index:19 buckets:1 to:[0:0]]
[0] [@2]: ipv4-mcast-midchain: DELETED:710449000
stacked-on:
  [@3]: dpo-replicate: [index:18 buckets:1 to:[0:0]]
[0] [@1]: mpls-label:[20]:[201:255:0:neos][501:255:0:eos]
[@1]: mpls via 20.1.1.2 twc-0/1/1.1:
01050004010181148847
--

The traffic is expected to go out on the tunnel with a couple of labels, and
that looks good, but I am not able to understand the presence of
"ipv4-mcast-midchain" in "DELETED" state – is that expected? What does it
convey?

Thanks,
-nagp

Re: [vpp-dev] A few questions regarding mcast fib

2017-07-06 Thread Neale Ranns (nranns)

Hi nagp,

Can’t be good if it’s in all caps ☺

It's coming from format_vnet_rewrite() and indicates that the rewrite has not 
correctly set the sw_if_index. That should be OK at forwarding time – please 
confirm – as the MPLS tunnel midchain has no bytes of rewrite and the packet 
will follow the stacked DPO chain. But for display purposes, if nothing else, 
it's something we should fix.

Thanks,
neale


Re: [vpp-dev] A few questions regarding mcast fib

2017-07-08 Thread Nagaprabhanjan Bellari
Hi Neale! Sorry for the late reply.

You are right, the DELETED flag does not seem to have any impact on
forwarding: the multicast packets get encapsulated and sent across fine.

I am not able to see where vnet_buffer(b0)->ip.rpf_id is assigned. The rpf_id
associated with the route does not match the incoming packet's
vnet_buffer(b0)->ip.rpf_id (which is always zero), and because of that the
packets are getting dropped. I have worked around it by setting the "accept
all interfaces" flag on the route for now, but I am sure that's not the right
way to do it.

Many thanks!
-nagp


Re: [vpp-dev] A few questions regarding mcast fib

2017-07-08 Thread Neale Ranns (nranns)
Hi nagp,

vnet_buffer(b0)->ip.rpf_id is set in mpls_label_disposition_inline.
Can you show me the MPLS route at the tail again: ‘sh mpls fib 501’

/neale
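
[Editor's note: a standalone model, not the VPP source, of what that
assignment amounts to. The mdd_rpf_id field name is taken from the
mpls_disp_dpo_t contents shown via gdb later in the thread; everything else is
illustrative scaffolding.]

--
#include <stdint.h>
#include <stdio.h>

/* Models the mdd_rpf_id field of VPP's mpls_disp_dpo_t (see the gdb
 * output later in this thread); the rest of the struct is elided. */
typedef struct
{
  uint32_t mdd_rpf_id;          /* RPF-ID configured on the LSP's path */
} mpls_disp_dpo_model_t;

/* Models vnet_buffer(b0)->ip.rpf_id. */
typedef struct
{
  uint32_t rpf_id;
} ip_buffer_meta_model_t;

/* The essence of what mpls_label_disposition_inline does with the
 * RPF-ID: tag the packet as it egresses the LSP, so the subsequent
 * mfib RPF check can compare this value against the mroute's RPF-ID. */
static void
mpls_disposition_tag (const mpls_disp_dpo_model_t * mdd,
                      ip_buffer_meta_model_t * meta)
{
  meta->rpf_id = mdd->mdd_rpf_id;
}

int
main (void)
{
  mpls_disp_dpo_model_t mdd = { .mdd_rpf_id = 44 };
  ip_buffer_meta_model_t meta = { .rpf_id = 0 };

  mpls_disposition_tag (&mdd, &meta);
  printf ("packet tagged with rpf_id %u\n", (unsigned) meta.rpf_id);
  return 0;
}
--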


Re: [vpp-dev] A few questions regarding mcast fib

2017-07-08 Thread Nagaprabhanjan Bellari
Hi Neale,

Here is the output of "show mpls fib 501" on the tail node (encapsulation 
happens at the head node, where rpf_id is set to 0, FYI):
--
501:eos/21 fib:0 index:61 locks:2
  src:API  refs:1 flags:attached,multicast,
index:78 locks:2 flags:shared, uPRF-list:62 len:0 itfs:[]
  index:122 pl-index:78 ipv4 weight=1 deag:  oper-flags:resolved,
cfg-flags:attached,rpf-id,
   [@0]: dst-address,multicast lookup in ipv4-VRF:1

 forwarding:   mpls-eos-chain
  [@0]: dpo-replicate: [index:16 buckets:1 to:[0:0]]
[0] [@1]: mpls-disposition:[0]:[ip4]
[@1]: dst-address,multicast lookup in ipv4-VRF:1
--

Would be glad to provide any other information.

Thanks,
-nagp


Re: [vpp-dev] A few questions regarding mcast fib

2017-07-08 Thread Neale Ranns (nranns)

Hi nagp,

We need to find out the value of the RPF-ID that's stored in the 
mpls-disposition DPO; it is not displayed in the output you pasted. So, two 
options:

1) We can tell from the output that it's index #0, so hook up gdb and do: 
'print mpls_disp_dpo_pool[0]'

2) Modify format_mpls_disp_dpo to also print mdd->mdd_rpf_id if it's 
non-zero. It would be nice if this patch were up-streamed ☺

Thanks
/neale
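
[Editor's note: a sketch of what the option-2 patch might look like. The
u8 *(u8 *s, va_list *args) shape is VPP's standard format-function
convention; the body below is reconstructed from the
"mpls-disposition:[0]:[ip4]" show output in this thread rather than copied
from the source tree, so verify against the real format_mpls_disp_dpo before
up-streaming.]

--
/* Sketch: also print the RPF-ID stored in the mpls-disposition DPO
 * when it is non-zero. Body reconstructed from the show output in
 * this thread, not copied from the source. */
u8 *
format_mpls_disp_dpo (u8 * s, va_list * args)
{
  index_t index = va_arg (*args, index_t);
  u32 indent = va_arg (*args, u32);
  mpls_disp_dpo_t *mdd;

  mdd = pool_elt_at_index (mpls_disp_dpo_pool, index);

  s = format (s, "mpls-disposition:[%d]:[%U]",
              index, format_dpo_proto, mdd->mdd_payload_proto);
  if (mdd->mdd_rpf_id != 0)
    s = format (s, " rpf-id:%d", mdd->mdd_rpf_id);
  s = format (s, "\n%U%U",
              format_white_space, indent,
              format_dpo_id, &mdd->mdd_dpo, indent + 2);
  return (s);
}
--

With that, the "rpf-id:N" suffix would show up in "show mpls fib" whenever a
disposition carries a non-zero RPF-ID.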



Re: [vpp-dev] A few questions regarding mcast fib

2017-07-08 Thread Nagaprabhanjan Bellari
Hi Neale,

Sure, I will push this and other multicast options in the CLIs shortly.
Meanwhile, here is the output from gdb:

--
(gdb) p mpls_disp_dpo_pool[0]
$1 = {mdd_dpo = {dpoi_type = 27, dpoi_proto = DPO_PROTO_IP4, dpoi_next_node
= 1, dpoi_index = 4}, mdd_payload_proto = DPO_PROTO_IP4,
  mdd_rpf_id = 0, mdd_locks = 1}
--

I am still not able to understand what the rpf_id on the tail node has to do
with the rpf_id assigned to an interface on the head node. :-\

Thanks,
-nagp

Re: [vpp-dev] A few questions regarding mcast fib

2017-07-09 Thread Neale Ranns (nranns)

Hi nagp,

The RPF-ID should be assigned to an LSP’s path, and to the mroute, at the tail 
node, not at the head.

IP multicast forwarding requires RPF checks to prevent loops, so at the tail we 
need to RPF-check - this is done against the interface on which the packet 
arrives. But in this case the only interface is the physical one. Now, 
technically we could use that physical interface to RPF against, but the 
physical belongs to the core ‘underlay’ and we are talking about sources, 
receivers and routes in the VPN ‘overlay’, and to mix the two would be 
unfortunate. So instead, the scheme is to use the LSP on which the packet 
arrives as the ‘interface’ against which to RPF. But in this case the LSP has 
no associated SW interface. We have two choices then: 1) create a SW interface, 
which would not scale too well, or 2) pretend we have one and call it an 
RPF-ID. So at the tail, as packets egress the LSP, they are tagged as having 
ingressed with the RPF-ID, and that is checked in the subsequent mfib 
forwarding.

The RPF-ID value is assigned and used only at the tail, so no head-to-tail 
signalling thereof is required.

Hth,
neale
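
[Editor's note: to make the tail-end programming concrete, a hedged C sketch
assembling the calls this thread has named. mvpn_program_tail is a
hypothetical helper, MFIB_SOURCE_API stands in for the MFIB_SOURCE_XXX
mentioned earlier, and the (S,G) and RPF-ID values are examples. Neale's
earlier mfib_table_entry_update snippet lost an argument to the mail archive
(the angle-bracketed text was eaten); it is assumed here to be the RPF-ID.]

--
#include <vnet/fib/fib_types.h>
#include <vnet/mfib/mfib_table.h>

/* Hypothetical helper: program the tail for one (S,G) arriving
 * over one LSP. */
static void
mvpn_program_tail (u32 ip4_mfib_index, u32 rpf_id)
{
  /* 1. The LSP's path: pop and IP4-multicast lookup in the VRF,
   *    tagged with the RPF-ID. FIB_ROUTE_PATH_RPF_ID is what makes
   *    fib_path_resolve build the LOOKUP_MULTICAST lookup DPO and an
   *    mpls-disposition DPO carrying this rpf_id. */
  fib_route_path_t path = {
    .frp_fib_index = ip4_mfib_index,
    .frp_rpf_id = rpf_id,
    .frp_flags = FIB_ROUTE_PATH_RPF_ID,
  };
  /* ... add the MPLS local-label route (e.g. 501, eos) with this
   * path, flagged FIB_ENTRY_FLAG_MULTICAST, via the
   * fib_table_entry_path_add* family, as discussed earlier ... */
  (void) path;

  /* 2. The mroute in the same VRF carries the same RPF-ID, so the
   *    check in mfib_forward_rpf passes for packets egressing the
   *    LSP. */
  mfib_prefix_t pfx = {
    .fp_proto = FIB_PROTOCOL_IP4,
    .fp_len = 64,               /* (S,G): 32 source + 32 group bits */
    .fp_grp_addr = { .ip4.as_u32 = clib_host_to_net_u32 (0xe5010102) },
    .fp_src_addr = { .ip4.as_u32 = clib_host_to_net_u32 (0x01020304) },
  };                            /* (1.2.3.4, 229.1.1.2) */
  mfib_table_entry_update (ip4_mfib_index, &pfx, MFIB_SOURCE_API,
                           rpf_id, MFIB_ENTRY_FLAG_NONE);
}
--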



Re: [vpp-dev] A few questions regarding mcast fib

2017-07-09 Thread Nagaprabhanjan Bellari
Hi Neale,

In my case, this is what is happening: IP MC packets arrive on a normal
interface and go from ip_input_inline to mfib_forward_lookup to
mfib_forward_rpf, where they are dropped because of the rpf_id mismatch.

The mpls disposition node does not even figure in the path - what could I be
missing?

-nagp
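
[Editor's note: a standalone model, not the VPP source, of the decision
mfib_forward_rpf is effectively making, reconstructed from this thread. It
reproduces the drop described above, where the entry's RPF-ID is 1 but an IP
packet arriving on a plain interface was never tagged.]

--
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Models MFIB_ITF_FLAG_ACCEPT and the entry-level accept-all ("AA")
 * flag mentioned in this thread. */
enum { ITF_FLAG_ACCEPT = 1 << 0 };
enum { ENTRY_FLAG_ACCEPT_ALL_ITF = 1 << 0 };

typedef struct
{
  uint32_t rpf_id;              /* the mroute's "RPF-ID:N" */
  uint32_t flags;
} mfib_entry_model_t;

/* A packet passes RPF if its ingress interface is an accepting one,
 * or the entry accepts all interfaces, or the rpf_id the packet was
 * tagged with (by the MPLS disposition at a tail) matches the
 * entry's. */
static bool
rpf_check (const mfib_entry_model_t * e, uint32_t itf_flags,
           uint32_t buffer_rpf_id /* vnet_buffer(b0)->ip.rpf_id */ )
{
  return (e->flags & ENTRY_FLAG_ACCEPT_ALL_ITF)
    || (itf_flags & ITF_FLAG_ACCEPT)
    || (e->rpf_id != 0 && e->rpf_id == buffer_rpf_id);
}

int
main (void)
{
  /* The failing case: entry RPF-ID 1, ingress itf not accepting,
   * buffer rpf_id 0 (IP packet, no disposition node in its path). */
  mfib_entry_model_t e = { .rpf_id = 1, .flags = 0 };
  printf ("head-end IP packet: %s\n",
          rpf_check (&e, 0, 0) ? "pass" : "drop");
  return 0;
}
--

Compiled standalone, this prints "head-end IP packet: drop", matching the
behaviour described above.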


Re: [vpp-dev] A few questions regarding mcast fib

2017-07-09 Thread Neale Ranns (nranns)

Hi nagp,

If the packets arrive as IP on an IP interface, then this is not the use-case
for RPF-ID.
Show me the mfib route and tell me the ingress interface.

/neale


Re: [vpp-dev] A few questions regarding mcast fib

2017-07-09 Thread Nagaprabhanjan Bellari
Hi Neale,

Please find the output of "show ip mfib table 1" below:

--
DBGvpp# show ip mfib table 1
ipv4-VRF:1, fib_index 1
(*, 0.0.0.0/0):  flags:D,
  Interfaces:
  RPF-ID:0
  multicast-ip4-chain
  [@0]: dpo-drop ip4
(1.2.3.4, 229.1.1.2/32):  flags:AA,
  Interfaces:
   mpls-tunnel1: Forward,
  RPF-ID:1
  multicast-ip4-chain
  [@1]: dpo-replicate: [index:20 buckets:1 to:[15:1260]]
[0] [@2]: ipv4-mcast-midchain: DELETED:578217307
stacked-on:
  [@3]: dpo-replicate: [index:17 buckets:1 to:[15:1260]]
[0] [@1]: mpls-label:[3]:[201:255:0:neos][501:255:0:eos]
[@1]: mpls via 20.1.1.2 FortyGigabitEthernet-6/0/0.1:
01050004010181148847
--

Ignore the "AA" flag for now, because that is my work around, apart from
this, I have rpf-id set as 1 for this route. The ingress interface is a
normal sub interface - which is FortyGigabitEthernet-6/0/1.1 (different
from the above) with a vlan tag.

When a packet is received on FortyGigabitEthernet-6/0/1.1 having the above
(S, G), I see that from mfib_forward_lookup, packets unconditionally go to
mfib_forward_rpf and that's where I am seeing an issue with rpf_id. Please
let me know if I am missing something on FortyGigabitEthernet-6/0/1.1.

Thanks,
-nagp


Re: [vpp-dev] A few questions regarding mcast fib

2017-07-09 Thread Neale Ranns (nranns)

Hi nagp,

This is the head end, so there should not be an RPF-ID associated with the mfib 
entry. Instead, there should be an accepting path for your phy interface, i.e.:

   ip mroute table 1 add 1.2.3.4 229.1.1.2 via FortyGigabitEthernet-6/0/1.1 Accept

regards,
neale
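
[Editor's note: the C equivalent of that CLI, as a hedged sketch using the
mfib path-update call mentioned earlier in the thread. mvpn_program_head is a
hypothetical helper; sw_if_index would be FortyGigabitEthernet-6/0/1.1's
index, and the forwarding replication via the mpls-tunnel is added the same
way with MFIB_ITF_FLAG_FORWARD.]

--
#include <vnet/mfib/mfib_table.h>

/* Hypothetical helper: head-end accepting (RPF) path for one (S,G);
 * note there is no RPF-ID at the head. */
static void
mvpn_program_head (u32 ip4_mfib_index, u32 sw_if_index)
{
  mfib_prefix_t pfx = {
    .fp_proto = FIB_PROTOCOL_IP4,
    .fp_len = 64,
    .fp_grp_addr = { .ip4.as_u32 = clib_host_to_net_u32 (0xe5010102) },
    .fp_src_addr = { .ip4.as_u32 = clib_host_to_net_u32 (0x01020304) },
  };                            /* (1.2.3.4, 229.1.1.2) */
  fib_route_path_t via_phy = {
    .frp_sw_if_index = sw_if_index,     /* the ingress IP sub-interface */
  };

  /* Accepting path on the ingress interface: this is what the RPF
   * check passes against for IP packets arriving at the head. */
  mfib_table_entry_path_update (ip4_mfib_index, &pfx, MFIB_SOURCE_API,
                                &via_phy, MFIB_ITF_FLAG_ACCEPT);
}
--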


Re: [vpp-dev] A few questions regarding mcast fib

2017-07-09 Thread Nagaprabhanjan Bellari
Ok, so:

An mroute should be associated with both an RPF-ID and an accept interface.
For head-end ingress IP multicast traffic, the accept interface is used as
the check; for tail-end MPLS-terminating traffic, the rpf_id associated with
the mroute should match the rpf_id associated with the label.

Is this correct?

Thanks,
-nagp


Re: [vpp-dev] A few questions regarding mcast fib

2017-07-10 Thread Neale Ranns (nranns)

Yes. And specifically: at the head the mroute has only an accepting interface; 
at the tail the mroute has only an RPF-ID.

/neale
