From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> on behalf of 
haiyan...@ilinkall.cn via lists.fd.io <haiyan.li=ilinkall...@lists.fd.io>
Date: Tuesday, 26 July 2022 at 12:30
To: vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] questions about fib
Hi Neale,

i see "stacked-on" from "show ip fib" command

We have two default routes in VPP, but only one of them is correct (it can reach the internet). Sometimes the incorrect one gets configured into VPP first, which leads to some routes resolving incorrectly.

Detailed VPP config below; it may be a little different from the previous email. (A rough sketch of the equivalent CLI commands follows the show output.)

vpp# show gre tunnel
[0] instance 2000 src 198.18.0.207 dst 198.18.0.205 fib-idx 0 sw-if-idx 2 
payload L3
[1] instance 2001 src 198.18.0.207 dst 198.18.0.203 fib-idx 0 sw-if-idx 3 
payload L3
[2] instance 2002 src 198.18.0.207 dst 198.18.0.206 fib-idx 0 sw-if-idx 4 
payload L3
vpp# show interface addr
G0 (up):
  L3 10.120.0.230/24
gre2000 (up):
  L3 10.10.20.2/30
gre2001 (up):
  L3 10.10.22.2/30
gre2002 (up):
  L3 10.10.25.1/30
local0 (dn):
loop0 (up):
  L3 10.1.1.1/32
loop2003 (up):
  L3 198.18.0.207/32
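
For reference, the tunnels and addresses above correspond roughly to the following CLI (a sketch reconstructed from the show output; the exact create-tunnel syntax, in particular the instance keyword, may differ between VPP versions):

create gre tunnel src 198.18.0.207 dst 198.18.0.205 instance 2000
set interface ip address gre2000 10.10.20.2/30
set interface state gre2000 up

(and similarly for gre2001 and gre2002)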


after step 1 : ip route add 0.0.0.0/0 via 10.10.20.1 weight 1 preference 20
vpp# show ip fib
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] 
locks:[src:plugin-hi:2, src:adjacency:2, src:recursive-resolution:1, 
src:default-route:1, src:(nil):1, ]
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:58 to:[399:140930] 
via:[211:9744]]
    [0] [@13]: dpo-load-balance: [proto:ip4 index:44 buckets:1 uRPF:56 
to:[100:9600] via:[866:180194]]
          [0] [@6]: ipv4 [features] via 0.0.0.0 gre2000: mtu:9000 
4500000000000000fe2f2f0ec61200cfc61200cd00000800
              stacked-on entry:12:
                [@2]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:2 buckets:1 uRPF:1 to:[0:0]]
    [0] [@0]: dpo-drop ip4
10.1.1.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:28 buckets:1 uRPF:32 to:[0:0]]
    [0] [@2]: dpo-receive: 10.1.1.1 on loop0
10.10.20.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:15 buckets:1 uRPF:15 to:[0:0]]
    [0] [@0]: dpo-drop ip4
10.10.20.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:44 buckets:1 uRPF:56 to:[100:9600] 
via:[866:180194]]
    [0] [@6]: ipv4 [features] via 0.0.0.0 gre2000: mtu:9000 
4500000000000000fe2f2f0ec61200cfc61200cd00000800
        stacked-on entry:12:
          [@2]: dpo-drop ip4
10.10.20.0/30
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:14 buckets:1 uRPF:14 to:[0:0]]
    [0] [@6]: ipv4 [features] via 0.0.0.0 gre2000: mtu:9000 
4500000000000000fe2f2f0ec61200cfc61200cd00000800
        stacked-on entry:12:
          [@2]: dpo-drop ip4
10.10.20.2/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:17 buckets:1 uRPF:19 to:[0:0]]
    [0] [@2]: dpo-receive: 10.10.20.2 on gre2000
10.10.20.3/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:16 buckets:1 uRPF:17 to:[0:0]]
    [0] [@0]: dpo-drop ip4
10.10.22.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:20 buckets:1 uRPF:21 to:[0:0]]
    [0] [@0]: dpo-drop ip4
10.10.22.0/30
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:19 buckets:1 uRPF:20 to:[100:9600]]
    [0] [@6]: ipv4 [features] via 0.0.0.0 gre2001: mtu:9000 
4500000000000000fe2f2f10c61200cfc61200cb00000800
        stacked-on entry:17:
          [@4]: dpo-load-balance: [proto:ip4 index:44 buckets:1 uRPF:56 
to:[100:9600] via:[866:180194]]
            [0] [@6]: ipv4 [features] via 0.0.0.0 gre2000: mtu:9000 
4500000000000000fe2f2f0ec61200cfc61200cd00000800
                stacked-on entry:12:
                  [@2]: dpo-drop ip4
10.10.22.2/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:22 buckets:1 uRPF:25 to:[0:0]]
    [0] [@2]: dpo-receive: 10.10.22.2 on gre2001
10.10.22.3/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:21 buckets:1 uRPF:23 to:[0:0]]
    [0] [@0]: dpo-drop ip4
10.10.25.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:25 buckets:1 uRPF:27 to:[0:0]]
    [0] [@0]: dpo-drop ip4
10.10.25.0/30
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:24 buckets:1 uRPF:26 to:[100:9600]]
    [0] [@6]: ipv4 [features] via 0.0.0.0 gre2002: mtu:9000 
4500000000000000fe2f2f0dc61200cfc61200ce00000800
        stacked-on entry:22:
          [@4]: dpo-load-balance: [proto:ip4 index:44 buckets:1 uRPF:56 
to:[100:9600] via:[866:180194]]
            [0] [@6]: ipv4 [features] via 0.0.0.0 gre2000: mtu:9000 
4500000000000000fe2f2f0ec61200cfc61200cd00000800
                stacked-on entry:12:
                  [@2]: dpo-drop ip4
10.10.25.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:27 buckets:1 uRPF:31 to:[0:0]]
    [0] [@2]: dpo-receive: 10.10.25.1 on gre2002
10.10.25.3/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:26 buckets:1 uRPF:29 to:[0:0]]
    [0] [@0]: dpo-drop ip4
10.120.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:10 buckets:1 uRPF:9 to:[0:0]]
    [0] [@0]: dpo-drop ip4
10.120.0.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:9 buckets:1 uRPF:8 to:[0:0]]
    [0] [@4]: ipv4-glean: G0: mtu:9000 ffffffffffff00163e002e8f0806
10.120.0.230/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:12 buckets:1 uRPF:13 to:[354:69804]]
    [0] [@2]: dpo-receive: 10.120.0.230 on G0
10.120.0.253/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:48 buckets:1 uRPF:62 to:[0:0]]
    [0] [@5]: ipv4 via 10.120.0.253 G0: mtu:9000 eeffffffffff00163e002e8f0800
10.120.0.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:11 buckets:1 uRPF:11 to:[0:0]]
    [0] [@0]: dpo-drop ip4
198.18.0.203/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:18 buckets:1 uRPF:58 to:[0:0]]
    [0] [@13]: dpo-load-balance: [proto:ip4 index:44 buckets:1 uRPF:56 
to:[100:9600] via:[866:180194]]
          [0] [@6]: ipv4 [features] via 0.0.0.0 gre2000: mtu:9000 
4500000000000000fe2f2f0ec61200cfc61200cd00000800
              stacked-on entry:12:
                [@2]: dpo-drop ip4
198.18.0.205/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:13 buckets:1 uRPF:59 to:[0:0]]
    [0] [@0]: dpo-drop ip4
198.18.0.206/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:23 buckets:1 uRPF:58 to:[0:0]]
    [0] [@13]: dpo-load-balance: [proto:ip4 index:44 buckets:1 uRPF:56 
to:[100:9600] via:[866:180194]]
          [0] [@6]: ipv4 [features] via 0.0.0.0 gre2000: mtu:9000 
4500000000000000fe2f2f0ec61200cfc61200cd00000800
              stacked-on entry:12:
                [@2]: dpo-drop ip4
198.18.0.207/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:29 buckets:1 uRPF:33 to:[0:0]]
    [0] [@2]: dpo-receive: 198.18.0.207 on loop2003

vpp# show fib entry
FIB Entries:
0@0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:58 to:[426:150250] 
via:[225:10388]]
    [0] [@13]: dpo-load-balance: [proto:ip4 index:44 buckets:1 uRPF:56 
to:[100:9600] via:[911:190638]]
          [0] [@6]: ipv4 [features] via 0.0.0.0 gre2000: mtu:9000 
4500000000000000fe2f2f0ec61200cfc61200cd00000800
              stacked-on entry:12:
                [@2]: dpo-drop ip4
1@0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:2 buckets:1 uRPF:1 to:[0:0]]
    [0] [@0]: dpo-drop ip4
2@240.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:3 buckets:1 uRPF:2 to:[0:0]]
    [0] [@0]: dpo-drop ip4
3@224.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:4 buckets:1 uRPF:3 to:[0:0]]
    [0] [@0]: dpo-drop ip4
4@255.255.255.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:5 buckets:1 uRPF:4 to:[0:0]]
    [0] [@0]: dpo-drop ip4
5@::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [proto:ip6 index:6 buckets:1 uRPF:5 to:[0:0]]
    [0] [@0]: dpo-drop ip6
6@fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [proto:ip6 index:7 buckets:1 uRPF:6 to:[0:0]]
    [0] [@14]: ip6-link-local
7@10.120.0.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:9 buckets:1 uRPF:8 to:[0:0]]
    [0] [@4]: ipv4-glean: G0: mtu:9000 ffffffffffff00163e002e8f0806
8@10.120.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:10 buckets:1 uRPF:9 to:[0:0]]
    [0] [@0]: dpo-drop ip4
9@10.120.0.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:11 buckets:1 uRPF:11 to:[0:0]]
    [0] [@0]: dpo-drop ip4
10@10.120.0.230/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:12 buckets:1 uRPF:13 to:[378:74648]]
    [0] [@2]: dpo-receive: 10.120.0.230 on G0
11@10.10.20.0/30
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:14 buckets:1 uRPF:14 to:[0:0]]
    [0] [@6]: ipv4 [features] via 0.0.0.0 gre2000: mtu:9000 
4500000000000000fe2f2f0ec61200cfc61200cd00000800
        stacked-on entry:12:
          [@2]: dpo-drop ip4
12@198.18.0.205/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:13 buckets:1 uRPF:59 to:[0:0]]
    [0] [@0]: dpo-drop ip4


after step 3:  ip route add 0.0.0.0/0 via 10.120.0.253 weight 1 preference 0
vpp# show ip fib 0.0.0.0/0
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] 
locks:[src:plugin-hi:2, src:adjacency:2, src:recursive-resolution:3, 
src:default-route:1, src:(nil):1, ]
0.0.0.0/0 fib:0 index:0 locks:3
  src:API refs:1 src-flags:added,contributing,active,
    path-list:[62] locks:8 flags:shared, uPRF-list:65 len:2 itfs:[1, 2, ]
      path:[87] pl-index:62 ip4 weight=1 pref=0 recursive:  oper-flags:resolved,
        via 10.120.0.253 in fib:0 via-fib:46 via-dpo:[dpo-load-balance:48]
      path:[86] pl-index:62 ip4 weight=1 pref=20 recursive:  
oper-flags:resolved,
        via 10.10.20.1 in fib:0 via-fib:42 via-dpo:[dpo-load-balance:44]

  src:default-route refs:1 entry-flags:drop, src-flags:added,
    path-list:[0] locks:1 flags:drop, uPRF-list:0 len:0 itfs:[]
      path:[0] pl-index:0 ip4 weight=1 pref=0 special:  cfg-flags:drop,
        [@0]: dpo-drop ip4

 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:65 to:[518:183020] 
via:[285:13148]]
    [0] [@13]: dpo-load-balance: [proto:ip4 index:48 buckets:1 uRPF:62 to:[0:0] 
via:[30:5268]]
          [0] [@5]: ipv4 via 10.120.0.253 G0: mtu:9000 
eeffffffffff00163e002e8f0800

At this point the default route is correct: the higher-priority path is now contributing to forwarding. But the other entries (the ones marked in red in my original mail, shown below) still use the wrong default route.
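
To see which path one of those entries actually resolves through, the same per-prefix lookup used above for the default route works, e.g. with 198.18.0.203/32 (one of the affected prefixes):

vpp# show ip fib 198.18.0.203/32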

vpp# show ip fib

10.10.22.0/30
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:19 buckets:1 uRPF:20 to:[140:13440] 
via:[10:960]]
    [0] [@6]: ipv4 [features] via 0.0.0.0 gre2001: mtu:9000 
4500000000000000fe2f2f10c61200cfc61200cb00000800
        stacked-on entry:17:
          [@5]: arp-ipv4: via 10.10.20.1 G0
10.10.25.0/30
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:24 buckets:1 uRPF:26 to:[140:13440] 
via:[10:960]]
    [0] [@6]: ipv4 [features] via 0.0.0.0 gre2002: mtu:9000 
4500000000000000fe2f2f0dc61200cfc61200ce00000800
        stacked-on entry:22:
          [@5]: arp-ipv4: via 10.10.20.1 G0


These two above are the subnets configured on the tunnels; they will always resolve through the tunnel (the default route does not affect this).
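
They come from the interface addressing shown earlier, i.e. presumably something like

  set interface ip address gre2001 10.10.22.2/30
  set interface ip address gre2002 10.10.25.1/30

so they are connected routes on the tunnels themselves and do not depend on the default route.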


198.18.0.203/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:18 buckets:1 uRPF:58 to:[0:0]]
    [0] [@3]: arp-ipv4: via 10.10.20.1 G0
198.18.0.205/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:13 buckets:1 uRPF:58 to:[0:0]]
    [0] [@3]: arp-ipv4: via 10.10.20.1 G0
198.18.0.206/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:23 buckets:1 uRPF:58 to:[0:0]]
    [0] [@3]: arp-ipv4: via 10.10.20.1 G0


These are the routes used to reach the tunnel endpoints ….

198.18.0.207/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:29 buckets:1 uRPF:33 to:[38:4560]]
    [0] [@2]: dpo-receive: 198.18.0.207 on loop2003

vpp# show fib entry 17
17@198.18.0.203/32 fib:0 index:17 locks:4
  src:API refs:1 src-flags:added,contributing,active,

… which you have explicitly programmed to be ….

    path-list:[63] locks:6 flags:shared, uPRF-list:58 len:1 itfs:[1, ]
      path:[88] pl-index:63 ip4 weight=1 pref=0 attached-nexthop:  
oper-flags:resolved,
        10.10.20.1 G0
      [@0]: arp-ipv4: via 10.10.20.1 G0

… via G0, although with the wrong next-hop.
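
Judging by the attached-nexthop path above, that entry was presumably programmed with something like

  ip route add 198.18.0.203/32 via 10.10.20.1 G0

i.e. a next-hop of 10.10.20.1 on interface G0.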

  src:recursive-resolution refs:1 src-flags:added, cover:-1

 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:18 buckets:1 uRPF:58 to:[0:0]]
    [0] [@3]: arp-ipv4: via 10.10.20.1 G0
 Delegates:
  track: sibling:29
  Children:{adj:11}{adj:2}
 Children:{fib-entry-track:4}
vpp# show fib en
entry           entry-delegate
vpp# show fib entry 22
22@198.18.0.206/32 fib:0 index:22 locks:4
  src:API refs:1 src-flags:added,contributing,active,
    path-list:[63] locks:6 flags:shared, uPRF-list:58 len:1 itfs:[1, ]
      path:[88] pl-index:63 ip4 weight=1 pref=0 attached-nexthop:  
oper-flags:resolved,
        10.10.20.1 G0
      [@0]: arp-ipv4: via 10.10.20.1 G0

…. And here too.

  src:recursive-resolution refs:1 src-flags:added, cover:-1

 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:23 buckets:1 uRPF:58 to:[0:0]]
    [0] [@3]: arp-ipv4: via 10.10.20.1 G0
 Delegates:
  track: sibling:39
  Children:{adj:14}{adj:3}
 Children:{fib-entry-track:6}
vpp# show ip fib 0.0.0.0/0
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] 
locks:[src:plugin-hi:2, src:adjacency:2, src:recursive-resolution:3, 
src:default-route:1, src:(nil):1, ]
0.0.0.0/0 fib:0 index:0 locks:3
  src:API refs:1 src-flags:added,contributing,active,
    path-list:[62] locks:8 flags:shared, uPRF-list:65 len:2 itfs:[1, 2, ]
      path:[87] pl-index:62 ip4 weight=1 pref=0 recursive:  oper-flags:resolved,
        via 10.120.0.253 in fib:0 via-fib:46 via-dpo:[dpo-load-balance:48]
      path:[86] pl-index:62 ip4 weight=1 pref=20 recursive:  
oper-flags:resolved,
        via 10.10.20.1 in fib:0 via-fib:42 via-dpo:[dpo-load-balance:44]

This looks fine.

/neale

  src:default-route refs:1 entry-flags:drop, src-flags:added,
    path-list:[0] locks:1 flags:drop, uPRF-list:0 len:0 itfs:[]
      path:[0] pl-index:0 ip4 weight=1 pref=0 special:  cfg-flags:drop,
        [@0]: dpo-drop ip4

 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:65 to:[524:183432] 
via:[1737:80008]]
    [0] [@13]: dpo-load-balance: [proto:ip4 index:48 buckets:1 uRPF:62 to:[0:0] 
via:[1816:84791]]
          [0] [@5]: ipv4 via 10.120.0.253 G0: mtu:9000 
eeffffffffff00163e002e8f0800



From: Neale Ranns <ne...@graphiant.com>
Date: 2022-07-26 06:02
To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] questions about fib

Hi,

Before I answer, can you please elaborate on what you mean by ‘stacked on’? Can you also give a config example?

/neale

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> on behalf of 
haiyan...@ilinkall.cn via lists.fd.io <haiyan.li=ilinkall...@lists.fd.io>
Date: Monday, 25 July 2022 at 11:12
To: vpp-dev <vpp-dev@lists.fd.io>
Subject: [vpp-dev] questions about fib
Hello vpp-dev,

1. First we add a default route using the API, like this:
ip route add 0.0.0.0/0 via 10.121.20.1 weight 1 preference 20

2. Then add some other routes, which will be stacked on the default route above.

3. Add another default route using the API with the same weight but a higher priority (lower preference value), like this:
 ip route add 0.0.0.0/0 via 10.120.0.253 weight 1 preference 0

My question is: when the third step completes, will the routes added by step 2 be restacked on the new default route added by step 3?

Actually, the default route added by step 3 is the correct one.

I think the step 3 route has higher priority than the step 1 route; when it's added, the update should be back-walked to all its children. Am I right, or did I miss something?
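
For reference, I suppose one way to verify this after step 3 would be to look at one of the step-2 routes (substituting one of those prefixes for <step-2 prefix> below) and check whether its via-fib/via-dpo now points at the path contributed by the new default route:

vpp# show ip fib <step-2 prefix>
vpp# show ip fib 0.0.0.0/0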

Thanks very much.

