Hi Cipher,

Thank you for the bug report. I was able to reproduce it once I had analysed
the curiosities in your config.

Here’s the Jira bug I created:
  https://jira.fd.io/browse/VPP-1803
and the patch:
  https://gerrit.fd.io/r/c/vpp/+/23645

Your setup is unusual because you have the same address marked as for-us:
  127.0.0.11/32
    unicast-ip4-chain
      [@0]: dpo-load-balance: [proto:ip4 index:35 buckets:1 uRPF:40 to:[0:0]]
        [0] [@2]: dpo-receive: 127.0.0.11 on loop11

and also reachable as an ARP neighbour through the same interface:
  # vppctl show ip arp
      Time           IP4       Flags      Ethernet              Interface
      …
      4.7627   127.0.0.11      D    52:54:7f:00:00:0b loop11

This is what caused the FIB to crash when a covering default route was added.
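
For the record, the config I used to reproduce it was along these lines. It is
reconstructed from your show output (and ignores the bridge-domain/BVI side of
your setup), so treat it as a sketch; on 19.08 the neighbour is added with
'set ip arp', newer releases use 'set ip neighbor':

  # vppctl ip table add 1
  # vppctl create loopback interface instance 11
  # vppctl set interface ip table loop11 1
  # vppctl set interface state loop11 up
  # vppctl set interface ip address loop11 127.0.0.11/32
  # vppctl set ip arp loop11 127.0.0.11 52:54:7f:00:00:0b
  # vppctl ip route add 0.0.0.0/0 table 1 via 127.0.0.11 loop11

The last command is the one that trips the assert in
fib_entry_src_adj_cover_update() shown in your backtrace.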

I’ve used this same trick before too:
  # vppctl ip route add 0.0.0.0/1 table 1 via 127.0.0.11 loop11
  # vppctl ip route add 1.0.0.0/1 table 1 via 127.0.0.11 loop11
and made the same mistake; you meant:
  # vppctl ip route add 0.0.0.0/1 table 1 via 127.0.0.11 loop11
  # vppctl ip route add 128.0.0.0/1 table 1 via 127.0.0.11 loop11
(1.0.0.0/1 masks down to the same prefix as 0.0.0.0/1, so the upper half of
the address space, 128.0.0.0/1, is never covered.)
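
A quick way to sanity-check which half of the address space a given lookup
hits is something like (the addresses here are just examples):

  # vppctl show ip fib table 1 8.8.8.8
  # vppctl show ip fib table 1 192.0.2.1

With only 0.0.0.0/1 and 1.0.0.0/1 installed, the second lookup falls through
to the table's 0.0.0.0/0 drop entry instead of the loop11 path.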

/neale


From: <vpp-dev@lists.fd.io> on behalf of Cipher Chen <cipher.chen2...@gmail.com>
Date: Tuesday 26 November 2019 at 11:59
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: [vpp-dev] vpp crash while configuring route #vpp

Hi vpp devs,

VPP crashed after I tried to configure a default route.

# vppctl ip route add 0.0.0.0/0 table 1 via 127.0.0.11 loop11

The stack trace is here:

Thread 1 "vpp_main" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
51    ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1  0x00007ffff5ab2801 in __GI_abort () at abort.c:79
#2  0x000055555555c05d in os_panic () at /root/ws/vpp/src/vpp/vnet/main.c:355
#3  0x00007ffff5e96d6d in debugger () at /root/ws/vpp/src/vppinfra/error.c:84
#4  0x00007ffff5e9713c in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x7ffff775a408 "%s:%d (%s) assertion `%s' fails") at 
/root/ws/vpp/src/vppinfra/error.c:143
#5  0x00007ffff751aaa6 in fib_entry_src_adj_cover_update (src=0x7fffb5eca278, 
fib_entry=0x7fffb5e9ef70) at /root/ws/vpp/src/vnet/fib/fib_entry_src_adj.c:381
#6  0x00007ffff75146f7 in fib_entry_src_action_cover_update 
(fib_entry=0x7fffb5e9ef70, esrc=0x7fffb5eca278) at 
/root/ws/vpp/src/vnet/fib/fib_entry_src.c:199
#7  0x00007ffff750d508 in fib_entry_cover_updated (fib_entry_index=33) at 
/root/ws/vpp/src/vnet/fib/fib_entry.c:1362
#8  0x00007ffff751bab1 in fib_entry_cover_update_one (cover=0x7fffb5e9eb38, 
covered=33, args=0x0) at /root/ws/vpp/src/vnet/fib/fib_entry_cover.c:169
#9  0x00007ffff751b96f in fib_entry_cover_walk_node_ptr (depend=0x7fffb5dab794, 
args=0x7fffb5e78780) at /root/ws/vpp/src/vnet/fib/fib_entry_cover.c:81
#10 0x00007ffff75099e0 in fib_node_list_walk (list=38, fn=0x7ffff751b934 
<fib_entry_cover_walk_node_ptr>, args=0x7fffb5e78780) at 
/root/ws/vpp/src/vnet/fib/fib_node_list.c:375
#11 0x00007ffff751b9d7 in fib_entry_cover_walk (cover=0x7fffb5e9eb38, 
walk=0x7ffff751ba94 <fib_entry_cover_update_one>, args=0x0) at 
/root/ws/vpp/src/vnet/fib/fib_entry_cover.c:105
#12 0x00007ffff751badc in fib_entry_cover_update_notify 
(fib_entry=0x7fffb5e9eb38) at /root/ws/vpp/src/vnet/fib/fib_entry_cover.c:178
#13 0x00007ffff750c5be in fib_entry_post_update_actions 
(fib_entry=0x7fffb5e9eb38, source=FIB_SOURCE_CLI, 
old_flags=FIB_ENTRY_FLAG_DROP) at /root/ws/vpp/src/vnet/fib/fib_entry.c:809
#14 0x00007ffff750c6b2 in fib_entry_source_change_w_flags 
(fib_entry=0x7fffb5e9eb38, old_source=FIB_SOURCE_DEFAULT_ROUTE, 
old_flags=FIB_ENTRY_FLAG_DROP, new_source=FIB_SOURCE_CLI) at 
/root/ws/vpp/src/vnet/fib/fib_entry.c:863
#15 0x00007ffff750c702 in fib_entry_source_change (fib_entry=0x7fffb5e9eb38, 
old_source=FIB_SOURCE_DEFAULT_ROUTE, new_source=FIB_SOURCE_CLI) at 
/root/ws/vpp/src/vnet/fib/fib_entry.c:876
#16 0x00007ffff750c9db in fib_entry_path_add (fib_entry_index=18, 
source=FIB_SOURCE_CLI, flags=FIB_ENTRY_FLAG_NONE, rpaths=0x7fffb5ecd580) at 
/root/ws/vpp/src/vnet/fib/fib_entry.c:934
#17 0x00007ffff74f9100 in fib_table_entry_path_add2 (fib_index=1, 
prefix=0x7fffb5e78a70, source=FIB_SOURCE_CLI, flags=FIB_ENTRY_FLAG_NONE, 
rpaths=0x7fffb5ecd580) at /root/ws/vpp/src/vnet/fib/fib_table.c:599
#18 0x00007ffff6fc2f4a in vnet_ip_route_cmd (vm=0x7ffff66b8dc0 
<vlib_global_main>, main_input=0x7fffb5e78f00, cmd=0x7fffb5b081f0) at 
/root/ws/vpp/src/vnet/ip/lookup.c:471
#19 0x00007ffff63c85aa in vlib_cli_dispatch_sub_commands (vm=0x7ffff66b8dc0 
<vlib_global_main>, cm=0x7ffff66b8ff0 <vlib_global_main+560>, 
input=0x7fffb5e78f00, parent_command_index=395) at 
/root/ws/vpp/src/vlib/cli.c:645
#20 0x00007ffff63c843f in vlib_cli_dispatch_sub_commands (vm=0x7ffff66b8dc0 
<vlib_global_main>, cm=0x7ffff66b8ff0 <vlib_global_main+560>, 
input=0x7fffb5e78f00, parent_command_index=0) at /root/ws/vpp/src/vlib/cli.c:606
#21 0x00007ffff63c8a6f in vlib_cli_input (vm=0x7ffff66b8dc0 <vlib_global_main>, 
input=0x7fffb5e78f00, function=0x7ffff647066e <unix_vlib_cli_output>, 
function_arg=0) at /root/ws/vpp/src/vlib/cli.c:746
#22 0x00007ffff64770fe in unix_cli_process_input (cm=0x7ffff66b9760 
<unix_cli_main>, cli_file_index=0) at /root/ws/vpp/src/vlib/unix/cli.c:2572
#23 0x00007ffff6477dc0 in unix_cli_process (vm=0x7ffff66b8dc0 
<vlib_global_main>, rt=0x7fffb5e38000, f=0x0) at 
/root/ws/vpp/src/vlib/unix/cli.c:2688
#24 0x00007ffff6416e49 in vlib_process_bootstrap (_a=140736233346832) at 
/root/ws/vpp/src/vlib/main.c:1472
#25 0x00007ffff5eb7d40 in clib_calljmp () from 
/root/ws/vpp/build-root/install-vpp_debug-native/vpp/lib/libvppinfra.so.19.08.1
#26 0x00007fffb53216e0 in ?? ()
#27 0x00007ffff6416f51 in vlib_process_startup (vm=0x7fffb5e8bf20, 
p=0x7fffb4bba388, f=0x7fffb5e8bf30) at /root/ws/vpp/src/vlib/main.c:1494
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
(gdb)

The crash is not easy to reproduce with a simple topology/config (but it
crashes every time in my environment).
Some environment info collected before the crash is below:

# vppctl show ip fib table 1
ipv4-VRF:1, fib_index:1, flow hash:[src dst sport dport proto ] locks:[src:CLI:3, src:adjacency:2, ]
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:20 buckets:1 uRPF:19 to:[0:0]]
    [0] [@0]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:21 buckets:1 uRPF:21 to:[0:0]]
    [0] [@0]: dpo-drop ip4
10.255.1.201/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:26 buckets:1 uRPF:27 to:[0:0]]
    [0] [@5]: ipv4 via 127.0.0.1 loop1: mtu:9000 52547f00000152547f0000010800
10.255.1.202/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:27 buckets:1 uRPF:27 to:[0:0]]
    [0] [@5]: ipv4 via 127.0.0.1 loop1: mtu:9000 52547f00000152547f0000010800
127.0.0.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:25 buckets:1 uRPF:26 to:[0:0]]
    [0] [@2]: dpo-receive: 127.0.0.1 on loop1
127.0.0.11/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:35 buckets:1 uRPF:40 to:[0:0]]
    [0] [@2]: dpo-receive: 127.0.0.11 on loop11
224.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:23 buckets:1 uRPF:23 to:[0:0]]
    [0] [@0]: dpo-drop ip4
240.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:22 buckets:1 uRPF:22 to:[0:0]]
    [0] [@0]: dpo-drop ip4
255.255.255.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:24 buckets:1 uRPF:24 to:[0:0]]
    [0] [@0]: dpo-drop ip4
# vppctl show adj
[@0] ipv4-glean: BondEthernet0.13: mtu:9000 ffffffffffff5254000100028100000d0806
[@1] ipv4 via 172.31.13.1 BondEthernet0.13: mtu:9000 5254000100005254000100028100000d0800
[@2] ipv4 via 172.31.13.2 BondEthernet0.13: mtu:9000 5254000200005254000100028100000d0800
[@3] ipv4 via 172.31.13.3 BondEthernet0.13: mtu:9000 5254000400005254000100028100000d0800
[@4] ipv4 via 172.31.13.4 BondEthernet0.13: mtu:9000 5254000500005254000100028100000d0800
[@5] ipv4 via 127.0.0.1 loop1: mtu:9000 52547f00000152547f0000010800
[@6] ipv4 via 0.0.0.0 pipe102.0: mtu:9000 0000000000000000000000660800
[@7] ipv4 via 127.0.0.1 loop102: mtu:9000 52547f000001dead000000660800
[@8] ipv4 via 127.0.0.11 loop11: mtu:9000 52547f00000bdead0000000b0800
# vppctl show interface loop11
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
loop11                            18     up          9000/0/0/0
# vppctl show interface address loop11
loop11 (up):
  L2 bridge bd-id 11 idx 2 shg 0 bvi
  L3 127.0.0.11/32 ip4 table-id 1 fib-idx 1
# vppctl show ip arp
    Time           IP4       Flags      Ethernet              Interface
      4.0640   172.31.13.1     D    52:54:00:01:00:00 BondEthernet0.13
      4.0677   172.31.13.2     D    52:54:00:02:00:00 BondEthernet0.13
      4.0866   172.31.13.3     D    52:54:00:04:00:00 BondEthernet0.13
      4.0904   172.31.13.4     D    52:54:00:05:00:00 BondEthernet0.13
      4.6127    127.0.0.1      D    52:54:7f:00:00:01 loop102
      4.7627   127.0.0.11      D    52:54:7f:00:00:0b loop11
      4.2042    127.0.0.1      D    52:54:7f:00:00:01 loop1
#

Luckily, I found a tricky way to work around this crash:

# vppctl ip route add 0.0.0.0/1 table 1 via 127.0.0.11 loop11
# vppctl ip route add 1.0.0.0/1 table 1 via 127.0.0.11 loop11

If any other info would help solve this, I would be glad to provide it.
