From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> on behalf of Pim van Pelt via 
lists.fd.io <pim=ipng...@lists.fd.io>
Date: Sunday, 27 March 2022 at 14:01
To: Stanislav Zaikin <zsta...@gmail.com>
Cc: vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Crash in BVI and Loopback interaction
Hoi Stanislav,

Thanks for the response. As I noted in my original email, I am aware that 
loopbacks work as BVI members, but since they are there (and have a whole 
device class dedicated to them!) I was hoping to

  (a) get some historical context on the need for, and the differences between, BVI and
Loopback devices, and

I’m guilty there. My rationale for introducing the BVI interface was to have an 
interface type dedicated to, and optimised for, the L2 function, rather than 
overloading the loopback type, which in the L3 world has a very different 
purpose. However, since there is a large installed base using the loopback for this L2
purpose, its L2 functions were never deprecated.

/neale

(b) get to the bottom of this bug and fix it :)

I can certainly work around the bug for now by dedicating a set of loopback interfaces
and avoiding the use of BVIs.
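
A minimal sketch of what I mean, borrowing Stanislav's loopback recipe from below
(instance and bridge-domain numbers are illustrative):

create loopback interface instance 100
set interface state loop100 up
create bridge-domain 100
set interface l2 bridge loop100 100 bvi

As long as I stick to loopbacks and never touch 'bvi create' / 'bvi delete', I don't
expect to hit the assertion.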

groet,
Pim

On Sun, Mar 27, 2022 at 12:43 PM Stanislav Zaikin <zsta...@gmail.com> wrote:
Hi Pim,

Well, I wasn't aware of the "bvi ..." commands. Anyway, I usually go with something
like:

create loopback interface instance 20
set interface state loop20 up
create bridge-domain 20 learn 1 forward 1 flood 1 arp-term 1 arp-ufwd 0
set interface l2 bridge loop20 20 bvi

On Sun, 27 Mar 2022 at 00:41, Pim van Pelt <p...@ipng.nl> wrote:
Hoi,

I've noticed that the pattern 'create loopback; delete loopback; create bvi', as well
as 'create bvi; delete bvi; create loopback', makes VPP at HEAD unhappy.
I've actually long wondered what the difference is between the BVI and Loopback
interface types: other than that the BVI plumbing lives in l2/l2_bvi.c and the loopback
lives in ethernet/interface.c, their _use_ seems very similar, if not identical. I
understand that BVIs are used in bridges, but a loopback in practice serves that
purpose equally well.
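
To make that concrete, as far as I can tell both of these end up as the BVI of a
bridge-domain in the same way (bridge-domain 10 chosen arbitrarily):

create loopback interface instance 0
set interface l2 bridge loop0 10 bvi

bvi create instance 0
set interface l2 bridge bvi0 10 bvi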

I assume the issue is in the bvi/loop deletion, not the creation, but I stared at this
for an hour or so and could not understand it. Can somebody more knowledgeable help me
out?
Take the following simple repro to crash VPP. The assertion at node.c:194 fails in both
cases:

1) Create loop after bvi:
DBGvpp# show version
vpp v22.06-rc0~268-g4859d8d8e built by pim on hippo at 2022-03-23T19:23:53
DBGvpp# bvi create instance 0
bvi0
DBGvpp# bvi delete bvi0
DBGvpp# create loopback interface instance 0
0: /home/pim/src/vpp/src/vlib/node.c:194 (vlib_node_add_next_with_slot) 
assertion `slot == p[0]' fails

Thread 1 "vpp_main" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50      ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x00007ffff69de859 in __GI_abort () at abort.c:79
#2  0x00000000004072f3 in os_panic () at 
/home/pim/src/vpp/src/vpp/vnet/main.c:413
#3  0x00007ffff6d2ebc9 in debugger () at 
/home/pim/src/vpp/src/vppinfra/error.c:84
#4  0x00007ffff6d2e92d in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x7ffff6f3b19c "%s:%d (%s) assertion `%s' fails")
    at /home/pim/src/vpp/src/vppinfra/error.c:143
#5  0x00007ffff6ea462b in vlib_node_add_next_with_slot (vm=0x7fff96800740, 
node_index=696, next_node_index=648, slot=2)
    at /home/pim/src/vpp/src/vlib/node.c:194
#6  0x00007ffff6ea61d8 in vlib_node_add_named_next_with_slot 
(vm=0x7fff96800740, node=696, name=0x7ffff7cc7c86 "l2-input", slot=2)
    at /home/pim/src/vpp/src/vlib/node.c:267
#7  0x00007ffff70d5ce5 in vnet_create_loopback_interface 
(sw_if_indexp=0x7fff515421e8, mac_address=0x7fff515421e2 "", is_specified=1 
'\001',
    user_instance=0) at /home/pim/src/vpp/src/vnet/ethernet/interface.c:890
#8  0x00007ffff70d98df in create_simulated_ethernet_interfaces 
(vm=0x7fff96800740, input=0x7fff51542e40, cmd=0x7fff99b88088)
    at /home/pim/src/vpp/src/vnet/ethernet/interface.c:930
#9  0x00007ffff6e681d4 in vlib_cli_dispatch_sub_commands (vm=0x7fff96800740, 
cm=0x4312e0 <vlib_global_main+32>, input=0x7fff51542e40,
    parent_command_index=1146) at /home/pim/src/vpp/src/vlib/cli.c:592
#10 0x00007ffff6e67f4e in vlib_cli_dispatch_sub_commands (vm=0x7fff96800740, 
cm=0x4312e0 <vlib_global_main+32>, input=0x7fff51542e40,
    parent_command_index=33) at /home/pim/src/vpp/src/vlib/cli.c:549
#11 0x00007ffff6e67f4e in vlib_cli_dispatch_sub_commands (vm=0x7fff96800740, 
cm=0x4312e0 <vlib_global_main+32>, input=0x7fff51542e40,
    parent_command_index=0) at /home/pim/src/vpp/src/vlib/cli.c:549
#12 0x00007ffff6e66e98 in vlib_cli_input (vm=0x7fff96800740, 
input=0x7fff51542e40, function=0x7ffff6ef2c40 <unix_vlib_cli_output>, 
function_arg=0)
    at /home/pim/src/vpp/src/vlib/cli.c:695
#13 0x00007ffff6ef48dd in unix_cli_process_input (cm=0x7ffff6f69748 
<unix_cli_main>, cli_file_index=0) at /home/pim/src/vpp/src/vlib/unix/cli.c:2617
#14 0x00007ffff6ef1cb1 in unix_cli_process (vm=0x7fff96800740, 
rt=0x7fff9bc4ea80, f=0x0) at /home/pim/src/vpp/src/vlib/unix/cli.c:2746
#15 0x00007ffff6e9f26d in vlib_process_bootstrap (_a=140735646532920) at 
/home/pim/src/vpp/src/vlib/main.c:1220
#16 0x00007ffff6d47e08 in clib_calljmp () at 
/home/pim/src/vpp/src/vppinfra/longjmp.S:123
#17 0x00007fff92380530 in ?? ()
#18 0x00007ffff6e9eb3f in vlib_process_startup (vm=0x1, p=0x8, 
f=0x7fff968008b0) at /home/pim/src/vpp/src/vlib/main.c:1245
#19 0x00007fff96800740 in ?? ()
#20 0x00007fff923805c0 in ?? ()
#21 0x00007ffff6eff104 in vlib_process_signal_event (vm=<error reading 
variable: Cannot access memory at address 0x2a7>,
    node_index=<error reading variable: Cannot access memory at address 0x29f>,
    type_opaque=<error reading variable: Cannot access memory at address 0x297>,
    data=<error reading variable: Cannot access memory at address 0x28f>) at 
/home/pim/src/vpp/src/vlib/node_funcs.h:1025
Backtrace stopped: previous frame inner to this frame (corrupt stack?)

2) Create bvi after loop:
DBGvpp# show version
vpp v22.06-rc0~268-g4859d8d8e built by pim on hippo at 2022-03-23T19:23:53
DBGvpp# create loopback
loop0
DBGvpp# delete loopback interface intfc loop0
DBGvpp# bvi create
0: /home/pim/src/vpp/src/vlib/node.c:194 (vlib_node_add_next_with_slot) 
assertion `slot == p[0]' fails

Thread 1 "vpp_main" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50      ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x00007ffff69de859 in __GI_abort () at abort.c:79
#2  0x00000000004072f3 in os_panic () at 
/home/pim/src/vpp/src/vpp/vnet/main.c:413
#3  0x00007ffff6d2ebc9 in debugger () at 
/home/pim/src/vpp/src/vppinfra/error.c:84
#4  0x00007ffff6d2e92d in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x7ffff6f3b19c "%s:%d (%s) assertion `%s' fails")
    at /home/pim/src/vpp/src/vppinfra/error.c:143
#5  0x00007ffff6ea462b in vlib_node_add_next_with_slot (vm=0x7fff96800740, 
node_index=696, next_node_index=648, slot=0)
    at /home/pim/src/vpp/src/vlib/node.c:194
#6  0x00007ffff6ea61d8 in vlib_node_add_named_next_with_slot 
(vm=0x7fff96800740, node=696, name=0x7ffff7cc7c86 "l2-input", slot=0)
    at /home/pim/src/vpp/src/vlib/node.c:267
#7  0x00007ffff7123f69 in l2_bvi_create (user_instance=4294967295, 
mac_in=0x7fff515424e8, sw_if_indexp=0x7fff515424f8)
    at /home/pim/src/vpp/src/vnet/l2/l2_bvi.c:186
#8  0x00007ffff71266b9 in l2_bvi_create_cli (vm=0x7fff96800740, 
input=0x7fff51542e40, cmd=0x7fff99b87868)
    at /home/pim/src/vpp/src/vnet/l2/l2_bvi.c:256
#9  0x00007ffff6e681d4 in vlib_cli_dispatch_sub_commands (vm=0x7fff96800740, 
cm=0x4312e0 <vlib_global_main+32>, input=0x7fff51542e40,
    parent_command_index=1124) at /home/pim/src/vpp/src/vlib/cli.c:592
#10 0x00007ffff6e67f4e in vlib_cli_dispatch_sub_commands (vm=0x7fff96800740, 
cm=0x4312e0 <vlib_global_main+32>, input=0x7fff51542e40,
    parent_command_index=0) at /home/pim/src/vpp/src/vlib/cli.c:549
#11 0x00007ffff6e66e98 in vlib_cli_input (vm=0x7fff96800740, 
input=0x7fff51542e40, function=0x7ffff6ef2c40 <unix_vlib_cli_output>, 
function_arg=0)
    at /home/pim/src/vpp/src/vlib/cli.c:695
#12 0x00007ffff6ef48dd in unix_cli_process_input (cm=0x7ffff6f69748 
<unix_cli_main>, cli_file_index=0) at /home/pim/src/vpp/src/vlib/unix/cli.c:2617
#13 0x00007ffff6ef1cb1 in unix_cli_process (vm=0x7fff96800740, 
rt=0x7fff9bc4ea80, f=0x0) at /home/pim/src/vpp/src/vlib/unix/cli.c:2746
#14 0x00007ffff6e9f26d in vlib_process_bootstrap (_a=140735646532920) at 
/home/pim/src/vpp/src/vlib/main.c:1220
#15 0x00007ffff6d47e08 in clib_calljmp () at 
/home/pim/src/vpp/src/vppinfra/longjmp.S:123
#16 0x00007fff92380530 in ?? ()
#17 0x00007ffff6e9eb3f in vlib_process_startup (vm=0x1, p=0x8, 
f=0x7fff968008b0) at /home/pim/src/vpp/src/vlib/main.c:1245
#18 0x00007fff96800740 in ?? ()
#19 0x00007fff923805c0 in ?? ()
#20 0x00007ffff6eff104 in vlib_process_signal_event (vm=<error reading 
variable: Cannot access memory at address 0x2a7>,
    node_index=<error reading variable: Cannot access memory at address 0x29f>,
    type_opaque=<error reading variable: Cannot access memory at address 0x297>,
    data=<error reading variable: Cannot access memory at address 0x28f>) at 
/home/pim/src/vpp/src/vlib/node_funcs.h:1025
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
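
Comparing frame #5 of the two backtraces, the node pair is the same and only the
requested slot differs:

crash 1 (create loopback): vlib_node_add_next_with_slot (..., node_index=696, next_node_index=648, slot=2)
crash 2 (bvi create):      vlib_node_add_next_with_slot (..., node_index=696, next_node_index=648, slot=0)

So the re-created interface lands on the same node (696), and each creation path asks
for a different fixed slot towards "l2-input". My guess, unverified, is that the delete
leaves the old slot mapping behind, so the other path's hard-coded slot no longer
matches and `slot == p[0]' fails.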

--
Pim van Pelt <p...@ipng.nl>
PBVP1-RIPE - http://www.ipng.nl/




--
Best regards
Stanislav Zaikin


--
Pim van Pelt <p...@ipng.nl>
PBVP1-RIPE - http://www.ipng.nl/