See ../src/vppinfra/bihash_16_8.h:

#define BIHASH_USE_HEAP 1

The sv reassembly bihash table configuration appears to be hardwired, and may
not be flexible enough to satisfy the cash customers. If the number of buckets
is way too low for your use case, bihash is capable of wasting a considerable
amount of memory.
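
For reference, the two sizing knobs live in the init call. A sketch of what
the setup looks like, using the real clib_bihash_init_16_8() API; the
nbuckets / memory_size values below are illustrative, not the ones hardwired
in the reassembly code:

    #include <vppinfra/bihash_16_8.h>

    clib_bihash_16_8_t hash;

    /* nbuckets is rounded up to a power of two by the init code; aim for
       roughly the number of concurrent entries you expect. memory_size is
       the arena budget; with BIHASH_USE_HEAP the backing store comes out
       of the main heap. Values here are examples only. */
    clib_bihash_init_16_8 (&hash, "ip4-sv-reass",
                           /* nbuckets */ 64 << 10,
                           /* memory_size */ 256 << 20);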

Suggest that you ping Klement Sekera, it's his code...

D.

-----Original Message-----
From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Elias Rudberg
Sent: Friday, February 19, 2021 7:41 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP 20.09 os_out_of_memory() in clib_bihash_add_del_16_8 in 
IPv4 Shallow Virtual reassembly code

Hello VPP experts,

We have a problem with VPP 20.09 crashing with SIGABRT. This has happened
several times lately, but we do not have an exact way of reproducing it. Here
is a backtrace from gdb:

Thread 10 "vpp_wk_7" received signal SIGABRT, Aborted.
[Switching to Thread 0x7feac47f8700 (LWP 6263)]
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1  0x00007ffff4044921 in __GI_abort () at abort.c:79
#2  0x000055555555c640 in os_panic () at src/vpp/vnet/main.c:368
#3  0x00007ffff7719229 in alloc_aligned_16_8 (h=0x7ffff7b79990 <ip4_sv_reass_main+32>, nbytes=<optimized out>) at src/vppinfra/bihash_template.c:34
#4  0x00007ffff771b650 in value_alloc_16_8 (h=0x7ffff7b79990 <ip4_sv_reass_main+32>, log2_pages=4) at src/vppinfra/bihash_template.c:356
#5  0x00007ffff771b43a in split_and_rehash_16_8 (h=0x7ffff7b79990 <ip4_sv_reass_main+32>, old_values=0x7ff87c7b0d40, old_log2_pages=3, new_log2_pages=4) at src/vppinfra/bihash_template.c:453
#6  0x00007ffff7710f84 in clib_bihash_add_del_inline_with_hash_16_8 (h=0x7ffff7b79990 <ip4_sv_reass_main+32>, add_v=0x7ffbf2088c60, hash=<optimized out>, is_add=<optimized out>, is_stale_cb=0x0, arg=0x0) at src/vppinfra/bihash_template.c:765
#7  clib_bihash_add_del_inline_16_8 (h=0x7ffff7b79990 <ip4_sv_reass_main+32>, add_v=0x7ffbf2088c60, is_add=<optimized out>, is_stale_cb=0x0, arg=0x0) at src/vppinfra/bihash_template.c:857
#8  clib_bihash_add_del_16_8 (h=0x7ffff7b79990 <ip4_sv_reass_main+32>, add_v=0x7ffbf2088c60, is_add=<optimized out>) at src/vppinfra/bihash_template.c:864
#9  0x00007ffff66795ec in ip4_sv_reass_find_or_create (vm=<optimized out>, rm=<optimized out>, rt=<optimized out>, kv=<optimized out>, do_handoff=<optimized out>) at src/vnet/ip/reass/ip4_sv_reass.c:364
#10 ip4_sv_reass_inline (vm=<optimized out>, node=<optimized out>, frame=<optimized out>, is_feature=255, is_output_feature=false, is_custom=false) at src/vnet/ip/reass/ip4_sv_reass.c:726
#11 ip4_sv_reass_node_feature_fn_skx (vm=<optimized out>, node=<optimized out>, frame=<optimized out>) at src/vnet/ip/reass/ip4_sv_reass.c:919
#12 0x00007ffff5ac806e in dispatch_node (vm=0x7ffbf1e74400, node=0x7ffbf2553fc0, type=VLIB_NODE_TYPE_INTERNAL, dispatch_state=VLIB_NODE_STATE_POLLING, frame=<optimized out>, last_time_stamp=<optimized out>) at src/vlib/main.c:1194
#13 dispatch_pending_node (vm=0x7ffbf1e74400, pending_frame_index=<optimized out>, last_time_stamp=<optimized out>) at src/vlib/main.c:1353
#14 vlib_main_or_worker_loop (vm=0x7ffbf1e74400, is_main=0) at src/vlib/main.c:1846
#15 vlib_worker_loop (vm=0x7ffbf1e74400) at src/vlib/main.c:1980

The line at bihash_template.c:34 is "os_out_of_memory ()".
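
As far as we can tell, the surrounding code behaves roughly like this (our
paraphrase of the 20.09 source, not the verbatim code):

    /* Paraphrase of alloc_aligned_16_8() in bihash_template.c: with
       BIHASH_USE_HEAP set, bucket/value pages are allocated from the
       main heap, and a failed allocation panics rather than returning
       NULL -- which would explain the SIGABRT above. */
    static void *
    alloc_aligned (clib_bihash_16_8_t * h, uword nbytes)
    {
      void *p = clib_mem_alloc_aligned_or_null (nbytes,
                                                CLIB_CACHE_LINE_BYTES);
      if (p == 0)
        os_out_of_memory ();    /* -> os_panic () -> abort () */
      return p;
    }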

If VPP calls "os_out_of_memory()" at that point in the code, what does that
mean? Is there some way we could configure VPP to allow it to use more memory
for this kind of allocation?

We have plenty of physical memory available, and the main heap ("heapsize" in
startup.conf) has already been set to a large value, but maybe this part of
the code uses some other kind of memory allocation rather than the main heap?
How can we tell whether this particular allocation comes from the main heap or
not?
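
For reference, we set the heap size with the top-level heapsize statement in
startup.conf, something like this (the value below is an example, not our
exact production setting):

    heapsize 32G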

Best regards,
Elias
