Which vpp version are you using? The code looks substantially different in master/latest. In particular, you must not have this patch:
Author:    Neale Ranns <nra...@cisco.com>  2020-05-25 05:09:36
Committer: Ole Trøan <otr...@employees.org>  2020-05-26 10:54:23
Parent:    080aa503b23a90ed43d7c0b2bc68e2726190a990 (vcl: do not propagate epoll events if session closed)
Child:     1bf6df4ff9c83bac1fc329a4b5c4d7061f13720a (fib: Fix interpose source reactivate)
Branches:  master, remotes/origin/master
Follows:   v20.09-rc0
Precedes:  WORKS_05_27_2020

    fib: Use basic hash for adjacency neighbour table

    Type: improvement

    A bihash per-interface used too much memory.

    Change-Id: I447bb66c0907e1632fa5d886a3600e518663c39e
    Signed-off-by: Neale Ranns <nra...@cisco.com>

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Stanislav Zaikin
Sent: Thursday, July 2, 2020 12:22 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] p2p interfaces and clib_bihash_init (and oom)

Hello folks,

I've tried to set up vpp to handle many pppoe connections, but I ran into an OOM issue:

#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x00007ffff67b3899 in __GI_abort () at abort.c:79
#2  0x000055555555cef2 in os_panic () at /home/zstas/vpp_gerrit/src/vpp/vnet/main.c:366
#3  0x00007ffff6a51183 in os_out_of_memory () at /home/zstas/vpp_gerrit/src/vppinfra/unix-misc.c:222
#4  0x00007ffff7994f1e in alloc_aligned_24_8 (h=0x7fffb85880c0, nbytes=64) at /home/zstas/vpp_gerrit/src/vppinfra/bihash_template.c:60
#5  0x00007ffff7994fe0 in clib_bihash_instantiate_24_8 (h=0x7fffb85880c0) at /home/zstas/vpp_gerrit/src/vppinfra/bihash_template.c:86
...
#9  0x00007ffff7a02fa5 in adj_nbr_insert (nh_proto=FIB_PROTOCOL_IP4, link_type=VNET_LINK_IP4, nh_addr=0x7ffff7c3da60 <zero_addr>, sw_if_index=1365, adj_index=1351) at /home/zstas/vpp_gerrit/src/vnet/adj/adj_nbr.c:83
#10 0x00007ffff7a0345b in adj_nbr_alloc (nh_proto=FIB_PROTOCOL_IP4, link_type=VNET_LINK_IP4, nh_addr=0x7ffff7c3da60 <zero_addr>, sw_if_index=1365) at /home/zstas/vpp_gerrit/src/vnet/adj/adj_nbr.c:200
...
#22 0x00007fffb018033d in vnet_pppoe_add_del_session (a=0x7fffb7420d10, sw_if_indexp=0x7fffb7420cdc) at /home/zstas/vpp_gerrit/src/plugins/pppoe/pppoe.c:418

Despite the fact that a PPPoE interface is a P2P interface, there is logic in vpp that creates a pretty big adjacency table:

    BV (clib_bihash_init) (adj_nbr_tables[nh_proto][sw_if_index],
                           "Adjacency Neighbour table",
                           ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS,
                           ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE);

At first, I tried to fix it by allocating a smaller table:

    int numbuckets = ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS;
    int memsize = ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE;
    if (vnet_sw_interface_is_p2p (vnet_get_main (), sw_if_index) == 1)
      {
        numbuckets = 4;
        memsize = 32 << 8;
      }
    BV (clib_bihash_init) (adj_nbr_tables[nh_proto][sw_if_index],
                           "Adjacency Neighbour table",
                           numbuckets, memsize);

But I saw that huge pages were still being consumed every time a pppoe connection came up. I looked into the code of the alloc_aligned function, and it seems to me that a new memory page is allocated in any case when a new hash table is initialized.

But how can we deal with situations where we have hundreds of thousands of interfaces? Is there a way to prevent this behavior? Can we allocate the adj table from some kind of memory pool, or keep one adj table for all p2p interfaces?

--
Best regards
Stanislav Zaikin