Here is maybe a better fix: https://gerrit.fd.io/r/c/vpp/+/33693

My previous changeset was only hiding the underlying issue, which is that arrays passed to vlib_buffer_enqueue_to_next() must be big enough to allow for some overflow, because clib_mask_compare_u16() and clib_mask_compare_u32() deliberately read past the end for optimization reasons. This is usually fine for a typical VPP node, which allocates buffers[VLIB_FRAME_SIZE] and nexts[VLIB_FRAME_SIZE], but not when using vectors. This patch should actually fix the issue (and tests should pass), but there might be better ways.

However, I do not think the last issue reported by Chetan will be fixed by this one; it looks like a new one. Chetan, Florin's request is a good one: do you reproduce it only with your plugin (in which case it could be an issue in your plugin), or also with stock VPP?
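To make the failure mode concrete, here is a small standalone sketch of the pattern (my simplification for illustration; it is neither the real clib_mask_compare_u16() nor the patch above). The helper compares a full SIMD register's worth of lanes per iteration, so the last iteration can read up to CHUNK - 1 elements past n_elts. A buffers[VLIB_FRAME_SIZE] stack array has that slack for free, while a heap vector sized exactly to vec_len() does not, which is what ASAN flags at vector_funcs.h:24 in the backtrace below:

/* sketch.c - illustration only: not the real clib code, not the patch */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

enum { CHUNK = 32 };   /* e.g. 32 x u16 lanes in one 512-bit register */

/* Compares CHUNK elements per step and masks the junk lanes off
 * afterwards; it deliberately reads up to CHUNK - 1 elements past
 * n_elts, so the caller must guarantee that much tail room. */
static uint64_t
mask_compare_u16_sketch (uint16_t v, const uint16_t *a, uint32_t n_elts)
{
  uint64_t mask = 0;
  uint32_t i, j;
  for (i = 0; i < n_elts; i += CHUNK)
    for (j = 0; j < CHUNK; j++)
      if (a[i + j] == v)                /* over-reads when i + j >= n_elts */
        mask |= 1ULL << ((i + j) & 63);
  if (n_elts < 64)
    mask &= (1ULL << n_elts) - 1;       /* drop the junk lanes */
  return mask;
}

int
main (void)
{
  uint32_t n = 2;   /* n_elts=2, as in the backtrace below */
  /* uint16_t *bad = malloc (n * sizeof (uint16_t));  <- ASAN would flag
   * the read of bad[2..31] inside the loop above */
  uint16_t *ok = calloc (n + CHUNK - 1, sizeof (uint16_t)); /* padded tail */
  ok[0] = ok[1] = 2;
  printf ("mask = 0x%llx\n",            /* prints 0x3 */
          (unsigned long long) mask_compare_u16_sketch (2, ok, n));
  free (ok);
  return 0;
}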
ben

> -----Original Message-----
> From: Florin Coras <fcoras.li...@gmail.com>
> Sent: Wednesday, September 8, 2021 08:39
> To: chetan bhasin <chetan.bhasin...@gmail.com>
> Cc: Benoit Ganne (bganne) <bga...@cisco.com>; vpp-dev <vpp-d...@lists.fd.io>
> Subject: Re: [vpp-dev] VPP 2106 with Sanitizer enabled
>
> Hi Chetan,
>
> Something like the following should exercise, using iperf and homegrown
> test apps, most of the components of the host stack, i.e., vcl, session
> layer and the transports, including tcp:
>
> make test-debug VPP_EXTRA_CMAKE_ARGS=-DVPP_ENABLE_SANITIZE_ADDR=ON CC=gcc-10 TEST=vcl CACHE_OUTPUT=0
>
> Having said that, it seems some issue has crept in since the last time I
> tested (granted, it's been a long time). That is, most of the tests seem
> to fail, but I've only checked without Benoit's patch.
>
> Regards,
> Florin
>
> On Sep 7, 2021, at 9:57 PM, chetan bhasin <chetan.bhasin...@gmail.com> wrote:
> >
> > Hi Florin,
> >
> > This issue comes up when we test iperf traffic with our plugins that
> > involve the VPP TCP host stack (VPP 21.06).
> >
> > What's the best way to test the VPP TCP host stack with the sanitizer,
> > without involving our out-of-tree plugins?
> >
> > Thanks,
> > Chetan
> >
> > On Tue, Sep 7, 2021 at 7:47 PM Florin Coras <fcoras.li...@gmail.com> wrote:
> > >
> > > Hi Chetan,
> > >
> > > Out of curiosity, is this while running some make test (like the
> > > iperf3 one) or with actual traffic through vpp?
> > >
> > > Regards,
> > > Florin
> > >
> > > On Sep 7, 2021, at 12:17 AM, chetan bhasin <chetan.bhasin...@gmail.com> wrote:
> > > >
> > > > Hi Ben,
> > > >
> > > > After applying the patch, it's now crashing at the place below:
> > > >
> > > > static void
> > > > session_flush_pending_tx_buffers (session_worker_t * wrk,
> > > >                                   vlib_node_runtime_t * node)
> > > > {
> > > >   vlib_buffer_enqueue_to_next (wrk->vm, node, wrk->pending_tx_buffers,
> > > >                                wrk->pending_tx_nexts,
> > > >                                vec_len (wrk->pending_tx_nexts));
> > > >   vec_reset_length (wrk->pending_tx_buffers);
> > > >   vec_reset_length (wrk->pending_tx_nexts);
> > > > }
> > > >
> > > > Program received signal SIGSEGV, Segmentation fault.
> > > > [Switching to Thread 0x7ffb527f9700 (LWP 27762)]
> > > > 0x00007ffff71db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned long, unsigned long*, unsigned long*) () from /lib64/libasan.so.5
> > > > (gdb) bt
> > > > #0  0x00007ffff71db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned long, unsigned long*, unsigned long*) () from /lib64/libasan.so.5
> > > > #1  0x00007ffff72c2a11 in __asan::ThreadStackContainsAddress(__sanitizer::ThreadContextBase*, void*) () from /lib64/libasan.so.5
> > > > #2  0x00007ffff72dcdc2 in __sanitizer::ThreadRegistry::FindThreadContextLocked(bool (*)(__sanitizer::ThreadContextBase*, void*), void*) () from /lib64/libasan.so.5
> > > > #3  0x00007ffff72c3e5a in __asan::FindThreadByStackAddress(unsigned long) () from /lib64/libasan.so.5
> > > > #4  0x00007ffff71d5fb6 in __asan::GetStackAddressInformation(unsigned long, unsigned long, __asan::StackAddressDescription*) () from /lib64/libasan.so.5
> > > > #5  0x00007ffff71d73f9 in __asan::AddressDescription::AddressDescription(unsigned long, unsigned long, bool) () from /lib64/libasan.so.5
> > > > #6  0x00007ffff71d9e51 in __asan::ErrorGeneric::ErrorGeneric(unsigned int, unsigned long, unsigned long, unsigned long, unsigned long, bool, unsigned long) () from /lib64/libasan.so.5
> > > > #7  0x00007ffff72bdc2a in __asan::ReportGenericError(unsigned long, unsigned long, unsigned long, unsigned long, bool, unsigned long, unsigned int, bool) () from /lib64/libasan.so.5
> > > > #8  0x00007ffff72be927 in __asan_report_load4 () from /lib64/libasan.so.5
> > > > #9  0x00007ffff58e5185 in session_flush_pending_tx_buffers (wrk=0x7fff89101e80, node=0x7fff891a6e80) at src/vnet/session/session_node.c:1658
> > > > #10 0x00007ffff58e8531 in session_queue_node_fn (vm=0x7fff72bf2c40, node=0x7fff891a6e80, frame=0x0) at src/vnet/session/session_node.c:1812
> > > > #11 0x00007ffff3e04121 in dispatch_node (vm=0x7fff72bf2c40, node=0x7fff891a6e80, type=VLIB_NODE_TYPE_INPUT, dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x0, last_time_stamp=2510336347228480) at src/vlib/main.c:1024
> > > > #12 0x00007ffff3e13394 in vlib_main_or_worker_loop (vm=0x7fff72bf2c40, is_main=0) at src/vlib/main.c:2949
> > > > #13 0x00007ffff3e14fb8 in vlib_worker_loop (vm=0x7fff72bf2c40) at src/vlib/main.c:3114
> > > > #14 0x00007ffff3eac8f1 in vlib_worker_thread_fn (arg=0x7fff700ef480) at src/vlib/threads.c:1560
> > > > #15 0x00007ffff34d9504 in clib_calljmp () at src/vppinfra/longjmp.S:123
> > > > #16 0x00007ffb527f8230 in ?? ()
> > > > #17 0x00007ffff3ea0141 in vlib_worker_thread_bootstrap_fn (arg=0x7fff700ef480) at src/vlib/threads.c:431
> > > > #18 0x00007fff6d41971b in eal_thread_loop () from /opt/opwv/integra/99.9/tools/vpp_2106_asan/bin/../lib/dpdk_plugin.so
> > > > #19 0x00007ffff38abea5 in start_thread () from /lib64/libpthread.so.0
> > > > #20 0x00007ffff2e328cd in clone () from /lib64/libc.so.6
> > > > (gdb)
> > > >
> > > > On Mon, Sep 6, 2021 at 6:10 PM Benoit Ganne (bganne) <bga...@cisco.com> wrote:
> > > > >
> > > > > Yes, we are aware of it; still working on the correct fix, though.
> > > > > In the meantime you can try to apply
> > > > > https://gerrit.fd.io/r/c/vpp/+/32765 which should work around that
> > > > > for now.
> > > > > Best
> > > > > ben
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: chetan bhasin <chetan.bhasin...@gmail.com>
> > > > > > Sent: Monday, September 6, 2021 14:21
> > > > > > To: Benoit Ganne (bganne) <bga...@cisco.com>
> > > > > > Cc: vpp-dev <vpp-dev@lists.fd.io>
> > > > > > Subject: Re: [vpp-dev] VPP 2106 with Sanitizer enabled
> > > > > >
> > > > > > Hi,
> > > > > >
> > > > > > The crash below comes up once we involve the VPP TCP host stack.
> > > > > >
> > > > > > Program received signal SIGSEGV, Segmentation fault.
> > > > > > [Switching to Thread 0x7ffb527f9700 (LWP 2013)]
> > > > > > 0x00007ffff71db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned long, unsigned long*, unsigned long*) () from /lib64/libasan.so.5
> > > > > > (gdb) bt
> > > > > > #0  0x00007ffff71db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned long, unsigned long*, unsigned long*) () from /lib64/libasan.so.5
> > > > > > #1  0x00007ffff72c2a11 in __asan::ThreadStackContainsAddress(__sanitizer::ThreadContextBase*, void*) () from /lib64/libasan.so.5
> > > > > > #2  0x00007ffff72dcdc2 in __sanitizer::ThreadRegistry::FindThreadContextLocked(bool (*)(__sanitizer::ThreadContextBase*, void*), void*) () from /lib64/libasan.so.5
> > > > > > #3  0x00007ffff72c3e5a in __asan::FindThreadByStackAddress(unsigned long) () from /lib64/libasan.so.5
> > > > > > #4  0x00007ffff71d5fb6 in __asan::GetStackAddressInformation(unsigned long, unsigned long, __asan::StackAddressDescription*) () from /lib64/libasan.so.5
> > > > > > #5  0x00007ffff71d73f9 in __asan::AddressDescription::AddressDescription(unsigned long, unsigned long, bool) () from /lib64/libasan.so.5
> > > > > > #6  0x00007ffff71d9e51 in __asan::ErrorGeneric::ErrorGeneric(unsigned int, unsigned long, unsigned long, unsigned long, unsigned long, bool, unsigned long) () from /lib64/libasan.so.5
> > > > > > #7  0x00007ffff72bdc2a in __asan::ReportGenericError(unsigned long, unsigned long, unsigned long, unsigned long, bool, unsigned long, unsigned int, bool) () from /lib64/libasan.so.5
> > > > > > #8  0x00007ffff72bf194 in __asan_report_load_n () from /lib64/libasan.so.5
> > > > > > #9  0x00007ffff3f45463 in clib_mask_compare_u16_x64 (v=2, a=0x7fff89fd6150, n_elts=2) at src/vppinfra/vector_funcs.h:24
> > > > > > #10 0x00007ffff3f4571e in clib_mask_compare_u16 (v=2, a=0x7fff89fd6150, mask=0x7ffb51fe2310, n_elts=2) at src/vppinfra/vector_funcs.h:79
> > > > > > #11 0x00007ffff3f45b81 in enqueue_one (vm=0x7fff7314ec40, node=0x7fff89fe1e00, used_elt_bmp=0x7ffb51fe2440, next_index=2, buffers=0x7fff7045a9e0, nexts=0x7fff89fd6150, n_buffers=2, n_left=2, tmp=0x7ffb51fe2480) at src/vlib/buffer_funcs.c:30
> > > > > > #12 0x00007ffff3f6bdae in vlib_buffer_enqueue_to_next_fn_skx (vm=0x7fff7314ec40, node=0x7fff89fe1e00, buffers=0x7fff7045a9e0, nexts=0x7fff89fd6150, count=2) at src/vlib/buffer_funcs.c:110
> > > > > > #13 0x00007ffff58cd2b6 in vlib_buffer_enqueue_to_next (vm=0x7fff7314ec40, node=0x7fff89fe1e00, buffers=0x7fff7045a9e0, nexts=0x7fff89fd6150, count=2) at src/vlib/buffer_node.h:344
> > > > > > #14 0x00007ffff58e4cf1 in session_flush_pending_tx_buffers (wrk=0x7fff8912b780, node=0x7fff89fe1e00) at src/vnet/session/session_node.c:1654
> > > > > > #15 0x00007ffff58e844f in session_queue_node_fn (vm=0x7fff7314ec40, node=0x7fff89fe1e00, frame=0x0) at src/vnet/session/session_node.c:1812
> > > > > > #16 0x00007ffff3e0402f in dispatch_node (vm=0x7fff7314ec40, node=0x7fff89fe1e00, type=VLIB_NODE_TYPE_INPUT, dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x0, last_time_stamp=2463904994699296) at src/vlib/main.c:1024
> > > > > > #17 0x00007ffff3e132a2 in vlib_main_or_worker_loop (vm=0x7fff7314ec40, is_main=0) at src/vlib/main.c:2949
> > > > > > #18 0x00007ffff3e14ec6 in vlib_worker_loop (vm=0x7fff7314ec40) at src/vlib/main.c:3114
> > > > > > #19 0x00007ffff3eac7ff in vlib_worker_thread_fn (arg=0x7fff7050ef40) at src/vlib/threads.c:1560
> > > > > > #20 0x00007ffff34d9504 in clib_calljmp () at src/vppinfra/longjmp.S:123
> > > > > > #21 0x00007ffb527f8230 in ?? ()
> > > > > > #22 0x00007ffff3ea004f in vlib_worker_thread_bootstrap_fn (arg=0x7fff7050ef40) at src/vlib/threads.c:431
> > > > > > #23 0x00007fff6d41971b in eal_thread_loop () from /opt/opwv/integra/99.9/tools/vpp_2106_asan/bin/../lib/dpdk_plugin.so
> > > > > > #24 0x00007ffff38abea5 in start_thread () from /lib64/libpthread.so.0
> > > > > > #25 0x00007ffff2e328cd in clone () from /lib64/libc.so.6
> > > > > > (gdb)
> > > > > >
> > > > > > On Mon, Sep 6, 2021 at 1:52 PM chetan bhasin <chetan.bhasin...@gmail.com> wrote:
> > > > > > >
> > > > > > > Hi Ben,
> > > > > > >
> > > > > > > Thanks for the direction. It looks like it will fix both of the
> > > > > > > issues mentioned above. I will update you with the results after
> > > > > > > applying the patch.
> > > > > > >
> > > > > > > Is there any ASAN-related patch inside the TCP host stack code?
> > > > > > > I will share the issue with you shortly.
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Chetan
> > > > > > >
> > > > > > > On Mon, Sep 6, 2021 at 1:22 PM Benoit Ganne (bganne) <bga...@cisco.com> wrote:
> > > > > > > >
> > > > > > > > It should be fixed in master by https://gerrit.fd.io/r/c/vpp/+/32643
> > > > > > > >
> > > > > > > > ben
> > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of chetan bhasin
> > > > > > > > > Sent: Monday, September 6, 2021 09:36
> > > > > > > > > To: vpp-dev <vpp-dev@lists.fd.io>
> > > > > > > > > Subject: [vpp-dev] VPP 2106 with Sanitizer enabled
> > > > > > > > >
> > > > > > > > > Hi,
> > > > > > > > >
> > > > > > > > > We are facing two errors with vpp 21.06 and Address Sanitizer
> > > > > > > > > enabled:
> > > > > > > > >
> > > > > > > > > make V=1 -j4 build VPP_EXTRA_CMAKE_ARGS=-DVPP_ENABLE_SANITIZE_ADDR=ON
> > > > > > > > >
> > > > > > > > > Work-around: after adding the two APIs string_key_sum and
> > > > > > > > > strnlen_s_inline to the ASAN suppressions, vpp 21.06 comes up
> > > > > > > > > fine.
> > > > > > > > >
> > > > > > > > > Error 1:
> > > > > > > > > --------
> > > > > > > > > Program received signal SIGSEGV, Segmentation fault.
> > > > > > > > > 0x00007ffff71db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned long, unsigned long*, unsigned long*) () from /lib64/libasan.so.5
> > > > > > > > > (gdb) bt
> > > > > > > > > #0  0x00007ffff71db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned long, unsigned long*, unsigned long*) () from /lib64/libasan.so.5
> > > > > > > > > #1  0x00007ffff72c2a11 in __asan::ThreadStackContainsAddress(__sanitizer::ThreadContextBase*, void*) () from /lib64/libasan.so.5
> > > > > > > > > #2  0x00007ffff72dcdc2 in __sanitizer::ThreadRegistry::FindThreadContextLocked(bool (*)(__sanitizer::ThreadContextBase*, void*), void*) () from /lib64/libasan.so.5
> > > > > > > > > #3  0x00007ffff72c3e5a in __asan::FindThreadByStackAddress(unsigned long) () from /lib64/libasan.so.5
> > > > > > > > > #4  0x00007ffff71d5fb6 in __asan::GetStackAddressInformation(unsigned long, unsigned long, __asan::StackAddressDescription*) () from /lib64/libasan.so.5
> > > > > > > > > #5  0x00007ffff71d73f9 in __asan::AddressDescription::AddressDescription(unsigned long, unsigned long, bool) () from /lib64/libasan.so.5
> > > > > > > > > #6  0x00007ffff71d9e51 in __asan::ErrorGeneric::ErrorGeneric(unsigned int, unsigned long, unsigned long, unsigned long, unsigned long, bool, unsigned long) () from /lib64/libasan.so.5
> > > > > > > > > #7  0x00007ffff72bdc2a in __asan::ReportGenericError(unsigned long, unsigned long, unsigned long, unsigned long, bool, unsigned long, unsigned int, bool) () from /lib64/libasan.so.5
> > > > > > > > > #8  0x00007ffff720ef9c in __interceptor_strlen.part.0 () from /lib64/libasan.so.5
> > > > > > > > > #9  0x00007ffff34ce2ec in string_key_sum (h=0x7fff6ff6e970, key=140735097062688) at src/vppinfra/hash.c:947
> > > > > > > > > #10 0x00007ffff34caf15 in key_sum (h=0x7fff6ff6e970, key=140735097062688) at src/vppinfra/hash.c:333
> > > > > > > > > #11 0x00007ffff34cbf76 in lookup (v=0x7fff6ff6e9b8, key=140735097062688, op=GET, new_value=0x0, old_value=0x0) at src/vppinfra/hash.c:557
> > > > > > > > > #12 0x00007ffff34cc59d in _hash_get_pair (v=0x7fff6ff6e9b8, key=140735097062688) at src/vppinfra/hash.c:653
> > > > > > > > > #13 0x000000000042e885 in lookup_hash_index (name=0x7fff7177c520 "/mem/stat segment") at src/vpp/stats/stat_segment.c:69
> > > > > > > > > #14 0x0000000000431790 in stat_segment_new_entry (name=0x7fff7177c520 "/mem/stat segment", t=STAT_DIR_TYPE_COUNTER_VECTOR_SIMPLE) at src/vpp/stats/stat_segment.c:402
> > > > > > > > > #15 0x00000000004401fb in vlib_stats_register_mem_heap (heap=0x7ffb67e00000) at src/vpp/stats/stat_segment_provider.c:96
> > > > > > > > > #16 0x00000000004327fa in vlib_map_stat_segment_init () at src/vpp/stats/stat_segment.c:493
> > > > > > > > > #17 0x00007ffff3e15d19 in vlib_main (vm=0x7fff6eeff680, input=0x7fff6a0a9f70) at src/vlib/main.c:3272
> > > > > > > > > #18 0x00007ffff3f0d924 in thread0 (arg=140735054608000) at src/vlib/unix/main.c:671
> > > > > > > > > #19 0x00007ffff34d9504 in clib_calljmp () at src/vppinfra/longjmp.S:123
> > > > > > > > > #20 0x00007fffffffc940 in ?? ()
> > > > > > > > > #21 0x00007ffff3f0e67e in vlib_unix_main (argc=282, argv=0x61d00001a480) at src/vlib/unix/main.c:751
> > > > > > > > > #22 0x000000000040b482 in main (argc=282, argv=0x61d00001a480) at src/vpp/vnet/main.c:336
> > > > > > > > > (gdb)
> > > > > > > > >
> > > > > > > > > Error 2:
> > > > > > > > > --------
> > > > > > > > > Program received signal SIGSEGV, Segmentation fault.
> > > > > > > > > 0x00007ffff71db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned long, unsigned long*, unsigned long*) () from /lib64/libasan.so.5
> > > > > > > > > (gdb) bt
> > > > > > > > > #0  0x00007ffff71db5c1 in __asan::FakeStack::AddrIsInFakeStack(unsigned long, unsigned long*, unsigned long*) () from /lib64/libasan.so.5
> > > > > > > > > #1  0x00007ffff72c2a11 in __asan::ThreadStackContainsAddress(__sanitizer::ThreadContextBase*, void*) () from /lib64/libasan.so.5
> > > > > > > > > #2  0x00007ffff72dcdc2 in __sanitizer::ThreadRegistry::FindThreadContextLocked(bool (*)(__sanitizer::ThreadContextBase*, void*), void*) () from /lib64/libasan.so.5
> > > > > > > > > #3  0x00007ffff72c3e5a in __asan::FindThreadByStackAddress(unsigned long) () from /lib64/libasan.so.5
> > > > > > > > > #4  0x00007ffff71d5fb6 in __asan::GetStackAddressInformation(unsigned long, unsigned long, __asan::StackAddressDescription*) () from /lib64/libasan.so.5
> > > > > > > > > #5  0x00007ffff71d73f9 in __asan::AddressDescription::AddressDescription(unsigned long, unsigned long, bool) () from /lib64/libasan.so.5
> > > > > > > > > #6  0x00007ffff71d9e51 in __asan::ErrorGeneric::ErrorGeneric(unsigned int, unsigned long, unsigned long, unsigned long, unsigned long, bool, unsigned long) () from /lib64/libasan.so.5
> > > > > > > > > #7  0x00007ffff72bdc2a in __asan::ReportGenericError(unsigned long, unsigned long, unsigned long, unsigned long, bool, unsigned long, unsigned int, bool) () from /lib64/libasan.so.5
> > > > > > > > > #8  0x00007ffff721252c in __interceptor_strnlen.part.0 () from /lib64/libasan.so.5
> > > > > > > > > #9  0x00007ffff3595a0c in strnlen_s_inline (s=0x7fff7177c520 "/mem/stat segment", maxsize=128) at src/vppinfra/string.h:800
> > > > > > > > > #10 0x00007ffff3595f63 in strcpy_s_inline (dest=0x7fff6a0a9ab0 "", dmax=128, src=0x7fff7177c520 "/mem/stat segment") at src/vppinfra/string.h:960
> > > > > > > > > #11 0x00007ffff3597e1c in strcpy_s (dest=0x7fff6a0a9ab0 "", dmax=128, src=0x7fff7177c520 "/mem/stat segment") at src/vppinfra/string.c:274
> > > > > > > > > #12 0x0000000000431820 in stat_segment_new_entry (name=0x7fff7177c520 "/mem/stat segment", t=STAT_DIR_TYPE_COUNTER_VECTOR_SIMPLE) at src/vpp/stats/stat_segment.c:408
> > > > > > > > > #13 0x00000000004401fb in vlib_stats_register_mem_heap (heap=0x7ffb67e00000) at src/vpp/stats/stat_segment_provider.c:96
> > > > > > > > > #14 0x00000000004327fa in vlib_map_stat_segment_init () at src/vpp/stats/stat_segment.c:493
> > > > > > > > > #15 0x00007ffff3e15d19 in vlib_main (vm=0x7fff6eeff680, input=0x7fff6a0a9f70) at src/vlib/main.c:3272
> > > > > > > > > #16 0x00007ffff3f0d924 in thread0 (arg=140735054608000) at src/vlib/unix/main.c:671
> > > > > > > > > #17 0x00007ffff34d9504 in clib_calljmp () at src/vppinfra/longjmp.S:123
> > > > > > > > > #18 0x00007fffffffc940 in ?? ()
> > > > > > > > > #19 0x00007ffff3f0e67e in vlib_unix_main (argc=282, argv=0x61d00001a480) at src/vlib/unix/main.c:751
> > > > > > > > > #20 0x000000000040b482 in main (argc=282, argv=0x61d00001a480) at src/vpp/vnet/main.c:336
> > > > > > > > > (gdb)
> > > > > > > > >
> > > > > > > > > Thanks,
> > > > > > > > > Chetan
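PS: for reference, the startup work-around Chetan mentions (suppressing the string_key_sum / strnlen_s_inline reports) can be expressed with libasan's runtime suppression file, something along these lines (a sketch of the mechanism; the exact entries he used may differ):

# asan.supp - silence the two interceptor reports hit during startup
interceptor_via_fun:string_key_sum
interceptor_via_fun:strnlen_s_inline

and then start vpp with, e.g.:

ASAN_OPTIONS=suppressions=/path/to/asan.supp bin/vpp -c startup.conf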