[vpp-dev] Pager Buffer limit
Hi,

I am working on creating a VPP CLI command which dumps a lot of information. Is there any limit on the number of lines that can be printed in paged format using vlib_cli_output? I am not able to print more than 100K lines; beyond that I get the error "-- pager buffer overflowed --". Also, how do I modify this limit?

Thanks & Regards,
Siddarth

-=-=-=-=-=-=-=-=-=-=-=- Links: You receive all messages sent to this group. View/Reply Online (#10402): https://lists.fd.io/g/vpp-dev/message/10402 Mute This Topic: https://lists.fd.io/mt/25236542/21656 Group Owner: vpp-dev+ow...@lists.fd.io Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com] -=-=-=-=-=-=-=-=-=-=-=-
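[Editor's note: the 100K-line ceiling matches the default CLI pager buffer limit, which is settable at startup. A sketch of the relevant startup.conf stanza, assuming your VPP release accepts the cli-pager-buffer-limit option in the unix section (verify against your tree, e.g. the unix config handling in src/vlib/unix/):

```
unix {
  # Raise the CLI pager buffer from its default of 100000 lines.
  # Assumption: cli-pager-buffer-limit exists in this VPP release.
  cli-pager-buffer-limit 500000
}
```

Setting the limit to 0 is typically treated as "unlimited" in releases that support the option, at the cost of unbounded memory use for very chatty commands.]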
[vpp-dev] VPP main thread gets stuck in a deadlock on running CLI in loop
Hi,

I am running a script which fires a VPP CLI command in a loop and keeps collecting some data. The CLI hangs after some time and the main thread gets stuck in a deadlock at "mheap_maybe_lock". This is happening with other CLI commands as well. Can anyone help? Also, has anyone else observed this issue?

Regards,
Siddarth

View/Reply Online (#10444): https://lists.fd.io/g/vpp-dev/message/10444
[vpp-dev] VPP cores out of 'unix_cli_read_ready'
Hi all,

I am facing an occasional VPP crash when running CLI commands (show node counters) whilst under load. I am using VPP version v18.01.1-100~g3a6948c. Upgrading to a newer version is not an option for me currently. VPP cores out of 'unix_cli_read_ready'. Here is the backtrace:

(gdb) bt
#0  0x2ab99d694207 in raise () from /lib64/libc.so.6
#1  0x2ab99d6958f8 in abort () from /lib64/libc.so.6
#2  0x00405ef3 in os_panic () at /bfs-build/build-area.42/builds/LinuxNBngp_mainline_RH7/2018-09-13-1353/third-party/vpp/vpp_1801/build-data/../src/vpp/vnet/main.c:268
#3  0x2ab99cb106d2 in clib_mem_alloc_aligned_at_offset (os_out_of_memory_on_failure=1, align_offset=<optimized out>, align=4, size=8589930500) at /bfs-build/build-area.42/builds/LinuxNBngp_mainline_RH7/2018-09-13-1353/third-party/vpp/vpp_1801/build-data/../src/vppinfra/mem.h:105
#4  vec_resize_allocate_memory (v=<optimized out>, length_increment=4294967296, data_bytes=<optimized out>, header_bytes=<optimized out>, header_bytes@entry=0, data_align=data_align@entry=4) at /bfs-build/build-area.42/builds/LinuxNBngp_mainline_RH7/2018-09-13-1353/third-party/vpp/vpp_1801/build-data/../src/vppinfra/vec.c:84
#5  0x2ab99b79d349 in _vec_resize (data_align=<optimized out>, header_bytes=<optimized out>, data_bytes=<optimized out>, length_increment=<optimized out>, v=<optimized out>) at /bfs-build/build-area.42/builds/LinuxNBngp_mainline_RH7/2018-09-13-1353/third-party/vpp/vpp_1801/build-data/../src/vppinfra/vec.h:142
#6  unix_cli_read_ready (uf=0x2ab99fe8509c) at /bfs-build/build-area.42/builds/LinuxNBngp_mainline_RH7/2018-09-13-1353/third-party/vpp/vpp_1801/build-data/../src/vlib/unix/cli.c:2504
#7  0x2ab99b7a900a in linux_epoll_input (vm=<optimized out>, node=<optimized out>, frame=<optimized out>) at /bfs-build/build-area.42/builds/LinuxNBngp_mainline_RH7/2018-09-13-1353/third-party/vpp/vpp_1801/build-data/../src/vlib/unix/input.c:198
#8  0x2ab99b772caa in dispatch_node (last_time_stamp=54044224341205389, frame=0x0, dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_PRE_INPUT, node=0x2ab99e234f40, vm=0x2ab99b9c42c0) at /bfs-build/build-area.42/builds/LinuxNBngp_mainline_RH7/2018-09-13-1353/third-party/vpp/vpp_1801/build-data/../src/vlib/main.c:988
#9  vlib_main_or_worker_loop (is_main=1, vm=0x2ab99b9c42c0) at /bfs-build/build-area.42/builds/LinuxNBngp_mainline_RH7/2018-09-13-1353/third-party/vpp/vpp_1801/build-data/../src/vlib/main.c:1498
#10 vlib_main_loop (vm=0x2ab99b9c42c0) at /bfs-build/build-area.42/builds/LinuxNBngp_mainline_RH7/2018-09-13-1353/third-party/vpp/vpp_1801/build-data/../src/vlib/main.c:1628
#11 vlib_main (vm=vm@entry=0x2ab99b9c42c0, input=input@entry=0x2ab9a055efa0) at /bfs-build/build-area.42/builds/LinuxNBngp_mainline_RH7/2018-09-13-1353/third-party/vpp/vpp_1801/build-data/../src/vlib/main.c:1783

Has anyone faced this issue? Any help will be greatly appreciated.

Regards,
Siddarth

View/Reply Online (#10830): https://lists.fd.io/g/vpp-dev/message/10830
[vpp-dev] VPP crashing out of dead_client_scan()
Hi all,

I am facing an occasional VPP crash from dead_client_scan() when I restart a client. I am using VPP version v18.01.1-100~g3a6948c. Upgrading to a newer version is not an option for me currently. Here is the backtrace:

Program terminated with signal 6, Aborted.
#0  0x2ad10a37f207 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install OPWVmepCR-99.9-el7.x86_64
(gdb) bt
#0  0x2ad10a37f207 in raise () from /lib64/libc.so.6
#1  0x2ad10a3808f8 in abort () from /lib64/libc.so.6
#2  0x00405ef3 in os_panic () at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-18-1026/third-party/vpp/vpp_1801/build-data/../src/vpp/vnet/main.c:268
#3  0x2ad1097fb6d2 in clib_mem_alloc_aligned_at_offset (os_out_of_memory_on_failure=1, align_offset=<optimized out>, align=4, size=60) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-18-1026/third-party/vpp/vpp_1801/build-data/../src/vppinfra/mem.h:105
#4  vec_resize_allocate_memory (v=<optimized out>, length_increment=length_increment@entry=4, data_bytes=<optimized out>, header_bytes=<optimized out>, header_bytes@entry=0, data_align=data_align@entry=4) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-18-1026/third-party/vpp/vpp_1801/build-data/../src/vppinfra/vec.c:84
#5  0x2ad1097bd5bc in _vec_resize (data_align=0, header_bytes=0, data_bytes=<optimized out>, length_increment=4, v=<optimized out>) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-18-1026/third-party/vpp/vpp_1801/build-data/../src/vppinfra/vec.h:142
#6  format_integer (s=<optimized out>, s@entry=0x308d6a84 "svm_client_scan_this_region_nolock:", number=<optimized out>, options=options@entry=0x2ad10d3cabf0) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-18-1026/third-party/vpp/vpp_1801/build-data/../src/vppinfra/format.c:535
#7  0x2ad1097be32e in do_percent (va=0x2ad10d3cac78, fmt=<optimized out>, _s=<optimized out>) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-18-1026/third-party/vpp/vpp_1801/build-data/../src/vppinfra/format.c:314
#8  va_format (s=0x308d6a84 "svm_client_scan_this_region_nolock:", fmt=<optimized out>, va=va@entry=0x2ad10d3cac78) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-18-1026/third-party/vpp/vpp_1801/build-data/../src/vppinfra/format.c:404
#9  0x2ad1097bd707 in format (s=<optimized out>, fmt=fmt@entry=0x2ad10980b5a3 "%wd:") at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-18-1026/third-party/vpp/vpp_1801/build-data/../src/vppinfra/format.c:423
#10 0x2ad1097b9234 in _clib_error (how_to_die=how_to_die@entry=4, function_name=function_name@entry=0x2ad10959a0a0 <__FUNCTION__.11526> "svm_client_scan_this_region_nolock", line_number=line_number@entry=1205, fmt=fmt@entry=0x2ad10959a00f "%s: cleanup ghost pid %d") at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-18-1026/third-party/vpp/vpp_1801/build-data/../src/vppinfra/error.c:122
#11 0x2ad109586b11 in svm_client_scan_this_region_nolock (rp=0x30021000) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-18-1026/third-party/vpp/vpp_1801/build-data/../src/svm/svm.c:1204
#12 0x2ad10820b896 in dead_client_scan (am=<optimized out>, now=<optimized out>, shm=0x3004300c) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-18-1026/third-party/vpp/vpp_1801/build-data/../src/vlibmemory/memory_vlib.c:803
#13 memclnt_process (vm=0x2ad1086af260, node=0x2ad10d3c2000, f=<optimized out>) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-18-1026/third-party/vpp/vpp_1801/build-data/../src/vlibmemory/memory_vlib.c:1091
#14 0x2ad10845b656 in vlib_process_bootstrap (_a=<optimized out>) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-18-1026/third-party/vpp/vpp_1801/build-data/../src/vlib/main.c:1231
#15 0x2ad1097c6838 in clib_calljmp () at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-18-1026/third-party/vpp/vpp_1801/build-data/../src/vppinfra/longjmp.S:110
#16 0x2ad10d249e30 in ?? ()
#17 0x2ad10845c999 in vlib_process_startup (f=0x0, p=0x2ad10d3c2000, vm=0x2ad1086af260) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-18-1026/third-party/vpp/vpp_1801/build-data/../src/vlib/main.c:1253
#18 dispatch_process (vm=0x2ad1086af260, p=0x2ad10d3c2000, last_time_stamp=59319358378453656, f=0x0) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-18-1026/third-party/vpp/vpp_1801/build-data/../src/vlib/main.c:1296

Any help will be greatly appreciated.

Regards,
Siddarth

View/Reply Online (#10918): https://lists.fd.io/g/vpp-dev/message/10918
[vpp-dev] VPP cores out of vlib_worker_thread_barrier_sync_int()
Hi all,

I am facing an occasional VPP crash from vlib_worker_thread_barrier_sync_int(). There was hardly any load on VPP when this happened. I am using VPP version v18.01.1-100~g3a6948c. Upgrading to a newer version is not an option for me currently. Here is the backtrace:

#0  0x2b95c3015207 in raise () from /lib64/libc.so.6
#1  0x2b95c30168f8 in abort () from /lib64/libc.so.6
#2  0x00405ef3 in os_panic () at /bfs-build/build-area.47/builds/LinuxNBngp_mainline_RH7/2018-10-15-1924/third-party/vpp/vpp_1801/build-data/../src/vpp/vnet/main.c:268
#3  0x2b95c111560a in vlib_worker_thread_barrier_sync_int (vm=0x2b95c1345260) at /bfs-build/build-area.47/builds/LinuxNBngp_mainline_RH7/2018-10-15-1924/third-party/vpp/vpp_1801/build-data/../src/vlib/threads.c:1480
#4  0x2b95c0e90d3a in vl_msg_api_handler_with_vm_node (am=0x2b95c10c14a0, the_msg=0x3026f5ac, vm=0x2b95c1345260, node=<optimized out>) at /bfs-build/build-area.47/builds/LinuxNBngp_mainline_RH7/2018-10-15-1924/third-party/vpp/vpp_1801/build-data/../src/vlibapi/api_shared.c:506
#5  0x2b95c0ea02ec in memclnt_process (vm=0x2b95c1345260, node=0x2b95c607, f=<optimized out>) at /bfs-build/build-area.47/builds/LinuxNBngp_mainline_RH7/2018-10-15-1924/third-party/vpp/vpp_1801/build-data/../src/vlibmemory/memory_vlib.c:983
#6  0x2b95c10f1656 in vlib_process_bootstrap (_a=<optimized out>) at /bfs-build/build-area.47/builds/LinuxNBngp_mainline_RH7/2018-10-15-1924/third-party/vpp/vpp_1801/build-data/../src/vlib/main.c:1231
#7  0x2b95c245c838 in clib_calljmp () at /bfs-build/build-area.47/builds/LinuxNBngp_mainline_RH7/2018-10-15-1924/third-party/vpp/vpp_1801/build-data/../src/vppinfra/longjmp.S:110
#8  0x2b95c5edfe30 in ?? ()
#9  0x2b95c10f2999 in vlib_process_startup (f=0x0, p=0x2b95c607, vm=0x2b95c1345260) at /bfs-build/build-area.47/builds/LinuxNBngp_mainline_RH7/2018-10-15-1924/third-party/vpp/vpp_1801/build-data/../src/vlib/main.c:1253
#10 dispatch_process (vm=0x2b95c1345260, p=0x2b95c607, last_time_stamp=39202939357732, f=0x0) at /bfs-build/build-area.47/builds/LinuxNBngp_mainline_RH7/2018-10-15-1924/third-party/vpp/vpp_1801/build-data/../src/vlib/main.c:1296
#11 0x000c in ?? ()
#12 0x0005000c in ?? ()
#13 0x in ?? ()

Any help will be greatly appreciated.

Regards,
Siddarth

View/Reply Online (#10919): https://lists.fd.io/g/vpp-dev/message/10919
Re: [vpp-dev] VPP cores out of vlib_worker_thread_barrier_sync_int()
Hi All,

I found out that one of the worker threads is stuck at this place, thus causing this issue:

#0  0x2b95c2bcda2d in accept () from /lib64/libpthread.so.0
#1  0x2b9703e026c8 in vfio_mp_sync_thread (arg=<optimized out>)
#2  0x2b95c2bc6dd5 in start_thread () from /lib64/libpthread.so.0
#3  0x2b95c30ddb3d in clone () from /lib64/libc.so.6

I see on my setup that these two drivers are loaded: vfio and vfio_iommu_type1. Can anyone tell if this could be causing some issue? Would the issue go away if I remove these?

Regards,
Siddarth

On Tue, Oct 23, 2018 at 4:22 PM siddarth rai wrote:
> Hi all,
> I am facing occasional VPP crash from vlib_worker_thread_barrier_sync_int(). There was hardly any load on VPP when this happened.
> [backtrace quoted in the original message above]

View/Reply Online (#10943): https://lists.fd.io/g/vpp-dev/message/10943
[vpp-dev] VPP cores out of vl_client_connect()
Hi all,

I am facing a VPP crash from vl_client_connect. It continuously crashes when the client tries to reconnect. I am using VPP version v18.01.1-100~g3a6948c. Here is the backtrace:

Program terminated with signal 6, Aborted.
#0  0x2b0f894ff207 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install OPWVmepCR-99.9-el7.x86_64
(gdb) bt
#0  0x2b0f894ff207 in raise () from /lib64/libc.so.6
#1  0x2b0f895008f8 in abort () from /lib64/libc.so.6
#2  0x2b0f98c6c4a9 in os_panic () at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-09-2333/third-party/vpp/vpp_1801/build-data/../src/vppinfra/unix-misc.c:176
#3  0x2b0f98c6c4c5 in os_out_of_memory () at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-09-2333/third-party/vpp/vpp_1801/build-data/../src/vppinfra/unix-misc.c:221
#4  0x2b0f98e8fda5 in clib_mem_alloc_aligned_at_offset (os_out_of_memory_on_failure=1, align_offset=0, align=64, size=120120) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-09-2333/third-party/vpp/vpp_1801/build-data/../src/vppinfra/mem.h:105
#5  clib_mem_alloc_aligned (align=64, size=120120) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-09-2333/third-party/vpp/vpp_1801/build-data/../src/vppinfra/mem.h:122
#6  unix_shared_memory_queue_init (nels=nels@entry=15000, elsize=elsize@entry=8, consumer_pid=41923, signal_when_queue_non_empty=signal_when_queue_non_empty@entry=0) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-09-2333/third-party/vpp/vpp_1801/build-data/../src/vlibmemory/unix_shared_memory_queue.c:58
#7  0x2b0f98e8b7d8 in vl_client_connect (name=name@entry=0x10852658 "opwv_ats_client-0x36d6000-0", ctx_quota=ctx_quota@entry=0, input_queue_size=input_queue_size@entry=15000, timeout=timeout@entry=10) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-09-2333/third-party/vpp/vpp_1801/build-data/../src/vlibmemory/memory_client.c:210
#8  0x2b0f992be17a in vapi_connect (ctx=0x1097c900, name=0x10852658 "opwv_ats_client-0x36d6000-0", chroot_prefix=0x0, max_outstanding_requests=<optimized out>, response_queue_size=15000, mode=VAPI_MODE_NONBLOCKING, timeout=10) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-09-2333/third-party/vpp/vpp_1801/build-data/../src/vpp-api/vapi/vapi.c:347
#9  0x2b0f951e6fb5 in TUIOVpp::TUIOVpp (this=0x10978990, uio=<optimized out>, requestQueueSize=1, responseQueueSize=15000, master=<optimized out>) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-09-2333/http/src/atsrequest/UIOVpp.cpp:87
#10 0x2b0f951cfc53 in TUIO::createInstance (this=this@entry=0x2b0f95423f20) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-09-2333/http/src/atsrequest/UIO.cpp:632
#11 0x2b0f951d37af in TUIO::init (this=this@entry=0x2b0f95423f20) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-09-2333/http/src/atsrequest/UIO.cpp:492
#12 0x2b0f951a92f4 in TATSRequest::init () at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-09-2333/http/src/atsrequest/ATSRequest.cpp:266
#13 0x2b0f9d4aebeb in init_oam_and_spa (argv=<optimized out>, argc=<optimized out>) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-09-2333/http/src/traffic_server/plugin.cpp:833
#14 TSPluginInit (argc=<optimized out>, argv=<optimized out>) at /bfs-build/build-area.44/builds/LinuxNBngp_mainline_RH7/2018-10-09-2333/http/src/traffic_server/plugin.cpp:972
#15 0x0050262b in plugin_load (internal=false, argv=0x7ffe36840900, argc=6) at Plugin.cc:166

Any help or pointers will be greatly appreciated.

Regards,
Siddarth

View/Reply Online (#10985): https://lists.fd.io/g/vpp-dev/message/10985
[vpp-dev] VPP crashes out of vlib_worker_thread_barrier_sync_int while workers stuck in clib_bihash_add_del
Hi,

I am using VPP version v18.01.1-100~g3a6948c. VPP sometimes crashes out of vlib_worker_thread_barrier_sync_int when running with load. Here is the backtrace:

(gdb) bt
#0  0x2b9e5d45d207 in raise () from /lib64/libc.so.6
#1  0x2b9e5d45e8f8 in abort () from /lib64/libc.so.6
#2  0x00405f23 in os_panic () at /bfs-build//2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vpp/vnet/main.c:268
#3  0x2b9e5b55c7ea in vlib_worker_thread_barrier_sync_int (vm=0x2b9e5b78c260) at /bfs-build//2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vlib/threads.c:1488
#4  0x2b9e5b2d6e2a in vl_msg_api_handler_with_vm_node (am=0x2b9e5b5084a0, the_msg=0x304d0b2c, vm=0x2b9e5b78c260, node=<optimized out>) at /bfs-build//2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vlibapi/api_shared.c:506
#5  0x2b9e5b2e645c in memclnt_process (vm=0x2b9e5b78c260, node=0x2b9e5e008000, f=<optimized out>) at /bfs-build//2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vlibmemory/memory_vlib.c:987
#6  0x2b9e5b5386e6 in vlib_process_bootstrap (_a=<optimized out>) at /bfs-build//2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vlib/main.c:1231
#7  0x2b9e5c8a48b8 in clib_calljmp () at /bfs-build//2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vppinfra/longjmp.S:110
#8  0x2b9e60327e30 in ?? ()
#9  0x2b9e5b539a59 in vlib_process_startup (f=0x0, p=0x2b9e5e008000, vm=0x2b9e5b78c260) at /bfs-build//2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vlib/main.c:1253
#10 dispatch_process (vm=0x2b9e5b78c260, p=0x2b9e5e008000, last_time_stamp=3140570395200949, f=0x0) at /bfs-build//2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vlib/main.c:1296
#11 0x in ?? ()

Some of the worker threads seem to be stuck in a bihash add_del operation (part of our implementation):

(gdb) info thr
  Id  Target Id  Frame
  7   Thread 0x2ba2f991a700 (LWP 69610) 0x2ba0e08cd184 in clib_bihash_add_del_40_8 (h=0x2b9e6050d030, add_v=0x2b9ea5ed8cf8, is_add=<optimized out>) at /spare/include/vppinfra/bihash_template.c:338
  ...
  5   Thread 0x2ba2f9317700 (LWP 69607) 0x2ba0e08cd184 in clib_bihash_add_del_40_8 (h=0x2b9e6050cc10, add_v=0x2b9e7ca45620, is_add=<optimized out>) at /spare/srai/include/vppinfra/bihash_template.c:338
  ...

Is it possible for worker threads to be stuck at this place for some reason? Any help would be appreciated.

Thanks,
Siddarth

View/Reply Online (#11327): https://lists.fd.io/g/vpp-dev/message/11327
[vpp-dev] Spike in core VPP VSZ
Hi,

I am using VPP version v19.08.1. I have noticed that the virtual memory size (VSZ) of core VPP has increased a lot since the last VPP version I used (v18.01). Earlier it used to be ~2 G, but now it is up to ~16 G. This is causing the VPP core-dump files to bloat up. When I say core VPP I mean VPP *without* any plugins.

Can anyone tell what changes caused this increase in VPP VSZ? Is there any way to reduce it?

I am attaching pmap output of VPP started with default configuration: https://pastebin.com/0nKwFyHF

Regards,
Siddarth

View/Reply Online (#15965): https://lists.fd.io/g/vpp-dev/message/15965
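[Editor's note: independent of the VPP-side answer, a generic Linux lever for keeping large, mostly-untouched mappings out of core files is the per-process coredump filter (documented in core(5)). A sketch, assuming a single running vpp process reachable via pgrep; verify the bit layout against core(5) on your kernel:

```
# Show the current filter. Bit meanings per core(5): 0 anon private,
# 1 anon shared, 2 file-backed private, 3 file-backed shared,
# 4 ELF headers, 5 hugetlb private, 6 hugetlb shared. Default is 0x33.
cat /proc/"$(pgrep -o vpp)"/coredump_filter

# Drop private hugetlb pages (bit 5) from dumps: 0x33 & ~(1<<5) = 0x13.
echo 0x13 > /proc/"$(pgrep -o vpp)"/coredump_filter
```

The filter is inherited across fork/exec, so setting it in the wrapper script that launches VPP applies it to the daemon from startup.]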
Re: [vpp-dev] Spike in core VPP VSZ
Hi Damjan,

If you have any advice for VPP 19.08, that would be great. The pmap output is already attached for your reference.

Regards,
Siddarth

On Wed, Apr 1, 2020 at 6:18 PM Damjan Marion wrote:
> Not sure about 19.08 but i just tried on latest master.
> Without dpdk core size is 1.29G uncompressed and 1.31G with dpdk plugin loaded.
>
> — Damjan

View/Reply Online (#15972): https://lists.fd.io/g/vpp-dev/message/15972
Re: [vpp-dev] Spike in core VPP VSZ
Hi,

I am still open to suggestions about the root cause of the 16G VSZ of VPP 19.08 without any plugins. Surely somebody is aware of the trick here, as the issue is not seen in later versions, as mentioned by Damjan. Any hint would be appreciated.

I am attaching pmap output of VPP started with default configuration: https://pastebin.com/0nKwFyHF

Regards,
Siddarth

View/Reply Online (#16078): https://lists.fd.io/g/vpp-dev/message/16078
Re: [vpp-dev] Spike in core VPP VSZ
Hi Damjan,

I have tried the latest master and it seems that the core size is around 1.3 G, as you mentioned above. However, the VPP VSZ is still around 16G. This used to be low in earlier releases (around 2G). Can you give any hint why the virtual size has increased so much? The pmap output is attached above.

Regards,
Siddarth

On Wed, Apr 1, 2020 at 6:18 PM Damjan Marion wrote:
> Not sure about 19.08 but i just tried on latest master.
> Without dpdk core size is 1.29G uncompressed and 1.31G with dpdk plugin loaded.
>
> — Damjan

View/Reply Online (#16523): https://lists.fd.io/g/vpp-dev/message/16523
[vpp-dev] Query regarding Scaling of CPU utilization with different number of workers
Hello everyone,

I am using VPP 19.08 and running it in simple forwarding mode. It is handling thousands of connections per second and over 8 Gbps of throughput. I am curious to know what the general experience has been with CPU usage per worker while increasing the number of workers and queues. I do not expect per-worker CPU usage to drop linearly, but I would like to know how much it deviates from linear scaling. In my experience, this deviation keeps increasing with the number of workers.

Regards,
Siddarth

View/Reply Online (#17402): https://lists.fd.io/g/vpp-dev/message/17402
[vpp-dev] Events from VPP worker thread over shared memory
Hi,

I want to send some events from VPP to a client over shared memory. The triggers for these events are detected on my worker threads. Can I send the events directly from the worker threads, or do I need to hand them to the main thread first, from where they will be forwarded over shared memory?

Regards,
Siddarth

View/Reply Online (#12454): https://lists.fd.io/g/vpp-dev/message/12454
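[Editor's note: the usual pattern in this VPP era is to bounce the event to the main thread and send the shared-memory message from there, since the binary-API queues are serviced by the main thread. A rough sketch, assuming the vl_api_rpc_call_main_thread() helper exists in your tree; all my_event_* names are hypothetical placeholders:

```c
#include <vlibmemory/api.h>

/* Hypothetical event payload; all my_event_* names are placeholders. */
typedef struct
{
  u32 session_index;
  u32 event_code;
} my_event_t;

/* Runs on the main thread, so it is safe here to allocate the
 * vl_api_* event message and enqueue it on the client's shared-memory
 * queue. */
static void
my_event_rpc_cb (void *arg)
{
  my_event_t *e = (my_event_t *) arg;
  /* ... build the vl_api_* event message for e->session_index and
   * send it to the registered client ... */
  (void) e;
}

/* Called from a worker thread when the trigger fires: serialize the
 * event and ask the main thread to deliver it. The helper copies the
 * data, so a stack-allocated payload is fine. */
static void
my_notify_from_worker (u32 session_index, u32 event_code)
{
  my_event_t e = { .session_index = session_index,
                   .event_code = event_code };
  vl_api_rpc_call_main_thread (my_event_rpc_cb, (u8 *) &e, sizeof (e));
}
```

This keeps all shared-memory queue manipulation on the main thread, avoiding races with the API message pump.]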
[vpp-dev] VPP crash in show PCI command
Hello,

I am using an older version of VPP (18.01), but I checked the code of 19.04 and I couldn't find any change in the problematic area.

I sometimes get a crash with the "show pci" command. From the core, it seems to be crashing here:

format_vlib_pci_vpd (s=0x2ad325f2cb58 ":02:00.0 0 14e4:1657 5.0 GT/s x2 tg3", ' ' , "HP Ethernet 1Gb 4-port 331i Adapter", args=<optimized out>)

I checked this function and it seems that a particular pointer variable (u8 *id) always ends up with an invalid value: "0x30 <Address 0x30 out of bounds>". Further down in this function, we try to access this pointer and that sometimes causes VPP to crash. Could it be that the va_arg call used to fetch this "id" is returning something invalid?

Can someone please help me with this? Attaching the full stack trace as well.

Regards,
Siddarth

(gdb) bt
#0  0x2ad210d83207 in raise () from /lib64/libc.so.6
#1  0x2ad210d848f8 in abort () from /lib64/libc.so.6
#2  0x00405eb9 in os_panic () at vpp_path/vpp/vpp_1801/build-data/../src/vpp/vnet/main.c:266
#3  0x2ad20ee8d777 in unix_signal_handler (signum=<optimized out>, si=<optimized out>, uc=<optimized out>) at vpp_path/vpp/vpp_1801/build-data/../src/vlib/unix/main.c:126
#4  <signal handler called>
#5  format_vlib_pci_vpd (s=0x2ad325f2cb58 ":02:00.0 0 14e4:1657 5.0 GT/s x2 tg3", ' ' , "HP Ethernet 1Gb 4-port 331i Adapter", args=<optimized out>) at vpp_path/vpp/vpp_1801/build-data/../src/vlib/pci/pci.c:229
#6  0x2ad2101c10d4 in do_percent (va=0x2ad2f0dadb48, fmt=<optimized out>, _s=<optimized out>) at vpp_path/vpp/vpp_1801/build-data/../src/vppinfra/format.c:373
#7  va_format (s=0x2ad325f2cb58 ":02:00.0 0 14e4:1657 5.0 GT/s x2 tg3", ' ' , "HP Ethernet 1Gb 4-port 331i Adapter", s@entry=0x0, fmt=<optimized out>, va=va@entry=0x2ad2f0dadb48) at vpp_path/vpp/vpp_1801/build-data/../src/vppinfra/format.c:404
#8  0x2ad20ee458c5 in vlib_cli_output (vm=vm@entry=0x2ad20f0a8240, fmt=fmt@entry=0x2ad20ee9a630 "%-13U%-5v%04x:%04x %-13U%-16s%-32v%U") at vpp_path/vpp/vpp_1801/build-data/../src/vlib/cli.c:687
#9  0x2ad20ee73876 in show_pci_fn (vm=0x2ad20f0a8240, input=<optimized out>, cmd=<optimized out>) at vpp_path/vpp/vpp_1801/build-data/../src/vlib/pci/pci.c:106
#10 0x2ad20ee45d65 in vlib_cli_dispatch_sub_commands (vm=vm@entry=0x2ad20f0a8240, cm=cm@entry=0x2ad20f0a8420, input=input@entry=0x2ad2f0daded0, parent_command_index=<optimized out>) at vpp_path/vpp/vpp_1801/build-data/../src/vlib/cli.c:588
#11 0x2ad20ee4611d in vlib_cli_dispatch_sub_commands (vm=vm@entry=0x2ad20f0a8240, cm=cm@entry=0x2ad20f0a8420, input=input@entry=0x2ad2f0daded0, parent_command_index=parent_command_index@entry=0) at vpp_path/vpp/vpp_1801/build-data/../src/vlib/cli.c:566
#12 0x2ad20ee46361 in vlib_cli_input (vm=0x2ad20f0a8240, input=input@entry=0x2ad2f0daded0, function=function@entry=0x2ad20ee87980, function_arg=function_arg@entry=0) at vpp_path/vpp/vpp_1801/build-data/../src/vlib/cli.c:662
#13 0x2ad20ee88a97 in unix_cli_process_input (cli_file_index=0, cm=0x2ad20f0a80e0) at vpp_path/vpp/vpp_1801/build-data/../src/vlib/unix/cli.c:2303
#14 0x2ad20ee8c61d in unix_cli_process (vm=0x2ad20f0a8240, rt=0x2ad2f0d9d000, f=<optimized out>) at vpp_path/vpp/vpp_1801/build-data/../src/vlib/unix/cli.c:2419
#15 0x2ad20ee53b36 in vlib_process_bootstrap (_a=<optimized out>) at vpp_path/vpp/vpp_1801/build-data/../src/vlib/main.c:1231
#16 0x2ad2101c9bf8 in clib_calljmp () at vpp_path/vpp/vpp_1801/build-data/../src/vppinfra/longjmp.S:110

View/Reply Online (#13127): https://lists.fd.io/g/vpp-dev/message/13127
Re: [vpp-dev] VPP crash in show PCI command
I believe you can see this issue with any NIC (an invalid value being returned from va_arg). It does not always crash, but the value is always wrong.

On Thu, May 23, 2019 at 6:58 PM Damjan Marion wrote:
> > On 23 May 2019, at 15:21, siddarth rai wrote:
> > [...]
>
> Probably bug in PCI VPD parser. Hard to debug blindly, without having
> access to problematic NIC.

View/Reply Online (#13136): https://lists.fd.io/g/vpp-dev/message/13136
Re: [vpp-dev] VPP crash in show PCI command
No luck. Tried with your suggested change, still getting the same. Regards, Siddarth

On Fri, May 24, 2019 at 2:07 PM Damjan Marion wrote:
> Can you try to replace:
>
> format_vlib_pci_vpd, d->vpd_r, 0);
>
> with
>
> format_vlib_pci_vpd, d->vpd_r, NULL);
>
> in show_pci_fn() - src/vlib/pci/pci.c
>
> On 24 May 2019, at 07:45, siddarth rai wrote:
> [...]

View/Reply Online (#13152): https://lists.fd.io/g/vpp-dev/message/13152
[vpp-dev] issue with vlib_thread_init
Hi all, I am using VPP version 19.04. I recently migrated to this VPP version from an older version and I am facing some issues during VPP startup. I have a plugin which I initialize using VLIB_INIT_FUNCTION. During the init, I use 'vlib_thread_main->n_threads' to fetch the number of workers and use this count to initialize different vectors in my plugin. I believe that this 'vlib_thread_main' is initialized during 'vlib_thread_init'. I have configured 4 workers in my startup.conf. However, sometimes during startup, in my plugin init, the value of 'vlib_thread_main->n_threads' appears to be 0. In fact, the entire vlib_thread_main is uninitialized. This happens only some of the time, causing a crash in my plugin. Has anyone else seen this issue? Regards, Siddarth View/Reply Online (#13732): https://lists.fd.io/g/vpp-dev/message/13732
[vpp-dev] VPP VSZ shoots to 200GB because of DPDK plugin
Hello all, I am working with VPP 19_04. I noticed that the VSZ of VPP is showing 200+ GB. On further debugging, I discovered that the 'dpdk_plugin' is the one causing this. If I disable the dpdk plugin, the VSZ falls below 20G. Can anyone help me understand what it is in the dpdk plugin that is causing this bulge in VSZ? Is there any way to reduce it? Any help would be appreciated. Regards, Siddarth Rai View/Reply Online (#14860): https://lists.fd.io/g/vpp-dev/message/14860
Re: [vpp-dev] VPP VSZ shoots to 200GB because of DPDK plugin
Hi Benoit, Unfortunately, that is not an option for me. I am using the dpdk plugin. Regards, Siddarth

On Mon, Dec 16, 2019 at 2:50 PM Benoit Ganne (bganne) wrote:
> No idea why DPDK is so greedy, however what do you use the DPDK for? In a
> lot of scenario we can run VPP without DPDK. In this case, you could
> disable the dpdk plugin.
>
> Best
> ben
>
> [...]

View/Reply Online (#14898): https://lists.fd.io/g/vpp-dev/message/14898
Re: [vpp-dev] VPP VSZ shoots to 200GB because of DPDK plugin
Hi Damjan, The issue here is that huge core files are generated, which take up a lot of space, and the system down time is huge too. Even if I compress it, I will have to de-compress wherever I try to debug it, and the disk space requirement will be huge. It would be interesting to get a hint on what exactly is contributing to such high virtual memory. I have created a jira for this: https://jira.fd.io/browse/VPP-1810 Also, please find attached the startup conf I am using. Regards, Siddarth

On Mon, Dec 16, 2019 at 6:50 PM Damjan Marion wrote:
>
> What is the exact problem here? vsz is just virtual memory size, it doesn't
> mean that memory is allocated...
>
> Can you share your startup.conf?
>
> —
> Damjan
>
> [...]

startup.conf Description: Binary data View/Reply Online (#14906): https://lists.fd.io/g/vpp-dev/message/14906
[vpp-dev] Linking external libraries to VPP executable
Hello, I am working on VPP 1908. I want to link some external non-vpp libraries to my vpp executable lib. Would it work if I add the path of the external lib to the vlib/CMakeLists.txt file using 'LINK_LIBRARIES'? Can anyone tell if this is the right way or if there is any other way? Any help will be appreciated. Thank you, Siddarth View/Reply Online (#15013): https://lists.fd.io/g/vpp-dev/message/15013
Re: [vpp-dev] Linking external libraries to VPP executable
Hi, I mean only 'vpp executable'. Regards, Siddarth

On Thu, Jan 2, 2020 at 5:10 PM Damjan Marion wrote:
>
> what do you mean by "vpp executable lib"? can you provide more details
> what exactly do you want to do?
>
> --
> Damjan
>
> [...]

View/Reply Online (#15020): https://lists.fd.io/g/vpp-dev/message/15020
Re: [vpp-dev] Linking external libraries to VPP executable
Hi, My plugin is linked to an external lib. When my plugin calls the init function of that external lib, it runs into some problems with symbols (presumably the external lib's init function calls dlopen to load some other files). It seems the problem comes from the fact that VPP loads plugins with dlopen. A simple executable linking directly to the external library and calling its init function does not show the same problem.

I prototyped the whole scenario by making a stub executable (acting like the VPP executable). I also made a stub shared object (like the plugin) which links to the external lib. Then I did a dlopen of this shared object from my stub executable (just like VPP), where the shared object code calls the init function of the external lib. This reproduces the same problem as seen with VPP. I then additionally linked my stub executable with the external library, and then everything worked fine. In fact, I went ahead and changed src/vlib/CMakeLists.txt so that the external lib links with vlib. This works fine for me.

I just wanted to find out if there is any generic infra available to link additional external libraries to, say, vlib, without fiddling around with the build system of VPP. Regards, Siddarth

On Thu, Jan 2, 2020 at 7:38 PM Damjan Marion wrote:
>
> 99% of vpp code is not in the vpp executable so i wonder why do you want to do
> that? both some of vpp standard libraries (vnet, vlib, ..) and some of vpp
> plugins are linked against libs, so to be able to help you i need more
> details...
>
> --
> Damjan
>
> [...]

View/Reply Online (#15027): https://lists.fd.io/g/vpp-dev/message/15027
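For reference, a sketch of the CMake direction discussed in this thread. Since the symbols are needed by a plugin, linking the external library into the plugin target (rather than into vlib) would keep the change out of VPP's core build files; as far as I can tell VPP's `add_vpp_plugin` macro accepts a `LINK_LIBRARIES` argument for this. The names and paths below (`myplugin`, `/opt/extlib`) are placeholders, and this is illustrative, not a verified recipe for 19.08:

```cmake
# Hypothetical plugin CMakeLists.txt fragment; extlib path is a placeholder.
add_vpp_plugin(myplugin
  SOURCES
  myplugin.c

  LINK_LIBRARIES
  /opt/extlib/lib/libextlib.so
)
```

Note this addresses ordinary link-time dependencies; it does not by itself change the dlopen symbol-visibility behaviour described above.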
Re: [vpp-dev] VPP VSZ shoots to 200GB because of DPDK plugin
Hi, Thanks a lot for the tip. I would still like to know what is causing the DPDK plugin to take up so much VSZ. If anyone can give me any pointers, I will try and debug it further to hopefully control the VSZ. Regards, Siddarth

On Tue, Dec 17, 2019 at 2:57 PM Benoit Ganne (bganne) wrote:
> Hi Siddarth,
>
> > The issue here is that huge core files are generated, which take up a lot
> > of space and the system down time is huge too.
> > Even if I compress it, I will have to de-compress wherever I try to debug
> > it and the disk space requirement will be huge.
>
> I know this will not fix your issue, however that might help:
> - when the core file is generated, if the VA is not in use it should not
> take space on the disk because it should be stored as a sparse file. Here
> is an example I have locally (note the 117M allocated on disk vs the 2.6G
> "virtual" size):
> bganne@ubuntu1804:~$ ls -lsh core
> 117M -rw-rw-r-- 1 bganne bganne 2.6G Nov 12 15:34 core
> Also, if you do not compress it at generation time (via
> /proc/sys/kernel/core_pattern or similar) it should not impact the downtime
> as it is simply not written nor processed
> - if you compress/decompress it with gzip, it will not produce a sparse
> file but you can 're-sparse' it using eg. dd:
> bganne@ubuntu1804:~$ zcat core.gz | dd conv=sparse of=core
>
> Ben

View/Reply Online (#15243): https://lists.fd.io/g/vpp-dev/message/15243
Re: [vpp-dev] VPP VSZ shoots to 200GB because of DPDK plugin
Hi, Here is the output : 0040-004ca000 r-xp fd:01 188780472 /root/vpp/build-root/install-vpp-native/vpp/bin/vpp 006c9000-006ca000 r--p 000c9000 fd:01 188780472 /root/vpp/build-root/install-vpp-native/vpp/bin/vpp 006ca000-006cb000 rw-p 000ca000 fd:01 188780472 /root/vpp/build-root/install-vpp-native/vpp/bin/vpp 006cb000-006cc000 rw-p 00:00 0 01b4b000-01c8b000 rw-p 00:00 0 [heap] 1-10002f000 rw-p 00:00 0 130008000-130029000 rw-s 00:13 779982 /dev/shm/global_vm 130029000-13104a000 rw-s 00:13 779983 /dev/shm/vpe-api 13104a000-134008000 rw-s 01042000 00:13 779982 /dev/shm/global_vm 14000-94000 r--p 00:00 0 94000-940001000 rw-p 00:00 0 ac0001000-ac0002000 rw-p 00:00 0 c40002000-c40003000 rw-p 00:00 0 dc0003000-dc0064000 rw-p 00:00 0 dc0c64000-dc0cc5000 rw-p 00:00 0 dc18c5000-dc1926000 rw-p 00:00 0 dc2526000-dc2587000 rw-p 00:00 0 10-100280 rw-s 00:27 779981 /buffers-numa-0 100280-14 ---p 00:00 0 7ef98864f000-7efa3f80 rw-p 00:00 0 7efa3f80-7efe3f80 r--p 00:00 0 7efe3fa0-7f023fa0 r--p 00:00 0 7f023fc0-7f063fc0 r--p 00:00 0 7f063fe0-7f064000 rw-p 00:0e 780006 /anon_hugepage (deleted) 7f064000-7f064020 rw-p 00:0e 780008 /anon_hugepage (deleted) 7f064020-7f064040 rw-p 00:0e 780009 /anon_hugepage (deleted) 7f064040-7f0a3fe0 r--p 00:00 0 7f0a4000-7f124000 r--p 00:00 0 7f128000-7f1a8000 r--p 00:00 0 7f1a8c00-7f1a8c021000 rw-p 00:00 0 7f1a8c021000-7f1a9000 ---p 00:00 0 7f1a93ff4000-7f1a9800 rw-p 00:00 0 7f1a9800-7f1a98021000 rw-p 00:00 0 7f1a98021000-7f1a9c00 ---p 00:00 0 7f1a9c00-7f1a9c021000 rw-p 00:00 0 7f1a9c021000-7f1aa000 ---p 00:00 0 7f1aa000-7f1aa0021000 rw-p 00:00 0 7f1aa0021000-7f1aa400 ---p 00:00 0 7f1aa49b8000-7f1aa52da000 rw-p 00:00 0 7f1aa52da000-7f1aa52db000 ---p 00:00 0 7f1aa52db000-7f1ac000 rw-p 00:00 0 7f1ac000-7f22c000 r--p 00:00 0 7f22c000-7f22c0001000 rw-s febd2000 00:12 11698 /sys/devices/pci:00/:00:04.0/resource1 7f22c0001000-7f22c0005000 rw-s fe004000 00:12 11699 /sys/devices/pci:00/:00:04.0/resource4 7f22c043a000-7f22c1a59000 rw-p 00:00 0 
7f22c1a59000-7f22c1a5c000 r-xp fd:01 138716319 /root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/vmxnet3_test_plugin.so 7f22c1a5c000-7f22c1c5b000 ---p 3000 fd:01 138716319 /root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/vmxnet3_test_plugin.so 7f22c1c5b000-7f22c1c5c000 r--p 2000 fd:01 138716319 /root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/vmxnet3_test_plugin.so 7f22c1c5c000-7f22c1c5d000 rw-p 3000 fd:01 138716319 /root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/vmxnet3_test_plugin.so 7f22c1c5d000-7f22c1c5f000 r-xp fd:01 138561791 /root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/stn_test_plugin.so 7f22c1c5f000-7f22c1e5e000 ---p 2000 fd:01 138561791 /root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/stn_test_plugin.so 7f22c1e5e000-7f22c1e5f000 r--p 1000 fd:01 138561791 /root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/stn_test_plugin.so 7f22c1e5f000-7f22c1e6 rw-p 2000 fd:01 138561791 /root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/stn_test_plugin.so 7f22c1e6-7f22c1e63000 r-xp fd:01 138561788 /root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/nsim_test_plugin.so 7f22c1e63000-7f22c2062000 ---p 3000 fd:01 138561788 /root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/nsim_test_plugin.so 7f22c2062000-7f22c2063000 r--p 2000 fd:01 138561788 /root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/nsim_test_plugin.so 7f22c2063000-7f22c2064000 rw-p 3000 fd:01 138561788 /root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/nsim_test_plugin.so 7f22c2064000-7f22c2067000 r-xp fd:01 138561787 /root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/nsh_test_plugin.so 7f22c2067000-7f22c2266000 ---p 3000 fd:01 138561787 /root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/nsh_test_plugin.so 7f22c2266000-7f22c2267000 r--p 2000 fd:01 
138561787 /root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/nsh_test_plugin.so 7f22c2267000-7f22c2268000 rw-p 3000 fd:01 138561787 /root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/nsh_test_plugin.so 7f22c2268000-7f22c2271000 r-xp fd:01 138561786 /root/vpp/build-root/install-vpp-native/vp
Re: [vpp-dev] VPP VSZ shoots to 200GB because of DPDK plugin
Hi Damjan, Output for cat /proc/$(pgrep vpp)/maps : https://pastebin.com/9TfbcPkV cat /proc/$(pgrep vpp)/numa_maps : https://pastebin.com/wJeEKf5f Regards, Siddarth

On Fri, Jan 24, 2020 at 7:18 PM Damjan Marion wrote:
>
> I asked pastebin to avoid bothering other people of the list with the long
> ouptut…
>
> Can you also capture "cat /proc/$(pgrep vpp)/numa_maps"
>
> —
> Damjan
>
> On 24 Jan 2020, at 14:02, siddarth rai wrote:
> Hi,
> Here is the output :
> [...]
[vpp-dev] VPP main threads gets stuck when system time is changed
Hi, We have VPP 1801 in one of our systems. I understand that support for VPP 1801 is not there anymore, but I am requesting advice nevertheless. The system time is changed by a few seconds using 'date -s'. Then the VPP main thread goes to 100% CPU utilization. The issue is only reproduced when traffic is running. I attached gdb to VPP and saw that while the worker thread is working normally, the main thread seems to be stuck at clib_cpu_time_now: https://pastebin.com/iJm0uZqx Also, here is the bt of main: https://pastebin.com/CSjv4KsW Please help. Any pointers will be much appreciated. Regards, Siddarth View/Reply Online (#15367): https://lists.fd.io/g/vpp-dev/message/15367
Re: [vpp-dev] VPP main threads gets stuck when system time is changed
Hi Dave, I understand a lot of things have changed between 1801 and the latest release. But based on the pstack we were seeing, I went ahead and cherry-picked a small change from latest in vlib/threads.c, in the function 'vlib_worker_thread_barrier_sync_int'.

I replaced this:

    while ((now = vlib_time_now (vm)) < vm->barrier_no_close_before);

with this block:

    while (1)
      {
        now = vlib_time_now (vm);
        /* Barrier hold-down timer expired? */
        if (now >= vm->barrier_no_close_before)
          break;
        if ((vm->barrier_no_close_before - now) > (2.0 * BARRIER_MINIMUM_OPEN_LIMIT))
          {
            clib_warning ("clock change: would have waited for %.4f seconds",
                          (vm->barrier_no_close_before - now));
            break;
          }
      }

This seems to resolve some of the problems. The vapi client doesn't get disconnected any more. The CLI also keeps working, so the main thread is no longer stuck when the system time is changed. I do see the above clib warning as well. However, the main thread keeps running at 100% CPU utilization for the time reported by the clib warning. I see that the throughput goes way down and the workers are under-performing. This under-performance of the workers lasts for the same period of time as printed in the log - "clock change: would have waited for xxx seconds". After this period, the system returns to normal. I was wondering if I could pick up something else to get this right? Regards, Siddarth

On Fri, Feb 7, 2020 at 9:57 PM Dave Barach (dbarach) wrote:
> FWIW, master/latest continues to pass traffic w/ "date -s" deltas of both
> plus and minus a couple of minutes. This is not a huge surprise, nor is it
> a surprise that stable/1801 fails miserably under similar albeit less
> draconian circumstances.
>
> The algorithm changes mentioned below don't involve a lot of code, but
> they are pretty first-order important.
>
> HTH...
> Dave
>
> From: vpp-dev@lists.fd.io On Behalf Of Dave Barach via Lists.Fd.Io
> Sent: Friday, February 7, 2020 10:53 AM
> To: siddarth rai ; vpp-dev
> Subject: Re: [vpp-dev] VPP main threads gets stuck when system time is changed
>
> Try patching src/vppinfra/time.[ch] from master/latest. The algorithms
> involved have been changed quite a bit since 18.01...
>
> Dave
>
> [...]

View/Reply Online (#15376): https://lists.fd.io/g/vpp-dev/message/15376
Re: [vpp-dev] VPP main threads gets stuck when system time is changed
Hi Dave,

In addition to the change mentioned earlier, I have tried one more change. In file vppinfra/time.h, I replaced 'CLOCK_REALTIME' with 'CLOCK_MONOTONIC'. This seems to have done the trick for now.

Just wondering what the impact of this change could be elsewhere. Should we watch out for any blind spots?

Regards,
Siddarth

P.S.: We have moved to 19.08, but some deployments are live and we can't help but work with 18.01 and resolve the issues that come up.

On Tue, Feb 11, 2020 at 6:21 PM Dave Barach (dbarach) wrote:
> Start vpp under gdb, and produce the condition. Interrupt vpp, switch to
> thread 0 and collect a backtrace. Based on the backtrace, it should be
> fairly clear what's happening.
>
> Once again: vpp 18.01 is 2+ year old software which the community no
> longer supports. If at all possible, please rebase your work onto 19.08
> (LTS), or 20.01 (current release).
>
> HTH... Dave
>
> [earlier quoted messages trimmed]

Links: You receive all messages sent to this group.
View/Reply Online (#15380): https://lists.fd.io/g/vpp-dev/message/15380
[vpp-dev] VPP on Azure
Hello All,

I am trying to run VPP (version 22.02) on a Linux (CentOS 7.9.2009) VM on Azure. I have made sure that accelerated networking is enabled on the VM. However, when I start up VPP, I notice that it is not able to bind to the interface whitelisted in startup.conf.

I found this article in the fd.io docs: https://fd.io/docs/vpp/v2101/usecases/vppinazure.html. According to it, VPP 18.07 (with DPDK 18.02) was tried on Azure, and newer versions were causing issues. So I wanted to understand whether newer versions of VPP (with DPDK) are supported on Azure, and whether anyone has tried this recently?

Regards,
Siddarth

Links: You receive all messages sent to this group.
View/Reply Online (#21605): https://lists.fd.io/g/vpp-dev/message/21605
Mute This Topic: https://lists.fd.io/mt/92162915/21656