Re: [vpp-dev] VPP hangs after few hours of continuous VAPI Send/Recv #acl #abf #vapi

2021-09-13 Thread RaviKiran Veldanda
Hi Andrew,
Thanks for the prompt reply. Please find the details below.
VAPI configuration ==> I believe you are talking about this:
rv = vapi_connect (ctx, "checking", NULL, 512, 512, VAPI_MODE_BLOCKING, true);
The vpp_ifaddr.cfg --> contains all the interface information.
The startup.conf is the VPP startup configuration.

I can try with a single worker thread, but we need a minimum of 4 worker
threads anyway.
//Ravi
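
[For completeness, a minimal sketch of the connection setup implied by the call
quoted above (blocking mode, 512 outstanding requests, 512-entry response
queue, keepalives handled by VAPI); the client name "checking" comes from that
snippet, everything else is just the bare minimum needed to compile:

#include <vapi/vapi.h>

static vapi_ctx_t ctx;

int
connect_to_vpp (void)
{
  if (vapi_ctx_alloc (&ctx) != VAPI_OK)
    return -1;

  /* same parameters as the vapi_connect call quoted above */
  if (vapi_connect (ctx, "checking", NULL, 512, 512, VAPI_MODE_BLOCKING, true)
      != VAPI_OK)
    {
      vapi_ctx_free (ctx);
      return -1;
    }
  return 0;
}

void
disconnect_from_vpp (void)
{
  vapi_disconnect (ctx);
  vapi_ctx_free (ctx);
}
]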


vpp_ifaddr.cfg
Description: Binary data


startup.conf
Description: Binary data

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20122): https://lists.fd.io/g/vpp-dev/message/20122
Mute This Topic: https://lists.fd.io/mt/85566777/21656
Mute #vapi:https://lists.fd.io/g/vpp-dev/mutehashtag/vapi
Mute #acl:https://lists.fd.io/g/vpp-dev/mutehashtag/acl
Mute #abf:https://lists.fd.io/g/vpp-dev/mutehashtag/abf
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP hangs after few hours of continuous VAPI Send/Recv #acl #abf #vapi

2021-09-13 Thread Andrew Yourtchenko
Oh and even before that - could you see if the same issue occurs in the case of 
no workers (so only a single thread scenario) ? That will help to narrow down 
the area + focus my repro.

--a

> On 13 Sep 2021, at 18:06, Andrew Yourtchenko via lists.fd.io 
>  wrote:
> 
> Cool! Would you be able to share the app + VPP startup config to see if I 
> can repro this locally ?
> 
> --a
> 
>>> On 13 Sep 2021, at 15:25, RaviKiran Veldanda  wrote:
>>> 
>> 
>> [Edited Message Follows]
>> 
>> This is reproduced with a standalone app that just creates and deletes the 
>> ACL/ABF policies. No traffic, no memif.
>> Please find the GDB context: 
>> 
>> It's waiting in pthread_cond_wait --> not a timed wait.
>> 
>> 
>> gdb /usr/bin/vpp
>> 
>>  
>> 
>> 0x7f023297848c in pthread_cond_wait@@GLIBC_2.3.2 () from 
>> /lib64/libpthread.so.0
>> 
>> Missing separate debuginfos, use: yum debuginfo-install 
>> glibc-2.28-101.el8.x86_64 libuuid-2.32.1-22.el8.x86_64 
>> mbedtls-2.16.9-1.el8.x86_64 numactl-libs-2.0.12-9.el8.x86_64 
>> openssl-libs-1.1.1c-15.el8.x86_64 sssd-client-2.2.3-20.el8.x86_64 
>> zlib-1.2.11-13.el8.x86_64
>> 
>> (gdb) bt
>> 
>> #0  0x7f023297848c in pthread_cond_wait@@GLIBC_2.3.2 () from 
>> /lib64/libpthread.so.0
>> 
>> #1  0x7f0232da1016 in svm_queue_wait_inline (q=0x1301c6c80) at 
>> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/svm/queue.c:157
>> 
>> #2  svm_queue_add (q=0x1301c6c80, elem=elem@entry=0x7f01e1d01dc8 
>> "X\241\005\060\001", nowait=nowait@entry=0) at 
>> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/svm/queue.c:284
>> 
>> #3  0x7f02346b17d7 in vl_msg_api_send_shmem (q=, 
>> elem=elem@entry=0x7f01e1d01dc8 "X\241\005\060\001") at 
>> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlibmemory/memory_shared.c:790
>> 
>> #4  0x7f01f1bab6d1 in vl_api_send_msg (elem=, rp=0x1301c6b90) at 
>> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlibmemory/api.h:43
>> 
>> #5  vl_api_abf_itf_attach_add_del_t_handler (mp=0x1300f51b8) at 
>> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/plugins/abf/abf_api.c:121
>> 
>> #6  0x7f02346ca7fb in vl_msg_api_handler_with_vm_node 
>> (am=am@entry=0x7f02348d8f40 , vlib_rp=vlib_rp@entry=0x130023000, the_msg=, 
>> vm=vm@entry=0x7f01f1db5680,
>> 
>> node=node@entry=0x7f01f3088e80, is_private=is_private@entry=0 '\000') at 
>> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlibapi/api_shared.c:635
>> 
>> #7  0x7f02346af2a6 in void_mem_api_handle_msg_i (is_private=0 '\000', 
>> node=0x7f01f3088e80, vm=0x7f01f1db5680, vlib_rp=0x130023000, 
>> am=0x7f02348d8f40 )
>> 
>> at 
>> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlibmemory/memory_api.c:696
>> 
>> #8  vl_mem_api_handle_msg_main (vm=vm@entry=0x7f01f1db5680, 
>> node=node@entry=0x7f01f3088e80) at 
>> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlibmemory/memory_api.c:707
>> 
>> #9  0x7f02346c1636 in vl_api_clnt_process (vm=, node=0x7f01f3088e80, f=) 
>> at 
>> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlibmemory/vlib_api.c:338
>> 
>> #10 0x7f0232fedb16 in vlib_process_bootstrap (_a=) at 
>> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlib/main.c:1284
>> 
>> #11 0x7f0232550ee8 in clib_calljmp () at 
>> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vppinfra/longjmp.S:123
>> 
>> #12 0x7f01e3f65dc0 in ?? ()
>> 
>> #13 0x7f0232ff0ec4 in vlib_process_startup (f=0x0, p=0x7f01f3088e80, 
>> vm=0x7f01f1db5680) at 
>> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vppinfra/types.h:133
>> 
>> #14 dispatch_process (vm=, p=0x7f01f3088e80, last_time_stamp=, f=0x0) at 
>> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlib/main.c:1365
>> 
>> #15 0x in ?? ()
>> 
>>  
>> 
>> (gdb) quit
>> 
>>  
>> 
>> 
>> 
> 
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20121): https://lists.fd.io/g/vpp-dev/message/20121
Mute This Topic: https://lists.fd.io/mt/85566777/21656
Mute #vapi:https://lists.fd.io/g/vpp-dev/mutehashtag/vapi
Mute #acl:https://lists.fd.io/g/vpp-dev/mutehashtag/acl
Mute #abf:https://lists.fd.io/g/vpp-dev/mutehashtag/abf
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP hangs after few hours of continuous VAPI Send/Recv #acl #abf #vapi

2021-09-13 Thread Andrew Yourtchenko
Cool! Would you be able to share the app + VPP startup config to see if I can 
repro this locally ?

--a

> On 13 Sep 2021, at 15:25, RaviKiran Veldanda  wrote:
> 
> 
> [Edited Message Follows]
> 
> This is reproduced with a standalone app that just creates and deletes the 
> ACL/ABF policies. No traffic, no memif.
> Please find the GDB context: 
> 
> It's waiting in pthread_cond_wait --> not a timed wait.
> 
> 
> gdb /usr/bin/vpp
> 
>  
> 
> 0x7f023297848c in pthread_cond_wait@@GLIBC_2.3.2 () from 
> /lib64/libpthread.so.0
> 
> Missing separate debuginfos, use: yum debuginfo-install 
> glibc-2.28-101.el8.x86_64 libuuid-2.32.1-22.el8.x86_64 
> mbedtls-2.16.9-1.el8.x86_64 numactl-libs-2.0.12-9.el8.x86_64 
> openssl-libs-1.1.1c-15.el8.x86_64 sssd-client-2.2.3-20.el8.x86_64 
> zlib-1.2.11-13.el8.x86_64
> 
> (gdb) bt
> 
> #0  0x7f023297848c in pthread_cond_wait@@GLIBC_2.3.2 () from 
> /lib64/libpthread.so.0
> 
> #1  0x7f0232da1016 in svm_queue_wait_inline (q=0x1301c6c80) at 
> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/svm/queue.c:157
> 
> #2  svm_queue_add (q=0x1301c6c80, elem=elem@entry=0x7f01e1d01dc8 
> "X\241\005\060\001", nowait=nowait@entry=0) at 
> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/svm/queue.c:284
> 
> #3  0x7f02346b17d7 in vl_msg_api_send_shmem (q=, 
> elem=elem@entry=0x7f01e1d01dc8 "X\241\005\060\001") at 
> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlibmemory/memory_shared.c:790
> 
> #4  0x7f01f1bab6d1 in vl_api_send_msg (elem=, rp=0x1301c6b90) at 
> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlibmemory/api.h:43
> 
> #5  vl_api_abf_itf_attach_add_del_t_handler (mp=0x1300f51b8) at 
> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/plugins/abf/abf_api.c:121
> 
> #6  0x7f02346ca7fb in vl_msg_api_handler_with_vm_node 
> (am=am@entry=0x7f02348d8f40 , vlib_rp=vlib_rp@entry=0x130023000, the_msg=, 
> vm=vm@entry=0x7f01f1db5680,
> 
> node=node@entry=0x7f01f3088e80, is_private=is_private@entry=0 '\000') at 
> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlibapi/api_shared.c:635
> 
> #7  0x7f02346af2a6 in void_mem_api_handle_msg_i (is_private=0 '\000', 
> node=0x7f01f3088e80, vm=0x7f01f1db5680, vlib_rp=0x130023000, 
> am=0x7f02348d8f40 )
> 
> at 
> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlibmemory/memory_api.c:696
> 
> #8  vl_mem_api_handle_msg_main (vm=vm@entry=0x7f01f1db5680, 
> node=node@entry=0x7f01f3088e80) at 
> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlibmemory/memory_api.c:707
> 
> #9  0x7f02346c1636 in vl_api_clnt_process (vm=, node=0x7f01f3088e80, f=) 
> at 
> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlibmemory/vlib_api.c:338
> 
> #10 0x7f0232fedb16 in vlib_process_bootstrap (_a=) at 
> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlib/main.c:1284
> 
> #11 0x7f0232550ee8 in clib_calljmp () at 
> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vppinfra/longjmp.S:123
> 
> #12 0x7f01e3f65dc0 in ?? ()
> 
> #13 0x7f0232ff0ec4 in vlib_process_startup (f=0x0, p=0x7f01f3088e80, 
> vm=0x7f01f1db5680) at 
> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vppinfra/types.h:133
> 
> #14 dispatch_process (vm=, p=0x7f01f3088e80, last_time_stamp=, f=0x0) at 
> /usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlib/main.c:1365
> 
> #15 0x in ?? ()
> 
>  
> 
> (gdb) quit
> 
>  
> 
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20120): https://lists.fd.io/g/vpp-dev/message/20120
Mute This Topic: https://lists.fd.io/mt/85566777/21656
Mute #vapi:https://lists.fd.io/g/vpp-dev/mutehashtag/vapi
Mute #acl:https://lists.fd.io/g/vpp-dev/mutehashtag/acl
Mute #abf:https://lists.fd.io/g/vpp-dev/mutehashtag/abf
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP hangs after few hours of continuous VAPI Send/Recv #acl #abf #vapi

2021-09-13 Thread RaviKiran Veldanda
[Edited Message Follows]

This is reproduced with a standalone app that just creates and deletes the 
ACL/ABF policies. No traffic, no memif.
Please find the GDB context:

It's waiting in pthread_cond_wait --> not a timed wait.

gdb /usr/bin/vpp

0x7f023297848c in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0

Missing separate debuginfos, use: yum debuginfo-install 
glibc-2.28-101.el8.x86_64 libuuid-2.32.1-22.el8.x86_64 
mbedtls-2.16.9-1.el8.x86_64 numactl-libs-2.0.12-9.el8.x86_64 
openssl-libs-1.1.1c-15.el8.x86_64 sssd-client-2.2.3-20.el8.x86_64 
zlib-1.2.11-13.el8.x86_64

(gdb) bt

*#0  0x7f023297848c in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0*

#1  0x7f0232da1016 in svm_queue_wait_inline (q=0x1301c6c80) at 
/usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/svm/queue.c:157

#2  svm_queue_add (q=0x1301c6c80, elem=elem@entry=0x7f01e1d01dc8 
"X\241\005\060\001", nowait=nowait@entry=0) at 
/usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/svm/queue.c:284

#3  0x7f02346b17d7 in vl_msg_api_send_shmem (q=, 
elem=elem@entry=0x7f01e1d01dc8 "X\241\005\060\001") at 
/usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlibmemory/memory_shared.c:790

#4  0x7f01f1bab6d1 in vl_api_send_msg (elem=, rp=0x1301c6b90) at 
/usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlibmemory/api.h:43

#5  vl_api_abf_itf_attach_add_del_t_handler (mp=0x1300f51b8) at 
/usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/plugins/abf/abf_api.c:121

#6  0x7f02346ca7fb in vl_msg_api_handler_with_vm_node 
(am=am@entry=0x7f02348d8f40 , vlib_rp=vlib_rp@entry=0x130023000, the_msg=, 
vm=vm@entry=0x7f01f1db5680,

node=node@entry=0x7f01f3088e80, is_private=is_private@entry=0 '\000') at 
/usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlibapi/api_shared.c:635

#7  0x7f02346af2a6 in void_mem_api_handle_msg_i (is_private=0 '\000', 
node=0x7f01f3088e80, vm=0x7f01f1db5680, vlib_rp=0x130023000, am=0x7f02348d8f40 )

at 
/usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlibmemory/memory_api.c:696

#8  vl_mem_api_handle_msg_main (vm=vm@entry=0x7f01f1db5680, 
node=node@entry=0x7f01f3088e80) at 
/usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlibmemory/memory_api.c:707

#9  0x7f02346c1636 in vl_api_clnt_process (vm=, node=0x7f01f3088e80, f=) at 
/usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlibmemory/vlib_api.c:338

#10 0x7f0232fedb16 in vlib_process_bootstrap (_a=) at 
/usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlib/main.c:1284

#11 0x7f0232550ee8 in clib_calljmp () at 
/usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vppinfra/longjmp.S:123

#12 0x7f01e3f65dc0 in ?? ()

#13 0x7f0232ff0ec4 in vlib_process_startup (f=0x0, p=0x7f01f3088e80, 
vm=0x7f01f1db5680) at 
/usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vppinfra/types.h:133

#14 dispatch_process (vm=, p=0x7f01f3088e80, last_time_stamp=, f=0x0) at 
/usr/src/debug/vpp-21.06.0-2~g99c146915_dirty.x86_64/src/vlib/main.c:1365

#15 0x in ?? ()

(gdb) quit
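
[As a side note, when a hang involves worker threads it can also help to
capture the stacks of all threads rather than only the one gdb lands in; a
minimal sketch, assuming gdb can attach to the running process:

# attach to the running vpp process rather than just loading the binary
sudo gdb -p $(pidof vpp)
(gdb) set pagination off
(gdb) thread apply all bt
(gdb) detach
(gdb) quit
]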

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20119): https://lists.fd.io/g/vpp-dev/message/20119
Mute This Topic: https://lists.fd.io/mt/85566777/21656
Mute #vapi:https://lists.fd.io/g/vpp-dev/mutehashtag/vapi
Mute #acl:https://lists.fd.io/g/vpp-dev/mutehashtag/acl
Mute #abf:https://lists.fd.io/g/vpp-dev/mutehashtag/abf
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP hangs after few hours of continuous VAPI Send/Recv #acl #abf #vapi

2021-09-13 Thread RaviKiran Veldanda
Hi Andrew,
>>> What is sitting on the other end of the memif interfaces? Are those 
>>> applications experiencing any changes / restarts throughout, or do they run 
>>> permanently, so there are no transient events in memif ?
There are no transitions on the memif interfaces, and I am not running any 
traffic, so there are no events on memif.
>>> Could you see if it is still reproducible if you disconnect the other 
>>> components altogether ? (I am trying to see how reproducible it is in an 
>>> isolated case, and if it is then hopefully it is something that could be 
>>> made into a standalone harness that you might be able to share so I could 
>>> try reproducing it locally.)
Yes, it's reproducible standalone, and I am using "vapi_connect" and 
"vapi_send"/"recv" calls for ACL creation and deletion (I believe it's SHM 
based). I will reproduce it and provide the GDB context.

//Ravi.
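
[For reference, whether the binary API is reachable over shared memory or a
Unix socket is configured on the VPP side in startup.conf; a minimal sketch,
assuming the stock socket path, of enabling the socket transport alongside the
default shared-memory segment:

socksvr {
  # serve the binary API on a Unix socket (default path /run/vpp/api.sock)
  default
}
]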

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20118): https://lists.fd.io/g/vpp-dev/message/20118
Mute This Topic: https://lists.fd.io/mt/85566777/21656
Mute #vapi:https://lists.fd.io/g/vpp-dev/mutehashtag/vapi
Mute #acl:https://lists.fd.io/g/vpp-dev/mutehashtag/acl
Mute #abf:https://lists.fd.io/g/vpp-dev/mutehashtag/abf
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP hangs after few hours of continuous VAPI Send/Recv #acl #abf #vapi

2021-09-13 Thread Andrew Yourtchenko
Hi Ravikiran,


> On 13 Sep 2021, at 13:00, RaviKiran Veldanda  wrote:
> 
> This definition is way too vague to tell anything.
> 
> Hi Andrew,
> Please find my answers below
> What is the setup ? (Cores and their config)
>  It's a 1U server with 72 cores available and 5 cores for VPP, with a 100G NIC.
>  VAPI with 512 request and response queue sizes.
> 
> What are the exact rules being pushed ?
> >>> ACL permit rules pushed using the VPP API:
> set acl-plugin acl permit dst 172.172.0.0/24
> abf policy add id 0 acl 0 via 192.168.1.5 memif1/0
> abf attach ip4 policy 0 BondedEthernet0.1100
> The same kind of rules for hundreds of subnets.
> Every few minutes, the rules are added and deleted.

What is sitting on the other end of the memif interfaces? Are those 
applications experiencing any changes / restarts throughout, or do they run 
permanently, so there are no transient events in memif ?

> 
> What type of traffic is passing ?
> >> Not much traffic is passing. I am testing only the VAPI for ACL rules.

Could you see if it is still reproducible if you disconnect the other 
components altogether ? (I am trying to see how reproducible it is in an 
isolated case, and if it is then hopefully it is something that could be made 
into a standalone harness that you might be able to share so I could try 
reproducing it locally.)


> 
> If you see the VPP stuck - what does looking at it with GDB tell you about 
> its state ?
> >> I missed this point... I can provide it next time it hangs.
> 
> Is this reliably reproducible in the lab ?
> >> Yes, after several hours of additions and deletions it's reproducible.

Cool, thanks a lot!

> 
> Please let me know if you need anything more.

Another question that I forgot - which way of interacting are you using for the 
API ? Shared memory or the Unix socket ?

My line of thinking involves narrowing this issue down to a minimal test case, 
as I described above… primarily I am curious about isolating away all the 
potential memif transitions and the physical interface traffic, to clarify 
whether this can be purely some acl+abf API interaction artifact or whether 
there is more to the story…

P.S. I am planning to build the harness (once we narrow down the problem 
domain) using the current work-in-progress Rust bindings (cf. 
https://github.com/ayourtch/latest-vpp-api-playground) - but if you have 
something you can share that uses e.g. govpp, it will be interesting to use 
that.

—a

> //Ravi 
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20117): https://lists.fd.io/g/vpp-dev/message/20117
Mute This Topic: https://lists.fd.io/mt/85566777/21656
Mute #vapi:https://lists.fd.io/g/vpp-dev/mutehashtag/vapi
Mute #acl:https://lists.fd.io/g/vpp-dev/mutehashtag/acl
Mute #abf:https://lists.fd.io/g/vpp-dev/mutehashtag/abf
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP hangs after few hours of continuous VAPI Send/Recv #acl #abf #vapi

2021-09-13 Thread RaviKiran Veldanda
This definition is way too vague to tell anything.

Hi Andrew,
Please find my answers below
What is the setup ? (Cores and their config)
 It's a 1U server with 72 cores available and 5 cores for VPP, with a 100G NIC.
 VAPI with 512 request and response queue sizes.

What are the exact rules being pushed ?
>>> ACL permit rules pushed using the VPP API:
set acl-plugin acl permit dst 172.172.0.0/24
abf policy add id 0 acl 0 via 192.168.1.5 memif1/0
abf attach ip4 policy 0 BondedEthernet0.1100
The same kind of rules for hundreds of subnets.
Every few minutes, the rules are added and deleted.

What type of traffic is passing ?
>> Not much traffic is passing. I am testing only the VAPI for ACL rules.

If you see the VPP stuck - what does looking at it with GDB tell you about its 
state ?
>> I missed this point... I can provide it next time it hangs.

Is this reliably reproducible in the lab ?
>> Yes, after several hours of additions and deletions it's reproducible.

Please let me know if you need anything more.
//Ravi
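
[To make the reproduction pattern concrete, here is a rough sketch of the 
add/delete churn described above. The helpers add_acl_and_abf() and 
del_acl_and_abf() are hypothetical placeholders for the actual VAPI 
acl_add_replace / abf_policy_add_del / abf_itf_attach_add_del exchanges; the 
point of the sketch is that both the VAPI return code and the reply's retval 
field are checked, since the reported failure mode is VAPI_OK together with a 
non-zero retval:

#include <stdio.h>
#include <vapi/vapi.h>

/* Hypothetical helpers: perform the ACL/ABF add or delete for one subnet,
   return the VAPI transport code and store the reply payload's retval. */
extern vapi_error_e add_acl_and_abf (vapi_ctx_t ctx, int subnet, int *retval);
extern vapi_error_e del_acl_and_abf (vapi_ctx_t ctx, int subnet, int *retval);

/* One add-then-delete cycle over n_subnets rule sets, as described above. */
static int
churn_once (vapi_ctx_t ctx, int n_subnets)
{
  for (int i = 0; i < n_subnets; i++)
    {
      int retval = 0;
      vapi_error_e rv = add_acl_and_abf (ctx, i, &retval);
      if (rv != VAPI_OK || retval != 0)  /* reported case: rv OK, retval != 0 */
        {
          fprintf (stderr, "add %d: rv=%d retval=%d\n", i, (int) rv, retval);
          return -1;
        }
    }
  for (int i = 0; i < n_subnets; i++)
    {
      int retval = 0;
      vapi_error_e rv = del_acl_and_abf (ctx, i, &retval);
      if (rv != VAPI_OK || retval != 0)
        {
          fprintf (stderr, "del %d: rv=%d retval=%d\n", i, (int) rv, retval);
          return -1;
        }
    }
  return 0;
}
]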

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20116): https://lists.fd.io/g/vpp-dev/message/20116
Mute This Topic: https://lists.fd.io/mt/85566777/21656
Mute #vapi:https://lists.fd.io/g/vpp-dev/mutehashtag/vapi
Mute #acl:https://lists.fd.io/g/vpp-dev/mutehashtag/acl
Mute #abf:https://lists.fd.io/g/vpp-dev/mutehashtag/abf
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP hangs after few hours of continuous VAPI Send/Recv #acl #abf #vapi

2021-09-13 Thread Andrew Yourtchenko
Hi Ravikiran,

This definition is way too vague to tell anything.

What is the setup ? (Cores and their config)

What are the exact rules being pushed ?

What type of traffic is passing ?

If you see the VPP stuck - what does looking at it with GDB tell you about its 
state ?

Is this reliably reproducible in the lab ?

--a

> On 13 Sep 2021, at 04:53, RaviKiran Veldanda  wrote:
> 
> Hi Experts,
> I am running several applications and calling some ACL/ABF APIs to add and 
> delete rules in VPP; these rules are added/deleted from all the applications 
> continuously.
> After 5 to 6 hours I see that the VPP ACL/ABF API returns VAPI_OK but 
> payload.retval is a non-zero number.
> If we keep adding/deleting after this error, VPP crashes.
> If we just try to read the version using VAPI it gives empty values, and if 
> we try to run CLI commands using vppctl, vppctl also hangs.
> The only way to recover VPP is "kill -9".
> Does anyone have any clue? If VPP/VAPI hangs, how can we bring VPP back to a 
> normal state without "kill -9"?
> Any suggestion would be a great help.
> 
> 
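
[As an illustration of the "read the version using VAPI" probe mentioned in the 
quoted report, here is a rough sketch of a blocking show_version request in the 
VAPI C style; the generated names (vapi_alloc_show_version, vapi_show_version, 
the vpe.api header and message-ID macro) follow the usual vapi code-generator 
pattern and should be treated as assumptions rather than verified against this 
exact build:

#include <stdio.h>
#include <stdbool.h>
#include <vapi/vapi.h>
#include <vapi/vpe.api.vapi.h>

DEFINE_VAPI_MSG_IDS_VPE_API_JSON;

/* Called with the show_version reply payload (or NULL on error). */
static vapi_error_e
show_version_cb (vapi_ctx_t ctx, void *caller_ctx, vapi_error_e rv,
                 bool is_last, vapi_payload_show_version_reply *p)
{
  if (p)
    printf ("VPP version: %s\n", (char *) p->version);
  return VAPI_OK;
}

/* Returns 0 if VPP answered the request, non-zero otherwise. */
static int
probe_vpp (vapi_ctx_t ctx)
{
  vapi_msg_show_version *msg = vapi_alloc_show_version (ctx);
  if (!msg)
    return -1;
  /* In VAPI_MODE_BLOCKING this only returns after the reply has been
     dispatched to show_version_cb (or an error occurred). */
  return vapi_show_version (ctx, msg, show_version_cb, NULL) == VAPI_OK ? 0 : -1;
}
]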

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20115): https://lists.fd.io/g/vpp-dev/message/20115
Mute This Topic: https://lists.fd.io/mt/85566777/21656
Mute #vapi:https://lists.fd.io/g/vpp-dev/mutehashtag/vapi
Mute #acl:https://lists.fd.io/g/vpp-dev/mutehashtag/acl
Mute #abf:https://lists.fd.io/g/vpp-dev/mutehashtag/abf
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-