Hi Raj, 

Inline.

> On Jun 5, 2020, at 4:14 PM, Raj Kumar <raj.gauta...@gmail.com> wrote:
> 
> Hi Florin,
> VPP from the "stable/2005" branch is working fine. Thanks! for your help. 

FC: Great!

> 
> I have one question: for UDP tx, if the destination address is not in the 
> same subnet, it works only after adding a route for the next hop. 
> In my case, the VPP interface IP is 2001:5b0:ffff:701:b883:31f:29e:68f0 and I am 
> sending UDP traffic to the address 2001:5b0:ffff:700:ba83:3ff:fe9e:6848.
> It works fine after adding the following route to the VPP FIB: 
> ip route add 2001:5b0:ffff:700:ba83:3ff:fe9e:6848/128 via 
> 2001:5b0:ffff:701::254 HundredGigabitEthernet12/0/0.701
> 
> My requirement is to use SRv6 for packet routing. I tried adding the 
> following SID entry in vpp: 
> sr policy add bsid 2001:5b0:ffff:700:ba83:3ff:fe9e:6848 next 
> 2001:5b0:ffff:701::254 insert
> But with this configuration, the UDP tx application is not able to connect. 
> 
> udpTxThread started!!!  ... tx port  = 9988
> vppcom_session_create()  ..16777216
> vppcom_session_connect:1741: vcl<47606:1>: session handle 16777216 
> (STATE_CLOSED): connecting to peer IPv6 2001:5b0:ffff:700:ba83:3ff:fe9e:6848 
> port 9988 proto UDP
> vcl_session_connected_handler:450: vcl<47606:1>: ERROR: session index 0: 
> connect failed! no resolving interface
> vppcom_session_connect:1756: vcl<47606:1>: session 0 [0x0]: connect failed!
> vppcom_session_connect() failed ... -111
> 
> From the traces, it looks like the VCL session is not looking into the SID 
> entries. Please let me know if I am doing something wrong.

FC: The UDP transport in vpp doesn’t find a route in the FIB for the destination, 
so it can’t select a local IP address for the connection. As a result, you get 
that error in vcl. 

So you’ll need a route for your destination 
(2001:5b0:ffff:700:ba83:3ff:fe9e:6848) that redirects traffic from the FIB into 
the SRv6 tunnel. Additionally, that route should eventually resolve to an 
output interface with an IP address (that will be the source IP for egressing 
packets). 
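
For reference, an untested sketch of the steering config (the BSID 
2001:5b0:ffff:700::100 below is illustrative, not one of your addresses):

  sr policy add bsid 2001:5b0:ffff:700::100 next 2001:5b0:ffff:701::254 insert
  sr steer l3 2001:5b0:ffff:700:ba83:3ff:fe9e:6848/128 via bsid 2001:5b0:ffff:700::100

The "sr steer" entry is what installs the FIB route for the destination; a bare 
"sr policy add" only creates the policy, which would explain why no route was 
found.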

Regards,
Florin

> 
> thanks,
> -Raj
> 
> 
> 
> 
> 
> 
> 
> 
> On Thu, Jun 4, 2020 at 4:49 PM Florin Coras <fcoras.li...@gmail.com> wrote:
> Hi Raj, 
> 
> You have it here [4]. I’ll merge it once it verifies. 
> 
> Regards,
> Florin
> 
> [4] https://gerrit.fd.io/r/c/vpp/+/27432
> 
>> On Jun 4, 2020, at 1:41 PM, Raj Kumar <raj.gauta...@gmail.com> wrote:
>> 
>> Hi Florin,
>> 
>> To pick up all the following patches, I downloaded the VPP code from the 
>> master branch.
>>> [1] https://gerrit.fd.io/r/c/vpp/+/27111
>>> [2] https://gerrit.fd.io/r/c/vpp/+/27106
>>> [3] https://gerrit.fd.io/r/c/vpp/+/27235
>>> 
>>> But, with this build, I observed that the UDP listener always migrates the 
>>> received connections to the first worker thread. With the stable/2005 (+ 
>>> patches) code, VPP was migrating the connections to different worker 
>>> threads. I am using a UDP socket with the CONNECTED attribute. 
>>>
>> 
>> I found that on stable/2005 you already merged [2] and [3]. Please let me 
>> know if you are planning to merge [1] on the stable/2005 branch. For now, I 
>> want to stay with stable/2005. 
>> 
>> thanks,
>> -Raj
>> 
>> On Sun, May 31, 2020 at 11:32 PM Florin Coras <fcoras.li...@gmail.com> wrote:
>> Hi Raj, 
>> 
>> Inline.
>> 
>>> On May 31, 2020, at 8:10 PM, Raj Kumar <raj.gauta...@gmail.com> wrote:
>>> 
>>> Hi Florin,
>>> The UDPC connections are working fine. I was making a basic mistake. For 
>>> the second application, I forgot to export LD_LIBRARY_PATH to the 
>>> build directory, so that application was linking against the old library 
>>> (from /usr/lib64).
>>> Now I have tried both UDP tx and rx connections (UDP connected). Both are 
>>> working fine.
>> 
>> FC: Great!
>> 
>>> 
>>> I have a question: all the connections originating from the VPP host stack 
>>> (UDP tx) always go to worker thread 1. Is there any way to assign 
>>> these connections to different worker threads (similar to UDP rx)? 
>>> I cannot rely on the receiver to initiate the connection, as that is a 
>>> third-party application. 
>> 
>> FC: At this time no. 
>> 
>>> 
>>> Earlier, with the previous VPP release, I tried assigning UDP tx 
>>> connections to different worker threads in a round-robin manner. I am 
>>> wondering if we can try something similar in the new release. 
>> 
>> FC: It might just work. Let me know if you try it out. 
>> 
>>> 
>>> Also, please let me know in which VPP release, the following patches would 
>>> be available.
>> 
>> FC: The first two are part of 20.05, the third will be ported to the first 
>> 20.05 point release. 
>> 
>> Regards, 
>> Florin
>> 
>>> 
>>> [1] https://gerrit.fd.io/r/c/vpp/+/27111
>>> [2] https://gerrit.fd.io/r/c/vpp/+/27106
>>> [3] https://gerrit.fd.io/r/c/vpp/+/27235
>>> 
>>> 
>>> Here are the VPP stats -
>>> 
>>> vpp# sh app server
>>> Connection                              App                          Wrk
>>> [0:0][U] 2001:5b0:ffff:701:b883:31f:29e:udp6_rx[shm]                  1
>>> [0:1][U] 2001:5b0:ffff:701:b883:31f:29e:udp6_rx[shm]                  1
>>> vpp#
>>> vpp# sh app client
>>> Connection                              App
>>> [1:0][U] 2001:5b0:ffff:701:b883:31f:29e:udp6_tx[shm]
>>> [1:1][U] 2001:5b0:ffff:701:b883:31f:29e:udp6_tx[shm]
>>> vpp#
>>> vpp# sh session verbose
>>> Connection                                        State          Rx-f      Tx-f
>>> [0:0][U] 2001:5b0:ffff:701:b883:31f:29e:9880:12345LISTEN         0         0
>>> [0:1][U] 2001:5b0:ffff:701:b883:31f:29e:9881:56789LISTEN         0         0
>>> Thread 0: active sessions 2
>>> 
>>> Connection                                        State          Rx-f      Tx-f
>>> [1:0][U] 2001:5b0:ffff:701:b883:31f:29e:9880:58199OPENED         0         3999756
>>> [1:1][U] 2001:5b0:ffff:701:b883:31f:29e:9880:10442OPENED         0         3999756
>>> Thread 1: active sessions 2
>>> 
>>> Connection                                        State          Rx-f      Tx-f
>>> [2:0][U] 2001:5b0:ffff:701:b883:31f:29e:9881:56789OPENED         0         0
>>> Thread 2: active sessions 1
>>> Thread 3: no sessions
>>> 
>>> Connection                                        State          Rx-f      Tx-f
>>> [4:0][U] 2001:5b0:ffff:701:b883:31f:29e:9880:12345OPENED         0         0
>>> Thread 4: active sessions 1
>>> 
>>> Thanks,
>>> -Raj
>>> 
>>> 
>>> 
>>> On Sun, May 31, 2020 at 7:35 PM Florin Coras <fcoras.li...@gmail.com> wrote:
>>> Hi Raj, 
>>> 
>>> Inline.
>>> 
>>>> On May 31, 2020, at 4:07 PM, Raj Kumar <raj.gauta...@gmail.com> wrote:
>>>> 
>>>> Hi Florin,
>>>> I was trying this test with debug binaries, but as soon as I enable the 
>>>> interface, vpp crashes.
>>> 
>>> FC: It looks like somehow corrupted buffers make their way into the error 
>>> drop node. What traffic are you running through vpp and is this master or 
>>> are you running some custom code? 
>>> 
>>>> 
>>>> On the original problem (multiple listeners): if I open multiple sockets 
>>>> from the same multi-threaded application, it works fine. But if I 
>>>> start another application, then I see the VPP crash (which I mentioned 
>>>> in my previous email).
>>> 
>>> FC: Is the second app trying to listen on the same ip:port pair? That is 
>>> not supported and the second listen request should’ve been rejected. Do you 
>>> have a bt?
>>> 
>>> Regards,
>>> Florin
>>> 
>>>> 
>>>> Here is the stack trace with debug binaries, where the crash happens on 
>>>> startup. Please let me know what I am doing wrong here. 
>>>> 
>>>> 
>>>> #0  0x00007ffff4b748df in raise () from /lib64/libc.so.6
>>>> #1  0x00007ffff4b5ecf5 in abort () from /lib64/libc.so.6
>>>> #2  0x0000000000407a28 in os_panic () at /opt/vpp/src/vpp/vnet/main.c:366
>>>> #3  0x00007ffff5a279af in debugger () at /opt/vpp/src/vppinfra/error.c:84
>>>> #4  0x00007ffff5a27d92 in _clib_error (how_to_die=2, 
>>>> function_name=0x7ffff663d540 <__FUNCTION__.35997> 
>>>> "vlib_buffer_validate_alloc_free",
>>>>     line_number=367, fmt=0x7ffff663d0ba "%s %U buffer 0x%x") at 
>>>> /opt/vpp/src/vppinfra/error.c:143
>>>> #5  0x00007ffff65752a3 in vlib_buffer_validate_alloc_free 
>>>> (vm=0x7ffff6866340 <vlib_global_main>, buffers=0x7fffb586d630, n_buffers=1,
>>>>     expected_state=VLIB_BUFFER_KNOWN_ALLOCATED) at 
>>>> /opt/vpp/src/vlib/buffer.c:366
>>>> #6  0x00007ffff65675f0 in vlib_buffer_pool_put (vm=0x7ffff6866340 
>>>> <vlib_global_main>, buffer_pool_index=0 '\000', buffers=0x7fffb586d630,
>>>>     n_buffers=1) at /opt/vpp/src/vlib/buffer_funcs.h:754
>>>> #7  0x00007ffff6567dde in vlib_buffer_free_inline (vm=0x7ffff6866340 
>>>> <vlib_global_main>, buffers=0x7fffb67e9b14, n_buffers=0, maybe_next=1)
>>>>     at /opt/vpp/src/vlib/buffer_funcs.h:924
>>>> #8  0x00007ffff6567e2e in vlib_buffer_free (vm=0x7ffff6866340 
>>>> <vlib_global_main>, buffers=0x7fffb67e9b10, n_buffers=1)
>>>>     at /opt/vpp/src/vlib/buffer_funcs.h:943
>>>> #9  0x00007ffff6568cb0 in process_drop_punt (vm=0x7ffff6866340 
>>>> <vlib_global_main>, node=0x7fffb5475e00, frame=0x7fffb67e9b00,
>>>>     disposition=ERROR_DISPOSITION_DROP) at /opt/vpp/src/vlib/drop.c:231
>>>> #10 0x00007ffff6568db0 in error_drop_node_fn_hsw (vm=0x7ffff6866340 
>>>> <vlib_global_main>, node=0x7fffb5475e00, frame=0x7fffb67e9b00)
>>>>     at /opt/vpp/src/vlib/drop.c:247
>>>> #11 0x00007ffff65c3447 in dispatch_node (vm=0x7ffff6866340 
>>>> <vlib_global_main>, node=0x7fffb5475e00, type=VLIB_NODE_TYPE_INTERNAL,
>>>>     dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7fffb67e9b00, 
>>>> last_time_stamp=15480427682338516) at /opt/vpp/src/vlib/main.c:1235
>>>> #12 0x00007ffff65c3c02 in dispatch_pending_node (vm=0x7ffff6866340 
>>>> <vlib_global_main>, pending_frame_index=2,
>>>>     last_time_stamp=15480427682338516) at /opt/vpp/src/vlib/main.c:1403
>>>> #13 0x00007ffff65c58c9 in vlib_main_or_worker_loop (vm=0x7ffff6866340 
>>>> <vlib_global_main>, is_main=1) at /opt/vpp/src/vlib/main.c:1862
>>>> #14 0x00007ffff65c6320 in vlib_main_loop (vm=0x7ffff6866340 
>>>> <vlib_global_main>) at /opt/vpp/src/vlib/main.c:1990
>>>> #15 0x00007ffff65c70e0 in vlib_main (vm=0x7ffff6866340 <vlib_global_main>, 
>>>> input=0x7fffb586efb0) at /opt/vpp/src/vlib/main.c:2236
>>>> #16 0x00007ffff662f311 in thread0 (arg=140737329390400) at 
>>>> /opt/vpp/src/vlib/unix/main.c:658
>>>> #17 0x00007ffff5a465cc in clib_calljmp () at 
>>>> /opt/vpp/src/vppinfra/longjmp.S:123
>>>> #18 0x00007fffffffd0a0 in ?? ()
>>>> #19 0x00007ffff662f8a7 in vlib_unix_main (argc=50, argv=0x705f00) at 
>>>> /opt/vpp/src/vlib/unix/main.c:730
>>>> #20 0x0000000000407387 in main (argc=50, argv=0x705f00) at 
>>>> /opt/vpp/src/vpp/vnet/main.c:291
>>>> 
>>>> thanks,
>>>> -Raj
>>>> 
>>>> On Mon, May 25, 2020 at 5:02 PM Florin Coras <fcoras.li...@gmail.com> wrote:
>>>> Hi Raj, 
>>>> 
>>>> Okay, so at least with that we have support for bound listeners (note 
>>>> that [2] was merged, but to set the connected option you now have to use 
>>>> vppcom_session_attr). 
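>>>> 
>>>> A minimal sketch of that (assuming the attribute added by [2] is named 
>>>> VPPCOM_ATTR_SET_CONNECTED; treat the name as my assumption):
>>>> 
>>>>   /* sketch: request connected-udp semantics before binding/listening;
>>>>    * VPPCOM_ATTR_SET_CONNECTED is assumed to be the attr introduced by [2] */
>>>>   int s = vppcom_session_create (VPPCOM_PROTO_UDP, 0 /* blocking */);
>>>>   vppcom_session_attr (s, VPPCOM_ATTR_SET_CONNECTED, 0, 0);
>>>>   vppcom_session_bind (s, &local_ep); /* local_ep: your vppcom_endpt_t */
>>>>   vppcom_session_listen (s, 10);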
>>>> 
>>>> As for the trace, something seems off. Why exactly does it crash? It looks 
>>>> as if session_get_transport_proto (ls) crashes because of ls being null, 
>>>> but prior to that ls is dereferenced and it doesn’t crash. Could you try 
>>>> with debug binaries? 
>>>> 
>>>> Regards,
>>>> Florin
>>>> 
>>>>> On May 25, 2020, at 1:43 PM, Raj Kumar <raj.gauta...@gmail.com> wrote:
>>>>> 
>>>>> Hi Florin,
>>>>> This works fine with a single udp listener. I can see connections going to 
>>>>> different cores. But if I run more than one listener, VPP 
>>>>> crashes. Here are the VPP stack traces - 
>>>>> 
>>>>> (gdb) bt
>>>>> #0  0x0000000000000000 in ?? ()
>>>>> #1  0x00007ffff7761239 in session_listen (ls=<optimized out>, 
>>>>> sep=sep@entry=0x7fffb557fd50) at 
>>>>> /opt/vpp/src/vnet/session/session_types.h:247
>>>>> #2  0x00007ffff7788b3f in app_listener_alloc_and_init 
>>>>> (app=app@entry=0x7fffb76f7d98, sep=sep@entry=0x7fffb557fd50,
>>>>>     listener=listener@entry=0x7fffb557fd28) at 
>>>>> /opt/vpp/src/vnet/session/application.c:196
>>>>> #3  0x00007ffff7788ed8 in vnet_listen (a=a@entry=0x7fffb557fd50) at 
>>>>> /opt/vpp/src/vnet/session/application.c:1005
>>>>> #4  0x00007ffff7779e08 in session_mq_listen_handler (data=0x1300787e9) at 
>>>>> /opt/vpp/src/vnet/session/session_node.c:65
>>>>> #5  session_mq_listen_handler (data=data@entry=0x1300787e9) at 
>>>>> /opt/vpp/src/vnet/session/session_node.c:36
>>>>> #6  0x00007ffff7bbcdb9 in vl_api_rpc_call_t_handler (mp=0x1300787d0) at 
>>>>> /opt/vpp/src/vlibmemory/vlib_api.c:520
>>>>> #7  0x00007ffff7bc5ead in vl_msg_api_handler_with_vm_node 
>>>>> (am=am@entry=0x7ffff7dd2ea0 <api_global_main>, vlib_rp=<optimized out>,
>>>>>     the_msg=0x1300787d0, vm=vm@entry=0x7ffff6d7c200 <vlib_global_main>, 
>>>>> node=node@entry=0x7fffb553f000, is_private=is_private@entry=0 '\000')
>>>>>     at /opt/vpp/src/vlibapi/api_shared.c:609
>>>>> #8  0x00007ffff7bafee6 in vl_mem_api_handle_rpc 
>>>>> (vm=vm@entry=0x7ffff6d7c200 <vlib_global_main>, 
>>>>> node=node@entry=0x7fffb553f000)
>>>>>     at /opt/vpp/src/vlibmemory/memory_api.c:748
>>>>> #9  0x00007ffff7bbd5b3 in vl_api_clnt_process (vm=<optimized out>, 
>>>>> node=0x7fffb553f000, f=<optimized out>)
>>>>>     at /opt/vpp/src/vlibmemory/vlib_api.c:326
>>>>> #10 0x00007ffff6b1b116 in vlib_process_bootstrap (_a=<optimized out>) at 
>>>>> /opt/vpp/src/vlib/main.c:1502
>>>>> #11 0x00007ffff602fbfc in clib_calljmp () from 
>>>>> /opt/vpp/build-root/build-vpp-native/vpp/lib/libvppinfra.so.20.05
>>>>> #12 0x00007fffb5fa2dd0 in ?? ()
>>>>> #13 0x00007ffff6b1e751 in vlib_process_startup (f=0x0, p=0x7fffb553f000, 
>>>>> vm=0x7ffff6d7c200 <vlib_global_main>)
>>>>>     at /opt/vpp/src/vppinfra/types.h:133
>>>>> #14 dispatch_process (vm=0x7ffff6d7c200 <vlib_global_main>, 
>>>>> p=0x7fffb553f000, last_time_stamp=14118080390223872, f=0x0)
>>>>>     at /opt/vpp/src/vlib/main.c:1569
>>>>> #15 0x00000000004b84e0 in ?? ()
>>>>> #16 0x0000000000000000 in ?? ()
>>>>> 
>>>>> thanks,
>>>>> -Raj
>>>>> 
>>>>> On Mon, May 25, 2020 at 2:17 PM Florin Coras <fcoras.li...@gmail.com> wrote:
>>>>> Hi Raj, 
>>>>> 
>>>>> Ow, now you’ve hit the untested part of [2]. Could you try this [3]?
>>>>> 
>>>>> Regards,
>>>>> Florin
>>>>> 
>>>>> [3] https://gerrit.fd.io/r/c/vpp/+/27235
>>>>> 
>>>>>> On May 25, 2020, at 10:44 AM, Raj Kumar <raj.gauta...@gmail.com> wrote:
>>>>>> 
>>>>>> Hi Florin,
>>>>>> 
>>>>>> I tried patches [1] & [2], but the VCL application is still crashing. 
>>>>>> However, the session is created in VPP.
>>>>>> 
>>>>>> vpp# sh session verbose 2
>>>>>> [0:0][U] 2001:5b0:ffff:700:b883:31f:29e:9880:9978-LISTEN
>>>>>>  index 0 flags: CONNECTED, OWNS_PORT, LISTEN
>>>>>> Thread 0: active sessions 1
>>>>>> Thread 1: no sessions
>>>>>> Thread 2: no sessions
>>>>>> Thread 3: no sessions
>>>>>> Thread 4: no sessions
>>>>>> vpp# sh app server
>>>>>> Connection                              App                          Wrk
>>>>>> [0:0][U] 2001:5b0:ffff:700:b883:31f:29e:udp6_rx[shm]                  1
>>>>>> vpp#
>>>>>> 
>>>>>> Here are the VCL application traces. Attached is the updated vppcom.c 
>>>>>> file.
>>>>>>   
>>>>>> [root@J3SGISNCCRO01 vcl_test]# VCL_DEBUG=2 gdb udp6_server_vcl_threaded_udpc
>>>>>> GNU gdb (GDB) Red Hat Enterprise Linux 8.2-6.el8
>>>>>> Copyright (C) 2018 Free Software Foundation, Inc.
>>>>>> License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
>>>>>> This is free software: you are free to change and redistribute it.
>>>>>> There is NO WARRANTY, to the extent permitted by law.
>>>>>> Type "show copying" and "show warranty" for details.
>>>>>> This GDB was configured as "x86_64-redhat-linux-gnu".
>>>>>> Type "show configuration" for configuration details.
>>>>>> For bug reporting instructions, please see:
>>>>>> <http://www.gnu.org/software/gdb/bugs/>.
>>>>>> Find the GDB manual and other documentation resources online at:
>>>>>>     <http://www.gnu.org/software/gdb/documentation/>.
>>>>>> 
>>>>>> For help, type "help".
>>>>>> Type "apropos word" to search for commands related to "word"...
>>>>>> Reading symbols from udp6_server_vcl_threaded_udpc...(no debugging 
>>>>>> symbols found)...done.
>>>>>> (gdb) r 2001:5b0:ffff:700:b883:31f:29e:9880 9978
>>>>>> Starting program: /home/super/vcl_test/udp6_server_vcl_threaded_udpc 
>>>>>> 2001:5b0:ffff:700:b883:31f:29e:9880 9978
>>>>>> Missing separate debuginfos, use: yum debuginfo-install 
>>>>>> glibc-2.28-72.el8_1.1.x86_64
>>>>>> [Thread debugging using libthread_db enabled]
>>>>>> Using host libthread_db library "/lib64/libthread_db.so.1".
>>>>>> 
>>>>>>  server addr - 2001:5b0:ffff:700:b883:31f:29e:9880
>>>>>>  server port  9978,
>>>>>> total port = 1
>>>>>> VCL<64516>: configured VCL debug level (2) from VCL_DEBUG!
>>>>>> VCL<64516>: allocated VCL heap = 0x7fffe6988010, size 268435456 
>>>>>> (0x10000000)
>>>>>> VCL<64516>: configured rx_fifo_size 4000000 (0x3d0900)
>>>>>> VCL<64516>: configured tx_fifo_size 4000000 (0x3d0900)
>>>>>> VCL<64516>: configured app_scope_local (1)
>>>>>> VCL<64516>: configured app_scope_global (1)
>>>>>> VCL<64516>: configured api-socket-name (/tmp/vpp-api.sock)
>>>>>> VCL<64516>: completed parsing vppcom config!
>>>>>> [New Thread 0x7fffe6987700 (LWP 64520)]
>>>>>> vppcom_connect_to_vpp:502: vcl<64516:0>: app (udp6_rx) is connected to 
>>>>>> VPP!
>>>>>> vppcom_app_create:1204: vcl<64516:0>: sending session enable
>>>>>> vppcom_app_create:1212: vcl<64516:0>: sending app attach
>>>>>> vppcom_app_create:1221: vcl<64516:0>: app_name 'udp6_rx', 
>>>>>> my_client_index 256 (0x100)
>>>>>> [New Thread 0x7fffe6186700 (LWP 64521)]
>>>>>> [New Thread 0x7fffe5985700 (LWP 64522)]
>>>>>> vppcom_connect_to_vpp:502: vcl<64516:1>: app (udp6_rx-wrk-1) is 
>>>>>> connected to VPP!
>>>>>> vl_api_app_worker_add_del_reply_t_handler:235: vcl<0:-1>: worker 1 
>>>>>> vpp-worker 1 added
>>>>>> vcl_worker_register_with_vpp:262: vcl<64516:1>: added worker 1
>>>>>> vppcom_epoll_create:2574: vcl<64516:1>: Created vep_idx 0
>>>>>> vppcom_session_create:1292: vcl<64516:1>: created session 1
>>>>>> vppcom_session_bind:1439: vcl<64516:1>: session 1 handle 16777217: 
>>>>>> binding to local IPv6 address 2001:5b0:ffff:700:b883:31f:29e:9880 port 
>>>>>> 9978, proto UDP
>>>>>> vppcom_session_listen:1471: vcl<64516:1>: session 16777217: sending vpp 
>>>>>> listen request...
>>>>>> 
>>>>>> Thread 3 "udp6_server_vcl" received signal SIGSEGV, Segmentation fault.
>>>>>> [Switching to Thread 0x7fffe6186700 (LWP 64521)]
>>>>>> 0x00007ffff7bb0086 in vcl_session_bound_handler (mp=<optimized out>, 
>>>>>> wrk=0x7fffe698ad00) at /opt/vpp/src/vcl/vppcom.c:604
>>>>>> 604           rx_fifo->client_session_index = sid;
>>>>>> (gdb) bt
>>>>>> #0  0x00007ffff7bb0086 in vcl_session_bound_handler (mp=<optimized out>, 
>>>>>> wrk=0x7fffe698ad00) at /opt/vpp/src/vcl/vppcom.c:604
>>>>>> #1  vcl_handle_mq_event (wrk=wrk@entry=0x7fffe698ad00, e=0x214038ec8) at 
>>>>>> /opt/vpp/src/vcl/vppcom.c:904
>>>>>> #2  0x00007ffff7bb0e64 in vppcom_wait_for_session_state_change 
>>>>>> (session_index=<optimized out>, state=state@entry=STATE_LISTEN,
>>>>>>     wait_for_time=<optimized out>) at /opt/vpp/src/vcl/vppcom.c:966
>>>>>> #3  0x00007ffff7bb12de in vppcom_session_listen 
>>>>>> (listen_sh=listen_sh@entry=16777217, q_len=q_len@entry=10) at 
>>>>>> /opt/vpp/src/vcl/vppcom.c:1477
>>>>>> #4  0x00007ffff7bb1671 in vppcom_session_bind (session_handle=16777217, 
>>>>>> ep=0x7fffe6183b30) at /opt/vpp/src/vcl/vppcom.c:1443
>>>>>> #5  0x0000000000401423 in udpRxThread ()
>>>>>> #6  0x00007ffff798c2de in start_thread () from /lib64/libpthread.so.0
>>>>>> #7  0x00007ffff76bd133 in clone () from /lib64/libc.so.6
>>>>>> (gdb)
>>>>>> 
>>>>>> thanks,
>>>>>> -Raj
>>>>>> 
>>>>>> On Tue, May 19, 2020 at 12:31 AM Florin Coras <fcoras.li...@gmail.com> wrote:
>>>>>> Hi Raj, 
>>>>>> 
>>>>>> By the looks of it, something’s not right because in the logs VCL still 
>>>>>> reports it’s binding using UDPC. You probably cherry-picked [1] but it 
>>>>>> needs [2] as well. More inline.
>>>>>> 
>>>>>> [1] https://gerrit.fd.io/r/c/vpp/+/27111
>>>>>> [2] https://gerrit.fd.io/r/c/vpp/+/27106
>>>>>> 
>>>>>>> On May 18, 2020, at 8:42 PM, Raj Kumar <raj.gauta...@gmail.com> wrote:
>>>>>>> 
>>>>>>> 
>>>>>>> Hi Florin,
>>>>>>> I tried the patch [1], but VPP is still crashing when the application 
>>>>>>> listens with UDPC.
>>>>>>> 
>>>>>>> [1] https://gerrit.fd.io/r/c/vpp/+/27111
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> On a different topic, I have some questions. Could you please provide 
>>>>>>> your input? 
>>>>>>> 
>>>>>>> 1) I am using a Mellanox NIC. Any idea how I can enable Tx checksum 
>>>>>>> offload (for udp)? Also, how do I change the Tx and Rx burst 
>>>>>>> modes to Vector?
>>>>>>> 
>>>>>>> HundredGigabitEthernet12/0/1       3     up   
>>>>>>> HundredGigabitEthernet12/0/1
>>>>>>>   Link speed: 100 Gbps
>>>>>>>   Ethernet address b8:83:03:9e:98:81
>>>>>>>   Mellanox ConnectX-4 Family
>>>>>>>     carrier up full duplex mtu 9206
>>>>>>>     flags: admin-up pmd maybe-multiseg rx-ip4-cksum
>>>>>>>     rx: queues 4 (max 1024), desc 1024 (min 0 max 65535 align 1)
>>>>>>>     tx: queues 5 (max 1024), desc 1024 (min 0 max 65535 align 1)
>>>>>>>     pci: device 15b3:1013 subsystem 1590:00c8 address 0000:12:00.01 
>>>>>>> numa 0
>>>>>>>     switch info: name 0000:12:00.1 domain id 1 port id 65535
>>>>>>>     max rx packet len: 65536
>>>>>>>     promiscuous: unicast off all-multicast on
>>>>>>>     vlan offload: strip off filter off qinq off
>>>>>>>     rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum 
>>>>>>> vlan-filter
>>>>>>>                        jumbo-frame scatter timestamp keep-crc rss-hash
>>>>>>>     rx offload active: ipv4-cksum udp-cksum tcp-cksum jumbo-frame 
>>>>>>> scatter
>>>>>>>     tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum 
>>>>>>> tcp-tso
>>>>>>>                        outer-ipv4-cksum vxlan-tnl-tso gre-tnl-tso 
>>>>>>> geneve-tnl-tso
>>>>>>>                        multi-segs udp-tnl-tso ip-tnl-tso
>>>>>>>     tx offload active: multi-segs
>>>>>>>     rss avail:         ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 
>>>>>>> ipv6-tcp-ex
>>>>>>>                        ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp 
>>>>>>> ipv6-other
>>>>>>>                        ipv6-ex ipv6 l4-dst-only l4-src-only l3-dst-only 
>>>>>>> l3-src-only
>>>>>>>     rss active:        ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 
>>>>>>> ipv6-tcp-ex
>>>>>>>                        ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp 
>>>>>>> ipv6-other
>>>>>>>                        ipv6-ex ipv6
>>>>>>>     tx burst mode: No MPW + MULTI + TSO + INLINE + METADATA
>>>>>>>     rx burst mode: Scalar
>>>>>> 
>>>>>> FC: Not sure why (might not be supported) but the offloads are not 
>>>>>> enabled in dpdk_lib_init for VNET_DPDK_PMD_MLX* nics. You could try 
>>>>>> replicating what’s done for the Intel cards and see if that works. 
>>>>>> Alternatively, you might want to try the rdma driver, although I don’t 
>>>>>> know if it supports csum offloading (cc Ben and Damjan). 
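>>>>>> 
>>>>>> Roughly what that would look like, as an untested sketch (the case label 
>>>>>> and flag name are my assumption of the 20.05-era dpdk plugin code):
>>>>>> 
>>>>>>   /* sketch: in dpdk_lib_init()'s per-PMD switch, the Intel cases set
>>>>>>    * DPDK_DEVICE_FLAG_TX_OFFLOAD (assumed name); the MLX cases do not.
>>>>>>    * Adding it for the MLX5 PMD would advertise tx csum offload: */
>>>>>>   case VNET_DPDK_PMD_MLX5:
>>>>>>     xd->flags |= DPDK_DEVICE_FLAG_TX_OFFLOAD;
>>>>>>     break;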
>>>>>> 
>>>>>>>
>>>>>>> 2) My application needs to send a routing header (SRv6) and a Destination 
>>>>>>> Options extension header. On RedHat 8.1, I was using a socket option to 
>>>>>>> add the routing and destination options extension headers.
>>>>>>> With VPP, I can use an SRv6 policy to let VPP add the routing header. 
>>>>>>> But I am not sure if there is any option in VPP or the host stack to add 
>>>>>>> the destination options header.
>>>>>> 
>>>>>> FC: We don’t currently support this. 
>>>>>> 
>>>>>> Regards,
>>>>>> Florin
>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> Coming back to the original problem, here are the traces- 
>>>>>>> 
>>>>>>> VCL<39673>: configured VCL debug level (2) from VCL_DEBUG!
>>>>>>> VCL<39673>: using default heapsize 268435456 (0x10000000)
>>>>>>> VCL<39673>: allocated VCL heap = 0x7f6b40221010, size 268435456 
>>>>>>> (0x10000000)
>>>>>>> VCL<39673>: using default configuration.
>>>>>>> vppcom_connect_to_vpp:487: vcl<39673:0>: app (udp6_rx) connecting to 
>>>>>>> VPP api (/vpe-api)...
>>>>>>> vppcom_connect_to_vpp:502: vcl<39673:0>: app (udp6_rx) is connected to 
>>>>>>> VPP!
>>>>>>> vppcom_app_create:1200: vcl<39673:0>: sending session enable
>>>>>>> vppcom_app_create:1208: vcl<39673:0>: sending app attach
>>>>>>> vppcom_app_create:1217: vcl<39673:0>: app_name 'udp6_rx', 
>>>>>>> my_client_index 0 (0x0)
>>>>>>> vppcom_connect_to_vpp:487: vcl<39673:1>: app (udp6_rx-wrk-1) connecting 
>>>>>>> to VPP api (/vpe-api)...
>>>>>>> vppcom_connect_to_vpp:502: vcl<39673:1>: app (udp6_rx-wrk-1) is 
>>>>>>> connected to VPP!
>>>>>>> vcl_worker_register_with_vpp:262: vcl<39673:1>: added worker 1
>>>>>>> vl_api_app_worker_add_del_reply_t_handler:235: vcl<94:-1>: worker 1 
>>>>>>> vpp-worker 1 added
>>>>>>> vppcom_epoll_create:2558: vcl<39673:1>: Created vep_idx 0
>>>>>>> vppcom_session_create:1279: vcl<39673:1>: created session 1
>>>>>>> vppcom_session_bind:1426: vcl<39673:1>: session 1 handle 16777217: 
>>>>>>> binding to local IPv6 address 2001:5b0:ffff:700:b883:31f:29e:9880 port 
>>>>>>> 6677, proto UDPC
>>>>>>> vppcom_session_listen:1458: vcl<39673:1>: session 16777217: sending vpp 
>>>>>>> listen request...
>>>>>>> 
>>>>>>> #1  0x00007ffff7761259 in session_listen (ls=<optimized out>, 
>>>>>>> sep=sep@entry=0x7fffb575ad50)
>>>>>>>     at 
>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/session_types.h:247
>>>>>>> #2  0x00007ffff7788b5f in app_listener_alloc_and_init 
>>>>>>> (app=app@entry=0x7fffb7273038, sep=sep@entry=0x7fffb575ad50,
>>>>>>>     listener=listener@entry=0x7fffb575ad28) at 
>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/application.c:196
>>>>>>> #3  0x00007ffff7788ef8 in vnet_listen (a=a@entry=0x7fffb575ad50)
>>>>>>>     at 
>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/application.c:1005
>>>>>>> #4  0x00007ffff7779e20 in session_mq_listen_handler (data=0x13007ec01)
>>>>>>>     at 
>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/session_node.c:64
>>>>>>> #5  session_mq_listen_handler (data=data@entry=0x13007ec01)
>>>>>>>     at 
>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/session_node.c:36
>>>>>>> #6  0x00007ffff7bbcdd9 in vl_api_rpc_call_t_handler (mp=0x13007ebe8)
>>>>>>>     at 
>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlibmemory/vlib_api.c:520
>>>>>>> #7  0x00007ffff7bc5ecd in vl_msg_api_handler_with_vm_node 
>>>>>>> (am=am@entry=0x7ffff7dd2ea0 <api_global_main>, vlib_rp=<optimized out>,
>>>>>>>     the_msg=0x13007ebe8, vm=vm@entry=0x7ffff6d7c200 <vlib_global_main>, 
>>>>>>> node=node@entry=0x7fffb571a000, is_private=is_private@entry=0 '\000')
>>>>>>>     at 
>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlibapi/api_shared.c:609
>>>>>>> #8  0x00007ffff7baff06 in vl_mem_api_handle_rpc 
>>>>>>> (vm=vm@entry=0x7ffff6d7c200 <vlib_global_main>, 
>>>>>>> node=node@entry=0x7fffb571a000)
>>>>>>>     at 
>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlibmemory/memory_api.c:748
>>>>>>> #9  0x00007ffff7bbd5d3 in vl_api_clnt_process (vm=<optimized out>, 
>>>>>>> node=0x7fffb571a000, f=<optimized out>)
>>>>>>>     at 
>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlibmemory/vlib_api.c:326
>>>>>>> #10 0x00007ffff6b1b136 in vlib_process_bootstrap (_a=<optimized out>)
>>>>>>>     at 
>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/main.c:1502
>>>>>>> #11 0x00007ffff602fc0c in clib_calljmp () from 
>>>>>>> /lib64/libvppinfra.so.20.05
>>>>>>> #12 0x00007fffb5e34dd0 in ?? ()
>>>>>>> #13 0x00007ffff6b1e771 in vlib_process_startup (f=0x0, 
>>>>>>> p=0x7fffb571a000, vm=0x7ffff6d7c200 <vlib_global_main>)
>>>>>>>     at 
>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vppinfra/types.h:133
>>>>>>> #14 dispatch_process (vm=0x7ffff6d7c200 <vlib_global_main>, 
>>>>>>> p=0x7fffb571a000, last_time_stamp=12611933408198086, f=0x0)
>>>>>>>     at 
>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/main.c:1569
>>>>>>> 
>>>>>>> thanks,
>>>>>>> -Raj
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> On Sat, May 16, 2020 at 8:18 PM Florin Coras <fcoras.li...@gmail.com> wrote:
>>>>>>> Hi Raj, 
>>>>>>> 
>>>>>>> Inline.
>>>>>>> 
>>>>>>>> On May 16, 2020, at 2:30 PM, Raj Kumar <raj.gauta...@gmail.com> wrote:
>>>>>>>> 
>>>>>>>>  Hi Florin,
>>>>>>>> 
>>>>>>>> I am using VPP 20.05 rc0 . Should I upgrade it ? 
>>>>>>> 
>>>>>>> FC: Not necessarily, as long as it’s relatively recent, i.e., it 
>>>>>>> includes all of the recent udp updates. 
>>>>>>> 
>>>>>>>> 
>>>>>>>> Thanks for providing the patch, I will try it on Monday. Actually, I 
>>>>>>>> am testing in a controlled environment where I cannot change the VPP 
>>>>>>>> libraries. I will try it on my server.
>>>>>>> 
>>>>>>> FC: Sounds good. Let me know how it goes!
>>>>>>> 
>>>>>>>> 
>>>>>>>> On the UDP connection: yes, the error EINPROGRESS was there because 
>>>>>>>> I am using a non-blocking connection. Now I am ignoring this error. 
>>>>>>>> Sometimes, VPP crashes when I kill my application (not gracefully), 
>>>>>>>> even when there is a single connection.
>>>>>>> 
>>>>>>> FC: That might have to do with the app dying such that 1) it does not 
>>>>>>> detach from vpp (e.g., sigkill and atexit function is not executed) 2) 
>>>>>>> it dies with the message queue mutex held and 3) vpp tries to enqueue 
>>>>>>> more events before detecting that it crashed (~30s).
>>>>>>> 
>>>>>>>> 
>>>>>>>> The good part is that now I am able to move connections to different 
>>>>>>>> cores by connecting on receipt of the first packet and then 
>>>>>>>> re-binding the socket to listen.
>>>>>>>> Basically, this approach works, but I have not tested it thoroughly. 
>>>>>>>> However, I am still in favor of using the UDPC connection.
>>>>>>> 
>>>>>>> FC: If you have enough logic in your app to emulate a handshake, i.e., 
>>>>>>> always have the client send a few bytes and wait for a reply from the 
>>>>>>> server before opening a new connection, then this approach is probably 
>>>>>>> more flexible from a core-placement perspective.
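>>>>>>> 
>>>>>>> Client-side, that emulated handshake could look roughly like this (a 
>>>>>>> sketch only; server_ep naming is mine and error handling is elided):
>>>>>>> 
>>>>>>>   /* sketch: probe the server first so it can connect/migrate our flow */
>>>>>>>   int s = vppcom_session_create (VPPCOM_PROTO_UDP, 0 /* blocking */);
>>>>>>>   vppcom_session_connect (s, &server_ep);     /* server_ep: server ip:port */
>>>>>>>   vppcom_session_write (s, "hi", 2);          /* server learns our ip:port */
>>>>>>>   char buf[64];
>>>>>>>   vppcom_session_read (s, buf, sizeof (buf)); /* wait for the reply before
>>>>>>>                                                  sending real data */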
>>>>>>> 
>>>>>>> The patch tries to emulate the old udpc with udp (udpc in vpp was 
>>>>>>> confusing for consumers). You might get away with listening from 
>>>>>>> multiple vcl workers on the same udp ip:port pair and have vpp load 
>>>>>>> balance accepts between them, but I’ve never tested that. You can do 
>>>>>>> this only with udp listeners that have been initialized as connected 
>>>>>>> (so only with the patch). 
>>>>>>> 
>>>>>>>> 
>>>>>>>> Btw, in the trace logs I see an ssvm_delete-related error when 
>>>>>>>> re-binding the connection.
>>>>>>> 
>>>>>>> FC: I think it’s fine. Going over the interactions step by step to see 
>>>>>>> if I understand what you’re doing (and hopefully help you understand what 
>>>>>>> vpp does underneath). 
>>>>>>> 
>>>>>>>> 
>>>>>>>> vpp# sh session verbose
>>>>>>>> Connection                                        State          Rx-f      Tx-f
>>>>>>>> [0:0][U] 2001:5b0:ffff:700:b883:31f:29e:9880:6677-LISTEN         0         0
>>>>>>>> Thread 0: active sessions 1
>>>>>>>> 
>>>>>>>> Connection                                        State          Rx-f      Tx-f
>>>>>>>> [1:0][U] 2001:5b0:ffff:700:b883:31f:29e:9880:6677-OPENED         0         0
>>>>>>>> Thread 1: active sessions 1
>>>>>>>> 
>>>>>>>> Connection                                        State          Rx-f      Tx-f
>>>>>>>> [2:0][U] 2001:5b0:ffff:700:b883:31f:29e:9880:6677-OPENED         0         0
>>>>>>>> Thread 2: active sessions 1
>>>>>>>> Thread 3: no sessions
>>>>>>>> Thread 4: no sessions
>>>>>>>> 
>>>>>>>> VCL<24434>: configured VCL debug level (2) from VCL_DEBUG!
>>>>>>>> VCL<24434>: using default heapsize 268435456 (0x10000000)
>>>>>>>> VCL<24434>: allocated VCL heap = 0x7f7f18d1b010, size 268435456 
>>>>>>>> (0x10000000)
>>>>>>>> VCL<24434>: using default configuration.
>>>>>>>> vppcom_connect_to_vpp:487: vcl<24434:0>: app (udp6_rx) connecting to 
>>>>>>>> VPP api (/vpe-api)...
>>>>>>>> vppcom_connect_to_vpp:502: vcl<24434:0>: app (udp6_rx) is connected to 
>>>>>>>> VPP!
>>>>>>>> vppcom_app_create:1200: vcl<24434:0>: sending session enable
>>>>>>>> vppcom_app_create:1208: vcl<24434:0>: sending app attach
>>>>>>>> vppcom_app_create:1217: vcl<24434:0>: app_name 'udp6_rx', 
>>>>>>>> my_client_index 0 (0x0)
>>>>>>> 
>>>>>>> FC: Added worker 0
>>>>>>> 
>>>>>>>> vppcom_connect_to_vpp:487: vcl<24434:1>: app (udp6_rx-wrk-1) 
>>>>>>>> connecting to VPP api (/vpe-api)...
>>>>>>>> vppcom_connect_to_vpp:502: vcl<24434:1>: app (udp6_rx-wrk-1) is 
>>>>>>>> connected to VPP!
>>>>>>>> vcl_worker_register_with_vpp:262: vcl<24434:1>: added worker 1
>>>>>>> 
>>>>>>>> vl_api_app_worker_add_del_reply_t_handler:235: vcl<94:-1>: worker 1 
>>>>>>>> vpp-worker 1 added
>>>>>>> 
>>>>>>> FC: Adding worker 1
>>>>>>> 
>>>>>>>> vppcom_epoll_create:2558: vcl<24434:1>: Created vep_idx 0
>>>>>>>> vppcom_session_create:1279: vcl<24434:1>: created session 1
>>>>>>>> vppcom_session_bind:1426: vcl<24434:1>: session 1 handle 16777217: 
>>>>>>>> binding to local IPv6 address 2001:5b0:ffff:700:b883:31f:29e:9880 port 
>>>>>>>> 6677, proto UDP
>>>>>>>> vppcom_session_listen:1458: vcl<24434:1>: session 16777217: sending 
>>>>>>>> vpp listen request...
>>>>>>>> vcl_session_bound_handler:607: vcl<24434:1>: session 1 [0x0]: listen 
>>>>>>>> succeeded!
>>>>>>>> vppcom_epoll_ctl:2658: vcl<24434:1>: EPOLL_CTL_ADD: vep_sh 16777216, 
>>>>>>>> sh 16777217, events 0x1, data 0xffffffff!
>>>>>>> 
>>>>>>> FC: Listened on session 1 and added it to epoll session 0
>>>>>>> 
>>>>>>>>  udpRxThread started!!!  ... rx port  = 6677
>>>>>>>> Waiting for a client to connect on port 6677 ...
>>>>>>>> vppcom_session_connect:1742: vcl<24434:1>: session handle 16777217 
>>>>>>>> (STATE_CLOSED): connecting to peer IPv6 
>>>>>>>> 2001:5b0:ffff:700:b883:31f:29e:9886 port 40300 proto UDP
>>>>>>> 
>>>>>>> FC: I guess at this point you got data on the listener so you now try 
>>>>>>> to connect it to the peer. 
>>>>>>> 
>>>>>>>> vppcom_epoll_ctl:2696: vcl<24434:1>: EPOLL_CTL_MOD: vep_sh 16777216, 
>>>>>>>> sh 16777217, events 0x2011, data 0x1000001!
>>>>>>> 
>>>>>>>> vppcom_session_create:1279: vcl<24434:1>: created session 2
>>>>>>>> vppcom_session_bind:1426: vcl<24434:1>: session 2 handle 16777218: 
>>>>>>>> binding to local IPv6 address 2001:5b0:ffff:700:b883:31f:29e:9880 port 
>>>>>>>> 6677, proto UDP
>>>>>>>> vppcom_session_listen:1458: vcl<24434:1>: session 16777218: sending 
>>>>>>>> vpp listen request...
>>>>>>> 
>>>>>>> FC: Request to create new listener. 
>>>>>>> 
>>>>>>>> vcl_session_app_add_segment_handler:855: vcl<24434:1>: mapped new 
>>>>>>>> segment '24177-2' size 134217728
>>>>>>> 
>>>>>>> FC: This is probably the connects segment manager, which was just 
>>>>>>> created with its first segment in it. 
>>>>>>> 
>>>>>>>> vcl_session_connected_handler:505: vcl<24434:1>: session 1 
>>>>>>>> [0x100000000] connected! rx_fifo 0x224051c80, refcnt 1, tx_fifo 
>>>>>>>> 0x224051b80, refcnt 1
>>>>>>> 
>>>>>>> FC: Connect for previous listener (session 1) succeeded. 
>>>>>>> 
>>>>>>>> vcl_session_app_add_segment_handler:855: vcl<24434:1>: mapped new 
>>>>>>>> segment '24177-3' size 134217728
>>>>>>> 
>>>>>>> FC: This is the new listener’s first segment manager segment. So 
>>>>>>> session 2 has segment 24177-3 associated with it. 
>>>>>>> 
>>>>>>>> vcl_session_bound_handler:607: vcl<24434:1>: session 2 [0x0]: listen 
>>>>>>>> succeeded!
>>>>>>>> vppcom_epoll_ctl:2658: vcl<24434:1>: EPOLL_CTL_ADD: vep_sh 16777216, 
>>>>>>>> sh 16777218, events 0x1, data 0xffffffff!
>>>>>>> 
>>>>>>> FC: Listen succeeded on session 2 and it was added to the vep group. 
>>>>>>> 
>>>>>>>> vcl_session_migrated_handler:674: vcl<24434:1>: Migrated 0x100000000 
>>>>>>>> to thread 2 0x200000000
>>>>>>> 
>>>>>>> FC: You got new data on the connected session (session 1) and the 
>>>>>>> session was migrated to the rss selected thread in vpp. 
>>>>>>> 
>>>>>>>>  new connecton
>>>>>>>> vppcom_session_connect:1742: vcl<24434:1>: session handle 16777218 
>>>>>>>> (STATE_CLOSED): connecting to peer IPv6 
>>>>>>>> 2001:5b0:ffff:700:b883:31f:29e:9886 port 60725 proto UDP
>>>>>>> 
>>>>>>> FC: Connecting session 2 (the latest listener) 
>>>>>>> 
>>>>>>>> vppcom_epoll_ctl:2696: vcl<24434:1>: EPOLL_CTL_MOD: vep_sh 16777216, 
>>>>>>>> sh 16777218, events 0x2011, data 0x1000002!
>>>>>>> 
>>>>>>>> vppcom_session_create:1279: vcl<24434:1>: created session 3
>>>>>>>> vppcom_session_bind:1426: vcl<24434:1>: session 3 handle 16777219: 
>>>>>>>> binding to local IPv6 address 2001:5b0:ffff:700:b883:31f:29e:9880 port 
>>>>>>>> 6677, proto UDP
>>>>>>>> vppcom_session_listen:1458: vcl<24434:1>: session 16777219: sending 
>>>>>>>> vpp listen request...
>>>>>>> 
>>>>>>> FC: Trying to listen on a new session (session 3)
>>>>>>> 
>>>>>>>> ssvm_delete_shm:205: unlink segment '24177-3': No such file or 
>>>>>>>> directory (errno 2)
>>>>>>> 
>>>>>>> FC: This is okay, I think, because vpp already deleted the shm segment 
>>>>>>> (so there’s nothing left to delete). 
>>>>>>> 
>>>>>>> You might want to consider using memfd segments (although it involves a 
>>>>>>> bit of configuration like here [1]). 
>>>>>>> 
>>>>>>> [1] https://wiki.fd.io/view/VPP/HostStack/LDP/iperf
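>>>>>>> 
>>>>>>> From memory, the relevant knobs are along these lines (an 
>>>>>>> assumption-laden sketch; check the wiki page for the authoritative 
>>>>>>> config):
>>>>>>> 
>>>>>>>   # startup.conf: put session event queues in a memfd segment and
>>>>>>>   # expose the api over a socket (values illustrative)
>>>>>>>   session { evt_qs_memfd_seg }
>>>>>>>   socksvr { socket-name /tmp/vpp-api.sock }
>>>>>>> 
>>>>>>>   # vcl.conf: use eventfd mq signaling and the socket api
>>>>>>>   vcl {
>>>>>>>     use-mq-eventfd
>>>>>>>     api-socket-name /tmp/vpp-api.sock
>>>>>>>   }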
>>>>>>> 
>>>>>>>> vcl_segment_detach:467: vcl<24434:1>: detached segment 3 handle 0
>>>>>>>> vcl_session_app_del_segment_handler:863: vcl<24434:1>: Unmapped 
>>>>>>>> segment: 0
>>>>>>> 
>>>>>>> FC: Because session 2 stopped listening, the underlying segment manager 
>>>>>>> (all listeners have a segment manager in vpp) was removed. VPP forced 
>>>>>>> vcl to unmap the segment as well.
>>>>>>> 
>>>>>>>> vcl_session_connected_handler:505: vcl<24434:1>: session 2 
>>>>>>>> [0x100000000] connected! rx_fifo 0x224051a80, refcnt 1, tx_fifo 
>>>>>>>> 0x224051980, refcnt 1
>>>>>>>> vcl_session_app_add_segment_handler:855: vcl<24434:1>: mapped new 
>>>>>>>> segment '24177-4' size 134217728
>>>>>>>> vcl_session_bound_handler:607: vcl<24434:1>: session 3 [0x0]: listen 
>>>>>>>> succeeded!
>>>>>>>> vppcom_epoll_ctl:2658: vcl<24434:1>: EPOLL_CTL_ADD: vep_sh 16777216, 
>>>>>>>> sh 16777219, events 0x1, data 0xffffffff!
>>>>>>> 
>>>>>>> FC: New listener (session 3) got new segment (and segment manager) and 
>>>>>>> it was added to epoll group. 
>>>>>>> 
>>>>>>> Regards, 
>>>>>>> Florin
>>>>>>> 
>>>>>>>> 
>>>>>>>> thanks,
>>>>>>>> -Raj
>>>>>>>> 
>>>>>>>> 
>>>>>>>> On Sat, May 16, 2020 at 2:23 PM Florin Coras <fcoras.li...@gmail.com> wrote:
>>>>>>>> Hi Raj, 
>>>>>>>> 
>>>>>>>> Assuming you are trying to open more than one connected udp session, 
>>>>>>>> does this [1] solve the problem (note it's untested)?
>>>>>>>> 
>>>>>>>> To reproduce legacy behavior, this allows you to listen on 
>>>>>>>> VPPCOM_PROTO_UDPC but that is now converted by vcl into a udp listen 
>>>>>>>> that propagates with a “connected” flag to vpp. That should result in 
>>>>>>>> a udp listener that behaves like an “old” udpc listener. 
>>>>>>>> 
>>>>>>>> Regards,
>>>>>>>> Florin
>>>>>>>> 
>>>>>>>> [1] https://gerrit.fd.io/r/c/vpp/+/27111
>>>>>>>> 
>>>>>>>>> On May 16, 2020, at 10:56 AM, Florin Coras via lists.fd.io <fcoras.lists=gmail....@lists.fd.io> wrote:
>>>>>>>>> 
>>>>>>>>> Hi Raj, 
>>>>>>>>> 
>>>>>>>>> Are you using master latest/20.05 rc1 or something older? The fact 
>>>>>>>>> that you’re getting a -115 (EINPROGRESS) suggests you might’ve marked 
>>>>>>>>> the connection as “non-blocking” although you created it as blocking. 
>>>>>>>>> If that’s so, the return value is not an error. 
>>>>>>>>> 
>>>>>>>>> Also, how is vpp crashing? Are you by chance trying to open a lot of 
>>>>>>>>> udp connections back to back? 
>>>>>>>>> 
>>>>>>>>> Regards,
>>>>>>>>> Florin
>>>>>>>>> 
>>>>>>>>>> On May 16, 2020, at 10:23 AM, Raj Kumar <raj.gauta...@gmail.com> wrote:
>>>>>>>>>> 
>>>>>>>>>> Hi Florin,
>>>>>>>>>> I tried to connect on receiving the first UDP packet, but it did 
>>>>>>>>>> not work. I am getting error -115 in the application, and VPP is 
>>>>>>>>>> crashing.
>>>>>>>>>> 
>>>>>>>>>> This is something I tried in the code (udp receiver) -
>>>>>>>>>> sockfd = vppcom_session_create (VPPCOM_PROTO_UDP, 0 /* blocking */);
>>>>>>>>>> rv_vpp = vppcom_session_bind (sockfd, &endpt);
>>>>>>>>>> if (FD_ISSET (session_idx, &readfds))
>>>>>>>>>>   {
>>>>>>>>>>     n = vppcom_session_recvfrom (sockfd, (char *) buffer, MAXLINE, 0,
>>>>>>>>>>                                  &client);
>>>>>>>>>>     if (first_pkt)
>>>>>>>>>>       {
>>>>>>>>>>         rv_vpp = vppcom_session_connect (sockfd, &client);
>>>>>>>>>>         /* here rv_vpp is -115, i.e. -EINPROGRESS */
>>>>>>>>>>       }
>>>>>>>>>>   }
>>>>>>>>>> Please let me know if I am doing something wrong.
>>>>>>>>>> 
>>>>>>>>>> Here are the traces - 
>>>>>>>>>> 
>>>>>>>>>> VCL<16083>: configured VCL debug level (2) from VCL_DEBUG!
>>>>>>>>>> VCL<16083>: using default heapsize 268435456 (0x10000000)
>>>>>>>>>> VCL<16083>: allocated VCL heap = 0x7fd255ed2010, size 268435456 
>>>>>>>>>> (0x10000000)
>>>>>>>>>> VCL<16083>: using default configuration.
>>>>>>>>>> vppcom_connect_to_vpp:487: vcl<16083:0>: app (udp6_rx) connecting to 
>>>>>>>>>> VPP api (/vpe-api)...
>>>>>>>>>> vppcom_connect_to_vpp:502: vcl<16083:0>: app (udp6_rx) is connected 
>>>>>>>>>> to VPP!
>>>>>>>>>> vppcom_app_create:1200: vcl<16083:0>: sending session enable
>>>>>>>>>> vppcom_app_create:1208: vcl<16083:0>: sending app attach
>>>>>>>>>> vppcom_app_create:1217: vcl<16083:0>: app_name 'udp6_rx', 
>>>>>>>>>> my_client_index 0 (0x0)
>>>>>>>>>> 
>>>>>>>>>> vppcom_connect_to_vpp:487: vcl<16083:1>: app (udp6_rx-wrk-1) 
>>>>>>>>>> connecting to VPP api (/vpe-api)...
>>>>>>>>>> vppcom_connect_to_vpp:502: vcl<16083:1>: app (udp6_rx-wrk-1) is 
>>>>>>>>>> connected to VPP!
>>>>>>>>>> vl_api_app_worker_add_del_reply_t_handler:235: vcl<94:-1>: worker 1 
>>>>>>>>>> vpp-worker 1 added
>>>>>>>>>> vcl_worker_register_with_vpp:262: vcl<16083:1>: added worker 1
>>>>>>>>>> vppcom_session_create:1279: vcl<16083:1>: created session 0
>>>>>>>>>> vppcom_session_bind:1426: vcl<16083:1>: session 0 handle 16777216: 
>>>>>>>>>> binding to local IPv6 address 2001:5b0:ffff:700:b883:31f:29e:9880 
>>>>>>>>>> port 6677, proto UDP
>>>>>>>>>> vppcom_session_listen:1458: vcl<16083:1>: session 16777216: sending 
>>>>>>>>>> vpp listen request...
>>>>>>>>>> vcl_session_bound_handler:607: vcl<16083:1>: session 0 [0x0]: listen 
>>>>>>>>>> succeeded!
>>>>>>>>>> vppcom_session_connect:1742: vcl<16083:1>: session handle 16777216 
>>>>>>>>>> (STATE_CLOSED): connecting to peer IPv6 
>>>>>>>>>> 2001:5b0:ffff:700:b883:31f:29e:9886 port 51190 proto UDP
>>>>>>>>>>  udpRxThread started!!!  ... rx port  = 6677
>>>>>>>>>> vppcom_session_connect() failed ... -115
>>>>>>>>>> vcl_session_cleanup:1300: vcl<16083:1>: session 0 [0x0] closing
>>>>>>>>>> vcl_worker_cleanup_cb:190: vcl<94:-1>: cleaned up worker 1
>>>>>>>>>> vl_client_disconnect:309: peer unresponsive, give up
>>>>>>>>>> 
>>>>>>>>>> thanks,
>>>>>>>>>> -Raj
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> On Fri, May 15, 2020 at 8:10 PM Florin Coras <fcoras.li...@gmail.com> wrote:
>>>>>>>>>> Hi Raj, 
>>>>>>>>>> 
>>>>>>>>>> There are no explicit vcl apis that allow a udp listener to be 
>>>>>>>>>> switched to connected mode. We might decide to do this at one point 
>>>>>>>>>> through a new bind api (non-posix like) since we do support this for 
>>>>>>>>>> builtin applications. 
>>>>>>>>>> 
>>>>>>>>>> However, you now have the option of connecting a bound session. That 
>>>>>>>>>> is, on the first received packet on a udp listener, you can grab the 
>>>>>>>>>> peer’s address and connect it. Iperf3 in udp mode, which is part of 
>>>>>>>>>> our make test infra, does exactly that. Subsequently, it re-binds 
>>>>>>>>>> the port to accept more connections. Would that work for you?
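>>>>>>>>>> 
>>>>>>>>>> An untested sketch of that flow with the vcl api (naming is mine, 
>>>>>>>>>> error handling elided; peer is a vppcom_endpt_t whose ip field must 
>>>>>>>>>> point to caller-provided storage):
>>>>>>>>>> 
>>>>>>>>>>   /* bind + listen on the local udp ip:port (ep: local vppcom_endpt_t) */
>>>>>>>>>>   int ls = vppcom_session_create (VPPCOM_PROTO_UDP, 0);
>>>>>>>>>>   vppcom_session_bind (ls, &ep);
>>>>>>>>>>   vppcom_session_listen (ls, 10);
>>>>>>>>>> 
>>>>>>>>>>   /* first packet: recvfrom fills in the peer, then connect to it */
>>>>>>>>>>   char buf[2048];
>>>>>>>>>>   int n = vppcom_session_recvfrom (ls, buf, sizeof (buf), 0, &peer);
>>>>>>>>>>   vppcom_session_connect (ls, &peer);  /* session is now connected */
>>>>>>>>>> 
>>>>>>>>>>   /* re-bind a fresh session to the same ip:port for the next peer */
>>>>>>>>>>   int ls2 = vppcom_session_create (VPPCOM_PROTO_UDP, 0);
>>>>>>>>>>   vppcom_session_bind (ls2, &ep);
>>>>>>>>>>   vppcom_session_listen (ls2, 10);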
>>>>>>>>>> 
>>>>>>>>>> Regards, 
>>>>>>>>>> Florin
>>>>>>>>>> 
>>>>>>>>>>> On May 15, 2020, at 4:06 PM, Raj Kumar <raj.gauta...@gmail.com> wrote:
>>>>>>>>>>> 
>>>>>>>>>>> Thanks, Florin!
>>>>>>>>>>> 
>>>>>>>>>>> OK, I understand that I need to change my application to use a UDP 
>>>>>>>>>>> socket and then use vppcom_session_connect(). 
>>>>>>>>>>> This is fine for the UDP client (sender).
>>>>>>>>>>> 
>>>>>>>>>>> But in the UDP server (receiver), I am not sure how to use 
>>>>>>>>>>> vppcom_session_connect().
>>>>>>>>>>> I am using vppcom_session_listen() to listen for connections and 
>>>>>>>>>>> then calling vppcom_session_accept() to accept a new connection.
>>>>>>>>>>> 
>>>>>>>>>>> With UDPC, I was able to utilize the RSS (receive side scaling) 
>>>>>>>>>>> feature to move the received connections to different 
>>>>>>>>>>> cores/threads.
>>>>>>>>>>> 
>>>>>>>>>>> Just want to confirm that I can achieve the same with UDP.
>>>>>>>>>>> 
>>>>>>>>>>> I will change my application and will update you about the result.
>>>>>>>>>>> 
>>>>>>>>>>> Thanks,
>>>>>>>>>>> -Raj
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> On Fri, May 15, 2020 at 5:17 PM Florin Coras <fcoras.li...@gmail.com> wrote:
>>>>>>>>>>> Hi Raj, 
>>>>>>>>>>> 
>>>>>>>>>>> We removed udpc transport in vpp. I’ll push a patch that removes it 
>>>>>>>>>>> from vcl as well. 
>>>>>>>>>>> 
>>>>>>>>>>> Calling connect on a udp connection will give you connected 
>>>>>>>>>>> semantics now. Let me know if that solves the issue for you.
>>>>>>>>>>> 
>>>>>>>>>>> Regards,
>>>>>>>>>>> Florin
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>>> On May 15, 2020, at 12:15 PM, Raj Kumar <raj.gauta...@gmail.com> wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>> Hi,
>>>>>>>>>>>> I am getting a segmentation fault in VPP when using a VCL 
>>>>>>>>>>>> VPPCOM_PROTO_UDPC socket. This issue is observed with both the UDP 
>>>>>>>>>>>> sender and UDP receiver applications.
>>>>>>>>>>>> 
>>>>>>>>>>>> However, both UDP sender and receiver work fine with 
>>>>>>>>>>>> VPPCOM_PROTO_UDP.
>>>>>>>>>>>> 
>>>>>>>>>>>> Here is the stack trace - 
>>>>>>>>>>>> 
>>>>>>>>>>>> (gdb) bt
>>>>>>>>>>>> #0  0x0000000000000000 in ?? ()
>>>>>>>>>>>> #1  0x00007ffff775da59 in session_open_vc (app_wrk_index=1, 
>>>>>>>>>>>> rmt=0x7fffb5e34cc0, opaque=0)
>>>>>>>>>>>>     at 
>>>>>>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/session.c:1217
>>>>>>>>>>>> #2  0x00007ffff7779257 in session_mq_connect_handler 
>>>>>>>>>>>> (data=0x7fffb676e7a8)
>>>>>>>>>>>>     at 
>>>>>>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/session_node.c:138
>>>>>>>>>>>> #3  0x00007ffff7780f48 in session_event_dispatch_ctrl 
>>>>>>>>>>>> (elt=0x7fffb643f51c, wrk=0x7fffb650a640)
>>>>>>>>>>>>     at 
>>>>>>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/session.h:262
>>>>>>>>>>>> #4  session_queue_node_fn (vm=<optimized out>, node=<optimized 
>>>>>>>>>>>> out>, frame=<optimized out>)
>>>>>>>>>>>>     at 
>>>>>>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/session_node.c:1409
>>>>>>>>>>>> #5  0x00007ffff6b214c1 in dispatch_node 
>>>>>>>>>>>> (last_time_stamp=<optimized out>, frame=0x0, 
>>>>>>>>>>>> dispatch_state=VLIB_NODE_STATE_POLLING,
>>>>>>>>>>>>     type=VLIB_NODE_TYPE_INPUT, node=0x7fffb5a9a980, 
>>>>>>>>>>>> vm=0x7ffff6d7c200 <vlib_global_main>)
>>>>>>>>>>>>     at 
>>>>>>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/main.c:1235
>>>>>>>>>>>> #6  vlib_main_or_worker_loop (is_main=1, vm=0x7ffff6d7c200 
>>>>>>>>>>>> <vlib_global_main>)
>>>>>>>>>>>>     at 
>>>>>>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/main.c:1815
>>>>>>>>>>>> #7  vlib_main_loop (vm=0x7ffff6d7c200 <vlib_global_main>) at 
>>>>>>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/main.c:1990
>>>>>>>>>>>> #8  vlib_main (vm=<optimized out>, vm@entry=0x7ffff6d7c200 
>>>>>>>>>>>> <vlib_global_main>, input=input@entry=0x7fffb5e34fa0)
>>>>>>>>>>>>     at 
>>>>>>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/main.c:2236
>>>>>>>>>>>> #9  0x00007ffff6b61756 in thread0 (arg=140737334723072) at 
>>>>>>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/unix/main.c:658
>>>>>>>>>>>> #10 0x00007ffff602fc0c in clib_calljmp () from 
>>>>>>>>>>>> /lib64/libvppinfra.so.20.05
>>>>>>>>>>>> #11 0x00007fffffffd1e0 in ?? ()
>>>>>>>>>>>> #12 0x00007ffff6b627ed in vlib_unix_main (argc=<optimized out>, 
>>>>>>>>>>>> argv=<optimized out>)
>>>>>>>>>>>>     at 
>>>>>>>>>>>> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/unix/main.c:730
>>>>>>>>>>>> 
>>>>>>>>>>>> Earlier, I tested this functionality with the VPP 20.01 release with 
>>>>>>>>>>>> the following patches, and it worked perfectly.
>>>>>>>>>>>> https://gerrit.fd.io/r/c/vpp/+/24332
>>>>>>>>>>>> https://gerrit.fd.io/r/c/vpp/+/24334
>>>>>>>>>>>> https://gerrit.fd.io/r/c/vpp/+/24462
>>>>>>>>>>>> 
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> -Raj
>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>>> <vppcom.c>
>>>>> 
>>>> 
>>> 
>> 
> 
