[vpp-dev] Crash in VPP 21.06 #vnet

2022-06-08 Thread Raj Kumar
Hi ,
I am observing an infrequent crash in VPP. I am using VCL to receive UDP 
packets in my application. We compiled VPP with DPDK and are using a Mellanox 
NIC to receive UDP packets (the MTU is set to 9000). Attached is the 
startup.conf file.
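Since the attached startup.conf is binary data and cannot be reproduced here, the following is only a representative sketch of what a DPDK/VCL startup.conf for this kind of setup might look like; every value (cores, PCI address, buffer size) is hypothetical, not taken from the poster's actual file:

```text
unix {
  nodaemon
  log /var/log/vpp/vpp.log
  cli-listen /run/vpp/cli.sock
}
cpu {
  main-core 1
  corelist-workers 2-5      # the report mentions 4 worker threads
}
buffers {
  default data-size 10240   # sized to accommodate a 9000-byte MTU
}
dpdk {
  dev 0000:3b:00.0 {        # hypothetical Mellanox NIC PCI address
    num-rx-queues 4
  }
}
```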

#0  0x7efd06b8837f in raise () from /lib64/libc.so.6
#1  0x7efd06b72db5 in abort () from /lib64/libc.so.6
#2  0x55ffa1ed1134 in os_exit (code=code@entry=1) at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vpp/vnet/main.c:431
#3  0x7efd07de215a in unix_signal_handler (signum=11, si=, 
uc=)
at /usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vlib/unix/main.c:187
#4  
#5  0x7efd07b4cbc6 in _mm_storeu_si128 (__B=..., __P=) at 
/usr/lib/gcc/x86_64-redhat-linux/8/include/emmintrin.h:721
#6  clib_mov16 (src=0x7efcb99f8c00 "`", dst=) at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vppinfra/memcpy_avx2.h:65
#7  clib_memcpy_fast_avx2 (n=45, src=0x7efcb99f8c00, dst=) at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vppinfra/memcpy_avx2.h:166
#8  clib_memcpy_fast (n=45, src=0x7efcb99f8c00, dst=) at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vppinfra/string.h:97
#9  svm_fifo_copy_to_chunk_ma_skx (f=0x7efccb716040, c=0x7ef44e1e2cb0, 
tail_idx=, src=0x7efcb99f8c00 "`", len=45, last=0x7ef44e000c08)
at /usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/svm/svm_fifo.c:53
#10 0x7efd07b49b73 in svm_fifo_copy_to_chunk (last=, 
len=, src=, tail_idx=, 
c=, f=0x7efccb716040)
at /usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/svm/svm_fifo.c:96
#11 svm_fifo_enqueue_segments (f=0x7efccb716040, 
segs=segs@entry=0x7efcb99f7fb0, n_segs=n_segs@entry=2, 
allow_partial=allow_partial@entry=0 '\000')
at /usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/svm/svm_fifo.c:1006
#12 0x7efd0883cd25 in session_enqueue_dgram_connection 
(s=s@entry=0x7efccb6d3580, hdr=0x7efcb99f8c00, b=0x1005415480, 
proto=proto@entry=1 '\001',
queue_event=queue_event@entry=1 '\001') at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vnet/session/session.c:637
#13 0x7efd0869e089 in udp_connection_enqueue (uc0=0x7efccb671e40, 
s0=0x7efccb6d3580, hdr0=hdr0@entry=0x7efcb99f8c00, 
thread_index=thread_index@entry=4,
b=b@entry=0x1005415480, queue_event=queue_event@entry=1 '\001', 
error0=0x7efcb99f80d4) at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vnet/udp/udp_input.c:159
#14 0x7efd0869e4d7 in udp46_input_inline (is_ip4=0 '\000', frame=, node=, vm=)
at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vnet/udp/udp_input.c:311
#15 udp6_input (vm=, node=, frame=) at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vnet/udp/udp_input.c:368
#16 0x7efd07d8e172 in dispatch_pending_node (vm=0x7efcca76a840, 
pending_frame_index=, last_time_stamp=)
at /usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vlib/main.c:1024
#17 0x7efd07d8fc6f in vlib_worker_loop (vm=vm@entry=0x7efcca76a840) at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vlib/main.c:1649
#18 0x7efd07dc8738 in vlib_worker_thread_fn (arg=) at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vlib/threads.c:1560
#19 0x7efd072efee8 in clib_calljmp () at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vppinfra/longjmp.S:123
#20 0x7efcba7fbd20 in ?? ()
#21 0x7efcc4ff41c5 in eal_thread_loop.cold () from 
/usr/lib/vpp_plugins/dpdk_plugin.so


startup.conf
Description: Binary data

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21514): https://lists.fd.io/g/vpp-dev/message/21514
Mute This Topic: https://lists.fd.io/mt/91623364/21656
Mute #vnet:https://lists.fd.io/g/vpp-dev/mutehashtag/vnet
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/leave/1480452/21656/631435203/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP 2005 is crashing on stopping the VCL applications #vpp-hoststack

2020-08-18 Thread Raj Kumar
Thanks! Dave

On Tue, Aug 18, 2020 at 9:59 AM Dave Wallace  wrote:

> Florin cherry-picked the change and I just merged it.
>
> Thanks,
> -daw-
>
> On 8/17/2020 5:19 PM, Dave Barach via lists.fd.io wrote:
>
> You can press the “cherrypick” button as easily as Florin... Hint...
>
>
>
> *From:* vpp-dev@lists.fd.io   *On
> Behalf Of *Raj Kumar
> *Sent:* Monday, August 17, 2020 5:09 PM
> *To:* Ayush Gautam  
> *Cc:* Florin Coras  ;
> vpp-dev  
> *Subject:* Re: [vpp-dev] VPP 2005 is crashing on stopping the VCL
> applications #vpp-hoststack
>
>
>
> Hi Florin,
>
> Can we please have the fix [1] on the "stable/2005" branch?
>
>
>
> [1] https://gerrit.fd.io/r/c/vpp/+/28173
>
>
>
>  Thanks,
>
> -Raj
>
>
>
> On Wed, Aug 5, 2020 at 7:30 PM Raj Kumar via lists.fd.io wrote:
>
> Hi Florin,
>
> Yes , this[1] fixed the issue.
>
>
>
> thanks,
>
> -Raj
>
>
>
> On Wed, Aug 5, 2020 at 1:57 AM Florin Coras 
> wrote:
>
> Hi Raj,
>
>
>
> Does this [1] fix the issue?
>
>
>
> Regards,
>
> Florin
>
>
>
> [1] https://gerrit.fd.io/r/c/vpp/+/28173
>
>
>
> On Aug 4, 2020, at 8:24 AM, Raj Kumar  wrote:
>
>
>
> Hi Florin,
>
> I tried vppcom_epoll_wait() on two different servers using a simple
> application (with only one worker thread). But vppcom_epoll_wait() still
> does not time out unless I use use-mq-eventfd.
>
> Here are the server details -
>
> server 1 - Red Hat 7.5, VPP release 20.01
>
> server 2 - CentOS 8.1, VPP release 20.05
>
>
>
> thanks,
>
> -Raj
>
>
>
>
>
> On Tue, Aug 4, 2020 at 10:24 AM Florin Coras 
> wrote:
>
> Hi Raj,
>
>
>
> Interesting. For some reason, the message queue’s underlying
> pthread_cond_timedwait does not work in your setup. Not sure what to make
> of that, unless maybe you’re trying to epoll wait from multiple threads
> that share the same message queue. That is not supported since each thread
> must have its own message queue, i.e., all threads that call epoll should
> be registered as workers. Alternatively, some form of locking or vls,
> instead of vcl, should be used.
>
>
>
> The downside to switching the message queue to eventfd notifications,
> instead of mutex/condvar, is that waits are inefficient, i.e., they act
> pretty much like spinlocks. Do keep that in mind.
>
>
>
> Regards,
>
> Florin
>
>
>
> On Aug 4, 2020, at 6:37 AM, Raj Kumar  wrote:
>
>
>
> Hi Florin,
>
> After adding use-mq-eventfd in VCL configuration, it is working as
> expected.
>
> Thanks! for your help.
>
>
>
> vcl {
>   rx-fifo-size 400
>   tx-fifo-size 400
>   app-scope-local
>   app-scope-global
>   use-mq-eventfd
>   api-socket-name /tmp/vpp-api.sock
> }
>
>
>
> thanks,
>
> -Raj
>
>
>
> On Tue, Aug 4, 2020 at 12:08 AM Florin Coras 
> wrote:
>
> Hi Raj,
>
>
>
> Glad to hear that issue is solved. What vcl config are you running? Did
> you configure use-mq-eventfd?
>
>
>
> Regards,
>
> Florin
>
>
>
> On Aug 3, 2020, at 8:33 PM, Raj Kumar  wrote:
>
>
>
> Hi Florin,
>
> This issue is resolved now. In my application, on receiving the kill
> signal, the main thread was calling pthread_cancel() on the child thread,
> and because of that the child thread was not exiting gracefully.
>
> I have one question: it seems that vppcom_epoll_wait(epfd, rcvEvents,
> MAX_RETURN_EVENTS, 6.0) does not return after the timeout when the timeout
> value is non-zero. It times out only when the timeout value is 0.
>
> The issue that I am facing is that if there is no traffic at all (the
> receiver is just listening on the connections), then the worker thread is
> not exiting, as it is blocked by vppcom_epoll_wait().
>
>
>
> Thanks,
>
> -Raj
>
>
>
>
>
>
>
> On Wed, Jul 29, 2020 at 11:23 PM Florin Coras 
> wrote:
>
> Hi Raj,
>
>
>
> In that case it should work. Just from the trace lower it’s hard to figure
> out what exactly happened. Also, keep in mind that vcl is not thread safe,
> so make sure you’re not trying to share sessions or allow two workers to
>  interact with the message queue(s) at the same time.
>
>
>
> Regards,
>
> Florin
>
>
>
> On Jul 29, 2020, at 8:17 PM, Raj Kumar  wrote:
>
>
>
> Hi Florin,
>
> I am using kill  to stop the application. But , the application has a
> kill signal handler and after receiving the signal it is exiting gracefully.
>
> About vppcom_app_exit, I think this 

Re: [vpp-dev] VPP 2005 is crashing on stopping the VCL applications #vpp-hoststack

2020-08-17 Thread Raj Kumar
Hi Florin,
Can we please have the fix [1] on the "stable/2005" branch?

[1] https://gerrit.fd.io/r/c/vpp/+/28173

 Thanks,
-Raj

On Wed, Aug 5, 2020 at 7:30 PM Raj Kumar via lists.fd.io  wrote:

> Hi Florin,
> Yes , this[1] fixed the issue.
>
> thanks,
> -Raj
>
> On Wed, Aug 5, 2020 at 1:57 AM Florin Coras 
> wrote:
>
>> Hi Raj,
>>
>> Does this [1] fix the issue?
>>
>> Regards,
>> Florin
>>
>> [1] https://gerrit.fd.io/r/c/vpp/+/28173
>>
>> On Aug 4, 2020, at 8:24 AM, Raj Kumar  wrote:
>>
>> Hi Florin,
>> I tried vppcom_epoll_wait() on two different servers using a simple
>> application (with only one worker thread). But vppcom_epoll_wait() still
>> does not time out unless I use use-mq-eventfd.
>> Here are the server details -
>> server 1 - Red Hat 7.5, VPP release 20.01
>> server 2 - CentOS 8.1, VPP release 20.05
>>
>> thanks,
>> -Raj
>>
>>
>> On Tue, Aug 4, 2020 at 10:24 AM Florin Coras 
>> wrote:
>>
>>> Hi Raj,
>>>
>>> Interesting. For some reason, the message queue’s underlying
>>> pthread_cond_timedwait does not work in your setup. Not sure what to make
>>> of that, unless maybe you’re trying to epoll wait from multiple threads
>>> that share the same message queue. That is not supported since each thread
>>> must have its own message queue, i.e., all threads that call epoll should
>>> be registered as workers. Alternatively, some form of locking or vls,
>>> instead of vcl, should be used.
>>>
>>> The downside to switching the message queue to eventfd notifications,
>>> instead of mutex/condvar, is that waits are inefficient, i.e., they act
>>> pretty much like spinlocks. Do keep that in mind.
>>>
>>> Regards,
>>> Florin
>>>
>>> On Aug 4, 2020, at 6:37 AM, Raj Kumar  wrote:
>>>
>>> Hi Florin,
>>> After adding use-mq-eventfd in VCL configuration, it is working as
>>> expected.
>>> Thanks! for your help.
>>>
>>> vcl {
>>>   rx-fifo-size 400
>>>   tx-fifo-size 400
>>>   app-scope-local
>>>   app-scope-global
>>>   use-mq-eventfd
>>>   api-socket-name /tmp/vpp-api.sock
>>> }
>>>
>>> thanks,
>>> -Raj
>>>
>>> On Tue, Aug 4, 2020 at 12:08 AM Florin Coras 
>>> wrote:
>>>
>>>> Hi Raj,
>>>>
>>>> Glad to hear that issue is solved. What vcl config are you running? Did
>>>> you configure use-mq-eventfd?
>>>>
>>>> Regards,
>>>> Florin
>>>>
>>>> On Aug 3, 2020, at 8:33 PM, Raj Kumar  wrote:
>>>>
>>>> Hi Florin,
>>>> This issue is resolved now. In my application, on receiving the kill
>>>> signal, the main thread was calling pthread_cancel() on the child thread,
>>>> and because of that the child thread was not exiting gracefully.
>>>> I have one question: it seems that vppcom_epoll_wait(epfd, rcvEvents,
>>>> MAX_RETURN_EVENTS, 6.0) does not return after the timeout when the timeout
>>>> value is non-zero. It times out only when the timeout value is 0.
>>>> The issue that I am facing is that if there is no traffic at all (the
>>>> receiver is just listening on the connections), then the worker thread is
>>>> not exiting, as it is blocked by vppcom_epoll_wait().
>>>>
>>>> Thanks,
>>>> -Raj
>>>>
>>>>
>>>>
>>>> On Wed, Jul 29, 2020 at 11:23 PM Florin Coras 
>>>> wrote:
>>>>
>>>>> Hi Raj,
>>>>>
>>>>> In that case it should work. Just from the trace lower it’s hard to
>>>>> figure out what exactly happened. Also, keep in mind that vcl is not 
>>>>> thread
>>>>> safe, so make sure you’re not trying to share sessions or allow two 
>>>>> workers
>>>>> to  interact with the message queue(s) at the same time.
>>>>>
>>>>> Regards,
>>>>> Florin
>>>>>
>>>>> On Jul 29, 2020, at 8:17 PM, Raj Kumar  wrote:
>>>>>
>>>>> Hi Florin,
>>>>> I am using kill  to stop the application. But , the application
>>>>> has a kill signal handler and after receiving the signal it is exiting
>>>>> gracefully.
>>>>> About vppcom_app_exit, I think this function is registered with
&g

Re: [vpp-dev] VPP 2005 is crashing on stopping the VCL applications #vpp-hoststack

2020-08-05 Thread Raj Kumar
Hi Florin,
Yes , this[1] fixed the issue.

thanks,
-Raj

On Wed, Aug 5, 2020 at 1:57 AM Florin Coras  wrote:

> Hi Raj,
>
> Does this [1] fix the issue?
>
> Regards,
> Florin
>
> [1] https://gerrit.fd.io/r/c/vpp/+/28173
>
> On Aug 4, 2020, at 8:24 AM, Raj Kumar  wrote:
>
> Hi Florin,
> I tried vppcom_epoll_wait() on two different servers using a simple
> application (with only one worker thread). But vppcom_epoll_wait() still
> does not time out unless I use use-mq-eventfd.
> Here are the server details -
> server 1 - Red Hat 7.5, VPP release 20.01
> server 2 - CentOS 8.1, VPP release 20.05
>
> thanks,
> -Raj
>
>
> On Tue, Aug 4, 2020 at 10:24 AM Florin Coras 
> wrote:
>
>> Hi Raj,
>>
>> Interesting. For some reason, the message queue’s underlying
>> pthread_cond_timedwait does not work in your setup. Not sure what to make
>> of that, unless maybe you’re trying to epoll wait from multiple threads
>> that share the same message queue. That is not supported since each thread
>> must have its own message queue, i.e., all threads that call epoll should
>> be registered as workers. Alternatively, some form of locking or vls,
>> instead of vcl, should be used.
>>
>> The downside to switching the message queue to eventfd notifications,
>> instead of mutex/condvar, is that waits are inefficient, i.e., they act
>> pretty much like spinlocks. Do keep that in mind.
>>
>> Regards,
>> Florin
>>
>> On Aug 4, 2020, at 6:37 AM, Raj Kumar  wrote:
>>
>> Hi Florin,
>> After adding use-mq-eventfd in VCL configuration, it is working as
>> expected.
>> Thanks! for your help.
>>
>> vcl {
>>   rx-fifo-size 400
>>   tx-fifo-size 400
>>   app-scope-local
>>   app-scope-global
>>   use-mq-eventfd
>>   api-socket-name /tmp/vpp-api.sock
>> }
>>
>> thanks,
>> -Raj
>>
>> On Tue, Aug 4, 2020 at 12:08 AM Florin Coras 
>> wrote:
>>
>>> Hi Raj,
>>>
>>> Glad to hear that issue is solved. What vcl config are you running? Did
>>> you configure use-mq-eventfd?
>>>
>>> Regards,
>>> Florin
>>>
>>> On Aug 3, 2020, at 8:33 PM, Raj Kumar  wrote:
>>>
>>> Hi Florin,
>>> This issue is resolved now. In my application, on receiving the kill
>>> signal, the main thread was calling pthread_cancel() on the child thread,
>>> and because of that the child thread was not exiting gracefully.
>>> I have one question: it seems that vppcom_epoll_wait(epfd, rcvEvents,
>>> MAX_RETURN_EVENTS, 6.0) does not return after the timeout when the timeout
>>> value is non-zero. It times out only when the timeout value is 0.
>>> The issue that I am facing is that if there is no traffic at all (the
>>> receiver is just listening on the connections), then the worker thread is
>>> not exiting, as it is blocked by vppcom_epoll_wait().
>>>
>>> Thanks,
>>> -Raj
>>>
>>>
>>>
>>> On Wed, Jul 29, 2020 at 11:23 PM Florin Coras 
>>> wrote:
>>>
>>>> Hi Raj,
>>>>
>>>> In that case it should work. Just from the trace lower it’s hard to
>>>> figure out what exactly happened. Also, keep in mind that vcl is not thread
>>>> safe, so make sure you’re not trying to share sessions or allow two workers
>>>> to  interact with the message queue(s) at the same time.
>>>>
>>>> Regards,
>>>> Florin
>>>>
>>>> On Jul 29, 2020, at 8:17 PM, Raj Kumar  wrote:
>>>>
>>>> Hi Florin,
>>>> I am using kill  to stop the application. But , the application
>>>> has a kill signal handler and after receiving the signal it is exiting
>>>> gracefully.
>>>> About vppcom_app_exit, I think this function is registered with
>>>> atexit() inside vppcom_app_create(), so it should be called when the application
>>>> exits.
>>>> Even, I also tried this vppcom_app_exit() explicitly before exiting the
>>>> application but still I am seeing the same issue.
>>>>
>>>> My application is a multithreaded application. Can you please suggest
>>>> some cleanup functions ( vppcom functions) that  I should call before
>>>> exiting a thread and the main application for a proper cleanup.
>>>> I also tried vppcom_app_destroy() before exiting the main application
>>>> but still I am seeing the same issue.
>>>>
>>>> thanks,
>

Re: [vpp-dev] VPP 2005 is crashing on stopping the VCL applications #vpp-hoststack

2020-08-04 Thread Raj Kumar
Hi Florin,
I tried vppcom_epoll_wait() on two different servers using a simple
application (with only one worker thread). But vppcom_epoll_wait() still
does not time out unless I use use-mq-eventfd.
Here are the server details -
server 1 - Red Hat 7.5, VPP release 20.01
server 2 - CentOS 8.1, VPP release 20.05

thanks,
-Raj


On Tue, Aug 4, 2020 at 10:24 AM Florin Coras  wrote:

> Hi Raj,
>
> Interesting. For some reason, the message queue’s underlying
> pthread_cond_timedwait does not work in your setup. Not sure what to make
> of that, unless maybe you’re trying to epoll wait from multiple threads
> that share the same message queue. That is not supported since each thread
> must have its own message queue, i.e., all threads that call epoll should
> be registered as workers. Alternatively, some form of locking or vls,
> instead of vcl, should be used.
>
> The downside to switching the message queue to eventfd notifications,
> instead of mutex/condvar, is that waits are inefficient, i.e., they act
> pretty much like spinlocks. Do keep that in mind.
>
> Regards,
> Florin
>
> On Aug 4, 2020, at 6:37 AM, Raj Kumar  wrote:
>
> Hi Florin,
> After adding use-mq-eventfd in VCL configuration, it is working as
> expected.
> Thanks! for your help.
>
> vcl {
>   rx-fifo-size 400
>   tx-fifo-size 400
>   app-scope-local
>   app-scope-global
>   use-mq-eventfd
>   api-socket-name /tmp/vpp-api.sock
> }
>
> thanks,
> -Raj
>
> On Tue, Aug 4, 2020 at 12:08 AM Florin Coras 
> wrote:
>
>> Hi Raj,
>>
>> Glad to hear that issue is solved. What vcl config are you running? Did
>> you configure use-mq-eventfd?
>>
>> Regards,
>> Florin
>>
>> On Aug 3, 2020, at 8:33 PM, Raj Kumar  wrote:
>>
>> Hi Florin,
>> This issue is resolved now. In my application, on receiving the kill
>> signal, the main thread was calling pthread_cancel() on the child thread,
>> and because of that the child thread was not exiting gracefully.
>> I have one question: it seems that vppcom_epoll_wait(epfd, rcvEvents,
>> MAX_RETURN_EVENTS, 6.0) does not return after the timeout when the timeout
>> value is non-zero. It times out only when the timeout value is 0.
>> The issue that I am facing is that if there is no traffic at all (the
>> receiver is just listening on the connections), then the worker thread is
>> not exiting, as it is blocked by vppcom_epoll_wait().
>>
>> Thanks,
>> -Raj
>>
>>
>>
>> On Wed, Jul 29, 2020 at 11:23 PM Florin Coras 
>> wrote:
>>
>>> Hi Raj,
>>>
>>> In that case it should work. Just from the trace lower it’s hard to
>>> figure out what exactly happened. Also, keep in mind that vcl is not thread
>>> safe, so make sure you’re not trying to share sessions or allow two workers
>>> to  interact with the message queue(s) at the same time.
>>>
>>> Regards,
>>> Florin
>>>
>>> On Jul 29, 2020, at 8:17 PM, Raj Kumar  wrote:
>>>
>>> Hi Florin,
>>> I am using kill  to stop the application. But , the application has
>>> a kill signal handler and after receiving the signal it is exiting
>>> gracefully.
>>> About vppcom_app_exit, I think this function is registered with atexit()
>>> inside vppcom_app_create(), so it should be called when the application exits.
>>> Even, I also tried this vppcom_app_exit() explicitly before exiting the
>>> application but still I am seeing the same issue.
>>>
>>> My application is a multithreaded application. Can you please suggest
>>> some cleanup functions ( vppcom functions) that  I should call before
>>> exiting a thread and the main application for a proper cleanup.
>>> I also tried vppcom_app_destroy() before exiting the main application
>>> but still I am seeing the same issue.
>>>
>>> thanks,
>>> -Raj
>>>
>>> On Wed, Jul 29, 2020 at 5:34 PM Florin Coras 
>>> wrote:
>>>
>>>> Hi Raj,
>>>>
>>>> Does stopping include a call to vppcom_app_exit or killing the
>>>> applications? If the latter, the apps might be killed with some
>>>> mutexes/spinlocks held. For now, we only support the former.
>>>>
>>>> Regards,
>>>> Florin
>>>>
>>>> > On Jul 29, 2020, at 1:49 PM, Raj Kumar 
>>>> wrote:
>>>> >
>>>> > Hi,
>>>> > In my UDP application , I am using VPP host stack to receive packets
>>>> and memIf to transmit packets. There are 

Re: [vpp-dev] VPP 2005 is crashing on stopping the VCL applications #vpp-hoststack

2020-08-04 Thread Raj Kumar
Hi Florin,
After adding use-mq-eventfd in VCL configuration, it is working as
expected.
Thanks for your help!

vcl {
  rx-fifo-size 400
  tx-fifo-size 400
  app-scope-local
  app-scope-global
  use-mq-eventfd
  api-socket-name /tmp/vpp-api.sock
}

thanks,
-Raj

On Tue, Aug 4, 2020 at 12:08 AM Florin Coras  wrote:

> Hi Raj,
>
> Glad to hear that issue is solved. What vcl config are you running? Did
> you configure use-mq-eventfd?
>
> Regards,
> Florin
>
> On Aug 3, 2020, at 8:33 PM, Raj Kumar  wrote:
>
> Hi Florin,
> This issue is resolved now. In my application, on receiving the kill
> signal, the main thread was calling pthread_cancel() on the child thread,
> and because of that the child thread was not exiting gracefully.
> I have one question: it seems that vppcom_epoll_wait(epfd, rcvEvents,
> MAX_RETURN_EVENTS, 6.0) does not return after the timeout when the timeout
> value is non-zero. It times out only when the timeout value is 0.
> The issue that I am facing is that if there is no traffic at all (the
> receiver is just listening on the connections), then the worker thread is
> not exiting, as it is blocked by vppcom_epoll_wait().
>
> Thanks,
> -Raj
>
>
>
> On Wed, Jul 29, 2020 at 11:23 PM Florin Coras 
> wrote:
>
>> Hi Raj,
>>
>> In that case it should work. Just from the trace lower it’s hard to
>> figure out what exactly happened. Also, keep in mind that vcl is not thread
>> safe, so make sure you’re not trying to share sessions or allow two workers
>> to  interact with the message queue(s) at the same time.
>>
>> Regards,
>> Florin
>>
>> On Jul 29, 2020, at 8:17 PM, Raj Kumar  wrote:
>>
>> Hi Florin,
>> I am using kill  to stop the application. But , the application has
>> a kill signal handler and after receiving the signal it is exiting
>> gracefully.
>> About vppcom_app_exit, I think this function is registered with atexit()
>> inside vppcom_app_create(), so it should be called when the application exits.
>> Even, I also tried this vppcom_app_exit() explicitly before exiting the
>> application but still I am seeing the same issue.
>>
>> My application is a multithreaded application. Can you please suggest
>> some cleanup functions ( vppcom functions) that  I should call before
>> exiting a thread and the main application for a proper cleanup.
>> I also tried vppcom_app_destroy() before exiting the main application but
>> still I am seeing the same issue.
>>
>> thanks,
>> -Raj
>>
>> On Wed, Jul 29, 2020 at 5:34 PM Florin Coras 
>> wrote:
>>
>>> Hi Raj,
>>>
>>> Does stopping include a call to vppcom_app_exit or killing the
>>> applications? If the latter, the apps might be killed with some
>>> mutexes/spinlocks held. For now, we only support the former.
>>>
>>> Regards,
>>> Florin
>>>
>>> > On Jul 29, 2020, at 1:49 PM, Raj Kumar  wrote:
>>> >
>>> > Hi,
>>> > In my UDP application , I am using VPP host stack to receive packets
>>> and memIf to transmit packets. There are a total 6 application connected to
>>> VPP.
>>> > if I stop the application(s) then VPP is crashing.  In vpp
>>> configuration , 4 worker threads are configured.  If there is no worker
>>> thread configured then I do not see this crash.
>>> > Here is the VPP task trace -
>>> >  (gdb) bt
>>> > #0  0x751818df in raise () from /lib64/libc.so.6
>>> > #1  0x7516bcf5 in abort () from /lib64/libc.so.6
>>> > #2  0xc123 in os_panic () at
>>> /usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vpp/vnet/main.c:366
>>> > #3  0x76b466bb in vlib_worker_thread_barrier_sync_int
>>> (vm=0x76d78200 , func_name=)
>>> > at
>>> /usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vlib/threads.c:1529
>>> > #4  0x77bc5ef0 in vl_msg_api_handler_with_vm_node 
>>> > (am=am@entry=0x77dd2ea0
>>> ,
>>> > vlib_rp=vlib_rp@entry=0x7fee7c001000, the_msg=0x7fee7c02bbd8,
>>> vm=vm@entry=0x76d78200 ,
>>> > node=node@entry=0x7fffb6295000, is_private=is_private@entry=1
>>> '\001')
>>> > at
>>> /usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vlibapi/api_shared.c:596
>>> > #5  0x77bb000f in void_mem_api_handle_msg_i (is_private=1
>>> '\001', node=0x7fffb6295000, vm=0x76d78200 ,
>>> > vlib_rp=0x7fee7c001000, am=0x77dd2ea0 )
>>> > at
>>> /usr/src/de

Re: [vpp-dev] VPP 2005 is crashing on stopping the VCL applications #vpp-hoststack

2020-08-03 Thread Raj Kumar
Hi Florin,
This issue is resolved now. In my application, on receiving the kill
signal, the main thread was calling pthread_cancel() on the child thread,
and because of that the child thread was not exiting gracefully.
I have one question: it seems that vppcom_epoll_wait(epfd, rcvEvents,
MAX_RETURN_EVENTS, 6.0) does not return after the timeout when the timeout
value is non-zero. It times out only when the timeout value is 0.
The issue that I am facing is that if there is no traffic at all (the
receiver is just listening on the connections), then the worker thread is
not exiting, as it is blocked by vppcom_epoll_wait().

Thanks,
-Raj
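Florin's reply below attributes the epoll timeout to the message queue's underlying pthread_cond_timedwait. The expected semantics of such a timed wait can be sketched with Python's analogous threading.Condition.wait(timeout) (an illustration only; the VCL internals are C and use the POSIX call directly): an unsignaled timed wait must wake up on its own when the timeout expires.

```python
import threading
import time

cond = threading.Condition()

# A timed wait that nobody notifies should return once the timeout
# expires -- the Python analogue of pthread_cond_timedwait returning
# ETIMEDOUT. wait() reports False to distinguish timeout from signal.
start = time.monotonic()
with cond:
    signaled = cond.wait(timeout=0.2)
elapsed = time.monotonic() - start

assert signaled is False   # woke up because of the timeout, not a notify
assert elapsed >= 0.15     # and it did not return early
```

The reported bug is exactly this behavior failing to materialize: the wait never wakes when no event arrives.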



On Wed, Jul 29, 2020 at 11:23 PM Florin Coras 
wrote:

> Hi Raj,
>
> In that case it should work. Just from the trace lower it’s hard to figure
> out what exactly happened. Also, keep in mind that vcl is not thread safe,
> so make sure you’re not trying to share sessions or allow two workers to
>  interact with the message queue(s) at the same time.
>
> Regards,
> Florin
>
> On Jul 29, 2020, at 8:17 PM, Raj Kumar  wrote:
>
> Hi Florin,
> I am using kill  to stop the application. But , the application has a
> kill signal handler and after receiving the signal it is exiting gracefully.
> About vppcom_app_exit, I think this function is registered with atexit()
> inside vppcom_app_create(), so it should be called when the application exits.
> Even, I also tried this vppcom_app_exit() explicitly before exiting the
> application but still I am seeing the same issue.
>
> My application is a multithreaded application. Can you please suggest some
> cleanup functions ( vppcom functions) that  I should call before exiting a
> thread and the main application for a proper cleanup.
> I also tried vppcom_app_destroy() before exiting the main application but
> still I am seeing the same issue.
>
> thanks,
> -Raj
>
> On Wed, Jul 29, 2020 at 5:34 PM Florin Coras 
> wrote:
>
>> Hi Raj,
>>
>> Does stopping include a call to vppcom_app_exit or killing the
>> applications? If the latter, the apps might be killed with some
>> mutexes/spinlocks held. For now, we only support the former.
>>
>> Regards,
>> Florin
>>
>> > On Jul 29, 2020, at 1:49 PM, Raj Kumar  wrote:
>> >
>> > Hi,
>> > In my UDP application , I am using VPP host stack to receive packets
>> and memIf to transmit packets. There are a total 6 application connected to
>> VPP.
>> > if I stop the application(s) then VPP is crashing.  In vpp
>> configuration , 4 worker threads are configured.  If there is no worker
>> thread configured then I do not see this crash.
>> > Here is the VPP task trace -
>> >  (gdb) bt
>> > #0  0x751818df in raise () from /lib64/libc.so.6
>> > #1  0x7516bcf5 in abort () from /lib64/libc.so.6
>> > #2  0xc123 in os_panic () at
>> /usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vpp/vnet/main.c:366
>> > #3  0x76b466bb in vlib_worker_thread_barrier_sync_int
>> (vm=0x76d78200 , func_name=)
>> > at
>> /usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vlib/threads.c:1529
>> > #4  0x77bc5ef0 in vl_msg_api_handler_with_vm_node 
>> > (am=am@entry=0x77dd2ea0
>> ,
>> > vlib_rp=vlib_rp@entry=0x7fee7c001000, the_msg=0x7fee7c02bbd8,
>> vm=vm@entry=0x76d78200 ,
>> > node=node@entry=0x7fffb6295000, is_private=is_private@entry=1
>> '\001')
>> > at
>> /usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vlibapi/api_shared.c:596
>> > #5  0x77bb000f in void_mem_api_handle_msg_i (is_private=1
>> '\001', node=0x7fffb6295000, vm=0x76d78200 ,
>> > vlib_rp=0x7fee7c001000, am=0x77dd2ea0 )
>> > at
>> /usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vlibmemory/memory_api.c:698
>> > #6  vl_mem_api_handle_msg_private (vm=vm@entry=0x76d78200
>> , node=node@entry=0x7fffb6295000, reg_index=> out>)
>> > at
>> /usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vlibmemory/memory_api.c:762
>> > #7  0x77bbe346 in vl_api_clnt_process (vm=,
>> node=0x7fffb6295000, f=)
>> > at
>> /usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vlibmemory/vlib_api.c:370
>> > #8  0x76b161d6 in vlib_process_bootstrap (_a=)
>> > at
>> /usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vlib/main.c:1502
>> > #9  0x7602ac0c in clib_calljmp () from
>> /lib64/libvppinfra.so.20.05
>> > #10 0x7fffb5e93dd0 in ?? ()
>> > #11 0x76b19821 in dispatch_process (vm=0x76d78200
>> , p=0x7fffb629

Re: [vpp-dev] VPP 2005 is crashing on stopping the VCL applications #vpp-hoststack

2020-07-29 Thread Raj Kumar
Hi Florin,
I am using kill to stop the application. But the application has a kill
signal handler, and after receiving the signal it exits gracefully.
About vppcom_app_exit: I think this function is registered with atexit()
inside vppcom_app_create(), so it should be called when the application exits.
I also tried calling vppcom_app_exit() explicitly before exiting the
application, but I still see the same issue.

My application is multithreaded. Can you please suggest some cleanup
functions (vppcom functions) that I should call before exiting a thread and
the main application, for a proper cleanup?
I also tried vppcom_app_destroy() before exiting the main application, but I
still see the same issue.

thanks,
-Raj
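As the later messages in this thread establish, the crash traced back to cancelling a worker thread while it was blocked inside the library. The usual alternative is cooperative shutdown: the worker loops on short timed waits and checks a stop flag, so it exits on its own path with no locks held. A minimal sketch of that pattern (in Python; the same idea applies to a C worker looping on a short vppcom_epoll_wait timeout instead of a pthread_cancel):

```python
import threading

stop = threading.Event()

def worker(results):
    # Loop on short timed waits instead of one indefinite block, so the
    # thread notices the stop flag and returns normally -- no forced
    # cancellation, which could kill it while holding library locks.
    while not stop.is_set():
        stop.wait(timeout=0.05)   # stand-in for a short epoll/mq wait
    results.append("clean exit")

results = []
t = threading.Thread(target=worker, args=(results,))
t.start()

stop.set()          # e.g. set from the kill-signal handler
t.join(timeout=2.0)

assert not t.is_alive()
assert results == ["clean exit"]
```

With this shape, per-thread teardown and the final application cleanup run on the thread's own way out rather than being skipped by cancellation.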

On Wed, Jul 29, 2020 at 5:34 PM Florin Coras  wrote:

> Hi Raj,
>
> Does stopping include a call to vppcom_app_exit or killing the
> applications? If the latter, the apps might be killed with some
> mutexes/spinlocks held. For now, we only support the former.
>
> Regards,
> Florin
>
> > On Jul 29, 2020, at 1:49 PM, Raj Kumar  wrote:
> >
> > Hi,
> > In my UDP application , I am using VPP host stack to receive packets and
> memIf to transmit packets. There are a total 6 application connected to
> VPP.
> > if I stop the application(s) then VPP is crashing.  In vpp configuration
> , 4 worker threads are configured.  If there is no worker thread configured
> then I do not see this crash.
> > Here is the VPP task trace -
> >  (gdb) bt
> > #0  0x751818df in raise () from /lib64/libc.so.6
> > #1  0x7516bcf5 in abort () from /lib64/libc.so.6
> > #2  0xc123 in os_panic () at
> /usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vpp/vnet/main.c:366
> > #3  0x76b466bb in vlib_worker_thread_barrier_sync_int
> (vm=0x76d78200 , func_name=)
> > at
> /usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vlib/threads.c:1529
> > #4  0x77bc5ef0 in vl_msg_api_handler_with_vm_node 
> > (am=am@entry=0x77dd2ea0
> ,
> > vlib_rp=vlib_rp@entry=0x7fee7c001000, the_msg=0x7fee7c02bbd8,
> vm=vm@entry=0x76d78200 ,
> > node=node@entry=0x7fffb6295000, is_private=is_private@entry=1
> '\001')
> > at
> /usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vlibapi/api_shared.c:596
> > #5  0x77bb000f in void_mem_api_handle_msg_i (is_private=1
> '\001', node=0x7fffb6295000, vm=0x76d78200 ,
> > vlib_rp=0x7fee7c001000, am=0x77dd2ea0 )
> > at
> /usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vlibmemory/memory_api.c:698
> > #6  vl_mem_api_handle_msg_private (vm=vm@entry=0x76d78200
> , node=node@entry=0x7fffb6295000, reg_index= out>)
> > at
> /usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vlibmemory/memory_api.c:762
> > #7  0x77bbe346 in vl_api_clnt_process (vm=,
> node=0x7fffb6295000, f=)
> > at
> /usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vlibmemory/vlib_api.c:370
> > #8  0x76b161d6 in vlib_process_bootstrap (_a=)
> > at
> /usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vlib/main.c:1502
> > #9  0x7602ac0c in clib_calljmp () from
> /lib64/libvppinfra.so.20.05
> > #10 0x7fffb5e93dd0 in ?? ()
> > #11 0x76b19821 in dispatch_process (vm=0x76d78200
> , p=0x7fffb6295000, last_time_stamp=15931923011231136,
> > f=0x0) at
> /usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vppinfra/types.h:133
> > #12 0x7f0f66009024 in ?? ()
> >
> >
> > Thanks,
> > -Raj
> >
>
> 
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#17113): https://lists.fd.io/g/vpp-dev/message/17113
Mute This Topic: https://lists.fd.io/mt/75873900/21656
Mute #vpp-hoststack: 
https://lists.fd.io/g/fdio+vpp-dev/mutehashtag/vpp-hoststack
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] VPP 2005 is crashing on stopping the VCL applications #vpp-hoststack

2020-07-29 Thread Raj Kumar
Hi,
In my UDP application, I am using the VPP host stack to receive packets and memif
to transmit packets. There are a total of 6 applications connected to VPP.
If I stop the application(s), VPP crashes. In the vpp configuration, 4
worker threads are configured. If no worker thread is configured, I
do not see this crash.
Here is the VPP task trace -
(gdb) bt
#0  0x751818df in raise () from /lib64/libc.so.6
#1  0x7516bcf5 in abort () from /lib64/libc.so.6
#2  0xc123 in os_panic () at 
/usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vpp/vnet/main.c:366
#3  0x76b466bb in vlib_worker_thread_barrier_sync_int 
(vm=0x76d78200 , func_name=)
at /usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vlib/threads.c:1529
#4  0x77bc5ef0 in vl_msg_api_handler_with_vm_node 
(am=am@entry=0x77dd2ea0 ,
vlib_rp=vlib_rp@entry=0x7fee7c001000, the_msg=0x7fee7c02bbd8, 
vm=vm@entry=0x76d78200 ,
node=node@entry=0x7fffb6295000, is_private=is_private@entry=1 '\001')
at 
/usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vlibapi/api_shared.c:596
#5  0x77bb000f in void_mem_api_handle_msg_i (is_private=1 '\001', 
node=0x7fffb6295000, vm=0x76d78200 ,
vlib_rp=0x7fee7c001000, am=0x77dd2ea0 )
at 
/usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vlibmemory/memory_api.c:698
#6  vl_mem_api_handle_msg_private (vm=vm@entry=0x76d78200 
, node=node@entry=0x7fffb6295000, reg_index=)
at 
/usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vlibmemory/memory_api.c:762
#7  0x77bbe346 in vl_api_clnt_process (vm=, 
node=0x7fffb6295000, f=)
at 
/usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vlibmemory/vlib_api.c:370
#8  0x76b161d6 in vlib_process_bootstrap (_a=)
at /usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vlib/main.c:1502
#9  0x7602ac0c in clib_calljmp () from /lib64/libvppinfra.so.20.05
#10 0x7fffb5e93dd0 in ?? ()
#11 0x76b19821 in dispatch_process (vm=0x76d78200 
, p=0x7fffb6295000, last_time_stamp=15931923011231136,
f=0x0) at 
/usr/src/debug/vpp-20.05-9~g0bf9c294c_dirty.x86_64/src/vppinfra/types.h:133
#12 0x7f0f66009024 in ?? ()

Thanks,
-Raj
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#17110): https://lists.fd.io/g/vpp-dev/message/17110
Mute This Topic: https://lists.fd.io/mt/75873900/21656
Mute #vpp-hoststack: 
https://lists.fd.io/g/fdio+vpp-dev/mutehashtag/vpp-hoststack
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] How to active tx udp checksum offload #dpdk #mellanox

2020-07-01 Thread Raj Kumar
Hi,
I am using the vpp stable/2005 code. I want to enable UDP checksum offload for tx.
I changed the vpp startup.conf file -

## Disables UDP / TCP TX checksum offload. Typically needed for use
## faster vector PMDs (together with no-multi-seg)
# no-tx-checksum-offload

## Enable UDP / TCP TX checksum offload
## This is the reversed option of 'no-tx-checksum-offload'
*enable-tcp-udp-checksum*

But it is still not activated, even though all of these tx offloads are listed as available.
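For context, both of those options are parameters of the dpdk section in startup.conf; a minimal sketch of where the option is expected to sit (the PCI address is taken from the interface output below and may differ on your system; as described in this thread, the option still did not take effect for the Mellanox PMD in 20.05):

```
dpdk {
  ## request TCP/UDP tx checksum offload from the PMD
  enable-tcp-udp-checksum
  dev 0000:12:00.0
}
```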

With VPP 19.05, I was able to activate it by adding the following piece of
code in ./plugins/dpdk/device/init.c

case VNET_DPDK_PMD_CXGBE:
case VNET_DPDK_PMD_MLX4:
case VNET_DPDK_PMD_MLX5:
case VNET_DPDK_PMD_QEDE:
case VNET_DPDK_PMD_BNXT:
  xd->port_type = port_type_from_speed_capa (&dev_info);

  if (dm->conf->no_tx_checksum_offload == 0)
    {
      xd->port_conf.txmode.offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
      xd->port_conf.txmode.offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
      xd->flags |= DPDK_DEVICE_FLAG_TX_OFFLOAD |
                   DPDK_DEVICE_FLAG_INTEL_PHDR_CKSUM;
    }
  break;

But, the above code does not work with vpp 20.05. I think , there is a newer 
version of DPDK in this release.

Please let me know if I am doing something wrong.
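As background on what the udp-cksum tx offload actually computes: it is the standard RFC 1071 Internet (ones'-complement) checksum over the pseudo-header, UDP header, and payload. A minimal C sketch of the sum itself (illustrative only; unrelated to the VPP/DPDK code paths above):

```c
/* RFC 1071 Internet checksum -- the sum a NIC computes when the
 * udp-cksum tx offload is active (illustrative standalone version). */
#include <stddef.h>
#include <stdint.h>

uint16_t
internet_checksum (const uint8_t *data, size_t len)
{
  uint32_t total = 0;
  size_t i;

  for (i = 0; i + 1 < len; i += 2)
    total += ((uint32_t) data[i] << 8) | data[i + 1];
  if (len & 1)                      /* odd length: pad with a zero byte */
    total += (uint32_t) data[len - 1] << 8;

  while (total >> 16)               /* fold carries back into 16 bits */
    total = (total & 0xFFFF) + (total >> 16);
  return (uint16_t) ~total;
}
```

With a partial offload scheme the stack typically pre-fills only the pseudo-header part of this sum and lets the NIC finish it; that appears to be what the DPDK_DEVICE_FLAG_INTEL_PHDR_CKSUM flag in the snippet above refers to.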

HundredGigabitEthernet12/0/0       2     up   HundredGigabitEthernet12/0/0
Link speed: 100 Gbps
Ethernet address b8:83:03:9e:68:f0
Mellanox ConnectX-4 Family
carrier up full duplex mtu 9206
flags: admin-up pmd maybe-multiseg subif rx-ip4-cksum
rx: queues 2 (max 1024), desc 1024 (min 0 max 65535 align 1)
tx: queues 4 (max 1024), desc 1024 (min 0 max 65535 align 1)
pci: device 15b3:1013 subsystem 1590:00c8 address :12:00.00 numa 0
switch info: name :12:00.0 domain id 0 port id 65535
max rx packet len: 65536
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum vlan-filter
jumbo-frame scatter timestamp keep-crc rss-hash
rx offload active: ipv4-cksum udp-cksum tcp-cksum jumbo-frame scatter
*tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso*
*outer-ipv4-cksum vxlan-tnl-tso gre-tnl-tso geneve-tnl-tso*
*multi-segs udp-tnl-tso ip-tnl-tso*
*tx offload active: multi-segs*
rss avail:         ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 ipv6-tcp-ex
ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
ipv6-ex ipv6 l4-dst-only l4-src-only l3-dst-only l3-src-only
rss active:        ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 ipv6-tcp-ex
ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
ipv6-ex ipv6
tx burst mode: No MPW + MULTI + TSO + INLINE + METADATA
rx burst mode: Scalar

tx frames ok                                    14311830
tx bytes ok                                 128602546562
rx frames ok                                        1877
rx bytes ok                                       228452

thanks,
-Raj
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16863): https://lists.fd.io/g/vpp-dev/message/16863
Mute This Topic: https://lists.fd.io/mt/75246142/21656
Mute #dpdk: https://lists.fd.io/g/fdio+vpp-dev/mutehashtag/dpdk
Mute #mellanox: https://lists.fd.io/g/fdio+vpp-dev/mutehashtag/mellanox
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Segmentation fault in VPP 20.05 release when using VCL VPPCOM_PROTO_UDPC #vpp-hoststack

2020-06-05 Thread Raj Kumar
Hi Florin,
VPP from the "stable/2005" branch is working fine. Thanks for your help!

I have one question: for UDP tx, if the destination address is not in
the same subnet, it only works after adding a route for the next hop.
In my case , VPP interface ip is 2001:5b0::701:b883:31f:29e:68f0 and I
am sending UDP traffic to address 2001:5b0::700:ba83:3ff:fe9e:6848.
It works fine after adding  the following route in VPP fib
ip route add 2001:5b0::700:ba83:3ff:fe9e:6848/128 via
2001:5b0::701::254 HundredGigabitEthernet12/0/0.701

My requirement is to use SRv6 for packet routing. I tried adding the
following SR policy (BSID) in vpp:
sr policy add bsid 2001:5b0::700:ba83:3ff:fe9e:6848 next
2001:5b0::701::254 insert
But with this configuration, the UDP tx application is not able to connect.

udpTxThread started!!!  ... tx port = 9988
vppcom_session_create() ..16777216
vppcom_session_connect:1741: vcl<47606:1>: session handle 16777216
(STATE_CLOSED): connecting to peer IPv6
2001:5b0::700:ba83:3ff:fe9e:6848 port 9988 proto UDP
vcl_session_connected_handler:450: vcl<47606:1>: ERROR: session index 0:
connect failed! no resolving interface
vppcom_session_connect:1756: vcl<47606:1>: session 0 [0x0]: connect failed!
vppcom_session_connect() failed ... -111

From the traces, it looks like the VCL session is not looking into the
SID entries. Please let me know if I am doing something wrong.
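One thing worth checking (a guess, not a confirmed fix): an SR policy by itself only installs the BSID; traffic still has to be steered into it. In the vpp CLI that is done with sr steer; a sketch reusing the addresses from this thread (exact syntax can vary between releases):

```
sr steer l3 2001:5b0::700:ba83:3ff:fe9e:6848/128 via bsid 2001:5b0::700:ba83:3ff:fe9e:6848
```

Note that this steers forwarded IPv6 traffic; whether host-stack (VCL) originated packets hit the steering policy is exactly the open question here.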

thanks,
-Raj

On Thu, Jun 4, 2020 at 4:49 PM Florin Coras  wrote:

> Hi Raj,
>
> You have it here [4]. I’ll merge it once it verifies.
>
> Regards,
> Florin
>
> [4] https://gerrit.fd.io/r/c/vpp/+/27432
>
> On Jun 4, 2020, at 1:41 PM, Raj Kumar  wrote:
>
> Hi Florin,
>
> To pick up all the following patches I downloaded  VPP code from Master
> branch.
>
> [1] https://gerrit.fd.io/r/c/vpp/+/27111
> [2] https://gerrit.fd.io/r/c/vpp/+/27106
>  [3] https://gerrit.fd.io/r/c/vpp/+/27235
>
> But , with this build I observed that UDP listener is always migrating the
> received connection on the first worker thread . With stable/2005 ( +
> patches) code, VPP was migrating the connections on different worker
> threads. I am using UDP socket with CONNECTED attribute.
>
>
> I found that on stable/2005 you already merged [2] and [3]. Please let me
> know if you are planning to merge [1] on the stable/2005 branch. For now, I
> want to stay with stable/2005.
>
> thanks,
> -Raj
>
> On Sun, May 31, 2020 at 11:32 PM Florin Coras 
> wrote:
>
>> Hi Raj,
>>
>> Inline.
>>
>> On May 31, 2020, at 8:10 PM, Raj Kumar  wrote:
>>
>> Hi Florin,
>> The UDPC connections are working fine. I was making a basic mistake: for
>> the second application, I forgot to export the LD_LIBRARY_PATH to the
>> build directory, so that application was linking to the old library
>> (from /usr/lib64).
>> Now, I tried both UDP tx and rx connection ( UDP connected) . Both are
>> working fine.
>>
>>
>> FC: Great!
>>
>>
>> I have a question; all the connections originating from VPP host stack
>> (UDP tx)  are always going on the worker thread 1. Is there any way to
>> assign these connections to the different worker threads ( similar to the
>> UDP rx) ? I can not rely on the receiver ( to initiate the connection) as
>> that is a third party application.
>>
>>
>> FC: At this time no.
>>
>>
>> Earlier with the previous VPP release , I tried by assigning UDP tx
>> connections to the different worker threads in a round robin manner . I am
>> wondering if we can try some thing similar in the new release.
>>
>>
>> It might just work. Let me know if you try it out.
>>
>>
>> Also, please let me know in which VPP release, the following patches
>> would be available.
>>
>>
>> FC: The first two are part of 20.05, the third will be ported to the
>> first 20.05 point release.
>>
>> Regards,
>> Florin
>>
>>
>> [1] https://gerrit.fd.io/r/c/vpp/+/27111
>> [2] https://gerrit.fd.io/r/c/vpp/+/27106
>>  [3] https://gerrit.fd.io/r/c/vpp/+/27235
>>
>>
>> Here are the VPP stats -
>>
>> vpp# sh app server
>> Connection  App  Wrk
>> [0:0][U] 2001:5b0::701:b883:31f:29e:udp6_rx[shm]  1
>> [0:1][U] 2001:5b0::701:b883:31f:29e:udp6_rx[shm]  1
>> vpp#
>> vpp# sh app client
>> Connection  App
>> [1:0][U] 2001:5b0::701:b883:31f:29e:udp6_tx[shm]
>> [1:1][U] 2001:5b0::701:b883:31f:29e:udp6_tx[shm]
>> vpp#
>> vpp# sh session verbose

Re: [vpp-dev] Segmentation fault in VPP 20.05 release when using VCL VPPCOM_PROTO_UDPC #vpp-hoststack

2020-06-04 Thread Raj Kumar
Hi Florin,

To pick up all the following patches I downloaded  VPP code from Master
branch.

[1] https://gerrit.fd.io/r/c/vpp/+/27111
[2] https://gerrit.fd.io/r/c/vpp/+/27106
 [3] https://gerrit.fd.io/r/c/vpp/+/27235

But, with this build I observed that the UDP listener always migrates the
received connections to the first worker thread. With the stable/2005 (+
patches) code, VPP was migrating the connections to different worker
threads. I am using a UDP socket with the CONNECTED attribute.
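For context, the connected-UDP listener setup described here might be sketched as below. vppcom_session_create/_attr/_bind/_listen are real VCL calls; VPPCOM_ATTR_SET_CONNECTED is assumed to be the attribute introduced by patch [1] (check vcl/vppcom.h on your branch), and error handling is abbreviated:

```c
/* Sketch: create a UDP listener marked as "connected" so vpp migrates
 * accepted flows to workers. VPPCOM_ATTR_SET_CONNECTED is assumed from
 * patch [1]; verify the name in vcl/vppcom.h for your tree. */
#include <vcl/vppcom.h>

int
open_connected_udp_listener (vppcom_endpt_t *ep)
{
  int rv, sh;

  sh = vppcom_session_create (VPPCOM_PROTO_UDP, 1 /* non-blocking */);
  if (sh < 0)
    return sh;

  /* mark the listener as connected before binding */
  vppcom_session_attr (sh, VPPCOM_ATTR_SET_CONNECTED, 0, 0);

  if ((rv = vppcom_session_bind (sh, ep)) != 0)
    return rv;
  if ((rv = vppcom_session_listen (sh, 10 /* ignored for udp */)) != 0)
    return rv;
  return sh;
}
```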


I found that on stable/2005 you already merged [2] and [3]. Please let me
know if you are planning to merge [1] on the stable/2005 branch. For now, I
want to stay with stable/2005.

thanks,
-Raj

On Sun, May 31, 2020 at 11:32 PM Florin Coras 
wrote:

> Hi Raj,
>
> Inline.
>
> On May 31, 2020, at 8:10 PM, Raj Kumar  wrote:
>
> Hi Florin,
> The UDPC connections are working fine. I was making a basic mistake: for
> the second application, I forgot to export the LD_LIBRARY_PATH to the
> build directory, so that application was linking to the old library
> (from /usr/lib64).
> Now, I tried both UDP tx and rx connection ( UDP connected) . Both are
> working fine.
>
>
> FC: Great!
>
>
> I have a question; all the connections originating from VPP host stack
> (UDP tx)  are always going on the worker thread 1. Is there any way to
> assign these connections to the different worker threads ( similar to the
> UDP rx) ? I can not rely on the receiver ( to initiate the connection) as
> that is a third party application.
>
>
> FC: At this time no.
>
>
> Earlier with the previous VPP release , I tried by assigning UDP tx
> connections to the different worker threads in a round robin manner . I am
> wondering if we can try some thing similar in the new release.
>
>
> It might just work. Let me know if you try it out.
>
>
> Also, please let me know in which VPP release, the following patches would
> be available.
>
>
> FC: The first two are part of 20.05, the third will be ported to the first
> 20.05 point release.
>
> Regards,
> Florin
>
>
> [1] https://gerrit.fd.io/r/c/vpp/+/27111
> [2] https://gerrit.fd.io/r/c/vpp/+/27106
>  [3] https://gerrit.fd.io/r/c/vpp/+/27235
>
>
> Here are the VPP stats -
>
> vpp# sh app server
> Connection  App  Wrk
> [0:0][U] 2001:5b0::701:b883:31f:29e:udp6_rx[shm]  1
> [0:1][U] 2001:5b0::701:b883:31f:29e:udp6_rx[shm]  1
> vpp#
> vpp# sh app client
> Connection  App
> [1:0][U] 2001:5b0::701:b883:31f:29e:udp6_tx[shm]
> [1:1][U] 2001:5b0::701:b883:31f:29e:udp6_tx[shm]
> vpp#
> vpp# sh session verbose
> ConnectionState  Rx-f
>  Tx-f
> [0:0][U] 2001:5b0::701:b883:31f:29e:9880:12345LISTEN 0
> 0
> [0:1][U] 2001:5b0::701:b883:31f:29e:9881:56789LISTEN 0
> 0
> Thread 0: active sessions 2
>
> ConnectionState  Rx-f
>  Tx-f
> [1:0][U] 2001:5b0::701:b883:31f:29e:9880:58199OPENED 0
> 3999756
> [1:1][U] 2001:5b0::701:b883:31f:29e:9880:10442OPENED 0
> 3999756
> Thread 1: active sessions 2
>
> ConnectionState  Rx-f
>  Tx-f
> [2:0][U] 2001:5b0::701:b883:31f:29e:9881:56789OPENED 0
> 0
> Thread 2: active sessions 1
> Thread 3: no sessions
>
> ConnectionState          Rx-f
>  Tx-f
> [4:0][U] 2001:5b0::701:b883:31f:29e:9880:12345OPENED 0
> 0
> Thread 4: active sessions 1
>
> Thanks,
> -Raj
>
>
>
> On Sun, May 31, 2020 at 7:35 PM Florin Coras 
> wrote:
>
>> Hi Raj,
>>
>> Inline.
>>
>> On May 31, 2020, at 4:07 PM, Raj Kumar  wrote:
>>
>> Hi Florin,
>> I was trying this test with debug binaries, but as soon as I enable the
>> interface , vpp is crashing.
>>
>>
>> FC: It looks like somehow corrupted buffers make their way into the error
>> drop node. What traffic are you running through vpp and is this master or
>> are you running some custom code?
>>
>>
>> On the original problem ( multiple listener); if I open multiple sockets
>> from the same multi threaded application then it works fine. But, if I
>> start another application then only I see the VPP crash( which I mentioned
>> in my previous email).
>>
>>
>> FC: Is the second app trying to listen on the same ip:port pair? That is
>> not supported and the second listen request should’ve been rejected. Do you
>> have a bt?
>>
>> Regar

Re: [vpp-dev] Segmentation fault in VPP 20.05 release when using VCL VPPCOM_PROTO_UDPC #vpp-hoststack

2020-05-31 Thread Raj Kumar
Hi Florin,
I was trying this test with debug binaries, but as soon as I enable the
interface, vpp crashes.

On the original problem (multiple listeners): if I open multiple sockets
from the same multi-threaded application, it works fine. Only when I
start another application do I see the VPP crash (which I mentioned
in my previous email).

Here is the stack trace, with debug binaries, of the crash on startup.
Please let me know what I am doing wrong here.


#0  0x74b748df in raise () from /lib64/libc.so.6
#1  0x74b5ecf5 in abort () from /lib64/libc.so.6
#2  0x00407a28 in os_panic () at /opt/vpp/src/vpp/vnet/main.c:366
#3  0x75a279af in debugger () at /opt/vpp/src/vppinfra/error.c:84
#4  0x75a27d92 in _clib_error (how_to_die=2,
function_name=0x7663d540 <__FUNCTION__.35997>
"vlib_buffer_validate_alloc_free",
line_number=367, fmt=0x7663d0ba "%s %U buffer 0x%x") at
/opt/vpp/src/vppinfra/error.c:143
#5  0x765752a3 in vlib_buffer_validate_alloc_free
(vm=0x76866340 , buffers=0x7fffb586d630, n_buffers=1,
expected_state=VLIB_BUFFER_KNOWN_ALLOCATED) at
/opt/vpp/src/vlib/buffer.c:366
#6  0x765675f0 in vlib_buffer_pool_put (vm=0x76866340
, buffer_pool_index=0 '\000', buffers=0x7fffb586d630,
n_buffers=1) at /opt/vpp/src/vlib/buffer_funcs.h:754
#7  0x76567dde in vlib_buffer_free_inline (vm=0x76866340
, buffers=0x7fffb67e9b14, n_buffers=0, maybe_next=1)
at /opt/vpp/src/vlib/buffer_funcs.h:924
#8  0x76567e2e in vlib_buffer_free (vm=0x76866340
, buffers=0x7fffb67e9b10, n_buffers=1)
at /opt/vpp/src/vlib/buffer_funcs.h:943
#9  0x76568cb0 in process_drop_punt (vm=0x76866340
, node=0x7fffb5475e00, frame=0x7fffb67e9b00,
disposition=ERROR_DISPOSITION_DROP) at /opt/vpp/src/vlib/drop.c:231
#10 0x76568db0 in error_drop_node_fn_hsw (vm=0x76866340
, node=0x7fffb5475e00, frame=0x7fffb67e9b00)
at /opt/vpp/src/vlib/drop.c:247
#11 0x765c3447 in dispatch_node (vm=0x76866340
, node=0x7fffb5475e00, type=VLIB_NODE_TYPE_INTERNAL,
dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7fffb67e9b00,
last_time_stamp=15480427682338516) at /opt/vpp/src/vlib/main.c:1235
#12 0x765c3c02 in dispatch_pending_node (vm=0x76866340
, pending_frame_index=2,
last_time_stamp=15480427682338516) at /opt/vpp/src/vlib/main.c:1403
#13 0x765c58c9 in vlib_main_or_worker_loop (vm=0x76866340
, is_main=1) at /opt/vpp/src/vlib/main.c:1862
#14 0x765c6320 in vlib_main_loop (vm=0x76866340
) at /opt/vpp/src/vlib/main.c:1990
#15 0x765c70e0 in vlib_main (vm=0x76866340 ,
input=0x7fffb586efb0) at /opt/vpp/src/vlib/main.c:2236
#16 0x7662f311 in thread0 (arg=140737329390400) at
/opt/vpp/src/vlib/unix/main.c:658
#17 0x75a465cc in clib_calljmp () at
/opt/vpp/src/vppinfra/longjmp.S:123
#18 0x7fffd0a0 in ?? ()
#19 0x7662f8a7 in vlib_unix_main (argc=50, argv=0x705f00) at
/opt/vpp/src/vlib/unix/main.c:730
#20 0x00407387 in main (argc=50, argv=0x705f00) at
/opt/vpp/src/vpp/vnet/main.c:291

thanks,
-Raj

On Mon, May 25, 2020 at 5:02 PM Florin Coras  wrote:

> Hi Raj,
>
> Okay, so at least with that we have support for bounded listeners (note
> that [2] was merged but to set the connected option you now have to
> use vppcom_session_attr).
>
> As for the trace, something seems off. Why exactly does it crash? It looks
> as if session_get_transport_proto (ls) crashes because of ls being null,
> but prior to that ls is dereferenced and it does’t crash. Could you try
> with debug binaries?
>
> Regards,
> Florin
>
> On May 25, 2020, at 1:43 PM, Raj Kumar  wrote:
>
> Hi Florin,
> This works fine with a single UDP listener. I can see connections going to
> different cores. But if I run more than one listener, VPP crashes.
> Here are the VPP stack traces -
>
> (gdb) bt
> #0  0x in ?? ()
> #1  0x77761239 in session_listen (ls=, 
> sep=sep@entry=0x7fffb557fd50)
> at /opt/vpp/src/vnet/session/session_types.h:247
> #2  0x77788b3f in app_listener_alloc_and_init 
> (app=app@entry=0x7fffb76f7d98,
> sep=sep@entry=0x7fffb557fd50,
> listener=listener@entry=0x7fffb557fd28) at
> /opt/vpp/src/vnet/session/application.c:196
> #3  0x77788ed8 in vnet_listen (a=a@entry=0x7fffb557fd50) at
> /opt/vpp/src/vnet/session/application.c:1005
> #4  0x77779e08 in session_mq_listen_handler (data=0x1300787e9) at
> /opt/vpp/src/vnet/session/session_node.c:65
> #5  session_mq_listen_handler (data=data@entry=0x1300787e9) at
> /opt/vpp/src/vnet/session/session_node.c:36
> #6  0x77bbcdb9 in vl_api_rpc_call_t_handler (mp=0x1300787d0) at
> /opt/vpp/src/vlibmemory/vlib_api.c:520
> #7  0x77bc5ead in vl_msg_api_handler_with_vm_node 
> (am=am@entry=0x7ff

Re: [vpp-dev] Segmentation fault in VPP 20.05 release when using VCL VPPCOM_PROTO_UDPC #vpp-hoststack

2020-05-25 Thread Raj Kumar
Hi Florin,
This works fine with a single UDP listener. I can see connections going to
different cores. But if I run more than one listener, VPP crashes.
Here are the VPP stack traces -

(gdb) bt
#0  0x in ?? ()
#1  0x77761239 in session_listen (ls=,
sep=sep@entry=0x7fffb557fd50)
at /opt/vpp/src/vnet/session/session_types.h:247
#2  0x77788b3f in app_listener_alloc_and_init
(app=app@entry=0x7fffb76f7d98,
sep=sep@entry=0x7fffb557fd50,
listener=listener@entry=0x7fffb557fd28) at
/opt/vpp/src/vnet/session/application.c:196
#3  0x77788ed8 in vnet_listen (a=a@entry=0x7fffb557fd50) at
/opt/vpp/src/vnet/session/application.c:1005
#4  0x77779e08 in session_mq_listen_handler (data=0x1300787e9) at
/opt/vpp/src/vnet/session/session_node.c:65
#5  session_mq_listen_handler (data=data@entry=0x1300787e9) at
/opt/vpp/src/vnet/session/session_node.c:36
#6  0x77bbcdb9 in vl_api_rpc_call_t_handler (mp=0x1300787d0) at
/opt/vpp/src/vlibmemory/vlib_api.c:520
#7  0x77bc5ead in vl_msg_api_handler_with_vm_node
(am=am@entry=0x77dd2ea0
, vlib_rp=,
the_msg=0x1300787d0, vm=vm@entry=0x76d7c200 ,
node=node@entry=0x7fffb553f000, is_private=is_private@entry=0 '\000')
at /opt/vpp/src/vlibapi/api_shared.c:609
#8  0x77bafee6 in vl_mem_api_handle_rpc (vm=vm@entry=0x76d7c200
, node=node@entry=0x7fffb553f000)
at /opt/vpp/src/vlibmemory/memory_api.c:748
#9  0x77bbd5b3 in vl_api_clnt_process (vm=,
node=0x7fffb553f000, f=)
at /opt/vpp/src/vlibmemory/vlib_api.c:326
#10 0x76b1b116 in vlib_process_bootstrap (_a=) at
/opt/vpp/src/vlib/main.c:1502
#11 0x7602fbfc in clib_calljmp () from
/opt/vpp/build-root/build-vpp-native/vpp/lib/libvppinfra.so.20.05
#12 0x7fffb5fa2dd0 in ?? ()
#13 0x76b1e751 in vlib_process_startup (f=0x0, p=0x7fffb553f000,
vm=0x76d7c200 )
at /opt/vpp/src/vppinfra/types.h:133
#14 dispatch_process (vm=0x76d7c200 ,
p=0x7fffb553f000, last_time_stamp=14118080390223872, f=0x0)
at /opt/vpp/src/vlib/main.c:1569
#15 0x004b84e0 in ?? ()
#16 0x in ?? ()

thanks,
-Raj

On Mon, May 25, 2020 at 2:17 PM Florin Coras  wrote:

> Hi Raj,
>
> Ow, now you’ve hit the untested part of [2]. Could you try this [3]?
>
> Regards,
> Florin
>
> [3] https://gerrit.fd.io/r/c/vpp/+/27235
>
> On May 25, 2020, at 10:44 AM, Raj Kumar  wrote:
>
> Hi Florin,
>
> I tried the patches[1] & [2] , but still VCL application is crashing.
> However, session is created in VPP.
>
> vpp# sh session verbose 2
> [0:0][U] 2001:5b0::700:b883:31f:29e:9880:9978-LISTEN
>  index 0 flags: CONNECTED, OWNS_PORT, LISTEN
> Thread 0: active sessions 1
> Thread 1: no sessions
> Thread 2: no sessions
> Thread 3: no sessions
> Thread 4: no sessions
> vpp# sh app server
> Connection  App  Wrk
> [0:0][U] 2001:5b0::700:b883:31f:29e:udp6_rx[shm]  1
> vpp#
>
> Here are the VCL application traces. Attached is the updated vppcom.c file.
>
> [root@J3SGISNCCRO01 vcl_test]# VCL_DEBUG=2 gdb
> udp6_server_vcl_threaded_udpc
>GNU gdb (GDB) Red Hat Enterprise Linux 8.2-6.el8
> Copyright (C) 2018 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <
> http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.
> Type "show copying" and "show warranty" for details.
> This GDB was configured as "x86_64-redhat-linux-gnu".
> Type "show configuration" for configuration details.
> For bug reporting instructions, please see:
> <http://www.gnu.org/software/gdb/bugs/>.
> Find the GDB manual and other documentation resources online at:
> <http://www.gnu.org/software/gdb/documentation/>.
>
> For help, type "help".
> Type "apropos word" to search for commands related to "word"...
> Reading symbols from udp6_server_vcl_threaded_udpc...(no debugging symbols
> found)...done.
> (gdb) r 2001:5b0::700:b883:31f:29e:9880 9978
> Starting program: /home/super/vcl_test/udp6_server_vcl_threaded_udpc
> 2001:5b0::700:b883:31f:29e:9880 9978
> Missing separate debuginfos, use: yum debuginfo-install
> glibc-2.28-72.el8_1.1.x86_64
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
>
>  server addr - 2001:5b0::700:b883:31f:29e:9880
>  server port  9978,
> total port = 1
> VCL<64516>: configured VCL debug level (2) from VCL_DEBUG!
> VCL<64516>: allocated VCL heap = 0x7fffe6988010, size 268435456
> (0x1000)
> VCL<64516>: configured rx_fifo_s

Re: [vpp-dev] Segmentation fault in VPP 20.05 release when using VCL VPPCOM_PROTO_UDPC #vpp-hoststack

2020-05-19 Thread Raj Kumar
Hi Florin,
I am facing a weird problem. After making the VPP code changes, I
recompiled/re-installed VPP by using the following commands-
make rebuild-release
make pkg-rpm
rpm -ivh /opt/vpp/build-root/*.rpm

But it looks like VPP is still using the old code. I also stopped the VPP
service before compiling and installing the new code,
and recompiled the application using the new vppcom library.
But the line number in the following trace indicates that VPP is using the old
code:
vppcom_session_create:*1279*: vcl<28267:1>: created session 1

Maybe because of this issue, VPP is still crashing with UDPC.

Please let me know if there is any other way to compile VPP with the local
code changes.
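In case it helps anyone hitting the same stale-library symptom: rpm -ivh does not replace packages that are already installed, so the rebuilt libraries may never reach /usr/lib64. A sketch of a sequence that forces the replacement (paths and make targets as used earlier in this thread; adapt to your tree):

```
cd /opt/vpp
make wipe-release                        # drop stale build artifacts
make build-release
make pkg-rpm
rpm -Uvh --replacepkgs build-root/*.rpm  # -U upgrades; -i skips installed pkgs
ldconfig                                 # refresh the shared-library cache
```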

thanks,
-Raj

On Tue, May 19, 2020 at 12:31 AM Florin Coras 
wrote:

> Hi Raj,
>
> By the looks of it, something’s not right because in the logs VCL still
> reports it’s binding using UDPC. You probably cherry-picked [1] but it
> needs [2] as well. More inline.
>
> [1] https://gerrit.fd.io/r/c/vpp/+/27111
> [2] https://gerrit.fd.io/r/c/vpp/+/27106
>
> On May 18, 2020, at 8:42 PM, Raj Kumar  wrote:
>
>
> Hi Florin,
> I tried the patch [1], but VPP is still crashing when the application
> listens with UDPC.
>
> [1] https://gerrit.fd.io/r/c/vpp/+/27111
>
>
>
> On a different topic, I have some questions. Could you please provide
> your inputs -
>
> 1) I am using a Mellanox NIC. Any idea how I can enable Tx checksum offload
> (for udp)? Also, how do I change the Tx burst mode and Rx burst mode to
> Vector?
>
> HundredGigabitEthernet12/0/1   3 up   HundredGigabitEthernet12/0/1
>   Link speed: 100 Gbps
>   Ethernet address b8:83:03:9e:98:81
>  * Mellanox ConnectX-4 Family*
> carrier up full duplex mtu 9206
> flags: admin-up pmd maybe-multiseg rx-ip4-cksum
> rx: queues 4 (max 1024), desc 1024 (min 0 max 65535 align 1)
> tx: queues 5 (max 1024), desc 1024 (min 0 max 65535 align 1)
> pci: device 15b3:1013 subsystem 1590:00c8 address :12:00.01 numa 0
> switch info: name :12:00.1 domain id 1 port id 65535
> max rx packet len: 65536
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum
> vlan-filter
>jumbo-frame scatter timestamp keep-crc rss-hash
> rx offload active: ipv4-cksum udp-cksum tcp-cksum jumbo-frame scatter
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
>outer-ipv4-cksum vxlan-tnl-tso gre-tnl-tso
> geneve-tnl-tso
>multi-segs udp-tnl-tso ip-tnl-tso
>* tx offload active: multi-segs*
> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4
> ipv6-tcp-ex
>ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
>ipv6-ex ipv6 l4-dst-only l4-src-only l3-dst-only
> l3-src-only
> rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4
> ipv6-tcp-ex
>ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
>ipv6-ex ipv6
>
> *   tx burst mode: No MPW + MULTI + TSO + INLINE + METADATArx burst
> mode: Scalar*
>
>
> FC: Not sure why (might not be supported) but the offloads are not enabled
> in dpdk_lib_init for VNET_DPDK_PMD_MLX* nics. You could try replicating
> what’s done for the Intel cards and see if that works. Alternatively, you
> might want to try the rdma driver, although I don’t know if it supports
> csum offloading (cc Ben and Damjan).
>
>
> 2) My application needs to send routing header (SRv6) and Destination
> option extension header. On RedHat 8.1 , I was using socket option to add
> routing and destination option extension header.
> With VPP , I can use SRv6 policy to let VPP add the routing header. But, I
> am not sure if there is any option in VPP or HostStack to add the
> destination option header.
>
>
> FC: We don’t currently support this.
>
> Regards,
> Florin
>
>
>
> Coming back to the original problem, here are the traces-
>
> VCL<39673>: configured VCL debug level (2) from VCL_DEBUG!
> VCL<39673>: using default heapsize 268435456 (0x1000)
> VCL<39673>: allocated VCL heap = 0x7f6b40221010, size 268435456
> (0x1000)
> VCL<39673>: using default configuration.
> vppcom_connect_to_vpp:487: vcl<39673:0>: app (udp6_rx) connecting to VPP
> api (/vpe-api)...
> vppcom_connect_to_vpp:502: vcl<39673:0>: app (udp6_rx) is connected to VPP!
> vppcom_app_create:1200: vcl<39673:0>: sending session enable
> vppcom_app_create:1208: vcl<39673:0>: sending app attach
> vppcom_app_create:1217: vcl<3967

Re: [vpp-dev] Segmentation fault in VPP 20.05 release when using VCL VPPCOM_PROTO_UDPC #vpp-hoststack

2020-05-18 Thread Raj Kumar
_rp=,
the_msg=0x13007ebe8, vm=vm@entry=0x76d7c200 ,
node=node@entry=0x7fffb571a000, is_private=is_private@entry=0 '\000')
at
/usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlibapi/api_shared.c:609
#8  0x77baff06 in vl_mem_api_handle_rpc (vm=vm@entry=0x76d7c200
, node=node@entry=0x7fffb571a000)
at
/usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlibmemory/memory_api.c:748
#9  0x77bbd5d3 in vl_api_clnt_process (vm=,
node=0x7fffb571a000, f=)
at
/usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlibmemory/vlib_api.c:326
#10 0x76b1b136 in vlib_process_bootstrap (_a=)
at
/usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/main.c:1502
#11 0x7602fc0c in clib_calljmp () from /lib64/libvppinfra.so.20.05
#12 0x7fffb5e34dd0 in ?? ()
#13 0x76b1e771 in vlib_process_startup (f=0x0, p=0x7fffb571a000,
vm=0x76d7c200 )
at
/usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vppinfra/types.h:133
#14 dispatch_process (vm=0x76d7c200 ,
p=0x7fffb571a000, last_time_stamp=12611933408198086, f=0x0)
at
/usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/main.c:1569

thanks,
-Raj




On Sat, May 16, 2020 at 8:18 PM Florin Coras  wrote:

> Hi Raj,
>
> Inline.
>
> On May 16, 2020, at 2:30 PM, Raj Kumar  wrote:
>
>  Hi Florin,
>
> I am using VPP 20.05 rc0 . Should I upgrade it ?
>
>
> FC: Not necessarily, as long as it’s relatively recent, i.e., it includes
> all of the recent udp updates.
>
>
> Thanks! for providing the patch, i will try it on Monday. Actually, I am
> testing in a controlled environment where I can not change the VPP
> libraries. I will try it on my server.
>
>
> FC: Sounds good. Let me know how it goes!
>
>
>  On the UDP connection; yes, the error EINPROGRESS was there because I am
> using a non-blocking connection. Now, I am ignoring this error.
>  Sometimes, VPP crashes when I kill my application (not gracefully), even
> when there is a single connection.
>
>
> FC: That might have to do with the app dying such that 1) it does not
> detach from vpp (e.g., sigkill and atexit function is not executed) 2) it
> dies with the message queue mutex held and 3) vpp tries to enqueue more
> events before detecting that it crashed (~30s).
>
>
> The good part is that now I am able to move connections to different cores
> by connecting on receipt of the first packet and then re-binding the
> socket to listen.
> Basically, this approach works, but I have not tested it thoroughly.
> However , I am still in favor of using the UDPC connection.
>
>
> FC: If you have enough logic in your app to emulate a handshake, i.e.,
> always have the client send a few bytes and wait for a reply from the
> server before opening a new connection, then this approach is probably more
> flexible from core placement perspective.
>
> The patch tries to emulate the old udpc with udp (udpc in vpp was
> confusing for consumers). You might get away with listening from multiple
> vcl workers on the same udp ip:port pair and have vpp load balance accepts
> between them, but I’ve never tested that. You can do this only with udp
> listeners that have been initialized as connected (so only with the patch).
>
>
> Btw, in trace logs I see some ssvm_delete related error when re-binding
> the connection.
>
>
> FC: I think it’s fine. Going over the interactions step by step to see if
> understand what you’re doing (and hopefully help you understand what vpp
> does underneath).
>
>
> vpp# sh session verbose
> ConnectionState  Rx-f
>  Tx-f
> [0:0][U] 2001:5b0::700:b883:31f:29e:9880:6677-LISTEN 0
> 0
> Thread 0: active sessions 1
>
> ConnectionState  Rx-f
>  Tx-f
> [1:0][U] 2001:5b0::700:b883:31f:29e:9880:6677-OPENED 0
> 0
> Thread 1: active sessions 1
>
> ConnectionState  Rx-f
>  Tx-f
> [2:0][U] 2001:5b0::700:b883:31f:29e:9880:6677-OPENED 0
> 0
> Thread 2: active sessions 1
> Thread 3: no sessions
> Thread 4: no sessions
>
> VCL<24434>: configured VCL debug level (2) from VCL_DEBUG!
> VCL<24434>: using default heapsize 268435456 (0x10000000)
> VCL<24434>: allocated VCL heap = 0x7f7f18d1b010, size 268435456
> (0x10000000)
> VCL<24434>: using default configuration.
> vppcom_connect_to_vpp:487: vcl<24434:0>: app (udp6_rx) connecting to VPP
> api (/vpe-api)...
> vppcom_connect_to_vpp:502: vcl<24434:0>: app (udp6_rx) is connected to VPP!
> vppcom_app_create:1200: vcl<24434:0>: sending session enable
> vppcom_app_create:1208: vcl<24434:0>: sending app attach

Re: [vpp-dev] Segmentation fault in VPP 20.05 release when using VCL VPPCOM_PROTO_UDPC #vpp-hoststack

2020-05-16 Thread Raj Kumar
vppcom_session_listen:1458: vcl<24434:1>: session 16777219: sending vpp
listen request...
ssvm_delete_shm:205: unlink segment '24177-3': No such file or directory
(errno 2)
vcl_segment_detach:467: vcl<24434:1>: detached segment 3 handle 0
vcl_session_app_del_segment_handler:863: vcl<24434:1>: Unmapped segment: 0
vcl_session_connected_handler:505: vcl<24434:1>: session 2 [0x1]
connected! rx_fifo 0x224051a80, refcnt 1, tx_fifo 0x224051980, refcnt 1
vcl_session_app_add_segment_handler:855: vcl<24434:1>: mapped new segment
'24177-4' size 134217728
vcl_session_bound_handler:607: vcl<24434:1>: session 3 [0x0]: listen
succeeded!
vppcom_epoll_ctl:2658: vcl<24434:1>: EPOLL_CTL_ADD: vep_sh 16777216, sh
16777219, events 0x1, data 0x!

thanks,
-Raj


On Sat, May 16, 2020 at 2:23 PM Florin Coras  wrote:

> Hi Raj,
>
> Assuming you are trying to open more than one connected udp session, does
> this [1] solve the problem (note it's untested)?
>
> To reproduce legacy behavior, this allows you to listen on
> VPPCOM_PROTO_UDPC but that is now converted by vcl into a udp listen that
> propagates with a “connected” flag to vpp. That should result in a udp
> listener that behaves like an “old” udpc listener.
>
> Regards,
> Florin
>
> [1] https://gerrit.fd.io/r/c/vpp/+/27111
>
> On May 16, 2020, at 10:56 AM, Florin Coras via lists.fd.io <
> fcoras.lists=gmail@lists.fd.io> wrote:
>
> Hi Raj,
>
> Are you using master latest/20.05 rc1 or something older? The fact that
> you’re getting a -115 (EINPROGRESS) suggests you might’ve marked the
> connection as “non-blocking” although you created it as blocking. If that’s
> so, the return value is not an error.
>
> Also, how is vpp crashing? Are you by chance trying to open a lot of udp
> connections back to back?
>
> Regards,
> Florin
>
> On May 16, 2020, at 10:23 AM, Raj Kumar  wrote:
>
> Hi Florin,
> I tried to connect on receiving the first UDP packet, but it did not
> work. I am getting error -115 in the application and VPP is crashing.
>
> This is something I tried in the code (udp receiver) -
> sockfd = vppcom_session_create(VPPCOM_PROTO_UDP, 0);
> rv_vpp = vppcom_session_bind(sockfd, &local_ep);  /* args stripped by the archive; local_ep, peer_ep and rd_fds are placeholders */
> if (FD_ISSET(session_idx, &rd_fds))
> {
>     n = vppcom_session_recvfrom(sockfd, (char *)buffer, MAXLINE, 0, &peer_ep);
>     if (first_pkt)
>         rv_vpp = vppcom_session_connect(sockfd, &peer_ep);
>     // Here getting rv_vpp as -115
> }
> Please let me know if I am doing something wrong.
>
> Here are the traces -
>
> VCL<16083>: configured VCL debug level (2) from VCL_DEBUG!
> VCL<16083>: using default heapsize 268435456 (0x10000000)
> VCL<16083>: allocated VCL heap = 0x7fd255ed2010, size 268435456
> (0x10000000)
> VCL<16083>: using default configuration.
> vppcom_connect_to_vpp:487: vcl<16083:0>: app (udp6_rx) connecting to VPP
> api (/vpe-api)...
> vppcom_connect_to_vpp:502: vcl<16083:0>: app (udp6_rx) is connected to VPP!
> vppcom_app_create:1200: vcl<16083:0>: sending session enable
> vppcom_app_create:1208: vcl<16083:0>: sending app attach
> vppcom_app_create:1217: vcl<16083:0>: app_name 'udp6_rx', my_client_index
> 0 (0x0)
>
> vppcom_connect_to_vpp:487: vcl<16083:1>: app (udp6_rx-wrk-1) connecting to
> VPP api (/vpe-api)...
> vppcom_connect_to_vpp:502: vcl<16083:1>: app (udp6_rx-wrk-1) is connected
> to VPP!
> vl_api_app_worker_add_del_reply_t_handler:235: vcl<94:-1>: worker 1
> vpp-worker 1 added
> vcl_worker_register_with_vpp:262: vcl<16083:1>: added worker 1
> vppcom_session_create:1279: vcl<16083:1>: created session 0
> vppcom_session_bind:1426: vcl<16083:1>: session 0 handle 16777216: binding
> to local IPv6 address 2001:5b0::700:b883:31f:29e:9880 port 6677, proto
> UDP
> vppcom_session_listen:1458: vcl<16083:1>: session 16777216: sending vpp
> listen request...
> vcl_session_bound_handler:607: vcl<16083:1>: session 0 [0x0]: listen
> succeeded!
> vppcom_session_connect:1742: vcl<16083:1>: session handle 16777216
> (STATE_CLOSED): connecting to peer IPv6 2001:5b0::700:b883:31f:29e:9886
> port 51190 proto UDP
>  udpRxThread started!!!  ... rx port = 6677
> vppcom_session_connect() failed ... -115
> vcl_session_cleanup:1300: vcl<16083:1>: session 0 [0x0] closing
> vcl_worker_cleanup_cb:190: vcl<94:-1>: cleaned up worker 1
> vl_client_disconnect:309: peer unresponsive, give up
>
> thanks,
> -Raj
>
>
> On Fri, May 15, 2020 at 8:10 PM Florin Coras 
> wrote:
>
>> Hi Raj,
>>
>> There are no explicit vcl apis that allow a udp listener to be switched
>> to connected mode. We might decide to do this at one point 

Re: [vpp-dev] Segmentation fault in VPP 20.05 release when using VCL VPPCOM_PROTO_UDPC #vpp-hoststack

2020-05-16 Thread Raj Kumar
Hi Florin,
I tried to connect on receiving the first UDP packet, but it did not
work. I am getting error -115 in the application and VPP is crashing.

This is something I tried in the code (udp receiver) -
sockfd = vppcom_session_create(VPPCOM_PROTO_UDP, 0);
rv_vpp = vppcom_session_bind(sockfd, &local_ep);  /* args stripped by the archive; local_ep, peer_ep and rd_fds are placeholders */
if (FD_ISSET(session_idx, &rd_fds))
{
    n = vppcom_session_recvfrom(sockfd, (char *)buffer, MAXLINE, 0, &peer_ep);
    if (first_pkt)
        rv_vpp = vppcom_session_connect(sockfd, &peer_ep);
    // Here getting rv_vpp as -115
}
Please let me know if I am doing something wrong.

Here are the traces -

VCL<16083>: configured VCL debug level (2) from VCL_DEBUG!
VCL<16083>: using default heapsize 268435456 (0x10000000)
VCL<16083>: allocated VCL heap = 0x7fd255ed2010, size 268435456 (0x10000000)
VCL<16083>: using default configuration.
vppcom_connect_to_vpp:487: vcl<16083:0>: app (udp6_rx) connecting to VPP
api (/vpe-api)...
vppcom_connect_to_vpp:502: vcl<16083:0>: app (udp6_rx) is connected to VPP!
vppcom_app_create:1200: vcl<16083:0>: sending session enable
vppcom_app_create:1208: vcl<16083:0>: sending app attach
vppcom_app_create:1217: vcl<16083:0>: app_name 'udp6_rx', my_client_index 0
(0x0)

vppcom_connect_to_vpp:487: vcl<16083:1>: app (udp6_rx-wrk-1) connecting to
VPP api (/vpe-api)...
vppcom_connect_to_vpp:502: vcl<16083:1>: app (udp6_rx-wrk-1) is connected
to VPP!
vl_api_app_worker_add_del_reply_t_handler:235: vcl<94:-1>: worker 1
vpp-worker 1 added
vcl_worker_register_with_vpp:262: vcl<16083:1>: added worker 1
vppcom_session_create:1279: vcl<16083:1>: created session 0
vppcom_session_bind:1426: vcl<16083:1>: session 0 handle 16777216: binding
to local IPv6 address 2001:5b0::700:b883:31f:29e:9880 port 6677, proto
UDP
vppcom_session_listen:1458: vcl<16083:1>: session 16777216: sending vpp
listen request...
vcl_session_bound_handler:607: vcl<16083:1>: session 0 [0x0]: listen
succeeded!
vppcom_session_connect:1742: vcl<16083:1>: session handle 16777216
(STATE_CLOSED): connecting to peer IPv6 2001:5b0::700:b883:31f:29e:9886
port 51190 proto UDP
udpRxThread started!!!  ... rx port = 6677
vppcom_session_connect() failed ... -115
vcl_session_cleanup:1300: vcl<16083:1>: session 0 [0x0] closing
vcl_worker_cleanup_cb:190: vcl<94:-1>: cleaned up worker 1
vl_client_disconnect:309: peer unresponsive, give up

thanks,
-Raj


On Fri, May 15, 2020 at 8:10 PM Florin Coras  wrote:

> Hi Raj,
>
> There are no explicit vcl apis that allow a udp listener to be switched to
> connected mode. We might decide to do this at one point through a new bind
> api (non-posix like) since we do support this for builtin applications.
>
> However, you now have the option of connecting a bound session. That is,
> on the first received packet on a udp listener, you can grab the peer’s
> address and connect it. Iperf3 in udp mode, which is part of our make test
> infra, does exactly that. Subsequently, it re-binds the port to accept more
> connections. Would that work for you?
>
> Regards,
> Florin
>
> On May 15, 2020, at 4:06 PM, Raj Kumar  wrote:
>
> Thanks! Florin,
>
> OK, I understood that I need to change my application to use a UDP socket
> and then use vppcom_session_connect().
> This is fine for the UDP client (sender).
>
> But, in the UDP server (receiver), I am not sure how to use
> vppcom_session_connect().
> I am using vppcom_session_listen() to listen for connections and then
> calling vppcom_session_accept() to accept a new connection.
>
> With UDPC, I was able to utilize the RSS (receive side scaling)
> feature to move the received connections onto different cores/threads.
>
> Just want to confirm if I can achieve the same with UDP.
>
> I will change my application and will update you about the result.
>
> Thanks,
> -Raj
>
>
> On Fri, May 15, 2020 at 5:17 PM Florin Coras 
> wrote:
>
>> Hi Raj,
>>
>> We removed udpc transport in vpp. I’ll push a patch that removes it from
>> vcl as well.
>>
>> Calling connect on a udp connection will give you connected semantics
>> now. Let me know if that solves the issue for you.
>>
>> Regards,
>> Florin
>>
>>
>> On May 15, 2020, at 12:15 PM, Raj Kumar  wrote:
>>
>> Hi,
> I am getting a segmentation fault in VPP when using a VCL VPPCOM_PROTO_UDPC
> socket. This issue is observed with both the UDP sender and UDP receiver
> applications.
>
> However, both UDP sender and receiver work fine with VPPCOM_PROTO_UDP.
>>
>> Here is the stack trace -
>>
>> (gdb) bt
>> #0  0x in ?? ()
>> #1  0x7775da59 in session_open_vc (app_wrk_index=1,
>> rmt=0x7fffb5e34cc0, opaque=0)
>

Re: [vpp-dev] Segmentation fault in VPP 20.05 release when using VCL VPPCOM_PROTO_UDPC #vpp-hoststack

2020-05-15 Thread Raj Kumar
Thanks! Florin,

OK, I understood that I need to change my application to use a UDP socket and
then use vppcom_session_connect().
This is fine for the UDP client (sender).

But, in the UDP server (receiver), I am not sure how to use
vppcom_session_connect().
I am using vppcom_session_listen() to listen for connections and then
calling vppcom_session_accept() to accept a new connection.

With UDPC, I was able to utilize the RSS (receive side scaling) feature
to move the received connections onto different cores/threads.

Just want to confirm if I can achieve the same with UDP.

I will change my application and will update you about the result.

Thanks,
-Raj


On Fri, May 15, 2020 at 5:17 PM Florin Coras  wrote:

> Hi Raj,
>
> We removed udpc transport in vpp. I’ll push a patch that removes it from
> vcl as well.
>
> Calling connect on a udp connection will give you connected semantics now.
> Let me know if that solves the issue for you.
>
> Regards,
> Florin
>
>
> On May 15, 2020, at 12:15 PM, Raj Kumar  wrote:
>
> Hi,
> I am getting a segmentation fault in VPP when using a VCL VPPCOM_PROTO_UDPC
> socket. This issue is observed with both the UDP sender and UDP receiver
> applications.
>
> However, both UDP sender and receiver work fine with VPPCOM_PROTO_UDP.
>
> Here is the stack trace -
>
> (gdb) bt
> #0  0x in ?? ()
> #1  0x7775da59 in session_open_vc (app_wrk_index=1,
> rmt=0x7fffb5e34cc0, opaque=0)
> at
> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/session.c:1217
> #2  0x77779257 in session_mq_connect_handler (data=0x7fffb676e7a8)
> at
> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/session_node.c:138
> #3  0x77780f48 in session_event_dispatch_ctrl (elt=0x7fffb643f51c,
> wrk=0x7fffb650a640)
> at
> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/session.h:262
> #4  session_queue_node_fn (vm=, node=,
> frame=)
> at
> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/session_node.c:1409
> #5  0x76b214c1 in dispatch_node (last_time_stamp=,
> frame=0x0, dispatch_state=VLIB_NODE_STATE_POLLING,
> type=VLIB_NODE_TYPE_INPUT, node=0x7fffb5a9a980, vm=0x76d7c200
> )
> at
> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/main.c:1235
> #6  vlib_main_or_worker_loop (is_main=1, vm=0x76d7c200
> )
> at
> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/main.c:1815
> #7  vlib_main_loop (vm=0x76d7c200 ) at
> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/main.c:1990
> #8  vlib_main (vm=, vm@entry=0x76d7c200
> , input=input@entry=0x7fffb5e34fa0)
> at
> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/main.c:2236
> #9  0x76b61756 in thread0 (arg=140737334723072) at
> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/unix/main.c:658
> #10 0x7602fc0c in clib_calljmp () from /lib64/libvppinfra.so.20.05
> #11 0x7fffd1e0 in ?? ()
> #12 0x76b627ed in vlib_unix_main (argc=,
> argv=)
> at
> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/unix/main.c:730
>
> Earlier, I tested this functionality with the VPP 20.01 release with the
> following patches and it worked perfectly.
> https://gerrit.fd.io/r/c/vpp/+/24332
> https://gerrit.fd.io/r/c/vpp/+/24334
> https://gerrit.fd.io/r/c/vpp/+/24462
>
> Thanks,
> -Raj
> 
>
>
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16415): https://lists.fd.io/g/vpp-dev/message/16415
Mute This Topic: https://lists.fd.io/mt/74234856/21656
Mute #vpp-hoststack: https://lists.fd.io/mk?hashtag=vpp-hoststack=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Segmentation fault in VPP 20.05 release when using VCL VPPCOM_PROTO_UDPC #vpp-hoststack

2020-05-15 Thread Raj Kumar
Hi,
I am getting a segmentation fault in VPP when using a VCL VPPCOM_PROTO_UDPC
socket. This issue is observed with both the UDP sender and UDP receiver
applications.

However, both UDP sender and receiver work fine with VPPCOM_PROTO_UDP.

Here is the stack trace -

(gdb) bt
#0  0x in ?? ()
#1  0x7775da59 in session_open_vc (app_wrk_index=1, rmt=0x7fffb5e34cc0, 
opaque=0)
at 
/usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/session.c:1217
#2  0x77779257 in session_mq_connect_handler (data=0x7fffb676e7a8)
at 
/usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/session_node.c:138
#3  0x77780f48 in session_event_dispatch_ctrl (elt=0x7fffb643f51c, 
wrk=0x7fffb650a640)
at 
/usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/session.h:262
#4  session_queue_node_fn (vm=, node=, 
frame=)
at 
/usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/session_node.c:1409
#5  0x76b214c1 in dispatch_node (last_time_stamp=, 
frame=0x0, dispatch_state=VLIB_NODE_STATE_POLLING,
type=VLIB_NODE_TYPE_INPUT, node=0x7fffb5a9a980, vm=0x76d7c200 
)
at /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/main.c:1235
#6  vlib_main_or_worker_loop (is_main=1, vm=0x76d7c200 )
at /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/main.c:1815
#7  vlib_main_loop (vm=0x76d7c200 ) at 
/usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/main.c:1990
#8  vlib_main (vm=, vm@entry=0x76d7c200 , 
input=input@entry=0x7fffb5e34fa0)
at /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/main.c:2236
#9  0x76b61756 in thread0 (arg=140737334723072) at 
/usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/unix/main.c:658
#10 0x7602fc0c in clib_calljmp () from /lib64/libvppinfra.so.20.05
#11 0x7fffd1e0 in ?? ()
#12 0x76b627ed in vlib_unix_main (argc=, argv=)
at /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlib/unix/main.c:730

Earlier, I tested this functionality with the VPP 20.01 release with the
following patches and it worked perfectly.

https://gerrit.fd.io/r/c/vpp/+/24332

https://gerrit.fd.io/r/c/vpp/+/24334

https://gerrit.fd.io/r/c/vpp/+/24462

Thanks,
-Raj
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16410): https://lists.fd.io/g/vpp-dev/message/16410
Mute This Topic: https://lists.fd.io/mt/74234856/21656
Mute #vpp-hoststack: https://lists.fd.io/mk?hashtag=vpp-hoststack=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Multiple UDP receiver applications on same port #vpp-hoststack

2020-02-26 Thread Raj Kumar
Hi Florin,
Thanks for the clarification.

Thanks,
-Raj

On Wed, Feb 26, 2020 at 5:14 PM Florin Coras  wrote:

> Hi Raj,
>
> Now that’s interesting. VPP detects when an application dies (binary api
> mechanism) and forces through the session layer a de-attach which in turn
> leads to an unbind.
>
> In case of udp, not tcp, we have a shim layer that redirects udp packets
> to whomever registered for a certain port. In your case, udp-input was
> registered twice but because currently we don’t have a reference count, the
> unbind removes the registration for both binds.
>
> Will add it to my todo list if nobody beats me to it.
>
> Regards,
> Florin
>
> On Feb 26, 2020, at 1:28 PM, Raj Kumar  wrote:
>
> Hi,
> When two or more UDP rx applications (using VCL) are receiving on the same
> port (bound to the same port but different IP addresses), then on stopping
> either one of the applications, all other applications stop receiving
> traffic. As soon as I restart the application, all other applications start
> receiving traffic again.
>
> vpp# sh ip6 int
>
> vppnet1.2001 is admin up
>
>   Local unicast address(es):
>
> fd0d:edc4::2001::213/64
>
> fd0d:edc4::2001::223/64
>
>   Link-local address(es):
>
> fe80::ba83:3ff:fe79:af8c
> When both applications are running : -
> vpp# sh session verbose 1
> ConnectionState  Rx-f
> Tx-f
> [#0][U] fd0d:edc4::2001::213:9915->:::0   -  0
>  0
> [#0][U] fd0d:edc4::2001::223:9915->:::0   -  0
>  0
> Thread 0: active sessions 2
> Thread 1: no sessions
> Thread 2: no sessions
> Thread 3: no sessions
> Thread 4: no sessions
>
> ConnectionState  Rx-f
> Tx-f
> [#5][U] fd0d:edc4::2001::213:9915->fd0d:edc4:f-  15226
>  0
> Thread 5: active sessions 1
>
> ConnectionState  Rx-f
> Tx-f
> [#6][U] fd0d:edc4::2001::223:9915->fd0d:edc4:f-  0
>  0
> Thread 6: active sessions 1
>
>
> On stopping first application  : -
>
> vpp# sh session verbose 1
>
> ConnectionState  Rx-f
> Tx-f
>
> [#0][U] fd0d:edc4::2001::223:9915->:::0   -  0
>  0
>
> Thread 0: active sessions 1
>
> Thread 1: no sessions
>
> Thread 2: no sessions
>
> Thread 3: no sessions
>
> Thread 4: no sessions
>
> Thread 5: no sessions
>
>
> ConnectionState  Rx-f
> Tx-f
>
> [#6][U] fd0d:edc4::2001::223:9915->fd0d:edc4:f-  0
>  0
>
>
> Thread 6: active sessions 1
> One active session is there, but in 'sh err' the "no listener punt" error
> increments and the application is not receiving the data.
>
> 310150540 ip6-udp-lookup no listener punt
>
> packet trace :-
>
> --- Start of thread 6 vpp_wk_5 ---
> Packet 1
>
> 01:12:04:676114: dpdk-input
>   vppnet1 rx queue 2
>   buffer 0x10b18f: current data 0, length 7634, buffer-pool 0, ref-count
> 1, totlen-nifb 0, trace handle 0x600
>ext-hdr-valid
>   PKT MBUF: port 0, nb_segs 1, pkt_len 7634
> buf_len 9344, data_len 7634, ol_flags 0x182, data_off 128, phys_addr
> 0x744c6440
> packet_type 0x2e1 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
> rss 0x678c0829 fdir.hi 0x0 fdir.lo 0x678c0829
> Packet Offload Flags
>   PKT_RX_RSS_HASH (0x0002) RX packet with RSS hash result
>   PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
>   PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
> Packet Types
>   RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
>   RTE_PTYPE_L3_IPV6_EXT_UNKNOWN (0x00e0) IPv6 packet with or without
> extension headers
>   RTE_PTYPE_L4_UDP (0x0200) UDP packet
>   IP6: b8:83:03:79:9f:e8 -> b8:83:03:79:af:8c 802.1q vlan 2001
>   UDP: fd0d:edc4::2001::104 -> fd0d:edc4::2001::223
> tos 0x00, flow label 0x0, hop limit 64, payload length 7576
>   UDP: 23456 -> 9915
> length 7576, checksum 0x225d
> 01:12:04:676154: ethernet-input
>   frame: flags 0x3, hw-if-index 2, sw-if-index 2
>   IP6: b8:83:03:79:9f:e8 -> b8:83:03:79:af:8c 802.1q vlan 2001
> 01:12:04:676156: ip6-input
>   UDP: fd0d:edc4::2001::104 -> fd0d:edc4::2001::223
> tos 0x00, flow label 0x0, hop limit 64, payload length 7576
>   UDP: 23456 -> 9915
> length 7576, checksum 0x225d
> 01:12:04:676164: ip6-lookup
>   fib 0 dpo-idx 20 flow hash: 0x

[vpp-dev] Multiple UDP receiver applications on same port #vpp-hoststack

2020-02-26 Thread Raj Kumar
Hi,
When two or more UDP rx applications (using VCL) are receiving on the same
port (bound to the same port but different IP addresses), then on stopping
either one of the applications, all other applications stop receiving traffic.
As soon as I restart the application, all other applications start receiving
traffic again.

vpp# sh ip6 int

vppnet1.2001 is admin up

Local unicast address(es):

fd0d:edc4::2001::213/64

fd0d:edc4::2001::223/64

Link-local address(es):

fe80::ba83:3ff:fe79:af8c

When both applications are running : -
vpp# sh session verbose 1
Connection                                        State          Rx-f      Tx-f
[#0][U] fd0d:edc4::2001::213:9915->:::0       -              0         0
[#0][U] fd0d:edc4::2001::223:9915->:::0       -              0         0
Thread 0: active sessions 2
Thread 1: no sessions
Thread 2: no sessions
Thread 3: no sessions
Thread 4: no sessions

Connection                                        State          Rx-f      Tx-f
[#5][U] fd0d:edc4::2001::213:9915->fd0d:edc4:f-              15226     0
Thread 5: active sessions 1

Connection                                        State          Rx-f      Tx-f
[#6][U] fd0d:edc4::2001::223:9915->fd0d:edc4:f-              0         0
Thread 6: active sessions 1

On stopping first application  : -

vpp# sh session verbose 1

Connection                                        State          Rx-f      Tx-f

[#0][U] fd0d:edc4::2001::223:9915->:::0       -              0         0

Thread 0: active sessions 1

Thread 1: no sessions

Thread 2: no sessions

Thread 3: no sessions

Thread 4: no sessions

Thread 5: no sessions

Connection                                        State          Rx-f      Tx-f

[#6][U] fd0d:edc4::2001::223:9915->fd0d:edc4:f-              0         0

Thread 6: active sessions 1

One active session is there, but in 'sh err' the "no listener punt" error
increments and the application is not receiving the data.

310150540             ip6-udp-lookup             no listener punt

packet trace :-

--- Start of thread 6 vpp_wk_5 ---
Packet 1

01:12:04:676114: dpdk-input
vppnet1 rx queue 2
buffer 0x10b18f: current data 0, length 7634, buffer-pool 0, ref-count 1, 
totlen-nifb 0, trace handle 0x600
ext-hdr-valid
PKT MBUF: port 0, nb_segs 1, pkt_len 7634
buf_len 9344, data_len 7634, ol_flags 0x182, data_off 128, phys_addr 0x744c6440
packet_type 0x2e1 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x678c0829 fdir.hi 0x0 fdir.lo 0x678c0829
Packet Offload Flags
PKT_RX_RSS_HASH (0x0002) RX packet with RSS hash result
PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
Packet Types
RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
RTE_PTYPE_L3_IPV6_EXT_UNKNOWN (0x00e0) IPv6 packet with or without extension 
headers
RTE_PTYPE_L4_UDP (0x0200) UDP packet
IP6: b8:83:03:79:9f:e8 -> b8:83:03:79:af:8c 802.1q vlan 2001
UDP: fd0d:edc4::2001::104 -> fd0d:edc4::2001::223
tos 0x00, flow label 0x0, hop limit 64, payload length 7576
UDP: 23456 -> 9915
length 7576, checksum 0x225d
01:12:04:676154: ethernet-input
frame: flags 0x3, hw-if-index 2, sw-if-index 2
IP6: b8:83:03:79:9f:e8 -> b8:83:03:79:af:8c 802.1q vlan 2001
01:12:04:676156: ip6-input
UDP: fd0d:edc4::2001::104 -> fd0d:edc4::2001::223
tos 0x00, flow label 0x0, hop limit 64, payload length 7576
UDP: 23456 -> 9915
length 7576, checksum 0x225d
01:12:04:676164: ip6-lookup
fib 0 dpo-idx 20 flow hash: 0x
UDP: fd0d:edc4::2001::104 -> fd0d:edc4::2001::223
tos 0x00, flow label 0x0, hop limit 64, payload length 7576
UDP: 23456 -> 9915
length 7576, checksum 0x225d
01:12:04:676166: ip6-local
UDP: fd0d:edc4::2001::104 -> fd0d:edc4::2001::223
tos 0x00, flow label 0x0, hop limit 64, payload length 7576
UDP: 23456 -> 9915
length 7576, checksum 0x225d
01:12:04:676169: ip6-udp-lookup
UDP: src-port 23456 dst-port 9915
01:12:04:676169: ip6-punt
UDP: fd0d:edc4::2001::104 -> fd0d:edc4::2001::223
tos 0x00, flow label 0x0, hop limit 64, payload length 7576
UDP: 23456 -> 9915
length 7576, checksum 0x225d
01:12:04:676169: error-punt
rx:vppnet1.2001
01:12:04:676170: punt
ip6-udp-lookup: no listener punt

thanks,
-Raj
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15570): https://lists.fd.io/g/vpp-dev/message/15570
Mute This Topic: https://lists.fd.io/mt/71574152/21656
Mute #vpp-hoststack: https://lists.fd.io/mk?hashtag=vpp-hoststack=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] #vpp-hoststack - Issue with UDP receiver application using VCL library

2020-01-24 Thread Raj Kumar
fff:2001::203:7729->fd0d:edc4:ESTABLISHED0 0

[1:2][T] fd0d:edc4::2001::203:39216->fd0d:edc4ESTABLISHED0
399

Thread 1: active sessions 3



ConnectionState  Rx-f
Tx-f

[2:0][T] fd0d:edc4::2001::203:51962->fd0d:edc4ESTABLISHED0 0

[2:1][T] fd0d:edc4::2001::203:56849->fd0d:edc4ESTABLISHED0
399

[2:2][T] fd0d:edc4::2001::203:6689->fd0d:edc4:ESTABLISHED0 0

[2:3][T] fd0d:edc4::2001::203:7729->fd0d:edc4:ESTABLISHED0 0

Thread 2: active sessions 4



ConnectionState  Rx-f
Tx-f

[3:0][T] fd0d:edc4::2001::203:29141->fd0d:edc4ESTABLISHED0 0

[3:1][T] fd0d:edc4::2001::203:6689->fd0d:edc4:ESTABLISHED0 0

Thread 3: active sessions 2



ConnectionState  Rx-f
Tx-f

[4:0][T] fd0d:edc4::2001::203:6669->fd0d:edc4:ESTABLISHED0 0

[4:1][T] fd0d:edc4::2001::203:57550->fd0d:edc4ESTABLISHED0
399

[4:2][T] fd0d:edc4::2001::203:56939->fd0d:edc4ESTABLISHED    0 0

Thread 4: active sessions 3


thanks,
-Raj











On Tue, Jan 21, 2020 at 9:43 PM Florin Coras  wrote:

> Hi Raj,
>
> Inline.
>
> On Jan 21, 2020, at 3:41 PM, Raj Kumar  wrote:
>
> Hi Florin,
> There is no drop on the interfaces. It is a 100G card.
> In the UDP tx application, I am using a 1460-byte buffer to send on
> select(). I am getting 5 Gbps throughput, but if I start one more
> application then total throughput goes down to 4 Gbps, as both sessions
> are on the same thread.
> I increased the tx buffer to 8192 bytes and then I can get 11 Gbps
> throughput, but again if I start one more application the throughput goes
> down to 10 Gbps.
>
>
> FC: I assume you’re using vppcom_session_write to write to the session.
> How large is “len” typically? See lower on why that matters.
>
>
>
> I found one issue in the code (you must be aware of it): the UDP send
> MSS is hard-coded to 1460 (in /vpp/src/vnet/udp/udp.c), so large
> packets are getting fragmented.
>
> udp_send_mss (transport_connection_t * t)
> {
>   /* TODO figure out MTU of output interface */
>   return 1460;
> }
>
>
> FC: That’s a typical mss and actually what tcp uses as well. Given the
> nics, they should be fine sending a decent number of mpps without the need
> to do jumbo ip datagrams.
>
> If I change the MSS to 8192 then I am getting 17 Mbps throughput. But if
> I start one more application then throughput goes down to 13 Mbps.
>
>
> It looks like 17 Mbps is the per-core limit, and since all the sessions are
> pinned to the same thread we cannot get more throughput. Here, the per-core
> throughput looks good to me. Please let me know if there is any way to use
> multiple threads for UDP tx applications.
>
>
> In your previous email you mentioned that we can use a connected udp socket
> in the UDP receiver. Can we do something similar for UDP tx?
>
>
> FC: I think it may work fine if vpp has main + 1 worker. I have a draft
> patch here [1] that seems to work with multiple workers but it’s not
> heavily tested.
>
> Out of curiosity, I ran a vcl_test_client/server test with 1 worker and
> with XL710s, I’m seeing this:
>
> CLIENT RESULTS: Streamed 65536017791 bytes
>   in 14.392678 seconds (36.427420 Gbps half-duplex)!
>
> Should be noted that because of how datagrams are handled in the session
> layer, throughput is sensitive to write sizes. I ran the client like:
> ~/vcl_client -p udpc 6.0.1.2 1234 -U -N 100 -T 65536
>
> Or in english, unidirectional test, tx buffer of 64kB and 1M writes of
> that buffer. My vcl config was such that tx fifos were 4MB and rx fifos
> 2MB. The sender had few tx packet drops (1657) and the receiver few rx
> packet drops (801). If you plan to use it, make sure arp entries are first
> resolved (e.g., use ping) otherwise the first packet is lost.
>
> Throughput drops to ~15Gbps with 8kB writes. You should probably also test
> with bigger writes with udp.
>
> [1] https://gerrit.fd.io/r/c/vpp/+/24462
>
>
> From the hardware stats, it seems that UDP tx checksum offload is not
> enabled/active, which could impact performance. I think udp tx
> checksum should be enabled by default if it is not disabled using the
> parameter "no-tx-checksum-offload".
>
>
> FC: Performance might be affected by the limited number of offloads
> available. Here’s what I see on my XL710s:
>
> rx offload active: ipv4-cksum jumbo-frame scatter
> tx offload active: udp-cksum tcp-cksum multi-segs
>
>
> Ethernet address b8:83:03:79:af:8c
>   Mellanox ConnectX-4 Family
> carrier up 

Re: [vpp-dev] #vpp-hoststack - Issue with UDP receiver application using VCL library

2020-01-21 Thread Raj Kumar
Correction : -
Please read 17 Mbps as 17 Gbps and 13Mbps as 13Gbps in my previous mail.

thanks,
-Raj

On Tue, Jan 21, 2020 at 6:41 PM Raj Kumar  wrote:

> Hi Florin,
> There is no drop on the interfaces. It is a 100G card.
> In the UDP tx application, I am using a 1460-byte buffer to send on
> select(). I am getting 5 Gbps throughput, but if I start one more
> application then total throughput goes down to 4 Gbps, as both sessions
> are on the same thread.
> I increased the tx buffer to 8192 bytes and then I can get 11 Gbps
> throughput, but again if I start one more application the throughput goes
> down to 10 Gbps.
>
> I found one issue in the code (you must be aware of it): the UDP send
> MSS is hard-coded to 1460 (in /vpp/src/vnet/udp/udp.c), so large
> packets are getting fragmented.
> udp_send_mss (transport_connection_t * t)
> {
>   /* TODO figure out MTU of output interface */
>   return 1460;
> }
> If I change the MSS to 8192 then I get 17 Mbps throughput, but if
> I start one more application the throughput goes down to 13 Mbps.
>
> It looks like 17 Mbps is a per-core limit, and since all the sessions are
> pinned to the same thread we cannot get more throughput. Here, the per-core
> throughput looks good to me. Please let me know if there is any way to use
> multiple threads for UDP tx applications.
>
> In your previous email you mentioned that we can use a connected udp socket
> in the UDP receiver. Can we do something similar for UDP tx?
>
> From the hardware stats, it seems that UDP tx checksum offload is not
> enabled/active, which could impact the performance. I think udp tx
> checksum should be enabled by default if it is not disabled using the
> parameter "no-tx-checksum-offload".
>
> Ethernet address b8:83:03:79:af:8c
>   Mellanox ConnectX-4 Family
> carrier up full duplex mtu 9206
> flags: admin-up pmd maybe-multiseg subif rx-ip4-cksum
> rx: queues 5 (max 65535), desc 1024 (min 0 max 65535 align 1)
> tx: queues 6 (max 65535), desc 1024 (min 0 max 65535 align 1)
> pci: device 15b3:1017 subsystem 1590:0246 address :12:00.00 numa 0
> max rx packet len: 65536
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum
> vlan-filter
>jumbo-frame scatter timestamp keep-crc
> rx offload active: ipv4-cksum jumbo-frame scatter
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
>outer-ipv4-cksum vxlan-tnl-tso gre-tnl-tso
> multi-segs
>udp-tnl-tso ip-tnl-tso
> tx offload active: multi-segs
> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4
> ipv6-tcp-ex
>ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
>ipv6-ex ipv6
> rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4
> ipv6-tcp-ex
>ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
>ipv6-ex ipv6
> tx burst function: (nil)
> rx burst function: mlx5_rx_burst
>
> thanks,
> -Raj
>
> On Mon, Jan 20, 2020 at 7:55 PM Florin Coras 
> wrote:
>
>> Hi Raj,
>>
>> Good to see progress. Check with “show int” the tx counters on the sender
>> and rx counters on the receiver as the interfaces might be dropping
>> traffic. One sender should be able to do more than 5Gbps.
>>
>> How big are the writes to the tx fifo? Make sure the tx buffer is some
>> tens of kB.
>>
>> As for the issue with the number of workers, you’ll have to switch to
>> udpc (connected udp), to ensure you have a separate connection for each
>> ‘flow’, and to use accept in combination with epoll to accept the sessions
>> udpc creates.
>>
>> Note that udpc currently does not work correctly with vcl and multiple
>> vpp workers if vcl is the sender (not the receiver) and traffic is
>> bidirectional. The sessions are all created on the first thread and once
>> return traffic is received, they’re migrated to the thread selected by RSS
>> hashing. VCL is not notified when that happens and it runs out of sync. You
>> might not be affected by this, as you’re not receiving any return traffic,
>> but because of that all sessions may end up stuck on the first thread.
>>
>> For udp transport, the listener is connection-less and bound to the main
>> thread. As a result, all incoming packets, even if they pertain to multiple
>> flows, are written to the listener’s buffer/fifo.
>>
>> Regards,
>> Florin
>>
>> On Jan 20

Re: [vpp-dev] #vpp-hoststack - Issue with UDP receiver application using VCL library

2020-01-21 Thread Raj Kumar
Hi Florin,
There is no drop on the interfaces. It is a 100G card.
In the UDP tx application, I am using a 1460-byte buffer to send on select().
I am getting 5 Gbps throughput, but if I start one more application the
total throughput goes down to 4 Gbps, as both sessions are on the same
thread.
I increased the tx buffer to 8192 bytes and then I can get 11 Gbps
throughput, but again if I start one more application the throughput goes
down to 10 Gbps.

I found one issue in the code (you must be aware of it): the UDP send
MSS is hard-coded to 1460 (in /vpp/src/vnet/udp/udp.c), so large
packets are getting fragmented.
udp_send_mss (transport_connection_t * t)
{
  /* TODO figure out MTU of output interface */
  return 1460;
}
If I change the MSS to 8192 then I get 17 Mbps throughput, but if
I start one more application the throughput goes down to 13 Mbps.
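For reference, the TODO in udp_send_mss() above hints at the fix: derive the MSS from the output interface MTU rather than hard-coding 1460. A minimal sketch of that calculation follows; the helper name and the fixed header-size constants are mine for illustration, not actual VPP code.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch only: one way udp_send_mss() could derive the MSS from the
 * output interface MTU instead of returning a hard-coded 1460, as the
 * TODO in the VPP source suggests. The helper name and the fixed
 * header sizes below are illustrative, not actual VPP code. */
#define IP4_HEADER_BYTES 20
#define IP6_HEADER_BYTES 40
#define UDP_HEADER_BYTES 8

static uint16_t
udp_mss_from_mtu (uint16_t mtu, int is_ip6)
{
  uint16_t ip_hdr = is_ip6 ? IP6_HEADER_BYTES : IP4_HEADER_BYTES;
  return (uint16_t) (mtu - ip_hdr - UDP_HEADER_BYTES);
}
```

With a 9000-byte MTU this would give an IPv6/UDP MSS of 8952, which avoids the fragmentation seen with payloads larger than 1460.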

It looks like 17 Mbps is a per-core limit, and since all the sessions are
pinned to the same thread we cannot get more throughput. Here, the per-core
throughput looks good to me. Please let me know if there is any way to use
multiple threads for UDP tx applications.

In your previous email you mentioned that we can use a connected udp socket
in the UDP receiver. Can we do something similar for UDP tx?

From the hardware stats, it seems that UDP tx checksum offload is not
enabled/active, which could impact the performance. I think udp tx
checksum should be enabled by default if it is not disabled using the
parameter "no-tx-checksum-offload".

Ethernet address b8:83:03:79:af:8c
  Mellanox ConnectX-4 Family
carrier up full duplex mtu 9206
flags: admin-up pmd maybe-multiseg subif rx-ip4-cksum
rx: queues 5 (max 65535), desc 1024 (min 0 max 65535 align 1)
tx: queues 6 (max 65535), desc 1024 (min 0 max 65535 align 1)
pci: device 15b3:1017 subsystem 1590:0246 address :12:00.00 numa 0
max rx packet len: 65536
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum vlan-filter
   jumbo-frame scatter timestamp keep-crc
rx offload active: ipv4-cksum jumbo-frame scatter
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
   outer-ipv4-cksum vxlan-tnl-tso gre-tnl-tso multi-segs
   udp-tnl-tso ip-tnl-tso
tx offload active: multi-segs
rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4
ipv6-tcp-ex
   ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
   ipv6-ex ipv6
rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4
ipv6-tcp-ex
   ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
   ipv6-ex ipv6
tx burst function: (nil)
rx burst function: mlx5_rx_burst

thanks,
-Raj

On Mon, Jan 20, 2020 at 7:55 PM Florin Coras  wrote:

> Hi Raj,
>
> Good to see progress. Check with “show int” the tx counters on the sender
> and rx counters on the receiver as the interfaces might be dropping
> traffic. One sender should be able to do more than 5Gbps.
>
> How big are the writes to the tx fifo? Make sure the tx buffer is some
> tens of kB.
>
> As for the issue with the number of workers, you’ll have to switch to udpc
> (connected udp), to ensure you have a separate connection for each ‘flow’,
> and to use accept in combination with epoll to accept the sessions udpc
> creates.
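The reason udpc enables scaling is that each connected session carries its own 5-tuple, so flow hashing (e.g. NIC RSS) can spread sessions across worker threads, while a single connection-less listener funnels every flow to one thread. A toy sketch of that per-flow worker selection; the hash is purely an illustrative stand-in for RSS, not VPP's actual hashing.

```c
#include <assert.h>
#include <stdint.h>

/* Why connected udp (udpc) helps with multiple workers: each session
 * carries its own 5-tuple, so flow hashing (e.g. NIC RSS) can spread
 * sessions across threads, while a single connection-less listener
 * funnels every flow to one thread. The hash below is a toy stand-in
 * for RSS, purely illustrative. */
struct flow
{
  uint32_t src_ip, dst_ip;
  uint16_t src_port, dst_port;
};

static uint32_t
toy_flow_hash (const struct flow *f)
{
  uint32_t h = f->src_ip * 2654435761u;
  h ^= f->dst_ip * 2246822519u;
  h ^= ((uint32_t) f->src_port << 16) | f->dst_port;
  return h;
}

/* Same flow always lands on the same worker; distinct flows spread out. */
static int
pick_worker (const struct flow *f, int n_workers)
{
  return (int) (toy_flow_hash (f) % (uint32_t) n_workers);
}
```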
>
> Note that udpc currently does not work correctly with vcl and multiple vpp
> workers if vcl is the sender (not the receiver) and traffic is
> bidirectional. The sessions are all created on the first thread and once
> return traffic is received, they’re migrated to the thread selected by RSS
> hashing. VCL is not notified when that happens and it runs out of sync. You
> might not be affected by this, as you’re not receiving any return traffic,
> but because of that all sessions may end up stuck on the first thread.
>
> For udp transport, the listener is connection-less and bound to the main
> thread. As a result, all incoming packets, even if they pertain to multiple
> flows, are written to the listener’s buffer/fifo.
>
> Regards,
> Florin
>
> On Jan 20, 2020, at 3:50 PM, Raj Kumar  wrote:
>
> Hi Florin,
> I changed my application as you suggested. Now, I am able to achieve 5
> Gbps with a single UDP stream. Overall, I can get ~20 Gbps with multiple
> host applications. Also, the TCP throughput improved to ~28 Gbps after
> tuning as mentioned in [1].
> On a similar topic: the UDP tx throughput is throttled to 5 Gbps. Even if
> I run multiple host applications the overall throughput is 5 Gbps. I
> also tried configuring multiple worker threads, but the problem is that
> all the application sessions are assigned to the same worker thread.

Re: [vpp-dev] #vpp-hoststack - Issue with UDP receiver application using VCL library

2020-01-20 Thread Raj Kumar
Hi Florin,
I changed my application as you suggested. Now, I am able to achieve 5 Gbps
with a single UDP stream. Overall, I can get ~20 Gbps with multiple host
applications. Also, the TCP throughput improved to ~28 Gbps after tuning
as mentioned in [1].
On a similar topic: the UDP tx throughput is throttled to 5 Gbps. Even if
I run multiple host applications the overall throughput is 5 Gbps. I also
tried configuring multiple worker threads, but the problem is that all the
application sessions are assigned to the same worker thread. Is there any
way to assign each session to a different worker thread?

vpp# sh session verbose 2
Thread 0: no sessions
[#1][U] fd0d:edc4::2001::203:58926->fd0d:edc4:
 Rx fifo: cursize 0 nitems 399 has_event 0
  head 0 tail 0 segment manager 1
  vpp session 0 thread 1 app session 0 thread 0
  ooo pool 0 active elts newest 0
 Tx fifo: cursize 399 nitems 399 has_event 1
  head 1460553 tail 1460552 segment manager 1
  vpp session 0 thread 1 app session 0 thread 0
  ooo pool 0 active elts newest 4294967295
 session: state: opened opaque: 0x0 flags:
[#1][U] fd0d:edc4::2001::203:63413->fd0d:edc4:
 Rx fifo: cursize 0 nitems 399 has_event 0
  head 0 tail 0 segment manager 2
  vpp session 1 thread 1 app session 0 thread 0
  ooo pool 0 active elts newest 0
 Tx fifo: cursize 399 nitems 399 has_event 1
  head 3965434 tail 3965433 segment manager 2
  vpp session 1 thread 1 app session 0 thread 0
  ooo pool 0 active elts newest 4294967295
 session: state: opened opaque: 0x0 flags:
Thread 1: active sessions 2
Thread 2: no sessions
Thread 3: no sessions
Thread 4: no sessions
Thread 5: no sessions
Thread 6: no sessions
Thread 7: no sessions
vpp# sh app client
Connection  App
[#1][U] fd0d:edc4::2001::203:58926->udp6_tx_8092[shm]
[#1][U] fd0d:edc4::2001::203:63413->udp6_tx_8093[shm]
vpp#



thanks,
-Raj

On Sun, Jan 19, 2020 at 8:50 PM Florin Coras  wrote:

> Hi Raj,
>
> The function used for receiving datagrams is limited to reading at most
> the length of a datagram from the rx fifo. UDP datagrams are mtu sized, so
> your reads are probably limited to ~1.5 kB. On each epoll rx event, try
> reading from the session handle in a while loop until you get
> VPPCOM_EWOULDBLOCK. That might improve performance.
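The drain pattern described above can be sketched as follows. In a real VCL app the read call would be vppcom_session_read(), which returns bytes read or a negative error once the fifo is empty; here a stub read function and an assumed error value stand in so the control flow is self-contained.

```c
#include <assert.h>
#include <string.h>

/* Sketch of the drain-until-EWOULDBLOCK pattern: on an epoll rx event,
 * keep reading until the fifo reports empty. STUB_EWOULDBLOCK and
 * stub_read() are illustrative stand-ins for VPPCOM_EWOULDBLOCK and
 * vppcom_session_read(). */
#define STUB_EWOULDBLOCK (-11)

typedef int (*read_fn_t) (void *ctx, char *buf, int len);

static long
drain_session (void *ctx, read_fn_t read_fn, char *buf, int buflen)
{
  long total = 0;
  int rv;
  while ((rv = read_fn (ctx, buf, buflen)) > 0)
    total += rv; /* a real app would process the datagram here */
  /* rv is now STUB_EWOULDBLOCK: fifo empty, go back to epoll */
  return total;
}

/* Stub: hands out three 1458-byte "datagrams", then reports empty. */
static int
stub_read (void *ctx, char *buf, int len)
{
  int *remaining = (int *) ctx;
  if (*remaining == 0)
    return STUB_EWOULDBLOCK;
  (*remaining)--;
  memset (buf, 0, (size_t) (len < 1458 ? len : 1458));
  return 1458;
}
```

The point is that one epoll event may cover many queued datagrams, so a single read per event leaves data stranded in the rx fifo.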
>
> Having said that, udp is lossy so unless you implement your own
> congestion/flow control algorithms, the data you’ll receive might be full
> of “holes”. What are the rx/tx error counters on your interfaces (check
> with “sh int”).
>
> Also, with simple tuning like this [1], you should be able to achieve much
> more than 15Gbps with tcp.
>
> Regards,
> Florin
>
> [1] https://wiki.fd.io/view/VPP/HostStack/LDP/iperf
>
> On Jan 19, 2020, at 3:25 PM, Raj Kumar  wrote:
>
>   Hi Florin,
>  By using the VCL library in a UDP receiver application, I am able to
> receive only 2 Mbps of traffic. On increasing the traffic, I see an Rx FIFO
> full error and the application stops receiving traffic from the session
> layer. Whereas with TCP I can easily achieve 15 Gbps throughput without
> tuning any DPDK parameter. UDP tx also looks fine: from a host
> application I can send ~5 Gbps without any issue.
>
> I am running VPP( stable/2001 code) on RHEL8 server using Mellanox 100G
> (MLNX5) adapters.
> Please advise if I can use VCL library to receive high throughput UDP
> traffic ( in Gbps). I would be running multiple instances of host
> application to receive data ( ~50-60 Gbps).
>
> I also tried increasing the Rx FIFO size to 16 MB, but it did not help much.
> The host application just discards the received packets; it is not
> doing any packet processing.
>
> [root@orc01 vcl_test]# VCL_DEBUG=2 ./udp6_server_vcl
> VCL<20201>: configured VCL debug level (2) from VCL_DEBUG!
> VCL<20201>: allocated VCL heap = 0x7f39a17ab010, size 268435456
(0x10000000)
VCL<20201>: configured rx_fifo_size 4000000 (0x3d0900)
VCL<20201>: configured tx_fifo_size 4000000 (0x3d0900)
> VCL<20201>: configured app_scope_local (1)
> VCL<20201>: configured app_scope_global (1)
> VCL<20201>: configured api-socket-name (/tmp/vpp-api.sock)
> VCL<20201>: completed parsing vppcom config!
> vppcom_connect_to_vpp:480: vcl<20201:0>: app (udp6_server) is connected to
> VPP!
> vppcom_app_create:1104: vcl<20201:0>: sending session enable
> vppcom_app_create:1112: vcl<20201:0>: sending app attach
> vppcom_app_create:1121: vcl<20201:0>: app_name 'udp6_server',
> my_client_index 256 (0x100)
> vppcom_epoll_create:2439: vcl<20201:0>: Created vep_idx 0
> vppcom_session_create:1179: vcl<20201:0>: created 

Re: [vpp-dev] #vpp-hoststack - Issue with UDP receiver application using VCL library

2020-01-19 Thread Raj Kumar
0d:edc4::2001::203
tos 0x00, flow label 0x0, hop limit 64, payload length 1458
  UDP: 56944 -> 8092
length 1458, checksum 0xb22d
00:09:53:445028: ethernet-input
  frame: flags 0x3, hw-if-index 2, sw-if-index 2
  IP6: b8:83:03:79:9f:e4 -> b8:83:03:79:af:8c 802.1q vlan 2001
00:09:53:445029: ip6-input
  UDP: fd0d:edc4::2001::201 -> fd0d:edc4::2001::203
tos 0x00, flow label 0x0, hop limit 64, payload length 1458
  UDP: 56944 -> 8092
length 1458, checksum 0xb22d
00:09:53:445031: ip6-lookup
  fib 0 dpo-idx 6 flow hash: 0x
  UDP: fd0d:edc4::2001::201 -> fd0d:edc4::2001::203
tos 0x00, flow label 0x0, hop limit 64, payload length 1458
  UDP: 56944 -> 8092
length 1458, checksum 0xb22d
00:09:53:445032: ip6-local
UDP: fd0d:edc4::2001::201 -> fd0d:edc4::2001::203
  tos 0x00, flow label 0x0, hop limit 64, payload length 1458
UDP: 56944 -> 8092
  length 1458, checksum 0xb22d
00:09:53:445032: ip6-udp-lookup
  UDP: src-port 56944 dst-port 8092
00:09:53:445033: udp6-input
  UDP_INPUT: connection 0, disposition 5, thread 0


thanks,
-Raj


On Wed, Jan 15, 2020 at 4:09 PM Raj Kumar via Lists.Fd.Io  wrote:

> Hi Florin,
> Yes,  [2] patch resolved the  IPv6/UDP receiver issue.
> Thanks! for your help.
>
> thanks,
> -Raj
>
> On Tue, Jan 14, 2020 at 9:35 PM Florin Coras 
> wrote:
>
>> Hi Raj,
>>
>> First of all, with this [1], the vcl test app/client can establish a udpc
>> connection. Note that udp will most probably lose packets, so large
>> exchanges with those apps may not work.
>>
>> As for the second issue, does [2] solve it?
>>
>> Regards,
>> Florin
>>
>> [1] https://gerrit.fd.io/r/c/vpp/+/24332
>> [2] https://gerrit.fd.io/r/c/vpp/+/24334
>>
>> On Jan 14, 2020, at 12:59 PM, Raj Kumar  wrote:
>>
>> Hi Florin,
>> Thanks! for the reply.
>>
>> I realized the issue with the non-connected case. For receiving
>> datagrams, I was using recvfrom() with the MSG_DONTWAIT flag; because of
>> that, vppcom_session_recvfrom() was failing. It expects either 0 or
>> MSG_PEEK as flags.
>>   if (flags == 0)
>> rv = vppcom_session_read (session_handle, buffer, buflen);
>>   else if (flags & MSG_PEEK)
>> rv = vppcom_session_peek (session_handle, buffer, buflen);
>>   else
>> {
>>   VDBG (0, "Unsupport flags for recvfrom %d", flags);
>>   return VPPCOM_EAFNOSUPPORT;
>> }
>>
>>  I changed the flags to 0 in recvfrom(); after that, UDP rx is working
>> fine, but only for IPv4.
>>
>> I am facing a different issue with IPv6/UDP receiver.  I am getting "no
>> listener for dst port" error.
>>
>> Please let me know if I am doing something wrong.
>> Here are the traces : -
>>
>> [root@orc01 testcode]# VCL_DEBUG=2 LDP_DEBUG=2
>> LD_PRELOAD=/opt/vpp/build-root/install-vpp-native/vpp/lib/libvcl_ldpreload.so
>>  VCL_CONFIG=/etc/vpp/vcl.cfg ./udp6_rx
>> VCL<1164>: configured VCL debug level (2) from VCL_DEBUG!
>> VCL<1164>: allocated VCL heap = 0x7ff877439010, size 268435456
>> (0x10000000)
>> VCL<1164>: configured rx_fifo_size 4000000 (0x3d0900)
>> VCL<1164>: configured tx_fifo_size 4000000 (0x3d0900)
>> VCL<1164>: configured app_scope_local (1)
>> VCL<1164>: configured app_scope_global (1)
>> VCL<1164>: configured api-socket-name (/tmp/vpp-api.sock)
>> VCL<1164>: completed parsing vppcom config!
>> vppcom_connect_to_vpp:549: vcl<1164:0>: app (ldp-1164-app) is connected
>> to VPP!
>> vppcom_app_create:1067: vcl<1164:0>: sending session enable
>> vppcom_app_create:1075: vcl<1164:0>: sending app attach
>> vppcom_app_create:1084: vcl<1164:0>: app_name 'ldp-1164-app',
>> my_client_index 0 (0x0)
>> ldp_init:209: ldp<1164>: configured LDP debug level (2) from env var
>> LDP_DEBUG!
>> ldp_init:282: ldp<1164>: LDP initialization: done!
>> ldp_constructor:2490: LDP<1164>: LDP constructor: done!
>> socket:974: ldp<1164>: calling vls_create: proto 1 (UDP), is_nonblocking 0
>> vppcom_session_create:1142: vcl<1164:0>: created session 0
>> bind:1086: ldp<1164>: fd 32: calling vls_bind: vlsh 0, addr
>> 0x7fff9a93efe0, len 28
>> vppcom_session_bind:1280: vcl<1164:0>: session 0 handle 0: binding to
>> local IPv6 address :: port 8092, proto UDP
>> vppcom_session_listen:1312: vcl<1164:0>: session 0: sending vpp listen
>> request...
>> vcl_session_bound_handler:610: vcl<1164:0>: session 0 [0x1]: listen
>> succeeded!
>> bind:1102: ldp<1164>

Re: [vpp-dev] #vpp-hoststack - Issue with UDP receiver application using VCL library

2020-01-15 Thread Raj Kumar
Hi Florin,
Yes,  [2] patch resolved the  IPv6/UDP receiver issue.
Thanks! for your help.

thanks,
-Raj

On Tue, Jan 14, 2020 at 9:35 PM Florin Coras  wrote:

> Hi Raj,
>
> First of all, with this [1], the vcl test app/client can establish a udpc
> connection. Note that udp will most probably lose packets, so large
> exchanges with those apps may not work.
>
> As for the second issue, does [2] solve it?
>
> Regards,
> Florin
>
> [1] https://gerrit.fd.io/r/c/vpp/+/24332
> [2] https://gerrit.fd.io/r/c/vpp/+/24334
>
> On Jan 14, 2020, at 12:59 PM, Raj Kumar  wrote:
>
> Hi Florin,
> Thanks! for the reply.
>
> I realized the issue with the non-connected case. For receiving
> datagrams, I was using recvfrom() with the MSG_DONTWAIT flag; because of
> that, vppcom_session_recvfrom() was failing. It expects either 0 or
> MSG_PEEK as flags.
>   if (flags == 0)
> rv = vppcom_session_read (session_handle, buffer, buflen);
>   else if (flags & MSG_PEEK)
> rv = vppcom_session_peek (session_handle, buffer, buflen);
>   else
> {
>   VDBG (0, "Unsupport flags for recvfrom %d", flags);
>   return VPPCOM_EAFNOSUPPORT;
> }
>
> I changed the flags to 0 in recvfrom(); after that, UDP rx is working fine,
> but only for IPv4.
>
> I am facing a different issue with IPv6/UDP receiver.  I am getting "no
> listener for dst port" error.
>
> Please let me know if I am doing something wrong.
> Here are the traces : -
>
> [root@orc01 testcode]# VCL_DEBUG=2 LDP_DEBUG=2
> LD_PRELOAD=/opt/vpp/build-root/install-vpp-native/vpp/lib/libvcl_ldpreload.so
>  VCL_CONFIG=/etc/vpp/vcl.cfg ./udp6_rx
> VCL<1164>: configured VCL debug level (2) from VCL_DEBUG!
> VCL<1164>: allocated VCL heap = 0x7ff877439010, size 268435456 (0x10000000)
> VCL<1164>: configured rx_fifo_size 4000000 (0x3d0900)
> VCL<1164>: configured tx_fifo_size 4000000 (0x3d0900)
> VCL<1164>: configured app_scope_local (1)
> VCL<1164>: configured app_scope_global (1)
> VCL<1164>: configured api-socket-name (/tmp/vpp-api.sock)
> VCL<1164>: completed parsing vppcom config!
> vppcom_connect_to_vpp:549: vcl<1164:0>: app (ldp-1164-app) is connected to
> VPP!
> vppcom_app_create:1067: vcl<1164:0>: sending session enable
> vppcom_app_create:1075: vcl<1164:0>: sending app attach
> vppcom_app_create:1084: vcl<1164:0>: app_name 'ldp-1164-app',
> my_client_index 0 (0x0)
> ldp_init:209: ldp<1164>: configured LDP debug level (2) from env var
> LDP_DEBUG!
> ldp_init:282: ldp<1164>: LDP initialization: done!
> ldp_constructor:2490: LDP<1164>: LDP constructor: done!
> socket:974: ldp<1164>: calling vls_create: proto 1 (UDP), is_nonblocking 0
> vppcom_session_create:1142: vcl<1164:0>: created session 0
> bind:1086: ldp<1164>: fd 32: calling vls_bind: vlsh 0, addr
> 0x7fff9a93efe0, len 28
> vppcom_session_bind:1280: vcl<1164:0>: session 0 handle 0: binding to
> local IPv6 address :: port 8092, proto UDP
> vppcom_session_listen:1312: vcl<1164:0>: session 0: sending vpp listen
> request...
> vcl_session_bound_handler:610: vcl<1164:0>: session 0 [0x1]: listen
> succeeded!
> bind:1102: ldp<1164>: fd 32: returning 0
>
> vpp# sh app server
> Connection  App  Wrk
> [0:0][CT:U] :::8092->:::0   ldp-1164-app[shm] 0
> [#0][U] :::8092->:::0   ldp-1164-app[shm] 0
>
> vpp# sh err
>    Count      Node          Reason
>  7   dpdk-input   no error
>   2606 ip6-udp-lookup no listener for dst port
>  8arp-reply   ARP replies sent
>  1  arp-disabled  ARP Disabled on this
> interface
> 13ip6-glean   neighbor solicitations
> sent
>   2606ip6-input   valid ip6 packets
>  4  ip6-local-hop-by-hop  Unknown protocol ip6
> local h-b-h packets dropped
>   2606 ip6-icmp-error destination unreachable
> response sent
> 40 ip6-icmp-input valid packets
>  1 ip6-icmp-input neighbor solicitations
> from source not on link
> 12 ip6-icmp-input neighbor solicitations
> for unknown targets
>  1 ip6-icmp-input neighbor advertisements
> sent
>  1 ip6-icmp-input neighbor advertisements
> received
> 40 ip6-icmp-input

Re: [vpp-dev] #vpp-hoststack - Issue with UDP receiver application using VCL library

2020-01-14 Thread Raj Kumar
Hi Florin,
Thanks! for the reply.

I realized the issue with the non-connected case. For receiving datagrams,
I was using recvfrom() with the MSG_DONTWAIT flag; because of that,
vppcom_session_recvfrom() was failing. It expects either 0 or
MSG_PEEK as flags.
  if (flags == 0)
rv = vppcom_session_read (session_handle, buffer, buflen);
  else if (flags & MSG_PEEK)
rv = vppcom_session_peek (session_handle, buffer, buflen);
  else
{
  VDBG (0, "Unsupport flags for recvfrom %d", flags);
  return VPPCOM_EAFNOSUPPORT;
}

 I changed the flags to 0 in recvfrom(); after that, UDP rx is working fine,
but only for IPv4.
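The dispatch quoted above boils down to: only flags == 0 (plain read) and MSG_PEEK are accepted, so MSG_DONTWAIT-style behaviour has to come from the session itself being nonblocking rather than a per-call flag. A testable mirror of that logic; the flag values are the common Linux ones and the enum is mine, for illustration only.

```c
#include <assert.h>

/* Mirror of the flag handling in the vppcom_session_recvfrom() snippet
 * above: flags == 0 maps to a plain read, MSG_PEEK to a peek, and any
 * other flag (e.g. MSG_DONTWAIT) is rejected. Flag values are the
 * common Linux ones; the enum is illustrative. */
#define FLAG_MSG_PEEK 0x2
#define FLAG_MSG_DONTWAIT 0x40

enum recv_op { OP_READ, OP_PEEK, OP_UNSUPPORTED };

static enum recv_op
recvfrom_dispatch (int flags)
{
  if (flags == 0)
    return OP_READ;		/* -> vppcom_session_read() */
  else if (flags & FLAG_MSG_PEEK)
    return OP_PEEK;		/* -> vppcom_session_peek() */
  return OP_UNSUPPORTED;	/* -> error return */
}
```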

I am facing a different issue with IPv6/UDP receiver.  I am getting "no
listener for dst port" error.

Please let me know if I am doing something wrong.
Here are the traces : -

[root@orc01 testcode]# VCL_DEBUG=2 LDP_DEBUG=2
LD_PRELOAD=/opt/vpp/build-root/install-vpp-native/vpp/lib/libvcl_ldpreload.so
 VCL_CONFIG=/etc/vpp/vcl.cfg ./udp6_rx
VCL<1164>: configured VCL debug level (2) from VCL_DEBUG!
VCL<1164>: allocated VCL heap = 0x7ff877439010, size 268435456 (0x10000000)
VCL<1164>: configured rx_fifo_size 4000000 (0x3d0900)
VCL<1164>: configured tx_fifo_size 4000000 (0x3d0900)
VCL<1164>: configured app_scope_local (1)
VCL<1164>: configured app_scope_global (1)
VCL<1164>: configured api-socket-name (/tmp/vpp-api.sock)
VCL<1164>: completed parsing vppcom config!
vppcom_connect_to_vpp:549: vcl<1164:0>: app (ldp-1164-app) is connected to
VPP!
vppcom_app_create:1067: vcl<1164:0>: sending session enable
vppcom_app_create:1075: vcl<1164:0>: sending app attach
vppcom_app_create:1084: vcl<1164:0>: app_name 'ldp-1164-app',
my_client_index 0 (0x0)
ldp_init:209: ldp<1164>: configured LDP debug level (2) from env var
LDP_DEBUG!
ldp_init:282: ldp<1164>: LDP initialization: done!
ldp_constructor:2490: LDP<1164>: LDP constructor: done!
socket:974: ldp<1164>: calling vls_create: proto 1 (UDP), is_nonblocking 0
vppcom_session_create:1142: vcl<1164:0>: created session 0
bind:1086: ldp<1164>: fd 32: calling vls_bind: vlsh 0, addr 0x7fff9a93efe0,
len 28
vppcom_session_bind:1280: vcl<1164:0>: session 0 handle 0: binding to local
IPv6 address :: port 8092, proto UDP
vppcom_session_listen:1312: vcl<1164:0>: session 0: sending vpp listen
request...
vcl_session_bound_handler:610: vcl<1164:0>: session 0 [0x1]: listen
succeeded!
bind:1102: ldp<1164>: fd 32: returning 0

vpp# sh app server
Connection  App  Wrk
[0:0][CT:U] :::8092->:::0   ldp-1164-app[shm] 0
[#0][U] :::8092->:::0   ldp-1164-app[shm] 0

vpp# sh err
   Count      Node          Reason
 7   dpdk-input   no error
  2606 ip6-udp-lookup no listener for dst port
 8arp-reply   ARP replies sent
 1  arp-disabled  ARP Disabled on this
interface
13ip6-glean   neighbor solicitations
sent
  2606ip6-input   valid ip6 packets
 4  ip6-local-hop-by-hop  Unknown protocol ip6
local h-b-h packets dropped
  2606 ip6-icmp-error destination unreachable
response sent
40 ip6-icmp-input valid packets
 1 ip6-icmp-input neighbor solicitations
from source not on link
12 ip6-icmp-input neighbor solicitations
for unknown targets
 1 ip6-icmp-input neighbor advertisements
sent
 1 ip6-icmp-input neighbor advertisements
received
40 ip6-icmp-input router advertisements sent
40 ip6-icmp-input router advertisements
received
 1 ip4-icmp-input echo replies sent
89   lldp-input   lldp packets received on
disabled interfaces
  1328llc-input   unknown llc ssap/dsap
vpp#

vpp# show trace
--- Start of thread 0 vpp_main ---
Packet 1

00:23:39:401354: dpdk-input
  HundredGigabitEthernet12/0/0 rx queue 0
  buffer 0x8894e: current data 0, length 1516, buffer-pool 0, ref-count 1,
totlen-nifb 0, trace handle 0x0
  ext-hdr-valid
  l4-cksum-computed l4-cksum-correct
  PKT MBUF: port 0, nb_segs 1, pkt_len 1516
buf_len 2176, data_len 1516, ol_flags 0x180, data_off 128, phys_addr
0x75025400
packet_type 0x2e1 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
Packet Types
  RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
  RTE_PTYPE_L3_IPV6_EXT_UNKNOWN