Re: [vpp-dev] [dmm-dev] FD.io - Gerrit 2.16 Changes

2019-07-30 Thread Florin Coras
It seems that this [1] supports the new workflows. Just overwriting 
/usr/lib/python3/dist-packages/git_review/cmd.py seems to do the trick. 

I’m sure there must be a cleaner way … 

Florin
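For anyone wanting to rehearse the push syntax quoted below before touching a real Gerrit: git itself treats the %wip/%private suffixes as ordinary characters in a ref name, so the commands can be exercised against a throwaway local remote. All paths below are made up for illustration; only a real Gerrit server actually interprets the % options.

```shell
# Scratch "server" and client repos (paths are illustrative).
rm -rf /tmp/gerrit-demo-server.git /tmp/gerrit-demo-client
git init --bare /tmp/gerrit-demo-server.git
git init /tmp/gerrit-demo-client
cd /tmp/gerrit-demo-client
git config user.email demo@example.com
git config user.name Demo
git remote add origin /tmp/gerrit-demo-server.git
echo hello > file.txt
git add file.txt
git commit -m "demo change"

# Identical syntax to the quoted Gerrit commands; a plain git server
# simply records the literal ref name, '%' included.
git push origin HEAD:refs/for/master%wip

# The ref now exists verbatim on the "server":
git --git-dir=/tmp/gerrit-demo-server.git show-ref 'refs/for/master%wip'
```

Against a real Gerrit, the same push would instead create a change in the WIP state.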

> On Jul 30, 2019, at 1:20 PM, Vanessa Valderrama 
>  wrote:
> 
> Changes that will happen with Gerrit:
> 
> 1) The 'New UI' for Gerrit will become the default UI
> 
> 2) The Draft workflow is removed and replaced with the 'Work in Progress'
> (aka 'WIP') and 'Private' workflows. Unfortunately, git-review does not
> support either of these workflows directly. Using them means either
> pushing your changes manually, or pushing them with git-review and then
> moving the change into the desired workflow via the UI.
> 
> 
> To push a private change:
> git push origin HEAD:refs/for/master%private
> 
> To take it out of private:
> git push origin HEAD:refs/for/master%remove-private
> 
> To push a WIP change:
> git push origin HEAD:refs/for/master%wip
> 
> To mark it ready for review:
> git push origin HEAD:refs/for/master%ready
> 
> Once a change is in the private or WIP state, it does not switch to the
> ready state until the current state has been removed.
> 
> In both cases, the state can be set via the UI by opening the triple-dot
> menu on the change and selecting the appropriate option.
> 
> To remove the WIP state, press the 'START REVIEW' button. To remove the
> private state, you must use the menu.
> 
> NOTE: We are not moving to Gerrit 3 at this time. That is on the
> roadmap, but we must first move to the latest 2.x release, since various
> migrations that have to be done before we can move to Gerrit 3 are only
> available at the 2.16 level.
> 
> Thank you,
> Vanessa
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#61): https://lists.fd.io/g/dmm-dev/message/61
> Mute This Topic: https://lists.fd.io/mt/32658314/675152
> Group Owner: dmm-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/dmm-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13624): https://lists.fd.io/g/vpp-dev/message/13624
Mute This Topic: https://lists.fd.io/mt/32658598/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] [dmm-dev] FD.io - Gerrit 2.16 Changes

2019-07-30 Thread Florin Coras
Pushed send too soon 

Florin

[1] /usr/lib/python3/dist-packages/git_review/cmd.py


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13625): https://lists.fd.io/g/vpp-dev/message/13625
Mute This Topic: https://lists.fd.io/mt/32658616/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] [dmm-dev] FD.io - Gerrit 2.16 Changes

2019-07-30 Thread Florin Coras
Apologies, seems today is not my day with links. 

Florin

[1] https://github.com/openstack-infra/git-review/tree/master/git_review


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13626): https://lists.fd.io/g/vpp-dev/message/13626
Mute This Topic: https://lists.fd.io/mt/32658627/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Transport endpoint extraction while using vnet_connect_uri

2019-07-31 Thread Florin Coras
Hi Praveen, 

In 19.08 you can grab the local and remote endpoints with session_get_endpoint. 
We didn’t have these in 19.04.

Florin
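For reference, a rough sketch of how a builtin app could use it on 19.08 (C-style pseudocode; the names are from memory of the session layer, so treat the exact signatures as assumptions and check session.h):

```
/* Sketch only (19.08 session layer, signatures unverified). */
transport_endpoint_t lcl, rmt;
session_t *s = session_get_from_handle (session_handle);

session_get_endpoint (s, &lcl, 1 /* is_lcl: local ip/port */);
session_get_endpoint (s, &rmt, 0 /* is_lcl: remote ip/port */);

/* lcl.ip/lcl.port and rmt.ip/rmt.port now hold the transport
 * endpoints; ports are in network byte order. */
```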

> On Jul 31, 2019, at 7:18 PM, Praveen Kariyanahalli  
> wrote:
> 
> I wrote a client app to connect to a remote tls_server using vnet_connect_uri. 
> 
> Post connect, how do I extract the transport endpoints (local bind IP, port, 
> remote IP, remote port)? I am using the 1904 version of the code. 
> 
> ct = transport_get_connection (ecm->transport_proto, index, 
> thread_index);
> 
> It is not helping; it returns 0.
> 
> The context is neither under the TLS proto nor under TCP.
> 
> What am I missing?
> 
> Thanks in advance
> Praveen
> 
> 
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#13636): https://lists.fd.io/g/vpp-dev/message/13636
> Mute This Topic: https://lists.fd.io/mt/32674842/675152
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13638): https://lists.fd.io/g/vpp-dev/message/13638
Mute This Topic: https://lists.fd.io/mt/32674842/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Transport endpoint extraction while using vnet_connect_uri

2019-08-01 Thread Florin Coras
Hi Praveen, 

That’s just the master branch :-)

Florin

> On Aug 1, 2019, at 2:03 PM, Praveen Kariyanahalli  
> wrote:
> 
> I don't see the tag 1908. How do I get that code? 
> 
> Thanks in advance
> Praveen

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13648): https://lists.fd.io/g/vpp-dev/message/13648
Mute This Topic: https://lists.fd.io/mt/32674842/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Envoy transport socket support for vpp.

2019-08-06 Thread Florin Coras
Hi Rupesh, 

CC’ing Stephen and Ping who are working on this. 

Florin

> On Aug 5, 2019, at 4:36 AM, Rupesh Raghuvaran  
> wrote:
> 
> Hi,
> 
> I would like to know the current state of the support for Envoy over the VPP 
> host stack. Is there any open-source transport socket support for VPP 
> available for Envoy?
> 
> Thanks
> Rupesh
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#13661): https://lists.fd.io/g/vpp-dev/message/13661
> Mute This Topic: https://lists.fd.io/mt/32724370/675152
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13674): https://lists.fd.io/g/vpp-dev/message/13674
Mute This Topic: https://lists.fd.io/mt/32724370/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vppcom_session_connect blocking or non blocking

2019-08-15 Thread Florin Coras
Hi Max,

Not at this time. It should be possible with a few changes for nonblocking 
sessions. I’ll add it to my list, in case nobody else beats me to it. 

Florin

> On Aug 15, 2019, at 2:47 AM, Max A. via Lists.Fd.Io 
>  wrote:
> 
> Hello,
> 
> Can the vppcom_session_connect() function run in non-blocking mode? I see that 
> there is a wait for the connection result in the 
> vppcom_wait_for_session_state_change function. Is it possible to get the 
> result of the connection using vppcom_epoll_wait?
> 
> Thanks.
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#13745): https://lists.fd.io/g/vpp-dev/message/13745
> Mute This Topic: https://lists.fd.io/mt/32885087/675152
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13747): https://lists.fd.io/g/vpp-dev/message/13747
Mute This Topic: https://lists.fd.io/mt/32885087/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] ValueError: Fieldname 'async' is a python keyword and is not accessible via the python API

2019-08-21 Thread Florin Coras
Interesting. Want to push a patch to change it to async_enable, or something 
like that, here [1]? Or I can do that. 

Florin

[1] 
https://git.fd.io/vpp/diff/src/plugins/tlsopenssl/tls_openssl.api?id=be4d1aa 


> On Aug 21, 2019, at 7:41 AM, Damjan Marion via Lists.Fd.Io 
>  wrote:
> 
> 
> Maybe it's due to the fact that I use a newer Ubuntu version, but I'm not able 
> to build VPP on master.
> 
> This is error message I get:
> 
> [1283/1678] Generating API header 
> /home/damarion/cisco/vpp8/bui...build-vpp_debug-native/vpp/plugins/tlsopenssl/tls_openssl.api.h
> FAILED: plugins/tlsopenssl/tls_openssl.api.h
> cd 
> /home/damarion/cisco/vpp8/build-root/build-vpp_debug-native/vpp/plugins/tlsopenssl
>  && mkdir -p 
> /home/damarion/cisco/vpp8/build-root/build-vpp_debug-native/vpp/plugins/tlsopenssl
>  && /home/damarion/cisco/vpp8/src/tools/vppapigen/vppapigen --includedir 
> /home/damarion/cisco/vpp8/src --input 
> /home/damarion/cisco/vpp8/src/plugins/tlsopenssl/tls_openssl.api --output 
> /home/damarion/cisco/vpp8/build-root/build-vpp_debug-native/vpp/plugins/tlsopenssl/tls_openssl.api.h
> ValueError: Fieldname 'async' is a python keyword and is not accessible via 
> the python API.
> 
> 
> And it looks like reverting [1] helps…
> 
> [1] https://git.fd.io/vpp/commit/?id=be4d1aa 
> 
> 
> 
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#13798): https://lists.fd.io/g/vpp-dev/message/13798
> Mute This Topic: https://lists.fd.io/mt/32978936/675152
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13799): https://lists.fd.io/g/vpp-dev/message/13799
Mute This Topic: https://lists.fd.io/mt/32978936/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] ValueError: Fieldname 'async' is a python keyword and is not accessible via the python API

2019-08-21 Thread Florin Coras

> On Aug 21, 2019, at 8:32 AM, Damjan Marion  wrote:
> 
>> 
>> On 21 Aug 2019, at 17:15, Florin Coras > <mailto:fcoras.li...@gmail.com>> wrote:
>> 
>> Interesting. Want to push a patch to change async_enable or something like 
>> that here [1]? Or I can do that. 
>> 
>> Florin
>> 
>> [1] 
>> https://git.fd.io/vpp/diff/src/plugins/tlsopenssl/tls_openssl.api?id=be4d1aa 
>> <https://git.fd.io/vpp/diff/src/plugins/tlsopenssl/tls_openssl.api?id=be4d1aa>
>> 
> 
> It sounds to me like the right solution to the problem is to avoid Python 
> keyword-related constraints in API msg fields.
> 
> Next time they may add a new keyword and break existing API definitions…

Agreed. I suspect the issue is that we pass those fields as named params in 
the Python APIs, but Ole should know the actual answer. 

Florin
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13801): https://lists.fd.io/g/vpp-dev/message/13801
Mute This Topic: https://lists.fd.io/mt/32978936/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP 19.08 release is available!

2019-08-22 Thread Florin Coras
Congrats to the entire community and thanks Andrew!

Cheers,
Florin

> On Aug 21, 2019, at 1:57 PM, Andrew Yourtchenko  wrote:
> 
> Hi all,
> 
> the VPP release 19.08 artifacts are available on packagecloud release
> repositories.
> 
> I have tested the installation on ubuntu and centos.
> 
> Many thanks to everyone involved into making it happen!
> 
> Special thanks to Vanessa Valderrama for the help today.
> 
> --a
> 
> 
> p.s. stable/1908 branch is re-opened for the fixes slated for .1
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#13804): https://lists.fd.io/g/vpp-dev/message/13804
> Mute This Topic: https://lists.fd.io/mt/32983052/675152
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13816): https://lists.fd.io/g/vpp-dev/message/13816
Mute This Topic: https://lists.fd.io/mt/32983052/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vppcom_session_connect blocking or non blocking

2019-09-04 Thread Florin Coras
Hi Max, 

Here’s the patch that allows non-blocking connects [1]. 

Florin

[1] https://gerrit.fd.io/r/c/vpp/+/21610
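With that patch applied, the intended usage would look roughly like the following (C-style pseudocode; constant and struct names follow vppcom.h as I recall them, so verify against the header):

```
/* Sketch: nonblocking connect, completion picked up via vppcom_epoll_wait. */
int fd = vppcom_session_create (VPPCOM_PROTO_TCP, 1 /* is_nonblocking */);

if (vppcom_session_connect (fd, &server_endpt) == VPPCOM_EINPROGRESS)
  {
    int epfd = vppcom_epoll_create ();
    struct epoll_event ev = { .events = EPOLLOUT, .data.u32 = fd };

    vppcom_epoll_ctl (epfd, EPOLL_CTL_ADD, fd, &ev);
    vppcom_epoll_wait (epfd, &ev, 1, -1 /* no timeout */);
    /* EPOLLOUT means the connect completed; check for errors before use. */
  }
```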


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13907): https://lists.fd.io/g/vpp-dev/message/13907
Mute This Topic: https://lists.fd.io/mt/32885087/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP API client with no rx pthread

2019-09-11 Thread Florin Coras
Hi Satya, 

Probably you can just replicate what the api rx-thread is doing, i.e., 
rx_thread_fn. In particular, take a look at vl_msg_api_queue_handler. 

Florin
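In other words, something along these lines (C-style pseudocode mirroring what rx_thread_fn does; the queue and dispatch names are from memory, so verify against the API client code):

```
/* Sketch: drain the API input queue synchronously, without an rx pthread. */
svm_queue_t *q = api_main.vl_input_queue;
uword msg;

while (svm_queue_sub (q, (u8 *) &msg, SVM_Q_WAIT, 0 /* time */) == 0)
  {
    vl_msg_api_handler ((void *) msg);  /* dispatch one reply message */
    if (reply_received)                 /* app-defined completion flag */
      break;
  }
```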

> On Sep 11, 2019, at 3:26 AM, Satya Murthy  wrote:
> 
> Hi ,
> 
> We are trying to develop a VPP API client which needs synchronous reply 
> handling.
> Hence, we were thinking of NOT having a separate pthread for receiving the 
> response from VPP.
> We are planning to use no_rx_pthread version of connect api.
> 
> Is there any example code to receive and handle the response synchronously?
> I see all the examples are using a separate pthread for receiving.
> 
> Any input on this will be of great help.
> 
> -- 
> Thanks & Regards,
> Murthy
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#13952): https://lists.fd.io/g/vpp-dev/message/13952
> Mute This Topic: https://lists.fd.io/mt/34101834/675152
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13956): https://lists.fd.io/g/vpp-dev/message/13956
Mute This Topic: https://lists.fd.io/mt/34101834/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VCL: tls in open tcp session

2019-09-16 Thread Florin Coras
Hi Max, 

No, it’s not possible. 

Florin

> On Sep 16, 2019, at 1:35 AM, Max A. via Lists.Fd.Io 
>  wrote:
> 
> Hello,
> 
> Is it possible to switch to using TLS in an already open TCP session using 
> VCL?
> 
> Thanks.
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#13987): https://lists.fd.io/g/vpp-dev/message/13987
> Mute This Topic: https://lists.fd.io/mt/34162392/675152
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13994): https://lists.fd.io/g/vpp-dev/message/13994
Mute This Topic: https://lists.fd.io/mt/34162392/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] UDP packet sending using sendto()

2019-09-25 Thread Florin Coras
Hi Nataraj, 

It’s not possible with VCL at this point but it’s possible with builtin 
applications. It’s just a matter of extending the connect api to support this. 

Florin

> On Sep 25, 2019, at 6:16 PM, Nataraj Batchu  wrote:
> 
> Hi,
> 
> Today with VPP we cannot give the endpoint info in a sendto() call. We have to 
> do a connect(udp_sock, endpoint_info) first and then invoke sendto(). This 
> works. But the question is: how can I influence the source port of these packets?
> 
> Thanks,
> -Nataraj
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#14057): https://lists.fd.io/g/vpp-dev/message/14057
> Mute This Topic: https://lists.fd.io/mt/34294099/675152
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14058): https://lists.fd.io/g/vpp-dev/message/14058
Mute This Topic: https://lists.fd.io/mt/34294099/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vlib_node_add_next usage #vpp

2019-10-03 Thread Florin Coras
Hi Ranadip, 

Yes, the session layer updates all vlib mains (one per worker) when it adds a 
new arc from session_queue_node to whatever node the transport wants to use 
for output. I don’t remember now why we did things that way, but it may be 
that it’s not needed anymore. 

Florin

> On Oct 2, 2019, at 9:23 PM, Ranadip Das  wrote:
> 
> The session_register_transport() has the foreach code.
> 
>   /* *INDENT-OFF* */
>   if (output_node != ~0)
>     {
>       foreach_vlib_main (({
>         next_index = vlib_node_add_next (this_vlib_main,
>                                          session_queue_node.index,
>                                          output_node);
>       }));
>     }
>   /* *INDENT-ON* */
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#14103): https://lists.fd.io/g/vpp-dev/message/14103
> Mute This Topic: https://lists.fd.io/mt/34376308/675152
> Mute #vpp: https://lists.fd.io/mk?hashtag=vpp&subid=1480544
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14109): https://lists.fd.io/g/vpp-dev/message/14109
Mute This Topic: https://lists.fd.io/mt/34376308/21656
Mute #vpp: https://lists.fd.io/mk?hashtag=vpp&subid=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP Hoststack CPS

2019-10-09 Thread Florin Coras
Hi Nataraj, 

The test was done with the builtin echo applications over a 40 Gbps link with 
no latency, and it measured CPS as the number of TCP handshakes per second. 
Also, note that’s a pretty old presentation. 
Florin
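For anyone wanting to reproduce a similar setup, the builtin echo apps are driven from the VPP CLI along these lines (illustrative transcript; the exact command and argument names may differ by release, and the addresses are made up):

```
vpp1# test echo server uri tcp://6.0.1.1/1234
vpp2# test echo clients uri tcp://6.0.1.1/1234 nclients 10000 no-return
```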

> On Oct 9, 2019, at 10:30 AM, Nataraj Batchu  wrote:
> 
> Hi,
> 
> In the slide deck at https://wiki.fd.io/images/b/b4/Vpp-hoststack-kc.pdf 
> I see that with the VPP Hoststack 200k CPS is achieved. I have a few questions:
> 
> a. Are the sessions originated from an app linked with VCL? 
> b. Is the TCP connection over the network or on the local host?
> c. Non-blocking connect fixes are available only recently. Were the results 
> with a blocking connect()? 
> 
> Thanks in advance.
> 
> -Nataraj
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#14153): https://lists.fd.io/g/vpp-dev/message/14153
> Mute This Topic: https://lists.fd.io/mt/34465863/675152
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14154): https://lists.fd.io/g/vpp-dev/message/14154
Mute This Topic: https://lists.fd.io/mt/34465863/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP Hoststack CPS

2019-10-10 Thread Florin Coras
Hi Nataraj, 

I haven’t tested this recently, but we’re planning on adding some TCP + VCL 
related performance testing to CSIT.

Regards, 
Florin

> On Oct 10, 2019, at 6:30 PM, Nataraj Batchu  wrote:
> 
> Hi Florin:
> 
> Thanks for the quick reply. By any chance, do you have the latest CPS numbers? 
> Or any document that you/your team published that I could look at?
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#14156): https://lists.fd.io/g/vpp-dev/message/14156
> Mute This Topic: https://lists.fd.io/mt/34465863/675152
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] tls close from cli lead to assert error

2019-10-11 Thread Florin Coras
Hi, 

First of all, that’s just a test tool that forces sessions closed by 
“faking” a transport close request. It can potentially lead to other issues if 
the app subsequently calls the wrong APIs into the transport. It shouldn’t be 
used on production traffic. 

Having said that, your problem is probably solved by this [1]. That is, we know 
the thread for the session/ctx, so we can use the explicit api. 

Florin

[1] https://gerrit.fd.io/r/c/vpp/+/22674

> On Oct 11, 2019, at 6:24 AM, jiangxiaom...@outlook.com wrote:
> 
> code: g...@github.com:FDio/vpp.git   
> master : 1a41a35b27da6921d6d86a9f1ad5f1b46e1185f7
> 
> If I close a TLS session that is not on thread 0, VPP hits an assert.
> Below is more Info:
> 
> 
> DBGvpp# sh session verbose
> Connection                                         State        Rx-f  Tx-f
> [0:0][CT:J] 0.0.0.0:5005->0.0.0.0:0                LISTEN       0     0
> [0:1][TLS] app_wrk 1 engine 2 tcp 0:2              0     0
> [0:2][T] 0.0.0.0:5005->0.0.0.0:0                   LISTEN       0     0
> Thread 0: active sessions 3
> 
> Connection                                         State        Rx-f  Tx-f
> [1:0][T] 192.168.7.100:5005->192.168.7.101:54206   ESTABLISHED  0     0
> [1:1][TLS] app_wrk 1 index 0 engine 2 tcp 1:0 state: 4          0     0
> Thread 1: active sessions 2
> 
> DBGvpp# clear session thread 1 session 0
> 
> 0: /home/dev/code/vpp-master/src/plugins/tlsopenssl/tls_openssl.c:72 
> (openssl_ctx_get) assertion `! pool_is_free 
> (openssl_main.ctx_pool[vlib_get_thread_index ()], _e)' fails
> 
> Program received signal SIGABRT, Aborted.
> 
> 0x74a12337 in __GI_raise (sig=sig@entry=6) at 
> ../nptl/sysdeps/unix/sysv/linux/raise.c:55
> 
> 55return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
> 
> Missing separate debuginfos, use: debuginfo-install 
> libgcc-4.8.5-39.el7.x86_64 libuuid-2.23.2-61.el7.x86_64 
> numactl-libs-2.0.12-3.el7.x86_64
> 
> (gdb) bt
> 
> #0  0x74a12337 in __GI_raise (sig=sig@entry=6) at 
> ../nptl/sysdeps/unix/sysv/linux/raise.c:55
> 
> #1  0x74a13a28 in __GI_abort () at abort.c:90
> 
> #2  0x0040765a in os_panic () at 
> /home/dev/code/vpp-master/src/vpp/vnet/main.c:355
> 
> #3  0x7585ab29 in debugger () at 
> /home/dev/code/vpp-master/src/vppinfra/error.c:84
> 
> #4  0x7585aef8 in _clib_error (how_to_die=2, function_name=0x0, 
> line_number=0, fmt=0x7fffc9a96a40 "%s:%d (%s) assertion `%s' fails") at 
> /home/dev/code/vpp-master/src/vppinfra/error.c:143
> 
> #5  0x7fffc9a9110e in openssl_ctx_get (ctx_index=0) at 
> /home/dev/code/vpp-master/src/plugins/tlsopenssl/tls_openssl.c:71
> 
> #6  0x7747f20b in tls_ctx_get (ctx_handle=1073741824) at 
> /home/dev/code/vpp-master/src/vnet/tls/tls.c:298
> 
> #7  0x7747f522 in tls_session_disconnect_callback 
> (tls_session=0x7fffdaaa54c0) at 
> /home/dev/code/vpp-master/src/vnet/tls/tls.c:389
> 
> #8  0x77445b9a in app_worker_close_notify (app_wrk=0x7fffda55d280, 
> s=0x7fffdaaa54c0) at 
> /home/dev/code/vpp-master/src/vnet/session/application_worker.c:324
> 
> #9  0x7744a45b in clear_session (s=0x7fffdaaa54c0) at 
> /home/dev/code/vpp-master/src/vnet/session/session_cli.c:594
> 
> #10 0x7744a681 in clear_session_command_fn (vm=0x766b3d80 
> , input=0x7fffda82af00, cmd=0x7fffda3ff2c0) at 
> /home/dev/code/vpp-master/src/vnet/session/session_cli.c:635
> 
> #11 0x763c5105 in vlib_cli_dispatch_sub_commands (vm=0x766b3d80 
> , cm=0x766b3fb0 , 
> input=0x7fffda82af00, parent_command_index=59) at 
> /home/dev/code/vpp-master/src/vlib/cli.c:645
> 
> #12 0x763c4f9a in vlib_cli_dispatch_sub_commands (vm=0x766b3d80 
> , cm=0x766b3fb0 , 
> input=0x7fffda82af00, parent_command_index=0) at 
> /home/dev/code/vpp-master/src/vlib/cli.c:606
> 
> #13 0x763c5530 in vlib_cli_input (vm=0x766b3d80 
> , input=0x7fffda82af00, function=0x7646b44c 
> , function_arg=0) at 
> /home/dev/code/vpp-master/src/vlib/cli.c:746
> 
> #14 0x76471626 in unix_cli_process_input (cm=0x766b4720 
> , cli_file_index=0) at 
> /home/dev/code/vpp-master/src/vlib/unix/cli.c:2572
> 
> #15 0x76472198 in unix_cli_process (vm=0x766b3d80 
> , rt=0x7fffda7ea000, f=0x0) at 
> /home/dev/code/vpp-master/src/vlib/unix/cli.c:2688
> 
> #16 0x76412884 in vlib_process_bootstrap (_a=140736833976688) at 
> /home/dev/code/vpp-master/src/vlib/main.c:1472
> 
> #17 0x7587b9a0 in clib_calljmp () from 
> /home/dev/code/vpp-master/build-root/install-vpp_debug-native/vpp/lib/libvppinfra.so.20.01
> 
> #18 0x7fffd8fef940 in ?? ()
> 
> #19 0x7641298c in vlib_process_startup (vm=0x766b3d80 
> , p=0x267, f=0x0) at 
> /home/dev/code/vpp-master/src/vlib/main.c:1494
> 
> The reason is that:
> tls_ctx_t struct is create 

Re: [vpp-dev] Is there any way to set local ip with vcl when connect remote socket server ?

2019-10-17 Thread Florin Coras
Hi, 

Currently we only support setting local IPs for builtin applications, like 
the one you’ve built below. VCL and the message queue API currently do not 
support this.

I guess we could add a VPPCOM_ATTR_SET_LCL_ADDR set attribute option and pass 
that over the api. But before that, could you explain in a bit more detail your 
use case?

Thanks,
Florin

> On Oct 17, 2019, at 7:58 AM, jiangxiaom...@outlook.com wrote:
> 
> In my hoststack app, I wrote the following function to set the local address 
> when connecting to a remote socket server:
> clib_error_t *
> vnet_connect_uri_with_local_addr (vlib_main_t *vm, vnet_connect_args_t *a,
>                                   ip46_address_t *local_addr)
> {
>   session_endpoint_cfg_t sep = SESSION_ENDPOINT_CFG_NULL;
>   int rv;
> 
>   if ((rv = parse_uri (a->uri, &sep)))
>     return clib_error_return (0, "uri format error(%d)!", rv);
> 
>   ip46_address_copy (&sep.peer.ip, local_addr);
>   clib_memcpy (&a->sep_ext, &sep, sizeof (sep));
>   if ((rv = vnet_connect (a)))
>     return clib_error_return (0, "connect returned: %d", rv);
> 
>   return 0;
> }
> 
> and it works very well.
> But when I switch to VCL, I find no way to set the local connect IP.
> Does anyone know how to solve this problem?



Re: [vpp-dev] Basic l2 bridging does not work

2019-10-17 Thread Florin Coras
Hi Chuan, 

As Balaji said, probably it’s worth making sure the cable between the eth0 nics 
is fine and that eth0 on R230 works as expected. For instance, you could double 
check your setup by switching to linux drivers and trying a ping between the 
two boxes. 

Regarding “sh int”, it only shows the admin status of the interface. To see the 
link status, as Damjan explained, you have to use the “show hardware” cli. 

Hope this helps,
Florin

> On Oct 17, 2019, at 3:08 PM, Balaji Venkatraman via Lists.Fd.Io 
>  wrote:
> 
> Hi Chuan,
> I got the eth0 and eth1 mixed up. My bad.
> Are these fiber or copper links? You may want to check if the cable is ok. 
> Also, please make sure you have crossover cable(if RJ) between the servers.
>  
> Thanks!
> --
> Regards,
> Balaji. 
>  
>  
> From: Chuan Han mailto:chuan...@google.com>>
> Date: Thursday, October 17, 2019 at 2:41 PM
> To: "Balaji Venkatraman (balajiv)"  >
> Cc: "vpp-dev@lists.fd.io "  >, Arivudainambi Appachi gounder 
> mailto:aappa...@google.com>>, Jerry Cen 
> mailto:zhiw...@google.com>>
> Subject: Re: [vpp-dev] Basic l2 bridging does not work
>  
> Restarting ixia controller does not help. We ended up with both ixia ports 
> having '!'. 
>  
> We are not sure how ixia port plays a role here. eth0 interfaces are the 
> interfaces connecting two servers, not to ixia. 
>  
> On Thu, Oct 17, 2019 at 11:26 AM Balaji Venkatraman (balajiv) 
> mailto:bala...@cisco.com>> wrote:
> Hi Chuan,
>  
> Could you please try to reset the ixia controller connected to port 4?
> I have seen issues with ‘!’ on ixia. Given the carrier on eth0 is down, I 
> suspect the ixia port.
>  
> --
> Regards,
> Balaji. 
>  
>  
> From: Chuan Han mailto:chuan...@google.com>>
> Date: Thursday, October 17, 2019 at 11:09 AM
> To: "Balaji Venkatraman (balajiv)"  >
> Cc: "vpp-dev@lists.fd.io "  >, Arivudainambi Appachi gounder 
> mailto:aappa...@google.com>>, Jerry Cen 
> mailto:zhiw...@google.com>>
> Subject: Re: [vpp-dev] Basic l2 bridging does not work
>  
> Yes. It is unidirectional stream from port 1 to port 4. 
>  
> Another engineer, Nambi, configured ixia. What he showed me yesterday is that 
> xia port connected to port 1 is green and good. ixia port connected to port 4 
> is green but has a red exclamation mark, which means ping does not work. 
>  
> We also found eth0 on R230 is down shown by "show hardware eth0" command. 
> However "show int" shows it is up.
>  
>  
> vpp# sh hardware-interfaces eth0
>   Name               Idx   Link  Hardware
> eth0                 2     down  eth0
>   Link speed: unknown
>   Ethernet address b4:96:91:23:1e:d6
>   Intel 82599
> carrier down 
> flags: admin-up promisc pmd rx-ip4-cksum
> rx: queues 1 (max 128), desc 512 (min 32 max 4096 align 8)
> tx: queues 3 (max 64), desc 512 (min 32 max 4096 align 8)
> pci: device 8086:154d subsystem 8086:7b11 address :06:00.01 numa 0
> max rx packet len: 15872
> promiscuous: unicast on all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro 
>macsec-strip vlan-filter vlan-extend jumbo-frame 
> scatter 
>security keep-crc 
> rx offload active: ipv4-cksum 
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum 
>tcp-tso macsec-insert multi-segs security 
> tx offload active: none
> rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex 
> ipv6-tcp 
>ipv6-udp ipv6-ex ipv6 
> rss active:none
> tx burst function: (nil)
> rx burst function: ixgbe_recv_pkts_vec
> 
> rx frames ok   33278
> rx bytes ok  3960082
> extended stats:
>   rx good packets  33278
>   rx good bytes  3960082
>   rx q0packets 33278
>   rx q0bytes 3960082
>   rx size 65 to 127 packets33278
>   rx multicast packets 33278
>   rx total packets 33278
>   rx total bytes 3960082
> vpp# sh int
>   Name            Idx  State  MTU (L3/IP4/IP6/MPLS)  Counter     Count
> eth0              2    up     9000/0/0/0             rx packets  33279
>                                                      rx bytes    3960201
>                                                      drops       5
>  

Re: [vpp-dev] Basic l2 bridging does not work

2019-10-17 Thread Florin Coras
This looks like a DPDK issue, but I’ll let Damjan be the judge of that. 

To see if this is a config issue, could you simplify your startup config by:
- removing “workers 0” from the two nics and adding “num-rx-queues 2” to the 
nics or to the default stanza, if you’re running with 2 workers
- commenting out the cryptodev config 

If the two nics don’t come up, check if there’s any obvious dpdk error in “show 
log”.

Florin 

> On Oct 17, 2019, at 4:56 PM, Chuan Han via Lists.Fd.Io 
>  wrote:
> 
> I tried disabling autoneg on R740 side. It is not allowed too. If vpp cannot 
> allow two nics to be successfully added to the same vpp instance, it seems to 
> be a bug. Is it something which can be easily spotted in the code base? 
> 
> It is also not possible to enforce symmetricity on internet. The other party 
> can do anything as long as basic ping works. 
> 
> On Thu, Oct 17, 2019 at 3:55 PM Chuan Han  > wrote:
> If I only put one phy nic, i.e., eth0, to vpp, 'sh hardware' shows it is up. 
> If I put both eth0 and eth1 in vpp, eth0 is always down. It seems something 
> is wrong with the nic or vpp does not support this type of hardware? 
> 
> We tried enabling autoneg on R230. It is not allowed. To avoid asymmetric 
> settings, disabling autoneg on R740 will help? 
> 
> On Thu, Oct 17, 2019 at 3:46 PM Balaji Venkatraman (balajiv) 
> mailto:bala...@cisco.com>> wrote:
> It plays a role if it is asymmetric at both ends. You could enable it at both 
> ends and check. 
> 
>> On Oct 17, 2019, at 3:15 PM, Chuan Han > > wrote:
>> 
>> 
>> I rebooted the r230 machine and found the phy nic corresponding to eth has 
>> autoneg off. 
>> 
>> root@esdn-relay:~/gnxi/perf_testing/r230# ethtool enp6s0f1
>> Settings for enp6s0f1:
>> Supported ports: [ FIBRE ]
>> Supported link modes:   1baseT/Full 
>> Supported pause frame use: Symmetric
>> Supports auto-negotiation: No
>> Supported FEC modes: Not reported
>> Advertised link modes:  1baseT/Full 
>> Advertised pause frame use: Symmetric
>> Advertised auto-negotiation: No
>> Advertised FEC modes: Not reported
>> Speed: 1Mb/s
>> Duplex: Full
>> Port: Direct Attach Copper
>> PHYAD: 0
>> Transceiver: internal
>> Auto-negotiation: off
>> Supports Wake-on: d
>> Wake-on: d
>> Current message level: 0x0007 (7)
>>drv probe link
>> Link detected: yes
>> root@esdn-relay:~/gnxi/perf_testing/r230# 
>> 
>> On r740, autoneg is on. It is copper. 
>> 
>> root@esdn-lab:~/gnxi/perf_testing/r740/vpp# ethtool eno3
>> Settings for eno3:
>> Supported ports: [ TP ]
>> Supported link modes:   100baseT/Full 
>> 1000baseT/Full 
>> 1baseT/Full 
>> Supported pause frame use: Symmetric
>> Supports auto-negotiation: Yes
>> Supported FEC modes: Not reported
>> Advertised link modes:  100baseT/Full 
>> 1000baseT/Full 
>> 1baseT/Full 
>> Advertised pause frame use: Symmetric
>> Advertised auto-negotiation: Yes
>> Advertised FEC modes: Not reported
>> Speed: 1Mb/s
>> Duplex: Full
>> Port: Twisted Pair
>> PHYAD: 0
>> Transceiver: internal
>> Auto-negotiation: on
>> MDI-X: Unknown
>> Supports Wake-on: umbg
>> Wake-on: g
>> Current message level: 0x0007 (7)
>>drv probe link
>> Link detected: yes
>> root@esdn-lab:~/gnxi/perf_testing/r740/vpp# 
>> 
>> not clear if this plays a role or not. 
>> 
>> On Thu, Oct 17, 2019 at 2:41 PM Chuan Han via Lists.Fd.Io 
>>  > > wrote:
>> Restarting ixia controller does not help. We ended up with both ixia ports 
>> having '!'. 
>> 
>> We are not sure how ixia port plays a role here. eth0 interfaces are the 
>> interfaces connecting two servers, not to ixia. 
>> 
>> On Thu, Oct 17, 2019 at 11:26 AM Balaji Venkatraman (balajiv) 
>> mailto:bala...@cisco.com>> wrote:
>> Hi Chuan,
>> 
>>  
>> 
>> Could you please try to reset the ixia controller connected to port 4?
>> 
>> I have seen issues with ‘!’ on ixia. Given the carrier on eth0 is down, I 
>> suspect the ixia port.
>> 
>>  
>> 
>> --
>> 
>> Regards,
>> 
>> Balaji. 
>> 
>>  
>> 
>>  
>> 
>> From: Chuan Han mailto:chuan...@google.com>>
>> Date: Thursday, October 17, 2019 at 11:09 AM
>> To: "Balaji Venkatraman (balajiv)" > >
>> Cc: "vpp-dev@lists.fd.io " > >, Arivudainambi Appachi gounder 
>> mailto:aappa...@google.com>>, Jerry Cen 
>> mailto:zhiw...@google.com>>
>> Subject: Re: [vpp-dev] Basic l2 bridging does not work

Re: [vpp-dev] vppcom_session_connect blocking or non blocking

2019-10-18 Thread Florin Coras
Hi Max, 

I had this patch [1], which indirectly fixes the issue, lying around. So, is an 
EPOLLOUT | EPOLLHUP event enough to signal a failed async connect for you?

Thanks,
Florin

[1] https://gerrit.fd.io/r/c/vpp/+/22274 

> On Oct 18, 2019, at 2:07 AM, Max A.  wrote:
> 
> Hi Florin,
>  
> vppcom_session_connect in non-blocking mode works well only if the connection 
> can be successfully established. If the connection cannot be established, 
> then no event is raised about this. How can this be fixed?
>  
> Thanks.
> 
>  
> On Thursday, September 5, 2019, at 1:04 +03:00, Florin Coras wrote:
>  
> Hi Max, 
>  
> Here’s the patch that allows non-blocking connects [1]. 
>  
> Florin
>  
> [1] https://gerrit.fd.io/r/c/vpp/+/21610 
> <https://gerrit.fd.io/r/c/vpp/+/21610>
>  
>> On Aug 15, 2019, at 7:41 AM, Florin Coras via Lists.Fd.Io 
>> > >
>>  wrote:
>>  
>> Hi Max,
>> 
>> Not at this time. It should be possible with a few changes for nonblocking 
>> sessions. I’ll add it to my list, in case nobody else beats me to it.
>> 
>> Florin
>>  
>>> 
>>> On Aug 15, 2019, at 2:47 AM, Max A. via Lists.Fd.Io 
>>> >> > 
>>> wrote:
>>> 
>>> Hello,
>>> 
>>> Can vppcom_session_connect() function run in non-blocking mode? I see that 
>>> there is a wait for the connection result in the 
>>> vppcom_wait_for_session_state_change function.  Is it possible to get the 
>>> result of the connection using vppcom_epoll_wait?
>>> 
>>> Thanks.
>> 
>  
>  
> --
> Max A.



Re: [vpp-dev] VPP tls doesn't support adding a custom tls engine, and vcl not support choosing tls engine

2019-10-20 Thread Florin Coras
Hi, 

Here’s a draft patch that allows the addition of new application crypto engine 
types [1] and another that allows the configuration of custom TLS engines for 
vcl apps [2]. The latter might change as we improve vcl integration with TLS, 
but it should do for now. 

I pushed the patches without too much testing, so do let me know if they don’t 
work as expected.

Thanks,
Florin

[1] https://gerrit.fd.io/r/c/vpp/+/22863 
[2] https://gerrit.fd.io/r/c/vpp/+/22865


> On Oct 20, 2019, at 6:39 PM, jiangxiaom...@outlook.com wrote:
> 
> I found there's no way to add a custom TLS engine, and tls_register_engine
> only supports a maximum of 4 TLS engines.
> So I think we need to add at least one enum tag, like CRYPTO_ENGINE_CUSTOM,
> to enum crypto_engine_type_t so VPP users can add their own custom TLS engines.
> VCL should also support choosing a TLS engine (it is currently hard-coded to
> CRYPTO_ENGINE_OPENSSL).



Re: [vpp-dev] vppcom_session_connect blocking or non blocking

2019-10-21 Thread Florin Coras
Hi Max, 

We don’t currently propagate and therefore map vpp errors onto vcl connection 
errors, so you can’t retrieve the error number through vcl.

Florin

> On Oct 21, 2019, at 12:17 PM, Max A.  wrote:
> 
> Hi Florin,
>  
> These events are enough. How can I get the error number? I see that 
> VPPCOM_ATTR_GET_ERROR is not working yet.
>  
> Thank you.
>  
> On Friday, October 18, 2019, at 18:04 +03:00, Florin Coras wrote:
>  
> Hi Max, 
>  
> I had this patch [1] that indirectly fixes the issue lying around. So, is an 
> EPOLLOUT | EPOLLHUP event enough to signal a failed async connect for you?
>  
> Thanks,
> Florin
>  
> [1] https://gerrit.fd.io/r/c/vpp/+/22274 
> <https://gerrit.fd.io/r/c/vpp/+/22274> 
>  
>> On Oct 18, 2019, at 2:07 AM, Max A. > > wrote:
>>  
>> Hi Florin,
>>  
>> vppcom_session_connect in non-blocking mode works well only if the 
>> connection can be successfully established. If the connection cannot be 
>> established, then no event is raised about this. How can this be fixed?
>>  
>> Thanks.
>> 
>>  
>> On Thursday, September 5, 2019, at 1:04 +03:00, Florin Coras wrote:
>>  
>> Hi Max, 
>>  
>> Here’s the patch that allows non-blocking connects [1]. 
>>  
>> Florin
>>  
>> [1] https://gerrit.fd.io/r/c/vpp/+/21610 
>> <https://gerrit.fd.io/r/c/vpp/+/21610>
>>  
>>> On Aug 15, 2019, at 7:41 AM, Florin Coras via Lists.Fd.Io 
>>> > wrote:
>>>  
>>> Hi Max,
>>> 
>>> Not at this time. It should be possible with a few changes for nonblocking 
>>> sessions. I’ll add it to my list, in case nobody else beats me to it.
>>> 
>>> Florin
>>>  
>>>> 
>>>> On Aug 15, 2019, at 2:47 AM, Max A. via Lists.Fd.Io 
>>>> > wrote:
>>>> 
>>>> Hello,
>>>> 
>>>> Can vppcom_session_connect() function run in non-blocking mode? I see that 
>>>> there is a wait for the connection result in the 
>>>> vppcom_wait_for_session_state_change function.  Is it possible to get the 
>>>> result of the connection using vppcom_epoll_wait?
>>>> 
>>>> Thanks.
>>  
>>  
>> --
>> Max A.
>  
>  
> --
> Max A.



Re: [vpp-dev] GNU indent pain

2019-10-22 Thread Florin Coras
+1

I think we should do this independent of the “indent wars”

Thanks, 
Florin

> On Oct 22, 2019, at 10:09 AM, Damjan Marion via Lists.Fd.Io 
>  wrote:
> 
> 
> Folks,
> 
> Now we have the 2nd release of Ubuntu out that comes with a new GNU indent 
> that introduces a lot of bug fixes.
> Unfortunately, our repo is full of the products of bugs that are now fixed, so 
> it results in a big mess when the new indent is used.
> 
> Most of the time, it is about space after __attribute__ where one version 
> thinks it should be, and another version thinks it should not.
> 
> Up to the point where we come up with some solution, I would like to propose 
> following change as workaround:
> 
> https://gerrit.fd.io/r/c/vpp/+/22937 
> 
> Thoughts?
> 
> — 
> Damjan



Re: [vpp-dev] Can "use_mq_eventfd" solve epoll_wait high cpu usage problem?

2019-10-31 Thread Florin Coras
Hi, 

use_mq_eventfd will help with vcl but as you’ve noticed it won’t help for ldp 
because there we need to poll both vcl and linux fds. Because mutex-condvar 
notifications can’t be epolled we have to constantly switch between linux and 
vcl epolled fds. One option going forward would be to change ldp to detect if 
vcl is using mutex-condvars or eventfds and in case of the latter poll linux 
fds and the mq’s eventfd in a linux epoll. 

Regards,
Florin

> On Oct 31, 2019, at 5:54 AM, wanghanlin  wrote:
> 
> Hi all,
> I found that an app using VCL "epoll_wait" still occupies 70% CPU with the
> "use_mq_eventfd" configuration, even with very little traffic.
> I then investigated the code in ldp_epoll_pwait; vls_epoll_wait is called with
> a timeout equal to 0.
> I have two questions:
> 1. What problems does "use_mq_eventfd" solve?
> 2. Is there any other way to decrease CPU usage?
> Thanks!
> Thanks!
> 
> code in ldp_epoll_pwait:
> do
>   {
>     if (!ldpw->epoll_wait_vcl)
>       {
>         rv = vls_epoll_wait (ep_vlsh, events, maxevents, 0);
>         if (rv > 0)
>           {
>             ldpw->epoll_wait_vcl = 1;
>             goto done;
>           }
>         else if (rv < 0)
>           {
>             errno = -rv;
>             rv = -1;
>             goto done;
>           }
>       }
>     else
>       ldpw->epoll_wait_vcl = 0;
> 
>     if (libc_epfd > 0)
>       {
>         rv = libc_epoll_pwait (libc_epfd, events, maxevents, 0, sigmask);
>         if (rv != 0)
>           goto done;
>       }
>   }
> while ((timeout == -1) || (clib_time_now (&ldpw->clib_time) < max_time));
>   
> wanghanlin
> wanghan...@corp.netease.com



Re: [vpp-dev] Can "use mq eventfd" solve epoll wait high cpu usage problem?

2019-10-31 Thread Florin Coras
Hi Hanlin, 

If a worker’s mq uses eventfds for notifications, we could nest it in 
libc_epfd, i.e.,the epoll fd we create for the linux fds. So, if an app's 
worker calls epoll_wait, in ldp we can epoll_wait on libc_epfd and if we get an 
event on the mq’s eventfd, we can call vls_epoll_wait with a 0 timeout to drain 
the events from vcl. 

Having said that, keep in mind that we typically recommend that people use vcl 
because ldp, through vls, enforces a rather strict locking policy. That is 
needed in order to avoid invalidating vcl’s assumption that sessions are owned 
by only one vcl worker. Moreover, we’ve tested ldp only against a limited set 
of applications. 

Regards, 
Florin

> On Oct 31, 2019, at 7:58 PM, wanghanlin  wrote:
> 
> Do you mean that if we use eventfds only, I needn't set the timeout to 0 in
> ldp_epoll_pwait?
> If so, how do we process unhandled_evts_vector in vppcom_epoll_wait in a
> timely manner? What I mean is, another thread may add an event to
> unhandled_evts_vector during epoll_wait, or unhandled_evts_vector may not be
> processed completely because maxevents is reached.
> 
> Regards,
> Hanlin
> 
>   
> wanghanlin
> wanghan...@corp.netease.com
> On 10/31/2019 23:34,Florin Coras 
> <mailto:fcoras.li...@gmail.com> wrote: 
> Hi, 
> 
> use_mq_eventfd will help with vcl but as you’ve noticed it won’t help for ldp 
> because there we need to poll both vcl and linux fds. Because mutex-condvar 
> notifications can’t be epolled we have to constantly switch between linux and 
> vcl epolled fds. One option going forward would be to change ldp to detect if 
> vcl is using mutex-condvars or eventfds and in case of the latter poll linux 
> fds and the mq’s eventfd in a linux epoll. 
> 
> Regards,
> Florin
> 
>> On Oct 31, 2019, at 5:54 AM, wanghanlin > <mailto:wanghan...@corp.netease.com>> wrote:
>> 
>> hi ALL,
>> I found app using VCL "epoll_wait" still occupy 70% cpu with 
>> "use_mq_eventfd" configuration even if very little traffic.
>> Then I investigate code in ldp_epoll_pwait, vls_epoll_wait is called with 
>> timeout equal to 0.
>> Then I have two questions:
>> 1. What problems can "use_mq_eventfd" solve?
>> 2.Any other way to decrease cpu usage?
>> Thanks!
>> 
>> code in  ldp_epoll_pwait:
>> do
>> {
>>   if (!ldpw->epoll_wait_vcl)
>>  {
>>rv = vls_epoll_wait (ep_vlsh, events, maxevents, 0);
>>if (rv > 0)
>>  {
>>ldpw->epoll_wait_vcl = 1;
>>goto done;
>>  }
>>else if (rv < 0)
>>  {
>>errno = -rv;
>>rv = -1;
>>goto done;
>>  }
>>  }
>>   else
>>  ldpw->epoll_wait_vcl = 0;
>> 
>>   if (libc_epfd > 0)
>>  {
>>rv = libc_epoll_pwait (libc_epfd, events, maxevents, 0, sigmask);
>>if (rv != 0)
>>  goto done;
>>  }
>> }
>>   while ((timeout == -1) || (clib_time_now (&ldpw->clib_time) < max_time));
>>  
>> wanghanlin
>> wanghan...@corp.netease.com

Re: [vpp-dev] Can "use mq eventfd" solve epoll wait high cpu usage problem?

2019-10-31 Thread Florin Coras
Hi Hanlin, 

Stephen and Ping have made a lot of progress with Envoy and VCL, but I’ll let 
them comment on that. 

Regards, 
Florin

> On Oct 31, 2019, at 9:44 PM, wanghanlin  wrote:
> 
> OK, I got it. Thanks a lot.
> By the way, can VCL be adapted to Envoy? Is there any progress on this?
> 
> Regards,
> Hanlin
>   
> wanghanlin
> wanghan...@corp.netease.com
> On 11/1/2019 12:07,Florin Coras 
> <mailto:fcoras.li...@gmail.com> wrote: 
> Hi Hanlin, 
> 
> If a worker’s mq uses eventfds for notifications, we could nest it in 
> libc_epfd, i.e., the epoll fd we create for the linux fds. So, if an app's 
> worker calls epoll_wait, in ldp we can epoll_wait on libc_epfd and, if we get 
> an event on the mq’s eventfd, we can call vls_epoll_wait with a 0 timeout to 
> drain the events from vcl. 
> 
> Having said that, keep in mind that we typically recommend that people use 
> vcl because ldp, through vls, enforces a rather strict locking policy. That 
> is needed in order to avoid invalidating vcl’s assumption that sessions are 
> owned by only one vcl worker. Moreover, we’ve tested ldp only against a 
> limited set of applications. 
> 
> Regards, 
> Florin
> 
>> On Oct 31, 2019, at 7:58 PM, wanghanlin > <mailto:wanghan...@corp.netease.com>> wrote:
>> 
>> Do you mean that if I use eventfds only, then I needn't set the timeout to 0 
>> in ldp_epoll_pwait?
>> If so, how would unhandled_evts_vector be processed in vppcom_epoll_wait in 
>> a timely way? What I mean is, another thread may add events to 
>> unhandled_evts_vector during epoll_wait, or unhandled_evts_vector may not be 
>> processed completely because maxevents was reached.
>> 
>> Regards,
>> Hanlin
>> 
>>  
>> On 10/31/2019 23:34,Florin Coras 
>> <mailto:fcoras.li...@gmail.com> wrote: 
>> Hi, 
>> 
>> use_mq_eventfd will help with vcl but, as you’ve noticed, it won’t help for 
>> ldp because there we need to poll both vcl and linux fds. Because 
>> mutex-condvar notifications can’t be epolled, we have to constantly switch 
>> between linux and vcl epolled fds. One option going forward would be to 
>> change ldp to detect whether vcl is using mutex-condvars or eventfds and, in 
>> case of the latter, poll the linux fds and the mq’s eventfd in a linux epoll. 
>> 
>> Regards,
>> Florin
>> 
>>> On Oct 31, 2019, at 5:54 AM, wanghanlin >> <mailto:wanghan...@corp.netease.com>> wrote:
>>> 
>>> hi ALL,
>>> I found that an app using VCL "epoll_wait" still occupies ~70% CPU with the 
>>> "use_mq_eventfd" configuration, even with very little traffic.
>>> Investigating the code in ldp_epoll_pwait, vls_epoll_wait is called with a 
>>> timeout equal to 0.
>>> I have two questions:
>>> 1. What problems does "use_mq_eventfd" solve?
>>> 2. Is there any other way to decrease CPU usage?
>>> Thanks!
>>> 
>>> code in  ldp_epoll_pwait:
>>> do
>>> {
>>>   if (!ldpw->epoll_wait_vcl)
>>> {
>>>   rv = vls_epoll_wait (ep_vlsh, events, maxevents, 0);
>>>   if (rv > 0)
>>> {
>>>   ldpw->epoll_wait_vcl = 1;
>>>   goto done;
>>> }
>>>   else if (rv < 0)
>>> {
>>>   errno = -rv;
>>>   rv = -1;
>>>   goto done;
>>> }
>>> }
>>>   else
>>> ldpw->epoll_wait_vcl = 0;
>>> 
>>>   if (libc_epfd > 0)
>>> {
>>>   rv = libc_epoll_pwait (libc_epfd, events, maxevents, 0, sigmask);
>>>   if (rv != 0)
>>> goto done;
>>> }
>>> }
>>>   while ((timeout == -1) || (clib_time_now (&ldpw->clib_time) < max_time));

Re: [vpp-dev] VPP crash in TCP FIFO allocation

2019-11-04 Thread Florin Coras
Hi Akshaya, 

Glad you were able to solve the issue. We’re slowly moving away from shm fifo 
segments, i.e., /dev/shm segments, in favor of memfd segments. 

Regards, 
Florin

> On Nov 4, 2019, at 3:11 AM, Akshaya Nadahalli  
> wrote:
> 
> Hi Florin,
> 
> This crash was due to setting hard limit on /dev/shm partition to 100 MB. 
> After increasing that I am able to scale more connections.
> 
> For fifo allocation, I see that we can use either shm or memfd. Is there any 
> recommendation/preference on which one to use? Does memfd also internally use 
> /dev/shm for memory allocation?
> 
> Regards,
> Akshaya N
> 
> On Fri, Oct 25, 2019 at 2:03 AM Akshaya Nadahalli 
> mailto:akshaya.nadaha...@gmail.com>> wrote:
> Sure, I will try to move to master and try it out. Will update once I am able 
> to test with master.
> 
> Regards,
> Akshaya N
> 
> On Thu, Oct 24, 2019 at 8:03 PM Florin Coras  <mailto:fcoras.li...@gmail.com>> wrote:
> Hi Akshaya, 
> 
> Can you also try with master?
> 
> Thanks,
> Florin
> 
> > On Oct 24, 2019, at 4:35 AM, Akshaya Nadahalli  > <mailto:akshaya.nadaha...@gmail.com>> wrote:
> > 
> > Hi,
> > 
> > While testing VPP hoststack with large number of TCP connections, I see VPP 
> > crash in fifo allocation. Always crash is seen between 11k to 12k TCP 
> > connections. Changing vcl config - 
> > segment-size/add-segment-size/rx-fifo-size/tx-fifo-size doesn't change the 
> > behaviour - crash is always seen after establishing around 11k to 12k tcp 
> > sessions.
> > 
> > Test client is using vppcom APIs. Callstack for crash is attached. I am 
> > using 19.08 code with non-blocking patch cherry-picked. All sessions are 
> > non-blocking. Anyone aware of any issues/defect fixes related to this?
> > 
> > Regards,
> > Akshaya N
> > -=-=-=-=-=-=-=-=-=-=-=-
> > Links: You receive all messages sent to this group.
> > 
> > View/Reply Online (#14334): https://lists.fd.io/g/vpp-dev/message/14334 
> > <https://lists.fd.io/g/vpp-dev/message/14334>
> > Mute This Topic: https://lists.fd.io/mt/36994945/675152 
> > <https://lists.fd.io/mt/36994945/675152>
> > Group Owner: vpp-dev+ow...@lists.fd.io <mailto:vpp-dev%2bow...@lists.fd.io>
> > Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub 
> > <https://lists.fd.io/g/vpp-dev/unsub>  [fcoras.li...@gmail.com 
> > <mailto:fcoras.li...@gmail.com>]
> > -=-=-=-=-=-=-=-=-=-=-=-
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14488): https://lists.fd.io/g/vpp-dev/message/14488
Mute This Topic: https://lists.fd.io/mt/36994945/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] [VCL] hoststack app crash with invalid memfd segment address

2019-11-15 Thread Florin Coras
Hi Hanlin,

Just to make sure, are you running master or some older VPP?

Regarding the issue you could be hitting lower, here’s [1] a patch that I have 
not yet pushed for merging because it leads to api changes for applications 
that directly use the session layer application interface instead of vcl. I 
haven’t tested it extensively, but the goal with it is to signal segment 
allocation/deallocation over the mq instead of the binary api.

Finally, I’ve never tested LDP with Envoy, so not sure if that works properly. 
There’s ongoing work to integrate Envoy with VCL, so you may want to get in 
touch with the authors. 

Regards,
Florin

[1] https://gerrit.fd.io/r/c/vpp/+/21497

> On Nov 15, 2019, at 2:26 AM, wanghanlin  wrote:
> 
> hi ALL,
> I accidentally got the following crash stack when using VCL with hoststack 
> and memfd. However, the supposedly invalid rx_fifo address (0x2f42e2480) is 
> valid in the VPP process and can also be found in /proc/<pid>/maps. That is, 
> the shared memfd segment memory is not consistent between the hoststack app 
> and VPP.
> Generally, VPP allocates/deallocates the memfd segment and then notifies the 
> hoststack app to attach/detach. But what if, just after VPP deallocates a 
> memfd segment and notifies the hoststack app, VPP allocates the same memfd 
> segment again because a session connected? Because the hoststack app 
> processes the dealloc message and the connected message on different threads, 
> maybe rx_thread_fn just detaches the memfd segment without attaching the same 
> memfd segment again, and then unfortunately the worker thread gets the 
> connected message. 
> 
> These are just my guesses; maybe I misunderstand.
> 
> (gdb) bt
> #0  0x7f7cde21ffbf in raise () from /lib/x86_64-linux-gnu/libpthread.so.0
> #1  0x01190a64 in Envoy::SignalAction::sigHandler (sig=11, 
> info=, context=) at 
> source/common/signal/signal_action.cc:73 
> #2  
> #3  0x7f7cddc2e85e in vcl_session_connected_handler (wrk=0x7f7ccd4bad00, 
> mp=0x224052f4a) at /home/wanghanlin/vpp-new/src/vcl/vppcom.c:471
> #4  0x7f7cddc37fec in vcl_epoll_wait_handle_mq_event (wrk=0x7f7ccd4bad00, 
> e=0x224052f48, events=0x395000c, num_ev=0x7f7cca49e5e8)
> at /home/wanghanlin/vpp-new/src/vcl/vppcom.c:2658
> #5  0x7f7cddc3860d in vcl_epoll_wait_handle_mq (wrk=0x7f7ccd4bad00, 
> mq=0x224042480, events=0x395000c, maxevents=63, wait_for_time=0, 
> num_ev=0x7f7cca49e5e8)
> at /home/wanghanlin/vpp-new/src/vcl/vppcom.c:2762
> #6  0x7f7cddc38c74 in vppcom_epoll_wait_eventfd (wrk=0x7f7ccd4bad00, 
> events=0x395000c, maxevents=63, n_evts=0, wait_for_time=0)
> at /home/wanghanlin/vpp-new/src/vcl/vppcom.c:2823
> #7  0x7f7cddc393a0 in vppcom_epoll_wait (vep_handle=33554435, 
> events=0x395000c, maxevents=63, wait_for_time=0) at 
> /home/wanghanlin/vpp-new/src/vcl/vppcom.c:2880
> #8  0x7f7cddc5d659 in vls_epoll_wait (ep_vlsh=3, events=0x395000c, 
> maxevents=63, wait_for_time=0) at 
> /home/wanghanlin/vpp-new/src/vcl/vcl_locked.c:895
> #9  0x7f7cdeb4c252 in ldp_epoll_pwait (epfd=67, events=0x395, 
> maxevents=64, timeout=32, sigmask=0x0) at 
> /home/wanghanlin/vpp-new/src/vcl/ldp.c:2334
> #10 0x7f7cdeb4c334 in epoll_wait (epfd=67, events=0x395, 
> maxevents=64, timeout=32) at /home/wanghanlin/vpp-new/src/vcl/ldp.c:2389
> #11 0x00fc9458 in epoll_dispatch ()
> #12 0x00fc363c in event_base_loop ()
> #13 0x00c09b1c in Envoy::Server::WorkerImpl::threadRoutine 
> (this=0x357d8c0, guard_dog=...) at source/server/worker_impl.cc:104 
> 
> #14 0x01193485 in std::function::operator()() const 
> (this=0x7f7ccd4b8544)
> at 
> /usr/lib/gcc/x86_64-linux-gnu/7.4.0/../../../../include/c++/7.4.0/bits/std_function.h:706
> #15 Envoy::Thread::ThreadImplPosix::ThreadImplPosix(std::function ()>)::$_0::operator()(void*) const (this=, arg=0x2f42e2480)
> at source/common/common/posix/thread_impl.cc:33 
> 
> #16 Envoy::Thread::ThreadImplPosix::ThreadImplPosix(std::function ()>)::$_0::__invoke(void*) (arg=0x2f42e2480) at 
> source/common/common/posix/thread_impl.cc:32 
> #17 0x7f7cde2164a4 in start_thread () from 
> /lib/x86_64-linux-gnu/libpthread.so.0
> #18 0x7f7cddf58d0f in clone () from /lib/x86_64-linux-gnu/libc.so.6
> (gdb) f 3
> #3  0x7f7cddc2e85e in vcl_session_connected_handler (wrk=0x7f7ccd4bad00, 
> mp=0x224052f4a) at /home/wanghanlin/vpp-new/src/vcl/vppcom.c:471
> 471   rx_fifo->client_session_index = session_index;
> (gdb) p rx_fifo
> $1 = (svm_fifo_t *) 0x2f42e2480
> (gdb) p *rx_fifo
> Cannot access memory at address 0x2f42e2480
> (gdb)
> 
> 
> Regards,
> Hanlin
>   
> 

Re: [vpp-dev] UDP echo server-session-queue node TX event issue

2019-11-19 Thread Florin Coras
Hi Shiva, 

The echo client/server code was mainly built for tcp testing and was never made 
to properly work with udp (it does not support session worker migration). It 
did work with udpc (connected udp), but apparently that’s not currently 
selectable due to a transport protocol unformat bug fixed here [1]. 

Apart from that, some other observations regarding the echo clients test app:
- it expects data to be delivered reliably by the transport, so if you ever try 
udpc in full-duplex mode (as you did lower with udp) and packets are dropped, 
the test will fail. To use half-duplex testing, add “no-echo” to the server cli 
and “no-return” to client cli. 
- if you aim to measure throughput, be sure to configure fifo-size to a larger 
value than the default of 64k. Note though that udp is currently not paced, so 
it can easily overwhelm the tx nic, thereby leading to tx-errors. 

Florin

[1] https://gerrit.fd.io/r/c/vpp/+/23551

> On Nov 19, 2019, at 3:32 AM, Shiva Shankar  
> wrote:
> 
> Hi All,
> I am validating builtin echo sever client node functionality with UDP 
> protocol by running 2 VPP instances on 2 different  hosts. 
> 
> On VPP1, running the echo server with the test command "echo server uri 
> udp://1.2.3.4/"
> On VPP2, running the echo client with the test command "test echo client 
> nclients 1 gbytes 10 test-timeout 10 uri udp://1.2.3.4/"
> 
> On the server-side, packets are received without any issue. However, while 
> echoing back from the server app RX callback, VPP is not sending packets out.
> Looks like we are running out of RX buffers because of TX events not 
> processed by session node. 
> 
> Error case logs:
> 1: echo_server_rx_callback:287: short trout! written 0 read 82
> 1: echo_server_rx_callback:287: short trout! written 0 read 82
> 1: echo_server_rx_callback:287: short trout! written 0 read 82
> 1: echo_server_rx_callback:287: short trout! written 0 read 82
> 1: echo_server_rx_callback:287: short trout! written 0 read 82
> 1: echo_server_rx_callback:287: short trout! written 0 read 82
> 
> If I "manually place" session-queue node on the main thread (thread 0) things 
> are working normally. Also, This issue is not seen if the protocol is TCP.
> 
> Am I missing something obvious?
> 
> -Shiva

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14629): https://lists.fd.io/g/vpp-dev/message/14629
Mute This Topic: https://lists.fd.io/mt/60553538/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Change to Gerrit

2019-11-20 Thread Florin Coras
+1 

Florin

> On Nov 20, 2019, at 10:32 AM, Paul Vinciguerra  
> wrote:
> 
> How would the group feel about implementing something like [0], so that 
> changes to the commit message don't trigger rebuilds?
> 
> To enforce the commit message structure, we could skip the jobs and set 
> verify label after the codestyle checks if no files were changed.
> Maybe others don't care, but I don't like wasting cpu cycles/developer's 
> time, and I weigh that before clarifying a commit message.
> 
> [0] 
> https://gerrit-review.googlesource.com/Documentation/config-labels.html#label_copyAllScoresIfNoCodeChange
>  
> 
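For reference, the option in [0] is set per-label in a project's refs/meta/config project.config. A hedged sketch (the exact stanza and label function depend on the Gerrit version and the project's existing label definitions):

```
[label "Verified"]
    function = MaxWithBlock
    copyAllScoresIfNoCodeChange = true
```

With this set, a push that changes only the commit message keeps the existing Verified score instead of resetting it and retriggering CI.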
>  -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#14643): https://lists.fd.io/g/vpp-dev/message/14643
> Mute This Topic: https://lists.fd.io/mt/60969892/675152
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14644): https://lists.fd.io/g/vpp-dev/message/14644
Mute This Topic: https://lists.fd.io/mt/60969892/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Requesting feedback on a change.

2019-11-20 Thread Florin Coras
Quick reply wrt point 2. This is just the way I separate things; others may 
have different opinions:

- I believe perror is called after some syscall fails. Alternatively, 
clib_unix_error/clib_unix_warning may be used for the same purpose.
- If by local loggers you mean the vlib log infra, that is not thread safe and 
is typically used from the main thread when features are initialized or on some 
cli/api event. If you meant loggers that wrap clib_warning or fformat, then 
those are probably meant for debugging from workers. It can be that the latter 
are not always on. 
- Direct calls to clib_warning from workers are typically left in to report 
“unexpected” conditions, although the practice is not encouraged. 

Florin

> On Nov 20, 2019, at 9:32 AM, Paul Vinciguerra  
> wrote:
> 
> I introduced a change [0] that passes the CI gate, but it does so because it 
> is not actually tested by the CI.  ;P
> 
> 1. For it to be tested properly, I need to reduce the number of cpu's exposed 
> to vpp, and that requires running with elevated privileges [CAP_SYS_NICE] 
> from what I can tell.  
> We discussed at the community meeting the sentiment that test shouldn't 
> require elevated privileges to run.  Can anyone provide some guidance on the 
> best way you would like to proceed?
> 
> 2. The test framework doesn't actually catch the root issue (and consequently 
> needs to wait to timeout...).  Once we drop support for python2 compatibility 
> post 20.01, I'd like to have the tests listen to the log streams and act 
> accordingly.  But as I think about this, it would be helpful to understand 
> the decisions behind a developer's use of perror vs clib_warning vs local 
> loggers.  Are there any gotchas I need to be aware of?  I think it would be 
> great addition to be able to test the way a non-developer would troubleshoot.
> 
> 3. My current change fixes the issue when running the test outside of the 
> custom test runner.  I would like to hear any objections before I start 
> moving the "magic" that goes on in that file into the test case.  I 
> *strongly* believe that the tests need to run consistently, whether run from 
> 'make test', or from the test shell, or any other standard tooling. 
> 
> Paul
> 
> [0] https://gerrit.fd.io/r/c/vpp/+/23555 
>  -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#14642): https://lists.fd.io/g/vpp-dev/message/14642
> Mute This Topic: https://lists.fd.io/mt/60950403/675152
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14647): https://lists.fd.io/g/vpp-dev/message/14647
Mute This Topic: https://lists.fd.io/mt/60950403/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] [VCL] hoststack app crash with invalid memfd segment address

2019-11-21 Thread Florin Coras
Hi Hanlin, 

As Jon pointed out, you may want to register with gerrit. 

Your comments with respect to points 1) and 2) are spot on. I’ve updated the 
patch to fix them. 

Regarding 3), if I understood your scenario correctly, it should not happen. 
The ssvm infra forces applications to map segments at fixed addresses. That is, 
for the scenario you’re describing lower, if B2 is processed first, 
ssvm_slave_init_memfd will map the segment at A2. Note how we first map the 
segment to read the shared header (sh) and then use sh->ssvm_va (which should 
be A2) to remap the segment at a fixed virtual address (va). 

Regards,
Florin

> On Nov 21, 2019, at 2:49 AM, wanghanlin  wrote:
> 
> Hi Florin,
> I have applied the patch and found some problems in my case.  I don't have 
> rights to post in gerrit, so I post here.
> 1) evt->event_type should be set to SESSION_CTRL_EVT_APP_DEL_SEGMENT rather 
> than SESSION_CTRL_EVT_APP_ADD_SEGMENT. File: src/vnet/session/session_api.c, 
> Line: 561, Function: mq_send_del_segment_cb
> 2) session_send_fds should be called at the end of function 
> mq_send_add_segment_cb, otherwise the lock on app_mq can't be freed here. 
> File: src/vnet/session/session_api.c, Line: 519, Function: mq_send_add_segment_cb 
> 3) When vcl_segment_attach is called in each worker thread, 
> ssvm_slave_init_memfd can be called in each worker thread, and it maps 
> addresses sequentially by mapping the segment once in advance.  That's OK 
> with only one thread, but may be wrong with multiple worker threads. Suppose 
> the following scenario: VPP allocates a segment at address A1 and notifies 
> worker thread B1, expecting B1 to also map the segment at address A1, and 
> simultaneously VPP allocates a segment at address A2 and notifies worker 
> thread B2, expecting B2 to map the segment at address A2. If B2 processes its 
> notify message first, ssvm_slave_init_memfd may map the segment at address 
> A1. Maybe VPP could add the segment map address to the notify message, and 
> the worker thread would then just map the segment at that address. 
> 
> Regards,
> Hanlin
>   
> On 11/19/2019 09:50,wanghanlin 
> <mailto:wanghan...@corp.netease.com> wrote: 
> Hi  Florin,
> VPP vsersion is v19.08.
> I'll apply this patch and check it. Thanks a lot!
> 
> Regards,
> Hanlin
>   
> On 11/16/2019 00:50,Florin Coras 
> <mailto:fcoras.li...@gmail.com> wrote: 
> Hi Hanlin,
> 
> Just to make sure, are you running master or some older VPP?
> 
> Regarding the issue you could be hitting lower, here’s [1] a patch that I 
> have not yet pushed for merging because it leads to api changes for 
> applications that directly use the session layer application interface 
> instead of vcl. I haven’t tested it extensively, but the goal with it is to 
> signal segment allocation/deallocation over the mq instead of the binary api.
> 
> Finally, I’ve never tested LDP with Envoy, so not sure if that works 
> properly. There’s ongoing work to integrate Envoy with VCL, so you may want 
> to get in touch with the authors. 
> 
> Regards,
> Florin
> 
> [1] https://gerrit.fd.io/r/c/vpp/+/21497 
> <https://gerrit.fd.io/r/c/vpp/+/21497>
> 
>> On Nov 15, 2019, at 2:26 AM, wanghanlin > <mailto:wanghan...@corp.netease.com>> wrote:
>> 
>> hi ALL,
>> I accidentally got following crash stack when I used VCL with hoststack and 
>> memfd. But corresponding invalid rx_fifo address (0x2f42e2480) is valid in 
>> VPP process and also can be found in /proc/map. That is, shared memfd 
>> segment memory is not consistent between hoststack app and VPP.
>> Generally, VPP allocate/dealloc the memfd segment and then notify hoststack 
>> app to attach/detach. But If just after VPP dealloc memfd segment and no

Re: [vpp-dev] ldp write assert error

2019-11-21 Thread Florin Coras
Patch looks good! I’ll merge once the ci-infra issues are solved.

Since you’re doing perf testing, I’ll note again, although I’m sure you already 
know it, that ldp performance is somewhat lower than that of vcl under certain 
types of load, because of vls locking.

Thanks, 
Florin

> On Nov 21, 2019, at 7:39 AM, jiangxiaom...@outlook.com wrote:
> 
> I used ab with ldp for an nginx benchmark test.
> Below is my start command:
> sudo env \
>  VCL_CONFIG=/tmp/vcl-test-3af8e.conf \
>  VCL_DEBUG=1 \
>  LDP_DEBUG=1 \
>  LDP_SID_BIT=9 \
>  gdb -x /tmp/gdb-3af8e
> I got a vppcom_session_write_inline assert failure. Apparently apache ab's 
> write call passes unexpected params. 
> But vppcom_session_write_inline only checks the param buf, not the param n.
> If I add a check on param n, the apache ab tool works well.
> I think it's necessary to add the param n check to the 
> vppcom_session_write_inline function.
> Here is my patch: https://gerrit.fd.io/r/c/vpp/+/23584 
> 
> vppcom_session_create:1142: vcl<15458:0>: created session 1
> fcntl:503: ldp<15458>: fd 513 vlsh 1, cmd 3
> fcntl:503: ldp<15458>: fd 513 vlsh 1, cmd 4
> connect:1255: ldp<15458>: fd 513: calling vls_connect(): vlsh 1 addr 
> 0x55768f18 len 16
> vppcom_session_connect:1608: vcl<15458:0>: session handle 1: connecting to 
> server IPv4 192.168.7.130 port 80 proto TCP
> epoll_ctl:2203: ldp<15458>: epfd 512 ep_vlsh 0, fd 513 vlsh 1, op 1
> connect:1255: ldp<15458>: fd 513: calling vls_connect(): vlsh 1 addr 
> 0x55768f18 len 16
> vppcom_session_connect:1592: vcl<15458:0>: session handle 1 [0x10001]: 
> session already connected to IPv4 192.168.7.130 port 80 proto TCP, state 0x1 
> (STATE_CONNECT)
> epoll_ctl:2203: ldp<15458>: epfd 512 ep_vlsh 0, fd 513 vlsh 1, op 2
> epoll_ctl:2203: ldp<15458>: epfd 512 ep_vlsh 0, fd 513 vlsh 1, op 1
> /home/dev/net-base/build/vpp/src/vcl/vppcom.c:1968 
> (vppcom_session_write_inline) assertion `n_write > 0' fails
>  
> Program received signal SIGABRT, Aborted.
> 0x75b98337 in __GI_raise (sig=sig@entry=6) at 
> ../nptl/sysdeps/unix/sysv/linux/raise.c:55
> 55   return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
> Missing separate debuginfos, use: debuginfo-install 
> keyutils-libs-1.5.8-3.el7.x86_64 libuuid-2.23.2-61.el7.x86_64 
> pcre-8.32-17.el7.x86_64
> (gdb) bt
> #0  0x75b98337 in __GI_raise (sig=sig@entry=6) at 
> ../nptl/sysdeps/unix/sysv/linux/raise.c:55
> #1  0x75b99a28 in __GI_abort () at abort.c:90
> #2  0x7506e8f3 in os_panic () at 
> /home/dev/net-base/build/vpp/src/vppinfra/unix-misc.c:176
> #3  0x74fd6c93 in debugger () at 
> /home/dev/net-base/build/vpp/src/vppinfra/error.c:84
> #4  0x74fd7062 in _clib_error (how_to_die=2, function_name=0x0, 
> line_number=0, fmt=0x757483f0 "%s:%d (%s) assertion `%s' fails") at 
> /home/dev/net-base/build/vpp/src/vppinfra/error.c:143
> #5  0x7571f70e in vppcom_session_write_inline (session_handle=1, 
> buf=0x55763240 <_request>, n=0, is_flush=1 '\001') at 
> /home/dev/net-base/build/vpp/src/vcl/vppcom.c:1968
> #6  0x7571f81a in vppcom_session_write_msg (session_handle=1, 
> buf=0x55763240 <_request>, n=0) at 
> /home/dev/net-base/build/vpp/src/vcl/vppcom.c:1986
> #7  0x757467f5 in vls_write_msg (vlsh=1, buf=0x55763240 
> <_request>, nbytes=0) at /home/dev/net-base/build/vpp/src/vcl/vcl_locked.c:505
> #8  0x77bd0f3f in write (fd=513, buf=0x55763240 <_request>, 
> nbytes=0) at /home/dev/net-base/build/vpp/src/vcl/ldp.c:424
> #9  0x7666e67b in apr_socket_send (sock=0x55787fc0, 
> buf=0x55763240 <_request> "GET / HTTP/1.0\r\nConnection: 
> Keep-Alive\r\nHost: 192.168.7.130\r\nUser-Agent: ApacheBench/2.3\r\nAccept: 
> */*\r\n\r\n", len=len@entry=0x7fffdc20) at network_io/unix/sendrecv.c:41
> #10 0xb4f4 in write_request (c=c@entry=0x557870e0) at ab.c:707
> #11 0x7f9e in write_request (c=0x557870e0) at ab.c:1793
> #12 test () at ab.c:1871
>  
> 
>  
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#14661): https://lists.fd.io/g/vpp-dev/message/14661
> Mute This Topic: https://lists.fd.io/mt/61080258/675152
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14663): https://lists.fd.io/g/vpp-dev/message/14663
Mute This Topic: https://lists.fd.io/mt/61080258/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] [VCL] hoststack app crash with invalid memfd segment address

2019-11-22 Thread Florin Coras
Hi Hanlin, 

Okay, that’s a different issue. The expectation is that each vcl worker has a 
different binary api transport into vpp. This assumption holds for applications 
with multiple process workers (like nginx) but is not completely satisfied for 
applications with thread workers. 

Namely, for each vcl worker we connect over the socket api to vpp and 
initialize the shared memory transport (so binary api messages are delivered 
over shared memory instead of the socket). However, as you’ve noted, the socket 
client is currently not multi-thread capable, consequently we have an overlap 
of socket client fds between the workers. The first segment is assigned 
properly but the subsequent ones will fail in this scenario. 

I wasn’t aware of this so we’ll have to either fix the socket binary api 
client, for multi-threaded apps, or change the session layer to use different 
fds for exchanging memfd fds. 

Regards, 
Florin

> On Nov 21, 2019, at 11:47 PM, wanghanlin  wrote:
> 
> Hi Florin,
> Regarding 3), I think the main problem may be in the function 
> vl_socket_client_recv_fd_msg called by vcl_session_app_add_segment_handler.  
> Multiple worker threads share the same scm->client_socket.fd, so B2 may 
> receive the segment memfd belonging to A1.
> 
>  
> Regards,
> Hanlin
> 
>   
> On 11/22/2019 01:44,Florin Coras 
> <mailto:fcoras.li...@gmail.com> wrote: 
> Hi Hanlin, 
> 
> As Jon pointed out, you may want to register with gerrit. 
> 
> Your comments with respect to points 1) and 2) are spot on. I’ve updated the 
> patch to fix them. 
> 
> Regarding 3), if I understood your scenario correctly, it should not happen. 
> The ssvm infra forces applications to map segments at fixed addresses. That 
> is, for the scenario you’re describing lower, if B2 is processed first, 
> ssvm_slave_init_memfd will map the segment at A2. Note how we first map the 
> segment to read the shared header (sh) and then use sh->ssvm_va (which should 
> be A2) to remap the segment at a fixed virtual address (va). 
> 
> Regards,
> Florin
> 
>> On Nov 21, 2019, at 2:49 AM, wanghanlin > <mailto:wanghan...@corp.netease.com>> wrote:
>> 
>> Hi Florin,
>> I have applied the patch, and found some problems in my case.  I have not 
>> right to post it in gerrit, so I post here.
>> 1)evt->event_type should be set  with SESSION_CTRL_EVT_APP_DEL_SEGMENT 
>> rather than SESSION_CTRL_EVT_APP_ADD_SEGMENT. File: 
>> src/vnet/session/session_api.c, Line: 561, Function:mq_send_del_segment_cb
>> 2)session_send_fds may been called in the end of function 
>> mq_send_add_segment_cb, otherwise lock of app_mq can't been free here.File: 
>> src/vnet/session/session_api.c, Line: 519, Function:mq_send_add_segment_cb 
>> 3) When vcl_segment_attach called in each worker thread, then 
>> ssvm_slave_init_memfd can been called in each worker thread and then 
>> ssvm_slave_init_memfd map address sequentially through map segment once in 
>> advance.  It's OK in only one thread, but maybe wrong in multiple worker 
>> threads. Suppose following scene: VPP allocate segment at address A1 and 
>> notify worker thread B1 to expect B1 also map segment at address A1,  and 
>> simultaneously VPP allocate segment at address A2 and notify worker thread 
>> B2 to expect B2 map segment at address A2. If B2 first process notify 
>> message, then ssvm_slave_init_memfd may map segment at address A1. Maybe VPP 
>> can add segment map address in notify message, and then worker thread just 
>> map segment at this address. 
>> 
>> Regards,
>> Hanlin
>>  
>> On 11/19/2019 09:50,wanghanlin 
>> <mailto:wanghan...@corp.netease.com&

Re: [vpp-dev] tcp_echo

2019-11-22 Thread Florin Coras
Hi Dom, 

You can’t run app debug binaries against vpp release binaries because dlmalloc 
will try to validate allocations between the two processes. That’s the crash you 
got below. Try running vpp debug binaries with tcp_echo debug binaries. 

Most of our testing is based on vcl (say with iperf[1]). Does that work okay 
for you?

Florin

[1] https://wiki.fd.io/view/VPP/HostStack/LDP/iperf

> On Nov 22, 2019, at 10:30 AM, dch...@akouto.com wrote:
> 
> Hello,
> 
> I'm trying to run a simple test using the tcp_echo application using 
> 19.08.1-release, but it just crashes pretty early on. I built a debug version 
> and tried to debug the crash that happens in mspace_malloc:
> 
> void* mspace_malloc(mspace msp, size_t bytes) {
>   mstate ms = (mstate)msp;
>   if (!ok_magic(ms)) { 
> USAGE_ERROR_ACTION(ms,ms);
> return 0;
>   }
> 
> The call to ok_magic returns 0 and the program exits. Stepping through the 
> code, the issue happens while still trying to connect to VPP at 
> initialization:
> 
> Thread #1 [tcp_echo] 5012 [core: 1] (Suspended : Signal : SIGABRT:Aborted)
> __GI_raise() at raise.c:55 0x76bb7337
> __GI_abort() at abort.c:90 0x76bb8a28
> os_panic() at unix-misc.c:176 0x7773fd77
> mspace_malloc() at dlmalloc.c:4,344 0x7775ca4b
> mspace_get_aligned() at dlmalloc.c:4,233 0x7775c7cc
> clib_mem_alloc_aligned_at_offset() at mem.h:139 0x77748c7b
> vec_resize_allocate_memory() at vec.c:59 0x77748ec6
> _vec_resize_inline() at vec.h:147 0x776b6e54
> do_percent() at format.c:341 0x776b7e09
> va_format() at format.c:404 0x776b828d
> format() at format.c:428 0x776b8400
> shm_name_from_svm_map_region_args() at svm.c:456 0x779a07d5
> svm_map_region() at svm.c:593 0x779a0fe9
> svm_region_find_or_create() at svm.c:931 0x779a2050
> vl_map_shmem() at memory_shared.c:605 0x77bbe4b2
> vl_client_api_map() at memory_client.c:371 0x77bc1225
> connect_to_vlib_internal() at memory_client.c:398 0x77bc129f
> vl_client_connect_to_vlib() at memory_client.c:429 0x77bc1396
> connect_to_vpp() at tcp_echo.c:436 0x40b291
> main() at tcp_echo.c:1,389 0x40f723
> 
> The internal test echo client/server work fine, so I think I've set things up 
> correctly but I'm not sure if I'm missing something for the external 
> client/server, or if they work for everyone else on 19.08.1-release.
> 
> All I do is bring up VPP, set an IP address on the interface, and then run 
> the tcp_echo application. Can someone please let me know if the tcp_echo 
> application works for them on 19.08.1-release, and if so are there any other 
> steps needed for it to run other than what I've mentioned?
> 
> Thank you in advance!
> 
> Dom
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#14672): https://lists.fd.io/g/vpp-dev/message/14672
> Mute This Topic: https://lists.fd.io/mt/61720535/675152
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14674): https://lists.fd.io/g/vpp-dev/message/14674
Mute This Topic: https://lists.fd.io/mt/61720535/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] ldp select/epoll with 0 timeout will make cpu 100% busy

2019-11-24 Thread Florin Coras
For optimal performance, we recommend that people integrate their applications 
with VCL. LDP functionally supports a number of applications, but for certain 
scenarios it is not optimal. 

In particular, it is not efficient when the application is heavily threaded, 
because VLS (the shim between VCL and LDP) does a lot of locking to ensure that 
file descriptors are not simultaneously accessed from multiple threads. 
Unlike applications that use multiple processes as workers (e.g., nginx), 
multi-threaded applications don’t get their threads registered as new workers by 
LDP + VLS, because VLS can’t know which of the app’s threads are actual workers. 

Regarding epoll, the current implementation is inefficient if the application 
needs to epoll wait blocking (timeout is -1) or with a positive timeout on both 
linux fds and vcl sessions. That’s because message queue notifications from vpp 
cannot be linux epolled if the message queues use mutex-condvars, so we end up 
alternatively polling linux fds and vcl sessions with zero timeout (busy wait). 
Going forward we may enforce eventfd mq notifications only with ldp, so this 
could be solved. 

Finally, if app polls the fds by setting timeout to 0, I’d expect the 
application’s poll loop (which includes epoll) to use most of the cpu. 

Florin

> On Nov 24, 2019, at 6:24 AM, jiangxiaom...@outlook.com wrote:
> 
> I tested nginx with ldp, and nginx works well, but the cpu usage is too high: 
> the ldp thread's cpu usage is 100%, and the cpu is mostly spent in the epoll 
> function.
> The apache jmeter test with ldp is even worse. Jmeter starts thousands of 
> threads when simulating multiple users for an http perf test,
> and each java thread calls select with a 0 timeout, so the ldp select function 
> makes the cpu 100% busy.
> 
> View/Reply Online (#14681): https://lists.fd.io/g/vpp-dev/message/14681

View/Reply Online (#14685): https://lists.fd.io/g/vpp-dev/message/14685


Re: [vpp-dev] Based on the VPP to Nginx testing #ngnix

2019-11-26 Thread Florin Coras
Hi, 

You’ll have to be a bit more specific with respect to the issues you’re 
hitting. Are you saying that nginx + wrk and vpp 19.08 result in really low 
throughput? Is this the first release of 19.08 or a later one like 19.08.3? 
Have you tried master?

For throughput testing we typically use [1]. Could you try it out and see if 
performance has changed? 

Ping (cc’ed) has done some extensive Nginx + wrk testing and a recent version 
of master seemed to perform really well.  

Regards,
Florin

[1] https://wiki.fd.io/view/VPP/HostStack/LDP/iperf

> On Nov 26, 2019, at 5:33 PM, lin.yan...@zte.com.cn wrote:
> 
> The nginx test was performed on vpp 19.04, and the transmission speed for 
> different files can reach several MB/sec using the wrk tool. The same 
> nginx test was then performed on vpp 19.08, again tested with wrk, and the 
> transmission speed for different files was basically 0.
> Question: what is causing this?
> View/Reply Online (#14710): https://lists.fd.io/g/vpp-dev/message/14710

View/Reply Online (#14712): https://lists.fd.io/g/vpp-dev/message/14712


Re: [vpp-dev] How to configure network between different namespaces using hoststack

2019-11-28 Thread Florin Coras
Hi Hanlin,

Are your app namespaces mapped to the same vrf? 

If no, then your interfaces can have IPs out of the same subnet because they’re 
part of different vrf tables. To allow for connectivity between the vrfs you’ll 
need to do some “vrf leaking”. We do exactly that in our vcl make test (see 
test/test_vcl.py). 
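The "vrf leaking" referenced above amounts to each table holding a route for the other table's prefix. In VPP CLI terms it looks roughly like this (interface names and addresses are made up; the make test builds the equivalent routes programmatically, so check test/test_vcl.py for the exact setup):

```
ip table add 1
ip table add 2
set interface ip table tap0 1
set interface ip table tap1 2
set interface ip address tap0 172.16.1.1/24
set interface ip address tap1 172.16.2.1/24
ip route add 172.16.2.0/24 table 1 via tap1
ip route add 172.16.1.0/24 table 2 via tap0
```

The last two routes "leak" each vrf's prefix into the other table by resolving via an interface bound to the other vrf; the exact syntax can vary between VPP releases.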

If yes, then can I ask for more details about your use case? Your apps are part 
of different app namespaces but they share the ip routing on purpose? Also, are 
you enforcing the source ip in the connect?

Regards,
Florin

> On Nov 28, 2019, at 5:50 AM, wanghanlin  wrote:
> 
> Hi All,
> We have two APPs in different namespaces, APP1 and APP2, and both use 
> hoststack based on the same VPP.  
> APP1 listens on 192.168.1.2:8080, and APP2 connects to APP1 from 192.168.1.3. 
> Then, we can add two interfaces on VPP, but how do we configure the ip 
> addresses? Suppose the netmask is 24; VPP cannot support two interfaces in 
> the same subnet. 
> 
> 
> Regards,
> Hanlin
> 
>   
> wanghanlin
> 
> wanghan...@corp.netease.com
>  
> 

View/Reply Online (#14729): https://lists.fd.io/g/vpp-dev/message/14729


Re: [vpp-dev] How to configure network between different namespaces using hoststack

2019-11-29 Thread Florin Coras
Hi Hanlin, 

Inline. 

> On Nov 29, 2019, at 7:12 AM, wanghanlin  wrote:
> 
> Hi Florin,
> Thanks for your reply.
> I just consider a very simple use case. Some apps in different containers 
> communicate through VPP, just in a L2 bridge domain.  
> Without hoststack, we may add some host-interfaces to one bridge domain, and 
> assign IP addresses to the veth interfaces in the containers. In addition, a 
> physical nic is also added to the same bridge domain to communicate with 
> other hosts.
> But with hoststack, things seem complicated because we have to assign IP 
> addresses inside VPP.  

FC: Yes, with host stack transport protocols are terminated in vpp, therefore 
the interfaces must have IPs. Do you need network access to the container’s 
linux stack for other applications, i.e., do you need IPs in the container as 
well? Also, can’t you give the interfaces /32 IPs?  
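The /32 suggestion sidesteps the overlapping-subnet problem: each interface gets a host address only, so there are no clashing connected /24s, and reachability is added per prefix. A rough VPP CLI sketch (hypothetical interface names):

```
set interface ip address memif1/0 192.168.1.2/32
set interface ip address memif2/0 192.168.1.3/32
ip route add 192.168.1.100/32 via memif1/0
```

With /32s both addresses can coexist in one vrf; routes to container peers or external hosts are then added explicitly, as in the last line.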

> I hope apps can communicate with each other and with external hosts in the 
> same vrf and source ip is enforced and not changed during communication.  If 
> not, can multiple vrfs achieve this?

FC:  If applications are attached to the same app namespace, then you could 
leverage cut-through connections if you enable local scope connections at 
attachment time (see slides 17 and 18 here [1]). Cut-through sessions are 
“connected” at session layer, so they don’t pass through the IP fib.

Otherwise, connectivity between the apps is established via intra-vrf or 
inter-vrf routing. Intra-vrf you don’t need to configure anything more, 
inter-vrf you need to add additional routes. For external hosts, you need 
routes to them in the vrfs. 

What we call “local” IPs for a connection are assigned at connect/accept time 
and they do not change. When connecting, we use the first IP of an interface 
that has a route to the destination and on accept, we use the dst IP on the SYN 
packet. 

Regards,
Florin

[1] https://wiki.fd.io/images/9/9c/Vpp-hoststack-kc-eu19.pdf 


>  
> Thanks,
> Hanlin
> View/Reply Online (#14737): https://lists.fd.io/g/vpp-dev/message/14737

View/Reply Online (#14738): https://lists.fd.io/g/vpp-dev/message/14738


Re: [vpp-dev] Based on the VPP to Nginx testing #ngnix #vpp

2019-12-01 Thread Florin Coras
Hi Yang.L, 

It looks as if nginx did not bind successfully or the connects are not routed 
to nginx (since your report is showing 0 requests in 30s). 

Check the routing between the two apps and vcl/ldp logs to see if there are any 
attach/bind errors.

Regards,
Florin

> On Dec 1, 2019, at 6:31 PM, lin.yan...@zte.com.cn wrote:
> 
> Hi All,
> The nginx test was performed on vpp19.04, and the transmission speed of 
> different files can reach several MB / sec by using the wrk tool test. 
> However, when the nginx test was performed on vpp 19.08.1, the transmission 
> speed for different files was basically 0 using the same wrk tool test. Then 
> I changed the vpp version to the latest master,
> and the result was still the same.
> With vpp + "LD_PRELOAD" nginx (nginx-release-1.17.5) + the wrk tool, the 
> result was:
> [attached screenshot: 捕获.PNG]
> 
> Mind answering this one?
> Can you provide relevant vpp+nginx +wrk test configurations and test reports?
> thanks,
> Yang.L
> 
> View/Reply Online (#14742): https://lists.fd.io/g/vpp-dev/message/14742

View/Reply Online (#14743): https://lists.fd.io/g/vpp-dev/message/14743


Re: [vpp-dev] Using hoststack instead of tap

2019-12-02 Thread Florin Coras
Hi Paul, 

Are you thinking about using tcp to generate packets? If yes, you probably want 
to take a look at the iperf vcl tests (test/test_vcl.py).
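For a quick manual run outside the test harness, the LDP/iperf recipe on the wiki boils down to preloading the LDP shim and pointing it at a vcl config. The paths below are illustrative and vary by distro and build:

```
# server side
sudo VCL_CONFIG=/etc/vpp/vcl.conf \
     LD_PRELOAD=/usr/lib/libvcl_ldpreload.so iperf3 -s

# client side
sudo VCL_CONFIG=/etc/vpp/vcl.conf \
     LD_PRELOAD=/usr/lib/libvcl_ldpreload.so iperf3 -c 10.0.0.153
```

See https://wiki.fd.io/view/VPP/HostStack/LDP/iperf for the full setup, including the vcl.conf contents.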

Regards,
Florin

> On Dec 2, 2019, at 9:57 AM, Paul Vinciguerra  
> wrote:
> 
> There was a brief discussion on the community call about using hoststack as 
> an alternative to running tests as a privileged user for a tap interface.  Is 
> there any documentation on this, or is there a representative test case I can 
> reference?  
> View/Reply Online (#14755): https://lists.fd.io/g/vpp-dev/message/14755

View/Reply Online (#14756): https://lists.fd.io/g/vpp-dev/message/14756


Re: [vpp-dev] Based on the VPP to Nginx testing #ngnix #vpp

2019-12-02 Thread Florin Coras
Hi Yang.L, 

I just tried out nginx + debug vcl and ldp + debug vpp. Everything seems to be 
working fine. 

Once you start nginx, do you get any errors in /var/log/syslog? What does “show 
sessions verbose” return? There might be some issues with your config.

Thanks, 
Florin

> On Dec 2, 2019, at 12:49 AM, lin.yan...@zte.com.cn wrote:
> 
> Hi Florin,
> When the nginx configuration item worker_processes = 1, everything is normal; 
> when the nginx configuration item worker_processes > 1, the situation above 
> occurs.
> 
> Can you explain the above problem?
> thanks,
> Yang.L
> 
> View/Reply Online (#14746): https://lists.fd.io/g/vpp-dev/message/14746

View/Reply Online (#14757): https://lists.fd.io/g/vpp-dev/message/14757


Re: [vpp-dev] VPP / tcp_echo performance

2019-12-03 Thread Florin Coras
Hi Dom, 

I’ve never tried to run the stack in a VM, so not sure about the expected 
performance, but here are a couple of comments:
- What fifo sizes are you using? Are they at least 4MB (see [1] for VCL 
configuration). 
- I don’t think you need to configure more than 16k buffers/numa. 

Additionally, to get more information on the issue:
- What does “show session verbose 2” report? Check the stats section for 
retransmit counts (tr - timer retransmit, fr - fast retransmit) which, if 
non-zero, indicate that packets are lost. 
- Check interface rx/tx error counts with “show int”. 
- Typically, for improved performance, you should write more than 1.4kB per 
call. But the fact that your average is less than 1.4kB suggests that you often 
find the fifo full or close to full. So probably the issue is not your sender 
app. 

Regards,
Florin

[1] https://wiki.fd.io/view/VPP/HostStack/LDP/iperf

> On Dec 3, 2019, at 11:40 AM, dch...@akouto.com wrote:
> 
> Hi all,
> 
> I've been running some performance tests and not quite getting the results I 
> was hoping for, and have a couple of related questions I was hoping someone 
> could provide some tips with. For context, here's a summary of the results of 
> TCP tests I've run on two VMs (CentOS 7 OpenStack instances, host-1 is the 
> client and host-2 is the server):
> Running iperf3 natively before the interfaces are assigned to DPDK/VPP: 10 
> Gbps TCP throughput
> Running iperf3 with VCL/HostStack: 3.5 Gbps TCP throughput
> Running a modified version of the tcp_echo application (similar results with 
> socket and svm api): 610 Mbps throughput
> Things I've tried to improve performance:
> Anything I could apply from 
> https://wiki.fd.io/view/VPP/How_To_Optimize_Performance_(System_Tuning)
> Added tcp { cc-algo cubic } to VPP startup config
> Using isolcpu and VPP startup config options, allocated first 2, then 4 and 
> finally 6 of the 8 available cores to VPP main & worker threads
> In VPP startup config set "buffers-per-numa 65536" and "default data-size 
> 4096"
> Updated grub boot options to include hugepagesz=1GB hugepages=64 
> default_hugepagesz=1GB
> My goal is to achieve at least the same throughput using VPP as I get when I 
> run iperf3 natively on the same network interfaces (in this case 10 Gbps).
>  
> A couple of related questions:
> Given the items above, do any VPP or kernel configuration items jump out that 
> I may have missed that could justify the difference in native vs VPP 
> performance or help get the two a bit closer?
> In the modified tcp_echo application, n_sent = app_send_stream(...) is called 
> in a loop always using the same length (1400 bytes) in my test version. The 
> return value n_sent indicates that the average bytes sent is only around 130 
> bytes per call after some run time. Are there any parameters or options that 
> might improve this?
> Any tips or pointers to documentation that might shed some light would be 
> hugely appreciated!
>  
> Regards,
> Dom
>  
> View/Reply Online (#14772): https://lists.fd.io/g/vpp-dev/message/14772

View/Reply Online (#14777): https://lists.fd.io/g/vpp-dev/message/14777


Re: [vpp-dev] VPP / tcp_echo performance

2019-12-04 Thread Florin Coras
Hi Dom, 

[traveling so a quick reply]

For some reason, your rx/tx fifos (see nitems), and implicitly the snd and rcv 
wnd, are 64kB in your logs below. Is this the tcp echo or iperf result?

Regards,
Florin

> On Dec 4, 2019, at 7:29 AM, dch...@akouto.com wrote:
> 
> Hi,
> 
> Thank you Florin and Jerome for your time, very much appreciated.
> 
> For VCL configuration, FIFO sizes are 16 MB
> "show session verbose 2" does not indicate any retransmissions. Here are the 
> numbers during a test run where approx. 9 GB were transferred (the difference 
> in values between client and server is just because it took me a few seconds 
> to issue the command on the client side as you can see from the duration):
> SERVER SIDE:
>  stats: in segs 5989307 dsegs 5989306 bytes 8544661342 dupacks 0
> out segs 3942513 dsegs 0 bytes 0 dupacks 0
> fr 0 tr 0 rxt segs 0 bytes 0 duration 106.489
> err wnd data below 0 above 0 ack below 0 above 0
> CLIENT SIDE:
>  stats: in segs 4207793 dsegs 0 bytes 0 dupacks 0
> out segs 6407444 dsegs 6407443 bytes 9141373892 dupacks 0
> fr 0 tr 0 rxt segs 0 bytes 0 duration 114.113
> err wnd data below 0 above 0 ack below 0 above 0
> sh int does not seem to indicate any issue. There are occasional drops but I 
> enabled tracing and checked those out, they are LLC BPDU's, I'm not sure 
> where those are coming from but I suspect they are from linuxbridge in the 
> compute host where the VMs are running.
> @Jerome: Before I use the dpdk-devbind command to make the interfaces 
> available to VPP, they use virtio drivers. When assigned to VPP they use 
> uio_pci_generic.
> 
> I'm not sure if any other stats might be useful so I'm just pasting a bunch 
> of stats & information from the client & server instances below, I know it's 
> a lot, just putting it here in case there is something useful in there. 
> Thanks again for taking the time to follow-up with me and for the 
> suggestions, I really do appreciate it very much!
> 
> Regards,
> Dom
> 
> #
> # Interface uses virtio-pci when the iperf3 test is run using regular Linux
> # networking. 
> #
> [root@vpp-test-1 centos]# dpdk-devbind --status
>  
> Network devices using kernel driver
> ===
> :00:03.0 'Virtio network device 1000' if=eth0 drv=virtio-pci 
> unused=virtio_pci *Active*
> :00:04.0 'Virtio network device 1000' if=eth1 drv=virtio-pci 
> unused=virtio_pci *Active*
>  
> #
> # Interface uses uio_pci_generic when set up for VPP
> #
>  
> [root@vpp-test-1 centos]# dpdk-devbind --status
>  
> Network devices using DPDK-compatible driver
> 
> :00:03.0 'Virtio network device 1000' drv=uio_pci_generic 
> unused=virtio_pci
>  
> Network devices using kernel driver
> ===
> :00:04.0 'Virtio network device 1000' if=eth1 drv=virtio-pci 
> unused=virtio_pci,uio_pci_generic *Active*
>  
>  
> vpp# sh hardware-interfaces
>   NameIdx   Link  Hardware
> GigabitEthernet0/3/0   1 up   GigabitEthernet0/3/0
>   Link speed: 10 Gbps
>   Ethernet address fa:16:3e:10:5e:4b
>   Red Hat Virtio
> carrier up full duplex mtu 9206
> flags: admin-up pmd maybe-multiseg
> rx: queues 1 (max 1), desc 256 (min 0 max 65535 align 1)
> tx: queues 1 (max 1), desc 256 (min 0 max 65535 align 1)
> pci: device 1af4:1000 subsystem 1af4:0001 address :00:03.00 numa 0
> max rx packet len: 9728
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip udp-cksum tcp-cksum tcp-lro vlan-filter
>jumbo-frame
> rx offload active: jumbo-frame
> tx offload avail:  vlan-insert udp-cksum tcp-cksum tcp-tso multi-segs
> tx offload active: multi-segs
> rss avail: none
> rss active:none
> tx burst function: virtio_xmit_pkts
> rx burst function: virtio_recv_mergeable_pkts
>  
> rx frames ok 467
> rx bytes ok27992
> extended stats:
>   rx good packets467
>   rx good bytes27992
>   rx q0packets   467
>   rx q0bytes   27992
>   rx q0 good packets 467
>   rx q0 good bytes 27992
>   rx q0 multicast packets465
>   rx q0 broadcast packets  2
> 

Re: [vpp-dev] VPP / tcp_echo performance

2019-12-04 Thread Florin Coras
Hi Dom, 

I suspect your client/server are really bursty in sending/receiving and your 
fifos are relatively small. So probably the delay in issuing the cli in the two 
vms is enough for the receiver to drain its rx fifo. Also, whenever the rx fifo 
on the receiver fills, the sender will most probably stop sending for ~200ms 
(the persist timeout after a zero window). 

The vcl.conf parameters are only used by vcl applications. The builtin echo 
apps do not use vcl, instead they use the native C app-interface api. Both the 
server and client echo apps take the fifo size as a parameter (something like 
fifo-size 4096 for 4MB fifos). 

Regards, 
Florin

> On Dec 4, 2019, at 3:58 PM, dch...@akouto.com wrote:
> 
> Hi Florin,
> 
> Those are tcp echo results. Note that the "show session verbose 2" command 
> was issued while there was still traffic being sent. Interesting that on the 
> client (sender) side the tx fifo is full (cursize 65534 nitems 65534) and on 
> the server (receiver) side the rx fifo is empty (cursize 0 nitems 65534).
> 
> Where is the rx and tx fifo size configured? Here's my exact vcl.conf file:
> vcl {
>   rx-fifo-size 1600
>   tx-fifo-size 1600
>   app-scope-local
>   app-scope-global
>   api-socket-name /tmp/vpp-api.sock
> }
> 
> Is this what those values should match?
> 
> Thanks,
> Dom
> 
> View/Reply Online (#14803): https://lists.fd.io/g/vpp-dev/message/14803

View/Reply Online (#14804): https://lists.fd.io/g/vpp-dev/message/14804


Re: [vpp-dev] VPP / tcp_echo performance

2019-12-04 Thread Florin Coras
Hi Dom,

I would actually recommend testing with iperf because it should not be slower 
than the builtin echo server/client apps. Remember to add fifo-size to your 
echo apps cli commands (something like fifo-size 4096 for 4MB) to increase the 
fifo sizes. 

Also note that you’re trying full-duplex testing. To check half-duplex, add 
no-echo to the server and no-return to client (or the other way around - in an 
airport and can’t remember the exact cli). We should probably make half-duplex 
default. 
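Putting the two suggestions together, a half-duplex run with 4MB fifos would look something like this from the vpp cli (option spellings as recalled in this thread; verify against the builtin help, since Florin notes he may have the no-echo/no-return sides swapped):

```
vpp# test echo server uri tcp://10.0.0.153/5556 fifo-size 4096 no-echo
vpp# test echo clients gbytes 1 uri tcp://10.0.0.153/5556 fifo-size 4096 no-return
```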

I’m surprised that iperf reports throughput as small as the echo apps. Did you 
check that fifo sizes are 16MB as configured and that snd_wnd/rcv_wnd/cwnd 
reported by “show session verbose 2” are the right size?

As for the checksum issues you’re hitting, I agree. It might be that tcp 
checksum offloading does not work properly with your interfaces. 

Regards,
Florin

> On Dec 4, 2019, at 2:18 PM, dch...@akouto.com wrote:
> 
> It turns out I was using DPDK virtio, with help from Moshin I changed the 
> configuration and tried to repeat the tests using VPP native virtio, results 
> are similar but there are some interesting new observations, sharing them 
> here in case they are useful to others or trigger any ideas. 
> 
> After configuring both instances to use VPP native virtio, I used the 
> built-in echo test to see what throughput I would get, and I got the same 
> results as the modified external tcp_echo, i.e. about 600 Mbps:
> Added dpdk { no-pci } to startup.conf and configured the interface using 
> create int virtio  as per instructions from Moshin, confirmed 
> settings with show virtio pci command
> Ran the built-in test echo application to transfer 1 GB of data and got the 
> following results:
> vpp# test echo clients gbytes 1 uri tcp://10.0.0.153/5556
> 1 three-way handshakes in 0.00 seconds 2288.06/s
> Test started at 1255.753237
> Test finished at 1272.863244
> 1073741824 bytes (1024 mbytes, 1 gbytes) in 17.11 seconds
> 62755195.55 bytes/second full-duplex
> .5020 gbit/second full-duplex
> I then used iperf3 with VCL on both sides and got roughly the same results 
> (620 Mbps)
> Then I rebooted the client VM and use native Linux networking on the client 
> side with VPP on the server side, and try to repeat the iperf test
> When I use VPP-native virtio on the server side, the iperf test fails, 
> packets are dropped on the server (VPP) side, doing a trace shows packets are 
> dropped because of "bad tcp checksum"
> I then switch the server side to use DPDK virtio, the iperf test works and I 
> get 3 Gbps throughput
> So, the big performance problem is on the client (sender) side, with VPP only 
> able to get around 600 Mbps out for some reason, even when using the built-in 
> test echo application. I'm continuing my investigation to see where the 
> bottleneck is, any other ideas on where to look would be greatly appreciated.
> 
> Also, there may be a checksum bug in the VPP-native virtio driver since the 
> packets are not dropped on the server side when using the DPDK virtio driver. 
> I'd be happy to help gather more details on this, create a JIRA ticket and 
> even contribute a fix but wanted to check before going down that road, any 
> thoughts or comments?
> 
> Thanks again for all the help so far!
> 
> Regards,
> Dom
> 
> 
> View/Reply Online (#14801): https://lists.fd.io/g/vpp-dev/message/14801

View/Reply Online (#14805): https://lists.fd.io/g/vpp-dev/message/14805


Re: [vpp-dev] How to configure network between different namespaces using hoststack

2019-12-05 Thread Florin Coras
Hi Hanlin, 

Inline.

> On Dec 4, 2019, at 1:59 AM, wanghanlin  wrote:
> 
> Hi Florin,
> 
> Thanks for your patient reply.  Still I have some doubt inline.
> 
>   
> wanghanlin
> 
> wanghan...@corp.netease.com
> On 11/30/2019 02:47,Florin Coras 
> <mailto:fcoras.li...@gmail.com> wrote: 
> Hi Hanlin, 
> 
> Inline. 
> 
>> On Nov 29, 2019, at 7:12 AM, wanghanlin <wanghan...@corp.netease.com> wrote:
>> 
>> Hi Florin,
>> Thanks for your reply.
>> I'm considering a very simple use case: some apps in different containers 
>> communicate through VPP, just in an L2 bridge domain.  
>> Without the hoststack, we may add some host-interfaces to one bridge domain 
>> and assign the IP address of the veth interface in each container. In 
>> addition, a physical NIC is also added to the same bridge domain to 
>> communicate with other hosts.
>> But with the hoststack, things seem complicated because we have to assign IP 
>> addresses inside VPP.  
> 
> FC: Yes, with host stack transport protocols are terminated in vpp, therefore 
> the interfaces must have IPs. Do you need network access to the container’s 
> linux stack for other applications, i.e., do you need IPs in the container as 
> well? Also, can’t you give the interfaces /32 IPs?
> 
> Hanlin: I don't need access to the container's Linux stack now; I think I can 
> create another host-interface with another IP if needed. Also, if I give the 
> interfaces /32 IPs, then how do they communicate with each other and with 
> external hosts?  

FC: I’m not sure I understand the question. I’m inclined to say vpp routing 
and/or cut-through sessions, but it feels like I’m missing some point you were 
trying to make. 

> As an alternative, I assign multiple /24 IPs to one interface; then two 
> applications can communicate with each other and with external hosts, but at 
> accept time I can only get a 0.0.0.0/0 source address when they communicate 
> with each other. Maybe I should bind to an IP before connecting if I want to 
> get this specific IP? 

FC: If you use /24 for the interface then, if you want a unique local IPs for 
each app, you should use an explicit source ip in the connect (see 
vppcom_session_attr and VPPCOM_ATTR_SET_LCL_ADDR).
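For illustration, a rough C sketch of pinning the local address with VCL before connecting. This is untested, and the exact vppcom_endpt_t field layout may differ across VPP versions, so treat it as a sketch of the idea rather than working code:

```c
#include <stdint.h>
#include <arpa/inet.h>
#include <vppcom.h>

/* Sketch: connect to rmt_ip:port using lcl_ip as the source address,
 * so the peer's accept sees a specific local IP instead of whatever
 * vpp picks from the routing interface. Untested; check vppcom.h for
 * the endpoint struct in your VPP version. */
static int connect_from (const char *lcl_ip, const char *rmt_ip, uint16_t port)
{
  struct in_addr lcl, rmt;
  inet_pton (AF_INET, lcl_ip, &lcl);
  inet_pton (AF_INET, rmt_ip, &rmt);

  int s = vppcom_session_create (VPPCOM_PROTO_TCP, 0 /* blocking */);
  if (s < 0)
    return s;

  /* Set the local address before connecting */
  vppcom_endpt_t lcl_ep = { .is_ip4 = 1, .ip = (uint8_t *) &lcl, .port = 0 };
  uint32_t len = sizeof (lcl_ep);
  vppcom_session_attr (s, VPPCOM_ATTR_SET_LCL_ADDR, &lcl_ep, &len);

  vppcom_endpt_t rmt_ep = { .is_ip4 = 1, .ip = (uint8_t *) &rmt,
                            .port = htons (port) };
  return vppcom_session_connect (s, &rmt_ep);
}
```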

> 
>> I hope apps can communicate with each other and with external hosts in the 
>> same vrf and source ip is enforced and not changed during communication.  If 
>> not, can multiple vrfs achieve this?
> 
> FC:  If applications are attached to the same app namespace, then you could 
> leverage cut-through connections if you enable local scope connections at 
> attachment time (see slides 17 and 18 here [1]). Cut-through sessions are 
> “connected” at session layer, so they don’t pass through the IP fib.
> 
> Hanlin: Can local scope and global scope be enabled simultaneously? I.e., can 
> some connections use local scope while others use global scope?

FC: Yes, you can. 
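For reference, enabling both scopes is just a matter of listing both options in vcl.conf. A minimal sketch, mirroring the vcl.conf examples that appear elsewhere in this archive:

```
vcl {
  app-scope-local
  app-scope-global
}
```

With both set, sessions between apps in the same app namespace can be cut-through, while everything else goes through the global table.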

> 
> Otherwise, connectivity between the apps is established via intra-vrf or 
> inter-vrf routing. Intra-vrf you don’t need to configure anything more, 
> inter-vrf you need to add additional routes. For external hosts, you need 
> routes to them in the vrfs. 
> 
> Hanlin: Inter-vrf leaking seems not to work when multiple VRFs have the same 
> subnet IPs. In test/test_vcl.py, the two VRF tables have different subnet IPs.

FC: Yes, that’s true. Let’s clarify your first question and see which of the 
two options (multiple /32 interfaces or a /24 one) works. 

Regards, 
Florin

> 
> What we call “local” IPs for a connection are assigned at connect/accept time 
> and they do not change. When connecting, we use the first IP of an interface 
> that has a route to the destination and on accept, we use the dst IP on the 
> SYN packet. 
> 
> Regards,
> Florin
> 
> [1] https://wiki.fd.io/images/9/9c/Vpp-hoststack-kc-eu19.pdf 
> 
>>  
>> Thanks,
>> Hanlin

Re: [vpp-dev] VPP / tcp_echo performance

2019-12-06 Thread Florin Coras
Hi Dom, 

Great to see progress! More inline. 

> On Dec 6, 2019, at 10:21 AM, dch...@akouto.com wrote:
> 
> Hi Florin,
> 
> Some progress, at least with the built-in echo app, thank you for all the 
> suggestions so far! By adjusting the fifo-size and testing in half-duplex I 
> was able to get close to 5 Gbps between the two openstack instances using the 
> built-in test echo app:
> 
> vpp# test echo clients gbytes 1 no-return fifo-size 100 uri 
> tcp://10.0.0.156/

FC: The cli for the echo apps is a bit confusing. Whatever you pass above is 
left shifted by 10 (multiplied by 1024) so that’s why I suggested to use 4096 
(~4MB). You can also use larger values, but above you are asking for ~1GB :-)
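As a sanity check, the shift is easy to reproduce outside vpp; a quick illustration in plain Python (not vpp code):

```python
# The echo-app CLI left-shifts size-like arguments by 10,
# i.e. multiplies them by 1024.
def echo_cli_bytes(arg: int) -> int:
    """Bytes actually requested for an echo-app 'fifo-size <arg>' argument."""
    return arg << 10

print(echo_cli_bytes(4096))  # 4194304 bytes, i.e. the suggested ~4MB
```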

> 1 three-way handshakes in .26 seconds 3.86/s
> Test started at 745.163085
> Test finished at 746.937343
> 1073741824 bytes (1024 mbytes, 1 gbytes) in 1.77 seconds
> 605177784.33 bytes/second half-duplex
> 4.8414 gbit/second half-duplex
> 
> I need to get closer to 10 Gbps but at least there is good proof that the 
> issue is related to configuration / tuning. So, I switched back to iperf 
> testing with VCL, and I'm back to 600 Mbps, even though I can confirm that 
> the fifo sizes match what is configured in vcl.conf (note that in this test 
> run I changed that to 8 MB each for rx and tx from the previous 16, but 
> results are the same when I use 16 MB). I'm obviously missing something in 
> the configuration but I can't imagine what that might be. Below is my exact 
> startup.conf, vcl.conf and output from show session from this iperf run to 
> give the full picture, hopefully something jumps out as missing in my 
> configuration. Thank you for your patience and support with this, much 
> appreciated!

FC: Not entirely sure what the issue is, but some things can be improved. More 
below. 

> 
> [root@vpp-test-1 centos]# cat vcl.conf
> vcl {
>   rx-fifo-size 800
>   tx-fifo-size 800
>   app-scope-local
>   app-scope-global
>   api-socket-name /tmp/vpp-api.sock
> }

FC: This looks okay.

> 
> [root@vpp-test-1 centos]# cat /etc/vpp/startup.conf
> unix {
>   nodaemon
>   log /var/log/vpp/vpp.log
>   full-coredump
>   cli-listen /run/vpp/cli.sock
>   gid vpp
>   interactive
> }
> dpdk {
>   dev :00:03.0{
>   num-rx-desc 65535
>   num-tx-desc 65535

FC: Not sure about this. I don’t have any experience with vhost interfaces, but 
for XL710s I typically use 256 descriptors. It might be too low if you start 
noticing lots of rx/tx drops with “show int”. 

>   }
> }
> session { evt_qs_memfd_seg }
> socksvr { socket-name /tmp/vpp-api.sock }
> api-trace {
>   on
> }
> api-segment {
>   gid vpp
> }
> cpu {
> main-core 7
> corelist-workers 4-6
> workers 3

FC: For starters, could you try this out with only 1 worker, since you’re 
testing with 1 connection. 

Also, did you try pinning iperf with taskset to a worker on the same numa like 
your vpp workers, in case you have multiple numas? Check with lscpu your cpu 
into numa distribution.  

You may want to pin iperf even if you have only one numa, just to be sure it 
won’t be scheduled by mistake on the cores vpp is using. 
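To make the pinning concrete, something along these lines should work; the core numbers and the iperf destination are illustrative, so adjust them to your topology:

```
# Show which cores belong to which NUMA node (util-linux lscpu)
lscpu | grep -i 'numa node'

# With vpp on main-core 7 and workers on 4-6, pin iperf3 to other
# cores on the same NUMA node, e.g. cores 2 and 3
taskset -c 2,3 iperf3 -c 10.0.0.156 -t 30
```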

> }
> buffers {
> ## Increase number of buffers allocated, needed only in scenarios with
> ## large number of interfaces and worker threads. Value is per numa 
> node.
> ## Default is 16384 (8192 if running unprivileged)
> buffers-per-numa 128000

FC: For simple testing I only use 16k, but this value actually depends on the 
number of rx/tx descriptors you have configured. 

>  
> ## Size of buffer data area
> ## Default is 2048
> default data-size 8192

FC: Are you trying to use jumbo buffers? You need to add to the tcp stanza, 
i.e., tcp { mtu  }. But for starters don’t modify the 
buffer size, just to get an idea of where performance is without this. 

Afterwards, as Jerome suggested, you may want to try tso by enabling it for 
tcp, i.e., tcp { tso } in startup.conf and enabling tso for the nic by adding 
“tso on” to the nic’s dpdk stanza (if the nic actually supports it). You don’t 
need to change the buffer size for that. 
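Putting the two knobs together, the tcp and dpdk stanzas in startup.conf would look roughly like this; the PCI address is a placeholder, and (as noted later in this thread) newer builds also want enable-tcp-udp-checksum in the dpdk section:

```
tcp {
  tso
}
dpdk {
  dev <pci-address> {
    tso on
  }
  enable-tcp-udp-checksum
}
```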

> }
> 
> vpp# sh session verbose 2
> Thread 0: no sessions
> [1:0][T] 10.0.0.152:41737->10.0.0.156:5201  ESTABLISHED
>  index: 0 flags:  timers:
>  snd_una 124 snd_nxt 124 snd_una_max 124 rcv_nxt 5 rcv_las 5
>  snd_wnd 7999488 rcv_wnd 7999488 rcv_wscale 10 snd_wl1 4 snd_wl2 124
>  flight size 0 out space 4413 rcv_wnd_av 7999488 tsval_recent 12893009
>  tsecr 10757431 tsecr_last_ack 10757431 tsval_recent_age 1995 snd_mss 1428
>  rto 200 rto_boff 0 srtt 3 us 3.887 rttvar 2 rtt_ts 0. rtt_seq 124
>  cong:   none algo newreno cwnd 4413 ssthresh 4194304 bytes_acked 0
>  cc space 4413 prev_cwnd 0 prev_ssthresh 0 rtx_bytes 0
>  snd_congestion 1736877166 dupack 0 limited_transmit 1736877166
>  sboard: sacked_bytes 0 last_sacked_bytes 0 lost_bytes 0
>  last_bytes_delivered 0 high_sacked 

Re: [vpp-dev] How to configure network between different namespaces using hoststack

2019-12-06 Thread Florin Coras
Hi Hanlin, 

Inline. 

> On Dec 5, 2019, at 7:00 PM, wanghanlin  wrote:
> 
> Hi Florin,
> Okay, regarding first question,  the following is the detailed use case:
> I have one 82599 NIC in my Linux host. I allocate two VF interfaces from it 
> through SR-IOV: one VF is placed into a Linux namespace N1 and assigned IP 
> address 192.168.1.2/24, and the other VF is placed into VPP.  
> I have three applications (call them APP1, APP2, APP3) communicating with 
> each other, and each application must get the source IP address (not 0.0.0.0) 
> after accepting a connect request.
> APP1 runs in Linux namespace N1 and uses IP address 192.168.1.2/24. APP2 runs 
> in Linux namespace N2 and uses IP address 192.168.1.3/24. APP3 runs in Linux 
> namespace N3 and uses IP address 192.168.1.4/24.  
> And finally, APP2 and APP3 need to run on top of LDP.
> 
> Let's summarize:
> APP1, N1, 192.168.1.2/24, outside VPP
> APP2, N2, 192.168.1.3/24, inside VPP
> APP3, N3, 192.168.1.4/24, inside VPP


FC: I assume N2 and N3 are mapped to app namespaces from VPP perspective. 
Additionally, those two prefixes, i.e., 192.168.1.3/24 and 192.168.1.4/24, do 
not need to be configured on interfaces part of N2 and N3 respectively. 

Then, from vpp perspective, APP2 and APP3 are “locally attached” and APP1 is 
“remote”. So, from my perspective, they’re at least two different networks. 
APP2 and APP3 could be the same or different networks.

For instance, you could assign 192.168.1.2/25 to N1 and then leave 
192.168.1.128/25 to vpp for N2 and N3. Within vpp you have two options:
- add two interfaces, say intN2 and intN3 with IPs 192.168.1.129/32 and 
192.168.1.130/32 and associate N2 and N3 app namespaces to those interfaces 
(not the fibs). Whenever initiating connections, APP2 and APP3 will pick up the 
ips of the interfaces associated to their respective app namespaces. 
- add one interface intN with IP 192.168.1.129/25 and associate both namespaces 
to it. If you need APP1 to use 192.168.1.129 and APP2 192.168.1.130, then 
you’ll need your apps to call bind before connecting (haven’t tested this but I 
think it should work). 

The above assumes APP2 and APP3 map to different app namespaces. If you want to 
use the same app namespace, to be able to use cut-through connections, then 
only option 2 works. Additionally, you need the two apps to attach with both 
local and global scope set. 
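A rough vpp CLI sketch of the first option; the interface names, namespace ids, secrets, and sw_if_index values are illustrative, so check the "app ns add" syntax against your vpp version:

```
set interface ip address intN2 192.168.1.129/32
set interface ip address intN3 192.168.1.130/32
set interface state intN2 up
set interface state intN3 up
app ns add id ns-n2 secret 1234 sw_if_index 1
app ns add id ns-n3 secret 5678 sw_if_index 2
```

APP2 and APP3 would then select their namespaces via the namespace-id and namespace-secret options in their respective vcl.conf files.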

Hope this helps!

Regards, 
Florin

> 
> Then, my question is how to configure 192.168.1.3/24 and 192.168.1.4/24 in 
> VPP?
> 
> Thanks & Regards,
> Hanlin
> 
> On 12/6/2019 03:56, Florin Coras <fcoras.li...@gmail.com> wrote: 
> Hi Hanlin, 
> 
> Inline.
> 
>> On Dec 4, 2019, at 1:59 AM, wanghanlin <wanghan...@corp.netease.com> wrote:
>> 
>> Hi Florin,
>> 
>> Thanks for your patient reply.  Still I have some doubt inline.
>> 
>> On 11/30/2019 02:47, Florin Coras <fcoras.li...@gmail.com> wrote: 
>> Hi Hanlin, 
>> 
>> Inline. 
>> 
>>> On Nov 29, 2019, at 7:12 AM, wanghanlin <wanghan...@corp.netease.com> wrote:
>>> 
>>> Hi Florin,
>>> Thanks for your reply.
>>> I just consider a very simple use case. Some apps in different containers 
>>> communicate through VPP, just in a L2 bridge domain.  
>>> Without hoststack,  we may add some host-interfaces in one bridge domain, 
>>> and assign IP address of veth interface in containers. In addition, a 
>>> physical nic also added in same bridge domain to communicate with other 
>>> hosts.
>>> But with hoststack, things seem complicated because we have to assign IP 
>>> address inside VPP.  
>> 

Re: [vpp-dev] Based on the VPP to Nginx testing #ngnix #vpp

2019-12-06 Thread Florin Coras
Hi Lin, 

I don’t see anything obviously wrong. 

What is your vcl.conf? Also could you check the status of your nginx workers in 
vpp by doing: “show app” and then “show app ” where index is the index 
associated to your nginx app (if no other app is associated it should be 1). 

Here’s some example output: 

DBGvpp# sh app
Index  Name                Namespace
0      tls                 default
1      ldp-83053-app[shm]  default
DBGvpp# sh app 1
app-name ldp-83053-app[shm] app-index 1 ns-index 0 seg-size 38.15m
rx-fifo-size 97.66k tx-fifo-size 97.66k workers:
  wrk-index 1 app-index 1 map-index 0 api-client-index 0
  wrk-index 2 app-index 1 map-index 1 api-client-index 256
  wrk-index 3 app-index 1 map-index 2 api-client-index 512
  wrk-index 4 app-index 1 map-index 3 api-client-index 768
  wrk-index 5 app-index 1 map-index 4 api-client-index 1024 

Here nginx is registered with the session layer by ldp as “ldp-83053-app” (83053 
is nginx’s pid) and it has 4 workers. 

Regards, 
Florin

> On Dec 6, 2019, at 12:47 AM, lin.yan...@zte.com.cn wrote:
> 
> Hi Florin,
> I have modified some configuration items of startup.conf and nginx.conf, but 
> the results were still the same.
> The nginx logs are in the attached screenshot (捕获1.PNG).
> 
> The configuration files are in the attachment.
> I don't know what went wrong. Can you help me analyze it?
> Thanks,
> Yang.L



Re: [vpp-dev] How to configure network between different namespaces using hoststack

2019-12-09 Thread Florin Coras
Hi Hanlin,

Inline.

> On Dec 9, 2019, at 12:42 AM, wanghanlin  wrote:
> 
> Hi Florin,
> Thanks for your suggestion.
> Following your suggested configuration, the three applications can 
> communicate with each other.

FC: Great!

> But there are two minor problems:
> 1. I used option 2 to configure APP2 and APP3, but I can only get the source 
> IP address 0.0.0.0 (not 192.168.1.129 or 192.168.1.130) after accepting a 
> connect request. 

FC: Does this happen only for connects/accepts between APP2 and APP3? If yes, 
then that’s because you’re getting cut-through connections through the global 
table. That is, the two apps, although configured with different app namespaces 
use the same ip fib, so the session layer cut-through connects them, as opposed 
to using the requested transport protocol. 

For cut-through connections we don’t fill in the local ip, but I guess we 
could. 

> 2. There are two different networks between APP1 and APP2/APP3 in your 
> configuration. But if they all have to be in the same network, then how 
> should I configure it? 

FC: I don’t know if there are any tricks that would allow for that. But, I 
would recommend keeping them separate because splitting a network between 
multiple ip interfaces sounds like a source of problems. Also, you’re not using 
more resources, it’s just that you now split your /24 into multiple subnets. 

Regards,
Florin

> 
> Following are vpp, vcl and fib configurations:
> 
> Thanks & Regards,
> Hanlin
> On 12/7/2019 07:11, Florin Coras <fcoras.li...@gmail.com> wrote: 
> Hi Hanlin, 
> 
> Inline. 
> 
>> On Dec 5, 2019, at 7:00 PM, wanghanlin <wanghan...@corp.netease.com> wrote:
>> 
>> Hi Florin,
>> Okay, regarding first question,  the following is the detailed use case:
>> I have one 82599 NIC in my Linux host. I allocate two VF interfaces from it 
>> through SR-IOV: one VF is placed into a Linux namespace N1 and assigned IP 
>> address 192.168.1.2/24, and the other VF is placed into VPP.  
>> I have three applications (call them APP1, APP2, APP3) communicating with 
>> each other, and each application must get the source IP address (not 
>> 0.0.0.0) after accepting a connect request.
>> APP1 runs in Linux namespace N1 and uses IP address 192.168.1.2/24. APP2 
>> runs in Linux namespace N2 and uses IP address 192.168.1.3/24. APP3 runs in 
>> Linux namespace N3 and uses IP address 192.168.1.4/24.  
>> And finally, APP2 and APP3 need to run on top of LDP.
>> 
>> Let's summarize:
>> APP1, N1, 192.168.1.2/24, outside VPP
>> APP2, N2, 192.168.1.3/24, inside VPP
>> APP3, N3, 192.168.1.4/24, inside VPP
> 
> 
> FC: I assume N2 and N3 are mapped to app namespaces from VPP perspective. 
> Additionally, those two prefixes, i.e., 192.168.1.3/24 and 192.168.1.4/24, do 
> not need to be configured on interfaces part of N2 and N3 respectively. 
> 
> Then, from vpp perspective, APP2 and APP3 are “locally attached” and APP1 is 
> “remote”. So, from my perspective, they’re at least two different networks. 
> APP2 and APP3 could be the same or different networks.
> 
> For instance, you could assign 192.168.1.2/25 to N1 and then leave 
> 192.168.1.128/25 to vpp for N2 and N3. Within vpp you have two options:
> - add two interfaces, say intN2 and intN3 with IPs 192.168.1.129/32 and 
> 192.168.1.130/32 and associate N2 and N3 app namespaces to those interfaces 
> (not the fibs). Whenever initiating connections, APP2 and APP3 will pick up 
> the ips of the interfaces associated to their respective app namespaces. 
> - add one interface intN with IP 192.168.1.129/25 and associate both 
> namespaces to it. If you need APP1 to use 192.168.1.129 and APP2 
> 192.168.1.130, then you’ll need your apps to call bind before connecting 
> (haven’t tested this but I think it should work). 
> 
> The above assumes APP2 and APP3 map to different app namespaces. If you want 
> to use the same app namespace, to be able to use cut-through connections, 
> then only option 2 works. Additionally, you need the two apps to attach with 
> both local and global scope set. 
> 
> Hope this helps!
> 
> Regards, 
> Florin
> 
>> 
>> Then, my question is how t

Re: [vpp-dev] [VCL] hoststack app crash with invalid memfd segment address

2019-12-11 Thread Florin Coras
Hi Hanlin, 

Thanks to Dave, we can now have per thread binary api connections to vpp. I’ve 
updated the socket client and vcl to leverage this so, after [1] we have per 
vcl worker thread binary api sockets that are used to exchange fds. 

Let me know if you’re still hitting the issue. 

Regards,
Florin

[1] https://gerrit.fd.io/r/c/vpp/+/23687

> On Nov 22, 2019, at 10:30 AM, Florin Coras  wrote:
> 
> Hi Hanlin, 
> 
> Okay, that’s a different issue. The expectation is that each vcl worker has a 
> different binary api transport into vpp. This assumption holds for 
> applications with multiple process workers (like nginx) but is not completely 
> satisfied for applications with thread workers. 
> 
> Namely, for each vcl worker we connect over the socket api to vpp and 
> initialize the shared memory transport (so binary api messages are delivered 
> over shared memory instead of the socket). However, as you’ve noted, the 
> socket client is currently not multi-thread capable, consequently we have an 
> overlap of socket client fds between the workers. The first segment is 
> assigned properly but the subsequent ones will fail in this scenario. 
> 
> I wasn’t aware of this so we’ll have to either fix the socket binary api 
> client, for multi-threaded apps, or change the session layer to use different 
> fds for exchanging memfd fds. 
> 
> Regards, 
> Florin
> 
>> On Nov 21, 2019, at 11:47 PM, wanghanlin <wanghan...@corp.netease.com> wrote:
>> 
>> Hi Florin,
>> Regarding 3), I think the main problem may be in the function 
>> vl_socket_client_recv_fd_msg called by vcl_session_app_add_segment_handler.  
>> Multiple worker threads share the same scm->client_socket.fd, so B2 may 
>> receive the segment memfd belonging to A1.
>> 
>>  
>> Regards,
>> Hanlin
>> 
>> On 11/22/2019 01:44, Florin Coras <fcoras.li...@gmail.com> wrote: 
>> Hi Hanlin, 
>> 
>> As Jon pointed out, you may want to register with gerrit. 
>> 
>> You comments with respect to points 1) and 2) are spot on. I’ve updated the 
>> patch to fix them. 
>> 
>> Regarding 3), if I understood your scenario correctly, it should not happen. 
>> The ssvm infra forces applications to map segments at fixed addresses. That 
>> is, for the scenario you’re describing lower, if B2 is processed first, 
>> ssvm_slave_init_memfd will map the segment at A2. Note how we first map the 
>> segment to read the shared header (sh) and then use sh->ssvm_va (which 
>> should be A2) to remap the segment at a fixed virtual address (va). 
>> 
>> Regards,
>> Florin
>> 
>>> On Nov 21, 2019, at 2:49 AM, wanghanlin <wanghan...@corp.netease.com> wrote:
>>> 
>>> Hi Florin,
>>> I have applied the patch and found some problems in my case. I don't have 
>>> the right to post them in gerrit, so I post them here.
>>> 1) evt->event_type should be set with SESSION_CTRL_EVT_APP_DEL_SEGMENT 
>>> rather than SESSION_CTRL_EVT_APP_ADD_SEGMENT. File: 
>>> src/vnet/session/session_api.c, Line: 561, Function: mq_send_del_segment_cb
>>> 2) session_send_fds should be called at the end of the function 
>>> mq_send_add_segment_cb, otherwise the lock of app_mq can't be freed here. 
>>> File: src/vnet/session/session_api.c, Line: 519, Function: 
>>> mq_send_add_segment_cb 
>>> 3) When vcl_segment_attach is called in each worker thread, 
>>> ssvm_slave_init_memfd can also be called in each worker thread, and 
>>> ssvm_slave_init_memfd maps addresses sequentially by mapping the segment 
>>> once in advance. That's OK with only one thread, but may be wrong with 
>>> multiple worker threads. Suppose the following scenario: VPP allocates a 
>>> segment at address A1 and notifies worker thread B1, expecting B1 to also 
>>> map the segment at address A1; simultaneously, VPP allocates a segment at 
>>> address A2 and notifies worker thread B2, expecting B2 to map the segment 
>>> at address A2. If B2 processes the notify message first, then 
>>> ssvm_slave_init_memfd may map the segment at address A1. Maybe 
>>> VPP can add segment map

Re: [vpp-dev] VPP / tcp_echo performance

2019-12-12 Thread Florin Coras
Hi Dom, 


> On Dec 12, 2019, at 12:29 PM, dch...@akouto.com wrote:
> 
> Hi Florin,
> 
> The saga continues, a little progress and more questions. In order to reduce 
> the variables, I am now only using VPP on one of the VMs: iperf3 server is 
> running on a VM with native Linux networking, and iperf3+VCL client running 
> on the second VM.

FC: Okay!

> 
> I've pasted the output from a few commands during this test run below and 
> have a few questions if you don't mind.
> The "show errors" command indicates "Tx packet drops (dpdk tx failure)". I 
> have done quite a bit of searching, found other mentions of this in other 
> threads but no tips as to where to look or hints on how it was / can be 
> solved. Any thoughts?
FC: The number of drops is not that large, so we can ignore it for now. 
> I'm not really sure how to interpret the results of "show run" but nothing 
> jumps out at me, do you see anything useful in there?
FC: Nothing apart from the fact that one of vpp’s workers is moderately loaded 
(you’re still running 3 workers). 
> Some of the startup.conf options were not working for me, so I switched to 
> building from source (I chose to use tag v20.01-rc0 for some stability). 
> Still no luck with some of the options:
> When I try to use tcp { tso } I get this: 0: tcp_config_fn: unknown input ` 
> tso'
FC: You need to get “closer” to master HEAD. That tag was laid when 19.08 was 
released but tso support was merged afterwards. Typically our CI infra is good 
enough to keep things running so you might want to try master latest. 
> When I try to use num-mbufs in the dpdk section, I get 0: dpdk_config: 
> unknown input `num-mbufs 65535’
FC: This was deprecated at one point. The new stanza is "buffers { 
buffers-per-numa  }"
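For example, a startup.conf fragment along these lines restores the old num-mbufs behavior; the count here is just the default mentioned earlier in the thread, and should really be sized from your rx/tx descriptor counts:

```
buffers {
  buffers-per-numa 16384
}
```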
> 
> Do you know if these options are supported? I can't figure out a way to 
> increase mbufs since the above option does not work, and when I try to use 
> socket-mem (which according to the documentation is needed if there is a need 
> for a larger number of mbufs) I get this: dpdk_config:1408: socket-mem 
> argument is deprecated

FC: Yes, this was also deprecated. 

> 
> To answer some of your questions from your previous reply:
> I have indeed been using taskset and watching CPU load with top to make sure 
> things are going where I expect them to go
> I am not trying to use jumbo buffers, increasing "default data-size" was just 
> an attempt to see if there would be a difference
> Thanks for the cubic congestion algo suggestion, made the change but no 
> improvement

FC: Understood! I guess that means we should try tso. I just tested it and it 
seems dpdk stanza needs an extra "dpdk {enable-tcp-udp-checksum}” apart from 
“dpdk { dev  { tso on } }”. Let me know if you hit any other issues with 
it. You’ll know that it’s running if you do “show session verbose 2” and you 
see “TSO" in the cfg flags, instead of “TSO off”. 

Regards, 
Florin
> Thank you for all the help, it is very much appreciated.
> 
> Regards,
> Dom
> 
> vpp# sh int
>               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
> GigabitEthernet0/3/0              1      up          9000/0/0/0     rx packets             1642537
>                                                                    rx bytes             108676814
>                                                                    tx packets             5216493
>                                                                    tx bytes            7793319472
>                                                                    drops                      392
>                                                                    ip4                    1642178
>                                                                    tx-error                   475
> local0                            0     down          0/0/0/0      drops                        1
> 
> vpp# sh err
>    Count    Node                 Reason
>        1    ip4-glean            ARP requests sent
>        7    dpdk-input           no error
>  5216424    session-queue        Packets transmitted
>        1    tcp4-rcv-process     Pure ACKs received
>        2    tcp4-syn-sent        SYN-ACKs received
>        7    tcp4-established     Packets pushed into rx fifo
>  1619850    tcp4-established     Pure ACKs received
>    22219    tcp4-established     Duplicate ACK
>        1    tcp4-established     Resets received
>       62    tcp4-established     Connection closed
>        1    tcp4-established     FINs received
>       62    tcp4-output          Resets sent
>        2    arp-reply            ARP replies sent
>       33    ip4-input            unkn

Re: [vpp-dev] VPP / tcp_echo performance

2019-12-13 Thread Florin Coras
Hi Dom, 

From the logs it looks like TSO is not on. I wonder if the vhost nic actually 
honors the “tso on” flag. Have you also tried with native vhost driver, instead 
of the dpdk one? I’ve never tried with the tcp, so I don’t know if it properly 
advertises the fact that it supports TSO. 

Lower you can see how it looks on my side, between two Broadwell boxes with 
XL710s. The tcp connection TSO flag needs to be on, otherwise tcp will do the 
segmentation by itself. 

Regards, 
Florin

$ ~/vpp/vcl_iperf_client 6.0.1.2 -t 10
[snip]
[ ID] Interval   Transfer Bandwidth   Retr
[ 33]   0.00-10.00  sec  42.2 GBytes  36.2 Gbits/sec0 sender
[ 33]   0.00-10.00  sec  42.2 GBytes  36.2 Gbits/sec  receiver

vpp# show session verbose 2
[snip]
[1:1][T] 6.0.1.1:27240->6.0.1.2:5201  ESTABLISHED
 index: 1 cfg: TSO flags: PSH pending timers: RETRANSMIT
 snd_una 2731494347 snd_nxt 2731992143 snd_una_max 2731992143 rcv_nxt 1 rcv_las 1
 snd_wnd 1999872 rcv_wnd 3999744 rcv_wscale 10 snd_wl1 1 snd_wl2 2731494347
 flight size 497796 out space 716 rcv_wnd_av 3999744 tsval_recent 1787061797
 tsecr 3347210414 tsecr_last_ack 3347210414 tsval_recent_age 4294966829 snd_mss 1448
 rto 200 rto_boff 0 srtt 1 us .101 rttvar 1 rtt_ts 8.6696 rtt_seq 2731733367
 next_node 0 opaque 0x0
 cong:   none algo cubic cwnd 498512 ssthresh 407288 bytes_acked 17376
 cc space 716 prev_cwnd 581841 prev_ssthresh 403737
 snd_cong 2702482407 dupack 0 limited_tx 1608697445
 rxt_bytes 0 rxt_delivered 0 rxt_head 13367060 rxt_ts 3347210414
 prr_start 2701996195 prr_delivered 0 prr space 0
 sboard: sacked 0 last_sacked 0 lost 0 last_lost 0 rxt_sacked 0
 last_delivered 0 high_sacked 2702540327 is_reneging 0
 cur_rxt_hole 4294967295 high_rxt 2702048323 rescue_rxt 2701996194
 stats: in segs 293052 dsegs 0 bytes 0 dupacks 5568
out segs 381811 dsegs 381810 bytes 15628627726 dupacks 0
fr 229 tr 0 rxt segs 8207 bytes 11733696 duration 3.468
err wnd data below 0 above 0 ack below 0 above 0
 pacer: rate 4941713080 bucket 2328382 t/p 4941.713 last_update 0 us idle 100
 Rx fifo: cursize 0 nitems 399 has_event 0
  head 0 tail 0 segment manager 1
  vpp session 1 thread 1 app session 1 thread 0
  ooo pool 0 active elts newest 0
 Tx fifo: cursize 199 nitems 199 has_event 1
  head 396234 tail 396233 segment manager 1
  vpp session 1 thread 1 app session 1 thread 0
  ooo pool 0 active elts newest 4294967295
 session: state: ready opaque: 0x0 flags:

vpp# sh run 
[snip]
Thread 1 vpp_wk_0 (lcore 24)
Time 774.3, 10 sec internal node vector rate 0.00
  vector rates in 2.5159e3, out 1.4186e3, drop 1.2915e-3, punt 0.0000e0
             Name                 State       Calls      Vectors   Suspends   Clocks   Vectors/Call
FortyGigabitEthernet84/0/0-out   active       977678     1099456          0   2.47e2   1.12
FortyGigabitEthernet84/0/0-tx    active       977678     1098446          0   2.17e3   1.12
ethernet-input                   active       442524      848618          0   2.69e2   1.92
ip4-input-no-checksum            active       442523      848617          0   2.86e2   1.92
ip4-local                        active       442523      848617          0   3.24e2   1.92
ip4-lookup                       active      1291425     1948073          0   2.09e2   1.51
ip4-rewrite                      active       977678     1099456          0   2.23e2   1.12
session-queue                    polling  7614793106     1099452          0   7.45e5   0.00
tcp4-established                 active       442520      848614          0   1.26e3   1.92
tcp4-input                       active       442523      848617          0   3.04e2   1.92
tcp4-output                      active       977678     1099456          0   3.77e2   1.12
tcp4-rcv-process                 active            1           1          0   5.82e3   1.00
tcp4-syn-sent                    active            2           2          0   6.84e4   1.00


> On Dec 13, 2019, at 12:58 PM, dch...@akouto.com wrote:
> 
> Hi,
> I rebuilt VPP on master and updated startup.conf to enable tso as follows:
> dpdk {
>   dev 0000:00:03.0 {
>   num-rx-desc 2048
>   num-tx-desc 2048
>   tso on
>   }
>   uio-driver vfio-pci
>   enable-tcp-udp-checksum
> }
> 
> I'm not sure whether it is working or not; there is nothing in show session 
> verbose 2 output to indicate whether it is on or off (output at the end of 
> this update).

Re: [vpp-dev] Based on the VPP to Nginx testing #ngnix #vpp

2019-12-26 Thread Florin Coras
Hi Yang.L, 

Could you try the latest master? We’ve finally merged the patch that moves the 
messages requesting the mapping of new segments from the binary API to the app 
worker message queues. 

Regards,
Florin 

> On Dec 26, 2019, at 12:10 AM, lin.yan...@zte.com.cn wrote:
> 
> Hi Florin,
> vcl.conf's contents are:
> vcl {
>   rx-fifo-size 4000000
>   tx-fifo-size 4000000
>   app-scope-local
>   app-scope-global
>   api-socket-name /run/vpp-api.sock
> }
> 
> the command line "show app" output:
> DBGvpp# sh app
> Index  Name                Namespace
> 0      tls                 default
> 1      ldp-60913-app[shm]  default
> DBGvpp# sh app 1
> app-name ldp-60913-app[shm] app-index 1 ns-index 0 seg-size 128m
> rx-fifo-size 3.81m tx-fifo-size 3.81m
> workers: wrk-index 1 app-index 1 map-index 0 api-client-index 0
>          wrk-index 2 app-index 1 map-index 1 api-client-index 256
> Everything looks fine. But when another machine starts wrk, the following 
> error occurs in nginx/log/error.log.
> Here's the error.log.
> epoll_ctl:2203: ldp<58628>: epfd 33 ep_vlsh 1, fd 62 vlsh 30, op 1
> ldp_accept4:2043: ldp<58628>: listen fd 32: calling vppcom_session_accept: 
> listen sid 0, ep 0x0, flags 0xdc50
> vppcom_session_accept:1521: vcl<58628:1>: listener 16777216 [0x0] accepted 31 
> [0x1f] peer: 192.168.3.66:55640 local: 192.168.3.65:8080
> epoll_ctl:2203: ldp<58628>: epfd 33 ep_vlsh 1, fd 63 vlsh 31, op 1
> ldp_accept4:2043: ldp<58628>: listen fd 32: calling vppcom_session_accept: 
> listen sid 0, ep 0x0, flags 0xdc50
> ssvm_slave_init_memfd:296: page size unknown: Bad file descriptor (errno 9)
> vcl_segment_attach:80: svm_fifo_segment_attach ('58606-5') failed
> vl_api_map_another_segment_t_handler:286: VCL<58628>: svm_fifo_segment_attach 
> ('58606-5') failed
> ssvm_slave_init_memfd:296: page size unknown: Bad file descriptor (errno 9)
> vcl_segment_attach:80: svm_fifo_segment_attach ('58606-6') failed
> vl_api_map_another_segment_t_handler:286: VCL<58628>: svm_fifo_segment_attach 
> ('58606-6') failed
> ssvm_slave_init_memfd:296: page size unknown: Bad file descriptor (errno 9)
> vcl_segment_attach:80: svm_fifo_segment_attach ('58606-7') failed
> vl_api_map_another_segment_t_handler:286: VCL<58628>: svm_fifo_segment_attach 
> ('58606-7') failed
> ssvm_slave_init_memfd:296: page size unknown: Bad file descriptor (errno 9)
> vcl_segment_attach:80: svm_fifo_segment_attach ('58606-8') failed
> vl_api_map_another_segment_t_handler:286: VCL<58628>: svm_fifo_segment_attach 
> ('58606-8') failed
> ssvm_slave_init_memfd:296: page size unknown: Bad file descriptor (errno 9)
> vcl_segment_attach:80: svm_fifo_segment_attach ('58606-9') failed
> vl_api_map_another_segment_t_handler:286: VCL<58628>: svm_fifo_segment_attach 
> ('58606-9') failed
> ssvm_slave_init_memfd:296: page size unknown: Bad file descriptor (errno 9)
> vcl_segment_attach:80: svm_fifo_segment_attach ('58606-10') failed
> vl_api_map_another_segment_t_handler:286: VCL<58628>: svm_fifo_segment_attach 
> ('58606-10') failed
> ssvm_slave_init_memfd:296: page size unknown: Bad file descriptor (errno 9)
> vcl_segment_attach:80: svm_fifo_segment_attach ('58606-11') failed
> vl_api_map_another_segment_t_handler:286: VCL<58628>: svm_fifo_segment_attach 
> ('58606-11') failed
> ssvm_slave_init_memfd:296: page size unknown: Bad file descriptor (errno 9)
> vcl_segment_attach:80: svm_fifo_segment_attach ('58606-12') failed
> vl_api_map_another_segment_t_handler:286: VCL<58628>: svm_fifo_segment_attach 
> ('58606-12') failed
> 
>  Can you help me analyze it?
> Thanks,
> Yang.L
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#14970): https://lists.fd.io/g/vpp-dev/message/14970
> Mute This Topic: https://lists.fd.io/mt/64501057/675152
> Mute #vpp: https://lists.fd.io/mk?hashtag=vpp&subid=1480544
> Mute #ngnix: https://lists.fd.io/mk?hashtag=ngnix&subid=1480544
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Based on the VPP to Nginx testing #ngnix #vpp

2019-12-26 Thread Florin Coras
Hi Yang.L, 

I suspect you may need to do a “git pull” and rebuild because the lines don’t 
match, i.e., vcl_session_accepted_handler:377 is now just an assignment. Let me 
know if that solves the issue.

Regards,
Florin

> On Dec 26, 2019, at 10:11 PM, lin.yan...@zte.com.cn wrote:
> 
> Hi Florin,
> I have tried the latest master. The problem is not resolved.
> Here's the nginx error information:
> 
> epoll_ctl:2203: ldp<269924>: epfd 33 ep_vlsh 1, fd 61 vlsh 29, op 1
> ldp_accept4:2043: ldp<269924>: listen fd 32: calling vppcom_session_accept: 
> listen sid 0, ep 0x0, flags 0xdc50
> vppcom_session_accept:1521: vcl<269924:1>: listener 16777216 [0x0] accepted 
> 30 [0x1e] peer: 192.168.3.66:47672 local: 192.168.3.65:8080
> epoll_ctl:2203: ldp<269924>: epfd 33 ep_vlsh 1, fd 62 vlsh 30, op 1
> ldp_accept4:2043: ldp<269924>: listen fd 32: calling vppcom_session_accept: 
> listen sid 0, ep 0x0, flags 0xdc50
> vppcom_session_accept:1521: vcl<269924:1>: listener 16777216 [0x0] accepted 
> 31 [0x1f] peer: 192.168.3.66:47674 local: 192.168.3.65:8080
> epoll_ctl:2203: ldp<269924>: epfd 33 ep_vlsh 1, fd 63 vlsh 31, op 1
> ldp_accept4:2043: ldp<269924>: listen fd 32: calling vppcom_session_accept: 
> listen sid 0, ep 0x0, flags 0xdc50
> vcl_session_accepted_handler:377: vcl<269924:1>: ERROR: segment for session 
> 32 couldn't be mounted!
> 2019/12/28 11:06:44 [error] 269924#0: accept4() failed (103: Software caused 
> connection abort)
> ldp_accept4:2043: ldp<269924>: listen fd 32: calling vppcom_session_accept: 
> listen sid 0, ep 0x0, flags 0xdc50
> vcl_session_accepted_handler:377: vcl<269924:1>: ERROR: segment for session 
> 32 couldn't be mounted!
> 2019/12/28 11:06:44 [error] 269924#0: accept4() failed (103: Software caused 
> connection abort)
> ldp_accept4:2043: ldp<269924>: listen fd 32: calling vppcom_session_accept: 
> listen sid 0, ep 0x0, flags 0xdc50
> vcl_session_accepted_handler:377: vcl<269924:1>: ERROR: segment for session 
> 32 couldn't be mounted!
> 2019/12/28 11:06:44 [error] 269924#0: accept4() failed (103: Software caused 
> connection abort)
> ldp_accept4:2043: ldp<269924>: listen fd 32: calling vppcom_session_accept: 
> listen sid 0, ep 0x0, flags 0xdc50
> vcl_session_accepted_handler:377: vcl<269924:1>: ERROR: segment for session 
> 32 couldn't be mounted!
> 2019/12/28 11:06:44 [error] 269924#0: accept4() failed (103: Software caused 
> connection abort)
> ldp_accept4:2043: ldp<269924>: listen fd 32: calling vppcom_session_accept: 
> listen sid 0, ep 0x0, flags 0xdc50
> vcl_session_accepted_handler:377: vcl<269924:1>: ERROR: segment for session 
> 32 couldn't be mounted!
> 2019/12/28 11:06:44 [error] 269924#0: accept4() failed (103: Software caused 
> connection abort)
> 
> Can you help me analyze it?
> Thanks,
> Yang.L
>  


Re: [vpp-dev] Do we have any plan to introduce packetdrill to test VPP hoststack?

2019-12-30 Thread Florin Coras
Hi Hanlin,

As far as I know, there are currently no plans for that, but that doesn’t mean 
we won’t accept contributions :-)

Regarding what’s been done: over the past year, I’ve been using Defensics 
Codenomicon to validate the TCP implementation before each release (more than 1M 
tests). FD.io does not own a Codenomicon license, so using something like 
packetdrill periodically seems like a really good alternative.  

Regards,
Florin

> On Dec 30, 2019, at 6:52 PM, wanghanlin  wrote:
> 
> Hi All,
> Do we have any plan to introduce packetdrill to test VPP hoststack?
> Or,how do we guarantee the correct implementation of the protocol stack?
> 
> Regards,
> Hanlin
> 
>   
> wanghanlin
> 
> wanghan...@corp.netease.com
>  
> 
> Signature customized by NetEase Mail Master


Re: [vpp-dev] Check in ip4_local_inline()

2020-01-03 Thread Florin Coras
Hi Nitin, 

I believe your observation is correct. Adding Neale in case we missed 
something. 

Regards,
Florin

> On Jan 3, 2020, at 10:32 AM, Nitin Saxena  wrote:
> 
> Hi Dave,
>  
> Thanks.
>  
> I agree with your point that there is less chance of consecutive packets 
> having same source IP. However both functions: ip4_local_check_src(), 
> ip4_local_check_src_x2() already have the trick to avoid fib lookup for 
> consecutive packets having the same source IP. Correct me if I am wrong: 
> currently the else {} part in both aforementioned functions seems to be dead 
> code, as PREDICT_FALSE(last_check->first) is always TRUE (last_check->first 
> is always 1 throughout the ip4_local_inline() function).
>  
> Also with my patch, there is no impact on cycle count of ip4_local node (both 
> x86 and ARM) where source IP increments for every packet in a terminating 
> frame. It does decrease cycles for ip4-local when all packets have similar 
> source IP.
>  
> So is there any gap in my understanding, or is it deliberate that the else {} 
> case is dead code?
>  
> Thanks,
> Nitin
>  
> From: Dave Barach (dbarach) <dbar...@cisco.com>
> Sent: Friday, January 3, 2020 8:08 PM
> To: Nitin Saxena <nsax...@marvell.com>; vpp-dev@lists.fd.io
> Subject: [EXT] RE: [vpp-dev] Check in ip4_local_inline()
>  
> External Email
> Ask yourself how often there will be precisely one source (or dst) IP address 
> in this path. Optimizing a specific lab/benchmark case may or may not make 
> sense.
>  
> D. 
>  
> From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Nitin Saxena
> Sent: Friday, January 3, 2020 8:02 AM
> To: vpp-dev@lists.fd.io 
> Subject: [vpp-dev] Check in ip4_local_inline()
>  
> Hi,
>  
> I am sending UDP termination packets to VPP interface with single source IP. 
> I find that fib lookup is happening for every packet, even if source IP for 
> current packet is same as last packet. Is it expected behavior? Following 
> patch seems to avoid lookup for every packet.
>  
> Thanks,
> Nitin
>  
> diff --git a/src/vnet/ip/ip4_forward.c b/src/vnet/ip/ip4_forward.c
> index aa554ea..59edaba 100644
> --- a/src/vnet/ip/ip4_forward.c
> +++ b/src/vnet/ip/ip4_forward.c
> @@ -1542,6 +1542,7 @@ ip4_local_check_src (vlib_buffer_t * b, ip4_header_t * 
> ip0,
>last_check->src.as_u32 = ip0->src_address.as_u32;
>last_check->lbi = lbi0;
>last_check->error = *error0;
> +  last_check->first = 0;
>  }
>else
>  {
> @@ -1549,7 +1550,6 @@ ip4_local_check_src (vlib_buffer_t * b, ip4_header_t * 
> ip0,
> vnet_buffer (b)->ip.adj_index[VLIB_TX];
>vnet_buffer (b)->ip.adj_index[VLIB_TX] = last_check->lbi;
>*error0 = last_check->error;
> -  last_check->first = 0;
>  }
> }
>  
> @@ -1638,6 +1638,7 @@ ip4_local_check_src_x2 (vlib_buffer_t ** b, 
> ip4_header_t ** ip,
>last_check->src.as_u32 = ip[1]->src_address.as_u32;
>last_check->lbi = lbi[1];
>last_check->error = error[1];
> +  last_check->first = 0;
>  }
>else
>  {
> @@ -1651,7 +1652,6 @@ ip4_local_check_src_x2 (vlib_buffer_t ** b, 
> ip4_header_t ** ip,
>  
>error[0] = last_check->error;
>error[1] = last_check->error;
> -  last_check->first = 0;
>  }
> }
>  
>  
>  
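The caching trick discussed above — remember the last source address and its lookup result, and reuse it when consecutive packets match — can be sketched outside VPP as follows. This is a hedged illustration with hypothetical names (`LastCheck`, `check_src`, `expensive_lookup` stand in for VPP's `ip4_local_check_src` machinery; they are not VPP code), showing why clearing `first` on the lookup path makes the cache effective:

```python
# Sketch of the "cache the last lookup" pattern from the thread above
# (hypothetical stand-ins for ip4_local_check_src; not VPP code).

class LastCheck:
    def __init__(self):
        self.first = True   # no cached entry yet
        self.src = None     # last source address seen
        self.result = None  # cached lookup result for that source

def expensive_lookup(src, table):
    """Stand-in for the per-packet FIB lookup we want to avoid."""
    return table[src]

def check_src(src, table, last, stats):
    # Re-run the lookup only when the source differs from the cached one
    # or nothing has been cached yet -- and clear `first` on the lookup
    # path, which is the gist of the proposed patch. If `first` were
    # never cleared, the cached branch would be dead code.
    if last.first or src != last.src:
        stats["lookups"] += 1
        last.result = expensive_lookup(src, table)
        last.src = src
        last.first = False
    return last.result

table = {"10.0.0.1": "lbi-7", "10.0.0.2": "lbi-9"}
stats = {"lookups": 0}
last = LastCheck()
results = [check_src(s, table, last, stats)
           for s in ["10.0.0.1", "10.0.0.1", "10.0.0.1", "10.0.0.2"]]
print(results, stats["lookups"])  # only 2 lookups for 4 packets
```

With the `first` flag cleared on the lookup path, four packets from two sources trigger only two lookups; without it, every packet would take the lookup branch.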


Re: [vpp-dev] Is VppCom suitable for this scenario

2020-01-06 Thread Florin Coras
Hi Satya, 

VCL is a library that applications can link against to interact with vpp’s host 
stack in a way that is similar to, but not quite, POSIX compliant. That is, it 
can be used to open sessions over various transports (e.g., tcp, quic, tls) and 
to send/receive data in a synchronous or asynchronous manner. The exchange of 
data and notifications between vpp and vcl is done using shared memory, but 
everything is organized around sessions, which in vpp are associated to worker 
threads.

In theory, you could imagine your control plane app listening on a given tcp 
port and then writing a vpp builtin application (see [1]) that tcp connects on 
all workers to the control plane session. These per-worker sessions are what 
you’re looking for at point 3 and their buffers (we call them fifos) can be 
larger than 1MB. However, ultimately, this leads to writing an input node for 
the builtin app where the workers read the messages the control plane app sent 
on each session and react to it. So, I’m not sure it’s worth using all of this 
infra instead of just writing your custom one. I guess it depends on what you 
need to achieve.  
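The topology sketched above — one control-plane listener, with each worker opening its own connection so per-worker messages stay on per-worker sessions — can be illustrated with plain BSD sockets. This is only a shape-of-the-idea sketch: a real VPP builtin app would use the vnet session API and fifos, not kernel sockets, and all names here are invented for the example:

```python
# Plain-socket illustration of the per-worker control sessions described
# above: one control-plane listener, one TCP connection per worker.
import socket
import threading

HOST, NUM_WORKERS = "127.0.0.1", 3

listener = socket.socket()
listener.bind((HOST, 0))            # ephemeral port, fine for a sketch
listener.listen(NUM_WORKERS)
port = listener.getsockname()[1]

def worker(idx):
    # One connection per worker: anything the control plane sends on it
    # is handled by that worker alone.
    with socket.create_connection((HOST, port)) as s:
        s.sendall(b"hello from worker %d" % idx)

threads = [threading.Thread(target=worker, args=(i,))
           for i in range(NUM_WORKERS)]
for t in threads:
    t.start()

msgs = []
for _ in range(NUM_WORKERS):
    conn, _ = listener.accept()
    data = b""
    while True:                      # read until the worker closes
        chunk = conn.recv(64)
        if not chunk:
            break
        data += chunk
    msgs.append(data.decode())
    conn.close()
for t in threads:
    t.join()
listener.close()

print(sorted(msgs))
```

The point of the sketch is the fan-in: the control-plane side ends up with one distinct session per worker, which is what makes per-worker buffers (fifos) larger than 1 MB possible in the VPP case.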

More info on the host stack here [2]. You probably want to check the 
presentation at the end [3]. 

Regards, 
Florin

[1] https://git.fd.io/vpp/tree/src/plugins/hs_apps/echo_client.c 

[2] https://wiki.fd.io/view/VPP/HostStack 

[3] https://wiki.fd.io/images/9/9c/Vpp-hoststack-kc-eu19.pdf 



> On Jan 6, 2020, at 2:05 AM, Satya Murthy  wrote:
> 
> Hi ,
> 
> Have one basic doubt on applicability of VppCom library for a use case that 
> we have as below.
> 
> Use Case with following requirements:
> 1. control plane app needs to communicate with different VPP worker threads
> 2. control plane app may need to send messages to vpp workers with message 
> size that can span upto a max size of 1 MB.
> 3. control plane app needs to have different VppCom channels with each worker
> 
> For the above scenario, is VppCom a suitable infrastructure ? 
> Using memif causes max size limit at 64KB. Hence, we are thinking about 
> alternatives.
> 
> Please share your inputs on this. 
> ( Also, is there any documentation on VppCom library ? ) 
> 
> -- 
> Thanks & Regards,
> Murthy


Re: [vpp-dev] Is VppCom suitable for this scenario

2020-01-07 Thread Florin Coras
Hi Satya,

Glad it helped!

You may want to use svm queues or message queues similarly to how the session 
queue node does. For instance, see how messages are dequeued in 
session_queue_node_fn.
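The dequeue pattern Florin points at — a producer enqueues messages into a bounded queue and the consumer node drains them in batches per dispatch — looks roughly like the toy below. This is not VPP's actual svm_msg_q API (which lives in shared memory with its own locking); it is a single-process sketch of the enqueue / batched-dequeue shape only:

```python
# Toy bounded message queue illustrating the enqueue / batched-dequeue
# pattern of the session queue node (not the real svm_msg_q API).
from collections import deque

class MsgQueue:
    def __init__(self, size):
        self.size = size
        self.ring = deque()

    def enqueue(self, msg):
        if len(self.ring) >= self.size:
            return False             # queue full: producer must retry
        self.ring.append(msg)
        return True

    def dequeue_batch(self, max_msgs):
        # Drain up to a budget of messages per dispatch, then return to
        # the main loop -- the consumer never spins on one full queue.
        batch = []
        while self.ring and len(batch) < max_msgs:
            batch.append(self.ring.popleft())
        return batch

q = MsgQueue(size=8)
for i in range(5):
    q.enqueue(("app-msg", i))
first = q.dequeue_batch(3)   # [('app-msg', 0), ('app-msg', 1), ('app-msg', 2)]
second = q.dequeue_batch(3)  # [('app-msg', 3), ('app-msg', 4)]
print(first, second)
```

In the real thing the ring lives in shared memory and the budget keeps one busy app from starving the worker's other sessions.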

Regards,
Florin 

> On Jan 6, 2020, at 10:45 PM, Satya Murthy  wrote:
> 
> Hi Florin,
> 
> Thank you very much for quick inputs.  I have gone through your youtube video 
> from kubecon and it cleared lot of my doubts.
> You presented it in a very clear manner.
> 
> As you rightly pointed out, VppCom would be an overhead for our use case.
> All we need is shared-memory communication to send and receive bigger 
> messages.
> Memif was not a candidate for this, since it imposes a message size 
> restriction of 64 KB.
> 
> In this case, what framework can we use to send/recv messages from VPP 
> workers across shared memory?
> Can we use SVM queues directly, get the message into our custom VPP plugin, 
> and process it 
> (in the case of VPP receiving a message from the control plane app)?
> 
> Any example code that already does this? If so, can you please point it out 
> to us.
> -- 
> Thanks & Regards,
> Murthy


Re: [vpp-dev] vpp assert error whtn nginx start with ldp

2020-01-07 Thread Florin Coras
Hi, 

Not entirely sure what’s happening there. I just tried nginx with latest master 
and binding 4 workers seems to work. 

In your case it looks as if a listening session associated to an app listener 
was freed. Not sure how that could happen. Anything special about your nginx or 
vcl configuration?

Regards,
Florin

> On Jan 6, 2020, at 10:38 PM, jiangxiaom...@outlook.com wrote:
> 
> VPP crashes when nginx is started with LDP. The vpp code is master 
> 78565f38e8436dae9cd3a891b5e5d929209c87f9.
> The crash stack is below. Does anyone have a solution?
> 
> DBGvpp# 0: vl_api_memclnt_delete_t_handler:277: Stale clnt delete index 
> 16777215 old epoch 255 cur epoch 0
> 
> 0: /home/dev/code/net-base/build/vpp/src/vnet/session/session.h:320 
> (session_get_from_handle) assertion `! pool_is_free 
> (smm->wrk[thread_index].sessions, _e)' fails
> 
>  
> 
> Program received signal SIGABRT, Aborted.
> 
> 0x74a7 in __GI_raise (sig=sig@entry=6) at 
> ../nptl/sysdeps/unix/sysv/linux/raise.c:55
> 
> 55   return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
> 
> (gdb) bt
> 
> #0  0x74a7 in __GI_raise (sig=sig@entry=6) at 
> ../nptl/sysdeps/unix/sysv/linux/raise.c:55
> 
> #1  0x74a34a28 in __GI_abort () at abort.c:90
> 
> #2  0x00407458 in os_panic () at 
> /home/dev/code/net-base/build/vpp/src/vpp/vnet/main.c:355
> 
> #3  0x7587ad1f in debugger () at 
> /home/dev/code/net-base/build/vpp/src/vppinfra/error.c:84
> 
> #4  0x7587b0ee in _clib_error (how_to_die=2, function_name=0x0, 
> line_number=0, fmt=0x7772b0c8 "%s:%d (%s) assertion `%s' fails") at 
> /home/dev/code/net-base/build/vpp/src/vppinfra/error.c:143
> 
> #5  0x773da25f in session_get_from_handle (handle=2) at 
> /home/dev/code/net-base/build/vpp/src/vnet/session/session.h:320
> 
> #6  0x773da330 in listen_session_get_from_handle (handle=2) at 
> /home/dev/code/net-base/build/vpp/src/vnet/session/session.h:548
> 
> #7  0x773dac6b in app_listener_lookup (app=0x7fffd72f2188, 
> sep_ext=0x7fffdc84fc80) at 
> /home/dev/code/net-base/build/vpp/src/vnet/session/application.c:122
> 
> #8  0x773de10d in vnet_listen (a=0x7fffdc84fc80) at 
> /home/dev/code/net-base/build/vpp/src/vnet/session/application.c:979
> 
> #9  0x773c33a9 in session_mq_listen_handler (data=0x13007fb89) at 
> /home/dev/code/net-base/build/vpp/src/vnet/session/session_node.c:62
> 
> #10 0x77bb4f8a in vl_api_rpc_call_t_handler (mp=0x13007fb70) at 
> /home/dev/code/net-base/build/vpp/src/vlibmemory/vlib_api.c:519
> 
> #11 0x77bc8dfc in vl_msg_api_handler_with_vm_node (am=0x77dd9e40 
> , vlib_rp=0x130021000, the_msg=0x13007fb70, 
> vm=0x766c0640 , node=0x7fffdc847000, is_private=0 
> '\000') at /home/dev/code/net-base/build/vpp/src/vlibapi/api_shared.c:603
> 
> #12 0x77b9815c in vl_mem_api_handle_rpc (vm=0x766c0640 
> , node=0x7fffdc847000) at 
> /home/dev/code/net-base/build/vpp/src/vlibmemory/memory_api.c:748
> 
> #13 0x77bb3e05 in vl_api_clnt_process (vm=0x766c0640 
> , node=0x7fffdc847000, f=0x0) at 
> /home/dev/code/net-base/build/vpp/src/vlibmemory/vlib_api.c:326
> 
> #14 0x7641f1f5 in vlib_process_bootstrap (_a=140736887348176) at 
> /home/dev/code/net-base/build/vpp/src/vlib/main.c:1475
> 
> #15 0x7589aef4 in clib_calljmp () at 
> /home/dev/code/net-base/build/vpp/src/vppinfra/longjmp.S:123
> 
> #16 0x7fffdc2d5ba0 in ?? ()
> 
> #17 0x7641f2fd in vlib_process_startup (vm=0x7641fca0 
> , p=0x7fffdc2d5ca0, f=0x) at 
> /home/dev/code/net-base/build/vpp/src/vlib/main.c:1497
> 
> Backtrace stopped: previous frame inner to this frame (corrupt stack?)
> 
> (gdb) up 5
> 
> #5  0x773da25f in session_get_from_handle (handle=2) at 
> /home/dev/code/net-base/build/vpp/src/vnet/session/session.h:320
> 
> 320   return pool_elt_at_index (smm->wrk[thread_index].sessions, 
> session_index);
> 
> (gdb) print thread_index
> 
> $1 = 0
> 
> (gdb) info thread
> 
>   Id   Target Id Frame 
> 
>   3Thread 0x7fffb4e51700 (LWP 101019) "vpp_wk_0" 0x764188e6 in 
> vlib_worker_thread_barrier_check () at 
> /home/dev/code/net-base/build/vpp/src/vlib/threads.h:425
> 
>   2Thread 0x7fffb5652700 (LWP 101018) "eal-intr-thread" 
> 0x74afbe63 in epoll_wait () at ../sysdeps/unix/syscall-template.S:81
> 
> * 1Thread 0x77fd87c0 (LWP 101001) "vpp_main" 0x74a7 in 
> __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:55
> 
> (gdb) print session_index
> 
> $2 = 2
> 
>  
> 

[vpp-dev] Arm verify job broken

2020-01-14 Thread Florin Coras
Hi, 

Jobs have been failing since yesterday. Did anybody try to look into it?

Regards,
Florin


Re: [vpp-dev] #vpp-hoststack - Issue with UDP receiver application using VCL library

2020-01-14 Thread Florin Coras
Hi Raj,

Session layer does support connection-less transports but udp does not raise 
accept notifications to vcl. UDPC might, but we haven’t tested udpc with vcl in 
a long time so it might not work properly. 

What was the problem you were hitting in the non-connected case?

Regards,
Florin
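For reference, the "non-connected case" asked about above is the ordinary connection-less flow: socket/bind/recvfrom with no listen() or accept(). The sketch below shows that flow with kernel sockets (under LD_PRELOAD the same libc calls would be intercepted by VCL, where — per this thread — UDP behavior may differ; this runs against the kernel stack purely for illustration):

```python
# Connection-less UDP receive: no listen()/accept(), just bind() and
# recvfrom(). Under LD_PRELOAD these calls would go through VCL instead
# of the kernel; here they run against the kernel stack.
import socket

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))            # ephemeral port for the sketch
port = rx.getsockname()[1]

# A second socket stands in for the remote sender.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"ping", ("127.0.0.1", port))

data, peer = rx.recvfrom(2048)       # blocks until a datagram arrives
print(data, peer[0])
tx.close()
rx.close()
```

The listen()/accept() workaround described in the quoted message below is what the application fell back to when this plain recvfrom() path did not work through VCL.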

> On Jan 14, 2020, at 7:13 AM, raj.gauta...@gmail.com wrote:
> 
> Hi ,
> I am trying some host application tests (using LD_PRELOAD). TCP rx and tx 
> both work fine. UDP tx also works fine. 
> The issue is only with UDP rx. In some discussion it was mentioned that the 
> session layer does not support connection-less transports, so protocols like 
> udp still need to accept connections and only afterwards read from the fifos.
> So, I changed the UDP receiver application to use listen() and accept() 
> before read(). But I am still having issues making it run. 
> After I start udp traffic from the other server, it seems to accept the 
> connection but never returns from the vppcom_session_accept() function.
> VPP release is 19.08.
> 
> vpp# sh app server
> Connection  App  Wrk
> [0:0][CT:U] 0.0.0.0:8090->0.0.0.0:0 ldp-36646-app[shm]0
> [#0][U] 0.0.0.0:8090->0.0.0.0:0 ldp-36646-app[shm]0
> vpp#
>
>
> [root@orc01 testcode]#  VCL_DEBUG=2 LDP_DEBUG=2 
> LD_PRELOAD=/opt/vpp/build-root/install-vpp-native/vpp/lib/libvcl_ldpreload.so 
>  VCL_CONFIG=/etc/vpp/vcl.cfg ./udp_rx
> VCL<36646>: configured VCL debug level (2) from VCL_DEBUG!
> VCL<36646>: allocated VCL heap = 0x7f77e5309010, size 268435456 (0x10000000)
> VCL<36646>: configured rx_fifo_size 4000000 (0x3d0900)
> VCL<36646>: configured tx_fifo_size 4000000 (0x3d0900)
> VCL<36646>: configured app_scope_local (1)
> VCL<36646>: configured app_scope_global (1)
> VCL<36646>: configured api-socket-name (/tmp/vpp-api.sock)
> VCL<36646>: completed parsing vppcom config!
> vppcom_connect_to_vpp:549: vcl<36646:0>: app (ldp-36646-app) is connected to 
> VPP!
> vppcom_app_create:1067: vcl<36646:0>: sending session enable
> vppcom_app_create:1075: vcl<36646:0>: sending app attach
> vppcom_app_create:1084: vcl<36646:0>: app_name 'ldp-36646-app', 
> my_client_index 0 (0x0)
> ldp_init:209: ldp<36646>: configured LDP debug level (2) from env var 
> LDP_DEBUG!
> ldp_init:282: ldp<36646>: LDP initialization: done!
> ldp_constructor:2490: LDP<36646>: LDP constructor: done!
> socket:974: ldp<36646>: calling vls_create: proto 1 (UDP), is_nonblocking 0
> vppcom_session_create:1142: vcl<36646:0>: created session 0
> Socket successfully created..
> bind:1086: ldp<36646>: fd 32: calling vls_bind: vlsh 0, addr 0x7fff3f3c1040, 
> len 16
> vppcom_session_bind:1280: vcl<36646:0>: session 0 handle 0: binding to local 
> IPv4 address 0.0.0.0 port 8090, proto UDP
> vppcom_session_listen:1312: vcl<36646:0>: session 0: sending vpp listen 
> request...
> vcl_session_bound_handler:610: vcl<36646:0>: session 0 [0x1]: listen 
> succeeded!
> bind:1102: ldp<36646>: fd 32: returning 0
> Socket successfully binded..
> listen:2005: ldp<36646>: fd 32: calling vls_listen: vlsh 0, n 5
> vppcom_session_listen:1308: vcl<36646:0>: session 0 [0x1]: already in listen 
> state!
> listen:2020: ldp<36646>: fd 32: returning 0
> Server listening..
> ldp_accept4:2043: ldp<36646>: listen fd 32: calling vppcom_session_accept: 
> listen sid 0, ep 0x0, flags 0x3f3c0fc0
> vppcom_session_accept:1478: vcl<36646:0>: discarded event: 0
>
> 



Re: [vpp-dev] Arm verify job broken

2020-01-14 Thread Florin Coras
Thanks, Ed!

Florin

> On Jan 14, 2020, at 11:53 AM, Ed Kern (ejk)  wrote:
> 
> looking into it now…..
> 
> its a strange one ill tell you that up front…failures are all over the place 
> inside the build and even some hitting the 120 min timeout….
> 
> more as i dig hopefully
> 
> thanks for the ping
> 
> Ed
> 
> 
> 
>> On Jan 14, 2020, at 11:40 AM, Florin Coras <fcoras.li...@gmail.com> wrote:
>> Hi, 
>> 
>> Jobs have been failing since yesterday. Did anybody try to look into it?
>> 
>> Regards,
>> Florin
>> 
> 



Re: [vpp-dev] #vpp-hoststack - Issue with UDP receiver application using VCL library

2020-01-14 Thread Florin Coras
 advertisements 
> received
>  1 ip4-icmp-input echo replies sent
> 89   lldp-input   lldp packets received on 
> disabled interfaces
>   1328llc-input   unknown llc ssap/dsap
> vpp#
> 
> vpp# show trace
> --- Start of thread 0 vpp_main ---
> Packet 1
> 
> 00:23:39:401354: dpdk-input
>   HundredGigabitEthernet12/0/0 rx queue 0
>   buffer 0x8894e: current data 0, length 1516, buffer-pool 0, ref-count 1, 
> totlen-nifb 0, trace handle 0x0
>   ext-hdr-valid
>   l4-cksum-computed l4-cksum-correct
>   PKT MBUF: port 0, nb_segs 1, pkt_len 1516
> buf_len 2176, data_len 1516, ol_flags 0x180, data_off 128, phys_addr 
> 0x75025400
> packet_type 0x2e1 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
> rss 0x0 fdir.hi 0x0 fdir.lo 0x0
> Packet Offload Flags
>   PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
>   PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
> Packet Types
>   RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
>   RTE_PTYPE_L3_IPV6_EXT_UNKNOWN (0x00e0) IPv6 packet with or without 
> extension headers
>   RTE_PTYPE_L4_UDP (0x0200) UDP packet
>   IP6: b8:83:03:79:9f:e4 -> b8:83:03:79:af:8c 802.1q vlan 2001
>   UDP: fd0d:edc4::2001::201 -> fd0d:edc4::2001::203
> tos 0x00, flow label 0x0, hop limit 64, payload length 1458
>   UDP: 60593 -> 8092
> length 1458, checksum 0x0964
> 00:23:39:401355: ethernet-input
>   frame: flags 0x3, hw-if-index 2, sw-if-index 2
>   IP6: b8:83:03:79:9f:e4 -> b8:83:03:79:af:8c 802.1q vlan 2001
> 00:23:39:401356: ip6-input
>   UDP: fd0d:edc4::2001::201 -> fd0d:edc4::2001::203
> tos 0x00, flow label 0x0, hop limit 64, payload length 1458
>   UDP: 60593 -> 8092
> length 1458, checksum 0x0964
> 00:23:39:401357: ip6-lookup
>   fib 0 dpo-idx 5 flow hash: 0x
>   UDP: fd0d:edc4::2001::201 -> fd0d:edc4::2001::203
> tos 0x00, flow label 0x0, hop limit 64, payload length 1458
>   UDP: 60593 -> 8092
> length 1458, checksum 0x0964
> 00:23:39:401361: ip6-local
> UDP: fd0d:edc4::2001::201 -> fd0d:edc4::2001::203
>   tos 0x00, flow label 0x0, hop limit 64, payload length 1458
> UDP: 60593 -> 8092
>   length 1458, checksum 0x0964
> 00:23:39:401362: ip6-udp-lookup
>   UDP: src-port 60593 dst-port 8092 (no listener)
> 00:23:39:401362: ip6-icmp-error
>   UDP: fd0d:edc4::2001::201 -> fd0d:edc4::2001::203
> tos 0x00, flow label 0x0, hop limit 64, payload length 1458
>   UDP: 60593 -> 8092
> length 1458, checksum 0x0964
> 00:23:39:401363: error-drop
>   rx:HundredGigabitEthernet12/0/0.2001
> 00:23:39:401364: drop
>   ip6-input: valid ip6 packets
> 
> vpp#
> 
> 
> Thanks,
> -Raj
> 
> 
> On Tue, Jan 14, 2020 at 1:44 PM Florin Coras  <mailto:fcoras.li...@gmail.com>> wrote:
> Hi Raj,
> 
> Session layer does support connection-less transports but udp does not raise 
> accept notifications to vcl. UDPC might, but we haven’t tested udpc with vcl 
> in a long time so it might not work properly. 
> 
> What was the problem you were hitting in the non-connected case?
> 
> Regards,
> Florin
> 
> > On Jan 14, 2020, at 7:13 AM, raj.gauta...@gmail.com 
> > <mailto:raj.gauta...@gmail.com> wrote:
> > 
> > Hi,
> > I am trying some host application tests (using LD_PRELOAD). TCP rx and 
> > tx both work fine. UDP tx also works fine. 
> > The issue is only with UDP rx. In some discussion it was mentioned that 
> > the session layer does not support connection-less transports, so protocols 
> > like udp still need to accept connections and only afterwards read from the 
> > fifos.
> > So, I changed the UDP receiver application to use listen() and accept() 
> > before read(). But I am still having trouble making it run. 
> > After I start udp traffic from another server, it seems to accept the 
> > connection but never returns from the vppcom_session_accept() function.
> > VPP release is 19.08.
> > 
> > vpp# sh app server
> > Connection  App  Wrk
> > [0:0][CT:U] 0.0.0.0:8090->0.0.0.0:0 <http://0.0.0.0:0/> 
> > ldp-36646-app[shm]0
> > [#0][U] 0.0.0.0:8090->0.0.0.0:0 <http://0.0.0.0:0/> 
> > ldp-36646-app[shm]0
> > vpp#
> >
> >
> > [root@orc01 testcode]#  VCL_DEBUG=2 LDP_DEBUG=2 
> > LD_PRELOAD=/opt/vpp/build-root/install-vpp-native/vpp/lib/libvcl_ldpreload.s

Re: [vpp-dev] [VCL] hoststack app crash with invalid memfd segment address

2020-01-19 Thread Florin Coras
Hi Hanlin, 

Thanks for confirming!

Regards,
Florin

> On Jan 18, 2020, at 7:00 PM, wanghanlin  wrote:
> 
> Hi Florin,
> With the latest master code, the problem regarding 3) has been fixed.
> 
> Thanks & Regards,
> Hanlin
> 
>   
> wanghanlin
> 
> wanghan...@corp.netease.com
>  
> On 12/12/2019 14:53,wanghanlin 
> <mailto:wanghan...@corp.netease.com> wrote: 
> That's great! 
> I'll apply and check it soon.
> 
> Thanks & Regards,
> Hanlin
> 
> On 12/12/2019 04:15,Florin Coras 
> <mailto:fcoras.li...@gmail.com> wrote: 
> Hi Hanlin, 
> 
> Thanks to Dave, we can now have per thread binary api connections to vpp. 
> I’ve updated the socket client and vcl to leverage this so, after [1] we have 
> per vcl worker thread binary api sockets that are used to exchange fds. 
> 
> Let me know if you’re still hitting the issue. 
> 
> Regards,
> Florin
> 
> [1] https://gerrit.fd.io/r/c/vpp/+/23687 
> <https://gerrit.fd.io/r/c/vpp/+/23687>
> 
>> On Nov 22, 2019, at 10:30 AM, Florin Coras > <mailto:fcoras.li...@gmail.com>> wrote:
>> 
>> Hi Hanlin, 
>> 
>> Okay, that’s a different issue. The expectation is that each vcl worker has 
>> a different binary api transport into vpp. This assumption holds for 
>> applications with multiple process workers (like nginx) but is not 
>> completely satisfied for applications with thread workers. 
>> 
>> Namely, for each vcl worker we connect over the socket api to vpp and 
>> initialize the shared memory transport (so binary api messages are delivered 
>> over shared memory instead of the socket). However, as you’ve noted, the 
>> socket client is currently not multi-thread capable, consequently we have an 
>> overlap of socket client fds between the workers. The first segment is 
>> assigned properly but the subsequent ones will fail in this scenario. 
>> 
>> I wasn’t aware of this so we’ll have to either fix the socket binary api 
>> client, for multi-threaded apps, or change the session layer to use 
>> different fds for exchanging memfd fds. 
>> 
>> Regards, 
>> Florin
>> 
>>> On Nov 21, 2019, at 11:47 PM, wanghanlin >> <mailto:wanghan...@corp.netease.com>> wrote:
>>> 
>>> Hi Florin,
>>> Regarding 3), I think the main problem may be in function 
>>> vl_socket_client_recv_fd_msg, called by vcl_session_app_add_segment_handler. 
>>> Multiple worker threads share the same scm->client_socket.fd, so B2 may 
>>> receive the segment memfd belonging to A1.
>>> 
>>>
>>> Regards,
>>> Hanlin
>>> 
>>> On 11/22/2019 01:44,Florin Coras 
>>> <mailto:fcoras.li...@gmail.com> wrote: 
>>> Hi Hanlin, 
>>> 
>>> As Jon pointed out, you may want to register with gerrit. 
>>> 
>>> You comments with respect to points 1) and 2) are spot on. I’ve updated the 
>>> patch to fix them. 
>>> 
>>> Regarding 3), if I understood your scenario correctly, it should not 
>>> happen. The ssvm infra forces applications to map segments at fixed 
>>> addres

Re: [vpp-dev] #vpp-hoststack - Issue with UDP receiver application using VCL library

2020-01-19 Thread Florin Coras
tive sessions 2
> 
> [root@orc01 vcl_test]# cat /etc/vpp/vcl.conf
> vcl {
>   rx-fifo-size 400
>   tx-fifo-size 400
>   app-scope-local
>   app-scope-global
>   api-socket-name /tmp/vpp-api.sock
> }
> [root@orc01 vcl_test]#
> 
> --- Start of thread 0 vpp_main ---
> Packet 1
> 
> 00:09:53:445025: dpdk-input
>   HundredGigabitEthernet12/0/0 rx queue 0
>   buffer 0x88078: current data 0, length 1516, buffer-pool 0, ref-count 1, 
> totlen-nifb 0, trace handle 0x0
>   ext-hdr-valid
>   l4-cksum-computed l4-cksum-correct
>   PKT MBUF: port 0, nb_segs 1, pkt_len 1516
> buf_len 2176, data_len 1516, ol_flags 0x180, data_off 128, phys_addr 
> 0x75601e80
> packet_type 0x2e1 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
> rss 0x0 fdir.hi 0x0 fdir.lo 0x0
> Packet Offload Flags
>   PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
>   PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
> Packet Types
>   RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
>   RTE_PTYPE_L3_IPV6_EXT_UNKNOWN (0x00e0) IPv6 packet with or without 
> extension headers
>   RTE_PTYPE_L4_UDP (0x0200) UDP packet
>   IP6: b8:83:03:79:9f:e4 -> b8:83:03:79:af:8c 802.1q vlan 2001
>   UDP: fd0d:edc4::2001::201 -> fd0d:edc4::2001::203
> tos 0x00, flow label 0x0, hop limit 64, payload length 1458
>   UDP: 56944 -> 8092
> length 1458, checksum 0xb22d
> 00:09:53:445028: ethernet-input
>   frame: flags 0x3, hw-if-index 2, sw-if-index 2
>   IP6: b8:83:03:79:9f:e4 -> b8:83:03:79:af:8c 802.1q vlan 2001
> 00:09:53:445029: ip6-input
>   UDP: fd0d:edc4::2001::201 -> fd0d:edc4::2001::203
> tos 0x00, flow label 0x0, hop limit 64, payload length 1458
>   UDP: 56944 -> 8092
> length 1458, checksum 0xb22d
> 00:09:53:445031: ip6-lookup
>   fib 0 dpo-idx 6 flow hash: 0x
>   UDP: fd0d:edc4::2001::201 -> fd0d:edc4::2001::203
> tos 0x00, flow label 0x0, hop limit 64, payload length 1458
>   UDP: 56944 -> 8092
> length 1458, checksum 0xb22d
> 00:09:53:445032: ip6-local
> UDP: fd0d:edc4::2001::201 -> fd0d:edc4::2001::203
>   tos 0x00, flow label 0x0, hop limit 64, payload length 1458
> UDP: 56944 -> 8092
>   length 1458, checksum 0xb22d
> 00:09:53:445032: ip6-udp-lookup
>   UDP: src-port 56944 dst-port 8092
> 00:09:53:445033: udp6-input
>   UDP_INPUT: connection 0, disposition 5, thread 0
> 
> 
> thanks,
> -Raj
> 
>
> On Wed, Jan 15, 2020 at 4:09 PM Raj Kumar via Lists.Fd.Io 
> <http://lists.fd.io/>  <mailto:gmail@lists.fd.io>> wrote:
> Hi Florin,
> Yes,  [2] patch resolved the  IPv6/UDP receiver issue. 
> Thanks! for your help.
> 
> thanks,
> -Raj
> 
> On Tue, Jan 14, 2020 at 9:35 PM Florin Coras  <mailto:fcoras.li...@gmail.com>> wrote:
> Hi Raj, 
> 
> First of all, with this [1], the vcl test app/client can establish a udpc 
> connection. Note that udp will most probably lose packets, so large exchanges 
> with those apps may not work. 
> 
> As for the second issue, does [2] solve it?
> 
> Regards, 
> Florin
> 
> [1] https://gerrit.fd.io/r/c/vpp/+/24332 
> <https://gerrit.fd.io/r/c/vpp/+/24332>
> [2] https://gerrit.fd.io/r/c/vpp/+/24334 
> <https://gerrit.fd.io/r/c/vpp/+/24334>
> 
>> On Jan 14, 2020, at 12:59 PM, Raj Kumar > <mailto:raj.gauta...@gmail.com>> wrote:
>> 
>> Hi Florin,
>> Thanks! for the reply. 
>> 
>> I realized the issue with the non-connected case. For receiving datagrams, 
>> I was using recvfrom() with the MSG_DONTWAIT flag; because of that, the 
>> vppcom_session_recvfrom() api was failing. It expects either 0 or the 
>> MSG_PEEK flag:
>>   if (flags == 0)
>>     rv = vppcom_session_read (session_handle, buffer, buflen);
>>   else if (flags & MSG_PEEK)
>>     rv = vppcom_session_peek (session_handle, buffer, buflen);
>>   else
>>     {
>>       VDBG (0, "Unsupport flags for recvfrom %d", flags);
>>       return VPPCOM_EAFNOSUPPORT;
>>     }
>> 
>> I changed the flag to 0 in recvfrom(); after that, UDP rx is working fine, 
>> but only for IPv4.
>> 
>> I am facing a different issue with the IPv6/UDP receiver. I am getting a "no 
>> listener for dst port" error.
>>
>> Please let me know if I am doing something wrong. 
>> Here are the traces : -
>> 
>> [root@orc01 testcode]# VCL_DEBUG=2 LDP_DEBUG=2 
>> LD_PRELOAD=/opt/vpp/build-root/install-vpp-native/vpp/lib/libvcl_ldpreload.so
>>   VCL_CONFIG=/etc/vpp/vcl.cfg ./udp6

Re: [vpp-dev] #vpp-hoststack - Issue with UDP receiver application using VCL library

2020-01-20 Thread Florin Coras
Hi Raj, 

Good to see progress. Check with “show int” the tx counters on the sender and 
rx counters on the receiver as the interfaces might be dropping traffic. One 
sender should be able to do more than 5Gbps. 

How big are the writes to the tx fifo? Make sure the tx buffer is some tens of 
kB. 

As for the issue with the number of workers, you’ll have to switch to udpc 
(connected udp), to ensure you have a separate connection for each ‘flow’, and 
to use accept in combination with epoll to accept the sessions udpc creates. 

Note that udpc currently does not work correctly with vcl and multiple vpp 
workers if vcl is the sender (not the receiver) and traffic is bidirectional. 
The sessions are all created on the first thread and once return traffic is 
received, they’re migrated to the thread selected by RSS hashing. VCL is not 
notified when that happens and it runs out of sync. You might not be affected 
by this, as you’re not receiving any return traffic, but because of that all 
sessions may end up stuck on the first thread. 

For udp transport, the listener is connection-less and bound to the main 
thread. As a result, all incoming packets, even if they pertain to multiple 
flows, are written to the listener’s buffer/fifo.

Regards,
Florin

> On Jan 20, 2020, at 3:50 PM, Raj Kumar  wrote:
> 
> Hi Florin,
> I changed my application as you suggested. Now, I am able to achieve 5 Gbps 
> with a single UDP stream. Overall, I can get ~20 Gbps with multiple host 
> applications. Also, the TCP throughput improved to ~28 Gbps after tuning 
> as mentioned in [1]. 
> On the same topic: the UDP tx throughput is throttled to 5 Gbps. Even if I 
> run multiple host applications, the overall throughput is 5 Gbps. I also 
> tried configuring multiple worker threads, but the problem is that all 
> the application sessions are assigned to the same worker thread. Is there any 
> way to assign each session to a different worker thread?
> 
> vpp# sh session verbose 2
> Thread 0: no sessions
> [#1][U] fd0d:edc4::2001::203:58926->fd0d:edc4:
>  Rx fifo: cursize 0 nitems 399 has_event 0
>   head 0 tail 0 segment manager 1
>   vpp session 0 thread 1 app session 0 thread 0
>   ooo pool 0 active elts newest 0
>  Tx fifo: cursize 399 nitems 399 has_event 1
>   head 1460553 tail 1460552 segment manager 1
>   vpp session 0 thread 1 app session 0 thread 0
>   ooo pool 0 active elts newest 4294967295
>  session: state: opened opaque: 0x0 flags:
> [#1][U] fd0d:edc4::2001::203:63413->fd0d:edc4:
>  Rx fifo: cursize 0 nitems 399 has_event 0
>   head 0 tail 0 segment manager 2
>   vpp session 1 thread 1 app session 0 thread 0
>   ooo pool 0 active elts newest 0
>  Tx fifo: cursize 399 nitems 399 has_event 1
>   head 3965434 tail 3965433 segment manager 2
>   vpp session 1 thread 1 app session 0 thread 0
>   ooo pool 0 active elts newest 4294967295
>  session: state: opened opaque: 0x0 flags:
> Thread 1: active sessions 2
> Thread 2: no sessions
> Thread 3: no sessions
> Thread 4: no sessions
> Thread 5: no sessions
> Thread 6: no sessions
> Thread 7: no sessions
> vpp# sh app client
> Connection  App
> [#1][U] fd0d:edc4::2001::203:58926->udp6_tx_8092[shm]
> [#1][U] fd0d:edc4::2001::203:63413->udp6_tx_8093[shm]
> vpp#
> 
> 
> 
> thanks,
> -Raj
> 
> On Sun, Jan 19, 2020 at 8:50 PM Florin Coras  <mailto:fcoras.li...@gmail.com>> wrote:
> Hi Raj,
> 
> The function used for receiving datagrams is limited to reading at most the 
> length of a datagram from the rx fifo. UDP datagrams are mtu sized, so your 
> reads are probably limited to ~1.5kB. On each epoll rx event try reading from 
> the session handle in a while loop until you get a VPPCOM_EWOULDBLOCK. That 
> might improve performance. 
> 
> Having said that, udp is lossy so unless you implement your own 
> congestion/flow control algorithms, the data you’ll receive might be full of 
> “holes”. What are the rx/tx error counters on your interfaces (check with “sh 
> int”). 
> 
> Also, with simple tuning like this [1], you should be able to achieve much 
> more than 15Gbps with tcp. 
> 
> Regards,
> Florin
> 
> [1] https://wiki.fd.io/view/VPP/HostStack/LDP/iperf 
> <https://wiki.fd.io/view/VPP/HostStack/LDP/iperf>
> 
>> On Jan 19, 2020, at 3:25 PM, Raj Kumar > <mailto:raj.gauta...@gmail.com>> wrote:
>> 
>>   Hi Florin,
>>  By using VCL library in an UDP receiver application,  I am able to receive 
>> only 2 Mbps traffic. On increasing the traffic, I see Rx FIFO full error and 
>> application stopped receiving the 

Re: [vpp-dev] #vpp-hoststack - Issue with UDP receiver application using VCL library

2020-01-21 Thread Florin Coras
Hi Raj, 

Inline.

> On Jan 21, 2020, at 3:41 PM, Raj Kumar  wrote:
> 
> Hi Florin,
> There is no drop on the interfaces. It is 100G card. 
> In the UDP tx application, I am using a 1460-byte buffer to send on select(). 
> I am getting 5 Gbps throughput, but if I start one more application the total 
> throughput goes down to 4 Gbps as both sessions are on the same thread.   
> I increased the tx buffer to 8192 bytes and then I can get 11 Gbps throughput, 
> but again if I start one more application the throughput goes down to 10 
> Gbps.

FC: I assume you’re using vppcom_session_write to write to the session. How 
large is “len” typically? See lower on why that matters.
 
> 
> I found one issue in the code (you must be aware of it): the UDP send MSS 
> is hard-coded to 1460 (in /vpp/src/vnet/udp/udp.c). So, large packets 
> are getting fragmented. 
> udp_send_mss (transport_connection_t * t)
> {
>   /* TODO figure out MTU of output interface */
>   return 1460;
> }

FC: That’s a typical mss and actually what tcp uses as well. Given the nics, 
they should be fine sending a decent number of mpps without the need to do 
jumbo ip datagrams. 

> if I change the MSS to 8192 then I am getting 17 Gbps throughput. But, if I 
> start one more application then throughput goes down to 13 Gbps. 

> 
> It looks like 17 Gbps is the per-core limit, and since all the sessions are 
> pinned to the same thread we cannot get more throughput. Here, the per-core 
> throughput looks good to me. Please let me know if there is any way to use 
> multiple threads for UDP tx applications. 
> 
> In your previous email you mentioned that we can use connected udp socket in 
> the UDP receiver. Can we do something similar for UDP tx ?

FC: I think it may work fine if vpp has main + 1 worker. I have a draft patch 
here [1] that seems to work with multiple workers but it’s not heavily tested. 

Out of curiosity, I ran a vcl_test_client/server test with 1 worker and with 
XL710s, I’m seeing this:

CLIENT RESULTS: Streamed 65536017791 bytes
  in 14.392678 seconds (36.427420 Gbps half-duplex)!

Should be noted that because of how datagrams are handled in the session layer, 
throughput is sensitive to write sizes. I ran the client like:
~/vcl_client -p udpc 6.0.1.2 1234 -U -N 100 -T 65536

Or in English: a unidirectional test, tx buffer of 64kB and 1M writes of that 
buffer. My vcl config was such that tx fifos were 4MB and rx fifos 2MB. The 
sender had few tx packet drops (1657) and the receiver few rx packet drops 
(801). If you plan to use it, make sure arp entries are first resolved (e.g., 
use ping) otherwise the first packet is lost. 

Throughput drops to ~15Gbps with 8kB writes. You should probably also test with 
bigger writes with udp. 

[1] https://gerrit.fd.io/r/c/vpp/+/24462

> 
> From the hardware stats, it seems that UDP tx checksum offload is not 
> enabled/active, which could impact performance. I think udp tx checksum 
> offload should be enabled by default if it is not disabled using the 
> "no-tx-checksum-offload" parameter.

FC: Performance might be affected by the limited number of offloads available. 
Here’s what I see on my XL710s:

rx offload active: ipv4-cksum jumbo-frame scatter
tx offload active: udp-cksum tcp-cksum multi-segs

> 
> Ethernet address b8:83:03:79:af:8c
>   Mellanox ConnectX-4 Family
> carrier up full duplex mtu 9206
> flags: admin-up pmd maybe-multiseg subif rx-ip4-cksum
> rx: queues 5 (max 65535), desc 1024 (min 0 max 65535 align 1)

FC: Are you running with 5 vpp workers? 

Regards,
Florin

> tx: queues 6 (max 65535), desc 1024 (min 0 max 65535 align 1)
> pci: device 15b3:1017 subsystem 1590:0246 address :12:00.00 numa 0
> max rx packet len: 65536
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum vlan-filter
>jumbo-frame scatter timestamp keep-crc
> rx offload active: ipv4-cksum jumbo-frame scatter
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
>outer-ipv4-cksum vxlan-tnl-tso gre-tnl-tso multi-segs
>udp-tnl-tso ip-tnl-tso
> tx offload active: multi-segs
> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 ipv6-tcp-ex
>ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
>ipv6-ex ipv6
> rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 ipv6-tcp-ex
>ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
>ipv6-ex ipv6
> tx burst function: (nil)
> rx burst function: mlx5_rx_burst
> 
> thanks,
> -Raj
> 
> On Mon, 

[vpp-dev] Docs and test-docs jobs are unstable

2020-01-22 Thread Florin Coras
Hi, 

All verify jobs are now failing because of these. “lftools command not found” 
seems to be the issue but not the only error. 

Could somebody take a look at them? 

Regards,
Florin
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15221): https://lists.fd.io/g/vpp-dev/message/15221
Mute This Topic: https://lists.fd.io/mt/69981727/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Docs and test-docs jobs are unstable

2020-01-22 Thread Florin Coras
Great! Thanks, Ed!

Florin

> On Jan 22, 2020, at 9:22 AM, Ed Kern (ejk)  wrote:
> 
> 
> florin this should be ok now on all branches….
> 
> Vanessa changed the publisher which worked around the problem.
> 
> 
> (The underlying problem with lftools and their use of pip flags that don't 
> jibe with the latest pip release version
> is still open, but it shouldn’t be hitting us.)
> 
> 
> Ed
> 
> 
> 
>> On Jan 22, 2020, at 8:36 AM, Ed Kern via Lists.Fd.Io 
>>  wrote:
>> 
>> I am looking into this for the last 30 minutes or so and have raised it with 
>> LF folks as well.
>> 
>> More hopefully soon.
>> 
>> Ed
>> 
>> 
>> 
>> 
>> 
>>> On Jan 22, 2020, at 8:28 AM, Florin Coras  wrote:
>>> 
>>> Hi, 
>>> 
>>> All verify jobs are now failing because of these. “lftools command not 
>>> found” seems to be the issue but not the only error. 
>>> 
>>> Could somebody take a look at them? 
>>> 
>>> Regards,
>>> Florin
>> 
>> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15227): https://lists.fd.io/g/vpp-dev/message/15227
Mute This Topic: https://lists.fd.io/mt/69981727/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Based on the VPP to Nginx testing #ngnix #vpp

2020-02-13 Thread Florin Coras
Hi Amit, 

Here’s a minimal example [1] that’s based on some of the scripts I’m using. 
Note that I haven’t tested this in isolation, so do let me know if you hit any 
issues. 

Regards, 
Florin

[1] https://wiki.fd.io/view/VPP/HostStack/LDP/nginx 
<https://wiki.fd.io/view/VPP/HostStack/LDP/nginx>

> On Feb 12, 2020, at 11:43 PM, Amit Mehra  wrote:
> 
> Hi,
> 
> I am also curious to know how to run nginx with VPP using LD_PRELOAD option. 
> I have installed nginx and able to run it successfully without VPP. Now, i 
> want to try nginx with vpp using LD_PRELOAD option, can someone provide me 
> the steps for the same?
> 
> Regards,
> Amit
> 
> On Fri, Dec 27, 2019 at 11:57 AM Florin Coras  <mailto:fcoras.li...@gmail.com>> wrote:
> Hi Yang.L, 
> 
> I suspect you may need to do a “git pull” and rebuild because the lines don’t 
> match, i.e., vcl_session_accepted_handler:377 is now just an assignment. Let 
> me know if that solves the issue.
> 
> Regards,
> Florin
> 
> > On Dec 26, 2019, at 10:11 PM, lin.yan...@zte.com.cn 
> > <mailto:lin.yan...@zte.com.cn> wrote:
> > 
> > Hi Florin,
> > I have tried the latest master. The problem is not resolved.
> > Here's the nginx error information:
> > 
> > epoll_ctl:2203: ldp<269924>: epfd 33 ep_vlsh 1, fd 61 vlsh 29, op 1
> > ldp_accept4:2043: ldp<269924>: listen fd 32: calling vppcom_session_accept: 
> > listen sid 0, ep 0x0, flags 0xdc50
> > vppcom_session_accept:1521: vcl<269924:1>: listener 16777216 [0x0] accepted 
> > 30 [0x1e] peer: 192.168.3.66:47672 <http://192.168.3.66:47672/> local: 
> > 192.168.3.65:8080 <http://192.168.3.65:8080/>
> > epoll_ctl:2203: ldp<269924>: epfd 33 ep_vlsh 1, fd 62 vlsh 30, op 1
> > ldp_accept4:2043: ldp<269924>: listen fd 32: calling vppcom_session_accept: 
> > listen sid 0, ep 0x0, flags 0xdc50
> > vppcom_session_accept:1521: vcl<269924:1>: listener 16777216 [0x0] accepted 
> > 31 [0x1f] peer: 192.168.3.66:47674 <http://192.168.3.66:47674/> local: 
> > 192.168.3.65:8080 <http://192.168.3.65:8080/>
> > epoll_ctl:2203: ldp<269924>: epfd 33 ep_vlsh 1, fd 63 vlsh 31, op 1
> > ldp_accept4:2043: ldp<269924>: listen fd 32: calling vppcom_session_accept: 
> > listen sid 0, ep 0x0, flags 0xdc50
> > vcl_session_accepted_handler:377: vcl<269924:1>: ERROR: segment for session 
> > 32 couldn't be mounted!
> > 2019/12/28 11:06:44 [error] 269924#0: accept4() failed (103: Software 
> > caused connection abort)
> > ldp_accept4:2043: ldp<269924>: listen fd 32: calling vppcom_session_accept: 
> > listen sid 0, ep 0x0, flags 0xdc50
> > vcl_session_accepted_handler:377: vcl<269924:1>: ERROR: segment for session 
> > 32 couldn't be mounted!
> > 2019/12/28 11:06:44 [error] 269924#0: accept4() failed (103: Software 
> > caused connection abort)
> > ldp_accept4:2043: ldp<269924>: listen fd 32: calling vppcom_session_accept: 
> > listen sid 0, ep 0x0, flags 0xdc50
> > vcl_session_accepted_handler:377: vcl<269924:1>: ERROR: segment for session 
> > 32 couldn't be mounted!
> > 2019/12/28 11:06:44 [error] 269924#0: accept4() failed (103: Software 
> > caused connection abort)
> > ldp_accept4:2043: ldp<269924>: listen fd 32: calling vppcom_session_accept: 
> > listen sid 0, ep 0x0, flags 0xdc50
> > vcl_session_accepted_handler:377: vcl<269924:1>: ERROR: segment for session 
> > 32 couldn't be mounted!
> > 2019/12/28 11:06:44 [error] 269924#0: accept4() failed (103: Software 
> > caused connection abort)
> > ldp_accept4:2043: ldp<269924>: listen fd 32: calling vppcom_session_accept: 
> > listen sid 0, ep 0x0, flags 0xdc50
> > vcl_session_accepted_handler:377: vcl<269924:1>: ERROR: segment for session 
> > 32 couldn't be mounted!
> > 2019/12/28 11:06:44 [error] 269924#0: accept4() failed (103: Software 
> > caused connection abort)
> > 
> > Can you help me analyze it?
> > Thanks,
> > Yang.L
> >
> > 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15394): https://lists.fd.io/g/vpp-dev/message/15394
Mute This Topic: https://lists.fd.io/mt/64501057/21656
Mute #vpp: https://lists.fd.io/mk?hashtag=vpp&subid=1480452
Mute #ngnix: https://lists.fd.io/mk?hashtag=ngnix&subid=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] 128k performance is not good as kTCP #vpp #mellanox

2020-02-15 Thread Florin Coras
Hi Sejun, 

I’ve never tried spdk, so I can’t know for sure where the issue lies but 
possible sources of problems could be:
- mlx5 driver in combination with vpp. Have you tried vpp’s native rdma based 
driver [1]?
- TSO is not enabled by default for tcp in vpp. To enable it, you’d need to 
change vpp startup.conf to:
  - add a tcp { tso } stanza [2]
  - under the dpdk stanza add "enable-tcp-udp-checksum” and under your 
device’s config add “tso on”, in addition to the number of tx and rx 
descriptors. I’ve only tested this with xl710s, so I don’t know if it works 
fine with mlx5.
- default mtu for tcp is 1460. If you want to use jumbo frames change 
startup.conf and under tcp stanza add “mtu 9000”. Never tested this in 
combination with tso, so you may not want to mix them.
- what sizes is SPDK using for rx/tx fifos when using vpp’s host stack? Given 
that writes/reads are as large as 128k, the fifos should probably be pretty 
large, say 4-6MB. 
- ensure that the cores spdk is using are not overlapping vpp’s worker(s) and 
that the interface and the workers (vpp’s and spdk’s) are on the same numa [3].
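Pulled together, the startup.conf changes above might look like the fragment below (the PCI address and descriptor counts are placeholders; as noted, tso combined with mtu 9000 is untested, so the jumbo mtu is left commented out):

```
tcp {
  tso                    # enable TSO in the tcp stack [2]
  cc-algo cubic          # optional: cubic congestion control
  # mtu 9000             # jumbo frames; untested in combination with tso
}
dpdk {
  enable-tcp-udp-checksum
  dev 0000:12:00.0 {     # your device's PCI address
    tso on
    num-rx-desc 1024
    num-tx-desc 1024
  }
}
```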

Finally, what sort of throughput are you seeing and with how many workers? What 
does the vector rate look like in vpp, i.e., execute "sh run” in the cli and 
check dpdk-input and tcp-output nodes? 

For reference, 1 connection with xl710s and 4MB fifos between 2 directly 
connected hosts should reach over 37Gbps [4]. 

Regards,
Florin

[1] https://git.fd.io/vpp/tree/src/plugins/rdma/rdma_doc.md 
[2] You may also want to switch to cubic as a congestion control algorithm by 
adding to the tcp stanza {cc-algo cubic}.
[3] To check the numa for your interface do “sh hardware”
[4] https://wiki.fd.io/view/VPP/HostStack/LDP/iperf

> On Feb 14, 2020, at 11:07 PM, sejun.kwon via Lists.Fd.Io 
>  wrote:
> 
> Hello, I'm working on the SPDK library + VPP, because some reports said that 
> VPP reduces network overhead. When I test with VPP (with the mlx5 poll mode 
> driver, mtu 9000) and a null device with spdk, 4k performance with VPP is much 
> better than the default (kTCP). But 128k write performance with VPP is 30 
> percent lower than with kTCP. Does anyone know why 128k write performance 
> with VPP is not as good as the kernel's? If I increase the number of threads 
> or num-rx-desc there is improvement, but it is still lower than the kernel. 
> Is there also a build option related to performance? Thanks in advance. 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15409): https://lists.fd.io/g/vpp-dev/message/15409
Mute This Topic: https://lists.fd.io/mt/71294801/21656
Mute #vpp: https://lists.fd.io/mk?hashtag=vpp&subid=1480452
Mute #mellanox: https://lists.fd.io/mk?hashtag=mellanox&subid=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VCL client connect error

2020-02-17 Thread Florin Coras
Hi Satya, 

Why are you commenting out the group (gid) configuration? The complaint is that 
vcl cannot connect to vpp’s binary api, so that may be part of the problem, if 
the user running vcl cannot read the binary api shared memory segment. 

You could also try connecting using private connections, instead of using shm 
based binary api segments, by configuring vpp to use the sock transport of the 
binary api. For that, add to startup.conf socksvr { socket-name 
/run/vpp-api.sock} and session { evt_qs_memfd_seg }

And then in vcl.conf add "api-socket-name /run/vpp-api.sock”. See [1] for a 
simple config. 
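Put together, the socket-transport configuration above would look roughly like this (using the socket path from the message):

```
# startup.conf
socksvr { socket-name /run/vpp-api.sock }
session { evt_qs_memfd_seg }

# vcl.conf
vcl {
  api-socket-name /run/vpp-api.sock
}
```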

Regards, 
Florin

[1] https://wiki.fd.io/view/VPP/HostStack/LDP/iperf

> On Feb 17, 2020, at 7:08 AM, Satya Murthy  wrote:
> 
> Hi,
> 
> We are seeing following error when we try to connect to VPP via VCL test 
> client.
> Is this a known issue? 
> 
> startup file that we are using on VPP:
> 
> 
> unix {
>   nodaemon
>   log /tmp/vpp.log
>   full-coredump
>   cli-listen /run/vpp/cli.sock
> #  gid vpp
> } 
>
> #api-segment {
> #  gid vpp
> #}
> 
> Error:
> ==
> ./vcl_test_client 127.0.0.1 12344
> VCL<1273>: using default heapsize 268435456 (0x1000)
> VCL<1273>: allocated VCL heap = 0x7fe8a141f010, size 268435456 (0x1000)
> VCL<1273>: using default configuration.
> vppcom_connect_to_vpp:577: vcl: VCL<1273>: app (vcl_test_client) 
> connecting to VPP api (/vpe-api)...
> vl_map_shmem:637: region init fail
> connect_to_vlib_internal:410: vl_client_api map rv -2
> vppcom_connect_to_vpp:583: VCL<1273>: app (vcl_test_client) connect failed!
> vppcom_app_create:724: VCL<1273>: ERROR: couldn't connect to VPP!
> ERROR when calling vppcom_app_create(): Connection refused
>
>
> ERROR: vppcom_app_create() failed (errno = 111)!
> 
> Any inputs on this please ?
> 
> -- 
> Thanks & Regards,
> Murthy 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15433): https://lists.fd.io/g/vpp-dev/message/15433
Mute This Topic: https://lists.fd.io/mt/71351013/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vpp_echo UDP performance

2020-02-19 Thread Florin Coras
Hi Dom, 

UDP has no flow/congestion control. That is, there is nothing to push 
back on the sender when it overdrives the receiver. Increasing the number of 
rx descriptors probably helps a bit but unless the rx nic is faster, I don’t 
know if there’s anything else that could avoid the drops.

I’m saying that because one connection should be able to do more than 10Gbps. 
But to be sure, does “sh session verbose” indicate that your rx fifo is full?

Regards,
Florin

> On Feb 19, 2020, at 9:30 AM, dchons via Lists.Fd.Io 
>  wrote:
> 
> Hello,
> 
> I've been trying to do some basic performance testing on 20.01 using the 
> vpp_echo application, and while I'm getting the expected performance with 
> TCP, I'm not quite able to achieve what I would expect with UDP. The NICs are 
> 10G X520, and on TCP I get around 9.5 Gbps, but with UDP I get about 6.5 Gbps 
> with about 30% packet loss.
> 
> The commands I use are:
> Server: ./vpp_echo socket-name /tmp/vpp-api.sock uri udp://10.0.0.71/ 
> fifo-size 100 uni RX=50Gb TX=0 stats 1 sclose=Y rx-buf 1400 tx-buf 0 
> mq-size 10
> Client: ./vpp_echo socket-name /tmp/vpp-api.sock client uri 
> udp://10.0.0.71/ fifo-size 100 uni TX=50Gb RX=0 stats 1 sclose=Y 
> tx-buf 1400 rx-buf 0
> 
> (For TCP tests the commands are pretty much the same, except for the URI 
> which is tcp://...)
> 
> I have a couple of hints but not sure how to make the necessary tweaks to 
> improve performance. On the receiver side, vpp# sh hardware-interfaces shows:
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
>macsec-strip vlan-filter vlan-extend jumbo-frame 
> scatter
>security keep-crc
> rx offload active: ipv4-cksum jumbo-frame scatter
> 
> I'm thinking that udp-cksum not being active is an issue. Is this something 
> that I need to explicitly enable somehow? I do have the following in 
> startup.conf:
> dpdk {
>   dev :05:00.0{
> num-rx-desc 1024
> num-tx-desc 1024
> tso on
>   }
>   enable-tcp-udp-checksum
> }
> 
> My other clue is this (again on the receiver side):
> vpp# sh interface
>   Name   IdxState  MTU (L3/IP4/IP6/MPLS) 
> Counter  Count
> TenGigabitEthernet5/0/0   1  up  9000/0/0/0 rx 
> packets  25107326
> rx bytes  
>36136837440
> tx 
> packets 1
> tx bytes  
> 60
> drops 
> 44
> ip4   
>   25107281
> rx-miss   
>   11599259
> 
> Any tips on what might be causing the rx-miss, or things I should tune to 
> improve this for UDP?
> 
> Thank you!
> Dom 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15466): https://lists.fd.io/g/vpp-dev/message/15466
Mute This Topic: https://lists.fd.io/mt/71401293/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vpp_echo UDP performance

2020-02-19 Thread Florin Coras
Hi Dom, 

Inline.

> On Feb 19, 2020, at 11:54 AM, dchons via Lists.Fd.Io 
>  wrote:
> 
> Hi Florin,
> 
> Thanks for the response. I'm not so concerned about the packet drops (as you 
> point out it is to be expected), however increasing the number of rx 
> descriptors did help a lot, so thank you very much for that!

FC: Great!

> 
> I'm still at around 6.5 Gbps, "sh session verbose" shows the following:
> Client (TX) side:
> ConnectionState  Rx-f  
> Tx-f
> [#1][U] 10.0.0.70:11202->10.0.0.71:   -  0 
> 85
> Thread 1: active sessions 1
> 
> Server (RX) side:
> vpp# sh session verbose
> ConnectionState  Rx-f  
> Tx-f
> [#0][U] 10.0.0.71:->0.0.0.0:0 -  0 0

FC: So the app reads the data as fast as it’s enqueued. That’s good because it 
limits the problem to how fast vpp can consume udp packets. 
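The two observations in this thread (an empty rx fifo on the app side, but a growing rx-miss counter on the nic) point at different bottlenecks; a toy classifier, with illustrative thresholds and hypothetical names (not a VPP API):

```python
def bottleneck_hint(rx_fifo_bytes: int, rx_fifo_size: int, rx_miss: int) -> str:
    """Heuristic reading of `sh session verbose` fifo occupancy plus the
    interface rx-miss counter (illustrative logic, not a VPP API)."""
    if rx_fifo_bytes >= rx_fifo_size:
        return "app too slow: rx fifo full, vpp has nowhere to enqueue"
    if rx_miss > 0:
        return "vpp/nic too slow: fifo drains fine but the nic drops (rx-miss)"
    return "no backpressure visible"

# Rx-f 0 with rx-miss 16959304, as in the server-side output in this thread
print(bottleneck_hint(0, 10_000_000, 16959304))
```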
> 
> Any thoughts on udp-cksum not being enabled? I'm debating whether it's worth 
> trying to debug why it's not in the active rx offloads even though it shows 
> as available (and it is in the active tx offloads).

FC: Ow, I missed that. What interfaces are you using? 

Regards,
Florin

> 
> Thanks,
> Dom
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15468): https://lists.fd.io/g/vpp-dev/message/15468
Mute This Topic: https://lists.fd.io/mt/71401293/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vpp_echo UDP performance

2020-02-19 Thread Florin Coras
Hi Dom, 

Now that you mention it, it’s the same for my nics. Nonetheless, the packets 
that reach ip4-local are marked as having a valid l4 checksum. Check in 
ip4_local_check_l4_csum_x2 and ip4_local_check_l4_csum if 
ip4_local_l4_csum_validate is called or not. If not, there’s no extra overhead 
in processing the udp packets. 
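For context on what software validation would cost, this is the generic 16-bit ones’-complement internet checksum (RFC 1071) that UDP uses — a plain-Python sketch for illustration, not VPP’s actual vectorized implementation:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum over 16-bit big-endian words."""
    if len(data) % 2:          # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# RFC 1071's worked example: the checksum field comes out to 0x220d, and
# re-summing the data with that checksum included yields 0 (i.e., valid).
pkt = b"\x00\x01\xf2\x03\xf4\xf5\xf6\xf7"
print(hex(internet_checksum(pkt)))           # 0x220d
print(internet_checksum(pkt + b"\x22\x0d"))  # 0
```

Skipping this per-byte walk (because the nic already marked the checksum good) is exactly the overhead the fast path avoids.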

Regards,
Florin

> On Feb 19, 2020, at 11:54 AM, dchons via Lists.Fd.Io 
>  wrote:
> 
> [Edited Message Follows]
> 
> ** Edit **: Corrected typo, udp-cksum not active in rx-offloads, but is 
> active in tx-offloads
> 
> Hi Florin,
> 
> Thanks for the response. I'm not so concerned about the packet drops (as you 
> point out it is to be expected), however increasing the number of rx 
> descriptors did help a lot, so thank you very much for that!
> 
> I'm still at around 6.5 Gbps, "sh session verbose" shows the following:
> Client (TX) side:
> ConnectionState  Rx-f  
> Tx-f
> [#1][U] 10.0.0.70:11202->10.0.0.71:   -  0 
> 85
> Thread 1: active sessions 1
> 
> Server (RX) side:
> vpp# sh session verbose
> ConnectionState  Rx-f  
> Tx-f
> [#0][U] 10.0.0.71:->0.0.0.0:0 -  0 0
> 
> Any thoughts on udp-cksum not being enabled? I'm debating whether it's worth 
> trying to debug why it's not in the active rx offloads even though it shows 
> as available (and it is in the active tx offloads).
> 
> Thanks,
> Dom
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15470): https://lists.fd.io/g/vpp-dev/message/15470
Mute This Topic: https://lists.fd.io/mt/71401293/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vpp_echo UDP performance

2020-02-19 Thread Florin Coras
Hi Dom, 

Was about to suggest that you use this [1] but I see you already figured it 
out. 

Let me know what’s wrong with the echo app once you get a chance to debug it.

Regards,
Florin

[1] https://gerrit.fd.io/r/c/vpp/+/25286

> On Feb 19, 2020, at 2:14 PM, dchons via Lists.Fd.Io 
>  wrote:
> 
> Hi again,
> 
> For what it's worth, I added a hack in src/plugins/dpdk/device/init.c and set 
> xd->port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_UDP_CKSUM, and now I have:
> 
> vpp# sh hardware-interfaces
>   NameIdx   Link  Hardware
> TenGigabitEthernet5/0/01down  TenGigabitEthernet5/0/0
>   Link speed: unknown
>   Ethernet address a0:36:9f:be:0c:b4
>   Intel 82599
> carrier up full duplex mtu 9206
> flags: pmd maybe-multiseg tx-offload intel-phdr-cksum rx-ip4-cksum
> Devargs:
> rx: queues 1 (max 128), desc 4000 (min 32 max 4096 align 8)
> tx: queues 6 (max 64), desc 4000 (min 32 max 4096 align 8)
> pci: device 8086:154d subsystem 8086:7b11 address :05:00.00 numa 0
> max rx packet len: 15872
> promiscuous: unicast off all-multicast off
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
>macsec-strip vlan-filter vlan-extend jumbo-frame 
> scatter
>security keep-crc
> rx offload active: ipv4-cksum udp-cksum jumbo-frame scatter
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
>tcp-tso macsec-insert multi-segs security
> tx offload active: udp-cksum tcp-cksum tcp-tso multi-segs
> rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex ipv6-tcp
>ipv6-udp ipv6-ex ipv6
> rss active:none
> tx burst function: ixgbe_xmit_pkts
> rx burst function: ixgbe_recv_pkts
> 
> The bad news is that after making this change, vpp_echo crashes, have not had 
> a chance to debug this yet but wanted to point out the potentially missing RX 
> offload setting in case it is useful.
> 
> Thanks,
> Dom
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15473): https://lists.fd.io/g/vpp-dev/message/15473
Mute This Topic: https://lists.fd.io/mt/71401293/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vpp_echo UDP performance

2020-02-20 Thread Florin Coras
Hi Dom, 

That’s a really high vector rate. What cpu are you using? 

I’ve tried your config locally and I’m getting under 10Gbps with high rx drops. 
On the other hand, with vcl test client/server apps, after a bit of tweaking, I 
can saturate my 40Gbps nics with udpc (udp connection pinned to a thread). On 
the server side rx-miss was 8854.

server: ./vcl_server 1234 -p udpc
client: ./vcl_client -p udpc 6.0.1.1 1234 -U -N 100 -T 32768

Remember to first ensure that arp tables are populated (with ping from cli) to 
avoid dropping the first udp packet. Also, in my testing I pin the vcl apps to 
cores on the same numa as vpp’s workers and the nics using taskset. 
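The taskset pinning described above can also be done from inside a process. A minimal Python sketch, Linux-only; core 0 is used here only because it exists everywhere — in practice you would pick spare cores on the same NUMA node as vpp’s workers:

```python
import os

# Restrict the current process (pid 0 = self) to a chosen CPU set.
# This is the programmatic equivalent of launching under `taskset --cpu-list`.
os.sched_setaffinity(0, {0})
print(os.sched_getaffinity(0))  # {0}
```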

It could be that the echo apps need a bit of optimization. 

Regards, 
Florin


> On Feb 20, 2020, at 8:31 AM, dchons via Lists.Fd.Io 
>  wrote:
> 
> Hi Florin,
> 
> I'm not sure why the echo app was crashing on me yesterday, built the debug 
> version, no crash, so I rebuilt the release version and also no crash. For 
> some reason, even though udp-cksum now shows as an active rx offload, the 
> function ip4_local_check_l4_csum is still being called, but I added a return 
> statement right at the top to see how much overhead it really introduces, and 
> it didn't really make a difference, so I'm thinking that is not the 
> bottleneck.
> 
> Do you think the rx-miss counter shown below is relevant, and if so, any tips 
> on what that suggests?
> 
> vpp# sh interface 
>   
>   Name   IdxState  MTU (L3/IP4/IP6/MPLS) 
> Counter  Count 
> TenGigabitEthernet5/0/0   1  up  9000/0/0/0 rx 
> packets  35702325
> rx bytes  
>42584946555
> drops 
>122
> ip4   
>   35702203
> rx-miss   
>   16959304
> 
> vpp# sh runtime
> Thread 1 vpp_wk_0 (lcore 22)
> Time 90.3, 10 sec internal node vector rate 0.00
>   vector rates in 3.9519e5, out 0.e0, drop 4.7598e-1, punt 0.e0 
>  Name State Calls  Vectors
> Suspends Clocks   Vectors/Call
> dpdk-input   polling 12941545335702246
>0  6.94e2 .28   
> drop active 43  43
>0  1.82e31.00   
> error-drop   active 43  43
>0  1.47e31.00   
> ethernet-input   active 14877735702246
>0  1.93e1  239.97   
> ip4-input-no-checksumactive 14876735702203
>0  3.15e1  239.99   
> ip4-localactive 14876735702203
>0  5.03e2  239.99   
> ip4-lookup   active 14876735702203
>0  4.19e1  239.99   
> ip4-udp-lookup   active 14876735702203
>0  4.34e1  239.99   
> llc-inputactive 41  41
>0  1.45e31.00   
> lldp-input   active  2   2
>0  1.73e41.00   
> session-queuepolling 100382138   0
>0  2.82e20.00   
> udp4-input   active 14876735702203
>0  4.01e3  239.99   
> unix-epoll-input polling126267   0
>0  1.04e40.00
> 
> Thanks!
> Dom
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15479): https://lists.fd.io/g/vpp-dev/message/15479
Mute This Topic: https://lists.fd.io/mt/71401293/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Regarding SCTP support in VPP host stack

2020-02-21 Thread Florin Coras
Hi Guruprasad, 

The SCTP plugin has not been maintained in a long time, and in VPP 20.05 we’ve 
actually removed the code. 

Regards, 
Florin

> On Feb 16, 2020, at 10:25 PM, Guru Prasad  wrote:
> 
> Hi,
> 
> Could anyone please help me on below queries:
>
> i) Do the vpp_echo client and server applications support testing of the SCTP stack?
> ii) How stable is the SCTP stack in VPP 19.08?
> iii) Is the VPP SCTP stack RFC-compliant?
> iv) What are the current performance numbers for the VPP SCTP stack?
> 
> Thanks in Advance,
> Guruprasad T S
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15497): https://lists.fd.io/g/vpp-dev/message/15497
Mute This Topic: https://lists.fd.io/mt/71452555/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VCL library

2020-02-25 Thread Florin Coras
Hi Kusuma, 

No. For that you’ll have to use the builtin C apis. See the example echo 
client/server, http server or proxy apps here [1]. 

Regards,
Florin

[1] https://git.fd.io/vpp/tree/src/plugins/hs_apps

> On Feb 25, 2020, at 2:55 AM, Kusuma DS  wrote:
> 
> Hi, 
> 
> I have one question related to VCL library.
> 
> Can I use the VCL library API for writing an internal application? 
> 
> 
> Thank you, 
> Kusuma
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15519): https://lists.fd.io/g/vpp-dev/message/15519
Mute This Topic: https://lists.fd.io/mt/71531012/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] TLS configuration & throughput

2020-02-25 Thread Florin Coras
Hi Dom, 

First of all, tls code is not optimized and there are some scheduling 
inefficiencies that we are aware of and which do affect overall performance. 
Having said that, you might be able to improve throughput by increasing rx and 
tx buffer sizes (any reason for keeping them that small?). 

Note that tls-openssl engine (one of the 3 tls engines) does not use vpp’s 
native crypto infra, i.e., the crypto handlers are independent of it. Currently 
there is no way to inspect the ciphers on established connections but this 
could be added to the tls connection format function. 

Regards, 
Florin

> On Feb 25, 2020, at 1:50 PM, dchons via Lists.Fd.Io 
>  wrote:
> 
> Hello,
> 
> I'm trying to get an idea of TLS throughput using openssl without hardware 
> acceleration, and I'm using the vpp_echo application as follows:
> Server: taskset --cpu-list 4,6,8 ./vpp_echo socket-name /tmp/vpp-api.sock uri 
> tls://10.0.0.71/ fifo-size 200 uni RX=50Gb TX=0 stats 1 sclose=Y 
> rx-buf 4800 tx-buf 0 mq-size 10
> Client: taskset --cpu-list 4,6,8 ./vpp_echo socket-name /tmp/vpp-api.sock 
> client uri tls://10.0.0.71/ fifo-size 200 uni TX=50Gb RX=0 stats 1 
> sclose=Y tx-buf 1400 rx-buf 0 mq-size 500
> I've tried to make sure that openssl is used as the crypto engine by adding 
> the following to startup.conf:
> plugins {
> plugin crypto_ipsecmb_plugin.so { disable }
> plugin tlspicotls_plugin.so { disable }
> plugin crypto_native_plugin.so { disable }
> plugin tlsmbedtls_plugin.so { disable }
> }
> Using "show crypto handlers" I can confirm that "Active" and "Candidates" 
> only lists openssl for all ciphers.
> 
> In order to make sure that AES-GCM is used, I put a temporary hack in 
> src/plugins/tlsopenssl/tls_openssl.c near line 892:
> tls_openssl_set_ciphers("AESGCM"); //was originally 
> ALL:!ADH:!LOW:!EXP:!MD5:!RC4-SHA:!DES-CBC3-SHA:@STRENGTH
> 
> With this setup, I get around 1 Gbps initially, which after some time drops 
> off to 500 Mbps (over 10 Gbps NICs). When I use the exact same NICs and a 
> regular TLS client/server application (after stopping VPP and returning the 
> NICs to the OS) I get 5.3 Gbps.
> 
> My questions are:
> 1. Any suggestions on configuration or tuning to get TLS performance at least 
> close to what is possible using a generic TLS client / server using openssl ?
> 2. Is there a way to check / confirm that VPP is using AES-GCM when I run my 
> test as shown above?
> 
> Thank you!
> Dom
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15533): https://lists.fd.io/g/vpp-dev/message/15533
Mute This Topic: https://lists.fd.io/mt/71542617/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] TLS configuration & throughput

2020-02-26 Thread Florin Coras
Hi Dom, 

Out of curiosity, I tried out testing tls throughput with iperf. Note that this 
is a bit of a hack, i.e., ldp can transparently convert tcp connections into 
tls connections if the right environment variables are set (see more [1]). 
Sporadically, this does exhibit some setup instability, probably because tls 
might return some partial data.

After this patch [2], in my testbed running 2 Xeon Gold 6146, I’m seeing this:

Connecting to host 6.0.1.1, port 5201
[ 33] local 6.0.1.2 port 12620 connected to 6.0.1.1 port 5201
[ ID] Interval   Transfer Bandwidth   Retr  Cwnd
[ 33]   0.00-1.00   sec  1.56 GBytes  13.4 Gbits/sec0   0.00 Bytes
[ 33]   1.00-2.00   sec  1.57 GBytes  13.5 Gbits/sec0   0.00 Bytes
[ 33]   2.00-3.00   sec  1.57 GBytes  13.5 Gbits/sec0   0.00 Bytes 
…
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval   Transfer Bandwidth   Retr
[ 33]   0.00-30.00  sec  47.2 GBytes  13.5 Gbits/sec0 sender
[ 33]   0.00-30.00  sec  47.2 GBytes  13.5 Gbits/sec  receiver

Regards,
Florin

[1] https://wiki.fd.io/view/VPP/HostStack/LDP/iperf 

[2] https://gerrit.fd.io/r/c/vpp/+/25477 


> On Feb 25, 2020, at 4:15 PM, dchons via Lists.Fd.Io 
>  wrote:
> 
> Hi Florin,
> 
> Thank you for your response. I used different rx/tx buffer sizes and it 
> didn't really make a difference. For this stage, it's good enough to know 
> that there are known performance limitations, thank you again for your help.
> 
> Regards,
> Dom 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15557): https://lists.fd.io/g/vpp-dev/message/15557
Mute This Topic: https://lists.fd.io/mt/71542617/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] TLS configuration & throughput

2020-02-26 Thread Florin Coras
Hi Dom, 

Is the iperf client returning an error or does it crash? Do you get any errors 
in /var/log/syslog? 

Also, do a “sh session verbose” on both nodes to see if there’s any data 
pending in the rx/tx fifos. 

Regards,
Florin

> On Feb 26, 2020, at 10:26 AM, dchons via Lists.Fd.Io 
>  wrote:
> 
> Hi Florin,
> 
> Thanks so much for trying this out and for the suggestions. Unfortunately 
> this isn't working in my setup. Here's what I did just to make sure I'm not 
> missing anything.
> 
> I generated the key and cert as follows:
> openssl req -newkey rsa:2048 -nodes -keyout ldp.key -x509 -days 365 -out 
> ldp.crt
> 
> Confirmed settings as per [1] above and applied [2] and recompiled. Did a 
> first run without LDP_TRANSPARENT to confirm all other settings:
> 
> # LD_PRELOAD=$LDP_PATH VCL_CONFIG=$VCL_CFG taskset --cpu-list 4,6,8 iperf3 -s 
> -B 10.0.0.71
> ---
> Server listening on 5201
> ---
> Accepted connection from 10.0.0.70, port 11960
> [ 34] local 10.0.0.71 port 5201 connected to 10.0.0.70 port 41655
> [ ID] Interval   Transfer Bandwidth
> [ 34]   0.00-1.00   sec  1.09 GBytes  9.40 Gbits/sec
> [ 34]   1.00-2.00   sec  1.10 GBytes  9.41 Gbits/sec
> [ 34]   2.00-3.00   sec  1.10 GBytes  9.41 Gbits/sec
> [ 34]   3.00-4.00   sec  1.10 GBytes  9.41 Gbits/sec
> [ 34]   4.00-5.00   sec  1.10 GBytes  9.41 Gbits/sec
> [ 34]   5.00-6.00   sec  1.10 GBytes  9.41 Gbits/sec
> [ 34]   6.00-7.00   sec  1.10 GBytes  9.41 Gbits/sec
> [ 34]   7.00-8.00   sec  1.10 GBytes  9.41 Gbits/sec
> [ 34]   8.00-9.00   sec  1.10 GBytes  9.41 Gbits/sec
> [ 34]   9.00-10.00  sec  1.10 GBytes  9.41 Gbits/sec
> [ 34]  10.00-10.00  sec  1.24 MBytes  9.33 Gbits/sec
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval   Transfer Bandwidth
> [ 34]   0.00-10.00  sec  0.00 Bytes  0.00 bits/sec  sender
> [ 34]   0.00-10.00  sec  11.0 GBytes  9.41 Gbits/sec  receiver
> ---
> Server listening on 5201
> ---
> 
> Now set LDP_TRANSPARENT and confirm (on both nodes):
> # export LDP_TRANSPARENT_TLS=1
> # env | grep LDP_
> LDP_TLS_CERT_FILE=/root/tlstest/ldp.crt
> LDP_TRANSPARENT_TLS=1
> LDP_PATH=/root/vpp.20.01/build-root/build-vpp-native/vpp/lib/libvcl_ldpreload.so
> LDP_TLS_KEY_FILE=/root/tlstest/ldp.key
> #
> 
> Re-started & configured VPP to have a clean run, and get this (server side 
> output):
> 
> # LD_PRELOAD=$LDP_PATH VCL_CONFIG=$VCL_CFG taskset --cpu-list 4,6,8 iperf3 -s 
> -B 10.0.0.71
> ---
> Server listening on 5201
> ---
> Accepted connection from 10.0.0.70, port 40411
> [ 34] local 10.0.0.71 port 5201 connected to 10.0.0.70 port 14718
> [ ID] Interval   Transfer Bandwidth
> [ 34]   0.00-1.00   sec  0.00 Bytes  0.00 bits/sec
> [ 34]   1.00-2.00   sec  0.00 Bytes  0.00 bits/sec
> [ 34]   2.00-3.00   sec  0.00 Bytes  0.00 bits/sec
> [ 34]   3.00-4.00   sec  0.00 Bytes  0.00 bits/sec
> [ 34]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec
> [ 34]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec
> [ 34]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
> [ 34]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
> [ 34]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec
> [ 34]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec
> [ 34]  10.00-11.00  sec  0.00 Bytes  0.00 bits/sec
> [ 34]  11.00-12.00  sec  0.00 Bytes  0.00 bits/sec
> [ 34]  12.00-13.00  sec  0.00 Bytes  0.00 bits/sec
> [ 34]  13.00-14.00  sec  0.00 Bytes  0.00 bits/sec
> [ 34]  14.00-15.00  sec  0.00 Bytes  0.00 bits/sec
> 
> I've tried multiple times, always the same result, the connection seems to be 
> established but no traffic getting through. Here's some output from the 
> server side VPP instance, not sure if there is anything useful in there, I 
> couldn't see anything of interest.
> 
> Thank again for trying it out and for your suggestions!
> 
> Regards,
> Dom
> 
> 
> 
> vpp# sh int
>   Name   IdxState  MTU (L3/IP4/IP6/MPLS) 
> Counter  Count
> TenGigabitEthernet5/0/0   1  up  9000/0/0/0 rx 
> packets22
> rx bytes  
>   2263
> tx 
> packets13
> tx bytes  
>   4042
> drops 
>  5
> ip4   
> 17
> local00 down  0/0/0/0  

Re: [vpp-dev] TLS configuration & throughput

2020-02-26 Thread Florin Coras
Hi Dom, 

It could be that you’re hitting more often the issues that I also encountered 
locally. And yes, I noticed that even the server side has sporadic issues. 
Given that iperf works fine with tcp, I assume tls re-segments data in a way 
that iperf does not like. 

Now, with respect to the throughput, it seems a bit low. If you manage to get 
the test to work again, try to “clear run; show run” and see the number of 
loops/s reported by the active worker. If the number is low (under 100k) it 
might be that the cpu is not fast enough for both tcp + tls. Things might 
improve after the scheduling improvements (will eventually publish a patch for 
that). 

Regards,
Florin

> On Feb 26, 2020, at 11:46 AM, dchons via Lists.Fd.Io 
>  wrote:
> 
> Hi Florin,
> 
> Thanks once again! I was in the middle of collecting a bunch of information 
> to respond (basically nothing interesting in logs and the client does not 
> crash, it just sits there), and then on one miraculous run it actually 
> worked. I was hoping for a bit more performance (I only got 2.89 Gbps) but 
> hey, 3x improvement is a great start ;-)
> 
> FYI the server-side VPP instance crashed at the end of the test and I didn't 
> get a chance to collect anything.
> 
> The rest of the email below is from what I was typing before the test worked 
> (I didn't do anything differently that I am aware of...).
> 
> Regards,
> Dom
> 
> --- From failed test runs: --
> The iperf client indicates that it is connected, but basically just sits 
> there until I stop it with Ctrl-C:
> 
> # LD_PRELOAD=$LDP_PATH VCL_CONFIG=$VCL_CFG taskset --cpu-list 4,6,8 iperf3 -c 
> 10.0.0.71
> Connecting to host 10.0.0.71, port 5201
> [ 33] local 10.0.0.70 port 37502 connected to 10.0.0.71 port 5201
> 
> VPP session info:
> Client side:
> vpp# sh session verbose
> Thread 0: no sessions
>
> ConnectionState  Rx-f  
> Tx-f
> [1:0][T] 10.0.0.70:37502->10.0.0.71:5201  ESTABLISHED0 0
> [1:1][TLS] app_wrk 2 index 0 engine 1 tcp 1:0 ESTABLISHED0 0
> [1:2][T] 10.0.0.70:6875->10.0.0.71:5201   ESTABLISHED0 0
> [1:3][TLS] app_wrk 2 index 1 engine 1 tcp 1:2 ESTABLISHED0 0
> Thread 1: active sessions 4
> Thread 2: no sessions
> Thread 3: no sessions
> Thread 4: no sessions
> 
> Server side:
> vpp# sh session verbose
> ConnectionState  Rx-f  
> Tx-f
> [0:0][TLS] app_wrk 2 engine 1 tcp 0:1 LISTEN 0 0
> [0:1][T] 10.0.0.71:5201->0.0.0.0:0LISTEN 0 0
> Thread 0: active sessions 2
>
> ConnectionState  Rx-f  
> Tx-f
> [1:0][T] 10.0.0.71:5201->10.0.0.70:6875   ESTABLISHED0 0
> [1:1][TLS] app_wrk 2 index 0 engine 1 tcp 1:0 ESTABLISHED0 0
> [1:2][T] 10.0.0.71:5201->10.0.0.70:37502  ESTABLISHED0 0
> [1:3][TLS] app_wrk 2 index 1 engine 1 tcp 1:2 ESTABLISHED0 0
> Thread 1: active sessions 4
> Thread 2: no sessions
> Thread 3: no sessions
> Thread 4: no sessions
> Thread 5: no sessions
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15571): https://lists.fd.io/g/vpp-dev/message/15571
Mute This Topic: https://lists.fd.io/mt/71542617/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Multiple UDP receiver applications on same port #vpp-hoststack

2020-02-26 Thread Florin Coras
Hi Raj, 

Now that’s interesting. VPP detects when an application dies (binary api 
mechanism) and forces, through the session layer, a detach which in turn leads 
to an unbind. 

In the case of udp, unlike tcp, we have a shim layer that redirects udp 
packets to whoever registered for a certain port. In your case, udp-input was 
registered twice but, because we currently don’t have a reference count, the 
unbind removes the registration for both binds.

Will add it to my todo list if nobody beats me to it. 
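The missing reference count can be sketched in a few lines. Hypothetical names, purely to illustrate the shape of the fix — the real change would live in vpp’s udp port registration code, not Python:

```python
from collections import defaultdict

class UdpPortRegistry:
    """Toy model of the udp-input shim: unbind removes the port
    registration only when the last binder goes away."""

    def __init__(self) -> None:
        self._refcnt: defaultdict[int, int] = defaultdict(int)

    def bind(self, port: int) -> None:
        self._refcnt[port] += 1          # each app's bind takes a reference

    def unbind(self, port: int) -> bool:
        """Return True if the registration was actually removed."""
        self._refcnt[port] -= 1
        if self._refcnt[port] <= 0:
            del self._refcnt[port]       # last binder gone: stop delivering
            return True
        return False                     # others still bound: keep listening

reg = UdpPortRegistry()
reg.bind(9915)            # app 1 binds port 9915 on its address
reg.bind(9915)            # app 2 binds the same port on another address
print(reg.unbind(9915))   # False - app 1 exits, app 2 keeps receiving
print(reg.unbind(9915))   # True  - listener really removed now
```

With this in place, stopping one application would no longer unregister the port for the remaining binders, which is the failure Raj observed.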

Regards,
Florin

> On Feb 26, 2020, at 1:28 PM, Raj Kumar  wrote:
> 
> Hi,
> When 2 or more UDP rx applications ( using VCL) receiving on the same port ( 
> bind on the same port but different ip address) then on stopping either one 
> of the application , all other application stopped receiving the traffic. As 
> soon as , I restart the application all other applications also start 
> receiving the traffic.
> 
> 
> vpp# sh ip6 int
> 
> vppnet1.2001 is admin up
> 
>   Local unicast address(es):
> 
> fd0d:edc4::2001::213/64
> 
> fd0d:edc4::2001::223/64
> 
>   Link-local address(es):
> 
> fe80::ba83:3ff:fe79:af8c
> 
> When both applications are running : -
> vpp# sh session verbose 1
> ConnectionState  Rx-f  
> Tx-f
> [#0][U] fd0d:edc4::2001::213:9915->:::0   -  0 0
> [#0][U] fd0d:edc4::2001::223:9915->:::0   -  0 0
> Thread 0: active sessions 2
> Thread 1: no sessions
> Thread 2: no sessions
> Thread 3: no sessions
> Thread 4: no sessions
>
> ConnectionState  Rx-f  
> Tx-f
> [#5][U] fd0d:edc4::2001::213:9915->fd0d:edc4:f-  15226 0
> Thread 5: active sessions 1
>
> ConnectionState  Rx-f  
> Tx-f
> [#6][U] fd0d:edc4::2001::223:9915->fd0d:edc4:f-  0 0
> Thread 6: active sessions 1
>
> On stopping first application  : -
> 
> vpp# sh session verbose 1
> 
> ConnectionState  Rx-f  
> Tx-f
> 
> [#0][U] fd0d:edc4::2001::223:9915->:::0   -  0 0
> 
> Thread 0: active sessions 1
> 
> Thread 1: no sessions
> 
> Thread 2: no sessions
> 
> Thread 3: no sessions
> 
> Thread 4: no sessions
> 
> Thread 5: no sessions
> 
>
> ConnectionState  Rx-f  
> Tx-f
> 
> [#6][U] fd0d:edc4::2001::223:9915->fd0d:edc4:f-  0 0
> 
>
> Thread 6: active sessions 1
> 
> One active session is there but in 'sh err'  "no listener punt" error 
> increments and application is not receiving the data.
> 
> 310150540 ip6-udp-lookup no listener punt
> 
> packet trace :- 
> 
> --- Start of thread 6 vpp_wk_5 ---
> Packet 1
>
> 01:12:04:676114: dpdk-input
>   vppnet1 rx queue 2
>   buffer 0x10b18f: current data 0, length 7634, buffer-pool 0, ref-count 1, 
> totlen-nifb 0, trace handle 0x600
>ext-hdr-valid
>   PKT MBUF: port 0, nb_segs 1, pkt_len 7634
> buf_len 9344, data_len 7634, ol_flags 0x182, data_off 128, phys_addr 
> 0x744c6440
> packet_type 0x2e1 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
> rss 0x678c0829 fdir.hi 0x0 fdir.lo 0x678c0829
> Packet Offload Flags
>   PKT_RX_RSS_HASH (0x0002) RX packet with RSS hash result
>   PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
>   PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
> Packet Types
>   RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
>   RTE_PTYPE_L3_IPV6_EXT_UNKNOWN (0x00e0) IPv6 packet with or without 
> extension headers
>   RTE_PTYPE_L4_UDP (0x0200) UDP packet
>   IP6: b8:83:03:79:9f:e8 -> b8:83:03:79:af:8c 802.1q vlan 2001
>   UDP: fd0d:edc4::2001::104 -> fd0d:edc4::2001::223
> tos 0x00, flow label 0x0, hop limit 64, payload length 7576
>   UDP: 23456 -> 9915
> length 7576, checksum 0x225d
> 01:12:04:676154: ethernet-input
>   frame: flags 0x3, hw-if-index 2, sw-if-index 2
>   IP6: b8:83:03:79:9f:e8 -> b8:83:03:79:af:8c 802.1q vlan 2001
> 01:12:04:676156: ip6-input
>   UDP: fd0d:edc4::2001::104 -> fd0d:edc4::2001::223
> tos 0x00, flow label 0x0, hop limit 64, payload length 7576
>   UDP: 23456 -> 9915
> length 7576, checksum 0x225d
> 01:12:04:676164: ip6-lookup
>   fib 0 dpo-idx 20 flow hash: 0x
>   UDP: fd0d:edc4::2001::104 -> fd0d:edc4::2001::223
> tos 0x00, flow label 0x0, hop limit 64, payload length 7576
>   UDP: 23456 -> 9915
> length 7576, checksum 0x225d
> 01:12:04:676166: ip6-local
> UDP: fd0d:edc4::2001::104 -> fd0d:edc4::2001::223
>   tos 0x00, flow label 0x0, hop limit 64, payload length 7576
> UDP: 23456 -> 9915
>   length 7576, checksum 0x225d
> 01:12:04:676169: ip6-udp-lookup
>   UDP: src-port 23456 dst-port 9915
> 01:12:04:676169: i

Re: [vpp-dev] TLS configuration & throughput

2020-02-28 Thread Florin Coras
Hi Dom, 

I guess you’re not using vpp master. Loops/s should appear in the line you 
highlighted. 

Regards,
Florin

> On Feb 28, 2020, at 8:54 AM, dchons via Lists.Fd.Io 
>  wrote:
> 
> Hi Florin,
> 
> I got another test run and was able to do the clear run; show run as you 
> suggested about 10 seconds into the test run just before it ended. I'm not 
> sure where to see the loops/s stat, so I've pasted the output from both 
> client and server below if you would not mind pointing out what I'm looking 
> for there.
> 
> Thank you,
> Dom
> 
> Server side:
> vpp# clear run
> vpp# show run
> Thread 0 vpp_main (lcore 20)
> Time 91.1, 10 sec internal node vector rate 0.00
>   vector rates in 0.e0, out 0.e0, drop 0.e0, punt 0.e0
>  Name                             State      Calls      Vectors  Suspends  Clocks  Vectors/Call
> api-rx-from-ring                 any wait   0          0        23        5.63e5  0.00
> dpdk-process                     any wait   0          0        31        2.25e5  0.00
> fib-walk                         any wait   0          0        46        2.41e3  0.00
> ikev2-manager-process            any wait   0          0        92        1.78e3  0.00
> ip4-full-reassembly-expire-wal   any wait   0          0        9         2.48e3  0.00
> ip4-sv-reassembly-expire-walk    any wait   0          0        9         1.85e3  0.00
> ip6-full-reassembly-expire-wal   any wait   0          0        9         2.13e3  0.00
> ip6-mld-process                  any wait   0          0        92        1.20e3  0.00
> ip6-ra-process                   any wait   0          0        92        1.42e3  0.00
> ip6-sv-reassembly-expire-walk    any wait   0          0        9         2.19e3  0.00
> session-queue-process            any wait   0          0        92        2.76e3  0.00
> statseg-collector-process        time wait  0          0        9         1.95e4  0.00
> unix-cli-stdin                   active     0          0        9         1.08e5  0.00
> unix-epoll-input                 polling    1435898    0        0         1.64e5  0.00
> ---
> Thread 1 vpp_wk_0 (lcore 22)
> Time 91.1, 10 sec internal node vector rate 4.57
>   vector rates in 3.0737e4, out 3.8315e3, drop 5.5956e-1, punt 0.e0
>  Name                             State      Calls      Vectors  Suspends  Clocks  Vectors/Call
> TenGigabitEthernet5/0/0-output   active     349212     349212   0         5.12e2  1.00
> TenGigabitEthernet5/0/0-tx       active     349212     349212   0         7.47e2  1.00
> arp-input                        active     1          1        0         3.55e3  1.00
> arp-reply                        active     1          1        0         3.50e4  1.00
> dpdk-input                       polling    372763251  2452218  0         2.77e4  0.00
> drop                             active     51         51       0         9.22e2  1.00
> error-drop                       active     51         51       0         7.54e2  1.00
> ethernet-input                   active     349264     2452218  0         1.32e2  7.02
> interface-output                 active     1          1        0         8.01e3  1.00
> ip4-drop                         active     3          3        0         2.86e3  1.00
> ip4-input-no-checksum            active     349218     2452169  0         1.26e2  7.02
> ip4-local                        active     349218     2452169  0         4.12e2  7.02
> ip4-lookup                       active     454319     2801380  0         1.33e2  6.17
> ip4-rewrite                      active     349211     349211   0         4.97e2  1.00
> llc-input                        active     45         45       0         7.17e2  1.00
> lldp-input                       active     3          3        0         2.73e3

Re: [vpp-dev] vpp project committers: formal vote to add Matt Smith as a vpp committer

2020-03-02 Thread Florin Coras
+1

Regards,
Florin

> On Mar 2, 2020, at 6:15 AM, d...@barachs.net wrote:
> 
> VPP committers, please vote +1, 0, -1 on adding Matt Smith 
> (mgsm...@netgate.com ) as a vpp project 
> committer. 
> Matt has contributed O(100) merged patches, and he recently contributed the 
> entire vrrp plugin. See 
> https://gerrit.fd.io/r/q/owner:mgsmith%2540netgate.com 
> 
> Please vote (on vpp-dev@lists.fd.io ) by the end 
> of this Wednesday, 2/4/2020, so we can put the results in front of the TSC 
> this Thursday.
> Thanks... Dave
>
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15656): https://lists.fd.io/g/vpp-dev/message/15656
Mute This Topic: https://lists.fd.io/mt/71675525/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Query about internal apps

2020-03-11 Thread Florin Coras
Hi Kusuma, 

Not sure I understand the question. You want to deliver data to internal 
applications (supposedly using the session layer) without going through nodes 
like ip-local? 

If not, what do you mean by “sending through host stack”?

Regards,
Florin

> On Mar 11, 2020, at 11:16 AM, Kusuma DS  wrote:
> 
> Hi, 
> 
> Is there any method to send packets directly to internal apps without sending 
> through Host stack? 
> 
> 
> Regards, 
> Kusuma
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15731): https://lists.fd.io/g/vpp-dev/message/15731
Mute This Topic: https://lists.fd.io/mt/71885250/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Query about internal apps

2020-03-11 Thread Florin Coras
Hi Kusuma, 

Applications that attach to the session layer (through the session api) can 
read/write data, i.e., byte stream or datagrams, that are delivered from/to the 
underlying transports (tcp, udp, quic, tls) via specific session layer apis or 
POSIX-like apis, if they use the vcl library. 

If you want to deliver udp packets to/from a vpp "builtin application", see 
udp_register_dst_port(). On the other hand, if you want to do the same with the 
rest of the transport protocols, you’ll have to reimplement them, since the ones 
we have are “plugged” into the session layer. 

Regards,
Florin

> On Mar 11, 2020, at 12:03 PM, Kusuma DS  wrote:
> 
> Hi Florin, 
> 
> I wanted to avoid using session api. 
> I meant host stack is session apis. 
> 
> Is there any method to send the data to internal apps? 
> 
> Regards, 
> Kusuma
> 
> On Wed, 11 Mar, 2020, 11:56 PM Florin Coras,  <mailto:fcoras.li...@gmail.com>> wrote:
> Hi Kusuma, 
> 
> Not sure I understand the question. You want to deliver data to internal 
> applications (supposedly using the session layer) without going through nodes 
> like ip-local? 
> 
> If not, what do you mean by “sending through host stack”?
> 
> Regards,
> Florin
> 
> > On Mar 11, 2020, at 11:16 AM, Kusuma DS  > <mailto:kusumanjal...@gmail.com>> wrote:
> > 
> > Hi, 
> > 
> > Is there any method to send packets directly to internal apps without 
> > sending through Host stack? 
> > 
> > 
> > Regards, 
> > Kusuma
> > 
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15736): https://lists.fd.io/g/vpp-dev/message/15736
Mute This Topic: https://lists.fd.io/mt/71885250/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Query about internal apps

2020-03-12 Thread Florin Coras
Hi Ole, Vivek,

If I understand this right, you’re looking to intercept l2 packets in vpp 
(supposedly only from certain hosts), process them and maybe generate some 
return traffic. What is the payload of those l2 packets? 

You could write a feature that inspects all traffic on a certain interface and 
intercepts the packets that you’re interested in. 

Alternatively, session layer supports pure shared memory transports, i.e., 
cut-through connections (see slide 16-18 here [1]). For instance, a vpp builtin 
application could receive data directly over shared memory from an external 
application. However, currently session layer only knows how to lookup 
5-tuples, so the two peers (external app and vpp builtin app) need to agree on 
a shared “fake” 5-tuple.

Regards,
Florin

[1] https://wiki.fd.io/images/9/9c/Vpp-hoststack-kc-eu19.pdf

> On Mar 12, 2020, at 1:09 AM, Ole Troan  wrote:
> 
> Hi Vivek,
> 
>> We are trying to achieve the mechanism, something similar to TAP interface, 
>> in VPP.
>> 
>> So, the packets coming out of the TAP interface, will be directed directly 
>> to the application. The application will receive the packets, coming via TAP 
>> interface, process them and send it down via the Host stack.
>> 
>> Possible options, we could think of are:-
>> - Enhance the session layer to provide a L2 transport mechanism and add 
>> nodes like tap-input and tap-out which would achieve the same.
>> - Use the existing session layer by doing a IP/UDP encap and send it to the 
>> APP, via session layer and use existing mechanism.
>>  This introduces an overhead of additional encap/decap.
>> 
>> We wanted to check if there is any alternate option to directly transfer the 
>> packets from the plugin to the VPP App, without even involving the session 
>> layer and have no additional overhead encap/decap,
> 
> Is this similar to the idea of routing directly to the application?
> I.e. give each application an IP address (easier with IPv6), and the 
> application itself links in whatever transport layer it needs. In a VPP 
> context the application could sit behind a memif interface. The application 
> would need some support for IP address assignment, ARP/ND etc.
> Userland networking taken to the extreme. ;-)
> 
> Best regards,
> Ole

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15758): https://lists.fd.io/g/vpp-dev/message/15758
Mute This Topic: https://lists.fd.io/mt/71885250/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] A question about using packetdrill test vpp hoststack

2020-03-12 Thread Florin Coras
Hi Longfei, 

For protocol correctness, the only tool I’ve used is Defensics Codenomicon, 
which has about 1.2M tests and only needs an http terminator. Therefore, nginx 
+ ldp is enough. Having said that, it would be great to also support 
packetdrill. 

As for your issue, I’m not entirely sure why you’re hitting it. Could you try 
replacing the veth pair with a tap interface?

Regards,
Florin

> On Mar 12, 2020, at 2:53 AM, dailongfei  wrote:
> 
> Hi,
> 
> Recently, I want to use packetdrill to test the vpp hoststack. I connect vpp 
> and the kernel protocol stack with a veth pair. The local client runs on the 
> vpp hoststack; the remote side is on the kernel.
>  local <-> vcl <-> vpp <-> veth1 <-> veth0 <-> remote
> 
> On the remote side I just want to receive the layer-2 or layer-3 packets, so 
> I use a raw socket. However, I hit a problem: the raw socket only gets a copy 
> of the packet sent by the local client; the packet is still passed up to the 
> upper layer (layer 4), and the upper layer answers it, which interferes with 
> my test, since the local client only wants to receive the packets sent by the 
> raw socket.
>  local <-> vcl <-> vpp <-> veth1 <-> veth0 <-> raw sock
>                                         |
>                                         ---X--> upper layer
> 
> Did you run into the same problem when testing the vpp hoststack? And do you 
> have any good ideas about testing the vpp hoststack with packetdrill?
> 
> Regards,
> Longfei
>
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15759): https://lists.fd.io/g/vpp-dev/message/15759
Mute This Topic: https://lists.fd.io/mt/71898758/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Is there any Linux FD to poll for VCL message

2020-03-12 Thread Florin Coras
Hi Murthy, 

Yes it does, although we’re guilty of not having it properly documented. 

The message queue used between vpp and a vcl worker can do both mutex/condvar 
and eventfd notifications. The former is the default but you can switch to 
eventfds by adding to vcl.conf "use-mq-eventfd”. You can then use 
vppcom_worker_mqs_epfd to retrieve a vcl worker's epoll fd (it’s an epoll fd 
for historic reasons) which you should be able to nest into your own linux 
epoll fd. 

Note that you’ll also need to force memfd segments for vpp’s message queues, 
i.e., session { evt_qs_memfd_seg }, and use the socket transport for binary 
api, i.e., in vpp’s startup.conf add "socksvr { /path/to/api.sock }" and in 
vcl.conf "api-socket-name /path/to/api.sock”. 
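As a sketch, the two configuration fragments described above would look roughly like this (the socket path is a placeholder from the original mail, and exact section layout may vary by vpp version):

```
# vpp side, startup.conf
session { evt_qs_memfd_seg }
socksvr { /path/to/api.sock }

# application side, vcl.conf
vcl {
  use-mq-eventfd
  api-socket-name /path/to/api.sock
}
```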

Regards,
Florin

> On Mar 12, 2020, at 4:40 AM, Satya Murthy  wrote:
> 
> Hi,
> 
> We have a TCP application trying integrate with VPP-VCL framework.
> 
> Our application has its own dispatch loop with epoll and we would like to 
> know if VCL framework has any linux fd ( like an eventfd for the entire svm 
> message queue ) that we can add into our epoll to poll for VCL session 
> messages.
> 
> Once we get an asynchronous indication that a message has arrived in the VCL 
> svm message queue, we can call vppcom_epoll_wait() function to read the 
> messages for sessions and handle them accordingly. 
> 
> Any inputs on how we can achieve this?
> 
> -- 
> Thanks & Regards,
> Murthy 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15760): https://lists.fd.io/g/vpp-dev/message/15760
Mute This Topic: https://lists.fd.io/mt/71899986/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Is there any Linux FD to poll for VCL message

2020-03-13 Thread Florin Coras
Hi Murthy, 

Inline. 

> On Mar 13, 2020, at 4:54 AM, Satya Murthy  wrote:
> 
> Hi Florin,
> 
> Thank you very much for the inputs.
> These are very difficult to understand unless we go through the code in 
> detail.
> Today, Whole day, I was trying to follow your instructions and get this 
> working by looking at the code as well.

I’d recommend starting here 
https://wiki.fd.io/images/9/9c/Vpp-hoststack-kc-eu19.pdf

> However, I am not fully successful. 
> Before going further, I would like to get my understanding clear, so that it 
> will form basis for my debugging.
> 
> Here are couple of questions:
> 1) 
> ==
> The message queue used between vpp and a vcl worker can do both mutex/condvar 
> and eventfd notifications. The former is the default but you can switch to 
> eventfds by adding to vcl.conf "use-mq-eventfd”. You can then use 
> vppcom_worker_mqs_epfd to retrieve a vcl worker's epoll fd (it’s an epoll fd 
> for historic reasons) which you should be able to nest into your own linux 
> epoll fd. 
> ==
> The message queues between VPP and VCL can be present either in "shared 
> memory" (or) "memfd segments".
> For eventfd to work, the queues need to be present in the "memfd segments".
> Is this correct understanding ?

FC: VCL workers and vpp use message queues to exchange io/ctrl messages and 
fifos to exchange data. Both are allocated in shared memory segments which can 
be of two flavors, POSIX shm segments (shm_open) or memfd based (memfd 
exchanged over a unix socket). For the latter to work, we use the binary api’s 
socket transport to exchange the fds.

If configured to use eventds for message queue signaling, vpp must exchange 
those eventfds with the vcl workers. For that, we also use the binary api’s 
socket. That should explain why the binary api’s socket transport is needed. 
Note also that this is just a configuration from vcl consumer perspective, it 
should not otherwise affect the app. 

> 
> 2) 
> ==
> Note that you’ll also need to force memfd segments for vpp’s message queues, 
> i.e., session { evt_qs_memfd_seg }, and use the socket transport for binary 
> api, i.e., in vpp’s startup.conf add "socksvr { /path/to/api.sock }" and in 
> vcl.conf "api-socket-name /path/to/api.sock”. 
> ==
> I didnt understand the reason for moving the binary api to sockets.
> Is this due to shm/memfd wont be used at the same time ? 

FC: I hope it’s clear now why you need to move to the binary api’s socket 
transport. 

> 
> 3) 
> In a nutshell:
> 
> VCL-APP <--- message queues in shared memory ---> VPP
> VCL-APP <--- binary API via Linux domain sockets ---> VPP
> 
> We will have two api clients with this model. One is a shared memory client 
> and the other is a socket client.

FC: If you mean to point out that there are two channels from vcl to vpp, 
that’s correct. The first is the one described above, but note that the message 
queues are not bidirectional. VPP has another set of message queues the apps 
use to enqueue notifications towards vpp.

As for the second channel, the binary api, it’s used to 1) attach vcl to the 
session layer, 2) its socket is used for exchanging fds (memfds and eventfds) 
3) sometimes for exchanging configuration. 

But again, apart from configuration changes, this should be completely 
transparent to vcl consumers. 

Regards,
Florin

> 
> Is my understanding correct ? 
> -- 
> Thanks & Regards,
> Murthy 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15778): https://lists.fd.io/g/vpp-dev/message/15778
Mute This Topic: https://lists.fd.io/mt/71899986/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Query about internal apps

2020-03-13 Thread Florin Coras
Hi Vivek, 

Inline.

> On Mar 12, 2020, at 11:06 PM, Vivek Gupta  wrote:
> 
> Hi Florin,
> 
> Please see inline.
> 
> Regards,
> Vivek
> 
> -Original Message-
> From: Florin Coras  
> Sent: Thursday, March 12, 2020 8:09 PM
> To: Ole Troan 
> Cc: Vivek Gupta ; kusumanjal...@gmail.com; 
> vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] Query about internal apps
> 
> Hi Ole, Vivek,
> 
> If I understand this right, you’re looking to intercept l2 packets in vpp 
> (supposedly only from certain hosts), process them and maybe generate some 
> return traffic. What is the payload of those l2 packets? 
> Vivek> We are trying to intercept the complete l2 packets, coming on certain 
> interfaces, and pass it on the user application and user application can then 
> process those and forward them to the actual destination. 
> Payload will be the regular ethernet packets.

FC: If you could deliver data directly to the builtin application, would you 
still need L2 framing? Do you plan to intercept just a subset of the L2 frames 
on an interface or all?

Also, can the source of those frames be changed to use vcl? If not, your only 
option is to intercept packets somewhere in vpp. 
 
> 
> You could write a feature that inspects all traffic on a certain interface 
> and intercepts the packets that you’re interested in. 
> Vivek> However, for these intercepted packets, what should be right way to 
> pass it to the VPP internal application? For the session layer, we would need 
> a transport_connection for L2_proto and pass traffic through the session 
> using the fake tuple lookup. 

FC: A transport is needed if the two peers must exchange data over network 
interfaces. Cut through connections have no transport protocol framing as 
they’re just shared memory buffers over which data can be exchanged with 
whatever framing the two peer apps decide (or no framing at all).

Regards,
Florin
 
> 
> Alternatively, session layer supports pure shared memory transports, i.e., 
> cut-through connections (see slide 16-18 here [1]). For instance, a vpp 
> builtin application could receive data directly over shared memory from an 
> external application. However, currently session layer only knows how to 
> lookup 5-tuples, so the two peers (external app and vpp builtin app) need to 
> agree on a shared “fake” 5-tuple.
> 
> 
> Regards,
> Florin
> 
> [1] https://wiki.fd.io/images/9/9c/Vpp-hoststack-kc-eu19.pdf
> 
>> On Mar 12, 2020, at 1:09 AM, Ole Troan  wrote:
>> 
>> Hi Vivek,
>> 
>>> We are trying to achieve the mechanism, something similar to TAP interface, 
>>> in VPP.
>>> 
>>> So, the packets coming out of the TAP interface, will be directed directly 
>>> to the application. The application will receive the packets, coming via 
>>> TAP interface, process them and send it down via the Host stack.
>>> 
>>> Possible options, we could think of are:-
>>> - Enhance the session layer to provide a L2 transport mechanism and add 
>>> nodes like tap-input and tap-out which would achieve the same.
>>> - Use the existing session layer by doing a IP/UDP encap and send it to the 
>>> APP, via session layer and use existing mechanism.
>>> This introduces an overhead of additional encap/decap.
>>> 
>>> We wanted to check if there is any alternate option to directly transfer 
>>> the packets from the plugin to the VPP App, without even involving the 
>>> session layer and have no additional overhead encap/decap,
>> 
>> Is this similar to the idea of routing directly to the application?
>> I.e. give each application an IP address (easier with IPv6), and the 
>> application itself links in whatever transport layer it needs. In a VPP 
>> context the application could sit behind a memif interface. The application 
>> would need some support for IP address assignment, ARP/ND etc.
>> Userland networking taken to the extreme. ;-)
>> 
>> Best regards,
>> Ole
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15777): https://lists.fd.io/g/vpp-dev/message/15777
Mute This Topic: https://lists.fd.io/mt/71885250/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Is there any Linux FD to poll for VCL message

2020-03-13 Thread Florin Coras
Hi Murthy, 

Glad it helps! By construction, when vcl is initialized (vppcom_app_create()) 
only one worker is initialized. As long as you don’t register other workers 
(vppcom_worker_register), you don’t really have to worry about anything else 
and there are no performance penalties.

It’s important to remember that vcl is not thread safe by itself, i.e., workers 
will never take any locks exactly because we want them to be as light weight as 
possible. If the application does not want to manage locking/forking, it should 
use vcl_locked (vls) but that’s an entirely different topic. 

Regards,
Florin

> On Mar 13, 2020, at 9:49 AM, Satya Murthy  wrote:
> 
> Hi Florin,
> 
> Thanks a lot for the detailed explanation. This kind of gives an overview of 
> this area, which really helps in our integration.
> 
> Just one more question:
> We are planning to remove the concept of a vcl worker in our app, as our app 
> is single-threaded and will not be multi-threaded at any point in the future. 
> Hope this is doable and does not pose any restrictions in any of the VCL s/w 
> layers.
>
> -- 
> Thanks & Regards,
> Murthy 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15781): https://lists.fd.io/g/vpp-dev/message/15781
Mute This Topic: https://lists.fd.io/mt/71899986/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Is there any Linux FD to poll for VCL message

2020-03-16 Thread Florin Coras
Hi Murthy, 

Inline.

> On Mar 16, 2020, at 12:35 AM, Satya Murthy  wrote:
> 
> Hi Florin,
>
> Over the weekend, I went through the document that you mentioned and it gave 
> me a good overview. Thanks for pointing to that doc.
> However, my task of integrating mqs->epfd into our main dispatch loop still 
> seems to need finer details of the code.

Glad it helped. 

>
> With this respect, I have following few queries. It would be great if you can 
> help with this.
> If this path poses more effort/risk in integration, we may want to switch to 
> host interface approach ( which has its own issues though :(  )

This does need more effort but as long as you understand what you’re doing it 
should be fine. 

>
> 1) Terminology mapping.
> In the VCL application framework, is the following mapping a correct 
> understanding (with some open items) ?
>
> TCP-Application => app with a specific client index

I’d call the app an application that needs a tcp connection. And yes, it will 
have a client index in vpp. 

> Worker  => thread in the tcp-application 

For your use case, yes. New threads are not implicitly mapped to new workers. 
 
> Session => tcp end point

Session in vcl is equivalent to an fd. The actual tcp endpoint is in the tcp 
transport layer which sits under the session layer in vpp.

> mq in worker=> mapped to ?

There’s no mapping here. The message queue is the “channel” on which vcl 
receives io/ctrl notifications from vpp. 

> mq_evt_conn => mapped to ?

Applications don’t have access to this. They only have access to the “mqs epoll 
fd”, i.e., mqs_epfd. Previously, the workers could’ve had more fds in their 
mqs_epfd but today we can only have 1, the mq’s eventfd, if one was configured. 
Initially, this mqs_epfd was used to epoll all of the fds in it, now it’s used 
to epoll the mq’s eventfd. In the future, this could be converted to a normal 
fd, but it should be transparent for the app. 

>
> 2) How come vppcom_epoll_create creates a new session? A bit confusing here.

That’s like creating a new fd, i.e., you’re given a “session handle” in return. 
Why would it be confusing? 

>
> 3) Is the TCP server listen_fd also can be integrated into our application 
> dispatch loop ? 

Not sure I understood the question. Are you asking if you can integrate a 
listener fd in a vcl epoll fd? Then, yes. 

>
> 4) How do we get hold of mqs->epfd to put into our dispatcher? 
>I see that vcl_worker_t is inside vcl_private.c and .h, which are not 
> accessible by the application directly.
>Also, I don't see a direct need for the vcl_test_worker* framework in our 
> app. Hope this is fine.

vcl_test_worker is a test toy app, so you really don’t need it. Applications 
should only use vppcom.h and the function you’re looking for is 
vppcom_worker_mqs_epfd. That will return the worker’s mqs_epfd which you can 
afterwards EPOLL_CTL_ADD to your linux epoll fd. 

>
> 5) why can't the TCP application directly use vcl_worker_t data structure ?

Because we don’t want applications to interact/depend on vcl internals. Also, 
the goal with vcl is to have a posix-like api. You should not need more than 
the session handles and the apis exposed in vppcom.h

>   
> 6) Is there a sample tcp server example that has the server implementation 
> with linux epoll instead of vppcom_epoll system.

A simple one, no, as far as I know. This mechanism was used in the envoy 
integration done here [1], but that’s not probably the easiest thing to follow. 

[1] https://github.com/sbelair2/envoy/tree/vpp_integration 


>
> 7) Do you think this approach of integration will be more complex, and would 
> you therefore suggest we move to the LDP (or) host interface approach?
>Please let us know.

It will be slightly more complicated because you’ll be “manually” tracking the 
mq’s activity. But in terms of actual extra code needed, you only need to make 
sure that whenever you get an epoll event on the mqs_epfd, you call 
vppcom_epoll_wait() with the vcl epoll fd to which you’ve added your app’s 
sessions (probably including the listen_fd you mentioned above). 
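The mechanics of that nesting are plain Linux epoll, so they can be sketched without vcl at all. In the standalone Python sketch below, a pipe stands in for the mq's eventfd and the inner epoll fd stands in for the worker's mqs_epfd (hypothetical stand-ins; at the marked point a real dispatch loop would call vppcom_epoll_wait()):

```python
import os
import select

def nested_epoll_wakeup():
    # The pipe's read end stands in for the message queue's eventfd.
    r, w = os.pipe()

    # Inner epoll fd: the stand-in for the vcl worker's mqs_epfd.
    inner = select.epoll()
    inner.register(r, select.EPOLLIN)

    # The app nests the inner epoll fd into its own dispatch loop's epoll;
    # an epoll fd reads as EPOLLIN-ready whenever it has ready events.
    outer = select.epoll()
    outer.register(inner.fileno(), select.EPOLLIN)

    os.write(w, b"\x01")             # simulate vpp signaling the mq
    events = outer.poll(timeout=1.0)
    woke = [fd for fd, _ in events] == [inner.fileno()]

    # ... here a real dispatch loop would call vppcom_epoll_wait() ...
    inner.close(); outer.close(); os.close(r); os.close(w)
    return woke

print(nested_epoll_wakeup())  # True
```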

>
> Thanks in advance for your time.

Hope it helps. 

Regards, 
Florin

>
> -- 
> Thanks & Regards,
> Murthy 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15791): https://lists.fd.io/g/vpp-dev/message/15791
Mute This Topic: https://lists.fd.io/mt/71899986/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] how to create a session from thin air?

2020-03-16 Thread Florin Coras
Hi Andreas, 

From the info below, I guess that you want to build a transparent tcp 
terminator/proxy. For that, you’ll be forced to do a), because the ip-local 
path is purely for consuming packets whose destination is a local ip address. 
Moreover, you’ll have to properly classify/match all packets to connections 
and hand them to tcp-input (or better yet tcp-input-nolookup) for tcp 
processing.

Regarding the passing of data, is that at connection establishment or 
throughout the lifetime of the connection? If the former, your classifier 
together with your builtin app will have to instantiate tcp connections and 
sessions “manually” and properly initialize them whenever it detects a new 
flow. APIs like session_alloc and tcp_connection_alloc are already exposed. 

Regards,
Florin

> On Mar 16, 2020, at 10:39 AM, Andreas Schultz 
>  wrote:
> 
> Hi,
> 
> In our UPF plugin [1], I need to terminate a TCP connection with a non-local 
> destination IP *and* pass metadata from the plugin into the session.
> 
> I have solve this for the moment with some very ugly hacks. Florin Coras has 
> rightly criticise those hacks in earlier version of the plugin, but I have 
> not found a clean solution, yet.
> 
> The UPF plugin is basically a per session mini router instance (that wasn't 
> my idea, that is the way the specifications are written). It detects a TCP 
> connection that it needs to handle with rules that are unique for a given 
> session and then has to apply rules that are also unique per session to that 
> TCP connection. For the moment only HTTP with redirect rules are handled 
> (your normal captive portal use case).
> 
> What I need to do is:
>   a) detect the UPF session and the TCP connection in a packet forwarding 
> graph node and create a TCP session from it. The destination IP will not be 
> local, so the normal local input does not work.
>   b) pass metadata (the matched session and rule) into the TCP connection.
> 
> a) is somewhat doable, but passing metadata from the detection node into the 
> session proves challenging (without reimplementing all of the TCP input 
> node). There are no fields (except for IP headers) that are passed from the 
> vnet buffer into the TCP connection.
> 
> Any hints or ideas?
> 
> Regards,
> Andreas
> 
> 
> [1]: https://gerrit.fd.io/r/c/vpp/+/15798 
> <https://gerrit.fd.io/r/c/vpp/+/15798>
> 
> -- 
> Andreas Schultz
> 
> -- 
> 
> Principal Engineer
> 
> t: +49 391 819099-224
> 
> 
> --- enabling your networks 
> -
> 
> Travelping GmbH 
> Roentgenstraße 13
> 39108 Magdeburg
> Germany
> 
> 
> t: +49 391 819099-0
> f: +49 391 819099-299
> 
> e: i...@travelping.com <mailto:i...@travelping.com>
> w: https://www.travelping.com/ <https://www.travelping.com/>
> Company registration: Amtsgericht Stendal 
> Geschaeftsfuehrer: Holger Winkelmann
> Reg. No.: HRB 10578
> VAT ID: DE236673780
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15794): https://lists.fd.io/g/vpp-dev/message/15794
Mute This Topic: https://lists.fd.io/mt/72004409/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-

