Re: [vpp-dev] Regarding high speed I/O with kernel

2019-12-10 Thread chetan bhasin
Sounds good. Thanks Ben for the response!



On Tue, Dec 10, 2019 at 5:00 PM Benoit Ganne (bganne) 
wrote:

> Hi,
>
> > I have used the CLIs below to create rdma interfaces over Mellanox. Can you
> > suggest what set of CLIs I should use so that packets from rdma will also
> > have mbuf fields set properly, so that we can write directly to KNI?
>
> You do not have to. Just create a KNI interface in VPP with the DPDK
> plugin and switch packets between KNI and rdma interfaces.
> VPP never uses DPDK mbufs internally; when you get packets from/to DPDK in
> VPP there is a buffer metadata translation anyway. From our PoV this is no
> different from switching packets between a vhost interface and a DPDK
> hardware interface (e.g. VIC).
>
> Best
> ben
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14861): https://lists.fd.io/g/vpp-dev/message/14861
Mute This Topic: https://lists.fd.io/mt/67470059/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] VPP VSZ shoots to 200GB because of DPDK plugin

2019-12-10 Thread siddarth rai
Hello all,

I am working with VPP 19.04. I noticed that the VSZ of the VPP process is showing 200+ GB.

On further debugging, I discovered that the 'dpdk_plugin' is the one
causing this. If I disable the dpdk plugin, the VSZ falls below 20G.

Can anyone help me understand what it is in the dpdk plugin that causes this
bulge in VSZ? Is there any way to reduce it?

Any help would be appreciated

Regards,
Siddarth Rai
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14860): https://lists.fd.io/g/vpp-dev/message/14860
Mute This Topic: https://lists.fd.io/mt/68143971/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Spurious API CRC failures

2019-12-10 Thread Dave Wallace

Correction inline...

On 12/10/2019 10:37 PM, Dave Wallace via Lists.Fd.Io wrote:

Jan/Vratko,

I spent the past several hours attempting to debug this issue.  When 
testing locally, using vpp master HEAD and csit oper-191209, I was 
able to reproduce the problem when running 
csit/resources/tools/integrated/check_crc.py


After attempting several iterations of reverting [0] and/or [1], I
found that adding the CRCs that were changed in [0] [1] to the main
collection in csit/resources/api/vpp/supported_crcs.yaml would pass
locally.  Unfortunately, when I pushed the patch [2], it failed to pass
the CRC check in the csit-vpp-device-master-ubuntu1804-1n-skx verify
job [2] (which I subsequently abandoned).


The changes in [1] don't look correct to me, since they are not
related to gerrit 21706/17. I would think they either belong in their
own collection, or the main collection should be modified with the new
CRCs and not the ones in the "21706/17" collection.  I tried the
latter experiment, but the verify check failed locally.
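
(For context, the collections in that file are essentially mappings from API
message name to expected CRC. The lines below are a purely illustrative sketch
using two values from the failure output quoted below; the real layout is
whatever csit/resources/api/vpp/supported_crcs.yaml in the CSIT repo defines.)

  # illustrative only -- see supported_crcs.yaml in the CSIT repo for the real schema
  21706/17:
    ip_address_details: '0xb1199745'
    ip_route_add_del: '0xc1ff832d'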


At this point, it seems to me that there is a bug in the VppApiChecker,
but it is not clear to me where the root cause is when looking at the
code. Or perhaps my local runtime environment is not correct.  I'll
let you investigate further.


Hopefully this will help you resolve the issue quicker.

Thanks,
-daw-
[0] https://gerrit.fd.io/r/c/csit/+/23914
[1] https://gerrit.fd.io/r/c/csit/+/23921
[2] https://gerrit.fd.io/r/c/csit/+/23926

On 12/10/2019 9:03 AM, Dave Barach via Lists.Fd.Io wrote:


Folks,

This patch (among others) https://gerrit.fd.io/r/c/vpp/+/23625 
changes zero APIs, but fails API CRC validation.


Please fix AYEC. We’re dead in the water.

Thanks... Dave

04:37:59 /w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py
04:38:10 Searching '/w/workspace/vpp-csit-verify-api-crc-master/src' for .api files.
04:38:10 json files written to: /w/workspace/vpp-csit-verify-api-crc-master/build-root/install-vpp-native/vpp/share/vpp/api/.
04:38:10 +++ python3 csit/resources/tools/integrated/check_crc.py
04:38:11 RuntimeError:
04:38:11 Incompatible API CRCs found in .api.json files:
04:38:11 {
04:38:11  "ip_address_details":"0xb1199745",
04:38:11  "ip_address_dump":"0x2d033de4",
04:38:11  "ip_neighbor_add_del":"0x105518b6",
04:38:11  "ip_route_add_del":"0xc1ff832d",
04:38:11  "ip_table_add_del":"0x0ffdaec0",
04:38:11  "sw_interface_ip6nd_ra_config":"0x3eb00b1c"
04:38:11 }
04:38:11 RuntimeError('Incompatible API CRCs found in .api.json files:\n{\n "ip_address_details":"0xb1199745",\n "ip_address_dump":"0x2d033de4",\n "ip_neighbor_add_del":"0x105518b6",\n "ip_route_add_del":"0xc1ff832d",\n "ip_table_add_del":"0x0ffdaec0",\n "sw_interface_ip6nd_ra_config":"0x3eb00b1c"\n}',)
04:38:11
04:38:11 @@@
04:38:11
04:38:11 VPP CSIT API CHECK FAIL!
04:38:11
04:38:11 This means the patch under test has missing messages,
04:38:11 or messages with unexpected CRCs compared to what CSIT needs.
04:38:11 Either this Change and/or its ancestors were editing .api files,
04:38:11 or your chain is not rebased upon the recent enough VPP codebase.
04:38:11
04:38:11 Please rebase the patch to see if that fixes the problem.
04:38:11 If that fails email csit-...@lists.fd.io for a new
04:38:11 operational branch supporting the api changes.
04:38:11
04:38:11 @@@
04:38:11 Build step 'Execute shell' marked build as failure




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14859): https://lists.fd.io/g/vpp-dev/message/14859
Mute This Topic: https://lists.fd.io/mt/67971752/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Spurious API CRC failures

2019-12-10 Thread Dave Wallace

Jan/Vratko,

I spent the past several hours attempting to debug this issue. When 
testing locally, using vpp master HEAD and csit oper-191209, I was able 
to reproduce the problem when running 
csit/resources/tools/integrated/check_crc.py


After attempting several iterations of reverting [0] and/or [1], I found
that adding the CRCs that were changed in [0] to the main collection in
csit/resources/api/vpp/supported_crcs.yaml would pass locally.
Unfortunately, when I pushed the patch [2], it failed to pass the CRC
check in the csit-vpp-device-master-ubuntu1804-1n-skx verify job [2]
(which I subsequently abandoned).


The changes in [1] don't look correct to me, since they are not related
to gerrit 21706/17. I would think they either belong in their own
collection, or the main collection should be modified with the new CRCs
and not the ones in the "21706/17" collection.  I tried the latter
experiment, but the verify check failed locally.


At this point, it seems to me that there is a bug in the VppApiChecker,
but it is not clear to me where the root cause is when looking at the
code. Or perhaps my local runtime environment is not correct.  I'll let
you investigate further.


Hopefully this will help you resolve the issue quicker.

Thanks,
-daw-
[0] https://gerrit.fd.io/r/c/csit/+/23914
[1] https://gerrit.fd.io/r/c/csit/+/23921
[2] https://gerrit.fd.io/r/c/csit/+/23926

On 12/10/2019 9:03 AM, Dave Barach via Lists.Fd.Io wrote:


Folks,

This patch (among others) https://gerrit.fd.io/r/c/vpp/+/23625 changes 
zero APIs, but fails API CRC validation.


Please fix AYEC. We’re dead in the water.

Thanks... Dave

04:37:59 /w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py
04:38:10 Searching '/w/workspace/vpp-csit-verify-api-crc-master/src' for .api files.
04:38:10 json files written to: /w/workspace/vpp-csit-verify-api-crc-master/build-root/install-vpp-native/vpp/share/vpp/api/.
04:38:10 +++ python3 csit/resources/tools/integrated/check_crc.py
04:38:11 RuntimeError:
04:38:11 Incompatible API CRCs found in .api.json files:
04:38:11 {
04:38:11  "ip_address_details":"0xb1199745",
04:38:11  "ip_address_dump":"0x2d033de4",
04:38:11  "ip_neighbor_add_del":"0x105518b6",
04:38:11  "ip_route_add_del":"0xc1ff832d",
04:38:11  "ip_table_add_del":"0x0ffdaec0",
04:38:11  "sw_interface_ip6nd_ra_config":"0x3eb00b1c"
04:38:11 }
04:38:11 RuntimeError('Incompatible API CRCs found in .api.json files:\n{\n "ip_address_details":"0xb1199745",\n "ip_address_dump":"0x2d033de4",\n "ip_neighbor_add_del":"0x105518b6",\n "ip_route_add_del":"0xc1ff832d",\n "ip_table_add_del":"0x0ffdaec0",\n "sw_interface_ip6nd_ra_config":"0x3eb00b1c"\n}',)
04:38:11
04:38:11 @@@
04:38:11
04:38:11 VPP CSIT API CHECK FAIL!
04:38:11
04:38:11 This means the patch under test has missing messages,
04:38:11 or messages with unexpected CRCs compared to what CSIT needs.
04:38:11 Either this Change and/or its ancestors were editing .api files,
04:38:11 or your chain is not rebased upon the recent enough VPP codebase.
04:38:11
04:38:11 Please rebase the patch to see if that fixes the problem.
04:38:11 If that fails email csit-...@lists.fd.io for a new
04:38:11 operational branch supporting the api changes.
04:38:11
04:38:11 @@@
04:38:11 Build step 'Execute shell' marked build as failure




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14858): https://lists.fd.io/g/vpp-dev/message/14858
Mute This Topic: https://lists.fd.io/mt/67971752/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] How to enable logs in VPP?

2019-12-10 Thread Gudimetla, Leela Sankar
Hello,

I am looking for some information on how to enable error logs, warnings, etc.
I see a bunch of logging calls in the code, like clib_error_xxx(), clib_warning_xxx(), etc.

Can someone share how to enable these logs and capture them to a file on the target?

Thanks,
Leela sankar
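
A minimal startup.conf sketch for sending log output to a file (a sketch only:
section and option names as best understood for recent VPP releases, and the
log path is just an example):

  unix {
    # write VPP's log messages to a file (path is an example)
    log /var/log/vpp/vpp.log
  }
  logging {
    # default levels for the vlib logging infrastructure;
    # recent releases can also show these at runtime via "show logging"
    default-log-level debug
    default-syslog-log-level info
  }

clib_warning()/clib_error() output generally goes to stderr/syslog when VPP
runs as a daemon, so checking syslog or the journal on the target is also
worth a try.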
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14857): https://lists.fd.io/g/vpp-dev/message/14857
Mute This Topic: https://lists.fd.io/mt/68006780/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] FD.io Jenkins Maintenance: 2019-12-10 1900 UTC to 2200 UTC

2019-12-10 Thread Vanessa Valderrama
Maintenance is complete. All systems are available. Please open a ticket
at support.linuxfoundation.org if you experience any issues.

Thank you,
Anton & Vanessa


On 12/10/19 1:04 PM, Vanessa Valderrama wrote:
>
> Starting maintenance
>
> On 12/10/19 7:15 AM, Vanessa Valderrama wrote:
>>
>> Jenkins sandbox maintenance is complete. Jenkins production will be shut down at
>> 1800 UTC in preparation for maintenance.
>>
>> Thanks,
>> Vanessa
>>
>>
>> On 12/3/19 9:57 AM, Vanessa Valderrama wrote:
>>>
>>> *What:*
>>>
>>>   * Jenkins
>>>   o OS and security updates
>>>   o Upgrade to 2.190.3
>>>   o Plugin updates
>>>   * Nexus
>>>   o OS updates
>>>   * Jira
>>>   o OS updates
>>>   * Gerrit
>>>   o OS updates
>>>   * Sonar
>>>   o OS updates
>>>   * OpenGrok
>>>   o OS updates
>>>
>>> *When:  *2019-12-10 1900 UTC to 2200 UTC
>>>
>>> *Impact:*
>>>
>>> Maintenance will require a reboot of each FD.io system. Jenkins will
>>> be placed in shutdown mode at 1800 UTC. Please let us know if
>>> specific jobs cannot be aborted.
>>> The following systems will be unavailable during the maintenance window:
>>>
>>>   *     Jenkins sandbox
>>>   *     Jenkins production
>>>   *     Nexus
>>>   *     Jira
>>>   *     Gerrit
>>>   *     Sonar
>>>   *     OpenGrok
>>>
>>>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14856): https://lists.fd.io/g/vpp-dev/message/14856
Mute This Topic: https://lists.fd.io/mt/65762523/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] FD.io Jenkins Maintenance: 2019-12-10 1900 UTC to 2200 UTC

2019-12-10 Thread Vanessa Valderrama
Starting maintenance

On 12/10/19 7:15 AM, Vanessa Valderrama wrote:
>
> Jenkins sandbox maintenance is complete. Jenkins production will be shut down at
> 1800 UTC in preparation for maintenance.
>
> Thanks,
> Vanessa
>
>
> On 12/3/19 9:57 AM, Vanessa Valderrama wrote:
>>
>> *What:*
>>
>>   * Jenkins
>>   o OS and security updates
>>   o Upgrade to 2.190.3
>>   o Plugin updates
>>   * Nexus
>>   o OS updates
>>   * Jira
>>   o OS updates
>>   * Gerrit
>>   o OS updates
>>   * Sonar
>>   o OS updates
>>   * OpenGrok
>>   o OS updates
>>
>> *When:  *2019-12-10 1900 UTC to 2200 UTC
>>
>> *Impact:*
>>
>> Maintenance will require a reboot of each FD.io system. Jenkins will
>> be placed in shutdown mode at 1800 UTC. Please let us know if
>> specific jobs cannot be aborted.
>> The following systems will be unavailable during the maintenance window:
>>
>>   *     Jenkins sandbox
>>   *     Jenkins production
>>   *     Nexus
>>   *     Jira
>>   *     Gerrit
>>   *     Sonar
>>   *     OpenGrok
>>
>>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14855): https://lists.fd.io/g/vpp-dev/message/14855
Mute This Topic: https://lists.fd.io/mt/65762523/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Spurious API CRC failures

2019-12-10 Thread Jan Gelety via Lists.Fd.Io
Hello,

It is caused by incorrect voting in VPP patch
https://gerrit.fd.io/r/c/vpp/+/23887, where the -1 from the API CRC check was
overwritten by the ARM job.

The CSIT operational branch is already able to verify these new CRCs, so please do a
recheck on all affected VPP commits.

Regards,
Jan

From: vpp-dev@lists.fd.io  On Behalf Of Dave Barach via 
Lists.Fd.Io
Sent: Tuesday, December 10, 2019 3:04 PM
To: csit-...@lists.fd.io; Maciek Konstantynowicz (mkonstan) 
; Andrew Yourtchenko 
Cc: vpp-dev@lists.fd.io
Subject: [vpp-dev] Spurious API CRC failures

Folks,

This patch (among others) https://gerrit.fd.io/r/c/vpp/+/23625 changes zero 
APIs, but fails API CRC validation.

Please fix AYEC. We're dead in the water.

Thanks... Dave

04:37:59 
/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py
04:38:10 Searching '/w/workspace/vpp-csit-verify-api-crc-master/src' for .api 
files.
04:38:10 json files written to: 
/w/workspace/vpp-csit-verify-api-crc-master/build-root/install-vpp-native/vpp/share/vpp/api/.
04:38:10 +++ python3 csit/resources/tools/integrated/check_crc.py
04:38:11 RuntimeError:
04:38:11 Incompatible API CRCs found in .api.json files:
04:38:11 {
04:38:11  "ip_address_details":"0xb1199745",
04:38:11  "ip_address_dump":"0x2d033de4",
04:38:11  "ip_neighbor_add_del":"0x105518b6",
04:38:11  "ip_route_add_del":"0xc1ff832d",
04:38:11  "ip_table_add_del":"0x0ffdaec0",
04:38:11  "sw_interface_ip6nd_ra_config":"0x3eb00b1c"
04:38:11 }
04:38:11 RuntimeError('Incompatible API CRCs found in .api.json files:\n{\n 
"ip_address_details":"0xb1199745",\n "ip_address_dump":"0x2d033de4",\n 
"ip_neighbor_add_del":"0x105518b6",\n "ip_route_add_del":"0xc1ff832d",\n 
"ip_table_add_del":"0x0ffdaec0",\n 
"sw_interface_ip6nd_ra_config":"0x3eb00b1c"\n}',)
04:38:11
04:38:11 @@@
04:38:11
04:38:11 VPP CSIT API CHECK FAIL!
04:38:11
04:38:11 This means the patch under test has missing messages,
04:38:11 or messages with unexpected CRCs compared to what CSIT needs.
04:38:11 Either this Change and/or its ancestors were editing .api files,
04:38:11 or your chain is not rebased upon the recent enough VPP codebase.
04:38:11
04:38:11 Please rebase the patch to see if that fixes the problem.
04:38:11 If that fails email csit-...@lists.fd.io 
for a new
04:38:11 operational branch supporting the api changes.
04:38:11
04:38:11 @@@
04:38:11 Build step 'Execute shell' marked build as failure

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14854): https://lists.fd.io/g/vpp-dev/message/14854
Mute This Topic: https://lists.fd.io/mt/67971752/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Coverity run FAILED as of 2019-12-10 14:05:22 UTC

2019-12-10 Thread Noreply Jenkins
Coverity run failed today.

Current number of outstanding issues is 2
Newly detected: 0
Eliminated: 0
More details can be found at  
https://scan.coverity.com/projects/fd-io-vpp/view_defects
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14853): https://lists.fd.io/g/vpp-dev/message/14853
Mute This Topic: https://lists.fd.io/mt/67971785/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Spurious API CRC failures

2019-12-10 Thread Dave Barach via Lists.Fd.Io
Folks,

This patch (among others) https://gerrit.fd.io/r/c/vpp/+/23625 changes zero 
APIs, but fails API CRC validation.

Please fix AYEC. We're dead in the water.

Thanks... Dave

04:37:59 
/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py
04:38:10 Searching '/w/workspace/vpp-csit-verify-api-crc-master/src' for .api 
files.
04:38:10 json files written to: 
/w/workspace/vpp-csit-verify-api-crc-master/build-root/install-vpp-native/vpp/share/vpp/api/.
04:38:10 +++ python3 csit/resources/tools/integrated/check_crc.py
04:38:11 RuntimeError:
04:38:11 Incompatible API CRCs found in .api.json files:
04:38:11 {
04:38:11  "ip_address_details":"0xb1199745",
04:38:11  "ip_address_dump":"0x2d033de4",
04:38:11  "ip_neighbor_add_del":"0x105518b6",
04:38:11  "ip_route_add_del":"0xc1ff832d",
04:38:11  "ip_table_add_del":"0x0ffdaec0",
04:38:11  "sw_interface_ip6nd_ra_config":"0x3eb00b1c"
04:38:11 }
04:38:11 RuntimeError('Incompatible API CRCs found in .api.json files:\n{\n 
"ip_address_details":"0xb1199745",\n "ip_address_dump":"0x2d033de4",\n 
"ip_neighbor_add_del":"0x105518b6",\n "ip_route_add_del":"0xc1ff832d",\n 
"ip_table_add_del":"0x0ffdaec0",\n 
"sw_interface_ip6nd_ra_config":"0x3eb00b1c"\n}',)
04:38:11
04:38:11 @@@
04:38:11
04:38:11 VPP CSIT API CHECK FAIL!
04:38:11
04:38:11 This means the patch under test has missing messages,
04:38:11 or messages with unexpected CRCs compared to what CSIT needs.
04:38:11 Either this Change and/or its ancestors were editing .api files,
04:38:11 or your chain is not rebased upon the recent enough VPP codebase.
04:38:11
04:38:11 Please rebase the patch to see if that fixes the problem.
04:38:11 If that fails email csit-...@lists.fd.io for a new
04:38:11 operational branch supporting the api changes.
04:38:11
04:38:11 @@@
04:38:11 Build step 'Execute shell' marked build as failure

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14852): https://lists.fd.io/g/vpp-dev/message/14852
Mute This Topic: https://lists.fd.io/mt/67971752/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Coverity run FAILED as of 2019-12-10 14:01:17 UTC

2019-12-10 Thread Noreply Jenkins
Coverity run failed today.

Current number of outstanding issues is 2
Newly detected: 0
Eliminated: 0
More details can be found at  
https://scan.coverity.com/projects/fd-io-vpp/view_defects
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14851): https://lists.fd.io/g/vpp-dev/message/14851
Mute This Topic: https://lists.fd.io/mt/67971731/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] FD.io Jenkins Maintenance: 2019-12-10 1900 UTC to 2200 UTC

2019-12-10 Thread Vanessa Valderrama
Jenkins sandbox maintenance is complete. Jenkins production will be shut down at 1800
UTC in preparation for maintenance.

Thanks,
Vanessa


On 12/3/19 9:57 AM, Vanessa Valderrama wrote:
>
> *What:*
>
>   * Jenkins
>   o OS and security updates
>   o Upgrade to 2.190.3
>   o Plugin updates
>   * Nexus
>   o OS updates
>   * Jira
>   o OS updates
>   * Gerrit
>   o OS updates
>   * Sonar
>   o OS updates
>   * OpenGrok
>   o OS updates
>
> *When:  *2019-12-10 1900 UTC to 2200 UTC
>
> *Impact:*
>
> Maintenance will require a reboot of each FD.io system. Jenkins will
> be placed in shutdown mode at 1800 UTC. Please let us know if specific
> jobs cannot be aborted.
> The following systems will be unavailable during the maintenance window:
>
>   *     Jenkins sandbox
>   *     Jenkins production
>   *     Nexus
>   *     Jira
>   *     Gerrit
>   *     Sonar
>   *     OpenGrok
>
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14850): https://lists.fd.io/g/vpp-dev/message/14850
Mute This Topic: https://lists.fd.io/mt/65762523/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vpp19.08 ipsec vpp_papi

2019-12-10 Thread Terry
Dear Paul & Vratko,


Thanks for your great help, I'll check it soon.


Best regards,
Arvin






At 2019-11-26 01:33:29, "Vratko Polak -X (vrpolak - PANTHEON TECH SRO at 
Cisco)"  wrote:


> This situation does not happen when I use CLI like this

 

I think the difference is that the CLI is restricted: it can only accept
printable characters on input. Therefore it assumes it gets a "hexlified"
value and applies "unhexlify" to its input.

 

In contrast, PAPI (hopefully) can handle arbitrary u8 arrays, so it does not
unhexlify.

 

> local_crypto_key = "2b7e151628aed2a6abf7158809cf4f3d", 

 

You can try to use the unhexlified (binary) string:

  local_crypto_key = 
b"\x2b\x7e\x15\x16\x28\xae\xd2\xa6\xab\xf7\x15\x88\x09\xcf\x4f\x3d",

 

Vratko.

 

From: vpp-dev@lists.fd.io  On Behalf Of Paul Vinciguerra
Sent: Sunday, November 24, 2019 4:31 PM
To: Terry 
Cc: vpp-dev 
Subject: Re: [vpp-dev] vpp19.08 ipsec vpp_papi

 

That output is not random.  It is the hex of your string.

2b7e -> 32 62 37 65

 

On Sun, Nov 24, 2019 at 8:06 AM Terry  wrote:

Dear VPP experts,

 

I'm trying to configure ipsec with the Python API in VPP 19.08.

My configurations are as follows:

 

reply = vpp.api.ipsec_tunnel_if_add_del(is_add = 1,
    local_ip = "192.168.1.1",
    remote_ip = "192.168.2.2",
    local_spi = 1031,
    remote_spi = 1030,
    crypto_alg = 7,
    local_crypto_key_len = 16,
    local_crypto_key = "2b7e151628aed2a6abf7158809cf4f3d",
    remote_crypto_key_len = 16,
    remote_crypto_key = "2b7e151628aed2a6abf7158809cf4f3d",
    integ_alg = 2,
    local_integ_key_len = 16,
    local_integ_key = "4339314b55523947594d6d3547666b45",
    remote_integ_key_len = 16,
    remote_integ_key = "4339314b55523947594d6d3547666b45",
    renumber = 1,
    show_instance = 1)

But the output SA information is as follows:

vpp# show ipsec sa 0
[0] sa 2147483648 (0x8000) spi 1030 (0x0406) protocol:esp flags:[tunnel inbound aead ]
   locks 1
   salt 0x0
   seq 0 seq-hi 0
   last-seq 0 last-seq-hi 0 window
   crypto alg aes-gcm-128 key 32623765313531363238616564326136
   integrity alg sha1-96 key 3439333134623535353233393437
   packets 0 bytes 0
   table-ID 0 tunnel src 192.168.2.2 dst 192.168.1.1

 

The crypto_key I configured is '2b7e151628aed2a6abf7158809cf4f3d', but the 
output key is '32623765313531363238616564326136'.

The output crypto key looks like a random number.

This situation does not happen when I use the CLI like this:

'create ipsec tunnel local-ip 192.168.1.1 remote-ip 192.168.2.2 local-spi 1031 
remote-spi 1030 local-crypto-key 2b7e151628aed2a6abf7158809cf4f3d 
remote-crypto-key 2b7e151628aed2a6abf7158809cf4f3d crypto-alg aes-gcm-128'

 

Could you please give me some help?

 

Best regards,

Arvin

 

 

 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14848): https://lists.fd.io/g/vpp-dev/message/14848
Mute This Topic: https://lists.fd.io/mt/61874477/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] vpp19.08 ipsec issue

2019-12-10 Thread Terry
Dear VPP Team,


I'm trying to configure an ipsec tunnel in VPP 19.08. The 'ikev2' and 'create ipsec
tunnel ...' configurations both work fine, but it is difficult for me to configure an
ipsec tunnel via 'ipsec sa ...'. There are a lot of issues about ipsec on the vpp-dev
mailing list, but I still have not found the right answer.
  My test topology is as follows (inline diagram not reproduced; roughly
user1 -- vpp1 -- vpp2 -- user2, per the addressing below):


The configuration of each device is as follows:
user1:
ipv4 address: 100.0.0.3/24
gateway address: 100.0.0.1


vpp1:
# basic network
set interface state GigabitEthernet2/0/0 up
set interface state GigabitEthernet2/1/0 up
set interface ip address GigabitEthernet2/0/0 100.0.0.1/24
set interface ip address GigabitEthernet2/1/0 192.168.1.1/24
set interface promiscuous on GigabitEthernet2/0/0
set interface promiscuous on GigabitEthernet2/1/0
# ispec configuration
ipsec sa add 10 spi 1001 esp crypto-key 2b7e151628aed2a6abf7158809cf4f3d 
crypto-alg aes-cbc-128 tunnel-src 192.168.1.1 tunnel-dst 192.168.1.2
ipsec sa add 20 spi 1000 esp crypto-key 2b7e151628aed2a6abf7158809cf4f3d 
crypto-alg aes-cbc-128 tunnel-src 192.168.1.2 tunnel-dst 192.168.1.1
ipsec spd add 1
set interface ipsec spd GigabitEthernet2/1/0 1
ipsec policy add spd 1 inbound priority 100 protocol 50 action bypass
ipsec policy add spd 1 outbound priority 100 protocol 50 action bypass
ipsec policy add spd 1 inbound priority 10 action protect sa 20 local-ip-range 
100.0.0.3 - 100.0.0.3 remote-ip-range 172.168.1.3 - 172.168.1.3
ipsec policy add spd 1 outbound priority 20 action protect sa 10 local-ip-range 
100.0.0.3 - 100.0.0.3 remote-ip-range 172.168.1.3 - 172.168.1.3
ip route add 172.168.1.0/24 via 192.168.1.2 GigabitEthernet2/1/0


vpp2:
# basic network
set interface state GigabitEthernet2/1/0 up
set interface state GigabitEthernet2/2/0 up
set interface ip address GigabitEthernet2/1/0 172.168.1.1/24
set interface ip address GigabitEthernet2/2/0 192.168.1.2/24
set interface promiscuous on GigabitEthernet2/1/0
set interface promiscuous on GigabitEthernet2/2/0
# ipsec configuration
ipsec sa add 10 spi 1001 esp crypto-key 2b7e151628aed2a6abf7158809cf4f3d 
crypto-alg aes-cbc-128 tunnel-src 192.168.1.1 tunnel-dst 192.168.1.2
ipsec sa add 20 spi 1000 esp crypto-key 2b7e151628aed2a6abf7158809cf4f3d 
crypto-alg aes-cbc-128 tunnel-src 192.168.1.2 tunnel-dst 192.168.1.1
ipsec spd add 1
set interface ipsec spd GigabitEthernet2/2/0 1
ipsec policy add spd 1 inbound priority 100 protocol 50 action bypass
ipsec policy add spd 1 outbound priority 100 protocol 50 action bypass
ipsec policy add spd 1 inbound priority 10 action protect sa 10 local-ip-range 
172.168.1.3 - 172.168.1.3 remote-ip-range 100.0.0.3 - 100.0.0.3
ipsec policy add spd 1 outbound priority 20 action protect sa 20 local-ip-range 
172.168.1.3 - 172.168.1.3 remote-ip-range 100.0.0.3 - 100.0.0.3
ip route add 100.0.0.0/24 via 192.168.1.1 GigabitEthernet2/2/0


user2:
ipv4 address: 172.168.1.3/24
gateway address: 172.168.1.1
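
After applying the above, the resulting state can be sanity-checked on each vpp
with a few read-only CLIs (exact output format varies by release):

show ipsec sa
show ipsec spd
show ip fib 172.168.1.0/24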


After configuration, I tried to ping from user1 to user2; the packet is dropped by
vpp1. Here is the trace info:
DBGvpp# show trace
--- Start of thread 0 vpp_main ---
No packets in trace buffer
--- Start of thread 1 vpp_wk_0 ---
Packet 1


00:08:35:264577: dpdk-input
  GigabitEthernet2/0/0 rx queue 0
  buffer 0x9e330: current data 0, length 98, buffer-pool 0, ref-count 1, 
totlen-nifb 0, trace handle 0x100
  ext-hdr-valid
  l4-cksum-computed l4-cksum-correct
  PKT MBUF: port 0, nb_segs 1, pkt_len 98
buf_len 2176, data_len 98, ol_flags 0x0, data_off 128, phys_addr 0x7298cc80
packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
  IP4: 00:50:56:aa:70:e3 -> 00:50:56:aa:53:75
  ICMP: 100.0.0.3 -> 172.168.1.3
tos 0x00, ttl 64, length 84, checksum 0x15f0
fragment id 0x130b, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x5609
00:08:35:264631: ethernet-input
  frame: flags 0x3, hw-if-index 1, sw-if-index 1
  IP4: 00:50:56:aa:70:e3 -> 00:50:56:aa:53:75
00:08:35:264650: ip4-input-no-checksum
  ICMP: 100.0.0.3 -> 172.168.1.3
tos 0x00, ttl 64, length 84, checksum 0x15f0
fragment id 0x130b, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x5609
00:08:35:264673: ip4-lookup
  fib 0 dpo-idx 2 flow hash: 0x
  ICMP: 100.0.0.3 -> 172.168.1.3
tos 0x00, ttl 64, length 84, checksum 0x15f0
fragment id 0x130b, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x5609
00:08:35:264694: ip4-rewrite
  tx_sw_if_index 2 dpo-idx 2 : ipv4 via 192.168.1.2 GigabitEthernet2/1/0: 
mtu:9000 000c29c781b0005056aa5d190800 flow hash: 0x
  : 000c29c781b0005056aa5d1908004554130b40003f0116f06403aca8
  0020: 01030800560911580013c609ee5d12510b001011
00:08:35:264701: ipsec4-output-feature
  spd 1 policy 3
00:08:35:264711: esp4-encrypt
  esp: sa-index 0 spi 1001 (0x03e9) seq 19 sa-seq-hi 0 crypto aes-cbc-128 
integrity none
00:08:35:264731: ip4-load-balance
  

[vpp-dev] Reminder: vpp public call TODAY (Dec 10, 2019) at 8am PST / 11am EST / 5pm CET

2019-12-10 Thread Dave Barach via Lists.Fd.Io

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14847): https://lists.fd.io/g/vpp-dev/message/14847
Mute This Topic: https://lists.fd.io/mt/67970365/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Regarding high speed I/O with kernel

2019-12-10 Thread Benoit Ganne (bganne) via Lists.Fd.Io
Hi,

> I have used the CLIs below to create rdma interfaces over Mellanox. Can you
> suggest what set of CLIs I should use so that packets from rdma will also
> have mbuf fields set properly, so that we can write directly to KNI?

You do not have to. Just create a KNI interface in VPP with the DPDK plugin and
switch packets between KNI and rdma interfaces.
VPP never uses DPDK mbufs internally; when you get packets from/to DPDK in VPP
there is a buffer metadata translation anyway. From our PoV this is no different
from switching packets between a vhost interface and a DPDK hardware interface
(e.g. VIC).
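
A minimal sketch of that wiring, assuming a plain L2 cross-connect between the
two interfaces (the rdma command is the one quoted elsewhere in this thread;
"kni-0" is a placeholder for whatever name the KNI interface gets in VPP):

create interface rdma host-if ens2f0 name rdma-0
set interface state rdma-0 up
set interface state kni-0 up
set interface l2 xconnect rdma-0 kni-0
set interface l2 xconnect kni-0 rdma-0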

Best
ben 
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14846): https://lists.fd.io/g/vpp-dev/message/14846
Mute This Topic: https://lists.fd.io/mt/67470059/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Regarding high speed I/O with kernel

2019-12-10 Thread chetan bhasin
Hi Damjan,

I have used the CLIs below to create rdma interfaces over Mellanox. Can you
suggest what set of CLIs I should use so that packets from rdma will also
have mbuf fields set properly, so that we can write directly to KNI?

create interface rdma host-if ens2f0 name device_9/0/0
create interface rdma host-if ens2f1 name device_9/0/1

Thanks,
Chetan Bhasin

On Fri, Dec 6, 2019 at 9:32 PM Damjan Marion via Lists.Fd.Io  wrote:

>
>
> > On 6 Dec 2019, at 07:16, Prashant Upadhyaya 
> wrote:
> >
> > Hi,
> >
> > I use VPP with DPDK driver for I/O with NIC.
> > For high speed switching of packets to and from kernel, I use DPDK KNI
> > (kernel module and user space API's provided by DPDK)
> > This works well because the vlib buffer is backed by the DPDK mbuf
> > (KNI uses DPDK mbuf's)
> >
> > Now, if I choose to use a native driver of VPP for I/O with the NIC, is
> > there a native equivalent in VPP to replace KNI as well? The native
> > equivalent should not lose out on performance as compared to KNI so I
> > believe the tap interface can be ruled out here.
> >
> > If I keep using DPDK KNI and VPP native non-dpdk driver, then I fear I
> > would have to do a data copy between the vlib buffer and an mbuf  in
> > addition to doing all the DPDK pool maintenance etc. The copies would
> > be destructive for performance surely.
> >
> > So I believe, the question is -- in presence of native drivers in VPP,
> > what is the high speed equivalent of DPDK KNI.
>
> You can use dpdk and native drivers at the same time.
> How does KNI performance compare to tap with a vhost-net backend?
>
>
> --
> Damjan
>
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
>
> View/Reply Online (#14826): https://lists.fd.io/g/vpp-dev/message/14826
> Mute This Topic: https://lists.fd.io/mt/67470059/856484
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [
> chetan.bhasin...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14845): https://lists.fd.io/g/vpp-dev/message/14845
Mute This Topic: https://lists.fd.io/mt/67470059/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-