Re: [vpp-dev] Published: FD.io CSIT-1908 Release Report

2019-09-11 Thread Tibor Frank via Lists.Fd.Io
Hi All,

FD.io CSIT-1908.37 report has been published on FD.io docs site:

html: https://docs.fd.io/csit/rls1908/report/
pdf: https://docs.fd.io/csit/rls1908/report/_static/archive/csit_rls1908.pdf

Tibor


-Original Message-
From: vpp-dev@lists.fd.io  On Behalf Of Maciek 
Konstantynowicz (mkonstan) via Lists.Fd.Io
Sent: Wednesday, September 4, 2019 10:17 PM
To: csit-dev ; vpp-dev ; 
honeycomb-dev ; hc2...@lists.fd.io; 
govpp-...@lists.fd.io
Cc: vpp-dev@lists.fd.io
Subject: [vpp-dev] Published: FD.io CSIT-1908 Release Report

Hi All,

FD.io CSIT-1908 report has been published on FD.io docs site:

html: https://docs.fd.io/csit/rls1908/report/
pdf: https://docs.fd.io/csit/rls1908/report/_static/archive/csit_rls1908.pdf

Many thanks to All in CSIT, VPP and wider FD.io community who contributed and 
worked hard to make CSIT-1908 happen!

Below are two summaries:
- CSIT-1908 Release Summary, a high-level summary.
- Points of Note in CSIT-1908 Report, with specific links to report.

All comments are welcome, best by email to csit-...@lists.fd.io.

Cheers,
-Maciek


CSIT-1908 Release Summary

1. CSIT-1908 Report

- html link: https://docs.fd.io/csit/rls1908/report/
- pdf link: 
https://docs.fd.io/csit/rls1908/report/_static/archive/csit_rls1908.pdf

2. New Tests

  - VM/VNF service chains with DPDK Testpmd and VPP L2/IPv4 workloads
and external VXLAN encapsulation.
  - IPsec with new VPP native cipher algorithms, baseline and large
scale (up to 60k tunnels).
  - VM/VNF service chains with VPP IPsec workloads, baseline and
horizontal scaling (experimental, in-testing).
  - GBP (Group Based Policy) with external dot1q encapsulation.
  - Extended test coverage with VPP native AVF driver: IPv4 scale tests.
  - Refreshed VPP TCP/HTTP tests.
  - A number of VPP functional device tests now run in a container-based
environment.
  - Good VPP PAPI (Python API) test coverage, PAPI used for all VPP
tests.

3. Benchmarking

  - Added new processor micro-architectures: ARM/AArch64 (TaiShan) and
Atom (Denverton).

- New testbeds onboarded into FD.io CSIT CI/CD functional and
  performance test pipelines.
- Daily trending with throughput changes monitoring, analytics and
  anomaly auto-detection.
- Release reports with benchmarking data including throughput,
  latency, test repeatability.

  - Updated CSIT benchmarking report specification

- Consistent selection of tests across all testbeds and processor
  microarchitectures present in FD.io labs (Xeon, Atom, ARM) for
  iterative benchmarking tests to verify results repeatability. Data
  presented in graphs conveying NDR (non-drop rate, zero packet
  loss) and PDR (partial drop rate) throughput statistics.
  Multi-core speedup and latency are also presented.
- Consistent comparison of NDR and PDR throughput results across the
  releases.
- Updated graph naming and test grouping to improve browsability and
  access to test data.

  - Increased test coverage in 2-node testbed environment (2n-skx).

  - Updated soak testing methodology and new results, aligned with
latest IETF draft specification draft-vpolak-bmwg-plrsearch-02.

4. Infrastructure

- API
  - PAPI (Python API) used for all VPP tests, migrated away from VAT
(VPP API Test).
  - VPP API change detection and gating in VPP and CSIT CI/CD.

- Test Environments
  - VPP functional device tests: migrated away from VIRL (VM based) to
container environment (with Nomad).
  - Added new physical testbeds: ARM/AArch64 (TaiShan) and Atom
(Denverton).

- CSIT Framework
  - Configuration keyword alignment across 2-node and 3-node testbeds to
ease test portability across environments.

- Installer
  - Updated bare-metal CSIT performance testbed installer (ansible).


Points of Note in CSIT-1908 Report

Specific indexed links are listed at the bottom.

1. VPP release notes
   a. Changes in CSIT-1908: [1]
   b. Known issues: [2]

2. VPP performance - 64B/IMIX throughput graphs (selected NIC models):
   a. Graphs explained: [3]
   b. L2 Ethernet Switching: [4]
   c. IPv4 Routing: [5]
   d. IPv6 Routing: [6]
   e. SRv6 Routing: [7]
   f. IPv4 Tunnels: [8]
   g. KVM VMs vhost-user:   [9]
   h. LXC/DRC Container Memif: [10]
   i. IPsec IPv4 Routing:  [11]
   j. Virtual Topology System: [12]

3. VPP performance - multi-core and latency graphs:
   a. Speedup Multi-Core:  [13]
   b. Latency: [14]

4. VPP system performance - NFV service density and TCP/IP:
   a. VNF (VM) Service Chains:  [15]
   b. CNF (Container) Service Chains:   [16]
   c. CNF

Re: [vpp-dev] FD.io Jenkins Restart

2019-09-11 Thread Vanessa Valderrama
Jenkins has been restarted and jobs have been restored.


On 09/11/2019 03:11 PM, Vanessa Valderrama wrote:
> We're going to terminate the two CSIT jobs. I spoke with Dave Wallace
> and we felt it'd be better to terminate the jobs and restart Jenkins.
>
> Thank you,
> Vanessa
>
> On 09/11/2019 02:17 PM, Vanessa Valderrama wrote:
>> Jenkins is still in shutdown mode. We'll do the restart when these jobs
>> are complete.
>>
>> https://jenkins.fd.io/job/csit-vpp-perf-verify-1908-2n-skx/124/
>> https://jenkins.fd.io/job/csit-vpp-perf-verify-1908-2n-skx/125/
>>
>> On 09/11/2019 01:09 PM, Vanessa Valderrama wrote:
>>> Jenkins has been placed in shutdown mode in preparation for a restart.
>>>
>>> We were seeing Gateway Time-out errors on ci-management-jjb-merge jobs
>>> which has caused problems with the job configurations. I have tried to
>>> push the jobs manually but I'm getting the same errors.
>>>
>>> We are going to restart Jenkins and try to do another manual push of the
>>> job configs.
>>>
>>> Thank you,
>>> Vanessa
>>>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13966): https://lists.fd.io/g/vpp-dev/message/13966
Mute This Topic: https://lists.fd.io/mt/34106604/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] FD.io Jenkins Restart

2019-09-11 Thread Vanessa Valderrama
We're going to terminate the two CSIT jobs. I spoke with Dave Wallace
and we felt it'd be better to terminate the jobs and restart Jenkins.

Thank you,
Vanessa

On 09/11/2019 02:17 PM, Vanessa Valderrama wrote:
> Jenkins is still in shutdown mode. We'll do the restart when these jobs
> are complete.
>
> https://jenkins.fd.io/job/csit-vpp-perf-verify-1908-2n-skx/124/
> https://jenkins.fd.io/job/csit-vpp-perf-verify-1908-2n-skx/125/
>
> On 09/11/2019 01:09 PM, Vanessa Valderrama wrote:
>> Jenkins has been placed in shutdown mode in preparation for a restart.
>>
>> We were seeing Gateway Time-out errors on ci-management-jjb-merge jobs
>> which has caused problems with the job configurations. I have tried to
>> push the jobs manually but I'm getting the same errors.
>>
>> We are going to restart Jenkins and try to do another manual push of the
>> job configs.
>>
>> Thank you,
>> Vanessa
>>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13965): https://lists.fd.io/g/vpp-dev/message/13965
Mute This Topic: https://lists.fd.io/mt/34106604/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Issue with DPDK 19.08 / i40e driver

2019-09-11 Thread Damjan Marion via Lists.Fd.Io


> On 11 Sep 2019, at 16:43, Mathias Raoul  wrote:
> 
> Hello,
> 
> I have an issue with VPP and the i40e driver: when I try to switch the interface 
> to up, the program stops with a segmentation fault. My configuration details 
> are below.
> 
> It might be a compatibility issue, because the DPDK documentation recommends 
> using firmware v7 for i40e with DPDK v19.08. But that firmware is not yet 
> available for the Cisco XL710 card.
> VPP stops in this file: dpdk-19.08/drivers/net/i40/base/i40e_adminq.c:933
> When I change the DPDK version to 19.05, the bug disappears.
> 
> DBGvpp# show int
>   Name   Idx   State   MTU (L3/IP4/IP6/MPLS)   Counter   Count
> FortyGigabitEthernet5e/0/0   1   down   9000/0/0/0
> local0   0   down   0/0/0/0
> DBGvpp# set interface state FortyGigabitEthernet5e/0/0  up
> vl_msg_api_trace_save:252: Message table length 44998
> 
> 
> Configuration:
> VPP : last commit on master : 1146ff4bcd336d8efc19405f1d83914e6115a01f
> 
> show version verbose
> Version:  v20.01-rc0~171-g1146ff4bc
> Compiled by:  root
> Compile host: 524b94e75c4d
> Compile date: Wed Sep 11 12:42:53 UTC 2019
> Compile location: /home/mraoul/dev/vpp
> Compiler: GCC 7.4.0
> Current PID:  19052
> 
> OS : Ubuntu 18.04.2 LTS
> 
> Network card : Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ 
> (rev 02)

There is not enough information in your email to draw any conclusion. Can you 
try to run VPP under gdb and grab a backtrace, preferably with a debug image?
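
For reference, a typical session looks something like the sketch below (the vpp binary path and startup config are illustrative; a debug image is what `make build` produces, as opposed to `make build-release`):

```
$ gdb --args /path/to/vpp -c /etc/vpp/startup.conf
(gdb) run
# ... reproduce the crash, e.g. set interface state FortyGigabitEthernet5e/0/0 up ...
(gdb) bt full
```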

-- 
Damjan


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13964): https://lists.fd.io/g/vpp-dev/message/13964
Mute This Topic: https://lists.fd.io/mt/34104216/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] FD.io Jenkins Restart

2019-09-11 Thread Vanessa Valderrama
Jenkins is still in shutdown mode. We'll do the restart when these jobs
are complete.

https://jenkins.fd.io/job/csit-vpp-perf-verify-1908-2n-skx/124/
https://jenkins.fd.io/job/csit-vpp-perf-verify-1908-2n-skx/125/

On 09/11/2019 01:09 PM, Vanessa Valderrama wrote:
> Jenkins has been placed in shutdown mode in preparation for a restart.
>
> We were seeing Gateway Time-out errors on ci-management-jjb-merge jobs
> which has caused problems with the job configurations. I have tried to
> push the jobs manually but I'm getting the same errors.
>
> We are going to restart Jenkins and try to do another manual push of the
> job configs.
>
> Thank you,
> Vanessa
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13963): https://lists.fd.io/g/vpp-dev/message/13963
Mute This Topic: https://lists.fd.io/mt/34106604/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Bug in plugins/dpdk/device/init.c related to eal_init_args found using AddressSanitizer

2019-09-11 Thread Dave Wallace

Elias,

Please open a Jira Ticket and push a patch with this fix.

BTW, there is a macro [0] that safely adds C-string termination to a 
vector, which I would recommend using for your fix (2).


Thanks,
-daw-
[0] 
https://docs.fd.io/vpp/19.08/db/d65/vec_8h.html#a2bc43313bc727b5453c3e5d7cc57a464


On 9/11/2019 11:39 AM, Elias Rudberg wrote:

Hello,

Thanks to the patches shared by Benoit Ganne on Monday, I was today
able to use AddressSanitizer for vpp. AddressSanitizer detected a
problem that I think is caused by a bug in plugins/dpdk/device/init.c
related to how the conf->eal_init_args vector is manipulated in the
dpdk_config function.

It appears that the code there uses two different kinds of strings,
both C-style null-terminated strings (char*) and vectors of type (u8*)
which are not necessarily null-terminated but instead have their length
stored in a different way (as described in vppinfra/vec.h).

In the dpdk_config function, various strings are added to the
conf->eal_init_args vector. Those strings need to be null-terminated because
they are later used as input to the "format" function which expects
null-terminated strings for its later arguments. The strings are mostly
null-terminated but not all of them, which leads to the error detected
by AddressSanitizer.

I think what happens is that some string that was generated by the
"format" function and is thus not null-terminated is later given as
input to a function that needs null-terminated strings as input,
leading to illegal memory access.

I'm able to make AddressSanitizer happy by making the following two
changes:

(1) Null-terminate the tmp string for conf->nchannels in the same way
as it is done in other places in the code:

-  tmp = format (0, "%d", conf->nchannels);
+  tmp = format (0, "%d%c", conf->nchannels, 0);

(2) Null-terminate conf->eal_init_args_str before the call to
dpdk_log_warn:

+  vec_add1(conf->eal_init_args_str, 0);

After that, vpp starts without complaints from AddressSanitizer.

Should this be reported as a new bug in the Jira system for VPP (
https://jira.fd.io/browse/VPP)?

Should I push a fix myself (not sure if I have permission to do that)
or could someone more familiar with that part of the code do it?

Best regards,
Elias


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13955): https://lists.fd.io/g/vpp-dev/message/13955
Mute This Topic: https://lists.fd.io/mt/34104878/675079
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [dwallac...@gmail.com]
-=-=-=-=-=-=-=-=-=-=-=-


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13962): https://lists.fd.io/g/vpp-dev/message/13962
Mute This Topic: https://lists.fd.io/mt/34104878/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] FD.io Jenkins Restart

2019-09-11 Thread Vanessa Valderrama
Jenkins has been placed in shutdown mode in preparation for a restart.

We were seeing Gateway Time-out errors on ci-management-jjb-merge jobs
which has caused problems with the job configurations. I have tried to
push the jobs manually but I'm getting the same errors.

We are going to restart Jenkins and try to do another manual push of the
job configs.

Thank you,
Vanessa

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13961): https://lists.fd.io/g/vpp-dev/message/13961
Mute This Topic: https://lists.fd.io/mt/34106604/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Support for shared subnet

2019-09-11 Thread Burt Silverman
Thank you, Ole. Great that you put this in the context of point to point
interfaces. But I think a fundamental issue remains -- one that Dave Barach
would remind me of, although I hope I do not misrepresent his question.

As I see it, one line card represents a router, and it has 2 point-to-point
interfaces to another router, the PE. Who decides which of the two
interfaces passes the traffic? In the RedHat Linux example I pointed to
earlier, static routes were used to split traffic between local network and
default gateway (that example had a subnet rather than point to point
interfaces.) The dynamic routing protocols do not have load balancing built
in, do they -- I admit to being rusty but I don't recall there being any?
Although it seems like Cisco versions may have added something for some
protocols other than BGP. Actually I think OSPF v2 claims to load balance
in the case of equal cost paths, but it is a bit sketchy in the RFC.
Anyway, what if one interface is 10 Gbps and the other is 40 Gbps?

Perhaps Krishna could use a total of 2 VLANs rather than 2 VLANs per line
card, and also think in terms of point to point rather than subnets. I have
not thought through whether or how that helps hide the traffic
splitting/load balancing issue (or any other fine details.)

Burt
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13960): https://lists.fd.io/g/vpp-dev/message/13960
Mute This Topic: https://lists.fd.io/mt/34092746/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Temporary CRC failures

2019-09-11 Thread Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) via Lists.Fd.Io
[0] has been merged, [1] has been postponed.

The VPP API CRC jobs should be reliable from now on again.


Vratko.


[0] https://gerrit.fd.io/r/c/vpp/+/21508

[1] https://gerrit.fd.io/r/c/vpp/+/21706

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13959): https://lists.fd.io/g/vpp-dev/message/13959
Mute This Topic: https://lists.fd.io/mt/34105743/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] non-zero counter values in unrelated graph nodes

2019-09-11 Thread Satya Murthy
Hi ,

We are facing a strange issue which we have not been able to debug even after 
spending a good amount of time on it.
We are seeing "show node counters" suddenly display very high values for a few 
unrelated nodes such as "null-node" and "vmxnet3-input".
The values also keep changing (going down and up) across multiple runs of 
"show node counters".
This indicates that the counters are not actually being incremented by those 
graph nodes; rather, the issue is with the memory backing the counters.
Are there any known issues in this area, by any chance?

--
Thanks & Regards,
Murthy
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13958): https://lists.fd.io/g/vpp-dev/message/13958
Mute This Topic: https://lists.fd.io/mt/34105342/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP API client with no rx pthread

2019-09-11 Thread Satya Murthy
Thanks Ole for the quick response.
Will go through the doc and give it a try.

--
Thanks & Regards,
Murthy
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13957): https://lists.fd.io/g/vpp-dev/message/13957
Mute This Topic: https://lists.fd.io/mt/34101834/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP API client with no rx pthread

2019-09-11 Thread Florin Coras
Hi Satya, 

Probably you can just replicate what the api rx-thread is doing, i.e., 
rx_thread_fn. In particular, take a look at vl_msg_api_queue_handler. 

Florin

> On Sep 11, 2019, at 3:26 AM, Satya Murthy  wrote:
> 
> Hi ,
> 
> We are trying to develop a VPP API client which needs synchronous reply 
> handling.
> Hence, we were thinking of NOT having a separate pthread for receiving the 
> response from VPP.
> We are planning to use no_rx_pthread version of connect api.
> 
> Is there any example code to receive and handle the response synchronously.
> I see all the examples are using separate pthread for receiving.
> 
> Any input on this will be of great help.
> 
> -- 
> Thanks & Regards,
> Murthy -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#13952): https://lists.fd.io/g/vpp-dev/message/13952
> Mute This Topic: https://lists.fd.io/mt/34101834/675152
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13956): https://lists.fd.io/g/vpp-dev/message/13956
Mute This Topic: https://lists.fd.io/mt/34101834/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Bug in plugins/dpdk/device/init.c related to eal_init_args found using AddressSanitizer

2019-09-11 Thread Elias Rudberg
Hello,

Thanks to the patches shared by Benoit Ganne on Monday, I was today
able to use AddressSanitizer for vpp. AddressSanitizer detected a
problem that I think is caused by a bug in plugins/dpdk/device/init.c
related to how the conf->eal_init_args vector is manipulated in the
dpdk_config function.

It appears that the code there uses two different kinds of strings,
both C-style null-terminated strings (char*) and vectors of type (u8*)
which are not necessarily null-terminated but instead have their length
stored in a different way (as described in vppinfra/vec.h).

In the dpdk_config function, various strings are added to the
conf->eal_init_args vector. Those strings need to be null-terminated because
they are later used as input to the "format" function which expects
null-terminated strings for its later arguments. The strings are mostly
null-terminated but not all of them, which leads to the error detected
by AddressSanitizer.

I think what happens is that some string that was generated by the
"format" function and is thus not null-terminated is later given as
input to a function that needs null-terminated strings as input,
leading to illegal memory access.

I'm able to make AddressSanitizer happy by making the following two
changes:

(1) Null-terminate the tmp string for conf->nchannels in the same way
as it is done in other places in the code:

-  tmp = format (0, "%d", conf->nchannels);
+  tmp = format (0, "%d%c", conf->nchannels, 0);

(2) Null-terminate conf->eal_init_args_str before the call to
dpdk_log_warn:

+  vec_add1(conf->eal_init_args_str, 0);

After that, vpp starts without complaints from AddressSanitizer.

Should this be reported as a new bug in the Jira system for VPP (
https://jira.fd.io/browse/VPP)?

Should I push a fix myself (not sure if I have permission to do that)
or could someone more familiar with that part of the code do it?

Best regards,
Elias

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13955): https://lists.fd.io/g/vpp-dev/message/13955
Mute This Topic: https://lists.fd.io/mt/34104878/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Issue with DPDK 19.08 / i40e driver

2019-09-11 Thread Mathias Raoul
Hello,

I have an issue with VPP and the i40e driver: when I try to switch the
interface to up, the program stops with a segmentation fault. My
configuration details are below.

It might be a compatibility issue, because the DPDK documentation recommends
using firmware v7 for i40e with DPDK v19.08. But that firmware is not
yet available for the Cisco XL710 card.
VPP stops in this file: dpdk-19.08/drivers/net/i40/base/i40e_adminq.c:933
When I change the DPDK version to 19.05, the bug disappears.

DBGvpp# show int
  Name   Idx   State   MTU (L3/IP4/IP6/MPLS)   Counter   Count
FortyGigabitEthernet5e/0/0   1   down   9000/0/0/0
local0   0   down   0/0/0/0
DBGvpp# set interface state FortyGigabitEthernet5e/0/0  up
vl_msg_api_trace_save:252: Message table length 44998



*Configuration:*
VPP : last commit on master : 1146ff4bcd336d8efc19405f1d83914e6115a01f

show version verbose
Version:  v20.01-rc0~171-g1146ff4bc
Compiled by:  root
Compile host: 524b94e75c4d
Compile date: Wed Sep 11 12:42:53 UTC 2019
Compile location: /home/mraoul/dev/vpp
Compiler: GCC 7.4.0
Current PID:  19052

OS : Ubuntu 18.04.2 LTS

Network card : Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+
(rev 02)

*Network card driver info:*
driver: i40e
version: 2.1.14-k
firmware-version: 6.01 0x800036bb 0.385.33
expansion-rom-version:
bus-info: :d8:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

Best regards,

Mathias Raoul
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13954): https://lists.fd.io/g/vpp-dev/message/13954
Mute This Topic: https://lists.fd.io/mt/34104216/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP API client with no rx pthread

2019-09-11 Thread Ole Troan
Hi Satya,

> We are trying to develop a VPP API client which needs synchronous reply 
> handling.
> Hence, we were thinking of NOT having a separate pthread for receiving the 
> response from VPP.
> We are planning to use no_rx_pthread version of connect api.
> 
> Is there any example code to receive and handle the response synchronously.
> I see all the examples are using separate pthread for receiving.
> 
> Any input on this will be of great help.

Which language are you using?
VAPI (C) supports both blocking and non-blocking mode.
Python is also blocking, although it uses a pthread in the background to deal with 
asynchronous events from VPP (if you use the want APIs).

See src/vpp-api/vapi/vapi_doc.md for VAPI documentation.

Or you can write your own client against the Unix domain socket if you like... 
it really depends on your environment.

Best regards,
Ole
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13953): https://lists.fd.io/g/vpp-dev/message/13953
Mute This Topic: https://lists.fd.io/mt/34101834/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] VPP API client with no rx pthread

2019-09-11 Thread Satya Murthy
Hi ,

We are trying to develop a VPP API client which needs synchronous reply 
handling.
Hence, we were thinking of NOT having a separate pthread for receiving the 
response from VPP.
We are planning to use no_rx_pthread version of connect api.

Is there any example code to receive and handle the response synchronously.
I see all the examples are using separate pthread for receiving.

Any input on this will be of great help.

--
Thanks & Regards,
Murthy
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13952): https://lists.fd.io/g/vpp-dev/message/13952
Mute This Topic: https://lists.fd.io/mt/34101834/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] I want to construct some packets to be sent by a specified thread,what can I do?

2019-09-11 Thread Christian Hopps


> On Sep 11, 2019, at 2:07 AM, wei_sky2...@163.com wrote:
> 
> On Tue, Sep 10, 2019 at 12:44 AM, Christian Hopps wrote:
> UTSL
> Thank you for the reply.
> In our scenario, we use Intel's DDP feature to hash different GTP 
> packets to different threads. We want the GTP packets from the same UE 
> to be hashed to the same thread, and DDP can't implement this.
> So we are considering the handoff mechanism, but we still have some doubts.
> (1) We know VPP ensures efficiency by having one group of packets processed by the same 
> thread's nodes. If we use handoff, how much does this mechanism affect 
> efficiency? Or is there any other mechanism that meets our requirements?

Actually, the fundamental efficiency is based on I-cache utilization, AFAICT. 
By processing a bunch of packets (the vector of packets, i.e., the node's 
frame) using the same smallish bit of code (the node function), the I-cache is 
only invalidated/reloaded when moving from node to node, so it executes very 
quickly over a large set of packets. In addition, a very common design pattern 
inside a VPP node function is to loop over a clump of packets while prefetching 
the next clump (which happens in parallel), so that the cost of loading from RAM 
into the D-cache is hopefully minimized.

Together these optimizations work regardless of which thread is processing the 
vector of packets.

> (2) If we use handoff, which method is better:
> a) Do GTP packet RSS without the device, i.e. software 
> RSS: one thread receives all packets, then hands off to the other threads.
> b) Do GTP packet RSS in the device: when a thread 
> receives a packet, it determines whether the current thread should handle it; 
> if not, it hands off to another thread.

I think you'll need to figure this out yourself (or maybe someone else can 
help). I'm writing an application myself and learning what works and what 
doesn't right now. :)

Thanks,
Chris.

> 
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#13949): https://lists.fd.io/g/vpp-dev/message/13949
> Mute This Topic: https://lists.fd.io/mt/34077019/1826170
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [cho...@chopps.org]
> -=-=-=-=-=-=-=-=-=-=-=-



signature.asc
Description: Message signed with OpenPGP
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13951): https://lists.fd.io/g/vpp-dev/message/13951
Mute This Topic: https://lists.fd.io/mt/34077019/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Support for shared subnet

2019-09-11 Thread Ole Troan
Hi Krishna,

> Our product has multiple line cards and each line card has multiple 
> interfaces. We run an instance of VPP on each line card. All of these 
> interfaces are connected to a L2 switching network. A PE router is also 
> connected to this L2 network for connectivity to internet.
>  
> In order to explain this use case, lets say we currently have two line cards 
> (LC-1 and LC-2) and each line card has two interfaces. LC-1 has interfaces 
> IF-11 & IF-12 and similarly LC-2 has interfaces IF-21 & IF-22. The following 
> is a sample of configuration that is currently used.
>  
> ---
> Line Card   Interface Name   VLAN   IP Address on Interface   IP Address on PE router
> ---
> LC-1        IF-11            101    10.1.1.2/24               10.1.1.1/24
> LC-1        IF-12            102    10.1.2.2/24               10.1.2.1/24
> LC-2        IF-21            103    10.1.3.2/24               10.1.3.1/24
> LC-2        IF-22            104    10.1.4.2/24               10.1.4.1/24
> ---
>  
> The problem we are facing is that every time we add a new line card to our 
> product, we are now required to also configure additional VLANs and IP 
> subnets on the PE router. I am trying to explore if we can simply configure 
> one VLAN for all of these interfaces and configure IP addresses of the same 
> IP subnet on each interface. An example configuration that I’d like to use if 
> this were possible is shown below. In this approach we only configure one 
> vlan with one IP address on the PE router and when we add a new line card to 
> our product, we only need to configure IP addresses in the IP subnet 
> 10.1.1.0/24 IP subnet on the interfaces of this new line card.
>  
> ---
> Line Card   Interface Name   VLAN   IP Address on Interface   IP Address on PE router
> ---
> LC-1        IF-11            101    10.1.1.2/24               10.1.1.1/24
> LC-1        IF-12            101    10.1.1.3/24
> LC-2        IF-21            101    10.1.1.4/24
> LC-2        IF-22            101    10.1.1.5/24
> ---

So this network is used as a backplane between the PE node and LCs?

One option is to coneptually view it as a hub and spoke network.
This model is a lot more supported in IPv6 than in IPv4, but you can certainly 
do it in IPv4 as well.

- PE address: 10.1.1.1/24
- Configure the addresses on the LCs as /32s.
  e.g. 10.1.1.2/32, 10.1.1.3/32 and so on.
- Set up a default route to the PE from each LC.
  ip route 0.0.0.0/0 via 10.1.1.1, LC-1

The PE can use ARP to resolve L2 addresses on LCs (unless you just configure 
static MACs).
In the other direction you might have to use static MACs, or you might have to 
do some additions
to the FIB glean / ARP logic. Basically, how do you ARP for an address that 
isn't covered
by a connected prefix / that hits a glean entry? But there's nothing "conceptually" 
wrong in building
a network like this. VPP has a feature called P2P Ethernet which does this (but 
I think it's v6 only).
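
As a concrete sketch, the LC-side steps above might look like this in VPP CLI. Treat it as an assumption, not a verified recipe: the interface name and MAC are hypothetical, and the static ARP entry stands in for the glean/ARP additions mentioned above:

```
vpp# set interface state GigabitEthernet0/8/0 up
vpp# set interface ip address GigabitEthernet0/8/0 10.1.1.2/32
vpp# ip route add 0.0.0.0/0 via 10.1.1.1 GigabitEthernet0/8/0
vpp# set ip arp static GigabitEthernet0/8/0 10.1.1.1 aa:bb:cc:dd:ee:01
```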

This assumes you are happy with all traffic between LCs going via the PE of 
course.

Hope that helps.

Best regards,
Ole
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13950): https://lists.fd.io/g/vpp-dev/message/13950
Mute This Topic: https://lists.fd.io/mt/34092746/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-