Re: [vpp-dev] efficient use of DPDK

2019-12-02 Thread Honnappa Nagarahalli
Thanks for bringing up the discussion

> -----Original Message-----
> From: vpp-dev@lists.fd.io  On Behalf Of Thomas
> Monjalon via Lists.Fd.Io
> Sent: Monday, December 2, 2019 4:35 PM
> To: vpp-dev@lists.fd.io
> Cc: vpp-dev@lists.fd.io
> Subject: [vpp-dev] efficient use of DPDK
> 
> Hi all,
> 
> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> Are there some benchmarks about the cost of converting, from one format to
> the other one, during Rx/Tx operations?
> 
> I'm sure there would be some benefits of switching VPP to natively use the
> DPDK mbuf allocated in mempools.
> What would be the drawbacks?
> 
> Last time I asked this question, the answer was about compatibility with
> other driver backends, especially ODP. What happened?
I think the ODP4VPP project was closed some time back. I do not know of anyone 
working on it anymore.

> DPDK drivers are still the only external drivers used by VPP?
> 
> When using DPDK, more than 40 networking drivers are available:
>   https://core.dpdk.org/supported/
> After 4 years of Open Source VPP, there are less than 10 native drivers:
>   - virtual drivers: virtio, vmxnet3, af_packet, netmap, memif
>   - hardware drivers: ixge, avf, pp2
> And if we look at the ixge driver, we can read:
> "
>   This driver is not intended for production use and it is unsupported.
>   It is provided for educational use only.
>   Please use supported DPDK driver instead.
> "
> 
> So why not improve DPDK integration in VPP to make it faster?
> 
> DPDK mbuf has dynamic fields now; it can help to register metadata on
> demand.
> And it is still possible to statically reserve some extra space for 
> application-
> specific metadata in each packet.
> 
> Other improvements, like meson packaging usable with pkg-config, were done
> during the last few years and may deserve to be considered.
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14760): https://lists.fd.io/g/vpp-dev/message/14760
Mute This Topic: https://lists.fd.io/mt/65218320/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] efficient use of DPDK

2019-12-02 Thread Damjan Marion via Lists.Fd.Io

Hi Thomas!

Inline...

> On 2 Dec 2019, at 23:35, Thomas Monjalon  wrote:
> 
> Hi all,
> 
> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> Are there some benchmarks about the cost of converting, from one format
> to the other one, during Rx/Tx operations?

We are benchmarking both DPDK i40e PMD performance and native VPP AVF driver 
performance, and we are seeing significantly better performance with the native AVF driver.
If you take a look at [1] you will see that the DPDK i40e driver provides 18.62 
Mpps, and exactly the same test with the native AVF driver gives us around 24.86 
Mpps.

Thanks to the native AVF driver and the new buffer management code, we managed to go 
below 100 clocks per packet for the whole IPv4 routing base test. 

My understanding is that the performance difference is caused by 4 factors, but I 
cannot support each of them with numbers, as I have never conducted detailed testing.

- less work done in driver code, as we have the freedom to cherry-pick only the data 
we need, while in the case of DPDK the PMD needs to be universal

- no cost of metadata (rte_mbuf -> vlib_buffer_t) conversion

- less pressure on the cache (we touch two cachelines less per packet with the native 
driver); this is especially observable on smaller devices with less cache

- faster buffer management code


> 
> I'm sure there would be some benefits of switching VPP to natively use
> the DPDK mbuf allocated in mempools.

I don't agree with this statement. We have our own buffer management code and we are 
not interested in using DPDK mempools. There are many use cases where we don't 
need DPDK, and we want VPP not to be dependent on DPDK code.

> What would be the drawbacks?



> 
> Last time I asked this question, the answer was about compatibility with
> other driver backends, especially ODP. What happened?
> DPDK drivers are still the only external drivers used by VPP?

No, we still use DPDK drivers in many cases, but we also 
have a lot of native drivers in VPP these days:

- intel AVF
- virtio
- vmxnet3
- rdma (for mlx4, mlx5 and other rdma-capable cards); direct verbs for mlx5 is work 
in progress
- tap with virtio backend
- memif
- Marvell pp2
- (af_xdp - work in progress)

> 
> When using DPDK, more than 40 networking drivers are available:
>   https://core.dpdk.org/supported/
> After 4 years of Open Source VPP, there are less than 10 native drivers:
>   - virtual drivers: virtio, vmxnet3, af_packet, netmap, memif
>   - hardware drivers: ixge, avf, pp2
> And if we look at the ixge driver, we can read:
> "
>   This driver is not intended for production use and it is unsupported.
>   It is provided for educational use only.
>   Please use supported DPDK driver instead.
> "

yep, the ixge driver has not been maintained for a long time...

> 
> So why not improving DPDK integration in VPP to make it faster?

Yes, if we can get the freedom to use the parts of DPDK we want instead of being forced 
to adopt the whole DPDK ecosystem.
For example, you cannot use DPDK drivers without using EAL, mempool, 
rte_mbuf... rte_eal_init is a monster which I have been hoping would disappear 
for a long time...

A good example of what would be a good fit for us is the rdma-core library: it allows you 
to program the NIC and fetch packets from it in a much more lightweight way, and if 
you really want a super-fast datapath, there is the direct verbs interface 
which gives you access to the tx/rx rings directly.

> DPDK mbuf has dynamic fields now; it can help to register metadata on demand.
> And it is still possible to statically reserve some extra space for
> application-specific metadata in each packet.

I don't see this as a huge benefit: you still need to call rte_eal_init, and you 
still need to use DPDK mempools. Basically it still requires adoption of the 
whole DPDK ecosystem, which we don't want...


> Other improvements, like meson packaging usable with pkg-config,
> were done during the last few years and may deserve to be considered.

I'm aware of that, but I was not able to find a good justification for investing the time 
to change the existing scripting to move to meson. As VPP developers typically 
don't need to compile DPDK very frequently, the current solution is simply good 
enough...

-- 
Damjan 

[1] 
https://docs.fd.io/csit/master/report/vpp_performance_tests/packet_throughput_graphs/ip4-3n-skx-xxv710.html#




[vpp-dev] efficient use of DPDK

2019-12-02 Thread Thomas Monjalon
Hi all,

VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
Are there some benchmarks about the cost of converting, from one format
to the other one, during Rx/Tx operations?

I'm sure there would be some benefits of switching VPP to natively use
the DPDK mbuf allocated in mempools.
What would be the drawbacks?

Last time I asked this question, the answer was about compatibility with
other driver backends, especially ODP. What happened?
DPDK drivers are still the only external drivers used by VPP?

When using DPDK, more than 40 networking drivers are available:
https://core.dpdk.org/supported/
After 4 years of Open Source VPP, there are less than 10 native drivers:
- virtual drivers: virtio, vmxnet3, af_packet, netmap, memif
- hardware drivers: ixge, avf, pp2
And if we look at the ixge driver, we can read:
"
This driver is not intended for production use and it is unsupported.
It is provided for educational use only.
Please use supported DPDK driver instead.
"

So why not improve DPDK integration in VPP to make it faster?

DPDK mbuf has dynamic fields now; it can help to register metadata on demand.
And it is still possible to statically reserve some extra space for
application-specific metadata in each packet.

Other improvements, like meson packaging usable with pkg-config,
were done during the last few years and may deserve to be considered.




Re: [vpp-dev] Based on the VPP to Nginx testing #ngnix #vpp

2019-12-02 Thread Florin Coras
Hi Yang.L, 

I just tried out nginx + debug vcl and ldp + debug vpp. Everything seems to be 
working fine. 

Once you start nginx, do you get any errors in /var/log/syslog? What does “show 
sessions verbose” return? There might be some issues with your config.

Thanks, 
Florin

> On Dec 2, 2019, at 12:49 AM, lin.yan...@zte.com.cn wrote:
> 
> Hi Florin,
> When the nginx configuration item worker_processes = 1, everything is normal;
> when worker_processes > 1, the above situation occurs.
> 
> Can you explain the above problem?
> thanks,
> Yang.L



Re: [vpp-dev] Using hoststack instead of tap

2019-12-02 Thread Florin Coras
Hi Paul, 

Are you thinking about using TCP to generate packets? If yes, you probably want 
to take a look at the iperf VCL tests (test/test_vcl.py).

Regards,
Florin

> On Dec 2, 2019, at 9:57 AM, Paul Vinciguerra  
> wrote:
> 
> There was a brief discussion on the community call about using hoststack as 
> an alternative to running tests as a privileged user for a tap interface.  Is 
> there any documentation on this, or is there a representative test case I can 
> reference?  



[vpp-dev] Using hoststack instead of tap

2019-12-02 Thread Paul Vinciguerra
There was a brief discussion on the community call about using hoststack as
an alternative to running tests as a privileged user for a tap interface.
Is there any documentation on this, or is there a representative test case
I can reference?


[vpp-dev] Devstack & vpp

2019-12-02 Thread Eyle Brinkhuis
Hi Guys,

Don’t know if anyone has already experienced this, but it seems that something 
goes wrong when deploying VPP and networking-vpp in a clean devstack setup:
2019-12-02 17:25:34.194 | +functions-common:service_check:1622   for service in ${ENABLED_SERVICES//,/ }
2019-12-02 17:25:34.198 | +functions-common:service_check:1624   sudo systemctl is-enabled devstack@vpp-agent.service
2019-12-02 17:25:34.208 | enabled
2019-12-02 17:25:34.212 | +functions-common:service_check:1628   sudo systemctl status devstack@vpp-agent.service --no-pager
2019-12-02 17:25:34.222 | ● devstack@vpp-agent.service - Devstack devstack@vpp-agent.service
2019-12-02 17:25:34.222 |    Loaded: loaded (/etc/systemd/system/devstack@vpp-agent.service; enabled; vendor preset: enabled)
2019-12-02 17:25:34.222 |    Active: failed (Result: exit-code) since Mon 2019-12-02 18:24:14 CET; 1min 19s ago
2019-12-02 17:25:34.222 |  Main PID: 32189 (code=exited, status=1/FAILURE)
2019-12-02 17:25:34.222 |
2019-12-02 17:25:34.222 | Dec 02 18:24:14 node2 vpp-agent[32189]: Traceback (most recent call last):
2019-12-02 17:25:34.222 | Dec 02 18:24:14 node2 vpp-agent[32189]:   File "/usr/local/bin/vpp-agent", line 6, in <module>
2019-12-02 17:25:34.222 | Dec 02 18:24:14 node2 vpp-agent[32189]:     from networking_vpp.agent.server import main
2019-12-02 17:25:34.222 | Dec 02 18:24:14 node2 vpp-agent[32189]:   File "/opt/stack/networking-vpp/networking_vpp/agent/server.py", line 49, in <module>
2019-12-02 17:25:34.222 | Dec 02 18:24:14 node2 vpp-agent[32189]:     from networking_vpp.agent import vpp
2019-12-02 17:25:34.222 | Dec 02 18:24:14 node2 vpp-agent[32189]:   File "/opt/stack/networking-vpp/networking_vpp/agent/vpp.py", line 33, in <module>
2019-12-02 17:25:34.222 | Dec 02 18:24:14 node2 vpp-agent[32189]:     import vpp_papi  # type: ignore
2019-12-02 17:25:34.222 | Dec 02 18:24:14 node2 vpp-agent[32189]: ModuleNotFoundError: No module named 'vpp_papi'
2019-12-02 17:25:34.222 | Dec 02 18:24:14 node2 systemd[1]: devstack@vpp-agent.service: Main process exited, code=exited, status=1/FAILURE
2019-12-02 17:25:34.222 | Dec 02 18:24:14 node2 systemd[1]: devstack@vpp-agent.service: Failed with result 'exit-code'.


This is on Ubuntu 18.04.3, while it pulls VPP 19.08.1 and networking-vpp, which 
is for 19.08.1.

Anyone here for a quick fix?

Cheers,

Eyle



Re: [vpp-dev] Patch validation seems to be broken: api-crc job, 'no module named ply...'

2019-12-02 Thread Dave Wallace

Paul,

The VPP continuous integration system utilizes code primarily from the 
following three repos:


https://git.fd.io/ci-management/  (Note: this uses git submodule for 
global-jjb)

https://git.fd.io/csit/
https://git.fd.io/vpp/

Changes to any of these can cause "the wheels to fall off the bus".

Thanks,
-daw-

On 12/2/2019 10:24 AM, Paul Vinciguerra wrote:
How can we make this process more transparent?  I saw this issue on 
Friday.  I searched all the repo's for recently merged changes, but 
found nothing.  Is there somewhere else to look that I'm not aware of?




On Mon, Dec 2, 2019 at 8:25 AM Jerome Tollet via Lists.Fd.Io wrote:


Hi Dave,

I just sent a private email to Dave, Andrew and Ed 😉 (see enclosed).

Thanks for your help.
Jerome

From: <vpp-dev@lists.fd.io> on behalf of "Dave Barach via Lists.Fd.Io" <cisco@lists.fd.io>
Reply-To: "Dave Barach (dbarach)" <dbar...@cisco.com>
Date: Monday, 2 December 2019 at 14:24
To: "Ed Kern (ejk)" <e...@cisco.com>, "Andrew Yourtchenko (ayourtch)" <ayour...@cisco.com>
Cc: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: [vpp-dev] Patch validation seems to be broken: api-crc job, 'no module named ply...'

Please have a look... Thanks... Dave

+++ export PYTHONPATH=/w/workspace/vpp-csit-verify-api-crc-master/csit
+++ PYTHONPATH=/w/workspace/vpp-csit-verify-api-crc-master/csit
+++ make json-api-files
/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py
Traceback (most recent call last):
  File "/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/vppapigen.py", line 3, in <module>
    import ply.lex as lex
ModuleNotFoundError: No module named 'ply'
Searching '/w/workspace/vpp-csit-verify-api-crc-master/src' for .api files.
Traceback (most recent call last):
  File "/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py", line 97, in <module>
    main()
  File "/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py", line 92, in main
    vppapigen(vppapigen_bin, output_dir, src_dir, f)
  File "/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py", line 59, in vppapigen
    src_file.name)])
  File "/usr/lib/python3.6/subprocess.py", line 356, in check_output
    **kwargs).stdout
  File "/usr/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/vppapigen.py', '--includedir', '/w/workspace/vpp-csit-verify-api-crc-master/src', '--input', '/w/workspace/vpp-csit-verify-api-crc-master/src/plugins/marvell/pp2/pp2.api', 'JSON', '--output', '/w/workspace/vpp-csit-verify-api-crc-master/build-root/install-vpp-native/vpp/share/vpp/api/plugins/pp2.api.json']' returned non-zero exit status 1.
Makefile:610: recipe for target 'json-api-files' failed
make: *** [json-api-files] Error 1
+++ die 'Generation of .api.json files failed.'
+++ set -x
+++ set +eu
+++ warn 'Generation of .api.json files failed.'
+++ set -exuo pipefail
+++ echo 'Generation of .api.json files failed.'
Generation of .api.json files failed.
+++ exit 1
Build step 'Execute shell' marked build as failure







Re: [vpp-dev] Patch validation seems to be broken: api-crc job, 'no module named ply...'

2019-12-02 Thread Paul Vinciguerra
How can we make this process more transparent? I saw this issue on
Friday. I searched all the repos for recently merged changes, but found
nothing. Is there somewhere else to look that I'm not aware of?



On Mon, Dec 2, 2019 at 8:25 AM Jerome Tollet via Lists.Fd.Io  wrote:

> Hi Dave,
>
> I just sent a private email to Dave, Andrew and Ed 😉 (see enclosed).
>
> Thanks for you help.
> Jerome
>
>
>
> From: on behalf of "Dave Barach via Lists.Fd.Io"
> Reply-To: "Dave Barach (dbarach)"
> Date: Monday, 2 December 2019 at 14:24
> To: "Ed Kern (ejk)", "Andrew Yourtchenko (ayourtch)" <ayour...@cisco.com>
> Cc: "vpp-dev@lists.fd.io"
> Subject: [vpp-dev] Patch validation seems to be broken: api-crc job, 'no
> module named ply...'
>
>
>
> Please have a look... Thanks... Dave
>
>
>
> +++ export PYTHONPATH=/w/workspace/vpp-csit-verify-api-crc-master/csit
>
> +++ PYTHONPATH=/w/workspace/vpp-csit-verify-api-crc-master/csit
>
> +++ make json-api-files
>
> /w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py
> Traceback (most recent call last):
>   File "/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/vppapigen.py", line 3, in <module>
>     import ply.lex as lex
> ModuleNotFoundError: No module named 'ply'
> Searching '/w/workspace/vpp-csit-verify-api-crc-master/src' for .api files.
> Traceback (most recent call last):
>   File "/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py", line 97, in <module>
>     main()
>   File "/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py", line 92, in main
>     vppapigen(vppapigen_bin, output_dir, src_dir, f)
>   File "/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py", line 59, in vppapigen
>     src_file.name)])
>   File "/usr/lib/python3.6/subprocess.py", line 356, in check_output
>     **kwargs).stdout
>   File "/usr/lib/python3.6/subprocess.py", line 438, in run
>     output=stdout, stderr=stderr)
> subprocess.CalledProcessError: Command '['/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/vppapigen.py', '--includedir', '/w/workspace/vpp-csit-verify-api-crc-master/src', '--input', '/w/workspace/vpp-csit-verify-api-crc-master/src/plugins/marvell/pp2/pp2.api', 'JSON', '--output', '/w/workspace/vpp-csit-verify-api-crc-master/build-root/install-vpp-native/vpp/share/vpp/api/plugins/pp2.api.json']' returned non-zero exit status 1.
> Makefile:610: recipe for target 'json-api-files' failed
> make: *** [json-api-files] Error 1
> +++ die 'Generation of .api.json files failed.'
> +++ set -x
> +++ set +eu
> +++ warn 'Generation of .api.json files failed.'
> +++ set -exuo pipefail
> +++ echo 'Generation of .api.json files failed.'
> Generation of .api.json files failed.
> +++ exit 1
> Build step 'Execute shell' marked build as failure


Re: [vpp-dev] vac_client_constructor memory allocation

2019-12-02 Thread emma sdi
You are right. clib_mem_init calls mmap to allocate virtual memory, but I got
random SIGSEGVs in the client program when the running machine has low memory
(about 2 GB). So I changed client.c and memory_client.c to allocate less
virtual memory, and only when I call connect_to_vlib.

The changes are uploaded in this link.


Please show me a sample of using a Unix socket to call the VPP API.

Another thing: when I enabled memory overcommit ('echo 1 >
/proc/sys/vm/overcommit_memory'), the SIGSEGV does not happen, but it's too
risky and I don't like it.

Look at the following line of `ps aux` output:
root 22237  100  0.1 124111660 68004 ? Rsl  17:24  16:44 /usr/bin/vpp -c /etc/vpp/startup.conf
vpp has 124 GB of virtual memory!! Is there a memory leak?

Regards,
Emma

On Mon, Dec 2, 2019 at 12:01 PM Ole Troan  wrote:

> Emma,
>
> > The function vac_client_constructor allocates 1 GB of memory in every
> binary which linked to the vlibmemoryclient library.
> > I have limited memory in my test machine. Is there any way to resolve
> this issue?
>
> Firstly this is virtual memory.
>
> If I recall correctly the API client uses the heap largely for the message
> dictionary.
> You can certainly make this tunable / make a better estimate of how much
> memory it needs.
> Or you can use the Unix Domain socket transport. That doesn't require a
> VPP memory heap on the client side at all.
>
> Cheers,
> Ole


Re: [vpp-dev] Patch validation seems to be broken: api-crc job, 'no module named ply...'

2019-12-02 Thread Jerome Tollet via Lists.Fd.Io
Hi Dave,
I just sent a private email to Dave, Andrew and Ed 😉 (see enclosed).
Thanks for your help.
Jerome

From: on behalf of "Dave Barach via Lists.Fd.Io"
Reply-To: "Dave Barach (dbarach)"
Date: Monday, 2 December 2019 at 14:24
To: "Ed Kern (ejk)", "Andrew Yourtchenko (ayourtch)"
Cc: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] Patch validation seems to be broken: api-crc job, 'no module 
named ply...'

Please have a look... Thanks... Dave

+++ export PYTHONPATH=/w/workspace/vpp-csit-verify-api-crc-master/csit
+++ PYTHONPATH=/w/workspace/vpp-csit-verify-api-crc-master/csit
+++ make json-api-files
/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py
Traceback (most recent call last):
  File "/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/vppapigen.py", line 3, in <module>
    import ply.lex as lex
ModuleNotFoundError: No module named 'ply'
Searching '/w/workspace/vpp-csit-verify-api-crc-master/src' for .api files.
Traceback (most recent call last):
  File "/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py", line 97, in <module>
    main()
  File "/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py", line 92, in main
    vppapigen(vppapigen_bin, output_dir, src_dir, f)
  File "/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py", line 59, in vppapigen
    src_file.name)])
  File "/usr/lib/python3.6/subprocess.py", line 356, in check_output
    **kwargs).stdout
  File "/usr/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/vppapigen.py', '--includedir', '/w/workspace/vpp-csit-verify-api-crc-master/src', '--input', '/w/workspace/vpp-csit-verify-api-crc-master/src/plugins/marvell/pp2/pp2.api', 'JSON', '--output', '/w/workspace/vpp-csit-verify-api-crc-master/build-root/install-vpp-native/vpp/share/vpp/api/plugins/pp2.api.json']' returned non-zero exit status 1.
Makefile:610: recipe for target 'json-api-files' failed
make: *** [json-api-files] Error 1
+++ die 'Generation of .api.json files failed.'
+++ set -x
+++ set +eu
+++ warn 'Generation of .api.json files failed.'
+++ set -exuo pipefail
+++ echo 'Generation of .api.json files failed.'
Generation of .api.json files failed.
+++ exit 1
Build step 'Execute shell' marked build as failure
--- Begin Message ---
Hello,

I tried to push this patch (https://gerrit.fd.io/r/c/vpp/+/23700), which only 
adds a FEATURE.yaml file for DHCP.

This test, https://jenkins.fd.io/job/vpp-csit-verify-api-crc-master/2164/, 
returns an error, and having a look at its logs 
(https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-csit-verify-api-crc-master/2164/console.log.gz)
I found the following problem:

 

+++ export PYTHONPATH=/w/workspace/vpp-csit-verify-api-crc-master/csit

+++ PYTHONPATH=/w/workspace/vpp-csit-verify-api-crc-master/csit

+++ make json-api-files

/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py

Traceback (most recent call last):
  File "/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/vppapigen.py", line 3, in <module>
    import ply.lex as lex
ModuleNotFoundError: No module named 'ply'
Searching '/w/workspace/vpp-csit-verify-api-crc-master/src' for .api files.
Traceback (most recent call last):
  File "/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py", line 97, in <module>
    main()
  File "/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py", line 92, in main
    vppapigen(vppapigen_bin, output_dir, src_dir, f)
  File "/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py", line 59, in vppapigen
    src_file.name)])
  File "/usr/lib/python3.6/subprocess.py", line 356, in check_output
    **kwargs).stdout
  File "/usr/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/vppapigen.py', '--includedir', '/w/workspace/vpp-csit-verify-api-crc-master/src', '--input', '/w/workspace/vpp-csit-verify-api-crc-master/src/plugins/marvell/pp2/pp2.api', 'JSON', '--output', '/w/workspace/vpp-csit-verify-api-crc-master/build-root/install-vpp-native/vpp/share/vpp/api/plugins/pp2.api.json']' returned non-zero exit status 1.

Makefile:610: recipe for target 'json-api-files' failed

 

Can you help with that?

 

Jerome

--- End Message ---


[vpp-dev] Patch validation seems to be broken: api-crc job, 'no module named ply...'

2019-12-02 Thread Dave Barach via Lists.Fd.Io
Please have a look... Thanks... Dave

+++ export PYTHONPATH=/w/workspace/vpp-csit-verify-api-crc-master/csit
+++ PYTHONPATH=/w/workspace/vpp-csit-verify-api-crc-master/csit
+++ make json-api-files
/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py
Traceback (most recent call last):
  File "/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/vppapigen.py", line 3, in <module>
    import ply.lex as lex
ModuleNotFoundError: No module named 'ply'
Searching '/w/workspace/vpp-csit-verify-api-crc-master/src' for .api files.
Traceback (most recent call last):
  File "/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py", line 97, in <module>
    main()
  File "/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py", line 92, in main
    vppapigen(vppapigen_bin, output_dir, src_dir, f)
  File "/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py", line 59, in vppapigen
    src_file.name)])
  File "/usr/lib/python3.6/subprocess.py", line 356, in check_output
    **kwargs).stdout
  File "/usr/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/vppapigen.py', '--includedir', '/w/workspace/vpp-csit-verify-api-crc-master/src', '--input', '/w/workspace/vpp-csit-verify-api-crc-master/src/plugins/marvell/pp2/pp2.api', 'JSON', '--output', '/w/workspace/vpp-csit-verify-api-crc-master/build-root/install-vpp-native/vpp/share/vpp/api/plugins/pp2.api.json']' returned non-zero exit status 1.
Makefile:610: recipe for target 'json-api-files' failed
make: *** [json-api-files] Error 1
+++ die 'Generation of .api.json files failed.'
+++ set -x
+++ set +eu
+++ warn 'Generation of .api.json files failed.'
+++ set -exuo pipefail
+++ echo 'Generation of .api.json files failed.'
Generation of .api.json files failed.
+++ exit 1
Build step 'Execute shell' marked build as failure
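A minimal sketch of the failure chain in the log above (not the CI job itself; the missing-module name below is made up for the demo): vppapigen.py, run as a child process by generate_json.py, cannot import the PyPI lexer package 'ply', exits non-zero, and subprocess.check_output() in the parent turns that exit status into CalledProcessError. The likely fix is simply making 'ply' importable by the job's interpreter (e.g. pip3 install ply).

```python
import subprocess
import sys

# Reproduce the failure mode from the log: a child interpreter fails to
# import a module, exits non-zero, and the parent's check_output()
# raises CalledProcessError -- exactly what generate_json.py hit when
# vppapigen.py could not import 'ply'.
def run_child(code):
    """Run `code` in a child Python; return its output on failure, None on success."""
    try:
        subprocess.check_output([sys.executable, "-c", code],
                                stderr=subprocess.STDOUT)
        return None
    except subprocess.CalledProcessError as e:
        return e.output.decode()

output = run_child("import no_such_module_for_demo")  # hypothetical module name
print("ModuleNotFoundError" in output)  # True: same error class as in the log
```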
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14749): https://lists.fd.io/g/vpp-dev/message/14749
Mute This Topic: https://lists.fd.io/mt/64831658/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] NAT stops processing for big amount of users

2019-12-02 Thread Юрий Иванов
Hi Filip

Simple NAT.

Regards Yurii

From: Filip Varga -X (fivarga - PANTHEON TECH SRO at Cisco)
Sent: December 2, 2019 1:21:26 PM
To: Юрий Иванов ; vpp-dev@lists.fd.io
Cc: Ole Troan (otroan)
Subject: RE: [vpp-dev] NAT stops processing for big amount of users


Hi Yurii,



Are you using endpoint-dependent NAT or simple NAT?



Best regards,

Filip








From: Юрий Иванов 
Sent: Monday, December 2, 2019 8:17 AM
To: Filip Varga -X (fivarga - PANTHEON TECH SRO at Cisco) ; 
vpp-dev@lists.fd.io
Cc: Ole Troan (otroan) 
Subject: RE: [vpp-dev] NAT stops processing for big amount of users



Hi Filip,

I saw you've created a patch, so I tried to test it out today.



I've built it after applying your code, so my version now is:



vpp# show version

vpp v20.01-rc0~737-g4c82b6f42 built by root on nat-1 at Sat nov 30 11:47:25 EET 
2019



After that I moved my private address traffic to my new VPP NAT server.

As a result, VPP stopped processing traffic immediately.



The configuration is very straightforward, with the default vpp.conf:



set int ip address TenGigabitEthernet1/0/0 10.0.100.1/31

set int ip address TenGigabitEthernet1/0/1 19.246.159.1/25

set int state TenGigabitEthernet1/0/0 up

set int state TenGigabitEthernet1/0/1 up



ip route add 0.0.0.0/0 via 19.246.159.126 TenGigabitEthernet1/0/1

ip route add 10.0.0.0/8 via 10.0.100.0 TenGigabitEthernet1/0/0



set int nat44 in TenGigabitEthernet1/0/0 out TenGigabitEthernet1/0/1

nat44 add address 19.246.159.5 - 19.246.159.10



There are not many outside addresses, but I think it will be enough for testing purposes.



vpp# show nat44 sessions

NAT44 sessions:

 thread 0 vpp_main: 10240 sessions 

  10.9.1.19: 10 dynamic translations, 0 static translations

  10.71.0.129: 28 dynamic translations, 0 static translations

  10.83.0.196: 4 dynamic translations, 0 static translations

  10.17.0.127: 12 dynamic translations, 0 static translations

  10.79.0.119: 9 dynamic translations, 0 static translations

  ...

  -- more -- (1-30/1055)



vpp# show nat timeouts

udp timeout: 300sec

tcp-established timeout: 7440sec

tcp-transitory timeout: 240sec

icmp timeout: 60sec



'show logging' shows nothing interesting.



Strangely, only one outside address has active counters.



vpp# sh nat44 addresses



NAT44 pool addresses:

19.246.159.5

  tenant VRF independent

  0 busy udp ports

  0 busy tcp ports

  0 busy icmp ports

19.246.159.6

  tenant VRF independent

  0 busy udp ports

  0 busy tcp ports

  0 busy icmp ports

19.246.159.7

  tenant VRF independent

  0 busy udp ports

  0 busy tcp ports

  0 busy icmp ports

19.246.159.8

  tenant VRF independent

  0 busy udp ports

  0 busy tcp ports

  0 busy icmp ports

19.246.159.9

  tenant VRF independent

  0 busy udp ports

  0 busy tcp ports

  0 busy icmp ports

19.246.159.10

  tenant VRF independent

  1350 busy udp ports

  8801 busy tcp ports

  89 busy icmp ports



Port utilization in attached image.





From: vpp-dev@lists.fd.io on behalf of Юрий Иванов (format_...@outlook.com)
Sent: November 27, 2019 4:22 PM
To: Filip Varga -X (fivarga - PANTHEON TECH SRO at Cisco) (fiva...@cisco.com); vpp-dev@lists.fd.io
Cc: Ole Troan (otroan) (otr...@cisco.com)
Subject: Re: [vpp-dev] NAT stops processing for big amount of users



Thanks,



I'll wait for your fix.

I think using NAT with VPP will be much better than iptables/nftables.



Best regards,

Yurii



From: Filip Varga -X (fivarga - PANTHEON TECH SRO at Cisco) (fiva...@cisco.com)
Sent: November 27, 2019 12:39 PM
To: format_...@outlook.com; vpp-dev@lists.fd.io
Cc: Ole Troan (otroan) (otr...@cisco.com)
Subject: RE: [vpp-dev] NAT stops processing for big amount of users



Hi,



The issue is related to a bu

Re: [vpp-dev] Based on the VPP to Nginx testing #ngnix #vpp

2019-12-02 Thread lin . yang13
Hi Florin,
When the nginx configuration item worker_processes = 1, everything is normal;
when worker_processes > 1, the situation described above occurs.

Can you explain the above problem?
thanks,
Yang.L
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14746): https://lists.fd.io/g/vpp-dev/message/14746
Mute This Topic: https://lists.fd.io/mt/64501057/21656
Mute #vpp: https://lists.fd.io/mk?hashtag=vpp&subid=1480452
Mute #ngnix: https://lists.fd.io/mk?hashtag=ngnix&subid=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vac_client_constructor memory allocation

2019-12-02 Thread Ole Troan
Emma,

> The function vac_client_constructor allocates 1 GB of memory in every binary 
> which linked to the vlibmemoryclient library.
> I have limited memory in my test machine. Is there any way to resolve this 
> issue?

Firstly, this is virtual memory, not resident memory.

If I recall correctly the API client uses the heap largely for the message 
dictionary.
You can certainly make this tunable / make a better estimate of how much memory 
it needs.
Or you can use the Unix Domain socket transport. That doesn't require a VPP 
memory heap on the client side at all.
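Ole's point can be illustrated with a small sketch (not VPP code; the 100 MB bound below is an arbitrary demo threshold): on Linux, an anonymous mapping only reserves address space, and physical pages are committed lazily as they are written, so reserving 1 GB barely moves resident memory.

```python
import mmap
import resource

# Illustrative sketch: reserve 1 GB of address space, as the API client
# heap does, and observe that resident memory grows far less than 1 GB,
# because anonymous pages are only backed by RAM once touched.
rss_before = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss  # KiB on Linux
region = mmap.mmap(-1, 1 << 30)   # 1 GB anonymous private mapping
region[0:4] = b"vpp!"             # touch a single page; only it becomes resident
rss_after = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

# Expected True on Linux: RSS grew by well under 100 MB, not 1 GB.
print(rss_after - rss_before < 100 * 1024)
region.close()
```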

Cheers,
Ole
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14745): https://lists.fd.io/g/vpp-dev/message/14745
Mute This Topic: https://lists.fd.io/mt/64406844/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-