Re: [vpp-dev] Question and bug found on GTP performance testing

2018-01-24 Thread Lollita Liu
Hi, John.

We tried to bypass the node creation during interface creation and 
ran the case again.  The GTPU throughput is no longer affected by interface 
creation. The basic source change is as follows:

diff --git a/src/vnet/interface.c b/src/vnet/interface.c
index 82eccc1..451019e 100644
--- a/src/vnet/interface.c
+++ b/src/vnet/interface.c
@@ -745,6 +745,10 @@ vnet_register_interface (vnet_main_t * vnm,
   hw->max_l3_packet_bytes[VLIB_RX] = ~0;
   hw->max_l3_packet_bytes[VLIB_TX] = ~0;
 
+  if (0 == strcmp (dev_class->name, "GTPU")) {
+    goto skip_add_node;
+  }
+
   tx_node_name = (char *) format (0, "%v-tx", hw->name);
   output_node_name = (char *) format (0, "%v-output", hw->name);
 
@@ -881,6 +885,8 @@ vnet_register_interface (vnet_main_t * vnm,
   setup_output_node (vm, hw->output_node_index, hw_class);
   setup_tx_node (vm, hw->tx_node_index, dev_class);
 
+skip_add_node:
+
   /* Call all up/down callbacks with zero flags when interface is created. */
   vnet_sw_interface_set_flags_helper (vnm, hw->sw_if_index, /* flags */ 0,
                                       VNET_INTERFACE_SET_FLAGS_HELPER_IS_CREATE);

BR/Lollita Liu

From: Lollita Liu
Sent: Tuesday, January 23, 2018 11:28 AM
To: 'John Lo (loj)' ; vpp-dev@lists.fd.io
Cc: David Yu Z ; Kingwel Xie 
; Terry Zhang Z ; Jordy 
You 
Subject: RE: Question and bug found on GTP performance testing

Hi, John,
The internal mechanism is very clear to me now.

Also, do you have any thoughts about the deadlock on the main thread?

BR/Lollita Liu

From: John Lo (loj) [mailto:l...@cisco.com]
Sent: Tuesday, January 23, 2018 11:18 AM
To: Lollita Liu <lollita@ericsson.com>; vpp-dev@lists.fd.io
Cc: David Yu Z <david.z...@ericsson.com>; Kingwel Xie <kingwel@ericsson.com>; Terry Zhang Z <terry.z.zh...@ericsson.com>; Jordy You <jordy@ericsson.com>
Subject: RE: Question and bug found on GTP performance testing

Hi Lolita,

Thank you for providing information from your performance test with observed 
behavior and problems.

On interface creation, including tunnels, VPP always creates dedicated output 
and tx nodes for each interface. As you correctly observed, these dedicated tx 
and output nodes are not used for most tunnel interfaces such as GTPU and 
VXLAN. All these tunnel interfaces of the same tunnel type would use an 
existing tunnel type specific encap node as their output nodes.

I can see that for large-scale tunnel deployments, creation of a large number 
of these unused output and tx nodes can be an issue, especially when multiple 
worker threads are used. The worker threads will be blocked from forwarding 
packets while the main thread is busy creating these nodes and doing setup for 
multiple worker threads.

I believe we should improve VPP interface creation to allow a way for creating 
interfaces, such as tunnels, where existing (encap-)nodes can be specified as 
interface output nodes without creating dedicated tx and output nodes.
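The improvement proposed above can be sketched in a few lines of plain C. Everything here (register_interface, NODE_INDEX_INVALID, the node counter) is invented for illustration and is not the actual VPP API; it only captures the control-flow idea of reusing a shared encap node instead of building dedicated nodes per tunnel:

```c
/* Hypothetical sketch, not VPP code: create dedicated tx/output nodes only
 * when the caller does not supply an existing (encap) node index. */
#define NODE_INDEX_INVALID (~0u)

static unsigned nodes_created;	/* counts dedicated nodes built so far */

static unsigned
register_interface (unsigned existing_output_node)
{
  if (existing_output_node != NODE_INDEX_INVALID)
    return existing_output_node;	/* tunnel case: reuse shared encap node */
  nodes_created += 2;			/* build dedicated tx + output nodes */
  return nodes_created;
}
```

With this shape, creating 10K tunnels performs no node creation at all, so the main thread never stalls the workers for node setup.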

Your observation that the forwarding PPS impact only occurs during initial 
tunnel creation and not on subsequent delete and create is as expected. It is 
because on tunnel deletion, the associated interfaces are not deleted but kept 
in a reused pool for subsequent creation of the same tunnel type. It may not be 
the best approach for interface usage flexibility but it certainly helps with 
efficiency of tunnel delete and create cases.
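The delete-then-reuse behavior described above can be sketched as a simple free list; the names and fixed-size array are invented for illustration and are not VPP's actual pool implementation:

```c
/* Illustrative only: deleting a tunnel interface pushes its index onto a
 * free list; the next create of the same tunnel type pops it, so no new
 * interface (and no new nodes) must be built. */
#define MAX_IFS 1024

static int free_list[MAX_IFS];	/* indices of deleted, reusable interfaces */
static int n_free;
static int next_new;		/* next never-used interface index */

static int
tunnel_if_create (void)
{
  if (n_free > 0)
    return free_list[--n_free];	/* reuse a previously deleted interface */
  return next_new++;		/* first-time create: allocate fresh */
}

static void
tunnel_if_delete (int if_index)
{
  free_list[n_free++] = if_index;	/* keep it around for later reuse */
}
```

This is why only the very first creation of a batch of tunnels pays the node-creation cost.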

I will work on the interface creation improvement described above when I get a 
chance.  I can let you know when a patch is available on vpp master for you to 
try.  As for 18.01 release, it is probably too late to include this improvement.

Regards,
John

From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Lollita Liu
Sent: Monday, January 22, 2018 5:04 AM
To: vpp-dev@lists.fd.io
Cc: David Yu Z <david.z...@ericsson.com>; Kingwel Xie <kingwel@ericsson.com>; Terry Zhang Z <terry.z.zh...@ericsson.com>; Jordy You <jordy@ericsson.com>
Subject: [vpp-dev] Question and bug found on GTP performance testing

Hi,

We are doing performance testing on the GTP part of the VPP 
source code, measuring the GTPU performance impact of creating/removing 
tunnels. We found some curious behavior and one bug.



Testing GTP encapsulation via one CPU across different rx and tx 
ports on the same NUMA node, with 10K pre-created GTPU tunnels all carrying 
data: the result is 4.7 Mpps @ 64B.

Testing GTP encapsulation via one CPU across different rx and tx 
ports on the same NUMA node, with 10K pre-created GTPU tunnels all carrying 
data, while creating another 10K GTPU tunnels at the same time: the result is 
about 400 Kpps @ 64B.


The create tunnel commands are "create gtpu tunnel src 1.4.1.1 
dst 1.4.1.2 teid 1 decap-next ip4" and "ip route add 10.

[vpp-dev] Build triggering on a simple commit message update?

2018-01-24 Thread Marco Varlese
All,

I noticed that when a patch is updated solely for the commit message (and pushed
to gerrit), gerrit triggers a complete new build. 

I wonder if it is possible (on the gerrit backend) to detect that a patch has
been submitted with only the commit message updated, and hence skip the
full verification process.

I am thinking about it since we could save cycles (many) on the build-machines
between the many processes building the code on various distros.

Thoughts?


Cheers,

-- 
Marco V

SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] heap per thread

2018-01-24 Thread Dave Barach (dbarach)
Yes, it’s possible. This is not the obvious way to do it.

Before I answer any questions: what are you trying to accomplish? Idiomatic vpp 
coding techniques typically don’t result in enough memory allocator traffic to 
make it worth using per-thread heaps.

D.

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Saeed P
Sent: Wednesday, January 24, 2018 12:53 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] heap per thread

Hi
I tried to change the memory allocation in VPP to use a separate mheap per 
worker instead of a single shared mheap.
In src/vlib/threads.c, in the start_workers function, I made the following change:

  if (!strcmp (tr->name, "workers"))
    {
      tr->mheap_size = new_mheap_size;
    }
  vec_add2 (vlib_worker_threads, w, 1);

  if (tr->mheap_size)
    w->thread_mheap = mheap_alloc (0, tr->mheap_size);
  else
    w->thread_mheap = main_heap;

By default tr->mheap_size is zero, so the code takes the else branch and uses 
main_heap. With a separate mheap allocated for each worker instead, VPP 
crashes with a core dump, as GDB shows:

Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
mheap_get_search_free_bin (align_offset=4, align=<optimized out>, n_user_data_bytes_arg=<optimized out>, bin=11, v=0x7fffb5bdd000) at /root/CGNAT/build-data/../src/vppinfra/mheap.c:401
401       uword this_object_n_user_data_bytes = mheap_elt_data_bytes (e);

Is it possible to set a different mheap per worker?
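For what "a heap per worker" means in general terms, here is a minimal per-thread bump-arena sketch in plain C. It is not VPP's mheap and does not address the crash above; it only illustrates the idea that each worker owns a private allocator so allocations avoid a shared lock. All names are invented for illustration.

```c
/* Minimal per-worker arena sketch (NOT VPP's mheap): each worker owns a
 * private bump allocator, so its allocations never touch a shared lock. */
#include <stdlib.h>

typedef struct
{
  unsigned char *base;	/* start of this worker's private memory */
  size_t size;		/* total bytes in the arena */
  size_t used;		/* bytes handed out so far */
} arena_t;

static arena_t *
arena_create (size_t size)
{
  arena_t *a = malloc (sizeof (*a));
  a->base = malloc (size);
  a->size = size;
  a->used = 0;
  return a;
}

static void *
arena_alloc (arena_t * a, size_t n)
{
  size_t aligned = (n + 7) & ~(size_t) 7;	/* keep 8-byte alignment */
  if (a->used + aligned > a->size)
    return NULL;				/* this worker's arena is full */
  void *p = a->base + a->used;
  a->used += aligned;
  return p;
}
```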


Thanks,
-Saeed
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Question and bug found on GTP performance testing

2018-01-24 Thread Dave Barach (dbarach)
We're not going to turn vnet_register_interface(...) into an epic catalog of 
special-purpose strcmp's. Any patch which looks the least bit like the diffs 
shown below is guaranteed to be scored -2, and never merged.

Please let John propose a mechanism to address this issue.

Thanks... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Lollita Liu
Sent: Wednesday, January 24, 2018 5:09 AM
To: John Lo (loj) ; vpp-dev@lists.fd.io
Cc: Kingwel Xie ; David Yu Z 
; Terry Zhang Z ; Jordy 
You 
Subject: Re: [vpp-dev] Question and bug found on GTP performance testing

Hi, John.

We tried to bypass the node creation during interface creation and 
ran the case again.  The GTPU throughput is no longer affected by interface 
creation. The basic source change is as follows:

diff --git a/src/vnet/interface.c b/src/vnet/interface.c
index 82eccc1..451019e 100644
--- a/src/vnet/interface.c
+++ b/src/vnet/interface.c
@@ -745,6 +745,10 @@ vnet_register_interface (vnet_main_t * vnm,
   hw->max_l3_packet_bytes[VLIB_RX] = ~0;
   hw->max_l3_packet_bytes[VLIB_TX] = ~0;
 
+  if (0 == strcmp (dev_class->name, "GTPU")) {
+    goto skip_add_node;
+  }
+
   tx_node_name = (char *) format (0, "%v-tx", hw->name);
   output_node_name = (char *) format (0, "%v-output", hw->name);
 
@@ -881,6 +885,8 @@ vnet_register_interface (vnet_main_t * vnm,
   setup_output_node (vm, hw->output_node_index, hw_class);
   setup_tx_node (vm, hw->tx_node_index, dev_class);
 
+skip_add_node:
+
   /* Call all up/down callbacks with zero flags when interface is created. */
   vnet_sw_interface_set_flags_helper (vnm, hw->sw_if_index, /* flags */ 0,
                                       VNET_INTERFACE_SET_FLAGS_HELPER_IS_CREATE);

BR/Lollita Liu

From: Lollita Liu
Sent: Tuesday, January 23, 2018 11:28 AM
To: 'John Lo (loj)' <l...@cisco.com>; vpp-dev@lists.fd.io
Cc: David Yu Z <david.z...@ericsson.com>; Kingwel Xie <kingwel@ericsson.com>; Terry Zhang Z <terry.z.zh...@ericsson.com>; Jordy You <jordy@ericsson.com>
Subject: RE: Question and bug found on GTP performance testing

Hi, John,
The internal mechanism is very clear to me now.

Also, do you have any thoughts about the deadlock on the main thread?

BR/Lollita Liu

From: John Lo (loj) [mailto:l...@cisco.com]
Sent: Tuesday, January 23, 2018 11:18 AM
To: Lollita Liu <lollita@ericsson.com>; vpp-dev@lists.fd.io
Cc: David Yu Z <david.z...@ericsson.com>; Kingwel Xie <kingwel@ericsson.com>; Terry Zhang Z <terry.z.zh...@ericsson.com>; Jordy You <jordy@ericsson.com>
Subject: RE: Question and bug found on GTP performance testing

Hi Lolita,

Thank you for providing information from your performance test with observed 
behavior and problems.

On interface creation, including tunnels, VPP always creates dedicated output 
and tx nodes for each interface. As you correctly observed, these dedicated tx 
and output nodes are not used for most tunnel interfaces such as GTPU and 
VXLAN. All these tunnel interfaces of the same tunnel type would use an 
existing tunnel type specific encap node as their output nodes.

I can see that for large-scale tunnel deployments, creation of a large number 
of these unused output and tx nodes can be an issue, especially when multiple 
worker threads are used. The worker threads will be blocked from forwarding 
packets while the main thread is busy creating these nodes and doing setup for 
multiple worker threads.

I believe we should improve VPP interface creation to allow a way for creating 
interfaces, such as tunnels, where existing (encap-)nodes can be specified as 
interface output nodes without creating dedicated tx and output nodes.

Your observation that the forwarding PPS impact only occurs during initial 
tunnel creation and not on subsequent delete and create is as expected. It is 
because on tunnel deletion, the associated interfaces are not deleted but kept 
in a reused pool for subsequent creation of the same tunnel type. It may not be 
the best approach for interface usage flexibility but it certainly helps with 
efficiency of tunnel delete and create cases.

I will work on the interface creation improvement described above when I get a 
chance.  I can let you know when a patch is available on vpp master for you to 
try.  As for 18.01 release, it is probably too late to include this improvement.

Regards,
John

From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Lollita Liu
Sent: Monday, January 22, 2018 5:04 AM
To: vpp-dev@lists.fd.io
Cc: David Yu Z <david.z...@ericsson.com>; Kingwel Xie <kingwel@ericsson.com>; Terry Zhang Z <terry.z.zh...@ericsson.com>; Jordy You <jordy@ericsson.com>
Subject: [vpp-dev] Question 

Re: [vpp-dev] Question and bug found on GTP performance testing

2018-01-24 Thread John Lo (loj)
Hi Lolita,

I made a proper fix for this, and the patch has just been merged into the 
master branch:
https://gerrit.fd.io/r/#/c/10216/

Can you try a vpp image with this patch and see if it works better now?

Regards,
John

From: Dave Barach (dbarach)
Sent: Wednesday, January 24, 2018 8:38 AM
To: Lollita Liu ; John Lo (loj) ; 
vpp-dev@lists.fd.io
Cc: Kingwel Xie ; David Yu Z 
; Terry Zhang Z ; Jordy 
You 
Subject: RE: Question and bug found on GTP performance testing

We're not going to turn vnet_register_interface(...) into an epic catalog of 
special-purpose strcmp's. Any patch which looks the least bit like the diffs 
shown below is guaranteed to be scored -2, and never merged.

Please let John propose a mechanism to address this issue.

Thanks... Dave

From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Lollita Liu
Sent: Wednesday, January 24, 2018 5:09 AM
To: John Lo (loj) <l...@cisco.com>; vpp-dev@lists.fd.io
Cc: Kingwel Xie <kingwel@ericsson.com>; David Yu Z <david.z...@ericsson.com>; Terry Zhang Z <terry.z.zh...@ericsson.com>; Jordy You <jordy@ericsson.com>
Subject: Re: [vpp-dev] Question and bug found on GTP performance testing

Hi, John.

We tried to bypass the node creation during interface creation and 
ran the case again.  The GTPU throughput is no longer affected by interface 
creation. The basic source change is as follows:

diff --git a/src/vnet/interface.c b/src/vnet/interface.c
index 82eccc1..451019e 100644
--- a/src/vnet/interface.c
+++ b/src/vnet/interface.c
@@ -745,6 +745,10 @@ vnet_register_interface (vnet_main_t * vnm,
   hw->max_l3_packet_bytes[VLIB_RX] = ~0;
   hw->max_l3_packet_bytes[VLIB_TX] = ~0;
 
+  if (0 == strcmp (dev_class->name, "GTPU")) {
+    goto skip_add_node;
+  }
+
   tx_node_name = (char *) format (0, "%v-tx", hw->name);
   output_node_name = (char *) format (0, "%v-output", hw->name);
 
@@ -881,6 +885,8 @@ vnet_register_interface (vnet_main_t * vnm,
   setup_output_node (vm, hw->output_node_index, hw_class);
   setup_tx_node (vm, hw->tx_node_index, dev_class);
 
+skip_add_node:
+
   /* Call all up/down callbacks with zero flags when interface is created. */
   vnet_sw_interface_set_flags_helper (vnm, hw->sw_if_index, /* flags */ 0,
                                       VNET_INTERFACE_SET_FLAGS_HELPER_IS_CREATE);

BR/Lollita Liu

From: Lollita Liu
Sent: Tuesday, January 23, 2018 11:28 AM
To: 'John Lo (loj)' <l...@cisco.com>; vpp-dev@lists.fd.io
Cc: David Yu Z <david.z...@ericsson.com>; Kingwel Xie <kingwel@ericsson.com>; Terry Zhang Z <terry.z.zh...@ericsson.com>; Jordy You <jordy@ericsson.com>
Subject: RE: Question and bug found on GTP performance testing

Hi, John,
The internal mechanism is very clear to me now.

Also, do you have any thoughts about the deadlock on the main thread?

BR/Lollita Liu

From: John Lo (loj) [mailto:l...@cisco.com]
Sent: Tuesday, January 23, 2018 11:18 AM
To: Lollita Liu <lollita@ericsson.com>; vpp-dev@lists.fd.io
Cc: David Yu Z <david.z...@ericsson.com>; Kingwel Xie <kingwel@ericsson.com>; Terry Zhang Z <terry.z.zh...@ericsson.com>; Jordy You <jordy@ericsson.com>
Subject: RE: Question and bug found on GTP performance testing

Hi Lolita,

Thank you for providing information from your performance test with observed 
behavior and problems.

On interface creation, including tunnels, VPP always creates dedicated output 
and tx nodes for each interface. As you correctly observed, these dedicated tx 
and output nodes are not used for most tunnel interfaces such as GTPU and 
VXLAN. All these tunnel interfaces of the same tunnel type would use an 
existing tunnel type specific encap node as their output nodes.

I can see that for large-scale tunnel deployments, creation of a large number 
of these unused output and tx nodes can be an issue, especially when multiple 
worker threads are used. The worker threads will be blocked from forwarding 
packets while the main thread is busy creating these nodes and doing setup for 
multiple worker threads.

I believe we should improve VPP interface creation to allow a way for creating 
interfaces, such as tunnels, where existing (encap-)nodes can be specified as 
interface output nodes without creating dedicated tx and output nodes.

Your observation that the forwarding PPS impact only occurs during initial 
tunnel creation and not on subsequent delete and create is as expected. It is 
because on tunnel deletion, the associated interfaces are not deleted but kept 
in a reused pool for subsequent creation of the same tunnel type. It may not be 
the best approach for interface usage flexibility but it certainly helps with 
efficiency of tunnel delete and create cases.

I will work on

[vpp-dev] Fwd: [FD.io Helpdesk #51526] AutoReply: VPP Release 18.01 blocker -- merge jobs failing!

2018-01-24 Thread Dave Wallace

FYI...


 Forwarded Message 
Subject: 	[FD.io Helpdesk #51526] AutoReply: VPP Release 18.01 blocker 
-- merge jobs failing!

Date:   Wed, 24 Jan 2018 11:20:32 -0500
From:   FD.io Helpdesk via RT 
Reply-To:   fdio-helpd...@rt.linuxfoundation.org
To: dwallac...@gmail.com



Greetings,

Your support ticket regarding:
"VPP Release 18.01 blocker -- merge jobs failing!",
has been entered in our ticket tracker.  A summary of your ticket appears below.

If you have any follow-up related to this issue, please reply to this email.

You may also follow up on your open tickets by visiting https://rt.linuxfoundation.org/ 
-- if you have not logged into RT before, you will need to follow the "Forgot your 
password" link to set an RT password.

--
The Linux Foundation Support Team


-
Dear helpd...@fd.io,

The VPP 18.01 release merge jobs are failing during the upload of
artifacts to nexus.fd.io which is blocking today's release:

vpp-merge-1801-ubuntu-1604 (jobs #29 & #30)
vpp-merge-1801-centos7 (jobs #29 & #30)
vpp-docs-merge-1801 (jobs #20 & #21)

Please resolve these issues as soon as possible.

Thanks,
-daw-

[ from
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-merge-1801-ubuntu1604/30/console-timestamp.log.gz
]
- %< -
   [ERROR] Failed to execute goal
org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy-file
(default-cli) on project standalone-pom: Failed to deploy artifacts:
Could not transfer artifact
io.fd.vpp:vpp-lib:deb:deb:18.01-release_amd64 from/to
fd.io.stable.1801.ubuntu.xenial.main
(https://nexus.fd.io/content/repositories/fd.io.stable.1801.ubuntu.xenial.main):
Failed to transfer file:
https://nexus.fd.io/content/repositories/fd.io.stable.1801.ubuntu.xenial.main/io/fd/vpp/vpp-lib/18.01-release_amd64/vpp-lib-18.01-release_amd64-deb.deb.
Return code is: 400, ReasonPhrase: Bad Request. -> [Help 1]
- %< -

[ from
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-merge-1801-centos7/30/console-timestamp.log.gz
]
- %< -
   [ERROR] Failed to execute goal
org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy-file
(default-cli) on project standalone-pom: Failed to deploy artifacts:
Could not transfer artifact io.fd.vpp:vpp:rpm:18.01-release.x86_64
from/to fd.io.stable.1801.centos7
(https://nexus.fd.io/content/repositories/fd.io.stable.1801.centos7):
Failed to transfer file:
https://nexus.fd.io/content/repositories/fd.io.stable.1801.centos7/io/fd/vpp/vpp/18.01-release.x86_64/vpp-18.01-release.x86_64.rpm.
Return code is: 400, ReasonPhrase: Bad Request. -> [Help 1]
- %< -

[ from https://jenkins.fd.io/view/vpp/job/vpp-docs-merge-1801/21/console ]
- %< -
00:00:40.241 Resolving jenkins.fd.io (jenkins.fd.io)... failed:
Temporary failure in name resolution.
00:01:20.286 wget: unable to resolve host address ‘jenkins.fd.io’
00:01:20.287 --2018-01-24 08:24:00--
https://jenkins.fd.io/job/vpp-docs-merge-1801/21//timestamps?time=HH:mm:ss&appendLog
00:01:20.289 Resolving jenkins.fd.io (jenkins.fd.io)... failed:
Temporary failure in name resolution.
00:02:00.331 wget: unable to resolve host address ‘jenkins.fd.io’
- %< -

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] dpdk_lib_init: Why hi->max_packet_bytes passed to rte_eth_dev_set_mtu()

2018-01-24 Thread Saxena, Nitin
Hi,

Can somebody provide input to my query?

Thanks,
Nitin

On 23-Jan-2018, at 23:53, Saxena, Nitin <nitin.sax...@cavium.com> wrote:
Hi,

I am running VPP on Cavium's OCTEONTx processor, which uses the 
VNET_DPDK_PMD_THUNDERX driver in DPDK.  I am running into a failure because 
the dpdk_lib_init() and dpdk_device_setup() functions call 
rte_eth_dev_set_mtu() with hi->max_packet_bytes as the argument:

  1. src/plugins/dpdk/device/common.c:112  rte_eth_dev_set_mtu (xd->device_index, hi->max_packet_bytes);
  2. src/plugins/dpdk/device/init.c:674    rte_eth_dev_set_mtu (xd->device_index, hi->max_packet_bytes);

I can see hi->max_packet_bytes being set to ETHERNET_MAX_PACKET_BYTES == 9216, 
which is not the MTU but the max frame size (in the case of OCTEONTx).

dpdk_lib_init() calls the DPDK function rte_eth_dev_info_get (i, &dev_info) to 
get dev_info.max_rx_pktlen, hence the MTU can be calculated as

 mtu = dev_info.max_rx_pktlen - sizeof(ethernet_header_t);

which is what should be passed to rte_eth_dev_set_mtu(), not 
hi->max_packet_bytes, which does not take into account the max_rx_pktlen 
reported by the DPDK PMD driver.
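The arithmetic can be captured in a tiny helper. The 14-byte Ethernet header matches sizeof(ethernet_header_t) in VPP; the helper name is invented for illustration, and this is not VPP or DPDK code:

```c
/* Sketch of the MTU derivation above: the value handed to
 * rte_eth_dev_set_mtu() should come from the PMD-reported max frame size
 * minus the Ethernet header, rather than a VPP-wide constant. */
#include <stdint.h>

#define ETHERNET_HEADER_BYTES 14u  /* dst MAC (6) + src MAC (6) + ethertype (2) */

static uint32_t
mtu_from_max_rx_pktlen (uint32_t max_rx_pktlen)
{
  return max_rx_pktlen - ETHERNET_HEADER_BYTES;
}
```

For a PMD reporting max_rx_pktlen of 9216, the helper yields an MTU of 9202.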

Any comment/input would be helpful.

Thanks,
Nitin


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] Missing PLY ?

2018-01-24 Thread Jon Loeliger
Hey Kids,

The new API Gen seems to want ply.lex, but I don't think
it is listed as a dependency or something somewhere.  Or
maybe I have a really crappy Python.  Dunno.

Net effect, shown below, isn't good.

Did I miss a step?

Thanks,
jdl


make[4]: Entering directory
`/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root/build-vpp-native/vpp'
  APIGEN   vlibmemory/memclnt.api.h
  JSON API vlibmemory/memclnt.api.json
Traceback (most recent call last):
  File
"/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root/tools/bin/vppapigen",
line 4, in 
import ply.lex as lex
ImportError: No module named ply.lex
Traceback (most recent call last):
  File
"/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root/tools/bin/vppapigen",
line 4, in 
import ply.lex as lex
ImportError: No module named ply.lex
make[4]: *** [vlibmemory/memclnt.api.h] Error 1
make[4]: *** Waiting for unfinished jobs
make[4]: *** [vlibmemory/memclnt.api.json] Error 1
make[4]: Leaving directory
`/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root/build-vpp-native/vpp'
make[3]: *** [vpp-build] Error 2
make[3]: Leaving directory
`/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root'
make[2]: *** [install-packages] Error 1
make[2]: Leaving directory
`/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root'
error: Bad exit status from /var/tmp/rpm-tmp.8lAVBj (%build)
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Missing PLY ?

2018-01-24 Thread Dave Barach (dbarach)
“$ make install-dep” fixed it for me… D.

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Jon Loeliger
Sent: Wednesday, January 24, 2018 1:58 PM
To: vpp-dev 
Subject: [vpp-dev] Missing PLY ?

Hey Kids,

The new API Gen seems to want ply.lex, but I don't think
it is listed as a dependency or something somewhere.  Or
maybe I have a really crappy Python.  Dunno.

Net effect, shown below, isn't good.

Did I miss a step?

Thanks,
jdl


make[4]: Entering directory 
`/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root/build-vpp-native/vpp'
  APIGEN   vlibmemory/memclnt.api.h
  JSON API vlibmemory/memclnt.api.json
Traceback (most recent call last):
  File 
"/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root/tools/bin/vppapigen",
 line 4, in 
import ply.lex as lex
ImportError: No module named ply.lex
Traceback (most recent call last):
  File 
"/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root/tools/bin/vppapigen",
 line 4, in 
import ply.lex as lex
ImportError: No module named ply.lex
make[4]: *** [vlibmemory/memclnt.api.h] Error 1
make[4]: *** Waiting for unfinished jobs
make[4]: *** [vlibmemory/memclnt.api.json] Error 1
make[4]: Leaving directory 
`/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root/build-vpp-native/vpp'
make[3]: *** [vpp-build] Error 2
make[3]: Leaving directory 
`/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root'
make[2]: *** [install-packages] Error 1
make[2]: Leaving directory 
`/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root'
error: Bad exit status from /var/tmp/rpm-tmp.8lAVBj (%build)

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Missing PLY ?

2018-01-24 Thread Florin Coras
Hi Dad, 

Did you try the evergreen: 
 
make wipe
make install-dep
make build

Cheers,
Florin


> On Jan 24, 2018, at 10:57 AM, Jon Loeliger  wrote:
> 
> Hey Kids,
> 
> The new API Gen seems to want ply.lex, but I don't think
> it is listed as a dependency or something somewhere.  Or
> maybe I have a really crappy Python.  Dunno.
> 
> Net effect, shown below, isn't good.
> 
> Did I miss a step?
> 
> Thanks,
> jdl
> 
> 
> make[4]: Entering directory 
> `/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root/build-vpp-native/vpp'
>   APIGEN   vlibmemory/memclnt.api.h
>   JSON API vlibmemory/memclnt.api.json
> Traceback (most recent call last):
>   File 
> "/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root/tools/bin/vppapigen",
>  line 4, in 
> import ply.lex as lex
> ImportError: No module named ply.lex
> Traceback (most recent call last):
>   File 
> "/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root/tools/bin/vppapigen",
>  line 4, in 
> import ply.lex as lex
> ImportError: No module named ply.lex
> make[4]: *** [vlibmemory/memclnt.api.h] Error 1
> make[4]: *** Waiting for unfinished jobs
> make[4]: *** [vlibmemory/memclnt.api.json] Error 1
> make[4]: Leaving directory 
> `/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root/build-vpp-native/vpp'
> make[3]: *** [vpp-build] Error 2
> make[3]: Leaving directory 
> `/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root'
> make[2]: *** [install-packages] Error 1
> make[2]: Leaving directory 
> `/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root'
> error: Bad exit status from /var/tmp/rpm-tmp.8lAVBj (%build)
> 
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Missing PLY ?

2018-01-24 Thread Jon Loeliger
On Wed, Jan 24, 2018 at 1:10 PM, Florin Coras wrote:

> Hi Dad,
>

LOL.  I deserved that.  At least I didn't get "Hi Old Man". :-)

Did you try the evergreen:
>
> make wipe
> make install-dep
> make build
>

Yeah, I have done a "make install-dep" a couple of times now
and it doesn't improve things at all.

I seem to be using python 2.7.5.

jdl


jdl@bcc-1 $ python
Python 2.7.5 (default, Aug  4 2017, 00:39:18)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import ply
Traceback (most recent call last):
  File "", line 1, in 
ImportError: No module named ply
>>>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Missing PLY ?

2018-01-24 Thread Florin Coras


> On Jan 24, 2018, at 11:15 AM, Jon Loeliger  wrote:
> 
> 
> 
> On Wed, Jan 24, 2018 at 1:10 PM, Florin Coras wrote:
> Hi Dad,
> 
> LOL.  I deserved that.  At least I didn't get "Hi Old Man". :-)

:-D

> 
> Did you try the evergreen:
> 
> make wipe
> make install-dep
> make build
> 
> Yeah, have done a "make  install-dep" a couple times now
> and it doesn't improve things at all.
> 
> I seem to be using python 2.7.5.

This seems to be working fine on centos7. They say ply is compatible with both 
python 2 and 3 .. :/

Florin

> 
> jdl
> 
> 
> jdl@bcc-1 $ python
> Python 2.7.5 (default, Aug  4 2017, 00:39:18)
> [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import ply
> Traceback (most recent call last):
>   File "", line 1, in 
> ImportError: No module named ply
> >>>
> 
> 
>  

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Missing PLY ?

2018-01-24 Thread Jon Loeliger
>
> Did you try the evergreen:
>>
>> make wipe
>> make install-dep
>> make build
>>
>
> Yeah, have done a "make  install-dep" a couple times now
> and it doesn't improve things at all.
>
> I seem to be using python 2.7.5.
>
>
> This seems to be working fine on centos7. They say ply is compatible with
> both python 2 and 3 .. :/
>

Oh, it works once it is installed!

It appears to not be in the RPM_DEPENDS list.

I can patch that up...

jdl
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Missing PLY ?

2018-01-24 Thread Florin Coras


> On Jan 24, 2018, at 11:34 AM, Jon Loeliger  wrote:
> 
>> Did you try the evergreen:
>> 
>> make wipe
>> make install-dep
>> make build
>> 
>> Yeah, have done a "make  install-dep" a couple times now
>> and it doesn't improve things at all.
>> 
>> I seem to be using python 2.7.5.
> 
> This seems to be working fine on centos7. They say ply is compatible with 
> both python 2 and 3 .. :/
> 
> Oh, it works once it is installed!
> 
> It appears to not be in the RPM_DEPENDS list.
> 
> I can patch that up…

Perfect! Thanks!

Florin

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] VPP performance tuning guide

2018-01-24 Thread Li, Charlie
Hi All,

I am looking for some guidance on how to tune the VPP performance, specifically 
how to tune the IP Forwarding performance for small packets.

I would appreciate it if someone could point me to documents or online 
resources on this topic.

My setup is simple, just an XL710-QDA2 NIC card with two 40G ports and VPP is 
configured to forward IP traffic between the two ports. 

Basically I am using the default /etc/vpp/startup.conf (with the PCI device 
addresses and core numbers modified). The throughput for bi-directional 
traffic with small packets is far below line rate, and throwing more cores at 
it does not seem to improve things.

Regards,
Charlie Li

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Build triggering on a simple commit message update?

2018-01-24 Thread Ed Kern (ejk)
hey marco,

What you're looking for (imo) is not a gerrit change. It's a jenkins-side 
change:

 https://gerrit.fd.io/r/10237

Note: I also included not running on trivial rebases, which may freak some 
people out.

You can't just change gerrit-trigger-patch-submitted because, with the way 
they roll everything up and up and up, it would change the behavior of all 
projects (outside of vpp) using that global trigger.

I have no plans to advance this patch; I'm only pointing out one way to do it.

Ed


On Jan 24, 2018, at 4:13 AM, Marco Varlese <mvarl...@suse.de> wrote:

All,

I noticed that when a patch is updated solely for the commit message (and pushed
to gerrit), gerrit triggers a complete new build.

I wonder if it is possible (on the gerrit backend) to detect that a patch has
been submitted with only the commit message updated, and hence skip the
full verification process.

I am thinking about it since we could save cycles (many) on the build-machines
between the many processes building the code on various distros.

Thoughts?


Cheers,

--
Marco V

SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg

[vpp-dev] VPP 18.01 Release artifacts are now available on nexus.fd.io

2018-01-24 Thread Dave Wallace

Folks,

The VPP 18.01 Release artifacts are now available on nexus.fd.io

The ubuntu.xenial and centos packages can be installed following the 
recipe on the wiki: 
https://wiki.fd.io/view/VPP/Installing_VPP_binaries_from_packages


Thank you to all of the VPP community who have contributed to the 18.01 
VPP Release.



Elvis has left the building!
-daw-


Re: [vpp-dev] [csit-dev] VPP 18.01 Release artifacts are now available on nexus.fd.io

2018-01-24 Thread Ni, Hongjun
Congratulations to VPP community!

From: csit-dev-boun...@lists.fd.io [mailto:csit-dev-boun...@lists.fd.io] On 
Behalf Of Dave Wallace
Sent: Thursday, January 25, 2018 1:23 PM
To: vpp-dev@lists.fd.io; csit-...@lists.fd.io
Subject: [csit-dev] VPP 18.01 Release artifacts are now available on nexus.fd.io

Folks,

The VPP 18.01 Release artifacts are now available on nexus.fd.io

The ubuntu.xenial and centos packages can be installed following the recipe on 
the wiki: https://wiki.fd.io/view/VPP/Installing_VPP_binaries_from_packages

Thank you to all of the VPP community who have contributed to the 18.01 VPP 
Release.


Elvis has left the building!
-daw-

Re: [vpp-dev] [csit-dev] VPP 18.01 Release artifacts are now available on nexus.fd.io

2018-01-24 Thread Florin Coras
Awesome!

Thanks for making this happen, Dave!

Florin

> On Jan 24, 2018, at 9:23 PM, Dave Wallace  wrote:
> 
> Folks,
> 
> The VPP 18.01 Release artifacts are now available on nexus.fd.io
> 
> The ubuntu.xenial and centos packages can be installed following the recipe 
> on the wiki: https://wiki.fd.io/view/VPP/Installing_VPP_binaries_from_packages
> 
> 
> Thank you to all of the VPP community who have contributed to the 18.01 VPP 
> Release.
> 
> 
> Elvis has left the building!
> -daw-
> 

Re: [vpp-dev] [csit-dev] VPP 18.01 Release artifacts are now available on nexus.fd.io

2018-01-24 Thread Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco)
Congratulations!

From: csit-dev-boun...@lists.fd.io [mailto:csit-dev-boun...@lists.fd.io] On 
Behalf Of Dave Wallace
Sent: 25 stycznia 2018 06:23
To: vpp-dev@lists.fd.io; csit-...@lists.fd.io
Subject: [csit-dev] VPP 18.01 Release artifacts are now available on nexus.fd.io

Folks,

The VPP 18.01 Release artifacts are now available on nexus.fd.io

The ubuntu.xenial and centos packages can be installed following the recipe on 
the wiki: https://wiki.fd.io/view/VPP/Installing_VPP_binaries_from_packages

Thank you to all of the VPP community who have contributed to the 18.01 VPP 
Release.


Elvis has left the building!
-daw-

Re: [vpp-dev] Missing PLY ?

2018-01-24 Thread Ole Troan
>> Did you try the evergreen:
>> 
>> make wipe
>> make install-dep
>> make build
>> 
>> Yeah, have done a "make  install-dep" a couple times now
>> and it doesn't improve things at all.
>> 
>> I seem to be using python 2.7.5.
> 
> This seems to be working fine on centos7. They say ply is compatible with 
> both python 2 and 3 .. :/
> 
> Oh, it works once it is installed!
> 
> It appears to not be in the RPM_DEPENDS list.
> 
> I can patch that up...

Thanks! Sorry about that one. (It seems to be on the DEB and RPM_SUSE lists. A
case of beer to whoever updates a dependency list and gets it wrong? (Send me
your address.) :-))

Cheers,
Ole
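[For the archives, the fix being discussed is a one-line addition to the RPM dependency list in the top-level Makefile. A sketch of the shape of the change — the exact variable contents and the distro package name for ply are abbreviated here:]

```make
# Top-level Makefile (sketch): ply was already on the DEB and RPM_SUSE
# dependency lists but missing from the CentOS/RHEL one.
RPM_DEPENDS += python-ply
```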




Re: [vpp-dev] Build triggering on a simple commit message update?

2018-01-24 Thread Marco Varlese
Hi Ed,
On Thu, 2018-01-25 at 00:28 +, Ed Kern (ejk) wrote:
> hey marco,
> 
> What you're looking for (imo) is not a Gerrit change; it's a Jenkins-side change:
> 
>  https://gerrit.fd.io/r/10237
Thank you for showing the path; I am not very familiar with Jenkins, etc. so
this example is very much appreciated!
> Note: I also included not running on trivial rebase, which may freak some
> people out.
Yeah, possibly that should be avoided... 
> You can't just change gerrit-trigger-patch-submitted: because of the way they
> roll everything up and up and up, it would change the behavior of all
> projects (outside of VPP) using that global trigger.
Understood.
> I have no plans on advancing this patch; only pointing out one way to do it.
What are other people's sentiment about the current situation and the proposed
change of direction?
> Ed

Cheers,
Marco
> > On Jan 24, 2018, at 4:13 AM, Marco Varlese  wrote:
> > 
> > All,
> > 
> > I noticed that when a patch is updated solely for the commit message (and
> > pushed to gerrit), gerrit triggers a complete new build.
> > 
> > I wonder if it is possible (on the gerrit backend) to catch that a patch has
> > been submitted with only the commit-message being updated hence skipping the
> > full verification process?
> > 
> > I am thinking about it since we could save cycles (many) on the
> > build-machines between the many processes building the code on various
> > distros.
> > 
> > Thoughts?
> > 
> > Cheers,
> > 
> > --
> > Marco V
> > 
> > SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton
> > HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg
-- 
Marco V


SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg