Re: [vpp-dev] build errors on CentOS 7.5

2018-12-20 Thread Neale Ranns via Lists.Fd.Io
Hi Zhiyong, There is: make wipe-release to clear out the files related to build-release. /neale From: on behalf of Zhiyong Yang Date: Friday, December 21, 2018 at 06:00 To: Paul Vinciguerra, "Neale Ranns (nranns)" Cc: Ole Troan, "vpp-dev@lists.fd.io", Damjan Marion Subject: Re: [vpp-dev

Re: [vpp-dev] NAT workers above 4 completely tanks performance

2018-12-20 Thread JB
Hi Matus, I've not yet had the chance to check what that looks like with multiple clients. I did check the rest of what you wrote, and that made everything a bit clearer! I have it set up so one physical interface handles all internal traffic, and another physical interface handles all extern

Re: [vpp-dev] NAT workers above 4 completely tanks performance

2018-12-20 Thread Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES@Cisco) via Lists.Fd.Io
Is the worker distribution the same in the case of multiple clients (you can see this with the same “show run” exercise; take a look at the number of interface and nat44-in2out calls for each core)? Maybe you should try to play with interface rx queue placement (you can see it in “show interface rx-placement” output
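The inspection and tuning steps Matus describes can be sketched as the following vppctl session. The interface name and queue/worker numbers are illustrative, not from the thread; check the syntax against your VPP version's CLI reference.

```shell
# See which worker thread polls each rx queue.
vppctl show interface rx-placement

# Compare per-worker vector counts for dpdk-input and nat44-in2out
# to spot load imbalance across cores.
vppctl show run

# Pin a specific rx queue to a specific worker (illustrative values).
vppctl set interface rx-placement TenGigabitEthernet6/0/0 queue 1 worker 2
```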

Re: [vpp-dev] NAT workers above 4 completely tanks performance

2018-12-20 Thread JB
Hi Matus, Thanks! Any suggestions on what can be done to alleviate the issues? The above test was done with a single client, but the same symptoms show when throwing far more flows at it, around 5.5 million sessions, from thousands of L3 sources? John _

Re: [vpp-dev] NAT workers above 4 completely tanks performance

2018-12-20 Thread Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES@Cisco) via Lists.Fd.Io
Hi, in your case most NAT translations are done on one core. With 4 cores you are lucky and flows arrive at the same core where translations are processed (no worker handoff), and with 10 cores there is a worker handoff between two workers, which is the reason for the performance drop. Basically your flows
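The handoff effect Matus describes can be illustrated with some toy arithmetic. This is not VPP's actual hash logic, just a sketch of why the receiving worker and the NAT-owning worker can coincide at one worker count and diverge at another:

```shell
# Illustrative arithmetic only -- not VPP's actual hash functions.
# A flow lands on worker (rss_hash % n_rx_workers) but is owned by
# NAT worker (nat_hash % n_nat_workers); when those differ, the packet
# must be handed off between workers, which costs performance.
rss_hash=1234567    # stand-in for the NIC RSS hash of one flow
nat_hash=$rss_hash  # assume both hashes coincide, for simplicity

rx_worker=$(( rss_hash % 4 ))   # received on this worker (4 rx queues)
owner_4=$(( nat_hash % 4 ))     # 4 NAT workers: same worker, no handoff
owner_10=$(( nat_hash % 10 ))   # 10 NAT workers: a different worker

echo "rx=$rx_worker owner4=$owner_4 owner10=$owner_10"
```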

Re: [vpp-dev] build errors on CentOS 7.5

2018-12-20 Thread Zhiyong Yang
Hi Neale, Paul, Neale is right. I tried Neale’s method and it works fine, with a small difference from Neale’s method as below. git checkout fe820689cf56e894ae5fa38f33a48b6960038033 make wipe make rebuild-release It works! If I use make build-release instead, the compile fails again.
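Zhiyong's working sequence, restated as a script. The commit hash is copied from the thread; the comment on rebuild-release reflects the VPP top-level Makefile's conventional behavior and should be verified against your tree:

```shell
# Run from the root of a VPP source checkout.
git checkout fe820689cf56e894ae5fa38f33a48b6960038033
make wipe             # clear debug-build artifacts (stale generated API files included)
make rebuild-release  # conventionally wipe-release followed by build-release
```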

Re: [vpp-dev] Sanity check re: NAT for same-service mapping

2018-12-20 Thread JB
Hi Matus, Thanks, that's what I figured. I'll see about appending the code in case more people find use for it. John From: Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco) Sent: Thursday, December 20, 2018 7:20 AM To: John Biscevic; Ole Troan Cc: v

Re: [vpp-dev] NAT workers above 4 completely tanks performance

2018-12-20 Thread JB
Hi Damjan, Absolutely. I ran one case with the default number of NAT workers (10), which has poor performance, and another case with fewer NAT workers (4), showing great performance. They're captured in two different files, both attached. John vpp# sh run Thread 0 vpp_main (lc

Re: [vpp-dev] dpdk: switch to in-memory mode, deprecate use of socket-mem

2018-12-20 Thread Kingwel Xie
Hi Matthew, The patch (https://gerrit.fd.io/r/#/c/16287/) was intended to allocate the crypto mem pool from DPDK instead of from VPP. I guess you are using 2MB huge pages, so you are running out of memory with the new patch created by Damjan. Please switch to 1GB pages to see if it still happens. Hi D
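Switching to 1GB hugepages, as Kingwel suggests, is typically done at boot time. A minimal sketch, with illustrative page counts (the kernel command-line parameters are standard Linux; adjust to your memory budget):

```shell
# Add to the kernel command line and reboot (illustrative count):
#   default_hugepagesz=1G hugepagesz=1G hugepages=4
#
# Afterwards, verify the reservation took effect:
grep Huge /proc/meminfo
```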

[vpp-dev] How to add Sweetcomb to FD.io Code Contribution Metrics?

2018-12-20 Thread Ni, Hongjun
Hi folks, Could someone help guide me on how to add Sweetcomb to FD.io Code Contribution Metrics? http://stackalytics.com/?release=all&project_type=fdio-group&metric=commits Thanks, Hongjun -=-=-=-=-=-=-=-=-=-=-=- Links: You receive all messages sent to this group. View/Reply Online (#11742): http

Re: [vpp-dev] dpdk: switch to in-memory mode, deprecate use of socket-mem

2018-12-20 Thread Damjan Marion via Lists.Fd.Io
> On 20 Dec 2018, at 18:46, Matthew Smith wrote: > > > Hi Damjan, > > There is a comment that says "preallocate at least 16MB of hugepages per > socket, if more is needed it is up to consumer to preallocate more". What > does a consumer need to do in order to preallocate more? This is jus

Re: [vpp-dev] dpdk: switch to in-memory mode, deprecate use of socket-mem

2018-12-20 Thread Matthew Smith
Hi Damjan, There is a comment that says "preallocate at least 16MB of hugepages per socket, if more is needed it is up to consumer to preallocate more". What does a consumer need to do in order to preallocate more? I've recently had problems using AES-GCM with IPsec on a test system. Mempool all
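To make the "16MB per socket" default concrete: with 2MB hugepages that is only 8 pages per socket, and a large IPsec/crypto mempool needs considerably more. The arithmetic below uses an illustrative 256MB target; the sysfs path in the comment is the standard Linux per-NUMA-node interface for preallocating pages:

```shell
# How many 2MB hugepages cover the 16MB-per-socket default, and how
# many would cover a larger crypto mempool (target size illustrative)?
page_kb=2048                 # one 2MB hugepage, in kB
default_kb=$(( 16 * 1024 ))  # 16MB-per-socket default
want_kb=$(( 256 * 1024 ))    # e.g. 256MB for large IPsec mempools

default_pages=$(( default_kb / page_kb ))
want_pages=$(( want_kb / page_kb ))
echo "default=$default_pages pages, want=$want_pages pages"

# To preallocate on NUMA node 0 (requires root):
#   echo 128 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
```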

Re: [vpp-dev] build errors on CentOS 7.5

2018-12-20 Thread Paul Vinciguerra
Sure. Revert it. If that's the case, I need to look at why make wipe is missing it. On Thu, Dec 20, 2018 at 10:35 AM Neale Ranns (nranns) wrote: > > > Hi Paul, > > > > I’d like to revert the fix if Zhiyong confirms a clean is all that’s > needed. > > > > I think we need the VPP API compiler to g

Re: [vpp-dev] build errors on CentOS 7.5

2018-12-20 Thread Neale Ranns via Lists.Fd.Io
Hi Paul, I’d like to revert the fix if Zhiyong confirms a clean is all that’s needed. I think we need the VPP API compiler to generate the necessary dependencies (e.g. as the usual .d file) for the imports it sees. This way we can set up the necessary dependencies in the makefile. /neale From:

Re: [vpp-dev] build errors on CentOS 7.5

2018-12-20 Thread Paul Vinciguerra
Up until two days ago, make wipe did not clean out .api.json files. It had to be done by hand. https://gerrit.fd.io/r/#/c/16405/ My fix is more likely a workaround that gets me working, and I am OK reverting it if necessary. I can always cherry-pick it with git review -x. I think Neale is right and
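The cherry-pick Paul mentions can be sketched as below. The change number is from the thread; the patchset number in the plain-git form is a placeholder, and git-review must be installed for the first form:

```shell
# Re-apply a reverted Gerrit change locally via git-review:
git review -x 16405

# Equivalent plain-git form (patchset number "2" is a placeholder;
# the two-digit prefix is the last two digits of the change number):
#   git fetch https://gerrit.fd.io/r/vpp refs/changes/05/16405/2 && \
#     git cherry-pick FETCH_HEAD
```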

Re: [vpp-dev] build errors on CentOS 7.5

2018-12-20 Thread Neale Ranns via Lists.Fd.Io
Hi Zhiyong, Works for me with a good clean [vagrant@localhost vpp]$ lsb_release -a LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch Distributor ID

Re: [vpp-dev] Ipv4 random reassembly failure on x86 and ARM

2018-12-20 Thread Juraj Linkeš
Thanks for bringing that patch to my attention, I didn't use it (I believe it hadn't been merged yet). A quick re-test shows that the failure is gone - thanks! Juraj -Original Message- From: Klement Sekera [mailto:ksek...@cisco.com] Sent: Thursday, December 20, 2018 1:26 PM To: Juraj L

Re: [vpp-dev] dpdk-input : serious load imbalance

2018-12-20 Thread Damjan Marion via Lists.Fd.Io
> On 20 Dec 2018, at 05:19, mik...@yeah.net wrote: > >Thanks for your advice. That helps a lot. The result of DPDK testpmd is > almost the same. It seems something is wrong with DPDK. > Or the card. Have you tried to play with RSS options? -- Damjan
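Playing with RSS options in testpmd, as Damjan suggests, might look like the following. The flags are from the testpmd user guide of roughly that DPDK era, and the PCI address and queue counts are illustrative:

```shell
# DPDK testpmd with explicit RSS settings (illustrative values).
testpmd -l 0-4 -w 0000:3b:00.0 -- \
  --rxq=4 --txq=4 --rss-udp \
  --forward-mode=rxonly --stats-period 1
# --rxq/--txq spread flows across 4 queues; --rss-udp hashes on L4
# ports as well as IP addresses, which helps when many flows share
# only a few IP address pairs.
```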

Re: [vpp-dev] build errors on CentOS 7.5

2018-12-20 Thread Neale Ranns via Lists.Fd.Io
/root/zhiyong/vpp/src/vnet/ethernet/ethernet_types_api.h:25:13: note: expected ‘const u8 * {aka const unsigned char *}’ but argument is of type ‘vl_api_mac_address_t {aka struct _vl_api_mac_address}’ extern void mac_address_decode (const u8 * in, mac_address_t * out); the argument is not of ty

Re: [vpp-dev] Ipv4 random reassembly failure on x86 and ARM

2018-12-20 Thread Klement Sekera via Lists.Fd.Io
Is this with https://gerrit.fd.io/r/#/c/16548/ merged? Quoting Juraj Linkeš (2018-12-20 12:09:12) >Hi Klement and vpp-dev, > >  > >[1]https://jira.fd.io/browse/VPP-1522 fixed the issue with an assert we've >been seeing with random reassembly, however, there's still some other >

[vpp-dev] Ipv4 random reassembly failure on x86 and ARM

2018-12-20 Thread Juraj Linkeš
Hi Klement and vpp-dev, https://jira.fd.io/browse/VPP-1522 fixed the issue with an assert we've been seeing with random reassembly, however, there's still some other failure in that test: https://jira.fd.io/browse/VPP-1475 It seems that not all fragments are sent properly. The run documented in

Re: [vpp-dev] build errors on CentOS 7.5

2018-12-20 Thread Zhiyong Yang
Ole, In addition, the compiler that I’m using is gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) Zhiyong From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Zhiyong Yang Sent: Thursday, December 20, 2018 6:15 PM To: Ole Troan ; Damjan Marion Cc: vpp-dev@lists

Re: [vpp-dev] build errors on CentOS 7.5

2018-12-20 Thread Zhiyong Yang
Ole, Damjan, It works well now. Thank you guys. I believe there should be no problem for other OSes as well. ☺ Regards Zhiyong From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Ole Troan Sent: Thursday, December 20, 2018 6:08 PM To: Yang, Zhiyong Cc: vpp-dev@li

Re: [vpp-dev] build errors on CentOS 7.5

2018-12-20 Thread Ole Troan
Hi, Paul also noticed that, and provided a fix in https://gerrit.fd.io/r/#/c/16562/ Just merged. Can you confirm that it fixes the issue? Also, which compiler and version do you use? Cheers Ole > On 20 Dec 2018, at 11:04, Zhiyong Yang wrote: > > Hi VPP guys, > > Have you noti

Re: [vpp-dev] build errors on CentOS 7.5

2018-12-20 Thread Damjan Marion via Lists.Fd.Io
> On 20 Dec 2018, at 11:04, Zhiyong Yang wrote: > > Hi VPP guys, > > Have you noticed these build errors on CentOS as below? > Could anybody help fix them? > > Regards > Zhiyong > Does https://gerrit.fd.io/r/#/c/16562/ help?

[vpp-dev] build errors on CentOS 7.5

2018-12-20 Thread Zhiyong Yang
Hi VPP guys, Have you noticed these build errors on CentOS as below? Could anybody help fix them? Regards Zhiyong Prefix path : /opt/vpp/external/x86_64;/root/zhiyong/vpp/build-root/install-vpp-native/external Install prefix : /root/zhiyong/vpp/build-root/install-v

[vpp-dev] dpdk: switch to in-memory mode, deprecate use of socket-mem

2018-12-20 Thread Damjan Marion via Lists.Fd.Io
Regarding: https://gerrit.fd.io/r/#/c/16543/ This patch switches dpdk to the new in-memory mode and reduces the dpdk memory footprint, as pages are allocated dynamically on demand. I tested on both Ubuntu and CentOS 7.5 and everything looks good, but I will appreciate feedback from people using non-st
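A practical consequence of the patch, sketched below: with socket-mem deprecated and pages allocated on demand, the main remaining knob is the system-wide hugepage reservation. The page count is illustrative:

```shell
# After the in-memory switch, dpdk allocates hugepages on demand, so an
# old "socket-mem" line in startup.conf's dpdk { } stanza can be dropped.
# Just reserve enough hugepages system-wide (count illustrative, needs root):
sysctl -w vm.nr_hugepages=1024
grep HugePages_Total /proc/meminfo
```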

Re: [vpp-dev] [csit-dev] anomaly detection changes

2018-12-20 Thread Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) via Lists.Fd.Io
> is distinguishing performance differences caused by the environment The main contribution has been fixed; start clicking here [2] if interested. > We will probably create a separate page for the new detection, It took a while, as I believe some tests are still not reliable enough, but done [3]. (You

Re: [vpp-dev] NAT workers above 4 completely tanks performance

2018-12-20 Thread Damjan Marion via Lists.Fd.Io
> On 19 Dec 2018, at 16:56, JB wrote: > > Hello everyone, > > This is on "19.01", I've yet to test with 18.10. I've set up dynamic NAT. > I've tested this with a clean setup without specifying anything special in > startup.conf. The results are unaffected by other NAT settings except NAT > wor