[vpp-dev] C language binding API

2017-08-28 Thread Samuel S
Where can I find a tutorial for the C language binding for the API?
Or how can I communicate with the API through C?

[vpp-dev] How can I compile vpp-api/client

2017-08-28 Thread Samuel S
How can I compile test.c in the vpp-api/client directory?
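
(For reference, a hedged sketch only: assuming the VPP development headers
and the C client library, libvppapiclient, are installed system-wide, a
standalone build of test.c generally looks something like
"gcc test.c -o test -lvppapiclient -lvppinfra -lpthread -lm -lrt".
Library names and paths may differ when building against an in-tree build
rather than installed packages.)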

Re: [vpp-dev] C language binding API

2017-08-28 Thread Prabhjot Singh Sethi
Please refer to https://wiki.fd.io/view/VPP/How_To_Use_The_C_API
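
For readers of the archive, a minimal connect/disconnect sketch against the
C client library described on that page (this assumes the vac_* wrapper API
from vpp-api/client/vppapiclient.h; verify the exact signatures against the
header in your tree):

#include <stdio.h>
#include <vpp-api/client/vppapiclient.h>

/* All reply and event messages from VPP arrive on this callback. */
static void
rx_callback (unsigned char *data, int len)
{
  printf ("received %d byte message\n", len);
}

int
main (void)
{
  /* "api-demo" is an arbitrary client name; NULL chroot prefix and a
     32-entry rx queue depth. */
  if (vac_connect ((char *) "api-demo", NULL, rx_callback, 32) != 0)
    {
      fprintf (stderr, "failed to connect to VPP\n");
      return 1;
    }

  /* ... encode API messages and send them with vac_write (), or read
     replies synchronously with vac_read () ... */

  vac_disconnect ();
  return 0;
}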

Regards,
Prabhjot

- Original Message -
From: "Samuel S" 
To:
Cc:
Sent: Mon, 28 Aug 2017 12:02:09 +0430
Subject: [vpp-dev] C language binding API

Where can I find a tutorial for the C language binding for the API?
Or how can I communicate with the API through C?


Re: [vpp-dev] [EXT] Re: compiling error natively on an arm64 box for fd.io_vpp

2017-08-28 Thread Dave Barach (dbarach)
+1

From: Damjan Marion [mailto:dmarion.li...@gmail.com]
Sent: Saturday, August 26, 2017 3:11 PM
To: Eric Chen
Cc: Dave Barach (dbarach); Sergio Gonzalez Monroy; vpp-dev
Subject: Re: [vpp-dev] [EXT] Re: compiling error natively on an arm64 box for fd.io_vpp

Hi Eric,

The same code compiles perfectly fine on ARM64 with a newer gcc version.

If you are starting a new development cycle, it makes sense to me to pick up
the latest Ubuntu release, especially when new hardware is involved, instead
of trying to chase this kind of bug.

Do you have any strong reason to stay on Ubuntu 16.04? Both 17.04 and the
upcoming 17.10 work fine on arm64, and compiling VPP works without issues.

Thanks,

Damjan


On 26 Aug 2017, at 15:23, Eric Chen <eri...@marvell.com> wrote:

Dave,

Thanks for your answer.
I tried the variation below; it doesn't help.

Btw, this is not the only place reporting “error: unable to generate
reloads for:”.

I will try to check out version 17.01.1, since with the same native
compiler I succeeded in building fd.io_odp4vpp (which is based on
fd.io 17.01.1).

Will keep you posted.

Thanks
Eric

From: Dave Barach (dbarach) [mailto:dbar...@cisco.com]
Sent: 26 August 2017 20:08
To: Eric Chen <eri...@marvell.com>; Sergio Gonzalez Monroy <sergio.gonzalez.mon...@intel.com>; vpp-dev <vpp-dev@lists.fd.io>
Subject: RE: [vpp-dev] [EXT] Re: compiling error natively on an arm64 box for fd.io_vpp

Just so everyone knows, the function in question is almost too simple for its 
own good:

always_inline uword
vlib_process_suspend_time_is_zero (f64 dt)
{
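  /* 10e-6 s = 10 microseconds: suspends shorter than this (presumably the
     scheduler's timer resolution) are treated as zero. */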
  return dt < 10e-6;
}

What happens if you try this variation?

always_inline int
vlib_process_suspend_time_is_zero (f64 dt)
{
  if (dt < 10e-6)
    return 1;
  return 0;
}

This does look like a gcc bug, but it may not be hard to work around...

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Eric Chen
Sent: Friday, August 25, 2017 11:02 PM
To: Eric Chen <eri...@marvell.com>; Sergio Gonzalez Monroy <sergio.gonzalez.mon...@intel.com>; vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] [EXT] Re: compiling error natively on an arm64 box for fd.io_vpp

Hi Sergio,

I upgraded to Ubuntu 16.04.

I succeeded in natively building fd.io_odp4vpp (w/ odp-linux). However,
when building fd.io_vpp (w/ dpdk), it reported the error below (almost the
same setup; the only difference is dpdk vs. odp-linux).

Has anyone met this before? It seems to be a gcc bug.

In file included from 
/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/error_funcs.h:43:0,
 from 
/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/vlib.h:70,
 from 
/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vnet/l2/l2_fib.c:19:
/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/node_funcs.h: In 
function ‘vlib_process_suspend_time_is_zero’:
/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/node_funcs.h:442:1: 
error: unable to generate reloads for:
}
^
(insn 11 37 12 2 (set (reg:CCFPE 66 cc)
(compare:CCFPE (reg:DF 79)
(reg:DF 80))) 
/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/node_funcs.h:441 
395 {*cmpedf}
 (expr_list:REG_DEAD (reg:DF 80)
(expr_list:REG_DEAD (reg:DF 79)
(nil
/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/node_funcs.h:442:1: 
internal compiler error: in curr_insn_transform, at lra-constraints.c:3509
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-5/README.Bugs> for instructions.
Makefile:6111: recipe for target 'vnet/l2/l2_fib.lo' failed
make[4]: *** [vnet/l2/l2_fib.lo] Error 1
make[4]: *** Waiting for unfinished jobs



ericxh@linaro-developer:~/work/git_work/fd.io_vpp$
 gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/aarch64-linux-gnu/5/lto-wrapper
Target: aarch64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 
5.3.1-14ubuntu2' --with-bugurl=file:///usr/share/doc/gcc-5/README.Bugs 
--enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr 
--program-suffix=-5 --enable-shared --enable-linker-build-id 
--libexecdir=/usr/lib --without-included-gettext --enable-threads=posix 
--libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu 
--enable-libstdcxx-debug --enable-libstdcxx-time=yes 
--with-default-libstdcxx-abi=new --enable-gnu-unique-object 
--disable-libquadmath --enable-plugin --with-system-zlib 
--disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo 
--with-java-home=/usr/lib/jvm/java-1.5.0-gcj-5-arm64/jre --enable-java-home 
--with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-5-arm64 
--with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-5-arm64 
--with-arch-directory=aarch64 --with-ecj-jar=/usr/share/java/eclipse-ecj.jar 
--enable-multia

[vpp-dev] Is it possible to apply L2 classifier before ethernet-input?

2017-08-28 Thread Nagaprabhanjan Bellari
Hi,

There are cases where creating sub-interfaces for a lot of VLAN
combinations (outer and inner VLAN) is impractical, and we would prefer to
match the packet and direct it to some node in the graph for control-plane
processing (think PPPoE).

Is it possible to directly punt packets to the L2 classifier? Because
ethernet-input would try to identify a sub-interface and won't find
anything.
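
For reference, VPP does ship an l2-input-classify feature node that can
redirect classified L2 traffic before the usual L2 processing; a hedged
sketch of the CLI shape as of this era, with the mask/match details elided
and the exact option names to be verified against the help output of
"classify table" and "set interface l2 input classify" in your build:

classify table miss-next drop mask l2 tag1 tag2 buckets 16
set interface l2 input classify intfc <interface-name> other-table 0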

Thanks,
-nagp

Re: [vpp-dev] VPP Performance drop from 17.04 to 17.07

2017-08-28 Thread Maciek Konstantynowicz (mkonstan)
+ csit-dev

Billy,

Per last week's CSIT project call, from the CSIT perspective, we classified
your reported issue as a test coverage escape.

Summary
===
The CSIT test coverage got fixed; see more detail below. The CSIT tests
uncovered a regression for L2BD with MAC learning with a higher total
number of MACs in L2FIB (>>10k MACs) for multi-threaded configurations.
Single-threaded configurations seem not to be impacted.

Billy, Karl, can you confirm this aligns with your findings?

More detail
===
MAC scale tests have now been added to the L2BD and L2BD+vhost CSIT suites,
as a simple extension of the existing L2 testing suites. Some known issues
with the TG prevented CSIT from adding those tests in the past, but now
that the TG issues have been addressed, the tests could be added swiftly.
The complete list of added tests is in [1] - thanks to Peter Mikus for
great work there!

Results from running those tests multiple times within the FD.io CSIT lab
infra can be glanced over by checking the dedicated test trigger commits
[2][3][4] and the summary graphs in the linked xls [5]. The results confirm
there is a regression in the VPP l2fib code affecting all scaled-up MAC
tests in multi-thread configuration. Single-thread configurations seem not
to be impacted.

The tests in commit [1] are not merged yet, as they're waiting for the
TG/TRex team to fix a TRex issue with mis-calculated Ethernet FCS with a
large number of L2 MAC flows (>10k MAC flows). The issue is tracked in [6];
TRex v2.29 with the fix has an ETA of w/e 1-Sep, i.e. this week. The
reported CSIT test results use Ethernet frames with UDP headers, which
masks the TRex issue.

We have also git-bisected the problem in vpp between v17.04 (good) and
v17.07 (bad) in a separate IXIA-based lab in SJC, and found the culprit vpp
patch [7]. Awaiting a fix from vpp-dev; jira ticket raised [8].

Many thanks for reporting this regression and working with CSIT to plug
this hole in testing.

-Maciek

[1] CSIT-786 L2FIB scale testing [https://gerrit.fd.io/r/#/c/8145/ ge8145] 
[https://jira.fd.io/browse/CSIT-786 
CSIT-786];
L2FIB scale testing for 10k, 100k, 1M FIB entries
 ./l2:
 10ge2p1x520-eth-l2bdscale10kmaclrn-ndrpdrdisc.robot
 10ge2p1x520-eth-l2bdscale100kmaclrn-ndrpdrdisc.robot
 10ge2p1x520-eth-l2bdscale1mmaclrn-ndrpdrdisc.robot
 10ge2p1x520-eth-l2bdscale10kmaclrn-eth-2vhostvr1024-1vm-cfsrr1-ndrpdrdisc
 10ge2p1x520-eth-l2bdscale100kmaclrn-eth-2vhostvr1024-1vm-cfsrr1-ndrpdrdisc
 10ge2p1x520-eth-l2bdscale1mmaclrn-eth-2vhostvr1024-1vm-cfsrr1-ndrpdrdisc
[2] VPP master branch [https://gerrit.fd.io/r/#/c/8173/ 
ge8173];
[3] VPP stable/1707 [https://gerrit.fd.io/r/#/c/8167/ 
ge8167];
[4] VPP stable/1704 [https://gerrit.fd.io/r/#/c/8172/ 
ge8172];
[5] CSIT-794 VPP v17.07 L2BD yields lower NDR and PDR performance vs. v17.04, 
20170825_l2fib_regression_10k_100k_1M.xlsx, [https://jira.fd.io/browse/CSIT-794 
CSIT-794];
[6] TRex v2.28 Ethernet FCS mis-calculation issue 
[https://jira.fd.io/browse/CSIT-793 
CSIT-793];
[7] commit 25ff2ea3a31e422094f6d91eab46222a29a77c4b;
[8] VPP v17.07 L2BD NDR and PDR multi-thread performance broken 
[https://jira.fd.io/browse/VPP-963 
VPP-963];

On 14 Aug 2017, at 23:40, Billy McFall <bmcf...@redhat.com> wrote:

In the last VPP call, I reported that some internal Red Hat performance
testing was showing a significant drop in performance between releases
17.04 and 17.07. This was with l2-bridge testing - PVP - 0.002% drop rate:
   VPP-17.04: 256 Flow 7.8 MP/s 10k Flow 7.3 MP/s 1m Flow 5.2 MP/s
   VPP-17.07: 256 Flow 7.7 MP/s 10k Flow 2.7 MP/s 1m Flow 1.8 MP/s

The performance team re-ran some of the tests for me with some additional
data collected. It looks like the size of the L2 FIB table was reduced in
17.07. Below are the numbers of entries in the MAC table after the tests
are run:
   17.04:
 show l2fib
 408 l2fib entries
   17.07:
 show l2fib
 1067053 l2fib entries with 1048576 learned (or non-static) entries

This caused more packets to be flooded (see output of 'show node counters'
below). I looked but couldn't find anything. Is the size of the L2 FIB
table configurable?

Thanks,
Billy McFall


17.04:

show node counters
     Count         Node        Reason
:
 313035313       l2-input     L2 input packets
    555726       l2-flood     L2 flood packets
:
 310115490       l2-input     L2 input packets
    824859       l2-flood     L2 flood packets
:
 313508376       l2-input     L2 input packets
   1041961       l2-flood     L2 flood packets

[vpp-dev] Fwd: VPP Performance drop from 17.04 to 17.07

2017-08-28 Thread Maciek Konstantynowicz (mkonstan)
Dear vpp-dev,

Pls let us know if the CSIT team can assist any further with resolving this
v1707 regression:

VPP v17.07 L2BD NDR and PDR multi-thread performance broken
[https://jira.fd.io/browse/VPP-963 VPP-963]

The VPP-963 ticket includes git bisect logs and the associated outputs from
the VPP CLI per bisect step. IXIA screens are available on request, with
the IXIA pkt loss counters matching 100% the packets lost (rx-miss)
reported by the VPP CLI.

-Maciek

Begin forwarded message:

From: Maciek Konstantynowicz <mkons...@cisco.com>
Subject: Re: [vpp-dev] VPP Performance drop from 17.04 to 17.07
Date: 28 August 2017 at 13:53:52 BST
To: Billy McFall <bmcf...@redhat.com>
Cc: vpp-dev <vpp-dev@lists.fd.io>, csit-...@lists.fd.io


[vpp-dev] SIGSEGV when bootstrapping

2017-08-28 Thread Marco Varlese
Hi,

I'm running the tip of the master branch and I get a segmentation fault
when launching "vpp -c /etc/vpp/startup.conf".

My startup.conf is very simple; I don't even map dpdk interfaces, etc.,
since I am using it in a virt environment.

I wonder if by any chance a new setting/parameter was introduced which I
am missing, hence this issue?

The stack trace of the execution is below.

load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane
Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per
Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator
addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid
Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory
Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address
Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)

Program received signal SIGSEGV, Segmentation fault.
mfib_entry_alloc (mfib_entry_index=,
prefix=0x7f1219818ce0, fib_index=0) at /usr/src/debug/vpp-
17.10/src/vnet/mfib/mfib_entry.c:407
407 mfib_entry->mfe_prefix = *prefix;
(gdb) 



Thanks,
Marco



Re: [vpp-dev] SIGSEGV when bootstrapping

2017-08-28 Thread Marco Varlese
Apologies, I forgot to also provide some extra information:

> Using DPDK 17.08.

> A backtrace below:

(gdb) bt
#0  0x7765bced in mfib_entry_create () from
/usr/lib64/libvnet.so.0
#1  0x7765cdc7 in mfib_table_entry_update () from
/usr/lib64/libvnet.so.0
#2  0x77656b85 in ip4_mfib_table_find_or_create_and_lock ()
from /usr/lib64/libvnet.so.0
#3  0x7765d257 in mfib_table_find_or_create_and_lock () from
/usr/lib64/libvnet.so.0
#4  0x77338b14 in ip4_lookup_init () from
/usr/lib64/libvnet.so.0
#5  0x77273bff in vnet_main_init () from
/usr/lib64/libvnet.so.0
#6  0x773a4507 in ip_main_init () from /usr/lib64/libvnet.so.0
#7  0x7fffb35d8572 in ?? () from
/usr/lib64/vpp_plugins/ioam_plugin.so
#8  0x7796128d in vlib_call_init_exit_functions () from
/usr/lib64/libvlib.so.0
#9  0x779657a5 in vlib_main () from /usr/lib64/libvlib.so.0
#10 0x7799d3c6 in ?? () from /usr/lib64/libvlib.so.0
#11 0x76f7a250 in clib_calljmp () from
/usr/lib64/libvppinfra.so.0
#12 0x7fffd0f0 in ?? ()
#13 0x7799df54 in vlib_unix_main () from
/usr/lib64/libvlib.so.0
#14 0x in ?? ()


Cheers,
Marco

On Mon, 2017-08-28 at 15:10 +0200, Marco Varlese wrote:
> Hi,
> 
> I'm running the tip of master branch and I get a segmentation fault
> when launcing "vpp -c /etc/vpp/startup.conf"
> 
> My startup.conf is very simple, I don't even map dpdk interfaces,
> etc.
> since I am using in a virt environment.
> 
> I wonder if by any chance a new setting/parameter was introduced
> which
> I am missing hence having such an issue?
> 
> The stacktrace of the execution is below.
> 
> load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane
> Development Kit (DPDK))
> load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per
> Packet)
> load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
> load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator
> addressing for IPv6)
> load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
> load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
> load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
> load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid
> Deployment on IPv4 Infrastructure (RFC5969))
> load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory
> Interface (experimetal))
> load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address
> Translation)
> load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
> 
> Program received signal SIGSEGV, Segmentation fault.
> mfib_entry_alloc (mfib_entry_index=,
> prefix=0x7f1219818ce0, fib_index=0) at /usr/src/debug/vpp-
> 17.10/src/vnet/mfib/mfib_entry.c:407
> 407   mfib_entry->mfe_prefix = *prefix;
> (gdb) 
> 
> 
> 
> Thanks,
> Marco
> 


Re: [vpp-dev] [csit-dev] make test python segfault in ubuntu 16.04

2017-08-28 Thread Florin Coras (fcoras)
Hi Dave,

Thanks a lot for the thorough analysis! I'd also really like to see
recommendation 1 fixed as soon as possible.

Cheers,
Florin

From: Dave Wallace 
Date: Friday, August 25, 2017 at 10:56 AM
To: "vpp-dev@lists.fd.io" 
Cc: "Florin Coras (fcoras)" , "csit-...@lists.fd.io" 

Subject: Re: [csit-dev] make test python segfault in ubuntu 16.04

vpp-dev, Florin,

Below is an analysis of all of the failures that this patch encountered
before finally passing. None of the failures were related in any way to
the code changes in the patch.

In summary, there appear to be a number of different factors involved in
these failures:
- Two failures appear to be caused by the run-time environment.
- An intermittent bug appears to exist in 'L2BD Multi-instance test 5 -
  delete 5 BDs'.
- The segfault shows lots of threads being run. Are tests being executed
  in parallel? If so, it would be interesting to serialize the tests to
  see if that fixes any of these issues.

I'm also seeing a variation in the order that the "make tests" are run (or
at least in the order of the status reports). My understanding of the
'make test' python infrastructure is insufficient to make an intelligent
guess as to whether this has any bearing on any of these failures.

I get more predictable result output when running 'make test' locally on
my own server, but the order of test output is different from the CI test
runs. Locally, the order of tests appears to be the same between different
runs of 'make test'. I have also not seen any of these errors on my
server, which is running Ubuntu 17.04, although I have not done an
endurance test either.

My recommendation based on this analysis is as follows:
  1. The L2BD unit test issue should be investigated by the appropriate
     'make test' experts.
  2. The vpp-verify-master-centos7, vpp-verify-master-ubuntu1604, and
     vpp-test-debug-master-ubuntu1604 jobs should be run operationally in
     the Container PoC environment, with the rest of the jjb jobs run in
     the cloud infra.

Thanks,
-daw-


 %< 
[ From https://gerrit.fd.io/r/#/c/8133 ]

=> Container PoC Aug 24 8:36 PM  Patch Set 9:  Build Successful
http://jenkins.ejkern.net:8080/job/vpp-docs-verify-master/1515/ : SUCCESS
http://jenkins.ejkern.net:8080/job/vpp-make-test-docs-verify-master/1512/ : 
SUCCESS
http://jenkins.ejkern.net:8080/job/vpp-verify-master-centos7/1983/ : SUCCESS
http://jenkins.ejkern.net:8080/job/vpp-test-debug-master-ubuntu1604/1301/ : 
SUCCESS
http://jenkins.ejkern.net:8080/job/vpp-verify-master-ubuntu1604/2022/ : SUCCESS
http://jenkins.ejkern.net:8080/job/vpp-fake-csit-verify-master/1695/ : SUCCESS

=> fd.io JJB  Aug 24 9:19 PM  Patch Set 9:  Verified-1  Build Failed
https://jenkins.fd.io/job/vpp-verify-master-ubuntu1604/6775/ : FAILURE
Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1604/6775
Failure Signature:
  01:08:59  verify templates on IP6 datapath  Fatal Python error: 
Segmentation fault

Comment:
  Python bug or resource starvation?  Lots of threads running...
  Possibly due to bad environment/sick minion.
https://jenkins.fd.io/job/vpp-make-test-docs-verify-master/3098/ : SUCCESS
https://jenkins.fd.io/job/vpp-verify-master-centos7/6770/ : SUCCESS
https://jenkins.fd.io/job/vpp-csit-verify-virl-master/6781/ : SUCCESS
https://jenkins.fd.io/job/vpp-docs-verify-master/5370/ : SUCCESS

=> Container PoC  Aug 24 10:54 PM  Patch Set 9:  Build Successful
http://jenkins.ejkern.net:8080/job/vpp-docs-verify-master/1519/ : SUCCESS
http://jenkins.ejkern.net:8080/job/vpp-make-test-docs-verify-master/1516/ : 
SUCCESS
http://jenkins.ejkern.net:8080/job/vpp-verify-master-centos7/1987/ : SUCCESS
http://jenkins.ejkern.net:8080/job/vpp-test-debug-master-ubuntu1604/1305/ : 
SUCCESS
http://jenkins.ejkern.net:8080/job/vpp-verify-master-ubuntu1604/2027/ : SUCCESS
http://jenkins.ejkern.net:8080/job/vpp-fake-csit-verify-master/1699/ : SUCCESS

=> fd.io JJB  Aug 24 11:13 PM  Patch Set 9:  Verified-1  Build Failed
https://jenkins.fd.io/job/vpp-verify-master-centos7/6774/ : FAILURE
Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-centos7/6774
Failure Signature:
  00:23:17.198 CCLD vcl_test_client
  00:24:32.936 FATAL: command execution failed
  00:24:32.937 java.io.IOException

Comment:
  Bad environment/sick minion?
  There's no reason for compilation to kill the build.
https://jenkins.fd.io/job/vpp-verify-master-ubuntu1604/6779/ : FAILURE
Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1604/6779
Failure Signature:
  03:02:47  
==
  03:02:47  collect information on Ethernet, IP4 and IP6 datapath (no timers)
  03:02:47  
==
  03:02:47  no timers, one CFLOW packet, 9 Flows inside 
 OK
  03:02:47  no timers, two CFLOW packets (mtu=256), 3 Flows in ea

[vpp-dev] Duplicate Prefetching of 128 bytes memory.

2017-08-28 Thread mrityunjay.kum...@wipro.com
Dear Team,
I would like to bring to your kind notice the below code from the vpp-17.07
dpdk plugin.

static_always_inline void dpdk_prefetch_buffer_by_index (vlib_main_t * vm, u32 
bi)
{
  vlib_buffer_t *b;
  struct rte_mbuf *mb;
  b = vlib_get_buffer (vm, bi);
  mb = rte_mbuf_from_vlib_buffer (b);
  CLIB_PREFETCH (mb, CLIB_CACHE_LINE_BYTES, LOAD);
  CLIB_PREFETCH (b, CLIB_CACHE_LINE_BYTES, LOAD);
}


#define CLIB_PREFETCH(addr,size,type)   \
do {\
  void * _addr = (addr);  \
\
  ASSERT ((size) <= 4*CLIB_CACHE_LINE_BYTES); \
  _CLIB_PREFETCH (0, size, type);   \
  _CLIB_PREFETCH (1, size, type);   \
  _CLIB_PREFETCH (2, size, type);   \
  _CLIB_PREFETCH (3, size, type);   \
} while (0)


Here, sizeof(rte_mbuf) = 128 and sizeof(vlib_buffer_t) = 128 +
HEAD_ROOM(128) = 256.

In the code above, the vlib_buffer is 128 bytes ahead of the start of the
rte_mbuf structure. As I understood it, one CLIB_PREFETCH will load 256
bytes from memory, hence the total prefetch is 512 bytes. As per the above
code, the first CLIB_PREFETCH will load 256 bytes, which includes the 128
bytes of rte_mbuf plus the 128 bytes of vlib_buffer as well. The 2nd
CLIB_PREFETCH will then also load the vlib_buffer, which has already been
loaded.

I must say this is duplication in prefetching the memory. Please correct
me if I am wrong.
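
(For context on the layout described above: in the dpdk plugin the rte_mbuf
sits immediately in front of the vlib_buffer_t in the same buffer. The
17.07-era plugin headers define the conversion roughly as follows, quoted
from memory, so check your tree:)

/* The mbuf header occupies the 128 bytes directly before the
   vlib_buffer_t, so b and mb point at two adjacent cache lines. */
#define rte_mbuf_from_vlib_buffer(x) (((struct rte_mbuf *) (x)) - 1)
#define vlib_buffer_from_rte_mbuf(x) ((vlib_buffer_t *) ((x) + 1))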

Regards
MJ
Senior Project Engineer.
Mob: 9735128504

Re: [vpp-dev] SIGSEGV when bootstrapping

2017-08-28 Thread Marco Varlese
And an even more complete BT with sources below:

[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
vlib_plugin_early_init:356: plugin path /usr/lib64/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control
Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane
Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per
Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator
addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid
Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory
Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address
Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)

Program received signal SIGSEGV, Segmentation fault.
mfib_entry_alloc (mfib_entry_index=,
prefix=0x7fffb60f0ce0, fib_index=0) at /usr/src/debug/vpp-
17.10/src/vnet/mfib/mfib_entry.c:407
407 mfib_entry->mfe_prefix = *prefix;
Missing separate debuginfos, use: zypper install libdpdk-17_08-0-
debuginfo-17.08-82.1.x86_64 libnuma1-debuginfo-2.0.9-10.2.x86_64
libopenssl1_0_0-debuginfo-1.0.2j-6.3.1.x86_64 libz1-debuginfo-1.2.8-
10.1.x86_64 vpp-plugins-debuginfo-17.10-14.2.x86_64

(gdb) bt
#0  mfib_entry_alloc (mfib_entry_index=,
prefix=0x7fffb60f0ce0, fib_index=0) at /usr/src/debug/vpp-
17.10/src/vnet/mfib/mfib_entry.c:407
#1  mfib_entry_create (fib_index=fib_index@entry=0, source=source@entry
=MFIB_SOURCE_DEFAULT_ROUTE, prefix=prefix@entry=0x7fffb60f0ce0, rpf_id=
rpf_id@entry=0, entry_flags=entry_flags@entry=MFIB_ENTRY_FLAG_DROP)
at /usr/src/debug/vpp-17.10/src/vnet/mfib/mfib_entry.c:719
#2  0x7765cdc7 in mfib_table_entry_update (fib_index=0, prefix=
prefix@entry=0x7fffb60f0ce0, source=source@entry=MFIB_SOURCE_DEFAULT_RO
UTE, rpf_id=rpf_id@entry=0, 
entry_flags=entry_flags@entry=MFIB_ENTRY_FLAG_DROP) at
/usr/src/debug/vpp-17.10/src/vnet/mfib/mfib_table.c:184
#3  0x77656b85 in ip4_create_mfib_with_table_id (table_id=0) at
/usr/src/debug/vpp-17.10/src/vnet/mfib/ip4_mfib.c:72
#4  ip4_mfib_table_find_or_create_and_lock (table_id=table_id@entry=0)
at /usr/src/debug/vpp-17.10/src/vnet/mfib/ip4_mfib.c:122
#5  0x7765d257 in mfib_table_find_or_create_and_lock (proto=pro
to@entry=FIB_PROTOCOL_IP4, table_id=table_id@entry=0) at
/usr/src/debug/vpp-17.10/src/vnet/mfib/mfib_table.c:435
#6  0x77338b14 in ip4_lookup_init (vm=vm@entry=0x77bb62e0
) at /usr/src/debug/vpp-
17.10/src/vnet/ip/ip4_forward.c:1202
#7  0x77273bff in vnet_main_init (vm=vm@entry=0x77bb62e0
) at /usr/src/debug/vpp-17.10/src/vnet/misc.c:92
#8  0x773a4507 in ip_main_init (vm=0x77bb62e0
) at /usr/src/debug/vpp-
17.10/src/vnet/ip/ip_init.c:104
#9  0x7fffb35d8572 in ?? () from
/usr/lib64/vpp_plugins/ioam_plugin.so
#10 0x7796128d in vlib_call_init_exit_functions
(vm=0x77bb62e0 , head=, call_once=
call_once@entry=1) at /usr/src/debug/vpp-17.10/src/vlib/init.c:57
#11 0x779612d3 in vlib_call_all_init_functions (vm=) at /usr/src/debug/vpp-17.10/src/vlib/init.c:75
#12 0x779657a5 in vlib_main (vm=, vm@entry=0x7ff
ff7bb62e0 , input=input@entry=0x7fffb60f0fa0) at
/usr/src/debug/vpp-17.10/src/vlib/main.c:1754
#13 0x7799d3c6 in thread0 (arg=140737349640928) at
/usr/src/debug/vpp-17.10/src/vlib/unix/main.c:525
#14 0x76f7a250 in clib_calljmp () at /usr/src/debug/vpp-
17.10/src/vppinfra/longjmp.S:110
#15 0x7fffd100 in ?? ()
#16 0x7799df54 in vlib_unix_main (argc=,
argv=) at /usr/src/debug/vpp-
17.10/src/vlib/unix/main.c:588
#17 0x in ?? ()



Regards,
Marco

On Mon, 2017-08-28 at 15:41 +0200, Marco Varlese wrote:
> Apologies, I forgot to also provide some extra information:
> 
> > Using DPDK 17.08.
> > A backtrace below:
> 
> (gdb) bt
> #0  0x7765bced in mfib_entry_create () from
> /usr/lib64/libvnet.so.0
> #1  0x7765cdc7 in mfib_table_entry_update () from
> /usr/lib64/libvnet.so.0
> #2  0x77656b85 in ip4_mfib_table_find_or_create_and_lock ()
> from /usr/lib64/libvnet.so.0
> #3  0x7765d257 in mfib_table_find_or_create_and_lock () from
> /usr/lib64/libvnet.so.0
> #4  0x77338b14 in ip4_lookup_init () from
> /usr/lib64/libvnet.so.0
> #5  0x77273bff in vnet_main_init () from
> /usr/lib64/libvnet.so.0
> #6  0x773a4507 in ip_main_init () from
> /usr/lib64/libvnet.so.0
> #7  0x7fffb35d8572 in ?? () from
> /usr/lib64/vpp_plugins/ioam_plugin.so
> #8  0x7796128d in vlib_call_init_exit_functions () from
> /usr/lib64/libvlib.so.0
> #9  0x779

Re: [vpp-dev] Duplicate Prefetching of 128 bytes memory.

2017-08-28 Thread Damjan Marion (damarion)


> On 27 Aug 2017, at 12:04, mrityunjay.kum...@wipro.com wrote:
> 
> Dear Team
> I would like bring it to your kind notice of below code of vpp-1707-dpdk 
> plunging.
> 
> static_always_inline void dpdk_prefetch_buffer_by_index (vlib_main_t * vm, 
> u32 bi)
> {
>   vlib_buffer_t *b;
>   struct rte_mbuf *mb;
>   b = vlib_get_buffer (vm, bi);
>   mb = rte_mbuf_from_vlib_buffer (b);
>   CLIB_PREFETCH (mb, CLIB_CACHE_LINE_BYTES, LOAD);
>   CLIB_PREFETCH (b, CLIB_CACHE_LINE_BYTES, LOAD);
> }
> 
> #define CLIB_PREFETCH(addr,size,type)   \
> do {\
>   void * _addr = (addr);  \
> \
>   ASSERT ((size) <= 4*CLIB_CACHE_LINE_BYTES); \
>   _CLIB_PREFETCH (0, size, type);   \
>   _CLIB_PREFETCH (1, size, type);   \
>   _CLIB_PREFETCH (2, size, type);   \
>   _CLIB_PREFETCH (3, size, type);   \
> } while (0)
> 
> 
> 
> 
> Here , Sizeof(rte_mbuf) = 128 and sizeof(vlib_buffer_t) = 128 + 
> HEAD_ROOM(128)= 256. 
> 
> In above code part, vlib_buffer is ahead of 128 bytes from start of rte_mbuf 
> structure.  As i understood one CLIB_PREFETCH will load 256 bytes from 
> memory. ,hence total pre-fetch  is 512 bytes. As per the above code first 
> CLIB_PREFETCH will load 256, which includes 128 of rte_mbuf + 128 of 
> vlib_buffer as well . 2nd CLIB_PREFETCH will also load vlib_buffer whcih as 
> ready has been loaded. 
> 
> I must say duplication in prefetching the memory. Please correct me if I am 
> wrong. 

Hi MJ,

Yes, you are wrong. Each invocation of CLIB_PREFETCH in the inline function you 
listed above will prefetch one cacheline, so 64 bytes, not 256.
Please look at _CLIB_PREFETCH macro for details…
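
For reference, a sketch of the underlying macro as it appears in
src/vppinfra/cache.h around this release (quoted from memory; verify
against your tree):

/* Step n emits a prefetch only when the requested size spills past cache
   line n, so CLIB_PREFETCH (addr, CLIB_CACHE_LINE_BYTES, LOAD) issues
   exactly one prefetch instruction for one 64-byte line. */
#define _CLIB_PREFETCH(n,size,type)                             \
  if ((size) > (n) * CLIB_CACHE_LINE_BYTES)                     \
    __builtin_prefetch (_addr + (n) * CLIB_CACHE_LINE_BYTES,    \
                        CLIB_PREFETCH_##type,                   \
                        /* locality */ 3);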





Re: [vpp-dev] https://gerrit.fd.io/r/8156 missing from master

2017-08-28 Thread Dave Wallace

Andrew,

Having got some feedback from others, I don't think there is any one 
right way.


For bugs found in a stable branch, it makes sense to push the original 
patch there, then cherry pick to master and then identify if it makes 
sense to cherry pick it to any other branches.  For bugs found in master 
(which is what I originally had in mind), going in the other direction 
makes more sense.


So I guess the best thing would be to ensure that the branch which the 
bug was found is included in the body of the comments section of the 
patch and what other branches require the fix -- or maybe this 
information belongs in Jira.  The most important thing is to ensure 
master has all of the applicable patches in it.


Thanks,
-daw-

On 08/25/2017 04:57 PM, Andrew Yourtchenko wrote:

Dave,

Oh, I think during the "release" time we did the opposite. I was 
basing the logic on the sensible origin of the work on the code: the 
issue found on stable/1707, so I first verify the fix with the finder 
in their lab, thus ensuring they test just that change, and then 
commit and then cherry pick into the master, thus ensuring the commit 
message on the master gets the reference to the other commit.


Of course, any new things and the minor bug fixes, especially those I 
would find myself or someone else on master, would go to master first.


At least that was my logic - but I am happy to replace it with better 
one! :-)


--a

On 25 Aug 2017, at 14:32, Dave Wallace wrote:



Andrew,

IMHO, best practice is to always commit to master first, then cherry 
pick to stable branches as required.  That being said, I'm not sure 
if this has been explicitly agreed upon by the VPP community.


Thanks,
-daw-

On 8/25/17 3:38 AM, Andrew Yourtchenko wrote:

Dave,

Yeah, those things are found throughout testing of stable/1707 with 
various control planes (openstack etc), so I first deal with them in 
stable/1707 and after the commit id is there, cherry pick into 
master (so i made https://gerrit.fd.io/r/#/c/8207/ for master now). 
Those are few things that need to go in before we can stamp a 17.07.1...


--a

On 24 Aug 2017, at 23:54, Dave Wallace wrote:



Andrew,

I just merged https://gerrit.fd.io/r/8156 into stable/1707 and 
afterwards realized that it has not been committed to master.  
Shouldn't this be included in master as well?


Same thing with https://gerrit.fd.io/r/#/c/8147/ -- is there a 
reason this is not being committed to master, then cherry-picked to 
stable/1707?


Thanks,
-daw-





Re: [vpp-dev] https://gerrit.fd.io/r/8156 missing from master

2017-08-28 Thread Andrew Yourtchenko
Dave,

> On 28 Aug 2017, at 17:32, Dave Wallace  wrote:
> 
> Andrew,
> 
> Having got some feedback from others, I don't think there is any one right 
> way.  
> 
> For bugs found in a stable branch, it makes sense to push the original patch 
> there, then cherry pick to master and then identify if it makes sense to 
> cherry pick it to any other branches.  For bugs found in master (which is 
> what I originally had in mind), going in the other direction makes more sense.

Aha, cool! We are in sync.

> 
> So I guess the best thing would be to ensure that the branch which the bug 
> was found is included in the body of the comments section of the patch and 
> what other branches require the fix -- or maybe this information belongs in 
> Jira.  The most important thing is to ensure master has all of the applicable 
> patches in it.

Absolutely, I always ensure that. The reason I didn't cherry-pick from the
hip before the commit is that a cherry-pick done after the commit includes
the commit ID, so I think it is useful to have that in the commit
message :-)

--a



> 
> Thanks,
> -daw-
> 
>> On 08/25/2017 04:57 PM, Andrew Yourtchenko wrote:
>> Dave,
>> 
>> Oh, I think during the "release" time we did the opposite. I was basing the 
>> logic on the sensible origin of the work on the code: the issue found on 
>> stable/1707, so I first verify the fix with the finder in their lab, thus 
>> ensuring they test just that change, and then commit and then cherry pick 
>> into the master, thus ensuring the commit message on the master gets the 
>> reference to the other commit.
>> 
>> Of course, any new things and the minor bug fixes, especially those I would 
>> find myself or someone else on master, would go to master first. 
>> 
>> At least that was my logic - but I am happy to replace it with better one! 
>> :-)
>> 
>> --a
>> 
>> On 25 Aug 2017, at 14:32, Dave Wallace  wrote:
>> 
>>> Andrew,
>>> 
>>> IMHO, best practice is to always commit to master first, then cherry pick 
>>> to stable branches as required.  That being said, I'm not sure if this has 
>>> been explicitly agreed upon by the VPP community.
>>> 
>>> Thanks,
>>> -daw-
>>> 
 On 8/25/17 3:38 AM, Andrew Yourtchenko wrote:
 Dave,
 
 Yeah, those things are found throughout testing of stable/1707 with 
 various control planes (openstack etc), so I first deal with them in 
 stable/1707 and after the commit id is there, cherry pick into master (so 
 i made https://gerrit.fd.io/r/#/c/8207/ for master now). Those are few 
 things that need to go in before we can stamp a 17.07.1...
 
 --a
 
 On 24 Aug 2017, at 23:54, Dave Wallace  wrote:
 
> Andrew,
> 
> I just merged https://gerrit.fd.io/r/8156 into stable/1707 and afterwards 
> realized that it has not been committed to master.  Shouldn't this be 
> included in master as well?
> 
> Same thing with https://gerrit.fd.io/r/#/c/8147/ -- is there a reason 
> this is not being committed to master, then cherry-picked to stable/1707?
> 
> Thanks,
> -daw-
>>> 
> 

[vpp-dev] Query for IPSec support on VPP

2017-08-28 Thread Mukesh Yadav (mukyadav)
Hi,


I have recently started working on VPP IPSec. My knowledge in this area is
limited to IPSec itself.

I have a few queries w.r.t. the broader support of IPSec in VPP. I would
appreciate any pointers/help.

As per the wiki below, I have installed IPSec and it worked well for
aes-cbc-128/sha1:

https://wiki.fd.io/view/VPP/IPSec_and_IKEv2

I looked at the VPP source and found that the VPP core code only supports
AES-CBC/SHA1.

A quick Google search pointed me to a few links where VPP uses DPDK for
IPSec. I wanted to know which enc/hmac algorithms are supported by
VPP->DPDK.

For that, I followed the document below:

https://docs.fd.io/vpp/17.04/dpdk_crypto_ipsec_doc.html

and compiled vpp using “make vpp_uses_dpdk_cryptodev_sw=yes build-release”.

I see the dpdk crypto files in the directory src/plugins/dpdk/ipsec. It
looks like only aes-gcm-128 is supported there. I am not sure whether this
is where I should be looking for DPDK-supported IPSec.

With the above steps, when I try to configure aes-gcm-128, I get an error:

vpp# ipsec sa add 10 spi 1001 esp crypto-alg aes-gcm-128 crypto-key 
4a506a794f574265564551694d653768

ipsec sa: unsupported aes-gcm-128 crypto-alg
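
(For contrast, a hedged example of the core-VPP combination that does work
per the wiki above; the crypto key is the one from this thread, while the
integ key is just a placeholder for a 40-hex-digit sha1-96 key:)

vpp# ipsec sa add 11 spi 1002 esp crypto-alg aes-cbc-128 crypto-key 4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key <40-hex-digit-key>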





IPSec support via the VPP core and DPDK seems to be as follows:

1. AES-CBC is supported in the VPP core.

2. AES-GCM is supported in VPP via DPDK.

Is there any plan/way to include other algorithms like
DES-CBC/MD5/AES-XCBC?





System Details:
vpp# show vers
vpp v17.10-rc0~103-g42e6b09 built by vagrant on localhost at Sun Aug 27 
22:06:20 PDT 2017
vpp# show dpdk vers
DPDK Version: DPDK 17.05.0
DPDK EAL init args:   -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix 
vpp -b :00:03.0 -b :00:09.0 --master-lcore 0 --socket-mem 256



Thanks
Mukesh


Re: [vpp-dev] SIGSEGV when bootstrapping

2017-08-28 Thread Marco Varlese
After long digging, I managed to find the issue...

The problem happens when building VPP with the gcc-7 compiler, but it
doesn't come up when building it with gcc-6.

I will keep digging into this, but I hope it might be of help to you folks
too...


Cheers,
Marco

On Mon, 2017-08-28 at 16:05 +0200, Marco Varlese wrote:
> And a even more complete BT with sources below:
> 
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> vlib_plugin_early_init:356: plugin path /usr/lib64/vpp_plugins
> load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control
> Lists)
> load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane
> Development Kit (DPDK))
> load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per
> Packet)
> load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
> load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator
> addressing for IPv6)
> load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
> load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
> load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
> load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid
> Deployment on IPv4 Infrastructure (RFC5969))
> load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory
> Interface (experimetal))
> load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address
> Translation)
> load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
> 
> Program received signal SIGSEGV, Segmentation fault.
> mfib_entry_alloc (mfib_entry_index=,
> prefix=0x7fffb60f0ce0, fib_index=0) at /usr/src/debug/vpp-
> 17.10/src/vnet/mfib/mfib_entry.c:407
> 407   mfib_entry->mfe_prefix = *prefix;
> Missing separate debuginfos, use: zypper install libdpdk-17_08-0-
> debuginfo-17.08-82.1.x86_64 libnuma1-debuginfo-2.0.9-10.2.x86_64
> libopenssl1_0_0-debuginfo-1.0.2j-6.3.1.x86_64 libz1-debuginfo-1.2.8-
> 10.1.x86_64 vpp-plugins-debuginfo-17.10-14.2.x86_64
> 
> (gdb) bt
> #0  mfib_entry_alloc (mfib_entry_index=,
> prefix=0x7fffb60f0ce0, fib_index=0) at /usr/src/debug/vpp-
> 17.10/src/vnet/mfib/mfib_entry.c:407
> #1  mfib_entry_create (fib_index=fib_index@entry=0, source=source@ent
> ry
> =MFIB_SOURCE_DEFAULT_ROUTE, prefix=prefix@entry=0x7fffb60f0ce0,
> rpf_id=
> rpf_id@entry=0, entry_flags=entry_flags@entry=MFIB_ENTRY_FLAG_DROP)
> at /usr/src/debug/vpp-17.10/src/vnet/mfib/mfib_entry.c:719
> #2  0x7765cdc7 in mfib_table_entry_update (fib_index=0,
> prefix=
> prefix@entry=0x7fffb60f0ce0, source=source@entry=MFIB_SOURCE_DEFAULT_
> RO
> UTE, rpf_id=rpf_id@entry=0, 
> entry_flags=entry_flags@entry=MFIB_ENTRY_FLAG_DROP) at
> /usr/src/debug/vpp-17.10/src/vnet/mfib/mfib_table.c:184
> #3  0x77656b85 in ip4_create_mfib_with_table_id (table_id=0)
> at
> /usr/src/debug/vpp-17.10/src/vnet/mfib/ip4_mfib.c:72
> #4  ip4_mfib_table_find_or_create_and_lock (table_id=table_id@entry=0
> )
> at /usr/src/debug/vpp-17.10/src/vnet/mfib/ip4_mfib.c:122
> #5  0x7765d257 in mfib_table_find_or_create_and_lock
> (proto=pro
> to@entry=FIB_PROTOCOL_IP4, table_id=table_id@entry=0) at
> /usr/src/debug/vpp-17.10/src/vnet/mfib/mfib_table.c:435
> #6  0x77338b14 in ip4_lookup_init (vm=vm@entry=0x77bb62e0
> ) at /usr/src/debug/vpp-
> 17.10/src/vnet/ip/ip4_forward.c:1202
> #7  0x77273bff in vnet_main_init (vm=vm@entry=0x77bb62e0
> ) at /usr/src/debug/vpp-17.10/src/vnet/misc.c:92
> #8  0x773a4507 in ip_main_init (vm=0x77bb62e0
> ) at /usr/src/debug/vpp-
> 17.10/src/vnet/ip/ip_init.c:104
> #9  0x7fffb35d8572 in ?? () from
> /usr/lib64/vpp_plugins/ioam_plugin.so
> #10 0x7796128d in vlib_call_init_exit_functions
> (vm=0x77bb62e0 , head=,
> call_once=
> call_once@entry=1) at /usr/src/debug/vpp-17.10/src/vlib/init.c:57
> #11 0x779612d3 in vlib_call_all_init_functions (vm= out>) at /usr/src/debug/vpp-17.10/src/vlib/init.c:75
> #12 0x779657a5 in vlib_main (vm=, vm@entry=0x7
> ff
> ff7bb62e0 , input=input@entry=0x7fffb60f0fa0) at
> /usr/src/debug/vpp-17.10/src/vlib/main.c:1754
> #13 0x7799d3c6 in thread0 (arg=140737349640928) at
> /usr/src/debug/vpp-17.10/src/vlib/unix/main.c:525
> #14 0x76f7a250 in clib_calljmp () at /usr/src/debug/vpp-
> 17.10/src/vppinfra/longjmp.S:110
> #15 0x7fffd100 in ?? ()
> #16 0x7799df54 in vlib_unix_main (argc=,
> argv=) at /usr/src/debug/vpp-
> 17.10/src/vlib/unix/main.c:588
> #17 0x in ?? ()
> 
> 
> 
> Regards,
> Marco
> 
> On Mon, 2017-08-28 at 15:41 +0200, Marco Varlese wrote:
> > Apologies, I forgot to also provide some extra information:
> > 
> > > Using DPDK 17.08.
> > > A backtrace below:
> > 
> > (gdb) bt
> > #0  0x7765bced in mfib_entry_create () from
> > /usr/lib64/libvnet.so.0
> > #1  0x7765cdc7 in mfib_table_entry_update () from
> > /usr/lib64/libvnet.so.0
> > #2  0x77656b85 in ip4_mfib_table_find_or_create_and_lock ()
> > from

Re: [vpp-dev] VPP Performance drop from 17.04 to 17.07

2017-08-28 Thread Billy McFall
On Mon, Aug 28, 2017 at 8:53 AM, Maciek Konstantynowicz (mkonstan) <
mkons...@cisco.com> wrote:

> + csit-dev
>
> Billy,
>
> Per the last week CSIT project call, from CSIT perspective, we
> classified your reported issue as Test coverage escape.
>
> Summary
> ===
> CSIT test coverage got fixed, see more detail below. The CSIT tests
> uncovered regression for L2BD with MAC learning with higher total number
> of MACs in L2FIB, >>10k MAC, for multi-threaded configurations. Single-
> threaded configurations seem to be not impacted.
>
> Billy, Karl, Can you confirm this aligns with your findings?
>

When you say "multi-threaded configuration", I assume you mean multiple
worker threads? Karl's tests had 4 workers, one for each NIC (physical and
vhost-user). He only tested multi-threaded, so we cannot confirm that
single-threaded configurations are not impacted.

Our numbers are a little different from yours, but we are both seeing drops
between releases. We had a bigger drop-off with 10k flows, but it seems to
be similar with the million-flow tests.

I was a little disappointed that the MAC limit change by John Lo on 8/23
didn't improve the master numbers some.

Thanks for all the hard work and for adding these additional test cases.

Billy


> More detail
> ===
> MAC scale tests have been now added L2BD and L2BD+vhost CSIT suites, as
> a simple extension to existing L2 testing suites. Some known issues with
> TG prevented CSIT to add those tests in the past, but now as TG issues
> have been addressed, the tests could be added swiftly. The complete list
> of added tests is listed in [1] - thanks to Peter Mikus for great work
> there!
>
> Results from running those tests multiple times within FD.io
>  CSIT lab
> infra can be glanced over by checking dedicated test trigger commits
> [2][3][4], summary graphs in linked xls [5]. The results confirm there
> is regression in VPP l2fib code affecting all scaled up MAC tests in
> multi-thread configuration. Single-thread configurations seems not be
> impacted.
>
> The tests in commit [1] are not merged yet, as they're waiting for
> TG/TRex team to fix TRex issue with mis-calculating Ethernet FCS with
> large number of L2 MAC flows (>10k MAC flows). Issue is tracked by [6],
> TRex v2.29 with the fix ETA is w/e 1-Sep i.e. this week. Reported CSIT test
> results are using Ethernet frames with UDP headers that's masking the
> TRex issue.
>
> We have also vpp git bisected the problem between v17.04 (good) and
> v17.07 (bad) in a separate IXIA based lab in SJC, and found the culprit
> vpp patch [7]. Awaiting fix from vpp-dev, jira ticket raised [8].
>
> Many thanks for reporting this regression and working with CSIT to plug
> this hole in testing.
>
> -Maciek
>
> [1] CSIT-786 L2FIB scale testing [https://gerrit.fd.io/r/#/c/8145/
> ge8145] [https://jira.fd.io/browse/CSIT-786 CSIT-786];
> L2FIB scale testing for 10k, 100k, 1M FIB entries
>  ./l2:
>  10ge2p1x520-eth-l2bdscale10kmaclrn-ndrpdrdisc.robot
>  10ge2p1x520-eth-l2bdscale100kmaclrn-ndrpdrdisc.robot
>  10ge2p1x520-eth-l2bdscale1mmaclrn-ndrpdrdisc.robot
>  10ge2p1x520-eth-l2bdscale10kmaclrn-eth-2vhostvr1024-1vm-
> cfsrr1-ndrpdrdisc
>  10ge2p1x520-eth-l2bdscale100kmaclrn-eth-2vhostvr1024-1vm-
> cfsrr1-ndrpdrdisc
>  10ge2p1x520-eth-l2bdscale1mmaclrn-eth-2vhostvr1024-1vm-
> cfsrr1-ndrpdrdisc
> [2] VPP master branch [https://gerrit.fd.io/r/#/c/8173/ ge8173];
> [3] VPP stable/1707 [https://gerrit.fd.io/r/#/c/8167/ ge8167];
> [4] VPP stable/1704 [https://gerrit.fd.io/r/#/c/8172/ ge8172];
> [5] CSIT-794 VPP v17.07 L2BD yields lower NDR and PDR performance vs.
> v17.04, 20170825_l2fib_regression_10k_100k_1M.xlsx, [
> https://jira.fd.io/browse/CSIT-794 CSIT-794];
> [6] TRex v2.28 Ethernet FCS mis-calculation issue [
> https://jira.fd.io/browse/CSIT-793 CSIT-793];
> [7] commit 25ff2ea3a31e422094f6d91eab46222a29a77c4b;
> [8] VPP v17.07 L2BD NDR and PDR multi-thread performance broken [
> https://jira.fd.io/browse/VPP-963 VPP-963];
>
> On 14 Aug 2017, at 23:40, Billy McFall  wrote:
>
> In the last VPP call, I reported some internal Red Hat performance testing
> was showing a significant drop in performance between releases 17.04 to
> 17.07. This with l2-bridge testing - PVP - 0.002% Drop Rate:
>VPP-17.04: 256 Flow 7.8 MP/s 10k Flow 7.3 MP/s 1m Flow 5.2 MP/s
>VPP-17.07: 256 Flow 7.7 MP/s 10k Flow 2.7 MP/s 1m Flow 1.8 MP/s
>
> The performance team re-ran some of the tests for me with some additional
> data collected. Looks like the size of the L2 FIB table was reduced in
> 17.07. Below are the number of entries in the MAC Table after the tests are
> run:
>17.04:
>  show l2fib
>  408 l2fib entries
>17.07:
>  show l2fib
>  1067053 l2fib entries with 1048576 learned (or non-static) entries
>
> This caused more packets to be flooded (see out of 'show node counters'
> below). I looked but couldn't find anything. Is the size of the L2 FIB
> Table table config

[vpp-dev] ethernet over dial-up support

2017-08-28 Thread Алексей Болдырев
Tell me please, is there a plan to add support for PPP, as well as support
for Ethernet over dial-up?

Re: [vpp-dev] VPP Performance drop from 17.04 to 17.07

2017-08-28 Thread Maciek Konstantynowicz (mkonstan)

On 28 Aug 2017, at 17:47, Billy McFall <bmcf...@redhat.com> wrote:

On Mon, Aug 28, 2017 at 8:53 AM, Maciek Konstantynowicz (mkonstan)
<mkons...@cisco.com> wrote:
+ csit-dev

Billy,

Per the last week CSIT project call, from CSIT perspective, we
classified your reported issue as Test coverage escape.

Summary
===
CSIT test coverage got fixed, see more detail below. The CSIT tests
uncovered regression for L2BD with MAC learning with higher total number
of MACs in L2FIB, >>10k MAC, for multi-threaded configurations. Single-
threaded configurations seem to be not impacted.

Billy, Karl, Can you confirm this aligns with your findings?

When you say "multi-threaded configuration", I assume you mean multiple
worker threads?

Yes, I should have said multiple data plane threads; in VPP land that's
worker threads indeed.

Karl's tests had 4 workers, one for each NIC (physical and vhost-user). He
only tested multi-threaded, so we cannot confirm that single-threaded
configurations are not impacted.

Okay. Still, your results align with our tests, both CSIT and offline with
IXIA.


Our numbers are a little different from yours, but we are both seeing drops
between releases.

Your numbers are different most likely due to different MAC scale. You
quote MAC scale per direction; we quote total MAC scale, i.e. the total
number of VPP l2fib entries.

We had a bigger drop-off with 10k flows, but it seems to be similar with
the million-flow tests.

Our 10k flow test is the equivalent of 2 * 5k flows, defined as:

flow-ab1 => (smac-a1,dmac-b1)
flow-ab2 => (smac-a2,dmac-b2)
..
flow-ab5000 => (smac-a5000,dmac-b5000)

flow-ba1 => (smac-b1,dmac-a1)
flow-ba2 => (smac-b2,dmac-a2)
..
flow-ba5000 => (smac-b5000,dmac-a5000)

In your case, based on the description provided by Karl on the last CSIT
call, I read the 10k flow test as having 2 * 10k flows, defined as:

flow-ab1 => (smac-a1,dmac-b1)
flow-ab2 => (smac-a2,dmac-b2)
..
flow-ab10000 => (smac-a10000,dmac-b10000)

flow-ba1 => (smac-b1,dmac-a1)
flow-ba2 => (smac-b2,dmac-a2)
..
flow-ba10000 => (smac-b10000,dmac-a10000)

Also, your PDR packet loss tolerance of 0.002% drop rate is different from
the CSIT PDR (0.5% pkt loss rate tolerance) and NDR (zero pkt loss rate
tolerance).


I was a little disappointed that the MAC limit change by John Lo on 8/23
didn't improve the master numbers some.

Thanks for all the hard work and adding these additional test cases.

You are welcome. Thanks again for reporting this regression.
Let's wait for the vpp-dev fix, so that we can retest and verify it.

-Maciek


Billy


More detail
===
MAC scale tests have been now added L2BD and L2BD+vhost CSIT suites, as
a simple extension to existing L2 testing suites. Some known issues with
TG prevented CSIT to add those tests in the past, but now as TG issues
have been addressed, the tests could be added swiftly. The complete list
of added tests is listed in [1] - thanks to Peter Mikus for great work
there!

Results from running those tests multiple times within FD.io 
CSIT lab
infra can be glanced over by checking dedicated test trigger commits
[2][3][4], summary graphs in linked xls [5]. The results confirm there
is regression in VPP l2fib code affecting all scaled up MAC tests in
multi-thread configuration. Single-thread configurations seems not be
impacted.

The tests in commit [1] are not merged yet, as they're waiting for
TG/TRex team to fix TRex issue with mis-calculating Ethernet FCS with
large number of L2 MAC flows (>10k MAC flows). Issue is tracked by [6],
TRex v2.29 with the fix ETA is w/e 1-Sep i.e. this week. Reported CSIT test
results are using Ethernet frames with UDP headers that's masking the
TRex issue.

We have also vpp git bisected the problem between v17.04 (good) and
v17.07 (bad) in a separate IXIA based lab in SJC, and found the culprit
vpp patch [7]. Awaiting fix from vpp-dev, jira ticket raised [8].

Many thanks for reporting this regression and working with CSIT to plug
this hole in testing.

-Maciek

[1] CSIT-786 L2FIB scale testing [https://gerrit.fd.io/r/#/c/8145/ ge8145] 
[https://jira.fd.io/browse/CSIT-786 
CSIT-786];
L2FIB scale testing for 10k, 100k, 1M FIB entries
 ./l2:
 10ge2p1x520-eth-l2bdscale10kmaclrn-ndrpdrdisc.robot
 10ge2p1x520-eth-l2bdscale100kmaclrn-ndrpdrdisc.robot
 10ge2p1x520-eth-l2bdscale1mmaclrn-ndrpdrdisc.robot
 10ge2p1x520-eth-l2bdscale10kmaclrn-eth-2vhostvr1024-1vm-cfsrr1-ndrpdrdisc
 10ge2p1x520-eth-l2bdscale100kmaclrn-eth-2vhostvr1024-1vm-cfsrr1-ndrpdrdisc
 10ge2p1x520-eth-l2bdscale1mmaclrn-eth-2vhostvr1024-1vm-cfsrr1-ndrpdrdisc
[2] VPP master branch [https://gerrit.fd.io/r/#/c/8173/ 
ge8173];
[3] VPP stable/1707 [https://gerrit.fd.io/r/#/c/8167/ 
ge8167];
[4] VPP stable/1704 [htt

[vpp-dev] Issue forwarding TCP packets

2017-08-28 Thread Prabhjot Singh Sethi

We have been trying to use VPP as a L2/L3 forwarding data path between
VMs.

where we have connected tap interfaces from VM to VPP using "create
host-interface name "
after adding both interfaces to the same bridge-domain (bd_id = 2), we
are able to ping from one VM to another (both in the same subnet).
However, when we try to ssh, we observe that all the TCP packets are
transmitted/forwarded by VPP to the other end, but they are dropped as
"bad segments received".

Can anyone help with what could be wrong here? Are we missing any
other required config?

Note: the same VMs work well when connected to a Linux bridge.

root@vm-2:~# netstat -s | grep -A11 -i tcp:
Tcp:
    33 active connections openings
    4 passive connection openings
    0 failed connection attempts
    1 connection resets received
    1 connections established
    885 segments received
    650 segments send out
    8 segments retransmited
    19 bad segments received.       <- keeps increasing for every tcp packet
    2 resets sent
    InCsumErrors: 19                <- keeps increasing for every tcp packet
root@vm-2:~#

Regards,
Prabhjot

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] https://gerrit.fd.io/r/8156 missing from master

2017-08-28 Thread Dave Wallace

On 08/28/2017 12:05 PM, Andrew Yourtchenko wrote:

Dave,

On 28 Aug 2017, at 17:32, Dave Wallace wrote:



Andrew,

Having got some feedback from others, I don't think there is any one 
right way.


For bugs found in a stable branch, it makes sense to push the
original patch there, then cherry-pick to master, and then identify
whether it makes sense to cherry-pick it to any other branches.  For
bugs found in master (which is what I originally had in mind), going
in the other direction makes more sense.


Aha, cool! We are in sync.



So I guess the best thing would be to ensure that the branch on which
the bug was found is included in the body of the comments section of
the patch, along with what other branches require the fix -- or maybe
this information belongs in Jira.  The most important thing is to ensure
master has all of the applicable patches in it.


Absolutely, I always ensure that. The reason I didn't cherry-pick from
the hip before the commit is that a cherry-pick done after the commit
includes the original commit ID, so I think it is useful to have that in
the commit message :-)
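
(A minimal sketch of that flow, assuming plain git -- the -x flag is what
records the originating commit ID; the commit hash is hypothetical:)

  git checkout stable/1707
  # ... fix reviewed and merged there, say as commit abc1234 ...
  git checkout master
  git cherry-pick -x abc1234
  # the new commit message gains "(cherry picked from commit abc1234)"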


Good point. I agree!

Thanks,
-daw-



--a





Thanks,
-daw-

On 08/25/2017 04:57 PM, Andrew Yourtchenko wrote:

Dave,

Oh, I think during the "release" time we did the opposite. I was
basing the logic on the sensible origin of the work on the code: the
issue was found on stable/1707, so I first verify the fix with the
finder in their lab, thus ensuring they test just that change, then
commit, and then cherry-pick into master, thus ensuring the commit
message on master gets the reference to the other commit.


Of course, any new things and minor bug fixes, especially those I would
find myself or that someone else finds on master, would go to master
first.


At least that was my logic - but I am happy to replace it with a
better one! :-)


--a

On 25 Aug 2017, at 14:32, Dave Wallace wrote:



Andrew,

IMHO, best practice is to always commit to master first, then
cherry-pick to stable branches as required.  That being said, I'm
not sure if this has been explicitly agreed upon by the VPP community.


Thanks,
-daw-

On 8/25/17 3:38 AM, Andrew Yourtchenko wrote:

Dave,

Yeah, those things are found throughout testing of stable/1707
with various control planes (OpenStack etc.), so I first deal with
them in stable/1707 and, after the commit ID is there, cherry-pick
into master (so I made https://gerrit.fd.io/r/#/c/8207/ for master
now). Those are a few things that need to go in before we can stamp
a 17.07.1...


--a

On 24 Aug 2017, at 23:54, Dave Wallace wrote:



Andrew,

I just merged https://gerrit.fd.io/r/8156 into stable/1707 and 
afterwards realized that it has not been committed to master. 
Shouldn't this be included in master as well?


Same thing with https://gerrit.fd.io/r/#/c/8147/ -- is there a 
reason this is not being committed to master, then cherry-picked 
to stable/1707?


Thanks,
-daw-






___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] SIGSEGV when bootstrapping

2017-08-28 Thread Dave Wallace

Marco,

Thanks for the follow up. Could you please file a Jira for this issue 
(https://jira.fd.io/secure/RapidBoard.jspa?rapidView=20&projectKey=VPP) 
and/or submit a patch if you find a workaround?


Thanks,
-daw-

On 08/28/2017 12:22 PM, Marco Varlese wrote:

After long digging I managed to find the issue...

The problem happens when building VPP using gcc-7 compiler but it
doesn't come up when building it with gcc-6.

I will keep digging into this but I hope it might be of help to you
folks too...
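
(A possible way to pin the older compiler while digging, assuming the build
honors the usual CC variable:)

  export CC=gcc-6
  make wipe && make build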


Cheers,
Marco

On Mon, 2017-08-28 at 16:05 +0200, Marco Varlese wrote:

And an even more complete BT with sources below:

[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
vlib_plugin_early_init:356: plugin path /usr/lib64/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control
Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane
Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per
Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator
addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid
Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory
Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address
Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)

Program received signal SIGSEGV, Segmentation fault.
mfib_entry_alloc (mfib_entry_index=<optimized out>, prefix=0x7fffb60f0ce0, fib_index=0)
    at /usr/src/debug/vpp-17.10/src/vnet/mfib/mfib_entry.c:407
407         mfib_entry->mfe_prefix = *prefix;
Missing separate debuginfos, use: zypper install libdpdk-17_08-0-debuginfo-17.08-82.1.x86_64
libnuma1-debuginfo-2.0.9-10.2.x86_64 libopenssl1_0_0-debuginfo-1.0.2j-6.3.1.x86_64
libz1-debuginfo-1.2.8-10.1.x86_64 vpp-plugins-debuginfo-17.10-14.2.x86_64
(gdb) bt
#0  mfib_entry_alloc (mfib_entry_index=<optimized out>, prefix=0x7fffb60f0ce0, fib_index=0)
    at /usr/src/debug/vpp-17.10/src/vnet/mfib/mfib_entry.c:407
#1  mfib_entry_create (fib_index=fib_index@entry=0, source=source@entry=MFIB_SOURCE_DEFAULT_ROUTE,
    prefix=prefix@entry=0x7fffb60f0ce0, rpf_id=rpf_id@entry=0,
    entry_flags=entry_flags@entry=MFIB_ENTRY_FLAG_DROP)
    at /usr/src/debug/vpp-17.10/src/vnet/mfib/mfib_entry.c:719
#2  0x00007ffff765cdc7 in mfib_table_entry_update (fib_index=0, prefix=prefix@entry=0x7fffb60f0ce0,
    source=source@entry=MFIB_SOURCE_DEFAULT_ROUTE, rpf_id=rpf_id@entry=0,
    entry_flags=entry_flags@entry=MFIB_ENTRY_FLAG_DROP)
    at /usr/src/debug/vpp-17.10/src/vnet/mfib/mfib_table.c:184
#3  0x00007ffff7656b85 in ip4_create_mfib_with_table_id (table_id=0)
    at /usr/src/debug/vpp-17.10/src/vnet/mfib/ip4_mfib.c:72
#4  ip4_mfib_table_find_or_create_and_lock (table_id=table_id@entry=0)
    at /usr/src/debug/vpp-17.10/src/vnet/mfib/ip4_mfib.c:122
#5  0x00007ffff765d257 in mfib_table_find_or_create_and_lock (proto=proto@entry=FIB_PROTOCOL_IP4,
    table_id=table_id@entry=0) at /usr/src/debug/vpp-17.10/src/vnet/mfib/mfib_table.c:435
#6  0x00007ffff7338b14 in ip4_lookup_init (vm=vm@entry=0x7ffff7bb62e0 <vlib_global_main>)
    at /usr/src/debug/vpp-17.10/src/vnet/ip/ip4_forward.c:1202
#7  0x00007ffff7273bff in vnet_main_init (vm=vm@entry=0x7ffff7bb62e0 <vlib_global_main>)
    at /usr/src/debug/vpp-17.10/src/vnet/misc.c:92
#8  0x00007ffff73a4507 in ip_main_init (vm=0x7ffff7bb62e0 <vlib_global_main>)
    at /usr/src/debug/vpp-17.10/src/vnet/ip/ip_init.c:104
#9  0x00007fffb35d8572 in ?? () from /usr/lib64/vpp_plugins/ioam_plugin.so
#10 0x00007ffff796128d in vlib_call_init_exit_functions (vm=0x7ffff7bb62e0 <vlib_global_main>,
    head=<optimized out>, call_once=call_once@entry=1) at /usr/src/debug/vpp-17.10/src/vlib/init.c:57
#11 0x00007ffff79612d3 in vlib_call_all_init_functions (vm=<optimized out>)
    at /usr/src/debug/vpp-17.10/src/vlib/init.c:75
#12 0x00007ffff79657a5 in vlib_main (vm=<optimized out>, vm@entry=0x7ffff7bb62e0 <vlib_global_main>,
    input=input@entry=0x7fffb60f0fa0) at /usr/src/debug/vpp-17.10/src/vlib/main.c:1754
#13 0x00007ffff799d3c6 in thread0 (arg=140737349640928)
    at /usr/src/debug/vpp-17.10/src/vlib/unix/main.c:525
#14 0x00007ffff6f7a250 in clib_calljmp () at /usr/src/debug/vpp-17.10/src/vppinfra/longjmp.S:110
#15 0x00007fffffffd100 in ?? ()
#16 0x00007ffff799df54 in vlib_unix_main (argc=<optimized out>, argv=<optimized out>)
    at /usr/src/debug/vpp-17.10/src/vlib/unix/main.c:588
#17 0x0000000000000000 in ?? ()



Regards,
Marco

On Mon, 2017-08-28 at 15:41 +0200, Marco Varlese wrote:

Apologies, I forgot to also provide some extra information:


Using DPDK 17.08.
A backtrace below:

(gdb) bt
#0  0x00007ffff765bced in mfib_entry_create () from /usr/lib64/libvnet.so.0
#1  0x00007ffff765cdc7 in mfib_table_entry_update () from /usr/lib64/libvnet.so.0
#2  0x00007ffff7656b85 in ip4_mfib_table_find_or_crea

Re: [vpp-dev] [discuss] Question about VPP support for ARM 64

2017-08-28 Thread Brian Brooks
On 08/23 08:05:16, Damjan Marion (damarion) wrote:
> 
> On 23 Aug 2017, at 06:30, Brian Brooks <brian.bro...@arm.com> wrote:
> 
> Hi Damjan, George,
> 
> I just pulled the latest source and tried a native build (platforms/vpp.mk) on 
> ARMv8:
> 
>  cat: '/sys/bus/pci/devices/0000:00:01.0/uevent': No such file or directory
> 
> From dpdk/Makefile,
> 
>  
> ##
>  # Intel x86
>  
> ##
>  ifeq ($(MACHINE),$(filter $(MACHINE),x86_64 i686))
>  DPDK_TARGET   ?= $(MACHINE)-native-linuxapp-$(DPDK_CC)
>  DPDK_MACHINE  ?= nhm
>  DPDK_TUNE ?= core-avx2
>  
> ##
>  # Cavium ThunderX
>  
> ##
>  else ifneq (,$(findstring thunder,$(shell cat 
> /sys/bus/pci/devices/0000:00:01.0/uevent | grep cavium)))
>  export CROSS=""
>  DPDK_TARGET   ?= arm64-thunderx-linuxapp-$(DPDK_CC)
>  DPDK_MACHINE  ?= thunderx
>  DPDK_TUNE ?= generic
> 
> So, I am thinking we need to modify this to support MACHINE=aarch64 and 
> possibly
> rework thunder detection to not fail hard on non-thunder machines.
> 
> Yes, unfortunately I don't have a non-thunder system to take care of this, but it 
> should be easy.

Hi Damjan, George,

Please see https://gerrit.fd.io/r/#/c/8228/ for the change described above.

Thanks,
Brian

> Another thing which needs attention is proper cacheline size detection
> during the VPP build. ThunderX has a 128-byte cacheline
> and the others are 64 bytes, if I get it right. Last time I looked there was no way
> to find it out from sysfs, but maybe new kernels
> expose that info.
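
(A quick check, assuming a reasonably recent kernel that exposes the cache
topology via sysfs:)

  cat /sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size
  getconf LEVEL1_DCACHE_LINESIZE   # may report 0 where the kernel doesn't expose it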
> 
> 
> Regards,
> Brian
> 
> On 08/22 17:55:20, George Zhao wrote:
> Thanks Damjan,
> 
> Confirmed that your patches worked on our system as well.
> 
> George
> 
> From: Damjan Marion (damarion) [mailto:damar...@cisco.com]
> Sent: Tuesday, August 22, 2017 5:03 AM
> To: George Zhao
> Cc: Dave Barach (dbarach); discuss; csit-dev; vpp-dev
> Subject: Re: [vpp-dev] [discuss] Question about VPP support for ARM 64
> 
> Dear George,
> 
> I tried on my Cavium ThunderX system with the latest Ubuntu and, after fixing a few 
> minor issues (all patches submitted to master), I got VPP running.
> I use latest Ubuntu devel (17.10, mainly as I upgraded to new kernel in my 
> attempts to get system working)
> 
> For me it is hard to help you with your particular system, as I don't have 
> access to a similar one, but my guess is that it shouldn't be too hard to get 
> it working.
> 
> Thanks,
> 
> Damjan
> 
> On 20 Aug 2017, at 23:12, George Zhao <george.y.z...@huawei.com> wrote:
> 
> Hi Damjan,
> 
> It is an Applied Micro Overdrive 1000; here is the uname -a output:
> 
> $>> uname -a
> Linux OD1K 4.4.0-92-generic #115-Ubuntu SMP Thu Aug 10 09:10:33 UTC 2017 
> aarch64 aarch64 aarch64 GNU/Linux
> 
> thanks
> George
> From: Damjan Marion (damarion)
> To: George Zhao
> Cc: dbarach, discuss, csit-dev, vpp-dev
> Date: 2017-08-20 10:03:27
> Subject: Re: [vpp-dev] [discuss] Question about VPP support for ARM 64
> 
> 
> 
> George, are you using ThunderX platform?
> 
> I spent a few hours today trying to install the latest Ubuntu on my ThunderX system 
> but no luck; the kernel hangs at some point, both the Ubuntu-provided one and a manually 
> compiled one.
> 
> Can you share more details about your system?
> 
> Thanks,
> 
> Damjan
> 
> 
> 
> On 19 Aug 2017, at 22:48, George Zhao <george.y.z...@huawei.com> wrote:
> 
> If a bug is filed, may I have the bug number? I would love to track this 
> patch.
> 
> BTW, how do I file a bug for VPP? I did a quick wiki search with no luck.
> 
> Thanks,
> George
> 
> -Original Message-
> From: Dave Barach (dbarach) [mailto:dbar...@cisco.com]
> Sent: Saturday, August 19, 2017 7:42 AM
> To: George Zhao <george.y.z...@huawei.com>
> Cc: vpp-dev@lists.fd.io; disc...@lists.fd.io; csit-...@lists.fd.io;
>  Damjan Marion (damarion) <damar...@cisco.com>
> Subject: RE: [discuss] Question about VPP support for ARM 64
> 
> +1, pls add the typedef...
> 
> Thanks… Dave
> 
> -Original Message-
> From: Damjan Marion (damarion)
> Sent: Saturday, August 19, 2017 9:09 AM
> To: Dave Barach (dbarach) <dbar...@cisco.com>
> Cc: George Zhao <george.y.z...@huawei.com>;
>  vpp-dev@lists.fd.io; disc...@lists.fd.io

Re: [vpp-dev] [EXT] Re: [discuss] Question about VPP support for ARM 64

2017-08-28 Thread Brian Brooks
Hi Eric,

Seeing similar ICEs with GCC 5.3.1. Can you try GCC 5.4.0+?
That might require a newer Ubuntu filesystem than the one
described on the MACCHIATObin wiki. I will be setting up a
MACCHIATObin shortly.

Brian

On 08/26 02:59:33, Eric Chen wrote:
> Hi Brian,
> 
> Yes, I upgraded to Ubuntu 16.04.
> 
> I succeeded in natively building fd.io_odp4vpp (w/ odp-linux);
> however, when building fd.io_vpp (w/ dpdk), it reported the error below.
> 
> Has anyone seen this before? It seems to be a gcc bug.
> 
> In file included from 
> /home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/error_funcs.h:43:0,
>  from 
> /home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/vlib.h:70,
>  from 
> /home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vnet/l2/l2_fib.c:19:
> /home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/node_funcs.h: In 
> function ‘vlib_process_suspend_time_is_zero’:
> /home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/node_funcs.h:442:1:
>  error: unable to generate reloads for:
>  }
>  ^
> (insn 11 37 12 2 (set (reg:CCFPE 66 cc)
> (compare:CCFPE (reg:DF 79)
> (reg:DF 80))) 
> /home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/node_funcs.h:441 
> 395 {*cmpedf}
>  (expr_list:REG_DEAD (reg:DF 80)
> (expr_list:REG_DEAD (reg:DF 79)
> (nil
> /home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/node_funcs.h:442:1:
>  internal compiler error: in curr_insn_transform, at lra-constraints.c:3509
> Please submit a full bug report,
> with preprocessed source if appropriate.
> See  for instructions.
> Makefile:6111: recipe for target 'vnet/l2/l2_fib.lo' failed
> make[4]: *** [vnet/l2/l2_fib.lo] Error 1
> make[4]: *** Waiting for unfinished jobs
> 
> 
> 
> ericxh@linaro-developer:~/work/git_work/fd.io_vpp$ gcc -v
> Using built-in specs.
> COLLECT_GCC=gcc
> COLLECT_LTO_WRAPPER=/usr/lib/gcc/aarch64-linux-gnu/5/lto-wrapper
> Target: aarch64-linux-gnu
> Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 
> 5.3.1-14ubuntu2' --with-bugurl=file:///usr/share/doc/gcc-5/README.Bugs 
> --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr 
> --program-suffix=-5 --enable-shared --enable-linker-build-id 
> --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix 
> --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu 
> --enable-libstdcxx-debug --enable-libstdcxx-time=yes 
> --with-default-libstdcxx-abi=new --enable-gnu-unique-object 
> --disable-libquadmath --enable-plugin --with-system-zlib 
> --disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo 
> --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-5-arm64/jre --enable-java-home 
> --with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-5-arm64 
> --with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-5-arm64 
> --with-arch-directory=aarch64 --with-ecj-jar=/usr/share/java/eclipse-ecj.jar 
> --enable-multiarch --enable-fix-cortex-a53-843419 --disable-werror 
> --enable-checking=release --build=aarch64-linux-gnu --host=aarch64-linux-gnu 
> --target=aarch64-linux-gnu
> Thread model: posix
> gcc version 5.3.1 20160413 (Ubuntu/Linaro 5.3.1-14ubuntu2)
> 
> 
> -Original Message-
> From: Brian Brooks [mailto:brian.bro...@arm.com] 
> Sent: 2017年8月26日 2:00
> To: Eric Chen 
> Cc: George Zhao ; discuss ; 
> csit-dev ; Damjan Marion (damarion) 
> ; vpp-dev 
> Subject: Re: [EXT] Re: [vpp-dev] [discuss] Question about VPP support for ARM 
> 64
> 
> Hi Eric,
> 
> On 08/23 06:23:07, Eric Chen wrote:
> > Hi Brian,
> > 
> > I am trying to natively build vpp in aarch64 box as well,
> > 
> > However, when I "make install-dep", it reports an error -- "unable to 
> > locate package default-jdk-headless",
> > 
> > If you are using Ubuntu as well, could you share with me your apt-get 
> > source list?
> 
> Are you natively building on a MACCHIATObin with the Ubuntu filesystem?
> 
> > Thanks
> > Eric
> > 
> > -Original Message-
> > From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] 
> > On Behalf Of Brian Brooks
> > Sent: 2017年8月23日 12:31
> > To: George Zhao 
> > Cc: discuss ; csit-dev ; 
> > Damjan Marion (damarion) ; vpp-dev 
> > 
> > Subject: [EXT] Re: [vpp-dev] [discuss] Question about VPP support for 
> > ARM 64
> > 
> > External Email
> > 
> > --
> > Hi Damjan, George,
> > 
> > I just pulled the latest source and tried a native build (platforms/vpp.mk) on 
> > ARMv8:
> > 
> >   cat: '/sys/bus/pci/devices/0000:00:01.0/uevent': No such file or 
> > directory
> > 
> > From dpdk/Makefile,
> > 
> >   
> > ##
> >   # Intel x86
> >   
> > ##
> >   ifeq ($(MACHINE),$(filter $(MACHINE),x86_64 i686))
> >   DPDK_TARGET   ?= $(MACHINE)-native-linuxapp-$(DPDK_CC)
> >   DPDK_MACHINE  

Re: [vpp-dev] Issue forwarding TCP packets

2017-08-28 Thread Florin Coras
Hi Prabhjot, 

From your description, I suspect it may be a Linux TCP checksum offload issue.
Could you try disabling it for all interfaces with:

ethtool --offload <iface> rx off tx off

Hope this helps, 
Florin

> On Aug 28, 2017, at 10:56 AM, Prabhjot Singh Sethi  
> wrote:
> 
> We have been trying to use VPP as a L2/L3 forwarding data path between VMs.
> 
> where we have connected tap interfaces from the VMs to VPP using "create 
> host-interface name <tap-name>"
> after adding both the interfaces to same bridge-domain (bd_id = 2), we are 
> able to ping from
> one VM to another (both in same subnet). However when we try to do ssh we 
> observe that all the 
> tcp packets are transmitted/forwarded by VPP to the other end but they are 
> dropped as
> "bad segments received".
> 
> Can anyone help with what could be wrong here, are we missing any other 
> required config?
> 
> Note:- same VMs works well when connected to linux bridge
> 
> root@vm-2:~# netstat -s | grep -A11 -i tcp:
> Tcp:
> 33 active connections openings
> 4 passive connection openings
> 0 failed connection attempts
> 1 connection resets received
> 1 connections established
> 885 segments received
> 650 segments send out 
> 8 segments retransmited
> 19 bad segments received.       <- keeps increasing for every tcp packet
> 2 resets sent
> InCsumErrors: 19                <- keeps increasing for every tcp packet
> root@vm-2:~# 
> 
> Regards,
> Prabhjot
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] ACL Match in fa_node.c

2017-08-28 Thread Wang, Yipeng1
Thank you Andrew, it seems I was looking at an older version...

I was thinking of something similar, like using tuple space search as you have
already done. It is very good work with lots of considerations. One quick question:

I have just started looking at the code, but it seems all the rules are stored in one
hash table (acl_lookup_hash). Say a packet has src IP = 1.1.0.0 and
dst = 2.2.2.2, and there are two rules: one is src=1.1.0.0/16, dst=2.2.0.0/32 and
the other is src=1.1.0.0/32, dst=2.2.0.0/16. The two rules will have the same key
value in the table, right? On lookup, assume the packet header is masked by the
second rule's mask type and becomes src=1.1.0.0, dst=2.2.0.0. Will it match the
first rule as well by mistake?
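
(To make this concrete, a hedged sketch in plain C -- NOT the actual
acl-plugin code, with the per-mask hash tables replaced by a linear scan for
brevity. The point it illustrates: each candidate is compared against the
stored value under that rule's own mask, so a hit is always re-verified and
the two rules above cannot be confused:)

  #include <stdio.h>
  #include <stdint.h>

  typedef struct {
    uint32_t src, dst;            /* rule value, stored already masked */
    uint32_t src_mask, dst_mask;  /* this rule's mask type */
    int action;
  } rule_t;

  /* the two rules from the example above */
  static const rule_t rules[] = {
    { 0x01010000, 0x02020000, 0xffff0000, 0xffffffff, 1 }, /* 1.1.0.0/16, 2.2.0.0/32 */
    { 0x01010000, 0x02020000, 0xffffffff, 0xffff0000, 2 }, /* 1.1.0.0/32, 2.2.0.0/16 */
  };

  static int lookup(uint32_t src, uint32_t dst)
  {
    for (unsigned i = 0; i < sizeof(rules) / sizeof(rules[0]); i++) {
      /* mask the packet with THIS rule's mask type... */
      uint32_t ms = src & rules[i].src_mask;
      uint32_t md = dst & rules[i].dst_mask;
      /* ...and accept only on an exact match of the masked value */
      if (ms == rules[i].src && md == rules[i].dst)
        return rules[i].action;
    }
    return 0; /* no match */
  }

  int main(void)
  {
    /* packet src=1.1.0.0, dst=2.2.2.2: rule 1 is rejected (dst/32 differs),
       rule 2 matches (src/32 and dst/16 both agree) -> prints 2 */
    printf("matched rule: %d\n", lookup(0x01010000, 0x02020202));
    return 0;
  }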

Thank you.
Yipeng

From: Andrew Yourtchenko [mailto:ayour...@gmail.com]
Sent: Sunday, August 27, 2017 6:30 AM
To: Wang, Yipeng1 
Cc: vpp-dev@lists.fd.io; zhang...@yunshan.net.cn
Subject: Re: [vpp-dev] ACL Match in fa_node.c

Hi Yipeng,

It's already there - just have a look through hash_* files in the ACL plugin 
directory on the master or latest stable/1707 :-)

There are several more things that can be taken care of (e.g. the determination
of the "ACE not shadowed" shortcut for applied ACEs would allow some good
further improvement, or more distinction between the v4 and v6 lookups, and
potentially more cleverness with handling of port ranges).

Also, if you have ideas for a different lookup mechanism - feel free to add
one - using a similar API to the hash-based one's, it should be fairly
straightforward.

Do you have some specific ideas in mind?

--a

On 26 Aug 2017, at 03:23, Wang, Yipeng1 <yipeng1.w...@intel.com> wrote:
Hi, Andrew and Pan,

I came across the thread where you are talking about improvements to the ACL
plugin. Any update? I am also interested in the performance of ACL, and am
wondering what direction the optimizations you are working on will take.
Could you share?

Thanks
Yipeng




>On 5/23/17, 张攀 wrote:

> Hi Andrew!

>

>

> -- Original --

> From: "Andrew Yourtchenko";

> Date:  Tue, May 23, 2017 07:56 PM

> To: "张攀";

> Cc: "vpp-dev";

> Subject:  Re: [vpp-dev] ACL Match in fa_node.c

>

>

> Hi!

>

> On 5/23/17, 张攀 wrote:

>> Hi guys,

>>

>>

>> I looked into the source code of vpp/src/plugin/acl/fa_node.c,

>> in function full_acl_match_5tuple(), it seems that every ingress packet

>> is

>> matching against each ACL rule stored in acl_main->acls in a for-loop

>> manner. This does not seem very efficient.

>

> You're absolutely right on both counts. First make it work, then make
> it right, then make it fast :-) I have some ideas that I wanted to
> experiment with there; would you be interested in helping? ACL matching
> is a fairly distinct operation, so it doesn't affect much else.

>

> [PAN]: I would be very pleased to help, as I am also addicted to
> high-performance programming :D

>

>



Great! :-) I will try to sketch the idea tomorrow/thursday and will

add you to the draft, so we can work together. (I also will have

limited connectivity the next week, so I won't be a lot in your way!

:-) (or if you have some good idea on how you'd like to do it, feel

free to shoot a doc/code into a gerrit draft, let's see!)



>

>>

>>

>> Besides, I notice that in vpp/src/plugin/acl/acl.c, when you call the

>> function acl_hook_l2_input_classify(), you will create a

>> vnet_classify_table, but I didn't see any code which adds

>> classify_session

>> to it, why?

>

> I had used classify table for storing the sessions in the pre-1704

> version of the ACL plugin.

>

> in 1704 as I was adding the L3, I moved to the new data path while

> keeping the old one still around, and potentially switchable (not

> terribly gracefully, but still).

>

> In the current master the classifier is used merely as a hook to get

> into the packet processing within the L2 packet path - that's why you

> see no sessions added.

>

> [PAN]: Cool. Correct me if I am wrong: an ingress packet will first be checked
> against any existing matched session in fa_node.c and, if there is none,
> the packet will be checked against the ACL to decide
> whether to create a new session or to drop it.



Exactly, that is the idea!
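
(In rough C terms, the path just described -- a sketch with hypothetical
stand-in names, not the plugin's actual API:)

  #include <stdio.h>

  typedef struct { int has_session; int acl_permits; } pkt_t;

  /* stubs standing in for the real session table and full ACL match */
  static int session_lookup(const pkt_t *p)  { return p->has_session; }
  static int full_acl_match(const pkt_t *p)  { return p->acl_permits; }

  static const char *process(pkt_t *p)
  {
    if (session_lookup(p))            /* fast path: existing session hit */
      return "forward (session hit)";
    if (full_acl_match(p)) {          /* miss: run the full ACL check    */
      p->has_session = 1;             /* permit -> create a session      */
      return "create session + forward";
    }
    return "drop";                    /* deny -> drop                    */
  }

  int main(void)
  {
    pkt_t a = { 1, 0 }, b = { 0, 1 }, c = { 0, 0 };
    printf("%s\n%s\n%s\n", process(&a), process(&b), process(&c));
    return 0;
  }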



>

>

> Is there any 'match then action'-like behavior in fa_node?  And a
> long-standing mystery question for me: what
> does "fa" stand for :p

>



"FA" is "feature arc" :) When adding the routed path, the feature arc

support appeared a bit earlier, and I was "wow, this is so excellent

and easy", and I thought I might use the same mechanism in L2 as well,

so I just bluntly called everything "fa_*". But then I decided to

leave 

[vpp-dev] netwoking-vpp l3 issue

2017-08-28 Thread ????????
Hello, I recently installed OpenStack using devstack with the networking-vpp plugin,
and disabled q-agt and q-l3, replaced by vpp-agent, but the router does not work.
So I want to ask:
a. How does vpp-agent implement the routing function?
b. Which VPP commands does vpp-agent call to achieve the routing function?
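
(For reference, routing in VPP is typically driven by CLI/API calls of this
shape -- a sketch only, not necessarily what vpp-agent actually issues;
interface names and addresses are hypothetical:)

  set interface ip address GigabitEthernet0/8/0 10.0.1.1/24
  set interface state GigabitEthernet0/8/0 up
  ip route add 10.0.2.0/24 via 10.0.1.2 GigabitEthernet0/8/0
  show ip fib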


thank you very much!
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Issue forwarding TCP packets

2017-08-28 Thread prabhjot

Thanks Florin,
it works with offload disabled; earlier, when I tried changing
offload settings, I missed doing it on one machine.


However, my question is still why VPP is unable to handle it: is
it a missing configuration or missing functionality in VPP?


I can ssh from one VM to another without turning off offload when
using a Linux bridge, and I don't see any change in the packet
after it is forwarded. tcpdump complains about the checksum, but
everything works fine.
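
(One way to confirm which side still has offload enabled, assuming standard
ethtool; the interface name is hypothetical and output may vary:)

  ethtool -k eth0 | grep checksumming
  # rx-checksumming: on
  # tx-checksumming: on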


Packet ingressing into the Linux bridge from vm-1:
06:41:58.626869 02:53:71:ef:2f:2a > 02:91:fb:46:9e:43, ethertype IPv4  
(0x0800), length 74: (tos 0x0, ttl 64, id 6341, offset 0, flags [DF],  
proto TCP (6), length 60)
1.1.1.3.51912 > 1.1.1.4.22: Flags [S], cksum 0x0437 (incorrect ->  
0xdeae), seq 2306393024, win 29200, options [mss 1460,sackOK,TS val  
1825541 ecr 0,nop,wscale 7], length 0

0x0000:  0291 fb46 9e43 0253 71ef 2f2a 0800 4500
0x0010:  003c 18c5 4000 4006 1def 0101 0103 0101
0x0020:  0104 cac8 0016 8978 c3c0 0000 0000 a002
0x0030:  7210 0437 0000 0204 05b4 0402 080a 001b
0x0040:  db05 0000 0000 0103 0307

Packet egressing from the Linux bridge to vm-2:
06:41:58.627130 02:53:71:ef:2f:2a > 02:91:fb:46:9e:43, ethertype IPv4  
(0x0800), length 74: (tos 0x0, ttl 64, id 6341, offset 0, flags [DF],  
proto TCP (6), length 60)
1.1.1.3.51912 > 1.1.1.4.22: Flags [S], cksum 0x0437 (incorrect ->  
0xdeae), seq 2306393024, win 29200, options [mss 1460,sackOK,TS val  
1825541 ecr 0,nop,wscale 7], length 0

0x0000:  0291 fb46 9e43 0253 71ef 2f2a 0800 4500
0x0010:  003c 18c5 4000 4006 1def 0101 0103 0101
0x0020:  0104 cac8 0016 8978 c3c0 0000 0000 a002
0x0030:  7210 0437 0000 0204 05b4 0402 080a 001b
0x0040:  db05 0000 0000 0103 0307


Regards,
Prabhjot

Quoting Florin Coras :


Hi Prabhjot,

From your description, I suspect it may be a linux tcp checksum  
offload issue. Could you try disabling it for all interfaces with:


ethtool --offload <iface> rx off tx off

Hope this helps,
Florin

On Aug 28, 2017, at 10:56 AM, Prabhjot Singh Sethi  
 wrote:


We have been trying to use VPP as a L2/L3 forwarding data path between VMs.

where we have connected tap interfaces from the VMs to VPP using "create  
host-interface name <tap-name>"
after adding both the interfaces to same bridge-domain (bd_id = 2),  
we are able to ping from
one VM to another (both in same subnet). However when we try to do  
ssh we observe that all the
tcp packets are transmitted/forwarded by VPP to the other end but  
they are dropped as

"bad segments received".

Can anyone help with what could be wrong here, are we missing any  
other required config?


Note:- same VMs works well when connected to linux bridge

root@vm-2:~# netstat -s | grep -A11 -i tcp:
Tcp:
33 active connections openings
4 passive connection openings
0 failed connection attempts
1 connection resets received
1 connections established
885 segments received
650 segments send out
8 segments retransmited
19 bad segments received.       <- keeps increasing for every tcp packet
2 resets sent
InCsumErrors: 19                <- keeps increasing for every tcp packet

root@vm-2:~#

Regards,
Prabhjot
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] SIGSEGV when bootstrapping

2017-08-28 Thread Marco Varlese
Hi Dave,
On Mon, 2017-08-28 at 14:12 -0400, Dave Wallace wrote:
> Marco,
> 
> Thanks for the follow up. Could you please file a Jira for this issue
> (https://jira.fd.io/secure/RapidBoard.jspa?rapidView=20&projectKey=VPP)
> and/or submit a patch if you find a workaround?

Sure, I filed the bug and the link is https://jira.fd.io/browse/VPP-964
I keep digging and hopefully will find the solution soon! :)

> Thanks,
> -daw-

Cheers,
Marco
> 
> 
> On 08/28/2017 12:22 PM, Marco Varlese wrote:
> 
> >   After long digging I managed to find the issue...
> > 
> > The problem happens when building VPP using gcc-7 compiler but it
> > doesn't come up when building it with gcc-6.
> > 
> > I will keep digging into this but I hope it might be of help to you
> > folks too...
> > 
> > 
> > Cheers,
> > Marco
> > 
> > On Mon, 2017-08-28 at 16:05 +0200, Marco Varlese wrote:
> > 
> >   
> > > And an even more complete BT with sources below:
> > > 
> > > [gdb output trimmed -- identical plugin-load messages and backtrace
> > > already quoted in full earlier in this thread.]