[vpp-dev] dpdk output function

2017-10-31 Thread Yuliang Li
Hi,

There is a node called "TenGigabitEthernet5/0/1-output". I am using dpdk on
this interface. Does anyone know which function this node calls?

Thanks,
-- 
Yuliang Li
PhD student
Department of Computer Science
Yale University
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [SFC] Query regarding SFC classifier configuration for ip4 traffic

2017-10-31 Thread Ni, Hongjun
Hi Phaneendra,

Please try below scripts:

classify table mask l3 ip4 proto
classify session l2-input-hit-next input-node nsh-classifier table-index 0 
match l3 ip4 proto 6 opaque-index 47615
set int l2 bridge TenGigabitEthernet5/0/0 1 1
set interface l2 input classify intfc TenGigabitEthernet5/0/0 ip4-table 0
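As a conceptual aside (a toy model, not VPP source; the single-byte "packet" below is an invented stand-in for the IPv4 protocol field): a classify table carries a mask, each session carries a match value, and a packet hits a session when the masked packet bytes equal the match.

```python
# Illustrative sketch of mask/match classification (not VPP code).

def classify(packet_bytes: bytes, mask: bytes, match: bytes) -> bool:
    """Return True if the masked packet bytes equal the session match."""
    masked = bytes(p & m for p, m in zip(packet_bytes, mask))
    return masked == match

# Toy 1-byte "header" standing in for the IPv4 protocol field:
# mask 0xff selects the proto byte; match 0x06 selects TCP.
assert classify(bytes([6]), bytes([0xff]), bytes([0x06]))       # TCP hits
assert not classify(bytes([17]), bytes([0xff]), bytes([0x06]))  # UDP misses
```

In the script above, `mask l3 ip4 proto` builds the mask over the IPv4 protocol byte, and `match l3 ip4 proto 6` provides the TCP match value for the session.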

-Hongjun

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Phaneendra Manda
Sent: Tuesday, October 31, 2017 8:11 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] [SFC] Query regarding SFC classifier configuration for ip4 
traffic

Hi All,

I am trying out SFC with VPP for ip4 traffic on a dpdk interface. I have a few 
queries.

1. What is the configuration for IP4 traffic to reach the nsh-classifier node 
in VPP using vppctl?

 I am trying the following commands to redirect ip4 traffic to the 
nsh-classifier node, but the second command throws the error "Table index 
required":

 classify table mask l3 ip4 proto
 classify session hit-next input-node nsh-classifier table-index 0 match l3 
ip4 proto 17 opaque-index 47615


2. Do I need to associate the interface with the classifier table that was created?

Thanks in advance :)

--
Thanks & regards,
Phaneendra Manda.


[vpp-dev] debuginfo rpms missing from nexus yum repos.

2017-10-31 Thread Thomas F Herbert

Hi,

I noticed while working on csit that vpp debuginfo rpms have been "missing" 
from the Nexus CentOS repo since late September. The newest 
ones date from September.


Does anybody know why?

--Tom


--
*Thomas F Herbert*
NFV and Fast Data Planes
Office of the CTO
*Red Hat*

Re: [vpp-dev] Assumed "deny" at end of ACLs?

2017-10-31 Thread Andrew Yourtchenko
Yep!

--a

> On 31 Oct 2017, at 17:57, Jon Loeliger wrote:
> 
>> On Mon, Oct 30, 2017 at 3:38 PM, Jon Loeliger wrote:
>>> On Mon, Oct 30, 2017 at 3:34 PM, Andrew Yourtchenko
>>> wrote:
>>> Jon,
>>> 
>>> Assuming it’s the ACL plugin that you are asking about, yes - if none of 
>>> the ACLs in the list applied to an interface in a given direction matches, 
>>> it’s the same as a deny.
>>> 
>>> --a
> 
> What about MACIP ACLs?  Is there an assumed "deny any" at
> the end of those rule sets too?
> 
> Thanks,
> jdl
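For illustration, the implicit deny described above can be sketched as first-match evaluation over the interface's ACLs with a deny fallback (a toy model, not the acl-plugin source; the rule and packet shapes are invented):

```python
# Illustrative sketch: first-match ACL evaluation with implicit deny.

def evaluate(acls, packet):
    for acl in acls:            # ACLs applied to the interface, in order
        for rule in acl:
            if rule["match"](packet):
                return rule["action"]
    return "deny"               # implicit deny-any when nothing matches

acl = [{"match": lambda p: p["proto"] == 6, "action": "permit"}]
assert evaluate([acl], {"proto": 6}) == "permit"   # TCP rule hits
assert evaluate([acl], {"proto": 17}) == "deny"    # UDP falls through
```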

Re: [vpp-dev] FD.io Notification: VPP openSUSE jobs

2017-10-31 Thread Vanessa Valderrama
This change was delayed due to some last minute changes and other
unexpected issues.

The change is still scheduled for today:

*When:* 2017-10-31 @ 1800 UTC (11:00am PDT)

Thank you,
Vanessa

On 10/30/2017 04:32 PM, Vanessa Valderrama wrote:
>
> *What:*
>
> The openSUSE image issues have been resolved.  The jobs are passing on
> the sandbox.  I will be enabling openSUSE for VPP in production
> tomorrow.  Please feel free to review the jobs on the sandbox.
>
> https://jenkins.fd.io/sandbox/
>
> *When:* 2017-10-31 @ 1500 UTC (8:00am PDT)
>
> *Where:* Please contact LF via IRC fdio-infra (valderrv) to report issues
>
> *Impact:*  VPP Gerrit jobs could vote -1 if openSUSE jobs fail
>
>
>




[vpp-dev] [FD.io Helpdesk #47101] No joy: ping6 gerrit.fd.io

2017-10-31 Thread Vanessa Valderrama via RT
Anton and I worked with Dave to troubleshoot this issue.  It appears to be 
isolated.  He's going to contact Cisco IT and has approved closing this ticket.

On Thu Oct 26 13:45:06 2017, valderrv wrote:
> When Dave returns from his conference we'll schedule time to
> troubleshoot this issue with him.
> 
> On Thu Oct 19 16:46:42 2017, valderrv wrote:
> >
> >
> > $ ssh -6 -v -p29418 gerrit.fd.io
> > OpenSSH_7.2p2 Ubuntu-4ubuntu2.2, OpenSSL 1.0.2g  1 Mar 2016
> > debug1: Reading configuration data /users/dbarach/.ssh/config
> > debug1: /users/dbarach/.ssh/config line 5: Applying options for
> > gerrit.fd.io
> > debug1: Reading configuration data /etc/ssh/ssh_config
> > debug1: /etc/ssh/ssh_config line 19: Applying options for *
> > debug1: Connecting to gerrit.fd.io
> > [2604:e100:1:0:f816:3eff:fe7e:8731]
> > port 29418.
> > debug1: Connection established.
> > debug1: identity file /users/dbarach/.ssh/id_rsa type 1
> > debug1: key_load_public: No such file or directory
> > debug1: identity file /users/dbarach/.ssh/id_rsa-cert type -1
> > debug1: key_load_public: No such file or directory
> > debug1: identity file /users/dbarach/.ssh/id_dsa type -1
> > debug1: key_load_public: No such file or directory
> > debug1: identity file /users/dbarach/.ssh/id_dsa-cert type -1
> > debug1: key_load_public: No such file or directory
> > debug1: identity file /users/dbarach/.ssh/id_ecdsa type -1
> > debug1: key_load_public: No such file or directory
> > debug1: identity file /users/dbarach/.ssh/id_ecdsa-cert type -1
> > debug1: key_load_public: No such file or directory
> > debug1: identity file /users/dbarach/.ssh/id_ed25519 type -1
> > debug1: key_load_public: No such file or directory
> > debug1: identity file /users/dbarach/.ssh/id_ed25519-cert type -1
> > debug1: Enabling compatibility mode for protocol 2.0
> > debug1: Local version string SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.2
> > debug1: Remote protocol version 2.0, remote software version
> > GerritCodeReview_2.14.4 (SSHD-CORE-1.4.0)
> > debug1: no match: GerritCodeReview_2.14.4 (SSHD-CORE-1.4.0)
> > debug1: Authenticating to gerrit.fd.io:29418 as 'dbarach'
> > debug1: SSH2_MSG_KEXINIT sent
> > debug1: SSH2_MSG_KEXINIT received
> > debug1: kex: algorithm: ecdh-sha2-nistp256
> > debug1: kex: host key algorithm: ssh-rsa
> > debug1: kex: server->client cipher: aes128-ctr MAC: hmac-sha2-256
> > compression: none
> > debug1: kex: client->server cipher: aes128-ctr MAC: hmac-sha2-256
> > compression: none
> > debug1: sending SSH2_MSG_KEX_ECDH_INIT
> > debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
> > 
> >
> > $ nc -6 -nv 2604:e100:1:0:f816:3eff:fe7e:8731 29418
> > Connection to 2604:e100:1:0:f816:3eff:fe7e:8731 29418 port [tcp/*]
> > succeeded!
> > SSH-2.0-GerritCodeReview_2.14.4 (SSHD-CORE-1.4.0)
> > 
> >
> > $ traceroute6 gerrit.fd.io
> > traceroute to dev.fd.io (2604:e100:1:0:f816:3eff:fe7e:8731) from
> > 2001:420:2c50:2014:66f6:9dff:fe7a:118, 30 hops max, 24 byte packets
> >  1  2001:420:2c50:2014::1 (2001:420:2c50:2014::1)  0.732 ms  0.351 ms
> > 0.305 ms
> >  2  2001:420:2c50:2000::1 (2001:420:2c50:2000::1)  0.95 ms  0.573 ms
> > 0.46 ms
> >  3  bxb22-cibb-gw1-ten1-5.cisco.com (2001:420:2c40:f::)  0.897 ms
> > 0.813 ms  0.747 ms
> >  4  bxb23-sbb-gw1-ten1-7.cisco.com (2001:420:2c40:33::)  0.881 ms
> > 0.847 ms  0.708 ms
> >  5  bxb22-rbb-gw1-ten1-7.cisco.com (2001:420:2c40:b::1)  0.832 ms
> > 0.754 ms  0.707 ms
> >  6  2001:420:c000:454:: (2001:420:c000:454::)  0.56 ms  0.577 ms
> > 0.526 ms
> >  7  capnet-rtp10-bxb25-10ge.cisco.com (2001:420:c000:135::1)  21.136
> > ms  21.182 ms  21.061 ms
> >  8  rtp10-cd-rbb-gw1-por20.cisco.com (2001:420:c000:401::1)  21.016
> > ms
> > 21.026 ms  20.999 ms
> >  9  rpt10-corp-gw1-ten0-1-0.cisco.com (2001:420:2001:11e::)  21.107
> > ms
> > 21.109 ms  21.051 ms
> > 10  rtp10-cd-dmzbb-gw1-vla777.cisco.com (2001:420:2040:d::6)  21.575
> > ms  21.331 ms  21.244 ms
> > 11  rtp10-cd-isp-gw1-ten0-0-0.cisco.com (2001:420:2040:5::)  66.238
> > ms
> > 47.285 ms  48.875 ms
> > 12  2001:1890:c00:7401::ee6e:5239 (2001:1890:c00:7401::ee6e:5239)
> > 22.615 ms  22.196 ms  22.176 ms
> > 13  rlgnc22crs.ipv6.att.net (2001:1890:ff::12:123:138:162)
> > 33.943
> > ms  34.681 ms  31.996 ms
> > 14  wswdc21crs.ipv6.att.net (2001:1890:ff::12:122:2:190)  34.221
> > ms  31.865 ms  31.968 ms
> > 15  wswdc401igs.ipv6.att.net (2001:1890:ff::12:122:113:37)
> > 32.893
> > ms  34.639 ms  31.971 ms
> > 16  att-gw.wswdc.tinet.net (2001:1890:1fff:20f:192:205:37:194)
> > 30.417
> > ms  32.166 ms  30.514 ms
> > 17  xe-7-3-0.cr0-mtl1.ip6.gtt.net (2001:668:0:2::1:4452)  48.912 ms
> > 48.865 ms  48.824 ms
> > 18  2001:668:0:3::0:adcd:2f2e (2001:668:0:3::0:adcd:2f2e)
> > 48.897 ms  48.851 ms  48.757 ms
> > 19  2605:9000:0:f3f::3 (2605:9000:0:f3f::3)  48.861 ms  51.709 ms
> > 48.73 ms
> > 20  2605:9000:0:100::1 (2605:9000:0:100::1)  50.773 ms  48.931 ms
> > 51.24 ms
> > 21  2605:9000:400:107::c (2605:9000:400:107::c)  48.992 ms  48.933 ms
> > 48.864 ms
> > 

Re: [vpp-dev] FD.io Notification: VPP openSUSE jobs

2017-10-31 Thread Marco Varlese
Thank you Vanessa :)

On Mon, 2017-10-30 at 16:32 -0500, Vanessa Valderrama wrote:
> *What:*
> 
> The openSUSE image issues have been resolved.  The jobs are passing on
> the sandbox.  I will be enabling openSUSE for VPP in production
> tomorrow.  Please feel free to review the jobs on the sandbox.
> 
> https://jenkins.fd.io/sandbox/
> 
> *When:* 2017-10-31 @ 1500 UTC (8:00am PDT)
> 
> *Where:* Please contact LF via IRC fdio-infra (valderrv) to report issues
> 
> *Impact:* VPP Gerrit jobs could vote -1 if openSUSE jobs fail
> 

Re: [vpp-dev] VPP default graph

2017-10-31 Thread Dave Barach (dbarach)
Dear Mostafa,

First, “show vlib graph” describes the entire graph in detail.

Vpp uses ingress flow-hashing (e.g. hardware RSS hashing) across a set of 
threads running identical graph replicas to achieve multi-core scaling.

Historical experiments with pipelining in vpp dissuaded me from pursuing that 
processing model: the entire pipeline runs at the speed of the slowest stage. 
More to the point: if the offered workload changes, one needs to reconfigure 
the pipeline to achieve decent performance.
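Dave's point can be illustrated with back-of-the-envelope arithmetic (hypothetical rates, not measurements): a pipeline's aggregate rate is capped by its slowest stage, while identical graph replicas scale with core count, given even RSS distribution.

```python
# Illustrative throughput arithmetic (not VPP code; rates are invented).

def pipeline_throughput(stage_rates_mpps):
    """A pipeline moves packets at the rate of its slowest stage."""
    return min(stage_rates_mpps)

def replica_throughput(per_core_rate_mpps, n_cores):
    """Identical graph replicas each run the full graph on one core."""
    return per_core_rate_mpps * n_cores

stages = [12.0, 4.0, 9.0]                  # hypothetical per-stage rates
assert pipeline_throughput(stages) == 4.0  # gated by the 4 Mpps stage
assert replica_throughput(5.0, 3) == 15.0  # three replicas at 5 Mpps each
```

This is also why a workload change hurts the pipeline model: the slowest stage moves, and the stage-to-core assignment must be re-balanced.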

In vpp, you can spin up arbitrary threads and process packets however you like, 
of course.

It would help if you’d describe your application in detail, otherwise we won’t 
be able to make detailed suggestions.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Mostafa Salari
Sent: Tuesday, October 31, 2017 8:06 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP default graph

Hi, I have 3 questions:
1. What is the default structure of the graph nodes when VPP is running?
2. In the dpdk ip_pipeline application, I was able to choose how many instances 
were created and the lcore that each instance ran on. That let me make custom 
optimizations and build a fast packet-processing pipeline for my particular 
goal. What is the way to do this in VPP?
3. What should I do to change the default arrangement?

Best regards,
Mostafa

Re: [vpp-dev] link dpdk.a statically to vpp

2017-10-31 Thread Gonzalez Monroy, Sergio

Hi Shachar,

On 30/10/2017 16:00, Shachar Beiser wrote:


Hi,

 I would like to link DPDK statically to VPP and not as a 
shared object.

 I see there is an option:

sudo make dpdk-install-dev DPDK_MLX5_PMD=y ENABLE_DPDK_SHARED=n

but it seems that it is not enough.

Can you direct me on what I need to do?

    -Shachar Beiser




I think there is something to note here.
DPDK is a VPP plugin (dpdk_plugin.so), thus a shared object.

The option you are using just affects how the plugin is built. By 
default DPDK *is* statically linked into the plugin; with 
ENABLE_DPDK_SHARED it would be dynamically linked against DPDK.


HTH,
Sergio




Re: [vpp-dev] Segmentation fault in ikev2 test

2017-10-31 Thread Neale Ranns (nranns)
Hi Xyxue,

If you don’t have a FIB with index 2 (i.e. you haven’t created additional 
IP tables/VRFs), then this:
  The '(vnet_buffer (p0)->sw_if_index[VLIB_TX] ' is 2 when 'del-sa' execute 
fail.

is certainly the cause of your crash. I would attempt to determine when and 
where this value was set in the IPsec path.
If it doesn’t happen all the time, that’s often indicative of a single- 
versus dual-loop code path.

regards
neale


From:  on behalf of 薛欣颖 
Date: Tuesday, 31 October 2017 at 09:43
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Segmentation fault in ikev2 test


Hi,


I'm testing ikev2 and hit an error in my test:

 VPP# ikev2 initiate del-sa ee25d9aa4e7a0f1a
0: ikev2_generate_message:1774: sa state changed to 
IKEV2_STATE_NOTIFY_AND_DELETE

 Program received signal SIGSEGV, Segmentation fault.
0x2b293d767f4b in ip4_fib_mtrie_lookup_step_one (m=0x0, 
dst_address=0x2b2991d243f4) at 
/root/tmp64/build-data/../src/vnet/ip/ip4_mtrie.h:225
225   next_leaf = m->root_ply.leaves[dst_address->as_u16[0]];
(gdb) bt
#0  0x2b293d767f4b in ip4_fib_mtrie_lookup_step_one (m=0x0, 
dst_address=0x2b2991d243f4) at 
/root/tmp64/build-data/../src/vnet/ip/ip4_mtrie.h:225
#1  0x2b293d769bb3 in ip4_lookup_inline (vm=0x2b293d59c120 
, node=0x2b293f843080, frame=0x2b293fbfe800,
lookup_for_responses_to_locally_received_packets=0) at 
/root/tmp64/build-data/../src/vnet/ip/ip4_forward.c:362
#2  0x2b293d76a02f in ip4_lookup (vm=0x2b293d59c120 , 
node=0x2b293f843080, frame=0x2b293fbfe800)
at /root/tmp64/build-data/../src/vnet/ip/ip4_forward.c:472
#3  0x2b293d31b910 in dispatch_node (vm=0x2b293d59c120 , 
node=0x2b293f843080, type=VLIB_NODE_TYPE_INTERNAL,
dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x2b293fbfe800, 
last_time_stamp=375367370313442) at 
/root/tmp64/build-data/../src/vlib/main.c:1032
#4  0x2b293d31bef3 in dispatch_pending_node (vm=0x2b293d59c120 
, pending_frame_index=0, last_time_stamp=375367370313442)
at /root/tmp64/build-data/../src/vlib/main.c:1182
#5  0x2b293d31e009 in vlib_main_or_worker_loop (vm=0x2b293d59c120 
, is_main=1) at /root/tmp64/build-data/../src/vlib/main.c:1649
#6  0x2b293d31e0bb in vlib_main_loop (vm=0x2b293d59c120 ) 
at /root/tmp64/build-data/../src/vlib/main.c:1668
#7  0x2b293d31e76f in vlib_main (vm=0x2b293d59c120 , 
input=0x2b293f728fb0) at /root/tmp64/build-data/../src/vlib/main.c:1804
#8  0x2b293d361d48 in thread0 (arg=47456122945824) at 
/root/tmp64/build-data/../src/vlib/unix/main.c:515
#9  0x2b293e2e8dcc in clib_calljmp () at 
/root/tmp64/build-data/../src/vppinfra/longjmp.S:128
#10 0x7ffcf2dbf510 in ?? ()
#11 0x2b293d3621e1 in vlib_unix_main (argc=4, argv=0x7ffcf2dc0798) at 
/root/tmp64/build-data/../src/vlib/unix/main.c:578
#12 0x00407ff1 in main (argc=4, argv=0x7ffcf2dc0798) at 
/root/tmp64/build-data/../src/vpp/vnet/main.c:206
(gdb)

in ip4_lookup_inline
'
fib_index0 =
vec_elt (im->fib_index_by_sw_if_index,
 vnet_buffer (p0)->sw_if_index[VLIB_RX]);
fib_index0 =
(vnet_buffer (p0)->sw_if_index[VLIB_TX] ==
 (u32) ~ 0) ? fib_index0 : vnet_buffer (p0)->sw_if_index[VLIB_TX];
'

The 'vnet_buffer (p0)->sw_if_index[VLIB_TX]' is (u32) ~0 when 'del-sa' 
executes successfully.
The 'vnet_buffer (p0)->sw_if_index[VLIB_TX]' is 2 when 'del-sa' fails.
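For illustration, the fib-index selection in the quoted ip4_lookup_inline snippet reduces to the following (a Python sketch of the C logic with a toy FIB table, not VPP code):

```python
# Illustrative sketch of the quoted fib_index selection: the TX index
# overrides the RX-derived one unless it is (u32) ~0. If sw_if_index[VLIB_TX]
# names a FIB that was never created, the lookup dereferences a NULL mtrie.

U32_MAX = 0xFFFFFFFF            # (u32) ~0 in the C code
fibs = {0: "default-fib"}       # only FIB 0 exists in this toy setup

def select_fib(rx_fib_index, tx_fib_index):
    return rx_fib_index if tx_fib_index == U32_MAX else tx_fib_index

assert select_fib(0, U32_MAX) == 0   # success case: RX-derived FIB used
bad = select_fib(0, 2)               # failure case from the backtrace
assert fibs.get(bad) is None         # FIB 2 absent -> NULL mtrie -> crash
```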

By the way, the fault doesn't happen every time.

What should I do to solve the problem?

Thanks,
xyxue


Re: [vpp-dev] problem in l3 VLAN

2017-10-31 Thread Neale Ranns (nranns)

Hi Xyxue,

Support for VLANs on host/af_packet interface was added rather recently. See:
  https://gerrit.fd.io/r/#/c/8435/
and its cherry-picked cousins.

/neale


From:  on behalf of 薛欣颖 
Date: Tuesday, 31 October 2017 at 02:10
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] problem in l3 VLAN


Hi,

Does VPP support L3 VLAN?

I'm testing two directly connected sub-interfaces. The configuration and result 
are shown below:

vpp1:
VPP# set interface ip address host-eth2.1 1.1.1.2/24
VPP#
VPP# ping 1.1.1.1

Statistics: 5 sent, 0 received, 100% packet loss
VPP# show adj
[@0] ipv4-glean: host-eth1
[@1] ipv4-glean: host-eth2
[@2] ipv4 via 192.168.247.140 host-eth2: 000c2903f35c293129c20800
[@3] ipv4 via 192.1.190.254 host-eth1: 005056f225e2000c293129b80800
[@4] ipv4 via 192.168.247.254 host-eth2: 005056ec6077000c293129c20800
[@5] ipv4-glean: host-eth2.1
VPP#

vpp2:
VPP# show interface address
host-eth1 (up):
  192.3.1.130/24
host-eth2 (up):
  192.168.247.140/24
host-eth2.1 (up):
  1.1.1.1/24
VPP# show adj
[@0] ipv4-glean: host-eth1
[@1] ipv4-glean: host-eth2
[@2] ipv4 via 192.168.247.138 host-eth2: 000c293129c2000c2903f3500800
[@3] ipv4 via 192.3.1.1 host-eth1: 005056c7000c2903f3460800
[@4] ipv4 via 192.168.247.254 host-eth2: 005056ec6077000c2903f3500800
[@5] ipv4 via 192.3.1.254 host-eth1: 005056e6791c2903f3460800
[@6] ipv4-glean: host-eth2.1
VPP#

Thanks,
xyxue





Re: [vpp-dev] nat44 address pool not fully used

2017-10-31 Thread Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco)
Hi,

You need to increase “translation hash buckets” in the startup configuration of 
the NAT plugin (https://wiki.fd.io/view/VPP/NAT#Startup_config). We added a 
session-number limit to avoid an out-of-memory crash at runtime (maximum 
sessions = 10 x “translation hash buckets”).
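Assuming the 10x relation above, sizing the bucket count for a target session limit is simple arithmetic (a sketch; `buckets_needed` is an invented helper, not a VPP API):

```python
# Illustrative sizing arithmetic for the NAT startup parameter,
# assuming: maximum sessions = 10 x "translation hash buckets".

SESSIONS_PER_BUCKET = 10

def buckets_needed(target_sessions):
    # Round up so the resulting limit is at least the target.
    return -(-target_sessions // SESSIONS_PER_BUCKET)

assert buckets_needed(65536) == 6554               # ~64k sessions
assert 6554 * SESSIONS_PER_BUCKET >= 65536         # limit covers the target
```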

Regards,
Matus


From: Yuliang Li [mailto:yuliang...@yale.edu]
Sent: Monday, October 30, 2017 8:21 PM
To: Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco) 

Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] nat44 address pool not fully used

Here are the results:
   CountNode  Reason
 10240  nat44-in2out-slowpath Good in2out packets processed
   7847236  nat44-in2out-slowpath Maximum sessions exceeded
  23864696  nat44-in2out  Good in2out packets processed
 10240  nat44-in2out-slowpath Good in2out packets processed
   7846673  nat44-in2out-slowpath Maximum sessions exceeded
  23864371  nat44-in2out  Good in2out packets processed

It seems the maximum-sessions limit is the cause. I just updated from an older 
version to the latest by pulling from https://gerrit.fd.io/r/vpp 3 days ago. I 
did not change the configuration file that I used before. Did any default 
parameter value change?

On Mon, Oct 30, 2017 at 1:29 AM, Matus Fabian -X (matfabia - PANTHEON 
TECHNOLOGIES at Cisco) wrote:
Hi,

Are you on latest?
Could you please provide “show node counters” output.

Regards,
Matus


From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Yuliang Li
Sent: Saturday, October 28, 2017 8:48 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] nat44 address pool not fully used

Hi,

I use "nat44 add addr 10.1.1.2-10.1.1.254", in the hope that in2out traffic can 
use any of the source IPs in the range.

However, when I generate in2out traffic comprising 65536 different internal 
source IPs (with the same source port), only 10.1.1.2 is used as the external 
source IP, with 20162 port numbers allocated. That means only 20162 internal 
IPs get translated, while the rest are dropped. I am wondering why it does not 
allocate other addresses in the pool (e.g., 10.1.1.3)?

Here is the output of show nat44 detail:
10.1.1.2
  tenant VRF independent
  0 busy udp ports
  20162 busy tcp ports
  0 busy icmp ports
10.1.1.3
  tenant VRF independent
  0 busy udp ports
  0 busy tcp ports
  0 busy icmp ports
 (all following shows 0 busy ports).

Thanks,
--
Yuliang Li
PhD student
Department of Computer Science
Yale University



--
Yuliang Li
PhD student
Department of Computer Science
Yale University