[vpp-dev] ipsec vpn

2018-12-16 Thread xulang
Hi all,
How can we use an IPsec VPN to protect multiple subnetworks,
such as 10.11.0.0/16 and 192.168.0.0/16?
Is this information negotiated through the IKEv2 AUTH exchange?
The code shows that there is only one traffic selector (TS) per profile, so how
can that protect multiple subnets?
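
(Illustrative sketch only, not from the original thread: with the policy-based
IPsec CLI, one workaround is to add one SPD policy pair per subnet pair while
reusing the same SAs. The interface name, SA ids and the 172.16.0.0/16 remote
range below are hypothetical; the syntax follows the configuration example that
appears later in this archive, and matching inbound policies are needed as well.)

ipsec spd add 1
set interface ipsec spd GigabitEthernet0/8/0 1
ipsec policy add spd 1 priority 10 outbound action protect sa 10 local-ip-range 10.11.0.0 - 10.11.255.255 remote-ip-range 172.16.0.0 - 172.16.255.255
ipsec policy add spd 1 priority 10 outbound action protect sa 10 local-ip-range 192.168.0.0 - 192.168.255.255 remote-ip-range 172.16.0.0 - 172.16.255.255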




Regards,
xlangyun


[vpp-dev] Out of memory ?

2018-11-27 Thread xulang
Hi all,
I sent L2 packets with 4 MACs to VPP, and then it crashed; the stack is below.
Out of memory? How can I fix this?
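
(For reference, a hedged reading: the backtrace below fails inside
shmem_cli_output / vec_resize while "show l2fib" formats its output, which
suggests the VPP main heap, rather than system memory, is exhausted. Assuming
that is the case, one workaround, echoing the heapsize advice given elsewhere in
this archive, is to enlarge the main heap in startup.conf:)

heapsize 2G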


(gdb) bt
#0  0x004074ef in debug_sigabrt (sig=)
    at /syslog/share/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vpp/vnet/main.c:63
#1
#2  0x7f439ca84a0d in raise () from /lib/libc.so.6
#3  0x7f439ca85944 in abort () from /lib/libc.so.6
#4  0x004076be in os_panic ()
    at /syslog/share/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vpp/vnet/main.c:290
#5  0x7f439ddd4c7f in clib_mem_alloc_aligned_at_offset (
    os_out_of_memory_on_failure=1, align_offset=, align=4, size=7825458)
    at /syslog/share/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vppinfra/mem.h:102
#6  vec_resize_allocate_memory (v=v@entry=0x3055afb0,
    length_increment=length_increment@entry=100, data_bytes=,
    header_bytes=, header_bytes@entry=0, data_align=data_align@entry=4)
    at /syslog/share/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vppinfra/vec.c:84
#7  0x004277d8 in _vec_resize (data_align=0, header_bytes=0,
    data_bytes=, length_increment=100, v=)
    at /syslog/share/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vppinfra/vec.h:142
#8  shmem_cli_output (arg=139927313067576,
    buffer=0x7f43608fc8bc "  7a:7a:c0:a8:f8:66  14    0/13  --  -  GigabitEthernet0/0/3  \n",
    buffer_bytes=100)
    at /syslog/share/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vpp/api/api.c:1095
#9  0x7f43a4d8222e in vlib_cli_output (vm=,
    fmt=fmt@entry=0x7f439ef16c90 "%=19U%=7d%=7d  %3d/%-3d%=9v%=7s%=7s%=5s%=30U")
    at /syslog/share/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlib/cli.c:594
#10 0x7f439eb2581a in display_l2fib_entry (key=...,
    result=result@entry=..., s=s@entry=0x7f43608b07c0 "3")
    at /syslog/share/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vnet/l2/l2_fib.c:120
#11 0x7f439eb25af6 in show_l2fib (vm=0x7f43a4fe19c0 ,
    input=, cmd=)
    at /syslog/share/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vnet/l2/l2_fib.c:274
#12 0x7f43a4d82641 in vlib_cli_dispatch_sub_commands (
    vm=vm@entry=0x7f43a4fe19c0 , cm=cm@entry=0x7f43a4fe1c28 ,
    input=input@entry=0x7f435dca2e40, parent_command_index=)
    at /syslog/share/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlib/cli.c
(gdb) frame 10
#10 0x7f439eb2581a in display_l2fib_entry (key=...,
    result=result@entry=..., s=s@entry=0x7f43608b07c0 "3")
    at /syslog/share/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vnet/l2/l2_fib.c:120

120  /syslog/share/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vnet/l2/l2_fib.c:
     No such file or directory.


root@QWAC:/# cat /proc/meminfo 
MemTotal:3938480 kB
MemFree: 1956388 kB
MemAvailable:2299188 kB
Buffers:  179036 kB
Cached:   186656 kB
SwapCached:0 kB
Active:   482992 kB
Inactive: 134164 kB
Active(anon): 253420 kB
Inactive(anon):11816 kB
Active(file): 229572 kB
Inactive(file):   122348 kB
Unevictable:   0 kB
Mlocked:   0 kB
SwapTotal: 0 kB
SwapFree:  0 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages:252064 kB
Mapped:   208352 kB
Shmem: 13276 kB
Slab:  36312 kB
SReclaimable:  20616 kB
SUnreclaim:15696 kB
KernelStack:2016 kB
PageTables: 4236 kB
NFS_Unstable:  0 kB
Bounce:0 kB
WritebackTmp:  0 kB
CommitLimit: 1444952 kB
Committed_AS:2579656 kB
VmallocTotal:   34359738367 kB
VmallocUsed:  268632 kB
VmallocChunk:   34359462972 kB
HugePages_Total: 512
HugePages_Free:  384
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
DirectMap4k:   13688 kB
DirectMap2M: 4071424 kB




unix {
  nodaemon
  log /tmp/vpp.log
  full-coredump
}


api-trace {
  on
}


api-segment {
  gid vpp
}


cpu {
## In the VPP there is one main thread and optionally the user can create 
worker(s)
## The main thread and worker thread(s) can be pinned to CPU core(s) manually 
or automatically


## Manual pinning of thread(s) to CPU core(s)


## Set logical CPU core where main thread runs
#main-core 1


## Set logical CPU core(s) where worker threads are running
#corelist-workers 2-3


## Automatic pinning of thread(s) to CPU core(s)


## Sets number of CPU core(s) to be skipped (1 ... N-1)
## Skipped CPU core(s) are not used for pinning main thread and working 
thread(s).
## The main thread is automatic

[vpp-dev] PPPOE

2018-11-25 Thread xulang
Hi all,
I would like to use a PPPoE server and a PPPoE client; is there any material
about this?




Regards,
Xlangyun


[vpp-dev] IPSEC VPN NAT Traversal

2018-11-24 Thread xulang
Hi all,
It seems that this feature does not work:


ipsec: support UDP encap/decap for NAT traversal










Regards,
Xlangyun


Re: [vpp-dev] vlib_buffer_alloc error

2018-11-10 Thread xulang
My version is 17.04.






At 2018-11-10 16:38:17, "xulang"  wrote:

Hi all,


Sometimes (in fact quite often) we get a strange buffer index, which can cause a
fatal problem for VPP when we use IPsec VPN.
Here are some details from the function fill_free_list:


(gdb) p mb0
$31 = (struct rte_mbuf *) 0x7f4a796e3740
(gdb) p mb1
$32 = (struct rte_mbuf *) 0x7f4a796e2d00
(gdb) p mb2
$33 = (struct rte_mbuf *) 0x7f4a796e22c0
(gdb) p mb3
$34 = (struct rte_mbuf *) 0x7f4a796e1880


(gdb) p b1
$29 = (vlib_buffer_t *) 0x7f4a796e2d80
(gdb) p b0
$30 = (vlib_buffer_t *) 0x7f4a796e37c0
(gdb) p b2
$27 = (vlib_buffer_t *) 0x7f4a796e2340
(gdb) p b3
$28 = (vlib_buffer_t *) 0x7f4a796e1900


(gdb) p bi0
$21 = 3281633096
(gdb) p bi1
$22 = 32586
(gdb) p bi2
$25 = 84176
(gdb) p bi3
$26 = 84135


The buffer addresses are right, but the buffer indexes are quite strange.
Any clue or suggestion here?




(gdb)
fill_free_list (min_free_buffers=1, fl=0x7f4ac36eee40, vm=0x7f4b07dcf2c0 
)
at 
/home/vbras/codenew/VBRASV100R001_new_trunk/vpp1704/build-data/../src/plugins/dpdk/buffer.c:238
238 vec_add1_aligned (fl->buffers, bi0, CLIB_CACHE_LINE_BYTES);
(gdb)
233 bi0 = vlib_get_buffer_index (vm, b0);

(gdb) call sizeof(struct rte_mbuf)

$35 = 128




Regards,
Ewan






[vpp-dev] vlib_buffer_alloc error

2018-11-10 Thread xulang
Hi all,


Sometimes (in fact quite often) we get a strange buffer index, which can cause a
fatal problem for VPP when we use IPsec VPN.
Here are some details from the function fill_free_list:


(gdb) p mb0
$31 = (struct rte_mbuf *) 0x7f4a796e3740
(gdb) p mb1
$32 = (struct rte_mbuf *) 0x7f4a796e2d00
(gdb) p mb2
$33 = (struct rte_mbuf *) 0x7f4a796e22c0
(gdb) p mb3
$34 = (struct rte_mbuf *) 0x7f4a796e1880


(gdb) p b1
$29 = (vlib_buffer_t *) 0x7f4a796e2d80
(gdb) p b0
$30 = (vlib_buffer_t *) 0x7f4a796e37c0
(gdb) p b2
$27 = (vlib_buffer_t *) 0x7f4a796e2340
(gdb) p b3
$28 = (vlib_buffer_t *) 0x7f4a796e1900


(gdb) p bi0
$21 = 3281633096
(gdb) p bi1
$22 = 32586
(gdb) p bi2
$25 = 84176
(gdb) p bi3
$26 = 84135


The buffer addresses are right, but the buffer indexes are quite strange.
Any clue or suggestion here?




(gdb)
fill_free_list (min_free_buffers=1, fl=0x7f4ac36eee40, vm=0x7f4b07dcf2c0 
)
at 
/home/vbras/codenew/VBRASV100R001_new_trunk/vpp1704/build-data/../src/plugins/dpdk/buffer.c:238
238 vec_add1_aligned (fl->buffers, bi0, CLIB_CACHE_LINE_BYTES);
(gdb)
233 bi0 = vlib_get_buffer_index (vm, b0);

(gdb) call sizeof(struct rte_mbuf)

$35 = 128




Regards,
Ewan


[vpp-dev] openwrt gdb threads

2018-09-29 Thread xulang
Hi all,
I tried to run VPP on multiple CPU cores on an OpenWrt system.
It did work: more than one CPU core was consumed at 100 percent.
But I don't know why only one thread shows up when we use the command "info
threads" in gdb.
Is there any clue here?




Regards,
xiaoxu


[vpp-dev] ipsec vpn(site to site)

2018-08-28 Thread xulang
Hi all,
I'd like to build a site-to-site VPN tunnel with VPP only.
Because VPP can't be the IKE initiator, we can't use IKEv2, so how can we build
this? Are there any documents about this?
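
(A hedged sketch, assuming static/manual keying is acceptable when no IKE
initiator is available: configure mirrored SAs and protect policies on both
sites, along the lines of the configuration example that appears later in this
archive. The interface name, SPIs, keys and addresses are placeholders, and the
peer needs the SA directions swapped.)

ipsec sa add 10 spi 1001 esp crypto-alg aes-cbc-128 crypto-key <hex-key> integ-alg sha1-96 integ-key <hex-key>
ipsec sa add 20 spi 1000 esp crypto-alg aes-cbc-128 crypto-key <hex-key> integ-alg sha1-96 integ-key <hex-key>
ipsec spd add 1
set interface ipsec spd GigabitEthernet0/8/0 1
ipsec policy add spd 1 priority 10 outbound action protect sa 10 local-ip-range 192.168.100.3 - 192.168.100.3 remote-ip-range 192.168.100.2 - 192.168.100.2
ipsec policy add spd 1 priority 10 inbound action protect sa 20 local-ip-range 192.168.100.3 - 192.168.100.3 remote-ip-range 192.168.100.2 - 192.168.100.2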


Regards,
Xiaoxu




 



Re: [vpp-dev] target symbol

2018-07-29 Thread xulang
I would like to use the tool "nm" to check the function symbols,
for example "nm vpp | grep vlib".






At 2018-07-29 19:56:02, "Dave Barach (dbarach)"  wrote:


I don’t understand what you mean by “reserve symbols.” Please explain what 
you’re trying to do in more detail.

 

From:vpp-dev@lists.fd.io  On Behalf Of xulang
Sent: Sunday, July 29, 2018 2:43 AM
To:vpp-dev@lists.fd.io
Subject: [vpp-dev] target symbol

 

Hi all,

How can I reserve symbols in the target file vpp? What should I do to the
Makefile under build-root?

 

 

Regards,

xiaoC

 



[vpp-dev] target symbol

2018-07-28 Thread xulang
Hi all,
How can I reserve symbols in the target file vpp? What should I do to the
Makefile under build-root?




Regards,
xiaoC


Re: [vpp-dev] dpdk & vpp

2018-07-27 Thread xulang
Hi all,
I used gdb to trace the tx procedure, but it stopped at "return ops->enqueue(mp,
obj_table, n)" and did not go any deeper.
This is the error-drop node procedure.
Below is the stack info.



dpdk_rte_pktmbuf_free (b=, vm=) at 
/home/vbras/pack/VBRASV100R001_new_trunk/vpp1704/build-data/../src/plugins/dpdk/buffer.c:391
391dpdk_rte_pktmbuf_free (vm, b);
(gdb) 
rte_pktmbuf_free_seg (m=) at 
/home/vbras/pack/VBRASV100R001_new_trunk/vpp1704/build-data/../src/plugins/dpdk/buffer.c:391
391dpdk_rte_pktmbuf_free (vm, b);
(gdb) 
__rte_mbuf_raw_free (m=) at 
/home/vbras/pack/VBRASV100R001_new_trunk/vpp1704/build-data/../src/plugins/dpdk/buffer.c:391
391dpdk_rte_pktmbuf_free (vm, b);
(gdb) 
rte_mempool_put (obj=, mp=) at 
/home/vbras/pack/VBRASV100R001_new_trunk/vpp1704/build-data/../src/plugins/dpdk/buffer.c:391
391dpdk_rte_pktmbuf_free (vm, b);
(gdb) 
rte_mempool_put_bulk (n=, obj_table=, 
mp=) at 
/home/vbras/pack/VBRASV100R001_new_trunk/vpp1704/build-data/../src/plugins/dpdk/buffer.c:391
391dpdk_rte_pktmbuf_free (vm, b);
(gdb) 
rte_mempool_generic_put (flags=, cache=, 
n=, obj_table=, mp=)
at 
/home/vbras/pack/VBRASV100R001_new_trunk/vpp1704/build-data/../src/plugins/dpdk/buffer.c:391
391dpdk_rte_pktmbuf_free (vm, b);
(gdb) 
__mempool_generic_put (cache=, n=, 
obj_table=, mp=) at 
/home/vbras/pack/VBRASV100R001_new_trunk/vpp1704/build-data/../src/plugins/dpdk/buffer.c:391
391dpdk_rte_pktmbuf_free (vm, b);
(gdb) 
rte_mempool_ops_enqueue_bulk (n=, obj_table=, 
mp=)
at 
/home/vbras/pack/VBRASV100R001_new_trunk/vpp1704/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:495
495return ops->enqueue(mp, obj_table, n);
(gdb) 
rte_pktmbuf_free_seg (m=) at 
/home/vbras/pack/VBRASV100R001_new_trunk/vpp1704/build-root/install-vpp-native/dpdk/include/dpdk/rte_mbuf.h:1242
1242        if (likely(NULL != (m = __rte_pktmbuf_prefree_seg(m)))) {
(gdb) 







At 2018-07-27 17:05:29, "xulang"  wrote:

Hi Ray,
It seems that it does not work, but I am not sure about that; please check this
for me.


VPP.mk:
vpp_uses_external_dpdk = yes
vpp_dpdk_inc_dir = /usr/include/dpdk
vpp_dpdk_lib_dir = /usr/lib
# vpp_dpdk_shared_lib = yes


DPDK:
export 
RTE_SDK=/home/vbras/pack/VBRASV100R001_new_trunk/vpp1704/dpdk/dpdktest/dpdk-17.02
export RTE_TARGET=x86_64-native-linuxapp-gcc
export DESTDIR=/usr
make install T=$RTE_TARGET


VPP:
 make V=0 TAG=vpp PLATFORM=vpp install-deb




My question is below:
I added seven interfaces to one bridge and sent packets to that bridge.
The tx packets are correct, because I captured them with Wireshark on the host
system.
But there is something wrong with the rte_ring: nothing puts rte_mbufs back into
the ring, and the PROD head and tail stay at 16384.
Below is some information; if you need more, please let me know.










(gdb) p *((struct rte_ring*)(((struct 
rte_mempool*)dpdk_main->pktmbuf_pools[0])->pool_data))
$3 = {
  name = "MP_mbuf_pool_socket0", '\000' , 
  flags = 0, 
  memzone = 0x77eb3d00, 
  prod = {
watermark = 32768, 
sp_enqueue = 0, 
size = 32768, 
mask = 32767, 
head = 16384, 
tail = 16384
  }, 
  cons = {
sc_dequeue = 0, 
size = 32768, 
mask = 32767, 
head = 10250, 
tail = 10250
  }, 
  ring = 0x7fff357560c0
}












root@vBRAS:~# vppctl show int
  Name   Idx   State  Counter  
Count 
GigabitEthernet0/0/0  1down  
GigabitEthernet0/0/1  2 up   
GigabitEthernet0/0/2  3 up   rx packets 
4
 rx bytes   
  621
 tx packets 
  150
 tx bytes   
 7812
GigabitEthernet0/0/3  4 up   rx packets 
1
 rx bytes   
  243
 tx packets 
  153
 tx bytes   
 8190
GigabitEthernet0/0/4  5 up   rx packets 
1
 rx bytes   
  243
 tx packets 
  153
 tx bytes   
 8190
GigabitEthernet0/0/5  6 up   rx packets 
6
 rx bytes   
  543
 tx packets 
  148
 tx bytes   
 7890
GigabitEthernet0/0/6 

Re: [vpp-dev] dpdk & vpp

2018-07-27 Thread xulang
   18up   
host-vge8 19up   
host-vvlan1   20up   rx packets 
  139
 rx bytes   
 5856
 drops  
  139
 ip4
1






At 2018-07-27 10:54:03, "Ray Cai"  wrote:


Hi Xiao:

 

By changing the dpdk driver, do you mean that you changed code in DPDK and want
to compile VPP with the modified DPDK?

 

If so, I recommend checking out the option to compile VPP with 
“vpp_uses_external_dpdk = yes”.

Here are the steps I used:

1)  Compiled DPDK with EXTRA_CFLAGS='-fPIC -pie' and have a DESTDIR for the
compiled header/libs using whatever configuration of your choosing.

2)  Modify build-data/platforms/vpp.mk to

a.   vpp_uses_external_dpdk = yes

b.   vpp_dpdk_inc_dir = /include/dpdk

c.   vpp_dpdk_lib_dir = /lib

3)  Then build vpp and create packages.

 

This way the external dpdk is compiled as static lib and linked directly into 
vpp binary. At the end you should see a few packages in build-root and they 
would contain everything you need. Just install them and they should be running 
with your modified DPDK.
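
(The steps above, condensed into example commands; a sketch only, assuming DPDK
17.02 and DESTDIR=/usr as used elsewhere in this thread, with paths and versions
as placeholders.)

# 1) build and install DPDK with position-independent code
cd dpdk-17.02
make install T=x86_64-native-linuxapp-gcc EXTRA_CFLAGS='-fPIC -pie' DESTDIR=/usr

# 2) point VPP at the external DPDK in build-data/platforms/vpp.mk
#      vpp_uses_external_dpdk = yes
#      vpp_dpdk_inc_dir = /usr/include/dpdk
#      vpp_dpdk_lib_dir = /usr/lib

# 3) rebuild the VPP packages and install them
make V=0 PLATFORM=vpp TAG=vpp install-deb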

 

I hope the instructions help.

 

Thanks,

-Ray

 

From:vpp-dev@lists.fd.io  On Behalf Of xulang
Sent: Thursday, July 26, 2018 5:58 PM
To:vpp-dev@lists.fd.io
Subject: [vpp-dev] dpdk & vpp

 

Hi all,

I have changed the dpdk drivers; how can I make that change take effect?

The command "dpkg -l | grep vpp" does not show anything related to dpdk.

I am looking forward to hearing from you.

 

 

 

 

Regards,

xiaoC

 

 



[vpp-dev] dpdk & vpp

2018-07-26 Thread xulang
Hi all,
I have changed the dpdk drivers; how can I make that change take effect?
The command "dpkg -l | grep vpp" does not show anything related to dpdk.
I am looking forward to hearing from you.








Regards,
xiaoC



[vpp-dev] CPU instructions

2018-06-26 Thread xulang
Hi all,

We edited vpp.mk like this:




vpp.mk

# vector packet processor

vpp_arch = native

ifeq ($(shell uname -m),x86_64)

vpp_march = core2    # Nehalem Instruction set

vpp_mtune = intel    # Optimize for Sandy Bridge

else ifeq ($(shell uname -m),aarch64)

ifeq ($(TARGET_PLATFORM),thunderx)

vpp_march = armv8-a+crc

vpp_mtune = thunderx

vpp_dpdk_target = arm64-thunderx-linuxapp-gcc

else

vpp_march = core2

vpp_mtune = intel

endif

endif

vpp_native_tools = vppapigen




But we encountered an error like this: "This system does not support SSE4.1,
please check that RTE_MACHINE is set correctly".

How can we make DPDK use an older instruction set, or how do we set RTE_MACHINE
properly?

Is there a file like vpp.mk in DPDK?

Should we change the command "make V=0 PLATFORM=vpp TAG=vpp" somehow?

All I want to do is run VPP on an N2600 CPU.
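
(A sketch under assumptions, not from the original mail: in DPDK the target
instruction set is chosen by CONFIG_RTE_MACHINE in the build configuration, so
one possible approach is to rebuild DPDK with a less demanding machine setting
before rebuilding VPP; whether an Atom-class value such as "atm" is available
depends on the DPDK version.)

# in the DPDK tree, e.g. config/defconfig_x86_64-native-linuxapp-gcc
CONFIG_RTE_MACHINE="atm"
# then rebuild DPDK and rebuild/relink VPP against it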




Regards,

Xlangyun
















[vpp-dev] vpp openwrt

2018-06-21 Thread xulang
Hi all,
Has anybody tried to build VPP for OpenWrt?
Are there any documents about this?


Regards,
Xlangyun



[vpp-dev] dpdk drivers

2018-06-21 Thread xulang
Hi all,
I have changed something in the function "eth_em_xmit_pkts", which belongs to
the e1000 driver.
But I found that my change does not take effect. Its object file is
"librte_pmd_e1000.a", but I do not know how it is linked into vpp.
What can I do to make this change take effect?


Regards,
Xlangyun



[vpp-dev] VPP & CPU

2018-06-15 Thread xulang
Hi all,
I want to run VPP on CPUs such as the N2600, D525, and 2117U.
Is that possible? How can I do it if I build VPP on an i7 CPU?




Regards,
xlangyun



[vpp-dev] VPP 17.04 Bridge

2018-06-12 Thread xulang
Hi all,
If I add more than four physical interfaces to one bridge,
VPP crashes in many different places.
Is there a bug in this feature?








Regards,
Ewan



[vpp-dev] VAT

2018-06-05 Thread xulang
Hi all,
Are there any documents that tell us when and how the VAT module (vpp_api_test) is used?
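
(Not from the original mail; a hedged pointer: VAT is the vpp_api_test binary,
which drives the binary API from a command line, with per-plugin commands coming
from the *_test_plugin.so files. A minimal session might look like this,
assuming a running vpp:)

vpp_api_test
vat# sw_interface_dump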


Regards,
Xlangyun

[vpp-dev] something wrong about memory

2018-06-02 Thread xulang
Hi all,
I have stopped my vpp process and its threads in gdb, and I did not even bind
any physical interfaces, but it consumes memory continuously.
What should I do?
I hope to hear from you, thanks.
Below is some information.








Thread 1 "vpp_main" hit Breakpoint 2, dispatch_node (vm=0x779aa2a0 
, node=0x7fff6ec89440, type=VLIB_NODE_TYPE_INPUT, 
dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x0, 
last_time_stamp=393305677230170) at 
/home/wangzy/oldcode/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlib/main.c:926
926{
(gdb) info threads
  Id   Target Id Frame 
* 1Thread 0x77fd6740 (LWP 9928) "vpp_main" dispatch_node 
(vm=0x779aa2a0 , node=0x7fff6ec89440, 
type=VLIB_NODE_TYPE_INPUT, dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x0, 
last_time_stamp=393305677230170) at 
/home/wangzy/oldcode/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlib/main.c:926
  2Thread 0x7fff6e9bd700 (LWP 9929) "vpp" 0x7fffefd8794d in recvmsg () 
at ../sysdeps/unix/syscall-template.S:84
  3Thread 0x7affda084700 (LWP 9930) "eal-intr-thread" 0x7fffef8b0e23 in 
epoll_wait () at ../sysdeps/unix/syscall-template.S:84
  4Thread 0x7affd9883700 (LWP 9931) "vpp_stats" 0x7fffefd87c1d in 
nanosleep () at ../sysdeps/unix/syscall-template.S:84
(gdb) b dispatch_node
Breakpoint 2 at 0x777570e0: file 
/home/wangzy/oldcode/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlib/main.c,
 line 926.
(gdb) c
Continuing.
(gdb) b recvmsg
Breakpoint 3 at 0x7fffef8b1770: recvmsg. (2 locations)
(gdb) b epoll_wait
Breakpoint 4 at 0x7fffef8b0df0: file ../sysdeps/unix/syscall-template.S, line 
84.
(gdb) thread 4
[Switching to thread 4 (Thread 0x7affd9883700 (LWP 9931))]
#0  0x7fffefd87c1d in nanosleep () at ../sysdeps/unix/syscall-template.S:84
84../sysdeps/unix/syscall-template.S: No such file or directory.
(gdb) info threads
  Id   Target Id Frame 
  1Thread 0x77fd6740 (LWP 9928) "vpp_main" dispatch_node 
(vm=0x779aa2a0 , node=0x7fff6ec89440, 
type=VLIB_NODE_TYPE_INPUT, dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x0, 
last_time_stamp=393305677230170) at 
/home/wangzy/oldcode/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlib/main.c:926
  2Thread 0x7fff6e9bd700 (LWP 9929) "vpp" 0x7fffefd8794d in recvmsg () 
at ../sysdeps/unix/syscall-template.S:84
  3Thread 0x7affda084700 (LWP 9930) "eal-intr-thread" 0x7fffef8b0e23 in 
epoll_wait () at ../sysdeps/unix/syscall-template.S:84
* 4Thread 0x7affd9883700 (LWP 9931) "vpp_stats" 0x7fffefd87c1d in 
nanosleep () at ../sysdeps/unix/syscall-template.S:84
(gdb) 




root@ubuntu:/home/wangzy# ps aux|grep vpp
root   5405  0.0  3.5 197540 143976 pts/22  S+   Jun01   0:06 gdb vpp
root   9928  0.4 26.7 5369610712 1076944 pts/22 tl 00:28   0:04 
/usr/bin/vpp -c /etc/vpp/startup.conf
root  10166  0.0  0.0  14228  1092 pts/18   S+   00:43   0:00 grep 
--color=auto vpp
root@ubuntu:/home/wangzy# ps aux|grep vpp
root   5405  0.0  3.5 197540 143976 pts/22  S+   Jun01   0:06 gdb vpp
root   9928  0.4 26.7 5369610712 1078976 pts/22 tl 00:28   0:04 
/usr/bin/vpp -c /etc/vpp/startup.conf
root  10168  0.0  0.0  14228   944 pts/18   S+   00:43   0:00 grep 
--color=auto vpp
root@ubuntu:/home/wangzy# ps aux|grep vpp
root   5405  0.0  3.5 197540 143976 pts/22  S+   Jun01   0:06 gdb vpp
root   9928  0.4 26.8 5369610712 1083048 pts/22 tl 00:28   0:04 
/usr/bin/vpp -c /etc/vpp/startup.conf
root  10177  0.0  0.0  14228   904 pts/18   S+   00:46   0:00 grep 
--color=auto vpp
root@ubuntu:/home/wangzy# 


Regards
xlangyun

Re: [vpp-dev] vpp's memory is leaking

2018-05-29 Thread xulang
Hi all,
I disabled the transparent hugepage feature, and now the RES memory no longer
increases. Is that OK?


echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag


Regards


At 2018-05-29 16:01:27, "xulang"  wrote:

Hi all,
My version is 17.04. I encountered a memory leak: the RES memory of
VPP increases slowly and continuously.
I shut down all interfaces and set breakpoints on memory allocation functions,
such as malloc, calloc, realloc, mmap, vmalloc, clib_mem_alloc, and
mheap_alloc_with_flags.
The program keeps running and the RES memory keeps increasing. Any guidance?




Regards




root@ubuntu:/home/wangzy# top -c |grep vpp
4499 root  20   0  5.000t 1.207g 197808 S 201.0 31.4  26:30.57 /usr/bin/vpp 
-c /etc/vpp/startup.conf
   
  4499 root  20   0  5.000t 1.209g 197808 S 201.7 31.5  26:36.62 
/usr/bin/vpp -c /etc/vpp/startup.conf   

  
  4499 root  20   0  5.000t 1.209g 197808 t   3.3 31.5  26:36.72 
/usr/bin/vpp -c /etc/vpp/startup.conf   
  
  4499 root  20   0  5.000t 1.209g 197808 S 115.0 31.5  26:40.18 
/usr/bin/vpp -c /etc/vpp/startup.conf   

  
  4499 root  20   0  5.000t 1.209g 197808 S 201.0 31.5  26:46.23 
/usr/bin/vpp -c /etc/vpp/startup.conf  
  4499 root  20   0  5.000t 1.209g 197808 S 200.7 31.5  26:52.27 
/usr/bin/vpp -c /etc/vpp/startup.conf  
  4499 root  20   0  5.000t 1.209g 197808 S 201.3 31.5  26:58.31 
/usr/bin/vpp -c /etc/vpp/startup.conf   







 

[vpp-dev] vpp's memory is leaking

2018-05-29 Thread xulang
Hi all,
My version is 17.04. I encountered a memory leak: the RES memory of
VPP increases slowly and continuously.
I shut down all interfaces and set breakpoints on memory allocation functions,
such as malloc, calloc, realloc, mmap, vmalloc, clib_mem_alloc, and
mheap_alloc_with_flags.
The program keeps running and the RES memory keeps increasing. Any guidance?




Regards




root@ubuntu:/home/wangzy# top -c |grep vpp
4499 root  20   0  5.000t 1.207g 197808 S 201.0 31.4  26:30.57 /usr/bin/vpp 
-c /etc/vpp/startup.conf
   
  4499 root  20   0  5.000t 1.209g 197808 S 201.7 31.5  26:36.62 
/usr/bin/vpp -c /etc/vpp/startup.conf   

  
  4499 root  20   0  5.000t 1.209g 197808 t   3.3 31.5  26:36.72 
/usr/bin/vpp -c /etc/vpp/startup.conf   
  
  4499 root  20   0  5.000t 1.209g 197808 S 115.0 31.5  26:40.18 
/usr/bin/vpp -c /etc/vpp/startup.conf   

  
  4499 root  20   0  5.000t 1.209g 197808 S 201.0 31.5  26:46.23 
/usr/bin/vpp -c /etc/vpp/startup.conf  
  4499 root  20   0  5.000t 1.209g 197808 S 200.7 31.5  26:52.27 
/usr/bin/vpp -c /etc/vpp/startup.conf  
  4499 root  20   0  5.000t 1.209g 197808 S 201.3 31.5  26:58.31 
/usr/bin/vpp -c /etc/vpp/startup.conf   




Re: [vpp-dev] show trace caused "out of memory"

2018-05-28 Thread xulang
Hi all,
my mistake, it is working.


Regards






At 2018-05-29 09:41:16, "xulang"  wrote:

Hi,
Below is part of my startup.conf.
The default heapsize is 512M, so I set a 2G heapsize for 4 CPU cores.
But it does not work; nothing changed. Any ideas?






main-core 1
## Set logical CPU core(s) where worker threads are running
corelist-workers 2-3


heapsize { 2G }


Regards



At 2018-05-28 19:10:30, "Kingwel Xie"  wrote:


Hi,

 

You should increase heap size. In startup.conf, heapsize 1g or something like 
that.

 

When running in a multi-core environment, vpp definitely needs more memory,
because some global variables have to be expanded to have multiple copies, one
per worker thread, e.g. interface counters, error counters…
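
(A hedged illustration of that advice, with the assumption that heapsize is a
top-level startup.conf parameter placed outside the cpu { } section:)

heapsize 1G

cpu {
  main-core 1
  corelist-workers 2-3
}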

 

Regards,

Kingwel

 

From:vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of xulang
Sent: Monday, May 28, 2018 6:16 PM
To:vpp-dev@lists.fd.io
Subject: [vpp-dev] show trace caused "out of memory"

 

Hi all,

When we only use one CPU core, the cmd "show trace max 5000" works well.

But it will crash when we use four CPU cores because of "out of memory".

Below is some information; any guidance?

 

root@vBRAS:~# cat /proc/meminfo 

MemTotal:4028788 kB

MemFree:  585636 kB

MemAvailable: 949116 kB

Buffers:   22696 kB

Cached:   592600 kB

SwapCached:0 kB

Active:  1773520 kB

Inactive: 118616 kB

Active(anon):1295912 kB

Inactive(anon):45640 kB

Active(file): 477608 kB

Inactive(file):72976 kB

Unevictable:3656 kB

Mlocked:3656 kB

SwapTotal:976380 kB

SwapFree: 976380 kB

Dirty: 0 kB

Writeback: 0 kB

AnonPages:   1280520 kB

Mapped:   112324 kB

Shmem: 62296 kB

Slab:  84456 kB

SReclaimable:  35976 kB

SUnreclaim:48480 kB

KernelStack:5968 kB

PageTables:   267268 kB

NFS_Unstable:  0 kB

Bounce:0 kB

WritebackTmp:  0 kB

CommitLimit: 2466484 kB

Committed_AS:   5368769328 kB

VmallocTotal:   34359738367 kB

VmallocUsed:   0 kB

VmallocChunk:  0 kB

HardwareCorrupted: 0 kB

AnonHugePages:348160 kB

CmaTotal:  0 kB

CmaFree:   0 kB

HugePages_Total: 512

HugePages_Free:  384

HugePages_Rsvd:0

HugePages_Surp:0

Hugepagesize:   2048 kB

DirectMap4k:   96064 kB

DirectMap2M: 3049472 kB

DirectMap1G: 3145728 kB

 

 

0: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so

0: vlib_pci_bind_to_uio: Skipping PCI device :02:0e.0 as host interface 
ens46 is up

EAL: Detected 4 lcore(s)

EAL: No free hugepages reported in hugepages-1048576kB

EAL: Probing VFIO support...

[New Thread 0x7b0019efa700 (LWP 5207)]

[New Thread 0x7b00196f9700 (LWP 5208)]

[New Thread 0x7b0018ef8700 (LWP 5209)]

EAL: PCI device :02:01.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:06.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:07.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:08.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:09.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:0a.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:0b.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:0c.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:0d.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:0e.0 on NUMA socket -1

EAL:   Device is blacklisted, not initializing

DPDK physical memory layout:

Segment 0: phys:0x7d40, len:2097152, virt:0x7b001500, socket_id:0, 
hugepage_sz:2097152, nchannel:0, nrank:0

Segment 1: phys:0x7d80, len:266338304, virt:0x7affe460, socket_id:0, 
hugepage_sz:2097152, nchannel:0, nrank:0

[New Thread 0x7b00186f7700 (LWP 5210)]

/usr/bin/vpp[5202]: dpdk_ipsec_process:241: DPDK Cryptodev support is disabled, 
default to OpenSSL IPsec

/usr/bin/vpp[5202]: dpdk_lib_init:1084: 16384 mbufs allocated but total rx/tx 
ring size is 18432

/usr/bin/vpp[5202]: svm_client_scan_this_region_nolock:1139: /vpe-api: cleanup 
ghost pid 4719

/usr/bin/vpp[5202]: svm_client_scan_this_region_nolock:1139: /global_vm: 
cleanup ghost pid 4719

Thread 1 "vpp_main" received signal SIGABRT, Aborted.

0x7fffef655428 in raise () from /lib/x86_64-linux-gnu/libc.so.6

(gdb) 

(gdb) 

(gdb) 

(gdb) p errno   /* there are only 81 open fds belonging to the vpp process */

$1 = 9

(gdb) bt

#0  0x7fffef655428 in raise () from /lib/x86_64-linux-gnu/libc.so.6

#1  0x7fffef65702a in abort () from /lib/x8

Re: [vpp-dev] show trace caused "out of memory"

2018-05-28 Thread xulang
Hi,
Below is part of my startup.conf.
The default heapsize is 512M, so I set a 2G heapsize for 4 CPU cores.
But it does not work; nothing changed. Any ideas?






main-core 1
## Set logical CPU core(s) where worker threads are running
corelist-workers 2-3


heapsize { 2G }


Regards



At 2018-05-28 19:10:30, "Kingwel Xie"  wrote:


Hi,

 

You should increase heap size. In startup.conf, heapsize 1g or something like 
that.

 

When running in a multi-core environment, vpp definitely needs more memory,
because some global variables have to be expanded to have multiple copies, one
per worker thread, e.g. interface counters, error counters…

 

Regards,

Kingwel

 

From:vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of xulang
Sent: Monday, May 28, 2018 6:16 PM
To:vpp-dev@lists.fd.io
Subject: [vpp-dev] show trace caused "out of memory"

 

Hi all,

When we only use one CPU core, the cmd "show trace max 5000" works well.

But it will crash when we use four CPU cores because of "out of memory".

Below is some information; any guidance?

 

root@vBRAS:~# cat /proc/meminfo 

MemTotal:4028788 kB

MemFree:  585636 kB

MemAvailable: 949116 kB

Buffers:   22696 kB

Cached:   592600 kB

SwapCached:0 kB

Active:  1773520 kB

Inactive: 118616 kB

Active(anon):1295912 kB

Inactive(anon):45640 kB

Active(file): 477608 kB

Inactive(file):72976 kB

Unevictable:3656 kB

Mlocked:3656 kB

SwapTotal:976380 kB

SwapFree: 976380 kB

Dirty: 0 kB

Writeback: 0 kB

AnonPages:   1280520 kB

Mapped:   112324 kB

Shmem: 62296 kB

Slab:  84456 kB

SReclaimable:  35976 kB

SUnreclaim:48480 kB

KernelStack:5968 kB

PageTables:   267268 kB

NFS_Unstable:  0 kB

Bounce:0 kB

WritebackTmp:  0 kB

CommitLimit: 2466484 kB

Committed_AS:   5368769328 kB

VmallocTotal:   34359738367 kB

VmallocUsed:   0 kB

VmallocChunk:  0 kB

HardwareCorrupted: 0 kB

AnonHugePages:348160 kB

CmaTotal:  0 kB

CmaFree:   0 kB

HugePages_Total: 512

HugePages_Free:  384

HugePages_Rsvd:0

HugePages_Surp:0

Hugepagesize:   2048 kB

DirectMap4k:   96064 kB

DirectMap2M: 3049472 kB

DirectMap1G: 3145728 kB

 

 

0: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so

0: vlib_pci_bind_to_uio: Skipping PCI device :02:0e.0 as host interface 
ens46 is up

EAL: Detected 4 lcore(s)

EAL: No free hugepages reported in hugepages-1048576kB

EAL: Probing VFIO support...

[New Thread 0x7b0019efa700 (LWP 5207)]

[New Thread 0x7b00196f9700 (LWP 5208)]

[New Thread 0x7b0018ef8700 (LWP 5209)]

EAL: PCI device :02:01.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:06.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:07.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:08.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:09.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:0a.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:0b.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:0c.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:0d.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:0e.0 on NUMA socket -1

EAL:   Device is blacklisted, not initializing

DPDK physical memory layout:

Segment 0: phys:0x7d40, len:2097152, virt:0x7b001500, socket_id:0, 
hugepage_sz:2097152, nchannel:0, nrank:0

Segment 1: phys:0x7d80, len:266338304, virt:0x7affe460, socket_id:0, 
hugepage_sz:2097152, nchannel:0, nrank:0

[New Thread 0x7b00186f7700 (LWP 5210)]

/usr/bin/vpp[5202]: dpdk_ipsec_process:241: DPDK Cryptodev support is disabled, 
default to OpenSSL IPsec

/usr/bin/vpp[5202]: dpdk_lib_init:1084: 16384 mbufs allocated but total rx/tx 
ring size is 18432

/usr/bin/vpp[5202]: svm_client_scan_this_region_nolock:1139: /vpe-api: cleanup 
ghost pid 4719

/usr/bin/vpp[5202]: svm_client_scan_this_region_nolock:1139: /global_vm: 
cleanup ghost pid 4719

Thread 1 "vpp_main" received signal SIGABRT, Aborted.

0x7fffef655428 in raise () from /lib/x86_64-linux-gnu/libc.so.6

(gdb) 

(gdb) 

(gdb) 

(gdb) p errno   /* there are only 81 open fds belonging to the vpp process */

$1 = 9

(gdb) bt

#0  0x7fffef655428 in raise () from /lib/x86_64-linux-gnu/libc.so.6

#1  0x7fffef65702a in abort () from /lib/x86_64-linux-gnu/libc.so.6

#2  0x0040724e in os_panic () at 
/home/vbras/new_trunk/VBRASV10

Re: [vpp-dev] show trace caused "out of memory"

2018-05-28 Thread xulang
Hi,
Thanks, 
what is the default heapsize?
So we need to prepare extra physical memory, right?


Regards



At 2018-05-28 19:10:30, "Kingwel Xie"  wrote:


Hi,

 

You should increase heap size. In startup.conf, heapsize 1g or something like 
that.

 

When running in a multi-core environment, vpp definitely needs more memory,
because some global variables have to be expanded to have multiple copies, one
per worker thread, e.g. interface counters, error counters…

 

Regards,

Kingwel

 

From:vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of xulang
Sent: Monday, May 28, 2018 6:16 PM
To:vpp-dev@lists.fd.io
Subject: [vpp-dev] show trace caused "out of memory"

 

Hi all,

When we only use one CPU core, the cmd "show trace max 5000" works well.

But it will crash when we use four CPU cores because of "out of memory".

Below is some information; any guidance?

 

root@vBRAS:~# cat /proc/meminfo 

MemTotal:4028788 kB

MemFree:  585636 kB

MemAvailable: 949116 kB

Buffers:   22696 kB

Cached:   592600 kB

SwapCached:0 kB

Active:  1773520 kB

Inactive: 118616 kB

Active(anon):1295912 kB

Inactive(anon):45640 kB

Active(file): 477608 kB

Inactive(file):72976 kB

Unevictable:3656 kB

Mlocked:3656 kB

SwapTotal:976380 kB

SwapFree: 976380 kB

Dirty: 0 kB

Writeback: 0 kB

AnonPages:   1280520 kB

Mapped:   112324 kB

Shmem: 62296 kB

Slab:  84456 kB

SReclaimable:  35976 kB

SUnreclaim:48480 kB

KernelStack:5968 kB

PageTables:   267268 kB

NFS_Unstable:  0 kB

Bounce:0 kB

WritebackTmp:  0 kB

CommitLimit: 2466484 kB

Committed_AS:   5368769328 kB

VmallocTotal:   34359738367 kB

VmallocUsed:   0 kB

VmallocChunk:  0 kB

HardwareCorrupted: 0 kB

AnonHugePages:348160 kB

CmaTotal:  0 kB

CmaFree:   0 kB

HugePages_Total: 512

HugePages_Free:  384

HugePages_Rsvd:0

HugePages_Surp:0

Hugepagesize:   2048 kB

DirectMap4k:   96064 kB

DirectMap2M: 3049472 kB

DirectMap1G: 3145728 kB

 

 

0: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so

0: vlib_pci_bind_to_uio: Skipping PCI device :02:0e.0 as host interface 
ens46 is up

EAL: Detected 4 lcore(s)

EAL: No free hugepages reported in hugepages-1048576kB

EAL: Probing VFIO support...

[New Thread 0x7b0019efa700 (LWP 5207)]

[New Thread 0x7b00196f9700 (LWP 5208)]

[New Thread 0x7b0018ef8700 (LWP 5209)]

EAL: PCI device :02:01.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:06.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:07.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:08.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:09.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:0a.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:0b.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:0c.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:0d.0 on NUMA socket -1

EAL:   probe driver: 8086:100f net_e1000_em

EAL: PCI device :02:0e.0 on NUMA socket -1

EAL:   Device is blacklisted, not initializing

DPDK physical memory layout:

Segment 0: phys:0x7d40, len:2097152, virt:0x7b001500, socket_id:0, 
hugepage_sz:2097152, nchannel:0, nrank:0

Segment 1: phys:0x7d80, len:266338304, virt:0x7affe460, socket_id:0, 
hugepage_sz:2097152, nchannel:0, nrank:0

[New Thread 0x7b00186f7700 (LWP 5210)]

/usr/bin/vpp[5202]: dpdk_ipsec_process:241: DPDK Cryptodev support is disabled, 
default to OpenSSL IPsec

/usr/bin/vpp[5202]: dpdk_lib_init:1084: 16384 mbufs allocated but total rx/tx 
ring size is 18432

/usr/bin/vpp[5202]: svm_client_scan_this_region_nolock:1139: /vpe-api: cleanup 
ghost pid 4719

/usr/bin/vpp[5202]: svm_client_scan_this_region_nolock:1139: /global_vm: 
cleanup ghost pid 4719

Thread 1 "vpp_main" received signal SIGABRT, Aborted.

0x7fffef655428 in raise () from /lib/x86_64-linux-gnu/libc.so.6

(gdb) 

(gdb) 

(gdb) 

(gdb) p errno   /* there are only 81 open fds belonging to the vpp process */

$1 = 9

(gdb) bt

#0  0x7fffef655428 in raise () from /lib/x86_64-linux-gnu/libc.so.6

#1  0x7fffef65702a in abort () from /lib/x86_64-linux-gnu/libc.so.6

#2  0x0040724e in os_panic () at 
/home/vbras/new_trunk/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vpp/vnet/main.c:290

#3  0x7fffefe6b49b in clib_mem_alloc_aligned_at_offset 
(os_out_of_memory_on_failure=1, align_offset=, align=4, 
size=187

[vpp-dev] show trace caused "out of memory"

2018-05-28 Thread xulang
Hi all,
When we only use one CPU core, the cmd "show trace max 5000" works well.
But it will crash when we use four CPU cores because of "out of memory".
Below is some information; any guidance?


root@vBRAS:~# cat /proc/meminfo 
MemTotal:4028788 kB
MemFree:  585636 kB
MemAvailable: 949116 kB
Buffers:   22696 kB
Cached:   592600 kB
SwapCached:0 kB
Active:  1773520 kB
Inactive: 118616 kB
Active(anon):1295912 kB
Inactive(anon):45640 kB
Active(file): 477608 kB
Inactive(file):72976 kB
Unevictable:3656 kB
Mlocked:3656 kB
SwapTotal:976380 kB
SwapFree: 976380 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages:   1280520 kB
Mapped:   112324 kB
Shmem: 62296 kB
Slab:  84456 kB
SReclaimable:  35976 kB
SUnreclaim:48480 kB
KernelStack:5968 kB
PageTables:   267268 kB
NFS_Unstable:  0 kB
Bounce:0 kB
WritebackTmp:  0 kB
CommitLimit: 2466484 kB
Committed_AS:   5368769328 kB
VmallocTotal:   34359738367 kB
VmallocUsed:   0 kB
VmallocChunk:  0 kB
HardwareCorrupted: 0 kB
AnonHugePages:348160 kB
CmaTotal:  0 kB
CmaFree:   0 kB
HugePages_Total: 512
HugePages_Free:  384
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
DirectMap4k:   96064 kB
DirectMap2M: 3049472 kB
DirectMap1G: 3145728 kB




0: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
0: vlib_pci_bind_to_uio: Skipping PCI device :02:0e.0 as host interface 
ens46 is up
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
[New Thread 0x7b0019efa700 (LWP 5207)]
[New Thread 0x7b00196f9700 (LWP 5208)]
[New Thread 0x7b0018ef8700 (LWP 5209)]
EAL: PCI device :02:01.0 on NUMA socket -1
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device :02:06.0 on NUMA socket -1
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device :02:07.0 on NUMA socket -1
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device :02:08.0 on NUMA socket -1
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device :02:09.0 on NUMA socket -1
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device :02:0a.0 on NUMA socket -1
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device :02:0b.0 on NUMA socket -1
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device :02:0c.0 on NUMA socket -1
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device :02:0d.0 on NUMA socket -1
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device :02:0e.0 on NUMA socket -1
EAL:   Device is blacklisted, not initializing
DPDK physical memory layout:
Segment 0: phys:0x7d40, len:2097152, virt:0x7b001500, socket_id:0, 
hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: phys:0x7d80, len:266338304, virt:0x7affe460, socket_id:0, 
hugepage_sz:2097152, nchannel:0, nrank:0
[New Thread 0x7b00186f7700 (LWP 5210)]
/usr/bin/vpp[5202]: dpdk_ipsec_process:241: DPDK Cryptodev support is disabled, 
default to OpenSSL IPsec
/usr/bin/vpp[5202]: dpdk_lib_init:1084: 16384 mbufs allocated but total rx/tx 
ring size is 18432
/usr/bin/vpp[5202]: svm_client_scan_this_region_nolock:1139: /vpe-api: cleanup 
ghost pid 4719
/usr/bin/vpp[5202]: svm_client_scan_this_region_nolock:1139: /global_vm: 
cleanup ghost pid 4719
Thread 1 "vpp_main" received signal SIGABRT, Aborted.
0x7fffef655428 in raise () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) 
(gdb) 
(gdb) 
(gdb) p errno /*there are only 81 opened fd belong to progress VPP*/
$1 = 9
(gdb) bt
#0  0x7fffef655428 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x7fffef65702a in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x0040724e in os_panic () at 
/home/vbras/new_trunk/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vpp/vnet/main.c:290
#3  0x7fffefe6b49b in clib_mem_alloc_aligned_at_offset 
(os_out_of_memory_on_failure=1, align_offset=, align=4, 
size=18768606)   /*mmap*/
at 
/home/vbras/new_trunk/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vppinfra/mem.h:102
#4  vec_resize_allocate_memory (v=, 
length_increment=length_increment@entry=1, data_bytes=, 
header_bytes=, header_bytes@entry=0, 
data_align=data_align@entry=4)
at 
/home/vbras/new_trunk/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vppinfra/vec.c:84
#5  0x00420f04 in _vec_resize (data_align=, 
header_bytes=, data_bytes=, 
length_increment=, v=)
at 
/home/vbras/new_trunk/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vppinfra/vec.h:142
#6  vl_api_cli_request_t_handler (mp=) at 
/home/vbras/new_trunk/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vpp/api/api.c:1132
#7  0x77bce2e3 in vl_msg_api_handler_with_vm_node (am=0x77dd6160 
, the_msg=0x30521

Re: [vpp-dev] new next_node caused Segmentation fault

2018-05-26 Thread xulang


Thanks


Regards





At 2018-05-26 11:37:36, "Dave Barach (dbarach)"  wrote:


Did you notice how many nodes set .n_next_nodes = IP_LOCAL_N_NEXT?

 

Go do something about ip6_local_node, or better yet: use vlib_node_add_next(..) 
to add an arc from ip4_local_node to ethernet-input...
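
(A minimal sketch of that suggestion, not from the original mail: add the arc at
runtime from an init function instead of editing ip_local_next_t. The
init-function name and the stored next-index variable are hypothetical;
vlib_get_node_by_name() and vlib_node_add_next() are existing vlib APIs.)

#include <vlib/vlib.h>

static u32 capwap_next_index;  /* hypothetical: remembers the runtime-assigned slot */

static clib_error_t *
capwap_arc_init (vlib_main_t * vm)
{
  vlib_node_t *ip4_local = vlib_get_node_by_name (vm, (u8 *) "ip4-local");
  vlib_node_t *eth_input = vlib_get_node_by_name (vm, (u8 *) "ethernet-input");

  /* add an ip4-local -> ethernet-input arc; the returned slot is what the
     node should use as next_index when dispatching these packets */
  capwap_next_index = vlib_node_add_next (vm, ip4_local->index, eth_input->index);
  return 0;
}

VLIB_INIT_FUNCTION (capwap_arc_init);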

 

D.

 

From: xulang 
Sent: Friday, May 25, 2018 11:15 PM
To: Dave Barach (dbarach) 
Cc:vpp-dev@lists.fd.io
Subject: Re:RE: [vpp-dev] new next_node caused Segmentation fault

 

yeah, ~0 is not right, 

but I only changed "ip_local_next_t" and "VLIB_REGISTER_NODE (ip4_local_node)"

This is the backtrace.

 

(gdb) bt

#0  0x7776e73d in vlib_get_node (i=4294967295, 

vm=0x779aa2a0 )

at 
/home/wangzy/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlib/node_funcs.h:60

#1  vlib_node_main_init (vm=0x779aa2a0 )

at 
/home/wangzy/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlib/node.c:607

#2  0x77757a1a in vlib_main (

vm=vm@entry=0x779aa2a0 , 

input=input@entry=0x7fffaec1efa0)

at 
/home/wangzy/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlib/main.c:1694

#3  0x77790f23 in thread0 (arg=140737347494560)

at 
/home/wangzy/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlib/unix/main.c:507

#4  0x7fffefe1def0 in clib_calljmp ()

at 
/home/wangzy/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vppinfra/longjmp.S:110

#5  0x7fffcc70 in ?? ()

#6  0x7779193d in vlib_unix_main (argc=, 

argv=)

at 
/home/wangzy/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlib/unix/main.c:606

---Type  to continue, or q  to quit---

#7  0x8d48b63c8d48f263 in ?? ()

#8  0x894cb1348d497e34 in ?? ()

#9  0x894c08408b4c1c46 in ?? ()

#10 0x8b4418408b4c2446 in ?? ()

#11 0x89442c46894c2050 in ?? ()

#12 0x50893446894c3c56 in ?? ()

#13 0x1030054801c18328 in ?? ()

#14 0x76744c244c3b in ?? ()

#15 0xeff0b08ba874c985 in ?? ()

 

Regards






At 2018-05-25 20:14:17, "Dave Barach (dbarach)"  wrote:



You’re either passing ~0 to vlib_get_node – or causing the infra to do so - 
which can’t possibly work:

 

vlib_get_node (i=4294967295,vm=0x779aa2a0 )

 

You didn’t send a full backtrace so there’s nothing more I can do to help.

 

D>

 

From:vpp-dev@lists.fd.io  On Behalf Of xulang
Sent: Friday, May 25, 2018 5:27 AM
To:vpp-dev@lists.fd.io
Subject: [vpp-dev] new next_node caused Segmentation fault

 

Hi all,

I tried to add a new next node to the node "ip4_local_node",

but it caused a segmentation fault. Is there something I have missed?

 

 

typedef enum

{

  IP_LOCAL_NEXT_DROP,

  IP_LOCAL_NEXT_PUNT,

  IP_LOCAL_NEXT_UDP_LOOKUP,

  IP_LOCAL_NEXT_ICMP,

  IP_LOCAL_NEXT_CAPWAP,

  IP_LOCAL_N_NEXT,

} ip_local_next_t;

 

VLIB_REGISTER_NODE (ip4_local_node) =

{

  .function = ip4_local,

  .name = "ip4-local",

  .vector_size = sizeof (u32),

  .format_trace = format_ip4_forward_next_trace,

  .n_next_nodes = IP_LOCAL_N_NEXT,

  .next_nodes =

  {

[IP_LOCAL_NEXT_DROP] = "error-drop",

[IP_LOCAL_NEXT_PUNT] = "error-punt",

[IP_LOCAL_NEXT_UDP_LOOKUP] = "ip4-udp-lookup",

[IP_LOCAL_NEXT_ICMP] = "ip4-icmp-input",

[IP_LOCAL_NEXT_CAPWAP] = "ethernet-input",

  },

 

 

 

(gdb) run -c /etc/vpp/startup.conf 

Starting program: /usr/bin/vpp -c /etc/vpp/startup.conf

[Thread debugging using libthread_db enabled]

Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".

[New Thread 0x7fffae81d700 (LWP 118617)]

vlib_plugin_early_init:360: plugin path /usr/lib/vpp_plugins

load_one_plugin:188: Loaded plugin: acl_plugin.so (Access Control Lists)

load_one_plugin:188: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit 
(DPDK))

load_one_plugin:188: Loaded plugin: flowperpkt_plugin.so (Flow per Packet)

load_one_plugin:188: Loaded plugin: ila_plugin.so (Identifier-locator 
addressing for IPv6)

load_one_plugin:188: Loaded plugin: ioam_plugin.so (Inbound OAM)

load_one_plugin:114: Plugin disabled (default): ixge_plugin.so

load_one_plugin:188: Loaded plugin: lb_plugin.so (Load Balancer)

load_one_plugin:188: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment 
on IPv4 Infrastructure (RFC5969))

load_one_plugin:188: Loaded plugin: memif_plugin.so (Packet Memory Interface 
(experimetal))

load_one_plugin:188: Loaded plugin: snat_plugin.so (Network Address Translation)

 

Thread 1 "vpp" received signal SIGSEGV, Segmentation fault.

0x7776e73d in vlib_get_node (i=4294967295, 

vm=0x779aa2a0 )

at 
/home/wangzy/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlib/node_funcs.h:60

60 return vec_elt (vm->node_main.nodes, i);

(gdb) 

 

Regards

 

 



 

 

Re: [vpp-dev] new next_node caused Segmentation fault

2018-05-25 Thread xulang
yeah, ~0 is not right, 
but I only changed "ip_local_next_t" and "VLIB_REGISTER_NODE (ip4_local_node)"
This is the backtrace.


(gdb) bt
#0  0x7776e73d in vlib_get_node (i=4294967295, 
vm=0x779aa2a0 )
at 
/home/wangzy/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlib/node_funcs.h:60
#1  vlib_node_main_init (vm=0x779aa2a0 )
at 
/home/wangzy/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlib/node.c:607
#2  0x77757a1a in vlib_main (
vm=vm@entry=0x779aa2a0 , 
input=input@entry=0x7fffaec1efa0)
at 
/home/wangzy/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlib/main.c:1694
#3  0x77790f23 in thread0 (arg=140737347494560)
at 
/home/wangzy/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlib/unix/main.c:507
#4  0x7fffefe1def0 in clib_calljmp ()
at 
/home/wangzy/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vppinfra/longjmp.S:110
#5  0x7fffcc70 in ?? ()
#6  0x7779193d in vlib_unix_main (argc=, 
argv=)
at 
/home/wangzy/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlib/unix/main.c:606
---Type  to continue, or q  to quit---
#7  0x8d48b63c8d48f263 in ?? ()
#8  0x894cb1348d497e34 in ?? ()
#9  0x894c08408b4c1c46 in ?? ()
#10 0x8b4418408b4c2446 in ?? ()
#11 0x89442c46894c2050 in ?? ()
#12 0x50893446894c3c56 in ?? ()
#13 0x1030054801c18328 in ?? ()
#14 0x76744c244c3b in ?? ()
#15 0xeff0b08ba874c985 in ?? ()


Regards





At 2018-05-25 20:14:17, "Dave Barach (dbarach)"  wrote:


You’re either passing ~0 to vlib_get_node – or causing the infra to do so - 
which can’t possibly work:

 

vlib_get_node (i=4294967295,vm=0x779aa2a0 )

 

You didn’t send a full backtrace so there’s nothing more I can do to help.

 

D>

 

From:vpp-dev@lists.fd.io  On Behalf Of xulang
Sent: Friday, May 25, 2018 5:27 AM
To:vpp-dev@lists.fd.io
Subject: [vpp-dev] new next_node caused Segmentation fault

 

Hi all,

I tried to add a new next node to the node "ip4_local_node",

but it caused a segmentation fault. Is there something I have missed?

 

 

typedef enum

{

  IP_LOCAL_NEXT_DROP,

  IP_LOCAL_NEXT_PUNT,

  IP_LOCAL_NEXT_UDP_LOOKUP,

  IP_LOCAL_NEXT_ICMP,

  IP_LOCAL_NEXT_CAPWAP,

  IP_LOCAL_N_NEXT,

} ip_local_next_t;

 

VLIB_REGISTER_NODE (ip4_local_node) =

{

  .function = ip4_local,

  .name = "ip4-local",

  .vector_size = sizeof (u32),

  .format_trace = format_ip4_forward_next_trace,

  .n_next_nodes = IP_LOCAL_N_NEXT,

  .next_nodes =

  {

[IP_LOCAL_NEXT_DROP] = "error-drop",

[IP_LOCAL_NEXT_PUNT] = "error-punt",

[IP_LOCAL_NEXT_UDP_LOOKUP] = "ip4-udp-lookup",

[IP_LOCAL_NEXT_ICMP] = "ip4-icmp-input",

[IP_LOCAL_NEXT_CAPWAP] = "ethernet-input",

  },

 

 

 

(gdb) run -c /etc/vpp/startup.conf 

Starting program: /usr/bin/vpp -c /etc/vpp/startup.conf

[Thread debugging using libthread_db enabled]

Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".

[New Thread 0x7fffae81d700 (LWP 118617)]

vlib_plugin_early_init:360: plugin path /usr/lib/vpp_plugins

load_one_plugin:188: Loaded plugin: acl_plugin.so (Access Control Lists)

load_one_plugin:188: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit 
(DPDK))

load_one_plugin:188: Loaded plugin: flowperpkt_plugin.so (Flow per Packet)

load_one_plugin:188: Loaded plugin: ila_plugin.so (Identifier-locator 
addressing for IPv6)

load_one_plugin:188: Loaded plugin: ioam_plugin.so (Inbound OAM)

load_one_plugin:114: Plugin disabled (default): ixge_plugin.so

load_one_plugin:188: Loaded plugin: lb_plugin.so (Load Balancer)

load_one_plugin:188: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment 
on IPv4 Infrastructure (RFC5969))

load_one_plugin:188: Loaded plugin: memif_plugin.so (Packet Memory Interface 
(experimetal))

load_one_plugin:188: Loaded plugin: snat_plugin.so (Network Address Translation)

 

Thread 1 "vpp" received signal SIGSEGV, Segmentation fault.

0x7776e73d in vlib_get_node (i=4294967295, 

vm=0x779aa2a0 )

at 
/home/wangzy/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlib/node_funcs.h:60

60 return vec_elt (vm->node_main.nodes, i);

(gdb) 

 

Regards

 

 



[vpp-dev] new next_node caused Segmentation fault

2018-05-25 Thread xulang
Hi all,
I tried to add a new next node to the node "ip4_local_node",
but it caused a segmentation fault. Is there something I have missed?




typedef enum
{
  IP_LOCAL_NEXT_DROP,
  IP_LOCAL_NEXT_PUNT,
  IP_LOCAL_NEXT_UDP_LOOKUP,
  IP_LOCAL_NEXT_ICMP,
  IP_LOCAL_NEXT_CAPWAP,
  IP_LOCAL_N_NEXT,
} ip_local_next_t;


VLIB_REGISTER_NODE (ip4_local_node) =
{
  .function = ip4_local,
  .name = "ip4-local",
  .vector_size = sizeof (u32),
  .format_trace = format_ip4_forward_next_trace,
  .n_next_nodes = IP_LOCAL_N_NEXT,
  .next_nodes =
  {
[IP_LOCAL_NEXT_DROP] = "error-drop",
[IP_LOCAL_NEXT_PUNT] = "error-punt",
[IP_LOCAL_NEXT_UDP_LOOKUP] = "ip4-udp-lookup",
[IP_LOCAL_NEXT_ICMP] = "ip4-icmp-input",
[IP_LOCAL_NEXT_CAPWAP] = "ethernet-input",
  },






(gdb) run -c /etc/vpp/startup.conf 
Starting program: /usr/bin/vpp -c /etc/vpp/startup.conf
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fffae81d700 (LWP 118617)]
vlib_plugin_early_init:360: plugin path /usr/lib/vpp_plugins
load_one_plugin:188: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:188: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit 
(DPDK))
load_one_plugin:188: Loaded plugin: flowperpkt_plugin.so (Flow per Packet)
load_one_plugin:188: Loaded plugin: ila_plugin.so (Identifier-locator 
addressing for IPv6)
load_one_plugin:188: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:188: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:188: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment 
on IPv4 Infrastructure (RFC5969))
load_one_plugin:188: Loaded plugin: memif_plugin.so (Packet Memory Interface 
(experimetal))
load_one_plugin:188: Loaded plugin: snat_plugin.so (Network Address Translation)


Thread 1 "vpp" received signal SIGSEGV, Segmentation fault.
0x7776e73d in vlib_get_node (i=4294967295, 
vm=0x779aa2a0 )
at 
/home/wangzy/VBRASV100R001_new_trunk/vpp1704/build-data/../src/vlib/node_funcs.h:60
60  return vec_elt (vm->node_main.nodes, i);
(gdb) 
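
A note on the crash and a possible alternative: 4294967295 is 0xffffffff, the "invalid index" value, so vlib_get_node() was handed a node or next index that never got resolved; with a hand-edited ip_local_next_t one common cause is that not every file using that enum was rebuilt, though the trace alone does not prove it. A less invasive route is to leave the ip4-local registration untouched and attach the new node at init time. Below is a minimal sketch only, assuming a hypothetical plugin node called "my-capwap-input" and an illustrative IP protocol number; since CAPWAP actually runs over UDP, registering on the UDP port (udp_register_dst_port) may be the better hook, but the IP-protocol variant shows the mechanism:

/* Sketch, not a drop-in fix: the node name and protocol number are assumptions. */
#include <vlib/vlib.h>
#include <vnet/ip/ip.h>

static clib_error_t *
my_capwap_init (vlib_main_t * vm)
{
  vlib_node_t *n = vlib_get_node_by_name (vm, (u8 *) "my-capwap-input");
  if (n == 0)
    return clib_error_return (0, "my-capwap-input node not found");

  /* ip4_register_protocol() adds a next arc from ip4-local to our node and
   * records the slot in local_next_by_ip_protocol[], so ip4-local will
   * dispatch matching packets to us without any change to ip_local_next_t. */
  ip4_register_protocol (253 /* illustrative protocol number */, n->index);
  return 0;
}

VLIB_INIT_FUNCTION (my_capwap_init);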


Regards

[vpp-dev] IKEV2

2018-05-06 Thread xulang
Hi all,
Do we have a plan to make IKEv2 support the initiator role?
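
For reference, later VPP releases do add initiator support to the ikev2 plugin; the CLI shape there is roughly the following, but the command names are quoted from memory and may not exist in your tree:

ikev2 profile add pr1
ikev2 profile set pr1 responder GigabitEthernet0/8/0 192.168.100.2
ikev2 initiate sa-init pr1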


Regards,
xulang

[vpp-dev] IPSEC VPN

2018-04-08 Thread xulang
Hi all,
Here is an example IPsec VPN configuration. Does the command "set interface ipsec spd GigabitEthernet0/8/0 1" mean that all traffic coming through this interface will be processed by IPsec? How could I protect only some specific traffic and leave the other traffic to the normal forwarding procedure?







set int ip address GigabitEthernet0/8/0 192.168.100.3/24
set int state GigabitEthernet0/8/0 up
set ip arp GigabitEthernet0/8/0 192.168.100.2 08:00:27:12:3c:cc
ipsec sa add 10 spi 1001 esp crypto-alg aes-cbc-128 crypto-key 
4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key 
4339314b55523947594d6d3547666b45764e6a58
ipsec sa add 20 spi 1000 esp crypto-alg aes-cbc-128 crypto-key 
4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key 
4339314b55523947594d6d3547666b45764e6a58
ipsec spd add 1
set interface ipsec spd GigabitEthernet0/8/0 1
ipsec policy add spd 1 priority 100 inbound action bypass protocol 50
ipsec policy add spd 1 priority 100 outbound action bypass protocol 50
ipsec policy add spd 1 priority 10 inbound action protect sa 20 local-ip-range 
192.168.100.3 - 192.168.100.3 remote-ip-range 192.168.100.2 - 192.168.100.2
ipsec policy add spd 1 priority 10 outbound action protect sa 10 local-ip-range 
192.168.100.3 - 192.168.100.3 remote-ip-range 192.168.100.2 - 192.168.100.2
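
Regarding protecting only specific traffic: binding the SPD to the interface does send every packet through the SPD lookup, but the action taken comes from the best-matching policy, so the usual pattern is a narrow protect policy plus a catch-all bypass at lower priority. A sketch in the same style as the config above (ranges and priorities are illustrative; double-check on your release that larger priority values win):

ipsec policy add spd 1 priority 10 outbound action protect sa 10 local-ip-range 192.168.100.3 - 192.168.100.3 remote-ip-range 10.11.0.0 - 10.11.255.255
ipsec policy add spd 1 priority 10 inbound action protect sa 20 local-ip-range 192.168.100.3 - 192.168.100.3 remote-ip-range 10.11.0.0 - 10.11.255.255
ipsec policy add spd 1 priority 1 outbound action bypass local-ip-range 0.0.0.0 - 255.255.255.255 remote-ip-range 0.0.0.0 - 255.255.255.255
ipsec policy add spd 1 priority 1 inbound action bypass local-ip-range 0.0.0.0 - 255.255.255.255 remote-ip-range 0.0.0.0 - 255.255.255.255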

[vpp-dev] vpp bridge int

2018-04-02 Thread xulang
Hi all,
My vpp version is 17.04.
Why is there no bridge interface, and how can I configure an IP address on a bridge domain?
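
For what it is worth, the usual way to give a bridge domain an IP in VPP is a BVI: create a loopback, add it to the bridge domain as the bvi member, and put the address on the loopback. A sketch for 17.04 (command spellings can differ slightly between releases, so treat it as an outline):

loopback create-interface
set interface l2 bridge loop0 1 bvi
set interface ip address loop0 192.168.1.1/24
set interface state loop0 up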


Regards,
xulang

[vpp-dev] uninstall vpp and dpdk

2018-03-17 Thread xulang
Hi,
How do I uninstall vpp and dpdk completely?
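
If both were installed from the .deb packages produced by the build (as in the build commands quoted later in this archive), something along these lines usually removes them; the exact package list varies by release, and the DPDK note only applies if you ran make install in a DPDK tree:

dpkg -l | grep vpp                              # see which VPP packages are installed
sudo dpkg --purge vpp-plugins vpp-api-python vpp-dev vpp-dbg vpp-lib vpp
sudo rmmod igb_uio 2>/dev/null                  # unload the DPDK uio module if loaded
# For DPDK installed with 'make install' and DESTDIR=/, remove the files it
# installed (librte_* libraries, the dpdk include directory, dpdk-devbind, ...).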




Regards,
xulang

[vpp-dev] vpp & dpdk

2018-03-16 Thread xulang
Hi, 
How does vpp link the dpdk target files such as librte*.a? In other words, what are these static libraries linked into?
I have compiled and installed VPP; if I want to change the dpdk drivers, how do I make the change take effect?
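
For background, and treating the details as an assumption about the 17.x build tree: the dpdk/ directory in the VPP source builds DPDK into static librte*.a archives, and those archives are linked into VPP (the dpdk plugin) when VPP itself is linked. The two builds are driven separately, but the link step is not, so a driver change only takes effect once DPDK is rebuilt and VPP is relinked and reinstalled, e.g. reusing the commands from the later "dpdk drivers" mails:

cd vpp1704/dpdk && make                              # rebuild the bundled DPDK libraries
cd .. && make V=0 PLATFORM=vpp TAG=vpp install-deb   # relink and repackage VPP
dpkg -i *.deb                                        # reinstall so the new binaries are used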




Regards,
Xulang









Re: [vpp-dev] dpdk drivers

2018-03-15 Thread xulang
Hi,
I changed DPDK_BUILD_DIR ?= $(CURDIR)/dpdk-17.02 and the file "rte_eth_bond_pmd.c" under this folder.
I also changed dpdk Makefile like this:
$(B)/.extract.ok: $(B)/.download.ok
#@echo --- extracting $(DPDK_TARBALL) ---
#@tar --directory $(B) --extract --file $(CURDIR)/$(DPDK_TARBALL)
#@cp ./dpdk-17.02/drivers/net/bonding/rte_eth_bond_pmd.c ./_build/dpdk-17.02/drivers/net/bonding/


Then I inserted the new igb_uio.ko, but it still does not work.
The compile procedures of VPP and DPDK are independent of each other, am I right?
I hope to hear from you.
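
A guess at what is going wrong: the dpdk build extracts the pristine tarball into _build/ and compiles from there, so edits made under dpdk-17.02/ (or commenting out the extract step after the stamp files already exist) never reach the sources that actually get compiled. One option, reusing the paths from this mail, is to copy the edited file into the extracted tree and rebuild from there; if your tree has a dpdk-17.02_patches/ directory, carrying the change as a patch there is the cleaner route (that mechanism is from memory, so check the dpdk Makefile):

cp dpdk-17.02/drivers/net/bonding/rte_eth_bond_pmd.c _build/dpdk-17.02/drivers/net/bonding/
# you may need to remove the build stamp file under _build/ so make notices the change
make                     # in the dpdk/ directory, then rebuild and reinstall VPP as usual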




Regards,
Xulang





[vpp-dev] download specific version vpp

2018-03-08 Thread xulang
Hi all,
How could I download one specific version vpp, for example if I would like to 
download version 17.02 with git, what should I do?
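
A sketch with git (the gerrit URL is the usual upstream; note that the VPP releases in that period were tagged v17.01 and v17.04, while 17.02 was the bundled DPDK version):

git clone https://gerrit.fd.io/r/vpp
cd vpp
git tag -l 'v17*'        # list the release tags that exist
git checkout v17.04      # or a maintenance branch such as stable/1704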


Regards,
xulang

[vpp-dev] dpdk drivers

2018-03-08 Thread xulang
Hi all,
I've changed a dpdk driver file, rte_eth_bond_pmd.c, and I would like to recompile dpdk so that the change takes effect.
I started vpp with gdb and used the gdb command "list slave_configure", but there is no change. I hope to hear from you.


 Here are my operations.
DPDK compile:
export RTE_SDK="/home/wangzy/test/VBRASV100R001_trunk/vpp1704/dpdk/dpdk-17.02"
export RTE_TARGET="x86_64-native-linuxapp-gcc"
export DESTDIR=/
make config T=x86_64-native-linuxapp-gcc && make
make install T=x86_64-native-linuxapp-gcc
modprobe uio
insmod build/kmod/igb_uio.ko


VPP compile:
make distclean
./bootstrap.sh
 make V=0 PLATFORM=vpp TAG=vpp install-deb
dpkg -i  *.deb


GDB result:
(gdb) list slave_configure
1310            }
1311
1312            int
1313            slave_configure(struct rte_eth_dev *bonded_eth_dev,
1314                            struct rte_eth_dev *slave_eth_dev)
1315            {
1316                    struct bond_rx_queue *bd_rx_q;
1317                    struct bond_tx_queue *bd_tx_q;
1318
1319                    int errval;
(gdb) 
1320                    uint16_t q_id;
1321
1322                    /* Stop slave */
1323                    rte_eth_dev_stop(slave_eth_dev->data->port_id);
1324
1325                    /* Enable interrupts on slave device if supported */
1326                    if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
1327                            slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
1328
1329                    /* If RSS is enabled for bonding, try to enable it for slaves  */
(gdb) 
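
One quick sanity check is whether the binaries gdb loads were actually rebuilt after the edit; the plugin path comes from the startup log quoted earlier in this archive:

ls -l /usr/lib/vpp_plugins/dpdk_plugin.so      # timestamp should be newer than your change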


Regards,
xulang





Re: [vpp-dev] bond carrier down

2018-03-04 Thread xulang
I have only one VM with a bond interface, and this VM is connected to a switch; there are no bond interfaces on the other side. Is this the reason why it does not work?
I would like to use a static bond interface; do we still need to configure a bond interface on the other side?
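
As a general bonding note rather than anything VPP-specific: mode=2 is balance-xor, a static bond, so there is no LACP negotiation with the peer, but the switch usually still needs a matching static link-aggregation group on those ports, otherwise return traffic and MAC learning misbehave. It is also worth confirming that both slaves keep link after the bond is started, e.g.:

vppctl show hardware-interfaces                  # the slave 'carrier' lines should stay up
vppctl set interface state GigabitEthernet2/1/0 up
vppctl set interface state GigabitEthernet2/9/0 up
vppctl set interface state BondEthernet0 up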

regards,
xulang


[vpp-dev] bond carrier down

2018-03-02 Thread xulang
Hi all,
I have encountered a carrier-state problem on a bond interface. Any ideas?


Here are my version info:
vppctl show version
vpp v17.04-release built by root on ubuntu at Fri May  5 03:06:25 PDT 2017


Here is my dpdk config:
vdev eth_bond0,mode=2,slave=:02:09.0,slave=:02:01.0,xmit_policy=l34


Here is int state:
vppctl show int
  Name   Idx   State  Counter   Count
BondEthernet0 4down  
GigabitEthernet2/1/0  1 bond-slave   
GigabitEthernet2/9/0  2 bond-slave   
GigabitEthernet2/a/0  3down  
local00down


vppctl show hardware-interfaces
  NameIdx   Link  Hardware
BondEthernet0  4down  Slave-Idx: 1 2
  Ethernet address 00:0c:29:82:90:c1
  Ethernet Bonding
carrier down 
rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
cpu socket 0


GigabitEthernet2/1/0   1slave GigabitEthernet2/1/0
  Ethernet address 00:0c:29:82:90:c1
  Intel 82540EM (e1000)
carrier up full duplex speed 1000 mtu 9216 
rx queues 1, rx desc 1024, tx queues 1, tx desc 1024


GigabitEthernet2/9/0   2slave GigabitEthernet2/9/0
  Ethernet address 00:0c:29:82:90:c1
  Intel 82540EM (e1000)
carrier up full duplex speed 1000 mtu 9216 
rx queues 1, rx desc 1024, tx queues 1, tx desc 1024


GigabitEthernet2/a/0   3down  GigabitEthernet2/a/0
  Ethernet address 00:0c:29:82:90:cb
  Intel 82540EM (e1000)
carrier up full duplex speed 1000 mtu 9216 
rx queues 1, rx desc 1024, tx queues 1, tx desc 1024


Then, after I entered the command "vppctl set int state BondEthernet0 up":
vppctl show hardware-interfaces
  NameIdx   Link  Hardware
BondEthernet0  4down  Slave-Idx: 1 2
  Ethernet address 00:0c:29:82:90:c1
  Ethernet Bonding
carrier down 
rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
cpu socket 0


rx frames ok 232
rx bytes ok15607
extended stats:
  rx good packets232
  rx good bytes15607
GigabitEthernet2/1/0   1slave GigabitEthernet2/1/0
  Ethernet address 00:0c:29:82:90:c1
  Intel 82540EM (e1000)
carrier down 
rx queues 1, rx desc 1024, tx queues 1, tx desc 1024


GigabitEthernet2/9/0   2slave GigabitEthernet2/9/0
  Ethernet address 00:0c:29:82:90:c1
  Intel 82540EM (e1000)
carrier down 
rx queues 1, rx desc 1024, tx queues 1, tx desc 1024


rx frames ok 232
rx bytes ok15607
extended stats:
  rx good packets232
  rx good bytes15607
GigabitEthernet2/a/0   3down  GigabitEthernet2/a/0
  Ethernet address 00:0c:29:82:90:cb
  Intel 82540EM (e1000)
carrier up full duplex speed 1000 mtu 9216 
rx queues 1, rx desc 1024, tx queues 1, tx desc 1024


Regards,
xulang