Re: [vpp-dev] question about output ACL

2019-04-04 Thread xyxue
Hi ,

The classify table info is shown below. Thank you very much for your reply.

VPP1810# show classify tables verbose
  TableIdx  Sessions   NextTbl  NextNode
         0         1        -1        -1
  Heap: total: 2.06M, used: 13405245765845824, free: 2.06M, trimmable: 2.06M
no traced allocations

  nbuckets 2, skip 1 match 1 flag 0 offset 0
  mask 
  linear-search buckets 0

[1]: heap offset 696, elts 2, normal
0: [696]: next_index 0 advance 0 opaque -1 action 0 metadata 0
k: 0a02
hits 0, last_heard 0.00

1 active elements
1 free lists
0 linear-search buckets
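For context on the skip/match values above: the classifier works on 16-byte vectors, skipping `skip` vectors into the packet and then comparing the masked next `match` vectors against each session key. A simplified Python model of that matching step (assumed semantics and a hand-built sample packet, not VPP's actual code):

```python
def classify_match(packet: bytes, skip: int, mask: bytes, key: bytes) -> bool:
    # Skip `skip` 16-byte vectors into the packet, then compare the masked
    # window against the session key.
    start = skip * 16
    window = packet[start:start + len(mask)]
    return bytes(b & m for b, m in zip(window, mask)) == key

# Ethernet (14 bytes) + IPv4 header; the source address 10.0.0.2 sits at
# packet bytes 26..29, i.e. inside the second 16-byte vector (skip 1, match 1).
ip_hdr = bytes.fromhex("4500006e00090000fffda5850a0000020a010102")
pkt = bytes(14) + ip_hdr

mask = bytes(10) + b"\xff\xff\xff\xff" + bytes(2)   # covers only the IP src
key  = bytes(10) + bytes([10, 0, 0, 2]) + bytes(2)  # 10.0.0.2

print(classify_match(pkt, skip=1, mask=mask, key=key))  # True
```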

Thanks,
Xue



 
From: Andrew Yourtchenko
Date: 2019-04-04 17:17
To: 薛欣颖
CC: vpp-dev
Subject: Re: [vpp-dev] question about output ACL
hi Xue,
 
could you send the output of "show classify tables index 0 verbose"
after you set that table as outacl ?
 
Thanks!
 
--a
 
On 4/4/19, xyxue  wrote:
>
> Hi guys,
>
> I am trying to test the ACL function. Input ACL works, but output ACL is not
> effective.
>
> My configuration is below; is there anything wrong with it?
> Thanks for your response.
>
> VPP1810# show version
> vpp v18.10-7~g6ff8790-dirty built by root on localhost.localdomain at Mon
> Apr  1 15:06:48 EDT 2019
>
> VPP1810# classify table mask l3 ip4 src
> VPP1810# classify session acl-hit-next deny table-index 0 match l3 ip4 src
> 10.0.0.2
> VPP1810# set interface output acl intfc host-eth8 ip4-table 0
>
>
> Packet 1
>
> 00:04:29:245976: af-packet-input
>   af_packet: hw_if_index 5 next-index 4
> tpacket2_hdr:
>   status 0x1 len 124 snaplen 124 mac 66 net 80
>   sec 0x5ca3021e nsec 0x1d5674aa vlan 0 vlan_tpid 0
> 00:04:29:245984: ethernet-input
>   IP4: 00:10:94:00:00:02 -> ff:ff:ff:ff:ff:ff
> 00:04:29:245989: ip4-input
>   unknown 253: 10.0.0.2 -> 10.1.1.2
> tos 0x00, ttl 255, length 110, checksum 0xa585
> fragment id 0x0009
> 00:04:29:245994: ip4-lookup
>   fib 0 dpo-idx 2 flow hash: 0x
>   unknown 253: 10.0.0.2 -> 10.1.1.2
> tos 0x00, ttl 255, length 110, checksum 0xa585
> fragment id 0x0009
> 00:04:29:245999: ip4-rewrite
>   tx_sw_if_index 3 dpo-idx 2 : ipv4 via 10.1.1.2 host-eth8: mtu:0
> 000c295a907c298abc980800 flow hash: 0x
>   :
> 000c295a907c298abc980800456e0009fefda6850a020a01
>   0020: 0102
> 00:04:29:246003: ip4-outacl
>   OUTACL: sw_if_index 3, next_index 1, table 0, offset -1
> 00:04:29:246061: host-eth8-output
>   host-eth8
>   IP4: 00:0c:29:8a:bc:98 -> 00:0c:29:5a:90:70
>   unknown 253: 10.0.0.2 -> 10.1.1.2
> tos 0x00, ttl 254, length 110, checksum 0xa685
> fragment id 0x0009
>
> Thanks,
> Xue
>
>
>
 
 
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
 
View/Reply Online (#12701): https://lists.fd.io/g/vpp-dev/message/12701
Mute This Topic: https://lists.fd.io/mt/30894420/675372
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [xy...@fiberhome.com]
-=-=-=-=-=-=-=-=-=-=-=-
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12702): https://lists.fd.io/g/vpp-dev/message/12702
Mute This Topic: https://lists.fd.io/mt/30894420/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] question about output ACL

2019-04-04 Thread xyxue

Hi guys,

I am trying to test the ACL function. Input ACL works, but output ACL is not
effective.

My configuration is below; is there anything wrong with it? Thanks
for your response.

VPP1810# show version 
vpp v18.10-7~g6ff8790-dirty built by root on localhost.localdomain at Mon Apr  
1 15:06:48 EDT 2019

VPP1810# classify table mask l3 ip4 src
VPP1810# classify session acl-hit-next deny table-index 0 match l3 ip4 src 
10.0.0.2
VPP1810# set interface output acl intfc host-eth8 ip4-table 0
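For comparison, the input-direction attachment (which the poster reports working) uses the same syntax with "input" in place of "output" — a sketch reusing the same interface and table index:

```
set interface input acl intfc host-eth8 ip4-table 0
```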


Packet 1

00:04:29:245976: af-packet-input
  af_packet: hw_if_index 5 next-index 4
tpacket2_hdr:
  status 0x1 len 124 snaplen 124 mac 66 net 80
  sec 0x5ca3021e nsec 0x1d5674aa vlan 0 vlan_tpid 0
00:04:29:245984: ethernet-input
  IP4: 00:10:94:00:00:02 -> ff:ff:ff:ff:ff:ff
00:04:29:245989: ip4-input
  unknown 253: 10.0.0.2 -> 10.1.1.2
tos 0x00, ttl 255, length 110, checksum 0xa585
fragment id 0x0009
00:04:29:245994: ip4-lookup
  fib 0 dpo-idx 2 flow hash: 0x
  unknown 253: 10.0.0.2 -> 10.1.1.2 


tos 0x00, ttl 255, length 110, checksum 0xa585
fragment id 0x0009
00:04:29:245999: ip4-rewrite
  tx_sw_if_index 3 dpo-idx 2 : ipv4 via 10.1.1.2 host-eth8: mtu:0 
000c295a907c298abc980800 flow hash: 0x
  : 000c295a907c298abc980800456e0009fefda6850a020a01
  0020: 0102
00:04:29:246003: ip4-outacl
  OUTACL: sw_if_index 3, next_index 1, table 0, offset -1
00:04:29:246061: host-eth8-output
  host-eth8
  IP4: 00:0c:29:8a:bc:98 -> 00:0c:29:5a:90:70
  unknown 253: 10.0.0.2 -> 10.1.1.2
tos 0x00, ttl 254, length 110, checksum 0xa685
fragment id 0x0009
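One detail worth checking in the trace above: ip4-rewrite decrements the TTL from 255 to 254, and the IP header checksum moves from 0xa585 to 0xa685. A short Python verification of both values (RFC 791 one's-complement sum), rebuilding the header from the traced fields (length 110, id 9, protocol 253):

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    # One's-complement sum of 16-bit words; checksum field already zeroed.
    s = 0
    for i in range(0, len(header), 2):
        s += (header[i] << 8) | header[i + 1]
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

def header(ttl: int) -> bytes:
    # ver/ihl, tos, total length, id, flags/frag, ttl, proto, cksum(0), src, dst
    return struct.pack("!BBHHHBBH4s4s", 0x45, 0, 110, 9, 0, ttl, 253, 0,
                       bytes([10, 0, 0, 2]), bytes([10, 1, 1, 2]))

print(hex(ipv4_checksum(header(255))))  # 0xa585, as seen at ip4-input
print(hex(ipv4_checksum(header(254))))  # 0xa685, as seen at host-eth8-output
```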

Thanks,
Xue


View/Reply Online (#12700): https://lists.fd.io/g/vpp-dev/message/12700


Re: [vpp-dev] question about multicast mpls

2018-12-03 Thread xyxue
Hi Neale,

I found the "multicast" in the command below:
VLIB_CLI_COMMAND (create_mpls_tunnel_command, static) = {
  .path = "mpls tunnel",
  .short_help =
  "mpls tunnel [multicast] [l2-only] via [next-hop-address]
[next-hop-interface] [next-hop-table <value>] [weight <value>] [preference
<value>] [udp-encap-id <value>] [ip4-lookup-in-table <value>]
[ip6-lookup-in-table <value>] [mpls-lookup-in-table <value>] [resolve-via-host]
[resolve-via-connected] [rx-ip4 <interface>] [out-labels <value>]",
  .function = vnet_create_mpls_tunnel_command_fn,
};
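Going by the short_help above, the multicast flavour of the tunnel is created by adding the "multicast" keyword. A hedged sketch (next-hop address, interface name, and label value are placeholders):

```
mpls tunnel multicast via 10.0.0.1 host-eth1 out-labels 33
```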

Is this enough for MPLS multicast?
Or is it "vnet_mpls_local_label()" multicast plus "mpls tunnel" multicast that
together make up MPLS multicast?

Thanks,
Xue



 
From: Neale Ranns (nranns)
Date: 2018-11-30 16:31
To: 薛欣颖
CC: vpp-dev
Subject: Re: [vpp-dev] question about multicast mpls
 
Hi Xue,
 
I don’t have any. And a quick look at the CLI implementation in 
vnet_mpls_local_label() shows it does not accept a ‘multicast’ keyword.
 
/neale
 
 
From:  on behalf of xyxue
Date: Friday, 30 November 2018 at 01:21
To: "Neale Ranns (nranns)"
Cc: vpp-dev
Subject: Re: [vpp-dev] question about multicast mpls
 
Hi Neale,
 
Are there any CLI configuration examples for multicast MPLS?
 
Thanks,
Xue
 
From: Neale Ranns via Lists.Fd.Io
Date: 2018-11-28 20:59
To: 薛欣颖; vpp-dev
CC: vpp-dev
Subject: Re: [vpp-dev] question about multicast mpls
Hi Xue,
 
MPLS multicast has been supported for a while. Please see the unit tests for 
examples: test/test_mpls.py test_mcast_*()
 
Regards,
Neale
 
 
From:  on behalf of xyxue
Date: Wednesday, 28 November 2018 at 13:04
To: vpp-dev
Subject: [vpp-dev] question about multicast mpls
 
 
Hi guys,
 
I found "multicast" in the mpls cli. Does VPP support multicast MPLS now?
Is there any example of multicast MPLS?
 
Thank you very much for your reply.
 
Thanks,
Xue


View/Reply Online (#11477): https://lists.fd.io/g/vpp-dev/message/11477


Re: [vpp-dev] question about multicast mpls

2018-11-29 Thread xyxue
Hi Neale,

Are there any CLI configuration examples for multicast MPLS?

Thanks,
Xue

From: Neale Ranns via Lists.Fd.Io
Date: 2018-11-28 20:59
To: 薛欣颖; vpp-dev
CC: vpp-dev
Subject: Re: [vpp-dev] question about multicast mpls
Hi Xue,
 
MPLS multicast has been supported for a while. Please see the unit tests for 
examples: test/test_mpls.py test_mcast_*()
 
Regards,
Neale
 
 
From:  on behalf of xyxue
Date: Wednesday, 28 November 2018 at 13:04
To: vpp-dev
Subject: [vpp-dev] question about multicast mpls
 
 
Hi guys,
 
I found "multicast" in the mpls cli. Does VPP support multicast MPLS now?
Is there any example of multicast MPLS?
 
Thank you very much for your reply.
 
Thanks,
Xue


View/Reply Online (#11465): https://lists.fd.io/g/vpp-dev/message/11465


[vpp-dev] question about ROSEN MVPN

2018-11-28 Thread xyxue
Hi guys,

Does VPP support forwarding for ROSEN MVPN (described in RFC 6037)?
Or is it on the VPP roadmap?

Thank you very much for your reply.

Thanks,
Xue


View/Reply Online (#11446): https://lists.fd.io/g/vpp-dev/message/11446


[vpp-dev] question about multicast mpls

2018-11-28 Thread xyxue

Hi guys,

I found "multicast" in the mpls cli. Does VPP support multicast MPLS now?
Is there any example of multicast MPLS?

Thank you very much for your reply.

Thanks,
Xue


View/Reply Online (#11445): https://lists.fd.io/g/vpp-dev/message/11445


[vpp-dev] question about L2VPN/EVPN

2018-10-12 Thread xyxue

Hi guys,

I'm testing EVPN. With the configuration on
https://wiki.fd.io/view/VPP/Using_VPP_as_a_VXLAN_Tunnel_Terminator , the
VXLAN/EVPN function tests OK.
Does VPP support L2VPN/EVPN? What is the configuration for L2VPN/EVPN?

Thanks,
Xue


View/Reply Online (#10807): https://lists.fd.io/g/vpp-dev/message/10807


Re: [vpp-dev] question about FRR

2018-10-10 Thread xyxue

Hi Neale,

Sorry for not describing it clearly. I wanted to know whether VPP supports fast
reroute. I have got the answer.
Thank you very much for your reply.

Thanks,
Xue



 
From: Neale Ranns via Lists.Fd.Io
Date: 2018-10-10 15:38
To: 薛欣颖; vpp-dev
CC: vpp-dev
Subject: Re: [vpp-dev] question about FRR
 
Hi Xue,
 
which FRR ;)
This one:
  https://tools.ietf.org/html/rfc5286
we don’t support
 
For this one:
  https://frrouting.org/
I’ll leave it to the community to comment.
 
/neale
 
 
From:  on behalf of xyxue
Date: Wednesday, 10 October 2018 at 08:35
To: vpp-dev
Subject: [vpp-dev] question about FRR
 
 
Hi guys,
 
Does VPP support FRR? What is the configuration for FRR?
 
Thanks,
Xue


View/Reply Online (#10786): https://lists.fd.io/g/vpp-dev/message/10786


[vpp-dev] question about FRR

2018-10-10 Thread xyxue

Hi guys,

Does VPP support FRR? What is the configuration for FRR?

Thanks,
Xue


View/Reply Online (#10783): https://lists.fd.io/g/vpp-dev/message/10783


Re: [vpp-dev] question about bfd protocol message

2018-09-25 Thread xyxue
Hi Klement,

I'm interested in BFD for LSPs (RFC 5884). Thank you very much for your reply.

Thanks,
Xue


 
From: Klement Sekera via Lists.Fd.Io
Date: 2018-09-25 16:57
To: 薛欣颖; vpp-dev
CC: vpp-dev
Subject: Re: [vpp-dev] question about bfd protocol message
Hi Xue,
 
I'm not sure what protocol message you mean. Can you please elaborate or
point to RFC number & section which describes the message you're
interested in?
 
Thanks,
Klement
 
Quoting xyxue (2018-09-25 09:48:58)
>Hi guys,
>I'm testing the bfd. Does BFD support protocol messages?
>Thanks,
>Xue
 
 
View/Reply Online (#10637): https://lists.fd.io/g/vpp-dev/message/10637
View/Reply Online (#10649): https://lists.fd.io/g/vpp-dev/message/10649


[vpp-dev] question about bfd protocol message

2018-09-25 Thread xyxue

Hi guys,

I'm testing the bfd. Does BFD support protocol messages?

Thanks,
Xue


View/Reply Online (#10633): https://lists.fd.io/g/vpp-dev/message/10633


[vpp-dev] question about l2vpn with sr mpls

2018-09-18 Thread xyxue

Hi guys,

I'm testing SR MPLS. I found the example of L3VPN with SR MPLS at
https://wiki.fd.io/view/VPP/Segment_Routing_for_MPLS.

Is there any example of L2VPN with SR MPLS? Does VPP support L2VPN
with SR MPLS?

Thanks,
Xue


View/Reply Online (#10561): https://lists.fd.io/g/vpp-dev/message/10561


[vpp-dev] question about qos

2018-06-13 Thread xyxue

Hi guys,

Is there a plan to support modifying the QoS (policer configuration) parameters?

Thanks,
xue



View/Reply Online (#9595): https://lists.fd.io/g/vpp-dev/message/9595



[vpp-dev] SIGABRT when create two host interface in the vat

2018-06-07 Thread xyxue

Hi guys,

When I create two host interfaces in VAT, a SIGABRT appears about two minutes
later. The 'fill_free_list' function was called about 56 times.

VPP# /home/vpp/build-data/../src/vlib/buffer.c:615 (alloc_from_free_list) 
assertion `len >= n_alloc_buffers' fails

Program received signal SIGABRT, Aborted.
0xb7ffd424 in __kernel_vsyscall ()
(gdb) bt
#0  0xb7ffd424 in __kernel_vsyscall ()
#1  0xb72a02d7 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#2  0xb72a1b13 in __GI_abort () at abort.c:90
#3  0x0804cb1c in os_panic () at /home/vpp/build-data/../src/vpp/vnet/main.c:353
#4  0xb74b9a8c in debugger () at /home/vpp/build-data/../src/vppinfra/error.c:84
#5  0xb74b9db1 in _clib_error (how_to_die=2, function_name=0x0, line_number=0, 
fmt=0xb7f7bd58 "%s:%d (%s) assertion `%s' fails")
at /home/vpp/build-data/../src/vppinfra/error.c:143
#6  0xb7ef929d in alloc_from_free_list (vm=0xb7fa8920 , 
free_list=0x376e0400, alloc_buffers=0x3778b9a0, n_alloc_buffers=256)
at /home/vpp/build-data/../src/vlib/buffer.c:615
#7  0xb7ef9449 in vlib_buffer_alloc_internal (vm=0xb7fa8920 , 
buffers=0x3778b9a0, n_buffers=256) at 
/home/vpp/build-data/../src/vlib/buffer.c:638
#8  0xb7bf2048 in vlib_buffer_alloc (vm=0xb7fa8920 , 
buffers=0x3778b9a0, n_buffers=256) at 
/home/vpp/build-data/../src/vlib/buffer_funcs.h:263
#9  0xb7bf3889 in af_packet_device_input_fn (vm=0xb7fa8920 , 
node=0x373b1b00, frame=0x0, apif=0x377815a8)
at /home/vpp/build-data/../src/vnet/devices/af_packet/node.c:207
#10 0xb7bf40fa in af_packet_input_fn (vm=0xb7fa8920 , 
node=0x373b1b00, frame=0x0) at 
/home/vpp/build-data/../src/vnet/devices/af_packet/node.c:352
#11 0xb7f34c2e in dispatch_node (vm=0xb7fa8920 , 
node=0x373b1b00, type=VLIB_NODE_TYPE_INPUT, 
dispatch_state=VLIB_NODE_STATE_INTERRUPT, frame=0x0, 
last_time_stamp=127565946476942) at 
/home/vpp/build-data/../src/vlib/main.c:1010
#12 0xb7f36a16 in vlib_main_or_worker_loop (vm=0xb7fa8920 , 
is_main=1) at /home/vpp/build-data/../src/vlib/main.c:1556
#13 0xb7f371a7 in vlib_main_loop (vm=0xb7fa8920 ) at 
/home/vpp/build-data/../src/vlib/main.c:1649
#14 0xb7f377c9 in vlib_main (vm=0xb7fa8920 , 
input=0x37359fd0) at /home/vpp/build-data/../src/vlib/main.c:1806
#15 0xb7f728db in thread0 (arg=3086649632) at 
/home/vpp/build-data/../src/vlib/unix/main.c:622
#16 0xb74cae48 in clib_calljmp () at 
/home/vpp/build-data/../src/vppinfra/longjmp.S:204
#17 0xb7fa8920 in ?? () from 
/home/vpp/build-root/install-vpp_debug-native/vpp/lib/libvlib.so.0
#18 0x0804c6c2 in main (argc=3, argv=0xb584) at 
/home/vpp/build-data/../src/vpp/vnet/main.c:273
(gdb) 

Is this a problem in vpp? How can I solve the problem?

Thanks,
xue




[vpp-dev] SIGSEGV after del a host interface

2018-06-07 Thread xyxue

Hi guys,

When I create a host interface and then delete it, a SIGSEGV appears. It
doesn't happen every time.
Program received signal SIGSEGV, Segmentation fault.
0x76ca931e in vnet_get_sup_sw_interface (vnm=0x77627400 
, sw_if_index=1) at 
/home/vpp/18.01/build-data/../src/vnet/interface_funcs.h:84
84          if (sw->type == VNET_SW_INTERFACE_TYPE_SUB ||
(gdb) bt
#0  0x76ca931e in vnet_get_sup_sw_interface (vnm=0x77627400 
, sw_if_index=1) at 
/home/vpp/18.01/build-data/../src/vnet/interface_funcs.h:84
#1  0x76ca9397 in vnet_get_sup_hw_interface (vnm=0x77627400 
, sw_if_index=1) at 
/home/vpp/18.01/build-data/../src/vnet/interface_funcs.h:93
#2  0x76caba44 in ethernet_input_inline (vm=0x7791dce0 
, node=0x7fffb58b55c0, from_frame=0x7fffb5dd5440, 
variant=ETHERNET_INPUT_VARIANT_ETHERNET) at 
/home/vpp/18.01/build-data/../src/vnet/ethernet/node.c:645
#3  0x76cac090 in ethernet_input (vm=0x7791dce0 , 
node=0x7fffb58b55c0, from_frame=0x7fffb5dd5440)
at /home/vpp/18.01/build-data/../src/vnet/ethernet/node.c:779
#4  0x7769b628 in dispatch_node (vm=0x7791dce0 , 
node=0x7fffb58b55c0, type=VLIB_NODE_TYPE_INTERNAL, 
dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7fffb5dd5440, 
last_time_stamp=6119383983860) at 
/home/vpp/18.01/build-data/../src/vlib/main.c:1010
#5  0x7769bc0b in dispatch_pending_node (vm=0x7791dce0 
, pending_frame_index=0, last_time_stamp=6119383983860)
at /home/vpp/18.01/build-data/../src/vlib/main.c:1160
#6  0x7769de22 in vlib_main_or_worker_loop (vm=0x7791dce0 
, is_main=1) at 
/home/vpp/18.01/build-data/../src/vlib/main.c:1630
#7  0x7769ded1 in vlib_main_loop (vm=0x7791dce0 ) 
at /home/vpp/18.01/build-data/../src/vlib/main.c:1649
#8  0x7769e61a in vlib_main (vm=0x7791dce0 , 
input=0x7fffb5705fb0) at /home/vpp/18.01/build-data/../src/vlib/main.c:1804
#9  0x776e110e in thread0 (arg=140737346919648) at 
/home/vpp/18.01/build-data/../src/vlib/unix/main.c:625
#10 0x76928570 in clib_calljmp () at 
/home/vpp/18.01/build-data/../src/vppinfra/longjmp.S:128
#11 0x7fffd390 in ?? ()
#12 0x776e15b8 in vlib_unix_main (argc=31, argv=0x64c800) at 
/home/vpp/18.01/build-data/../src/vlib/unix/main.c:690
#13 0x00405d12 in main (argc=31, argv=0x64c800) at 
/home/vpp/18.01/build-data/../src/vpp/vnet/main.c:241
(gdb) 

What should I do to solve the problem?

Thanks,
xue




Re: [vpp-dev] The VCL server CPU utilization is 100% , if there is no message to the epoll_wait

2018-06-04 Thread xyxue

Hi daw,

Thank you for your reply. Another question: will 'listen', 'send', 'recv', and
'connect' also cause 100% CPU utilization?

Thanks,
Xyxue


 
From: Dave Wallace
Date: 2018-06-05 02:22
To: vpp-dev; 薛欣颖
Subject: Re: [vpp-dev] The VCL server CPU utilization is 100% , if there is no
message to the epoll_wait
Xyxue,

This is a known by-product of the existing [prototype] epoll_wait 
implementation. There is currently no mechanism in epoll_wait to have the 
thread block on a condvar which would cause the thread to sleep instead of 
sitting in a continuous polling loop.

Fixing this is on the list of things to do in VCL, but there is currently no 
ETA for when it will be resolved.

Thanks,
-daw-
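The fix Dave describes — blocking on a condition variable instead of busy-polling — looks roughly like this generic sketch (Python for brevity; VCL itself is C, and the names here are made up for illustration):

```python
import threading
import time

events = []
cond = threading.Condition()

def wait_for_events(timeout):
    # Sleep on the condition variable until an event arrives or the timeout
    # expires; the waiting thread consumes no CPU, unlike a busy-poll loop.
    with cond:
        deadline = time.monotonic() + timeout
        while not events:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            cond.wait(remaining)
        ready = list(events)
        events.clear()
        return ready

def post_event(ev):
    # Producer side: enqueue the event and wake the sleeping waiter.
    with cond:
        events.append(ev)
        cond.notify()

threading.Timer(0.1, post_event, args=("session-rx",)).start()
print(wait_for_events(timeout=5.0))  # returns shortly after the event posts
```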

On 6/4/2018 6:13 AM, xyxue wrote:

Hi guys,

I'm testing the VCL. The VCL server's CPU utilization is 100% if there is no
message for epoll_wait. Is there anything I can do to solve it?
Is there the same problem in the VCL client?

Thanks,
Xyxue






[vpp-dev] The VCL server CPU utilization is 100% , if there is no message to the epoll_wait

2018-06-04 Thread xyxue

Hi guys,

I'm testing the VCL. The VCL server's CPU utilization is 100% if there is no
message for epoll_wait. Is there anything I can do to solve it?
Is there the same problem in the VCL client?

Thanks,
Xyxue




[vpp-dev] Is there a plan to support the udp mode in vcl?

2018-05-22 Thread xyxue
Hi Florin, Ed,

I'm testing the VCL in UDP mode. After debugging, VPP can receive the UDP
packets, but the packets are dropped. Trace info is shown below:

Packet 1 

01:46:56:707843: af-packet-input 
af_packet: hw_if_index 1 next-index 4 
tpacket2_hdr: 
status 0x81 len 106 snaplen 106 mac 66 net 80 
sec 0x5b03780d nsec 0x3498f0be vlan 0 vlan_tpid 0 
01:47:04:449319: ethernet-input 
IP4: 02:fe:3f:8d:97:a4 -> 02:fe:6e:ca:46:82 
01:47:04:449340: ip4-input 
UDP: 2.1.1.1 -> 2.1.1.2 
tos 0x00, ttl 255, length 92, checksum 0x758c 
fragment id 0x, flags DONT_FRAGMENT 
UDP: 3536 -> 2111 
length 72, checksum 0x8106 
01:47:04:449349: ip4-lookup 
fib 0 dpo-idx 7 flow hash: 0x 
UDP: 2.1.1.1 -> 2.1.1.2 
tos 0x00, ttl 255, length 92, checksum 0x758c 
fragment id 0x, flags DONT_FRAGMENT 
UDP: 3536 -> 2111 
length 72, checksum 0x8106 
01:47:04:449360: ip4-local 
UDP: 2.1.1.1 -> 2.1.1.2 
tos 0x00, ttl 255, length 92, checksum 0x758c 
fragment id 0x, flags DONT_FRAGMENT 
UDP: 3536 -> 2111 
length 72, checksum 0x8106 
01:47:04:449367: ip4-udp-lookup 
UDP: src-port 3536 dst-port 2111 
01:47:04:449379: udp4-input 
UDP_INPUT: connection 0, disposition 4, thread 0 
01:47:04:449415: error-drop 
udp4-input: UDP packets enqueued 

Is there a plan to support UDP mode in VCL?

Thanks,
Xyxue




Re: [vpp-dev] problem in xcrw testing

2018-05-16 Thread xyxue
Hi John,

I changed my configuration to that shown below:

create host-interface name eth1 mac 00:50:56:3a:32:d2 
create host-interface name eth5 mac 00:50:56:31:35:92
set interface l3 host-eth1
set interface ip address host-eth1 11.1.1.2/24
create gre tunnel src 11.1.1.2 dst 11.1.1.1 teb
set interface l2 xcrw host-eth5 next gre-output  rw 
4558fd2fa5720b0101020b0101016558
create bridge-domain 1
set interface l2 bridge teb-gre0 1
set interface l2 bridge xcrw0 1
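As an aside, the "rw" hex string above is an IPv4-plus-GRE encapsulation header. The archive appears to have dropped some 0x00 bytes from it, but the recognizable fields can still be picked out; the offsets below refer to the 16 bytes as posted, not to the original rewrite:

```python
import ipaddress

rw = bytes.fromhex("4558fd2fa5720b0101020b0101016558")

ip_proto = rw[3]                               # 0x2f = 47 = GRE
src = ipaddress.ip_address(rw[6:10])           # tunnel source
dst = ipaddress.ip_address(rw[10:14])          # tunnel destination
gre_proto = int.from_bytes(rw[14:16], "big")   # 0x6558 = Transparent Ethernet Bridging

print(ip_proto, src, dst, hex(gre_proto))  # 47 11.1.1.2 11.1.1.1 0x6558
```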

There is a crash:
Program received signal SIGSEGV, Segmentation fault.
0x76f30e72 in gre_interface_tx_inline (vm=0x7791d1a0 
, node=0x7fffb6c87600, frame=0x7fffb676fbc0) at 
/home/vpp/build-data/../src/vnet/gre/gre.c:366
366   hw = vnet_get_hw_interface (vnm, sw->hw_if_index);
(gdb) bt
#0  0x76f30e72 in gre_interface_tx_inline (vm=0x7791d1a0 
, node=0x7fffb6c87600, frame=0x7fffb676fbc0) at 
/home/vpp/build-data/../src/vnet/gre/gre.c:366
#1  0x76f3126b in gre_interface_tx (vm=0x7791d1a0 
, node=0x7fffb6c87600, frame=0x7fffb676fbc0) at 
/home/vpp/build-data/../src/vnet/gre/gre.c:412
#2  0x7769b2aa in dispatch_node (vm=0x7791d1a0 , 
node=0x7fffb6c87600, type=VLIB_NODE_TYPE_INTERNAL, 
dispatch_state=VLIB_NODE_STATE_POLLING, 
frame=0x7fffb676fbc0, last_time_stamp=35926090675277) at 
/home/vpp/build-data/../src/vlib/main.c:1010
#3  0x7769b88d in dispatch_pending_node (vm=0x7791d1a0 
, pending_frame_index=4, last_time_stamp=35926090675277)
at /home/vpp/build-data/../src/vlib/main.c:1160
#4  0x7769daa4 in vlib_main_or_worker_loop (vm=0x7791d1a0 
, is_main=1) at /home/vpp/build-data/../src/vlib/main.c:1630
#5  0x7769db53 in vlib_main_loop (vm=0x7791d1a0 ) 
at /home/vpp/build-data/../src/vlib/main.c:1649
#6  0x7769e2ad in vlib_main (vm=0x7791d1a0 , 
input=0x7fffb5704fb0) at /home/vpp/build-data/../src/vlib/main.c:1806
#7  0x776e0d64 in thread0 (arg=140737346916768) at 
/home/vpp/build-data/../src/vlib/unix/main.c:617
#8  0x76927560 in clib_calljmp () at 
/home/vpp/build-data/../src/vppinfra/longjmp.S:128
#9  0x7fffd350 in ?? ()
#10 0x776e1223 in vlib_unix_main (argc=31, argv=0x653800) at 
/home/vpp/build-data/../src/vlib/unix/main.c:682
#11 0x00406202 in main (argc=31, argv=0x653800) at 
/home/vpp/build-data/../src/vpp/vnet/main.c:241
(gdb) 

Are there any examples of xcrw? Is there any configuration I missed?

Thanks,
Xyxue
From: John Lo (loj)
Date: 2018-05-16 16:02
To: 薛欣颖; Neale Ranns (nranns); vpp-dev
Subject: Re: [vpp-dev] problem in xcrw testing
Hi Xyxue,
 
The L2 cross-connect CLI, via either xconnect or xcrw, puts the source/input
interface into L2 mode and sets up a one-directional cross-connect to the
specified peer/output interface. The expectation is that there is an
xcrw/cross-connect configuration for the other direction, specifying the other
interface as source/input, so it will also be in L2 mode.
 
Hope this helps, with regards,
John
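Expressed as CLI, John's point means one xcrw line per direction — a hedged sketch based on the configuration earlier in this thread (the reverse rewrite string "0000" is a made-up placeholder, not a working value):

```
# forward: host-eth5 -> GRE tunnel (rewrite from the original post)
set interface l2 xcrw host-eth5 next teb-gre0-output rw 4558fd2fa5720b0101020b0101016558
# reverse: tunnel -> host-eth5 (placeholder rewrite)
set interface l2 xcrw teb-gre0 next host-eth5-output rw 0000
```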
 
From: 薛欣颖 <xy...@fiberhome.com> 
Sent: Wednesday, May 16, 2018 3:46 AM
To: John Lo (loj) <l...@cisco.com>; Neale Ranns (nranns) <nra...@cisco.com>; 
vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: Re: [vpp-dev] problem in xcrw testing
 
Hi John,
 
The description of 'set interface l2 xcrw' in docs is that 'Add or delete a 
Layer 2 to Layer 3 rewrite cross-connect'.
After putting the GRE tunnel in L2 mode, there is a 'Layer 2 to Layer 2 rewrite
cross-connect'. Is there anything wrong in my understanding?


Thanks,
Xyxue


 
From: John Lo (loj)
Date: 2018-05-15 20:32
To: 薛欣颖; Neale Ranns (nranns); vpp-dev
Subject: Re: [vpp-dev] problem in xcrw testing
The GRE tunnel needs to be put in L2 mode so it can be a valid l2-output 
interface. You can do that by putting the GRE tunnel into a bridge domain, or 
xconnect it to peer interface:
 
vpp# set int l2 bri ?
  set interface l2 bridge     set interface l2 bridge <interface>
<bridge-domain-id> [bvi] [shg]
vpp# set int l2 xconnect ?
  set interface l2 xconnect   set interface l2 xconnect <interface>
<peer interface>
 
Regards,
John
 
From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of xyxue
Sent: Tuesday, May 15, 2018 8:10 AM
To: Neale Ranns (nranns) <nra...@cisco.com>; vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] problem in xcrw testing
 
 
Hi Neale,
 
After I changed my configuration (the configuration and trace info are shown
below)
 
create host-interface name eth1 mac 00:50:56:3a:32:d2 
create host-interface name eth5 mac 00:50:56:31:35:92
set interface ip address host-eth1 11.1.1.2/24
create gre tunnel src 11.1.1.2 dst 11.1.1.1 teb
set interface l2 xcrw host-eth5 next teb-gre0-output rw 
4558fd2fa5720b0101020b0101016558



VPP# show trace
--- Start of thread 0 vpp_main ---
Packet 1

00:01:18:847131: af-packet-input
  af_packet: hw_if_index 2 next-index 4
tpacket2_hdr:
  status 0x2001 len 78 snapl

Re: [vpp-dev] problem in xcrw testing

2018-05-16 Thread xyxue
Hi John,

The description of 'set interface l2 xcrw' in docs is that 'Add or delete a 
Layer 2 to Layer 3 rewrite cross-connect'.
After putting the GRE tunnel in L2 mode, there is a 'Layer 2 to Layer 2 rewrite
cross-connect'. Is there anything wrong in my understanding?

Thanks,
Xyxue


 
From: John Lo (loj)
Date: 2018-05-15 20:32
To: 薛欣颖; Neale Ranns (nranns); vpp-dev
Subject: Re: [vpp-dev] problem in xcrw testing
The GRE tunnel needs to be put in L2 mode so it can be a valid l2-output 
interface. You can do that by putting the GRE tunnel into a bridge domain, or 
xconnect it to peer interface:
 
vpp# set int l2 bri ?
  set interface l2 bridge     set interface l2 bridge <interface>
<bridge-domain-id> [bvi] [shg]
vpp# set int l2 xconnect ?
  set interface l2 xconnect   set interface l2 xconnect <interface>
<peer interface>
 
Regards,
John
 
From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of xyxue
Sent: Tuesday, May 15, 2018 8:10 AM
To: Neale Ranns (nranns) <nra...@cisco.com>; vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] problem in xcrw testing
 
 
Hi Neale,
 
After I changed my configuration (the configuration and trace info are shown
below)
 
create host-interface name eth1 mac 00:50:56:3a:32:d2 
create host-interface name eth5 mac 00:50:56:31:35:92
set interface ip address host-eth1 11.1.1.2/24
create gre tunnel src 11.1.1.2 dst 11.1.1.1 teb
set interface l2 xcrw host-eth5 next teb-gre0-output rw 
4558fd2fa5720b0101020b0101016558


VPP# show trace
--- Start of thread 0 vpp_main ---
Packet 1

00:01:18:847131: af-packet-input
  af_packet: hw_if_index 2 next-index 4
tpacket2_hdr:
  status 0x2001 len 78 snaplen 78 mac 66 net 80
  sec 0x5afaa73c nsec 0x2c4032d7 vlan 0
00:01:18:847197: ethernet-input
  IP4: 00:50:56:c0:00:05 -> 00:50:56:c0:00:04
00:01:18:847305: l2-input
  l2-input: sw_if_index 2 dst 00:50:56:c0:00:04 src 00:50:56:c0:00:05
00:01:18:847323: l2-output
  l2-output: sw_if_index 4 dst 00:50:56:c0:00:04 src 00:50:56:c0:00:05 data 08 
00 45 01 00 40 00 01 00 00 40 01
00:01:18:847336: error-drop
  l2-output: L2 Output interface not valid


It doesn't seem to be in effect. Is there something I missed?


Thanks,
Xyxue


 
From: Neale Ranns
Date: 2018-05-15 15:58
To: 薛欣颖; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] problem in xcrw testing
Hi Xyxue,
 
The GRE tunnel needs to be in L2 mode, which in the case of GRE is known as 
‘transparent ethernet bridging’. So:
  create gre tunnel src 11.1.1.2 dst 11.1.1.1 teb
 
/neale


 
 
From: <vpp-dev@lists.fd.io> on behalf of xyxue <xy...@fiberhome.com>
Date: Tuesday, 15 May 2018 at 08:50
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: [vpp-dev] problem in xcrw testing
 
 
Hi guys,

I'm testing xcrw, but the packets hit 'error-drop'. My configuration and
trace info are shown below:

create host-interface name eth1 mac 00:50:56:3a:32:d2 
create host-interface name eth5 mac 00:50:56:31:35:92
set interface ip address host-eth1 11.1.1.2/24
create gre tunnel src 11.1.1.2 dst 11.1.1.1
set interface ip address gre0 100.1.1.2/24
VPP# set interface l2 xcrw host-eth5 next gre4-input rw 
4558fd2fa5720b0101020b0101016558
VPP# set interface l2 xcrw xcrw0 next host-eth1-output rw 1234

VPP# show trace
--- Start of thread 0 vpp_main ---
Packet 1

00:00:22:692356: af-packet-input
  af_packet: hw_if_index 2 next-index 4
tpacket2_hdr:
  status 0x2001 len 78 snaplen 78 mac 66 net 80
  sec 0x5af559c7 nsec 0x8bb04a9 vlan 0
00:00:22:692493: ethernet-input
  IP4: 00:50:56:c0:00:05 -> 00:50:56:31:35:92
00:00:22:692706: l2-input
  l2-input: sw_if_index 2 dst 00:50:56:31:35:92 src 00:50:56:c0:00:05
00:00:22:692724: l2-output
  l2-output: sw_if_index 4 dst 00:50:56:31:35:92 src 00:50:56:c0:00:05 data 08 
00 45 01 00 40 00 01 00 00 40 01
00:00:22:692737: l2-xcrw
  L2_XCRW: next index 1 tx_fib_index -1
00:00:22:692753: gre4-input
  GRE: tunnel 0 len 88 src 11.1.1.2 dst 11.1.1.1
00:00:22:692771: error-drop
  gre4-input: GRE input packets dropped due to missing tunnel
  
Is there any problem in my configuration?
  
Thanks,
Xyxue





Re: [vpp-dev] problem in xcrw testing

2018-05-15 Thread xyxue

Hi Neale,

After I changed my configuration (the configuration and trace info are shown
below)

create host-interface name eth1 mac 00:50:56:3a:32:d2 
create host-interface name eth5 mac 00:50:56:31:35:92
set interface ip address host-eth1 11.1.1.2/24
create gre tunnel src 11.1.1.2 dst 11.1.1.1 teb
set interface l2 xcrw host-eth5 next teb-gre0-output rw 
4558fd2fa5720b0101020b0101016558

VPP# show trace
--- Start of thread 0 vpp_main ---
Packet 1

00:01:18:847131: af-packet-input
  af_packet: hw_if_index 2 next-index 4
tpacket2_hdr:
  status 0x2001 len 78 snaplen 78 mac 66 net 80
  sec 0x5afaa73c nsec 0x2c4032d7 vlan 0
00:01:18:847197: ethernet-input
  IP4: 00:50:56:c0:00:05 -> 00:50:56:c0:00:04
00:01:18:847305: l2-input
  l2-input: sw_if_index 2 dst 00:50:56:c0:00:04 src 00:50:56:c0:00:05
00:01:18:847323: l2-output
  l2-output: sw_if_index 4 dst 00:50:56:c0:00:04 src 00:50:56:c0:00:05 data 08 
00 45 01 00 40 00 01 00 00 40 01
00:01:18:847336: error-drop
  l2-output: L2 Output interface not valid

It doesn't seem to be in effect. Is there something I missed?

Thanks,
Xyxue


 
From: Neale Ranns
Date: 2018-05-15 15:58
To: 薛欣颖; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] problem in xcrw testing
Hi Xyxue,
 
The GRE tunnel needs to be in L2 mode, which in the case of GRE is known as 
‘transparent ethernet bridging’. So:
  create gre tunnel src 11.1.1.2 dst 11.1.1.1 teb
 
/neale

 
 
From: <vpp-dev@lists.fd.io> on behalf of xyxue <xy...@fiberhome.com>
Date: Tuesday, 15 May 2018 at 08:50
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: [vpp-dev] problem in xcrw testing
 
 
Hi guys,

I’m testing xcrw, but the packets were dropped ('error-drop'). My configuration and 
trace info are shown below:

create host-interface name eth1 mac 00:50:56:3a:32:d2 
create host-interface name eth5 mac 00:50:56:31:35:92
set interface ip address host-eth1 11.1.1.2/24
create gre tunnel src 11.1.1.2 dst 11.1.1.1
set interface ip address gre0 100.1.1.2/24
VPP# set interface l2 xcrw host-eth5 next gre4-input rw 
4558fd2fa5720b0101020b0101016558
VPP# set interface l2 xcrw xcrw0 next host-eth1-output rw 1234

VPP# show trace
--- Start of thread 0 vpp_main ---
Packet 1

00:00:22:692356: af-packet-input
  af_packet: hw_if_index 2 next-index 4
tpacket2_hdr:
  status 0x2001 len 78 snaplen 78 mac 66 net 80
  sec 0x5af559c7 nsec 0x8bb04a9 vlan 0
00:00:22:692493: ethernet-input
  IP4: 00:50:56:c0:00:05 -> 00:50:56:31:35:92
00:00:22:692706: l2-input
  l2-input: sw_if_index 2 dst 00:50:56:31:35:92 src 00:50:56:c0:00:05
00:00:22:692724: l2-output
  l2-output: sw_if_index 4 dst 00:50:56:31:35:92 src 00:50:56:c0:00:05 data 08 
00 45 01 00 40 00 01 00 00 40 01
00:00:22:692737: l2-xcrw
  L2_XCRW: next index 1 tx_fib_index -1
00:00:22:692753: gre4-input
  GRE: tunnel 0 len 88 src 11.1.1.2 dst 11.1.1.1
00:00:22:692771: error-drop
  gre4-input: GRE input packets dropped due to missing tunnel
  
Is there any problem in my configuration?
  
Thanks,
Xyxue





Re: [vpp-dev] question about the VCL

2018-05-15 Thread xyxue

Hi Florin, Ed,

I'm testing VCL, and IPPROTO_RAW was one test case. Since it is not supported, 
I'm testing UDP mode:

server:./vcl_test_server -D 2
client:./vcl_test_client -D 2.1.1.1 2 

An assertion fails when the client starts up. More info is shown below:
DBGvpp# 0: /home/vpp/build-data/../src/vnet/session/session_node.c:214 
(session_tx_fill_buffer) assertion `n_bytes_read > 0' fails

Thread 1 "vpp_main" received signal SIGABRT, Aborted.
0x75c39428 in __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:54
54 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  0x75c39428 in __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:54
#1  0x75c3b02a in __GI_abort () at abort.c:89
#2  0x004074de in os_panic () at 
/home/vpp/build-data/../src/vpp/vnet/main.c:310
#3  0x764202be in debugger () at 
/home/vpp/build-data/../src/vppinfra/error.c:84
#4  0x764206f6 in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x774e8b18 "%s:%d (%s) assertion `%s' fails")
at /home/vpp/build-data/../src/vppinfra/error.c:143
#5  0x771d2d48 in session_tx_fill_buffer (vm=0x77b8a980 
, ctx=0x7fffb61e1100, b=0x7ffe87338a80, 
n_bufs=0x7fffb6aaf996, 
peek_data=0 '\000') at 
/home/vpp/build-data/../src/vnet/session/session_node.c:214
#6  0x771d3e34 in session_tx_fifo_read_and_snd_i (vm=0x77b8a980 
, node=0x7fffb6bb2900, e=0x7fffb63b1dd0, 
s=0x7fffb6149c80, n_tx_packets=0x7fffb6aafb4c, peek_data=0 '\000') at 
/home/vpp/build-data/../src/vnet/session/session_node.c:449
#7  0x771d4847 in session_tx_fifo_dequeue_and_snd (vm=0x77b8a980 
, node=0x7fffb6bb2900, e0=0x7fffb63b1dd0, 
s0=0x7fffb6149c80, n_tx_pkts=0x7fffb6aafb4c) at 
/home/vpp/build-data/../src/vnet/session/session_node.c:536
#8  0x771d567c in session_queue_node_fn (vm=0x77b8a980 
, node=0x7fffb6bb2900, frame=0x0)
at /home/vpp/build-data/../src/vnet/session/session_node.c:789
#9  0x778de072 in dispatch_node (vm=0x77b8a980 , 
node=0x7fffb6bb2900, type=VLIB_NODE_TYPE_INPUT, 
dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x0, 
last_time_stamp=2385171693806) at /home/vpp/build-data/../src/vlib/main.c:988
#10 0x778dff5f in vlib_main_or_worker_loop (vm=0x77b8a980 
, is_main=1) at /home/vpp/build-data/../src/vlib/main.c:1505
#11 0x778e0a12 in vlib_main_loop (vm=0x77b8a980 ) 
at /home/vpp/build-data/../src/vlib/main.c:1633
#12 0x778e1438 in vlib_main (vm=0x77b8a980 , 
input=0x7fffb6aaffb0) at /home/vpp/build-data/../src/vlib/main.c:1787
#13 0x7794e264 in thread0 (arg=140737349462400) at 
/home/vpp/build-data/../src/vlib/unix/main.c:568
#14 0x76445090 in clib_calljmp () at 
/home/vpp/build-data/../src/vppinfra/longjmp.S:110
#15 0x7fffd1c0 in ?? ()
#16 0x7794e6e5 in vlib_unix_main (argc=3, argv=0x7fffe418) at 
/home/vpp/build-data/../src/vlib/unix/main.c:632
#17 0x00406f22 in main (argc=3, argv=0x7fffe418) at 
/home/vpp/build-data/../src/vpp/vnet/main.c:249
(gdb) 

Is there anything I can do to fix it?

Thanks,
Xyxue




[vpp-dev] problem in xcrw testing

2018-05-15 Thread xyxue

Hi guys,

I’m testing xcrw, but the packets were dropped ('error-drop'). My configuration and 
trace info are shown below:

create host-interface name eth1 mac 00:50:56:3a:32:d2 
create host-interface name eth5 mac 00:50:56:31:35:92
set interface ip address host-eth1 11.1.1.2/24
create gre tunnel src 11.1.1.2 dst 11.1.1.1
set interface ip address gre0 100.1.1.2/24
VPP# set interface l2 xcrw host-eth5 next gre4-input rw 
4558fd2fa5720b0101020b0101016558
VPP# set interface l2 xcrw xcrw0 next host-eth1-output rw 1234

VPP# show trace
--- Start of thread 0 vpp_main ---
Packet 1

00:00:22:692356: af-packet-input
  af_packet: hw_if_index 2 next-index 4
tpacket2_hdr:
  status 0x2001 len 78 snaplen 78 mac 66 net 80
  sec 0x5af559c7 nsec 0x8bb04a9 vlan 0
00:00:22:692493: ethernet-input
  IP4: 00:50:56:c0:00:05 -> 00:50:56:31:35:92
00:00:22:692706: l2-input
  l2-input: sw_if_index 2 dst 00:50:56:31:35:92 src 00:50:56:c0:00:05
00:00:22:692724: l2-output
  l2-output: sw_if_index 4 dst 00:50:56:31:35:92 src 00:50:56:c0:00:05 data 08 
00 45 01 00 40 00 01 00 00 40 01
00:00:22:692737: l2-xcrw
  L2_XCRW: next index 1 tx_fib_index -1
00:00:22:692753: gre4-input
  GRE: tunnel 0 len 88 src 11.1.1.2 dst 11.1.1.1
00:00:22:692771: error-drop
  gre4-input: GRE input packets dropped due to missing tunnel
  
Is there any problem in my configuration?
  
Thanks,
Xyxue




[vpp-dev] question about the VCL

2018-05-14 Thread xyxue

Hi guys,

Does the VCL support RAW_IP now? Or is there a plan to support it? 

Thanks,
Xyxue




[vpp-dev] gcc version 4.3.3 use, compile error appear

2018-05-03 Thread xyxue
Hi guys,

There is an anonymous union in fib_prefix_t, and its use in 
'add_port_range_adjacency' is shown below:

  union {
    ip46_address_t fp_addr;
    struct {
      mpls_label_t fp_label;
      mpls_eos_bit_t fp_eos;
      dpo_proto_t fp_payload_proto;
    };
  };

  fib_prefix_t pfx = {
    .fp_proto = FIB_PROTOCOL_IP4,
    .fp_len = length,
    .fp_addr = {
      .ip4 = *address,
    },
  };
When we use gcc 4.3.3, we get the following errors:
/home/vpp/build-data/../src/vnet/ip/ip4_source_and_port_range_check.c: In 
function 'add_port_range_adjacency': 
/home/vpp/build-data/../src/vnet/ip/ip4_source_and_port_range_check.c:935: 
error: unknown field 'fp_addr' specified in initializer 
/home/vpp/build-data/../src/vnet/ip/ip4_source_and_port_range_check.c:935: 
warning: braces around scalar initializer 
/home/vpp/build-data/../src/vnet/ip/ip4_source_and_port_range_check.c:935: 
warning: (near initialization for 'pfx.fp_proto') 
/home/vpp/build-data/../src/vnet/ip/ip4_source_and_port_range_check.c:936: 
error: field name not in record or union initializer 
/home/vpp/build-data/../src/vnet/ip/ip4_source_and_port_range_check.c:936: 
error: (near initialization for 'pfx.fp_proto') 
/home/vpp/build-data/../src/vnet/ip/ip4_source_and_port_range_check.c:936: 
error: incompatible types in initialization 

We worked around the problem with the method below, but it does not seem like a 
good fix, because there are many similar definitions in VPP.
Do you have any suggestions for solving this without updating the gcc version?

  fib_prefix_t pfx;
  memset (&pfx, 0, sizeof (fib_prefix_t));
  pfx.fp_proto = FIB_PROTOCOL_IP4;
  pfx.fp_len = length;
  pfx.fp_addr.ip4 = *address;


Thanks,
Xyxue




[vpp-dev] problem in pppoe testing

2018-04-27 Thread xyxue

Hi guys,

I'm testing PPPoE. I followed the guide on docs.fd.io, but the configuration does 
not succeed. Is there a configuration manual?
The configuration of mine is shown below:

DBGvpp# create pppoe cp cp-if-index 1 
DBGvpp# show interface 
  Name   Idx   State  Counter  
Count 
host-eth1 1down  
local00down  

DBGvpp# create pppoe session client-ip 10.0.3.1 session-id 13 client-mac 
00:01:02:03:04:05 
create pppoe session: vnet_pppoe_add_del_session returned -2
DBGvpp# show pppoe fib 
no pppoe fib entries
DBGvpp# 

Is there any problem in my configuration?

Thanks,
Xyxue




Re: [vpp-dev] question about set ip arp

2018-04-24 Thread xyxue
Hi Neale,

After merging the patch, configuring 100k entries still costs 7+ minutes.

The most time-consuming part is 'fib_node_list_walk'. The stack info is shown 
below:
0x7717d0eb in round_pow2 (x=44, pow2=8) at 
/home/vpp/build-data/../src/vppinfra/clib.h:277
277 {
(gdb) bt
#0  0x7717d0eb in round_pow2 (x=44, pow2=8) at 
/home/vpp/build-data/../src/vppinfra/clib.h:277
#1  0x7717d19c in vec_aligned_header_bytes (header_bytes=40, align=8) 
at /home/vpp/build-data/../src/vppinfra/vec_bootstrap.h:112
#2  0x7717d1e8 in vec_aligned_header (v=0x7fffba3af5a0, 
header_bytes=40, align=8) at 
/home/vpp/build-data/../src/vppinfra/vec_bootstrap.h:118
#3  0x7717de8a in pool_header (v=0x7fffba3af5a0) at 
/home/vpp/build-data/../src/vppinfra/pool.h:79
#4  0x7717dfd2 in fib_node_list_elt_get (fi=13419) at 
/home/vpp/build-data/../src/vnet/fib/fib_node_list.c:80
#5  0x7717f34b in fib_node_list_walk (list=26, fn=0x7718c56c 
, args=0x7fffb6cc5240)
at /home/vpp/build-data/../src/vnet/fib/fib_node_list.c:382
#6  0x7718c647 in fib_entry_cover_walk (cover=0x7fffb7d46eb8, 
walk=0x7718c65d , args=0xccbc)
at /home/vpp/build-data/../src/vnet/fib/fib_entry_cover.c:104
#7  0x7718c74a in fib_entry_cover_change_notify (cover_index=0, 
covered=52412) at /home/vpp/build-data/../src/vnet/fib/fib_entry_cover.c:158
#8  0x77172655 in fib_table_post_insert_actions 
(fib_table=0x7fffb6b409c0, prefix=0x7fffb6cc5490, fib_entry_index=52412)
at /home/vpp/build-data/../src/vnet/fib/fib_table.c:193
#9  0x77172772 in fib_table_entry_insert (fib_table=0x7fffb6b409c0, 
prefix=0x7fffb6cc5490, fib_entry_index=52412) at 
/home/vpp/build-data/../src/vnet/fib/fib_table.c:230
#10 0x771732da in fib_table_entry_path_add2 (fib_index=0, 
prefix=0x7fffb6cc5490, source=FIB_SOURCE_ADJ, flags=FIB_ENTRY_FLAG_ATTACHED, 
rpath=0x7fffb74ebf74)
at /home/vpp/build-data/../src/vnet/fib/fib_table.c:601
#11 0x771731a0 in fib_table_entry_path_add (fib_index=0, 
prefix=0x7fffb6cc5490, source=FIB_SOURCE_ADJ, flags=FIB_ENTRY_FLAG_ATTACHED, 
next_hop_proto=DPO_PROTO_IP4, 
next_hop=0x7fffb6cc5494, next_hop_sw_if_index=1, 
next_hop_fib_index=4294967295, next_hop_weight=1, next_hop_labels=0x0, 
path_flags=FIB_ROUTE_PATH_FLAG_NONE)
at /home/vpp/build-data/../src/vnet/fib/fib_table.c:569
#12 0x76cb1d4f in arp_adj_fib_add (e=0x7fffb9040ad4, fib_index=0) at 
/home/vpp/build-data/../src/vnet/ethernet/arp.c:550
#13 0x76cb249e in vnet_arp_set_ip4_over_ethernet_internal 
(vnm=0x7763cfc0 , args=0x7fffb6cc5790) at 
/home/vpp/build-data/../src/vnet/ethernet/arp.c:618
#14 0x76cb7d74 in set_ip4_over_ethernet_rpc_callback (a=0x7fffb6cc5790) 
at /home/vpp/build-data/../src/vnet/ethernet/arp.c:1989
#15 0x779472ce in vl_api_rpc_call_main_thread_inline (fp=0x76cb7c63 
, data=0x7fffb6cc5790 "\001", 
data_length=28, force_rpc=0 '\000')
at /home/vpp/build-data/../src/vlibmemory/memory_vlib.c:2061
#16 0x77947421 in vl_api_rpc_call_main_thread (fp=0x76cb7c63 
, data=0x7fffb6cc5790 "\001", 
data_length=28)
at /home/vpp/build-data/../src/vlibmemory/memory_vlib.c:2107
#17 0x76cb8421 in vnet_arp_set_ip4_over_ethernet (vnm=0x7763cfc0 
, sw_if_index=1, a_arg=0x7fffb6cc5890, is_static=0, 
is_no_fib_entry=0)
at /home/vpp/build-data/../src/vnet/ethernet/arp.c:2074


Thanks,
Xyxue

From: Neale Ranns (nranns)
Date: 2018-04-23 20:36
To: 薛欣颖; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] question about set ip arp
HI Xyxue,
 
Can you please test to see if the situation improves with:
  https://gerrit.fd.io/r/#/c/12012/
 
thanks,
neale
 
From: <vpp-dev@lists.fd.io> on behalf of xyxue <xy...@fiberhome.com>
Date: Friday, 20 April 2018 at 11:31
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: [vpp-dev] question about set ip arp
 
 
Hi guys,

I'm testing 'set ip arp'. Without the 'no-fib-entry' parameter, configuring 100k 
entries costs 19+ minutes; with 'no-fib-entry', it takes 9 s.
Can I use 'set ip arp ... no-fib-entry' plus 'ip route add' to achieve the same 
result as 'set ip arp' without 'no-fib-entry'?
The most time-consuming part is 'clib_bihash_foreach_key_value_pair_24_8'. 
The stack info is shown below:
0 clib_bihash_foreach_key_value_pair_24_8 (h=0x7fffb5d4c840, 
callback=0x7719c98d , arg=0x7fffb5d33dc0) 
at /home/vpp/build-data/../src/vppinfra/bihash_template.c:589 
#1 0x7719cafd in adj_nbr_walk_nh4 (sw_if_index=1, addr=0x7fffb5d4c0f8, 
cb=0x76cacb17 , ctx=0x7fffb5d4c0f4) 
at /home/vpp/build-data/../src/vnet/adj/adj_nbr.c:642 
#2 0x76cacd64 in arp_update_adjacency (vnm=0x7763a540 , 
sw_if_index=1, ai=1) at /home/vpp/build-data/../src/vnet/ethernet/arp.c:466 
#3 0x76cbb6fe in ethernet_update_adjacency (vnm=0x7763a540 
, sw_if_index=1, ai=1) at 
/home/vpp/bui

Re: [vpp-dev] mheap performance issue and fixup

2018-04-23 Thread xyxue
./src/vppinfra/format.c:402 
#132 0x76932876 in format (s=0x0, fmt=0x75f1b18b "%s%c") at 
/home/vpp/build-data/../src/vppinfra/format.c:421 
#133 0x75f0ce10 in shm_name_from_svm_map_region_args (a=0x7fffb6cf4cf0) 
at /home/vpp/build-data/../src/svm/svm.c:525 
#134 0x75f0d60d in svm_map_region (a=0x7fffb6cf4cf0) at 
/home/vpp/build-data/../src/svm/svm.c:658 
#135 0x75f0e663 in svm_region_find_or_create (a=0x7fffb6cf4cf0) at 
/home/vpp/build-data/../src/svm/svm.c:995 
#136 0x77938e9a in vl_map_shmem (region_name=0x779554c7 "/vpe-api", 
is_vlib=1) at /home/vpp/build-data/../src/vlibmemory/memory_shared.c:514 
#137 0x779413fc in memory_api_init (region_name=0x779554c7 
"/vpe-api") at /home/vpp/build-data/../src/vlibmemory/memory_vlib.c:651 
---Type  to continue, or q  to quit--- 
#138 0x77942ed0 in memclnt_process (vm=0x77926840 
, node=0x7fffb6cec000, f=0x0) 
at /home/vpp/build-data/../src/vlibmemory/memory_vlib.c:952 
#139 0x776a603c in vlib_process_bootstrap (_a=140736237530192) at 
/home/vpp/build-data/../src/vlib/main.c:1253 
#140 0x76941570 in clib_calljmp () at 
/home/vpp/build-data/../src/vppinfra/longjmp.S:128 
#141 0x7fffb571ec20 in ?? () 
#142 0x776a6179 in vlib_process_startup (vm=0x77926840 
, p=0x7fffb6cec000, f=0x0) 
at /home/vpp/build-data/../src/vlib/main.c:1275 
Backtrace stopped: previous frame inner to this frame (corrupt stack?) 
(gdb) 
(gdb)

Thanks,
Xyxue



 
From: Kingwel Xie
Date: 2018-04-20 17:29
To: Damjan Marion; Neale Ranns (nranns); 薛欣颖
CC: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] mheap performance issue and fixup
Hi,
 
Finally I managed to create 3 patches to include all modifications to mheap. 
Please check below for details. I’ll do some other patches later…
 
https://gerrit.fd.io/r/11950
https://gerrit.fd.io/r/11952
https://gerrit.fd.io/r/11957
 
Hi Xue, you need at least the first one for your test.
 
Regards,
Kingwel
 
From: Kingwel Xie 
Sent: Thursday, April 19, 2018 9:20 AM
To: Damjan Marion <damar...@cisco.com>
Cc: vpp-dev@lists.fd.io
Subject: RE: [vpp-dev] mheap performance issue and fixup
 
Hi Damjan,
 
We will do it ASAP. Actually, we are quite new to VPP and don't yet know how to 
file bug reports or contribute code. 
 
Regards,
Kingwel
 
From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Damjan 
Marion
Sent: Wednesday, April 18, 2018 11:30 PM
To: Kingwel Xie <kingwel@ericsson.com>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] mheap performance issue and fixup
 
Dear Kingwel, 
 
Thank you for your email. It will be really appreciated if you can submit your 
changes to gerrit, preferably each point in separate patch.
That will be best place to discuss those changes...
 
Thanks in Advance,
 
-- 
Damjan
 
On 16 Apr 2018, at 10:13, Kingwel Xie <kingwel@ericsson.com> wrote:
 
Hi all,
 
We recently worked on GTPU tunnels with a target of creating 2M of them. It is 
not as easy as it looks, and it took us quite some time to figure out. 
The biggest problem we found is in mheap, which as you know is the low-level 
memory management layer of VPP. We believe it makes sense to share what we 
found and what we’ve done to improve mheap's performance.
 
First of all, mheap is fast. It has a well-designed small-object cache and 
multi-level free lists to speed up get/put. However, as discussed on this 
mailing list before, it has a performance issue with 
align/align_offset allocations. We traced the problem to the 
pointer ‘rewrite’ in gtp_tunnel_t. This rewrite is a vector that must be 
aligned to a 64B cache line, and therefore carries a 4-byte align offset. We 
realized that the free list must be very long, containing many 
mheap_elts, but unfortunately no element that fits all 3 
prerequisites: size, align, and align offset. In that case, each allocation 
has to traverse all elements until it reaches the end of the list. As a result, 
you might observe each allocation taking greater than 10 clocks/call with ‘show 
memory verbose’. That indicates the allocation takes too long; it should be 
200~300 clocks/call in general. You may also notice ‘per-attempt’ is 
quite high, even more than 100.
 
The fix is straightforward: as discussed on this mailing list before, 
allocate ‘rewrite’ from a pool instead of from mheap. Frankly speaking, that 
looks like a workaround rather than a real fix, so we spent some time fixing the 
problem thoroughly. The idea is to add a few more bytes to the originally 
required block size so that mheap always looks in a bigger free list, where a 
suitable block can most likely be located easily. Now the problem becomes: how 
big should this extra size be? It should be at least align+align_offset, which 
is not hard to understand. But after careful analysis we think it is better to be like t

Re: [vpp-dev] mheap performance issue and fixup

2018-04-20 Thread xyxue
Hi Kingwel,

Thank you very much for your help. 

Thanks,
Xyxue


 
From: Kingwel Xie
Date: 2018-04-20 17:29
To: Damjan Marion; Neale Ranns (nranns); 薛欣颖
CC: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] mheap performance issue and fixup
Hi,
 
Finally I managed to create 3 patches to include all modifications to mheap. 
Please check below for details. I’ll do some other patches later…
 
https://gerrit.fd.io/r/11950
https://gerrit.fd.io/r/11952
https://gerrit.fd.io/r/11957
 
Hi Xue, you need at least the first one for your test.
 
Regards,
Kingwel
 
From: Kingwel Xie 
Sent: Thursday, April 19, 2018 9:20 AM
To: Damjan Marion <damar...@cisco.com>
Cc: vpp-dev@lists.fd.io
Subject: RE: [vpp-dev] mheap performance issue and fixup
 
Hi Damjan,
 
We will do it ASAP. Actually, we are quite new to VPP and don't yet know how to 
file bug reports or contribute code. 
 
Regards,
Kingwel
 
From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Damjan 
Marion
Sent: Wednesday, April 18, 2018 11:30 PM
To: Kingwel Xie <kingwel@ericsson.com>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] mheap performance issue and fixup
 
Dear Kingwel, 
 
Thank you for your email. It will be really appreciated if you can submit your 
changes to gerrit, preferably each point in separate patch.
That will be best place to discuss those changes...
 
Thanks in Advance,
 
-- 
Damjan
 
On 16 Apr 2018, at 10:13, Kingwel Xie <kingwel@ericsson.com> wrote:
 
Hi all,
 
We recently worked on GTPU tunnels with a target of creating 2M of them. It is 
not as easy as it looks, and it took us quite some time to figure out. 
The biggest problem we found is in mheap, which as you know is the low-level 
memory management layer of VPP. We believe it makes sense to share what we 
found and what we’ve done to improve mheap's performance.
 
First of all, mheap is fast. It has a well-designed small-object cache and 
multi-level free lists to speed up get/put. However, as discussed on this 
mailing list before, it has a performance issue with 
align/align_offset allocations. We traced the problem to the 
pointer ‘rewrite’ in gtp_tunnel_t. This rewrite is a vector that must be 
aligned to a 64B cache line, and therefore carries a 4-byte align offset. We 
realized that the free list must be very long, containing many 
mheap_elts, but unfortunately no element that fits all 3 
prerequisites: size, align, and align offset. In that case, each allocation 
has to traverse all elements until it reaches the end of the list. As a result, 
you might observe each allocation taking greater than 10 clocks/call with ‘show 
memory verbose’. That indicates the allocation takes too long; it should be 
200~300 clocks/call in general. You may also notice ‘per-attempt’ is 
quite high, even more than 100.
 
The fix is straightforward: as discussed on this mailing list before, 
allocate ‘rewrite’ from a pool instead of from mheap. Frankly speaking, that 
looks like a workaround rather than a real fix, so we spent some time fixing the 
problem thoroughly. The idea is to add a few more bytes to the originally 
required block size so that mheap always looks in a bigger free list, where a 
suitable block can most likely be located easily. Now the problem becomes: how 
big should this extra size be? It should be at least align+align_offset, which 
is not hard to understand. But after careful analysis we think it is better to 
be like this; see the code below:
 
Mheap.c:545 
  word modifier = (align > MHEAP_USER_DATA_WORD_BYTES ? align + align_offset + 
sizeof(mheap_elt_t) : 0);
  bin = user_data_size_to_bin_index (n_user_bytes + modifier);
 
The reason for the extra sizeof(mheap_elt_t) is to avoid lo_free_size being too 
small to hold a complete free element. You will understand this if you know how 
mheap_get_search_free_bin works; I won't go through the details here. In short, 
every free-list lookup will locate a suitable element; in other words, the 
free-list hit rate will be almost 100%, and the ‘per-attempt’ will always be 
around 1. The test results look very promising; see below, after adding 2M GTPU 
tunnels and 2M routing entries:
 
Thread 0 vpp_main
13689507 objects, 3048367k of 3505932k used, 243663k free, 243656k reclaimed, 
106951k overhead, 4194300k capacity
  alloc. from small object cache: 47325868 hits 65271210 attempts (72.51%) 
replacements 8266122
  alloc. from free-list: 21879233 attempts, 21877898 hits (99.99%), 21882794 
considered (per-attempt 1.00)
  alloc. low splits: 13355414, high splits: 512984, combined: 281968
  alloc. from vector-expand: 81907
  allocs: 69285673 276.00 clocks/call
  frees: 55596166 173.09 clocks/call
Free list:
bin 3:
20(82220170 48)
total 1
bin 273:
28340k(80569efc 60)
total 1
bin 276:
215323k(8c88df6c 44)
total 1
Total count in free bin: 3
 
You can see, as pointed out before, the hit rat

[vpp-dev] question about set ip arp

2018-04-20 Thread xyxue

Hi guys,

I'm testing 'set ip arp'. Without the 'no-fib-entry' parameter, configuring 100k 
entries costs 19+ minutes; with 'no-fib-entry', it takes 9 s.
Can I use 'set ip arp ... no-fib-entry' plus 'ip route add' to achieve the same 
result as 'set ip arp' without 'no-fib-entry'?
The most time-consuming part is 'clib_bihash_foreach_key_value_pair_24_8'. 
The stack info is shown below:
0 clib_bihash_foreach_key_value_pair_24_8 (h=0x7fffb5d4c840, 
callback=0x7719c98d , arg=0x7fffb5d33dc0) 
at /home/vpp/build-data/../src/vppinfra/bihash_template.c:589 
#1 0x7719cafd in adj_nbr_walk_nh4 (sw_if_index=1, addr=0x7fffb5d4c0f8, 
cb=0x76cacb17 , ctx=0x7fffb5d4c0f4) 
at /home/vpp/build-data/../src/vnet/adj/adj_nbr.c:642 
#2 0x76cacd64 in arp_update_adjacency (vnm=0x7763a540 , 
sw_if_index=1, ai=1) at /home/vpp/build-data/../src/vnet/ethernet/arp.c:466 
#3 0x76cbb6fe in ethernet_update_adjacency (vnm=0x7763a540 
, sw_if_index=1, ai=1) at 
/home/vpp/build-data/../src/vnet/ethernet/interface.c:208 
#4 0x771aca55 in vnet_update_adjacency_for_sw_interface 
(vnm=0x7763a540 , sw_if_index=1, ai=1) 
at /home/vpp/build-data/../src/vnet/adj/rewrite.c:225 
#5 0x7719c201 in adj_nbr_add_or_lock (nh_proto=FIB_PROTOCOL_IP4, 
link_type=VNET_LINK_IP4, nh_addr=0x7fffb5d47ab0, sw_if_index=1) 
at /home/vpp/build-data/../src/vnet/adj/adj_nbr.c:246 
#6 0x7718eb6a in fib_path_attached_next_hop_get_adj 
(path=0x7fffb5d47a88, link=VNET_LINK_IP4) at 
/home/vpp/build-data/../src/vnet/fib/fib_path.c:664 
#7 0x7718ebc8 in fib_path_attached_next_hop_set (path=0x7fffb5d47a88) 
at /home/vpp/build-data/../src/vnet/fib/fib_path.c:678 
#8 0x77191077 in fib_path_resolve (path_index=14) at 
/home/vpp/build-data/../src/vnet/fib/fib_path.c:1862 
#9 0x7718adb4 in fib_path_list_resolve (path_list=0x7fffb5ade9a4) at 
/home/vpp/build-data/../src/vnet/fib/fib_path_list.c:567 
#10 0x7718b27d in fib_path_list_create (flags=FIB_PATH_LIST_FLAG_NONE, 
rpaths=0x7fffb5d4c56c) at 
/home/vpp/build-data/../src/vnet/fib/fib_path_list.c:734 
#11 0x77185732 in fib_entry_src_adj_path_swap (src=0x7fffb5c3aa94, 
entry=0x7fffb5d3ad2c, pl_flags=FIB_PATH_LIST_FLAG_NONE, paths=0x7fffb5d4c56c) 
at /home/vpp/build-data/../src/vnet/fib/fib_entry_src_adj.c:110 
#12 0x77181ed7 in fib_entry_src_action_path_swap 
(fib_entry=0x7fffb5d3ad2c, source=FIB_SOURCE_ADJ, 
flags=FIB_ENTRY_FLAG_ATTACHED, rpaths=0x7fffb5d4c56c) 
at /home/vpp/build-data/../src/vnet/fib/fib_entry_src.c:1191 
#13 0x7717d63c in fib_entry_create (fib_index=0, prefix=0x7fffb5d34400, 
source=FIB_SOURCE_ADJ, flags=FIB_ENTRY_FLAG_ATTACHED, paths=0x7fffb5d4c56c) 
at /home/vpp/build-data/../src/vnet/fib/fib_entry.c:828 
#14 0x7716dcca in fib_table_entry_path_add2 (fib_index=0, 
prefix=0x7fffb5d34400, source=FIB_SOURCE_ADJ, flags=FIB_ENTRY_FLAG_ATTACHED, 
rpath=0x7fffb5d4c56c) 
at /home/vpp/build-data/../src/vnet/fib/fib_table.c:597 
#15 0x7716dba9 in fib_table_entry_path_add (fib_index=0, 
prefix=0x7fffb5d34400, source=FIB_SOURCE_ADJ, flags=FIB_ENTRY_FLAG_ATTACHED, 
next_hop_proto=DPO_PROTO_IP4, 
next_hop=0x7fffb5d34404, next_hop_sw_if_index=1, next_hop_fib_index=4294967295, 
next_hop_weight=1, next_hop_labels=0x0, path_flags=FIB_ROUTE_PATH_FLAG_NONE) 
at /home/vpp/build-data/../src/vnet/fib/fib_table.c:569 
#16 0x76cacef5 in arp_adj_fib_add (e=0x7fffb5d4c0f4, fib_index=0) at 
/home/vpp/build-data/../src/vnet/ethernet/arp.c:550 
#17 0x76cad644 in vnet_arp_set_ip4_over_ethernet_internal 
(vnm=0x7763a540 , args=0x7fffb5d34700) 
at /home/vpp/build-data/../src/vnet/ethernet/arp.c:618 
#18 0x76cb2f1a in set_ip4_over_ethernet_rpc_callback (a=0x7fffb5d34700) 
at /home/vpp/build-data/../src/vnet/ethernet/arp.c:1989 
#19 0x779442c9 in vl_api_rpc_call_main_thread_inline (fp=0x76cb2e09 
, data=0x7fffb5d34700 "\001", 
data_length=28, 
force_rpc=0 '\000') at 
/home/vpp/build-data/../src/vlibmemory/memory_vlib.c:2061 
#20 0x7794441c in vl_api_rpc_call_main_thread (fp=0x76cb2e09 
, data=0x7fffb5d34700 "\001", 
data_length=28) 
at /home/vpp/build-data/../src/vlibmemory/memory_vlib.c:2107 
#21 0x76cb35c7 in vnet_arp_set_ip4_over_ethernet (vnm=0x7763a540 
, sw_if_index=1, a_arg=0x7fffb5d34800, is_static=0, 
is_no_fib_entry=0) 
at /home/vpp/build-data/../src/vnet/ethernet/arp.c:2074 
#22 0x76cb4015 in ip_arp_add_del_command_fn (vm=0x77923420 
, is_del=0, input=0x7fffb5d34ec0, cmd=0x7fffb5c78864) 
at /home/vpp/build-data/../src/vnet/ethernet/arp.c:2233 

Thanks,
Xyxue






Re: [vpp-dev] questions in configuring tunnel

2018-04-19 Thread xyxue
Hi Kingwel, 

Thank you very much for sharing the solution to the 'cache line alignment' 
problem. I saw you configured 2M GTPU tunnels in 200 s. When I merge patch 
10216, configuring 100K GTPU tunnels costs 7 minutes. How does your 
configuration reach such a high rate?

Thanks,
Xyxue


 
From: Kingwel Xie
Date: 2018-04-19 17:11
To: 薛欣颖; nranns
CC: vpp-dev
Subject: Re: [vpp-dev] questions in configuring tunnel
Hi Xue, 
 
I’m afraid it will take a few days to commit the code. 
 
For now I copied the key changes for your reference. It should work. 
 
Regards,
Kingwel
 
 
/* Search free lists for object with given size and alignment. */
static uword
mheap_get_search_free_list (void *v,
                            uword * n_user_bytes_arg,
                            uword align, uword align_offset)
{
  mheap_t *h = mheap_header (v);
  uword bin, n_user_bytes, i, bi;
 
  n_user_bytes = *n_user_bytes_arg;
  bin = user_data_size_to_bin_index (n_user_bytes);
 
  if (MHEAP_HAVE_SMALL_OBJECT_CACHE
  && (h->flags & MHEAP_FLAG_SMALL_OBJECT_CACHE)
  && bin < 255
  && align == STRUCT_SIZE_OF (mheap_elt_t, user_data[0])
  && align_offset == 0)
{
  uword r = mheap_get_small_object (h, bin);
  h->stats.n_small_object_cache_attempts += 1;
  if (r != MHEAP_GROUNDED)
{
  h->stats.n_small_object_cache_hits += 1;
  return r;
}
}
 
  /* kingwel, lookup a free bin which is big enough to hold everything 
align+align_offset+lo_free_size+overhead */
  word modifier = (align > MHEAP_USER_DATA_WORD_BYTES ? align + align_offset + 
sizeof(mheap_elt_t) : 0);
  bin = user_data_size_to_bin_index (n_user_bytes + modifier);
  for (i = bin / BITS (uword); i < ARRAY_LEN (h->non_empty_free_elt_heads);
   i++)
{
  uword non_empty_bin_mask = h->non_empty_free_elt_heads[i];
 
  /* No need to search smaller bins. */
  if (i == bin / BITS (uword))
non_empty_bin_mask &= ~pow2_mask (bin % BITS (uword));
 
  /* Search each occupied free bin which is large enough. */
  /* *INDENT-OFF* */
  foreach_set_bit (bi, non_empty_bin_mask,
  ({
uword r =
  mheap_get_search_free_bin (v, bi + i * BITS (uword),
 n_user_bytes_arg,
 align,
 align_offset);
if (r != MHEAP_GROUNDED) return r;
  }));
  /* *INDENT-ON* */
}
 
  return MHEAP_GROUNDED;
}
 
 
 
From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of xyxue
Sent: Thursday, April 19, 2018 4:02 PM
To: Kingwel Xie <kingwel@ericsson.com>; nranns <nra...@cisco.com>
Cc: vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] questions in configuring tunnel
 
Hi,

Thank you all for your help. I've learned a lot from your discussion. I have a 
couple of questions:

Regarding the solution for the third suggestion, could you commit that part, or 
tell us how to handle it?


Patch 10216 covers 'gtpu geneve vxlan vxlan-gre'. When we create a GTPU tunnel, 
VPP adds a 'virtual node', but when we create MPLS and GRE tunnels, VPP adds a 
'true node'.
We can delete the GTPU 'virtual node' but not the 'true node'. Is there any 
solution for MPLS and GRE?
 
Thanks,
Xyxue


 
From: Kingwel Xie
Date: 2018-04-19 13:44
To: Neale Ranns (nranns); 薛欣颖
CC: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] questions in configuring tunnel
Thanks for the comments. Please see mine in line.
 
 
From: Neale Ranns (nranns) [mailto:nra...@cisco.com] 
Sent: Wednesday, April 18, 2018 9:18 PM
To: Kingwel Xie <kingwel@ericsson.com>; xyxue <xy...@fiberhome.com>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] questions in configuring tunnel
 
Hi Kingwei,
 
Thank you for your analysis. Some comments inline (on subjects I know a bit 
about J )
 
Regards,
neale
 
From: Kingwel Xie <kingwel@ericsson.com>
Date: Wednesday, 18 April 2018 at 13:49
To: "Neale Ranns (nranns)" <nra...@cisco.com>, xyxue <xy...@fiberhome.com>
Cc: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: RE: [vpp-dev] questions in configuring tunnel
 
Hi,
 
As we understand, this patch would bypass the node replication, so that adding 
tunnel would not cause main thread to wait for workers  synchronizing the 
nodes. 
 
However, in addition to that, you have to do more things to be able to add 40k 
or more tunnels in a predictable time period. Here is what we did for adding 2M 
gtp tunnels, for your reference. Mpls tunnel should be pretty much the same.
 
Don’t call fib_entry_child_add after adding fib entry to the tunnel 
(fib_table_entry_special_add ). This will create a linked list for all child 
nodes belonged to the fib entry pointed to the tunnel endpoint. As a result, 
adding t

Re: [vpp-dev] questions in configuring tunnel

2018-04-19 Thread xyxue
Hi,

Thank you all for your help. I've learned so much from your discussion. There 
are some questions I'd like to ask:

About the third suggestion's solution: can you commit this part, or tell us 
how to handle it?

Patch 10216 is the solution for 'gtpu geneve vxlan vxlan-gre'. When we 
create a gtpu tunnel, VPP adds a 'virtual node', but when we create mpls and 
gre tunnels, VPP adds a 'true node' for the transform. 
We can delete the gtpu's 'virtual node' but not the 'true node'. Is there any 
solution for mpls and gre?

Thanks,
Xyxue


 
From: Kingwel Xie
Date: 2018-04-19 13:44
To: Neale Ranns (nranns); 薛欣颖
CC: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] questions in configuring tunnel
Thanks for the comments. Please see mine in line.
 
 
From: Neale Ranns (nranns) [mailto:nra...@cisco.com] 
Sent: Wednesday, April 18, 2018 9:18 PM
To: Kingwel Xie <kingwel@ericsson.com>; xyxue <xy...@fiberhome.com>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] questions in configuring tunnel
 
Hi Kingwel,
 
Thank you for your analysis. Some comments inline (on subjects I know a bit 
about :) )
 
Regards,
neale
 
From: Kingwel Xie <kingwel@ericsson.com>
Date: Wednesday, 18 April 2018 at 13:49
To: "Neale Ranns (nranns)" <nra...@cisco.com>, xyxue <xy...@fiberhome.com>
Cc: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: RE: [vpp-dev] questions in configuring tunnel
 
Hi,
 
As we understand, this patch would bypass the node replication, so that adding 
a tunnel would not cause the main thread to wait for the workers to synchronize 
the nodes. 
 
However, in addition to that, you have to do more to be able to add 40k 
or more tunnels in a predictable time period. Here is what we did to add 2M 
gtp tunnels, for your reference. MPLS tunnels should be pretty much the same.
 
Don’t call fib_entry_child_add after adding the fib entry for the tunnel 
(fib_table_entry_special_add). That call creates a linked list of all child 
nodes belonging to the fib entry that points to the tunnel endpoint; as a 
result, adding tunnels becomes slower and slower. BTW, it is not a good fix, 
but it works.
  #if 0
  t->sibling_index = fib_entry_child_add
(t->fib_entry_index, gtm->fib_node_type, t - gtm->tunnels);
  #endif
 
[nr] if you skip this then the tunnels are not part of the FIB graph and hence 
any updates in the forwarding to the tunnel’s destination will go unnoticed and 
hence you potentially black hole the tunnel traffic indefinitely (since the 
tunnel is not re-stacked). It is a linked list, but apart from the pool 
allocation of the list element, the list element insertion is O(1), no?
[kingwel] You are right that the update will not be noticed, but we think it is 
acceptable for a p2p tunnel interface. The list element itself is ok when being 
inserted, but the following restack operation will walk through all inserted 
elements. This is the point I’m talking about.

The bihash for adj_nbr: each tunnel interface creates one bihash, which by 
default is 32MB, then mmap'd and memset. Typically you don't need that many 
adjacencies for a p2p tunnel interface, so we changed the code to use a common 
heap for all p2p interfaces.
 
[nr] if you would push these changes upstream, I would be grateful.
[kingwel] The fix is quite ugly. Let’s see what we can do to make it better.  
 
As mentioned in my email, rewrite requires cache-line alignment, which mheap 
cannot handle very well; mheap might be super slow when you add too many 
tunnels. 
In vl_api_clnt_process, make sleep_time always 100us. This avoids the main 
thread yielding to linux_epoll_input_inline's 10ms wait. This is not a 
perfect fix either, but if we don't do this, each API call would probably 
have to wait up to 10ms until the main thread has a chance to poll API events.
Be careful with the counters. They eat up memory very quickly: each counter is 
expanded to the number of threads multiplied by the number of tunnels. In 
other words, 1M tunnels means 1M x 8 x 8B = 64MB if you have 8 workers. A 
combined counter takes double that because it is 16 bytes. Each interface 
has 9 simple and 2 combined counters, and load_balance_t and adjacency_t 
also have counters; you will have at least that many objects if you have 
that many interfaces. The solution is simple: make a dedicated heap for all 
counters.
 
[nr] this would also be a useful addition to the upstream
[kingwel] will do later. 
 
We also did some other fixes to speed up memory allocation, e.g., 
pre-allocating a big enough pool for gtpu_tunnel_t.
 
[nr] I understand why you would do this and knobs in the startup.conf to enable 
might be a good approach, but for general consumption, IMHO, it’s too specific 
– others may disagree.
[kingwel] agree :)
 
To be honest, it is not easy. It took us quite some time to figure it out. In 
the end, we managed to add 2M tunnels & 2M

[vpp-dev] questions in configuring tunnel

2018-04-18 Thread xyxue

Hi,

We are testing mpls tunnels. The problems shown below appear in our configuration:
1. Configuring one tunnel adds two nodes (this leads to very high memory 
consumption).
2. The more nodes there are, the longer vlib_node_runtime_update and the node 
info traversal take.

When we configured 40 thousand mpls tunnels, the configuration time was 10+ 
minutes, and we ran out of memory.
How did you configure 2M gtpu tunnels? Can I know the configuration speed and 
the memory usage?

Thanks,
Xyxue




Re: [vpp-dev] the source_and_port_range_check and the URPF support ipv6?

2018-04-11 Thread xyxue
Hi Ole,

Could ACLs be extended to support port-ranges?
Is there a plan to support ipv6 in uRPF?

Thanks,
Xyxue


 
From: Ole Troan
Date: 2018-04-09 19:23
To: 薛欣颖
CC: vpp-dev
Subject: Re: [vpp-dev] the source_and_port_range_check and the URPF support 
ipv6?
Xyxue,
 
> Do the source_and_port_range_check and the URPF support ipv6?
> I can find the 'ip4_source_and_port_range_check.c'. Is there a plan to 
> support ipv6 in the source_and_port_range_check and the URPF?
 
I don't think source_and_port_range supports IPv6.
I've always been a little curious about the port_and_range_check use case. 
Could ACLs do the job?
(or could ACLs be extended to support port-ranges?)
 
Cheers,
Ole
 


[vpp-dev] the source_and_port_range_check and the URPF support ipv6?

2018-04-09 Thread xyxue

Hi guys,

Do the source_and_port_range_check and the URPF support ipv6?
I can find the 'ip4_source_and_port_range_check.c'. Is there a plan to support 
ipv6 in the source_and_port_range_check and the URPF?
 
Thanks,
Xyxue




[vpp-dev] the problem of deleting the static arp

2018-04-04 Thread xyxue

Hi guys,

I’m testing static arp. 
When I configured one hundred thousand static arp entries and then deleted 
down to the 34467th, there was a SIGABRT. More info is shown below:

VPP# set ip arp host-eth1 1.1.69.234 00:01:00:00:45:ea del
/home/vpp/build-data/../src/vnet/fib/ip4_fib.c:230 (ip4_fib_table_destroy) 
assertion `0 == fib_table->ft_total_route_counts' fails

Program received signal SIGABRT, Aborted.
0x76168c37 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
56  ../nptl/sysdeps/unix/sysv/linux/raise.c: 
(gdb) bt
#0  0x76168c37 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x7616c028 in __GI_abort () at abort.c:89
#2  0x004061ef in os_panic () at 
/home/vpp/build-data/../src/vpp/vnet/main.c:302
#3  0x7693eb88 in debugger () at 
/home/vpp/build-data/../src/vppinfra/error.c:84
#4  0x7693ef8f in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x772f4758 "%s:%d (%s) assertion `%s' fails")
at /home/vpp/build-data/../src/vppinfra/error.c:143
#5  0x77176289 in ip4_fib_table_destroy (fib_index=0) at 
/home/vpp/build-data/../src/vnet/fib/ip4_fib.c:230
#6  0x7718346d in fib_table_destroy (fib_table=0x7fffb5a9cf00) at 
/home/vpp/build-data/../src/vnet/fib/fib_table.c:1209
#7  0x771835f2 in fib_table_unlock (fib_index=0, 
proto=FIB_PROTOCOL_IP4, source=FIB_SOURCE_ADJ) at 
/home/vpp/build-data/../src/vnet/fib/fib_table.c:1265
#8  0x76cc64a9 in arp_adj_fib_remove (e=0x7fffb77cb170, fib_index=0) at 
/home/vpp/build-data/../src/vnet/ethernet/arp.c:1777
#9  0x76cc75df in arp_entry_free (eai=0x7fffb5d268f4, e=0x7fffb77cb170) 
at /home/vpp/build-data/../src/vnet/ethernet/arp.c:1877
#10 0x76cc797f in vnet_arp_unset_ip4_over_ethernet_internal 
(vnm=0x77647dc0 , args=0x7fffb5d209d0) at 
/home/vpp/build-data/../src/vnet/ethernet/arp.c:1902
#11 0x76cc7d20 in set_ip4_over_ethernet_rpc_callback (a=0x7fffb5d209d0) 
at /home/vpp/build-data/../src/vnet/ethernet/arp.c:1981
#12 0x7794b297 in vl_api_rpc_call_main_thread_inline (fp=0x76cc7c8d 
, data=0x7fffb5d209d0 "\001", 
data_length=28, force_rpc=0 '\000')
at /home/vpp/build-data/../src/vlibmemory/memory_vlib.c:2061
#13 0x7794b3ea in vl_api_rpc_call_main_thread (fp=0x76cc7c8d 
, data=0x7fffb5d209d0 "\001", 
data_length=28)
at /home/vpp/build-data/../src/vlibmemory/memory_vlib.c:2107
#14 0x76cc5aaf in vnet_arp_unset_ip4_over_ethernet (vnm=0x77647dc0 
, sw_if_index=1, a_arg=0x7fffb5d20ac0)
at /home/vpp/build-data/../src/vnet/ethernet/arp.c:1594
#15 0x76cc8f48 in ip_arp_add_del_command_fn (vm=0x7792abe0 
, is_del=1, input=0x7fffb5d20ec0, cmd=0x7fffb5c9295c)
at /home/vpp/build-data/../src/vnet/ethernet/arp.c:2254
#16 0x7767d217 in cli_no_one_cmd (vm=0x7792abe0 , 
is_del=0, in=0x7fffb5d20ec0, no=0x7fffb5c96b5c) at 
/home/vpp/build-data/../src/vlib/cli/cli_help.c:494
#17 0x776746fe in vlib_cli_dispatch_sub_commands (vm=0x7792abe0 
, cm=0x64f098, input=0x7fffb5d20ec0, parent_command_index=0, 
poss_cmds=0x7fffb5d20da8, 
poss_helps=0x7fffb5d20db0) at /home/vpp/build-data/../src/vlib/cli.c:878
#18 0x77674be3 in vlib_cli_input (vm=0x7792abe0 , 
input=0x7fffb5d20ec0, function=0x776e164d , 
function_arg=0)
at /home/vpp/build-data/../src/vlib/cli.c:970
#19 0x776e77ee in unix_cli_process_input (cm=0x7792a920 
, cli_file_index=0) at 
/home/vpp/build-data/../src/vlib/unix/cli.c:2511
#20 0x776e8358 in unix_cli_process (vm=0x7792abe0 
, rt=0x7fffb5d1, f=0x0) at 
/home/vpp/build-data/../src/vlib/unix/cli.c:2623
#21 0x776aba61 in vlib_process_bootstrap (_a=140736237603344) at 
/home/vpp/build-data/../src/vlib/main.c:1253
#22 0x76953560 in clib_calljmp () at 
/home/vpp/build-data/../src/vppinfra/longjmp.S:128
#23 0x7fffb57309e0 in ?? ()
#24 0x776abb96 in vlib_process_startup (vm=0x2c, p=0x8, 
f=0x7fffb8e179c4) at /home/vpp/build-data/../src/vlib/main.c:1278
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
(gdb) f 5
#5  0x77176289 in ip4_fib_table_destroy (fib_index=0) at 
/home/vpp/build-data/../src/vnet/fib/ip4_fib.c:230
230 ASSERT(0 == fib_table->ft_total_route_counts);
(gdb) p  fib_table->ft_total_route_counts
$1 = 9
(gdb) 

I suspect something remains in the fib table entries when we delete the 
static arp. 

for (ii = ARRAY_LEN (ip4_specials) - 1; ii >= 0; ii--)
  {
    fib_prefix_t prefix = ip4_specials[ii].ift_prefix;

    prefix.fp_addr.ip4.data_u32 =
      clib_host_to_net_u32 (prefix.fp_addr.ip4.data_u32);

    fib_table_entry_special_remove (fib_table->ft_index,
				    &prefix,
				    ip4_specials[ii].ift_source);
  }

Is there any problem of the process?

Thanks,
Xyxue