Re: [vpp-dev] questions in configuring tunnel

2018-04-18 Thread Kingwel Xie
Thanks for the comments. Please see mine inline.


From: Neale Ranns (nranns) [mailto:nra...@cisco.com]
Sent: Wednesday, April 18, 2018 9:18 PM
To: Kingwel Xie ; xyxue 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] questions in configuring tunnel

Hi Kingwel,

Thank you for your analysis. Some comments inline (on subjects I know a bit 
about ☺ )

Regards,
neale

From: Kingwel Xie >
Date: Wednesday, 18 April 2018 at 13:49
To: "Neale Ranns (nranns)" >, xyxue 
>
Cc: "vpp-dev@lists.fd.io" 
>
Subject: RE: [vpp-dev] questions in configuring tunnel

Hi,

As we understand it, this patch bypasses node replication, so that adding a 
tunnel does not force the main thread to wait for the workers to synchronize 
the nodes.

However, in addition to that, you have to do more to be able to add 40k or more 
tunnels in a predictable period of time. Here is what we did to add 2M GTP-U 
tunnels, for your reference. MPLS tunnels should be pretty much the same.


  1.  Don’t call fib_entry_child_add after adding the FIB entry for the tunnel 
(fib_table_entry_special_add). Otherwise a linked list is built of all child 
nodes belonging to the FIB entry of the tunnel endpoint, so adding tunnels 
becomes slower and slower. Admittedly it is not a good fix, but it works:
  #if 0
  t->sibling_index = fib_entry_child_add
(t->fib_entry_index, gtm->fib_node_type, t - gtm->tunnels);
  #endif

[nr] if you skip this then the tunnels are not part of the FIB graph and hence 
any updates in the forwarding to the tunnel’s destination will go unnoticed and 
hence you potentially black hole the tunnel traffic indefinitely (since the 
tunnel is not re-stacked). It is a linked list, but apart from the pool 
allocation of the list element, the list element insertion is O(1), no?
[kingwel] You are right that the update will not be noticed, but we think that 
is acceptable for a p2p tunnel interface. Inserting the list element itself is 
fine; it is the subsequent restack operation, which walks all inserted 
elements, that is the point I’m making.


  2.  The bihash for adj_nbr. Each tunnel interface creates one bihash, which 
by default is 32MB, mmap'ed and memset at creation. Typically you don’t need 
that many adjacencies for a p2p tunnel interface, so we changed the code to use 
a common heap for all p2p interfaces.

[nr] if you would push these changes upstream, I would be grateful.
[kingwel] The fix is quite ugly. Let’s see what we can do to make it better.


  3.  As mentioned in my earlier email, the rewrite vector requires cache-line 
alignment, which mheap cannot handle very well; mheap can become very slow once 
you add that many tunnels.
  4.  In vl_api_clnt_process, make sleep_time always 100us. This avoids the 
main thread yielding to linux_epoll_input_inline’s 10ms wait. It is not a 
perfect fix either, but without it each API call may have to wait up to 10ms 
before the main thread gets a chance to poll API events.
  5.  Be careful with the counters; they eat up memory very quickly. Each 
counter expands to (number of threads) x (number of tunnels) elements. In other 
words, with 8 workers, 1M tunnels means 1M x 8 x 8B = 64MB for a single simple 
counter. A combined counter takes twice that, because each element is 16 bytes. 
Each interface has 9 simple and 2 combined counters, and load_balance_t and 
adjacency_t carry counters as well; you will have at least as many of those 
objects as you have interfaces. The solution is simple – a dedicated heap for 
all counters.

[nr] this would also be a useful addition to the upstream
[kingwel] will do later.


  6.  We also made some other fixes to speed up memory allocation, e.g. 
pre-allocating a big enough pool for gtpu_tunnel_t.

[nr] I understand why you would do this, and knobs in startup.conf to enable it 
might be a good approach, but for general consumption, IMHO, it’s too specific 
– others may disagree.
[kingwel] agree☺

To be honest, it is not easy; it took us quite some time to figure it all out. 
In the end, we managed to add 2M tunnels and 2M routes in 250s.

Hope it helps.

Regards,
Kingwel


From: vpp-dev@lists.fd.io 
[mailto:vpp-dev@lists.fd.io] On Behalf Of Neale Ranns
Sent: Wednesday, April 18, 2018 4:33 PM
To: xyxue >; Kingwel Xie 
>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] questions in configuring tunnel

Hi Xyxue,

Try applying the changes in this patch:
   https://gerrit.fd.io/r/#/c/10216/
to MPLS tunnels. Please contribute any changes back to the community so 

[vpp-dev] vpp 1804 stable reassembly problem

2018-04-18 Thread wangchuan...@163.com
Hello,
I run "ping XXX -l 1600" from Ubuntu to VPP.
It looks like the two ICMP fragments are reassembled correctly.
But with the VPP interface's default MTU of 9216, VPP does not fragment the 
response ICMP packet, so Ubuntu receives nothing.
With the MTU set to 1500, it still does not fragment the response ICMP packet; 
I only see 'ICMP destination_unreachable 
fragmentation_needed_and_dont_fragment_set'.

Help please!

trace info:

MTU:9216
Packet 1

00:03:52:633983: dpdk-input
  GigabitEthernet2/0/0 rx queue 0
  buffer 0x2a5d: current data 14, length 1500, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x0
 ext-hdr-valid 
 l4-cksum-computed l4-cksum-correct l2-hdr-offset 0 
l3-hdr-offset 14 
  PKT MBUF: port 1, nb_segs 1, pkt_len 1514
buf_len 2176, data_len 1514, ol_flags 0x0, data_off 128, phys_addr 
0x5dea97c0
packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
  IP4: 28:d2:44:0d:45:c1 -> 68:05:ca:47:2b:9e
  ICMP: 192.168.100.111 -> 192.168.100.100
tos 0x00, ttl 64, length 1500, checksum 0xb43e
fragment id 0x56be, flags MORE_FRAGMENTS
  ICMP echo_request checksum 0x3622
00:03:52:633989: ip4-input
  ICMP: 192.168.100.111 -> 192.168.100.100
tos 0x00, ttl 64, length 1500, checksum 0xb43e
fragment id 0x56be, flags MORE_FRAGMENTS
  ICMP echo_request checksum 0x3622
00:03:52:633992: ip4-reassembly-feature
  reass id: 1, op id: 0 first bi: 10845, data len: 1480, ip/fragment[0, 1479]
new range: [0, 1479], off 0, len 1480, bi 10845
  reass id: 1, op id: 2 first bi: 10845, data len: 1608, ip/fragment[0, 1479]
finalize reassembly
00:03:52:633998: ip4-lookup
  fib 0 dpo-idx 7 flow hash: 0x
  ICMP: 192.168.100.111 -> 192.168.100.100
tos 0x00, ttl 64, length 1628, checksum 0xd3be
fragment id 0x56be
  ICMP echo_request checksum 0x3622
00:03:52:634000: ip4-local
ICMP: 192.168.100.111 -> 192.168.100.100
  tos 0x00, ttl 64, length 1628, checksum 0xd3be
  fragment id 0x56be
ICMP echo_request checksum 0x3622
00:03:52:634002: ip4-icmp-input
  ICMP: 192.168.100.111 -> 192.168.100.100
tos 0x00, ttl 64, length 1628, checksum 0xd3be
fragment id 0x56be
  ICMP echo_request checksum 0x3622
00:03:52:634002: ip4-icmp-echo-request
  ICMP: 192.168.100.111 -> 192.168.100.100
tos 0x00, ttl 64, length 1628, checksum 0xd3be
fragment id 0x56be
  ICMP echo_request checksum 0x3622
00:03:52:634003: ip4-load-balance
  fib 0 dpo-idx 13 flow hash: 0x
  ICMP: 192.168.100.100 -> 192.168.100.111
tos 0x00, ttl 64, length 1628, checksum 0x259e
fragment id 0x04df
  ICMP echo_reply checksum 0x3e22
00:03:52:634003: ip4-rewrite
  tx_sw_if_index 2 dpo-idx 1 : ipv4 via 192.168.100.111 GigabitEthernet2/0/0: 
mtu:9202 28d2440d45c16805ca472b9e0800 flow hash: 0x
  : 28d2440d45c16805ca472b9e08004500065c04df4001259ec0a86464c0a8
  0020: 646f3e2200010d2d6162636465666768696a6b6c6d6e6f707172
00:03:52:634004: GigabitEthernet2/0/0-output
  GigabitEthernet2/0/0
  IP4: 68:05:ca:47:2b:9e -> 28:d2:44:0d:45:c1
  ICMP: 192.168.100.100 -> 192.168.100.111
tos 0x00, ttl 64, length 1628, checksum 0x259e
fragment id 0x04df
  ICMP echo_reply checksum 0x3e22
00:03:52:634005: GigabitEthernet2/0/0-tx
  GigabitEthernet2/0/0 tx queue 0
  buffer 0x2a5d: current data 0, length 1514, free-list 0, clone-count 0, 
totlen-nifb 128, trace 0x0
 ext-hdr-valid 
 next-buffer 0x2a36, segment length 128, clone-count 0
 l4-cksum-computed l4-cksum-correct l2-hdr-offset 0 
l3-hdr-offset 14 
  PKT MBUF: port 1, nb_segs 2, pkt_len 1642
buf_len 2176, data_len 1514, ol_flags 0x0, data_off 128, phys_addr 
0x5dea97c0
packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
  IP4: 68:05:ca:47:2b:9e -> 28:d2:44:0d:45:c1
  ICMP: 192.168.100.100 -> 192.168.100.111
tos 0x00, ttl 64, length 1628, checksum 0x259e
fragment id 0x04df
  ICMP echo_reply checksum 0x3e22

Packet 2

00:03:52:633995: dpdk-input
  GigabitEthernet2/0/0 rx queue 0
  buffer 0x2a36: current data 14, length 148, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x1
 ext-hdr-valid 
 l4-cksum-computed l4-cksum-correct l2-hdr-offset 0 
l3-hdr-offset 14 
  PKT MBUF: port 1, nb_segs 1, pkt_len 162
buf_len 2176, data_len 162, ol_flags 0x0, data_off 128, phys_addr 0x5dea8e00
packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
  IP4: 28:d2:44:0d:45:c1 -> 68:05:ca:47:2b:9e
  ICMP: 192.168.100.111 -> 192.168.100.100
tos 0x00, ttl 64, length 148, checksum 0xd8cd
fragment id 0x56be offset 1480, flags 
  ICMP unknown 0x61 checksum 0x6364
00:03:52:633995: ip4-input
  ICMP: 192.168.100.111 -> 192.168.100.100
tos 0x00, ttl 64, length 148, checksum 0xd8cd
fragment id 0x56be offset 1480, flags 
  ICMP unknown 0x61 checksum 0x6364
00:03:52:633997: ip4-reassembly-feature
  reass id: 1, op id: 1 first bi: 10845, data len: 1608, 

[vpp-dev] Does anyone know the default account password for the DUT devices that vagrant brings up?

2018-04-18 Thread 汤超
Does anyone know the default account password for the DUT devices that vagrant 
brings up?
I searched the Internet for many default account passwords, but all of them 
were wrong.






nwnj...@fiberhome.com


Re: [vpp-dev] mheap performance issue and fixup

2018-04-18 Thread Kingwel Xie
Hi Damjan,

We will do it asap. Actually we are quite new to VPP and don’t yet know how to 
file a bug report or contribute code.

Regards,
Kingwel

From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Damjan 
Marion
Sent: Wednesday, April 18, 2018 11:30 PM
To: Kingwel Xie 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] mheap performance issue and fixup

Dear Kingwel,

Thank you for your email. It would be really appreciated if you could submit 
your changes to Gerrit, preferably each point as a separate patch.
That will be the best place to discuss those changes...

Thanks in Advance,

--
Damjan


[vpp-dev] Vagrant up fails, DUT devices can't come up

2018-04-18 Thread 汤超
Excuse me: when I run vagrant up --parallel --provision, a lot of errors come 
up. How can I solve this?

The output is as follows:

fiber@ubuntu:~/toni/csit-vagrant$ vagrant up --parallel --provision
Bringing machine 'tg' up with 'virtualbox' provider...
Bringing machine 'dut1' up with 'virtualbox' provider...
Bringing machine 'dut2' up with 'virtualbox' provider...
==> tg: Running provisioner: shell...
tg: Running: inline script
tg: Removing user `csit' ...
tg: Warning: group `csit' has no more members.
tg: Done.
tg: Adding user `csit' ...
tg: Adding new group `csit' (1001) ...
tg: Adding new user `csit' (1001) with group `csit' ...
tg: The home directory `/home/csit' already exists.  Not copying from 
`/etc/skel'.
tg: Adding user `csit' to group `vagrant' ...
tg: Adding user csit to group vagrant
tg: Done.
tg: uid=1001(csit) gid=1001(csit) groups=1001(csit),1000(vagrant)
tg: csit ALL=(root) NOPASSWD:ALL
==> tg: Running provisioner: shell...
tg: Running: inline script
tg: Reading package lists...
tg: Reading package lists...
tg: Building dependency tree...
tg: Reading state information...
tg: 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
tg: Reading package lists...
tg: Building dependency tree...
tg: Reading state information...
tg: Package debhelper is not available, but is referred to by another 
package.
tg: This may mean that the package is missing, has been obsoleted, or
tg: is only available from another source
tg: E
tg: : 
tg: Package 'debhelper' has no installation candidate
==> dut1: Running provisioner: shell...
dut1: Running: inline script
dut1: Removing user `csit' ...
dut1: Warning: group `csit' has no more members.
dut1: Done.
dut1: Adding user `csit' ...
dut1: Adding new group `csit' (1001) ...
dut1: Adding new user `csit' (1001) with group `csit' ...
dut1: The home directory `/home/csit' already exists.  Not copying from 
`/etc/skel'.
dut1: Adding user `csit' to group `vagrant' ...
dut1: Adding user csit to group vagrant
dut1: Done.
dut1: uid=1001(csit) gid=1001(csit) groups=1001(csit),1000(vagrant)
dut1: csit ALL=(root) NOPASSWD:ALL
==> dut1: Running provisioner: shell...
dut1: Running: inline script
dut1: Reading package lists...
dut1: Reading package lists...
dut1: Building dependency tree...
dut1: Reading state information...
dut1: Correcting dependencies...
dut1:  Done
dut1: The following packages will be REMOVED:
dut1:   vpp vpp-api-java vpp-api-lua vpp-api-python vpp-lib vpp-plugins
dut1: 0 upgraded, 0 newly installed, 6 to remove and 0 not upgraded.
dut1: 6 not fully installed or removed.
dut1: After this operation, 85.7 MB disk space will be freed.
dut1: (Reading database ... 93104 files and directories currently 
installed.)
dut1: Removing vpp-api-lua (18.07-rc0~41-g410bcca) ...
dut1: Removing vpp-plugins (18.07-rc0~41-g410bcca) ...
dut1: Removing vpp (18.07-rc0~41-g410bcca) ...
dut1: There weren't PCI devices binded
dut1: Removing vpp-api-java (18.07-rc0~41-g410bcca) ...
dut1: Removing vpp-api-python (18.07-rc0~41-g410bcca) ...
dut1: Removing vpp-lib (18.07-rc0~41-g410bcca) ...
dut1: Processing triggers for libc-bin (2.19-0ubuntu6.7) ...
dut1: Reading package lists...
dut1: Building dependency tree...
dut1: Reading state information...
dut1: Package debhelper is not available, but is referred to by another 
package.
dut1: This may mean that the package is missing, has been obsoleted, or
dut1: is only available from another source
dut1: E
dut1: : 
dut1: Package 'debhelper' has no installation candidate
==> dut1: Running provisioner: shell...
dut1: Running: inline script
dut1: Reading package lists...
dut1: Building dependency tree...
dut1: Reading state information...
dut1: Package 'vpp-lib' is not installed, so not removed
dut1: The following packages will be REMOVED:
dut1:   vpp* vpp-dbg* vpp-dev*
dut1: 0 upgraded, 0 newly installed, 3 to remove and 0 not upgraded.
dut1: After this operation, 158 MB disk space will be freed.
dut1: (Reading database ... 92876 files and directories currently 
installed.)
dut1: Removing vpp (18.07-rc0~41-g410bcca) ...
dut1: Purging configuration files for vpp (18.07-rc0~41-g410bcca) ...
dut1: There weren't PCI devices binded
dut1: Removing vpp-dbg (18.07-rc0~41-g410bcca) ...
dut1: Removing vpp-dev (18.07-rc0~41-g410bcca) ...
dut1: W
dut1: : 
dut1: Can not find PkgVer for 'vpp'
dut1: Selecting previously unselected package vpp.
dut1: (Reading database ... 91951 files and directories currently 
installed.)
dut1: Preparing to unpack vpp_18.07-rc0~41-g410bcca_amd64.deb ...
dut1: Unpacking vpp (18.07-rc0~41-g410bcca) ...
dut1: Selecting previously unselected 

Re: [vpp-dev] mheap performance issue and fixup

2018-04-18 Thread Damjan Marion
Dear Kingwel,

Thank you for your email. It would be really appreciated if you could submit 
your changes to Gerrit, preferably each point as a separate patch.
That will be the best place to discuss those changes...

Thanks in Advance,

--
Damjan

On 16 Apr 2018, at 10:13, Kingwel Xie 
> wrote:

Hi all,

We recently worked on GTP-U tunnels, with a target of creating 2M of them. That 
is not as easy as it looks, and it took us quite some time to figure out. The 
biggest problem we found is in mheap, which as you know is the low-level memory 
management layer of VPP. We believe it makes sense to share what we found and 
what we’ve done to improve mheap’s performance.

First of all, mheap is fast. It has a well-designed small object cache and 
multi-level free lists to speed up get/put. However, as discussed on this 
mailing list before, it has a performance issue with align/align_offset 
allocations. We traced the problem to the ‘rewrite’ pointer in gtp_tunnel_t: 
the rewrite is a vector that must be aligned to a 64B cache line, with a 4-byte 
align offset. The cause is that the free list grows very long (very many 
mheap_elts) yet may contain no element fitting all three prerequisites: size, 
align, and align offset. In that case each allocation traverses every element 
until it reaches the end of the list. As a result, you may observe allocations 
costing more than 10k clocks/call in ‘show memory verbose’, where 200~300 
clocks/call is normal; it indicates that allocation is taking far too long. You 
will also notice that ‘per-attempt’ is quite high, even more than 100.

The straightforward fix, as discussed on this mailing list before, is to 
allocate ‘rewrite’ from a pool instead of from mheap. Frankly speaking, that 
looks like a workaround rather than a real fix, so we spent some time fixing 
the problem thoroughly. The idea is to add a few more bytes to the originally 
required block size, so that mheap always looks in a bigger free-list bin, 
where a suitable block can most likely be found at once. The question then 
becomes: how big should this extra size be? It should be at least 
align + align_offset, which is easy to see; but after careful analysis we think 
it is better as follows, see the code below:

mheap.c:545
  word modifier = (align > MHEAP_USER_DATA_WORD_BYTES
                   ? align + align_offset + sizeof (mheap_elt_t) : 0);
  bin = user_data_size_to_bin_index (n_user_bytes + modifier);

The extra sizeof (mheap_elt_t) avoids the case where lo_free_size is too small 
to hold a complete free element; this becomes clear once you know how 
mheap_get_search_free_bin works, and I won’t go through the details here. In 
short, every free-list lookup now finds a suitable element: the free-list hit 
rate becomes almost 100%, and ‘per-attempt’ stays around 1. The test results 
look very promising; see below, after adding 2M GTP-U tunnels and 2M routing 
entries:

Thread 0 vpp_main
13689507 objects, 3048367k of 3505932k used, 243663k free, 243656k reclaimed, 
106951k overhead, 4194300k capacity
  alloc. from small object cache: 47325868 hits 65271210 attempts (72.51%) 
replacements 8266122
  alloc. from free-list: 21879233 attempts, 21877898 hits (99.99%), 21882794 
considered (per-attempt 1.00)
  alloc. low splits: 13355414, high splits: 512984, combined: 281968
  alloc. from vector-expand: 81907
  allocs: 69285673 276.00 clocks/call
  frees: 55596166 173.09 clocks/call
Free list:
bin 3:
20(82220170 48)
total 1
bin 273:
28340k(80569efc 60)
total 1
bin 276:
215323k(8c88df6c 44)
total 1
Total count in free bin: 3

As pointed out above, the hit rate is very high (>99.9%) and per-attempt is ~1. 
Furthermore, the free list holds only 3 elements in total.

Apart from what we discussed above, we also made some other improvements and 
bug fixes to mheap:


  1.  Bug fix: the macros MHEAP_ELT_OVERHEAD_BYTES & MHEAP_MIN_USER_DATA_BYTES 
are wrongly defined. MHEAP_ELT_OVERHEAD_BYTES should in fact be 
(STRUCT_OFFSET_OF (mheap_elt_t, user_data)).
  2.  mheap_bytes_overhead calculates the total overhead wrongly – it should 
be the number of elements * MHEAP_ELT_OVERHEAD_BYTES.
  3.  Do not make a new element if hi_free_size is smaller than 4 times 
MHEAP_MIN_USER_DATA_BYTES; this avoids memory fragmentation.
  4.  Bug fix: register_node.c:336 wrongly uses vector memory; it should be: 
clib_mem_is_heap_object (vec_header (r->name, 0)).
  5.  Bug fix: dpo_stack_from_node in dpo.c leaks the parent_indices memory.
  6.  Some fixes and improvements to format_mheap, to show more information 
about the heap.


The code including all the fixes is tentatively in our private code base; it 
can of course be shared if wanted.

We’d really appreciate any comments!

Regards,
Kingwel





Re: [vpp-dev] vpp io Multi-thread Model #vpp

2018-04-18 Thread Damjan Marion

If software RSS is not available or not sufficient, you can use the VPP worker 
handoff infrastructure; see the "set interface handoff" command. The same infra 
can be used by your own nodes (e.g. the NAT plugin uses it).

-- 
Damjan

> On 18 Apr 2018, at 09:53, tieudaotu...@gmail.com wrote:
> 
> More than one worker thread.
> I dug in deeper, and I see that when the handoff feature is enabled, "a 
> worker thread reads from dpdk-input and then also redirects packets to the 
> frame queue of the appropriate worker thread, based on a hash function". 
> That is why I said the worker threads are also IO threads.
> 
> Of course, I know about the RSS feature, but it depends on the NIC.
> --> So: software RSS (using an IO thread) for NICs without the RSS feature, 
> which I can enable if needed.
> 
> 



Re: [csit-dev] [vpp-dev] ARM vpp-dpdk-dkms nexus artifacts - CSIT

2018-04-18 Thread Maciek Konstantynowicz (mkonstan)
+1

-Maciek
goo.gl/pR4k3y

On 18 Apr 2018, at 16:13, Luke, Chris 
> wrote:

Given that 18.04 is just one week away, I would suggest the path of least 
disturbance, at least until after the release. I abhor complications. :)

Chris.


From: csit-...@lists.fd.io 
[mailto:csit-...@lists.fd.io] On Behalf Of Ed Kern
Sent: Wednesday, April 18, 2018 11:10 AM
To: Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at Cisco) 
>
Cc: Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) 
>; Marek Gradzki -X (mgradzki - 
PANTHEON TECHNOLOGIES at Cisco) 
>; Maciek Konstantynowicz 
(mkonstan) >; Vanessa Valderrama 
>; 
csit-...@lists.fd.io; vpp-dev 
>; 
hc2...@lists.fd.io; 
honeycomb-...@lists.fd.io
Subject: Re: [csit-dev] [vpp-dev] ARM vpp-dpdk-dkms nexus artifacts - CSIT

Don’t get me wrong: I’m behind Vratko’s thinking about doing the restructure. 
I just didn’t want to rush that in (unless the fix is simpler than it appears), 
but wanted to get you working again right away.

There is no point in just deleting the arm packages, since they would quickly 
get repopulated.

I am still thinking about other options to explore, though.

Ed


On Apr 18, 2018, at 9:05 AM, Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at 
Cisco) > wrote:

Thank you for the inputs. I agree that we can put in a temporary workaround for 
now; unless someone beats me to it, I will do it tomorrow.
I think a long-term solution is more than welcome, looking at this not only 
through the optics of CSIT but of anyone who looks at Nexus and wonders why 
RELEASE exists only for arm64.

Any views on who maintains the Nexus storage from a configuration point of view?

Peter Mikus
Engineer – Software
Cisco Systems Limited

From: Ed Kern (ejk)
Sent: Wednesday, April 18, 2018 5:00 PM
To: Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) 
>; Peter Mikus -X (pmikus - 
PANTHEON TECHNOLOGIES at Cisco) >
Cc: Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco) 
>; Maciek Konstantynowicz 
(mkonstan) >; Vanessa Valderrama 
>; 
csit-...@lists.fd.io; vpp-dev 
>;hc2...@lists.fd.io;
 honeycomb-...@lists.fd.io
Subject: Re: [csit-dev] [vpp-dev] ARM vpp-dpdk-dkms nexus artifacts - CSIT

Vratko: responding to the thread but NOT to your email. I’m going to assume 
you’re correct that this is abusing the version field and that Nexus 
could/should be doing something different; what you’re saying about version 
timing makes sense, since the arm64 flavor certainly came long after the amd64 
one was built and pushed to Nexus.

BUT, just as a data point and a possible short-term workaround:

If CSIT would just use "apt-get download vpp-dpdk-dkms", it appears (even with 
the current config/misconfig) to pull the correct deb for the source 
architecture. Not sure whether this could be a short-term 
fix/workaround/replacement for your curl command.
Ed





On Apr 18, 2018, at 8:46 AM, Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES 
at Cisco) > wrote:

I have some experience with Maven and Nexus.

>   <groupId>io.fd.vpp</groupId>
>   <artifactId>vpp-dpdk-dkms</artifactId>
>   <version>18.02-vpp1_arm64</version>
>   <classifier>deb</classifier>
>   <type>deb</type>

That sets "deb" to be both the classifier and the type.
That is correct for the type, as it usually corresponds to the file extension.
But we can use classifier to designate architecture.
For more information, search for "Classifier:" (without quotes) here [3].

By the way, the incorrect thing we currently do
is to overload the version field.
Versions are assumed to be ordered linearly,
so when comparing 18.02-vpp1_arm64 and 18.02-vpp1_amd64, one has to be newer.

Here is my proposed fix:

  <groupId>io.fd.vpp</groupId>
  <artifactId>vpp-dpdk-dkms</artifactId>
  <version>18.02-vpp1</version>
  <classifier>arm64</classifier>
  <type>deb</type>

Also, the code snippets which currently download the deb file
should be updated to include the classifier value in the URL.

Vratko.

[3] https://maven.apache.org/pom.html#Dependencies
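With a proper classifier, the file name and download URL follow the standard 
Maven repository layout (artifactId-version-classifier.type). A sketch of what 
the adjusted download could look like; the repository host and path here are 
hypothetical, only the layout of the final path segments is the point.

```shell
# Maven layout: <repo>/<groupId as path>/<artifactId>/<version>/
#               <artifactId>-<version>-<classifier>.<type>
REPO="https://nexus.example.org/content/repositories/fd.io.main"  # hypothetical
ARTIFACT="vpp-dpdk-dkms"
VERSION="18.02-vpp1"
CLASSIFIER="arm64"   # or amd64, selected per target architecture
URL="${REPO}/io/fd/vpp/${ARTIFACT}/${VERSION}/${ARTIFACT}-${VERSION}-${CLASSIFIER}.deb"
echo "${URL}"
# curl -O "${URL}"   # the download step, unchanged apart from the classifier
```

The same URL pattern with CLASSIFIER=amd64 selects the x86 package, so the two 
architectures no longer collide on the version field.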

From: csit-...@lists.fd.io 
> On Behalf Of Marek Gradzki 
-X (mgradzki - PANTHEON TECHNOLOGIES at Cisco)
Sent: Wednesday, 2018-April-18 16:24
To: Maciek Konstantynowicz (mkonstan) 

Re: [csit-dev] [vpp-dev] ARM vpp-dpdk-dkms nexus artifacts - CSIT

2018-04-18 Thread Chris Luke
Given that 18.04 is just one week away, I would suggest the path of least 
disturbance, at least until after the release. I abhor complications. :)

Chris.



Re: [csit-dev] [vpp-dev] ARM vpp-dpdk-dkms nexus artifacts - CSIT

2018-04-18 Thread Ed Kern
Don't get me wrong: I'm behind Vratko's thinking about doing the restructure. I just didn't want to rush that in (unless the fix is simpler than it appears), but wanted to get you working again right away.

There is no point in just deleting the arm packages, since they would just get repopulated quickly.

Although I am still thinking about other options to explore.

Ed

Re: [csit-dev] [vpp-dev] ARM vpp-dpdk-dkms nexus artifacts - CSIT

2018-04-18 Thread Peter Mikus
Thank you for the inputs. I agree that we can put in a temporary workaround for now. Unless someone beats me to it, I will do it tomorrow.
I think a long-term solution is more than welcome. Looking at this not only through the optics of CSIT: anyone who looks at Nexus would wonder why RELEASE is arm64 only.

Any views on who is maintaining the Nexus storage from a configuration point of view?

Peter Mikus
Engineer – Software
Cisco Systems Limited

Re: [csit-dev] [vpp-dev] ARM vpp-dpdk-dkms nexus artifacts - CSIT

2018-04-18 Thread Ed Kern
vratko: responding to the thread but NOT to your email... I'm going to assume you're correct that it is abusing the version field and that Nexus could/should be doing something different. What you're saying about version timing also makes sense, since the arm64 flavor certainly came long after the amd64 was built and pushed to Nexus.

BUT, just as a datapoint and possible short-term workaround:

If CSIT would just use "apt-get download vpp-dpdk-dkms", it appears (even with the current config/misconfig) to pull the correct deb depending on your source arch. Not sure if this could be a short-term fix/workaround/replacement for your curl command.

Ed
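The apt workaround above works because apt resolves the architecture for you. A small sketch of the equivalent selection logic, for illustration only (the mapping and the helper name are hypothetical, not CSIT code):

```python
import platform

# Assumed mapping from the local machine architecture to the Debian
# architecture suffix used in the Nexus version strings (e.g. 18.02-vpp1_amd64).
ARCH_SUFFIX = {
    "x86_64": "amd64",
    "aarch64": "arm64",
}

def versioned_package(base_version, machine=None):
    """Return the arch-specific version string for the given (or local) host."""
    machine = machine or platform.machine()
    return "%s_%s" % (base_version, ARCH_SUFFIX[machine])

print(versioned_package("18.02-vpp1", "x86_64"))   # 18.02-vpp1_amd64
print(versioned_package("18.02-vpp1", "aarch64"))  # 18.02-vpp1_arm64
```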



Re: [csit-dev] [vpp-dev] ARM vpp-dpdk-dkms nexus artifacts - CSIT

2018-04-18 Thread Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco)
I have some experience with Maven and Nexus.

>   <groupId>io.fd.vpp</groupId>
>   <artifactId>vpp-dpdk-dkms</artifactId>
>   <version>18.02-vpp1_arm64</version>
>   <classifier>deb</classifier>
>   <type>deb</type>

That sets "deb" to be both the classifier and the type.
That is correct for the type, as it usually corresponds to the file extension.
But we can use classifier to designate architecture.
For more information, search for "Classifier:" (without quotes) here [3].

By the way, the incorrect thing we currently do
is to overload the version field.
Versions are assumed to be ordered linearly,
so when comparing 18.02-vpp1_arm64 and 18.02-vpp1_amd64, one has to be newer.
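The consequence of overloading the version field can be seen with plain string ordering (Maven's real version comparator is more involved, but for these two strings the outcome is the same: 'r' sorts after 'm', so the arm64 flavor comes out "newer"):

```python
# With the architecture folded into the version, one arch must win RELEASE.
amd = "18.02-vpp1_amd64"
arm = "18.02-vpp1_arm64"

print(arm > amd)       # True: 'r' > 'm' at the first differing character
print(max(amd, arm))   # 18.02-vpp1_arm64
```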

Here is my proposed fix:

  <groupId>io.fd.vpp</groupId>
  <artifactId>vpp-dpdk-dkms</artifactId>
  <version>18.02-vpp1</version>
  <classifier>arm64</classifier>
  <type>deb</type>

Also, the code snippets which currently download the deb file
should be updated to include the classifier value in the URL.

Vratko.

[3] https://maven.apache.org/pom.html#Dependencies
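For illustration, a hedged sketch of a download URL that carries the architecture in the classifier (parameter names follow the Nexus 2 artifact REST API; the repo and coordinate values are examples from this thread, not a final decision):

```python
from urllib.parse import urlencode

def artifact_url(repo, group, artifact, version, packaging, classifier):
    """Build a Nexus 2 'artifact/maven/content' URL with an explicit classifier."""
    base = "https://nexus.fd.io/service/local/artifact/maven/content"
    query = urlencode({
        "r": repo, "g": group, "a": artifact,
        "v": version, "p": packaging, "c": classifier,
    })
    return "%s?%s" % (base, query)

url = artifact_url("fd.io.master.ubuntu.xenial.main", "io.fd.vpp",
                   "vpp-dpdk-dkms", "RELEASE", "deb", "arm64")
print(url)
```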


Re: [vpp-dev] ARM vpp-dpdk-dkms nexus artifacts - CSIT

2018-04-18 Thread Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES@Cisco)
+hc2vpp list


Re: [vpp-dev] ARM vpp-dpdk-dkms nexus artifacts - CSIT

2018-04-18 Thread Maciek Konstantynowicz (mkonstan)
Thanks Peter !
This is affecting all CSIT vpp and honeycomb jobs that rely on Nexus images, including ALL periodic jobs: the daily performance trending jobs, and the semi-weekly and weekly jobs.

Ed, Vanessa, could you pls help here? We will open the helpdesk ticket (Peter pls do that).

-Maciek
goo.gl/pR4k3y



Re: [vpp-dev] questions in configuring tunnel

2018-04-18 Thread Neale Ranns
Hi Kingwei,

Thank you for your analysis. Some comments inline (on subjects I know a bit 
about ☺ )

Regards,
neale

From: Kingwel Xie 
Date: Wednesday, 18 April 2018 at 13:49
To: "Neale Ranns (nranns)" , xyxue 
Cc: "vpp-dev@lists.fd.io" 
Subject: RE: [vpp-dev] questions in configuring tunnel

Hi,

As we understand it, this patch would bypass the node replication, so that adding a tunnel would not cause the main thread to wait for workers to synchronize the nodes.

However, in addition to that, you have to do more things to be able to add 40k or more tunnels in a predictable time period. Here is what we did for adding 2M GTP tunnels, for your reference. MPLS tunnels should be pretty much the same.


  1.  Don't call fib_entry_child_add after adding the fib entry for the tunnel (fib_table_entry_special_add). This call links the tunnel into a list of all child nodes belonging to the fib entry for the tunnel endpoint. As a result, adding tunnels becomes slower and slower. BTW, it is not a good fix, but it works.
  #if 0
  t->sibling_index = fib_entry_child_add
(t->fib_entry_index, gtm->fib_node_type, t - gtm->tunnels);
  #endif

[nr] if you skip this then the tunnels are not part of the FIB graph and hence 
any updates in the forwarding to the tunnel’s destination will go unnoticed and 
hence you potentially black hole the tunnel traffic indefinitely (since the 
tunnel is not re-stacked). It is a linked list, but apart from the pool 
allocation of the list element, the list element insertion is O(1), no?


  2.  The bihash for adj_nbr. Each tunnel interface creates one bihash, which by default is 32MB, mmap'd and then memset. Typically you don't need that many adjacencies for a p2p tunnel interface, so we changed the code to use a common heap for all p2p interfaces.

[nr] if you would push these changes upstream, I would be grateful.


  3.  As mentioned in my earlier email, rewrites require cache-line alignment, which mheap cannot handle very well. Mheap might be super slow when you add too many tunnels.
  4.  In vl_api_clnt_process, make sleep_time always 100us. This avoids the main thread yielding to the 10ms wait in linux_epoll_input_inline. It is not a perfect fix either, but without it each API call would probably have to wait up to 10ms until the main thread gets a chance to poll API events.
  5.  Be careful with the counters. They eat up memory very quickly. Each counter is expanded to number of threads multiplied by number of tunnels; in other words, with 8 workers, 1M tunnels means 1M x 8 x 8B = 64MB per simple counter. A combined counter takes double that, because it is 16 bytes. Each interface has 9 simple and 2 combined counters. Besides, load_balance_t and adjacency_t also have counters, and you will have at least as many of those objects as interfaces. The solution is simple: make a dedicated heap for all counters.

[nr] this would also be a useful addition to the upstream
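To make the scale of the counters point concrete, the arithmetic can be checked directly (decimal MB, using the figures from the email; the 9-simple / 2-combined per-interface split is also from the email):

```python
# Back-of-the-envelope check of the counter memory math quoted above.
workers = 8
tunnels = 1_000_000

simple = tunnels * workers * 8      # one simple counter: u64 per thread per tunnel
combined = tunnels * workers * 16   # one combined counter: packets + bytes

print(simple // 10**6, "MB")        # 64 MB
print(combined // 10**6, "MB")      # 128 MB

# All interface counters together: 9 simple + 2 combined per interface set
total = 9 * simple + 2 * combined
print(total // 10**6, "MB")         # 832 MB
```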


  6.  We also did some other fixes to speed up memory allocation, e.g., pre-allocating a big enough pool for gtpu_tunnel_t.

[nr] I understand why you would do this and knobs in the startup.conf to enable 
might be a good approach, but for general consumption, IMHO, it’s too specific 
– others may disagree.

To be honest, it is not easy. It took us quite some time to figure it out. In the end, we managed to add 2M tunnels and 2M routes in 250s.

Hope it helps.

Regards,
Kingwel


From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Neale Ranns
Sent: Wednesday, April 18, 2018 4:33 PM
To: xyxue ; Kingwel Xie 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] questions in configuring tunnel

Hi Xyxue,

Try applying the changes in this patch:
   https://gerrit.fd.io/r/#/c/10216/
to MPLS tunnels. Please contribute any changes back to the community so we can 
all benefit.

Regards,
Neale


From: vpp-dev@lists.fd.io on behalf of xyxue
Date: Wednesday, 18 April 2018 at 09:48
To: Xie
Cc: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] questions in configuring tunnel


Hi,

We are testing MPLS tunnels. The problems below appear in our configuration:
1. Configuring one tunnel adds two nodes (this leads to very high memory consumption).
2. The more nodes there are, the more time vlib_node_runtime_update and node info traversal take.

When we configured 40 thousand MPLS tunnels, configuration took 10+ minutes and we ran out of memory.
How can you configure 2M gtpu tunnels? Can you share the configuration speed and the memory usage?

Thanks,
Xyxue


[vpp-dev] ARM vpp-dpdk-dkms nexus artifacts - CSIT

2018-04-18 Thread Peter Mikus
Hello,

I've recently found that a new package was introduced: 18.02-vpp1_arm64. I guess this is expected as part of adding the new architecture and onboarding ARM.
I also found that the vpp-dpdk-dkms package currently points the RELEASE version to arm64 instead of amd64 [1].


<metadata>
  <groupId>io.fd.vpp</groupId>
  <artifactId>vpp-dpdk-dkms</artifactId>
  <versioning>
    <latest>17.11-vpp1_amd64</latest>
    <release>18.02-vpp1_arm64</release>
    <versions>
      <version>18.02-vpp1_amd64</version>
      <version>18.02-vpp1_arm64</version>
    </versions>
    <lastUpdated>20180417190341</lastUpdated>
  </versioning>
</metadata>


This breaks our CSIT trending jobs as we are always pulling latest RELEASE 
version (once job is fired) and expecting that correct version is downloaded.

Our command
$ curl "https://nexus.fd.io/service/local/artifact/maven/content?r=${REPO}&g=${GROUP}&a=${ART}&p=${PAC}&v=${VER}&c=${CLASS}" -O -J
+ curl 'https://nexus.fd.io/service/local/artifact/maven/content?r=fd.io.master.ubuntu.xenial.main&g=io.fd.vpp&a=vpp-dpdk-dkms&p=deb.md5&v=RELEASE&c=deb' -O -J


As of now, is there any way to get the proper release version from nexus based
on ARCH? E.g. a "PLATFORM" field or similar [2]?

  <groupId>io.fd.vpp</groupId>
  <artifactId>vpp-dpdk-dkms</artifactId>
  <version>18.02-vpp1_arm64</version>
  <packaging>deb</packaging>
  <extension>deb</extension>

If not, may I suggest starting to use a new nexus field for the architecture, if
the packages are going to sit in one repo/dir, so that they can easily be
downloaded for the target platform?
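Until nexus exposes an architecture field, one client-side workaround is to fetch maven-metadata.xml [1] and pick the newest version carrying the wanted architecture suffix, rather than trusting RELEASE. A minimal sketch; the helper name and the inline metadata snippet are illustrative, not part of any CSIT tooling:

```python
import xml.etree.ElementTree as ET

def latest_for_arch(metadata_xml: str, arch: str) -> str:
    """Pick the lexicographically newest <version> ending in the given arch suffix."""
    root = ET.fromstring(metadata_xml)
    versions = [v.text for v in root.iter("version")]
    matching = [v for v in versions if v.endswith("_" + arch)]
    if not matching:
        raise LookupError(f"no version found for arch {arch}")
    return max(matching)  # versions like 18.02-vpp1_amd64 sort sensibly here

# Example input, modelled on the maven-metadata.xml referenced in [1].
METADATA = """<metadata>
  <groupId>io.fd.vpp</groupId>
  <artifactId>vpp-dpdk-dkms</artifactId>
  <versioning>
    <versions>
      <version>17.11-vpp1_amd64</version>
      <version>18.02-vpp1_amd64</version>
      <version>18.02-vpp1_arm64</version>
    </versions>
  </versioning>
</metadata>"""

print(latest_for_arch(METADATA, "amd64"))  # -> 18.02-vpp1_amd64
```

Lexicographic `max` is good enough while the scheme stays `YY.MM-vppN_arch`; a real job would want proper version comparison.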

Thank you.

[1] 
https://nexus.fd.io/service/local/repositories/fd.io.master.ubuntu.xenial.main/content/io/fd/vpp/vpp-dpdk-dkms/maven-metadata.xml
[2] 
https://nexus.fd.io/#view-repositories;fd.io.master.ubuntu.xenial.main~browsestorage


Peter Mikus
Engineer - Software
Cisco Systems Limited
Think before you print.
This email may contain confidential and privileged material for the sole use of 
the intended recipient. Any review, use, distribution or disclosure by others 
is strictly prohibited. If you are not the intended recipient (or authorized to 
receive for the recipient), please contact the sender by reply email and delete 
all copies of this message.
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/index.html



Re: [vpp-dev] questions in configuring tunnel

2018-04-18 Thread Neale Ranns
Hi Xyxue,

Try applying the changes in this patch:
   https://gerrit.fd.io/r/#/c/10216/
to MPLS tunnels. Please contribute any changes back to the community so we can 
all benefit.

Regards,
Neale


From:  on behalf of xyxue 
Date: Wednesday, 18 April 2018 at 09:48
To: Xie 
Cc: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] questions in configuring tunnel


Hi,

We are testing MPLS tunnels. The following problems appear in our configuration:
1. Configuring one tunnel adds two graph nodes (this leads to very high memory
consumption).
2. The more nodes there are, the more time vlib_node_runtime_update and node info
traversal take.

When we configured 40 thousand MPLS tunnels, the configuration took over 10 minutes
and we ran out of memory.
How did you manage to configure 2M GTPU tunnels? Can you share the configuration
speed and the memory usage?

Thanks,
Xyxue




Re: [vpp-dev] vpp io Multi-thread Model #vpp

2018-04-18 Thread tieudaotu137
More than one worker thread.
I dug in deeper, and I learned that when the handoff feature is enabled, a worker
thread reads from dpdk-input and then redirects each packet to the frame queue of
the appropriate worker thread, based on a hash function. That is why I said the
worker threads are also IO threads.

Of course, I know about the RSS feature, but it depends on the NIC supporting it.
--> Software RSS (using an IO thread) would support NICs without the RSS feature,
and I could enable it if needed.
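The handoff idea described above, i.e. steering every packet of a flow to one worker by hashing its 5-tuple, can be sketched as follows. This is a toy model, not VPP's actual handoff node; the tuple fields, CRC32 hash, and names are illustrative only:

```python
import zlib
from typing import NamedTuple

class FiveTuple(NamedTuple):
    """The classic flow key: addresses, protocol, ports."""
    src_ip: str
    dst_ip: str
    proto: int
    src_port: int
    dst_port: int

def pick_worker(pkt: FiveTuple, n_workers: int) -> int:
    """Map a flow to a worker index.

    Because the hash depends only on the 5-tuple, every packet of the
    same flow lands on the same worker, which is the whole point of
    handoff / software RSS.
    """
    key = f"{pkt.src_ip}|{pkt.dst_ip}|{pkt.proto}|{pkt.src_port}|{pkt.dst_port}"
    return zlib.crc32(key.encode()) % n_workers

flow = FiveTuple("10.0.0.1", "10.0.0.2", 6, 1234, 80)
print(pick_worker(flow, 4))  # stable worker index for this flow
```

Hardware RSS does the same computation in the NIC; the software variant trades an extra enqueue/dequeue (the frame-queue hop) for NIC independence.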


[vpp-dev] questions in configuring tunnel

2018-04-18 Thread xyxue

Hi,

We are testing MPLS tunnels. The following problems appear in our configuration:
1. Configuring one tunnel adds two graph nodes (this leads to very high memory
consumption).
2. The more nodes there are, the more time vlib_node_runtime_update and node info
traversal take.

When we configured 40 thousand MPLS tunnels, the configuration took over 10 minutes
and we ran out of memory.
How did you manage to configure 2M GTPU tunnels? Can you share the configuration
speed and the memory usage?

Thanks,
Xyxue




Re: [vpp-dev] vpp io Multi-thread Model #vpp

2018-04-18 Thread Avinash Gonsalves
Is the worker thread your IO thread? How many worker threads have you enabled?
If your traffic is flow-based, you might want to explore the RSS feature:
https://wiki.fd.io/view/VPP/Using_VPP_In_A_Multi-thread_Model



On Wed, Apr 18, 2018 at 9:16 AM,  wrote:

> Thanks Damjan, Avinash
>
> I just want packets of the same flow to always go to the same worker
> thread, and I think an IO thread should be used to redirect the packets of a
> flow to the appropriate worker thread.
> Does that already exist in vpp? Damjan, could you tell me how I can do
> that?
>
> I followed Avinash's answer and enabled the handoff_worker feature, and I
> find that the worker thread is also the IO thread I need. Am I wrong?
> Any help will be highly appreciated!
> 
>
>