Thanks for the comments. Please see mine inline.

From: Neale Ranns (nranns) [mailto:nra...@cisco.com]
Sent: Wednesday, April 18, 2018 9:18 PM
To: Kingwel Xie <kingwel....@ericsson.com>; xyxue <xy...@fiberhome.com>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] questions in configuring tunnel

Hi Kingwel,

Thank you for your analysis. Some comments inline (on subjects I know a bit 
about ☺ )

Regards,
neale

From: Kingwel Xie <kingwel....@ericsson.com>
Date: Wednesday, 18 April 2018 at 13:49
To: "Neale Ranns (nranns)" <nra...@cisco.com>, xyxue <xy...@fiberhome.com>
Cc: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: RE: [vpp-dev] questions in configuring tunnel

Hi,

As we understand it, this patch bypasses node replication, so that adding a 
tunnel does not make the main thread wait for the workers to synchronize the 
nodes.

However, in addition to that, you have to do more to be able to add 40k or 
more tunnels in a predictable time period. Here is what we did to add 2M GTPU 
tunnels, for your reference. MPLS tunnels should be pretty much the same.


  1.  Don’t call fib_entry_child_add after adding the fib entry for the tunnel 
(fib_table_entry_special_add). That call inserts the tunnel into a linked list 
of all child nodes belonging to the fib entry of the tunnel endpoint; as a 
result, adding tunnels becomes slower and slower. BTW, it is not a good fix, 
but it works.
                  /* Disabled: skip registering the tunnel as a child of its
                     fib entry, so tunnel creation stays O(1). See the
                     discussion below for the trade-off. */
                  #if 0
                  t->sibling_index = fib_entry_child_add
                    (t->fib_entry_index, gtm->fib_node_type, t - gtm->tunnels);
                  #endif

[nr] if you skip this, then the tunnels are not part of the FIB graph, and 
hence any updates to the forwarding of the tunnel’s destination will go 
unnoticed; you potentially black-hole the tunnel traffic indefinitely (since 
the tunnel is never re-stacked). It is a linked list, but apart from the pool 
allocation of the list element, the insertion is O(1), no?
[kingwel] You are right that the update will not be noticed, but we think that 
is acceptable for a p2p tunnel interface. Inserting the list element itself is 
fine; it is the subsequent restack operation, which walks through all inserted 
elements, that hurts. This is the point I’m making, sketched below.
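
To make the cost concrete, here is a minimal sketch in plain C (not the actual 
VPP FIB code; child_t and the walk are simplified stand-ins) of why each 
insertion is O(1) while a single restack walk touches every registered child:

    /* Minimal sketch, not VPP code: a head insert into the child list is
       O(1); a restack must walk every child, so the cost of one walk
       grows linearly with the number of tunnels already added. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct child
    {
      int tunnel_index;
      struct child *next;
    } child_t;

    /* O(1): push a new child onto the head of the fib entry's child list */
    static child_t *
    child_add (child_t *head, int tunnel_index)
    {
      child_t *c = malloc (sizeof (*c));
      c->tunnel_index = tunnel_index;
      c->next = head;
      return c;
    }

    /* O(n): a restack/back-walk visits every child of the fib entry */
    static void
    restack_walk (child_t *head)
    {
      int n = 0;
      for (child_t *c = head; c; c = c->next)
        n++;                        /* real code would re-stack each tunnel */
      printf ("restacked %d children\n", n);
    }

    int
    main (void)
    {
      child_t *head = NULL;
      for (int i = 0; i < 2000000; i++)
        head = child_add (head, i); /* each insert is constant time */
      restack_walk (head);          /* but one walk touches all 2M tunnels */
      return 0;
    }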


  2.  The bihash for adj_nbr. Each tunnel interface creates one bihash, which 
by default is 32MB, mmap’d and memset on creation. Typically you don’t need 
that many adjacencies for a p2p tunnel interface, so we changed the code to 
use a common heap for all p2p interfaces.

[nr] if you would push these changes upstream, I would be grateful.
[kingwel] The fix is quite ugly. Let’s see what we can do to make it better.
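
For a rough sense of the scale involved, a back-of-the-envelope sketch in 
plain C (the 64-byte entry size for a shared table is an assumption, purely 
illustrative):

    #include <stdio.h>

    int
    main (void)
    {
      unsigned long long per_table = 32ULL << 20; /* 32MB default bihash */
      unsigned long long n_tunnels = 2000000ULL;  /* 2M p2p tunnels */

      /* one 32MB table per tunnel interface */
      printf ("per-interface tables: %llu GiB\n",
              per_table * n_tunnels >> 30);       /* 62500 GiB */

      /* a p2p tunnel has one far-end neighbour, so a shared table needs
         roughly one entry per tunnel (64B per entry is an assumption) */
      unsigned long long entry_size = 64ULL;
      printf ("one shared table: ~%llu MiB\n",
              entry_size * n_tunnels >> 20);      /* ~122 MiB */
      return 0;
    }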


  3.  As mentioned in my earlier email, the rewrite requires cache-line 
alignment, which mheap cannot handle very well. Mheap can become super slow 
when you add too many tunnels.
  4.  In vl_api_clnt_process, make sleep_time always 100us. This avoids the 
main thread yielding to linux_epoll_input_inline’s 10ms wait. It is not a 
perfect fix either, but if you don’t do it, each API call would probably have 
to wait up to 10ms until the main thread gets a chance to poll API events.
  5.  Be careful with the counters. They eat up your memory very quickly: each 
counter is expanded to (number of threads) x (number of tunnels) entries. In 
other words, with 8 workers, 1M tunnels means 1M x 8 x 8B = 64MB for a single 
simple counter, and a combined counter takes double that because it is 16 
bytes. Each interface has 9 simple and 2 combined counters. Besides, 
load_balance_t and adjacency_t also have counters, and you will have at least 
that many of those objects if you have that many interfaces. The solution is 
simple: make a dedicated heap for all counters. The arithmetic is sketched 
after this exchange.

[nr] this would also be a useful addition to the upstream
[kingwel] will do later.
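
Spelling out the counter arithmetic above (the 8B/16B counter sizes and the 
9-simple/2-combined per-interface counts are taken from the mail, not 
measured):

    #include <stdio.h>

    int
    main (void)
    {
      unsigned long long n_tunnels = 1000000ULL; /* 1M tunnel interfaces */
      unsigned long long n_threads = 8ULL;       /* 8 workers */
      unsigned long long simple = 8ULL;          /* one u64 per counter */
      unsigned long long combined = 16ULL;       /* packets + bytes */

      unsigned long long per_simple = n_tunnels * n_threads * simple;
      unsigned long long per_combined = n_tunnels * n_threads * combined;

      /* 64MB decimal, as in the mail; ~61 MiB binary */
      printf ("one simple counter:   %llu MiB\n", per_simple >> 20);
      printf ("one combined counter: %llu MiB\n", per_combined >> 20);
      printf ("9 simple + 2 combined per interface: %llu MiB\n",
              (9 * per_simple + 2 * per_combined) >> 20); /* ~793 MiB */
      return 0;
    }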


  6.  We also did some other fixes to speed up memory allocation, e.g., 
pre-allocating a big enough pool for gtpu_tunnel_t (sketched after this 
exchange).

[nr] I understand why you would do this, and knobs in startup.conf to enable 
it might be a good approach, but for general consumption, IMHO, it’s too 
specific; others may disagree.
[kingwel] agree☺
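
For illustration, the pre-allocation idea looks roughly like this with the 
vppinfra pool macros (tunnel_t here is a placeholder, not the real 
gtpu_tunnel_t, and the 2M figure is just the target from above):

    #include <vppinfra/pool.h>

    typedef struct
    {
      u32 sw_if_index;             /* ... the rest of the tunnel state ... */
    } tunnel_t;

    static tunnel_t *tunnel_pool;

    void
    tunnels_init (void)
    {
      /* one big expansion up front, instead of repeated pool growth
         (and element copies) while tunnels are added one by one */
      pool_alloc (tunnel_pool, 2 << 20); /* room for ~2M tunnels */
    }

    tunnel_t *
    tunnel_add (void)
    {
      tunnel_t *t;
      pool_get (tunnel_pool, t);   /* no reallocation on the hot path now */
      return t;
    }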

To be honest, it is not easy. It took us quite some time to figure it out. In 
the end, we managed to add 2M tunnels & 2M routes in 250s.

Hope it helps.

Regards,
Kingwel


From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Neale Ranns
Sent: Wednesday, April 18, 2018 4:33 PM
To: xyxue <xy...@fiberhome.com>; Kingwel Xie <kingwel....@ericsson.com>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] questions in configuring tunnel

Hi Xyxue,

Try applying the changes in this patch:
   https://gerrit.fd.io/r/#/c/10216/
to MPLS tunnels. Please contribute any changes back to the community so we can 
all benefit.

Regards,
Neale


From: <vpp-dev@lists.fd.io> on behalf of xyxue <xy...@fiberhome.com>
Date: Wednesday, 18 April 2018 at 09:48
To: Xie <kingwel....@ericsson.com>
Cc: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: [vpp-dev] questions in configuring tunnel


Hi,

We are testing MPLS tunnels. The problems shown below appear in our 
configuration:
1. Configuring one tunnel adds two nodes (this leads to very high memory 
consumption).
2. The more nodes there are, the longer vlib_node_runtime_update and the node 
info traversal take.

When we configured 40 thousand MPLS tunnels, configuration took 10+ minutes 
and we ran out of memory.
How were you able to configure 2M GTPU tunnels? Can you share the 
configuration speed and the memory usage?

Thanks,
Xyxue