Re: [vpp-dev] CPU usage on vpp_main thread is high when using tap cli between vpp and another userspace application #vpp

2019-01-24 Thread Damjan Marion via Lists.Fd.Io

Don't use the tapcli code; it is outdated and I just submitted a patch to deprecate
it (as agreed a few months ago on the community call).

https://gerrit.fd.io/r/17073 

tapv2 (create interface tap) should be able to run on a worker, but don't expect
magic, it's just a tap.
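
For example, roughly like this (exact CLI options may vary slightly between
releases), a tapv2 interface can be created and its RX queue pinned to a worker:

vpp# create tap id 0 host-if-name vpp-tap0
vpp# set interface state tap0 up
vpp# set interface rx-placement tap0 queue 0 worker 1
vpp# show interface rx-placement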


> On 25 Jan 2019, at 08:14, ranadip@gmail.com wrote:
> 
> Hello, 
> 
> I am trying to use vpp with 4 worker threads. 
> The client is sending packets to the server through vpp. A packet comes to vpp and
> gets picked up from a VF. VPP sends the packet to another user-space
> application over a tapcli interface.
> The user-space application processes the packet and then sends it back to
> vpp. VPP forwards the packet to the server.
> 
> I am using vpp with 4 worker threads.
> 
> Without traffic, the four worker threads all show ~100% cpu usage
> (which is expected). However, with traffic, the vpp main thread also
> shows ~100% cpu usage. I do not want the vpp main thread to reach 100% cpu.
> With the limited knowledge that I have of vpp, it seems like the main thread is
> doing a lot of processing in the tapcli-rx node. Is there a way to keep vpp
> main relatively free? Is a tap interface the correct choice to communicate
> between vpp and another user-space application? If not, what should be used?
> 
> I have tried this on both vpp 18.07 and 18.10. 
>  
> vpp# show run
> Thread 0 vpp_main (lcore 2)
> Time 916.9, average vectors/node 40.16, last 128 main loops 0.00 per node 0.00
>   vector rates in 1.3178e5, out 1.3178e5, drop 1.3087e-2, punt 0.e0
>              Name                 State         Calls          Vectors       Suspends         Clocks       Vectors/Call
> VirtualFunctionEthernet3/10/1-   active            3008704       120840447             0          1.36e1           40.16
> VirtualFunctionEthernet3/10/1-   active            3008704       120840447             0          1.02e2           40.16
> acl-plugin-fa-cleaner-process  event wait                0               0             1          1.85e4            0.00
> admin-up-down-process          event wait                0               0             1          1.20e3            0.00
> api-rx-from-ring                any wait                 0               0            50          7.80e4            0.00
> arp-input                        active                  1               1             0          2.39e6            1.00
> avf-process                    event wait                0               0             1          2.14e3            0.00
> bfd-process                    event wait                0               0             1          1.84e3            0.00
> bond-process                   event wait                0               0             1          1.44e3            0.00
> cdp-process                     any wait                 0               0             1          1.38e7            0.00
> dhcp-client-process             any wait                 0               0            10          1.02e4            0.00
> dhcp6-client-cp-process         any wait                 0               0             1          1.07e3            0.00
> dhcp6-pd-client-cp-process      any wait                 0               0             1          9.28e2            0.00
> dhcp6-pd-reply-publisher-proce event wait                0               0             1          8.88e2            0.00
> dhcp6-reply-publisher-process  event wait                0               0             1          1.02e3            0.00
> dns-resolver-process            any wait                 0               0             1          2.18e3            0.00
> dpdk-ipsec-process                done                   1               0             0          1.29e5            0.00
> dpdk-process                    any wait                 0               0           306          5.73e4            0.00
> error-drop                       active                 11              12             0          5.73e3            1.09
> ethernet-input                   active            3008710       120840454             0          3.78e1           40.16
> fib-walk                        any wait                 0               0           459          3.03e3            0.00
> flow-report-process             any wait                 0               0             1          6.92e2            0.00
> flowprobe-timer-process         any wait                 0               0             1          3.09e3            0.00
> igmp-timer-process             event wait                0               0             1          2.18e3            0.00
> ikev2-manager-process           any wait                 0               0           916          2.46e3            0.00
> ioam-export-process             any wait                 0               0

[vpp-dev] CPU usage on vpp_main thread is high when using tap cli between vpp and another userspace application #vpp

2019-01-24 Thread ranadip . das
Hello, 

I am trying to use vpp with 4 worker threads. 
The client is sending packets to the server through vpp. A packet comes to vpp and
gets picked up from a VF. VPP sends the packet to another user-space
application over a tapcli interface.
The user-space application processes the packet and then sends it back to vpp.
VPP forwards the packet to the server.

I am using vpp with 4 worker threads.

Without traffic, the four worker threads all show ~100% cpu usage (which
is expected). However, with traffic, the vpp main thread also shows ~100%
cpu usage. I do not want the vpp main thread to reach 100% cpu. With the limited
knowledge that I have of vpp, it seems like the main thread is doing a lot of
processing in the tapcli-rx node. Is there a way to keep vpp main relatively
free? Is a tap interface the correct choice to communicate between vpp and
another user-space application? If not, what should be used?

I have tried this on both vpp 18.07 and 18.10. 
 
vpp# show run
Thread 0 vpp_main (lcore 2)
Time 916.9, average vectors/node 40.16, last 128 main loops 0.00 per node 0.00
  vector rates in 1.3178e5, out 1.3178e5, drop 1.3087e-2, punt 0.e0
             Name                 State         Calls          Vectors       Suspends         Clocks       Vectors/Call
VirtualFunctionEthernet3/10/1-   active            3008704       120840447             0          1.36e1           40.16
VirtualFunctionEthernet3/10/1-   active            3008704       120840447             0          1.02e2           40.16
acl-plugin-fa-cleaner-process  event wait                0               0             1          1.85e4            0.00
admin-up-down-process          event wait                0               0             1          1.20e3            0.00
api-rx-from-ring                any wait                 0               0            50          7.80e4            0.00
arp-input                        active                  1               1             0          2.39e6            1.00
avf-process                    event wait                0               0             1          2.14e3            0.00
bfd-process                    event wait                0               0             1          1.84e3            0.00
bond-process                   event wait                0               0             1          1.44e3            0.00
cdp-process                     any wait                 0               0             1          1.38e7            0.00
dhcp-client-process             any wait                 0               0            10          1.02e4            0.00
dhcp6-client-cp-process         any wait                 0               0             1          1.07e3            0.00
dhcp6-pd-client-cp-process      any wait                 0               0             1          9.28e2            0.00
dhcp6-pd-reply-publisher-proce event wait                0               0             1          8.88e2            0.00
dhcp6-reply-publisher-process  event wait                0               0             1          1.02e3            0.00
dns-resolver-process            any wait                 0               0             1          2.18e3            0.00
dpdk-ipsec-process                done                   1               0             0          1.29e5            0.00
dpdk-process                    any wait                 0               0           306          5.73e4            0.00
error-drop                       active                 11              12             0          5.73e3            1.09
ethernet-input                   active            3008710       120840454             0          3.78e1           40.16
fib-walk                        any wait                 0               0           459          3.03e3            0.00
flow-report-process             any wait                 0               0             1          6.92e2            0.00
flowprobe-timer-process         any wait                 0               0             1          3.09e3            0.00
igmp-timer-process             event wait                0               0             1          2.18e3            0.00
ikev2-manager-process           any wait                 0               0           916          2.46e3            0.00
ioam-export-process             any wait                 0               0             1          1.10e3            0.00
ip-neighbor-scan-process        any wait                 0               0            16          7.03e3            0.00
ip-route-resolver-process       any wait                 0               0            10          8.13e3            0.00
ip4-glean                        active                  2               2             0          9.34e3            1.00
ip4-input                        active            3008704       120840447             0

[vpp-dev] Flowprobe/IPFIX export

2019-01-24 Thread Harish Patil
Hi,
I need a few clarifications on IPFIX support in the latest VPP. I went through the
codebase of older releases and understand how IPFIX support has evolved
from 16.09 through the latest, in terms of enhancements and refactoring from
vnet/vnet/flow,classify to the flowperpkt plugin and now the flowprobe plugin.

With the latest VPP, I have a few questions:
1) Flowprobe depends on vnet/ipfix-export for exporting flow data (template
rewrite etc). But does flowprobe still depend on VPP classifiers, i.e.
does it expect a classifier table/session to be created for flow matching? Where
is the hash/lookup done?
2) Flowprobe seems to be a TX-only feature; is it supported for RX as well?
3) What exactly does "plugin generates ipfix flow records on interfaces which
have the feature enabled" mean? Does it mean flow records are generated for
"all" flows on that particular interface, or can we selectively enable flow
record generation only for specified flows?
Thanks,

harish
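
For reference, per-interface enablement in the flowprobe plugin looks roughly like
this (the exporter addresses, the interface name and the timer values below are
only placeholders/examples):

vpp# set ipfix exporter collector 10.0.0.1 src 10.0.0.2 template-interval 20
vpp# flowprobe params record l3 active 120 passive 300
vpp# flowprobe feature add-del GigabitEthernet0/8/0 ip4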


Re: [vpp-dev] How do I get the "dpdk-shared" in VPP ?

2019-01-24 Thread Damjan Marion via Lists.Fd.Io

In theory like any other cmake project:

$ mkdir build
$ cd build
$ cmake /path/to/vpp/src 
$ make
$ make install

In practice, probably a few lines need to be modified in
src/plugins/dpdk/CMakeLists.txt to enable linking with shared libs, as today we
link statically...

— 
Damjan

> On Jan 24, 2019, at 9:15 PM, Marco Varlese  wrote:
> 
> Hi Damjan and all,
> 
> How do I get VPP master and / or 19.01-rcX to build against a DPDK
> already on my system?
> 
> I am basically talking about the previously available feature, driven via
> vpp.mk as per the snippet below:
>vpp_uses_external_dpdk = yes
>vpp_dpdk_inc_dir = /usr/include/dpdk
>vpp_dpdk_lib_dir = /usr/lib
>vpp_dpdk_shared_lib = yes
> 
> I can't find how to do it right now...
> 
> 
> Thanks in advance,
> 
> Marco
> 
> -- 
> Marco Varlese, Architect Developer Technologies, SUSE Labs
> SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton
> HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg
> 
> 


Re: [vpp-dev] Question about vlib_next_frame_change_ownership

2019-01-24 Thread Dave Barach via Lists.Fd.Io
The vpp packet trace which I extracted from your dispatch trace seems exactly 
as I would have expected. See below. In a pg test like this one using a 
loopback interface, anything past loopN-tx is irrelevant. The ipsec packet 
turns into an ARP request for 18.1.0.241.

In non-cyclic graph cases, we don’t end up changing frame ownership at all. In 
this case, you’re doing a double lookup. One small memcpy per frame is a 
trivial cost, especially when one remembers that the cost is amortized over all 
the packets in the frame.

Until you produce a repeatable demonstration of the claimed issue, there’s 
nothing that we can do.

Thanks... Dave

VPP Buffer Trace
Trace:
Trace: 00:00:53:959410: pg-input
Trace:   stream ipsec0, 100 bytes, 0 sw_if_index
Trace:   current data 0, length 100, buffer-pool 0, clone-count 0, trace 0x0
Trace:   UDP: 192.168.2.255 -> 1.2.3.4
Trace: tos 0x00, ttl 64, length 28, checksum 0xb324
Trace: fragment id 0x
Trace:   UDP: 4321 -> 1234
Trace: length 80, checksum 0x30d9
Trace: 00:00:53:959426: ip4-input
Trace:   UDP: 192.168.2.255 -> 1.2.3.4
Trace: tos 0x00, ttl 64, length 28, checksum 0xb324
Trace: fragment id 0x
Trace:   UDP: 4321 -> 1234
Trace: length 80, checksum 0x30d9
Trace: 00:00:53:959519: ip4-lookup
Trace:   fib 0 dpo-idx 2 flow hash: 0x
Trace:   UDP: 192.168.2.255 -> 1.2.3.4
Trace: tos 0x00, ttl 64, length 28, checksum 0xb324
Trace: fragment id 0x
Trace:   UDP: 4321 -> 1234
Trace: length 80, checksum 0x30d9
Trace: 00:00:53:959598: ip4-rewrite
Trace:   tx_sw_if_index 2 dpo-idx 2 : ipv4 via 0.0.0.0 ipsec0: mtu:9000 
flow hash: 0x
Trace:   : 
451c3f11b424c0a802ff0102030410e104d2005030d900010203
Trace:   0020: 0405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f
Trace: 00:00:53:959687: ipsec0-output
Trace:   ipsec0
Trace:   : 
451c3f11b424c0a802ff0102030410e104d2005030d900010203
Trace:   0020: 
0405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20212223
Trace:   0040: 
2425262728292a2b2c2d2e2f303132333435363738393a3b3c3d3e3f40414243
Trace:   0060: 44454647
Trace: 00:00:53:959802: ipsec0-tx
Trace:   IPSec: spi 1 seq 1
Trace: 00:00:53:959934: esp4-encrypt
Trace:   esp: spi 1 seq 1 crypto aes-cbc-128 integrity sha1-96
Trace: 00:00:53:960084: ip4-lookup
Trace:   fib 0 dpo-idx 0 flow hash: 0x
Trace:   IPSEC_ESP: 18.1.0.71 -> 18.1.0.241
Trace: tos 0x00, ttl 254, length 168, checksum 0x96ea
Trace: fragment id 0x
Trace: 00:00:53:960209: ip4-glean
Trace: IPSEC_ESP: 18.1.0.71 -> 18.1.0.241
Trace:   tos 0x00, ttl 254, length 168, checksum 0x96ea
Trace:   fragment id 0x
Trace: 00:00:53:960336: loop0-output
Trace:   loop0
Trace:   ARP: de:ad:00:00:00:00 -> ff:ff:ff:ff:ff:ff
Trace:   request, type ethernet/IP4, address size 6/4
Trace:   de:ad:00:00:00:00/18.1.0.71 -> 00:00:00:00:00:00/18.1.0.241
Trace: 00:00:53:960491: error-drop
Trace:   ip4-glean: ARP requests sent

From: Kingwel Xie 
Sent: Thursday, January 24, 2019 2:43 AM
To: Dave Barach (dbarach) ; vpp-dev 
Subject: RE: [vpp-dev] Question about vlib_next_frame_change_ownership

Ok. As requested, pcap trace & test script attached. Actually I made some
simplifications to illustrate the problem, using native IPSEC instead of DPDK.

You can see in the buffer trace that ip4-lookup is referred to by ip4-input at the
beginning and then by esp4-encrypt later. It means the ownership of ip4-lookup is
changed back and forth, a 16x3 = 48 byte memcpy, on a per-frame basis. In some
cases the trace flag in next_frame is lost, which breaks the buffer trace. I made
a patch for further discussion about it:
https://gerrit.fd.io/r/17037

Test log shown below:

DBGvpp# show version
vpp v19.04-rc0~24-g0702554 built by root on ubuntu89 at Sat Jan 19 22:13:50 EST 
2019
DBGvpp#
DBGvpp# exec ipsec
loop0
DBGvpp#
DBGvpp# pcap dispatch trace on max 1000 file vpp.pcap buffer-trace pg-input 10
Buffer tracing of 10 pkts from pg-input enabled...
pcap dispatch capture on...
DBGvpp#
DBGvpp#
DBGvpp# packet-generator enable-stream ipsec0
DBGvpp#
DBGvpp# pcap dispatch trace off
captured 14 pkts...
saved to /tmp/vpp.pcap...
DBGvpp#
DBGvpp# show trace
--- Start of thread 0 kw_main ---
Packet 1

00:00:53:959410: pg-input
  stream ipsec0, 100 bytes, 0 sw_if_index
  current data 0, length 100, buffer-pool 0, clone-count 0, trace 0x0
  UDP: 192.168.2.255 -> 1.2.3.4
tos 0x00, ttl 64, length 28, checksum 0xb324
fragment id 0x
  UDP: 4321 -> 1234
length 80, checksum 0x30d9
00:00:53:959426: ip4-input
  UDP: 192.168.2.255 -> 1.2.3.4
tos 0x00, ttl 64, length 28, checksum 0xb324
fragment id 0x
  UDP: 4321 -> 

[vpp-dev] How do I get the "dpdk-shared" in VPP ?

2019-01-24 Thread Marco Varlese
Hi Damjan and all,

How do I get VPP master and / or 19.01-rcX to build against a DPDK
already on my system?

I am basically talking about the previously available feature, driven via
vpp.mk as per the snippet below:
   vpp_uses_external_dpdk = yes
   vpp_dpdk_inc_dir = /usr/include/dpdk
   vpp_dpdk_lib_dir = /usr/lib
   vpp_dpdk_shared_lib = yes

I can't find how to do it right now...


Thanks in advance,

Marco

-- 
Marco Varlese, Architect Developer Technologies, SUSE Labs
SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg






Re: [vpp-dev] l2 input/output feature arcs

2019-01-24 Thread Andrew Yourtchenko


On 24 Jan 2019, at 14:45, Benoit Ganne (bganne)  wrote:

>>> 4) keep the 3 flavors as they are but add helpers to register
>>> nodes to the 3 arcs - basically move helpers from GBP plugin to vnet/l2.
>>> Basically same up/downside as (3)
> 
>> This would be my favourite approach. The benefit of having per-arc nodes is
>> that we can get compiler optimizations by using constant propagation of
>> “is_ip6” parameter to a common inline node function... overhead in total is
>> about 20 LOC per node, and is a one-time thing.
> 
> Note that (1) does not prevent that either and is simpler in my opinion:
> - (4) adds some overhead because node declaration must be duplicated (the 
> feature node memory structure is duplicated for each arc)

With gigabytes of memory used for pools and other data structures, the node memory
structure overhead is negligible imho. Plus, mind my point about
address-family-specific optimizations. Check the acl plugin: I have 8 nodes and
only one instance of actual “code”. Not to say that it works perfectly, but
maintenance-wise it is quite reasonable.

> - no need for helpers to register nodes to the 3 arcs

I just have difficulty seeing how duplicating the packets is a workable idea.

> 
> Finally, when moving from L2 feature bits to arcs, the fact that the nodes 
> cannot be reordered between "all" and eg. "ip4" is no different that the 
> current situation with L2 feature bits (that cannot be reordered dynamically 
> AFAICT).
> 
> Anyway, I do not have a strong opinion (apart from the fact that I do not like (5)), so
> I'll probably stop here and wait for your decision :)

#4. :)

The rationale behind the way I did it was to harmonize the way the l2 and l3
nodes work, as much as possible.

—a

> 
> Ben


Re: [vpp-dev] l2 input/output feature arcs

2019-01-24 Thread Benoit Ganne (bganne) via Lists.Fd.Io
>> 4) keep the 3 flavors as they are but add helpers to register
>> nodes to the 3 arcs - basically move helpers from GBP plugin to vnet/l2.
>> Basically same up/downside as (3)

> This would be my favourite approach. The benefit of having per-arc nodes is
> that we can get compiler optimizations by using constant propagation of
> “is_ip6” parameter to a common inline node function... overhead in total is
> about 20 LOC per node, and is a one-time thing.

Note that (1) does not prevent that either and is simpler in my opinion:
 - (4) adds some overhead because node declaration must be duplicated (the 
feature node memory structure is duplicated for each arc)
 - no need for helpers to register nodes to the 3 arcs

Finally, when moving from L2 feature bits to arcs, the fact that the nodes 
cannot be reordered between "all" and e.g. "ip4" is no different than the
current situation with L2 feature bits (that cannot be reordered dynamically 
AFAICT).

Anyway, I do not have a strong opinion (apart from the fact that I do not like (5)), so I'll
probably stop here and wait for your decision :)

Ben


Re: [vpp-dev] l2 input/output feature arcs

2019-01-24 Thread Andrew Yourtchenko


> On 24 Jan 2019, at 11:16, Damjan Marion via Lists.Fd.Io 
>  wrote:
> 
> 
> 
>> On 24 Jan 2019, at 10:39, Benoit Ganne (bganne) via Lists.Fd.Io 
>>  wrote:
>> 
>> Hi all,
>> 
>> While refactoring the GBP plugin to use feature arcs instead of hardcoded L2 
>> feature bits, I had to basically duplicate my feature arc nodes 3x (and 
>> disable/enable them 3x etc.) because the L2 feature arcs are divided in 3 
>> flavors: nonip (no IP ethertype), ip4 (IPv4 ethertype) and ip6 (IPv6 
>> ethertype). It works but I'd prefer to hide this complexity from the plugins.
>> I can see several possibilities:
>> 1) add a new feature arc flavor 'all' alongside nonip, ip4 and ip6. Nodes on 
>> this arc will get all packets regardless of the headers. It keeps backward
>> compat but should add a small performance hit when L2 feature arcs are 
>> enabled (we have to visit 4 feature arcs instead of 3). This is my favorite.
> 
> How this works if you have features enabled on both all and ip4 for example?
> 
>> 2) remove feature arcs flavors and just pass all packets to feature nodes. 
>> It is the responsibility of the nodes to check the packet type. It 
>> simplifies the L2 feature arc code but breaks backward compat. It could also 
>> be slightly less efficient as the ethertype test will happen later and must 
>> be duplicated in all feature nodes.

Exactly the reason why I did it this way and why I would not go that route.

>> 3) add a new feature arc using a new feature bit. It does not modify the 
>> current path at all, so perf & compat is unchanged but adds some complexity 
>> in the L2 path.
>> 4) keep the 3 flavors as they are but add helpers to register nodes to the 3
>> arcs - basically move helpers from GBP plugin to vnet/l2. Basically same 
>> up/downside as (3)

This would be my favourite approach. The benefit of having per-arc nodes is
that we can get compiler optimizations by using constant propagation of the
“is_ip6” parameter to a common inline node function... the overhead in total is
about 20 LOC per node, and it is a one-time thing.

—a


>> 5) keep it as is (boilerplate in GBP)
>> 
>> My favorite would be (1) but I'd like to hear from more experienced VPP devs.
> 
> 
> I would say 2, but that's just my 2 cents...
> 
> -- 
> Damjan
> 


[vpp-dev] VPP/Using VPP as a VXLAN Tunnel Terminator

2019-01-24 Thread topuri brahmaiah
Hi John and Everyone,

We are trying the use case mentioned in
https://wiki.fd.io/view/VPP/Using_VPP_as_a_VXLAN_Tunnel_Terminator
and have an issue with ARP resolution and packet forwarding to the non-VXLAN
segment.

 

My setup is as follows:

  VM1 (VPP):    VirtualFunctionEthernet3/10/4 (7.0.0.3)
  VM2 (VPP):    VirtualFunctionEthernet3/10/0, 3/10/2, 3/10/1 (30.30.30.30)
  VM3 (linux):  VirtualFunctionEthernet3/10/3 (30.30.30.35), vxlan13, 6.0.4.4

VXLAN VNI 13 runs between VM2 (30.30.30.30) and VM3 (30.30.30.35).

 

Trying to ping 7.0.0.3 from VM3 using the vxlan13 interface (ping -I vxlan13 7.0.0.3 -c 1).

3/10/0, 3/10/2 and 3/10/4 are connected to one SR-IOV switch, and 3/10/1 and 3/10/3
are connected to another SR-IOV switch.

Please let me know whether the setup is correct for validating the "VPP as a VXLAN
tunnel terminator" use case mentioned in the link.

I observed the ARP request being received by VM1's vpp, but it was dropped because
the IP4 source address is not local to the subnet.

I then tried configuring the ARP entry manually on VM3 and observed the ping packets
being received by VM1, but they failed with an ip4 source lookup miss.

It looks like I am missing some configuration or the setup is wrong. Please help me
find what is wrong with the steps below.

 

 

The configuration steps on VM2's VPP are:

 

set interface ip address VirtualFunctionEthernet3/10/1 30.30.30.30/24

set interface state VirtualFunctionEthernet3/10/1 up

 

loopback create mac 1a:2b:3c:4d:5e:6f

create vxlan tunnel src 30.30.30.30 dst 30.30.30.35 vni 13 encap-vrf-id 0 
decap-next l2

set interface state loop0 up

set interface state VirtualFunctionEthernet3/10/2 up

set interface l2 bridge VirtualFunctionEthernet3/10/2 13 0

set interface l2 bridge vxlan_tunnel0 13 1

set interface l2 bridge loop0 13 bvi 0

set interface ip table loop0 5

set interface ip address loop0 6.0.0.250/24

 

loopback create mac 1a:2b:3c:4d:5e:7f

set interface state loop1 up

set interface state VirtualFunctionEthernet3/10/0 up

set interface l2 bridge VirtualFunctionEthernet3/10/0 11 0

set interface l2 bridge loop1 11 bvi 0

set interface ip table loop1 5

set interface ip address loop1 7.0.0.250/24
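
For reference, the bridge-domain, tunnel and FIB state on VM2 can be checked with
standard show commands such as:

show bridge-domain 13 detail
show vxlan tunnel
show ip fib
show ip arp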

 

Packet traces for the ARP request are:

Packet 1

 

00:05:50:988601: dpdk-input

 VirtualFunctionEthernet3/10/1 rx queue 0

 buffer 0x400346f: current data 14, length 78, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x0

 PKT MBUF: port 1, nb_segs 1, pkt_len 92

   buf_len 2176, data_len 92, ol_flags 0x180, data_off 128, phys_addr 0xfd6cdac0

   packet_type 0x241

   Packet Offload Flags

 PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid

 PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid

   Packet Types

 RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet

 RTE_PTYPE_L3_IPV6 (0x0040) IPv6 packet without extension headers

 RTE_PTYPE_L4_UDP (0x0200) UDP packet

 IP4: 2e:49:93:5f:1b:c1 -> 02:09:c0:31:8a:ba

 UDP: 30.30.30.35 -> 30.30.30.30

   tos 0x00, ttl 64, length 78, checksum 0x302f

   fragment id 0xd1f3

 UDP: 40014 -> 4789

   length 58, checksum 0xac09

00:05:50:988633: ip4-input-no-checksum

 UDP: 30.30.30.35 -> 30.30.30.30

   tos 0x00, ttl 64, length 78, checksum 0x302f

   fragment id 0xd1f3

 UDP: 40014 -> 4789

 length 58, checksum 0xac09

00:05:50:988644: ip4-lookup

 fib 0 dpo-idx 6 flow hash: 0x

 UDP: 30.30.30.35 -> 30.30.30.30

   tos 0x00, ttl 64, length 78, checksum 0x302f

   fragment id 0xd1f3

 UDP: 40014 -> 4789

   length 58, checksum 0xac09

00:05:50:988662: ip4-local

   UDP: 30.30.30.35 -> 30.30.30.30

 tos 0x00, ttl 64, length 78, checksum 0x302f

 fragment id 0xd1f3

   UDP: 40014 -> 4789

 length 58, checksum 0xac09

00:05:50:988665: ip4-udp-lookup

 UDP: src-port 40014 dst-port 4789

00:05:50:988671: vxlan4-input

 VXLAN decap from vxlan_tunnel0 vni 13 next 1 error 0

00:05:50:988679: l2-input

 l2-input: sw_if_index 5 dst ff:ff:ff:ff:ff:ff src fa:a7:97:c4:e5:81

00:05:50:988683: l2-learn

 l2-learn: sw_if_index 5 dst ff:ff:ff:ff:ff:ff src fa:a7:97:c4:e5:81 bd_index 1

00:05:50:988688: l2-flood

 l2-flood: sw_if_index 5 dst ff:ff:ff:ff:ff:ff src fa:a7:97:c4:e5:81 bd_index 1

00:05:50:988693: l2-output

 l2-output: sw_if_index 3 dst ff:ff:ff:ff:ff:ff src fa:a7:97:c4:e5:81 data 08 
06 00 01 08 00 06 04 00 01 fa a7

00:05:50:988696: VirtualFunctionEthernet3/10/2-output

 VirtualFunctionEthernet3/10/2

 ARP: fa:a7:97:c4:e5:81 -> ff:ff:ff:ff:ff:ff

 request, type ethernet/IP4, address size 6/4

 fa:a7:97:c4:e5:81/6.0.4.4 -> 00:00:00:00:00:00/7.0.0.3

00:05:50:988699: VirtualFunctionEthernet3/10/2-tx

 VirtualFunctionEthernet3/10/2 tx queue 0

 buffer 0x400346f: current data 50, length 42, free-list 5, clone-count 0, 
totlen-nifb 0, trace 0x0

 ARP: fa:a7:97:c4:e5:81 -> ff:ff:ff:ff:ff:ff

 request, type 

Re: [vpp-dev] l2 input/output feature arcs

2019-01-24 Thread Benoit Ganne (bganne) via Lists.Fd.Io
> Let's say you have an l2 acl which sits on the ip4 arc and your gbp node which
> sits on the "all" arc.
> What will be the packet flow, and how can you say which one runs first?

Good point. You cannot order nodes between arcs, so GBP could not say "runs
before acl" in that case.
This would basically be dictated by the order in which L2 input invokes the arcs,
e.g. whether "all" is invoked before or after "ip4".
If this is a real issue then the only solutions would be (2) (breaking backward
compat), (4) or (5) (keeping boilerplate).

ben


Re: [vpp-dev] l2 input/output feature arcs

2019-01-24 Thread Damjan Marion via Lists.Fd.Io


> On 24 Jan 2019, at 11:37, Benoit Ganne (bganne)  wrote:
> 
>>> 1) add a new feature arc flavor 'all' alongside nonip, ip4 and ip6.
>>> Nodes on this arc will get all packets regardless of the headers. It keeps
>>> backward compat but should add a small performance hit when L2 feature
>>> arcs are enabled (we have to visit 4 feature arcs instead of 3). This is my
>>> favorite.
> 
>> How this works if you have features enabled on both all and ip4 for
>> example?
> 
> The easy solution would be that the nodes receive the packet twice. Is there 
> any issue with that?

Let's say you have an l2 acl which sits on the ip4 arc and your gbp node which sits
on the "all" arc.
What will be the packet flow, and how can you say which one runs first?

-- 
Damjan



Re: [vpp-dev] l2 input/output feature arcs

2019-01-24 Thread Benoit Ganne (bganne) via Lists.Fd.Io
>> 1) add a new feature arc flavor 'all' alongside nonip, ip4 and ip6.
>> Nodes on this arc will get all packets regardless of the headers. It keeps
>> backward compat but should add a small performance hit when L2 feature
>> arcs are enabled (we have to visit 4 feature arcs instead of 3). This is my
>> favorite.

> How this works if you have features enabled on both all and ip4 for
> example?

The easy solution would be that the nodes receive the packet twice. Is there 
any issue with that?

>> 2) remove feature arcs flavors and just pass all packets to feature
>> nodes. It is the responsibility of the nodes to check the packet type.

> I would say 2, but that's just my 2 cents...

My main concern with that is we break backward compatibility, especially for 
out-of-tree nodes, so I'd advocate against it.

Best
Ben


Re: [vpp-dev] l2 input/output feature arcs

2019-01-24 Thread Damjan Marion via Lists.Fd.Io


> On 24 Jan 2019, at 10:39, Benoit Ganne (bganne) via Lists.Fd.Io 
>  wrote:
> 
> Hi all,
> 
> While refactoring the GBP plugin to use feature arcs instead of hardcoded L2 
> feature bits, I had to basically duplicate my feature arc nodes 3x (and 
> disable/enable them 3x etc.) because the L2 feature arcs are divided in 3 
> flavors: nonip (no IP ethertype), ip4 (IPv4 ethertype) and ip6 (IPv6 
> ethertype). It works but I'd prefer to hide this complexity from the plugins.
> I can see several possibilities:
> 1) add a new feature arc flavor 'all' alongside nonip, ip4 and ip6. Nodes on 
> this arc will get all packets regardless of the headers. It keeps backward
> compat but should add a small performance hit when L2 feature arcs are 
> enabled (we have to visit 4 feature arcs instead of 3). This is my favorite.

How does this work if you have features enabled on both all and ip4, for example?

> 2) remove feature arcs flavors and just pass all packets to feature nodes. It 
> is the responsibility of the nodes to check the packet type. It simplifies 
> the L2 feature arc code but breaks backward compat. It could also be slightly 
> less efficient as the ethertype test will happen later and must be 
> duplicated in all feature nodes.
> 3) add a new feature arc using a new feature bit. It does not modify the 
> current path at all, so perf & compat is unchanged but adds some complexity 
> in the L2 path.
> 4) keep the 3 flavors as they are but add helpers to register nodes to the 3
> arcs - basically move helpers from GBP plugin to vnet/l2. Basically same 
> up/downside as (3)
> 5) keep it as is (boilerplate in GBP)
> 
> My favorite would be (1) but I'd like to hear from more experienced VPP devs.


I would say 2, but that's just my 2 cents...

-- 
Damjan



[vpp-dev] l2 input/output feature arcs

2019-01-24 Thread Benoit Ganne (bganne) via Lists.Fd.Io
Hi all,

While refactoring the GBP plugin to use feature arcs instead of hardcoded L2 
feature bits, I had to basically duplicate my feature arc nodes 3x (and 
disable/enable them 3x etc.) because the L2 feature arcs are divided into 3
flavors: nonip (no IP ethertype), ip4 (IPv4 ethertype) and ip6 (IPv6 
ethertype). It works but I'd prefer to hide this complexity from the plugins.
I can see several possibilities:
 1) add a new feature arc flavor 'all' alongside nonip, ip4 and ip6. Nodes on 
this arc will get all packets regardless of the headers. It keeps backward
compat but should add a small performance hit when L2 feature arcs are enabled 
(we have to visit 4 feature arcs instead of 3). This is my favorite.
 2) remove the feature arc flavors and just pass all packets to feature nodes. It
is the responsibility of the nodes to check the packet type. It simplifies the 
L2 feature arc code but breaks backward compat. It could also be slightly less 
efficient as the ethertype test will happen later and must be duplicated in
all feature nodes.
 3) add a new feature arc using a new feature bit. It does not modify the 
current path at all, so perf & compat is unchanged but adds some complexity in 
the L2 path.
 4) keep the 3 flavors as they are but add helpers to register nodes to the 3
arcs - basically move helpers from GBP plugin to vnet/l2. Basically same 
up/downside as (3)
 5) keep it as is (boilerplate in GBP)

My favorite would be (1) but I'd like to hear from more experienced VPP devs.

Best,
Ben


[vpp-dev] VPP 19.01 RC2 milestone is complete

2019-01-24 Thread Andrew Yourtchenko
Hi all,

As per the schedule, yesterday I created the v19.01-rc2 tag on
stable/1901 and verified that the 19.01-rc2 build artifacts have been
uploaded to nexus.fd.io. VPP 19.01 Release Milestone RC2 is complete!
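
For anyone who wants to try the RC, the tag can be checked out directly, e.g.:

$ git clone https://gerrit.fd.io/r/vpp
$ cd vpp && git checkout v19.01-rc2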

As a reminder, the VPP 19.01 Release is in two weeks on Wednesday
January 30, 2019.

https://wiki.fd.io/view/Projects/vpp/Release_Plans/Release_Plan_19.01#Release_Milestones

The CSIT team has pulled the CSIT 19.01 release branch (rls1901) and
will be kicking off a dry-run of formal tests later today.  Once the
dry-runs are complete, the official release testing will begin which
will generate all of the test and performance data to be released in
the CSIT 19.01 Release Report on February 13 2019:

https://wiki.fd.io/view/CSIT/csit1901_plan#Release_Milestones

It is important that only fixes for critical issues (as determined by the
VPP committers), ideally limited to bugs found by CSIT testing, go into the
stable branch.

Per the standard process, all bug fixes to stable branches should
follow the best practices below (a sketch of the flow follows the list):

 - All bug fixes must be double-committed to the release throttle as
   well as to the master branch
 - Commit first to the release throttle, then "git cherry-pick" into master
 - Manual merges may be required, depending on the degree of
   divergence between throttle and master
 - All bug fixes need to have a Jira ticket
 - Please put Jira IDs into the commit messages
 - Please use the same Jira ID for both the stable branch and master
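
As an illustration, the double-commit flow against Gerrit might look roughly like
this (the Jira ID VPP-1234 and the fix sha are placeholders):

$ git checkout stable/1901                  # fix the bug on the release throttle first
$ git commit -s                             # commit message carries the Jira ID, e.g. VPP-1234
$ git push origin HEAD:refs/for/stable/1901
$ git checkout master                       # then double-commit the same fix to master
$ git cherry-pick -x <fix-sha>              # manual merge may be needed if the branches diverged
$ git push origin HEAD:refs/for/master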

--a