[vpp-dev] Performance of handoff #vpp #handoff

2020-01-21 Thread chewjob
Yes, agree with you.

Here is a thought about the handoff mechanism in VPP. If you look into the DPDK 
crypto scheduler, you will find that it relies heavily on DPDK rings, both for 
delivering buffers between CPU cores and for packet reordering. That got me 
thinking: why can't we use a ring for handoff?

First, as you know, the existing handoff is somewhat limited: the queue size 
is 32 by default, which is quite small, and each queue item is a vector with up 
to 256 buffer indices, yet each vector may carry only a few buffers when the 
system load is low. As far as I can see this is not efficient, and the system 
might drop packets because the queue fills up.

Second, I suspect the technique used in vlib_get_frame_queue_elt might be slower 
or less efficient than the compare-and-swap used by the DPDK ring.

Moreover, this two-dimensional data structure also adds complexity when it 
comes to coding. E.g., handoff-dispatch needs to consolidate buffers into a 
vector of size 128.

In general, I believe a ring-like mechanism would probably make handoff simpler. 
I understand that a ring requires compare-and-swap instructions, which certainly 
introduce a performance penalty, but on the other hand handoff itself always 
introduces massive data cache misses, which are even worse than the 
compare-and-swap cost. Even with that penalty, handoff is still worthwhile in 
some cases.
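
To make the idea concrete, here is a minimal sketch of what a ring-based handoff 
path could look like, using only the public DPDK ring API. The function names 
(handoff_ring_init, handoff_enqueue, handoff_dequeue) and the ring depth are 
illustrative, not existing VPP symbols, and the wiring into vlib nodes is left out:

/* Illustrative only: hand off a burst of buffer pointers between cores via a
 * DPDK ring instead of a vlib frame queue. */
#include <rte_ring.h>
#include <rte_lcore.h>
#include <rte_errno.h>

#define HANDOFF_RING_SIZE 1024  /* power of two, much deeper than 32 */

static struct rte_ring *handoff_ring;

int
handoff_ring_init (void)
{
  /* multi-producer (any worker may enqueue), single-consumer (one target core) */
  handoff_ring = rte_ring_create ("handoff", HANDOFF_RING_SIZE,
                                  rte_socket_id (), RING_F_SC_DEQ);
  return handoff_ring ? 0 : -rte_errno;
}

/* producer side: returns how many buffers were enqueued; the caller retries
 * or drops the remainder */
unsigned
handoff_enqueue (void **bufs, unsigned n)
{
  return rte_ring_enqueue_burst (handoff_ring, bufs, n, NULL);
}

/* consumer side: polled from the target worker's input node */
unsigned
handoff_dequeue (void **bufs, unsigned max)
{
  return rte_ring_dequeue_burst (handoff_ring, bufs, max, NULL);
}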

I'd appreciate it if you could share your opinion.

Regards,

Kingwel

---
Hi vpp-dev,

Is there a plan to improve the performance of VPP's handoff?
It's very useful to hand off packets of the same session to the same thread so 
they can be processed lock-free.

Regards
chew


Re: [vpp-dev] #vpp-hoststack - Issue with UDP receiver application using VCL library

2020-01-21 Thread Florin Coras
Hi Raj, 

Inline.

> On Jan 21, 2020, at 3:41 PM, Raj Kumar  wrote:
> 
> Hi Florin,
> There is no drop on the interfaces. It is 100G card. 
> In UDP tx application, I am using 1460 bytes of buffer to send on select(). I 
> am getting 5 Gbps throughput  ,but if I start one more application then total 
> throughput goes down to 4 Gbps as both the sessions are on the same thread.   
> I increased the tx buffer to 8192 bytes and then I can get 11 Gbps throughput 
>  but again if I start one more application the throughput goes down to 10 
> Gbps.

FC: I assume you’re using vppcom_session_write to write to the session. How 
large is “len” typically? See lower on why that matters.
 
> 
> I found one issue in the code ( You must be aware of that) , the UDP send MSS 
> is hard-coded to 1460 ( /vpp/src/vnet/udp/udp.c file). So, the large packets  
> are getting fragmented. 
> udp_send_mss (transport_connection_t * t)
> {
>   /* TODO figure out MTU of output interface */
>   return 1460;
> }

FC: That's a typical MSS, and actually what TCP uses as well. Given the NICs, 
they should be fine sending a decent number of Mpps without the need for jumbo 
IP datagrams.
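
For what it's worth, the TODO in the snippet above presumably amounts to deriving 
the MSS from the egress MTU instead of hard-coding it. A rough sketch of that 
calculation, assuming plain IPv4 and UDP headers with no options; 
udp_send_mss_from_mtu is an illustrative name, not an existing VPP function:

#include <stdint.h>

#define IP4_HEADER_BYTES 20
#define UDP_HEADER_BYTES 8

/* largest UDP payload that fits in one IP datagram without fragmentation */
static uint16_t
udp_send_mss_from_mtu (uint16_t egress_mtu)
{
  return egress_mtu - IP4_HEADER_BYTES - UDP_HEADER_BYTES;
}

With a 1500-byte MTU this gives 1472; with a 9000-byte jumbo MTU, 8972.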

> if I change the MSS to 8192 then I am getting 17 Mbps throughput. But , if i 
> start one more application then throughput is going down to 13 Mbps. 

> 
> It looks like the 17 Mbps is per core limit and since all the sessions are 
> pined to the same thread we can not get more throughput.  Here, per core 
> throughput look good to me. Please let me know there is any way to use 
> multiple threads for UDP tx applications. 
> 
> In your previous email you mentioned that we can use connected udp socket in 
> the UDP receiver. Can we do something similar for UDP tx ?

FC: I think it may work fine if vpp has main + 1 worker. I have a draft patch 
here [1] that seems to work with multiple workers but it’s not heavily tested. 

Out of curiosity, I ran a vcl_test_client/server test with 1 worker and 
XL710s; I'm seeing this:

CLIENT RESULTS: Streamed 65536017791 bytes
  in 14.392678 seconds (36.427420 Gbps half-duplex)!

It should be noted that, because of how datagrams are handled in the session 
layer, throughput is sensitive to write sizes. I ran the client like:
~/vcl_client -p udpc 6.0.1.2 1234 -U -N 100 -T 65536

Or in English: a unidirectional test, a 64kB tx buffer, and 1M writes of that 
buffer. My vcl config was such that tx fifos were 4MB and rx fifos 2MB. The 
sender had few tx packet drops (1657) and the receiver few rx packet drops 
(801). If you plan to use it, make sure ARP entries are resolved first (e.g., 
use ping), otherwise the first packet is lost.
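
For reference, a vcl.conf along those lines might look like the sketch below; 
the option names are the standard VCL fifo-size knobs and the sizes (in bytes) 
are just the values mentioned above:

vcl {
  rx-fifo-size 2000000
  tx-fifo-size 4000000
}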

Throughput drops to ~15Gbps with 8kB writes. You should probably also test udp 
with bigger writes.

[1] https://gerrit.fd.io/r/c/vpp/+/24462
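
To illustrate the write-size point in code, here is a rough sketch that pushes a 
fixed amount of data through an already-connected VCL session with a configurable 
chunk size. It assumes <vcl/vppcom.h> from the VPP tree; the helper name blast 
and the minimal error handling are mine, not part of the API:

#include <vcl/vppcom.h>
#include <string.h>

/* send `total` bytes over session `sh` in writes of `chunk` bytes */
static long
blast (uint32_t sh, size_t chunk, long total)
{
  char buf[65536];
  long sent = 0;

  memset (buf, 0xab, sizeof (buf));
  if (chunk > sizeof (buf))
    chunk = sizeof (buf);

  while (sent < total)
    {
      /* returns bytes written or a negative VPPCOM_* error; bigger chunks
       * mean fewer session-layer enqueues per byte moved */
      int rv = vppcom_session_write (sh, buf, chunk);
      if (rv > 0)
        sent += rv;
      else if (rv == VPPCOM_EAGAIN)
        continue;               /* tx fifo full: retry */
      else
        return rv;              /* hard error */
    }
  return sent;
}

Calling it with chunk = 65536 vs chunk = 8192 is roughly the comparison above.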

> 
> From the hardware stats , it seems that UDP tx checksum offload is not 
> enabled/active  which could impact the performance. I think, udp tx checksum 
> should be enabled by default if it is not disabled using parameter  
> "no-tx-checksum-offload".

FC: Performance might be affected by the limited number of offloads available. 
Here’s what I see on my XL710s:

rx offload active: ipv4-cksum jumbo-frame scatter
tx offload active: udp-cksum tcp-cksum multi-segs
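
For reference, that knob lives in the dpdk stanza of startup.conf. A minimal 
sketch, with an illustrative PCI address; tx checksum offload stays enabled 
unless the flag is present:

dpdk {
  dev 0000:12:00.0
  # uncomment to disable tx checksum offload
  # no-tx-checksum-offload
}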

> 
> Ethernet address b8:83:03:79:af:8c
>   Mellanox ConnectX-4 Family
> carrier up full duplex mtu 9206
> flags: admin-up pmd maybe-multiseg subif rx-ip4-cksum
> rx: queues 5 (max 65535), desc 1024 (min 0 max 65535 align 1)

FC: Are you running with 5 vpp workers? 

Regards,
Florin

> tx: queues 6 (max 65535), desc 1024 (min 0 max 65535 align 1)
> pci: device 15b3:1017 subsystem 1590:0246 address :12:00.00 numa 0
> max rx packet len: 65536
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum vlan-filter
>jumbo-frame scatter timestamp keep-crc
> rx offload active: ipv4-cksum jumbo-frame scatter
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
>outer-ipv4-cksum vxlan-tnl-tso gre-tnl-tso multi-segs
>udp-tnl-tso ip-tnl-tso
> tx offload active: multi-segs
> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 ipv6-tcp-ex
>ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
>ipv6-ex ipv6
> rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 ipv6-tcp-ex
>ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
>ipv6-ex ipv6
> tx burst function: (nil)
> rx burst function: mlx5_rx_burst
> 
> thanks,
> -Raj
> 
> On Mon, Jan 20, 2020 at 7:55 PM Florin Coras  > wrote:
> Hi Raj, 
> 
> Good to see progress. Check with “show int” the tx counters on the sender and 
> rx counters on the receiver as 

Re: [vpp-dev] routing configuration other than default

2020-01-21 Thread Neale Ranns via Lists.Fd.Io
Hi Sothy,

If you want ping to use a non-default table to look up the address, you have to 
specify the table:

vpp# ping 172.30.1.1 table-id 1

/neale
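
To see what table 1 will actually resolve for that destination, the FIB can also 
be inspected directly, e.g. (output omitted):

vpp# show ip fib table 1 172.30.1.1/32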


From:  on behalf of sothy 
Date: Wednesday 22 January 2020 at 08:43
To: "Balaji Venkatraman (balajiv)" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] routing configuration other than default

Hello Balaji

On Tue, Jan 21, 2020 at 8:31 PM Balaji Venkatraman (balajiv) 
mailto:bala...@cisco.com>> wrote:
Hi Sothy,

I think you need to have a default route defined in table 1.

ip route add 0.0.0.0/0 table 1 via 172.30.1.2 host-vpp1eth1

I guess you have a typo in the command: host-vpp1eth1 => host-vpp1eth0
vpp# ip route add 0.0.0.0/0 table 1 via 172.30.1.1 host-vpp1eth1
vpp#
vpp#
vpp#
vpp# ping 172.30.1.1
Failed: no egress interface
Failed: no egress interface
Failed: no egress interface
Failed: no egress interface

Thanks
Sothy

could u try that?
--
Regards,
Balaji.


From: mailto:vpp-dev@lists.fd.io>> on behalf of sothy 
mailto:sothy@gmail.com>>
Date: Tuesday, January 21, 2020 at 9:47 AM
To: "vpp-dev@lists.fd.io" 
mailto:vpp-dev@lists.fd.io>>
Subject: [vpp-dev] routing configuration other than default

Hi,
I'm following the UPF-VPP development in 
https://github.com/travelping/vpp/tree/feature/1908/upf
VPP version is
 show version
vpp v19.08.1-393~g25cea519e-dirty built by root on buildkitsandbox at Tue Dec  
3 14:06:31 UTC 2019

I use the following interfaces and routing tables (init.conf). I don't use 
DPDK; I use veth interfaces.

ip table add 1
ip table add 2
ip6 table add 1
ip6 table add 2

create host-interface name vpp1eth0
set interface mac address host-vpp1eth0 00:0c:29:46:1f:53
set interface ip table host-vpp1eth0 1
set interface ip6 table host-vpp1eth0 1
set interface ip address host-vpp1eth0 172.30.1.2/24
set interface state host-vpp1eth0 up

create host-interface name vpp1eth1
set interface mac address host-vpp1eth1 00:50:56:86:ed:f9
set interface ip table host-vpp1eth1 2
set interface ip6 table host-vpp1eth1 2
set interface ip address host-vpp1eth1 172.31.1.2/24
set interface state host-vpp1eth1 up

create host-interface name vpp1eth2
set interface mac address host-vpp1eth2 02:fe:f5:6f:45:72
set int ip address host-vpp1eth2 172.32.1.2/24
set int state host-vpp1eth2 up

ip route add 0.0.0.0/0 table 2 via 172.31.1.2 host-vpp1eth1

upf pfcp endpoint ip 172.32.1.2 vrf 0

upf nwi name cp vrf 0
upf nwi name internet vrf 1
upf nwi name sgi vrf 2

upf gtpu endpoint ip 172.32.1.2 nwi cp teid 0x8000/2
upf gtpu endpoint ip 172.30.1.2 nwi internet teid 0x8000/2
++
When I ping from vpp,
vpp#ping 172.30.1.1
Failed: no egress interface
Failed: no egress interface
Failed: no egress interface
Failed: no egress interface
Failed: no egress interface

Statistics: 0 sent, 0 received, 0% packet loss
*
When I ping from vpp for 172.32.1.1
vpp# ping 172.32.1.1
116 bytes from 172.32.1.1: icmp_seq=1 ttl=64 time=.1633 ms
116 bytes from 172.32.1.1: icmp_seq=2 ttl=64 time=3.2511 ms
Aborted due to a keypress.

Statistics: 2 sent, 2 received, 0% packet loss
**
Based on the above test, I feel I missed something in table 1. Default routing 
table configuration is working. I wish to know what I missed in table 1

Thanks
Sothy

.


Re: [vpp-dev] #vpp-hoststack - Issue with UDP receiver application using VCL library

2020-01-21 Thread Raj Kumar
Correction:
Please read 17 Mbps as 17 Gbps and 13 Mbps as 13 Gbps in my previous mail.

thanks,
-Raj

On Tue, Jan 21, 2020 at 6:41 PM Raj Kumar  wrote:

> Hi Florin,
> There is no drop on the interfaces. It is 100G card.
> In UDP tx application, I am using 1460 bytes of buffer to send on
> select(). I am getting 5 Gbps throughput  ,but if I start one more
> application then total throughput goes down to 4 Gbps as both the sessions
> are on the same thread.
> I increased the tx buffer to 8192 bytes and then I can get 11 Gbps
> throughput  but again if I start one more application the throughput goes
> down to 10 Gbps.
>
> I found one issue in the code ( You must be aware of that) , the UDP send
> MSS is hard-coded to 1460 ( /vpp/src/vnet/udp/udp.c file). So, the large
> packets  are getting fragmented.
> udp_send_mss (transport_connection_t * t)
> {
>   /* TODO figure out MTU of output interface */
>   return 1460;
> }
> if I change the MSS to 8192 then I am getting 17 Mbps throughput. But , if
> i start one more application then throughput is going down to 13 Mbps.
>
> It looks like the 17 Mbps is per core limit and since all the sessions are
> pined to the same thread we can not get more throughput.  Here, per core
> throughput look good to me. Please let me know there is any way to use
> multiple threads for UDP tx applications.
>
> In your previous email you mentioned that we can use connected udp socket
> in the UDP receiver. Can we do something similar for UDP tx ?
>
> From the hardware stats , it seems that UDP tx checksum offload is not
> enabled/active  which could impact the performance. I think, udp tx
> checksum should be enabled by default if it is not disabled using
> parameter  "no-tx-checksum-offload".
>
> Ethernet address b8:83:03:79:af:8c
>   Mellanox ConnectX-4 Family
> carrier up full duplex mtu 9206
> flags: admin-up pmd maybe-multiseg subif rx-ip4-cksum
> rx: queues 5 (max 65535), desc 1024 (min 0 max 65535 align 1)
> tx: queues 6 (max 65535), desc 1024 (min 0 max 65535 align 1)
> pci: device 15b3:1017 subsystem 1590:0246 address :12:00.00 numa 0
> max rx packet len: 65536
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum
> vlan-filter
>jumbo-frame scatter timestamp keep-crc
> rx offload active: ipv4-cksum jumbo-frame scatter
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
>outer-ipv4-cksum vxlan-tnl-tso gre-tnl-tso
> multi-segs
>udp-tnl-tso ip-tnl-tso
> tx offload active: multi-segs
> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4
> ipv6-tcp-ex
>ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
>ipv6-ex ipv6
> rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4
> ipv6-tcp-ex
>ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
>ipv6-ex ipv6
> tx burst function: (nil)
> rx burst function: mlx5_rx_burst
>
> thanks,
> -Raj
>
> On Mon, Jan 20, 2020 at 7:55 PM Florin Coras 
> wrote:
>
>> Hi Raj,
>>
>> Good to see progress. Check with “show int” the tx counters on the sender
>> and rx counters on the receiver as the interfaces might be dropping
>> traffic. One sender should be able to do more than 5Gbps.
>>
>> How big are the writes to the tx fifo? Make sure the tx buffer is some
>> tens of kB.
>>
>> As for the issue with the number of workers, you’ll have to switch to
>> udpc (connected udp), to ensure you have a separate connection for each
>> ‘flow’, and to use accept in combination with epoll to accept the sessions
>> udpc creates.
>>
>> Note that udpc currently does not work correctly with vcl and multiple
>> vpp workers if vcl is the sender (not the receiver) and traffic is
>> bidirectional. The sessions are all created on the first thread and once
>> return traffic is received, they’re migrated to the thread selected by RSS
>> hashing. VCL is not notified when that happens and it runs out of sync. You
>> might not be affected by this, as you’re not receiving any return traffic,
>> but because of that all sessions may end up stuck on the first thread.
>>
>> For udp transport, the listener is connection-less and bound to the main
>> thread. As a result, all incoming packets, even if they pertain to multiple
>> flows, are written to the listener’s buffer/fifo.
>>
>> Regards,
>> Florin
>>
>> On Jan 20, 2020, at 3:50 PM, Raj Kumar  wrote:
>>
>> Hi Florin,
>> I changed my application as you suggested. Now, I am able to achieve 5
>> Gbps with a single UDP stream.  Overall, I can get ~20Gbps with multiple
>> host application . Also, the TCP throughput  is improved to ~28Gbps after
>> tuning as mentioned in  [1].
>> On the similar topic; the UDP tx throughput is throttled to 5Gbps. Even
>> if I run the 

Re: [vpp-dev] #vpp-hoststack - Issue with UDP receiver application using VCL library

2020-01-21 Thread Raj Kumar
Hi Florin,
There is no drop on the interfaces. It is a 100G card.
In the UDP tx application, I am using a 1460-byte buffer to send on select().
I am getting 5 Gbps throughput, but if I start one more application then the
total throughput goes down to 4 Gbps, as both sessions are on the same thread.
I increased the tx buffer to 8192 bytes and then I can get 11 Gbps throughput,
but again, if I start one more application the throughput goes down to 10 Gbps.

I found one issue in the code (you must be aware of it): the UDP send MSS is
hard-coded to 1460 (in /vpp/src/vnet/udp/udp.c), so large packets are getting
fragmented.
udp_send_mss (transport_connection_t * t)
{
  /* TODO figure out MTU of output interface */
  return 1460;
}
If I change the MSS to 8192 then I get 17 Mbps throughput, but if I start one
more application then the throughput goes down to 13 Mbps.

It looks like the 17 Mbps is a per-core limit, and since all the sessions are
pinned to the same thread we cannot get more throughput. The per-core
throughput looks good to me. Please let me know if there is any way to use
multiple threads for UDP tx applications.

In your previous email you mentioned that we can use a connected udp socket
in the UDP receiver. Can we do something similar for UDP tx?

From the hardware stats, it seems that UDP tx checksum offload is not
enabled/active, which could impact performance. I think udp tx checksum
offload should be enabled by default unless it is disabled with the
"no-tx-checksum-offload" parameter.

Ethernet address b8:83:03:79:af:8c
  Mellanox ConnectX-4 Family
carrier up full duplex mtu 9206
flags: admin-up pmd maybe-multiseg subif rx-ip4-cksum
rx: queues 5 (max 65535), desc 1024 (min 0 max 65535 align 1)
tx: queues 6 (max 65535), desc 1024 (min 0 max 65535 align 1)
pci: device 15b3:1017 subsystem 1590:0246 address :12:00.00 numa 0
max rx packet len: 65536
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum vlan-filter
   jumbo-frame scatter timestamp keep-crc
rx offload active: ipv4-cksum jumbo-frame scatter
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
   outer-ipv4-cksum vxlan-tnl-tso gre-tnl-tso multi-segs
   udp-tnl-tso ip-tnl-tso
tx offload active: multi-segs
rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4
ipv6-tcp-ex
   ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
   ipv6-ex ipv6
rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4
ipv6-tcp-ex
   ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
   ipv6-ex ipv6
tx burst function: (nil)
rx burst function: mlx5_rx_burst

thanks,
-Raj

On Mon, Jan 20, 2020 at 7:55 PM Florin Coras  wrote:

> Hi Raj,
>
> Good to see progress. Check with “show int” the tx counters on the sender
> and rx counters on the receiver as the interfaces might be dropping
> traffic. One sender should be able to do more than 5Gbps.
>
> How big are the writes to the tx fifo? Make sure the tx buffer is some
> tens of kB.
>
> As for the issue with the number of workers, you’ll have to switch to udpc
> (connected udp), to ensure you have a separate connection for each ‘flow’,
> and to use accept in combination with epoll to accept the sessions udpc
> creates.
>
> Note that udpc currently does not work correctly with vcl and multiple vpp
> workers if vcl is the sender (not the receiver) and traffic is
> bidirectional. The sessions are all created on the first thread and once
> return traffic is received, they’re migrated to the thread selected by RSS
> hashing. VCL is not notified when that happens and it runs out of sync. You
> might not be affected by this, as you’re not receiving any return traffic,
> but because of that all sessions may end up stuck on the first thread.
>
> For udp transport, the listener is connection-less and bound to the main
> thread. As a result, all incoming packets, even if they pertain to multiple
> flows, are written to the listener’s buffer/fifo.
>
> Regards,
> Florin
>
> On Jan 20, 2020, at 3:50 PM, Raj Kumar  wrote:
>
> Hi Florin,
> I changed my application as you suggested. Now, I am able to achieve 5
> Gbps with a single UDP stream.  Overall, I can get ~20Gbps with multiple
> host application . Also, the TCP throughput  is improved to ~28Gbps after
> tuning as mentioned in  [1].
> On the similar topic; the UDP tx throughput is throttled to 5Gbps. Even if
> I run the multiple host applications the overall throughput is 5Gbps. I
> also tried by configuring multiple worker threads . But the problem is that
> all the application sessions are assigned to the same worker thread. Is
> there any way to assign each session  to a different worker thread?
>
> vpp# sh session verbose 2
> Thread 0: no 

Re: [vpp-dev] routing configuration other than default

2020-01-21 Thread sothy
Hello Balaji

On Tue, Jan 21, 2020 at 8:31 PM Balaji Venkatraman (balajiv) <
bala...@cisco.com> wrote:

> Hi Sothy,
>
>
>
> I think you need to have a default route defined in table 1.
>
>
>
> ip route add 0.0.0.0/0 table 1 via 172.30.1.2 *host-vpp1eth1*
>
>
>
I guess you have a typo in the command: host-vpp1eth1 => host-vpp1eth0
vpp# ip route add 0.0.0.0/0 table 1 via 172.30.1.1 host-vpp1eth1
vpp#
vpp#
vpp#
vpp# ping 172.30.1.1
Failed: no egress interface
Failed: no egress interface
Failed: no egress interface
Failed: no egress interface

Thanks
Sothy


> could u try that?
>
> --
>
> Regards,
>
> Balaji.
>
>
>
>
>
> *From: * on behalf of sothy 
> *Date: *Tuesday, January 21, 2020 at 9:47 AM
> *To: *"vpp-dev@lists.fd.io" 
> *Subject: *[vpp-dev] routing configuration other than default
>
>
>
> Hi,
>
> I'm following the UPF-VPP development in
> https://github.com/travelping/vpp/tree/feature/1908/upf
>
> VPP version is
>
>  show version
> vpp v19.08.1-393~g25cea519e-dirty built by root on buildkitsandbox at Tue
> Dec  3 14:06:31 UTC 2019
>
>
>
> I use the following the interfaces and routing tables.(init.conf). I dont
> use DPDK , but I used veth
>
> 
>
> ip table add 1
> ip table add 2
> ip6 table add 1
> ip6 table add 2
>
> create host-interface name vpp1eth0
> set interface mac address host-vpp1eth0 00:0c:29:46:1f:53
> set interface ip table host-vpp1eth0 1
> set interface ip6 table host-vpp1eth0 1
> set interface ip address host-vpp1eth0 172.30.1.2/24
> set interface state host-vpp1eth0 up
>
> create host-interface name vpp1eth1
> set interface mac address host-vpp1eth1 00:50:56:86:ed:f9
> set interface ip table host-vpp1eth1 2
> set interface ip6 table host-vpp1eth1 2
> set interface ip address host-vpp1eth1 172.31.1.2/24
> set interface state host-vpp1eth1 up
>
> create host-interface name vpp1eth2
> set interface mac address host-vpp1eth2 02:fe:f5:6f:45:72
> set int ip address host-vpp1eth2 172.32.1.2/24
> set int state host-vpp1eth2 up
>
> ip route add 0.0.0.0/0 table 2 via 172.31.1.2 host-vpp1eth1
>
> upf pfcp endpoint ip 172.32.1.2 vrf 0
>
> upf nwi name cp vrf 0
> upf nwi name internet vrf 1
> upf nwi name sgi vrf 2
>
> upf gtpu endpoint ip 172.32.1.2 nwi cp teid 0x8000/2
> upf gtpu endpoint ip 172.30.1.2 nwi internet teid 0x8000/2
>
> ++
>
> When I ping from vpp,
>
> vpp#ping 172.30.1.1
> Failed: no egress interface
> Failed: no egress interface
> Failed: no egress interface
> Failed: no egress interface
> Failed: no egress interface
>
> Statistics: 0 sent, 0 received, 0% packet loss
>
> *
>
> When I ping from vpp for 172.32.1.1
>
> vpp# ping 172.32.1.1
> 116 bytes from 172.32.1.1: icmp_seq=1 ttl=64 time=.1633 ms
> 116 bytes from 172.32.1.1: icmp_seq=2 ttl=64 time=3.2511 ms
> Aborted due to a keypress.
>
> Statistics: 2 sent, 2 received, 0% packet loss
>
> **
>
> Based on the above test, I feel I missed something in table 1. Default
> routing table configuration is working. I wish to know what I missed in
> table 1
>
>
>
> Thanks
>
> Sothy
>
>
>
> .
>


[vpp-dev] Reminder: VPP 20.01 RC2 is *tomorrow* 22 January

2020-01-21 Thread Andrew Yourtchenko
Hi All,

Just a kind reminder that we have the VPP 20.01 RC2 milestone tomorrow at 18:00 
UTC. After that we will accept only critical bug fixes on the stable branch in 
preparation for the release, which will happen a week later.

Thanks! :-)

--a (Your friendly 20.01 release manager)


Re: [vpp-dev] routing configuration other than default

2020-01-21 Thread Balaji Venkatraman via Lists.Fd.Io
Hi Sothy,

I think you need to have a default route defined in table 1.

ip route add 0.0.0.0/0 table 1 via 172.30.1.2 host-vpp1eth1

Could you try that?
--
Regards,
Balaji.


From:  on behalf of sothy 
Date: Tuesday, January 21, 2020 at 9:47 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] routing configuration other than default

Hi,
I'm following the UPF-VPP development in 
https://github.com/travelping/vpp/tree/feature/1908/upf
VPP version is
 show version
vpp v19.08.1-393~g25cea519e-dirty built by root on buildkitsandbox at Tue Dec  
3 14:06:31 UTC 2019

I use the following interfaces and routing tables (init.conf). I don't use 
DPDK; I use veth interfaces.

ip table add 1
ip table add 2
ip6 table add 1
ip6 table add 2

create host-interface name vpp1eth0
set interface mac address host-vpp1eth0 00:0c:29:46:1f:53
set interface ip table host-vpp1eth0 1
set interface ip6 table host-vpp1eth0 1
set interface ip address host-vpp1eth0 172.30.1.2/24
set interface state host-vpp1eth0 up

create host-interface name vpp1eth1
set interface mac address host-vpp1eth1 00:50:56:86:ed:f9
set interface ip table host-vpp1eth1 2
set interface ip6 table host-vpp1eth1 2
set interface ip address host-vpp1eth1 172.31.1.2/24
set interface state host-vpp1eth1 up

create host-interface name vpp1eth2
set interface mac address host-vpp1eth2 02:fe:f5:6f:45:72
set int ip address host-vpp1eth2 172.32.1.2/24
set int state host-vpp1eth2 up

ip route add 0.0.0.0/0 table 2 via 172.31.1.2 host-vpp1eth1

upf pfcp endpoint ip 172.32.1.2 vrf 0

upf nwi name cp vrf 0
upf nwi name internet vrf 1
upf nwi name sgi vrf 2

upf gtpu endpoint ip 172.32.1.2 nwi cp teid 0x8000/2
upf gtpu endpoint ip 172.30.1.2 nwi internet teid 0x8000/2
++
When I ping from vpp,
vpp#ping 172.30.1.1
Failed: no egress interface
Failed: no egress interface
Failed: no egress interface
Failed: no egress interface
Failed: no egress interface

Statistics: 0 sent, 0 received, 0% packet loss
*
When I ping from vpp for 172.32.1.1
vpp# ping 172.32.1.1
116 bytes from 172.32.1.1: icmp_seq=1 ttl=64 time=.1633 ms
116 bytes from 172.32.1.1: icmp_seq=2 ttl=64 time=3.2511 ms
Aborted due to a keypress.

Statistics: 2 sent, 2 received, 0% packet loss
**
Based on the above test, I feel I missed something in table 1. Default routing 
table configuration is working. I wish to know what I missed in table 1

Thanks
Sothy

.


[vpp-dev] routing configuration other than default

2020-01-21 Thread sothy
Hi,
I'm following the UPF-VPP development in
https://github.com/travelping/vpp/tree/feature/1908/upf
VPP version is
 show version
vpp v19.08.1-393~g25cea519e-dirty built by root on buildkitsandbox at Tue
Dec  3 14:06:31 UTC 2019

I use the following interfaces and routing tables (init.conf). I don't use
DPDK; I use veth interfaces.

ip table add 1
ip table add 2
ip6 table add 1
ip6 table add 2

create host-interface name vpp1eth0
set interface mac address host-vpp1eth0 00:0c:29:46:1f:53
set interface ip table host-vpp1eth0 1
set interface ip6 table host-vpp1eth0 1
set interface ip address host-vpp1eth0 172.30.1.2/24
set interface state host-vpp1eth0 up

create host-interface name vpp1eth1
set interface mac address host-vpp1eth1 00:50:56:86:ed:f9
set interface ip table host-vpp1eth1 2
set interface ip6 table host-vpp1eth1 2
set interface ip address host-vpp1eth1 172.31.1.2/24
set interface state host-vpp1eth1 up

create host-interface name vpp1eth2
set interface mac address host-vpp1eth2 02:fe:f5:6f:45:72
set int ip address host-vpp1eth2 172.32.1.2/24
set int state host-vpp1eth2 up

ip route add 0.0.0.0/0 table 2 via 172.31.1.2 host-vpp1eth1

upf pfcp endpoint ip 172.32.1.2 vrf 0

upf nwi name cp vrf 0
upf nwi name internet vrf 1
upf nwi name sgi vrf 2

upf gtpu endpoint ip 172.32.1.2 nwi cp teid 0x8000/2
upf gtpu endpoint ip 172.30.1.2 nwi internet teid 0x8000/2
++
When I ping from vpp,
vpp#ping 172.30.1.1
Failed: no egress interface
Failed: no egress interface
Failed: no egress interface
Failed: no egress interface
Failed: no egress interface

Statistics: 0 sent, 0 received, 0% packet loss
*
When I ping from vpp for 172.32.1.1
vpp# ping 172.32.1.1
116 bytes from 172.32.1.1: icmp_seq=1 ttl=64 time=.1633 ms
116 bytes from 172.32.1.1: icmp_seq=2 ttl=64 time=3.2511 ms
Aborted due to a keypress.

Statistics: 2 sent, 2 received, 0% packet loss
**
Based on the above test, I feel I missed something in table 1. Default
routing table configuration is working. I wish to know what I missed in
table 1

Thanks
Sothy

.


[vpp-dev] Coverity run FAILED as of 2020-01-21 14:00:23 UTC

2020-01-21 Thread Noreply Jenkins
Coverity run failed today.

Current number of outstanding issues is 3
Newly detected: 0
Eliminated: 4
More details can be found at https://scan.coverity.com/projects/fd-io-vpp/view_defects


Re: [vpp-dev] vpp crashes on deleting route 0.0.0.0/0 via interface #vpp

2020-01-21 Thread Segey Yelantsev
Hi Neale, Thank you - adding the interface to a specific table worked!

15.01.2020, 21:26, "Neale Ranns via Lists.Fd.Io" :
Hi Sergey,
It would work if your gre interface were also in the non-default table, i.e.
  set int ip table 10 gre0
/neale

From: Segey Yelantsev
Date: Thursday 16 January 2020 at 02:16
To: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] vpp crashes on deleting route 0.0.0.0/0 via interface #vpp

Hi!
2Neale: my scenario is taking MPLS traffic and routing it to a specific ip table based on the MPLS label. Then IP packets should be routed within that specific table, with some routes pointing to virtual devices, including a default. The backtrace and the reason for firing the assert are the same as in this gre scenario. And it works for table 0, but I wonder why it doesn't work for a non-zero ip table.

Unfortunately, Aleksander's patch did not help:
```
DBGvpp# ip table add 10
DBGvpp# create gre tunnel src 1.1.1.1 dst 2.2.2.2
gre0
DBGvpp# ip route add 0.0.0.0/0 table 10 via gre0
DBGvpp# sh ip fib table 10 0.0.0.0/0
ipv4-VRF:10, fib_index:1, flow hash:[src dst sport dport proto ] epoch:0 flags:none locks:[CLI:1, ]
0.0.0.0/0 fib:1 index:7 locks:3
  CLI refs:1 entry-flags:attached,import, src-flags:added,contributing,active,
    path-list:[16] locks:2 flags:shared, uPRF-list:12 len:1 itfs:[1, ]
      path:[16] pl-index:16 ip4 weight=1 pref=0 attached: gre0
  default-route refs:1 entry-flags:drop, src-flags:added,
    path-list:[11] locks:1 flags:drop, uPRF-list:7 len:0 itfs:[]
      path:[11] pl-index:11 ip4 weight=1 pref=0 special: cfg-flags:drop,
        [@0]: dpo-drop ip4
  forwarding: unicast-ip4-chain
    [@0]: dpo-load-balance: [proto:ip4 index:9 buckets:1 uRPF:12 to:[0:0]]
      [0] [@0]: dpo-drop ip4
DBGvpp# ip route del 0.0.0.0/0 table 10 via gre0
/home/elantsev/vpp/src/vnet/fib/fib_attached_export.c:367 (fib_attached_export_purge) assertion `NULL != fed' fails

Thread 1 "vpp_main" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
51  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1  0x75ac9801 in __GI_abort () at abort.c:79
#2  0xbe0b in os_panic () at /home/elantsev/vpp/src/vpp/vnet/main.c:355
#3  0x75eacde9 in debugger () at /home/elantsev/vpp/src/vppinfra/error.c:84
#4  0x75ead1b8 in _clib_error (how_to_die=2, function_name=0x0, line_number=0, fmt=0x77743bb8 "%s:%d (%s) assertion `%s' fails") at /home/elantsev/vpp/src/vppinfra/error.c:143
#5  0x774cbbe9 in fib_attached_export_purge (fib_entry=0x7fffb83886b0) at /home/elantsev/vpp/src/vnet/fib/fib_attached_export.c:367
#6  0x774919de in fib_entry_post_flag_update_actions (fib_entry=0x7fffb83886b0, old_flags=(FIB_ENTRY_FLAG_ATTACHED | FIB_ENTRY_FLAG_IMPORT)) at /home/elantsev/vpp/src/vnet/fib/fib_entry.c:674
#7  0x77491a3c in fib_entry_post_install_actions (fib_entry=0x7fffb83886b0, source=FIB_SOURCE_DEFAULT_ROUTE, old_flags=(FIB_ENTRY_FLAG_ATTACHED | FIB_ENTRY_FLAG_IMPORT)) at /home/elantsev/vpp/src/vnet/fib/fib_entry.c:709
#8  0x77491d78 in fib_entry_post_update_actions (fib_entry=0x7fffb83886b0, source=FIB_SOURCE_DEFAULT_ROUTE, old_flags=(FIB_ENTRY_FLAG_ATTACHED | FIB_ENTRY_FLAG_IMPORT)) at /home/elantsev/vpp/src/vnet/fib/fib_entry.c:804
#9  0x774923f9 in fib_entry_source_removed (fib_entry=0x7fffb83886b0, old_flags=(FIB_ENTRY_FLAG_ATTACHED | FIB_ENTRY_FLAG_IMPORT)) at /home/elantsev/vpp/src/vnet/fib/fib_entry.c:992
#10 0x774925e7 in fib_entry_path_remove (fib_entry_index=7, source=FIB_SOURCE_CLI, rpaths=0x7fffb83a34b0) at /home/elantsev/vpp/src/vnet/fib/fib_entry.c:1072
#11 0x7747980b in fib_table_entry_path_remove2 (fib_index=1, prefix=0x7fffb8382b00, source=FIB_SOURCE_CLI, rpaths=0x7fffb83a34b0) at /home/elantsev/vpp/src/vnet/fib/fib_table.c:680
#12 0x76fb870c in vnet_ip_route_cmd (vm=0x766b6680 , main_input=0x7fffb8382f00, cmd=0x7fffb50373b8) at /home/elantsev/vpp/src/vnet/ip/lookup.c:449
#13 0x763d402f in vlib_cli_dispatch_sub_commands (vm=0x766b6680 , cm=0x766b68b0 , input=0x7fffb8382f00, parent_command_index=431) at /home/elantsev/vpp/src/vlib/cli.c:568
#14 0x763d3ead in vlib_cli_dispatch_sub_commands (vm=0x766b6680 , cm=0x766b68b0 , input=0x7fffb8382f00, parent_command_index=0) at /home/elantsev/vpp/src/vlib/cli.c:528
#15 0x763d4434 in vlib_cli_input (vm=0x766b6680 , input=0x7fffb8382f00, function=0x7646dc89 , function_arg=0) at /home/elantsev/vpp/src/vlib/cli.c:667
#16 0x7647476d in unix_cli_process_input (cm=0x766b7020 , cli_file_index=0) at /home/elantsev/vpp/src/vlib/unix/cli.c:2572
#17 0x7647540e in unix_cli_process (vm=0x766b6680 , rt=0x7fffb8342000, f=0x0) at /home/elantsev/vpp/src/vlib/unix/cli.c:2688
#18 0x764161d4 in vlib_process_bootstrap (_a=140736272894320) at /home/elantsev/vpp/src/vlib/main.c:1475
#19