Re: [vpp-dev] [csit-dev] t4-virl3 moved from testing back into production - SSH timeouts on VIRL

2018-02-15 Thread Thomas F Herbert

Jan and Ed,

I recommended we go back into production when I saw the tests running 
consistently.


However, I did see SSH timeouts during my testing, but only when I had more 
than 3 simulations running simultaneously on VIRL3 with all the tests 
enabled.


I figured that was due to the unusual use case where all the 
simultaneous tests were running from all the simulations on the same 
server and guessed that was less likely to happen when the tests were 
getting distributed across all 3 servers.


--Tom


On 02/14/2018 10:47 AM, Jan Gelety wrote:


Hello Ed,

First occurrence of connection issues is in the logs below (including elapsed 
time):


Regards,

Jan

*From:*vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] *On Behalf Of 
*Ed Kern (ejk)

*Sent:* Tuesday, February 13, 2018 6:59 PM
*To:* vpp-dev@lists.fd.io
*Cc:* csit-...@lists.fd.io; vpp-dev@lists.fd.io
*Subject:* Re: [vpp-dev] [csit-dev] t4-virl3 moved from testing back 
into production - SSH timeouts on VIRL

*Importance:* High

OK, I'm looking (and for the time being I've flipped it back to testing).

I am not seeing issues on the server end. Don't get me wrong, I totally 
believe you that there is a problem, but I'm not seeing specific timeout 
issues related to virl3 in the logs below.

Could you schedule a WebEx, or send pointers to help me track this down?

thanks,

Ed



On Feb 13, 2018, at 6:23 AM, Jan Gelety -X (jgelety - PANTHEON
TECHNOLOGIES at Cisco) > wrote:

Hello Ed,

Unfortunately we are facing SSH timeouts quite often after
moving virl3 to production:

https://jenkins.fd.io/view/vpp/job/vpp-csit-verify-virl-master/9418/console

*00:40:09.239*10:46:26 [ ERROR ] Node 10.30.52.131 setup failed,
error:''

*00:40:09.239*10:47:11 Extracting tarball to /tmp/openvpp-testing
on 10.30.52.130

*00:40:09.239*10:47:12 Setup of node 10.30.52.130 done

*00:40:09.239*10:47:12 All nodes are ready

*00:40:09.239*10:47:37 Tests.Vpp.Func.Interfaces

*00:40:09.239*10:47:37



*00:40:09.239*10:47:37 Tests.Vpp.Func.Interfaces.Api-Crud-Tap-Func
:: *Tap Interface CRUD Tests*

*00:40:09.239*10:47:37



*00:40:09.239*10:47:37 TC01: Tap Interface Modify And Delete ::
[Top] TG-DUT1-TG.
| FAIL |

*00:40:09.239*10:47:37 Parent suite setup failed:

*00:40:09.239*10:47:37 NoValidConnectionsError: [Errno None]
Unable to connect to port 22 on  or 10.30.52.131

*00:40:09.239*10:47:37



https://jenkins.fd.io/view/vpp/job/vpp-csit-verify-virl-master/9420/console

*00:47:19.083*12:12:41 TC01: IPv4 Equal-cost multipath routing ::
[Top] TG=DUT [ WARN ] None

*00:47:19.083*12:12:41 None

*00:47:19.083*12:12:41 | FAIL |

*00:47:19.083*12:12:41 Setup failed:

*00:47:19.083*12:12:41 SSHTimeout: Timeout exception during
execution of command: pidof vpp

*00:47:19.083*12:12:41 Current contents of stdout buffer:

*00:47:19.083*12:12:41 Current contents of stderr buffer:

https://jenkins.fd.io/view/vpp/job/vpp-csit-verify-virl-master/9423/console

Here it failed during startup of the simulation:

*00:27:07.065*DEBUG: Node tg1 is of type tg and has mgmt IP
10.30.54.141

*00:27:07.065*DEBUG: Node sut1 is of type sut and has mgmt IP
10.30.54.139

*00:27:07.065*DEBUG: Node sut2 is of type sut and has mgmt IP
10.30.54.140

*00:27:07.065*DEBUG: Waiting for hosts to become reachable over SSH

*00:27:12.070*DEBUG: Attempt 1 out of 48, waiting for 2 hosts

...

*00:31:07.245*DEBUG: Attempt 48 out of 48, waiting for 2 hosts

*00:31:07.245*ERROR: Simulation started OK but 2 hosts never
mounted their NFS directory
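The retry loop in this log ("Attempt N out of 48", roughly every 5 seconds) can be sketched as below. This is an illustrative reconstruction, not the actual CSIT code; the function names and the TCP probe of port 22 are assumptions.

```python
import socket
import time

def ssh_port_open(host, port=22, timeout=2):
    """Probe: True if a TCP connection to the host's SSH port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def wait_for_hosts(hosts, check=ssh_port_open, attempts=48, delay=5):
    """Poll until every host answers, or the attempts run out.

    Returns the hosts that never became reachable (empty list on success).
    """
    pending = set(hosts)
    for attempt in range(1, attempts + 1):
        # Drop every host that now answers; keep polling the rest.
        pending = {h for h in pending if not check(h)}
        if not pending:
            return []
        print("Attempt %d out of %d, waiting for %d hosts"
              % (attempt, attempts, len(pending)))
        time.sleep(delay)
    return sorted(pending)
```

With 48 attempts about 5 seconds apart, this gives the roughly 4-minute window seen between 00:27:12 and 00:31:07 in the log above.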


https://jenkins.fd.io/job/csit-vpp-functional-1801-ubuntu1604-virl/211/console

*02:01:43.419*TC02: DUT with iACL MAC dst-addr drops matching pkts
:: [Top] TG-DUT1-DUT2-TG. [ WARN ]
Tests.Vpp.Func.L2Xc.Eth2P-Eth-L2Xcbase-Iaclbase-Func - TC02: DUT
with iACL MAC dst-addr drops matching pkts

*02:03:23.456*The VPP PIDs are not equal!

*02:03:23.456*Test Setup VPP PIDs: {'10.30.53.219': 23424,
'10.30.53.218': 5433}

*02:03:23.456*Test Teardown VPP PIDs: None

*02:03:23.456*Tests.Vpp.Func.L2Xc.Eth2P-Eth-L2Xcbase-Iaclbase-Func
- TC02: DUT with iACL MAC dst-addr drops matching pkts

*02:03:23.456*The VPP PIDs are not equal!

*02:03:23.456*Test Setup VPP PIDs: 
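The failing check in this log boils down to comparing a per-node {mgmt-ip: pid} snapshot of `pidof vpp` taken in test setup against one taken in teardown; a mismatch, or a None from a timed-out SSH command, means VPP restarted or the session died. A hypothetical sketch (the function name and dict shape are assumptions inferred from the log, not the actual CSIT code):

```python
def vpp_pids_equal(setup_pids, teardown_pids):
    """Compare VPP PID snapshots taken in setup and teardown.

    Returns (ok, message); both snapshots are dicts of {mgmt-ip: pid}.
    """
    if teardown_pids is None:
        # e.g. 'pidof vpp' timed out over SSH in teardown
        return False, "Test Teardown VPP PIDs: None"
    if setup_pids != teardown_pids:
        # A changed PID means VPP restarted (or crashed) mid-test.
        return False, "The VPP PIDs are not equal!"
    return True, "ok"
```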

Re: [vpp-dev] VPP vs XDP-eBPF performance numbers - for linux containers

2018-02-15 Thread Heqing
Avi:

You can get some of the way there with a virtio-user interface: on the 
backend it allows OVS-DPDK (or VPP, though not tested) to talk to the 
container guest over the virtio interface.

There was a proposal to merge that interface into VPP, but it did not get 
much traction in the memif context. The mail archive is moving to a new 
place, and a Google search has not yet helped me identify the thread.

The performance will be similar to OVS-DPDK for the container use case.

A few interesting links are provided here:
https://dl.acm.org/citation.cfm?id=3098583.3098586  (Paper) 
https://schd.ws/hosted_files/lc3china2017/22/Danny%20Zhou_High%20Performance%20Container%20Networking_v4.pdf
 
https://schd.ws/hosted_files/ossna2017/1e/VPP_K8S_GTPU_OSSNA.pdf 
http://dpdk.org/doc/guides/howto/virtio_user_for_container_networking.html  
(DPDK Doc)
https://github.com/lagopus/lagopus/blob/master/docs/how-to-use-virtio-user.md 
(Lagopus view)



-Original Message-
From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Avi Cohen 
(A)
Sent: Wednesday, February 14, 2018 1:08 AM

To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP vs XDP-eBPF performance numbers - for linux 
containers

Thank you Jerome.
I'm working with LXC, but this should apply to Docker as well. I can connect 
the container through a virtio-user port in VPP and a tap interface in the 
kernel, but we pay for a vhost kthread that copies data from kernel to user 
space and vice versa.
Another option is to connect with a veth pair - but the performance is 
further degraded.
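For reference, the veth option mentioned above is typically plumbed like this (interface names, namespace name and addresses are made up for the sketch; with LXC this is normally configured via the container config rather than by hand, and these commands require root):

```shell
# Create a namespace standing in for the container.
ip netns add demo-cont

# Create the veth pair and move one end into the container's namespace.
ip link add veth-host type veth peer name veth-cont
ip link set veth-cont netns demo-cont

# Address and bring up both ends.
ip addr add 192.0.2.1/24 dev veth-host
ip link set veth-host up
ip netns exec demo-cont ip addr add 192.0.2.2/24 dev veth-cont
ip netns exec demo-cont ip link set veth-cont up
```

Every packet then crosses the kernel stack twice (once per veth end), which is where the extra degradation relative to virtio-user comes from.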

Another issue: how does VPP interface with the sandbox?

Best Regards
Avi

> -Original Message-
> From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of 
> Jerome Tollet
> Sent: Tuesday, 13 February, 2018 11:27 PM
> To: vpp-dev@lists.fd.io
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] VPP vs XDP-eBPF performance numbers - for linux 
> containers
> 
> Hi Avi,
> Can you elaborate a bit on the kind of containers you'd like to run.
> Interfaces exposed to containers may be different if you are looking 
> to run regular endpoints vs cVNF.
> Jerome
> 
> Le 13/02/2018 15:04, « vpp-dev@lists.fd.io au nom de Avi Cohen (A) » 
>  a écrit :
> 
> Hello
> Are there 'numbers' for performance - VPP vs XDP-eBPF for 
> container networking?
> 
> Since DPDK and Linux containers are not compatible, in the sense 
> that container and host share the same kernel, packets received at 
> VPP-DPDK in user space and directed to a Linux container must go down 
> to the kernel and then to the container IP stack, while in XDP-eBPF 
> this packet can be forwarded to the container IP stack directly from 
> the kernel.
> 
> I heard that a vhost-user interface for containers is in a 
> work-in-progress stage.
> Can anyone assist with the performance numbers and the status of 
> this vhost-user for containers?
> 
> Best Regards
> Avi
> 
> 
> 
> 
> 
> 
> 








Re: [vpp-dev] gtpu ipv4 decap issue

2018-02-15 Thread Andreas Schultz
Just a note about GTP-U. The implementation uses GTP-U headers to create a
tunnel; it is in no way compatible with "real" 3GPP GTP nodes.
If you just need a tunnel, don't use GTP. There are plenty of other tunnel
protocols (e.g. GRE, VXLAN) that are more widely used.

If you want to play with GTP, you have to be aware that it makes assumptions
that are not in line with many of the requirements in the 3GPP
specifications, e.g. TEIDs are not bidirectional, ingress GTP-U should not
be filtered by source IP, GTP-U endpoints can't be multicast addresses,
Error Indications and End Markers are unsupported, ... just to name a few.
Getting it to talk to a real EPC GTP node would be really, really hard.

Andreas

Ole Troan  schrieb am Do., 15. Feb. 2018 um 12:58 Uhr:

> Instead of assigning the tunnel endpoint address to an interface, it is
> also possible to reserve an IP address to a particular tunnel function, by
> setting the next-index in the corresponding FIB entry/adjacency.
> We typically do this for the mesh-type tunnel solutions, where for some of
> them we don't even use interfaces to represent them. The downside is that you
> cannot run any other services on those IP addresses, and e.g. you will not
> respond to ping.
>
> Cheers,
> Ole
>

Re: [vpp-dev] gtpu ipv4 decap issue

2018-02-15 Thread Neale Ranns
Hi Jakub,

A quick refresher on IP Addressing ☺

In the world of tunnels we typically talk about the underlay and the overlay. 
The underlay will contain the addresses that form the tunnel’s source and 
destination address (the addresses one sees in the outer IP header) – i.e. the 
address in ‘create gtpu tunnel …’. The overlay contains the address configured 
on the tunnel interface, that is used for routing via the tunnel – i.e. the 
address in ‘set int ip addr gtpu_tunnel0 …’.
The tunnel’s source address and interface address should not be the same, if 
they were then if the tunnel were to go down (say a keep-alive mechanism 
failed) then the tunnel’s interface address is removed from the FIB and hence 
the tunnel’s source address is no longer reachable and hence it can never 
receive more packets and consequently never come back up.
Instead one chooses the tunnel’s source address to be an address configured on 
another interface in the underlay. This could be a physical interface, usually 
the interface over which the tunnel destination is reachable, or a loopback. 
The downside of using a physical interface is if that physical goes down, then 
again the tunnel is unreachable, despite perhaps there being an alternate from 
the peer to VPP. The benefit of using a loopback is that these never go down. 
So, to configure the underlay do;
  loop create
  set int state loop0 up
  set int ip addr loop0 1.1.1.1/32
note my use of a /32 as the loopback’s interface address. This is possible 
since one cannot connect peers to a loopback, hence the network comprises only 
one device.

Next create some tunnels using the loopback’s interface address as the tunnel 
source;
  create gtpu tunnel src 1.1.1.1 dst 10.6.6.6 teid  decap-next ip4
  create gtpu tunnel src 1.1.1.1 dst 10.6.6.6 teid 1112 decap-next ip4
  create gtpu tunnel src 1.1.1.1 dst 10.6.6.6 teid 1113 decap-next ip4

Now for the overlay addressing. Here we have choices. Firstly, we can assign 
each of the tunnel’s their own overlay address:
  set int ip addr gtpu_tunnel0 1.1.1.2/31
  set int ip addr gtpu_tunnel1 1.1.1.4/31
  set int ip addr gtpu_tunnel2 1.1.1.6/31
note the use of a /31. GTPU tunnels are point-to-point, so we only need two 
addresses: one for us, one for the peer.
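As an aside, the /31 point-to-point semantics (RFC 3021) can be checked with Python's ipaddress module, which treats both addresses of a /31 as usable hosts:

```python
import ipaddress

# A /31 has no separate network/broadcast addresses: both are usable.
net = ipaddress.ip_network("1.1.1.2/31")
print([str(h) for h in net.hosts()])  # ['1.1.1.2', '1.1.1.3']
```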
Or secondly, we can use the same address for each of the tunnels, if we make 
them unnumbered.
  loop create
  set int state loop1 up
  set int ip addr loop1 1.1.1.2/31
  set int unnumbered gtpu_tunnel0 use loop1
  set int unnumbered gtpu_tunnel1 use loop1
  set int unnumbered gtpu_tunnel2 use loop1

hope that helps,
neale


From:  on behalf of "Jakub Horn (jakuhorn)" 

Date: Wednesday, 14 February 2018 at 23:35
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] gtpu ipv4 decap issue

Hi,

To assign ipv4 decap for a GTPu tunnel we need to assign an IPv4 address to 
the tunnel:
set interface ip address gtpu_tunnel0 10.9.9.9/24

but we cannot assign the same address to more than one GTPu tunnel.
So if there is more than one tunnel (same SRC, same DST, differing only by 
TEID) we cannot do that.

Then the secondary tunnels do not decapsulate GTP packets!


create gtpu tunnel src 10.9.9.9 dst 10.6.6.6 teid  decap-next ip4
ip route add 10.1.1.1/32 via gtpu_tunnel0
set interface ip address gtpu_tunnel0 10.9.9.9/24

create gtpu tunnel src 10.9.9.9 dst 10.6.6.6 teid  decap-next ip4
ip route add 10.2.2.1/32 via gtpu_tunnel1


Is there any other way to make a GTP tunnel decapsulate packets based on the 
DST IP address of the outer packet?

Thanks a lot in advance

Jakub






Re: [vpp-dev] gdb does not work with vpp 18.04..

2018-02-15 Thread Shiv
Ray, Neale,

 Sorry for the late response. It works now - looks like there was another
instance of VPP running.

Regards,
Shiv

On Fri, Feb 9, 2018 at 5:41 PM, Ray Kinsella  wrote:

> Works for me .. please send on output.
>
> Ray K
>
> On 09/02/2018 05:34, Shiv wrote:
>
>> Hi,
>>
>>Doing a "make debug" with the latest 18.04 does not give the DBGVPP
>> prompt. The same works with 18.01rc2. Anyone else facing the same issue ?
>>
>> Regards,
>> Shiv
>>
>>
> 
>
>


Re: [vpp-dev] VPP Node or plug-in load dynamically at run-time

2018-02-15 Thread Satish
Thanks Chris and Neale.


On Wed, Feb 14, 2018 at 8:46 PM, Neale Ranns  wrote:

> Hi Satish,
>
>
>
> It’s important, I think, to make the distinction here between nodes and
> plugins. Plugins are .so files; as Chris says, one cannot load these once
> VPP has ‘booted’. Nodes, which can be specified/contained within plugins,
> can be added to the VLIB graph at any time. So, start VPP with all the
> plugins you will ever need, then use the nodes therein on demand.
>
>
>
> Regards,
>
> neale
>
>
>
> *From: * on behalf of Chris Luke <
> chris_l...@comcast.com>
> *Date: *Wednesday, 14 February 2018 at 15:53
> *To: *"vpp-dev@lists.fd.io" 
> *Cc: *"vpp-dev@lists.fd.io" 
> *Subject: *Re: [vpp-dev] VPP Node or plug-in load dynamically at run-time
>
>
>
> We’ve talked about it a few times. It’s not inconceivable to add the
> ability to load plugins at runtime but it would come with risks to
> stability and so quite unlikely.
>
>
>
> Being able to unload them at runtime is however a complete non-starter
> without a herculean effort and even more serious risks; ergo, very unlikely.
>
>
>
> Chris.
>
>
>
> *From:* vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] *On Behalf Of *
> Satish
> *Sent:* Wednesday, February 14, 2018 9:22 AM
> *To:* vpp-dev@lists.fd.io
> *Cc:* vpp-dev@lists.fd.io
> *Subject:* Re: [vpp-dev] VPP Node or plug-in load dynamically at run-time
>
>
>
> Thanks For the clarification Ray and the quick response :)
>
>
>
> On Wed, Feb 14, 2018 at 7:49 PM, Ray Kinsella  wrote:
>
> I don't believe it is.
>
> Ray K
>
>
>
> On 14/02/2018 14:19, satish karunanithi wrote:
>
> Thanks for the pointers. But I would like to know: is it possible to load
> the plugins/nodes without restarting VPP?
>
>
>
> 
>
>