Re: [vpp-dev] Is the TCP implementation multi-instance capable?

2017-05-24 Thread Florin Coras
Hi Nagp, 

No, the current implementation is not multi-FIB capable, i.e., all connections 
go through the main FIB. Transport endpoint structs used in the session API and 
the application interface already carry VRFs, but we’re not yet using them in 
TCP or for session lookup. We do, however, plan to support this in the future. 

Could you elaborate on why you would need support for this?

HTH,
Florin

> On May 24, 2017, at 10:28 PM, Nagaprabhanjan Bellaru  
> wrote:
> 
> I am asking this because the session lookup functions in tcp_input.c do not 
> seem to take a fib index or a table ID to look up the session - just the 
> usual 4-tuple.
> 
> If it is done in a different way, please help me see it.
> 
> Thanks,
> -nagp
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] MPLS L3VPN PING FAILED

2017-05-24 Thread 薛欣颖
Hi Neale,

I forgot to say: I neither created a tunnel nor used LDP. I used static 
command configuration.

Thanks,
xyxue
 
From: 薛欣颖
Date: 2017-05-25 13:36
To: nranns; vpp-dev
Subject: Re: [vpp-dev] MPLS L3VPN PING FAILED

Hi Neale,

The ping still failed. Here is the specific information:
DBGvpp# show trace 
 CLIB unknown format `%#' x label 0 eos 1024
17:23:38:439098: lookup-mpls-dst
 fib-index:0 hdr:[1023:85:0:eos] load-balance:29
17:23:38:439159: ip4-mpls-label-disposition
  disp:0
17:23:38:439198: lookup-ip4-dst
 fib-index:1 addr:63.1.94.231 load-balance:9
17:23:38:439325: ip4-drop
IP6_HOP_BY_HOP_OPTIONS: 85.93.65.0 -> 63.1.94.231
  version 0, header length 0
  tos 0x3f, ttl 69, length 61781, checksum 0x0054 (should be 0x)
  fragment id 0x0002 offset 35320, flags CONGESTION
17:23:38:439391: error-drop
  ip4-input: ip4 adjacency drop

By the way, I didn't build tunnels.

Thanks,
xyxue


 
From: Neale Ranns (nranns)
Date: 2017-05-24 18:18
To: 薛欣颖; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] MPLS L3VPN PING FAILED
Hi Xyxue,
 
The lookup was performed in FIB index 1 - you must have used ‘set int ip table 
host-XXX YYY’ - but the route you added is in the default table.
 
If you want the routes in the same table as the interface, do:
  ip route add table YYY 192.168.3.0/24 via mpls-tunnel0 out-label 1023
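
For example, assuming host-eth1 is the CE-facing interface you placed in table 1,
a minimal sketch of the corrected config would be:

  set interface ip table host-eth1 1
  ip route add table 1 192.168.3.0/24 via mpls-tunnel0 out-label 1023

i.e. the route must live in the same table that the ingress interface’s lookup
uses (FIB index 1 in your trace).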
 
Regards,
Neale
 
p.s. are you really constructing the L3VPN from a [full] mesh of MPLS tunnels, 
or is it LDP in the core?
 
 
 
From:  on behalf of 薛欣颖 
Date: Wednesday, 24 May 2017 at 09:09
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] MPLS L3VPN PING FAILED
 
 
Hi guys,
 
I have the following configuration:
mpls tunnel add via 2.1.1.1 host-eth1 out-label 1024 
set int state mpls-tunnel0 up 
ip route add 192.168.3.0/24 via mpls-tunnel0 out-label 1023
 
When I ping from CE to PE, the PE drops it.
 
This is the FIB:
192.168.3.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:34 buckets:1 uRPF:36 to:[15:1260]]
[0] [@11]: mpls-label:[3]:[1023:255:0:eos]
[@2]: mpls via 0.0.0.0  mpls-tunnel0: 
  stacked-on:
[@5]: dpo-load-balance: [proto:mpls index:35 buckets:1 uRPF:-1 
to:[0:0] via:[15:1320]]
  [0] [@8]: mpls-label:[1]:[1024:255:0:neos]
  [@1]: mpls via 2.1.1.1 host-eth1: 00037ffe0e1a0d0050438847
 
The following is the trace info:
00:17:54:791606: af-packet-input
  af_packet: hw_if_index 1 next-index 4 


tpacket2_hdr:
  status 0x1 len 98 snaplen 98 mac 66 net 80
  sec 0x16645 nsec 0x34a33284 vlan 0
00:17:54:791899: ethernet-input
  IP4: 2c:53:4a:02:91:95 -> 00:50:43:00:02:02
00:17:54:791956: ip4-input
  ICMP: 192.168.2.10 -> 192.168.3.10
tos 0x00, ttl 64, length 84, checksum 0x0886
fragment id 0xabbe, flags DONT_FRAGMENT
  ICMP echo_request checksum 0xae6a
00:17:54:792005: ip4-lookup
  fib 1 dpo-idx 1 flow hash: 0x
  ICMP: 192.168.2.10 -> 192.168.3.10
tos 0x00, ttl 64, length 84, checksum 0x0886
fragment id 0xabbe, flags DONT_FRAGMENT
  ICMP echo_request checksum 0xae6a
00:17:54:792062: ip4-drop
ICMP: 192.168.2.10 -> 192.168.3.10
  tos 0x00, ttl 64, length 84, checksum 0x0886
  fragment id 0xabbe, flags DONT_FRAGMENT
ICMP echo_request checksum 0xae6a
00:17:54:792110: error-drop
  ip4-input: ip4 adjacency drop
 
How can I solve the problem?
 
Thanks,
xyxue
 


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] MPLS L3VPN PING FAILED

2017-05-24 Thread 薛欣颖

Hi Neale,

The ping still failed. Here is the specific information:
DBGvpp# show trace 
 CLIB unknown format `%#' x label 0 eos 1024
17:23:38:439098: lookup-mpls-dst
 fib-index:0 hdr:[1023:85:0:eos] load-balance:29
17:23:38:439159: ip4-mpls-label-disposition
  disp:0
17:23:38:439198: lookup-ip4-dst
 fib-index:1 addr:63.1.94.231 load-balance:9
17:23:38:439325: ip4-drop
IP6_HOP_BY_HOP_OPTIONS: 85.93.65.0 -> 63.1.94.231
  version 0, header length 0
  tos 0x3f, ttl 69, length 61781, checksum 0x0054 (should be 0x)
  fragment id 0x0002 offset 35320, flags CONGESTION
17:23:38:439391: error-drop
  ip4-input: ip4 adjacency drop

By the way, I didn't build tunnels.

Thanks,
xyxue


 
From: Neale Ranns (nranns)
Date: 2017-05-24 18:18
To: 薛欣颖; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] MPLS L3VPN PING FAILED
Hi Xyxue,
 
The lookup was performed in FIB index 1 - you must have used ‘set int ip table 
host-XXX YYY’ - but the route you added is in the default table.
 
If you want the routes in the same table as the interface, do:
  ip route add table YYY 192.168.3.0/24 via mpls-tunnel0 out-label 1023
 
Regards,
Neale
 
p.s. are you really constructing the L3VPN from a [full] mesh of MPLS tunnels, 
or is it LDP in the core?
 
 
 
From:  on behalf of 薛欣颖 
Date: Wednesday, 24 May 2017 at 09:09
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] MPLS L3VPN PING FAILED
 
 
Hi guys,
 
I have the following configuration:
mpls tunnel add via 2.1.1.1 host-eth1 out-label 1024 
set int state mpls-tunnel0 up 
ip route add 192.168.3.0/24 via mpls-tunnel0 out-label 1023
 
When I ping from CE to PE, the PE drops it.
 
This is the FIB:
192.168.3.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:34 buckets:1 uRPF:36 to:[15:1260]]
[0] [@11]: mpls-label:[3]:[1023:255:0:eos]
[@2]: mpls via 0.0.0.0  mpls-tunnel0: 
  stacked-on:
[@5]: dpo-load-balance: [proto:mpls index:35 buckets:1 uRPF:-1 
to:[0:0] via:[15:1320]]
  [0] [@8]: mpls-label:[1]:[1024:255:0:neos]
  [@1]: mpls via 2.1.1.1 host-eth1: 00037ffe0e1a0d0050438847
 
The following is the trace info:
00:17:54:791606: af-packet-input
  af_packet: hw_if_index 1 next-index 4 


tpacket2_hdr:
  status 0x1 len 98 snaplen 98 mac 66 net 80
  sec 0x16645 nsec 0x34a33284 vlan 0
00:17:54:791899: ethernet-input
  IP4: 2c:53:4a:02:91:95 -> 00:50:43:00:02:02
00:17:54:791956: ip4-input
  ICMP: 192.168.2.10 -> 192.168.3.10
tos 0x00, ttl 64, length 84, checksum 0x0886
fragment id 0xabbe, flags DONT_FRAGMENT
  ICMP echo_request checksum 0xae6a
00:17:54:792005: ip4-lookup
  fib 1 dpo-idx 1 flow hash: 0x
  ICMP: 192.168.2.10 -> 192.168.3.10
tos 0x00, ttl 64, length 84, checksum 0x0886
fragment id 0xabbe, flags DONT_FRAGMENT
  ICMP echo_request checksum 0xae6a
00:17:54:792062: ip4-drop
ICMP: 192.168.2.10 -> 192.168.3.10
  tos 0x00, ttl 64, length 84, checksum 0x0886
  fragment id 0xabbe, flags DONT_FRAGMENT
ICMP echo_request checksum 0xae6a
00:17:54:792110: error-drop
  ip4-input: ip4 adjacency drop
 
How can I solve the problem?
 
Thanks,
xyxue
 


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] Is the TCP implementation multi-instance capable?

2017-05-24 Thread Nagaprabhanjan Bellaru
I am asking this because the session lookup functions in tcp_input.c do not
seem to take a fib index or a table ID to look up the session -
just the usual 4-tuple.

If it is done in a different way, please help me see it.

Thanks,
-nagp
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [csit-dev] vpp-verify-master-centos7 failure

2017-05-24 Thread Burt Silverman
I am guessing the fix will work long term, but if not, perhaps an
alternative is to add --oldpackage to the rpm install options.
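
For example, based on the failing command in the log (a sketch; --oldpackage lets
rpm -U install a package that is older than the one already installed):

  sudo rpm -Uih --oldpackage vpp-dpdk-devel-17.05-vpp1.x86_64.rpm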

Burt

On Fri, May 19, 2017 at 8:34 AM, Thomas F Herbert 
wrote:

>
>
> On 05/18/2017 04:37 PM, Florin Coras wrote:
>
> Well, uri.am is a known issue, apparently not the only one ...
>
> As you’ve previously noted, the VM is probably not cleaned up before the
> testing, and because the suffix is older (note *vpp1 instead of *vpp3),
> CentOS refuses to do our bidding. Can you try a rebase to pick up 6745 and
> the vpp3 suffix?
>
> I wonder if the package naming convention is causing issues wrt rpm
> version checking at install time.
>
> Florin
>
> On May 18, 2017, at 12:51 PM, Dave Wallace  wrote:
>
> Florin,
>
> After reverting the changes to uri.am, I'm still seeing the same failure
> signature:
>
>  %< 
> 18:24:18 sudo rpm -Uih vpp-dpdk-devel-17.05-vpp1.x86_64.rpm
> 18:24:18 
> 18:24:19 package vpp-dpdk-devel-17.05-vpp3.x86_64 (which is newer
> than vpp-dpdk-devel-17.05-vpp1.x86_64) is already installed
> 18:24:19 make[1]: *** [install-rpm] Error 2
> 18:24:19 make[1]: Leaving directory `/w/workspace/vpp-verify-
> master-centos7/dpdk'
> 18:24:19 make: *** [dpdk-install-dev] Error 2
>  %< 
>
> Thus this failure has nothing to do with the changes made to uri.am
>
> Thanks,
> -daw-
>
> On 05/18/2017 08:00 AM, Dave Wallace wrote:
>
> Florin,
>
> I was also curious as to why this failure only happened on CentOS. I
> suspect that the Debian package manager is less strict about allowing a
> prior-version package to be installed than the rpm package manager.  I
> don't have any experience with the rpm package manager, and haven't been
> doing any analysis on the random centos verify job failures that have been
> happening.  Thus my question for Tom who has been leading the rpm packaging
> and centos verification efforts.
>
> This patch is going to be abandoned anyways, so I'm not that concerned
> about resolving the actual verify job failure.
>
> Thanks,
> -daw-
>
> On 05/18/2017 12:49 AM, Florin Coras wrote:
>
> Dave,
>
> A quick solution for that problem is to switch uri.am back to
> noinst_PROGRAMS.
>
> Still, I’m also curious as to why that fails only for CentOS.
>
> HTH,
> Florin
>
> On May 17, 2017, at 7:27 PM, Dave Wallace < 
> dwallac...@gmail.com> wrote:
>
> Tom,
>
> The verify job that builds VPP on Centos7 failed on the patch
> https://gerrit.fd.io/r/6672 due to an error
> installing DPDK:
>
>  %< 
> 00:04:23.601 make[2]: Leaving directory `/w/workspace/vpp-verify-
> master-centos7/dpdk'
> 00:04:23.601 sudo rpm -Uih vpp-dpdk-devel-17.05-vpp1.x86_64.rpm
> 00:04:23.710 
> 00:04:23.778 package vpp-dpdk-devel-17.05-vpp3.x86_64 (which is newer
> than vpp-dpdk-devel-17.05-vpp1.x86_64) is already installed
> 00:04:23.781 make[1]: *** [install-rpm] Error 2
> 00:04:23.781 make[1]: Leaving directory `/w/workspace/vpp-verify-
> master-centos7/dpdk'
> 00:04:23.782 make: *** [dpdk-install-dev] Error 2
> 00:04:23.811 Build step 'Execute shell' marked build as failure
>  %< 
>
> Is this a known failure signature?
> I think that vpp-verify-master-centos7 should un-install the
> vpp-dpdk-dev-*.rpm before running the build to avoid this error case.
> Do you agree?
>
> I wonder if this case shows up because the build minion was re-used as
> opposed to being freshly spawned.  If that is the case, then all of the vpp
> packages that are installed as part of the verify job should be
> un-installed after the test build has completed (i.e. in the teardown
> phase).
>
> Thanks,
> -daw-
>
>  Forwarded Message 
> Subject: Change in vpp[master]: Build uri_udp_test app
> Date: Wed, 17 May 2017 23:21:15 +
> From: fd.io JJB (Code Review)  
> Reply-To: jobbuilder@projectrotterdam.
> info
> To: shrinivasan ganapathy 
>  
> CC: Florin Coras 
> , Dave Wallace 
>  , Keith Burns
>  , Ed Kern
>  
>
> fd.io JJB has posted comments on this change. ( https://gerrit.fd.io/r/6672 )
>
> Change subject: Build uri_udp_test app
> ..
>
>
> Patch Set 5: Verified-1
>
> Build Failed
> https://jenkins.fd.io/job/vpp-verify-master-centos7/5490/ : FAILURE
>
> No problems were identified. If you know why this problem occurred, please 
> add a suitable Cause for it. ( 
> https://jenkins.fd.io/job/vpp-verify-master-centos7/5490/ )
>
> Logs: 
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-centos7/5490
> https://jenkins.fd.io/job/vpp-verify-master-ubuntu1604/5495/ : SUCCESS
>
> Logs: 
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1604/5495
> https://jenkins.fd.io/job/vpp-csit-verify-virl-master/5494/ : SUCCESS
>
> Logs: 
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-csit-verify-virl-master/5494
> https://jenkins.fd.io/job/vpp-docs-

Re: [vpp-dev] Some error in L3VPN

2017-05-24 Thread Luke, Chris
That specific trace formatting error was fixed on master earlier today. Could 
you give it a try?
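
For example (a rough sketch, assuming you are building from a git checkout of master):

  git pull
  make wipe && make build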

Chris.

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of ???
Sent: Wednesday, May 24, 2017 22:03
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Some error in L3VPN


Hi guys,

There is a ping failure occurring in L3VPN, and the trace shows the incorrect 
print information below:
DBGvpp# show trace
 CLIB unknown format `%#' x label 0 eos 1024
17:23:38:439098: lookup-mpls-dst
 fib-index:0 hdr:[1023:85:0:eos] load-balance:29
17:23:38:439159: ip4-mpls-label-disposition
  disp:0
17:23:38:439198: lookup-ip4-dst
 fib-index:1 addr:63.1.94.231 load-balance:9
17:23:38:439325: ip4-drop
IP6_HOP_BY_HOP_OPTIONS: 85.93.65.0 -> 63.1.94.231
  version 0, header length 0
  tos 0x3f, ttl 69, length 61781, checksum 0x0054 (should be 0x)
  fragment id 0x0002 offset 35320, flags CONGESTION
17:23:38:439391: error-drop
  ip4-input: ip4 adjacency drop

 What should I do to solve this?

 Thanks,
 xyxue

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] Some error in L3VPN

2017-05-24 Thread 薛欣颖

Hi guys,

There is a ping failure occurring in L3VPN, and the trace shows the incorrect 
print information below:
DBGvpp# show trace 
 CLIB unknown format `%#' x label 0 eos 1024
17:23:38:439098: lookup-mpls-dst
 fib-index:0 hdr:[1023:85:0:eos] load-balance:29
17:23:38:439159: ip4-mpls-label-disposition
  disp:0
17:23:38:439198: lookup-ip4-dst
 fib-index:1 addr:63.1.94.231 load-balance:9
17:23:38:439325: ip4-drop
IP6_HOP_BY_HOP_OPTIONS: 85.93.65.0 -> 63.1.94.231
  version 0, header length 0
  tos 0x3f, ttl 69, length 61781, checksum 0x0054 (should be 0x)
  fragment id 0x0002 offset 35320, flags CONGESTION
17:23:38:439391: error-drop
  ip4-input: ip4 adjacency drop
  
 What should I do to solve this?
 
 Thanks,
 xyxue
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] IPsec interface handling in FIB

2017-05-24 Thread Neale Ranns (nranns)

Hi Matt,

Glad to hear it. And thank you for the patch.

Regards,
neale

-Original Message-
From: Matthew Smith 
Date: Wednesday, 24 May 2017 at 22:24
To: "Neale Ranns (nranns)" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] IPsec interface handling in FIB


Hi Neale,

I set that flag and have been testing with it and it seems to have solved 
the problem.

Thanks!

-Matt


> On May 20, 2017, at 2:18 AM, Neale Ranns (nranns)  
wrote:
> 
> Hi Matt,
> 
> No ARP lookup is needed for interfaces that are point-2-point. The FIB 
will link entries reachable through a p2p interface using a special ‘auto’ 
adjacency. The auto adj has the all zeros address as a next-hop and a rewrite 
that is constructed by the interface type (i.e. for GRE has tunnel src,dst) and 
since the interface is P2P, it’s independent of the packet’s destination.
> 
> The construction of the special adj and the config to set the interface 
as P2P is, e.g.;
> 
> VNET_HW_INTERFACE_CLASS (gre_hw_interface_class) = {
>  .name = "GRE",
> …
>  .update_adjacency = gre_update_adj,
>  .flags = VNET_HW_INTERFACE_CLASS_FLAG_P2P,
> };
> 
> similar config for IPSEC would be required.
> 
> Thanks,
> neale
> 
> -Original Message-
> From:  on behalf of Matthew Smith 

> Date: Saturday, 20 May 2017 at 01:36
> To: "vpp-dev@lists.fd.io" 
> Subject: [vpp-dev] IPsec interface handling in FIB
> 
> 
>Hi,
> 
>In the course of testing IPsec interfaces in VPP, I managed to make 
VPP crash on a SEGV by setting an IP address on an established IPsec tunnel 
interface and then trying to send packets through the tunnel to the IPsec peer 
by pinging an address in the same subnet as that address. I.e. I set the 
address 10.0.0.2/30 on the ipsec0 interface and tried to ping to 10.0.0.1. It 
looks like VPP was trying to resolve the address via ARP and crashed because it 
was trying to memcpy the hardware address of the IPsec tunnel interface, which 
was NULL, to build the ARP packet.
> 
>GRE tunnel interfaces allow this sort of configuration without 
crashing. I took a look at some of the GRE code and it looked like there was 
some setup & maintenance that is done for GRE tunnels so that FIB lookups treat 
packets destined for a GRE tunnel in a special way. No ARP lookup is initiated 
when I send a packet to an address in the same subnet as an IP address 
configured on a GRE tunnel interface.
> 
>I’d like to fix this for IPsec tunnel interfaces. Does anyone have any 
pointers on what I would need to do? I’ve been looking at the GRE code to get an 
idea, but it would save me a lot of time if anyone could share a high-level 
description of what needs to be done, or point me at any relevant documentation.
> 
>Thanks,
>-Matt Smith
> 
>___
>vpp-dev mailing list
>vpp-dev@lists.fd.io
>https://lists.fd.io/mailman/listinfo/vpp-dev
> 



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] IPsec interface handling in FIB

2017-05-24 Thread Matthew Smith

Hi Neale,

I set that flag and have been testing with it and it seems to have solved the 
problem.

Thanks!

-Matt


> On May 20, 2017, at 2:18 AM, Neale Ranns (nranns)  wrote:
> 
> Hi Matt,
> 
> No ARP lookup is needed for interfaces that are point-2-point. The FIB will 
> link entries reachable through a p2p interface using a special ‘auto’ 
> adjacency. The auto adj has the all zeros address as a next-hop and a rewrite 
> that is constructed by the interface type (i.e. for GRE has tunnel src,dst) 
> and since the interface is P2P, it’s independent of the packet’s destination.
> 
> The construction of the special adj and the config to set the interface as 
> P2P is, e.g.;
> 
> VNET_HW_INTERFACE_CLASS (gre_hw_interface_class) = {
>  .name = "GRE",
> …
>  .update_adjacency = gre_update_adj,
>  .flags = VNET_HW_INTERFACE_CLASS_FLAG_P2P,
> };
> 
> similar config for IPSEC would be required.
> 
> Thanks,
> neale
> 
> -Original Message-
> From:  on behalf of Matthew Smith 
> 
> Date: Saturday, 20 May 2017 at 01:36
> To: "vpp-dev@lists.fd.io" 
> Subject: [vpp-dev] IPsec interface handling in FIB
> 
> 
>Hi,
> 
>In the course of testing IPsec interfaces in VPP, I managed to make VPP 
> crash on a SEGV by setting an IP address on an established IPsec tunnel 
> interface and then trying to send packets through the tunnel to the IPsec 
> peer by pinging an address in the same subnet as that address. I.e. I set the 
> address 10.0.0.2/30 on the ipsec0 interface and tried to ping to 10.0.0.1. It 
> looks like VPP was trying to resolve the address via ARP and crashed because 
> it was trying to memcpy the hardware address of the IPsec tunnel interface, 
> which was NULL, to build the ARP packet.
> 
>GRE tunnel interfaces allow this sort of configuration without crashing. I 
> took a look at some of the GRE code and it looked like there was some setup & 
> maintenance that is done for GRE tunnels so that FIB lookups treat packets 
> destined for a GRE tunnel in a special way. No ARP lookup is initiated when I 
> send a packet to an address in the same subnet as an IP address configured on 
> a GRE tunnel interface.
> 
>I’d like to fix this for IPsec tunnel interfaces. Does anyone have any 
> pointers on what I would need to do? I’ve been looking at the GRE code to get an 
> idea, but it would save me a lot of time if anyone could share a high-level 
> description of what needs to be done, or point me at any relevant 
> documentation.
> 
>Thanks,
>-Matt Smith
> 
>___
>vpp-dev mailing list
>vpp-dev@lists.fd.io
>https://lists.fd.io/mailman/listinfo/vpp-dev
> 

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP interface configuration problem

2017-05-24 Thread Alessio Silvestro
Dear Neale,

your guess was right.

With ping it works fine and vpp does not crash.

I was using probe because it is suggested by the official wiki for testing
the interface, so I would suggest replacing it with ping there as well.

Thanks for the prompt answer.

Best regards,
Alessio

On Wed, May 24, 2017 at 8:17 PM, Neale Ranns (nranns) 
wrote:

>
>
> Hi Alessio,
>
>
>
> My guess is the probe command caused VPP to crash, and since you’re running
> it as a daemon, it restarted with your previous configs gone.
>
>
>
> The probe command sends an ARP request to the address in question, so
> don’t probe your own address, instead probe a peer’s. These days we also
> have ‘ping’ which is more useful.
>
>
>
> Regards,
>
> neale
>
>
>
> From:  on behalf of Alessio Silvestro <
> ale.silver...@gmail.com>
> Date: Wednesday, 24 May 2017 at 17:20
> To: "vpp-dev@lists.fd.io" 
> Subject: [vpp-dev] VPP interface configuration problem
>
>
>
> Dear all,
>
>
>
> I use the vagrant installation.
>
>
>
> I can start the VM, build the packages and start vpp process.
>
>
>
> I am stuck at the step (Step 1: Configure and enable an interface) of the
> guide available at https://wiki.fd.io/view/VPP/Build,_install,_and_test_
> images.
>
>
>
> At the first boot I have:
>
>
>
> vagrant@localhost:/$ ifconfig
>
> enp0s3Link encap:Ethernet  HWaddr 08:00:27:33:82:8a
>
>   inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
>
>   inet6 addr: fe80::a00:27ff:fe33:828a/64 Scope:Link
>
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>
>   RX packets:376 errors:0 dropped:0 overruns:0 frame:0
>
>   TX packets:290 errors:0 dropped:0 overruns:0 carrier:0
>
>   collisions:0 txqueuelen:1000
>
>   RX bytes:34649 (34.6 KB)  TX bytes:29047 (29.0 KB)
>
>
>
> enp0s8Link encap:Ethernet  HWaddr 08:00:27:1e:20:8b
>
>   inet addr:172.28.128.5  Bcast:172.28.128.255  Mask:255.255.255.0
>
>   inet6 addr: fe80::a00:27ff:fe1e:208b/64 Scope:Link
>
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>
>   RX packets:20 errors:0 dropped:0 overruns:0 frame:0
>
>   TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
>
>   collisions:0 txqueuelen:1000
>
>   RX bytes:5907 (5.9 KB)  TX bytes:1482 (1.4 KB)
>
>
>
> enp0s9Link encap:Ethernet  HWaddr 08:00:27:da:51:35
>
>   inet addr:172.28.128.6  Bcast:172.28.128.255  Mask:255.255.255.0
>
>   inet6 addr: fe80::a00:27ff:feda:5135/64 Scope:Link
>
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>
>   RX packets:20 errors:0 dropped:0 overruns:0 frame:0
>
>   TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
>
>   collisions:0 txqueuelen:1000
>
>   RX bytes:5907 (5.9 KB)  TX bytes:1482 (1.4 KB)
>
>
>
> loLink encap:Local Loopback
>
>   inet addr:127.0.0.1  Mask:255.0.0.0
>
>   inet6 addr: ::1/128 Scope:Host
>
>   UP LOOPBACK RUNNING  MTU:65536  Metric:1
>
>   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>
>   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>
>   collisions:0 txqueuelen:1
>
>   RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
>
>
>
>
>
> I selected enp0s9 as target interface, thus:
>
>
>
> sudo ifconfig enp0s9 down
>
> sudo ip addr flush dev enp0s9
>
> sudo stop vpp
>
> sudo start vpp
>
>
>
> Therefore, I configure the interface for VPP. Its name is
> GigabitEthernet0/9/0.
>
>
>
> sudo vppctl set int ip address GigabitEthernet0/9/0 172.28.128.6/24
> (the IP address is the same as the one shown by ifconfig in the beginning)
>
> sudo vppctl set int state GigabitEthernet0/9/0 up
>
>
>
> Now I can see this:
>
>
>
> vagrant@localhost:/$ sudo vppctl show int address
>
> GigabitEthernet0/9/0 (up):
>
>   172.28.128.6/24
>
> local0 (dn):
>
>
>
>
>
> So, it looks to me that it is correctly configured.
>
>
>
> However when I do this:
>
>
>
> sudo vppctl ip probe 172.28.128.6 GigabitEthernet0/9/0
>
> exec error: Misc
>
>
>
> I got this error and the interface is down and without an IP address
> assigned.
>
> sudo vppctl show int address
>
>  GigabitEthernet0/9/0 (dn):
>
>  local0 (dn):
>
>
>
>
>
> Am I doing something wrong, or what is happening?
>
>
>
> Thanks in advance for any help.
>
>
>
> Best regards,
>
> Alessio
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP interface configuration problem

2017-05-24 Thread Neale Ranns (nranns)

Hi Alessio,

My guess is the probe command caused VPP to crash, and since you’re running it as 
a daemon, it restarted with your previous configs gone.

The probe command sends an ARP request to the address in question, so don’t 
probe your own address; probe a peer’s instead. These days we also have ‘ping’, 
which is more useful.
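
For example (a sketch; replace 172.28.128.1 with the address of a real peer on
that subnet):

  sudo vppctl ping 172.28.128.1
  sudo vppctl ip probe 172.28.128.1 GigabitEthernet0/9/0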

Regards,
neale

From:  on behalf of Alessio Silvestro 

Date: Wednesday, 24 May 2017 at 17:20
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] VPP interface configuration problem

Dear all,

I use the vagrant installation.

I can start the VM, build the packages and start vpp process.

I am stuck at the step (Step 1: Configure and enable an interface) of the guide 
available at https://wiki.fd.io/view/VPP/Build,_install,_and_test_images.

At the first boot I have:

vagrant@localhost:/$ ifconfig
enp0s3Link encap:Ethernet  HWaddr 08:00:27:33:82:8a
  inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
  inet6 addr: fe80::a00:27ff:fe33:828a/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:376 errors:0 dropped:0 overruns:0 frame:0
  TX packets:290 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:34649 (34.6 KB)  TX bytes:29047 (29.0 KB)

enp0s8Link encap:Ethernet  HWaddr 08:00:27:1e:20:8b
  inet addr:172.28.128.5  Bcast:172.28.128.255  Mask:255.255.255.0
  inet6 addr: fe80::a00:27ff:fe1e:208b/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:20 errors:0 dropped:0 overruns:0 frame:0
  TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:5907 (5.9 KB)  TX bytes:1482 (1.4 KB)

enp0s9Link encap:Ethernet  HWaddr 08:00:27:da:51:35
  inet addr:172.28.128.6  Bcast:172.28.128.255  Mask:255.255.255.0
  inet6 addr: fe80::a00:27ff:feda:5135/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:20 errors:0 dropped:0 overruns:0 frame:0
  TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:5907 (5.9 KB)  TX bytes:1482 (1.4 KB)

loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:65536  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)


I selected enp0s9 as target interface, thus:

sudo ifconfig enp0s9 down
sudo ip addr flush dev enp0s9
sudo stop vpp
sudo start vpp

Therefore, I configure the interface for VPP. Its name is GigabitEthernet0/9/0.

sudo vppctl set int ip address GigabitEthernet0/9/0 172.28.128.6/24
(the IP address is the same as the one shown by ifconfig in the beginning)
sudo vppctl set int state GigabitEthernet0/9/0 up

Now I can see this:

vagrant@localhost:/$ sudo vppctl show int address
GigabitEthernet0/9/0 (up):
  172.28.128.6/24
local0 (dn):


So, it looks to me that it is correctly configured.

However when I do this:

sudo vppctl ip probe 172.28.128.6 GigabitEthernet0/9/0
exec error: Misc

I got this error and the interface is down and without an IP address assigned.
sudo vppctl show int address
 GigabitEthernet0/9/0 (dn):
 local0 (dn):


Am I doing something wrong, or what is happening?

Thanks in advance for any help.

Best regards,
Alessio
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] VPP interface configuration problem

2017-05-24 Thread Alessio Silvestro
Dear all,

I use the vagrant installation.

I can start the VM, build the packages and start vpp process.

I am stuck at the step (Step 1: Configure and enable an interface) of the
guide available at
https://wiki.fd.io/view/VPP/Build,_install,_and_test_images.

At the first boot I have:

vagrant@localhost:/$ ifconfig
enp0s3Link encap:Ethernet  HWaddr 08:00:27:33:82:8a
  inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
  inet6 addr: fe80::a00:27ff:fe33:828a/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:376 errors:0 dropped:0 overruns:0 frame:0
  TX packets:290 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:34649 (34.6 KB)  TX bytes:29047 (29.0 KB)

enp0s8Link encap:Ethernet  HWaddr 08:00:27:1e:20:8b
  inet addr:172.28.128.5  Bcast:172.28.128.255  Mask:255.255.255.0
  inet6 addr: fe80::a00:27ff:fe1e:208b/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:20 errors:0 dropped:0 overruns:0 frame:0
  TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:5907 (5.9 KB)  TX bytes:1482 (1.4 KB)

enp0s9Link encap:Ethernet  HWaddr 08:00:27:da:51:35
  inet addr:172.28.128.6  Bcast:172.28.128.255  Mask:255.255.255.0
  inet6 addr: fe80::a00:27ff:feda:5135/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:20 errors:0 dropped:0 overruns:0 frame:0
  TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:5907 (5.9 KB)  TX bytes:1482 (1.4 KB)

loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:65536  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)


I selected enp0s9 as target interface, thus:

sudo ifconfig enp0s9 down
sudo ip addr flush dev enp0s9
sudo stop vpp
sudo start vpp

Therefore, I configure the interface for VPP. Its name is
GigabitEthernet0/9/0.

sudo vppctl set int ip address GigabitEthernet0/9/0 172.28.128.6/24
(the IP address is the same as the one shown by ifconfig in the beginning)
sudo vppctl set int state GigabitEthernet0/9/0 up

Now I can see this:

vagrant@localhost:/$ sudo vppctl show int address
GigabitEthernet0/9/0 (up):
  172.28.128.6/24
local0 (dn):


So, it looks to me that it is correctly configured.

However when I do this:

sudo vppctl ip probe 172.28.128.6 GigabitEthernet0/9/0
exec error: Misc

I got this error and the interface is down and without an IP address
assigned.
sudo vppctl show int address
 GigabitEthernet0/9/0 (dn):
 local0 (dn):


Am I doing something wrong, or what is happening?

Thanks in advance for any help.

Best regards,
Alessio
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPLS VPWS

2017-05-24 Thread Neale Ranns (nranns)
Hi Xyxue,

VPP does not support VP{W,L}S.

However, I’ve made a patch to enable it:
  https://gerrit.fd.io/r/#/c/6861/1
Please experiment and let me know how you get on.

Changes to your configs below:

-  Create the mpls tunnel in L2 mode
mpls tunnel l2-only via 1.1.1.2 host-eth0 out-label 34 [out-label PW_CW]

-  Pop the inner label with an instruction to send the payload to an L2 
interface

mpls local-label add eos 1023 l2-input-on mpls-tunnelXXX

and the use of the l2 configs remains the same.
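
Put together with your original PE1 config, the data-path part might look roughly
like this (a sketch against the patch above; 33 is assumed as the PW out-label and
1023 as the PW label expected from the remote PE, and the xconnect is configured in
both directions - adjust to your label plan):

  mpls tunnel l2-only via 1.1.1.2 host-eth0 out-label 34 out-label 33
  set int state mpls-tunnel0 up
  set interface l2 xconnect host-eth1 mpls-tunnel0
  set interface l2 xconnect mpls-tunnel0 host-eth1
  mpls local-label add non-eos 1024 mpls-lookup-in-table 0
  mpls local-label add eos 1023 l2-input-on mpls-tunnel0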

Regards,
Neale



From:  on behalf of 薛欣颖 
Date: Wednesday, 24 May 2017 at 03:24
To: vpp-dev 
Subject: [vpp-dev] VPLS VPWS

Hi guys,
Does VPP support VPWS / VPLS? How should it be configured? Does my configuration 
below implement VPWS?

 PE1
create host-interface name eth0
create host-interface name eth1
set int state host-eth1 up
set int state host-eth0 up
set interface ip table host-eth0 0
set int ip address host-eth0 1.1.1.1/24
set interface mac address host-eth0 00:03:7F:FF:FF:FF
set interface mac address host-eth1 00:03:7F:FF:FF:FE
set interface mpls  host-eth0   enable

mpls tunnel add via 1.1.1.2 host-eth0 out-label 34
set int state mpls-tunnel0 up
set interface l2 xconnect mpls-tunnel0 host-eth1   / To implement VPWS, I 
directly xconnect the eth interface to the MPLS tunnel. Is that correct? How do I 
add PW label 33?

mpls local-label add eos 1023 ip4-lookup-in-table 1   ### PW label - how to pop?
mpls local-label add non-eos 1024 mpls-lookup-in-table 0  ### LSP label pop - is that correct?

Thanks,
xyxue
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] The Case Of the Missing API Definition

2017-05-24 Thread Lori Jakab
On Wed, May 24, 2017 at 6:21 PM, Jon Loeliger  wrote:

> On Wed, May 24, 2017 at 2:00 AM,   wrote:
> > Hi Jon,
>
> Hi Ole,
>
> > Thanks for the poetry! ;-)
>
> Most welcome.
>
> > This is me in 01384fe
> > Apologies for that. Next time I see you let me bring both band aid and
> whiskey.
>
> Apology accepted!
>
> > To my excuse, there has been this list of "broken" APIs maintained by the
> > Python and Java language binding implementors.
>
> See?  Java is bad for you. :-)
>
> > It's been on the todo list for a long time and I finally found some time
> to
> > deal with them. Obviously not everyone was aware of those planned
> > changes. I did include those I thought affected as code reviewers. #fail.
>
> It's OK, really.  I've been making a pretty systematic march
> through many of the VPP API areas, so there's no telling what
> or where I'll be poking next... :-)
>
> > I feel your pain. How can we make this better?
> > 1) Never change APIs, regardless the reason
> > 2) Announce and discuss changes a priori. Separate mailing list for
> > API changes?
> > {vpp-users, vpp-api-announce}
> > 3) ...
>
> This is a really good question.  I confess it is also a difficult one.
> We certainly can't abide by 1) Never change APIs.  That is just not
> a realistic approach.
>
> I follow the vpp-dev list pretty regularly these days, and I try
> to watch the upcoming Gerrit patches and reviews.  And I try
> to keep our VPP repo up-to-date WRT the fd.io Git repo; I pull
> maybe every couple days or so.
>
> So for me, I am willing to suffer a bit of tip-of-tree development
> pain and angst.  Even fixing this API call removal wasn't that
> difficult for our code.
>
> I think the thing that was most troubling for me was getting it
> pretty much without warning.  So I think at the very least, it
> would be good to make a statement on the vpp-dev list.
>
> But when?  Certainly it would be nice if it were in advance of
> the patch being committed.  I realize that may not always be
> possible.  But some indication on the list as it is being committed
> would also be nice.  Maybe even some indication from the patch
> author on list like "Hey, I have a patch in review that removes
> *this* API call.  If you are using it you will need to change your
> code like _this_."
>
> The process of adding API calls, and even changing an existing
> one is easier, of course.  But still, some advance notice would
> be appreciated.
>
> How hard would it be to have a Jenkins nightly build job that
> diff'ed yesterday's complete API list versus today's list and
> generated and sent that diff mail?  Dunno.  Just an idea.
>

In the OpenDaylight community we have something called the "Weather" page
on the wiki [0], where information about potentially disruptive patches is
posted, then the link posted on one of the mailing list with "[WEATHER]" in
the subject so that it stands out and can be filtered. I would say it
worked reasonably well for us, maybe it's worth a try?

Then again, this approach was useful because we have many different
projects and tons of mailing lists, FD.io being more focused it may be
overkill.

HTH,
-Lori

[0] https://wiki.opendaylight.org/view/Weather


>
> > Do we reach everyone using vpp on  vpp-dev and a heads-up there
> > would suffice?
>
> Quite possibly.  I don't really have a feel for how many people
> use each of the various APIs for active development these days.
> It may very well justify a specific vpp-api-discussion list or so.
>
> > Cheers,
> > Ole
>
> HTH,
> jdl
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] shadow build system change adding test-debug job

2017-05-24 Thread Ed Kern (ejk)
Well, sure enough… I doubled the CPU reservation and it all passed clean.
Things are pretty quiet right now, so I’ll throw it up again when things are busy,
but right now I’m just happy to see a full test-all-debug pass.


thanks,

Ed


> On May 24, 2017, at 8:42 AM, Klement Sekera -X (ksekera - PANTHEON 
> TECHNOLOGIES at Cisco)  wrote:
> 
> I know that the functional BFD tests passed so unless there is a bug in
> the tests, the failures are pretty much timing issues. From my
> experience the load is the culprit as the BFD tests test interactive
> sessions, which need to be kept alive. The timings currently are set at
> 300ms and for most tests two keep-alives can be missed before the session
> goes down on vpp side and asserts start failing. While this might seem
> like ample time, especially on loaded systems there is a high chance
> that at least one test will derp ...
> 
> I've also seen derps even on idle systems, where a select() call (used
> by python in its own sleep() implementation) with timeout of 100ms returns
> after 1-3 seconds.
> 
> Try running the bfd tests only (make test-all TEST=bfd) while no other tasks
> are running - I think they should pass on your box just fine.
> 
> Thanks,
> Klement
> 
> Quoting Ed Kern (ejk) (2017-05-24 16:27:10)
>>   right now its a VERY intentional mix…but depending on loading I could
>>   easily see this coming up if those timings are strict.  
>>   To not dodge your question max loading on my slowest node would be 3
>>   concurrent builds on an Xeon™ E3-1240 v3 (4 cores @ 3.4Ghz)
>> yeah yeah stop laughing…..Do you have suggested or even guesstimate
>>   minimums in this regard…I could pretty trivially route them towards
>>   the larger set that I have right now if you think magic will result :)
>>   Ed
>>   PS thanks though..for whatever reason the type of errors I was getting
>>   didn’t naturally steer my mind towards cpu/io binding.
>> 
>> On May 24, 2017, at 12:57 AM, Klement Sekera -X (ksekera - PANTHEON
>> TECHNOLOGIES at Cisco) <[1]ksek...@cisco.com> wrote:
>> Hi Ed,
>> 
>> how fast are your boxes? And how many cores? The BFD tests struggle to
>> meet
>> the aggresive timings on slower boxes...
>> 
>> Thanks,
>> Klement
>> 
>> Quoting Ed Kern (ejk) (2017-05-23 20:43:55)
>> 
>> No problem.
>> If anyone is curious in rubbernecking the accident that is the
>>   current
>> test-all (at least for my build system)
>> adding a comment of
>> testall
>> SHOULD trigger and fire it off on my end.
>> make it all pass and you win a beer (or beverage of your choice)  
>> Ed
>> 
>>   On May 23, 2017, at 11:34 AM, Dave Wallace
>>   <[1][2]dwallac...@gmail.com>
>>   wrote:
>>   Ed,
>> 
>>   Thanks for adding this to the shadow build system.  Real data on
>>   the
>>   cost and effectiveness of this will be most useful.
>> 
>>   -daw-
>>   On 5/23/2017 1:30 PM, Ed Kern (ejk) wrote:
>> 
>>   In the vpp-dev call a couple hours ago there was a discussion of
>>   running test-debug on a regular/default? basis.
>>   As a trial I’ve added a new job to the shadow build system:
>> 
>>   vpp-test-debug-master-ubuntu1604
>> 
>>   Will do a make test-debug,  as part of verify set, as an ADDITIONAL
>>   job.
>> 
>>   I gave a couple passes with test-all but can’t ever get a clean run
>>   with test-all (errors in test_bfd and test_ip6 ).
>>   I don’t think this is unusual or unexpected.  Ill leave it to someone
>>   else to say that ‘all’ throwing failures is a good thing.
>>   I’m happy to add another job for/with test-all if someone wants to
>>   actually debug those errors.
>> 
>>   flames, comments,concerns welcome..
>> 
>>   Ed
>> 
>>   PS Please note/remember that all these tests are non-voting regardless
>>   of success or failure.
>>   ___
>>   vpp-dev mailing list
>>   [2][3]vpp-dev@lists.fd.io
>>   [3][4]https://lists.fd.io/mailman/listinfo/vpp-dev
>> 
>>   References
>> 
>> Visible links
>> 1. [5]mailto:dwallac...@gmail.com
>> 2. [6]mailto:vpp-dev@lists.fd.io
>> 3. [7]https://lists.fd.io/mailman/listinfo/vpp-dev
>> 
>> References
>> 
>>   Visible links
>>   1. mailto:ksek...@cisco.com
>>   2. mailto:dwallac...@gmail.com
>>   3. mailto:vpp-dev@lists.fd.io
>>   4. https://lists.fd.io/mailman/listinfo/vpp-dev
>>   5. mailto:dwallac...@gmail.com
>>   6. mailto:vpp-dev@lists.fd.io
>>   7. https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Running VPP in a non-default network namespace

2017-05-24 Thread Luke, Chris
It may be extreme, but perhaps detect if any namespace is configured (whether 
we’re running in a namespace or in the default) and then fall back to the 
safest behavior (with a note in the startup output saying so).

Is it reasonable? We could add a flag that always binds every device found, for 
people who explicitly want the current behavior.

I suppose if we’re in the default ns we could iterate every other ns to ignore 
ports mapped to them but it feels fragile somehow.
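
FWIW, detecting the "VPP itself is inside a namespace" case is cheap from
userspace; a rough sketch (assuming procfs is mounted and PID 1 lives in the
default netns):

  # compare our network namespace with PID 1's
  if [ "$(readlink /proc/self/ns/net)" != "$(readlink /proc/1/ns/net)" ]; then
      echo "running in a non-default network namespace"
  fi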

Chris.


On 5/24/17, 04:33, "Damjan Marion"  wrote:

Yeah, I’m aware of this issue but I don’t have any good idea how to address 
it.

We can disable unbind if we detect that we are running inside a namespace, 
but that will not fix the problem in the opposite direction, where a specific 
interface is mapped to a namespace and vpp is running in the global one.

any ideas?


> On 24 May 2017, at 05:14, Luke, Chris  wrote:
> 
> Ah, I see what you mean.
> 
> The issue being that inside the namespace it cannot query the state of 
the Linux-bound interface (whether up/down) since the namespace doesn't have 
the interface. The behavior falls-back to slurping up all ports Linux says 
doesn't exist; this is at least in part to make sure it captures ports already 
unbound from the kernel.
> 
> I'd agree this is not acting in the manner of least surprise. Damjan is 
the best person to comment on this and whether a more consistent behavior can 
be crafted. One simple approach might be to detect we're in a namespace and 
only bind detected-down interfaces and explicitly provided PCI ID's.
> 
> Chris.
> 
>> -Original Message-
>> From: Renato Westphal [mailto:ren...@opensourcerouting.org]
>> Sent: Tuesday, May 23, 2017 22:01
>> To: Luke, Chris 
>> Cc: vpp-dev@lists.fd.io
>> Subject: Re: [vpp-dev] Running VPP in a non-default network namespace
>> 
>> Thank you Chris, disabling the DPDK plugin did the trick for me. My plan 
is to
>> use veth/AF_PACKET interfaces only.
>> 
>> I think I found a small problem in VPP though.
>> 
>> When you start VPP and the 'dpdk' section of the configuration file is 
empty,
>> DPDK snatches all physical interfaces that are administratively down.
>> 
>> This works ok when running VPP in the default netns. But when you run VPP
>> in a non-default netns, all physical interfaces are snatched regardless 
of their
>> administrative status.
>> 
>> Could you confirm if this is a bug? This inconsistent behavior was a 
source of
>> confusion to me.
>> 
>> Best Regards,
>> Renato.
>> 
>> On Tue, May 23, 2017 at 9:00 PM, Luke, Chris 
>> wrote:
>>> If you're using DPDK and all your physical interfaces are DPDK-capable, 
then
>> it's snatching the PCI device away from Linux. This has nothing to do 
with
>> Linux namespaces; ns can't prevent it from happening because it's 
working at
>> a different layer in the stack. The point of DPDK is to go straight to 
the
>> hardware.
>>> 
>>> The only time Linux namespace interactions come into play with VPP is
>>> with interfaces like 'tap' and 'host' (aka af_packet) which use
>>> syscalls to manipulate Linux network state (create ports, listen on a
>>> raw socket, etc) or anything else that uses Linux networking (console
>>> TCP connections, etc)
>>> 
>>> You can either limit which PCI devices VPP asks DPDK to bind to in the 
dpdk
>> section of the config:
>>> 
>>> dpdk {
>>>  dev 
>>>  ...
>>> }
>>> 
>>> or, if only inter-ns is what you want to do, just disable DPDK 
altogether:
>>> 
>>> plugins {
>>>  plugin dpdk_plugin.so { disable }
>>> }
>>> 
>>> Chris.
>>> 
>>> 
 -Original Message-
 From: vpp-dev-boun...@lists.fd.io
 [mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Renato Westphal
 Sent: Tuesday, May 23, 2017 19:38
 To: vpp-dev@lists.fd.io
 Subject: [vpp-dev] Running VPP in a non-default network namespace
 
 Hi all,
 
 For learning purposes, I'm trying to set up a test topology using
 multiple instances of VPP running in different network namespaces.
 
 I see that there's documentation showing how to use VPP as a router
 between namespaces, but in all examples I found VPP is always running
 in the default network namespace.
 
 If I try to run VPP in a non-default network namespace, something
 weird
 happens: all interfaces from the default netns disappear!
 
 Does anyone know what might be the cause of this?
 
 I'm using the official vagrant VM (Ubuntu 16.04) and VPP v17.04.1,
 and I can see that the same problem occurs on master and on older
>> releases as well.
 
 More details about the issue below.
  

Re: [vpp-dev] VPP has no interfaces after update from 1704 to 1707 master

2017-05-24 Thread Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES at Cisco)
Hi,

It seems that your last patch fixed the issue. I am now able to bind both unbound 
interfaces and also interfaces that were bound to the kernel but were down. Thanks!
[vpp CLI startup banner]

vpp# sh int
              Name               Idx       State          Counter          Count
TenGigabitEthernet10/0/0          3        down       rx-miss                3
TenGigabitEthernet11/0/0          4        down       rx-miss                3
TenGigabitEthernet12/0/0          5        down       rx-miss                3
TenGigabitEthernet7/0/0           1        down       rx-miss                1
TenGigabitEthernet8/0/0           2        down       rx-miss               34
local0                            0        down
vpp#


Michal

-Original Message-
From: Damjan Marion (damarion) 
Sent: Wednesday, May 24, 2017 4:40 PM
To: Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES at Cisco) 

Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP has no interfaces after update from 1704 to 1707 
master


I think i fixed the issue. New version is in gerrit. If you still see the crash 
please try to capture backtrace.

Thanks,

Damjan

> On 24 May 2017, at 16:29, Damjan Marion (damarion)  wrote:
> 
> 
> Any chance you can capture backtrace?
> 
> just "gdb —args vpp unix interactive”
> 
> Thanks,
> 
> Damjan
> 
>> On 24 May 2017, at 13:10, Michal Cmarada -X (mcmarada - PANTHEON 
>> TECHNOLOGIES at Cisco)  wrote:
>> 
>> Hi,
>> 
>> I tried your patch, I built rpms from it and then reinstalled vpp with those 
>> rpms. I ensured that the interface was not bound to kernel or dpdk. But I 
>> got Segmentation fault. See output:
>> 
>> [root@overcloud-novacompute-1 ~]# dpdk-devbind --status
>> 
>> Network devices using DPDK-compatible driver 
>> 
>> 
>> 
>> Network devices using kernel driver
>> ===
>> :06:00.0 'VIC Ethernet NIC' if=enp6s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> :09:00.0 'VIC Ethernet NIC' if=enp9s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> :0a:00.0 'VIC Ethernet NIC' if=enp10s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> :0f:00.0 'VIC Ethernet NIC' if=enp15s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> :10:00.0 'VIC Ethernet NIC' if=enp16s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> :11:00.0 'VIC Ethernet NIC' if=enp17s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> :12:00.0 'VIC Ethernet NIC' if=enp18s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> 
>> Other network devices
>> =
>> :07:00.0 'VIC Ethernet NIC' unused=enic,vfio-pci,uio_pci_generic
>> :08:00.0 'VIC Ethernet NIC' unused=enic,vfio-pci,uio_pci_generic
>> 
>> Crypto devices using DPDK-compatible driver 
>> ===
>> 
>> 
>> Crypto devices using kernel driver
>> ==
>> 
>> 
>> Other crypto devices
>> 
>> 
>> 
>> 
>> [root@overcloud-novacompute-1 ~]# vpp unix interactive
>> vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
>> load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control 
>> Lists)
>> load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane 
>> Development Kit (DPDK))
>> load_one_plugin:184: Loaded plugin: flowperpkt_plugin.so (Flow per 
>> Packet)
>> load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
>> load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator 
>> addressing for IPv6)
>> load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
>> load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
>> load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
>> load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid 
>> Deployment on IPv4 Infrastructure (RFC5969))
>> load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory 
>> Interface (experimetal))
>> load_one_plugin:184: Loaded plugin: snat_plugin.so (Network Address 
>> Translation)
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/flowperpkt_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
>> load_one_plugin:63: Loaded plugin

Re: [vpp-dev] The Case Of the Missing API Definition

2017-05-24 Thread Jon Loeliger
On Wed, May 24, 2017 at 2:00 AM,   wrote:
> Hi Jon,

Hi Ole,

> Thanks for the poetry! ;-)

Most welcome.

> This is me in 01384fe
> Apologies for that. Next time I see you let me bring both band aid and 
> whiskey.

Apology accepted!

> To my excuse, there has been this list of "broken" APIs maintained by the
> Python and Java language binding implementors.

See?  Java is bad for you. :-)

> It's been on the todo list for a long time and I finally found some time to
> deal with them. Obviously not everyone was aware of those planned
> changes. I did include those I thought affected as code reviewers. #fail.

It's OK, really.  I've been making a pretty systematic march
through many of the VPP API areas, so there's no telling what
or where I'll be poking next... :-)

> I feel your pain. How can we make this better?
> 1) Never change APIs, regardless the reason
> 2) Announce and discuss changes a priori. Separate mailing list for
> API changes?
> {vpp-users, vpp-api-announce}
> 3) ...

This is a really good question.  I confess it is also a difficult one.
We certainly can't abide by 1) Never change APIs.  That is just not
a realistic approach.

I follow the vpp-dev list pretty regularly these days, and I try
to watch the upcoming Gerrit patches and reviews.  And I try
to keep our VPP repo up-to-date WRT the fd.io Git repo; I pull
maybe every couple days or so.

So for me, I am willing to suffer a bit of tip-of-tree development
pain and angst.  Even fixing this API call removal wasn't that
difficult for our code.

I think the thing that was most troubling for me was getting it
pretty much without warning.  So I think at the very least, it
would be good to make a statement on the vpp-dev list.

But when?  Certainly it would be nice if it were in advance of
the patch being committed.  I realize that may not always be
possible.  But some indication on the list as it is being committed
would also be nice.  Maybe even some indication from the patch
author on list like "Hey, I have a patch in review that removes
*this* API call.  If you are using it you will need to change your
code like _this_."

The process of adding API calls, and even changing an existing
one is easier, of course.  But still, some advance notice would
be appreciated.

How hard would it be to have a Jenkins nightly build job that
diff'ed yesterday's complete API list versus today's list and
generated and sent that diff mail?  Dunno.  Just an idea.
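
Something along these lines could be a starting point (purely illustrative; the
paths, snapshot location and list address are assumptions, and the .api files
would probably need normalizing before a clean diff):

  # collect today's API definitions and diff against yesterday's snapshot
  find src -name '*.api' -exec cat {} + > /tmp/api-today.txt
  diff -u /var/cache/vpp-api/yesterday.txt /tmp/api-today.txt | \
      mail -s '[vpp-dev] nightly API diff' vpp-dev@lists.fd.io
  cp /tmp/api-today.txt /var/cache/vpp-api/yesterday.txt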

> Do we reach everyone using vpp on  vpp-dev and a heads-up there
> would suffice?

Quite possibly.  I don't really have a feel for how many people
use each of the various APIs for active development these days.
It may very well justify a specific vpp-api-discussion list or so.

> Cheers,
> Ole

HTH,
jdl
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] DPDK Crypto Plugin: rte_cryptodev_enqueue_burst does not return Decrypted packets

2017-05-24 Thread Avinash Gonsalves
Hi Sergio,


Right, without the change to dpdk.am, it builds correctly with the
crypto libraries, but the plugin fails to load.
After the change, I'm able to load the plugin, but the enqueue_crypto
call does not return; no crypto error ops counters are incremented
either.

Building it using the "make release".


-Avinash



Hi,

So without the change to dpdk.am, does the build complete correctly
(including both crypto libraries) but you fail to load the plugin?
Are you building a VPP release or from master?

Sergio


On Mon, May 22, 2017 at 9:51 AM, Avinash Gonsalves <
avinash.gonsal...@gmail.com> wrote:

> Need help with the  DPDK crypto plugin,
>
>
> After I moved to the DPDK Crypto plugin, and built it with:
>
> make vpp_uses_dpdk_cryptodev_sw=yes build
>
>
> On running VPP, I faced the following error:
>
> load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
>
> load_one_plugin:142:
> /vpp/build-root/install-vpp-native/vpp/lib64/vpp_plugins/dpdk_plugin.so:
> undefined symbol: aesni_gcm128_init
>
> load_one_plugin:143: Failed to load plugin 'dpdk_plugin.so'
>
> Aborted (core dumped)
>
>
> I fixed this one by modifying it to use a shared lib,
>
> --- a/fastpath/vpp/vpp-src/src/plugins/dpdk.am
>
> +++ b/fastpath/vpp/vpp-src/src/plugins/dpdk.am
>
> @@ -17,9 +17,10 @@ vppplugins_LTLIBRARIES += dpdk_plugin.la
>
>  dpdk_plugin_la_LDFLAGS = $(AM_LDFLAGS) -Wl,--whole-archive,-l:
> libdpdk.a,--no-whole-archive
>
>  if WITH_DPDK_CRYPTO_SW
>
>  dpdk_plugin_la_LDFLAGS += -Wl,--exclude-libs,libIPSec_
> MB.a,-l:libIPSec_MB.a
>
> -dpdk_plugin_la_LDFLAGS += -Wl,--exclude-libs,libisal_
> crypto.a,-l:libisal_crypto.a
>
> -endif
>
> +dpdk_plugin_la_LDFLAGS += -Wl,-lm,-ldl,-lisal_crypto
>
> +else
>
>  dpdk_plugin_la_LDFLAGS += -Wl,-lm,-ldl
>
> +endif
>
>
> After this change, rte_cryptodev_enqueue_burst does not return
> decrypted packets.
> Is there any configuration that I'm missing?
>
>
> This is my system configuration:
>
> No LSB modules are available.
> Distributor ID:  Ubuntu
> Description: Ubuntu 16.04.1 LTS
> Release: 16.04
> Codename:  xenial
>
> # uname -a
>
> Linux VPP 4.4.0-75-generic #96-Ubuntu SMP Thu Apr 20 09:56:33 UTC 2017
> x86_64 x86_64 x86_64 GNU/Linux
> *# uname -a*
>
> *Linux VPP 4.4.0-75-generic #96-Ubuntu SMP Thu Apr 20 09:56:33 UTC 2017
> x86_64 x86_64 x86_64 GNU/Linux*
>
>
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] shadow build system change adding test-debug job

2017-05-24 Thread Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco)
I know that the functional BFD tests passed so unless there is a bug in
the tests, the failures are pretty much timing issues. From my
experience the load is the culprit as the BFD tests test interactive
sessions, which need to be kept alive. The timings currently are set at
300ms and for most tests two keep-alives can be missed before the session
goes down on vpp side and asserts start failing. While this might seem
like ample time, especially on loaded systems there is a high chance
that at least one test will derp ...

I've also seen derps even on idle systems, where a select() call (used
by python in its own sleep() implementation) with timeout of 100ms returns
after 1-3 seconds.

Try running the bfd tests only (make test-all TEST=bfd) while no other tasks
are running - I think they should pass on your box just fine.

Thanks,
Klement

Quoting Ed Kern (ejk) (2017-05-24 16:27:10)
>right now its a VERY intentional mix…but depending on loading I could
>easily see this coming up if those timings are strict.  
>To not dodge your question max loading on my slowest node would be 3
>concurrent builds on an Xeon™ E3-1240 v3 (4 cores @ 3.4Ghz)
>  yeah yeah stop laughing…..Do you have suggested or even guesstimate
>minimums in this regard…I could pretty trivially route them towards
>the larger set that I have right now if you think magic will result :)
>Ed
>PS thanks though..for whatever reason the type of errors I was getting
>didn’t naturally steer my mind towards cpu/io binding.
> 
>  On May 24, 2017, at 12:57 AM, Klement Sekera -X (ksekera - PANTHEON
>  TECHNOLOGIES at Cisco) <[1]ksek...@cisco.com> wrote:
>  Hi Ed,
> 
>  how fast are your boxes? And how many cores? The BFD tests struggle to
>  meet
>  the aggressive timings on slower boxes...
> 
>  Thanks,
>  Klement
> 
>  Quoting Ed Kern (ejk) (2017-05-23 20:43:55)
> 
>  No problem.
>  If anyone is curious in rubbernecking the accident that is the
>current
>  test-all (at least for my build system)
>  adding a comment of
>  testall
>  SHOULD trigger and fire it off on my end.
>  make it all pass and you win a beer (or beverage of your choice)  
>  Ed
> 
>On May 23, 2017, at 11:34 AM, Dave Wallace
><[1][2]dwallac...@gmail.com>
>wrote:
>Ed,
> 
>Thanks for adding this to the shadow build system.  Real data on
>the
>cost and effectiveness of this will be most useful.
> 
>-daw-
>On 5/23/2017 1:30 PM, Ed Kern (ejk) wrote:
> 
>In the vpp-dev call a couple hours ago there was a discussion of
>running test-debug on a regular/default? basis.
>As a trial I’ve added a new job to the shadow build system:
> 
>vpp-test-debug-master-ubuntu1604
> 
>Will do a make test-debug,  as part of verify set, as an ADDITIONAL
>job.
> 
>I gave a couple passes with test-all but can’t ever get a clean run
>with test-all (errors in test_bfd and test_ip6 ).
>I don’t think this is unusual or unexpected.  Ill leave it to someone
>else to say that ‘all’ throwing failures is a good thing.
>I’m happy to add another job for/with test-all if someone wants to
>actually debug those errors.
> 
>flames, comments,concerns welcome..
> 
>Ed
> 
>PS Please note/remember that all these tests are non-voting regardless
>of success or failure.
>___
>vpp-dev mailing list
>[2][3]vpp-dev@lists.fd.io
>[3][4]https://lists.fd.io/mailman/listinfo/vpp-dev
> 
>References
> 
>  Visible links
>  1. [5]mailto:dwallac...@gmail.com
>  2. [6]mailto:vpp-dev@lists.fd.io
>  3. [7]https://lists.fd.io/mailman/listinfo/vpp-dev
> 
> References
> 
>Visible links
>1. mailto:ksek...@cisco.com
>2. mailto:dwallac...@gmail.com
>3. mailto:vpp-dev@lists.fd.io
>4. https://lists.fd.io/mailman/listinfo/vpp-dev
>5. mailto:dwallac...@gmail.com
>6. mailto:vpp-dev@lists.fd.io
>7. https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP has no interfaces after update from 1704 to 1707 master

2017-05-24 Thread Damjan Marion (damarion)

I think I fixed the issue. A new version is in gerrit. If you still see the crash,
please try to capture a backtrace.

Thanks,

Damjan

> On 24 May 2017, at 16:29, Damjan Marion (damarion)  wrote:
> 
> 
> Any chance you can capture backtrace?
> 
> just "gdb —args vpp unix interactive”
> 
> Thanks,
> 
> Damjan
> 
>> On 24 May 2017, at 13:10, Michal Cmarada -X (mcmarada - PANTHEON 
>> TECHNOLOGIES at Cisco)  wrote:
>> 
>> Hi,
>> 
>> I tried your patch, I built rpms from it and then reinstalled vpp with those 
>> rpms. I ensured that the interface was not bound to kernel or dpdk. But I 
>> got Segmentation fault. See output:
>> 
>> [root@overcloud-novacompute-1 ~]# dpdk-devbind --status
>> 
>> Network devices using DPDK-compatible driver
>> 
>> 
>> 
>> Network devices using kernel driver
>> ===
>> :06:00.0 'VIC Ethernet NIC' if=enp6s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> :09:00.0 'VIC Ethernet NIC' if=enp9s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> :0a:00.0 'VIC Ethernet NIC' if=enp10s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> :0f:00.0 'VIC Ethernet NIC' if=enp15s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> :10:00.0 'VIC Ethernet NIC' if=enp16s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> :11:00.0 'VIC Ethernet NIC' if=enp17s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> :12:00.0 'VIC Ethernet NIC' if=enp18s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> 
>> Other network devices
>> =
>> :07:00.0 'VIC Ethernet NIC' unused=enic,vfio-pci,uio_pci_generic
>> :08:00.0 'VIC Ethernet NIC' unused=enic,vfio-pci,uio_pci_generic
>> 
>> Crypto devices using DPDK-compatible driver
>> ===
>> 
>> 
>> Crypto devices using kernel driver
>> ==
>> 
>> 
>> Other crypto devices
>> 
>> 
>> 
>> 
>> [root@overcloud-novacompute-1 ~]# vpp unix interactive
>> vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
>> load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
>> load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development 
>> Kit (DPDK))
>> load_one_plugin:184: Loaded plugin: flowperpkt_plugin.so (Flow per Packet)
>> load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
>> load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator 
>> addressing for IPv6)
>> load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
>> load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
>> load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
>> load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid 
>> Deployment on IPv4 Infrastructure (RFC5969))
>> load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface 
>> (experimetal))
>> load_one_plugin:184: Loaded plugin: snat_plugin.so (Network Address 
>> Translation)
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/flowperpkt_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/snat_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
>> vlib_pci_bind_to_uio: Skipping PCI device :06:00.0 as host interface 
>> enp6s0 is up
>> Segmentation fault
>> 
>> Michal
>> 
>> -Original Message-
>> From: Damjan Marion (damarion) 
>> Sent: Tuesday, May 23, 2017 6:39 PM
>> To: Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES at Cisco) 
>> 
>> Cc: Dave Barach (dbarach) ; Marco Varlese 
>> ; Kinsella, Ray ; 
>> vpp-dev@lists.fd.io
>> Subject: Re: [vpp-dev] VPP has no interfaces after update from 1704 to 1707 
>> master
>> 
>> 
>> Can you try following patch without manual bind:
>> 
>> https://gerrit.fd.io/r/#/c/6846
>> 
>> Thanks,
>> 
>> Damjan
>> 
>> 
>>> On 23 May 2017, at 15:53, Michal Cmarada -X (mcmarada - PANTHEON 
>>> TECHNOLOGIES at Cisco)  wrote:
>>> 
>>> Hi Dave,
>>> 
>>> The manual binding helped. I used uio_pci_generic and now VPP finally sees 
>>> them.

Re: [vpp-dev] VPP has no interfaces after update from 1704 to 1707 master

2017-05-24 Thread Damjan Marion (damarion)

Any chance you can capture backtrace?

 just "gdb —args vpp unix interactive”

Thanks,

Damjan

> On 24 May 2017, at 13:10, Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES 
> at Cisco)  wrote:
> 
> Hi,
> 
> I tried your patch, I built rpms from it and then reinstalled vpp with those 
> rpms. I ensured that the interface was not bound to kernel or dpdk. But I got 
> Segmentation fault. See output:
> 
> [root@overcloud-novacompute-1 ~]# dpdk-devbind --status
> 
> Network devices using DPDK-compatible driver
> 
> 
> 
> Network devices using kernel driver
> ===
> :06:00.0 'VIC Ethernet NIC' if=enp6s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> :09:00.0 'VIC Ethernet NIC' if=enp9s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> :0a:00.0 'VIC Ethernet NIC' if=enp10s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> :0f:00.0 'VIC Ethernet NIC' if=enp15s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> :10:00.0 'VIC Ethernet NIC' if=enp16s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> :11:00.0 'VIC Ethernet NIC' if=enp17s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> :12:00.0 'VIC Ethernet NIC' if=enp18s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> 
> Other network devices
> =
> :07:00.0 'VIC Ethernet NIC' unused=enic,vfio-pci,uio_pci_generic
> :08:00.0 'VIC Ethernet NIC' unused=enic,vfio-pci,uio_pci_generic
> 
> Crypto devices using DPDK-compatible driver
> ===
> 
> 
> Crypto devices using kernel driver
> ==
> 
> 
> Other crypto devices
> 
> 
> 
> 
> [root@overcloud-novacompute-1 ~]# vpp unix interactive
> vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
> load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
> load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development 
> Kit (DPDK))
> load_one_plugin:184: Loaded plugin: flowperpkt_plugin.so (Flow per Packet)
> load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
> load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator 
> addressing for IPv6)
> load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
> load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
> load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
> load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment 
> on IPv4 Infrastructure (RFC5969))
> load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface 
> (experimetal))
> load_one_plugin:184: Loaded plugin: snat_plugin.so (Network Address 
> Translation)
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/flowperpkt_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/snat_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
> vlib_pci_bind_to_uio: Skipping PCI device :06:00.0 as host interface 
> enp6s0 is up
> Segmentation fault
> 
> Michal
> 
> -Original Message-
> From: Damjan Marion (damarion) 
> Sent: Tuesday, May 23, 2017 6:39 PM
> To: Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES at Cisco) 
> 
> Cc: Dave Barach (dbarach) ; Marco Varlese 
> ; Kinsella, Ray ; 
> vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] VPP has no interfaces after update from 1704 to 1707 
> master
> 
> 
> Can you try following patch without manual bind:
> 
> https://gerrit.fd.io/r/#/c/6846
> 
> Thanks,
> 
> Damjan
> 
> 
>> On 23 May 2017, at 15:53, Michal Cmarada -X (mcmarada - PANTHEON 
>> TECHNOLOGIES at Cisco)  wrote:
>> 
>> Hi Dave,
>> 
>> The manual binding helped. I used uio_pci_generic and now VPP finally sees 
>> them. Thanks.
>> 
>> Michal
>> 
>> -Original Message-
>> From: Dave Barach (dbarach)
>> Sent: Tuesday, May 23, 2017 3:39 PM
>> To: Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES at Cisco) 
>> ; Marco Varlese ; 
>> Kinsella, Ray ; vpp-dev@lists.fd.io
>> Subject: RE: [vpp-dev] VPP has no interfaces after update from 1704 to 
>> 1707

Re: [vpp-dev] shadow build system change adding test-debug job

2017-05-24 Thread Ed Kern (ejk)
right now its a VERY intentional mix…but depending on loading I could easily 
see this coming up if those timings are strict.

To not dodge your question max loading on my slowest node would be 3 concurrent 
builds on an Xeon™ E3-1240 v3 (4 cores @ 3.4Ghz)

  yeah yeah stop laughing…..Do you have suggested or even guesstimate minimums 
in this regard…I could pretty trivially route them towards
the larger set that I have right now if you think magic will result :)

Ed

PS thanks though..for whatever reason the type of errors I was getting didn’t 
naturally steer my mind towards cpu/io binding.



On May 24, 2017, at 12:57 AM, Klement Sekera -X (ksekera - PANTHEON 
TECHNOLOGIES at Cisco) mailto:ksek...@cisco.com>> wrote:

Hi Ed,

how fast are your boxes? And how many cores? The BFD tests struggle to meet
the aggressive timings on slower boxes...

Thanks,
Klement

Quoting Ed Kern (ejk) (2017-05-23 20:43:55)
  No problem.
  If anyone is curious in rubbernecking the accident that is the current
  test-all (at least for my build system)
  adding a comment of
  testall
  SHOULD trigger and fire it off on my end.
  make it all pass and you win a beer (or beverage of your choice)
  Ed

On May 23, 2017, at 11:34 AM, Dave Wallace 
<[1]dwallac...@gmail.com>
wrote:
Ed,

Thanks for adding this to the shadow build system.  Real data on the
cost and effectiveness of this will be most useful.

-daw-
On 5/23/2017 1:30 PM, Ed Kern (ejk) wrote:

In the vpp-dev call a couple hours ago there was a discussion of running 
test-debug on a regular/default? basis.
As a trial I’ve added a new job to the shadow build system:

vpp-test-debug-master-ubuntu1604

Will do a make test-debug,  as part of verify set, as an ADDITIONAL job.


I gave a couple passes with test-all but can’t ever get a clean run with 
test-all (errors in test_bfd and test_ip6 ).
I don’t think this is unusual or unexpected.  Ill leave it to someone else to 
say that ‘all’ throwing failures is a good thing.
I’m happy to add another job for/with test-all if someone wants to actually 
debug those errors.

flames, comments,concerns welcome..

Ed

PS Please note/remember that all these tests are non-voting regardless of 
success or failure.
___
vpp-dev mailing list
[2]vpp-dev@lists.fd.io
[3]https://lists.fd.io/mailman/listinfo/vpp-dev

References

  Visible links
  1. mailto:dwallac...@gmail.com
  2. mailto:vpp-dev@lists.fd.io
  3. https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP has no interfaces after update from 1704 to 1707 master

2017-05-24 Thread Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES at Cisco)
Hi,

I tried your patch, I built rpms from it and then reinstalled vpp with those 
rpms. I ensured that the interface was not bound to kernel or dpdk. But I got 
Segmentation fault. See output:

[root@overcloud-novacompute-1 ~]# dpdk-devbind --status

Network devices using DPDK-compatible driver



Network devices using kernel driver
===
:06:00.0 'VIC Ethernet NIC' if=enp6s0 drv=enic 
unused=vfio-pci,uio_pci_generic
:09:00.0 'VIC Ethernet NIC' if=enp9s0 drv=enic 
unused=vfio-pci,uio_pci_generic
:0a:00.0 'VIC Ethernet NIC' if=enp10s0 drv=enic 
unused=vfio-pci,uio_pci_generic
:0f:00.0 'VIC Ethernet NIC' if=enp15s0 drv=enic 
unused=vfio-pci,uio_pci_generic
:10:00.0 'VIC Ethernet NIC' if=enp16s0 drv=enic 
unused=vfio-pci,uio_pci_generic
:11:00.0 'VIC Ethernet NIC' if=enp17s0 drv=enic 
unused=vfio-pci,uio_pci_generic
:12:00.0 'VIC Ethernet NIC' if=enp18s0 drv=enic 
unused=vfio-pci,uio_pci_generic

Other network devices
=
:07:00.0 'VIC Ethernet NIC' unused=enic,vfio-pci,uio_pci_generic
:08:00.0 'VIC Ethernet NIC' unused=enic,vfio-pci,uio_pci_generic

Crypto devices using DPDK-compatible driver
===


Crypto devices using kernel driver
==


Other crypto devices




[root@overcloud-novacompute-1 ~]# vpp unix interactive
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit 
(DPDK))
load_one_plugin:184: Loaded plugin: flowperpkt_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator 
addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment 
on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface 
(experimetal))
load_one_plugin:184: Loaded plugin: snat_plugin.so (Network Address Translation)
load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/acl_test_plugin.so
load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/flowperpkt_test_plugin.so
load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/lb_test_plugin.so
load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/snat_test_plugin.so
load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
vlib_pci_bind_to_uio: Skipping PCI device :06:00.0 as host interface enp6s0 
is up
Segmentation fault

Michal

-Original Message-
From: Damjan Marion (damarion) 
Sent: Tuesday, May 23, 2017 6:39 PM
To: Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES at Cisco) 

Cc: Dave Barach (dbarach) ; Marco Varlese 
; Kinsella, Ray ; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP has no interfaces after update from 1704 to 1707 
master


Can you try following patch without manual bind:

https://gerrit.fd.io/r/#/c/6846

Thanks,

Damjan


> On 23 May 2017, at 15:53, Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES 
> at Cisco)  wrote:
> 
> Hi Dave,
> 
> The manual binding helped. I used uio_pci_generic and now VPP finally sees 
> them. Thanks.
> 
> Michal
> 
> -Original Message-
> From: Dave Barach (dbarach)
> Sent: Tuesday, May 23, 2017 3:39 PM
> To: Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES at Cisco) 
> ; Marco Varlese ; 
> Kinsella, Ray ; vpp-dev@lists.fd.io
> Subject: RE: [vpp-dev] VPP has no interfaces after update from 1704 to 
> 1707 master
> 
> Please attempt to bind the VIC device(s) manually - to uio_pci_generic - 
> using dpdk-devbind. 
> 
> Until / unless that works, there isn't a chance that vpp will drive the 
> devices. You may have better luck with the igb_uio kernel module, or not... 
> 
> Thanks… Dave
> 
> -Original Message-
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] 
> On Behalf Of Michal Cmarada -X (mcmarada - PANTHEON TECHNOL

Re: [vpp-dev] MPLS L3VPN PING FAILED

2017-05-24 Thread Neale Ranns (nranns)
Hi Xyxue,

The lookup was performed in FIB index 1– you must have used ‘set int ip table 
host-XXX YYY’ - but the route you added is in the default table.

If you want the routes in the same table as the interface do;
  Ip route add table YYY 192.168.3.0/24 via mpls-tunnel0 out-label 1023

Regards,
Neale

p.s. are you really constructing the L3VPN from a [full] mesh of MPLS tunnels, 
or is it LDP in the core?



From:  on behalf of 薛欣颖 
Date: Wednesday, 24 May 2017 at 09:09
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] MPLS L3VPN PING FAILED


Hi guys,

I have the following configuration:
mpls tunnel add via 2.1.1.1 host-eth1 out-label 1024
set int state mpls-tunnel0 up
ip route add 192.168.3.0/24 via mpls-tunnel0 out-label 1023

Ping from CE to PE ,and the PE drop it.

That is the fib :
192.168.3.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:34 buckets:1 uRPF:36 to:[15:1260]]
[0] [@11]: mpls-label:[3]:[1023:255:0:eos]
[@2]: mpls via 0.0.0.0  mpls-tunnel0:
  stacked-on:
[@5]: dpo-load-balance: [proto:mpls index:35 buckets:1 uRPF:-1 
to:[0:0] via:[15:1320]]
  [0] [@8]: mpls-label:[1]:[1024:255:0:neos]
  [@1]: mpls via 2.1.1.1 host-eth1: 00037ffe0e1a0d0050438847

The following is the trace info:
00:17:54:791606: af-packet-input
  af_packet: hw_if_index 1 next-index 4
tpacket2_hdr:
  status 0x1 len 98 snaplen 98 mac 66 net 80
  sec 0x16645 nsec 0x34a33284 vlan 0
00:17:54:791899: ethernet-input
  IP4: 2c:53:4a:02:91:95 -> 00:50:43:00:02:02
00:17:54:791956: ip4-input
  ICMP: 192.168.2.10 -> 192.168.3.10
tos 0x00, ttl 64, length 84, checksum 0x0886
fragment id 0xabbe, flags DONT_FRAGMENT
  ICMP echo_request checksum 0xae6a
00:17:54:792005: ip4-lookup
  fib 1 dpo-idx 1 flow hash: 0x
  ICMP: 192.168.2.10 -> 192.168.3.10
tos 0x00, ttl 64, length 84, checksum 0x0886
fragment id 0xabbe, flags DONT_FRAGMENT
  ICMP echo_request checksum 0xae6a
00:17:54:792062: ip4-drop
ICMP: 192.168.2.10 -> 192.168.3.10
  tos 0x00, ttl 64, length 84, checksum 0x0886
  fragment id 0xabbe, flags DONT_FRAGMENT
ICMP echo_request checksum 0xae6a
00:17:54:792110: error-drop
  ip4-input: ip4 adjacency drop

How can I solve the problem?

Thanks,
xyxue



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Running VPP in a non-default network namespace

2017-05-24 Thread Damjan Marion
Yeah, I’m aware of this issue but I don’t have any good idea how to address it.

We can disable unbind if we detect that we are running inside a namespace, but 
that will not fix the problem in the opposite direction, where a specific 
interface is mapped to a namespace and vpp is running in the global one.

any ideas?
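
For the detection part, one possible heuristic - sketched below purely as
an illustration, not an actual proposal, and only meaningful when pid 1
lives in the initial namespace - is to compare our own net namespace with
the one of pid 1:

/* Hedged sketch: report whether this process runs in a net namespace
 * different from the one pid 1 is in.  A namespace is identified by the
 * device/inode of its /proc/<pid>/ns/net file. */
#include <stdio.h>
#include <sys/stat.h>

static int
in_non_default_netns (void)
{
  struct stat self, init;

  if (stat ("/proc/self/ns/net", &self) < 0 ||
      stat ("/proc/1/ns/net", &init) < 0)
    return -1;  /* cannot tell, e.g. /proc/1 not readable */

  return self.st_dev != init.st_dev || self.st_ino != init.st_ino;
}

int
main (void)
{
  printf ("non-default netns: %d\n", in_non_default_netns ());
  return 0;
}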


> On 24 May 2017, at 05:14, Luke, Chris  wrote:
> 
> Ah, I see what you mean.
> 
> The issue being that inside the namespace it cannot query the state of the 
> Linux-bound interface (whether up/down) since the namespace doesn't have the 
> interface. The behavior falls back to slurping up all ports Linux says 
> don't exist; this is at least in part to make sure it captures ports 
> already unbound from the kernel.
> 
> I'd agree this is not acting in the manner of least surprise. Damjan is the 
> best person to comment on this and whether a more consistent behavior can be 
> crafted. One simple approach might be to detect we're in a namespace and only 
> bind detected-down interfaces and explicitly provided PCI ID's.
> 
> Chris.
> 
>> -Original Message-
>> From: Renato Westphal [mailto:ren...@opensourcerouting.org]
>> Sent: Tuesday, May 23, 2017 22:01
>> To: Luke, Chris 
>> Cc: vpp-dev@lists.fd.io
>> Subject: Re: [vpp-dev] Running VPP in a non-default network namespace
>> 
>> Thank you Chris, disabling the DPDK plugin did the trick for me. My plan is 
>> to
>> use veth/AF_PACKET interfaces only.
>> 
>> I think I found a small problem in VPP though.
>> 
>> When you start VPP and the 'dpdk' section of the configuration file is empty,
>> DPDK snatches all physical interfaces that are administratively down.
>> 
>> This works ok when running VPP in the default netns. But when you run VPP
>> in a non-default netns, all physical interfaces are snatched regardless of 
>> their
>> administrative status.
>> 
>> Could you confirm if this is a bug? This inconsistent behavior was a source 
>> of
>> confusion to me.
>> 
>> Best Regards,
>> Renato.
>> 
>> On Tue, May 23, 2017 at 9:00 PM, Luke, Chris 
>> wrote:
>>> If you're using DPDK and all your physical interfaces are DPDK-capable, then
>> it's snatching the PCI device away from Linux. This has nothing to do with
>> Linux namespaces; ns can't prevent it from happening because it's working at
>> a different layer in the stack. The point of DPDK is to go straight to the
>> hardware.
>>> 
>>> The only time Linux namespace interactions come into play with VPP is
>>> with interfaces like 'tap' and 'host' (aka af_packet) which use
>>> syscalls to manipulate Linux network state (create ports, listen on a
>>> raw socket, etc) or anything else that uses Linux networking (console
>>> TCP connections, etc)
>>> 
>>> You can either limit which PCI devices VPP asks DPDK to bind to in the dpdk
>> section of the config:
>>> 
>>> dpdk {
>>>  dev 
>>>  ...
>>> }
>>> 
>>> or, if only inter-ns is what you want to do, just disable DPDK altogether:
>>> 
>>> plugins {
>>>  plugin dpdk_plugin.so { disable }
>>> }
>>> 
>>> Chris.
>>> 
>>> 
 -Original Message-
 From: vpp-dev-boun...@lists.fd.io
 [mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Renato Westphal
 Sent: Tuesday, May 23, 2017 19:38
 To: vpp-dev@lists.fd.io
 Subject: [vpp-dev] Running VPP in a non-default network namespace
 
 Hi all,
 
 For learning purposes, I'm trying to set up a test topology using
 multiple instances of VPP running in different network namespaces.
 
 I see that there's documentation showing how to use VPP as a router
 between namespaces, but in all examples I found VPP is always running
 in the default network namespace.
 
 If I try to run VPP in a non-default network namespace, something
 weird
 happens: all interfaces from the default netns disappear!
 
 Does anyone know what might be the cause of this?
 
 I'm using the official vagrant VM (Ubuntu 16.04) and VPP v17.04.1,
 and I can see that the same problem occurs on master and on older
>> releases as well.
 
 More details about the issue below.
 
 1 - My VPP config:
 # cat /etc/vpp/startup.conf
 unix {
  nodaemon
  log /tmp/vpp.log
  full-coredump
 }
 
 api-trace {
  on
 }
 
 api-segment {
  gid vpp
 }
 
 2 - VPP running ok in the default netns:
 # vpp -c /etc/vpp/startup.conf
 vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
 load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control
 Lists)
 load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane
 Development Kit (DPDK))
 load_one_plugin:184: Loaded plugin: flowperpkt_plugin.so (Flow per
 Packet)
 load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
 load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator
 addressing for IPv6)
 load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
 load_one_plugin:114: Plugin dis

[vpp-dev] MPLS L3VPN PING FAILED

2017-05-24 Thread 薛欣颖

Hi guys,

I have the following configuration:
mpls tunnel add via 2.1.1.1 host-eth1 out-label 1024 
set int state mpls-tunnel0 up 
ip route add 192.168.3.0/24 via mpls-tunnel0 out-label 1023

Ping from CE to PE ,and the PE drop it.

That is the fib :
192.168.3.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:34 buckets:1 uRPF:36 to:[15:1260]]
[0] [@11]: mpls-label:[3]:[1023:255:0:eos]
[@2]: mpls via 0.0.0.0  mpls-tunnel0: 
  stacked-on:
[@5]: dpo-load-balance: [proto:mpls index:35 buckets:1 uRPF:-1 
to:[0:0] via:[15:1320]]
  [0] [@8]: mpls-label:[1]:[1024:255:0:neos]
  [@1]: mpls via 2.1.1.1 host-eth1: 00037ffe0e1a0d0050438847

The following is the trace info:
00:17:54:791606: af-packet-input
  af_packet: hw_if_index 1 next-index 4 


tpacket2_hdr:
  status 0x1 len 98 snaplen 98 mac 66 net 80
  sec 0x16645 nsec 0x34a33284 vlan 0
00:17:54:791899: ethernet-input
  IP4: 2c:53:4a:02:91:95 -> 00:50:43:00:02:02
00:17:54:791956: ip4-input
  ICMP: 192.168.2.10 -> 192.168.3.10
tos 0x00, ttl 64, length 84, checksum 0x0886
fragment id 0xabbe, flags DONT_FRAGMENT
  ICMP echo_request checksum 0xae6a
00:17:54:792005: ip4-lookup
  fib 1 dpo-idx 1 flow hash: 0x
  ICMP: 192.168.2.10 -> 192.168.3.10
tos 0x00, ttl 64, length 84, checksum 0x0886
fragment id 0xabbe, flags DONT_FRAGMENT
  ICMP echo_request checksum 0xae6a
00:17:54:792062: ip4-drop
ICMP: 192.168.2.10 -> 192.168.3.10
  tos 0x00, ttl 64, length 84, checksum 0x0886
  fragment id 0xabbe, flags DONT_FRAGMENT
ICMP echo_request checksum 0xae6a
00:17:54:792110: error-drop
  ip4-input: ip4 adjacency drop

How can I solve the problem?

Thanks,
xyxue


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP has no interfaces after update from 1704 to 1707 master

2017-05-24 Thread Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES at Cisco)
Yes, I can try that. Will send you more details after it is done.

Michal

-Original Message-
From: Damjan Marion (damarion) 
Sent: Tuesday, May 23, 2017 6:39 PM
To: Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES at Cisco) 

Cc: Dave Barach (dbarach) ; Marco Varlese 
; Kinsella, Ray ; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP has no interfaces after update from 1704 to 1707 
master


Can you try following patch without manual bind:

https://gerrit.fd.io/r/#/c/6846

Thanks,

Damjan


> On 23 May 2017, at 15:53, Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES 
> at Cisco)  wrote:
> 
> Hi Dave,
> 
> The manual binding helped. I used uio_pci_generic and now VPP finally sees 
> them. Thanks.
> 
> Michal
> 
> -Original Message-
> From: Dave Barach (dbarach)
> Sent: Tuesday, May 23, 2017 3:39 PM
> To: Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES at Cisco) 
> ; Marco Varlese ; 
> Kinsella, Ray ; vpp-dev@lists.fd.io
> Subject: RE: [vpp-dev] VPP has no interfaces after update from 1704 to 
> 1707 master
> 
> Please attempt to bind the VIC device(s) manually - to uio_pci_generic - 
> using dpdk-devbind. 
> 
> Until / unless that works, there isn't a chance that vpp will drive the 
> devices. You may have better luck with the igb_uio kernel module, or not... 
> 
> Thanks… Dave
> 
> -Original Message-
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] 
> On Behalf Of Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES at 
> Cisco)
> Sent: Tuesday, May 23, 2017 9:13 AM
> To: Marco Varlese ; Kinsella, Ray 
> ; vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] VPP has no interfaces after update from 1704 to 
> 1707 master
> 
> Hi,
> 
> I meant that they are in DOWN state in "ip link list":
> [root@overcloud-novacompute-1 ~]# ip link list
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode 
> DEFAULT qlen 1
>link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 2: enp6s0:  mtu 1500 qdisc mq state UP mode 
> DEFAULT qlen 1000
>link/ether 00:25:b5:00:01:50 brd ff:ff:ff:ff:ff:ff
> 3: enp7s0:  mtu 1500 qdisc noop state DOWN mode DEFAULT 
> qlen 1000
>link/ether 00:25:b5:00:01:4f brd ff:ff:ff:ff:ff:ff
> 4: enp8s0:  mtu 1500 qdisc noop state DOWN mode DEFAULT 
> qlen 1000
>link/ether 00:25:b5:00:01:4e brd ff:ff:ff:ff:ff:ff
> 
> I also tried to unbind them like you suggested. then the status of 
> dpdk-nicbind is:
> [root@overcloud-novacompute-1 tools]# dpdk_nic_bind --status
> 
> Network devices using DPDK-compatible driver 
> 
> 
> 
> Network devices using kernel driver
> ===
> :06:00.0 'VIC Ethernet NIC' if=enp6s0 drv=enic 
> unused=vfio-pci,uio_pci_generic *Active*
> :09:00.0 'VIC Ethernet NIC' if=enp9s0 drv=enic 
> unused=vfio-pci,uio_pci_generic *Active*
> :0a:00.0 'VIC Ethernet NIC' if=enp10s0 drv=enic 
> unused=vfio-pci,uio_pci_generic *Active*
> :0f:00.0 'VIC Ethernet NIC' if=enp15s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> :10:00.0 'VIC Ethernet NIC' if=enp16s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> :11:00.0 'VIC Ethernet NIC' if=enp17s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> :12:00.0 'VIC Ethernet NIC' if=enp18s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> 
> Other network devices
> =
> :07:00.0 'VIC Ethernet NIC' unused=enic,vfio-pci,uio_pci_generic
> :08:00.0 'VIC Ethernet NIC' unused=enic,vfio-pci,uio_pci_generic
> 
> [root@overcloud-novacompute-1 tools]# vpp unix interactive
> vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
> load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control 
> Lists)
> load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane 
> Development Kit (DPDK))
> load_one_plugin:184: Loaded plugin: flowperpkt_plugin.so (Flow per 
> Packet)
> load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
> load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator 
> addressing for IPv6)
> load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
> load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
> load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
> load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid 
> Deployment on IPv4 Infrastructure (RFC5969))
> load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory 
> Interface (experimetal))
> load_one_plugin:184: Loaded plugin: snat_plugin.so (Network Address 
> Translation)
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/flowperpkt_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
> lo

Re: [vpp-dev] The Case Of the Missing API Definition

2017-05-24 Thread otroan
Hi Jon,

Thanks for the poetry! ;-)

This is me in commit 01384fe.
Apologies for that. Next time I see you, let me bring both a band aid and whiskey.

In my defence, there has been a list of "broken" APIs maintained by the 
Python and Java language binding implementors. It's been on the todo list for a 
long time and I finally found some time to deal with them. Obviously not 
everyone was aware of those planned changes. I did include those I thought 
were affected as code reviewers. #fail.

I feel your pain. How can we make this better?
1) Never change APIs, regardless of the reason
2) Announce and discuss changes a priori. Separate mailing list for API changes?
{vpp-users, vpp-api-announce}
3) ...

Do we reach everyone using vpp on vpp-dev, and would a heads-up there suffice?

Cheers,
Ole






> On 23 May 2017, at 19:19, Jon Loeliger  wrote:
> 
> Folks,
> 
> I was casually walking down Update VPP Master Lane when I was
> suddenly attacked from behind by a case of the missing API call!
> I read vpp-dev mail daily, and I watch the Gerrit fervently, so I was
> pretty sure I wouldn't be blind-sided by this sort of Silent Gotcha.
> 
> But there was no mistaking it:  My API call bridge_domain_sw_if_details
> was gone.  And only two days ago too!  I was shocked.  Horrified, even.
> I knew the next build my code would fail.  There would be no updating
> to Top-Of-Tree VPP today.
> 
> What would I tell my boss?  *My* code was broken?  Surely you wouldn't
> expect me to fall on the "I'm sorry.  My code is broken." sword.  My own
> code!  Surely I could blame someone else?  I mean, what if there were
> some email from the developers?  A little heads-up that the API was on
> The Out and would soon go the way of Sonny Bono.  But no, no, there
> wasn't even a hint.
> 
> I was going to have to admit I failed to see this coming in the Gerrit 
> reviews.
> 
> And now, without even lunch, I would have to deduce what data used to be
> in that API call, and how it was cached in my VPP interface library, and yes,
> I'd have to scurry to find where that data was located now.
> 
> But how?  How could this be?  I lamented still.  I just knew last time *I*
> wanted an API interface change, I spent a week discussing it on the list,
> and, after deliberation a-plenty, a new API was needed, and then later,
> in fact after a complete release cycle, we could begin to discuss how the
> old API call might be deprecated and finally removed.  I longed for the day
> that we would finally make progress, content in the knowledge that we
> had not, in fact, blind-sided anyone with our API deprecation plan.
> 
> But those days are behind us now, and the future comes at us plenty fast.
> Commits are committed, and progress is progressed.  My scars are healing,
> and after all this water under and through the bridge, I have learned now to
> just laugh at these situations.
> 
> Ah, to be young again, and not have wasted my youth on backward compatibility
> and cheap Scotch.
> 
> jdl
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev