Re: [vpp-dev] Verify issues (GRE)

2018-11-29 Thread Neale Ranns via Lists.Fd.Io

Hi Ole,

I think this should fix the GRE tests:
  https://gerrit.fd.io/r/#/c/16272/

/Neale


-Original Message-
From:  on behalf of Ole Troan 
Date: Wednesday, 28 November 2018 at 19:55
To: vpp-dev 
Subject: [vpp-dev] Verify issues (GRE)

Guys,

The verify job has been unstable over the last few days.
We see some instability in the Jenkins build system, in the test harness 
itself, and in the tests.
On my 18.04 machine I’m seeing intermittent failures in GRE, GBP, DHCP, VCL.

It looks like Jenkins is functioning correctly now.
Ed and I are also testing a revert of all the changes made to the test 
framework itself over the last couple of days. A bit harsh, but we think this 
might be the quickest way back to some level of stability.

Then we need to fix the tests that are in themselves unstable.

Any volunteers to see if they can figure out why GRE fails?

Cheers,
Ole


GRE Test Case 

==
GRE IPv4 tunnel Tests                                  OK
GRE IPv6 tunnel Tests                                  OK
GRE tunnel L2 Tests                                    OK
19:37:47,505 Unexpected packets captured:
Packet #0:
0000  02 01 00 00 ff 02 02 fe 70 a0 6a d3 08 00 45 00   ........p.j...E.
0010  00 2a 00 01 00 00 3f 11 21 9f ac 10 01 01 ac 10   .*....?.!.......
0020  01 02 04 d2 04 d2 00 16 72 a9 34 33 36 39 20 33   ........r.4369 3
0030  20 33 20 2d 31 20 2d 31                           3 -1 -1

###[ Ethernet ]### 
  dst   = 02:01:00:00:ff:02
  src   = 02:fe:70:a0:6a:d3
  type  = IPv4
###[ IP ]### 
 version   = 4
 ihl   = 5
 tos   = 0x0
 len   = 42
 id= 1
 flags = 
 frag  = 0
 ttl   = 63
 proto = udp
 chksum= 0x219f
 src   = 172.16.1.1
 dst   = 172.16.1.2
 \options   \
###[ UDP ]### 
sport = 1234
dport = 1234
len   = 22
chksum= 0x72a9
###[ Raw ]### 
   load  = '4369 3 3 -1 -1'

** Ten more packets


###[ UDP ]### 
sport = 1234
dport = 1234
len   = 22
chksum= 0x72a9
###[ Raw ]### 
   load  = '4369 3 3 -1 -1'

** Ten more packets

Print limit reached, 10 out of 257 packets printed
19:37:47,770 REG: Couldn't remove configuration for object(s):
19:37:47,770 
GRE tunnel VRF Tests 
ERROR [ temp dir used by test case: /tmp/vpp-unittest-TestGRE-hthaHC ]


==
ERROR: GRE tunnel VRF Tests

--
Traceback (most recent call last):
  File "/vpp/16257/test/test_gre.py", line 61, in tearDown
super(TestGRE, self).tearDown()
  File "/vpp/16257/test/framework.py", line 546, in tearDown
self.registry.remove_vpp_config(self.logger)
  File "/vpp/16257/test/vpp_object.py", line 86, in remove_vpp_config
(", ".join(str(x) for x in failed)))
Exception: Couldn't remove configuration for object(s): 1:2.2.2.2/32


==
FAIL: GRE tunnel VRF Tests

--
Traceback (most recent call last):
  File "/vpp/16257/test/test_gre.py", line 787, in test_gre_vrf
remark="GRE decap packets in wrong VRF")
  File "/vpp/16257/test/vpp_pg_interface.py", line 264, in 
assert_nothing_captured
(self.name, remark))
AssertionError: Non-empty capture file present for interface pg0 (GRE decap 
packets in wrong VRF)
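
For reference, the unexpected packet can be reconstructed with scapy roughly as follows (a sketch assembled from the capture's dissection above; scapy is what the test framework itself uses to build packets):

    from scapy.layers.inet import IP, UDP
    from scapy.layers.l2 import Ether

    # The packet pg0 should never have seen: the GRE-decapped probe,
    # delivered into the wrong VRF (fields taken from the capture above).
    p = (Ether(src="02:fe:70:a0:6a:d3", dst="02:01:00:00:ff:02") /
         IP(src="172.16.1.1", dst="172.16.1.2", ttl=63, id=1) /
         UDP(sport=1234, dport=1234) /
         b"4369 3 3 -1 -1")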



Re: [vpp-dev] question about ROSEN MVPN

2018-11-28 Thread Neale Ranns via Lists.Fd.Io
Hi Xue,

To my knowledge it has not been tried nor tested. GRE interfaces today do not 
support a multicast destination address. However, other tunnel types (like 
VXLAN) do, so adding support shouldn't be too hard. After that, the mfib 
supports egress out of any interface type.

I also have a draft in the pipeline that supports recursing through an mfib 
entry, which will simplify multicast tunnel implementations.

Regards,
neale


From:  on behalf of xyxue 
Date: Wednesday, 28 November 2018 at 13:06
To: vpp-dev 
Subject: [vpp-dev] question about ROSEN MVPN

Hi guys,

Does VPP support forwarding for ROSEN MVPN (described in RFC 6037)?
If not, is it on the VPP roadmap?

Thank you very much for your reply.

Thanks,
Xue



Re: [vpp-dev] question about multicast mpls

2018-11-28 Thread Neale Ranns via Lists.Fd.Io
Hi Xue,

MPLS multicast has been supported for a while. Please see the unit tests for 
examples: test/test_mpls.py test_mcast_*()
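
For a quick feel of the packets those tests exercise, here is a hedged scapy sketch (the label value and addresses are invented for illustration; they are not taken from test_mpls.py):

    from scapy.contrib.mpls import MPLS
    from scapy.layers.inet import IP, UDP
    from scapy.layers.l2 import Ether

    # A labelled packet whose replication is keyed on the MPLS label
    # (hypothetical label and addresses).
    p = (Ether(dst="01:00:5e:01:01:01", src="02:02:02:02:02:02") /
         MPLS(label=3300, ttl=64, s=1) /
         IP(src="10.0.0.1", dst="232.1.1.1") /
         UDP(sport=1234, dport=1234))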

Regards,
Neale


From:  on behalf of xyxue 
Date: Wednesday, 28 November 2018 at 13:04
To: vpp-dev 
Subject: [vpp-dev] question about multicast mpls


Hi guys,

I found "multicast" in the mpls cli. Is the vpp support multicast mpls now ?
Is there any example show about multicast mpls?

Thank you very much for your reply.

Thanks,
Xue



Re: [vpp-dev] Getting crash while running load on VPP18.01 for 6 hours

2018-11-21 Thread Neale Ranns via Lists.Fd.Io

Hi Chetan,

The null-node should not be encountered under normal operation. The null-node 
always has an index/value of 0, so if the previous node has not been properly 
configured, or the arc taken from that node was wrong, the packet will likely 
end up at the null-node.
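
A tiny Python sketch (purely illustrative, not VPP code) of why an unconfigured next-index sends packets there:

    # Node index 0 is the null-node; a next-index that nobody ever set
    # (i.e. left at its zero initial value) silently dispatches to it.
    nodes = {0: "null-node", 1: "ip4-lookup", 2: "ip4-rewrite"}

    def dispatch(next_indices, arc):
        # A wrong arc, or an entry that was never filled in, resolves to 0.
        return nodes[next_indices.get(arc, 0)]

    print(dispatch({"ip4": 1}, "ip4"))    # ip4-lookup
    print(dispatch({"ip4": 1}, "mpls"))   # null-node: unconfigured arc
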
To debug this I would suggest you run a debug image and enable the packet 
trajectory tracer (grep VLIB_BUFFER_TRACE_TRAJECTORY) so you can see where 
these packets originate from.

Regards,
Neale

From:  on behalf of chetan bhasin 
Date: Wednesday, 21 November 2018 at 06:16
To: "Dave Barach (dbarach)" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Getting crash while running load on VPP18.01 for 6 hours

Hi Dave,

Thanks a lot.

One more query: what is the purpose of null_node, and in what scenario is 
null_node hit?

Thanks,
Chetan Bhasin

On Tue, Nov 20, 2018 at 10:57 PM Dave Barach (dbarach) 
<dbar...@cisco.com> wrote:
See 
https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code#Pushing_Code_with_git_review

From: chetan bhasin <chetan.bhasin...@gmail.com>
Sent: Tuesday, November 20, 2018 11:43 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>
Subject: Re: [vpp-dev] Getting crash while running load on VPP18.01 for 6 hours

Thanks Dave!

I will try with DEBUG too.

Just want to understand the procedure for checking in patches. We have 
actually made several fixes in VPP, so we are planning to check in all of them.

Thanks,
Chetan Bhasin

On Tue, Nov 20, 2018, 18:02 Dave Barach (dbarach) 
<dbar...@cisco.com> wrote:
Several suggestions:

· Try a debug image (PLATFORM=vpp TAG=vpp_debug) so the crash will be more enlightening
· Switch to 18.10. 18.01 is no longer supported. We don't use the mheap.c memory allocator anymore, and so on and so forth.
· See https://wiki.fd.io/view/VPP/BugReports


From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of chetan bhasin
Sent: Tuesday, November 20, 2018 5:31 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Getting crash while running load on VPP18.01 for 6 hours

Hi Vpp-dev,

We are facing issues while running load for ~6 hours; we are getting the crash below.

Your suggestions are really appreciated.


#1  0x2b00b990e8f8 in __GI_abort () at abort.c:90
#2  0x00405f23 in os_panic () at 
/bfs-build/build-area.49/builds/LinuxNBngp_7.X_RH7/2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vpp/vnet/main.c:268
#3  0x2b00b8d60710 in mheap_put (v=0x2b00ba3d8000, uoffset=2382207096) at 
/bfs-build/build-area.49/builds/LinuxNBngp_7.X_RH7/2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vppinfra/mheap.c:798
#4  0x2b00b8d8959e in clib_mem_free (p=0x2b00c8ba84a0) at 
/bfs-build/build-area.49/builds/LinuxNBngp_7.X_RH7/2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vppinfra/mem.h:213
#5  vec_resize_allocate_memory (v=, 
length_increment=length_increment@entry=1, data_bytes=, 
header_bytes=, header_bytes@entry=0, 
data_align=data_align@entry=4) at 
/bfs-build/build-area.49/builds/LinuxNBngp_7.X_RH7/2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vppinfra/vec.c:96
#6  0x2b00b79e899d in _vec_resize (data_align=, 
header_bytes=, data_bytes=, 
length_increment=, v=) at 
/nfs-bfs/workspace/build-data/../src/vppinfra/vec.h:142
#7  get_frame_size_info (n_scalar_bytes=, 
n_vector_bytes=, nm=0x2b00c87a3160, nm=0x2b00c87a3160) at 
/nfs-bfs/workspace//build-data/../src/vlib/main.c:107
#8  0x2b00b79e8d79 in vlib_frame_free (vm=vm@entry=0x2b00c87a3050, 
r=r@entry=0x2b00c86ca368, f=f@entry=0x2b014b2ecb80) at 
/nfs-bfs//vpp_1801/build-data/../src/vlib/main.c:221
#9  0x2b00b79fe6e6 in null_node_fn (vm=0x2b00c87a3050, node=0x2b00c86ca368, 
frame=0x2b014b2ecb80) at /nfs-bfs/workspace/build-data/../src/vlib/node.c:512

Thanks,
Chetan


Re: [vpp-dev] About FRR(fast re-routing)

2018-11-16 Thread Neale Ranns via Lists.Fd.Io

Hi,

We do not support FRR, nor is there currently a plan to.

However, if your label/tunnel/route has only one path, you can achieve a 
similar result to FRR by installing the primary path with a better (lower) 
preference than the backup path. VPP will then cut over when the primary path 
goes down.
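
In pseudo-Python, the selection rule looks roughly like this (an illustrative sketch; the class and names are invented):

    # Among resolved paths, the lowest preference value wins; when the
    # primary goes down it stops being resolved and the backup takes over.
    class Path:
        def __init__(self, nh, preference, resolved=True):
            self.nh, self.preference, self.resolved = nh, preference, resolved

    def active_path(paths):
        up = [p for p in paths if p.resolved]
        return min(up, key=lambda p: p.preference) if up else None

    primary, backup = Path("10.0.0.1", 0), Path("10.0.1.1", 1)
    print(active_path([primary, backup]).nh)  # 10.0.0.1 (primary)
    primary.resolved = False                  # primary path goes down
    print(active_path([primary, backup]).nh)  # 10.0.1.1 (cutover to backup)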

/neale

From:  on behalf of 倪宝景 
Date: Friday, 16 November 2018 at 09:34
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] About FRR(fast re-routing)

Dear Mr/Miss/Ms:
 I am Baojing Ni, working at FIBERHOME TELECOMMUNICATION TECHNOLOGIES Co., Ltd.
 I have a question to consult you on:
 Do you have a plan for FRR (fast re-routing) MPLS traffic engineering in VPP?

I am looking forward to your answer.
Thank you.



Re: [vpp-dev] ip4-load-balance

2018-11-14 Thread Neale Ranns via Lists.Fd.Io
Hi Ray,

By way of explanation: without the interface, the route is recursive, i.e. 
20.20.20.0/24 is sent via 1.1.1.2. So the forwarding can be thought of as 
happening in two stages: first the lookup for the packet's destination, which 
matches 20.20.20.0/24, then the 'lookup' on the result, 1.1.1.2. One of the 
prime functions of the FIB is to resolve and cache that second lookup during 
route programming, so the data-plane can simply follow the result. The 
ip4-load-balance node is where this occurs.
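
A small Python sketch of the idea (illustrative only; these are not VPP's actual data structures):

    # Stage 1: 20.20.20.0/24 resolves to a via-address, 1.1.1.2.
    # Stage 2: the via-address resolves to an adjacency (interface/rewrite).
    routes = {"20.20.20.0/24": {"via": "1.1.1.2"}}
    adjacencies = {"1.1.1.2": "TenGigabitEthernet83/0/1"}

    # Route programming: FIB resolves and caches the second lookup once.
    for route in routes.values():
        route["cached_adj"] = adjacencies[route["via"]]

    # Data-plane (ip4-load-balance): follow the cached result, no 2nd lookup.
    print(routes["20.20.20.0/24"]["cached_adj"])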

/neale


-Original Message-
From: Ray Kinsella 
Date: Tuesday, 13 November 2018 at 19:09
To: "vpp-dev@lists.fd.io" , "Neale Ranns (nranns)" 

Subject: Re: [vpp-dev] ip4-load-balance

Mystery solved,

I was missing the interface on the IP Route.

ip route add count 1 20.20.20.0/24 via 1.1.1.2 TenGigabitEthernet83/0/1

Ray K

On 13/11/2018 15:39, Ray Kinsella wrote:
> Folks,
> 
> I have been configuring my system to get something comparable to CSIT 
> performance and I am a few mpps off at the moment, using FD.io VPP 18.07.
> 
> I duplicated the IPv4 Base and Scale Test Cases (and environment) 
> locally and I end up with an extra graph node, 'ip4-load-balance', in my 
> pipeline.
> 
> CSIT records the following pipeline in Test Operation Data.
> https://docs.fd.io/csit/rls1807/report/test_operational_data
> 
> Thread 1 vpp_wk_0 (lcore 2)
> Time 5.7, average vectors/node 245.79, last 128 main loops 13.03 per node 151.64
> vector rates in 1.2082e7, out 1.2082e7, drop 0.0000e0, punt 0.0000e0
>              Name                   State     Calls    Vectors  Suspends  Clocks  Vectors/Call
> TenGigabitEtherneta/0/0-output      active   140125  34429184         0  8.41e0        245.70
> TenGigabitEtherneta/0/0-tx          active   140125  34429184         0  4.09e1        245.70
> TenGigabitEtherneta/0/1-output      active   140071  34428928         0  8.58e0        245.79
> TenGigabitEtherneta/0/1-tx          active   140071  34428928         0  3.93e1        245.79
> dpdk-input                          polling  140580  68858112         0  6.07e1        489.81
> ip4-input-no-checksum               active   280127  68858112         0  2.05e1        245.81
> ip4-lookup                          active   280127  68858112         0  3.03e1        245.81
> ip4-rewrite                         active   280127  68858112         0  2.92e1        245.81
> 
> 
> I get the following pipeline, with the additional graph node - 
> ip4-load-balance.
> 
> Thread 2 vpp_wk_1 (lcore 20)
> Time 188.9, average vectors/node 256.00, last 128 main loops 14.00 per node 256.00
> vector rates in 9.3287e6, out 9.3287e6, drop 0.0000e0, punt 0.0000e0
>              Name                   State      Calls      Vectors  Suspends  Clocks  Vectors/Call
> TenGigabitEthernet83/0/1-outpu      active   6881842  1761751552         0  8.46e0        256.00
> TenGigabitEthernet83/0/1-tx         active   6881842  1761751552         0  5.53e1        256.00
> dpdk-input                          polling  6881842  1761751552         0  8.58e1        256.00
> ip4-input-no-checksum               active   6881842  1761751552         0  2.19e1        256.00
> ip4-load-balance                    active   6881842  1761751552         0  1.68e1        256.00
> ip4-lookup                          active   6881842  1761751552         0  2.80e1        256.00
> ip4-rewrite                         active   6881842  1761751552         0  2.89e1        256.00
> 
> Any idea where ip4-load-balance is coming from?
> 
> Ray K
> 



Re: [vpp-dev] New Committer Nomination: Andrew Yourtchenko

2018-11-08 Thread Neale Ranns via Lists.Fd.Io
+1

-Original Message-
From:  on behalf of "Dave Barach via Lists.Fd.Io" 

Reply-To: "Dave Barach (dbarach)" 
Date: Thursday, 8 November 2018 at 13:13
To: "vpp-dev@lists.fd.io" 
Cc: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] New Committer Nomination: Andrew Yourtchenko

In view of significant code contributions to the vpp project - see below - 
I'm pleased to nominate Andrew Yourtchenko as a vpp project committer. I have 
high confidence that he'll be a major asset to the project in a committer role. 
 

Andrew has contributed 181 merged patches, including significant new 
feature work in the ACL plugin. Example: https://gerrit.fd.io/r/#/c/13162
 
Committers, please vote (+1, 0, -1) on vpp-dev@. We'll need a recorded 
vote so that the TSC can approve Andrew's nomination.

Thanks... Dave





Re: [vpp-dev] Problem on VxLAN multicast mode

2018-11-05 Thread Neale Ranns via Lists.Fd.Io

Hi Eyal, John,

I missed the fact that the tunnel classification is based only on the sender's 
IP. Now it makes sense.

Thanks,
Neale


-Original Message-
From:  on behalf of "John Lo (loj) via Lists.Fd.Io" 

Reply-To: "John Lo (loj)" 
Date: Monday, 5 November 2018 at 16:17
To: Xuekun , "Eyal Bari (ebari)" , 
"vpp-dev@lists.fd.io" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Problem on VxLAN multicast mode

VPP does not support receiving VXLAN packets from an unknown VTEP. Thus, 
any packet received in a BD from a VXLAN multicast tunnel must have its 
source IP match the remote VTEP of an existing VXLAN unicast tunnel in the 
same BD. If no such unicast tunnel is found, packets are dropped. If it is 
found and MAC learning is enabled, the MAC will be learned with its output 
associated with the matching unicast VXLAN tunnel. VPP does not learn unknown 
remote VTEPs and create a unicast tunnel automatically, for better security.
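
The receive-side classification can be summarised in a few lines of Python (a sketch of the logic only, not the actual implementation):

    # A packet arriving over the mcast tunnel is accepted only if its outer
    # source IP matches the remote VTEP (dst) of an existing unicast tunnel
    # in the same BD; unknown VTEPs are dropped, never auto-learned.
    def classify_vxlan_rx(outer_src_ip, bd_unicast_tunnels):
        for tunnel in bd_unicast_tunnels:
            if tunnel["dst"] == outer_src_ip:
                return tunnel       # MACs are learned against this tunnel
        return None                 # drop: unknown VTEP

    tunnels_bd1000 = [{"name": "vxlan_tunnel0", "dst": "172.168.1.2"}]
    print(classify_vxlan_rx("172.168.1.2", tunnels_bd1000))  # accepted
    print(classify_vxlan_rx("172.168.1.3", tunnels_bd1000))  # None -> drop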

Regards,
John

-Original Message-
From: vpp-dev@lists.fd.io  On Behalf Of Xuekun
Sent: Monday, November 05, 2018 8:07 AM
To: Eyal Bari (ebari) ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Problem on VxLAN multicast mode

Hi, Eyal

If we need to create a unicast tunnel as well, that means we need to know the 
remote VTEP address first. However, the purpose of using a mcast tunnel is to 
build tunnels across multiple VTEP addresses which are not known in advance. 
For example, if there is a third server (server3: 192.168.1.3) in my env, what 
should I do? Still add a unicast tunnel ("create vxlan tunnel src 172.168.1.1 
dst 172.168.1.3 vni 100")? If so, we could just set up point-to-point vxlan 
tunnels and put all the tunnels in the same BD; then all servers are connected 
and we don't need to create the mcast tunnel at all.

Is my understanding about the mcast tunnel incorrect?
Thanks.

Thx, Xuekun  

-Original Message-
From: Eyal Bari (ebari)  
Sent: Monday, November 05, 2018 7:58 PM
To: Hu, Xuekun ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Problem on VxLAN multicast mode

Just to clarify: the mcast tunnel does not need to be under the bridge for 
receiving traffic; however, for sending flooded traffic through the mcast 
tunnel, it does need to be under the bridge.

eyal 

On 05/11/2018, 13:47, "vpp-dev@lists.fd.io on behalf of eyal bari via 
Lists.Fd.Io"  
wrote:

Hi, Xuekun,

Packets are only received on unicast tunnels.
So in your case you would need to create one and put it under the 
bridge-domain (the multicast tunnel does not need to be under the 
bridge-domain):
create vxlan tunnel src 172.168.1.1 dst 172.168.1.2 vni 100
set interface l2 bridge vxlan_tunnel0 1000
create vxlan tunnel src 172.168.1.1 group 239.1.1.1 
TenGigabitEthernet3d/0/1 vni 100


eyal

On 05/11/2018, 9:18, "vpp-dev@lists.fd.io on behalf of Xuekun" 
 wrote:

Hi, All

I'm configuring VPP in VxLAN multicast mode across multiple servers. 
To simplify the topology, I used only two servers: server1 running VPP; 
server2 using kernel-mode vxlan.

Server1:
set interface state TenGigabitEthernet3d/0/1 up
set interface ip address TenGigabitEthernet3d/0/1 172.168.1.1/24
create bridge-domain 1000 learn 1 forward 1 uu-flood 1 flood 1 
arp-term 0
create vxlan tunnel src 172.168.1.1 group 239.1.1.1 
TenGigabitEthernet3d/0/1 vni 100
set interface l2 bridge vxlan_tunnel0 1000  
loopback create 
set interface l2 bridge loop0 1000 bvi
set interface state loop0 up
set interface ip address loop0 192.168.1.1/24

Server2:
ifconfig enp11s0f1 172.168.1.2/24 up
ip link add vxlan0 type vxlan id 100 dstport 4789 group 239.1.1.1 
dev enp11s0f1
ifconfig vxlan0 192.168.1.2/24 up

Now, server1 and server2 are connected with VxLAN VNI 100 through 
multicast group 239.1.1.1.
However, 192.168.1.1 and 192.168.1.2 could NOT ping each other.

Trace log: 
Packet 1

00:01:02:563831: dpdk-input
  TenGigabitEthernet3d/0/1 rx queue 0
  buffer 0x4b93: current data 14, length 78, free-list 0, 
clone-count 0, totlen-nifb 0, trace 0x0
 ext-hdr-valid
 l4-cksum-computed l4-cksum-correct l2-hdr-offset 0
  PKT MBUF: port 0, nb_segs 1, pkt_len 92
buf_len 2176, data_len 92, ol_flags 0x180, data_off 128, 
phys_addr 0x4012e540
packet_type 0x291 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
 

Re: [vpp-dev]ping local address

2018-10-31 Thread Neale Ranns via Lists.Fd.Io
Hi Saint,

With this change, an attacker could send a packet with the source and 
destination both set to one of VPP's own addresses. If you extend this new 
sub-condition to only accept locally generated packets 
(b->flags & VNET_BUFFER_F_LOCALLY_ORIGINATED), then we should be good.

Regards,
neale

De : "saint_...@aliyun.com" 
Date : mercredi 31 octobre 2018 à 08:49
À : "Neale Ranns (nranns)" 
Cc : vpp-dev 
Objet : Re: Re: [vpp-dev]ping local address

Hello Neale,
I found and modified a piece of code in ip4_forward.c, and now it is able to 
ping a local address, as follows:

I think the source check should only discard packets which come from an 
attacker (with a forged source address) who wants to attack another host, so I 
changed the judgement conditions.
Can you help me check whether it is right or wrong?


The attachment is the modified file.

saint_...@aliyun.com

From: Neale Ranns (nranns)
Date: 2018-10-25 15:55
To: saint_...@aliyun.com; 
vpp-dev
Subject: Re: [vpp-dev]ping local address

It’s a known limitation. Contributions to fix it would be welcome.

/neale


De :  au nom de "saint_sun 孙 via Lists.Fd.Io" 

Répondre à : "saint_...@aliyun.com" 
Date : jeudi 25 octobre 2018 à 09:40
À : vpp-dev 
Cc : "vpp-dev@lists.fd.io" 
Objet : [vpp-dev]ping local address

Hello all:
A basic feature: pinging myself. When I configure an IP address on an 
interface and then ping that address from VPP, it fails. Why? Should I do 
any other settings?

DBGvpp# ping 10.0.0.1
Aborted due to a keypress.

Statistics: 1 sent, 0 received, 100% packet loss


DBGvpp# show ip fib
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] 
locks:[src:default-route:1, ]
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:0 to:[0:0]]
[0] [@0]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:2 buckets:1 uRPF:1 to:[0:0]]
[0] [@0]: dpo-drop ip4
10.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:17 buckets:1 uRPF:21 to:[0:0]]
[0] [@0]: dpo-drop ip4
10.0.0.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:16 buckets:1 uRPF:27 to:[0:0]]
[0] [@4]: ipv4-glean: line1: mtu:9000 000e5e513c380806
10.0.0.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:19 buckets:1 uRPF:25 to:[0:0]]
[0] [@2]: dpo-receive: 10.0.0.1 on line1
10.0.0.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:18 buckets:1 uRPF:23 to:[0:0]]
[0] [@0]: dpo-drop ip4
224.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:4 buckets:1 uRPF:3 to:[0:0]]
[0] [@0]: dpo-drop ip4
240.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:3 buckets:1 uRPF:2 to:[0:0]]
[0] [@0]: dpo-drop ip4
255.255.255.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:5 buckets:1 uRPF:4 to:[0:0]]
[0] [@0]: dpo-drop ip4




saint_...@aliyun.com



Re: [vpp-dev]ping local address

2018-10-25 Thread Neale Ranns via Lists.Fd.Io

It’s a known limitation. Contributions to fix it would be welcome.

/neale


De :  au nom de "saint_sun 孙 via Lists.Fd.Io" 

Répondre à : "saint_...@aliyun.com" 
Date : jeudi 25 octobre 2018 à 09:40
À : vpp-dev 
Cc : "vpp-dev@lists.fd.io" 
Objet : [vpp-dev]ping local address

Hello all:
A basic feature: pinging myself. When I configure an IP address on an 
interface and then ping that address from VPP, it fails. Why? Should I do 
any other settings?

DBGvpp# ping 10.0.0.1
Aborted due to a keypress.

Statistics: 1 sent, 0 received, 100% packet loss


DBGvpp# show ip fib
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] 
locks:[src:default-route:1, ]
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:0 to:[0:0]]
[0] [@0]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:2 buckets:1 uRPF:1 to:[0:0]]
[0] [@0]: dpo-drop ip4
10.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:17 buckets:1 uRPF:21 to:[0:0]]
[0] [@0]: dpo-drop ip4
10.0.0.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:16 buckets:1 uRPF:27 to:[0:0]]
[0] [@4]: ipv4-glean: line1: mtu:9000 000e5e513c380806
10.0.0.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:19 buckets:1 uRPF:25 to:[0:0]]
[0] [@2]: dpo-receive: 10.0.0.1 on line1
10.0.0.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:18 buckets:1 uRPF:23 to:[0:0]]
[0] [@0]: dpo-drop ip4
224.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:4 buckets:1 uRPF:3 to:[0:0]]
[0] [@0]: dpo-drop ip4
240.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:3 buckets:1 uRPF:2 to:[0:0]]
[0] [@0]: dpo-drop ip4
255.255.255.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:5 buckets:1 uRPF:4 to:[0:0]]
[0] [@0]: dpo-drop ip4




saint_...@aliyun.com



Re: [vpp-dev] vpp crash when handling IGMP with router alert

2018-10-17 Thread Neale Ranns via Lists.Fd.Io
Hi Jeff,

Thank you for the bug report.

As you mention, the graph node path taken by these packets does not go through 
ip4-lookup and so does not have the fib-index set. Since ip4-lookup is replaced 
by ip4-options, IMO ip4-options would be the place to add the code you have 
identified, so that we don't do unnecessary work on packets that are for-us and 
were subject to the lookup. Please submit a patch with the change.

We have some unit tests in test/test_igmp.py (test_igmp_router()) where we send 
IGMP packets to an interface that is not IGMP enabled, so they take the same 
graph node paths as your packet. This test passes. Is your input interface 
bound to a non-default VRF? If so, would you be able to add a new test to that 
file where the input interface is bound to another table?
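
A rough sketch of what such a test might look like (hypothetical and untested; it assumes the test framework's set_table_ip4 helper and scapy's igmpv3 contrib, so treat every name below as an assumption):

    from scapy.contrib.igmpv3 import IGMPv3, IGMPv3mr
    from scapy.layers.inet import IP, IPOption_Router_Alert
    from scapy.layers.l2 import Ether

    def test_igmp_router_vrf(self):
        # Bind the input interface to a non-default table before replaying
        # the router-alert report; the crash path needs ip.fib_index unset.
        self.pg0.set_table_ip4(1)
        self.pg0.config_ip4()
        p = (Ether(src=self.pg0.remote_mac, dst="01:00:5e:00:00:16") /
             IP(src=self.pg0.remote_ip4, dst="224.0.0.22", ttl=1, tos=0xc0,
                options=[IPOption_Router_Alert()]) /
             IGMPv3(type="Version 3 Membership Report") / IGMPv3mr())
        self.pg0.add_stream(p)
        self.pg_start()   # must be dropped cleanly, not crash in ip4-local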

Thanks,
neale

From:  on behalf of Jeff 
Date: Tuesday, 16 October 2018 at 22:27
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] vpp crash when handling IGMP with router alert

Hello,

I have a tap interface connected to a noisy LAN and I found that a certain type 
of IGMP packet will sometimes cause a crash (backtrace at the end) in 
ip4_fib_mtrie_lookup_step_one(). More specifically, it's an IGMP packet with 
the router alert IP option. Here's a packet trace:

00:02:41:522429: virtio-input
  virtio: hw_if_index 6 next-index 4 vring 0 len 54
hdr: flags 0x00 gso_type 0x00 hdr_len 0 gso_size 0 csum_start 0 csum_offset 
0 num_buffers 1
00:02:41:522430: ethernet-input
  IP4: 00:0c:29:1f:43:a4 -> 01:00:5e:00:00:16
00:02:41:522430: ip4-input
  IGMP: 172.20.2.194 -> 224.0.0.22
version 4, header length 24
tos 0xc0, ttl 1, length 40, checksum 0x5523
fragment id 0x, flags DONT_FRAGMENT
00:02:41:522431: ip4-options
option:[0x94,0x4,0x0,0x0]
00:02:41:522431: ip4-local
IGMP: 172.20.2.194 -> 224.0.0.22
  version 4, header length 24
  tos 0xc0, ttl 1, length 40, checksum 0x5523
  fragment id 0x, flags DONT_FRAGMENT
00:02:41:522434: igmp-input
  sw_if_index 6 next-index 0
  membership_report_v3: code 0, checksum 0xfbf4
00:02:41:522435: error-drop
  igmp-input: IGMP not enabled on this interface
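
For reference, the packet above can be reconstructed with scapy roughly as follows (a sketch based on the trace; the IGMPv3 payload details are approximate):

    from scapy.contrib.igmpv3 import IGMPv3, IGMPv3mr
    from scapy.layers.inet import IP, IPOption_Router_Alert
    from scapy.layers.l2 import Ether

    # IGMPv3 membership report carrying the Router Alert IP option: the
    # option diverts the packet through ip4-options instead of ip4-lookup,
    # so ip.fib_index is never written before ip4-local reads it.
    p = (Ether(src="00:0c:29:1f:43:a4", dst="01:00:5e:00:00:16") /
         IP(src="172.20.2.194", dst="224.0.0.22", ttl=1, tos=0xc0,
            options=[IPOption_Router_Alert()]) /
         IGMPv3(type="Version 3 Membership Report") / IGMPv3mr())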


I found that when the crash occurs, vnet_buffer(b)->ip.fib_index is ~0 in 
ip4_local_check_src(). Here's an example debug print, added just after "if 
(PREDICT_FALSE (last_check->src.as_u32 != ip0->src_address.as_u32))" in 
ip4_local_check_src():

Usual case:
ip4_local_check_src: ( != 0101A8C0), buf 0x7f6b6301b900, vlib_tx 
4294967295 fib index 0

When crash happens:
ip4_local_check_src: ( != 0100A8C0), buf 0x7f6b63a0, vlib_tx 
4294967295 fib index 4294967295

I think the problem is that vnet_buffer(b)->ip.fib_index isn't set anywhere in 
this processing chain (ip4-input -> ip4-options -> ip4-local).  This can cause 
an invalid pointer to be used when looking up the mtrie in 
ip4_local_check_src().  Normally the fib_index metadata is assigned by 
ip4-lookup via ip_lookup_set_buffer_fib_index().  But since the packet doesn't 
traverse that node the metadata is unset.  I'm guessing that due to luck and/or 
initialization the fib_index metadata is usually zero, so the crash won't 
happen until the metadata is modified elsewhere and then the buffer is reused 
for this IGMP packet with router alert.  I hope this is what's happening and 
it's not something more nefarious like memory corruption.

I made the following change at the top of ip4_local_check_src (taken from 
ip_lookup_set_buffer_fib_index())

   const dpo_id_t *dpo0;
   load_balance_t *lb0;
   u32 lbi0;
+  ip4_main_t *im = &ip4_main;

   vnet_buffer (b)->ip.fib_index =
+    vec_elt (im->fib_index_by_sw_if_index, vnet_buffer (b)->sw_if_index[VLIB_RX]);
+  vnet_buffer (b)->ip.fib_index =
     vnet_buffer (b)->sw_if_index[VLIB_TX] != ~0 ?
     vnet_buffer (b)->sw_if_index[VLIB_TX] : vnet_buffer (b)->ip.fib_index;

With this change I was unable to trigger the crash.  Don't know if this is a 
proper fix though.

Here's the backtrace (some of the line numbers might be offset due to my 
debugging):

Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
0x7f73861c2748 in ip4_fib_mtrie_lookup_step_one 
(dst_address=0x7f717de38e1a, m=) at 
/home/jeff/vpp/src/vnet/ip/ip4_mtrie.h:229
229 /home/jeff/vpp/src/vnet/ip/ip4_mtrie.h: No such file or directory.
(gdb) bt
#0  0x7f73861c2748 in ip4_fib_mtrie_lookup_step_one 
(dst_address=0x7f717de38e1a, m=) at 
/home/jeff/vpp/src/vnet/ip/ip4_mtrie.h:229
#1  ip4_local_check_src (error0=, last_check=, ip0=0x7f717de38e0e, b=)
at /home/jeff/vpp/src/vnet/ip/ip4_forward.c:1352
#2  ip4_local_inline (vm=, node=, 
frame=, head_of_feature_arc=)
at /home/jeff/vpp/src/vnet/ip/ip4_forward.c:1586
#3  0x7f7385c70014 in dispatch_node (last_time_stamp=17304359695215669, 
frame=0x7f718dcaf300, dispatch_state=VLIB_NODE_STATE_POLLING,
type=VLIB_NODE_TYPE_INTERNAL, node=0x7f7184ed2ec0, vm=0x7f7385ec9980 
) at /home/jeff/vpp/src/vlib/main.c:989
#4  dispatch_pending_node (vm=vm@entry=

Re: [vpp-dev] Delete IPv6 VXLAN fails

2018-10-11 Thread Neale Ranns via Lists.Fd.Io
+Eyal

I expect it was broken by efd9cf302.

/neale


De :  au nom de "Michal Cmarada via Lists.Fd.Io" 

Répondre à : "Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES at Cisco)" 

Date : jeudi 11 octobre 2018 à 15:30
À : "vpp-dev@lists.fd.io" 
Cc : "vpp-dev@lists.fd.io" 
Objet : [vpp-dev] Delete IPv6 VXLAN fails

Hi,

I was trying to set up and delete an IPv6 VXLAN tunnel using vpp_api_test:
vat# vxlan_tunnel_dump
sw_if_index   instance src_address dst_address  
encap_vrf_id  decap_next_index  vni  mcast_sw_if_index
vat# vxlan_add_del_tunnel src 10::10 dst 10::11 vni 88 encap-vrf-id 0 
decap-next l2
vat# vxlan_tunnel_dump
sw_if_index   instance src_address dst_address  
encap_vrf_id  decap_next_index  vni  mcast_sw_if_index
  2  0  10::11  10::10  
   0 1   88 -1
vat# vxlan_add_del_tunnel src 10::10 dst 10::11 vni 88 encap-vrf-id 0 
decap-next l2 del
vxlan_add_del_tunnel error: Misc
vat# vxlan_tunnel_dump
sw_if_index   instance src_address dst_address  
encap_vrf_id  decap_next_index  vni  mcast_sw_if_index
vxlan_tunnel_dump error: Misc

The reason is that we have a similar test in CSIT for honeycomb and it is 
failing, so I wanted to test it manually using vat. I tried to add the tunnel, 
which was successful, but the delete afterwards failed.
I tested this using the 19.01-rc0~8-g642829d~b5417 build, but the tests have 
been failing for a few days, so it is probably present in 18.10 too.
I have HC dumps from the vppAPI call too, which behaves the same: Add is 
successful and Delete fails.
ADD:
VxlanAddDelTunnel{isAdd=1, isIpv6=1, instance=0, srcAddress=[0, 16, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 16], dstAddress=[0, 16, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 17], mcastSwIfIndex=0, encapVrfId=0, decapNextIndex=1, vni=88}
DELETE:
VxlanAddDelTunnel{isAdd=0, isIpv6=1, instance=0, srcAddress=[0, 16, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 16], dstAddress=[0, 16, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 17], mcastSwIfIndex=0, encapVrfId=0, decapNextIndex=1, vni=88}

Seems there might be a bug in VPP. Can somebody check it?

Thanks

Michal

Michal Cmarada
Engineer - Software
mcmar...@cisco.com
Cisco Systems, Inc.
Slovakia
cisco.com


Re: [vpp-dev] question about FRR

2018-10-10 Thread Neale Ranns via Lists.Fd.Io

Hi Xue,

which FRR ;)
This one:
  https://tools.ietf.org/html/rfc5286
we don’t support

For this one:
  https://frrouting.org/
I’ll leave it to the community to comment.

/neale


From:  on behalf of xyxue 
Date: Wednesday, 10 October 2018 at 08:35
To: vpp-dev 
Subject: [vpp-dev] question about FRR


Hi guys,

Does VPP support FRR? What's the configuration for FRR?

Thanks,
Xue



Re: [vpp-dev] Master branch l2bd test perf drop

2018-10-04 Thread Neale Ranns via Lists.Fd.Io
Hi Yuwei,

Can you test this, please:
https://gerrit.fd.io/r/#/c/15100/

thanks,
neale

De : "Zhang, Yuwei1" 
Date : vendredi 28 septembre 2018 à 03:08
À : "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io" 

Objet : RE: [vpp-dev] Master branch l2bd test perf dop

Hi Neale,
 I assume the replications should be related to the interfaces in the 
bridge, right? I have just 2 interfaces in the bridge, which means one 
interface receives traffic and the other sends it out. In the 64K-size packet 
test case, the performance drops by almost 35%. I haven't done the other cases yet.

Regards,
Yuwei
From: Neale Ranns (nranns) [mailto:nra...@cisco.com]
Sent: Thursday, September 27, 2018 8:31 PM
To: Zhang, Yuwei1 ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Master branch l2bd test perf drop


Hi Yuwei,

There was a change to the l2flood node recently:
  https://gerrit.fd.io/r/#/c/13578/
where we use the buffer clone mechanism rather than free-recycle. I would 
expect the CPU cycles per invocation of the l2-flood node to increase, but the 
number of invocations of l2flood to decrease (w.r.t. the interface-tx node).
How many replications does your test perform and is there a trend for perf 
change versus number of replications?

Thanks,
Neale


From: <vpp-dev@lists.fd.io> on behalf of Zhang Yuwei 
<yuwei1.zh...@intel.com>
Date: Thursday, 27 September 2018 at 05:02
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: [vpp-dev] Master branch l2bd test perf drop

Hi All,
 In our recent testing, I found a performance drop on the master branch. I 
executed the l2bd test case on a 2.5 GHz CPU and found an almost 35% drop 
compared to the 18.07 release. My test sets two NIC ports into the same bridge 
domain and sends traffic to test the L2 forwarding performance. I found that 
on the master branch the l2flood function consumes many more CPU cycles than 
on 18.07, which means any test using the l2flood function will also show a 
performance drop. Can anybody kindly help check this issue? Thanks a lot.

Regards,
Yuwei



Re: [vpp-dev] Make test failures on ARM - IP4, L2, ECMP, Multicast, GRE, SCTP, SPAN, ACL

2018-09-27 Thread Neale Ranns via Lists.Fd.Io


From:  on behalf of Juraj Linkeš 
Date: Thursday, 27 September 2018 at 09:21
To: "Neale Ranns (nranns)" 
Cc: vpp-dev 
Subject: Re: [vpp-dev] Make test failures on ARM - IP4, L2, ECMP, Multicast, 
GRE, SCTP, SPAN, ACL

Hi Neale,

I had a debugging session with Andrew about failing ACL testcases and he 
uncovered that the root cause is in l2 and ip4:

1) the timeout and big files

for some reason, in the bridged setup done by a testcase, VPP reinjects the 
packet being sent onto one of the interfaces of the bridge, in a loop.

The following crude diff eliminates the problem and the tests pass: 
https://paste.ubuntu.com/p/CSMYjXsZyX/

[nr] Can we please see the packet trace with that patch in place?

2) there is a failure of a mac acl testcase in the routed scenario, where the 
ip lookup picks up an incorrect next index:

The following shows the problem for the properly and improperly routed packet:

https://paste.ubuntu.com/p/wTWWNhwSKY/

that’s bizarre. I’m not sure where to start debugging that other than attaching 
GDB and having a poke around.

/neale


Could you advise on the first issue (Andrew wasn't sure the diff is a proper 
fix) and help debug the other issue (or, most likely related, issues 
https://jira.fd.io/browse/VPP-1432 and https://jira.fd.io/browse/VPP-1433)? If 
not, could you suggest someone I can ask?

Thanks,
Juraj

From: Juraj Linkeš
Sent: Tuesday, September 25, 2018 10:07 AM
To: 'Juraj Linkeš' ; vpp-dev 
Cc: csit-dev 
Subject: RE: Make test failures on ARM - IP4, L2, ECMP, Multicast, GRE, SCTP, 
SPAN, ACL

I created the new tickets under CSIT, which was an oversight, but I fixed it and 
now the tickets are under VPP:

· GRE crash
· SCTP failure/crash
  o Me and Marco resolved a similar issue in the past, but this could be something different
· SPAN crash
· IP4 failures
  o These are multiple failures and I'm not sure that grouping them together is correct
· L2 failures/crash
  o As in IP4, these are multiple failures and I'm not sure that grouping them together is correct
· ECMP failure
· Multicast failure
· ACL failure
  o I'm already working with Andrew on fixing this

There seem to be a lot of people who touched the code. I would like to ask the 
authors to tell me who to turn to (at least for IP and L2).

Regards,
Juraj

From: Juraj Linkeš [mailto:juraj.lin...@pantheon.tech]
Sent: Monday, September 24, 2018 6:26 PM
To: vpp-dev <vpp-dev@lists.fd.io>
Cc: csit-dev <csit-...@lists.fd.io>
Subject: [vpp-dev] Make test failures on ARM

Hi vpp-devs,

Especially ARM vpp devs ☺

We're experiencing a number of failures on Cavium ThunderX and we'd like to fix 
the issues. I've created a number of Jira tickets:

· GRE crash
· SCTP failure/crash
  o Me and Marco resolved a similar issue in the past, but this could be something different
· SPAN crash
· IP4 failures
  o These are multiple failures and I'm not sure that grouping them together is correct
· L2 failures/crash
  o As in IP4, these are multiple failures and I'm not sure that grouping them together is correct
· ECMP failure
· Multicast failure
· ACL failure
  o I'm already working with Andrew on fixing this

The reason I didn't reach out to all authors individually is that I wanted 
someone to look at the issues and assess whether there's an overlap (or I 
grouped the failures improperly), since some of the failures look similar.

Then there's the issue of hardware availability - if anyone willing to help has 
access to the fd.io lab, I can set up access to a Cavium ThunderX; otherwise we 
could set up a call if further debugging is needed.

Thanks,
Juraj


Re: [vpp-dev] Master branch l2bd test perf drop

2018-09-27 Thread Neale Ranns via Lists.Fd.Io

Hi Yuwei,

There was a change to the l2flood node recently:
  https://gerrit.fd.io/r/#/c/13578/
where we use the buffer clone mechanism rather than free-recycle. I would 
expect the CPU cycles per invocation of the l2-flood node to increase, but the 
number of invocations of l2flood to decrease (w.r.t. the interface-tx node).
How many replications does your test perform and is there a trend for perf 
change versus number of replications?

Thanks,
Neale


From:  on behalf of Zhang Yuwei 
Date: Thursday, 27 September 2018 at 05:02
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Master branch l2bd test perf drop

Hi All,
 In our recent testing, I found a performance drop on the master branch. I 
executed the l2bd test case on a 2.5 GHz CPU and found an almost 35% drop 
compared to the 18.07 release. My test sets two NIC ports into the same bridge 
domain and sends traffic to test the L2 forwarding performance. I found that 
on the master branch the l2flood function consumes many more CPU cycles than 
on 18.07, which means any test using the l2flood function will also show a 
performance drop. Can anybody kindly help check this issue? Thanks a lot.

Regards,
Yuwei



Re: [**EXTERNAL**] Fwd: [vpp-dev] Failing to create untagged sub-interface

2018-09-25 Thread Neale Ranns via Lists.Fd.Io
Hi Mike,

Perhaps you could tell us why you want to create an untagged sub-interface.

Regards,
Neale


De :  au nom de "Bly, Mike" 
Date : vendredi 21 septembre 2018 à 17:06
À : "John Lo (loj)" , Edward Warnicke , 
"vpp-dev@lists.fd.io" 
Objet : Re: [**EXTERNAL**] Fwd: [vpp-dev] Failing to create untagged 
sub-interface

John,

Any advice on this is appreciated. We can certainly dig into this, but we first 
wanted to sanity-check with the community in case there was something obvious 
as to why it is working the way it currently is. I am hopeful that between your 
efforts and ours we can run this to ground in short order.

-Mike

From: vpp-dev@lists.fd.io  On Behalf Of John Lo (loj) via 
Lists.Fd.Io
Sent: Thursday, September 20, 2018 4:02 PM
To: Edward Warnicke ; vpp-dev@lists.fd.io; Bly, Mike 

Cc: vpp-dev@lists.fd.io
Subject: Re: [**EXTERNAL**] Fwd: [vpp-dev] Failing to create untagged 
sub-interface

When a sub-interface is created, matching of tags on the packet to the 
sub-interface can be specified as "exact-match". With exact-match, a packet 
must have the same number of tags, with values matching those specified for 
the sub-interface. Otherwise, packets will belong to the best-matched 
sub-interface. A sub-interface to be used for L3 must be created with 
exact-match; otherwise, IP forwarding cannot get a proper L2 header rewrite 
for output on the sub-interface.
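
To illustrate the matching, a small scapy-flavoured Python sketch (the classifier is a gross simplification of VPP's, for intuition only; the interface names follow Mike's example below):

    from scapy.layers.l2 import Dot1Q, Ether

    def vlan_tags(pkt):
        # Collect the VLAN IDs of all 802.1Q tags on the packet.
        tags, i = [], 1
        while pkt.getlayer(Dot1Q, i):
            tags.append(pkt.getlayer(Dot1Q, i).vlan)
            i += 1
        return tags

    def classify(pkt):
        # Exact-match sub-interface for vid 1: exactly one tag, value 1.
        if vlan_tags(pkt) == [1]:
            return "GigabitEthernet5/0/0.vid1 (exact-match)"
        # Everything else falls back to the best match.
        return "GigabitEthernet5/0/0 (best match)"

    print(classify(Ether() / Dot1Q(vlan=1) / b"x"))   # exact-match sub-if
    print(classify(Ether() / b"x"))                   # main interface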

As for a main interface, I suppose when it is in L2 mode, packets received 
with no tags, or with tags that do not match any specific sub-interface, are 
considered as received on the main interface. When the main interface is in L3 
mode, it will only get untagged packets because of the exact-match 
requirement. I think this is why the default sub-interface starts to get 
non-matching tagged packets when the main interface is in L3 mode, as 
observed. Packets received on the main interface in L3 mode can be IP 
forwarded or be dropped.

It is a good question – what is the expected sub-interface classification 
behavior with untagged or default sub-interface?  I think this is the area of 
VPP that has not been used much and thus we have little knowledge of how it 
behaves without studying the code (hence lack of response to this thread of 
questions so far).  When I get a chance, I can take look into this issue – how 
VLAN match should work for default/untagged sub-interface and why untagged 
sub-interface creation fails.  I don’t know how soon I will get to it.  So, if 
anyone is willing to contribute and submit a patch to fix the issue, I will be 
happy to review and/or merge the patch as appropriate.

Regards,
John

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Edward Warnicke
Sent: Thursday, September 20, 2018 1:25 PM
To: vpp-dev@lists.fd.io; Bly, Mike <m...@ciena.com>
Subject: Re: [**EXTERNAL**] Fwd: [vpp-dev] Failing to create untagged 
sub-interface

Guys,
  Anyone have any thoughts on this?

Ed



On September 20, 2018 at 12:01:05 PM, Bly, Mike 
(m...@ciena.com) wrote:
Ed/Keith, et al,

What Vijay is digging into is trying to understand how to provide the following 
sub-interface setup on a common/single physical NIC. I am hoping you can shed 
some light on the feasibility of this, given the current code to date.

Our goal is to provide proper separation of untagged vs. explicit-vlan (EVPL) 
vs. default (all remaining vlans) vs. EPL as needed on a given NIC, independent 
of any choice of forwarding mode (L2 vs L3).

GigabitEthernet5/0/0 --> “not used to forward traffic” (see next three 
sub-if’s), calling it sub_if_0 for reference below (seen as possible EPL path, 
but not covered here, since already “working”)
GigabitEthernet5/0/0.untagged --> all untagged traffic on this port goes to 
sub_if_1
GigabitEthernet5/0/0.vid1 --> all traffic arriving with outer tag == 1 goes to 
sub_if_2
GigabitEthernet5/0/0.default --> all other tagged traffic goes to sub_if_3

The only way we seem to be able to get sub_if_3 to process traffic is to 
disable sub_if_0 (set mode to l3).

Additionally, the current configuration checking in src/vnet/ethernet/node.c 
does not seem amenable to allowing the actual configuration and support of 
untagged vs default as two distinct sub-if’s processing traffic at the same 
time (my sub_if_1 and sub_if_3 above). Are we missing something here in how 
this is supposed to work? We would be fine with letting “sub_if_0” carry the 
untagged traffic (in place of sub_if_1), but we have yet to figure out how to 
do that while still having sub_if_3 processing “all other tagged frames”. We 
can say in all of our testing that we in fact do correctly see sub_if_2 working 
as expected.

Here is a simple configuration showing our current efforts in this area:

create bridge-domain 1
create bridge-domain 2
create bridge-domain 3

set interface l2 bridge GigabitEthernet5/0/0 1
set interface l2 bridge GigabitEthernet5/0/1 1


Re: [vpp-dev] broken GRE tunnel

2018-09-19 Thread Neale Ranns via Lists.Fd.Io
Hi Fedor,

Thanks for the bug report. Fixed in:
  https://gerrit.fd.io/r/#/c/14891/

/neale


-Original Message-
From:  on behalf of Fedor Kazmin 
Date: Wednesday, 19 September 2018 at 10:46
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] broken GRE tunnel

Hello all,
I have got an issue with GRE encapsulation and need some help.
VPP creates a broken tunnel; there is no actual connectivity.
Both stable/1804 and master, Ubuntu Xenial, gcc 5.4.0.

Steps to reproduce:
1. Create and configure veth pair
 ip link add name veth0 type veth peer name vpp0
 ip link set dev vpp0 up
 ip link set dev veth0 up
 ip addr add 172.16.0.1/24 dev veth0

2. Run VPP and configure a tunnel
DBGvpp# sh ver
vpp v18.10-rc0~434-gb4603a7 built by kahzeemin on kahzeemin-nix at Wed 
Sep 19 10:50:56 MSK 2018

DBGvpp# create host name vpp0
host-vpp0

DBGvpp# set int state host-vpp0 up
DBGvpp# set int ip addr host-vpp0 172.16.0.2/24
DBGvpp# create gre tun src 172.16.0.2 dst 172.16.0.1
gre0

DBGvpp# ip route add 2001:db8::1/128 via gre0
DBGvpp# set int state gre0 up
DBGvpp# enable ip6 int gre0
DBGvpp#  sh ip6 fib 2001:db8::1/128
ipv6-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] 
locks:[src:plugin-hi:1, src:default-route:1, ]
2001:db8::1/128 fib:0 index:12 locks:2
   src:CLI refs:1 entry-flags:attached, src-flags:added,contributing,active,
 path-list:[14] locks:2 flags:shared, uPRF-list:15 len:1 itfs:[2, ]
   path:[14] pl-index:14 ip6 weight=1 pref=0 attached-nexthop: 
oper-flags:resolved, cfg-flags:attached,
 2001:db8::1 gre0 (p2p)
   [@0]: ipv6 via :: gre0: mtu:9000 
4500fe2f64abac12ac1186dd
  stacked-on:
[@3]: ipv6 via 172.16.0.1 host-vpp0: mtu:9000 
aae5c055aecc02fe6575de3586dd

  forwarding:   unicast-ip6-chain
   [@0]: dpo-load-balance: [proto:ip6 index:14 buckets:1 uRPF:15 to:[0:0]]
 [0] [@6]: ipv6 via :: gre0: mtu:9000 
4500fe2f64abac12ac1186dd
 stacked-on:
   [@3]: ipv6 via 172.16.0.1 host-vpp0: mtu:9000 
aae5c055aecc02fe6575de3586dd


Please note this 'stacked-on: ipv6 via 172.16.0.1', leading to icmp6 
neighbor discovery and no actual connectivity through the tunnel.

It looks like the problem is the hardcoded next_hop_proto = DPO_PROTO_IP6 in 
the add_del_route_t_handler invocation from ip6_add_del_route_t_handler 
(src/vnet/ip/ip_api.c:1091) and similar, which is not necessarily true 
for gre tunnels.





Re: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + Contiv-VPP network plugin)

2018-09-07 Thread Neale Ranns via Lists.Fd.Io
Hi Stan,

Thanks for the decode.

Given that I cannot analyse your core, I cannot be sure why the crash occurred. 
But I do notice that, when using the route type we see in the trace in a new 
unit test, it doesn't produce the correct result. Here is the patch:
  https://gerrit.fd.io/r/#/c/14714/
Maybe it will fix your crash too.

Regards
Neale


From: Stanislav Chlebec 
Date: Thursday, 6 September 2018 at 11:00
To: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io" 

Subject: RE: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + 
Contiv-VPP network plugin)

Thanks for the advice.
Here is the result:
https://gist.github.com/stanislav-chlebec/7466935c41b60eb23ea711f6a4fcafeb

Stan
From: Neale Ranns (nranns) [mailto:nra...@cisco.com]
Sent: Wednesday, September 5, 2018 1:58 PM
To: Stanislav Chlebec ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + 
Contiv-VPP network plugin)

On the exact same version of VPP that produced the crash do:
  api trace custom-dump /path/to/trace/flie.txt

/neale


From: Stanislav Chlebec <stanislav.chle...@pantheon.tech>
Date: Wednesday, 5 September 2018 at 13:24
To: "Neale Ranns (nranns)" <nra...@cisco.com>, "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: RE: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + 
Contiv-VPP network plugin)

Hi Neale
Could you please describe how to do it?
Thanks
Stan

From: Neale Ranns (nranns) [mailto:nra...@cisco.com]
Sent: Tuesday, September 4, 2018 3:27 PM
To: Stanislav Chlebec <stanislav.chle...@pantheon.tech>; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + 
Contiv-VPP network plugin)

Hi Stan,

Unfortunately I don’t have an ARM machine on to decode the post-mortem data. 
Could you do this?

Thanks,
Neale


From: Stanislav Chlebec <stanislav.chle...@pantheon.tech>
Date: Tuesday, 4 September 2018 at 11:06
To: Stanislav Chlebec <stanislav.chle...@pantheon.tech>, "Neale Ranns (nranns)" <nra...@cisco.com>, 
"vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: RE: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + 
Contiv-VPP network plugin)

Hi Neale
Have you had the occasion to look at that api_post_mortem data?
Have you found the reason of crash?
Thanks
Stan


From: Stanislav Chlebec [mailto:stanislav.chle...@pantheon.tech]
Sent: Wednesday, August 22, 2018 3:39 PM
To: Neale Ranns (nranns) <nra...@cisco.com>; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + 
Contiv-VPP network plugin)

Hi Neale
I attached the file api_post_mortem.43407
to the  issue https://jira.fd.io/browse/VPP-1394
Thanks
Stan

From: Neale Ranns (nranns) [mailto:nra...@cisco.com]
Sent: Tuesday, August 21, 2018 5:02 PM
To: Stanislav Chlebec <stanislav.chle...@pantheon.tech>; Nitin Saxena <nitin.sax...@cavium.com>; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + 
Contiv-VPP network plugin)

Hi Stan,

What route were you adding at the time? Can you give me the post-mortem API 
dump [1]

/neale

[1] see https://wiki.fd.io/view/VPP/BugReports


From: <vpp-dev@lists.fd.io> on behalf of Stanislav Chlebec <stanislav.chle...@pantheon.tech>
Date: Tuesday, 21 August 2018 at 16:41
To: Nitin Saxena <nitin.sax...@cavium.com>, "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + 
Contiv-VPP network plugin)

Hello all

Could you please help me with this issue:
https://jira.fd.io/browse/VPP-1394

Thanks.
Stan


Re: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + Contiv-VPP network plugin)

2018-09-05 Thread Neale Ranns via Lists.Fd.Io
On the exact same version of VPP that produced the crash do:
  api trace custom-dump /path/to/trace/flie.txt

/neale


From: Stanislav Chlebec 
Date: Wednesday, 5 September 2018 at 13:24
To: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io" 

Subject: RE: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + 
Contiv-VPP network plugin)

Hi Neale
Could you please describe how to do it?
Thanks
Stan

From: Neale Ranns (nranns) [mailto:nra...@cisco.com]
Sent: Tuesday, September 4, 2018 3:27 PM
To: Stanislav Chlebec ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + 
Contiv-VPP network plugin)

Hi Stan,

Unfortunately I don’t have an ARM machine on to decode the post-mortem data. 
Could you do this?

Thanks,
Neale


From: Stanislav Chlebec <stanislav.chle...@pantheon.tech>
Date: Tuesday, 4 September 2018 at 11:06
To: Stanislav Chlebec <stanislav.chle...@pantheon.tech>, "Neale Ranns (nranns)" <nra...@cisco.com>, 
"vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: RE: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + 
Contiv-VPP network plugin)

Hi Neale
Have you had the occasion to look at that api_post_mortem data?
Have you found the reason of crash?
Thanks
Stan


From: Stanislav Chlebec [mailto:stanislav.chle...@pantheon.tech]
Sent: Wednesday, August 22, 2018 3:39 PM
To: Neale Ranns (nranns) <nra...@cisco.com>; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + 
Contiv-VPP network plugin)

Hi Neale
I attached the file api_post_mortem.43407
to the  issue https://jira.fd.io/browse/VPP-1394
Thanks
Stan

From: Neale Ranns (nranns) [mailto:nra...@cisco.com]
Sent: Tuesday, August 21, 2018 5:02 PM
To: Stanislav Chlebec <stanislav.chle...@pantheon.tech>; Nitin Saxena <nitin.sax...@cavium.com>; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + 
Contiv-VPP network plugin)

Hi Stan,

What route were you adding at the time? Can you give me the post-mortem API 
dump? [1]

/neale

[1] see https://wiki.fd.io/view/VPP/BugReports


From: mailto:vpp-dev@lists.fd.io>> on behalf of Stanislav 
Chlebec 
mailto:stanislav.chle...@pantheon.tech>>
Date: Tuesday, 21 August 2018 at 16:41
To: Nitin Saxena mailto:nitin.sax...@cavium.com>>, 
"vpp-dev@lists.fd.io" 
mailto:vpp-dev@lists.fd.io>>
Subject: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + 
Contiv-VPP network plugin)

Hello all

Could you please help me with this issue:
https://jira.fd.io/browse/VPP-1394

Thanks.
Stan


Re: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + Contiv-VPP network plugin)

2018-09-04 Thread Neale Ranns via Lists.Fd.Io
Hi Stan,

Unfortunately I don’t have an ARM machine on which to decode the post-mortem data. 
Could you do this?

Thanks,
Neale


From: Stanislav Chlebec 
Date: Tuesday, 4 September 2018 at 11:06
To: Stanislav Chlebec , "Neale Ranns (nranns)" 
, "vpp-dev@lists.fd.io" 
Subject: RE: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + 
Contiv-VPP network plugin)

Hi Neale
Have you had a chance to look at that api_post_mortem data?
Have you found the reason for the crash?
Thanks
Stan


From: Stanislav Chlebec [mailto:stanislav.chle...@pantheon.tech]
Sent: Wednesday, August 22, 2018 3:39 PM
To: Neale Ranns (nranns) ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + 
Contiv-VPP network plugin)

Hi Neale
I attached the file api_post_mortem.43407
to the  issue https://jira.fd.io/browse/VPP-1394
Thanks
Stan

From: Neale Ranns (nranns) [mailto:nra...@cisco.com]
Sent: Tuesday, August 21, 2018 5:02 PM
To: Stanislav Chlebec 
mailto:stanislav.chle...@pantheon.tech>>; 
Nitin Saxena mailto:nitin.sax...@cavium.com>>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + 
Contiv-VPP network plugin)

Hi Stan,

What route were you adding at the time? Can you give me the post-mortem API 
dump? [1]

/neale

[1] see https://wiki.fd.io/view/VPP/BugReports


From: mailto:vpp-dev@lists.fd.io>> on behalf of Stanislav 
Chlebec 
mailto:stanislav.chle...@pantheon.tech>>
Date: Tuesday, 21 August 2018 at 16:41
To: Nitin Saxena mailto:nitin.sax...@cavium.com>>, 
"vpp-dev@lists.fd.io" 
mailto:vpp-dev@lists.fd.io>>
Subject: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + 
Contiv-VPP network plugin)

Hello all

Could you please help me with this issue:
https://jira.fd.io/browse/VPP-1394

Thanks.
Stan


Re: [vpp-dev] vlib_buffer_t clone operation

2018-09-03 Thread Neale Ranns via Lists.Fd.Io

Hi Eason,

There’s
  vlib_buffer_clone()

its use in IP multicast can be found here:
  replicate_inline (…)
and in L2 multicast here:
   l2flood_node_fn(…)

/neale
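
A minimal sketch of a caller (illustrative only, not from the thread; the
signature follows vlib/buffer_funcs.h at the time of writing and may differ
between VPP versions):

#include <vlib/vlib.h>

/* Clone buffer index bi0 into up to 4 copies, roughly what
 * replicate_inline() does per replication bucket. */
static u16
clone_example (vlib_main_t * vm, u32 bi0, u32 * clones)
{
  u16 n_cloned;

  /* vlib_buffer_clone (vm, src_buffer, buffers, n_buffers, head_end_offset):
   * head_end_offset is how many bytes at the head of the packet each clone
   * owns privately, so every copy can rewrite its own headers;
   * 64 here is an arbitrary example value. */
  n_cloned = vlib_buffer_clone (vm, bi0, clones, 4, 64);

  /* May return fewer than requested if the buffer pool runs low;
   * the caller must cope with the shortfall. */
  return n_cloned;
}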


-Original Message-
From:  on behalf of Eason Chen 

Reply-To: Eason Chen 
Date: Monday, 3 September 2018 at 04:12
To: vpp-dev 
Subject: [vpp-dev] vlib_buffer_t clone operation

Hi VPP Gurus,

I am trying to deep-dive vlib_buffer_t operations but cannot find a clone 
operation;
could anyone elaborate on why clone is not supported, or have I 
missed anything obvious?
My assumption is that there should be some use cases for a packet clone, 
e.g. multicast...

Thanks,
Eason



Re: [vpp-dev] IGMP enable issue

2018-08-28 Thread Neale Ranns via Lists.Fd.Io

Hi Aleksander,

It’s not top of my TODO list right now. Your additions would be most welcome.

/neale


From:  on behalf of Aleksander Djuric 

Date: Tuesday, 28 August 2018 at 14:41
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] IGMP enable issue

In addition to my previous message...

Unfortunately it doesn't work for me.
I need IGMPv2 support, and I have found this comment:

/* TODO: IGMPv2 and IGMPv1 */

Is it in your near-term plans?

Certainly I will also try to do something myself.
Regards,
Aleksander


Re: [vpp-dev] discarding label in ipv4 FIB.

2018-08-27 Thread Neale Ranns via Lists.Fd.Io
[fix missing information]

Hi,

Label value 16 is, incorrectly IMO, not accepted as a valid output label.

See:
  https://gerrit.fd.io/r/#/c/14508/
for a fix.

Thanks,
/neale
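
(With the fix, the 'discard sample' below should show an mpls-label entry in 
its forwarding chain for label 16, just as the working sample does for 
label 19.)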

From:  on behalf of "Neale Ranns via Lists.Fd.Io" 

Reply-To: "Neale Ranns (nranns)" 
Date: Monday, 27 August 2018 at 16:58
To: "che...@yahoo.com" , "vpp-dev@lists.fd.io" 

Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] discarding label in ipv4 FIB.

Hi,

Label value is, incorrectly IMO, not accepted as a valid output label.

See:
  https://gerrit.fd.io/r/#/c/14508/
for a fix.

Thanks,
Neale


From:  on behalf of "abbas ali chezgi via Lists.Fd.Io" 

Reply-To: "che...@yahoo.com" 
Date: Monday, 27 August 2018 at 13:05
To: "vpp-dev@lists.fd.io" 
Cc: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] discarding label in ipv4 FIB.

I used the fib_table_entry_path_add2 function for adding an entry to the IPv4 
FIB, but in some nodes it discards MPLS labels.

discard sample:
---

ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] 
locks:[src:plugin-hi:2, src:adjacency:1, src:default-route:1, ]
1.1.1.0/24 fib:0 index:32 locks:2
  src:API refs:1 src-flags:added,contributing,active,
path-list:[50] locks:20 flags:shared, uPRF-list:49 len:1 itfs:[2, ]
  path:[54] pl-index:50 ip4 weight=1 pref=0 attached-nexthop:  
oper-flags:resolved,
200.2.3.2 host-eth1
  [@0]: ipv4 via 200.2.3.2 host-eth1: mtu:9000 02feaa73323f02fe77a26dec0800
Extensions:
 path:54  labels:[[16 pipe ttl:0 exp:0]]
 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:35 buckets:1 uRPF:49 to:[0:0]]
[0] [@5]: ipv4 via 200.2.3.2 host-eth1: mtu:9000 
02feaa73323f02fe77a26dec0800


working sample:
---
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] 
locks:[src:plugin-hi:2, src:adjacency:1, src:default-route:1, ]
3.1.1.0/24 fib:0 index:35 locks:2
  src:API refs:1 src-flags:added,contributing,active,
path-list:[50] locks:20 flags:shared, uPRF-list:49 len:1 itfs:[1, ]
  path:[54] pl-index:50 ip4 weight=1 pref=0 attached-nexthop:  
oper-flags:resolved,
200.1.2.2 host-eth0
  [@0]: ipv4 via 200.1.2.2 host-eth0: mtu:9000 02fe7f1fa8d902feb93db6450800
Extensions:
 path:54  labels:[[19 pipe ttl:0 exp:0]]


 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:38 buckets:1 uRPF:49 to:[0:0]]
[0] [@11]: mpls-label[2]:[19:64:0:eos]
[@1]: mpls via 200.1.2.2 host-eth0: mtu:9000 
02fe7f1fa8d902feb93db6458847



When does this happen? How can I correct this?

thanks.


Re: [vpp-dev] discarding label in ipv4 FIB.

2018-08-27 Thread Neale Ranns via Lists.Fd.Io
Hi,

Label value is, incorrectly IMO, not accepted as a valid output label.

See:
  https://gerrit.fd.io/r/#/c/14508/
for a fix.

Thanks,
Neale


From:  on behalf of "abbas ali chezgi via Lists.Fd.Io" 

Reply-To: "che...@yahoo.com" 
Date: Monday, 27 August 2018 at 13:05
To: "vpp-dev@lists.fd.io" 
Cc: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] discarding label in ipv4 FIB.

I used the fib_table_entry_path_add2 function for adding an entry to the IPv4 
FIB, but in some nodes it discards MPLS labels.

discard sample:
---

ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] 
locks:[src:plugin-hi:2, src:adjacency:1, src:default-route:1, ]
1.1.1.0/24 fib:0 index:32 locks:2
  src:API refs:1 src-flags:added,contributing,active,
path-list:[50] locks:20 flags:shared, uPRF-list:49 len:1 itfs:[2, ]
  path:[54] pl-index:50 ip4 weight=1 pref=0 attached-nexthop:  
oper-flags:resolved,
200.2.3.2 host-eth1
  [@0]: ipv4 via 200.2.3.2 host-eth1: mtu:9000 02feaa73323f02fe77a26dec0800
Extensions:
 path:54  labels:[[16 pipe ttl:0 exp:0]]
 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:35 buckets:1 uRPF:49 to:[0:0]]
[0] [@5]: ipv4 via 200.2.3.2 host-eth1: mtu:9000 
02feaa73323f02fe77a26dec0800


working sample:
---
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] 
locks:[src:plugin-hi:2, src:adjacency:1, src:default-route:1, ]
3.1.1.0/24 fib:0 index:35 locks:2
  src:API refs:1 src-flags:added,contributing,active,
path-list:[50] locks:20 flags:shared, uPRF-list:49 len:1 itfs:[1, ]
  path:[54] pl-index:50 ip4 weight=1 pref=0 attached-nexthop:  
oper-flags:resolved,
200.1.2.2 host-eth0
  [@0]: ipv4 via 200.1.2.2 host-eth0: mtu:9000 02fe7f1fa8d902feb93db6450800
Extensions:
 path:54  labels:[[19 pipe ttl:0 exp:0]]


 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:38 buckets:1 uRPF:49 to:[0:0]]
[0] [@11]: mpls-label[2]:[19:64:0:eos]
[@1]: mpls via 200.1.2.2 host-eth0: mtu:9000 
02fe7f1fa8d902feb93db6458847



When does this happen? How can I correct this?

thanks.


Re: [vpp-dev] IGMP enable issue

2018-08-27 Thread Neale Ranns via Lists.Fd.Io
Hi Aleksander,

The API required to enable router mode did not have a CLI equivalent. I have 
added it in:
  https://gerrit.fd.io/r/#/c/14507/

now do:
  igmp enable router 
when done
  igmp disable router 

/neale
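
For example, with an interface name (illustrative, borrowed from your trace):

  vpp# igmp enable router GigabitEthernet0/3/0
and later
  vpp# igmp disable router GigabitEthernet0/3/0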


From:  on behalf of Aleksander Djuric 

Date: Monday, 27 August 2018 at 12:28
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] IGMP enable issue


[Edited Message Follows]
Hi Neale,

Thank you for the quick answer.
I want to configure VPP like a router.

Maybe it can help:
The function igmp_listen always returns the VNET_API_ERROR_INVALID_INTERFACE error, 
because igmp_config_lookup returns null.
The code comments say that the reason is that the interface is not IGMP 
enabled.
If that's true, how can I configure an IGMP-enabled interface?

Thanks in advance for any help,
Aleksander

On Mon, Aug 27, 2018 at 12:59 PM, Neale Ranns wrote:

Hi Aleksander,



Do you want VPP to act like a host or a router?



/neale


Re: [vpp-dev] VPP hangs on GRE interface going up

2018-08-27 Thread Neale Ranns via Lists.Fd.Io

Hi Fedor,

You have exposed a bug. Thanks.
Please re-test including:
   https://gerrit.fd.io/r/#/c/14504/

/neale


From:  on behalf of Fedor Kazmin 
Date: Monday, 27 August 2018 at 13:25
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] VPP hangs on GRE interface going up


Hello, all

I have got an issue with MPLS-o-GRE encapsulation, need some help.
VPP hangs on GRE interface going up.
Both stable/1804 and master, Ubuntu Xenial, gcc 5.4.0

Steps to reproduce:
1. Create and configure veth pair
ip link add name veth0 type veth peer name vpp0
ip link set dev vpp0 up
ip link set dev veth0 up
ip addr add 172.16.0.1/24 dev veth0

2. Run VPP and configure a tunnel
DBGvpp# sh ver
vpp v18.10-rc0~248-g4553c95 built by kahzeemin on kahzeemin-nix at Mon Aug 27 
12:57:07 MSK 2018

DBGvpp# create host-interface name vpp0
host-vpp0

DBGvpp# set int state host-vpp0 up
DBGvpp# set int ip addr host-vpp0 172.16.0.2/24
DBGvpp# create gre tun src 172.16.0.2 dst 172.16.0.1
gre0

DBGvpp# ip route add 10.0.0.1/32 via gre0 out-labels 100
DBGvpp# sh ip fib 10.0.0.1
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] 
locks:[src:plugin-hi:2, src:adjacency:1, src:default-route:1, ]
10.0.0.1/32 fib:0 index:12 locks:2
  src:CLI refs:1 entry-flags:attached, src-flags:added,contributing,active,
path-list:[14] locks:2 flags:shared, uPRF-list:12 len:1 itfs:[2, ]
  path:[14] pl-index:14 ip4 weight=1 pref=0 attached-nexthop: 
cfg-flags:attached,
10.0.0.1 gre0 (p2p)
  [@0]: ipv4 via 0.0.0.0 gre0: mtu:9000 
4500fe2f64abac12ac110800
 stacked-on:
   [@0]: dpo-drop ip4
Extensions:
 path:14  labels:[[100 pipe ttl:0 exp:0]]
 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:14 buckets:1 uRPF:12 to:[0:0]]
[0] [@0]: dpo-drop ip4

DBGvpp# set int state gre0 up
/* VPP becomes unresponsive */


Thank you in advance,
Fedor


Re: [vpp-dev] IGMP enable issue

2018-08-27 Thread Neale Ranns via Lists.Fd.Io
Hi Aleksander,

Do you want VPP to act like a host or a router?

/neale


From:  on behalf of Aleksander Djuric 

Date: Monday, 27 August 2018 at 10:39
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] IGMP enable issue

Hello,

I am trying to configure VPP for IGMP, but with this configuration it seems it 
doesn't work:

vpp# igmp listen enable int GigabitEthernet0/3/0
vpp# sh igmp config
vpp# trace add dpdk-input 10
vpp# sh trace
16:38:35:357591: dpdk-input
  GigabitEthernet0/3/0 rx queue 0
  buffer 0x3666: current data 14, length 46, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x0
 ext-hdr-valid
 l4-cksum-computed l4-cksum-correct l2-hdr-offset 0
  PKT MBUF: port 1, nb_segs 1, pkt_len 60
buf_len 2176, data_len 60, ol_flags 0x0, data_off 128, phys_addr 0x928d9a00
packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
  IP4: 00:50:06:00:03:00 -> 01:00:5e:00:00:16
  IGMP: 172.16.1.1 -> 224.0.0.22
version 4, header length 24
tos 0xc0, ttl 1, length 40, checksum 0x57e2
fragment id 0x, flags DONT_FRAGMENT
16:38:35:357630: ip4-input
  IGMP: 172.16.1.1 -> 224.0.0.22
version 4, header length 24
tos 0xc0, ttl 1, length 40, checksum 0x57e2
fragment id 0x, flags DONT_FRAGMENT
16:38:35:357648: ip4-options
option:[0x94,0x4,0x0,0x0]
16:38:35:357649: ip4-local
IGMP: 172.16.1.1 -> 224.0.0.22
  version 4, header length 24
  tos 0xc0, ttl 1, length 40, checksum 0x57e2
  fragment id 0x, flags DONT_FRAGMENT
16:38:35:357657: igmp-input
  sw_if_index 2 next-index 0
  membership_report_v3: code 0, checksum 0xeaec
16:38:35:357660: error-drop
  igmp-input: IGMP not enabled on this interface

Source host: Linux a 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1+deb9u1 (2018-05-07) 
x86_64 GNU/Linux, IP 172.16.1.1
Target: VPP v18.10-rc0~248-g4553c95a on VM, compiled with GCC 6.3.0 20170516, 
IP 172.16.1.2
Multicast group address: 224.0.0.22

As you can see, the command "sh igmp config" shows nothing, and the trace 
reports the error: IGMP not enabled on this interface.

Could you help me with this?  Thanks a lot.

Regards,
Aleksander




Re: [vpp-dev] Source Based Routing #vpp

2018-08-27 Thread Neale Ranns via Lists.Fd.Io
Hi Georgi,

Are you asking for this:
  https://www.cisco.com/c/en/us/td/docs/ios/12_0s/feature/guide/vrfselec.html

we don’t support this feature specifically (i.e. a simple IP source based 
lookup as an input feature, though it would be easy to add) but we do support 
the more general case of ACL/policy based routing – which is similar to Linux’s 
ip rule.
  https://wiki.fd.io/view/VPP/ABF

hth
/neale
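
A rough sketch of the ABF flavour (the authoritative syntax is on the wiki 
page above; the ACL index, prefix, next-hop and interface names here are 
illustrative):

  vpp# set acl-plugin acl permit src 10.1.1.0/24
  vpp# abf policy add id 0 acl 0 via 192.168.1.1 GigabitEthernet0/8/0
  vpp# abf attach ip4 policy 0 GigabitEthernet0/8/1

i.e. the ACL matches on source address, the policy says where matching 
packets are forwarded, and the attach applies the policy at ingress.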


From:  on behalf of "georgi.mel...@gmail.com" 

Date: Monday, 27 August 2018 at 07:28
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Source Based Routing #vpp

Hi VPP experts,

I would like to shed light on a particular scenario/usecase that I'm trying to 
implement in VPP.

The scenario demands egress packet routing to be done based on the source IP of 
the packet rather than the destination IP. I found a similar discussion in the 
VPP mail 
archive(https://www.mail-archive.com/vpp-dev@lists.fd.io/msg06886.html), but 
the solution discussed there would not be applicable for routing multiple 
source IP packets having different routes.

I understand that we can configure multiple routing tables in VPP with unique 
routes in them, but would I be able to make the FIB lookup towards a particular 
table based on source IP?

If I take an analogy from the Linux kernel, does VPP support functionality similar 
to the 'ip rule' command, wherein we can specify a routing table to be used for a 
particular source IP?

Looking forward to your advice and support in finding a solution to this.

Thanks & Regards,
Georgi


Re: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + Contiv-VPP network plugin)

2018-08-21 Thread Neale Ranns via Lists.Fd.Io
Hi Stan,

What route were you adding at the time? Can you give me the post-mortem API 
dump? [1]

/neale

[1] see https://wiki.fd.io/view/VPP/BugReports


From:  on behalf of Stanislav Chlebec 

Date: Tuesday, 21 August 2018 at 16:41
To: Nitin Saxena , "vpp-dev@lists.fd.io" 

Subject: [vpp-dev] Cavium ThunderX (ARM64) - Crash in VPP (Kubernetes + 
Contiv-VPP network plugin)

Hello all

Could you please help me with this issue:
https://jira.fd.io/browse/VPP-1394

Thanks.
Stan


Re: [vpp-dev] Build: MacOS with Vagrant compiler crash in "vom"

2018-08-09 Thread Neale Ranns via Lists.Fd.Io

Hi Justin,

It’s building packages. I imagine you can still ‘vagrant ssh’ despite that 
failure and do non-VOM-related things. If you want to build VOM then either 
increase the memory or decrease the number of cores of your machine, or lower 
the number of parallel jobs.

/neale

-Original Message-
From: "Justin Pecqueur (jpecqueu)" 
Date: Thursday, 9 August 2018 at 16:10
To: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io" 

Subject: Re: [vpp-dev] Build: MacOS with Vagrant compiler crash in "vom"

Hi Neale,

Here are the steps I took:
> git clone https://gerrit.fd.io/r/vpp
> cd vpp/build-root/vagrant
> source ./env.sh
> vagrant up
I assume this includes building all the packages and doing a 'make 
test-ext'?  Are there other steps that will allow me to build vpp, 
but skip building VOM, or do I have to start throwing more resources 
at the env?

thanks,

--justin


On 8/9/18 9:17 AM, Neale Ranns (nranns) wrote:
> Hi Justin,
>
> In the master branch VOM is only built when building packages and doing 
‘make test-ext’, i.e. for release and for its own testing. There’s no finer 
avoidance control.
>
> /neale
>
>
> -Original Message-
> From: "Justin Pecqueur (jpecqueu)" 
> Date: Thursday, 9 August 2018 at 15:07
> To: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io" 

> Subject: Re: [vpp-dev] Build: MacOS with Vagrant compiler crash in "vom"
>
>  Hi Neale,
>  
>  I suspected as much seeing as how it was occurring with some C++ 
code which can be very expensive to
>  compile.
>  Is there a way to avoid building this "vom" component?
>  
>  thanks,
>  
>  --justin
>  
>  On 8/9/18 5:07 AM, Neale Ranns (nranns) wrote:
>  > Hi Justin,
>  >
>  > Not enough memory for the number of CPUs/parallel jobs.
>  >
>  > Here’s my vagrant VM on OSx:
>  >
>  > vagrant@ ~/vpp (master) $ free
>  >               total        used        free      shared  buff/cache   available
>  > Mem:        4146932     2320260      947376       16732      879296     1765252
>  > Swap:        524284       60656      463628
>  > vagrant@ ~/vpp (master) $ cat /proc/cpuinfo | grep "model name"
>  > model name : Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
>  > model name : Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
>  > model name : Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
>  >
>  >
>  > /neale
>  >
>  > -Original Message-
>  > From:  on behalf of "justin pecqueur via 
Lists.Fd.Io" 
>  > Reply-To: "Justin Pecqueur (jpecqueu)" 
>  > Date: Thursday, 9 August 2018 at 03:07
>  > To: "vpp-dev@lists.fd.io" 
>  > Cc: "vpp-dev@lists.fd.io" 
>  > Subject: [vpp-dev] Build: MacOS with Vagrant compiler crash in 
"vom"
>  >
>  >  Hi,
>  >
>  >  I'm trying to bring up VPP on OSX using Vagrant and I keep 
hitting the following crash:
>  >
>  >  > default:  Building vom in 
/vpp/build-root/build-vpp-native/vom 
>  >  > default: make[3]: Entering directory 
'/vpp/build-root/build-vpp-native/vom'
>  >  > default: Making all in vom
>  >  > default: make[4]: Entering directory 
'/vpp/build-root/build-vpp-native/vom/vom'
>  >  > default:   CXX  types.lo
>  >  > default:   CXX  arp_proxy_binding_cmds.lo
>  >  > default:   CXX  arp_proxy_binding.lo
>  >  > default:   CXX  arp_proxy_config_cmds.lo
>  >  > default:   CXX  arp_proxy_config.lo
>  >  > default:   CXX  bond_group_binding_cmds.lo
>  >  > default:   CXX  bond_group_binding.lo
>  >  > default:   CXX  bond_interface_cmds.lo
>  >  > default:   CXX  bond_interface.lo
>  >  > default:   CXX  bond_member.lo
>  >  > default:   CXX  bridge_domain_cmds.lo
>  >  > default:   CXX  bridge_domain.lo
>  >  > default:   CXX  bridge_domain_arp_entry.lo
>  >  > default:   CXX  bridge_domain_arp_entry_cmds.lo
>  >  > default:   CXX  bridge_domain_entry_cmds.lo
>  >  > default:   CXX  bridge_domain_entry.lo
>  >  > default:   CXX  client_db.lo
>  >  > default:   CXX  cmd.lo
>  >  > default:   CXX  connection.lo
>  >  > default:   CXX  dhcp_client_cmds.lo
>  >  > default:   CXX  dhcp_client.lo
>  >  > default:   CXX  hw_cmds.lo
>  >  > default

Re: [vpp-dev] Build: MacOS with Vagrant compiler crash in "vom"

2018-08-09 Thread Neale Ranns via Lists.Fd.Io

Hi Justin,

Note also you can control the number of parallel builds with:

# /proc/cpuinfo does not exist on platforms without a /proc and on some
# platforms, notably inside containers, it has no content. In those cases
# we assume there's 1 processor; we use 2*ncpu for the -j option.
# NB: GNU Make 4.2 will let us use '$(file </proc/cpuinfo)' instead of cat.

-Original Message-
From: "Justin Pecqueur (jpecqueu)" 
Date: Thursday, 9 August 2018 at 15:07
To: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io" 

Subject: Re: [vpp-dev] Build: MacOS with Vagrant compiler crash in "vom"

Hi Neale,

I suspected as much seeing as how it was occurring with some C++ code which 
can be very expensive to 
compile.
Is there a way to avoid building this "vom" component?

thanks,

--justin

On 8/9/18 5:07 AM, Neale Ranns (nranns) wrote:
> Hi Justin,
>
> Not enough memory for the number of CPUs/parallel jobs.
>
> Here’s my vagrant VM on OSx:
>
> vagrant@ ~/vpp (master) $ free
>               total        used        free      shared  buff/cache   available
> Mem:        4146932     2320260      947376       16732      879296     1765252
> Swap:        524284       60656      463628
> vagrant@ ~/vpp (master) $ cat /proc/cpuinfo | grep "model name"
> model name: Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
> model name: Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
> model name: Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
>
>
> /neale
>
> -Original Message-
> From:  on behalf of "justin pecqueur via 
Lists.Fd.Io" 
> Reply-To: "Justin Pecqueur (jpecqueu)" 
> Date: Thursday, 9 August 2018 at 03:07
> To: "vpp-dev@lists.fd.io" 
> Cc: "vpp-dev@lists.fd.io" 
> Subject: [vpp-dev] Build: MacOS with Vagrant compiler crash in "vom"
>
>  Hi,
>  
>  I'm trying to bring up VPP on OSX using Vagrant and I keep hitting 
the following crash:
>  
>  > default:  Building vom in 
/vpp/build-root/build-vpp-native/vom 
>  > default: make[3]: Entering directory 
'/vpp/build-root/build-vpp-native/vom'
>  > default: Making all in vom
>  > default: make[4]: Entering directory 
'/vpp/build-root/build-vpp-native/vom/vom'
>  > default:   CXX  types.lo
>  > default:   CXX  arp_proxy_binding_cmds.lo
>  > default:   CXX  arp_proxy_binding.lo
>  > default:   CXX  arp_proxy_config_cmds.lo
>  > default:   CXX  arp_proxy_config.lo
>  > default:   CXX  bond_group_binding_cmds.lo
>  > default:   CXX  bond_group_binding.lo
>  > default:   CXX  bond_interface_cmds.lo
>  > default:   CXX  bond_interface.lo
>  > default:   CXX  bond_member.lo
>  > default:   CXX  bridge_domain_cmds.lo
>  > default:   CXX  bridge_domain.lo
>  > default:   CXX  bridge_domain_arp_entry.lo
>  > default:   CXX  bridge_domain_arp_entry_cmds.lo
>  > default:   CXX  bridge_domain_entry_cmds.lo
>  > default:   CXX  bridge_domain_entry.lo
>  > default:   CXX  client_db.lo
>  > default:   CXX  cmd.lo
>  > default:   CXX  connection.lo
>  > default:   CXX  dhcp_client_cmds.lo
>  > default:   CXX  dhcp_client.lo
>  > default:   CXX  hw_cmds.lo
>  > default:   CXX  hw.lo
>  > default:   CXX  inspect.lo
>  > default:   CXX  interface_cmds.lo
>  > default:   CXX  interface.lo
>  > default:   CXX  interface_factory.lo
>  > default:   CXX  interface_ip6_nd_cmds.lo
>  > default: g++: internal compiler error: Killed (program cc1plus)
>  > default: Please submit a full bug report,
>  > default: with preprocessed source if appropriate.
>  > default: See  for 
instructions.
>  > default: Makefile:877: recipe for target 'interface.lo' failed
>  > default: make[4]: *** [interface.lo] Error 1
>  > default: make[4]: *** Waiting for unfinished jobs
>  > default: make[4]: Leaving directory 
'/vpp/build-root/build-vpp-native/vom/vom'
>  > default: Makefile:386: recipe for target 'all-recursive' failed
>  > default: make[3]: *** [all-recursive] Error 1
>  > default: make[3]: Leaving directory 
'/vpp/build-root/build-vpp-native/vom'
>  > default: Makefile:691: recipe for target 'vom-build' failed
>  > default: make[2]: *** [vom-build] Error 2
>  > default: make[2]: Leaving directory '/vpp/build-root'
>  > default: /vpp/build-data/platforms.mk:20: recipe for target 
'install-deb' failed
>  > default: make[1]: *** [i

Re: [vpp-dev] Build: MacOS with Vagrant compiler crash in "vom"

2018-08-09 Thread Neale Ranns via Lists.Fd.Io

Hi Justin,

In the master branch VOM is only built when building packages and doing ‘make 
test-ext’, i.e. for release and for its own testing. There’s no finer avoidance 
control.

/neale
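
For example, with the top-level make targets (current at the time of 
writing), a plain build never goes near VOM:

  make install-dep   # once, to install build dependencies
  make build # debug build of vpp only; no packages, so no VOM

whereas ‘make pkg-deb’ or ‘make test-ext’ will compile it.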


-Original Message-
From: "Justin Pecqueur (jpecqueu)" 
Date: Thursday, 9 August 2018 at 15:07
To: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io" 

Subject: Re: [vpp-dev] Build: MacOS with Vagrant compiler crash in "vom"

Hi Neale,

I suspected as much seeing as how it was occurring with some C++ code which 
can be very expensive to 
compile.
Is there a way to avoid building this "vom" component?

thanks,

--justin

On 8/9/18 5:07 AM, Neale Ranns (nranns) wrote:
> Hi Justin,
>
> Not enough memory for the number of CPUs/parallel jobs.
>
> Here’s my vagrant VM on OSx:
>
> vagrant@ ~/vpp (master) $ free
>               total        used        free      shared  buff/cache   available
> Mem:        4146932     2320260      947376       16732      879296     1765252
> Swap:        524284       60656      463628
> vagrant@ ~/vpp (master) $ cat /proc/cpuinfo | grep "model name"
> model name: Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
> model name: Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
> model name: Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
>
>
> /neale
>
> -Original Message-
> From:  on behalf of "justin pecqueur via 
Lists.Fd.Io" 
> Reply-To: "Justin Pecqueur (jpecqueu)" 
> Date: Thursday, 9 August 2018 at 03:07
> To: "vpp-dev@lists.fd.io" 
> Cc: "vpp-dev@lists.fd.io" 
> Subject: [vpp-dev] Build: MacOS with Vagrant compiler crash in "vom"
>
>  Hi,
>  
>  I'm trying to bring up VPP on OSX using Vagrant and I keep hitting 
the following crash:
>  
>  > default:  Building vom in 
/vpp/build-root/build-vpp-native/vom 
>  > default: make[3]: Entering directory 
'/vpp/build-root/build-vpp-native/vom'
>  > default: Making all in vom
>  > default: make[4]: Entering directory 
'/vpp/build-root/build-vpp-native/vom/vom'
>  > default:   CXX  types.lo
>  > default:   CXX  arp_proxy_binding_cmds.lo
>  > default:   CXX  arp_proxy_binding.lo
>  > default:   CXX  arp_proxy_config_cmds.lo
>  > default:   CXX  arp_proxy_config.lo
>  > default:   CXX  bond_group_binding_cmds.lo
>  > default:   CXX  bond_group_binding.lo
>  > default:   CXX  bond_interface_cmds.lo
>  > default:   CXX  bond_interface.lo
>  > default:   CXX  bond_member.lo
>  > default:   CXX  bridge_domain_cmds.lo
>  > default:   CXX  bridge_domain.lo
>  > default:   CXX  bridge_domain_arp_entry.lo
>  > default:   CXX  bridge_domain_arp_entry_cmds.lo
>  > default:   CXX  bridge_domain_entry_cmds.lo
>  > default:   CXX  bridge_domain_entry.lo
>  > default:   CXX  client_db.lo
>  > default:   CXX  cmd.lo
>  > default:   CXX  connection.lo
>  > default:   CXX  dhcp_client_cmds.lo
>  > default:   CXX  dhcp_client.lo
>  > default:   CXX  hw_cmds.lo
>  > default:   CXX  hw.lo
>  > default:   CXX  inspect.lo
>  > default:   CXX  interface_cmds.lo
>  > default:   CXX  interface.lo
>  > default:   CXX  interface_factory.lo
>  > default:   CXX  interface_ip6_nd_cmds.lo
>  > default: g++: internal compiler error: Killed (program cc1plus)
>  > default: Please submit a full bug report,
>  > default: with preprocessed source if appropriate.
>  > default: See  for 
instructions.
>  > default: Makefile:877: recipe for target 'interface.lo' failed
>  > default: make[4]: *** [interface.lo] Error 1
>  > default: make[4]: *** Waiting for unfinished jobs
>  > default: make[4]: Leaving directory 
'/vpp/build-root/build-vpp-native/vom/vom'
>  > default: Makefile:386: recipe for target 'all-recursive' failed
>  > default: make[3]: *** [all-recursive] Error 1
>  > default: make[3]: Leaving directory 
'/vpp/build-root/build-vpp-native/vom'
>  > default: Makefile:691: recipe for target 'vom-build' failed
>  > default: make[2]: *** [vom-build] Error 2
>  > default: make[2]: Leaving directory '/vpp/build-root'
>  > default: /vpp/build-data/platforms.mk:20: recipe for target 
'install-deb' failed
>  > default: make[1]: *** [install-deb] Error 1
>  > default: make[1]: Leaving directory '/v

Re: [vpp-dev] Build: MacOS with Vagrant compiler crash in "vom"

2018-08-09 Thread Neale Ranns via Lists.Fd.Io
Hi Justin,

Not enough memory for the number of CPUs/parallel jobs.

Here’s my vagrant VM on OSx:

vagrant@ ~/vpp (master) $ free
              total        used        free      shared  buff/cache   available
Mem:        4146932     2320260      947376       16732      879296     1765252
Swap:        524284       60656      463628
vagrant@ ~/vpp (master) $ cat /proc/cpuinfo | grep "model name"
model name  : Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
model name  : Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
model name  : Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz


/neale

-Original Message-
From:  on behalf of "justin pecqueur via Lists.Fd.Io" 

Reply-To: "Justin Pecqueur (jpecqueu)" 
Date: Thursday, 9 August 2018 at 03:07
To: "vpp-dev@lists.fd.io" 
Cc: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Build: MacOS with Vagrant compiler crash in "vom"

Hi,

I'm trying to bring up VPP on OSX using Vagrant and I keep hitting the 
following crash:

> default:  Building vom in /vpp/build-root/build-vpp-native/vom 

> default: make[3]: Entering directory 
'/vpp/build-root/build-vpp-native/vom'
> default: Making all in vom
> default: make[4]: Entering directory 
'/vpp/build-root/build-vpp-native/vom/vom'
> default:   CXX  types.lo
> default:   CXX  arp_proxy_binding_cmds.lo
> default:   CXX  arp_proxy_binding.lo
> default:   CXX  arp_proxy_config_cmds.lo
> default:   CXX  arp_proxy_config.lo
> default:   CXX  bond_group_binding_cmds.lo
> default:   CXX  bond_group_binding.lo
> default:   CXX  bond_interface_cmds.lo
> default:   CXX  bond_interface.lo
> default:   CXX  bond_member.lo
> default:   CXX  bridge_domain_cmds.lo
> default:   CXX  bridge_domain.lo
> default:   CXX  bridge_domain_arp_entry.lo
> default:   CXX  bridge_domain_arp_entry_cmds.lo
> default:   CXX  bridge_domain_entry_cmds.lo
> default:   CXX  bridge_domain_entry.lo
> default:   CXX  client_db.lo
> default:   CXX  cmd.lo
> default:   CXX  connection.lo
> default:   CXX  dhcp_client_cmds.lo
> default:   CXX  dhcp_client.lo
> default:   CXX  hw_cmds.lo
> default:   CXX  hw.lo
> default:   CXX  inspect.lo
> default:   CXX  interface_cmds.lo
> default:   CXX  interface.lo
> default:   CXX  interface_factory.lo
> default:   CXX  interface_ip6_nd_cmds.lo
> default: g++: internal compiler error: Killed (program cc1plus)
> default: Please submit a full bug report,
> default: with preprocessed source if appropriate.
> default: See  for 
instructions.
> default: Makefile:877: recipe for target 'interface.lo' failed
> default: make[4]: *** [interface.lo] Error 1
> default: make[4]: *** Waiting for unfinished jobs
> default: make[4]: Leaving directory 
'/vpp/build-root/build-vpp-native/vom/vom'
> default: Makefile:386: recipe for target 'all-recursive' failed
> default: make[3]: *** [all-recursive] Error 1
> default: make[3]: Leaving directory 
'/vpp/build-root/build-vpp-native/vom'
> default: Makefile:691: recipe for target 'vom-build' failed
> default: make[2]: *** [vom-build] Error 2
> default: make[2]: Leaving directory '/vpp/build-root'
> default: /vpp/build-data/platforms.mk:20: recipe for target 
'install-deb' failed
> default: make[1]: *** [install-deb] Error 1
> default: make[1]: Leaving directory '/vpp/build-root'
> default: Makefile:473: recipe for target 'pkg-deb' failed
> default: make: *** [pkg-deb] Error 2
> The SSH command responded with a non-zero exit status. Vagrant
> assumes that this means the command failed. The output for this command
> should be in the log above. Please read the output to determine what
> went wrong.
Any thoughts?

thanks,

--justin






Re: [vpp-dev] Large memory spike during make verify on ARM machine ThunderX

2018-08-03 Thread Neale Ranns via Lists.Fd.Io

The C++ language bindings are all templates. It’s the VOM compilation (that 
uses those templates) that consumes the memory. VOM is already in extras and 
these days only compiled if you do ‘make test-ext’ or ‘make ’

/neale


From: Ole Troan 
Date: Friday, 3 August 2018 at 12:51
To: Juraj Linkeš 
Cc: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io" 

Subject: Re: [vpp-dev] Large memory spike during make verify on ARM machine 
ThunderX

Move the C++ language binding to extras?

Ole

On 3 Aug 2018, at 12:45, Juraj Linkeš 
mailto:juraj.lin...@pantheon.tech>> wrote:
Hi Neale,

Yea they do require a lot of memory - the same is true for x86. Is there a way 
to specify the max number of these? Or is that done with -j?

Would it be worthwhile to investigate if it's possible to reduce the memory 
requirements of these?

Is there a way to clear the cache so that I could run make verify back to back 
without deleting and recloning the vpp repo? ccache -C didn't work for me.

Thanks,
Juraj

From: Neale Ranns (nranns) [mailto:nra...@cisco.com]
Sent: Thursday, August 2, 2018 11:11 AM
To: Juraj Linkeš 
mailto:juraj.lin...@pantheon.tech>>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Large memory spike during make verify on ARM machine 
ThunderX

Hi Juraj,

I couldn’t say how much each compile ‘should’ use, but it has been noted in the 
past that these template heavy C++ files do require a lot of memory to compile. 
With the many cores you have, then that’s a lot in total.
‘make wipe’ does not clear the ccache, so any subsequent builds will require 
less memory because the compile is skipped.

/neale

From: mailto:vpp-dev@lists.fd.io>> on behalf of Juraj 
Linkeš mailto:juraj.lin...@pantheon.tech>>
Date: Thursday, 2 August 2018 at 10:10
To: "Neale Ranns (nranns)" mailto:nra...@cisco.com>>, 
"vpp-dev@lists.fd.io" 
mailto:vpp-dev@lists.fd.io>>
Subject: Re: [vpp-dev] Large memory spike during make verify on ARM machine 
ThunderX

Hi Neale,

I'm not specifying -j, but I see a lot of processes running in parallel when 
the spike is happening. The processes are attached. They utilized most of 96 
available cores and most of them used more than 400MB - is that how much they 
should be using?

Also, here's the gcc version on the box:
gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/aarch64-linux-gnu/5/lto-wrapper
Target: aarch64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 
5.4.0-6ubuntu1~16.04.4' --with-bugurl=file:///usr/share/doc/gcc-5/README.Bugs 
--enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr 
--program-suffix=-5 --enable-shared --enable-linker-build-id 
--libexecdir=/usr/lib --without-included-gettext --enable-threads=posix 
--libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu 
--enable-libstdcxx-debug --enable-libstdcxx-time=yes 
--with-default-libstdcxx-abi=new --enable-gnu-unique-object 
--disable-libquadmath --enable-plugin --with-system-zlib 
--disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo 
--with-java-home=/usr/lib/jvm/java-1.5.0-gcj-5-arm64/jre --enable-java-home 
--with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-5-arm64 
--with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-5-arm64 
--with-arch-directory=aarch64 --with-ecj-jar=/usr/share/java/eclipse-ecj.jar 
--enable-multiarch --enable-fix-cortex-a53-843419 --disable-werror 
--enable-checking=release --build=aarch64-linux-gnu --host=aarch64-linux-gnu 
--target=aarch64-linux-gnu
Thread model: posix
gcc version 5.4.0 20160609 (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.4)

Thanks,
Juraj

From: Neale Ranns (nranns) [mailto:nra...@cisco.com]
Sent: Wednesday, August 1, 2018 5:09 PM
To: Juraj Linkeš 
mailto:juraj.lin...@pantheon.tech>>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Large memory spike during make verify on ARM machine 
ThunderX

Hi Juraj,

How many parallel compiles do you have? What’s the j factor

/neale



From: mailto:vpp-dev@lists.fd.io>> on behalf of Juraj 
Linkeš mailto:juraj.lin...@pantheon.tech>>
Date: Wednesday, 1 August 2018 at 16:59
To: "vpp-dev@lists.fd.io" 
mailto:vpp-dev@lists.fd.io>>
Subject: [vpp-dev] Large memory spike during make verify on ARM machine ThunderX

Hi vpp-devs,

I noticed that during a specific portion of make verify build on an ARM 
ThunderX machine the build consumes a lot of memory - around 25GB. I can 
identify the spot in the logs:
Jul 31 03:12:48   CXX  gbp_contract.lo

25GB memory hog

Jul 31 03:16:13   CXXLD    libvom.la

but not much else. I created a ticket which 
contains some more information. I didn't see this memory spike when trying to 
reproduce the behavior on my x86 laptop. Does anyone have any idea what could 
be the cause or how to debug this?

Thanks,
Juraj

Re: [vpp-dev] Large memory spike during make verify on ARM machine ThunderX

2018-08-03 Thread Neale Ranns via Lists.Fd.Io
Hi Juraj,

Answers/comments inline with [nr]

Regards,
neale

From: Juraj Linkeš 
Date: Friday, 3 August 2018 at 12:45
To: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io" 

Subject: RE: [vpp-dev] Large memory spike during make verify on ARM machine 
ThunderX

Hi Neale,

Yea they do require a lot of memory - the same is true for x86. Is there a way 
to specify the max number of these? Or is that done with -j?

[nr] The j factor for a build is determined based on the number of cores your 
box has.
From build-root/Makefile

# /proc/cpuinfo does not exist on platforms without a /proc and on some
# platforms, notably inside containers, it has no content. In those cases
# we assume there's 1 processor; we use 2*ncpu for the -j option.
# NB: GNU Make 4.2 will let us use '$(file </proc/cpuinfo)' instead of cat.

From: Neale Ranns (nranns) [mailto:nra...@cisco.com]
Sent: Thursday, August 2, 2018 11:11 AM
To: Juraj Linkeš ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Large memory spike during make verify on ARM machine 
ThunderX

Hi Juraj,

I couldn’t say how much each compile ‘should’ use, but it has been noted in the 
past that these template heavy C++ files do require a lot of memory to compile. 
With the many cores you have, then that’s a lot in total.
‘make wipe’ does not clear the ccache, so any subsequent builds will require 
less memory because the compile is skipped.

/neale

From: mailto:vpp-dev@lists.fd.io>> on behalf of Juraj 
Linkeš mailto:juraj.lin...@pantheon.tech>>
Date: Thursday, 2 August 2018 at 10:10
To: "Neale Ranns (nranns)" mailto:nra...@cisco.com>>, 
"vpp-dev@lists.fd.io" 
mailto:vpp-dev@lists.fd.io>>
Subject: Re: [vpp-dev] Large memory spike during make verify on ARM machine 
ThunderX

Hi Neale,

I'm not specifying -j, but I see a lot of processes running in parallel when 
the spike is happening. The processes are attached. They utilized most of 96 
available cores and most of them used more than 400MB - is that how much they 
should be using?

Also, here's the gcc version on the box:
gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/aarch64-linux-gnu/5/lto-wrapper
Target: aarch64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 
5.4.0-6ubuntu1~16.04.4' --with-bugurl=file:///usr/share/doc/gcc-5/README.Bugs 
--enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr 
--program-suffix=-5 --enable-shared --enable-linker-build-id 
--libexecdir=/usr/lib --without-included-gettext --enable-threads=posix 
--libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu 
--enable-libstdcxx-debug --enable-libstdcxx-time=yes 
--with-default-libstdcxx-abi=new --enable-gnu-unique-object 
--disable-libquadmath --enable-plugin --with-system-zlib 
--disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo 
--with-java-home=/usr/lib/jvm/java-1.5.0-gcj-5-arm64/jre --enable-java-home 
--with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-5-arm64 
--with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-5-arm64 
--with-arch-directory=aarch64 --with-ecj-jar=/usr/share/java/eclipse-ecj.jar 
--enable-multiarch --enable-fix-cortex-a53-843419 --disable-werror 
--enable-checking=release --build=aarch64-linux-gnu --host=aarch64-linux-gnu 
--target=aarch64-linux-gnu
Thread model: posix
gcc version 5.4.0 20160609 (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.4)

Thanks,
Juraj

From: Neale Ranns (nranns) [mailto:nra...@cisco.com]
Sent: Wednesday, August 1, 2018 5:09 PM
To: Juraj Linkeš 
mailto:juraj.lin...@pantheon.tech>>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Large memory spike during make verify on ARM machine 
ThunderX

Hi Juraj,

How many parallel compiles do you have? What’s the j factor

/neale



From: mailto:vpp-dev@lists.fd.io>> on behalf of Juraj 
Linkeš mailto:juraj.lin...@pantheon.tech>>
Date: Wednesday, 1 August 2018 at 16:59
To: "vpp-dev@lists.fd.io" 
mailto:vpp-dev@lists.fd.io>>
Subject: [vpp-dev] Large memory spike during make verify on ARM machine ThunderX

Hi vpp-devs,

I noticed that during a specific portion of make verify build on an ARM 
ThunderX machine the build consumes a lot of memory - around 25GB. I can 
identify the spot in the logs:
Jul 31 03:12:48   CXX  gbp_contract.lo

25GB memory hog

Jul 31 03:16:13   CXXLD    libvom.la

but not much else. I created a ticket which 
contains some more information. I didn't see this memory spike when trying to 
reproduce the behavior on my x86 laptop. Does anyone have any idea what could 
be the cause or how to debug this?

Thanks,
Juraj


Re: [vpp-dev] Large memory spike during make verify on ARM machine ThunderX

2018-08-02 Thread Neale Ranns via Lists.Fd.Io
Hi Juraj,

I couldn’t say how much each compile ‘should’ use, but it has been noted in the 
past that these template heavy C++ files do require a lot of memory to compile. 
With the many cores you have, then that’s a lot in total.
‘make wipe’ does not clear the ccache, so any subsequent builds will require 
less memory because the compile is skipped.

/neale

From:  on behalf of Juraj Linkeš 

Date: Thursday, 2 August 2018 at 10:10
To: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io" 

Subject: Re: [vpp-dev] Large memory spike during make verify on ARM machine 
ThunderX

Hi Neale,

I'm not specifying -j, but I see a lot of processes running in parallel when 
the spike is happening. The processes are attached. They utilized most of 96 
available cores and most of them used more than 400MB - is that how much they 
should be using?

Also, here's the gcc version on the box:
gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/aarch64-linux-gnu/5/lto-wrapper
Target: aarch64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 
5.4.0-6ubuntu1~16.04.4' --with-bugurl=file:///usr/share/doc/gcc-5/README.Bugs 
--enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr 
--program-suffix=-5 --enable-shared --enable-linker-build-id 
--libexecdir=/usr/lib --without-included-gettext --enable-threads=posix 
--libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu 
--enable-libstdcxx-debug --enable-libstdcxx-time=yes 
--with-default-libstdcxx-abi=new --enable-gnu-unique-object 
--disable-libquadmath --enable-plugin --with-system-zlib 
--disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo 
--with-java-home=/usr/lib/jvm/java-1.5.0-gcj-5-arm64/jre --enable-java-home 
--with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-5-arm64 
--with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-5-arm64 
--with-arch-directory=aarch64 --with-ecj-jar=/usr/share/java/eclipse-ecj.jar 
--enable-multiarch --enable-fix-cortex-a53-843419 --disable-werror 
--enable-checking=release --build=aarch64-linux-gnu --host=aarch64-linux-gnu 
--target=aarch64-linux-gnu
Thread model: posix
gcc version 5.4.0 20160609 (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.4)

Thanks,
Juraj

From: Neale Ranns (nranns) [mailto:nra...@cisco.com]
Sent: Wednesday, August 1, 2018 5:09 PM
To: Juraj Linkeš ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Large memory spike during make verify on ARM machine 
ThunderX

Hi Juraj,

How many parallel compiles do you have? What’s the j factor

/neale



From: mailto:vpp-dev@lists.fd.io>> on behalf of Juraj 
Linkeš mailto:juraj.lin...@pantheon.tech>>
Date: Wednesday, 1 August 2018 at 16:59
To: "vpp-dev@lists.fd.io" 
mailto:vpp-dev@lists.fd.io>>
Subject: [vpp-dev] Large memory spike during make verify on ARM machine ThunderX

Hi vpp-devs,

I noticed that during a specific portion of make verify build on an ARM 
ThunderX machine the build consumes a lot of memory - around 25GB. I can 
identify the spot in the logs:
Jul 31 03:12:48   CXX  gbp_contract.lo

25GB memory hog

Jul 31 03:16:13   CXXLD    libvom.la

but not much else. I created a ticket which 
contains some more information. I didn't see this memory spike when trying to 
reproduce the behavior on my x86 laptop. Does anyone have any idea what could 
be the cause or how to debug this?

Thanks,
Juraj


Re: [vpp-dev] Large memory spike during make verify on ARM machine ThunderX

2018-08-01 Thread Neale Ranns via Lists.Fd.Io
Hi Juraj,

How many parallel compiles do you have? What’s the j factor

/neale



From:  on behalf of Juraj Linkeš 

Date: Wednesday, 1 August 2018 at 16:59
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Large memory spike during make verify on ARM machine ThunderX

Hi vpp-devs,

I noticed that during a specific portion of make verify build on an ARM 
ThunderX machine the build consumes a lot of memory - around 25GB. I can 
identify the spot in the logs:
Jul 31 03:12:48   CXX  gbp_contract.lo

25GB memory hog

Jul 31 03:16:13   CXXLD    libvom.la

but not much else. I created a ticket which 
contains some more information. I didn't see this memory spike when trying to 
reproduce the behavior on my x86 laptop. Does anyone have any idea what could 
be the cause or how to debug this?

Thanks,
Juraj


Re: [SUSPICIOUS] Re: [SUSPICIOUS] [vpp-dev] L3VPN in VPP

2018-08-01 Thread Neale Ranns via Lists.Fd.Io
Hi,

You probably want:
  ip route add 192.168.23.3/32 via TenGigabitEthernet4/0/1 out-labels imp-null

given that 192.168.23.3 is directly connected. We talked before about why 
labels for resolving routes are needed. Here it is again ;)

“
If you want to resolve a recursive path that has outgoing labels, ie.
  via 1.1.1.1 out-labels 33

then the resolving route in the FIB MUST also have out-labels. This is because 
you are in effect layering LSPs (the tunnel is the upper/inner layer and the 
route the lower/outer layer). The out-label for the tunnel, provided by the 
tunnel egress device, is not necessarily directly connected to the tunnel 
ingress device. Hence, if the route did not have an out label then a device in 
between the two (that is in the lower layer) would see the label for the 
tunnel/upper layer and mis-forward.
If your two devices are directly connected and so the problem above cannot 
occur, you still need an out-label for the route, but one describes such 
direct connectivity by giving the route an implicit-null out-label, i.e.
   ip route 1.1.1.1/32  via 192.168.1.1 GigabitEthernet13/0/0 out-label imp-null

“

where you replace ‘tunnel’ with ‘recursive route’.

Regards,
neale


From:  on behalf of Gulakh 
Date: Wednesday, 1 August 2018 at 14:03
To: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io" 

Subject: [SUSPICIOUS] Re: [SUSPICIOUS] [vpp-dev] L3VPN in VPP

Yes, that's right, the problem is fixed. I should have inserted this rule: "ip 
route add 192.168.23.3/32 via TenGigabitEthernet4/0/1 out-labels 50"

But why doesn't it work if I don't have an MPLS label for 192.168.23.3/32? 
Suppose that the core of the network is pure IP, no MPLS. I know that in 
L3VPN we need an MPLS-enabled core, but for the sake of IP resolution in another 
FIB, why does it need a second label, i.e. an MPLS label?

On Tue, Jul 31, 2018 at 5:54 PM, Neale Ranns (nranns) 
mailto:nra...@cisco.com>> wrote:
Hi,

Please show me:
  sh ip fib index 1 5.5.5.5/32
and
  sh ip fib index 0 192.168.23.3/32

I suspect you are missing an out-label on the latter.

/neale

From: mailto:vpp-dev@lists.fd.io>> on behalf of Gulakh 
mailto:holoogul...@gmail.com>>
Date: Tuesday, 31 July 2018 at 14:53
To: "vpp-dev@lists.fd.io" 
mailto:vpp-dev@lists.fd.io>>
Subject: [SUSPICIOUS] [vpp-dev] L3VPN in VPP

It seems that the Next hop IP resolution does not work correctly:
Here is my Configuration:

# set interface state GigabitEthernet4/0/0 up
# set interface state GigabitEthernet4/0/1 up

# ip table add 1   (create Customer VRF)

# set interface ip table GigabitEthernet 4/0/0 1  (Customer VRF)

# set interface ip address GigabitEthernet4/0/0 192.168.12.2/24

Re: [SUSPICIOUS] [vpp-dev] L3VPN in VPP

2018-07-31 Thread Neale Ranns via Lists.Fd.Io
Hi,

Please show me:
  sh ip fib index 1 5.5.5.5/32
and
  sh ip fib index 0 192.168.23.3/32

I suspect you are missing an out-label on the latter.

/neale

From:  on behalf of Gulakh 
Date: Tuesday, 31 July 2018 at 14:53
To: "vpp-dev@lists.fd.io" 
Subject: [SUSPICIOUS] [vpp-dev] L3VPN in VPP

It seems that the Next hop IP resolution does not work correctly:
Here is my Configuration:

# set interface state GigabitEthernet4/0/0 up
# set interface state GigabitEthernet4/0/1 up

# ip table add 1   (create Customer VRF)

# set interface ip table GigabitEthernet 4/0/0 1  (Customer VRF)

# set interface ip address GigabitEthernet4/0/0 192.168.12.2/24  (Toward Customer)
# set interface ip address GigabitEthernet4/0/1 192.168.23.2/24  (Toward Core)

*** Now I want to add one of the Customer's routes into its VRF:
# ip route add 5.5.5.5/32 table 1 via 192.168.23.3 next-hop-table 0 out-labels 40

in which: 5.5.5.5/32 is the Customer's other site somewhere else
   table 1 is the customer's VRF
   192.168.23.3 is the next hop, which is in the core -> to be resolved 
by the Global VRF
   next-hop-table 0 is the Global VRF to resolve 192.168.23.3
   out-labels 40 is the VPN Label


Now when I see the VRF 1 ("show ip fib table 1"), here is the output for 5.5.5.5/32:

ipv4-VRF:1, fib_index:1, flow hash:[src dst sport dport proto ] 
locks:[src:CLI:2, ]
..
...

192.168.12.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:14 buckets:1 uRPF:13 to:[0:0]]
[0] [@4]: ipv4-glean: GigabitEthernet4/0/0: mtu:9000 
a0369f23aa780806
5.5.5.5/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:24 buckets:1 uRPF:25 to:[0:0]]
[0] [@0]: dpo-drop ip4


Here is the VRF 0:

ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] 
locks:[src:plugin-hi:2, src:default-route:1, ]
..
...

192.168.23.0/24

Re: [vpp-dev] L3VPN in VPP

2018-07-31 Thread Neale Ranns via Lists.Fd.Io

Hi,

You are correct on all points.

regards
/neale

From: Holoo Gulakh 
Date: Tuesday, 31 July 2018 at 12:19
To: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io" 

Subject: Re: [vpp-dev] L3VPN in VPP

Hi,
In order to have both VPLS and L3VPN work concurrently in a PE router, I guess 
that I should do the following things:

1- Regardless of the type of service, whether it's VPLS, L3VPN or none (e.g. 
simple connectivity), the core of the network works the same; that is, I 
should insert everything about the core of the network in the Global VRF, i.e. 
IP FIB 0 and MPLS FIB 0 in VPP.

The above step is done before even providing any services.

2- For the PW-Label of VPLS, the task is delivered to the mpls tunnel to put 
the PW-Label on the packet (i.e. mpls tunnel add l2-only via <PE-TARGET> 
out-labels <PW-LABEL>); then, to resolve the PE-TARGET IP address, the resolution 
is done by checking the Global VRF, which contains information about the core, 
and at that stage the MPLS label is added to the packet.

   For the VPN-Label of L3VPN, the task of putting it on the packet is 
delivered to the VRF associated with the incoming interface (i.e. # ip route 
add <prefix> table <VRF-ID> via <NEXT-HOP> out-labels <VPN-LABEL>), 
and then to resolve the NEXT-HOP IP address the Global VRF must be checked, since 
the routing information about the core is stored in the Global VRF (i.e. IP FIB 
0 and MPLS FIB 0 in VPP);
but the problem is that the route stored in the customer's VRF must use the Global 
VRF in order to resolve its NEXT-HOP.
Searching VPP Doc, I confronted with a parameter that I can use to select which 
VRF to use to resolve the next hop.
so the # command must be modified by (ip route add  table 
 via   next-hop-table  out-labels 
) and then during the resolution of the PE-TARGET IP address the 
MPLS Labels is added to the packet.

Question: Am I right?

Excuse me for my questions ... most of the material to be found on the Internet is
about the Cisco commands used to run these services, and it gives me little insight
into what to do with lower-level configurations.
Thanks in advance
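
For reference, the ingress-PE pieces described above combine into a sketch like the
following (using the illustrative addresses and labels from this thread; imp-null
assumes the core next hop is directly connected):

  ip table add 1
  set interface ip table GigabitEthernet4/0/0 1
  ip route add 5.5.5.5/32 table 1 via 192.168.23.3 next-hop-table 0 out-labels 40
  ip route add 192.168.23.3/32 via 192.168.23.3 GigabitEthernet4/0/1 out-labels imp-null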

On Mon, Jul 30, 2018 at 1:31 PM, Neale Ranns (nranns) <nra...@cisco.com> wrote:
Hi,

Answers inline marked [nr]

/neale

From: <vpp-dev@lists.fd.io> on behalf of Gulakh <holoogul...@gmail.com>
Date: Saturday, 28 July 2018 at 13:45
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: [vpp-dev] L3VPN in VPP

Hi,
I have setup a VPLS scenario successfully and now I want to setup a L3VPN 
scenario in VPP (L3VPN topology is in attachment).

My configuration for VPLS is somehow like this link.

As far as I have searched the Internet, L3VPN has a VPN Label that I think is somewhat
like the PW Label in VPLS, with the difference that the VPN Label is used to select a
VRF and the PW Label is used to select an mpls tunnel (hence a bridge).

[nr] other label allocation schemes are available ☺

===
Part1:
I guess I should configure the source PE as follow:

 In VPLS: mpls tunnel add l2-only via <PE-TARGET> out-labels <PW-LABEL>
   ip route add <PE-TARGET> via <NEXT-HOP> out-labels <MPLS-LABEL>

 In L3VPN: CMD1 ??? (insert in customer VRF)
   ip route add <PE-TARGET> via <NEXT-HOP> out-labels <MPLS-LABEL>   (insert in GLOBAL VRF)

I don't know what command I should use for CMD1 ... This command must add the
VPN-LABEL, which is selected based on the customer's VRF, to the packet, and then
look up the GLOBAL VRF to push the MPLS label; just like VPLS, where the mpls tunnel
first adds a PW Label and then, during destination IP resolution, the MPLS label is
added to the packet.

Question1: Am I right about the configurations in the source PE?


[nr] ip route table <VRF-ID> <PREFIX> via <NEXT-HOP> out-labels <VPN-LABEL>


you could use PREFIX=0.0.0.0/0 or many more specifics

and your route to the PE-TARGET would be better as a non-recursive route (i.e. 
if it is learned via e.g. OSPF and this is not an inter-AS option C) otherwise 
you’ll need another labelled route for the next-hop

non-recursive means specify the next-hop and interface.



Part2:
I guess I should configure the target PE as follow:

 In VPLS: mpls local-label add eos <PW-LABEL> via l2-input-on <INTERFACE>

 In L3VPN: mpls local-label add eos <VPN-LABEL> via ip4-lookup-in-table <VRF-ID>   (insert in GLOBAL VRF)

Question2: Am I right about the configurations in the target PE?

[nr] Yes. The mpls label is added to the MPLS global table, i.e. there’s no
‘insert in global-VRF’, since the instruction associated with the label is to look up
the exposed IP destination address in the customer’s VRF.

=
Part3:
In order to fill the customer's VRF, I should use the control plane's Route Target (RT)
to select the VRF ID and then use the command below to fill the VRF:

  ip route add <PREFIX> via <NEXT-HOP> <INTERFACE> table <VRF-ID>

Question3: Am I right?

[nr] yes.

thanks in advance




Re: [vpp-dev] L3VPN in VPP

2018-07-30 Thread Neale Ranns via Lists.Fd.Io
Hi,

Answers inline marked [nr]

/neale

From:  on behalf of Gulakh 
Date: Saturday, 28 July 2018 at 13:45
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] L3VPN in VPP

Hi,
I have setup a VPLS scenario successfully and now I want to setup a L3VPN 
scenario in VPP (L3VPN topology is in attachment).

My configuration for VPLS is somehow like this link.

As far as I have searched the Internet, L3VPN has a VPN Label that I think is somewhat
like the PW Label in VPLS, with the difference that the VPN Label is used to select a
VRF and the PW Label is used to select an mpls tunnel (hence a bridge).

[nr] other label allocation schemes are available ☺

===
Part1:
I guess I should configure the source PE as follow:

 In VPLS: mpls tunnel add l2-only via <PE-TARGET> out-labels <PW-LABEL>
   ip route add <PE-TARGET> via <NEXT-HOP> out-labels <MPLS-LABEL>

 In L3VPN: CMD1 ??? (insert in customer VRF)
   ip route add <PE-TARGET> via <NEXT-HOP> out-labels <MPLS-LABEL>   (insert in GLOBAL VRF)

I don't know what command I should use for CMD1 ... This command must add the
VPN-LABEL, which is selected based on the customer's VRF, to the packet, and then
look up the GLOBAL VRF to push the MPLS label; just like VPLS, where the mpls tunnel
first adds a PW Label and then, during destination IP resolution, the MPLS label is
added to the packet.

Question1: Am I right about the configurations in the source PE?


[nr] ip route table <VRF-ID> <PREFIX> via <NEXT-HOP> out-labels <VPN-LABEL>


you could use PREFIX=0.0.0.0/0 or many more specifics

and your route to the PE-TARGET would be better as a non-recursive route (i.e. 
if it is learned via e.g. OSPF and this is not an inter-AS option C) otherwise 
you’ll need another labelled route for the next-hop

non-recursive means specify the next-hop and interface.
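
For instance, a non-recursive labelled route would be along these lines (the
loopback 10.10.10.3 and label 30 are hypothetical, purely for illustration):

  ip route add 10.10.10.3/32 via 192.168.23.3 GigabitEthernet4/0/1 out-labels 30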



Part2:
I guess I should configure the target PE as follow:

 In VPLS: mpls local-label add eos <PW-LABEL> via l2-input-on <INTERFACE>

 In L3VPN: mpls local-label add eos <VPN-LABEL> via ip4-lookup-in-table <VRF-ID>   (insert in GLOBAL VRF)

Question2: Am I right about the configurations in the target PE?

[nr] Yes. The mpls label is added to the MPLS global table, i.e. there’s no
‘insert in global-VRF’, since the instruction associated with the label is to look up
the exposed IP destination address in the customer’s VRF.
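
As a concrete sketch with illustrative values (VPN label 40 and customer VRF 1),
the egress-PE command would be:

  mpls local-label add eos 40 via ip4-lookup-in-table 1

so that labelled packets arriving with label 40 have their exposed IP destination
looked up in table 1.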

=
Part3:
In order to fill the customer's VRF, I should use the control plane's Route Target (RT)
to select the VRF ID and then use the command below to fill the VRF:

  ip route add <PREFIX> via <NEXT-HOP> <INTERFACE> table <VRF-ID>

Question3: Am I right?

[nr] yes.
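
For example, a customer route learned from the control plane and installed into
VRF 1 might look like this (prefix and next-hop purely illustrative):

  ip route add 6.6.6.0/24 via 192.168.12.1 GigabitEthernet4/0/0 table 1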

thanks in advance


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#9970): https://lists.fd.io/g/vpp-dev/message/9970
Mute This Topic: https://lists.fd.io/mt/23840657/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP_STABLE_1710 Crash during IP Address Addition #vnet

2018-07-25 Thread Neale Ranns via Lists.Fd.Io

Hi Jitendra,

Addresses in overlapping subnets are not supported by VPP. The configuration you
give below is not valid.

/neale

From:  on behalf of "sainijite...@gmail.com" 

Date: Tuesday, 24 July 2018 at 07:57
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] VPP_STABLE_1710 Crash during IP Address Addition #vnet

Hi Neale,

Thank you for the reply. I tried this patch, but it blocks valid configuration as well.
For example -
set interface ip address EthernetSwitch2/0/0 10.10.10.33/24  <--- valid address
create sub EthernetSwitch2/0/0 8
set interface ip address EthernetSwitch2/0/0.8 10.10.10.66/24   <--- valid address, but not allowed since it is in the same subnet

Also, when I compare the file "ip4_forward.c" from this patch against the one in
release 1804, the changes from this patch are not found in 1804.
Release 1804 does not block overlapping subnets, and the crash is fixed as well.
I am unable to pin down the exact changes that went in for this.

Thanks
Jitendra



-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#9925): https://lists.fd.io/g/vpp-dev/message/9925
Mute This Topic: https://lists.fd.io/mt/23791651/21656
Mute #vnet: https://lists.fd.io/mk?hashtag=vnet&subid=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP_STABLE_1710 Crash during IP Address Addition #vnet

2018-07-23 Thread Neale Ranns via Lists.Fd.Io
Hi,

I expected it was ‘fixed’ when we explicitly disallowed overlapping subnets:
  https://gerrit.fd.io/r/#/c/8057/

/neale


From:  on behalf of "sainijite...@gmail.com" 

Date: Monday, 23 July 2018 at 12:42
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] VPP_STABLE_1710 Crash during IP Address Addition #vnet


[Edited Message Follows]
Hello,

We are using vpp stable release 1710 and encountered a vpp crash in the following
scenario (IP address addition) -
1. configure an IP address
2. explicitly configure the network / broadcast IP on the same interface
The same crash behavior is observed with VPP release 1801, but the issue is not seen
in release 1804.
Since the issue was fixed between releases 1801 and 1804, I wanted to find out which
code change exactly fixes it.
Could someone please help with some starting pointers here.
In summary -
if an issue is fixed in some later version of the vpp build, how do we track down
the changes that fixed it?

Crash scenario example  and backtrace -

-
create sub EthernetSwitch2/0/0 8
set interface ip address EthernetSwitch2/0/0.8 10.10.10.128/24 <--- valid address
set interface ip address EthernetSwitch2/0/0 10.10.10.0/24 <--- network address <<< crash

-
-
create sub EthernetSwitch2/0/0 8
set interface ip address EthernetSwitch2/0/0.8 10.10.10.128/24 <--- valid address
set interface ip address EthernetSwitch2/0/0 10.10.10.255/24 <--- broadcast address <<< crash

-
-
create sub EthernetSwitch2/0/0 8
set interface ip address EthernetSwitch2/0/0.8 10.10.10.0/24 <--- network address
set interface ip address EthernetSwitch2/0/0 10.10.10.255/24 <--- broadcast address <<< crash

-

DBGvpp# set interface ip address VirtualFunctionEthernet0/5/0 10.10.10.128/24
DBGvpp# create sub VirtualFunctionEthernet0/5/0 8
VirtualFunctionEthernet0/5/0.8
DBGvpp# set interface ip address VirtualFunctionEthernet0/5/0.8 10.10.10.0/24
0: /home/vagrant/jisaini/fdio/vpp/build-data/../src/vnet/fib/fib_path.c:2085 (fib_path_get_adj) assertion `dpo_is_adj(&path->fp_dpo)' fails

Thread 1 "vpp_main" received signal SIGABRT, Aborted.
0x7601b428 in __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:54
54 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  0x7601b428 in __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:54
#1  0x7601d02a in __GI_abort () at abort.c:89
#2  0x00406a34 in os_panic () at 
/home/vagrant/jisaini/fdio/vpp/build-data/../src/vpp/vnet/main.c:268
#3  0x768018ff in debugger () at 
/home/vagrant/jisaini/fdio/vpp/build-data/../src/vppinfra/error.c:84
#4  0x76801d07 in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x7754d770 "%s:%d (%s) assertion `%s' fails") at 
/home/vagrant/jisaini/fdio/vpp/build-data/../src/vppinfra/error.c:143
#5  0x773b18ba in fib_path_get_adj (path_index=12) at 
/home/vagrant/jisaini/fdio/vpp/build-data/../src/vnet/fib/fib_path.c:2085
#6  0x773ac75a in fib_path_list_get_adj (path_list_index=12, 
type=FIB_FORW_CHAIN_TYPE_UNICAST_IP4) at 
/home/vagrant/jisaini/fdio/vpp/build-data/../src/vnet/fib/fib_path_list.c:1178
#7  0x7739edfe in fib_entry_src_interface_path_swap 
(src=0x7fffb6393cc4, entry=0x7fffb6aa6e64, pl_flags=FIB_PATH_LIST_FLAG_LOCAL, 
paths=0x7fffb62db94c) at 
/home/vagrant/jisaini/fdio/vpp/build-data/../src/vnet/fib/fib_entry_src_interface.c:69
#8  0x7739d0cd in fib_entry_src_action_path_swap 
(fib_entry=0x7fffb6aa6e64, source=FIB_SOURCE_INTERFACE, 
flags=(FIB_ENTRY_FLAG_CONNECTED | FIB_ENTRY_FLAG_LOCAL), rpaths=0x7fffb62db94c)
at 
/home/vagrant/jisaini/fdio/vpp/build-data/../src/vnet/fib/fib_entry_src.c:1205
#9  0x77399203 in fib_entry_update (fib_entry_index=8, 
source=FIB_SOURCE_INTERFACE, flags=(FIB_ENTRY_FLAG_CONNECTED | 
FIB_ENTRY_FLAG_LOCAL), paths=0x7fffb62db94c) at 
/home/vagrant/jisaini/fdio/vpp/build-data/../src/vnet/fib/fib_entry.c:1113
#10 0x7738491c in fib_table_entry_update (fib_index=0, 
prefix=0x7fffb62fe770, source=FIB_SOURCE_INTERFACE, 
flags=(FIB_ENTRY_FLAG_CONNECTED | FIB_ENTRY_FLAG_LOCAL), paths=0x7fffb62db94c)
at /home/vagrant/jisaini/fdio/vpp/build-data/../src/vnet/fib/fib_table.c:743
#11 0x77384b14 in fib_table_entry_update_one_path (fib_index=0, 
prefix=0x7fffb62fe770, source=FIB_SOURCE_INTERFACE, 
flags=(FIB_ENTRY_FLAG_CONNECTED | FIB_ENTRY_FLAG_LOCAL), 
next_hop_proto=DPO_PROTO_IP4, next_hop=0x7fffb62fe774, next_hop_sw_if_index=3,
next_hop_fib_index=4294967295, next_hop_weight=1,

Re: [vpp-dev] [WARNING : MESSAGE ENCRYPTED] Re: [vpp] VXLAN arp response packet is dropped

2018-07-18 Thread Neale Ranns via Lists.Fd.Io
Hi Satomi,

That’s a big trace. Can you point to an example of a packet drop that is causing
you problems?

Thanks
neale  



-Original Message-
From: 井上里美 
Date: Wednesday, 18 July 2018 at 10:58
To: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io" 

Cc: "Norimasa Asai (noasai)" , エッジ仮想化hcoML 
, 小柳達也様 , 
N転P_西岡孟朗様 
Subject: [WARNING :  MESSAGE ENCRYPTED] Re: [vpp] VXLAN arp response packet is 
dropped

Hi, neale san,

Thank you for your reply.
Sure!

Satomi

On 2018/07/18 17:27, Neale Ranns (nranns) wrote:
> Can I see the packet trace?
>
> /neale
>
> -Original Message-
> From: 井上里美 
> Date: Wednesday, 18 July 2018 at 09:54
> To: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io" 

> Cc: "Norimasa Asai (noasai)" , エッジ仮想化hcoML 
, 小柳達也様 , 
N転P_西岡孟朗様 
> Subject: [vpp] VXLAN arp response packet is dropped
>
>  Hi neale san,
>  
>  Thank you for your reply.
>  We used a vpp packet trace.
>  "show trace" shows no error, but on the capture device the ARP request packet is
>  dropped.
>  The same thing happens even with L2.
>  
>  Could you give me some advice?
>  
>  [Architecture]
>    __
>  |    |→ capture device → IXIA(port 2)
>  |VPP|
>  |   |← IXIA(port 1)
>  |__|
>  
>  On 2018/07/06 21:39, Neale Ranns (nranns) wrote:
>  > Hi Satomi
>  >
>  > Debugging packet loss is much easier with a VPP packet trace…
>  >
>  > Regards,
>  > neale
>  >
>  > -Original Message-
>  > From:  on behalf of 井上里美 

>  > Date: Friday, 6 July 2018 at 12:38
>  > To: "vpp-dev@lists.fd.io" 
>  > Cc: "Norimasa Asai (noasai)" , エッジ仮想化hcoML 
, 小柳達也様 , 
N転P_西岡孟朗様 
>  > Subject: [vpp-dev] [pw] [vpp] VXLAN arp response packet is dropped
>  >
>  >  The password is here.
>  >  1j^?iKvC]C;%
>  >
>  >  On 2018/07/06 19:37, 井上里美 wrote:
>  >  > Hi VPP Team,
>  >  >
>  >  > I'm Satomi Inoue and I belong to NTT laboratories.
>  >  > Could you tell me why ARP response packet is dropped?
>  >  >
>  >  > We set up vxlan while looking at
>  >  > ”Using_VPP_as_a_VXLAN_Tunnel_Terminator” manual.
>  >  > The procedure is as follows.
>  >  >
>  >  > [The result]
>  >  > ・ARP request packet : IXIA(port2)→VPP→IXIA(port1):OK
>  >  > ・ARP response packet : IXIA(port1)→VPP→IXIA(port2):NG
>  >  >  →We checked it with the trace command. The loopback interface in VPP drops
>  >  > the ARP response packet.
>  >  >
>  >  > [set up vxlan]
>  >  > 1. Create sub-interface
>  >  > vpp# create sub-interfaces VirtualFunctionEthernet0/9/0 1
>  >  > vpp# set interface state VirtualFunctionEthernet0/9/0.1 up
>  >  >
>  >  > 2. Create bridge-domain
>  >  > create bridge-domain 10001 learn 1 forward 1 uu-flood 1 arp-term 0
>  >  >
>  >  > 3. Create Loopback interface
>  >  > vpp# loopback create mac 1a:2b:3c:4d:5e:6f
>  >  > vpp# set interface state loop0 up
>  >  > vpp# set interface ip address loop0 1.1.1.1/32
>  >  > vpp# set interface ip table loop0 7
>  >  >
>  >  > 4. Apply loopback interface to bridge-domain
>  >  > vpp# set interface l2 bridge loop0 10001 bvi
>  >  >
>  >  > 5. Apply sub-interface to bridge-domain
>  >  > vpp# set interface l2 bridge VirtualFunctionEthernet0/9/0.1 10001 0
>  >  >
>  >  > 6. Create VXLAN tunnel
>  >  > vpp# create vxlan tunnel src 1.1.1.1 dst 20.10.0.1 vni 10001 encap-vrf-id 7 decap-next l2
>  >  > vpp# set interface l2 bridge vxlan_tunnel0 10001 1
>  >  >
>  >  > Regards,
>  >  > Satomi
>  >
>  >  --
>  >
>  >
>  >
>  >
>  >
>  
>  --
>  -
>  
>  井上里美(Satomi Inoue)
>  3-9-11 Midori-cho, Musashino-shi, Tokyo 180-8585, Japan
>  PHONE:0422-59-4151
>  E-MAIL:inoue.sat...@lab.ntt.co.jp
>  
>  
>  
>

-- 
-
Nippon Telegraph and Telephone Corporation, Information Network Laboratory Group
Network Service Systems Laboratories, Transport Service Platform Project
井上里美(Satomi Inoue)
3-9-11 Midori-cho, Musashino-shi, Tokyo 180-8585, Japan
PHONE:0422-59-4151
E-MAIL:inoue.sat...@lab.ntt.co.jp




Re: [vpp-dev] [vpp] VXLAN arp response packet is dropped

2018-07-18 Thread Neale Ranns via Lists.Fd.Io
Can I see the packet trace?

/neale

-Original Message-
From: 井上里美 
Date: Wednesday, 18 July 2018 at 09:54
To: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io" 

Cc: "Norimasa Asai (noasai)" , エッジ仮想化hcoML 
, 小柳達也様 , 
N転P_西岡孟朗様 
Subject: [vpp] VXLAN arp response packet is dropped

Hi neale san,

Thank you for your reply.
We used a vpp packet trace.
"show trace" shows no error, but on the capture device the ARP request packet is
dropped.
The same thing happens even with L2.

Could you give me some advice?

[Architecture]
  __
|    |→ capture device → IXIA(port 2)
|VPP|
|   |← IXIA(port 1)
|__|

On 2018/07/06 21:39, Neale Ranns (nranns) wrote:
> Hi Satomi
>
> Debugging packet loss is much easier with a VPP packet trace…
>
> Regards,
> neale
>
> -Original Message-
> From:  on behalf of 井上里美 
> Date: Friday, 6 July 2018 at 12:38
> To: "vpp-dev@lists.fd.io" 
> Cc: "Norimasa Asai (noasai)" , エッジ仮想化hcoML 
, 小柳達也様 , 
N転P_西岡孟朗様 
> Subject: [vpp-dev] [pw] [vpp] VXLAN arp response packet is dropped
>
>  The password is here.
>  1j^?iKvC]C;%
>  
>  On 2018/07/06 19:37, 井上里美 wrote:
>  > Hi VPP Team,
>  >
>  > I'm Satomi Inoue and I belong to NTT laboratories.
>  > Could you tell me why ARP response packet is dropped?
>  >
>  > We set up vxlan while looking at
>  > ”Using_VPP_as_a_VXLAN_Tunnel_Terminator” manual.
>  > The procedure is as follows.
>  >
>  > [The result]
>  > ・ARP request packet : IXIA(port2)→VPP→IXIA(port1):OK
>  > ・ARP response packet : IXIA(port1)→VPP→IXIA(port2):NG
>  >  →We checked it with the trace command. The loopback interface in VPP drops
>  > the ARP response packet.
>  >
>  > [set up vxlan]
>  > 1. Create sub-interface
>  > vpp# create sub-interfaces VirtualFunctionEthernet0/9/0 1
>  > vpp# set interface state VirtualFunctionEthernet0/9/0.1 up
>  >
>  > 2. Create bridge-domain
>  > create bridge-domain 10001 learn 1 forward 1 uu-flood 1 arp-term 0
>  >
>  > 3. Create Loopback interface
>  > vpp# loopback create mac 1a:2b:3c:4d:5e:6f
>  > vpp# set interface state loop0 up
>  > vpp# set interface ip address loop0 1.1.1.1/32
>  > vpp# set interface ip table loop0 7
>  >
>  > 4. Apply loopback interface to bridge-domain
>  > vpp# set interface l2 bridge loop0 10001 bvi
>  >
>  > 5. Apply sub-interface to bridge-domain
>  > vpp# set interface l2 bridge VirtualFunctionEthernet0/9/0.1 10001 0
>  >
>  > 6. Create VXLAN tunnel
>  > vpp# create vxlan tunnel src 1.1.1.1 dst 20.10.0.1 vni 10001 encap-vrf-id 7 decap-next l2
>  > vpp# set interface l2 bridge vxlan_tunnel0 10001 1
>  >
>  > Regards,
>  > Satomi
>  
>  --
>  
>  
>  
>  
>

-- 
-

井上里美(Satomi Inoue)
3-9-11 Midori-cho, Musashino-shi, Tokyo 180-8585, Japan
PHONE:0422-59-4151
E-MAIL:inoue.sat...@lab.ntt.co.jp




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#9862): https://lists.fd.io/g/vpp-dev/message/9862
Mute This Topic: https://lists.fd.io/mt/23669118/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] [pw] [vpp] VXLAN arp response packet is dropped

2018-07-06 Thread Neale Ranns via Lists.Fd.Io
Hi Satomi

Debugging packet loss is much easier with a VPP packet trace…

Regards,
neale
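
For reference, a trace is typically captured like this (the input node to trace
from depends on the driver in use; dpdk-input is the common one):

  vpp# clear trace
  vpp# trace add dpdk-input 50
  ... send the test traffic ...
  vpp# show trace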

-Original Message-
From:  on behalf of 井上里美 
Date: Friday, 6 July 2018 at 12:38
To: "vpp-dev@lists.fd.io" 
Cc: "Norimasa Asai (noasai)" , エッジ仮想化hcoML 
, 小柳達也様 , 
N転P_西岡孟朗様 
Subject: [vpp-dev] [pw] [vpp] VXLAN arp response packet is dropped

The password is here.
1j^?iKvC]C;%

On 2018/07/06 19:37, 井上里美 wrote:
> Hi VPP Team,
>
> I'm Satomi Inoue and I belong to NTT laboratories.
> Could you tell me why ARP response packet is dropped?
>
> We set up vxlan while looking at 
> ”Using_VPP_as_a_VXLAN_Tunnel_Terminator” manual.
> The procedure is as follows.
>
> [The result]
> ・ARP request packet : IXIA(port2)→VPP→IXIA(port1):OK
> ・ARP response packet : IXIA(port1)→VPP→IXIA(port2):NG
>  →We checked it with the trace command. The loopback interface in VPP drops the
> ARP response packet.
>
> [set up vxlan]
> 1. Create sub-interface
> vpp# create sub-interfaces VirtualFunctionEthernet0/9/0 1
> vpp# set interface state VirtualFunctionEthernet0/9/0.1 up
>
> 2. Create bridge-domain
> create bridge-domain 10001 learn 1 forward 1 uu-flood 1 arp-term 0
>
> 3. Create Loopback interface
> vpp# loopback create mac 1a:2b:3c:4d:5e:6f
> vpp# set interface state loop0 up
> vpp# set interface ip address loop0 1.1.1.1/32
> vpp# set interface ip table loop0 7
>
> 4. Apply loopback interface to bridge-domain
> vpp# set interface l2 bridge loop0 10001 bvi
>
> 5. Apply sub-interface to bridge-domain
> vpp# set interface l2 bridge VirtualFunctionEthernet0/9/0.1 10001 0
>
> 6. Create VXLAN tunnel
> vpp# create vxlan tunnel src 1.1.1.1 dst 20.10.0.1 vni 10001 encap-vrf-id 7 decap-next l2
> vpp# set interface l2 bridge vxlan_tunnel0 10001 1
>
> Regards,
> Satomi

-- 
-
Nippon Telegraph and Telephone Corporation, Information Network Laboratory Group
Network Service Systems Laboratories, Transport Service Platform Project
井上里美(Satomi Inoue)
3-9-11 Midori-cho, Musashino-shi, Tokyo 180-8585, Japan
PHONE:0422-59-4151
E-MAIL:inoue.sat...@lab.ntt.co.jp




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#9795): https://lists.fd.io/g/vpp-dev/message/9795
Mute This Topic: https://lists.fd.io/mt/23172883/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Does VPP support source base route?

2018-07-03 Thread Neale Ranns via Lists.Fd.Io
Hi David,

Yes and no.

No, because there is [today] no way to change *the* IP lookup to use the 
packet’s source address.

Yes, because VPP does support a source based lookup, but this would happen 
after *the* destination based lookup. So, the trick would be to configure a 
second IP table with routes for your source addresses, i.e:
  ip table add 1
  ip route add table 1 10.0.0.0/8 via 192.168.1.1 GigEthernet0/0/0/0
and then configure *the* destination based lookup to use this table for a 
second source based lookup
  ip route 0.0.0.0/0 via ip4-lookup-in-table 1 src-lookup
using the default route points all packets to the second lookup.

I just did:
  https://gerrit.fd.io/r/#/c/13337/
to accept the ‘src-lookup’ keyword for ip route configuration (which is today
only available via the API).

Hth,
/neale
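
Pulling those pieces together, the whole configuration is just three commands
(using the illustrative prefixes and next-hop from above):

  ip table add 1
  ip route add table 1 10.0.0.0/8 via 192.168.1.1 GigEthernet0/0/0/0
  ip route add 0.0.0.0/0 via ip4-lookup-in-table 1 src-lookup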


From:  on behalf of david zhang 

Date: Tuesday, 3 July 2018 at 15:39
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Does VPP support source base route?


Hi,



My application needs to route packets based on the source IP address; can this be
implemented with some simple commands in VPP, such as "ip route"?

I have tried to read the related source code and found it really difficult.

I really hope I can get some advice.



Thanks in advance!



Regards,

David

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#9773): https://lists.fd.io/g/vpp-dev/message/9773
Mute This Topic: https://lists.fd.io/mt/23024219/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] twamp

2018-07-02 Thread Neale Ranns via Lists.Fd.Io
Hi Avi,

None that I am aware of, but its inclusion would be welcome. 

/neale

-Original Message-
From:  on behalf of "Avi Cohen (A)" 
Date: Thursday, 28 June 2018 at 15:04
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] twamp

Hi,
Is there any plan to implement/integrate TWAMP into VPP ?
Regards
Avi



-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#9757): https://lists.fd.io/g/vpp-dev/message/9757
Mute This Topic: https://lists.fd.io/mt/22866386/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] mpls tunnel

2018-06-21 Thread Neale Ranns via Lists.Fd.Io

If you want to resolve a recursive path that has outgoing labels, i.e.
  via 1.1.1.1 out-labels 33

then the resolving route in the FIB MUST also have out-labels. This is because 
you are in effect layering LSPs (the tunnel is the upper/inner layer and the 
route the lower/outer layer). The out-label for the tunnel is provided by the
tunnel egress device, which is not necessarily directly connected to the tunnel
ingress device. Hence, if the route did not have an out-label, then a device in
between the two (that is, in the lower layer) would see the label for the
tunnel/upper layer and mis-forward.
If your two devices are directly connected, and so the problem above cannot
occur, you still need an out-label for the route, but one describes such
direct connectivity by giving the route an implicit-null out-label, i.e.
   ip route 1.1.1.1/32 via 192.168.1.1 GigabitEthernet13/0/0 out-labels imp-null

/neale
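
In other words, for the directly connected case a working pair of commands would
be along these lines (label 33 and the addresses reused from this thread):

  mpls tunnel l2-only via 1.1.1.1 out-labels 33
  ip route add 1.1.1.1/32 via 192.168.1.1 GigabitEthernet13/0/0 out-labels imp-null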


From: Holoo Gulakh 
Date: Thursday, 21 June 2018 at 17:26
To: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io" 

Subject: Re: [vpp-dev] mpls tunnel

Hi,
It is not a valid solution (at least, "show mpls tunnel" says so).

Here is the new configuration and result:
   mpls tunnel l2-only via 1.1.1.1 out-labels 33
   ip route add 1.1.1.1/32 via 192.168.1.1 GigabitEthernet13/0/0

result:
I expect to see something like the result of the second scenario above:
===
[@0] mpls_tunnel0: sw_if_index:4 hw_if_index:4
 flags:L2,
 via:
path-list:[23] locks:1 flags:shared, uPRF-list:19 len:1 itfs:[2, ]
  path:[23] pl-index:23 ip4 weight=1 pref=0 attached-nexthop:  oper-flags:resolved,
1.1.1.1 GigabitEthernet13/0/0
  [@0]: arp-ipv4: via 1.1.1.1 GigabitEthernet13/0/0
Extensions:
 path:23 mpls-flags:[no-ip-tll-decr] labels:[[33 pipe ttl:0 exp:0]]
 forwarding: ethernet
 [@1]: dpo-load-balance: [proto:ethernet index:23 buckets:1 uRPF:-1 to:[0:0]]
[0] [@2]: mpls-label[0]:[33:64:0:eos]
[@1]: arp-mpls: via 1.1.1.1 GigabitEthernet13/0/0


But the result is as follow:
===
[@0] mpls_tunnel0: sw_if_index:4 hw_if_index:4
 flags:L2,
 via:
path-list:[23] locks:1 flags:shared, uPRF-list:19 len:1 itfs:[2, ]
  path:[23] pl-index:23 ip4 weight=1 pref=0 recursive:
via 1.1.1.1 in fib:0 via-fib:17 via-dpo:[dpo-load-balance:20]
Extensions:
 path:23 mpls-flags:[no-ip-tll-decr] labels:[[33 pipe ttl:0 exp:0]]
 forwarding: ethernet
 [@1]: dpo-load-balance: [proto:ethernet index:23 buckets:1 uRPF:-1 to:[0:0]]
[0] [@0]: dpo-drop ethernet


If I use the following command for the route to 1.1.1.1:
 ip route add 1.1.1.1/32 via 192.168.1.1 GigabitEthernet13/0/0 out-labels 50

the result is:
===
[@0] mpls_tunnel0: sw_if_index:4 hw_if_index:4
 flags:L2,
 via:
path-list:[23] locks:1 flags:shared, uPRF-list:19 len:1 itfs:[2, ]
  path:[23] pl-index:23 ip4 weight=1 pref=0 recursive:  oper-flags:resolved,
via 1.1.1.1 in fib:0 via-fib:17 via-dpo:[dpo-load-balance:20]
Extensions:
 path:23 mpls-flags:[no-ip-tll-decr] labels:[[33 pipe ttl:0 exp:0]]
 forwarding: ethernet
 [@1]: dpo-load-balance: [proto:ethernet index:23 buckets:1 uRPF:-1 to:[0:0]]
[0] [@2]: mpls-label[0]:[33:64:0:eos]
[@1]: dpo-load-balance: [proto:mpls index:21 buckets:1 uRPF:22 to:[0:0] via:[1:64]]
[0] [@6]: mpls-label[1]:[50:64:0:neos]
[@2]: mpls via 192.168.1.1 GigabitEthernet13/0/0: mtu:9000 000c293a39d7000c29d693938847
Which is correct in my scenario.

How can I make the mpls tunnel use the route for 1.1.1.1 already defined in the IP
FIB to reach 1.1.1.1 (in both cases: with the mpls label I provided, and without
one)?

Thanks.

On Thu, Jun 21, 2018 at 4:46 AM, Neale Ranns (nranns) <nra...@cisco.com> wrote:
Hi,

This:
  XXX via 1.1.1.1 ip4-lookup-in-table 0 out-labels 33
is not a valid path.

If you want packets to follow the same path as for 1.1.1.1 (i.e. the path is 
recursive via 1.1.1.1, and you’ll need a route in the fib for 1.1.1.1) and have 
label 33 imposed, do:
  XXX via 1.1.1.1 out-labels 33
If the 1.1.1.1 you want to recurse via is not in the default table, then do:
XXX via 1.1.1.1 next-hop-table Y out-labels 33

If (e.g. post a label pop) you want to use the exposed IP4 header to do a IP4 
lookup then do:
  XXX via ip4-lookup-in-table 0

This:
  XXX via 1.1.1.1 GigabitEthernet13/0/0 out-labels 33
Is not a recursive path. It will resolve via the adjacency for 1.1.1.1 on 
GigE13/0/0 and thus will attempt to ARP for 1.1.1.1 out of that interface. 
Since 1.1.1.1 is not an address on that interface’s configured subnet, this 
won’t work, unless the peer is running proxy ARP, which we all know is evil.

/neale

From: <vpp-dev@lists.fd.io> on behalf of Gulakh <holoogul...@gmail.com>
Date: Wednesday, 20 June 2018 at 22:32
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>

Re: [vpp-dev] mpls tunnel

2018-06-20 Thread Neale Ranns via Lists.Fd.Io
Hi,

This:
  XXX via 1.1.1.1 ip4-lookup-in-table 0 out-labels 33
is not a valid path.

If you want packets to follow the same path as for 1.1.1.1 (i.e. the path is 
recursive via 1.1.1.1, and you’ll need a route in the fib for 1.1.1.1) and have 
label 33 imposed, do:
  XXX via 1.1.1.1 out-labels 33
If the 1.1.1.1 you want to recurse via is not in the default table, then do:
XXX via 1.1.1.1 next-hop-table Y out-labels 33

If (e.g. post a label pop) you want to use the exposed IP4 header to do a IP4 
lookup then do:
  XXX via ip4-lookup-in-table 0

This:
  XXX via 1.1.1.1 GigabitEthernet13/0/0 out-labels 33
Is not a recursive path. It will resolve via the adjacency for 1.1.1.1 on 
GigE13/0/0 and thus will attempt to ARP for 1.1.1.1 out of that interface. 
Since 1.1.1.1 is not an address on that interface’s configured subnet, this 
won’t work, unless the peer is running proxy ARP, which we all know is evil.

/neale

From:  on behalf of Gulakh 
Date: Wednesday, 20 June 2018 at 22:32
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] mpls tunnel

Hi,
My topology is:

  R1 (192.168.1.1/24) <--> R2 (192.168.1.2/24)

and R1's loopback interface has the IP address 1.1.1.1/32

==
I have configured VPP's mpls tunnel as follows:
set interface ip address GigabitEthernet13/0/0 192.168.1.2/24
mpls tunnel l2-only via 1.1.1.1 ip4-lookup-in-table 0 out-labels 33

To make it possible to find 1.1.1.1, I inserted a route into the ip fib as follows:
   ip route add 1.1.1.1/32 via 192.168.1.1 GigabitEthernet13/0/0

What I see in "show mpls tunnel" is as follow:

[@0] mpls_tunnel0: sw_if_index:4 hw_if_index:4
 flags:L2,
 via:
path-list:[22] locks:1 flags:shared, uPRF-list:20 len:1 itfs:[2, ]
  path:[22] pl-index:22 ip4 weight=1 pref=0 recursive:
via 192.168.1.1 in fib:0 via-fib:17 via-dpo:[dpo-load-balance:20]
Extensions:
 path:22 mpls-flags:[no-ip-tll-decr] labels:[[33 pipe ttl:0 exp:0]]
 forwarding: ethernet
 [@1]: dpo-load-balance: [proto:ethernet index:22 buckets:1 uRPF:-1 to:[0:0]]
[0] [@0]: dpo-drop ethernet

==
In another scenario, I have used the following configuration:
set interface ip address GigabitEthernet13/0/0 192.168.1.2/24
mpls tunnel l2-only via 1.1.1.1 GigabitEthernet13/0/0 out-labels 33

What I see in "show mpls tunnel" is as follow:

[@0] mpls_tunnel0: sw_if_index:4 hw_if_index:4
 flags:L2,
 via:
path-list:[23] locks:1 flags:shared, uPRF-list:19 len:1 itfs:[2, ]
  path:[23] pl-index:23 ip4 weight=1 pref=0 attached-nexthop:  oper-flags:resolved,
1.1.1.1 GigabitEthernet13/0/0
  [@0]: arp-ipv4: via 1.1.1.1 GigabitEthernet13/0/0
Extensions:
 path:23 mpls-flags:[no-ip-tll-decr] labels:[[33 pipe ttl:0 exp:0]]
 forwarding: ethernet
 [@1]: dpo-load-balance: [proto:ethernet index:23 buckets:1 uRPF:-1 to:[0:0]]
[0] [@2]: mpls-label[0]:[33:64:0:eos]
[@1]: arp-mpls: via 1.1.1.1 GigabitEthernet13/0/0


==
My Question:
Q: Why does VPP not resolve the IP address 1.1.1.1 in the first configuration
(as "show mpls tunnel" in the first scenario shows, it has not been resolved)?
I expect it to, since I have added a route for 1.1.1.1 to the IP fib.

Thanks


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#9663): https://lists.fd.io/g/vpp-dev/message/9663
Mute This Topic: https://lists.fd.io/mt/22449276/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] creat mpls tunnel

2018-06-14 Thread Neale Ranns via Lists.Fd.Io

You can’t with the current API. Nor can you with any other type of tunnel.

/neale

From:  on behalf of "omid via Lists.Fd.Io" 

Reply-To: "zeinalpouro...@yahoo.com" 
Date: Tuesday, 12 June 2018 at 18:46
To: "vpp-dev@lists.fd.io" 
Cc: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] creat mpls tunnel

Hi,
how can an mpls tunnel be added with an arbitrary name?
Thanks.


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#9619): https://lists.fd.io/g/vpp-dev/message/9619
Mute This Topic: https://lists.fd.io/mt/22252088/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


