Hi Hongjun,

We need an ARP entry on the tunnel interface? That's not great. If a GTPu interface is point-to-point, we should set the flag:
VNET_HW_INTERFACE_CLASS (gtpu_hw_class) = {
  ...
  .flags = VNET_HW_INTERFACE_CLASS_FLAG_P2P,
};

A P2P interface is expected to provide a 'complete', i.e. fully formed, rewrite string, hence there is no need for ARP.

/neale

-----Original Message-----
From: <vpp-dev-boun...@lists.fd.io> on behalf of "Ni, Hongjun" <hongjun...@intel.com>
Date: Thursday, 2 November 2017 at 12:46
To: Ryota Yushina <r-yush...@ce.jp.nec.com>, "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] gtpu tunneling decap-next ip4 issue

Hi Ryota,

Below is my configuration to test GTP-U encapsulation, for your reference:

set int state TenGigabitEthernet5/0/0 up
set int ip table TenGigabitEthernet5/0/0 0
set int ip address TenGigabitEthernet5/0/0 192.168.50.72/24
ip route add 192.168.50.71/24 via 192.168.50.72 TenGigabitEthernet5/0/0
set ip arp TenGigabitEthernet5/0/0 192.168.50.71 90e2.ba48.7a71

set int state TenGigabitEthernet5/0/1 up
set int ip table TenGigabitEthernet5/0/1 0
set int ip address TenGigabitEthernet5/0/1 192.168.50.73/24
ip route add 192.168.50.74/24 via 192.168.50.73 TenGigabitEthernet5/0/1
set ip arp TenGigabitEthernet5/0/1 192.168.50.74 90e2.ba48.7a74

create gtpu tunnel src 192.168.50.72 dst 192.168.50.71 teid 9 encap-vrf-id 0
set int ip address gtpu_tunnel0 192.168.50.75/24
ip route add 192.168.50.70/24 via gtpu_tunnel0
set ip arp gtpu_tunnel0 192.168.50.70 90e2.ba48.7a70

-Hongjun

-----Original Message-----
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Ryota Yushina
Sent: Thursday, November 2, 2017 1:04 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] gtpu tunneling decap-next ip4 issue

Hi all,

Let me ask about a GTPu issue. I tried to overlay IPv4 packets with GTP-U, but it did not work on 17.10; in fact, VPP restarted silently when I sent a ping. Could someone help, or provide a sample GTPu IPv4 configuration?

My situation:

In the diagram below, when I sent a ping from 10.9.0.3 to 11.9.0.4 on VPP#3, VPP#7 rebooted (or crashed?). I expected the ICMP echo request to be routed and encapsulated via gtpu_tunnel0 on VPP#7, but it wasn't.

+- VPP#3 -------------------------
|
| [TenGigabitEthernet82/0/1: 10.9.0.3]
+------ | ------------------------
        |
+-VPP#7 | ------------------------
| [TenGigabitEthernet82/0/1: 10.9.0.1]
|
| [gtpu_tunnel0: 11.9.0.1]
| |
| [TenGigabitEthernet82/0/0: 192.168.152.70] --> vrf:152
+------ || -----------------------
        ||
+------ || -----------------------
| [TenGigabitEthernet82/0/0: 192.168.152.40] --> vrf:152
|
| [loop0: 11.9.0.4]
+- VPP#4 -------------------------

My CLI configurations:

<<VPP#3>>
set interface ip address TenGigabitEthernet82/0/1 10.9.0.3/16
ip route 11.9.0.0/16 via 10.9.0.1
set interface state TenGigabitEthernet82/0/1 up

<<VPP#7>>
set interface ip address TenGigabitEthernet82/0/1 10.9.0.1/16
ip table add 152
set interface ip table TenGigabitEthernet82/0/0 152
set interface ip address TenGigabitEthernet82/0/0 192.168.152.70/24
create gtpu tunnel src 192.168.152.70 dst 192.168.152.40 teid 7777 encap-vrf-id 152 decap-next ip4
set interface ip address gtpu_tunnel0 11.9.0.1/16
ip route 11.9.0.0/16 via gtpu_tunnel0
ip route 10.9.0.0/16 via TenGigabitEthernet82/0/1
set interface state TenGigabitEthernet82/0/0 up
set interface state TenGigabitEthernet82/0/1 up
set interface state loop0 up

Thanks.
---
Best Regards,
Ryota Yushina

_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
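[Note: the following is not part of the original thread; it is a generic debugging sketch.] A few standard VPP debug CLI commands can help confirm that the tunnel and routes from the configurations above are in place, and show how far packets get on VPP#7 before the crash. The trace input node name (dpdk-input) and the fib arguments are assumptions based on the setup shown, and exact option syntax may vary between releases. On VPP#7, while sending the ping from VPP#3:

show gtpu tunnel
show ip fib 11.9.0.0/16
show ip fib table 152
show interface
trace add dpdk-input 10
show trace

show trace prints the per-packet node path, which should indicate at which node forwarding onto gtpu_tunnel0 stops.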