Hi Neale,

Is this issue due to the adjacency being wrong? Is this a day-one issue, or is it caused by the way I added routes in my test case? As far as I know, the way I configured the GRE peer and routing is correct.
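For reference, these are the show commands I am using to walk the forwarding path on the VPP side (the fib entry indices 73 and 75 are taken from the output quoted below; "show adj" is only there to dump the raw adjacency rewrites):

    vpp# show ip fib 7.7.7.7/32     <-- overlay prefix of SS and what its path is stacked on
    vpp# show fib entry 73          <-- GRE destination 44.44.44.44/32
    vpp# show fib entry 75          <-- IPsec peer 20.20.99.215/32
    vpp# show adj                   <-- rewrite string programmed on each adjacency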
Did you see this issue anytime on your setup?

Regards.

On Thu, 25 Mar 2021, 19:41 Vijay Kumar, <vjkumar2...@gmail.com> wrote:

> Hi Neale,
>
> I was able to make good progress on the GRE-over-IPsec use case, but
> stumbled at the last step.
> I have explained the problem faced (an extra IP header added by VPP,
> overall 4 IP headers in the packet) and the other details about logs and
> configuration below. I have also attached the decrypted pcap (frame 4
> shows the problem).
>
> Kindly let me know if you can catch the problem at the VPP side.
>
> Topology
> ========
> Strongswan <====================================> VPP
>   loopback 7.7.7.7                                 loopback 8.8.8.8
>   GRE 44.44.44.44                                  GRE src 42.42.42.42 (tunnel remote 44.44.44.44)
>   IPSEC 20.20.99.215                               IPSEC 20.20.99.99
>
> create gre tunnel src 42.42.42.42 instance 1 multipoint
> set interface state gre1 up
> set interface ip addr gre1 2.2.2.2/32
> create teib gre1 peer 2.2.2.1 nh 44.44.44.44
>
> ip route add 7.7.7.7/32 via 2.2.2.1 gre1       (route added to the overlay 7.7.7.7 of SS)
>
> /* added the below commands after IPsec is UP */
> set interface state ipip0 up
> ip route add 44.44.44.44/32 via ipip0
> set interface unnumbered ipip0 use VirtualFuncEthernet0/7/0.1556
>
> Problem faced
> =============
> --- SS sends ICMP over GRE over IPsec. VPP replies with ICMP-o-GRE-o-IPsec,
>     but SS drops the reply packets. Debugging the SS logs did not show any
>     decryption failure, so I decrypted the packets captured at SS. After
>     applying the encryption and auth keys in Wireshark, the reply packet
>     showed an extra IP header, making a total of 4 IP headers, which is wrong.
>
> --- I observed that VPP added the GRE tunnel IP header on top of the
>     innermost packet. On top of the GRE IP header, VPP added another IP
>     header whose SRC and DST are the same as the IPsec tunnel header
>     (20.20.99.99-to-20.20.99.215). On top of this extra IP header the ESP
>     encapsulation was added, with the outermost IP header again carrying the
>     IPsec tunnel addresses (20.20.99.99-to-20.20.99.215). See the layout
>     after this list.
>
> --- Why is VPP adding an extra IP header on top of the GRE IP header? Is
>     this because the route adjacencies have gone bad, causing an extra IP
>     header to be added?
>
> --- Below I have pasted the FIB entry for the destination 7.7.7.7, which is
>     stacked on FIB entries 73 and 75.
>
> --- I have attached the IPsec-decoded pcap; please check frame 4, which is
>     the ICMP reply from VPP to SS.
>
> --- Is there anything wrong in the way I have added routes on the VPP side
>     for the overlay 7.7.7.7 and the GRE destination 44.44.44.44?
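>
> Laying out the decrypted reply packet (frame 4 in the attached pcap) from
> the outermost header inwards, this is what Wireshark shows; a correct
> packet should not contain the third header:
>
>   IP   20.20.99.99 -> 20.20.99.215           (IPsec tunnel header)
>   ESP
>   IP   20.20.99.99 -> 20.20.99.215           (extra header added by VPP)
>   IP   42.42.42.42 -> 44.44.44.44  + GRE     (GRE tunnel header)
>   IP   8.8.8.8     -> 7.7.7.7               (inner ICMP echo reply)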
>
> Ping on SS
> ==========
> root@SS# ping 8.8.8.8 -I 7.7.7.7 -c 3
> PING 8.8.8.8 (8.8.8.8) from 7.7.7.7 : 56(84) bytes of data.
>
> --- 8.8.8.8 ping statistics ---
> 3 packets transmitted, 0 received, 100% packet loss, time 2041ms
>
>
> TCPDUMP taken on SS
> ===================
> 12:46:48.439122 IP vmsrvrlnx-strongswan-215 > dns.google: ICMP echo request, id 43, seq 1, length 64
> 12:46:48.439174 IP vmsrvrlnx-strongswan-215 > 20.20.99.99: ESP(spi=0x1234567b,seq=0x1), length 148
> 12:46:48.439178 ethertype IPv4, IP vmsrvrlnx-strongswan-215 > 20.20.99.99: ESP(spi=0x1234567b,seq=0x1), length 148
> 12:46:48.439487 ethertype IPv4, IP 20.20.99.99 > vmsrvrlnx-strongswan-215: ESP(spi=0xce90b744,seq=0x1), length 180
> 12:46:48.439487 IP 20.20.99.99 > vmsrvrlnx-strongswan-215: ESP(spi=0xce90b744,seq=0x1), length 180
> 12:46:49.455775 IP vmsrvrlnx-strongswan-215 > dns.google: ICMP echo request, id 43, seq 2, length 64
> 12:46:49.455826 IP vmsrvrlnx-strongswan-215 > 20.20.99.99: ESP(spi=0x1234567b,seq=0x2), length 148
> 12:46:49.455831 ethertype IPv4, IP vmsrvrlnx-strongswan-215 > 20.20.99.99: ESP(spi=0x1234567b,seq=0x2), length 148
> 12:46:49.455958 ethertype IPv4, IP 20.20.99.99 > vmsrvrlnx-strongswan-215: ESP(spi=0xce90b744,seq=0x2), length 180
> 12:46:49.455958 IP 20.20.99.99 > vmsrvrlnx-strongswan-215: ESP(spi=0xce90b744,seq=0x2), length 180
> 12:46:50.479872 IP vmsrvrlnx-strongswan-215 > dns.google: ICMP echo request, id 43, seq 3, length 64
> 12:46:50.479933 IP vmsrvrlnx-strongswan-215 > 20.20.99.99: ESP(spi=0x1234567b,seq=0x3), length 148
> 12:46:50.479939 ethertype IPv4, IP vmsrvrlnx-strongswan-215 > 20.20.99.99: ESP(spi=0x1234567b,seq=0x3), length 148
> 12:46:50.480080 ethertype IPv4, IP 20.20.99.99 > vmsrvrlnx-strongswan-215: ESP(spi=0xce90b744,seq=0x3), length 180
> 12:46:50.480080 IP 20.20.99.99 > vmsrvrlnx-strongswan-215: ESP(spi=0xce90b744,seq=0x3), length 180
>
>
> VPP LOGS
> ========
> vpp# show gre tunnel
> [0] instance 1 src 42.42.42.42 dst 0.0.0.0 fib-idx 0 sw-if-idx 20 payload L3 multi-point
>
> vpp# show teib
> [0] gre1:2.2.2.1 via [0]:44.44.44.44/32
>
> vpp# show ipip tunnel
> [0] instance 0 src 20.20.99.99 dst 20.20.99.215 table-ID 0 sw-if-idx 21 flags [none] dscp CS0
>
> FIB output of loopback IPs and GRE tunnel IPs shown in the topology
> ===================================================================
> 7.7.7.7/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:75 buckets:1 uRPF:105 to:[0:0]]
>     [0] [@6]: ipv4 via 2.2.2.1 gre1: mtu:9000 next:15 4500000000000000fe2f10232a2a2a2a2c2c2c2c00000800
>         stacked-on entry:73:
>           [@3]: ipv4 via 0.0.0.0 ipip0: mtu:9000 next:17 450000000000000040048b9814146363141463d7
>               stacked-on entry:75:
>                 [@2]: ipv4 via 20.20.99.215 VirtualFuncEthernet0/7/0.1556: mtu:9000 next:16 fa163e4b6b42fa163ec2b4f4810006140800
>
> 8.8.8.8/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:72 buckets:1 uRPF:103 to:[0:0]]
>     [0] [@2]: dpo-receive: 8.8.8.8 on loop3
>
> 42.42.42.42/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:71 buckets:1 uRPF:102 to:[0:0]]
>     [0] [@2]: dpo-receive: 42.42.42.42 on loop2
>
> 44.44.44.44/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:74 buckets:1 uRPF:107 to:[0:0]]
>     [0] [@6]: ipv4 via 0.0.0.0 ipip0: mtu:9000 next:17 450000000000000040048b9814146363141463d7
>         stacked-on entry:75:
>           [@2]: ipv4 via 20.20.99.215 VirtualFuncEthernet0/7/0.1556: mtu:9000 next:16 fa163e4b6b42fa163ec2b4f4810006140800
>
> vpp#
> vpp# show fib entry 73
> 73@44.44.44.44/32 fib:0 index:73 locks:4
>   CLI refs:1 entry-flags:attached, src-flags:added,contributing,active,
>     path-list:[81] locks:2 flags:shared, uPRF-list:107 len:1 itfs:[21, ]
>       path:[117] pl-index:81 ip4 weight=1 pref=0 attached-nexthop:  oper-flags:resolved, cfg-flags:attached,
>         44.44.44.44 ipip0 (p2p)
>       [@0]: ipv4 via 0.0.0.0 ipip0: mtu:9000 next:17 450000000000000040048b9814146363141463d7
>           stacked-on entry:75:
>             [@2]: ipv4 via 20.20.99.215 VirtualFuncEthernet0/7/0.1556: mtu:9000 next:16 fa163e4b6b42fa163ec2b4f4810006140800
>   recursive-resolution refs:1 src-flags:added, cover:-1
>
>  forwarding:   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:74 buckets:1 uRPF:107 to:[0:0]]
>     [0] [@6]: ipv4 via 0.0.0.0 ipip0: mtu:9000 next:17 450000000000000040048b9814146363141463d7
>         stacked-on entry:75:
>           [@2]: ipv4 via 20.20.99.215 VirtualFuncEthernet0/7/0.1556: mtu:9000 next:16 fa163e4b6b42fa163ec2b4f4810006140800
>   Delegates:
>     track: sibling:147
>   Children:{adj:17}
>   Children:{fib-entry-track:16}
> vpp#
> vpp# show fib entry 75
> 75@20.20.99.215/32 fib:0 index:75 locks:4
>   adjacency refs:1 entry-flags:attached, src-flags:added,contributing,active, cover:65
>     path-list:[80] locks:2 uPRF-list:106 len:1 itfs:[17, ]
>       path:[116] pl-index:80 ip4 weight=1 pref=0 attached-nexthop:  oper-flags:resolved,
>         20.20.99.215 VirtualFuncEthernet0/7/0.1556
>       [@0]: ipv4 via 20.20.99.215 VirtualFuncEthernet0/7/0.1556: mtu:9000 next:16 fa163e4b6b42fa163ec2b4f4810006140800
>     Extensions:
>      path:116 adj-flags:[refines-cover]
>   recursive-resolution refs:1 src-flags:added, cover:-1
>
>  forwarding:   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:76 buckets:1 uRPF:106 to:[14:3252]]
>     [0] [@5]: ipv4 via 20.20.99.215 VirtualFuncEthernet0/7/0.1556: mtu:9000 next:16 fa163e4b6b42fa163ec2b4f4810006140800
>   Delegates:
>     track: sibling:157
>   Children:{adj:19}{ipsec-sa:0}
>   Children:{fib-entry-track:17}
> vpp#
>
>
> Regards,
> Vijay
>
> On Tue, Mar 23, 2021 at 5:44 PM Neale Ranns <ne...@graphiant.com> wrote:
>
>> Hi Vijay,
>>
>> I don't think you can have both a GRE and an IPIP tunnel to the same peer,
>> sourced from the same local address. These two tunnels will create two
>> identical 'keys' in the lookup table (src:X, dst:Y, proto:ESP). My guess is
>> that your packet matches against the IPIP tunnel, not the GRE, and the key
>> material is different, hence the integrity failure. You can see in the
>> trace that it used SA:1; check 'sh ipsec sa 1' and 'sh ipsec protect' to
>> see which tunnel that is associated with. You can also see the lookup
>> table with 'sh ipsec protect-hash'.
>>
>> If you remove (or admin down) the ipip tunnel, does it work?
>>
>> /neale
>>
>>
>> From: Vijay Kumar <vjkumar2...@gmail.com>
>> Date: Tuesday, 23 March 2021 at 04:18
>> To: Neale Ranns <ne...@graphiant.com>
>> Cc: vpp-dev <vpp-dev@lists.fd.io>
>> Subject: Re: [vpp-dev] GRE-over-IPSec fails
>>
>> Hi Neale,
>>
>> Could you let me know if you faced the mentioned problem anytime?
>>
>> For me, IPsec alone works fine, and GRE alone also works fine. But when I
>> configure GRE-over-IPsec, the traffic is dropped at esp4-decrypt-tun due
>> to an integrity check failure.
>>
>> As there are two logical interfaces created at VPP (ipip0 and gre0) for
>> the peer, do I need to take care of something special? As far as I know,
>> I haven't missed any config.
>>
>> Regards,
>> Vijay Kumar N
>>
>> On Mon, Mar 22, 2021 at 11:31 PM Vijay Kumar via lists.fd.io
>> <vjkumar2003=gmail....@lists.fd.io> wrote:
>>
>> Hi,
>>
>> I am trying a test case where I have a GRE P2MP (mGRE) tunnel on the VPP.
>> The GRE peer is a strongswan VM that hosts both the GRE tunnel and the
>> IPsec SA. When I started ping traffic from SS, the traffic was dropped at
>> the esp4-decrypt-tun graph node due to an integrity check failure.
>>
>> Has anyone tested GRE-over-IPsec recently? If so, can you please share a
>> working config. If not, please review the config below and let me know if
>> I missed something.
>>
>> NOTE:
>> If I run only the GRE test case, traffic is fine (no IPsec enabled). If I
>> have only IPsec configured but no GRE, traffic is also fine.
>>
>> I am facing this issue only when both GRE and IPsec are enabled at the
>> same time.
>>
>> Topology and config at SS and VPP
>> =================================
>> Strongswan VM (20.20.99.215, gre peer 2.2.2.1, loopback 7.7.7.7)
>> <=============> VPP cluster (20.20.99.99, gre peer 2.2.2.2, loopback 8.8.8.8)
>>
>> IPSec SA Traffic Selector (7.7.7.7/32 to 8.8.8.8/32)
>> ike=aes256-sha256-modp2048!
>> esp=aes256-sha1-noesn!
>>
>>
>> Below is the VPP trace
>> ======================
>> 03:20:34:670201: dpdk-input
>>   VirtualFuncEthernet0/7/0 rx queue 0
>>   buffer 0x4c6b91: current data 0, length 170, buffer-pool 0, ref-count 1, totlen-nifb 0, trace handle 0x1000000
>>                    ext-hdr-valid
>>                    l4-cksum-computed l4-cksum-correct
>>   PKT MBUF: port 0, nb_segs 1, pkt_len 170
>>     buf_len 2176, data_len 170, ol_flags 0x180, data_off 128, phys_addr 0xa3dae4c0
>>     packet_type 0x691 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
>>     rss 0x0 fdir.hi 0x0 fdir.lo 0x0
>>     Packet Offload Flags
>>       PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
>>       PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
>>     Packet Types
>>       RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
>>       RTE_PTYPE_L3_IPV4_EXT_UNKNOWN (0x0090) IPv4 packet with or without extension headers
>>       RTE_PTYPE_L4_NONFRAG (0x0600) Non-fragmented IP packet
>>   IP4: fa:16:3e:4b:6b:42 -> fa:16:3e:c2:b4:f4 802.1q vlan 1556
>>   IPSEC_ESP: 20.20.99.215 -> 20.20.99.99
>>     tos 0x00, ttl 64, length 152, checksum 0x5b33 dscp CS0 ecn NON_ECN
>>     fragment id 0xef9e, flags DONT_FRAGMENT
>> 03:20:34:670208: ethernet-input
>>   frame: flags 0x3, hw-if-index 3, sw-if-index 3
>>   IP4: fa:16:3e:4b:6b:42 -> fa:16:3e:c2:b4:f4 802.1q vlan 1556
>> 03:20:34:670214: ip4-input
>>   IPSEC_ESP: 20.20.99.215 -> 20.20.99.99
>>     tos 0x00, ttl 64, length 152, checksum 0x5b33 dscp CS0 ecn NON_ECN
>>     fragment id 0xef9e, flags DONT_FRAGMENT
>> 03:20:34:670218: ip4-lookup
>>   fib 1 dpo-idx 21 flow hash: 0x00000000
>>   IPSEC_ESP: 20.20.99.215 -> 20.20.99.99
>>     tos 0x00, ttl 64, length 152, checksum 0x5b33 dscp CS0 ecn NON_ECN
>>     fragment id 0xef9e, flags DONT_FRAGMENT
>> 03:20:34:670220: ip4-local
>>   IPSEC_ESP: 20.20.99.215 -> 20.20.99.99
>>     tos 0x00, ttl 64, length 152, checksum 0x5b33 dscp CS0 ecn NON_ECN
>>     fragment id 0xef9e, flags DONT_FRAGMENT
>> 03:20:34:670222: ipsec4-tun-input
>>   IPSec: remote:20.20.99.215 spi:305419897 (0x12345679) seq 40 sa 1
>> 03:20:34:670225: esp4-decrypt-tun
>>   esp: crypto aes-cbc-256 integrity sha1-96 pkt-seq 40 sa-seq 0 sa-seq-hi 0
>> 03:20:34:670241: ip4-drop
>>   IP6_NONXT: 242.163.36.86 -> 70.168.225.19
>>     version 1, header length 8
>>     tos 0x34, ttl 245, length 22137, checksum 0x5156 (should be 0x972a) dscp unknown ecn NON_ECN
>>     fragment id 0x0000 offset 320
>> 03:20:34:670243: error-drop
>>   rx:ipip0
>> 03:20:34:670244: drop
>>   esp4-decrypt-tun: Integrity check failed
>>
>>
>> vpp# show node counters
>>    Count                    Node                        Reason
>>       25           esp4-encrypt-tun           ESP pkts received
>>      213           memif-input                not ip packet
>>        3           dpdk-input                 no error
>>      136           arp-reply                  ARP replies sent
>>        3           arp-reply                  IP4 source address not local to subnet
>>        1           gre4-input                 no error
>>      213           ip4-udp-lookup             No error
>>       42           esp4-decrypt-tun           ESP pkts received
>>       42           esp4-decrypt-tun           Integrity check failed
>>       25           esp4-encrypt-tun           ESP pkts received
>>       42           ipsec4-tun-input           good packets received
>>       11           ip4-local                  ip4 source lookup miss
>>        3           ip4-local                  unknown ip protocol
>>        3           ethernet-input             unknown vlan
>> vpp#