Hi Vijay,

Please share the output of 'sh fib entry 81', which, according to the adj on
the gre tunnel, is the FIB entry used to reach the next-hop.

/neale


From: Vijay Kumar <vjkumar2...@gmail.com>
Date: Wednesday, 17 March 2021 at 18:48
To: Neale Ranns <ne...@graphiant.com>
Cc: y...@wangsu.com, Vijay Kumar Nagaraj <vijay...@microsoft.com>,
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.
Hi Neale,

I did the configuration exactly as you suggested above, but the ping from VPP
to the overlay still fails and the packets are blackholed.
In my case both the gre tunnel and the teib peer were created in fib-idx 1;
my setup always uses a non-zero fib.

Let me know if you have tested mgre in a non-zero fib.

Some points FYI:
===============
1) The fib entry for my overlay (7.7.7.7/32) shows dpo-drop, as pasted below.
2) The vpp version I am running is mentioned below.
3) I also applied the mGRE patch you shared yesterday:
               https://gerrit.fd.io/r/c/vpp/+/31643

Topology and config
==============================
Strongswan VM (20.20.99.215, gre peer 2.2.2.1)           
<=======================>    VPP cluster (20.20.99.99, gre peer 2.2.2.2)

Configuration on VPP side
================
create gre tunnel src 20.20.99.99 outer-table-id 1 instance 1 multipoint
set interface ip addr gre1 2.2.2.2/32
set interface state gre1 up
create teib gre1 peer 2.2.2.1 nh-table-id 1 nh 20.20.99.215

ip route add 7.7.7.7/32 via 2.2.2.1 gre1
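
In case it helps, this is how I understand the resolution in the non-zero
table can be checked (a sketch, assuming the standard CLI syntax;
20.20.99.215 is the teib next-hop from my config above):

show teib
show ip fib table 1 20.20.99.215/32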


FIB entry and logs
=======================
NWU, fib_index:1, flow hash:[src dst sport dport proto ] epoch:0 flags:none 
locks:[API:2, ]
7.7.7.7/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:85 buckets:1 uRPF:115 to:[0:0]]
    [0] [@6]: ipv4 via 2.2.2.1 gre1: mtu:9000 next:15 
4500000000000000fe2fcd6c14146363141463d700000800
        stacked-on entry:81:
          [@1]: dpo-drop ip4
20.20.99.99/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:65 buckets:1 uRPF:90 to:[0:0]]
    [0] [@2]: dpo-receive: 20.20.99.99 on VirtualFuncEthernet0/6/0.1556
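
Note the 'stacked-on entry:81' above resolving to dpo-drop: as I understand
it, the tunnel adjacency stacks on the FIB entry for the underlay next-hop
(20.20.99.215 in table 1), so dumping that entry directly should show what is
failing to resolve (again assuming the standard CLI):

vpp# sh fib entry 81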

vpp# ping 7.7.7.7 source gre1

Statistics: 5 sent, 0 received, 100% packet loss
vpp# show node counters
   Count                    Node                  Reason
        12                null-node               blackholed packets
       945             an_ppe_wfectrl             wfectrl packets received
       945             an_ppe_wfectrl             wfectrl replies sent
       754             an_ppe_wfectrl             session stat request received
          1             an_ppe_wfectrl             service construct config request received
          1             an_ppe_wfectrl             service construct config request success
          1             an_ppe_wfectrl             service config request received
         1             an_ppe_wfectrl             service config request success
       157             an_ppe_wfectrl             dpi stats request received
       157             an_ppe_wfectrl             dpi stats request success
         6             an_ppe_wfectrl             nat stats request received
         6             an_ppe_wfectrl             nat stats request success
vpp#
vpp#
vpp# show version
vpp v20.05.1-2~gca5e4556e-dirty built by an-vijay_kumar on af37e99caca7 at 
2021-03-17T14:31:40
vpp#
vpp#



On Wed, Mar 17, 2021 at 5:26 PM Neale Ranns <ne...@graphiant.com> wrote:

Can I suggest a few changes? You should treat an mgre interface much like
ethernet when assigning addresses. So it should be:

    set interface state eth0 up
    set interface ip addr eth0 1.1.1.1/24
    create gre tunnel src 1.1.1.1 instance 1 multipoint
    set interface state gre1 up
    set interface ip addr gre1 2.1.1.2/28

These are the equivalent of ARP entries for the hosts (the nh is the MAC
address equivalent):
    create teib gre1 peer 2.1.1.3 nh 1.1.1.2
    create teib gre1 peer 2.1.1.4 nh 1.1.1.3

Then you can add whatever routes you have in the overlay via these GRE peers:
   ip route add 4.4.4.4/32 via 2.1.1.4 gre1
Note that you must specify the next hop, just as on an ethernet.
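
To check the neighbours, 'show teib' should list one entry per peer; a
hypothetical output for the two peers above, following the format shown
later in this thread:

    vpp# show teib
    [0] gre1:2.1.1.3 via [0]:1.1.1.2/32
    [1] gre1:2.1.1.4 via [0]:1.1.1.3/32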
/neale


From: vpp-dev@lists.fd.io on behalf of 叶东岗 via lists.fd.io
<yedg=wangsu....@lists.fd.io>
Date: Wednesday, 17 March 2021 at 10:54
To: Vijay Kumar Nagaraj <vijay...@microsoft.com>, vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.

    set interface state eth0 up
    set interface ip addr eth0 1.1.1.1/24
    create gre tunnel src 1.1.1.1 instance 1 multipoint
    set interface state gre1 up
    set interface ip addr gre1 2.1.1.2/32
    create teib gre1 peer 3.3.3.3 nh 1.1.1.2
    ip route add 3.3.3.3/32 via gre1
    create teib gre1 peer 4.4.4.4 nh 1.1.1.3
    ip route add 4.4.4.4/32 via gre1

This config works.
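
With this config, re-checking the entry should (assuming standard behaviour)
now show 'forwarding:' via a unicast-ip4-chain rather than UNRESOLVED:

DBGvpp# show ip fib 3.3.3.3/32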
On 2021/3/17 17:28, Vijay Kumar Nagaraj wrote:
Hi Yedg,

Gentle reminder!!

Hope you are doing fine.

I am trying mGRE for a project at Microsoft and I don't have much idea about
the exact config either. I followed the mGRE example on the fd.io wiki page,
but it crashes when I configure the multipoint tunnel, set up a route, and
try to ping from VPP to the destination host.

Can you please share your mGRE config if it is working?


From: Vijay Kumar N
Sent: 15 March 2021 11:09
To: 'y...@wangsu.com' <y...@wangsu.com>
Cc: vjkumar2...@gmail.com
Subject: RE: [vpp-dev] mgre interface get UNRESOLVED fib entry.

Hi Yedg,

Hope you are doing fine. I saw your recent query on the vpp mailing list.

Were you able to successfully test the mGRE feature?
Did the config below work for you after Neale's reply?

I am trying mGRE for a project at Microsoft and I don't have much idea about
the exact config either. I followed the mGRE example on the fd.io wiki page,
but it crashes when I configure the multipoint tunnel, set up a route, and
try to ping from VPP to the destination host.

Can you please share your mGRE config if it is working?


Regards.

---------- Forwarded message ---------
From: Neale Ranns <ne...@graphiant.com>
Date: Mon, Feb 22, 2021 at 8:47 PM
Subject: Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.
To: y...@wangsu.com, vpp-dev@lists.fd.io



From: vpp-dev@lists.fd.io on behalf of 叶东岗 via lists.fd.io
<yedg=wangsu....@lists.fd.io>
Date: Monday, 22 February 2021 at 13:53
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] mgre interface get UNRESOLVED fib entry.
Hi:

     I tried to configure an mgre interface following these steps, but I get
an UNRESOLVED fib entry. Is that right? I think it should be a
unicast-ip4-chain.

     Any examples of mgre config? Thanks.


create memif socket id 1 filename /work/memif1
create interface memif socket-id 1 master
set interface state memif1/0 up
set interface ip addr memif1/0 1.1.1.2/24
set interface rx-mode memif1/0 interrupt
create gre tunnel src 1.1.1.2 instance 1 multipoint
set interface state gre1 up
set interface ip addr gre1 2.1.1.2/32
create teib gre1 peer 3.3.3.3 nh 1.1.1.1
3.3.3.3 is not in the same subnet as 2.1.1.2/32, so it's not a valid
neighbour, hence the UNRESOLVED.
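A minimal sketch of a fix (the addressing is assumed, for illustration):
give gre1 a subnet that contains the peers' overlay addresses, point the
teib entry at a peer inside that subnet, and route the remote overlay prefix
via that peer, e.g. instead of the last two config lines above:

    set interface ip addr gre1 2.1.1.2/28
    create teib gre1 peer 2.1.1.3 nh 1.1.1.1
    ip route add 3.3.3.3/32 via 2.1.1.3 gre1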
/neale



DBGvpp# show ip fib 3.3.3.3/32
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel
] epoch:0 flags:none locks:[adjacency:2, recursive-resolution:1,
default-route:1, ]
3.3.3.3/32 fib:0 index:16 locks:4
   adjacency refs:1 entry-flags:attached,
src-flags:added,contributing,active, cover:0
     path-list:[21] locks:2 uPRF-list:24 len:1 itfs:[2, ]
       path:[25] pl-index:21 ip4 weight=1 pref=0 attached-nexthop:
oper-flags:resolved,
         3.3.3.3 gre1
       [@0]: ipv4 via 3.3.3.3 gre1: mtu:9000 next:4
4500000000000000fe2fb8cb010101010101010100000800
              stacked-on entry:11:
                [@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3
02fe21058f7502fe049eea920800
     Extensions:
      path:25
   recursive-resolution refs:1 src-flags:added, cover:-1

  forwarding:   UNRESOLVED


DBGvpp# show adj
[@0] ipv4-glean: [src:0.0.0.0/0] memif1/0: mtu:9000 next:1
ffffffffffff02fe049eea920806
[@1] ipv4-glean: [src:1.1.1.0/24] memif1/0: mtu:9000 next:1
ffffffffffff02fe049eea920806
[@2] ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3 02fe21058f7502fe049eea920800
[@3] ipv4 via 3.3.3.3 gre1: mtu:9000 next:4
4500000000000000fe2fb8cb010101010101010100000800
   stacked-on entry:11:
     [@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3
02fe21058f7502fe049eea920800


DBGvpp# show teib
[0] gre1:3.3.3.3 via [0]:1.1.1.1/32


