[vpp-dev] mgre interface get UNRESOLVED fib entry.

2021-02-22 Thread 叶东岗

Hi:

    I tried to configure an mGRE interface following the steps below, but I get an
UNRESOLVED fib entry. Is that right? I think it should be unicast-ip4-chain.


    Are there any examples of mGRE config? Thanks.


create memif socket id 1 filename /work/memif1
create interface memif socket-id 1 master
set interface state memif1/0 up
set interface ip addr memif1/0 1.1.1.2/24
set interface rx-mode memif1/0 interrupt
create gre tunnel src 1.1.1.2 instance 1 multipoint
set interface state gre1 up
set interface ip addr gre1 2.1.1.2/32
create teib  gre1 peer 3.3.3.3 nh 1.1.1.1


DBGvpp# show ip fib 3.3.3.3/32
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel 
] epoch:0 flags:none locks:[adjacency:2, recursive-resolution:1, 
default-route:1, ]

3.3.3.3/32 fib:0 index:16 locks:4
  adjacency refs:1 entry-flags:attached, 
src-flags:added,contributing,active, cover:0

    path-list:[21] locks:2 uPRF-list:24 len:1 itfs:[2, ]
  path:[25] pl-index:21 ip4 weight=1 pref=0 attached-nexthop: 
oper-flags:resolved,

    3.3.3.3 gre1
  [@0]: ipv4 via 3.3.3.3 gre1: mtu:9000 next:4 
4500fe2fb8cb01010101010101010800

 stacked-on entry:11:
   [@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3 
02fe21058f7502fe049eea920800

    Extensions:
 path:25
  recursive-resolution refs:1 src-flags:added, cover:-1

 forwarding:   UNRESOLVED


DBGvpp# show adj
[@0] ipv4-glean: [src:0.0.0.0/0] memif1/0: mtu:9000 next:1 
02fe049eea920806
[@1] ipv4-glean: [src:1.1.1.0/24] memif1/0: mtu:9000 next:1 
02fe049eea920806

[@2] ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3 02fe21058f7502fe049eea920800
[@3] ipv4 via 3.3.3.3 gre1: mtu:9000 next:4 
4500fe2fb8cb01010101010101010800

  stacked-on entry:11:
    [@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3 
02fe21058f7502fe049eea920800



DBGvpp# show teib
[0] gre1:3.3.3.3 via [0]:1.1.1.1/32


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18780): https://lists.fd.io/g/vpp-dev/message/18780
Mute This Topic: https://lists.fd.io/mt/80823285/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.

2021-02-22 Thread 叶东岗

After explicitly adding a route:

DBGvpp# ip route add 3.3.3.3 via gre1

DBGvpp# show ip fib 3.3.3.3/32
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel 
] epoch:0 flags:none locks:[adjacency:2, recursive-resolution:1, 
default-route:1, ]

3.3.3.3/32 fib:0 index:16 locks:5
  CLI refs:1 entry-flags:attached, src-flags:added,contributing,active,
    path-list:[20] locks:2 flags:shared, uPRF-list:12 len:1 itfs:[2, ]
  path:[26] pl-index:20 ip4 weight=1 pref=0 attached-nexthop: 
oper-flags:resolved, cfg-flags:attached,

    3.3.3.3 gre1
  [@0]: ipv4 via 3.3.3.3 gre1: mtu:9000 next:4 
4500fe2fb8cb01010101010101010800

 stacked-on entry:11:
   [@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3 
02fe21058f7502fe049eea920800


  adjacency refs:1 entry-flags:attached, src-flags:added, cover:-1
    path-list:[21] locks:1 uPRF-list:24 len:1 itfs:[2, ]
  path:[25] pl-index:21 ip4 weight=1 pref=0 attached-nexthop: 
oper-flags:resolved,

    3.3.3.3 gre1
  [@0]: ipv4 via 3.3.3.3 gre1: mtu:9000 next:4 
4500fe2fb8cb01010101010101010800

 stacked-on entry:11:
   [@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3 
02fe21058f7502fe049eea920800

    Extensions:
 path:25
  recursive-resolution refs:1 src-flags:added, cover:-1

 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:16 buckets:1 uRPF:12 to:[0:0]]
    [0] [@6]: ipv4 via 3.3.3.3 gre1: mtu:9000 next:4 
4500fe2fb8cb01010101010101010800

    stacked-on entry:11:
  [@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3 
02fe21058f7502fe049eea920800



Should this step really be necessary?


On 2021/2/22 at 8:53 PM, yedg wrote:

Hi:

    I tried to configure an mGRE interface following the steps below, but I get an
UNRESOLVED fib entry. Is that right? I think it should be unicast-ip4-chain.


    Are there any examples of mGRE config? Thanks.


create memif socket id 1 filename /work/memif1
create interface memif socket-id 1 master
set interface state memif1/0 up
set interface ip addr memif1/0 1.1.1.2/24
set interface rx-mode memif1/0 interrupt
create gre tunnel src 1.1.1.2 instance 1 multipoint
set interface state gre1 up
set interface ip addr gre1 2.1.1.2/32
create teib  gre1 peer 3.3.3.3 nh 1.1.1.1


DBGvpp# show ip fib 3.3.3.3/32
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto 
flowlabel ] epoch:0 flags:none locks:[adjacency:2, 
recursive-resolution:1, default-route:1, ]

3.3.3.3/32 fib:0 index:16 locks:4
  adjacency refs:1 entry-flags:attached, 
src-flags:added,contributing,active, cover:0

    path-list:[21] locks:2 uPRF-list:24 len:1 itfs:[2, ]
  path:[25] pl-index:21 ip4 weight=1 pref=0 attached-nexthop: 
oper-flags:resolved,

    3.3.3.3 gre1
  [@0]: ipv4 via 3.3.3.3 gre1: mtu:9000 next:4 
4500fe2fb8cb01010101010101010800

 stacked-on entry:11:
   [@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3 
02fe21058f7502fe049eea920800

    Extensions:
 path:25
  recursive-resolution refs:1 src-flags:added, cover:-1

 forwarding:   UNRESOLVED


DBGvpp# show adj
[@0] ipv4-glean: [src:0.0.0.0/0] memif1/0: mtu:9000 next:1 
02fe049eea920806
[@1] ipv4-glean: [src:1.1.1.0/24] memif1/0: mtu:9000 next:1 
02fe049eea920806
[@2] ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3 
02fe21058f7502fe049eea920800
[@3] ipv4 via 3.3.3.3 gre1: mtu:9000 next:4 
4500fe2fb8cb01010101010101010800

  stacked-on entry:11:
    [@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3 
02fe21058f7502fe049eea920800



DBGvpp# show teib
[0] gre1:3.3.3.3 via [0]:1.1.1.1/32




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18781): https://lists.fd.io/g/vpp-dev/message/18781
Mute This Topic: https://lists.fd.io/mt/80823285/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.

2021-02-22 Thread Neale Ranns


From: vpp-dev@lists.fd.io  on behalf of 叶东岗 via 
lists.fd.io 
Date: Monday, 22 February 2021 at 13:53
To: vpp-dev@lists.fd.io 
Subject: [vpp-dev] mgre interface get UNRESOLVED fib entry.
Hi:

 I tried to configure an mGRE interface following the steps below, but I get an
UNRESOLVED fib entry. Is that right? I think it should be unicast-ip4-chain.

 Are there any examples of mGRE config? Thanks.


create memif socket id 1 filename /work/memif1
create interface memif socket-id 1 master
set interface state memif1/0 up
set interface ip addr memif1/0 1.1.1.2/24
set interface rx-mode memif1/0 interrupt
create gre tunnel src 1.1.1.2 instance 1 multipoint
set interface state gre1 up
set interface ip addr gre1 2.1.1.2/32
create teib  gre1 peer 3.3.3.3 nh 1.1.1.1
3.3.3.3 is not in the same subnet as 2.1.1.2/32, so it’s not a valid neighbour, 
hence the UNRESOLVED.
/neale
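
For reference, a minimal sketch of a configuration where the TEIB peer does sit
inside the mGRE interface's subnet (the /28 mask and the 2.1.1.3 overlay peer
address are illustrative; this mirrors the approach suggested later in the
thread):

create gre tunnel src 1.1.1.2 instance 1 multipoint
set interface state gre1 up
set interface ip addr gre1 2.1.1.2/28
create teib gre1 peer 2.1.1.3 nh 1.1.1.1

With the peer inside gre1's subnet it is a valid neighbour, so the corresponding
/32 entry should resolve instead of staying UNRESOLVED.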



DBGvpp# show ip fib 3.3.3.3/32
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel
] epoch:0 flags:none locks:[adjacency:2, recursive-resolution:1,
default-route:1, ]
3.3.3.3/32 fib:0 index:16 locks:4
   adjacency refs:1 entry-flags:attached,
src-flags:added,contributing,active, cover:0
 path-list:[21] locks:2 uPRF-list:24 len:1 itfs:[2, ]
   path:[25] pl-index:21 ip4 weight=1 pref=0 attached-nexthop:
oper-flags:resolved,
 3.3.3.3 gre1
   [@0]: ipv4 via 3.3.3.3 gre1: mtu:9000 next:4
4500fe2fb8cb01010101010101010800
  stacked-on entry:11:
[@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3
02fe21058f7502fe049eea920800
 Extensions:
  path:25
   recursive-resolution refs:1 src-flags:added, cover:-1

  forwarding:   UNRESOLVED


DBGvpp# show adj
[@0] ipv4-glean: [src:0.0.0.0/0] memif1/0: mtu:9000 next:1
02fe049eea920806
[@1] ipv4-glean: [src:1.1.1.0/24] memif1/0: mtu:9000 next:1
02fe049eea920806
[@2] ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3 02fe21058f7502fe049eea920800
[@3] ipv4 via 3.3.3.3 gre1: mtu:9000 next:4
4500fe2fb8cb01010101010101010800
   stacked-on entry:11:
 [@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3
02fe21058f7502fe049eea920800


DBGvpp# show teib
[0] gre1:3.3.3.3 via [0]:1.1.1.1/32

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18783): https://lists.fd.io/g/vpp-dev/message/18783
Mute This Topic: https://lists.fd.io/mt/80823285/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.

2021-03-17 Thread 叶东岗

    set interface state eth0 up
    set interface ip addr eth0 1.1.1.1/24
    create gre tunnel src 1.1.1.1 instance 1 multipoint
    set interface state gre1 up
    set interface ip addr gre1 2.1.1.2/32
    create teib  gre1 peer 3.3.3.3 nh 1.1.1.2
    ip route add 3.3.3.3/32 via  gre1
    create teib  gre1 peer 4.4.4.4 nh 1.1.1.3
    ip route add 4.4.4.4/32 via  gre1

this  works.


On 2021/3/15 at 1:44 PM, Vijay Kumar Nagaraj wrote:


Hi Yedg,

Hope you are doing fine. I saw your recent query on the vpp mailing list.

Were you able to test the mGRE feature successfully?

Did the config below work for you after Neale's reply?

I am trying mGRE for a project at Microsoft and I don't have much idea about
the exact config either. I followed the mGRE example on the fd.io wiki page,
but VPP crashes when I configure a multipoint tunnel, set up a route and try
to ping from VPP to the destination host.


Can you please share your mGRE config if it is working?

Regards.

-- Forwarded message -
From: Neale Ranns <ne...@graphiant.com>
Date: Mon, Feb 22, 2021 at 8:47 PM
Subject: Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.
To: y...@wangsu.com, vpp-dev@lists.fd.io


From: vpp-dev@lists.fd.io on behalf of 叶东岗 via lists.fd.io <wangsu@lists.fd.io>

Date: Monday, 22 February 2021 at 13:53
To: vpp-dev@lists.fd.io

Subject: [vpp-dev] mgre interface get UNRESOLVED fib entry.

Hi:

 I tried to configure an mGRE interface following the steps below, but I get an
UNRESOLVED fib entry. Is that right? I think it should be unicast-ip4-chain.


 Are there any examples of mGRE config? Thanks.


create memif socket id 1 filename /work/memif1
create interface memif socket-id 1 master
set interface state memif1/0 up
set interface ip addr memif1/0 1.1.1.2/24
set interface rx-mode memif1/0 interrupt
create gre tunnel src 1.1.1.2 instance 1 multipoint
set interface state gre1 up
set interface ip addr gre1 2.1.1.2/32
create teib  gre1 peer 3.3.3.3 nh 1.1.1.1

3.3.3.3 is not in the same subnet as 2.1.1.2/32, so it’s not a valid neighbour,
hence the UNRESOLVED.


/neale




DBGvpp# show ip fib 3.3.3.3/32 <http://3.3.3.3/32>
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel
] epoch:0 flags:none locks:[adjacency:2, recursive-resolution:1,
default-route:1, ]
3.3.3.3/32 <http://3.3.3.3/32> fib:0 index:16 locks:4
   adjacency refs:1 entry-flags:attached,
src-flags:added,contributing,active, cover:0
 path-list:[21] locks:2 uPRF-list:24 len:1 itfs:[2, ]
   path:[25] pl-index:21 ip4 weight=1 pref=0 attached-nexthop:
oper-flags:resolved,
 3.3.3.3 gre1
   [@0]: ipv4 via 3.3.3.3 gre1: mtu:9000 next:4
4500fe2fb8cb01010101010101010800
  stacked-on entry:11:
    [@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3
02fe21058f7502fe049eea920800
 Extensions:
  path:25
   recursive-resolution refs:1 src-flags:added, cover:-1

  forwarding:   UNRESOLVED


DBGvpp# show adj
[@0] ipv4-glean: [src:0.0.0.0/0] memif1/0: mtu:9000 next:1
02fe049eea920806
[@1] ipv4-glean: [src:1.1.1.0/24] memif1/0: mtu:9000 next:1
02fe049eea920806
[@2] ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3 
02fe21058f7502fe049eea920800

[@3] ipv4 via 3.3.3.3 gre1: mtu:9000 next:4
4500fe2fb8cb01010101010101010800
   stacked-on entry:11:
 [@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3
02fe21058f7502fe049eea920800


DBGvpp# show teib
[0] gre1:3.3.3.3 via [0]:1.1.1.1/32 <http://1.1.1.1/32>





-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18949): https://lists.fd.io/g/vpp-dev/message/18949
Mute This Topic: https://lists.fd.io/mt/80823285/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.

2021-03-17 Thread 叶东岗

    set interface state eth0 up
    set interface ip addr eth0 1.1.1.1/24
    create gre tunnel src 1.1.1.1 instance 1 multipoint
    set interface state gre1 up
    set interface ip addr gre1 2.1.1.2/32
    create teib  gre1 peer 3.3.3.3 nh 1.1.1.2
    ip route add 3.3.3.3/32 via  gre1
    create teib  gre1 peer 4.4.4.4 nh 1.1.1.3
    ip route add 4.4.4.4/32 via  gre1

this config works.

On 2021/3/17 at 5:28 PM, Vijay Kumar Nagaraj wrote:


Hi Yedg,

Gentle reminder!!

Hope you are doing fine.

I am trying mGRE for a project at Microsoft and I don't have much idea about
the exact config either. I followed the mGRE example on the fd.io wiki page,
but VPP crashes when I configure a multipoint tunnel, set up a route and try
to ping from VPP to the destination host.


Can you please share your mGRE config if it is working?

From: Vijay Kumar N
Sent: 15 March 2021 11:09
To: 'y...@wangsu.com'
Cc: vjkumar2...@gmail.com
Subject: RE: [vpp-dev] mgre interface get UNRESOLVED fib entry.

Hi Yedg,

Hope you are doing fine. I saw your recent query on the vpp mailing list.

Were you able to test the mGRE feature successfully?

Did the config below work for you after Neale's reply?

I am trying mGRE for a project at Microsoft and I don't have much idea about
the exact config either. I followed the mGRE example on the fd.io wiki page,
but VPP crashes when I configure a multipoint tunnel, set up a route and try
to ping from VPP to the destination host.


Can you please share your mGRE config if it is working?

Regards.

-- Forwarded message -
From: Neale Ranns <ne...@graphiant.com>
Date: Mon, Feb 22, 2021 at 8:47 PM
Subject: Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.
To: y...@wangsu.com, vpp-dev@lists.fd.io


From: vpp-dev@lists.fd.io on behalf of 叶东岗 via lists.fd.io <wangsu@lists.fd.io>

Date: Monday, 22 February 2021 at 13:53
To: vpp-dev@lists.fd.io

Subject: [vpp-dev] mgre interface get UNRESOLVED fib entry.

Hi:

 I tried to configure an mGRE interface following the steps below, but I get an
UNRESOLVED fib entry. Is that right? I think it should be unicast-ip4-chain.


 Are there any examples of mGRE config? Thanks.


create memif socket id 1 filename /work/memif1
create interface memif socket-id 1 master
set interface state memif1/0 up
set interface ip addr memif1/0 1.1.1.2/24
set interface rx-mode memif1/0 interrupt
create gre tunnel src 1.1.1.2 instance 1 multipoint
set interface state gre1 up
set interface ip addr gre1 2.1.1.2/32
create teib  gre1 peer 3.3.3.3 nh 1.1.1.1

3.3.3.3 is not in the same subnet as 2.1.1.2/32, so it’s not a valid neighbour,
hence the UNRESOLVED.


/neale




DBGvpp# show ip fib 3.3.3.3/32 <http://3.3.3.3/32>
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel
] epoch:0 flags:none locks:[adjacency:2, recursive-resolution:1,
default-route:1, ]
3.3.3.3/32 <http://3.3.3.3/32> fib:0 index:16 locks:4
   adjacency refs:1 entry-flags:attached,
src-flags:added,contributing,active, cover:0
 path-list:[21] locks:2 uPRF-list:24 len:1 itfs:[2, ]
   path:[25] pl-index:21 ip4 weight=1 pref=0 attached-nexthop:
oper-flags:resolved,
 3.3.3.3 gre1
   [@0]: ipv4 via 3.3.3.3 gre1: mtu:9000 next:4
4500fe2fb8cb01010101010101010800
  stacked-on entry:11:
    [@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3
02fe21058f7502fe049eea920800
 Extensions:
  path:25
   recursive-resolution refs:1 src-flags:added, cover:-1

  forwarding:   UNRESOLVED


DBGvpp# show adj
[@0] ipv4-glean: [src:0.0.0.0/0] memif1/0: mtu:9000 next:1
02fe049eea920806
[@1] ipv4-glean: [src:1.1.1.0/24] memif1/0: mtu:9000 next:1
02fe049eea920806
[@2] ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3 
02fe21058f7502fe049eea920800

[@3] ipv4 via 3.3.3.3 gre1: mtu:9000 next:4
4500fe2fb8cb01010101010101010800
   stacked-on entry:11:
 [@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3
02fe21058f7502fe049eea920800


DBGvpp# show teib
[0] gre1:3.3.3.3 via [0]:1.1.1.1/32 <http://1.1.1.1/32>





-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18950): https://lists.fd.io/g/vpp-dev/message/18950
Mute This Topic: https://lists.fd.io/mt/80823285/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.

2021-03-17 Thread Neale Ranns

Can I suggest a few changes? You should treat an mGRE interface much like an
Ethernet interface when assigning addresses. So it should be:

set interface state eth0 up
set interface ip addr eth0 1.1.1.1/24
create gre tunnel src 1.1.1.1 instance 1 multipoint
set interface state gre1 up
set interface ip addr gre1 2.1.1.2/28

these are the equivalent of ARPs for hosts (the nh is the MAC address 
equivalent)
create teib  gre1 peer 2.1.1.3 nh 1.1.1.2
create teib  gre1 peer 2.1.1.4 nh 1.1.1.3

then you can add whatever routes you have in the overlay via these GRE peers
   ip route add 4.4.4.4/32 via  2.1.1.4 gre1
and you must specify the next hop, like on an ethernet.

/neale
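
A possible way to sanity-check the result, using only commands that already
appear in this thread (the 4.4.4.4/32 route from the example above is assumed):

show teib
show ip fib 4.4.4.4/32
ping 4.4.4.4

The 4.4.4.4/32 entry should then show a unicast-ip4-chain forwarding path via
2.1.1.4 on gre1 rather than UNRESOLVED.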


From: vpp-dev@lists.fd.io  on behalf of 叶东岗 via 
lists.fd.io 
Date: Wednesday, 17 March 2021 at 10:54
To: Vijay Kumar Nagaraj , vpp-dev@lists.fd.io 

Subject: Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.

set interface state eth0 up
set interface ip addr eth0 1.1.1.1/24
create gre tunnel src 1.1.1.1 instance 1 multipoint
set interface state gre1 up
set interface ip addr gre1 2.1.1.2/32
create teib  gre1 peer 3.3.3.3 nh 1.1.1.2
ip route add 3.3.3.3/32 via  gre1
create teib  gre1 peer 4.4.4.4 nh 1.1.1.3
ip route add 4.4.4.4/32 via  gre1

this config works.
On 2021/3/17 at 5:28 PM, Vijay Kumar Nagaraj wrote:
Hi Yedg,

Gentle reminder!!

Hope you are doing fine.

I am trying mGRE for a project at Microsoft and I don't have much idea about
the exact config either. I followed the mGRE example on the fd.io wiki page,
but VPP crashes when I configure a multipoint tunnel, set up a route and try
to ping from VPP to the destination host.

Can you please share your mGRE config if it is working?


From: Vijay Kumar N
Sent: 15 March 2021 11:09
To: 'y...@wangsu.com'
Cc: vjkumar2...@gmail.com
Subject: RE: [vpp-dev] mgre interface get UNRESOLVED fib entry.

Hi Yedg,

Hope you are doing fine. I saw your recent query on the vpp mailing list.

Were you able to test the mGRE feature successfully?
Did the config below work for you after Neale's reply?

I am trying mGRE for a project at Microsoft and I don't have much idea about
the exact config either. I followed the mGRE example on the fd.io wiki page,
but VPP crashes when I configure a multipoint tunnel, set up a route and try
to ping from VPP to the destination host.

Can you please share your mGRE config if it is working?


Regards.

-- Forwarded message -
From: Neale Ranns <ne...@graphiant.com>
Date: Mon, Feb 22, 2021 at 8:47 PM
Subject: Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.
To: y...@wangsu.com, vpp-dev@lists.fd.io



From: vpp-dev@lists.fd.io on behalf of 叶东岗 via lists.fd.io <wangsu@lists.fd.io>
Date: Monday, 22 February 2021 at 13:53
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] mgre interface get UNRESOLVED fib entry.
Hi:

 I tried to configure an mGRE interface following the steps below, but I get an
UNRESOLVED fib entry. Is that right? I think it should be unicast-ip4-chain.

 Are there any examples of mGRE config? Thanks.


create memif socket id 1 filename /work/memif1
create interface memif socket-id 1 master
set interface state memif1/0 up
set interface ip addr memif1/0 1.1.1.2/24
set interface rx-mode memif1/0 interrupt
create gre tunnel src 1.1.1.2 instance 1 multipoint
set interface state gre1 up
set interface ip addr gre1 2.1.1.2/32
create teib  gre1 peer 3.3.3.3 nh 1.1.1.1
3.3.3.3 is not in the same subnet as 2.1.1.2/32, so it’s not a valid neighbour,
hence the UNRESOLVED.
/neale



DBGvpp# show ip fib 3.3.3.3/32<http://3.3.3.3/32>
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel
] epoch:0 flags:none locks:[adjacency:2, recursive-resolution:1,
default-route:1, ]
3.3.3.3/32<http://3.3.3.3/32> fib:0 index:16 locks:4
   adjacency refs:1 entry-flags:attached,
src-flags:added,contributing,active, cover:0
 path-list:[21] locks:2 uPRF-list:24 len:1 itfs:[2, ]
   path:[25] pl-index:21 ip4 weight=1 pref=0 attached-nexthop:
oper-flags:resolved,
 3.3.3.3 gre1
   [@0]: ipv4 via 3.3.3.3 gre1: mtu:9000 next:4
4500fe2fb8cb01010101010101010800
  stacked-on entry:11:
[@3]: ipv4 via 1.1.1.1 memif1/0: mtu:9000 next:3
02fe21058f7502fe049eea920800
 Extensions:
  path:25
   recursive-resolution refs:1 src-flags:added, cover:-1

  forwarding:   UNRESOLVED


DBGvpp# show adj
[@0] ipv4-glean: [src:0.0.0.0/0<http://0.0.0.0/0>] mem

Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.

2021-03-17 Thread Vijay Kumar
Hi Neale,

I did the configuration exactly as you suggested above, but the ping from VPP to
the overlay is still failing and the packets are blackholed.
In my case both the GRE tunnel and the TEIB peer were created in fib-idx 1; my
setup always uses a non-zero FIB.

Have you tested mGRE in a non-zero FIB?

Some points FYI: -
===
1) the fib entry for my overlay (7.7.7.7/32) was showing dpo-drop as pasted
below.
2) The vpp version I am running is mentioned below.
3) I also applied the mGRE patch you shared yesterday.
   https://gerrit.fd.io/r/c/vpp/+/31643

Topology and config
==
Strongswan VM (20.20.99.215, gre peer 2.2.2.1)
 <===>VPP cluster (20.20.99.99, gre peer 2.2.2.2)

Configuration on VPP side

create gre tunnel src 20.20.99.99 outer-table-id 1 instance 1 multipoint
set interface ip addr gre1 2.2.2.2/32
set interface state gre1 up
create teib gre1 peer 2.2.2.1 nh-table-id 1 nh 20.20.99.215

ip route add 7.7.7.7/32 via 2.2.2.1 gre1


FIB entry and logs
===
NWU, fib_index:1, flow hash:[src dst sport dport proto ] epoch:0 flags:none
locks:[API:2, ]
7.7.7.7/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:85 buckets:1 uRPF:115 to:[0:0]]
[0] [@6]: ipv4 via 2.2.2.1 gre1: mtu:9000 next:15
4500fe2fcd6c14146363141463d70800
stacked-on entry:81:
  [@1]: dpo-drop ip4
20.20.99.99/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:65 buckets:1 uRPF:90 to:[0:0]]
[0] [@2]: dpo-receive: 20.20.99.99 on VirtualFuncEthernet0/6/0.1556

vpp# ping 7.7.7.7 source gre1

Statistics: 5 sent, 0 received, 100% packet loss
vpp# show node counters
   Count               Node                 Reason
      12          null-node                  blackholed packets
     945          an_ppe_wfectrl             wfectrl packets received
     945          an_ppe_wfectrl             wfectrl replies sent
     754          an_ppe_wfectrl             session stat request received
       1          an_ppe_wfectrl             service construct config request received
       1          an_ppe_wfectrl             service construct config request success
       1          an_ppe_wfectrl             service config request received
       1          an_ppe_wfectrl             service config request success
     157          an_ppe_wfectrl             dpi stats request received
     157          an_ppe_wfectrl             dpi stats request success
       6          an_ppe_wfectrl             nat stats request received
       6          an_ppe_wfectrl             nat stats request success
vpp#
vpp#
vpp# show version
vpp v20.05.1-2~gca5e4556e-dirty built by an-vijay_kumar on af37e99caca7 at
2021-03-17T14:31:40
vpp#
vpp#



On Wed, Mar 17, 2021 at 5:26 PM Neale Ranns  wrote:

>
>
> Can I suggest a few changes. You should consider a mgre interface much
> like ethernet when assigning addresses. So it should be;
>
>
>
> set interface state eth0 up
> set interface ip addr eth0 1.1.1.1/24
> create gre tunnel src 1.1.1.1 instance 1 multipoint
> set interface state gre1 up
> set interface ip addr gre1 2.1.1.2/28
>
>
>
> these are the equivalent of ARPs for hosts (the nh is the MAC address
> equivalent)
> create teib  gre1 peer 2.1.1.3 nh 1.1.1.2
> create teib  gre1 peer 2.1.1.4 nh 1.1.1.3
>
>
> then you can add whatever routes you have in the overlay via these GRE
> peers
>
>ip route add 4.4.4.4/32 via  2.1.1.4 gre1
>
> and you must specify the next hop, like on an ethernet.
>
> /neale
>
>
>
>
>
> *From: *vpp-dev@lists.fd.io  on behalf of 叶东岗 via
> lists.fd.io 
> *Date: *Wednesday, 17 March 2021 at 10:54
> *To: *Vijay Kumar Nagaraj , vpp-dev@lists.fd.io <
> vpp-dev@lists.fd.io>
> *Subject: *Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.
>
> set interface state eth0 up
> set interface ip addr eth0 1.1.1.1/24
> create gre tunnel src 1.1.1.1 instance 1 multipoint
> set interface state gre1 up
> set interface ip addr gre1 2.1.1.2/32
> create teib  gre1 peer 3.3.3.3 nh 1.1.1.2
> ip route add 3.3.3.3/32 via  gre1
> create teib  gre1 peer 4.4.4.4 nh 1.1.1.3
> ip route add 4.4.4.4/32 via  gre1
>
> this config works.
>
> On 2021/3/17 at 5:28 PM, Vijay Kumar Nagaraj wrote:
>
> Hi Yedg,
>
>
>
> Gentle reminder!!
>
>
>
> Hope you are doing fine.
>
>
>
> I am trying mGRE for a certain project at Microsoft and even I don’t have
> much idea about exact config. I followed mGRE example in the fd.io wiki
> page but it is crashing when I configured multipoint tunnel, setup route
> and tried to ping from VPP to the destination host
>
&

Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.

2021-03-17 Thread Neale Ranns
Hi Vijay,

Please ‘sh fib entry 81’ which, according to the adj on the gre tunnel, is the 
FIB entry to reach the next-hop.

/neale
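
For context, the '81' comes from the 'stacked-on entry:81:' line in the
adjacency and FIB output quoted below, so a possible sequence is:

show adj
show fib entry 81

Entry 81 is the underlay FIB entry the gre1 adjacency is stacked on; in the
quoted output it resolves to dpo-drop, which matches the blackholed packet
counters.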


From: Vijay Kumar 
Date: Wednesday, 17 March 2021 at 18:48
To: Neale Ranns 
Cc: y...@wangsu.com , Vijay Kumar Nagaraj 
, vpp-dev@lists.fd.io 
Subject: Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.
Hi Neale,

I did the configuration exactly as you suggested above, but the ping from VPP to
the overlay is still failing and the packets are blackholed.
In my case both the GRE tunnel and the TEIB peer were created in fib-idx 1; my
setup always uses a non-zero FIB.

Have you tested mGRE in a non-zero FIB?

Some points FYI: -
===
1) the fib entry for my overlay (7.7.7.7/32) was showing dpo-drop as pasted
below.
2) The vpp version I am running is mentioned below.
3) I also applied the mGRE patch you shared yesterday.
   https://gerrit.fd.io/r/c/vpp/+/31643

Topology and config
==
Strongswan VM (20.20.99.215, gre peer 2.2.2.1)   
<===>VPP cluster (20.20.99.99, gre peer 2.2.2.2)

Configuration on VPP side

create gre tunnel src 20.20.99.99 outer-table-id 1 instance 1 multipoint
set interface ip addr gre1 2.2.2.2/32
set interface state gre1 up
create teib gre1 peer 2.2.2.1 nh-table-id 1 nh 20.20.99.215

ip route add 7.7.7.7/32 via 2.2.2.1 gre1


FIB entry and logs
===
NWU, fib_index:1, flow hash:[src dst sport dport proto ] epoch:0 flags:none 
locks:[API:2, ]
7.7.7.7/32<http://7.7.7.7/32>
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:85 buckets:1 uRPF:115 to:[0:0]]
[0] [@6]: ipv4 via 2.2.2.1 gre1: mtu:9000 next:15 
4500fe2fcd6c14146363141463d70800
stacked-on entry:81:
  [@1]: dpo-drop ip4
20.20.99.99/32<http://20.20.99.99/32>
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:65 buckets:1 uRPF:90 to:[0:0]]
[0] [@2]: dpo-receive: 20.20.99.99 on VirtualFuncEthernet0/6/0.1556

vpp# ping 7.7.7.7 source gre1

Statistics: 5 sent, 0 received, 100% packet loss
vpp# show node counters
   Count               Node                 Reason
      12          null-node                  blackholed packets
     945          an_ppe_wfectrl             wfectrl packets received
     945          an_ppe_wfectrl             wfectrl replies sent
     754          an_ppe_wfectrl             session stat request received
       1          an_ppe_wfectrl             service construct config request received
       1          an_ppe_wfectrl             service construct config request success
       1          an_ppe_wfectrl             service config request received
       1          an_ppe_wfectrl             service config request success
     997          an_ppe_wfectrl             dpi stats request received
     997          an_ppe_wfectrl             dpi stats request success
       6          an_ppe_wfectrl             nat stats request received
       6          an_ppe_wfectrl             nat stats request success
vpp#
vpp#
vpp# show version
vpp v20.05.1-2~gca5e4556e-dirty built by an-vijay_kumar on af37e99caca7 at 
2021-03-17T14:31:40
vpp#
vpp#



On Wed, Mar 17, 2021 at 5:26 PM Neale Ranns <ne...@graphiant.com> wrote:

Can I suggest a few changes? You should treat an mGRE interface much like an
Ethernet interface when assigning addresses. So it should be:

set interface state eth0 up
set interface ip addr eth0 1.1.1.1/24
create gre tunnel src 1.1.1.1 instance 1 multipoint
set interface state gre1 up
set interface ip addr gre1 2.1.1.2/28

these are the equivalent of ARPs for hosts (the nh is the MAC address 
equivalent)
create teib  gre1 peer 2.1.1.3 nh 1.1.1.2
create teib  gre1 peer 2.1.1.4 nh 1.1.1.3

then you can add whatever routes you have in the overlay via these GRE peers
   ip route add 4.4.4.4/32 via 2.1.1.4 gre1
and you must specify the next hop, like on an ethernet.
/neale


From: vpp-dev@lists.fd.io on behalf of 叶东岗 via lists.fd.io <wangsu@lists.fd.io>
Date: Wednesday, 17 March 2021 at 10:54
To: Vijay Kumar Nagaraj <vijay...@microsoft.com>, vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.

set interface state eth0 up
set interface ip addr eth0 1.1.1.1/24
create gre tunnel src 1.1.1.1 instance 1 multipoint
set interface state gre1 up
set interface ip addr gre1 2.1.1.2/32
create teib  gre1 peer 3.3.3.3 nh 1.1.1.2
ip route add

Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.

2021-03-19 Thread Neale Ranns
Hi Vijay,

I was able to reproduce your issue. Please try with:
  https://gerrit.fd.io/r/c/vpp/+/31695

/neale

From: Vijay Kumar Nagaraj 
Date: Friday, 19 March 2021 at 19:12
To: Neale Ranns , Vijay Kumar 
Cc: y...@wangsu.com , vpp-dev@lists.fd.io 
Subject: RE: [vpp-dev] mgre interface get UNRESOLVED fib entry.
Hi Neale,

I have captured the output of “show fib entry 81”, which relates to the route for
the overlay and the GRE peer.
In that output the adjacency/FIB entry looks fine to me. I have followed all the
configuration exactly as you described below, but no luck: all the packets are
dropped as blackholed packets (highlighted in red).
The only difference between us is that the fib-idx is 1 in my case; in your
example there is no outer fib-idx in the create gre tunnel command or the
create teib command.

Topology and config
==
Strongswan VM (20.20.99.215, gre peer 2.2.2.1)   
<===>VPP cluster (20.20.99.99, gre peer 2.2.2.2)

Configuration on VPP side

create gre tunnel src 20.20.99.99 outer-table-id 1 instance 1 multipoint
set interface ip addr gre1 2.2.2.2/32
set interface state gre1 up
create teib gre1 peer 2.2.2.1 nh-table-id 1 nh 20.20.99.215

ip route add 7.7.7.7/32 table 1 via 2.2.2.1 gre1


vpp# show adj
[@0] ipv4-glean: loop0: mtu:9000 next:1 dead0806
[@1] ipv4-glean: loop1: mtu:9000 next:2 dead00010806
[@2] ipv4 via 0.0.0.0 memif0/0: mtu:65535 next:3
[@3] ipv4 via 0.0.0.0 memif0/1: mtu:65535 next:4
[@4] ipv4 via 0.0.0.0 memif0/2: mtu:65535 next:5
[@5] ipv4 via 0.0.0.0 memif128/0: mtu:65535 next:6
[@6] ipv4 via 0.0.0.0 memif128/1: mtu:65535 next:7
[@7] ipv4 via 0.0.0.0 memif128/2: mtu:65535 next:8
[@8] ipv4 via 0.0.0.0 memif192/0: mtu:65535 next:9
[@9] ipv4 via 0.0.0.0 memif192/1: mtu:65535 next:10
[@10] ipv4 via 0.0.0.0 memif192/2: mtu:65535 next:11
[@11] ipv4-glean: VirtualFuncEthernet0/7/0.1556: mtu:9000 next:3 
fa163ec2b4f4810006140806
[@12] ipv4 via 0.0.0.0 memif210/0: mtu:65535 next:12
[@13] ipv4 via 0.0.0.0 memif210/1: mtu:65535 next:13
[@14] ipv4 via 0.0.0.0 memif210/2: mtu:65535 next:14
[@15] ipv4 via 2.2.2.1 gre1: mtu:9000 next:15 
4500fe2fcd6c14146363141463d70800
  stacked-on entry:81:
[@3]: ipv4 via 20.20.99.215 VirtualFuncEthernet0/7/0.1556: mtu:1500 next:16 
fa163e4b6b42fa163ec2b4f4810006140800
[@16] ipv4 via 20.20.99.215 VirtualFuncEthernet0/7/0.1556: mtu:1500 next:16 
fa163e4b6b42fa163ec2b4f4810006140800
vpp#
vpp#
vpp# show gre tunnel
[0] instance 1 src 20.20.99.99 dst 0.0.0.0 fib-idx 1 sw-if-idx 18 payload L3 
multi-point
vpp#
vpp#
vpp# show teib
[0] gre1:2.2.2.1 via [1]:20.20.99.215/32
vpp#
vpp#
vpp# show ip fib
NWU, fib_index:1, flow hash:[src dst sport dport proto ] epoch:0 flags:none 
locks:[API:2, adjacency:2, ]
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:55 buckets:1 uRPF:66 to:[0:0]]
[0] [@0]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:56 buckets:1 uRPF:78 to:[0:0]]
[0] [@0]: dpo-drop ip4
2.2.2.1/32
  UNRESOLVED
7.7.7.7/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:83 buckets:1 uRPF:113 to:[0:0]]
[0] [@6]: ipv4 via 2.2.2.1 gre1: mtu:9000 next:15 
4500fe2fcd6c14146363141463d70800
stacked-on entry:81:
  [@3]: ipv4 via 20.20.99.215 VirtualFuncEthernet0/7/0.1556: mtu:1500 
next:16 fa163e4b6b42fa163ec2b4f4810006140800
20.20.99.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:63 buckets:1 uRPF:86 to:[0:0]]
[0] [@0]: dpo-drop ip4
20.20.99.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:62 buckets:1 uRPF:85 to:[0:0]]
[0] [@4]: ipv4-glean: VirtualFuncEthernet0/7/0.1556: mtu:9000 next:3 
fa163ec2b4f4810006140806
20.20.99.99/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:65 buckets:1 uRPF:90 to:[1:80]]
[0] [@2]: dpo-receive: 20.20.99.99 on VirtualFuncEthernet0/7/0.1556
20.20.99.215/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:82 buckets:1 uRPF:116 to:[0:0]]
[0] [@5]: ipv4 via 20.20.99.215 Virtua

Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.

2021-03-19 Thread Vijay Kumar
Hi Neale,

Thanks for sharing the patch.

While debugging today with gdb I found that the fib_index value in the if
block was always 0. I then traced the code and found that although the
fib_index_by_sw_if_index field is set in ip4_main, the call
to fib_table_get_index_for_sw_if_index() always yields 0, which causes the
teib entry to be added to the wrong FIB (in the problem case it was always
added to the default FIB, fib 0).

Along the lines of your patch, I simply commented out the call
to fib_table_get_index_for_sw_if_index() and used the outer fib_index (which
is nh_fib_index in your patch). This change ensures the teib entry gets added
properly.
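
For what it's worth, a quick way to confirm which FIB a TEIB entry actually
landed in, using only show commands already used in this thread:

show gre tunnel
show teib

show gre tunnel reports the tunnel's fib-idx, and in the show teib output the
bracketed index after "via" (e.g. via [1]:20.20.99.215/32) is the next-hop FIB
index; both should point at the intended table.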

I am building the change. I will update after testing.

Thank you for the support.


Regards,
Vijay Kumar N






On Sat, Mar 20, 2021 at 1:14 AM Neale Ranns  wrote:

> Hi Vijay,
>
>
>
> I was able to re-produce your issue. Please try with:
>
>   https://gerrit.fd.io/r/c/vpp/+/31695
>
>
>
> /neale
>
>
>
> *From: *Vijay Kumar Nagaraj 
> *Date: *Friday, 19 March 2021 at 19:12
> *To: *Neale Ranns , Vijay Kumar <
> vjkumar2...@gmail.com>
> *Cc: *y...@wangsu.com , vpp-dev@lists.fd.io <
> vpp-dev@lists.fd.io>
> *Subject: *RE: [vpp-dev] mgre interface get UNRESOLVED fib entry.
>
> Hi Neale,
>
>
>
> I have captured the output of “show fib entry 81” that is related to the
> route of the overlay and GRE peer.
>
> In the output, the adjacency/FIB entry looks fine for me. I have followed
> all the configurations exactly as you mentioned below but there is no luck. 
> All
> the packets are dropped as blackholed packets (highlighted in red)
>
> The only difference between us is the fib-idx is 1 in my case.  In your
> example there is no outer fib-idx in create gre tunnel cmd or either create
> teib cmd.
>
>
>
> Topology and config
>
> ==
>
> Strongswan VM (20.20.99.215, gre peer 2.2.2.1)
>  <===>VPP cluster (20.20.99.99, gre peer 2.2.2.2)
>
>
>
> Configuration on VPP side
>
> 
>
> create gre tunnel src *20.20.99.99* outer-table-id 1 instance 1 multipoint
> set interface ip addr gre1 2.2.2.2/32
> <https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2F2.2.2.2%2F32&data=04%7C01%7Cvijaynag%40microsoft.com%7C017e3563375946808c3608d8e9794177%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637516055192691263%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&sdata=%2BsCTdeX3Z%2Bvp4smOeUAUfl6GpmXZbGRSGgbtM9hcYx0%3D&reserved=0>
> set interface state gre1 up
> create teib gre1 peer 2.2.2.1 nh-table-id 1 nh *20.20.99.215 *
>
> ip route add 7.7.7.7/32
> <https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2F7.7.7.7%2F32&data=04%7C01%7Cvijaynag%40microsoft.com%7C017e3563375946808c3608d8e9794177%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637516055192701256%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&sdata=kyEGUcXvxn%2FFWf2uqc%2BRRkbE2wOWVV6vmX8rZdvjvwc%3D&reserved=0>
>  table
> 1 via 2.2.2.1 gre1
>
>
>
>
>
> vpp# show adj
>
> [@0] ipv4-glean: loop0: mtu:9000 next:1 dead0806
>
> [@1] ipv4-glean: loop1: mtu:9000 next:2 dead00010806
>
> [@2] ipv4 via 0.0.0.0 memif0/0: mtu:65535 next:3
>
> [@3] ipv4 via 0.0.0.0 memif0/1: mtu:65535 next:4
>
> [@4] ipv4 via 0.0.0.0 memif0/2: mtu:65535 next:5
>
> [@5] ipv4 via 0.0.0.0 memif128/0: mtu:65535 next:6
>
> [@6] ipv4 via 0.0.0.0 memif128/1: mtu:65535 next:7
>
> [@7] ipv4 via 0.0.0.0 memif128/2: mtu:65535 next:8
>
> [@8] ipv4 via 0.0.0.0 memif192/0: mtu:65535 next:9
>
> [@9] ipv4 via 0.0.0.0 memif192/1: mtu:65535 next:10
>
> [@10] ipv4 via 0.0.0.0 memif192/2: mtu:65535 next:11
>
> [@11] ipv4-glean: VirtualFuncEthernet0/7/0.1556: mtu:9000 next:3
> fa163ec2b4f4810006140806
>
> [@12] ipv4 via 0.0.0.0 memif210/0: mtu:65535 next:12
>
> [@13] ipv4 via 0.0.0.0 memif210/1: mtu:65535 next:13
>
> [@14] ipv4 via 0.0.0.0 memif210/2: mtu:65535 next:14
>
> [@15] ipv4 via 2.2.2.1 gre1: mtu:9000 next:15
> 4500fe2fcd6c14146363141463d70800
>
>   stacked-on entry:81:
>
> [@3]: ipv4 via 20.20.99.215 VirtualFuncEthernet0/7/0.1556: mtu:1500
> next:16 fa163e4b6b42fa163ec2b4f4810006140800
>
> [@16] ipv4 via 20.20.99.215 VirtualFuncEthernet0/7/0.1556: mtu:1500
> next:16 fa163e4b6b42fa163ec2b4f4810006140800
>
> vpp#
>
> vpp#
>
> vpp# show gre tunnel
>
> [0] instance 1 src 20.20.99.99 dst 0.0.0.0 fib-idx 1 sw-if-idx 18 payload
> L3 multi-point
>
> vpp#
>
> vpp#
>
> vpp# show teib
>
> [0] gre1:2.2

Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.

2021-03-19 Thread Vijay Kumar
Hi Neale,

I tested with the fib_index correction and verified with gdb that the TEIB
entry is added into the right FIB table. The FIB route entries and the
adjacencies also looked fine, as before, so I thought we were all set.

But when I pinged from the peer side (the side that has the overlay address
7.7.7.7) to VPP, the ping packets were dropped on the VPP side with the
reason highlighted below (in red).

It looks like the ip4-local graph node finds the source to be invalid. Am I
missing something?

vpp#
vpp# show node counters
   Count               Node                 Reason
      15          null-node                  blackholed packets
       7          dpdk-input                 no error
    6008          an_ppe_wfectrl             wfectrl packets received
    6008          an_ppe_wfectrl             wfectrl replies sent
    4801          an_ppe_wfectrl             session stat request received
       1          an_ppe_wfectrl             service construct config request received
       1          an_ppe_wfectrl             service construct config request success
       1          an_ppe_wfectrl             service config request received
       1          an_ppe_wfectrl             service config request success
     997          an_ppe_wfectrl             dpi stats request received
     997          an_ppe_wfectrl             dpi stats request success
      41          an_ppe_wfectrl             nat stats request received
      41          an_ppe_wfectrl             nat stats request success
      12          arp-reply                  ARP replies sent
       1          gre4-input                 no error
      17          ip4-local                  ip4 source lookup miss
      12          ip4-icmp-input             unknown type
      21          ip4-icmp-input             echo replies sent
       2          ethernet-input             unknown vlan
vpp#
vpp#

On Sat, Mar 20, 2021 at 1:28 AM Vijay Kumar via lists.fd.io  wrote:

> Hi Neale,
>
> Thanks for sharing the patch.
>
> While I debugged today with gdb I found that fib_index value in the if
> block always used to be 0. The I traced the code and found that though
> fib_index_by_sw_if_index field is set in ip4_main, the call
> to fib_table_get_index_for_sw_if_index() always yields a 0 that is causing
> the teib to get added into the wrong FIB (in the problem case, it was
> always added to default fib which is fib 0)
>
> On similar lines to your patch, I simply commented out the call
> to fib_table_get_index_for_sw_if_index() and use the outer fib_index (which
> is nh_fib_index in your patch). This patch would ensure teib gets added
> properly.
>
> I am building the change. I will update after testing.
>
> Thank you for the support.
>
>
> Regards,
> Vijay Kumar N
>
>
>
>
>
>
> On Sat, Mar 20, 2021 at 1:14 AM Neale Ranns  wrote:
>
>> Hi Vijay,
>>
>>
>>
>> I was able to re-produce your issue. Please try with:
>>
>>   https://gerrit.fd.io/r/c/vpp/+/31695
>>
>>
>>
>> /neale
>>
>>
>>
>> *From: *Vijay Kumar Nagaraj 
>> *Date: *Friday, 19 March 2021 at 19:12
>> *To: *Neale Ranns , Vijay Kumar <
>> vjkumar2...@gmail.com>
>> *Cc: *y...@wangsu.com , vpp-dev@lists.fd.io <
>> vpp-dev@lists.fd.io>
>> *Subject: *RE: [vpp-dev] mgre interface get UNRESOLVED fib entry.
>>
>> Hi Neale,
>>
>>
>>
>> I have captured the output of “show fib entry 81” that is related to the
>> route of the overlay and GRE peer.
>>
>> In the output, the adjacency/FIB entry looks fine for me. I have followed
>> all the configurations exactly as you mentioned below but there is no luck. 
>> All
>> the packets are dropped as blackholed packets (highlighted in red)
>>
>> The only difference between us is the fib-idx is 1 in my case.  In your
>> example there is no outer fib-idx in create gre tunnel cmd or either create
>> teib cmd.
>>
>>
>>
>> Topology and config
>>
>> ==
>>
>> Strongswan VM (20.20.99.215, gre peer 2.2.2.1)
>>  <===>VPP cluster (20.20.99.99, gre peer 2.2.2.2)
>>
>>
>>
>> Configuration on VPP side
>>
>> 
>>
>> create gre tunnel src *20.20.99.99* outer-table-id 1 instance 1
>> multipoint
>> set interface ip addr gre1 2.2.2.2/32
>> <https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2F2.2.2.2%2F32&data=04%7C01%7Cvijaynag%40microsoft.com%7C017e3563375946808c3608d8e9794177%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637516055

Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.

2021-03-21 Thread Neale Ranns


From: Vijay Kumar 
Date: Friday, 19 March 2021 at 21:11
To: vjkumar2003 
Cc: Neale Ranns , Vijay Kumar Nagaraj 
, y...@wangsu.com , 
vpp-dev@lists.fd.io 
Subject: Re: [vpp-dev] mgre interface get UNRESOLVED fib entry.
Hi Neale,

I tested with the fib_index correction and verified with gdb that the TEIB
entry is added into the right FIB table. The FIB route entries and the
adjacencies also looked fine, as before, so I thought we were all set.

But when I pinged from the peer side (the side that has the overlay address
7.7.7.7) to VPP, the ping packets were dropped on the VPP side with the
reason highlighted below (in red).

It looks like the ip4-local graph node finds the source to be invalid. Am I
missing something?

Indeed it does. Let’s use a packet trace and the ‘sh ip fib’ output to determine
why.

/neale
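
A minimal way to capture such a trace (assuming the ping arrives through
dpdk-input, as the node counters quoted below suggest): clear any previous
trace, arm the trace, re-send the ping from the peer, then inspect it.

clear trace
trace add dpdk-input 10
show trace
show ip fib

The trace should show at which node (e.g. gre4-input or ip4-local) the packet
is dropped.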



vpp#
vpp# show node counters
   Count               Node                 Reason
      15          null-node                  blackholed packets
       7          dpdk-input                 no error
    6008          an_ppe_wfectrl             wfectrl packets received
    6008          an_ppe_wfectrl             wfectrl replies sent
    4801          an_ppe_wfectrl             session stat request received
       1          an_ppe_wfectrl             service construct config request received
       1          an_ppe_wfectrl             service construct config request success
       1          an_ppe_wfectrl             service config request received
       1          an_ppe_wfectrl             service config request success
     997          an_ppe_wfectrl             dpi stats request received
     997          an_ppe_wfectrl             dpi stats request success
      41          an_ppe_wfectrl             nat stats request received
      41          an_ppe_wfectrl             nat stats request success
      12          arp-reply                  ARP replies sent
       1          gre4-input                 no error
      17          ip4-local                  ip4 source lookup miss
      12          ip4-icmp-input             unknown type
      21          ip4-icmp-input             echo replies sent
       2          ethernet-input             unknown vlan
vpp#
vpp#

On Sat, Mar 20, 2021 at 1:28 AM Vijay Kumar via lists.fd.io <gmail@lists.fd.io> wrote:
Hi Neale,

Thanks for sharing the patch.

While debugging today with gdb I found that the fib_index value in the if block
was always 0. I then traced the code and found that although the
fib_index_by_sw_if_index field is set in ip4_main, the call to
fib_table_get_index_for_sw_if_index() always yields 0, which causes the teib
entry to be added to the wrong FIB (in the problem case it was always added
to the default FIB, fib 0).

Along the lines of your patch, I simply commented out the call to
fib_table_get_index_for_sw_if_index() and used the outer fib_index (which is
nh_fib_index in your patch). This change ensures the teib entry gets added properly.

I am building the change. I will update after testing.

Thank you for the support.


Regards,
Vijay Kumar N






On Sat, Mar 20, 2021 at 1:14 AM Neale Ranns <ne...@graphiant.com> wrote:
Hi Vijay,

I was able to reproduce your issue. Please try with:
  https://gerrit.fd.io/r/c/vpp/+/31695

/neale

From: Vijay Kumar Nagaraj <vijay...@microsoft.com>
Date: Friday, 19 March 2021 at 19:12
To: Neale Ranns <ne...@graphiant.com>, Vijay Kumar <vjkumar2...@gmail.com>
Cc: y...@wangsu.com, vpp-dev@lists.fd.io
Subject: RE: [vpp-dev] mgre interface get UNRESOLVED fib entry.
Hi Neale,

I have captured the output of “show fib entry 81”, which relates to the route for
the overlay and the GRE peer.
In that output the adjacency/FIB entry looks fine to me. I have followed all the
configuration exactly as you described below, but no luck: all the packets are
dropped as blackholed packets (highlighted in red).
The only difference between us is that the fib-idx is 1 in my case; in your
example there is no outer fib-idx in the create gre tunnel command or the
create teib command.

Topology and config
==
Strongswan VM (20.20.99.215, gre peer 2.2.2.1)   
<===>VPP cluster (20.20.99.99, gre peer 2.2.2.2)

Configuration on VPP side

create gre tunnel src 20.20.99.99 outer-table-id 1 instance 1 multipoint
set interface ip addr gre1 2.2.2.2/32