Re: [vpp-dev] MTU
up...
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18189): https://lists.fd.io/g/vpp-dev/message/18189
Mute This Topic: https://lists.fd.io/mt/10641628/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-
[vpp-dev] using fib id after ipsec encap
I want to create an IPsec tunnel whose tunnel source interface is associated with the default router, VR0, while the IPsec port and the LAN interface are associated with VR4. For this purpose I am trying to use the "tx_table_id - the FIB id used after packet encap" argument of the vl_api_ipsec_tunnel_if_add_del_t message of the VPP API (version 19.01.2), so that the encapsulated IPsec traffic is forwarded via the tunnel source interface. However, this argument does not seem to be used, and the traffic is forwarded according to the FIB of the inbound interface.

For example:
  tunnel source interface vpp0, IP 20.20.20.1 - VR0
  IPsec port vppIpsec1, IP 60.60.60.1 - VR4
  LAN interface vpp4, IP 40.40.40.1 - VR4
  tx_table_id = 0

With this configuration, traffic is not forwarded from interface vpp4 to interface vpp0. If vpp0 is also associated with VR4, the traffic is forwarded correctly. What is wrong?
[vpp-dev] auto-remove fib entries
Hi All,

Is there an option to auto-remove the FIB entries related to an interface after performing an administrative shutdown of that interface?

Example of the issue: after configuring a static route in the Linux kernel and then performing administrative down & up on the related interface, the route was removed from the Linux FIB but still exists in the VPP FIB.

linux# ifconfig vpp0 10.10.10.1/24 up
linux# ip route add 11.11.11.0/24 via 10.10.10.2
linux# route -n
10.10.10.0   0.0.0.0      255.255.255.0   U    0 0 0 vpp0
11.11.11.0   10.10.10.2   255.255.255.0   UG   0 0 0 vpp0

vpp# show ip fib
10.10.10.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:60 buckets:1 uRPF:64 to:[0:0]]
    [0] [@0]: dpo-drop ip4
10.10.10.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:59 buckets:1 uRPF:81 to:[0:0]]
    [0] [@4]: ipv4-glean: GigabitEthernet0/14/0: mtu:1790 0008a2095a9c0806
10.10.10.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:62 buckets:1 uRPF:83 to:[0:0]]
    [0] [@2]: dpo-receive: 10.10.10.1 on GigabitEthernet0/14/0
10.10.10.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:61 buckets:1 uRPF:70 to:[0:0]]
    [0] [@0]: dpo-drop ip4
11.11.11.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:69 buckets:1 uRPF:66 to:[0:0]]
    [0] [@3]: arp-ipv4: via 10.10.10.2 GigabitEthernet0/14/0

linux# ifconfig vpp0 down
linux# ifconfig vpp0 up
linux# route -n
10.10.10.0   0.0.0.0      255.255.255.0   U    0 0 0 vpp0

vpp# show ip fib
10.10.10.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:60 buckets:1 uRPF:67 to:[0:0]]
    [0] [@0]: dpo-drop ip4
10.10.10.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:59 buckets:1 uRPF:64 to:[0:0]]
    [0] [@4]: ipv4-glean: GigabitEthernet0/14/0: mtu:1790 0008a2095a9c0806
10.10.10.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:62 buckets:1 uRPF:83 to:[0:0]]
    [0] [@2]: dpo-receive: 10.10.10.1 on GigabitEthernet0/14/0
10.10.10.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:61 buckets:1 uRPF:120 to:[0:0]]
    [0] [@0]: dpo-drop ip4
11.11.11.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:69 buckets:1 uRPF:65 to:[0:0]]
    [0] [@3]: arp-ipv4: via 10.10.10.2 GigabitEthernet0/14/0

Note that 11.11.11.0/24 has been removed from the Linux FIB but is still present in the VPP FIB.
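Pending an automatic mechanism, one possible workaround (a sketch only, assuming the stale route is being withdrawn by hand; how routes are mirrored into VPP in your setup may call for a different approach) is to delete the leftover entry explicitly with the standard VPP CLI:

```shell
vpp# ip route del 11.11.11.0/24 via 10.10.10.2
```

This only removes the one stale entry shown above; it does not address the general question of tying FIB cleanup to interface admin-down events.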
[vpp-dev] VPP plugin in L2 related path
Hi All,

I see that there are feature arcs for L3 (ip4/ip6) and one for ethernet-input/device-input, but no arc for the L2-related nodes. Is there a feature arc for the L2 nodes (or is one planned)?

I am trying to build a VPP plugin and attach it before the l2-fwd node, right after the l2-input-vtr node. However, it looks like l2-fwd is not a feature, and therefore I can't attach before it (when I try to use .runs_before = VNET_FEATURES ("l2-fwd")). Attaching before "ethernet-input" is possible, but that is too early, as I need to run after the tag rewrite, which is done in the "l2-input-vtr" node. Is there a way to overcome this issue?

Thanks,
Liran N.
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
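For readers less familiar with feature arcs, this is the kind of registration being attempted (a non-compilable fragment, not a buildable plugin; "my-node" and my_feat are hypothetical names, and "device-input" is shown only as an example of an arc that does exist):

```c
/* Registering on an existing arc such as device-input works, because
 * the arc's nodes are features that can be ordered relative to each other: */
VNET_FEATURE_INIT (my_feat, static) = {
  .arc_name = "device-input",                      /* an existing arc */
  .node_name = "my-node",                          /* hypothetical plugin node */
  .runs_before = VNET_FEATURES ("ethernet-input"),
};
```

The same registration with .runs_before = VNET_FEATURES ("l2-fwd") has nothing to attach to, since l2-fwd is a plain graph node in the L2 path rather than a feature on any arc; hence the question of whether an L2 feature arc exists or is planned.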