[ovs-discuss] Packet drop issue with Tap Poll Mode Driver

2021-05-21 Thread Nobuhiro Miki
Hi all,

I have experienced packet loss when using DPDK Virtual Devices
and have questions about troubleshooting it. The documentation [1] says
"Not all DPDK virtual PMD drivers have been tested and verified to work."
Does anyone know the current support status of the Tap Poll Mode Driver [2]?
Or is there a mistake in my procedure? Below are the steps to reproduce
this issue. For reference, I found a presentation [3] at ovscon2019
that discusses using tap interfaces with OVS-DPDK.

Environment
---

- Ubuntu 20.04 LTS on Virtual Machine (QEMU/KVM)
- Ubuntu 20.04 LTS on Bare Metal Machine

Diagram
---

tap1 (Tap Poll Mode Driver, source of ping6) --- br0 (datapath_type=netdev) --- 
tap2 (Tap Poll Mode Driver, destination of ping6)

Reproduction Steps
--

# https://ubuntu.com/server/docs/openvswitch-dpdk
sudo apt install -y openvswitch-switch-dpdk
sudo update-alternatives --set ovs-vswitchd \
  /usr/lib/openvswitch-switch-dpdk/ovs-vswitchd-dpdk
sudo ovs-vsctl set Open_vSwitch . "other_config:dpdk-init=true"
sudo ovs-vsctl set Open_vSwitch . "other_config:dpdk-lcore-mask=0x1"
sudo ovs-vsctl set Open_vSwitch . "other_config:dpdk-alloc-mem=2048"
sudo service openvswitch-switch restart

sudo /usr/share/openvswitch/scripts/ovs-ctl version
# ovsdb-server (Open vSwitch) 2.13.3
# ovs-vswitchd (Open vSwitch) 2.13.3
# DPDK 19.11.7

sudo ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
sudo ovs-vsctl add-port br0 myeth0 -- set Interface myeth0 type=dpdk \
  options:dpdk-devargs="net_tap0,iface=tap1"
sudo ovs-vsctl add-port br0 myeth1 -- set Interface myeth1 type=dpdk \
  options:dpdk-devargs="net_tap1,iface=tap2"

sudo ovs-ofctl del-flows br0
sudo ovs-ofctl add-flow br0 priority=10,in_port=myeth1,actions=output:myeth0
sudo ovs-ofctl add-flow br0 priority=10,in_port=myeth0,actions=output:myeth1

sudo ip netns add ns1
sudo ip netns add ns2

sudo ip link set tap1 netns ns1
sudo ip link set tap2 netns ns2

sudo ip netns exec ns1 ip link set tap1 up
sudo ip netns exec ns2 ip link set tap2 up

sudo ip netns exec ns1 ip link set lo up
sudo ip netns exec ns2 ip link set lo up

# ping6 from tap1 to tap2
# Only about one in four pings succeeds; in some cases, there is no response at all.
sudo ip netns exec ns1 ping6 $ipv6_address_of_tap2 -I tap1
# 64 bytes from fe80::8433:52ff:fe29:f20a%tap1: icmp_seq=1 ttl=64 time=0.223 ms
# 64 bytes from fe80::8433:52ff:fe29:f20a%tap1: icmp_seq=5 ttl=64 time=0.321 ms
# 64 bytes from fe80::8433:52ff:fe29:f20a%tap1: icmp_seq=9 ttl=64 time=0.291 ms
# 64 bytes from fe80::8433:52ff:fe29:f20a%tap1: icmp_seq=13 ttl=64 time=0.281 ms
# 64 bytes from fe80::8433:52ff:fe29:f20a%tap1: icmp_seq=17 ttl=64 time=0.284 ms

sudo ovs-vsctl get Open_vSwitch . dpdk_initialized
# true

sudo ovs-appctl dpif-netdev/pmd-rxq-show
# pmd thread numa_id 0 core_id 5:
#   isolated : false
#   port: myeth0  queue-id:  0 (enabled)   pmd usage:  0 %
#   port: myeth1  queue-id:  0 (enabled)   pmd usage:  0 %
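In case it helps others reproduce the investigation, here are a few additional diagnostics I would run (a sketch assuming a live OVS-DPDK switch with the port names myeth0/myeth1 from above) to see where the drops are counted:

```shell
# Per-interface rx/tx and drop/error counters kept by OVS
sudo ovs-vsctl get Interface myeth0 statistics
sudo ovs-vsctl get Interface myeth1 statistics

# Datapath flows actually installed, with per-flow hit counts
sudo ovs-appctl dpctl/dump-flows

# PMD-level packet counters (packets received, hits/misses, cycles)
sudo ovs-appctl dpif-netdev/pmd-stats-show
```

If the Interface statistics show rx counters increasing while the datapath flow hit counts do not, the packets are being lost before flow lookup.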

[1]: https://docs.openvswitch.org/en/latest/topics/dpdk/vdev/
[2]: https://doc.dpdk.org/guides/nics/tap.html
[3]: 
https://www.openvswitch.org/support/ovscon2019/day1/1101-Utilizing%20DPDK%20Virtual%20Devices%20in%20OVS.pdf

Best Regards,
Nobuhiro Miki
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] packet drop

2020-08-05 Thread Tony Liu


The drop is caused by a flow change.

When the packet is dropped:

recirc_id(0),tunnel(tun_id=0x19aca,src=10.6.30.92,dst=10.6.30.22,geneve({class=0x102,type=0x80,len=4,0x20003/0x7fff}),flags(-df+csum+key)),in_port(3),eth(src=fa:16:3e:df:1e:85,dst=00:00:00:00:00:00/01:00:00:00:00:00),eth_type(0x0800),ipv4(proto=1,frag=no),icmp(type=8/0xf8),
 packets:14, bytes:1372, used:0.846s, actions:drop
recirc_id(0),in_port(12),eth(src=fa:16:3e:7d:bb:85,dst=fa:16:3e:df:1e:85),eth_type(0x0800),ipv4(src=192.168.236.152/255.255.255.252,dst=10.6.40.9,proto=1,tos=0/0x3,ttl=64,frag=no),icmp(type=0),
 packets:6, bytes:588, used:8.983s, actions:drop


When the packet goes through:

recirc_id(0),tunnel(tun_id=0x19aca,src=10.6.30.92,dst=10.6.30.22,geneve({class=0x102,type=0x80,len=4,0x20003/0x7fff}),flags(-df+csum+key)),in_port(3),eth(src=fa:16:3e:df:1e:85,dst=00:00:00:00:00:00/01:00:00:00:00:00),eth_type(0x0800),ipv4(proto=1,frag=no),icmp(type=8/0xf8),
 packets:3, bytes:294, used:0.104s, actions:12
recirc_id(0),in_port(12),eth(src=fa:16:3e:7d:bb:85,dst=fa:16:3e:df:1e:85),eth_type(0x0800),ipv4(src=192.168.236.152/255.255.255.252,dst=10.6.40.9,proto=1,tos=0/0x3,ttl=64,frag=no),icmp(type=0),
 packets:3, bytes:294, used:0.103s, 
actions:ct_clear,set(tunnel(tun_id=0x1a8ee,dst=10.6.30.92,ttl=64,tp_dst=6081,geneve({class=0x102,type=0x80,len=4,0x1000b}),flags(df|csum|key))),set(eth(src=fa:16:3e:75:b7:e5,dst=52:54:00:0c:ef:b9)),set(ipv4(ttl=63)),3
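One hedged way to see which OpenFlow rule translates into the drop action is to replay the dropped flow through the pipeline (a sketch: the flow fields are copied from the dump above, and `in_port(12)` in the datapath dump is a datapath port number, so the OpenFlow in_port for the trace may need to be looked up first with `ovs-dpctl show`):

```shell
# Replay the dropped packet's headers through the OpenFlow tables on br-int
# and print which rule produces the final (drop) action, without sending traffic.
# NOTE: in_port here is illustrative; map the datapath port 12 to its
# OpenFlow port number before running this.
sudo ovs-appctl ofproto/trace br-int \
  'in_port=12,icmp,dl_src=fa:16:3e:7d:bb:85,dl_dst=fa:16:3e:df:1e:85,nw_src=192.168.236.152,nw_dst=10.6.40.9'
```

The trace output lists every table and rule hit, which should show whether ovn-controller installed the rule that drops the return traffic.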


Is that flow programmed by ovn-controller via ovs-vswitchd?


Thanks!

Tony

> -Original Message-
> From: discuss  On Behalf Of Tony
> Liu
> Sent: Wednesday, August 5, 2020 2:48 PM
> To: ovs-discuss@openvswitch.org; ovs-...@openvswitch.org
> Subject: [ovs-discuss] packet drop
> 
> Hi,
> 
> I am running ping from an external host to a VM via an OVN gateway.
> On the compute node, ICMP request packets are consistently coming into
> interface "ovn-gatewa-1", but about 10 out of every 25 packets are lost on
> the tap interface. It is as if the switch pauses for 10s after every 15s.
> 
> Has anyone experienced such an issue?
> Any advice on how to look into it?
> 
> 
> 21fed09f-909e-4efc-b117-f5d5fcb636c9
> Bridge br-int
> fail_mode: secure
> datapath_type: system
> Port "ovn-gatewa-0"
> Interface "ovn-gatewa-0"
> type: geneve
> options: {csum="true", key=flow, remote_ip="10.6.30.91"}
> bfd_status: {diagnostic="No Diagnostic", flap_count="1",
> forwarding="true", remote_diagnostic="No Diagnostic", remote_state=up,
> state=up}
> Port "tap2588bb4e-35"
> Interface "tap2588bb4e-35"
> Port "ovn-gatewa-1"
> Interface "ovn-gatewa-1"
> type: geneve
> options: {csum="true", key=flow, remote_ip="10.6.30.92"}
> bfd_status: {diagnostic="No Diagnostic", flap_count="1",
> forwarding="true", remote_diagnostic="No Diagnostic", remote_state=up,
> state=up}
> Port "tap37f6b2d7-cc"
> Interface "tap37f6b2d7-cc"
> Port "tap2c4b3b0f-8b"
> Interface "tap2c4b3b0f-8b"
> Port "tap23245491-a4"
> Interface "tap23245491-a4"
> Port "tap51660269-2c"
> Interface "tap51660269-2c"
> Port "tap276cd1ef-e1"
> Interface "tap276cd1ef-e1"
> Port "tap138526d3-b3"
> Interface "tap138526d3-b3"
> Port "tapd1ae48a1-2d"
> Interface "tapd1ae48a1-2d"
> Port br-int
> Interface br-int
> type: internal
> Port "tapdd08f476-94"
> Interface "tapdd08f476-94"
> 
> 
> 
> Thanks!
> 
> Tony
> 


[ovs-discuss] packet drop

2020-08-05 Thread Tony Liu
Hi,

I am running ping from an external host to a VM via an OVN gateway.
On the compute node, ICMP request packets are consistently coming
into interface "ovn-gatewa-1", but about 10 out of every 25 packets
are lost on the tap interface. It is as if the switch pauses for 10s
after every 15s.

Has anyone experienced such an issue?
Any advice on how to look into it?


21fed09f-909e-4efc-b117-f5d5fcb636c9
Bridge br-int
fail_mode: secure
datapath_type: system
Port "ovn-gatewa-0"
Interface "ovn-gatewa-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.6.30.91"}
bfd_status: {diagnostic="No Diagnostic", flap_count="1", 
forwarding="true", remote_diagnostic="No Diagnostic", remote_state=up, state=up}
Port "tap2588bb4e-35"
Interface "tap2588bb4e-35"
Port "ovn-gatewa-1"
Interface "ovn-gatewa-1"
type: geneve
options: {csum="true", key=flow, remote_ip="10.6.30.92"}
bfd_status: {diagnostic="No Diagnostic", flap_count="1", 
forwarding="true", remote_diagnostic="No Diagnostic", remote_state=up, state=up}
Port "tap37f6b2d7-cc"
Interface "tap37f6b2d7-cc"
Port "tap2c4b3b0f-8b"
Interface "tap2c4b3b0f-8b"
Port "tap23245491-a4"
Interface "tap23245491-a4"
Port "tap51660269-2c"
Interface "tap51660269-2c"
Port "tap276cd1ef-e1"
Interface "tap276cd1ef-e1"
Port "tap138526d3-b3"
Interface "tap138526d3-b3"
Port "tapd1ae48a1-2d"
Interface "tapd1ae48a1-2d"
Port br-int
Interface br-int
type: internal
Port "tapdd08f476-94"
Interface "tapdd08f476-94"



Thanks!

Tony



Re: [ovs-discuss] Packet drop after openvswitch bond interface toggles

2019-04-03 Thread Ian Stokes

On 4/3/2019 12:28 PM, Inakoti, Satish (Nokia - HU/Budapest) wrote:

Hello Ian,
I already tried patching these two fixes in one of the environments some time
ago and they did not help.


Ok, thanks for trying them.


I am thinking along similar lines: the LACP control channel of OVS has to
wait until the carrier is fully capable of handling traffic before sending
LACP control PDUs.
Could the fixes below be missing a trick for the other types of netdevs
(e.g. DPDK)?


Possibly. Ideally the netdev_dpdk behavior would be similar to the other
netdevs, but as the underlying hardware for a netdev_dpdk device can also
differ, I'm wondering whether there is something specific to the ixgbe
PMD used by the 82599ES card that needs to be addressed here, given that
the patches below did not resolve the issue.


I'll need a little time to reproduce on my own system to investigate 
further and I'll follow up then.


Ian



-Satish Inakoti

-Original Message-
From: Ian Stokes 
Sent: Wednesday, April 03, 2019 1:09 PM
To: Inakoti, Satish (Nokia - HU/Budapest) ; 
b...@openvswitch.org
Subject: Re: [ovs-discuss] Packet drop after openvswitch bond interface toggles

On 4/2/2019 8:11 AM, Inakoti, Satish (Nokia - HU/Budapest) wrote:

Hi,
*Problem statement:*
If an OVS bond is configured in LACP active-active mode (SLB balancing)
and one of the links goes down and comes back up again, we observe
packet drops for a few seconds.
*Environment:*
Openvswitch version - ovs-vsctl (Open vSwitch) 2.9.3
   DB Schema 7.15.1
DPDK version - dpdk-17.11.4
Physical nics: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network
Connection
Bond mode - LACP active-active, SLB balanced.
*Steps to reproduce:*

  1. If one of the links goes down (make it down from the ToR switch), the
     other takes over and traffic flows smoothly as expected.
  2. When this link becomes active again, the VM connected to this
     bond interface observes packet (UDP) drops for a few seconds.

*Expected behavior:*
The traffic should flow without any drop, even after the interface comes up.
BR,


Hi,

this sounds similar to the issue described in

https://mail.openvswitch.org/pipermail/ovs-dev/2019-March/356956.html

There are 2 patches under review to help address the issue (although as
you are using an 82599ES I would think patch 1 below should resolve the
issue for you, the second patch is aimed at i40e devices).

https://patchwork.ozlabs.org/patch/1051724/
https://patchwork.ozlabs.org/patch/1051725/

Could you check if they resolve the issue you are seeing?

Regards
Ian






Re: [ovs-discuss] Packet drop after openvswitch bond interface toggles

2019-04-03 Thread Inakoti, Satish (Nokia - HU/Budapest)
Hello Ian,
I already tried patching these two fixes in one of the environments some time
ago and they did not help.

I am thinking along similar lines: the LACP control channel of OVS has to
wait until the carrier is fully capable of handling traffic before sending
LACP control PDUs.
Could the fixes below be missing a trick for the other types of netdevs
(e.g. DPDK)?


-Satish Inakoti

-Original Message-
From: Ian Stokes  
Sent: Wednesday, April 03, 2019 1:09 PM
To: Inakoti, Satish (Nokia - HU/Budapest) ; 
b...@openvswitch.org
Subject: Re: [ovs-discuss] Packet drop after openvswitch bond interface toggles

On 4/2/2019 8:11 AM, Inakoti, Satish (Nokia - HU/Budapest) wrote:
> Hi,
> *Problem statement:*
> If an OVS bond is configured in LACP active-active mode (SLB balancing)
> and one of the links goes down and comes back up again, we observe
> packet drops for a few seconds.
> *Environment:*
> Openvswitch version - ovs-vsctl (Open vSwitch) 2.9.3
>   DB Schema 7.15.1
> DPDK version - dpdk-17.11.4
> Physical nics: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network 
> Connection
> Bond mode - LACP active-active, SLB balanced.
> *Steps to reproduce:*
> 
>  1. If one of the links goes down (make it down from the ToR switch), the
> other takes over and traffic flows smoothly as expected.
>  2. When this link becomes active again, the VM connected to this
> bond interface observes packet (UDP) drops for a few seconds.
> 
> *Expected behavior:*
> The traffic should flow without any drop, even after the interface comes up.
> BR,

Hi,

this sounds similar to the issue described in

https://mail.openvswitch.org/pipermail/ovs-dev/2019-March/356956.html

There are 2 patches under review to help address the issue (although as 
you are using an 82599ES I would think patch 1 below should resolve the 
issue for you, the second patch is aimed at i40e devices).

https://patchwork.ozlabs.org/patch/1051724/
https://patchwork.ozlabs.org/patch/1051725/

Could you check if they resolve the issue you are seeing?

Regards
Ian




Re: [ovs-discuss] Packet drop after openvswitch bond interface toggles

2019-04-03 Thread Ian Stokes

On 4/2/2019 8:11 AM, Inakoti, Satish (Nokia - HU/Budapest) wrote:

Hi,
*Problem statement:*
If an OVS bond is configured in LACP active-active mode (SLB balancing)
and one of the links goes down and comes back up again, we observe
packet drops for a few seconds.

*Environment:*
Openvswitch version - ovs-vsctl (Open vSwitch) 2.9.3
  DB Schema 7.15.1
DPDK version - dpdk-17.11.4
Physical nics: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network 
Connection

Bond mode – LACP active-active, SLB balanced.
*Steps to reproduce:*

 1. If one of the links goes down (make it down from the ToR switch), the
other takes over and traffic flows smoothly as expected.
 2. When this link becomes active again, the VM connected to this
bond interface observes packet (UDP) drops for a few seconds.

*Expected behavior:*
The traffic should flow without any drop, even after the interface comes up.
BR,


Hi,

this sounds similar to the issue described in

https://mail.openvswitch.org/pipermail/ovs-dev/2019-March/356956.html

There are 2 patches under review to help address the issue (although as 
you are using an 82599ES I would think patch 1 below should resolve the 
issue for you, the second patch is aimed at i40e devices).


https://patchwork.ozlabs.org/patch/1051724/
https://patchwork.ozlabs.org/patch/1051725/

Could you check if they resolve the issue you are seeing?

Regards
Ian




[ovs-discuss] Packet drop after openvswitch bond interface toggles

2019-04-02 Thread Inakoti, Satish (Nokia - HU/Budapest)
Hi,

Problem statement:
If an OVS bond is configured in LACP active-active mode (SLB balancing) and
one of the links goes down and comes back up again, we observe packet drops
for a few seconds.

Environment:
Openvswitch version - ovs-vsctl (Open vSwitch) 2.9.3
 DB Schema 7.15.1
DPDK version - dpdk-17.11.4
Physical nics: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection
Bond mode - LACP active-active, SLB balanced.

Steps to reproduce:
1.  If one of the links goes down (make it down from the ToR switch), the
other takes over and traffic flows smoothly as expected.
2.  When this link becomes active again, the VM connected to this bond
interface observes packet (UDP) drops for a few seconds.

Expected behavior:
The traffic should flow without any drop, even after the interface comes up.
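When reproducing this, a few commands (a sketch assuming a running ovs-vswitchd; without arguments the bond/lacp commands list all bonds, and "dpdk0" below is a placeholder member name) show the bond and LACP state transitions while the ToR port is toggled:

```shell
# Bond member status, active member, and SLB hash rebalancing info
sudo ovs-appctl bond/show

# LACP negotiation state per member (actor/partner state machines)
sudo ovs-appctl lacp/show

# Link state of a member as seen by OVS (substitute your own member name)
sudo ovs-vsctl list Interface dpdk0 | grep -E 'link_state|lacp'
</imports>
```

Watching these during the link flap should show whether OVS re-enables the member (and starts hashing traffic onto it) before the LACP partner state has converged, which would explain the few seconds of UDP loss.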


BR,
Satish Inakoti



Re: [ovs-discuss] Packet drop when output to multiple VXLAN tunnel

2017-04-13 Thread f 62
On Thu, Apr 13, 2017 at 10:55 PM, Joe Stringer  wrote:

> On 13 April 2017 at 08:58, f 62  wrote:
> > Hi ,
> >
> > OVS is dropping packets when a packet is output to multiple VXLAN
> > tunnels. I see the following error:
> >
> > 2017-04-13T15:49:30.112Z|03577|dpif(handler15)|WARN|system@ovs-system:
> > failed to put[create] (Invalid argument)
> > ufid:7e404481-148c-4cee-9495-237bdc62383d
> > recirc_id(0),dp_hash(0/0),skb_priority(0/0),in_port(10),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),eth(src=fa:16:3e:c0:5a:48,dst=fa:16:3e:4c:54:07),eth_type(0x0800),ipv4(src=51.0.0.8,dst=51.0.0.5,proto=6,tos=0/0xfc,ttl=64,frag=no),tcp(src=41564,dst=80),tcp_flags(0/0),
> > actions:set(eth(src=fa:16:3e:c0:5a:48,dst=fa:16:3e:db:81:44)),set(tunnel(tun_id=0x2,src=192.168.2.91,dst=192.168.2.177,ttl=64,flags(df|key))),push_mpls(label=511,tc=0,ttl=255,bos=1,eth_type=0x8847),8,set(tunnel(tun_id=0x2,src=192.168.2.91,dst=192.168.2.141,ttl=64,flags(df|key))),8,pop_mpls(eth_type=0x800),recirc(0x7)
> > 2017-04-13T15:49:30.112Z|03578|dpif(handler15)|WARN|system@ovs-system:
> > execute
> > set(eth(src=fa:16:3e:c0:5a:48,dst=fa:16:3e:db:81:44)),set(tunnel(tun_id=0x2,src=192.168.2.91,dst=192.168.2.177,ttl=64,flags(df|key))),push_mpls(label=511,tc=0,ttl=255,bos=1,eth_type=0x8847),8,set(tunnel(tun_id=0x2,src=192.168.2.91,dst=192.168.2.141,ttl=64,flags(df|key))),8,pop_mpls(eth_type=0x800),recirc(0x7)
> > failed (Invalid argument) on packet
>
> Hmm, once the push_mpls() happens, you should be able to output fine
> but I'm not sure if the subsequent set(tunnel(...)) would know how to
> change the tunnel attributes since it's now an MPLS packet. Maybe the
> userspace translation code needs to be taught to be a bit smarter to
> generate better actions for this case, eg something more like:
>
> set(eth(...)),set(tunnel(...)),push_mpls(...),output(...),
> pop_mpls(...),set(tunnel(...)),push_mpls(...),output(...),...
>


Re: [ovs-discuss] Packet drop when output to multiple VXLAN tunnel

2017-04-13 Thread Joe Stringer
On 13 April 2017 at 08:58, f 62  wrote:
> Hi ,
>
> OVS is dropping packets when a packet is output to multiple VXLAN tunnels. I
> see the following error:
>
> 2017-04-13T15:49:30.112Z|03577|dpif(handler15)|WARN|system@ovs-system:
> failed to put[create] (Invalid argument)
> ufid:7e404481-148c-4cee-9495-237bdc62383d
> recirc_id(0),dp_hash(0/0),skb_priority(0/0),in_port(10),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),eth(src=fa:16:3e:c0:5a:48,dst=fa:16:3e:4c:54:07),eth_type(0x0800),ipv4(src=51.0.0.8,dst=51.0.0.5,proto=6,tos=0/0xfc,ttl=64,frag=no),tcp(src=41564,dst=80),tcp_flags(0/0),
> actions:set(eth(src=fa:16:3e:c0:5a:48,dst=fa:16:3e:db:81:44)),set(tunnel(tun_id=0x2,src=192.168.2.91,dst=192.168.2.177,ttl=64,flags(df|key))),push_mpls(label=511,tc=0,ttl=255,bos=1,eth_type=0x8847),8,set(tunnel(tun_id=0x2,src=192.168.2.91,dst=192.168.2.141,ttl=64,flags(df|key))),8,pop_mpls(eth_type=0x800),recirc(0x7)
> 2017-04-13T15:49:30.112Z|03578|dpif(handler15)|WARN|system@ovs-system:
> execute
> set(eth(src=fa:16:3e:c0:5a:48,dst=fa:16:3e:db:81:44)),set(tunnel(tun_id=0x2,src=192.168.2.91,dst=192.168.2.177,ttl=64,flags(df|key))),push_mpls(label=511,tc=0,ttl=255,bos=1,eth_type=0x8847),8,set(tunnel(tun_id=0x2,src=192.168.2.91,dst=192.168.2.141,ttl=64,flags(df|key))),8,pop_mpls(eth_type=0x800),recirc(0x7)
> failed (Invalid argument) on packet

Hmm, once the push_mpls() happens, you should be able to output fine
but I'm not sure if the subsequent set(tunnel(...)) would know how to
change the tunnel attributes since it's now an MPLS packet. Maybe the
userspace translation code needs to be taught to be a bit smarter to
generate better actions for this case, eg something more like:

set(eth(...)),set(tunnel(...)),push_mpls(...),output(...),pop_mpls(...),set(tunnel(...)),push_mpls(...),output(...),...


[ovs-discuss] Packet drop when output to multiple VXLAN tunnel

2017-04-13 Thread f 62
Hi ,

OVS is dropping packets when a packet is output to multiple VXLAN tunnels.
I see the following error:

2017-04-13T15:49:30.112Z|03577|dpif(handler15)|WARN|system@ovs-system:
failed to put[create] (Invalid argument)
ufid:7e404481-148c-4cee-9495-237bdc62383d
recirc_id(0),dp_hash(0/0),skb_priority(0/0),in_port(10),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),eth(src=fa:16:3e:c0:5a:48,dst=fa:16:3e:4c:54:07),eth_type(0x0800),ipv4(src=51.0.0.8,dst=51.0.0.5,proto=6,tos=0/0xfc,ttl=64,frag=no),tcp(src=41564,dst=80),tcp_flags(0/0),
actions:set(eth(src=fa:16:3e:c0:5a:48,dst=fa:16:3e:db:81:44)),set(tunnel(tun_id=0x2,src=192.168.2.91,dst=192.168.2.177,ttl=64,flags(df|key))),push_mpls(label=511,tc=0,ttl=255,bos=1,eth_type=0x8847),8,set(tunnel(tun_id=0x2,src=192.168.2.91,dst=192.168.2.141,ttl=64,flags(df|key))),8,pop_mpls(eth_type=0x800),recirc(0x7)
2017-04-13T15:49:30.112Z|03578|dpif(handler15)|WARN|system@ovs-system:
execute
set(eth(src=fa:16:3e:c0:5a:48,dst=fa:16:3e:db:81:44)),set(tunnel(tun_id=0x2,src=192.168.2.91,dst=192.168.2.177,ttl=64,flags(df|key))),push_mpls(label=511,tc=0,ttl=255,bos=1,eth_type=0x8847),8,set(tunnel(tun_id=0x2,src=192.168.2.91,dst=192.168.2.141,ttl=64,flags(df|key))),8,pop_mpls(eth_type=0x800),recirc(0x7)
failed (Invalid argument) on packet
tcp,vlan_tci=0x,dl_src=fa:16:3e:c0:5a:48,dl_dst=fa:16:3e:4c:54:07,nw_src=51.0.0.8,nw_dst=51.0.0.5,nw_tos=0,nw_ecn=0,nw_ttl=64,tp_src=41564,tp_dst=80,tcp_flags=syn
tcp_csum:e1d6
 mtu 0


These are the two flows that classify the packet:

cookie=0x0, duration=91.547s, table=5, n_packets=4, n_bytes=296,
idle_age=63, priority=0,ip,dl_dst=fa:16:3e:db:81:44
actions=push_mpls:0x8847,load:0x1ff->OXM_OF_MPLS_LABEL[],set_mpls_ttl(255),mod_vlan_vid:3,output:2,resubmit(8,7)

cookie=0x0, duration=72.231s, table=7, n_packets=4, n_bytes=296,
idle_age=60, in_port=8
actions=strip_vlan,pop_mpls:0x0800,move:NXM_NX_REG0[]->NXM_OF_ETH_DST[0..31],move:NXM_NX_REG1[0..9]->NXM_OF_ETH_DST[32..41],push_mpls:0x8847,load:0x1fe->OXM_OF_MPLS_LABEL[],set_mpls_ttl(254),mod_vlan_vid:3,output:2


However, when the packet is output to a single tunnel, traffic is fine.
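A way to confirm which datapath actions the userspace translation generates for this packet, before the kernel rejects them, is to trace the failing flow (a sketch: the bridge name br0 and the OpenFlow in_port are placeholders; the header fields are copied from the "failed ... on packet" log line above):

```shell
# Trace the failing TCP SYN through the OpenFlow tables and print the
# resulting datapath action list, without sending any traffic.
# NOTE: br0 and in_port=10 are illustrative; use your bridge name and the
# OpenFlow port corresponding to datapath port 10.
sudo ovs-appctl ofproto/trace br0 \
  'in_port=10,tcp,dl_src=fa:16:3e:c0:5a:48,dl_dst=fa:16:3e:4c:54:07,nw_src=51.0.0.8,nw_dst=51.0.0.5,tp_src=41564,tp_dst=80'
```

The "Datapath actions:" line at the end of the trace should reproduce the action list from the log, which makes it easier to see that set(tunnel(...)) is being applied to an already-MPLS packet in the multi-tunnel case.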


Regards,
vikash
irc -vks1