We are trying to set up user-space tunneling with the DPDK datapath in OVS 2.5, 
using internal ports on the external bridge (br-prv) other than the LOCAL 
bridge port as local tunnel end-points. The ultimate goal is to configure the 
different internal ports on br-prv as VLAN access ports with different tags, to 
separate the tunnel traffic into a number of VLANs on the physical underlay.
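For context, the setup was created roughly as follows (a sketch reconstructed
from the state shown below, not a verbatim transcript; per-tep VLAN tags are
omitted here):

```shell
# Underlay bridge with an internal port as tunnel end-point:
ovs-vsctl add-br br-prv
ovs-vsctl add-port br-prv tep1 -- set interface tep1 type=internal
ip addr add 10.1.3.8/24 dev tep1
ip link set tep1 up

# Route to the remote end-point network via the underlay gateway,
# which OVS uses to resolve the local tunnel end-point:
ip route add 10.1.2.0/24 via 10.1.3.1 dev tep1

# VXLAN port on the integration bridge:
ovs-vsctl add-port br_int vxlan0 -- set interface vxlan0 type=vxlan \
    options:remote_ip=10.1.2.9 options:key=flow
```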

However, we find that even though OVS accepts this configuration and is able 
to resolve the remote tunnel end-point, the data plane does not work. 
Packets are dropped on egress into the tunnel port. In the other direction, 
tunneled packets received on br-prv are forwarded to port tep1 still carrying 
their VXLAN encapsulation, i.e. they are never decapsulated.

The data plane only works as expected if the LOCAL port of the external bridge 
is configured as local tunnel end-point.

Is this limitation intended? Or is it simply a shortcut to simplify the 
implementation of user-space tunneling?

Here is our configuration in detail:

# ovs-vsctl show
176cb2b1-0ebe-4490-9210-f60434f09cc9
    Bridge br_int
        Port br_int
            Interface br_int
                type: internal
        Port "vxlan0"
            Interface "vxlan0"
                type: vxlan
                options: {key=flow, remote_ip="10.1.2.9"}
    Bridge br-prv
        Port br-prv
            tag: 100
            Interface br-prv
                type: internal
        Port bond-dpdk
            Interface "dpdk0"
                type: dpdk
            Interface "dpdk1"
                type: dpdk
        Port "tep1"
            Interface "tep1"
                type: internal

# ifconfig -a
br_int    Link encap:Ethernet  HWaddr 66:e2:69:60:dc:42
          inet addr:172.1.1.8  Bcast:172.1.1.255  Mask:255.255.255.0
          inet6 addr: fe80::64e2:69ff:fe60:dc42/64 Scope:Link
          UP BROADCAST RUNNING PROMISC  MTU:1500  Metric:1
          RX packets:391 errors:0 dropped:391 overruns:0 frame:0
          TX packets:803 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:16722 (16.7 KB)  TX bytes:34038 (34.0 KB)

br-prv    Link encap:Ethernet  HWaddr 8c:dc:d4:ab:5b:f0
          BROADCAST PROMISC  MTU:1500  Metric:1
          RX packets:23 errors:0 dropped:0 overruns:0 frame:0
          TX packets:191 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:1712 (1.7 KB)  TX bytes:8334 (8.3 KB)

tep1      Link encap:Ethernet  HWaddr c2:cf:a0:d4:f4:d6
          inet addr:10.1.3.8  Bcast:10.1.3.255  Mask:255.255.255.0
          inet6 addr: fe80::c0cf:a0ff:fed4:f4d6/64 Scope:Link
          UP BROADCAST RUNNING PROMISC  MTU:1500  Metric:1
          RX packets:3328 errors:0 dropped:2682 overruns:0 frame:0
          TX packets:127 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:231004 (231.0 KB)  TX bytes:11454 (11.4 KB)

# ip route
10.1.2.0/24 via 10.1.3.1 dev tep1
10.1.3.0/24 dev tep1  proto kernel  scope link  src 10.1.3.8
172.1.1.0/24 dev br_int  proto kernel  scope link  src 172.1.1.8

Apparently, OVS is able to correctly resolve the VXLAN tunnel to the internal 
port tep1 as the local end-point:

# ovs-appctl ovs/route/show
Route Table:
Cached: 10.1.3.8/32 dev tep1
Cached: 172.1.1.8/32 dev br_int
Cached: ::1/128 dev lo
Cached: 10.1.2.0/24 dev tep1 GW 10.1.3.1
Cached: 10.1.3.0/24 dev tep1
Cached: 172.1.1.0/24 dev br_int
Cached: 127.0.0.0/8 dev lo
Cached: 0.0.0.0/0 dev eth0 GW 10.87.247.1

# ovs-appctl tnl/neigh/show
IP                                            MAC                 Bridge
==========================================================================
10.1.3.8                                      c2:cf:a0:d4:f4:d6   br-prv
10.1.3.1                                      00:04:96:98:78:18   br_int
10.1.3.1                                      00:04:96:98:78:18   br-prv
172.1.1.8                                     66:e2:69:60:dc:42   br_int

# ovs-vsctl list interface vxlan0
_uuid               : b2a26291-fc5c-448c-921e-c6216b24caf7
admin_state         : up
...
mac_in_use          : "1e:84:2f:1e:69:7a"
mtu                 : []
name                : "vxlan0"
ofport              : 100
ofport_request      : 100
options             : {key=flow, remote_ip="10.1.2.9"}
other_config        : {}
statistics          : {collisions=0, rx_bytes=0, rx_crc_err=0, rx_dropped=0, 
rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=0, tx_bytes=64324, 
tx_dropped=0, tx_errors=0, tx_packets=1185}
status              : {tunnel_egress_iface="tep1", 
tunnel_egress_iface_carrier=up}
type                : vxlan

In the ofproto-dpif code that handles de-tunneling there seems to be an 
explicit check for OFPP_LOCAL as the egress port. Should other internal ports 
work if we simply remove the check?

diff --git a/ofproto/ofproto-dpif-xlate.c b/ofproto/ofproto-dpif-xlate.c
index 15ca565..acc8376 100644
--- a/ofproto/ofproto-dpif-xlate.c
+++ b/ofproto/ofproto-dpif-xlate.c
@@ -3165,8 +3165,7 @@ compose_output_action__(struct xlate_ctx *ctx, ofp_port_t ofp_port,

                 /* XXX: Write better Filter for tunnel port. We can use inport
                 * int tunnel-port flow to avoid these checks completely. */
-                if (ofp_port == OFPP_LOCAL &&
-                    ovs_native_tunneling_is_on(ctx->xbridge->ofproto)) {
+                if (ovs_native_tunneling_is_on(ctx->xbridge->ofproto)) {

                     odp_tnl_port = tnl_port_map_lookup(flow, wc);
                 }


_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev