On Fri, Jun 28, 2024 at 5:54 PM Numan Siddique <num...@ovn.org> wrote:
>
> On Wed, Jun 12, 2024 at 2:50 AM Felix Huettner via dev
> <ovs-dev@openvswitch.org> wrote:
> >
> > Previously we could only generate ARP requests from IPv4 packets
> > and NS requests from IPv6 packets. This was the case because we rely on
> > information in the packet to generate the ARP/NS requests.
> >
> > However in case of ARP/NS requests originating from the Logical_Router
> > pipeline for nexthop lookups we overwrite the affected fields
> > afterwards. This overwrite is done by the userdata openflow actions.
> > Because of this we actually do not rely on any information of the IPv4/6
> > packets in these cases.
> >
> > Unfortunately we cannot easily determine if we are actually later
> > overwriting the affected fields. The approach now is to use the fields
> > from the IP header if we have a matching IP version and default to some
> > values otherwise. In case we overwrite this data afterwards we are
> > generally good. If we do not overwrite this data because of some bug we
> > will send out invalid ARP/NS requests. They will hopefully be dropped by
> > the rest of the network.
> >
> > The alternative would have been to introduce new arp/nd_ns actions where
> > we guarantee this overwrite. This would not suffer from the above
> > limitations, but would require coordinating upgrades between all
> > ovn-controllers and northd.
> >
> > Signed-off-by: Felix Huettner <felix.huettner@mail.schwarz>
>
> Hi Felix,
>
> Thanks for the patch series and sorry for the delay in reviews.
>
> Looks like this patch series requires another rebase.
>
> I tested this patch series using ovn-fake-multinode - [1].
>
> After starting it,  I added the below static route
>
> ovn-nbctl lr-route-add lr1 172.15.0.0/24 3001::b
>
> And in the 'ovn-chassis-1' container I ran this command
>
> [root@ovn-chassis-1 ~]# ip netns exec sw01p1 ping 172.15.0.50
>
> Since we don't know the next hop,  the below logical flow is hit
> (which is as expected)
>
>   table=24(lr_in_arp_request  ), priority=200  , match=(eth.dst == 00:00:00:00:00:00 && reg9[9] == 0 && xxreg0 == 3001::4), action=(nd_ns { eth.dst = 33:33:ff:00:00:04; ip6.dst = ff02::1:ff00:4; nd.target = 3001::b; output; }; output;)
>
> And If I look into the ovn-controller logs (after enabling vconn:dbg)
> I see that ovn-controller receives the icmp ping packet and it
> generates IPv6 NS packet which is also as expected
>
> ----
> 2024-06-28T21:39:06.303Z|00020|vconn(ovn_pinctrl0)|DBG|unix:/var/run/openvswitch/br-int.mgmt: received: NXT_PACKET_IN2 (OF1.5) (xid=0x0): cookie=0x77e2d8b total_len=98 reg0=0x30010000,reg1=0xac10016e,reg3=0xb,reg4=0x30010000,reg7=0xa,reg9=0x4,reg10=0x1,reg11=0x1,reg12=0x3,reg14=0x1,reg15=0x3,metadata=0x3,in_port=5 (via action) data_len=98 (unbuffered)
>  userdata=00.00.00.09.00.00.00.00.00.1c.00.18.00.80.00.00.00.00.00.00.00.01.de.10.80.00.3e.10.00.00.00.00.ff.ff.00.10.00.00.23.20.00.0e.ff.f8.25.00.00.00
>  continuation.bridge=b72b4e65-0756-4215-bfbf-c3db3652e8ee
>  continuation.actions=unroll_xlate(table=0, cookie=0),resubmit(,37)
>  continuation.odp_port=5
> icmp,vlan_tci=0x0000,dl_src=30:51:00:00:00:03,dl_dst=00:00:00:00:00:00,nw_src=11.0.0.3,nw_dst=172.15.0.50,nw_tos=0,nw_ecn=0,nw_ttl=63,nw_frag=no,icmp_type=8,icmp_code=0 icmp_csum:cf9b
> 2024-06-28T21:39:06.304Z|00021|vconn(ovn_pinctrl0)|DBG|unix:/var/run/openvswitch/br-int.mgmt: sent (Success): OFPT_PACKET_OUT (OF1.5) (xid=0x413): in_port=CONTROLLER actions=set_field:0x30010000->reg0,set_field:0xac10016e->reg1,set_field:0xb->reg3,set_field:0x30010000->reg4,set_field:0xa->reg7,set_field:0x4->reg9,set_field:0x1->reg10,set_field:0x1->reg11,set_field:0x3->reg12,set_field:0x1->reg14,set_field:0x3->reg15,set_field:0x3->metadata,move:NXM_NX_XXREG0[]->NXM_NX_ND_TARGET[],resubmit(,37) data_len=86
> icmp6,vlan_tci=0x0000,dl_src=30:51:00:00:00:03,dl_dst=33:33:ff:ff:ff:ff,ipv6_src=fe80::3251:ff:fe00:3,ipv6_dst=ff02::1:ffff:ffff,ipv6_label=0x00000,nw_tos=0,nw_ecn=0,nw_ttl=255,nw_frag=no,icmp_type=135,icmp_code=0,nd_target=ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff,nd_sll=30:51:00:00:00:03,nd_tll=00:00:00:00:00:00 icmp6_csum:1877
> ------
>
> But strangely when this packet leaves the physical interface 'eth2' on
> ovn-chassis-1,  the nd_target is wrongly populated
>
> tcpdump on the physical interface (connected to br-ex)
> ---
> 17:37:40.289881 30:51:00:00:00:03 > 33:33:ff:ff:ff:ff, ethertype IPv6 (0x86dd), length 86: (hlim 255, next-header ICMPv6 (58) payload length: 32) fe80::3251:ff:fe00:3 > ff02::1:ffff:ffff: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 3001:0:ac10:16e::b
>   source link-address option (1), length 8 (1): 30:51:00:00:00:03
>
> ---
>
> You can see that the nd_target is '3001:0:ac10:16e::b' and not
> '3001::b', which is strange.  The inner action of "nd_ns" clearly sets
> nd.target = 3001::b.
>
> Can you please verify on your end if that is the case?  You can
> perhaps use ovn-fake-multinode if you can't reproduce it in your setup.
>


I was curious about what's happening here, so I debugged a bit and
found the reason.
The thing to note is that the xxreg0 register is nothing but the
concatenation of reg0, reg1, reg2 and reg3.  For IPv4 packets we use
the individual registers and NOT the extended registers.
But with your patch we use both the extended registers and the
individual registers at the same time, and this corrupts xxreg0.
The details below should make things clear.
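
To make the aliasing concrete (this assumes the standard register
layout, where xxreg0 is the concatenation of reg0 through reg3 with
reg0 holding the most significant 32 bits), a nexthop of 3001::b
stored in xxreg0 lands in the individual registers as:

  reg0 = 0x30010000, reg1 = 0x00000000, reg2 = 0x00000000, reg3 = 0x0000000b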

In my testing I added the static route 'ovn-nbctl lr-route-add lr1
172.17.0.0/24 3001::b'.

This is what my logical router looks like:

------------------------------
router a0ec6f06-5235-4d42-b6f6-1266256e6604 (lr1)
    port lr1-public1
        mac: "00:01:20:20:12:13"
        networks: ["172.16.1.100/24", "3001::a/64"]
        gateway chassis: [ovn-gw-1]
    port lr1-sw11
        mac: "00:01:00:00:ff:02"
        networks: ["2001::a/64", "21.0.0.1/24"]
    port lr1-sw01
        mac: "00:01:00:00:ff:01"
        networks: ["1001::a/64", "11.0.0.1/24"]
    nat 00d27b61-9df5-4c37-abff-68180e62e3e0
        external ip: "172.16.1.100"
        logical ip: "21.0.0.0/24"
        type: "snat"
    nat 57b916a7-42bd-4a93-836e-36e6f5ad7edf
        external ip: "172.16.1.100"
        logical ip: "11.0.0.0/24"
        type: "snat"
    nat 66ec5ef0-c1fe-4991-a431-919993ad158b
        external ip: "3001::c"
        logical ip: "1001::3"
        type: "dnat_and_snat"
    nat aa2e1c51-ec0a-4b7e-ab5f-d4fe3339b3db
        external ip: "3001::d"
        logical ip: "2001::3"
        type: "dnat_and_snat"
    nat d39249b9-deff-44ae-b20e-63096e49e623
        external ip: "172.16.1.110"
        logical ip: "11.0.0.3"
        type: "dnat_and_snat"
    nat ff23a300-2ea4-4eaf-b568-8f96eda36232
        external ip: "172.16.1.120"
        logical ip: "21.0.0.3"
        type: "dnat_and_snat"
----------------------

Notice the dnat_and_snat entries (11.0.0.3 <-> 172.16.1.110 and
21.0.0.3 <-> 172.16.1.120)

After your patch we have the below logical flows in the router
pipeline (with my test setup)

-----
....
table=14(lr_in_ip_routing   ), priority=73   , match=(reg7 == 0 && ip4.dst == 172.17.0.0/24), action=(ip.ttl--; reg8[0..15] = 0; xxreg0 = 3001::b; xxreg1 = 3001::a; eth.src = 00:01:20:20:12:13; outport = "lr1-public1"; flags.loopback = 1; reg9[9] = 0; next;)
..
..
..
table=23(lr_in_gw_redirect  ), priority=100  , match=(ip4.src == 11.0.0.3 && outport == "lr1-public1" && is_chassis_resident("sw01-port1")), action=(eth.src = 30:51:00:00:00:03; reg1 = 172.16.1.110; next;)
...
...
table=24(lr_in_arp_request  ), priority=200  , match=(eth.dst == 00:00:00:00:00:00 && reg9[9] == 0 && xxreg0 == 3001::b), action=(nd_ns { eth.dst = 33:33:ff:00:00:0b; ip6.dst = ff02::1:ff00:b; nd.target = 3001::b; output; }; output;)
table=24(lr_in_arp_request  ), priority=100  , match=(eth.dst == 00:00:00:00:00:00 && reg9[9] == 0), action=(nd_ns { nd.target = xxreg0; output; }; output;)
table=24(lr_in_arp_request  ), priority=100  , match=(eth.dst == 00:00:00:00:00:00 && reg9[9] == 1), action=(arp { eth.dst = ff:ff:ff:ff:ff:ff; arp.spa = reg1; arp.tpa = reg0; arp.op = 1; output; }; output;)
----

I pinged 172.17.0.50 from the logical port sw01-port1, which has IP
11.0.0.3.  Since there is a dnat_and_snat entry for this IP, the
logical flow in table 23 (shown above) is hit, and it corrupts xxreg0
via the action reg1 = 172.16.1.110.

In table 24, the priority-200 flow is never matched because xxreg0 is
no longer 3001::b (it is now 3001:0:ac10:16e::b), so the priority-100
flow is hit instead, with action=(nd_ns { nd.target = xxreg0;
output; }; output;).
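
Spelling out the arithmetic: 172.16.1.110 is 0xac10016e, so the write
to reg1 turns the stored 128-bit value into
3001:0000:ac10:016e:0000:0000:0000:000b, i.e. 3001:0:ac10:16e::b.
That is exactly the reg0=0x30010000, reg1=0xac10016e, reg3=0xb
combination visible in the NXT_PACKET_IN2 dump earlier in the thread.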

And that's why the IPv6 Neighbor Solicitation packet has the wrong nd.target.

I'm afraid your approach doesn't work when a router has NAT entries
configured.  We need to find an alternate solution to address your
use case.

Thanks
Numan


> Thanks
> Numan
>
>
>
>
> > ---
> > v4->v5: rebase
> > v4: newly added
> >
> >  controller/pinctrl.c |  52 +++++++--
> >  lib/actions.c        |   4 +-
> >  northd/northd.c      |   9 +-
> >  tests/ovn-northd.at  |   8 +-
> >  tests/ovn.at         | 268 ++++++++++++++++++++++++++++++++++++++++++-
> >  5 files changed, 320 insertions(+), 21 deletions(-)
> >
> > diff --git a/controller/pinctrl.c b/controller/pinctrl.c
> > index f2e382a44..4c520bd5e 100644
> > --- a/controller/pinctrl.c
> > +++ b/controller/pinctrl.c
> > @@ -1574,9 +1574,11 @@ pinctrl_handle_arp(struct rconn *swconn, const struct flow *ip_flow,
> >                     const struct ofputil_packet_in *pin,
> >                     struct ofpbuf *userdata, const struct ofpbuf *continuation)
> >  {
> > -    /* This action only works for IP packets, and the switch should only send
> > -     * us IP packets this way, but check here just to be sure. */
> > -    if (ip_flow->dl_type != htons(ETH_TYPE_IP)) {
> > +    uint16_t dl_type = ntohs(ip_flow->dl_type);
> > +
> > +    /* This action only works for IPv4 or IPv6 packets, and the switch should
> > +     * only send us IP packets this way, but check here just to be sure. */
> > +    if (dl_type != ETH_TYPE_IP && dl_type != ETH_TYPE_IPV6) {
> >          static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
> >          VLOG_WARN_RL(&rl, "ARP action on non-IP packet (Ethertype %"PRIx16")",
> >                       ntohs(ip_flow->dl_type));
> > @@ -1600,9 +1602,25 @@ pinctrl_handle_arp(struct rconn *swconn, const struct flow *ip_flow,
> >      struct arp_eth_header *arp = dp_packet_l3(&packet);
> >      arp->ar_op = htons(ARP_OP_REQUEST);
> >      arp->ar_sha = ip_flow->dl_src;
> > -    put_16aligned_be32(&arp->ar_spa, ip_flow->nw_src);
> >      arp->ar_tha = eth_addr_zero;
> > -    put_16aligned_be32(&arp->ar_tpa, ip_flow->nw_dst);
> > +
> > +    /* We might be here without actually currently handling an IPv4 packet.
> > +     * This can happen in the case where we route IPv6 packets over an IPv4
> > +     * link.
> > +     * In these cases we have no destination IPv4 address from the packet that
> > +     * we can reuse. But we receive the actual destination IPv4 address via
> > +     * userdata anyway, so what we set for now is irrelevant.
> > +     * This is just a hope since we do not parse the userdata. If we land here
> > +     * for whatever reason without being an IPv4 packet and without userdata we
> > +     * will send out a wrong packet.
> > +     */
> > +    if (ip_flow->dl_type == htons(ETH_TYPE_IP)) {
> > +        put_16aligned_be32(&arp->ar_spa, ip_flow->nw_src);
> > +        put_16aligned_be32(&arp->ar_tpa, ip_flow->nw_dst);
> > +    } else {
> > +        put_16aligned_be32(&arp->ar_spa, 0);
> > +        put_16aligned_be32(&arp->ar_tpa, 0);
> > +    }
> >
> >      if (ip_flow->vlans[0].tci & htons(VLAN_CFI)) {
> >          eth_push_vlan(&packet, htons(ETH_TYPE_VLAN_8021Q),
> > @@ -6741,8 +6759,11 @@ pinctrl_handle_nd_ns(struct rconn *swconn, const struct flow *ip_flow,
> >                       struct ofpbuf *userdata,
> >                       const struct ofpbuf *continuation)
> >  {
> > -    /* This action only works for IPv6 packets. */
> > -    if (get_dl_type(ip_flow) != htons(ETH_TYPE_IPV6)) {
> > +    uint16_t dl_type = ntohs(ip_flow->dl_type);
> > +
> > +    /* This action only works for IPv4 or IPv6 packets, and the switch should
> > +     * only send us IP packets this way, but check here just to be sure. */
> > +    if (dl_type != ETH_TYPE_IP && dl_type != ETH_TYPE_IPV6) {
> >          static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
> >          VLOG_WARN_RL(&rl, "NS action on non-IPv6 packet");
> >          return;
> > @@ -6758,8 +6779,23 @@ pinctrl_handle_nd_ns(struct rconn *swconn, const struct flow *ip_flow,
> >      dp_packet_use_stub(&packet, packet_stub, sizeof packet_stub);
> >
> >      in6_generate_lla(ip_flow->dl_src, &ipv6_src);
> > +
> > +    /* We might be here without actually currently handling an IPv6 packet.
> > +     * This can happen in the case where we route IPv4 packets over an IPv6
> > +     * link.
> > +     * In these cases we have no destination IPv6 address from the packet that
> > +     * we can reuse. But we receive the actual destination IPv6 address via
> > +     * userdata anyway, so what we pass to compose_nd_ns is irrelevant.
> > +     * This is just a hope since we do not parse the userdata. If we land here
> > +     * for whatever reason without being an IPv6 packet and without userdata we
> > +     * will send out a wrong packet.
> > +     */
> > +    struct in6_addr ipv6_dst = IN6ADDR_EXACT_INIT;
> > +    if (get_dl_type(ip_flow) == htons(ETH_TYPE_IPV6)) {
> > +        ipv6_dst = ip_flow->ipv6_dst;
> > +    }
> >      compose_nd_ns(&packet, ip_flow->dl_src, &ipv6_src,
> > -                  &ip_flow->ipv6_dst);
> > +                  &ipv6_dst);
> >
> >      /* Reload previous packet metadata and set actions from userdata. */
> >      set_actions_and_enqueue_msg(swconn, &packet,
> > diff --git a/lib/actions.c b/lib/actions.c
> > index e8cc0994d..c7c83a34c 100644
> > --- a/lib/actions.c
> > +++ b/lib/actions.c
> > @@ -1718,7 +1718,7 @@ parse_nested_action(struct action_context *ctx, enum ovnact_type type,
> >  static void
> >  parse_ARP(struct action_context *ctx)
> >  {
> > -    parse_nested_action(ctx, OVNACT_ARP, "ip4", ctx->scope);
> > +    parse_nested_action(ctx, OVNACT_ARP, "ip", ctx->scope);
> >  }
> >
> >  static void
> > @@ -1772,7 +1772,7 @@ parse_ND_NA_ROUTER(struct action_context *ctx)
> >  static void
> >  parse_ND_NS(struct action_context *ctx)
> >  {
> > -    parse_nested_action(ctx, OVNACT_ND_NS, "ip6", ctx->scope);
> > +    parse_nested_action(ctx, OVNACT_ND_NS, "ip", ctx->scope);
> >  }
> >
> >  static void
> > diff --git a/northd/northd.c b/northd/northd.c
> > index e697973ce..970cdca6e 100644
> > --- a/northd/northd.c
> > +++ b/northd/northd.c
> > @@ -13549,7 +13549,8 @@ build_arp_request_flows_for_lrouter(
> >
> >          ds_clear(match);
> >          ds_put_format(match, "eth.dst == 00:00:00:00:00:00 && "
> > -                      "ip6 && " REG_NEXT_HOP_IPV6 " == %s",
> > +                      REGBIT_NEXTHOP_IS_IPV4" == 0 && "
> > +                      REG_NEXT_HOP_IPV6 " == %s",
> >                        route->nexthop);
> >          struct in6_addr sn_addr;
> >          struct eth_addr eth_dst;
> > @@ -13579,7 +13580,8 @@ build_arp_request_flows_for_lrouter(
> >      }
> >
> >      ovn_lflow_metered(lflows, od, S_ROUTER_IN_ARP_REQUEST, 100,
> > -                      "eth.dst == 00:00:00:00:00:00 && ip4",
> > +                      "eth.dst == 00:00:00:00:00:00 && "
> > +                      REGBIT_NEXTHOP_IS_IPV4" == 1",
> >                        "arp { "
> >                        "eth.dst = ff:ff:ff:ff:ff:ff; "
> >                        "arp.spa = " REG_SRC_IPV4 "; "
> > @@ -13591,7 +13593,8 @@ build_arp_request_flows_for_lrouter(
> >                                       meter_groups),
> >                        lflow_ref);
> >      ovn_lflow_metered(lflows, od, S_ROUTER_IN_ARP_REQUEST, 100,
> > -                      "eth.dst == 00:00:00:00:00:00 && ip6",
> > +                      "eth.dst == 00:00:00:00:00:00 && "
> > +                      REGBIT_NEXTHOP_IS_IPV4" == 0",
> >                        "nd_ns { "
> >                        "nd.target = " REG_NEXT_HOP_IPV6 "; "
> >                        "output; "
> > diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
> > index 41761ba96..a6aa66a0c 100644
> > --- a/tests/ovn-northd.at
> > +++ b/tests/ovn-northd.at
> > @@ -6796,10 +6796,10 @@ AT_CHECK([grep -e "lr_in_arp_resolve" lr0flows | ovn_strip_lflows], [0], [dnl
> >
> >  AT_CHECK([grep -e "lr_in_arp_request" lr0flows | ovn_strip_lflows], [0], [dnl
> >    table=??(lr_in_arp_request  ), priority=0    , match=(1), action=(output;)
> > -  table=??(lr_in_arp_request  ), priority=100  , match=(eth.dst == 00:00:00:00:00:00 && ip4), action=(arp { eth.dst = ff:ff:ff:ff:ff:ff; arp.spa = reg1; arp.tpa = reg0; arp.op = 1; output; }; output;)
> > -  table=??(lr_in_arp_request  ), priority=100  , match=(eth.dst == 00:00:00:00:00:00 && ip6), action=(nd_ns { nd.target = xxreg0; output; }; output;)
> > -  table=??(lr_in_arp_request  ), priority=200  , match=(eth.dst == 00:00:00:00:00:00 && ip6 && xxreg0 == 2001:db8::10), action=(nd_ns { eth.dst = 33:33:ff:00:00:10; ip6.dst = ff02::1:ff00:10; nd.target = 2001:db8::10; output; }; output;)
> > -  table=??(lr_in_arp_request  ), priority=200  , match=(eth.dst == 00:00:00:00:00:00 && ip6 && xxreg0 == 2001:db8::20), action=(nd_ns { eth.dst = 33:33:ff:00:00:20; ip6.dst = ff02::1:ff00:20; nd.target = 2001:db8::20; output; }; output;)
> > +  table=??(lr_in_arp_request  ), priority=100  , match=(eth.dst == 00:00:00:00:00:00 && reg9[[9]] == 0), action=(nd_ns { nd.target = xxreg0; output; }; output;)
> > +  table=??(lr_in_arp_request  ), priority=100  , match=(eth.dst == 00:00:00:00:00:00 && reg9[[9]] == 1), action=(arp { eth.dst = ff:ff:ff:ff:ff:ff; arp.spa = reg1; arp.tpa = reg0; arp.op = 1; output; }; output;)
> > +  table=??(lr_in_arp_request  ), priority=200  , match=(eth.dst == 00:00:00:00:00:00 && reg9[[9]] == 0 && xxreg0 == 2001:db8::10), action=(nd_ns { eth.dst = 33:33:ff:00:00:10; ip6.dst = ff02::1:ff00:10; nd.target = 2001:db8::10; output; }; output;)
> > +  table=??(lr_in_arp_request  ), priority=200  , match=(eth.dst == 00:00:00:00:00:00 && reg9[[9]] == 0 && xxreg0 == 2001:db8::20), action=(nd_ns { eth.dst = 33:33:ff:00:00:20; ip6.dst = ff02::1:ff00:20; nd.target = 2001:db8::20; output; }; output;)
> >  ])
> >
> >  AT_CLEANUP
> > diff --git a/tests/ovn.at b/tests/ovn.at
> > index 1f1a7963d..6894947d1 100644
> > --- a/tests/ovn.at
> > +++ b/tests/ovn.at
> > @@ -1541,11 +1541,11 @@ clone { ip4.dst = 255.255.255.255; output; }; next;
> >  # arp
> >  arp { eth.dst = ff:ff:ff:ff:ff:ff; output; }; output;
> >      encodes as controller(userdata=00.00.00.00.00.00.00.00.00.19.00.10.80.00.06.06.ff.ff.ff.ff.ff.ff.00.00.ff.ff.00.10.00.00.23.20.00.0e.ff.f8.OFTABLE_SAVE_INPORT_HEX.00.00.00,pause),resubmit(,OFTABLE_SAVE_INPORT)
> > -    has prereqs ip4
> > +    has prereqs ip
> >  arp { };
> >      formats as arp { drop; };
> >      encodes as controller(userdata=00.00.00.00.00.00.00.00,pause)
> > -    has prereqs ip4
> > +    has prereqs ip
> >
> >  # get_arp
> >  get_arp(outport, ip4.dst);
> > @@ -1709,12 +1709,12 @@ reg9[[8]] = dhcp_relay_resp_chk(192.168.1, 172.16.1.1);
> >  # nd_ns
> >  nd_ns { nd.target = xxreg0; output; };
> >      encodes as controller(userdata=00.00.00.09.00.00.00.00.00.1c.00.18.00.80.00.00.00.00.00.00.00.01.de.10.80.00.3e.10.00.00.00.00.ff.ff.00.10.00.00.23.20.00.0e.ff.f8.OFTABLE_SAVE_INPORT_HEX.00.00.00,pause)
> > -    has prereqs ip6
> > +    has prereqs ip
> >
> >  nd_ns { };
> >      formats as nd_ns { drop; };
> >      encodes as controller(userdata=00.00.00.09.00.00.00.00,pause)
> > -    has prereqs ip6
> > +    has prereqs ip
> >
> >  # nd_na
> >  nd_na { eth.src = 12:34:56:78:9a:bc; nd.tll = 12:34:56:78:9a:bc; outport = inport; inport = ""; /* Allow sending out inport. */ output; };
> > @@ -38706,6 +38706,266 @@ OVN_CLEANUP([hv1],[hv2])
> >  AT_CLEANUP
> >  ])
> >
> > +OVN_FOR_EACH_NORTHD([
> > +AT_SETUP([2 HVs, 2 LS, 1 lport/LS, LRs connected via LS, IPv4 over IPv6, dynamic])
> > +AT_SKIP_IF([test $HAVE_SCAPY = no])
> > +ovn_start
> > +
> > +# Logical network:
> > +# Two LRs - R1 and R2 that are connected to ls-transfer in 2001:db8::/64
> > +# network. R1 has a switch ls1 (192.168.1.0/24) connected to it.
> > +# R2 has ls2 (172.16.1.0/24) connected to it.
> > +
> > +ls1_lp1_mac="f0:00:00:01:02:03"
> > +rp_ls1_mac="00:00:00:01:02:03"
> > +rp_ls2_mac="00:00:00:01:02:04"
> > +ls2_lp1_mac="f0:00:00:01:02:04"
> > +
> > +ls1_lp1_ip="192.168.1.2"
> > +ls2_lp1_ip="172.16.1.2"
> > +
> > +check ovn-nbctl lr-add R1
> > +check ovn-nbctl lr-add R2
> > +
> > +check ovn-nbctl ls-add ls1
> > +check ovn-nbctl ls-add ls2
> > +check ovn-nbctl ls-add ls-transfer
> > +
> > +# Connect ls1 to R1
> > +check ovn-nbctl lrp-add R1 ls1 $rp_ls1_mac 192.168.1.1/24
> > +check ovn-nbctl set Logical_Router R1 options:dynamic_neigh_routers=true
> > +
> > +check ovn-nbctl lsp-add ls1 rp-ls1 -- set Logical_Switch_Port rp-ls1 type=router \
> > +  options:router-port=ls1 addresses=\"$rp_ls1_mac\"
> > +
> > +# Connect ls2 to R2
> > +check ovn-nbctl lrp-add R2 ls2 $rp_ls2_mac 172.16.1.1/24
> > +check ovn-nbctl set Logical_Router R2 options:dynamic_neigh_routers=true
> > +
> > +check ovn-nbctl lsp-add ls2 rp-ls2 -- set Logical_Switch_Port rp-ls2 type=router \
> > +  options:router-port=ls2 addresses=\"$rp_ls2_mac\"
> > +
> > +# Connect R1 to R2
> > +check ovn-nbctl lrp-add R1 R1_ls-transfer 00:00:00:02:03:04 2001:db8::1/64
> > +check ovn-nbctl lrp-add R2 R2_ls-transfer 00:00:00:02:03:05 2001:db8::2/64
> > +
> > +check ovn-nbctl lsp-add ls-transfer ls-transfer_r1 -- \
> > +  set Logical_Switch_Port ls-transfer_r1 type=router \
> > +  options:router-port=R1_ls-transfer addresses=\"router\"
> > +check ovn-nbctl lsp-add ls-transfer ls-transfer_r2 -- \
> > +  set Logical_Switch_Port ls-transfer_r2 type=router \
> > +  options:router-port=R2_ls-transfer addresses=\"router\"
> > +
> > +AT_CHECK([ovn-nbctl lr-route-add R1 "0.0.0.0/0" 2001:db8::2])
> > +AT_CHECK([ovn-nbctl lr-route-add R2 "0.0.0.0/0" 2001:db8::1])
> > +
> > +# Create logical port ls1-lp1 in ls1
> > +check ovn-nbctl lsp-add ls1 ls1-lp1 \
> > +-- lsp-set-addresses ls1-lp1 "$ls1_lp1_mac $ls1_lp1_ip"
> > +
> > +# Create logical port ls2-lp1 in ls2
> > +check ovn-nbctl lsp-add ls2 ls2-lp1 \
> > +-- lsp-set-addresses ls2-lp1 "$ls2_lp1_mac $ls2_lp1_ip"
> > +
> > +# Create two hypervisors and create OVS ports corresponding to logical ports.
> > +net_add n1
> > +
> > +sim_add hv1
> > +as hv1
> > +check ovs-vsctl add-br br-phys
> > +ovn_attach n1 br-phys 192.168.0.1
> > +check ovs-vsctl -- add-port br-int hv1-vif1 -- \
> > +    set interface hv1-vif1 external-ids:iface-id=ls1-lp1 \
> > +    options:tx_pcap=hv1/vif1-tx.pcap \
> > +    options:rxq_pcap=hv1/vif1-rx.pcap \
> > +    ofport-request=1
> > +
> > +sim_add hv2
> > +as hv2
> > +check ovs-vsctl add-br br-phys
> > +ovn_attach n1 br-phys 192.168.0.2
> > +check ovs-vsctl -- add-port br-int hv2-vif1 -- \
> > +    set interface hv2-vif1 external-ids:iface-id=ls2-lp1 \
> > +    options:tx_pcap=hv2/vif1-tx.pcap \
> > +    options:rxq_pcap=hv2/vif1-rx.pcap \
> > +    ofport-request=1
> > +
> > +
> > +# Pre-populate the hypervisors' ARP tables so that we don't lose any
> > +# packets for ARP resolution (native tunneling doesn't queue packets
> > +# for ARP resolution).
> > +OVN_POPULATE_ARP
> > +
> > +# Allow some time for ovn-northd and ovn-controller to catch up.
> > +wait_for_ports_up
> > +check ovn-nbctl --wait=hv sync
> > +
> > +# Packet to send.
> > +packet=$(fmt_pkt "Ether(dst='${rp_ls1_mac}', src='${ls1_lp1_mac}')/ \
> > +                        IP(src='${ls1_lp1_ip}', dst='${ls2_lp1_ip}', ttl=64)/ \
> > +                        UDP(sport=53, dport=4369)")
> > +check as hv1 ovs-appctl netdev-dummy/receive hv1-vif1 "$packet"
> > +
> > +# Packet to Expect
> > +# The TTL should be decremented by 2.
> > +expected=$(fmt_pkt "Ether(dst='${ls2_lp1_mac}', src='${rp_ls2_mac}')/ \
> > +                        IP(src='${ls1_lp1_ip}', dst='${ls2_lp1_ip}', ttl=62)/ \
> > +                        UDP(sport=53, dport=4369)")
> > +echo ${expected} > expected
> > +OVN_CHECK_PACKETS([hv2/vif1-tx.pcap], [expected])
> > +
> > +AT_CHECK([ovn-sbctl dump-flows | grep lr_in_arp_resolve | \
> > +grep "reg0 == 172.16.1.2" | wc -l], [0], [1
> > +])
> > +
> > +# Disable the ls2-lp1 port.
> > +check ovn-nbctl --wait=hv set logical_switch_port ls2-lp1 enabled=false
> > +
> > +AT_CHECK([ovn-sbctl dump-flows | grep lr_in_arp_resolve | \
> > +grep "reg0 == 172.16.1.2" | wc -l], [0], [0
> > +])
> > +
> > +# Send the same packet again and it should not be delivered
> > +check as hv1 ovs-appctl netdev-dummy/receive hv1-vif1 "$packet"
> > +
> > +# The 2nd packet sent should not be received.
> > +OVN_CHECK_PACKETS([hv2/vif1-tx.pcap], [expected])
> > +
> > +OVN_CLEANUP([hv1],[hv2])
> > +
> > +AT_CLEANUP
> > +])
> > +
> > +OVN_FOR_EACH_NORTHD([
> > +AT_SETUP([2 HVs, 2 LS, 1 lport/LS, LRs connected via LS, IPv6 over IPv4, dynamic])
> > +AT_SKIP_IF([test $HAVE_SCAPY = no])
> > +ovn_start
> > +
> > +# Logical network:
> > +# Two LRs - R1 and R2 that are connected to ls-transfer in 10.0.0.0/24
> > +# network. R1 has a switch ls1 (2001:db8:1::/64) connected to it.
> > +# R2 has ls2 (2001:db8:2::/64) connected to it.
> > +
> > +ls1_lp1_mac="f0:00:00:01:02:03"
> > +rp_ls1_mac="00:00:00:01:02:03"
> > +rp_ls2_mac="00:00:00:01:02:04"
> > +ls2_lp1_mac="f0:00:00:01:02:04"
> > +
> > +ls1_lp1_ip="2001:db8:1::2"
> > +ls2_lp1_ip="2001:db8:2::2"
> > +
> > +check ovn-nbctl lr-add R1
> > +check ovn-nbctl lr-add R2
> > +
> > +check ovn-nbctl ls-add ls1
> > +check ovn-nbctl ls-add ls2
> > +check ovn-nbctl ls-add ls-transfer
> > +
> > +# Connect ls1 to R1
> > +check ovn-nbctl lrp-add R1 ls1 $rp_ls1_mac 2001:db8:1::1/64
> > +check ovn-nbctl set Logical_Router R1 options:dynamic_neigh_routers=true
> > +
> > +check ovn-nbctl lsp-add ls1 rp-ls1 -- set Logical_Switch_Port rp-ls1 type=router \
> > +  options:router-port=ls1 addresses=\"$rp_ls1_mac\"
> > +
> > +# Connect ls2 to R2
> > +check ovn-nbctl lrp-add R2 ls2 $rp_ls2_mac 2001:db8:2::1/64
> > +check ovn-nbctl set Logical_Router R2 options:dynamic_neigh_routers=true
> > +
> > +check ovn-nbctl lsp-add ls2 rp-ls2 -- set Logical_Switch_Port rp-ls2 type=router \
> > +  options:router-port=ls2 addresses=\"$rp_ls2_mac\"
> > +
> > +# Connect R1 to R2
> > +check ovn-nbctl lrp-add R1 R1_ls-transfer 00:00:00:02:03:04 10.0.0.1/24
> > +check ovn-nbctl lrp-add R2 R2_ls-transfer 00:00:00:02:03:05 10.0.0.2/24
> > +
> > +check ovn-nbctl lsp-add ls-transfer ls-transfer_r1 -- \
> > +  set Logical_Switch_Port ls-transfer_r1 type=router \
> > +  options:router-port=R1_ls-transfer addresses=\"router\"
> > +check ovn-nbctl lsp-add ls-transfer ls-transfer_r2 -- \
> > +  set Logical_Switch_Port ls-transfer_r2 type=router \
> > +  options:router-port=R2_ls-transfer addresses=\"router\"
> > +
> > +AT_CHECK([ovn-nbctl lr-route-add R1 "::/0" 10.0.0.2])
> > +AT_CHECK([ovn-nbctl lr-route-add R2 "::/0" 10.0.0.1])
> > +
> > +# Create logical port ls1-lp1 in ls1
> > +check ovn-nbctl lsp-add ls1 ls1-lp1 \
> > +-- lsp-set-addresses ls1-lp1 "$ls1_lp1_mac $ls1_lp1_ip"
> > +
> > +# Create logical port ls2-lp1 in ls2
> > +check ovn-nbctl lsp-add ls2 ls2-lp1 \
> > +-- lsp-set-addresses ls2-lp1 "$ls2_lp1_mac $ls2_lp1_ip"
> > +
> > +# Create two hypervisors and create OVS ports corresponding to logical ports.
> > +net_add n1
> > +
> > +sim_add hv1
> > +as hv1
> > +check ovs-vsctl add-br br-phys
> > +ovn_attach n1 br-phys 192.168.0.1
> > +check ovs-vsctl -- add-port br-int hv1-vif1 -- \
> > +    set interface hv1-vif1 external-ids:iface-id=ls1-lp1 \
> > +    options:tx_pcap=hv1/vif1-tx.pcap \
> > +    options:rxq_pcap=hv1/vif1-rx.pcap \
> > +    ofport-request=1
> > +
> > +sim_add hv2
> > +as hv2
> > +check ovs-vsctl add-br br-phys
> > +ovn_attach n1 br-phys 192.168.0.2
> > +check ovs-vsctl -- add-port br-int hv2-vif1 -- \
> > +    set interface hv2-vif1 external-ids:iface-id=ls2-lp1 \
> > +    options:tx_pcap=hv2/vif1-tx.pcap \
> > +    options:rxq_pcap=hv2/vif1-rx.pcap \
> > +    ofport-request=1
> > +
> > +
> > +# Pre-populate the hypervisors' ARP tables so that we don't lose any
> > +# packets for ARP resolution (native tunneling doesn't queue packets
> > +# for ARP resolution).
> > +OVN_POPULATE_ARP
> > +
> > +# Allow some time for ovn-northd and ovn-controller to catch up.
> > +wait_for_ports_up
> > +check ovn-nbctl --wait=hv sync
> > +
> > +# Packet to send.
> > +packet=$(fmt_pkt "Ether(dst='${rp_ls1_mac}', src='${ls1_lp1_mac}')/ \
> > +                        IPv6(src='${ls1_lp1_ip}', dst='${ls2_lp1_ip}', hlim=64)/ \
> > +                        UDP(sport=53, dport=4369)")
> > +check as hv1 ovs-appctl netdev-dummy/receive hv1-vif1 "$packet"
> > +
> > +# Packet to Expect
> > +# The TTL should be decremented by 2.
> > +expected=$(fmt_pkt "Ether(dst='${ls2_lp1_mac}', src='${rp_ls2_mac}')/ \
> > +                        IPv6(src='${ls1_lp1_ip}', dst='${ls2_lp1_ip}', hlim=62)/ \
> > +                        UDP(sport=53, dport=4369)")
> > +echo ${expected} > expected
> > +OVN_CHECK_PACKETS([hv2/vif1-tx.pcap], [expected])
> > +
> > +AT_CHECK([ovn-sbctl dump-flows | grep lr_in_arp_resolve | \
> > +grep "reg0 == 2001:db8:2::2" | wc -l], [0], [1
> > +])
> > +
> > +# Disable the ls2-lp1 port.
> > +check ovn-nbctl --wait=hv set logical_switch_port ls2-lp1 enabled=false
> > +
> > +AT_CHECK([ovn-sbctl dump-flows | grep lr_in_arp_resolve | \
> > +grep "reg0 == 2001:db8:2::2" | wc -l], [0], [0
> > +])
> > +
> > +# Send the same packet again and it should not be delivered
> > +check as hv1 ovs-appctl netdev-dummy/receive hv1-vif1 "$packet"
> > +
> > +# The 2nd packet sent should not be received.
> > +OVN_CHECK_PACKETS([hv2/vif1-tx.pcap], [expected])
> > +
> > +OVN_CLEANUP([hv1],[hv2])
> > +
> > +AT_CLEANUP
> > +])
> > +
> >  OVN_FOR_EACH_NORTHD([
> >  AT_SETUP([2 HVs, 2 LS, 1 lport/LS, LRs connected via LS, IPv4 over IPv6, ECMP])
> >  AT_SKIP_IF([test $HAVE_SCAPY = no])
> > --
> > 2.45.2
> >