Hi, Dumitru

        Thank you for the review. I don't have performance measurements yet.
On a real physical switch, once a flow hits a dynamic/static route or a policy
route, it skips the remaining route entries, which is more reasonable logically.

In OVN, the ip_routing table has no default route entry, but the route policy
table does. If I add a default route entry to the routing table, unknown flows
will go through that default route entry, then they won't hit any route policy
but will go on to the next table. So we won't know where those flows will end
up, which may make the network confusing.
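
To illustrate the combination I have in mind (the router name "lr0" and both
next hops are just placeholders for this example):

    # static default route, matched in lr_in_ip_routing
    ovn-nbctl lr-route-add lr0 0.0.0.0/0 192.168.0.254
    # existing catch-all routing policy, matched in lr_in_policy
    ovn-nbctl lr-policy-add lr0 10 "ip4" reroute 10.0.0.254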

What about adding a default route entry on the router and deleting the default
route policy on the router? Then, if a flow hits a route entry other than the
default route entry, we set a register; if we see that the register is set in
the policy table, we skip the reroute or drop action there. Conversely, if a
flow doesn't hit any route entry, it goes to the policy table, and if it
doesn't hit any policy entry either, we should drop it.
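
As a rough sketch only (the register name REG_ROUTE_HIT and the priorities are
placeholders I made up for illustration, following the style of the patch):

    /* A non-default route hit would also set the flag, e.g. by appending
     * to the route's action string in lr_in_ip_routing: */
    ds_put_cstr(&actions, REG_ROUTE_HIT" = 1; ");

    /* A high-priority flow in lr_in_policy would then bypass the
     * reroute/drop policies for packets that already matched a route: */
    ovn_lflow_add(lflows, od, S_ROUTER_IN_POLICY, 65535,
                  REG_ROUTE_HIT" == 1", "next;");

    /* Packets that matched neither a route nor any policy would be
     * dropped by a lowest-priority flow in lr_in_policy: */
    ovn_lflow_add(lflows, od, S_ROUTER_IN_POLICY, 0, "1", debug_drop_action());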

This idea will make the ovn-kubernetes tests fail, but it's more reasonable.
What do you think? I'm looking forward to your reply!

Anyone else have ideas about this?

Best regards!
wangchuanlei

> Hi wangchuanlei,

> Thanks for the patch!

> On 7/10/23 10:46, wangchuanlei wrote:
>     If there is no route in table ip_routing, the route policy item in
>     table policy is useless.

> I'm sorry but I disagree with this.  We can always add a default route that 
> matches all traffic that's not matched by any other route.

> Afterwards, in the policy stage we can re-route/drop accordingly.

>     Because route policy has a higher priority than ip routing, modifying
>     the table ids of IP_ROUTING and POLICY is a little better. This way,
>     there is no need to reroute any more, which should be more efficient.

> I can see how this can be slightly more efficient in slow path but I'm quite 
> confident the difference won't be significant.  Do you have performance 
> measurements to indicate the opposite?


> The real problem with just swapping the two pipelines is that we change 
> behavior completely.  CMS out there (ovn-kube, neutron, etc.) rely on the 
> fact that policies are applied after routing.  We can't just decide to change 
> this and break compatibility.

> I'm quite sure the ovn-kubernetes tests also fail because of this change in 
> behavior:

> https://github.com/ovsrobot/ovn/actions/runs/5506310703/jobs/10035411109

> It also makes OVN PBR tests fail:

> https://github.com/ovsrobot/ovn/actions/runs/5506310686

> 
> Signed-off-by: wangchuanlei <wangchuan...@inspur.com>

> Regards,
> Dumitru

> ---
>  northd/northd.c | 14 ++++++++++----
>  1 file changed, 10 insertions(+), 4 deletions(-)
> 
> diff --git a/northd/northd.c b/northd/northd.c
> index a45c8b53a..35187abf8 100644
> --- a/northd/northd.c
> +++ b/northd/northd.c
> @@ -174,10 +174,10 @@ enum ovn_stage {
>      PIPELINE_STAGE(ROUTER, IN,  ND_RA_OPTIONS,   10, "lr_in_nd_ra_options")  \
>      PIPELINE_STAGE(ROUTER, IN,  ND_RA_RESPONSE,  11, "lr_in_nd_ra_response") \
>      PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_PRE,  12, "lr_in_ip_routing_pre") \
> -    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING,      13, "lr_in_ip_routing")     \
> -    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_ECMP, 14, "lr_in_ip_routing_ecmp") \
> -    PIPELINE_STAGE(ROUTER, IN,  POLICY,          15, "lr_in_policy")         \
> -    PIPELINE_STAGE(ROUTER, IN,  POLICY_ECMP,     16, "lr_in_policy_ecmp")    \
> +    PIPELINE_STAGE(ROUTER, IN,  POLICY,          13, "lr_in_policy")         \
> +    PIPELINE_STAGE(ROUTER, IN,  POLICY_ECMP,     14, "lr_in_policy_ecmp")    \
> +    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING,      15, "lr_in_ip_routing")     \
> +    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_ECMP, 16, "lr_in_ip_routing_ecmp") \
>      PIPELINE_STAGE(ROUTER, IN,  ARP_RESOLVE,     17, "lr_in_arp_resolve")    \
>      PIPELINE_STAGE(ROUTER, IN,  CHK_PKT_LEN,     18, "lr_in_chk_pkt_len")    \
>      PIPELINE_STAGE(ROUTER, IN,  LARGER_PKTS,     19, "lr_in_larger_pkts")    \
> @@ -278,6 +278,7 @@ enum ovn_stage {
>  #define REG_SRC_IPV4 "reg1"
>  #define REG_SRC_IPV6 "xxreg1"
>  #define REG_ROUTE_TABLE_ID "reg7"
> +#define REG_ROUTE_POLICY "reg2"
>  
>  /* Register used to store backend ipv6 address
>   * for load balancer affinity. */
> @@ -10240,6 +10241,7 @@ build_routing_policy_flow(struct hmap *lflows, struct ovn_datapath *od,
>          bool is_ipv4 = strchr(nexthop, '.') ? true : false;
>          ds_put_format(&actions, "%s = %s; "
>                        "%s = %s; "
> +                      "%s = 1; "
>                        "eth.src = %s; "
>                        "outport = %s; "
>                        "flags.loopback = 1; "
> @@ -10249,6 +10251,7 @@ build_routing_policy_flow(struct hmap *lflows, struct ovn_datapath *od,
>                        nexthop,
>                        is_ipv4 ? REG_SRC_IPV4 : REG_SRC_IPV6,
>                        lrp_addr_s,
> +                      REG_ROUTE_POLICY,
>                        out_port->lrp_networks.ea_s,
>                        out_port->json_key);
>  
> @@ -10340,6 +10343,7 @@ build_ecmp_routing_policy_flows(struct hmap *lflows, struct ovn_datapath *od,
>                        out_port->json_key);
>  
>          ds_clear(&match);
> +        ds_put_cstr(&actions, REG_ROUTE_POLICY" = 1; ");
>          ds_put_format(&match, REG_ECMP_GROUP_ID" == %"PRIu16" && "
>                        REG_ECMP_MEMBER_ID" == %"PRIuSIZE,
>                        ecmp_group_id, i + 1);
> @@ -12911,6 +12915,8 @@ build_mcast_lookup_flows_for_lrouter(
>       */
>      ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_ROUTING, 10550,
>                    "nd_rs || nd_ra", debug_drop_action());
> +    ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_ROUTING, 10000,
> +                  REG_ROUTE_POLICY" == 1", "reg8[0..15] = 0; next;");
>      if (!od->mcast_info.rtr.relay) {
>          return;
>      }


