On Mon, Sep 5, 2016 at 10:23 PM, <bscha...@redhat.com> wrote:

> From: Babu Shanmugam <bscha...@redhat.com>
>
> This patch adds support for marking qos on IP packets based on arbitrary
> match criteria for a logical switch.
>
> Signed-off-by: Babu Shanmugam <bscha...@redhat.com>
> Suggested-by: Mickey Spiegel <mickeys....@gmail.com>
>

Acked-by: Mickey Spiegel <mickeys....@gmail.com>

A few nits below regarding comments, whitespace changes, and documentation.

---
>  ovn/lib/logical-fields.c    |  2 +-
>  ovn/northd/ovn-northd.8.xml | 47 +++++++++++++++++++------
>  ovn/northd/ovn-northd.c     | 84 +++++++++++++++++++++++++++++----------------
>  ovn/ovn-nb.ovsschema        | 26 ++++++++++++--
>  ovn/ovn-nb.xml              | 57 ++++++++++++++++++++++++++++++
>  ovn/utilities/ovn-nbctl.c   |  5 +++
>  tests/ovn.at                | 52 ++++++++++++++++++++++++++++
>  7 files changed, 231 insertions(+), 42 deletions(-)
>
> diff --git a/ovn/lib/logical-fields.c b/ovn/lib/logical-fields.c
> index 6dbb4ae..068c000 100644
> --- a/ovn/lib/logical-fields.c
> +++ b/ovn/lib/logical-fields.c
> @@ -134,7 +134,7 @@ ovn_init_symtab(struct shash *symtab)
>      expr_symtab_add_predicate(symtab, "ip6", "eth.type == 0x86dd");
>      expr_symtab_add_predicate(symtab, "ip", "ip4 || ip6");
>      expr_symtab_add_field(symtab, "ip.proto", MFF_IP_PROTO, "ip", true);
> -    expr_symtab_add_field(symtab, "ip.dscp", MFF_IP_DSCP, "ip", false);
> +    expr_symtab_add_field(symtab, "ip.dscp", MFF_IP_DSCP_SHIFTED, "ip", false);
>      expr_symtab_add_field(symtab, "ip.ecn", MFF_IP_ECN, "ip", false);
>      expr_symtab_add_field(symtab, "ip.ttl", MFF_IP_TTL, "ip", false);
>
> diff --git a/ovn/northd/ovn-northd.8.xml b/ovn/northd/ovn-northd.8.xml
> index 3448370..25b8564 100644
> --- a/ovn/northd/ovn-northd.8.xml
> +++ b/ovn/northd/ovn-northd.8.xml
> @@ -362,7 +362,27 @@
>        </li>
>      </ul>
>
> -    <h3>Ingress Table 7: LB</h3>
> +    <h3>Ingress Table 7: <code>from-lport</code> QoS marking</h3>
> +
> +    <p>
> +      Logical flows in this table closely reproduce those in the
> +      <code>QoS</code> table in the <code>OVN_Northbound</code> database
> +      for the <code>from-lport</code> direction.
> +    </p>
> +
> +    <ul>
> +      <li>
> +        For every qos_rules for every logical switch a flow will be added at
> +        priorities mentioned in the QoS table.
> +
> +      <li>
> +        One priority-0 fallback flow that matches all packets and advances to
> +        the next table.
> +    </ul>
> +
> +    <h3>Ingress Table 8: LB</h3>
>
>      <p>
>        It contains a priority-0 flow that simply moves traffic to the next
> @@ -375,7 +395,7 @@
>        connection.)
>      </p>
>
> -    <h3>Ingress Table 8: Stateful</h3>
> +    <h3>Ingress Table 9: Stateful</h3>
>
>      <ul>
>        <li>
> @@ -412,7 +432,7 @@
>        </li>
>      </ul>
>
> -    <h3>Ingress Table 9: ARP/ND responder</h3>
> +    <h3>Ingress Table 10: ARP/ND responder</h3>
>
>      <p>
>       This table implements ARP/ND responder for known IPs.  It contains these
> @@ -484,7 +504,7 @@ nd_na {
>        </li>
>      </ul>
>
> -    <h3>Ingress Table 10: DHCP option processing</h3>
> +    <h3>Ingress Table 11: DHCP option processing</h3>
>
>      <p>
>        This table adds the DHCPv4 options to a DHCPv4 packet from the
> @@ -544,7 +564,7 @@ next;
>        </li>
>      </ul>
>
> -    <h3>Ingress Table 11: DHCP responses</h3>
> +    <h3>Ingress Table 12: DHCP responses</h3>
>
>      <p>
>       This table implements DHCP responder for the DHCP replies generated by
> @@ -626,7 +646,7 @@ output;
>        </li>
>      </ul>
>
> -    <h3>Ingress Table 12: Destination Lookup</h3>
> +    <h3>Ingress Table 13 Destination Lookup</h3>
>
>      <p>
>        This table implements switching behavior.  It contains these logical
> @@ -693,7 +713,14 @@ output;
>        <code>to-lport</code> ACLs.
>      </p>
>
> -    <h3>Egress Table 5: Stateful</h3>
> +    <h3>Egress Table 5: <code>to-lport</code> QoS marking</h3>
> +
> +    <p>
> +      This is similar to ingress table <code>QoS marking</code> except for
> +      <code>to-lport</code> qos rules.
> +    </p>
> +
> +    <h3>Egress Table 6: Stateful</h3>
>
>      <p>
>        This is similar to ingress table <code>Stateful</code> except that
> @@ -704,10 +731,10 @@ output;
>        Also a priority 34000 logical flow is added for each logical port which
>        has DHCPv4 options defined to allow the DHCPv4 reply packet and which has
>        DHCPv6 options defined to allow the DHCPv6 reply packet from the
> -      <code>Ingress Table 11: DHCP responses</code>.
> +      <code>Ingress Table 12: DHCP responses</code>.
>      </p>
>
> -    <h3>Egress Table 6: Egress Port Security - IP</h3>
> +    <h3>Egress Table 7: Egress Port Security - IP</h3>
>
>      <p>
>        This is similar to the port security logic in table
> @@ -717,7 +744,7 @@ output;
>        <code>ip4.src</code> and <code>ip6.src</code>
>      </p>
>
> -    <h3>Egress Table 7: Egress Port Security - L2</h3>
> +    <h3>Egress Table 8: Egress Port Security - L2</h3>
>
>      <p>
>        This is similar to the ingress port security logic in ingress table
> diff --git a/ovn/northd/ovn-northd.c b/ovn/northd/ovn-northd.c
> index 9ce2af9..f36eae6 100644
> --- a/ovn/northd/ovn-northd.c
> +++ b/ovn/northd/ovn-northd.c
> @@ -93,21 +93,22 @@ enum ovn_datapath_type {
>   * form the stage's full name, e.g. S_SWITCH_IN_PORT_SEC_L2,
>   * S_ROUTER_OUT_DELIVERY. */
>  enum ovn_stage {
> -#define PIPELINE_STAGES                                               \
> -    /* Logical switch ingress stages. */                              \
> -    PIPELINE_STAGE(SWITCH, IN,  PORT_SEC_L2,    0, "ls_in_port_sec_l2")   \
> -    PIPELINE_STAGE(SWITCH, IN,  PORT_SEC_IP,    1, "ls_in_port_sec_ip")   \
> -    PIPELINE_STAGE(SWITCH, IN,  PORT_SEC_ND,    2, "ls_in_port_sec_nd")   \
> -    PIPELINE_STAGE(SWITCH, IN,  PRE_ACL,        3, "ls_in_pre_acl")      \
> -    PIPELINE_STAGE(SWITCH, IN,  PRE_LB,         4, "ls_in_pre_lb")       \
> -    PIPELINE_STAGE(SWITCH, IN,  PRE_STATEFUL,   5, "ls_in_pre_stateful")  \
> -    PIPELINE_STAGE(SWITCH, IN,  ACL,            6, "ls_in_acl")          \
> -    PIPELINE_STAGE(SWITCH, IN,  LB,             7, "ls_in_lb")           \
> -    PIPELINE_STAGE(SWITCH, IN,  STATEFUL,       8, "ls_in_stateful")     \
> -    PIPELINE_STAGE(SWITCH, IN,  ARP_ND_RSP,     9, "ls_in_arp_rsp")      \
> -    PIPELINE_STAGE(SWITCH, IN,  DHCP_OPTIONS,   10, "ls_in_dhcp_options") \
> -    PIPELINE_STAGE(SWITCH, IN,  DHCP_RESPONSE,  11, "ls_in_dhcp_response") \
> -    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,        12, "ls_in_l2_lkup")     \
> +#define PIPELINE_STAGES                                                   \
> +    /* Logical switch ingress stages. */                                  \
> +    PIPELINE_STAGE(SWITCH, IN,  PORT_SEC_L2,    0, "ls_in_port_sec_l2")   \
> +    PIPELINE_STAGE(SWITCH, IN,  PORT_SEC_IP,    1, "ls_in_port_sec_ip")   \
> +    PIPELINE_STAGE(SWITCH, IN,  PORT_SEC_ND,    2, "ls_in_port_sec_nd")   \
> +    PIPELINE_STAGE(SWITCH, IN,  PRE_ACL,        3, "ls_in_pre_acl")       \
> +    PIPELINE_STAGE(SWITCH, IN,  PRE_LB,         4, "ls_in_pre_lb")        \
> +    PIPELINE_STAGE(SWITCH, IN,  PRE_STATEFUL,   5, "ls_in_pre_stateful")  \
> +    PIPELINE_STAGE(SWITCH, IN,  ACL,            6, "ls_in_acl")           \
> +    PIPELINE_STAGE(SWITCH, IN,  QOS_MARK,       7, "ls_in_qos_mark")      \
> +    PIPELINE_STAGE(SWITCH, IN,  LB,             8, "ls_in_lb")            \
> +    PIPELINE_STAGE(SWITCH, IN,  STATEFUL,       9, "ls_in_stateful")      \
> +    PIPELINE_STAGE(SWITCH, IN,  ARP_ND_RSP,    10, "ls_in_arp_rsp")       \
> +    PIPELINE_STAGE(SWITCH, IN,  DHCP_OPTIONS,  11, "ls_in_dhcp_options")  \
> +    PIPELINE_STAGE(SWITCH, IN,  DHCP_RESPONSE, 12, "ls_in_dhcp_response") \
> +    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,       13, "ls_in_l2_lkup")       \
>                                                                        \
>      /* Logical switch egress stages. */                               \
>      PIPELINE_STAGE(SWITCH, OUT, PRE_LB,       0, "ls_out_pre_lb")     \
> @@ -115,9 +116,10 @@ enum ovn_stage {
>      PIPELINE_STAGE(SWITCH, OUT, PRE_STATEFUL, 2, "ls_out_pre_stateful")  \
>      PIPELINE_STAGE(SWITCH, OUT, LB,           3, "ls_out_lb")            \
>      PIPELINE_STAGE(SWITCH, OUT, ACL,          4, "ls_out_acl")           \
> -    PIPELINE_STAGE(SWITCH, OUT, STATEFUL,     5, "ls_out_stateful")      \
> -    PIPELINE_STAGE(SWITCH, OUT, PORT_SEC_IP,  6, "ls_out_port_sec_ip")   \
> -    PIPELINE_STAGE(SWITCH, OUT, PORT_SEC_L2,  7, "ls_out_port_sec_l2")   \
> +    PIPELINE_STAGE(SWITCH, OUT, QOS_MARK,     5, "ls_out_qos_mark")      \
> +    PIPELINE_STAGE(SWITCH, OUT, STATEFUL,     6, "ls_out_stateful")      \
> +    PIPELINE_STAGE(SWITCH, OUT, PORT_SEC_IP,  7, "ls_out_port_sec_ip")   \
> +    PIPELINE_STAGE(SWITCH, OUT, PORT_SEC_L2,  8, "ls_out_port_sec_l2")   \
>                                                                        \
>      /* Logical router ingress stages. */                              \
>      PIPELINE_STAGE(ROUTER, IN,  ADMISSION,   0, "lr_in_admission")    \
> @@ -2493,6 +2495,29 @@ build_acls(struct ovn_datapath *od, struct hmap *lflows)
>  }
>
>  static void
> +build_qos(struct ovn_datapath *od, struct hmap *lflows) {
> +    ovn_lflow_add(lflows, od, S_SWITCH_IN_QOS_MARK, 0, "1", "next;");
> +    ovn_lflow_add(lflows, od, S_SWITCH_OUT_QOS_MARK, 0, "1", "next;");
> +
> +    for (size_t i = 0; i < od->nbs->n_qos_rules; i++) {
> +        struct nbrec_qos *qos = od->nbs->qos_rules[i];
> +        bool ingress = !strcmp(qos->direction, "from-lport") ? true :false;
> +        enum ovn_stage stage = ingress ? S_SWITCH_IN_QOS_MARK : S_SWITCH_OUT_QOS_MARK;
> +
> +        if (!strcmp(qos->key_action, "dscp")) {
> +            struct ds dscp_action = DS_EMPTY_INITIALIZER;
> +
> +            ds_put_format(&dscp_action, "ip.dscp = %d; next;",
> +                          (uint8_t)qos->value_action);
> +            ovn_lflow_add(lflows, od, stage,
> +                          qos->priority,
> +                          qos->match, ds_cstr(&dscp_action));
> +            ds_destroy(&dscp_action);
> +        }
> +    }
> +}
> +
> +static void
>  build_lb(struct ovn_datapath *od, struct hmap *lflows)
>  {
>      /* Ingress and Egress LB Table (Priority 0): Packets are allowed by
> @@ -2596,7 +2621,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>      struct ds actions = DS_EMPTY_INITIALIZER;
>
>      /* Build pre-ACL and ACL tables for both ingress and egress.
> -     * Ingress tables 3 and 4.  Egress tables 0 and 1. */
> +     * Ingress tables 4 and 5.  Egress tables 0 and 1. */
>

This is an old comment that does not seem to have been updated to match
code additions for quite a while.  This code now covers a number of tables
that exist in both the ingress and egress pipelines: pre-ACL, pre-LB,
pre-stateful, ACL, QoS, LB, and stateful.
That is ingress tables 3 through 9 and egress tables 0 through 6.
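
If it helps, the comment could be updated along these lines (just a
suggestion for wording; the table ranges come from the new numbering above):

```c
    /* Build pre-ACL, pre-LB, pre-stateful, ACL, QoS, LB, and stateful tables
     * for both the ingress and egress pipelines.
     * Ingress tables 3 through 9.  Egress tables 0 through 6. */
```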


>      struct ovn_datapath *od;
>      HMAP_FOR_EACH (od, key_node, datapaths) {
>          if (!od->nbs) {
> @@ -2607,6 +2632,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>          build_pre_lb(od, lflows);
>          build_pre_stateful(od, lflows);
>          build_acls(od, lflows);
> +        build_qos(od, lflows);
>          build_lb(od, lflows);
>          build_stateful(od, lflows);
>      }
> @@ -2667,8 +2693,8 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>          }
>      }
>
> -    /* Ingress table 1 and 2: Port security - IP and ND, by default goto next.
> -     * (priority 0)*/
> +    /* Ingress table 1 and 2: Port security - IP and ND,
> +     * by default goto next. (priority 0) */
>

This is just a whitespace change in an area that you are no longer
touching, so it should probably be reverted to the original form.


>      HMAP_FOR_EACH (od, key_node, datapaths) {
>          if (!od->nbs) {
>              continue;
> @@ -2678,7 +2704,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>          ovn_lflow_add(lflows, od, S_SWITCH_IN_PORT_SEC_IP, 0, "1", "next;");
>      }
>
> -    /* Ingress table 9: ARP/ND responder, skip requests coming from localnet
> +    /* Ingress table 10: ARP/ND responder, skip requests coming from localnet
>       * ports. (priority 100). */
>      HMAP_FOR_EACH (op, key_node, ports) {
>          if (!op->nbsp) {
> @@ -2693,7 +2719,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>          }
>      }
>
> -    /* Ingress table 9: ARP/ND responder, reply for known IPs.
> +    /* Ingress table 10: ARP/ND responder, reply for known IPs.
>       * (priority 50). */
>      HMAP_FOR_EACH (op, key_node, ports) {
>          if (!op->nbsp) {
> @@ -2764,7 +2790,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>          }
>      }
>
> -    /* Ingress table 9: ARP/ND responder, by default goto next.
> +    /* Ingress table 10: ARP/ND responder, by default goto next.
>       * (priority 0)*/
>      HMAP_FOR_EACH (od, key_node, datapaths) {
>          if (!od->nbs) {
> @@ -2774,7 +2800,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>          ovn_lflow_add(lflows, od, S_SWITCH_IN_ARP_ND_RSP, 0, "1", "next;");
>      }
>
> -    /* Logical switch ingress table 10 and 11: DHCP options and response
> +    /* Logical switch ingress table 11 and 12 DHCP options and response
>

":" is missing after 12


>           * priority 100 flows. */
>      HMAP_FOR_EACH (op, key_node, ports) {
>          if (!op->nbsp) {
> @@ -2853,7 +2879,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>          }
>      }
>
> -    /* Ingress table 10 and 11: DHCP options and response, by default goto next.
> +    /* Ingress table 11 and 12: DHCP options and response, by default goto next.
>       * (priority 0). */
>
>      HMAP_FOR_EACH (od, key_node, datapaths) {
> @@ -2865,7 +2891,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>          ovn_lflow_add(lflows, od, S_SWITCH_IN_DHCP_RESPONSE, 0, "1", "next;");
>      }
>
> -    /* Ingress table 12: Destination lookup, broadcast and multicast handling
> +    /* Ingress table 13: Destination lookup, broadcast and multicast handling
>       * (priority 100). */
>      HMAP_FOR_EACH (op, key_node, ports) {
>          if (!op->nbsp) {
> @@ -2885,7 +2911,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>                        "outport = \""MC_FLOOD"\"; output;");
>      }
>
> -    /* Ingress table 12: Destination lookup, unicast handling (priority 50), */
> +    /* Ingress table 13: Destination lookup, unicast handling (priority 50), */
>      HMAP_FOR_EACH (op, key_node, ports) {
>          if (!op->nbsp) {
>              continue;
> @@ -2932,7 +2958,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>          }
>      }
>
> -    /* Ingress table 12: Destination lookup for unknown MACs (priority 0). */
> +    /* Ingress table 13: Destination lookup for unknown MACs (priority 0). */
>      HMAP_FOR_EACH (od, key_node, datapaths) {
>          if (!od->nbs) {
>              continue;
> diff --git a/ovn/ovn-nb.ovsschema b/ovn/ovn-nb.ovsschema
> index 456ae98..6196a12 100644
> --- a/ovn/ovn-nb.ovsschema
> +++ b/ovn/ovn-nb.ovsschema
> @@ -1,7 +1,7 @@
>  {
>      "name": "OVN_Northbound",
> -    "version": "5.3.1",
> -    "cksum": "1921908091 9353",
> +    "version": "5.4.0",
> +    "cksum": "136165935 10603",
>      "tables": {
>          "NB_Global": {
>              "columns": {
> @@ -26,6 +26,11 @@
>                                            "refType": "strong"},
>                                    "min": 0,
>                                    "max": "unlimited"}},
> +                "qos_rules": {"type": {"key": {"type": "uuid",
> +                                          "refTable": "QoS",
> +                                          "refType": "strong"},
> +                                  "min": 0,
> +                                  "max": "unlimited"}},
>                  "load_balancer": {"type": {"key": {"type": "uuid",
>                                                    "refTable": "Load_Balancer",
>                                                    "refType": "strong"},
> @@ -118,6 +123,23 @@
>                      "type": {"key": "string", "value": "string",
>                               "min": 0, "max": "unlimited"}}},
>              "isRoot": false},
> +        "QoS": {
> +            "columns": {
> +                "priority": {"type": {"key": {"type": "integer",
> +                                              "minInteger": 0,
> +                                              "maxInteger": 32767}}},
> +                "direction": {"type": {"key": {"type": "string",
> +                                            "enum": ["set", ["from-lport", "to-lport"]]}}},
> +                "match": {"type": "string"},
> +                "action": {"type": {"key": {"type": "string",
> +                                            "enum": ["set", ["dscp"]]},
> +                                    "value": {"type": "integer",
> +                                              "minInteger": 0,
> +                                              "maxInteger": 63}}},
> +                "external_ids": {
> +                    "type": {"key": "string", "value": "string",
> +                             "min": 0, "max": "unlimited"}}},
> +            "isRoot": false},
>          "Logical_Router": {
>              "columns": {
>                  "name": {"type": "string"},
> diff --git a/ovn/ovn-nb.xml b/ovn/ovn-nb.xml
> index 42dfa4f..8d9ac24 100644
> --- a/ovn/ovn-nb.xml
> +++ b/ovn/ovn-nb.xml
> @@ -119,6 +119,10 @@
>        Access control rules that apply to packets within the logical switch.
>      </column>
>
> +    <column name="qos_rules">
> +      QOS marking rules that apply to packets within the logical switch.
> +    </column>
> +
>      <group title="other_config">
>        <p>
>          Additional configuration options for the logical switch.
> @@ -871,6 +875,59 @@
>      </group>
>    </table>
>
> +  <table name="QoS" title="QOS table">
> +    <p>
> +      Each row in this table represents one QOS rule for a logical switch
> +      that points to it through its <ref column="qos_rules"/> column.  The
> +      <ref column="action"/> column for the highest-<ref column="priority"/>
> +      matching row in this table determines a packet's qos marking.  If no
> +      row matches, packets will not have any qos marking.
> +    </p>
> +
> +    <column name="priority">
> +      <p>
> +        The QOS rule's priority.  Rules with numerically higher priority
> +        take precedence over those with lower.  If two ACL rules with
>

/ACL/QoS


> +        the same priority both match, then the one actually applied to a
> +        packet is undefined.
> +      </p>
> +    </column>
> +
> +    <column name="direction">
> +      <p>
> +        The value of this fields is similar to <ref column="direction"
>

/fields/field (singular)


> +        table="ACL" db="OVN_Northbound"/> column in the OVN Northbound
> +        database's <ref table="ACL" db="OVN_Northbound"/> table.
> +      </p>
> +    </column>
> +
> +    <column name="match">
> +      <p>
> +        The packets that the QOS rules should match, in the same expression
> +        language used for the <ref column="match" table="Logical_Flow"
> +        db="OVN_Southbound"/> column in the OVN Southbound database's
> +        <ref table="Logical_Flow" db="OVN_Southbound"/> table.  The
> +        <code>outport</code> logical port is only available in the
> +        <code>to-lport</code> direction. The <code>inport</code> is
> +        available in only <code>from-lport</code> direction.
>

Why would the inport not be available in both directions, like for ACLs?
I cannot think of a use case offhand, but I cannot think of a reason to
preclude it either, and you have not added any code that precludes it.


> +      </p>
> +    </column>
> +
> +    <column name="action">
> +      <p>The action to be performed on the matched packet</p>
> +      <ul>
> +        <li>
> +          <code>dscp</code>: The value of this action should be in the
> +          range of 0 to 63 (inclusive).
> +        </li>
> +      </ul>
> +    </column>
> +
> +    <column name="external_ids">
> +      See <em>External IDs</em> at the beginning of this document.
> +    </column>
> +  </table>
> +
>    <table name="Logical_Router_Port" title="L3 logical router port">
>      <p>
>        A port within an L3 logical router.
> diff --git a/ovn/utilities/ovn-nbctl.c b/ovn/utilities/ovn-nbctl.c
> index d6d64ea..a72028c 100644
> --- a/ovn/utilities/ovn-nbctl.c
> +++ b/ovn/utilities/ovn-nbctl.c
> @@ -2161,6 +2161,11 @@ static const struct ctl_table_class tables[] = {
>         NULL},
>        {NULL, NULL, NULL}}},
>
> +    {&nbrec_table_qos,
> +     {{&nbrec_table_qos, NULL,
> +       NULL},
> +      {NULL, NULL, NULL}}},
> +
>      {NULL, {{NULL, NULL, NULL}, {NULL, NULL, NULL}}}
>  };
>
> diff --git a/tests/ovn.at b/tests/ovn.at
> index 2fd432b..53e92a2 100644
> --- a/tests/ovn.at
> +++ b/tests/ovn.at
> @@ -5061,3 +5061,55 @@ OVS_WAIT_UNTIL([test `ovs-ofctl dump-flows br-int table=0 | grep REG13 | wc -l`
>  OVN_CLEANUP([hv1])
>
>  AT_CLEANUP
> +
> +AT_SETUP([ovn -- DSCP marking check])
> +AT_KEYWORDS([ovn])
> +ovn_start
> +
> +# Configure the Northbound database
> +ovn-nbctl ls-add lsw0
> +
> +ovn-nbctl lsp-add lsw0 lp1
> +ovn-nbctl lsp-add lsw0 lp2
> +ovn-nbctl lsp-set-addresses lp1 f0:00:00:00:00:01
> +ovn-nbctl lsp-set-addresses lp2 f0:00:00:00:00:02
> +ovn-nbctl lsp-set-port-security lp1 f0:00:00:00:00:01
> +ovn-nbctl lsp-set-port-security lp2 f0:00:00:00:00:02
> +net_add n1
> +sim_add hv
> +as hv
> +ovs-vsctl add-br br-phys
> +ovn_attach n1 br-phys 192.168.0.1
> +ovs-vsctl add-port br-int vif1 -- set Interface vif1 external-ids:iface-id=lp1 options:tx_pcap=vif1-tx.pcap options:rxq_pcap=vif1-rx.pcap ofport-request=1
> +ovs-vsctl add-port br-int vif2 -- set Interface vif2 external-ids:iface-id=lp2 options:tx_pcap=vif2-tx.pcap options:rxq_pcap=vif2-rx.pcap ofport-request=2
> +
> +# check at L2
> +AT_CHECK([ovs-appctl ofproto/trace br-int 'in_port=1,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02' -generate], [0], [stdout])
> +AT_CHECK([grep "Final flow:" stdout], [0],[Final flow: reg13=0x2,reg14=0x1,reg15=0x2,metadata=0x1,in_port=1,vlan_tci=0x0000,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,dl_type=0x0000
> +])
> +
> +# check at L3 without dscp marking
> +AT_CHECK([ovs-appctl ofproto/trace br-int 'in_port=1,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,dl_type=0x800,nw_src=1.1.1.1,nw_dst=1.1.1.2' -generate], [0], [stdout])
> +AT_CHECK([grep "Final flow:" stdout], [0],[Final flow: ip,reg13=0x2,reg14=0x1,reg15=0x2,metadata=0x1,in_port=1,vlan_tci=0x0000,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,nw_src=1.1.1.1,nw_dst=1.1.1.2,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0
> +])
> +
> +# Mark DSCP with a valid value
> +qos_id=$(ovn-nbctl -- --id=@lp1-qos create QoS priority=100 action=dscp=48 match="inport\=\=\"lp1\"" direction="from-lport" -- set Logical_Switch lsw0 qos_rules=@lp1-qos)
> +AT_CHECK([ovs-appctl ofproto/trace br-int 'in_port=1,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,dl_type=0x800,nw_src=1.1.1.1,nw_dst=1.1.1.2' -generate], [0], [stdout])
> +AT_CHECK([grep "Final flow:" stdout], [0],[Final flow: ip,reg13=0x2,reg14=0x1,reg15=0x2,metadata=0x1,in_port=1,vlan_tci=0x0000,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,nw_src=1.1.1.1,nw_dst=1.1.1.2,nw_proto=0,nw_tos=192,nw_ecn=0,nw_ttl=0
> +])
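
A note for anyone cross-checking the expected nw_tos values in these
traces: ip.dscp is the upper six bits of the IPv4 ToS byte, so the ToS
shown is just the DSCP value shifted left by two.  A quick illustration
(the dscp_to_tos helper is made up for this example, not part of the patch):

```python
# DSCP occupies the top 6 bits of the IPv4 ToS byte; ECN takes the low 2.
def dscp_to_tos(dscp):
    """Return the ToS byte for a DSCP value, with ECN bits zero."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP must be in the range 0-63")
    return dscp << 2

print(dscp_to_tos(48))  # 192, matching nw_tos=192 in the trace above
print(dscp_to_tos(63))  # 252, matching the dscp=63 case
```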
> +
> +# Update the DSCP marking
> +ovn-nbctl set QoS $qos_id action=dscp=63
> +AT_CHECK([ovs-appctl ofproto/trace br-int 'in_port=1,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,dl_type=0x800,nw_src=1.1.1.1,nw_dst=1.1.1.2' -generate], [0], [stdout])
> +AT_CHECK([grep "Final flow:" stdout], [0],[Final flow: ip,reg13=0x2,reg14=0x1,reg15=0x2,metadata=0x1,in_port=1,vlan_tci=0x0000,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,nw_src=1.1.1.1,nw_dst=1.1.1.2,nw_proto=0,nw_tos=252,nw_ecn=0,nw_ttl=0
> +])
> +
> +# Disable DSCP marking
> +ovn-nbctl clear Logical_Switch lsw0 qos_rules
> +AT_CHECK([ovs-appctl ofproto/trace br-int 'in_port=1,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,dl_type=0x800,nw_src=1.1.1.1,nw_dst=1.1.1.2' -generate], [0], [stdout])
> +AT_CHECK([grep "Final flow:" stdout], [0],[Final flow: ip,reg13=0x2,reg14=0x1,reg15=0x2,metadata=0x1,in_port=1,vlan_tci=0x0000,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,nw_src=1.1.1.1,nw_dst=1.1.1.2,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0
> +])
>

It seems like it might be worth adding a test case for "to-lport" / egress
QoS rules.
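
Something along these lines might work, though I have not run it; it
reuses the lsw0/lp1/lp2 setup from the test above and assumes outport
matching behaves as documented in the to-lport direction:

```shell
# Untested sketch of a "to-lport" QoS test, reusing lsw0/lp1/lp2 from above.
# Mark packets delivered to lp2 with DSCP 32 via the egress pipeline.
ovn-nbctl -- --id=@lp2-qos create QoS priority=100 action=dscp=32 \
    match="outport\=\=\"lp2\"" direction="to-lport" \
    -- set Logical_Switch lsw0 qos_rules=@lp2-qos

# A trace from lp1 toward lp2 should then show nw_tos=128 (32 << 2).
AT_CHECK([ovs-appctl ofproto/trace br-int 'in_port=1,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,dl_type=0x800,nw_src=1.1.1.1,nw_dst=1.1.1.2' -generate], [0], [stdout])
AT_CHECK([grep "Final flow:" stdout | grep -q "nw_tos=128"], [0], [])
```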

Mickey


> +
> +OVN_CLEANUP([hv])
> +AT_CLEANUP
> --
> 1.9.1
>
> _______________________________________________
> dev mailing list
> dev@openvswitch.org
> http://openvswitch.org/mailman/listinfo/dev
>