The OpenFlow 1.0 spec speaks of "a flow table" in Section 2 and at the start of Section 3, but by Section 3.2 it becomes obvious that the intent was to allow multiple flow tables.

In Section 3.4 (Matching) of the OpenFlow 1.0 spec, a flow chart shows the
packet being matched against table 0 and then against subsequent tables,
with the switch applying the rule found in the lowest-numbered table.
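
In other words, the flow chart describes a lookup loop roughly like this minimal Python sketch (the table object and its lookup() method are hypothetical names, not from any real implementation):

    def pipeline_lookup(tables, packet):
        # Consult tables in ascending order; first table with a match wins.
        for table in tables:              # table 0, then 1, then 2, ...
            entry = table.lookup(packet)  # best match within this table, or None
            if entry is not None:
                return entry              # lowest-numbered table decides
        return None                       # table miss: OF 1.0 sends the packet
                                          # to the controller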

Later in that section, the spec states
Packets are matched against flow entries based on prioritization. An entry that specifies an exact match (i.e., it has no wildcards) is always the highest priority. All wildcard entries have a priority associated with them. Higher priority entries must match before lower priority ones. If multiple entries have the same priority, the switch is free to choose any ordering. Higher numbers have higher priorities.
Note carefully that this says "flow entries", i.e., elements of the tables. It seems this is intended to mean "Within a single flow table, packets are matched ...". I would suggest that clarification in future specs, if not already done.
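
Read that way, per-table matching looks roughly like this sketch (the entry attributes matches(), is_exact, and priority are illustrative stand-ins, not a real API):

    def table_lookup(entries, packet):
        best = None
        for e in entries:
            if not e.matches(packet):
                continue
            if e.is_exact:
                return e              # an exact-match entry always wins
            if best is None or e.priority > best.priority:
                best = e              # otherwise highest priority wins;
                                      # ties may be broken arbitrarily
        return best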


So, priorities are relevant only within a flow table. As far as I've seen, the user can't directly specify which flow rules get put in which tables. The spec states
For valid (non-overlapping) ADD requests, or those with no overlap checking, the switch must insert the flow entry at the lowest numbered table for which the switch supports all wildcards set in the flow_match struct, and for which the
priority would be observed during the matching process.
So, the table assigned to a rule depends on the vendor's implementation. Priorities be damned. You can't even specify a higher-numbered table into which the rule should be placed. What I've had to do is resort to artificially modifying the match criteria in order to force the rule into a particular table. You can imagine that's not very portable!
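
For what it's worth, the insertion rule quoted above boils down to something like this sketch (flow_wildcards standing in for the wildcard bits set in the flow_match struct, supported_wildcards for the bits a table advertises; the priority-observed condition is omitted for brevity):

    def choose_table(tables, flow_wildcards):
        for num, table in enumerate(tables):
            # lowest-numbered table whose supported wildcards cover the match
            if (flow_wildcards & ~table.supported_wildcards) == 0:
                return num
        return None  # no table can hold this flow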

Suppose I have two rules A and B, where B matches a subset of the packets that A matches, and B has a higher priority. For instance, suppose that A matches all IPv4 packets from 10.0.0.1 to 10.0.0.2, while B matches just the UDP packets with destination port 53 from 10.0.0.1 to 10.0.0.2. As a user of OpenFlow, I'd like rule B to have a higher priority so it overrides rule A (which is behaving like a switch rule, letting all other packets through). But if rule A is stored in a lower-numbered table, it wins out, and rule B is never executed.
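
Written out with OpenFlow 1.0 / dpctl-style field names (the dict form here is just illustration, not any controller's API), the two matches are:

    rule_A = dict(priority=100, dl_type=0x0800,          # all IPv4 packets
                  nw_src='10.0.0.1', nw_dst='10.0.0.2')  # from .1 to .2
    rule_B = dict(priority=200, dl_type=0x0800,
                  nw_src='10.0.0.1', nw_dst='10.0.0.2',
                  nw_proto=17, tp_dst=53)                # just UDP to port 53
    # rule_A wildcards more fields, so a switch may put it in a lower-numbered
    # table than rule_B -- and then rule_A matches first despite its priority.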

To do this right, I'd have to issue an OFPST_TABLE stats request, remember the results, compare my intended flow matches against the wildcard capabilities of those tables, and dynamically alter rule A's match in some way that doesn't change what it matches, so that the lower-priority A lands in the same or a higher-numbered table than the higher-priority B. I could also precompute these and use runtime flags after checking which switch(es) I have to deal with. It might mean that different switches at the same site get different versions of the rule, depending upon their make or software version.
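
A rough sketch of that dance, with get_table_stats(), table_for(), and narrow_match() standing in for code you'd have to write yourself:

    stats = get_table_stats(datapath)    # one OFPST_TABLE entry per table
    target = table_for(stats, rule_B)    # lowest table that can hold rule B
    while table_for(stats, rule_A) < target:
        # clear one wildcard from rule_A's match without changing which
        # packets it matches (e.g., pin nw_tos=0 if all the traffic has TOS 0)
        narrow_match(rule_A)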

UGLY!

At the least, I wish the OpenFlow spec allowed a Flow Mod to specify the minimum table number for installing the rule. That is, if I specify table 2, the switch would not install it in table 0 or 1, even if it could.

----

pyswitch.py creates Flow Mod rules with a priority of 32768. But these rules have a Match section that matches every field of the packet; by the spec, such exact matches are the highest priority.

I found that the HP Procurve set the recorded priority of these rules to 65535, and apparently executed them in hardware. When dpctl displayed the rules, they showed up with a priority of 65535.

So, what do you do if you want to use pyswitch but don't want the rules executed in hardware? I found that removing the TOS from the match was sufficient to keep the rule out of hardware. (In fact, it got moved to table 2.)
    flow = extract_flow(packet)
    del flow['nw_tos']  # don't match on TOS, just to avoid the HP Procurve
                        # putting this rule in hardware

I've also run into a case much like the DNS example above. Since I was dealing with a single switch, I made a manual change to the match criteria. Not portable, and subject to bit rot.
