> On Tue, Apr 08, 2008 at 11:59:11PM -0700, Adam Richards wrote:
> > Maybe a pf.conf knob that allows me to turn off stateful tracking
> > for a particular "nat on <iface> ..." rule?
>
> Ah, you keep mentioning 'nat' and 'rdr', which confused me
> before, but I guess what you're actually talking about is
> called 'binat' in pf:
> 
> binat
>     A binat rule specifies a bidirectional mapping between an
>     external IP netblock and an internal IP netblock.

Correct.  Sorry for the confusion.  :)

And there's another nuance as well: on ingress I need the dest
re-mapped while the src is preserved, and on egress I need the
src re-mapped back, with the [preserved] ingress src serving as
the egress dest.
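
Concretely, I believe that's exactly what a single binat rule
expresses (addresses and $ext_if invented for illustration):

    binat on $ext_if from 10.1.0.5 to any -> 192.0.2.5

i.e. inbound pkts to 192.0.2.5 get their dest rewritten to
10.1.0.5 with the src untouched, and outbound pkts from 10.1.0.5
get their src rewritten back to 192.0.2.5.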

> You're right, it should be relatively easy to give binat a 'no state'
> option...
> 
> But not for a /18 of arbitrary mappings with a high rate of change.

I'm not so concerned with "arbitrary" mappings as I'll be
statically configuring the mappings, maintaining them from
outside pf.

To your last sentence: what if they're static mappings, i.e.
not arbitrary, but with a high rate of change?  ;)

What happens in pf when a table has changed and pf needs to
re-read it?  Will pkts get dropped?
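
(For reference, I'd expect to push mapping updates with a table
replace -- the table name and file here are just placeholders:

    # pfctl -t xlate -T replace -f /etc/xlate-mappings

-- and I'm hoping such a swap doesn't stall or drop traffic.)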

> With the current translation code this would require a rule for
> every mapping,

This is how I plan on using pf -- 1:1 statically configured
translations.  IOW I don't care about free-floating addr pools.
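
For a /18 that's 2^14 = 16384 mappings, so the generated ruleset
would look something like (addresses invented):

    binat on $ext_if from 10.1.0.1 to any -> 192.0.2.1
    binat on $ext_if from 10.1.0.2 to any -> 192.0.2.2
    ...and so on, one rule per mapping, ~16k rules...

which is presumably where the linear-search cost you mention
below comes from.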

> and every packet is going to require a linear search of this
> ruleset.  Fixing this is going to require fairly major changes
> to how binat works.

Major changes, *if* we want binat to work with pools, right?  If
the core binat code remains unchanged, I'd guess swapping the
linear search for something as simple as a binary search over
sorted mappings, or maybe a radix tree (which I believe pf's own
tables already use for address lookups, as is common in
networking code), would be fairly easy for someone familiar with
pf's inner workings.  I could be over-simplifying it.

> Are you willing to pay someone to make this happen?

If major re-work is needed, possibly.  We'll want to discuss
this off-thread.

> BTW: What kind of packet forwarding rate are you hoping to get
> with this solution?

To be on par with my Linux colleagues running the latest
netfilter/iptables code I'd need to get >= 1Mpps on 10G links.
Even though pkt sizes will obviously influence pps, most flows I
deal with are rather short-lived and contain, on avg, pkt sizes
<= 512B.  I expect to find predictable inflection points.
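
(For scale: 1Mpps of 512B pkts is only about 4.1Gb/s -- 1e6
pkts/s * 512B * 8b/B -- so a saturated 10G link at that avg pkt
size is closer to 2.4Mpps, ignoring framing overhead.)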

> Much of pf's performance comes from the fact that packets
> matching state entries are not evaluated against the ruleset.

Understood -- makes sense.

Thanks

-Adam
