[Bug 217997] [pf] orphaned entries in src-track

2017-03-28 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=217997

--- Comment #5 from Max  ---
Well, I can reproduce the problem.
I have three hosts running 10.3-RELEASE (GENERIC kernel): "server", "client" and
"firewall".
The complete pf.conf of the "firewall" host:

set skip on {lo, em2}

# NOTE: <servers> is a stand-in; the original angle-bracketed table name was stripped by the list archive.
table <servers> persist { 192.168.0.10, 192.168.0.20, 192.168.0.30 }


rdr proto tcp from any to 192.168.2.1 port http -> <servers> port http \
    round-robin sticky-address

block in all
block out all

pass quick proto tcp from any to <servers> port 80 \
    keep state \
    (source-track rule, max 120, max-src-states 96, \
     tcp.closing 20, tcp.finwait 15, tcp.closed 10)
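
(Aside, not part of the original report: pf's "src.track" timeout controls how long a
source-tracking entry may outlive its states; per pf.conf(5) it defaults to 0, meaning
a node should be removed as soon as its last state expires. A minimal sketch of setting
it explicitly for a rule set like the one above:

set timeout src.track 0     # default: drop a source node once its last state is gone
# set timeout src.track 60  # or keep it for another 60 seconds after the last state expires

With the default in effect, the entries shown later in this report would not be
expected to linger.)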



It works as expected as long as we stay under the "max states per rule" limit. For
example (counters only):

# pfctl -vsi
Status: Enabled for 0 days 00:17:46           Debug: Urgent

State Table                          Total             Rate
  current entries                       20
  searches                             345            0.3/s
  inserts                               40            0.0/s
  removals                              20            0.0/s
Source Tracking Table
  current entries                       20
  searches                              80            0.1/s
  inserts                               40            0.0/s
  removals                              20            0.0/s

# pfctl -vsi
Status: Enabled for 0 days 00:18:05           Debug: Urgent

State Table                          Total             Rate
  current entries                        0
  searches                             345            0.3/s
  inserts                               40            0.0/s
  removals                              40            0.0/s
Source Tracking Table
  current entries                       20
  searches                              80            0.1/s
  inserts                               40            0.0/s
  removals                              20            0.0/s

# pfctl -vsi
Status: Enabled for 0 days 00:18:16           Debug: Urgent

State Table                          Total             Rate
  current entries                        0
  searches                             345            0.3/s
  inserts                               40            0.0/s
  removals                              40            0.0/s
Source Tracking Table
  current entries                        0
  searches                              80            0.1/s
  inserts                               40            0.0/s
  removals                              40            0.0/s


But when I reach the limit:

# pfctl -vsi
Status: Enabled for 0 days 00:04:46           Debug: Urgent

State Table                          Total             Rate
  current entries                        1
  searches                            1627            5.7/s
  inserts                              203            0.7/s
  removals                             202            0.7/s
Source Tracking Table
  current entries                       10
  searches                             333            1.2/s
  inserts                               40            0.1/s
  removals                              30            0.1/s
Limit Counters
  max states per rule                    9            0.0/s
  max-src-states                         0            0.0/s
  max-src-nodes                          0            0.0/s
  max-src-conn                           0            0.0/s
  max-src-conn-rate                      0            0.0/s
  overload table insertion               0            0.0/s
  overload flush states                  0            0.0/s
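
(Aside, not part of the original report: once the "max states per rule" counter starts
climbing, the verbose rule listing is one way to confirm which rule is pinned at its
configured maximum:

# pfctl -vsr    # verbose rules; each rule is printed with its current States counter

With the rule set above, the pass rule should show its States count stuck at the
configured "max 120" while new connections are being refused.)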

# pfctl -ss
all tcp 192.168.0.10:80 (192.168.2.1:80) <- 192.168.2.14:15122       CLOSED:SYN_SENT

# pfctl -sS
192.168.2.17 -> 192.168.0.10 ( states 1, connections 0, rate 0.0/0s )
192.168.2.15 -> 192.168.0.20 ( states 1, connections 0, rate 0.0/0s )
192.168.2.14 -> 192.168.0.10 ( states 1, connections 0, rate 0.0/0s )
192.168.2.14 -> 0.0.0.0 ( states 1, connections 0, rate 0.0/0s )
192.168.2.13 -> 192.168.0.30 ( states 1, connections 0, rate 0.0/0s )
192.168.2.11 -> 192.168.0.10 ( states 1, connections 0, rate 0.0/0s )
192.168.2.12 -> 192.168.0.20 ( states 1, connections 0, rate 0.0/0s )
192.168.2.16 -> 192.168.0.30 ( states 1, connections 0, rate 0.0/0s )
192.168.2.18 -> 192.168.0.20 ( states 1, connections 0, rate 0.0/0s )
192.168.2.10 -> 192.168.0.30 ( states 1, connections 0, rate 0.0/0s )
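
(Aside, not part of the original report: while debugging, stale source nodes such as
the "192.168.2.14 -> 0.0.0.0" entry above can be cleared by hand; pfctl can flush the
whole source-tracking table or kill the nodes of a single source:

# pfctl -F Sources          # flush the entire source tracking table
# pfctl -K 192.168.2.14     # or kill only the source-tracking entries for this host

This is only a workaround, of course; it does not explain why the orphaned node is
left behind in the first place.)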


# pfctl -vsi
Status: Enabled for 0 days 00:08:19           Debug: Urgent

State Table                          Total             Rate
  current entries                        0
  searches                            1627            3.3/s
  inserts                              203            0.4/s
  removals                             203            0.4/s
Source Tracking Table
  current entries                        8
  searches                             333            0.7/s
  inserts                               40            0.1/s
  removals

Re: pf, ALTQ and 10G

2017-03-28 Thread Kristof Provost

On 28 Mar 2017, at 9:33, Eugene M. Zheganin wrote:
I need to implement QoS on a 10G interface (ix(4)) with a bandwidth of
4-5 Gbit/sec. In general I'm using pf on FreeBSD, since I like it more
than ipfw. But I'm aware that it's kind of ancient and hasn't been updated
from upstream for a long time (and upstream still doesn't support SMP).
So, my question is: is it worth sticking to pf/ALTQ on 10G interfaces?
Will pf carry such traffic?


Be aware that ALTQ will not let you configure queues with that sort of 
bandwidth.
All of the datatypes used are 32-bit integers and top out at 2 or 4 
Gbps.


Unfortunately dummynet has exactly the same problem, so switching to 
ipfw won’t help.
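
(Aside, not from the thread: a quick back-of-the-envelope check of those 32-bit
ceilings, which is presumably where the "2 or 4 Gbps" figures come from, depending on
whether a given field is signed or unsigned:

# echo "signed 32-bit max:   $(( (1<<31) - 1 )) bit/s (~2.1 Gbit/s)"
# echo "unsigned 32-bit max: $(( (1<<32) - 1 )) bit/s (~4.3 Gbit/s)"

Either way, neither limit comfortably covers the 4-5 Gbit/s Eugene wants to shape.)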


Regards,
Kristof

[Bug 217997] [pf] orphaned entries in src-track

2017-03-28 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=217997

--- Comment #4 from Robert Schulze  ---
(In reply to Max from comment #3)
These long-lived source-track entries are attached only to the rdr rule:

# pfctl -vsS | grep -A1 $client
$client -> $www_host ( states 4, connections 0, rate 0.0/0s )
   age 02:39:54, 20643 pkts, 23362337 bytes, rdr rule 0

# pfctl -vss | grep -A1 $client
(nothing shown)


One question is: why does the source-track entry report 4 states when there are no
corresponding entries in the state table, and how can that happen?

regards,
Robert Schulze



pf, ALTQ and 10G

2017-03-28 Thread Eugene M. Zheganin

Hi.

I need to implement QoS on a 10G interface (ix(4)) with a bandwidth of
4-5 Gbit/sec. In general I'm using pf on FreeBSD, since I like it more
than ipfw. But I'm aware that it's kind of ancient and hasn't been updated
from upstream for a long time (and upstream still doesn't support SMP).
So, my question is: is it worth sticking to pf/ALTQ on 10G interfaces?
Will pf carry such traffic?


Thanks.

Eugene.
