I think that I am now hitting a bottleneck somewhere else. Thanks for the help so far... I might come back thirsty for more later... (-:
Regards, Lars.

On Wed, Feb 15, 2023 at 4:13 PM Lars Bonnesen <lars.bonne...@gmail.com> wrote:

> lbo@PLOSLOL2VPN:/etc$ pfctl -s info
> Status: Enabled for 0 days 00:06:49              Debug: err
>
> State Table                          Total             Rate
>   current entries                   149331
>   half-open tcp                       5333
>   searches                      4462647255     10911118.0/s
>   inserts                         78143904       191060.9/s
>   removals                        77994573       190695.8/s
> Counters
>   match                          250452866       612354.2/s
>   bad-offset                             0            0.0/s
>   fragment                               1            0.0/s
>   short                                  0            0.0/s
>   normalize                              1            0.0/s
>   memory                           5247954        12831.2/s
>   bad-timestamp                          0            0.0/s
>   congestion                          1469            3.6/s
>   ip-option                              3            0.0/s
>   proto-cksum                         3012            7.4/s
>   state-mismatch                 145502864       355752.7/s
>   state-insert                         305            0.7/s
>   state-limit                            0            0.0/s
>   src-limit                              0            0.0/s
>   synproxy                               0            0.0/s
>   translate                              0            0.0/s
>   no-route                               0            0.0/s
>
> On Wed, Feb 15, 2023 at 2:15 PM Claudio Jeker <cje...@diehard.n-r-g.com> wrote:
>
>> On Wed, Feb 15, 2023 at 01:01:10PM -0000, Stuart Henderson wrote:
>> > On 2023-02-15, Lars Bonnesen <lars.bonne...@gmail.com> wrote:
>> > > One says:
>> > >
>> > > # pfctl -s info
>> > > Status: Enabled for 0 days 10:56:43           Debug: err
>> > >
>> > > State Table                          Total             Rate
>> > >   current entries                    91680
>> >
>> > Lots of entries, close to the default:
>> >
>> > $ doas pfctl -sm
>> > states        hard limit   100000
>> > src-nodes     hard limit    10000
>> > frags         hard limit    65536
>> > tables        hard limit     1000
>> > table-entries hard limit   200000
>> > pktdelay-pkts hard limit    10000
>> > anchors       hard limit      512
>> >
>> > >   half-open tcp                     4032
>> > >   searches                    3132304294        79494.1/s
>> > >   inserts                       60916552         1546.0/s
>> > >   removals                      60824872         1543.7/s
>> > > Counters
>> > >   match                         79164265         2009.1/s
>> > >   bad-offset                           0            0.0/s
>> > >   fragment                             1            0.0/s
>> > >   short                                0            0.0/s
>> > >   normalize                            0            0.0/s
>> > >   memory                         1768012           44.9/s
>> >
>> > And this most likely means that you've been bumping into the
>> > state limit plenty of times already.
>> >
>> > >   bad-timestamp                        0            0.0/s
>> > >   congestion                        1201            0.0/s
>> > >   ip-option                            0            0.0/s
>> > >   proto-cksum                        387            0.0/s
>> > >   state-mismatch                82794949         2101.2/s
>> >
>> > Loads of state mismatches and, looking at the rate, this is
>> > probably on an ongoing basis.
>> >
>> > Check to make sure that all packets match either a "pass" or "block"
>> > rule (the easiest way to do this is usually to have a simple "block"
>> > or "block log" as the first rule) - if you don't have any matching
>> > rule in the config, there is an implicit default which passes traffic
>> > *without* creating state.
>> >
>> > (One particularly common result of this is that TCP window scaling
>> > isn't handled properly such that longer lived or fast TCP connections
>> > are likely to slow down or stall.)
>> >
>> > You might also need to bump the state limit, but I'd check the above
>> > first because the high number of states might be caused because of
>> > mismatches.
>>
>> I think the state-mismatch is a result of hitting the state limit and not
>> the other way around. At over 90'000 states the default timeouts are
>> reduced by more than 50% and so states are removed too soon resulting in a
>> state-mismatch.
>>
>> So first bump the limit up and then look at the counters again.
>>
>> --
>> :wq Claudio
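[For the archive: the two suggestions in the thread — an explicit catch-all rule and a raised state limit — might be sketched as a pf.conf fragment like the one below. The 500000 figure and the "egress" pass rule are illustrative assumptions, not values from the thread; size the limit to the machine's memory.]

```
# /etc/pf.conf sketch -- illustrative only, not advice from the thread.

# Raise the state table hard limit (default is 100000, per pfctl -sm).
# 500000 is an arbitrary example figure.
set limit states 500000

# Explicit catch-all first, so no packet falls through to the implicit
# default that passes traffic *without* creating state.
block log

# Then pass the traffic you actually want; pf keeps state by default.
pass out on egress
```

After reloading with `doas pfctl -f /etc/pf.conf`, the new limit should show up in `pfctl -sm`, and `pfctl -si` can be watched to see whether the state-mismatch and memory counters stop climbing.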