On 2020-08-12 05:11, Alan McKay wrote:
Hey folks,

This is one that is difficult to test in a test environment. I've got OpenBSD 6.5 on a relatively new pair of servers, each with 8G RAM. With some scripting I'm looking at feeding blocked IPs to the firewalls to block bad guys in near real time, but in theory, if we got attacked by a botnet or something like that, it could result in a few thousand IPs being blocked. Possibly even tens of thousands.

Are there any real-world data out there on how big a block list we can handle without impacting performance? We're doing the standard /etc/blacklist to load a table and then have a block on the table right at the top of the ruleset.

thanks,
-Alan
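For reference, a minimal sketch of the setup described above. The table name and block rule are illustrative assumptions, not necessarily the exact config in use:

```shell
# /etc/pf.conf -- hypothetical example; adjust names/paths to your setup.
# "persist" keeps the table alive even if no rule references it;
# "file" preloads it from the blacklist when the ruleset is loaded.
table <blacklist> persist file "/etc/blacklist"

# Block matching sources as early as possible in the ruleset.
block in quick from <blacklist>

# A script can feed new addresses in near real time without a ruleset reload:
#   pfctl -t blacklist -T add 203.0.113.7
# Or bulk-replace the table contents from the file:
#   pfctl -t blacklist -T replace -f /etc/blacklist
```

Table lookups in pf are radix-tree searches, so lookup cost scales with address length rather than the number of entries, which is why tables handle very large lists well.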
As Otto said, if you're using tables, then you should be fine. I'm doing geoip blocking and all sorts of filtering using a pf table that contains over 200 undecillion addresses (that figure obviously includes CIDR block expansion):
# Entries (+-)
9482 addresses added.
10859 addresses deleted.

# Entries (expanded CIDR blocks)
IPv4 addresses in table: 966545967
IPv6 addresses in table: 298179424470603435988810818668701155328

fw$ wc -l < /etc/pf-badhost.txt
146541
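In case it helps, counts like those above can be produced with commands along these lines. The table name `pf-badhost` is an assumption inferred from the filename, not confirmed by the output:

```shell
# Replacing the table prints the added/deleted deltas shown above.
pfctl -t pf-badhost -T replace -f /etc/pf-badhost.txt

# Count the entries (pre-expansion) in the source file.
wc -l < /etc/pf-badhost.txt

# Dump the loaded table and count its entries.
pfctl -t pf-badhost -T show | wc -l
```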