Re: pfctl explanation

2007-09-11 Thread askthelist
I'm having a similar issue to what's described here. In my situation I have a table with about 200 entries. I'm attempting to update that table and add about 200 more entries. This time I've included network blocks, the biggest being a /18. I update my /etc/blackhole.abuse file, then I run pf
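A plausible shape for that update, as a minimal sketch: -T replace swaps the table's contents for the file's contents in one step. The table name blackhole is an assumption here; the message only shows the file path.

    # replace the table's contents with the entries from the file;
    # CIDR blocks such as a /18 are valid alongside single addresses
    pfctl -t blackhole -T replace -f /etc/blackhole.abuse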

Re: pfctl explanation

2007-06-21 Thread Francesco Toscan
2007/6/21, Peter N. M. Hansteen <[EMAIL PROTECTED]>:
> You may be hitting one or more of the several relevant limits, but have
> you tried something like 'pfctl -T flush -t tablename' before reloading
> the table data?
Yes, if I first flush the table it works flawlessly. The 'problem' occurs only when reloading.
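The confirmed workaround, spelled out as a short sketch (the table name large_table is taken from the error message quoted later in the thread):

    # empty the table first so only one copy has to fit during the reload,
    # then load the ruleset as usual
    pfctl -t large_table -T flush
    pfctl -f /etc/pf.conf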

Re: pfctl explanation

2007-06-21 Thread Peter N. M. Hansteen
"Francesco Toscan" <[EMAIL PROTECTED]> writes: > I've just tried to set table-entries to 550K, more than double the > content of (210144 entries) but reload always gives: > /etc/pf.conf.queue:17: cannot define table large_table: Cannot allocate memory You may be hitting one or more of the severa

Re: pfctl explanation

2007-06-21 Thread Francesco Toscan
2007/6/20, Ted Unangst <[EMAIL PROTECTED]>:
> yes, reloading the rules makes another copy then switches over. if you
> have a really large table, this means having two copies of the table
> during the transition.
Thank you for your answer. I've just tried to set table-entries to 550K, more than double the content of <large_table> (210144 entries), but reload always gives: /etc/pf.conf.queue:17: cannot define table large_table: Cannot allocate memory
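Taking the two-copies explanation at face value, a quick back-of-the-envelope check (an illustration, not from the thread) shows the 550K limit should cover the transition on paper:

    # two copies of the table coexist during the switch-over
    echo $((210144 * 2))    # prints 420288, comfortably under 550000

That the reload still fails suggests the actual kernel memory allocation on a 64MB machine, rather than the configured table-entries limit, may be the bottleneck.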

Re: pfctl explanation

2007-06-20 Thread Ted Unangst
On 6/20/07, Francesco Toscan <[EMAIL PROTECTED]> wrote:
> when I first load the rules everything works fine; when I reload the
> rules with pfctl -f pf.conf, pfctl segfaults or exits returning "Cannot
> allocate memory" as if the table-entries limit were not high enough. If
> I first flush the large table, reloading works.
yes, reloading the rules makes another copy then switches over. if you have a really large table, this means having two copies of the table during the transition.
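One related aside, offered as a hedged sketch: pfctl's -n flag parses the ruleset without loading it, so it is a useful syntax check but will not reproduce this failure, since the in-kernel table allocation never happens.

    # parse /etc/pf.conf without loading it; validates syntax only,
    # the second in-kernel copy of the table is never allocated
    pfctl -nf /etc/pf.conf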

pfctl explanation

2007-06-20 Thread Francesco Toscan
Hi misc@, I'm trying to understand how pfctl reloads rules and tables. On my Soekris board with 64MB RAM, I have a large table with more than 200K entries. It's used to perform some egress filtering (yes, maybe it's too large, but it's really effective). I raised the table-entries limit to 250K and I g
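A minimal pf.conf sketch of the setup being described; the file path and the block rule are assumptions for illustration, while the 250K limit and the table size come from the message:

    # raise the hard limit on table entries
    set limit table-entries 250000

    # large table loaded from a file, kept across ruleset flushes
    table <large_table> persist file "/etc/large_table"

    # egress filtering against the table
    block out quick to <large_table>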