Hello Eric,
Thanks for your reply. After increasing the ip_conntrack_max value to
4096, I did find a curious entry in my messages log file:
firewall kernel: __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
This happened twice about a day ago.
According to the bucu-conntrack guide, the amount of memory used by 4096
connections (with hash size equal to max conntrack) is 4096 x 308 bytes, about 1.2 MB.
My LEAF box has 16 MB of RAM, and cat /proc/meminfo gives:
          total:      used:      free:    shared:  buffers:   cached:
Mem:   14725120   11927552    2797568          0     40960   6443008
Swap:         0          0          0
MemTotal: 14380 kB
MemFree: 2732 kB
MemShared: 0 kB
Buffers: 40 kB
Cached: 6292 kB
SwapCached: 0 kB
Active: 5924 kB
Inactive: 1700 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 14380 kB
LowFree: 2732 kB
SwapTotal: 0 kB
SwapFree: 0 kB
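For reference, a back-of-the-envelope check of the guide's figure against the MemFree value above (a sketch, taking the 308 bytes per entry from bucu-conntrack at face value):

  # rough size of a full conntrack table at 308 bytes per entry
  echo $(( 4096 * 308 ))    # 1261568 bytes, roughly 1.2 MB
  # MemFree above is 2732 kB, roughly 2.7 MB, so the table itself should fit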
So there should be enough memory left for the conntrack table. In any case,
the firewall is still up and running.
I set the new max conntrack number using
echo 4096 > /proc/sys/net/ipv4/ip_conntrack_max.
How can I make this setting permanent? I have seen the option
net.ipv4.netfilter.ip_conntrack_max in /etc/sysctl.conf, but which
package should I back up then?
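A sketch of what such an /etc/sysctl.conf entry could look like; which key applies depends on whether this kernel exposes /proc/sys/net/ipv4/ip_conntrack_max or /proc/sys/net/ipv4/netfilter/ip_conntrack_max, since the key name mirrors the proc path:

  # /etc/sysctl.conf (read at boot via sysctl -p)
  net.ipv4.ip_conntrack_max = 4096
  # or, if the entry sits under /proc/sys/net/ipv4/netfilter/:
  # net.ipv4.netfilter.ip_conntrack_max = 4096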
Regards
Chera Bekker
Eric Spakman wrote:
Hello Chera,
There is some information about this setting in the following
Bering-uClibc guide, and in the links section of that guide.
http://leaf.sourceforge.net/doc/guide/bucu-conntrack.html
Eric
Hello List,
I have noticed that when running a P2P client behind my Bering firewall,
my syslog gets flooded with the message:
firewall kernel: ip_conntrack: table full, dropping packet.
Almost all entries in /proc/net/ip_conntrack pointed to the internal
machine running the client.
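A quick way to confirm how many of those entries belong to one internal host (a sketch; 192.168.1.2 here is just a placeholder for the machine running the client, so adjust it to the real LAN address):

  # count conntrack entries whose original source is the suspect internal box
  grep -c 'src=192\.168\.1\.2 ' /proc/net/ip_conntrack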
I noticed that the value in /proc/sys/net/ipv4/ip_conntrack_max was
set to 1024. I have increased this value to 4096, which seems to have put a
(temporary?) lid on things. My question is: will the increase in the
number of connections somehow have a negative impact on the
performance of the firewall?
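One way to keep an eye on whether the new limit is high enough (a sketch using the same proc files mentioned above):

  # current number of tracked connections vs. the configured maximum
  wc -l < /proc/net/ip_conntrack
  cat /proc/sys/net/ipv4/ip_conntrack_max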
Any information is appreciated.
Regards
Chera Bekker
------------------------------------------------------------------------
leaf-user mailing list: [email protected]
https://lists.sourceforge.net/lists/listinfo/leaf-user
Support Request -- http://leaf-project.org/