Ben Drees wrote:
After Squid has been up for a day or two handling about 500 (mostly cacheable) requests per second, we start to see CPU spikes reaching 100% and response times getting longer. It usually recovers on its own, but we sometimes resort to restarting it, which always fixes the problem quickly. Attaching gdb and hitting Ctrl-C randomly while it is in this state usually lands in malloc. Zenoss plots (from SNMP) of the number of cached objects always show a decline when this is happening, as if a burst of requests yielding larger responses is displacing many more smaller responses already in the cache.
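The ad-hoc gdb/Ctrl-C sampling described above can be made a little more systematic with a "poor man's profiler" loop. This is a hypothetical sketch, not something from the original post: the function name, PID lookup, and sample count are placeholders to adapt for your system.

```shell
# Sketch of a "poor man's profiler": attach gdb in batch mode a few times
# during a spike and dump all thread backtraces. If most samples land in
# malloc/free, that supports the allocator-churn theory.
sample_stacks() {
  local pid=$1 samples=${2:-5}
  for _ in $(seq "$samples"); do
    gdb -batch -p "$pid" -ex "thread apply all bt" 2>/dev/null
    sleep 1
  done
}
# Usage (pgrep -o picks the oldest, i.e. parent, squid process):
#   sample_stacks "$(pgrep -o squid)" 10 > squid-stacks.txt
```

Grepping the collected file for the most frequent innermost frames gives a rough picture of where the CPU time goes.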

The config uses no disk cache ("cache_dir null /mw/data/cache/diskcache") and a 3 GB memory cache ("cache_mem 3072 MB") on an 8 GB machine. I've tried bumping memory_pools_limit from the default up to 1024 MB, but that doesn't seem to make a difference.
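Some back-of-envelope arithmetic (mine, not from Squid's actual accounting) shows why a burst of large responses would make the object count drop sharply: at the configured average object size the cache holds hundreds of thousands of objects, and each object near maximum_object_size_in_memory displaces dozens of average-sized ones.

```shell
# Illustrative arithmetic only, using the values from the config above.
cache_mem_kb=$((3072 * 1024))   # cache_mem 3072 MB, in KB
avg_obj_kb=8                    # store_avg_object_size 8 KB
max_obj_kb=512                  # maximum_object_size_in_memory 512 KB

avg_objects=$((cache_mem_kb / avg_obj_kb))
displaced_per_large=$((max_obj_kb / avg_obj_kb))

echo "objects held at average size:            $avg_objects"        # 393216
echo "avg objects displaced per max-size obj:  $displaced_per_large" # 64
```

A full cache churning between ~6,000 512 KB objects and ~393,000 8 KB objects would also explain heavy allocator traffic, consistent with the gdb samples landing in malloc.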

Here's some of the configuration:

cache_mem 3072 MB
maximum_object_size_in_memory 512 KB
cache_dir null /mw/data/cache/diskcache
maximum_object_size 512 KB
log_mime_hdrs on
debug_options ALL,1 99,3
strip_query_terms off
buffered_logs on
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
store_avg_object_size 8 KB
half_closed_clients off
snmp_access allow snmppublic localhost
never_direct allow all
check_hostnames off
retry_on_error off
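Since memory_pools_limit is in play, it may help to snapshot the cache manager's memory-pool report while a spike is happening and compare it to a healthy baseline. A minimal sketch, assuming squidclient is installed and Squid listens on its default port; the helper name and filenames are placeholders:

```shell
# Hypothetical helper: fetch a cache-manager report so pool growth can be
# compared between a healthy period and a CPU spike.
dump_mgr_report() {
  squidclient "mgr:${1:-mem}"
}
# Usage:
#   dump_mgr_report mem  > mem-during-spike.txt
#   dump_mgr_report info > info-during-spike.txt
```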
