[squid-users] Long running squid proxy slows way down

2009-04-24 Thread Seann Clark

All,

   I am looking for ideas on ways to avoid this, as the tuning guides I 
have found lead me all over the place. What I am seeing is that, over 
time, the cache slows down from lightning fast, to OK, to taking 1-3 
minutes to decide to load, and I know it is tunable on this side. 
Usually this is fixed by a restart of squid, and everything is happy 
again for a variable time frame. I have a tiny user base (on average 2 
people) since this is on a home system.





What I have:

Squid Cache: Version 2.6.STABLE22
configure options:  '--build=i386-redhat-linux-gnu' 
'--host=i386-redhat-linux-gnu' '--target=i386-redhat-linux-gnu' 
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' 
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' 
'--includedir=/usr/include' '--libdir=/usr/lib' 
'--libexecdir=/usr/libexec' '--sharedstatedir=/usr/com' 
'--mandir=/usr/share/man' '--infodir=/usr/share/info' 
'--exec_prefix=/usr' '--bindir=/usr/sbin' '--libexecdir=/usr/lib/squid' 
'--localstatedir=/var' '--datadir=/usr/share' '--sysconfdir=/etc/squid' 
'--enable-epoll' '--enable-snmp' '--enable-removal-policies=heap,lru' 
'--enable-storeio=aufs,coss,diskd,null,ufs' '--enable-ssl' 
'--with-openssl=/usr/kerberos' '--enable-delay-pools' 
'--enable-linux-netfilter' '--with-pthreads' 
'--enable-ntlm-auth-helpers=SMB,fakeauth' 
'--enable-external-acl-helpers=ip_user,ldap_group,unix_group,wbinfo_group' 
'--enable-auth=basic,digest,ntlm,negotiate' 
'--enable-digest-auth-helpers=password' '--with-winbind-auth-challenge' 
'--enable-useragent-log' '--enable-referer-log' 
'--disable-dependency-tracking' '--enable-cachemgr-hostname=localhost' 
'--enable-underscores' 
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL' 
'--enable-cache-digests' '--enable-ident-lookups' 
'--enable-negotiate-auth-helpers=squid_kerb_auth' '--with-large-files' 
'--enable-follow-x-forwarded-for' '--enable-wccpv2' '--with-maxfd=16384' 
'--enable-arp-acl' 'build_alias=i386-redhat-linux-gnu' 
'host_alias=i386-redhat-linux-gnu' 'target_alias=i386-redhat-linux-gnu' 
'CFLAGS=-fPIE -Os -g -pipe -fsigned-char -O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic 
-fasynchronous-unwind-tables' 'LDFLAGS=-pie'




Hardware:
2x 2.0 GHz Xeon
2.0 GB RAM
3ware SATA RAID, RAID 5 across 4 disks.
Fedora 9, ext3 filesystem

config items:

ipcache_size 4096
ipcache_low 90
# ipcache_high 95
ipcache_high 95
cache_mem 1024 MB
# cache_swap_low 90
cache_swap_low 90
# cache_swap_high 95
cache_swap_high 95
cache_dir diskd /var/spool/squid 40960 93 256 Q1=72 Q2=64
memory_pools_limit 150 MB
store_avg_object_size 70 KB
store_objects_per_bucket 60
digest_swapout_chunk_size 202907 bytes
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
request_body_max_size 7 MB
memory_replacement_policy heap LFUDA

I also have a redirector in place, squidGuard, set to use 15 child 
processes to speed that part up, with some degree of success.



Any suggestions would be appreciated.


Thanks in advance,
~Seann




Re: [squid-users] Long running squid proxy slows way down

2009-04-24 Thread Amos Jeffries

Seann Clark wrote:

All,

   I am looking for ideas on ways to avoid this, as the tuning guides I 
have found lead me all over the place. What I am seeing is that, over 
time, the cache slows down from lightning fast, to OK, to taking 1-3 
minutes to decide to load, and I know it is tunable on this side. 
Usually this is fixed by a restart of squid, and everything is happy 
again for a variable time frame. I have a tiny user base (on average 2 
people) since this is on a home system.





What I have:

Squid Cache: Version 2.6.STABLE22


2.7 is 5-10% faster than 2.6.

configure options:  '--build=i386-redhat-linux-gnu' 
'--host=i386-redhat-linux-gnu' '--target=i386-redhat-linux-gnu' 
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' 
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' 
'--includedir=/usr/include' '--libdir=/usr/lib' 
'--libexecdir=/usr/libexec' '--sharedstatedir=/usr/com' 
'--mandir=/usr/share/man' '--infodir=/usr/share/info' 
'--exec_prefix=/usr' '--bindir=/usr/sbin' '--libexecdir=/usr/lib/squid' 
'--localstatedir=/var' '--datadir=/usr/share' '--sysconfdir=/etc/squid' 
'--enable-epoll' '--enable-snmp' '--enable-removal-policies=heap,lru' 
'--enable-storeio=aufs,coss,diskd,null,ufs' '--enable-ssl' 
'--with-openssl=/usr/kerberos' '--enable-delay-pools' 
'--enable-linux-netfilter' '--with-pthreads' 
'--enable-ntlm-auth-helpers=SMB,fakeauth' 
'--enable-external-acl-helpers=ip_user,ldap_group,unix_group,wbinfo_group' 
'--enable-auth=basic,digest,ntlm,negotiate' 
'--enable-digest-auth-helpers=password' '--with-winbind-auth-challenge' 


'--enable-useragent-log' '--enable-referer-log' 


Disable all these special logs if they are not being actively used...

'--disable-dependency-tracking' '--enable-cachemgr-hostname=localhost' 
'--enable-underscores' 
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL' 
'--enable-cache-digests' '--enable-ident-lookups' 
'--enable-negotiate-auth-helpers=squid_kerb_auth' '--with-large-files' 
'--enable-follow-x-forwarded-for' '--enable-wccpv2' '--with-maxfd=16384' 
'--enable-arp-acl' 'build_alias=i386-redhat-linux-gnu' 
'host_alias=i386-redhat-linux-gnu' 'target_alias=i386-redhat-linux-gnu' 
'CFLAGS=-fPIE -Os -g -pipe -fsigned-char -O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic 
-fasynchronous-unwind-tables' 'LDFLAGS=-pie'




Hardware:
2x 2.0 GHz Xeon
2.0 GB RAM
3ware SATA RAID, RAID 5 across 4 disks.
Fedora 9, ext3 filesystem


There are people here who disagree, but IMO unless you are running 
high-end hardware RAID, kill it. Squid data is not that critical. Better 
to use one cache_dir per physical disk, regardless of the disk size.


For speed tuning it's worth getting some software that measures I/O wait, 
to see how much there is and which app is doing it.




config items:

ipcache_size 4096


fqdncache_size is paired with this; you might need to raise it as well.
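
For example, a sketch mirroring the raised ipcache_size above (4096 is 
illustrative, not a tuned figure):

  fqdncache_size 4096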


ipcache_low 90
# ipcache_high 95
ipcache_high 95
cache_mem 1024 MB
# cache_swap_low 90
cache_swap_low 90
# cache_swap_high 95
cache_swap_high 95


For a cache >1GB, the difference of 5% between high/low can mean long 
periods spent garbage-collecting the disk storage. This is a major drag. 
You can shrink the gap if you want less disk delay there.
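
As a sketch, a narrower gap for the 40GB cache_dir below (the numbers 
are illustrative, not a tested recommendation):

  cache_swap_low 94
  cache_swap_high 95

That cuts the data eligible for a single collection pass from roughly 
2GB (90/95) down to roughly 400MB.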



cache_dir diskd /var/spool/squid 40960 93 256 Q1=72 Q2=64


AUFS is around 10x faster than diskd on Linux. Give it a try.
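
A sketch of that change, keeping the existing size and L1/L2 values (the 
Q1/Q2 parameters are diskd-specific, so they are dropped):

  cache_dir aufs /var/spool/squid 40960 93 256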


memory_pools_limit 150 MB
store_avg_object_size 70 KB
store_objects_per_bucket 60
digest_swapout_chunk_size 202907 bytes
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
request_body_max_size 7 MB
memory_replacement_policy heap LFUDA

I also have a redirector in place, squidGuard, set to use 15 child 
processes to speed that part up, with some degree of success.


Check the stats for load on each of those children. If you are getting 
_any_ (>0) load on the last one, increase the number.
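
For reference, a sketch of raising the helper count (url_rewrite_children 
is the squid 2.6 directive name; 20 is an arbitrary example, not a 
computed figure):

  url_rewrite_children 20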




Any suggestions would be appreciated.


squidGuard may not support it, but use helper concurrency where you are 
able to. It's several orders of magnitude lighter on resources and faster.
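
A sketch of the directive involved, assuming a helper that speaks the 
helper-concurrency protocol (stock squidGuard does not, hence the caveat 
above):

  # only for concurrency-capable rewriters; the value is illustrative
  url_rewrite_concurrency 10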



Additionally, check the network pipe capacity. If it's full you might 
need to use two NICs to separate inbound/outbound traffic.


A single tuned instance of Squid has been known to push the limits of a 
50 Mbps external link. With collapsed forwarding, cache hits can even 
push past 100 Mbps.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] Long running squid proxy slows way down

2009-04-24 Thread Seann Clark

Amos Jeffries wrote:

Seann Clark wrote:

All,

   I am looking for ideas on ways to avoid this, as the tuning guides I 
have found lead me all over the place. What I am seeing is that, over 
time, the cache slows down from lightning fast, to OK, to taking 1-3 
minutes to decide to load, and I know it is tunable on this side. 
Usually this is fixed by a restart of squid, and everything is happy 
again for a variable time frame. I have a tiny user base (on average 2 
people) since this is on a home system.





What I have:

Squid Cache: Version 2.6.STABLE22


2.7 is 5-10% faster than 2.6.
This is a lazy install, I forgot to mention: a YUM install via Fedora 9. 
If the problem remains, I may spin my own build with the suggestions here.


configure options:  '--build=i386-redhat-linux-gnu' 
'--host=i386-redhat-linux-gnu' '--target=i386-redhat-linux-gnu' 
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' 
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' 
'--includedir=/usr/include' '--libdir=/usr/lib' 
'--libexecdir=/usr/libexec' '--sharedstatedir=/usr/com' 
'--mandir=/usr/share/man' '--infodir=/usr/share/info' 
'--exec_prefix=/usr' '--bindir=/usr/sbin' 
'--libexecdir=/usr/lib/squid' '--localstatedir=/var' 
'--datadir=/usr/share' '--sysconfdir=/etc/squid' '--enable-epoll' 
'--enable-snmp' '--enable-removal-policies=heap,lru' 
'--enable-storeio=aufs,coss,diskd,null,ufs' '--enable-ssl' 
'--with-openssl=/usr/kerberos' '--enable-delay-pools' 
'--enable-linux-netfilter' '--with-pthreads' 
'--enable-ntlm-auth-helpers=SMB,fakeauth' 
'--enable-external-acl-helpers=ip_user,ldap_group,unix_group,wbinfo_group' 
'--enable-auth=basic,digest,ntlm,negotiate' 
'--enable-digest-auth-helpers=password' '--with-winbind-auth-challenge' 


'--enable-useragent-log' '--enable-referer-log' 


Disable all these special logs if they are not being actively used...

'--disable-dependency-tracking' 
'--enable-cachemgr-hostname=localhost' '--enable-underscores' 
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL' 
'--enable-cache-digests' '--enable-ident-lookups' 
'--enable-negotiate-auth-helpers=squid_kerb_auth' 
'--with-large-files' '--enable-follow-x-forwarded-for' 
'--enable-wccpv2' '--with-maxfd=16384' '--enable-arp-acl' 
'build_alias=i386-redhat-linux-gnu' 
'host_alias=i386-redhat-linux-gnu' 
'target_alias=i386-redhat-linux-gnu' 'CFLAGS=-fPIE -Os -g -pipe 
-fsigned-char -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions 
-fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 
-mtune=generic -fasynchronous-unwind-tables' 'LDFLAGS=-pie'




Hardware:
2x 2.0 GHz Xeon
2.0 GB RAM
3ware SATA RAID, RAID 5 across 4 disks.
Fedora 9, ext3 filesystem


There are people here who disagree, but IMO unless you are running 
high-end hardware RAID, kill it. Squid data is not that critical. 
Better to use one cache_dir per physical disk, regardless of the disk 
size.


For speed tuning it's worth getting some software that measures I/O 
wait, to see how much there is and which app is doing it.


I didn't mention this, but this server is also home to a firewall and 
IDS subsystem; I set up the RAID to protect some data kept on there from 
loss. If need be I can cram another large dedicated disk into the server, 
since I do have room.


config items:

ipcache_size 4096


fqdncache_size is paired with this; you might need to raise it as well.


ipcache_low 90
# ipcache_high 95
ipcache_high 95
cache_mem 1024 MB
# cache_swap_low 90
cache_swap_low 90
# cache_swap_high 95
cache_swap_high 95


For a cache >1GB, the difference of 5% between high/low can mean long 
periods spent garbage-collecting the disk storage. This is a major 
drag. You can shrink the gap if you want less disk delay there.



cache_dir diskd /var/spool/squid 40960 93 256 Q1=72 Q2=64


AUFS is around 10x faster than diskd on Linux. Give it a try.

I will see how that works out on my system.



memory_pools_limit 150 MB
store_avg_object_size 70 KB
store_objects_per_bucket 60
digest_swapout_chunk_size 202907 bytes
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
request_body_max_size 7 MB
memory_replacement_policy heap LFUDA

I also have a redirector in place, squidGuard, set to use 15 child 
processes to speed that part up, with some degree of success.


Check the stats for load on each of those children. If you are getting 
_any_ (>0) load on the last one, increase the number.




Any suggestions would be appreciated.


squidGuard may not support it, but use helper concurrency where you are 
able to. It's several orders of magnitude lighter on resources and faster.



Additionally, check the network pipe capacity. If it's full you might 
need to use two NICs to separate inbound/outbound traffic.


A single tuned instance of Squid has been known to push the limits of 
a 50 Mbps external link.

Re: [squid-users] Long running squid proxy slows way down

2009-04-25 Thread Gavin McCullagh
Hi Amos,

On Sat, 25 Apr 2009, Amos Jeffries wrote:

>> ipcache_low 90
>> # ipcache_high 95
>> ipcache_high 95
>> cache_mem 1024 MB
>> # cache_swap_low 90
>> cache_swap_low 90
>> # cache_swap_high 95
>> cache_swap_high 95
>
> For a cache >1GB, the difference of 5% between high/low can mean long  
> periods spent garbage-collecting the disk storage. This is a major drag.  
> You can shrink the gap if you want less disk delay there.

Could you elaborate on this a little?  If I understand correctly from the
comments in the template squid.conf:

  (swap_usage < cache_swap_low)
-> no cache removal
  (cache_swap_low < swap_usage < cache_swap_high)
-> cache removal attempts to maintain (swap_usage == cache_swap_log)
  (swap_usage ~> cache_swap_high)
-> cache removal becomes aggressive until (swap_usage == cache_swap_log)

It seems like you're saying that aggressive removal is a big drag on the
disk so you should hit it early rather than late so the drag is not for
a long period.  Would it be better to calculate an absolute figure (say
200MB) and work out what percentage of your cache that is?  It seems like
the 95% high watermark is probably quite low for large caches too?

I have 2x400GB caches.  A 5% gap would leave 20GB to delete aggressively,
which might take quite some time, all right.  A 500MB gap would be 0.125%.

cache_swap_low 97.875
cache_swap_high 98

Can we use floating point numbers here?  Would it make more sense for squid
to offer absolute watermarks (in MB offset from the total size)?

Gavin



Re: [squid-users] Long running squid proxy slows way down

2009-04-25 Thread Matus UHLAR - fantomas
On 24.04.09 10:36, Seann Clark wrote:
> I am looking for ideas on ways to avoid this, as the tuning guides I  
> have found lead me all over the place. What I am seeing is that, over  
> time, the cache slows down from lightning fast, to OK, to taking 1-3  
> minutes to decide to load, and I know it is tunable on this side.  
> Usually this is fixed by a restart of squid, and everything is happy  
> again for a variable time frame. I have a tiny user base (on average 2  
> people) since this is on a home system.

If a restart helps, you apparently configured squid to eat too much
memory. Note that the cache_mem option only configures the memory cache;
squid may need additional memory for cache indexes, buffers etc.

Lower cache_mem to a more sane value, e.g. 128MB.
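
A sketch of that change (128MB is the example starting point above, not
a computed optimum):

  cache_mem 128 MB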

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I don't have lysdexia. The Dog wouldn't allow that.


Re: [squid-users] Long running squid proxy slows way down

2009-04-25 Thread Amos Jeffries

Gavin McCullagh wrote:

Hi Amos,

On Sat, 25 Apr 2009, Amos Jeffries wrote:


ipcache_low 90
# ipcache_high 95
ipcache_high 95
cache_mem 1024 MB
# cache_swap_low 90
cache_swap_low 90
# cache_swap_high 95
cache_swap_high 95
For a cache >1GB, the difference of 5% between high/low can mean long  
periods spent garbage-collecting the disk storage. This is a major drag.  
You can shrink the gap if you want less disk delay there.


Could you elaborate on this a little?  If I understand correctly from the
comments in the template squid.conf:

  (swap_usage < cache_swap_low)
-> no cache removal
  (cache_swap_low < swap_usage < cache_swap_high)
-> cache removal attempts to maintain (swap_usage == cache_swap_log)
  (swap_usage ~> cache_swap_high)
-> cache removal becomes aggressive until (swap_usage == cache_swap_log)


Almost. The final one is:
 -> aggressive until swap_usage < cache_swap_low
which could be only what's currently indexed (cache_swap_log), or could 
be less, since aggressive removal might re-test objects for staleness 
and discard them to reach its goal.




It seems like you're saying that aggressive removal is a big drag on the
disk so you should hit it early rather than late so the drag is not for
a long period.


Early or late does not seem to matter as much as the MB/GB width of the 
low->high gap being removed.



 Would it be better to calculate an absolute figure (say
200MB) and work out what percentage of your cache that is?  It seems like
the 95% high watermark is probably quite low for large caches too?


I agree. Something like that. AFAICT the point of high being less than 
100% is to allow X amount of new data to arrive and be stored between 
collection cycles. 6 GB might be reasonable on a choked-full 100 Mbps 
pipe with 5-minute cycles. Or it might not.


The idea, if you recall the conditions above, is that aggressive removal 
(case #3) never occurs, since it is guaranteed to throw away potential HITs.




I have 2x400GB caches.  A 5% gap would leave 20GB to delete aggressively,
which might take quite some time, all right.  A 500MB gap would be 0.125%.

cache_swap_low 97.875
cache_swap_high 98


Precisely. Though IMO you probably want a gap sized off your pipe speed, 
assuming only 50% of the disk load can be spared for removals.




Can we use floating point numbers here?


Unfortunately not. It's whole integer percentages only here.
I'll look at getting this fixed in 3.1; the larger improvements will 
have to wait.


On the bare theory of it, I don't see why you can't use the same 
percentage in both settings. That will need some testing though, to make 
sure it does not create a constant disk load in place of the periodic 
slowness.
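
As a sketch of that untested idea (an experiment, not a recommendation):

  cache_swap_low 95
  cache_swap_high 95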




 Would it make more sense for squid
to offer absolute watermarks (in MB offset from the total size)?


Yes this is one of the ancient aspects remaining in Squid and different 
measures may be much better. I'm having a meeting with Alex Rousskov in 
approx 5 hours on IRC (#squiddev on irc.freenode.net) to discuss the 
general store improvements for 3.2. This is very likely to be one of the 
topics.


What I have in mind are:
 * a fixed-bytes gap (100% deterministic load)
 * now that you mention it: floating point percentages :)
 * a throughput-based threshold, such that there is a buffer of time 
between high being passed and the disk being full, during which the 
collection may be delayed from starting, then a smaller buffer down to 
low to allow reasonable periods before the next collection.
 * a load-based threshold, such that large-scale collection only occurs 
on idle cycles, or the amount collected gets truncated into small chunks 
and spread over the available time.
 * recovery collections, where garbage collection bypasses the usual 
pre-emptive mechanism and runs on an emergency basis for a fixed amount 
of space (currently needed for active transactions).


I think, from looking at the detailed cache.log, squid tries to do the 
small-chunks method and spread the load. But it does not go so far as to 
use idle cycles, which pretty much negates the spreading.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] Long running squid proxy slows way down

2009-04-26 Thread Gavin McCullagh
Hi,

On Sun, 26 Apr 2009, Amos Jeffries wrote:

> Almost. The final one is:
>  -> aggressive until swap_usage < cache_swap_low
> which could be only what's currently indexed (cache_swap_log), or could  
> be less, since aggressive removal might re-test objects for staleness  
> and discard them to reach its goal.

I had presumed that squid had a heap or other $STRUCTURE which kept the
cache objects in order of expiry, so they could be purged as soon as they
expired.  Thinking about it though, perhaps that would kill off all
possibility of TCP_IMS_HITs?

Sorry to be constantly peppering you with these questions, I just find it
all very interesting :-)

>>  Would it be better to calculate an absolute figure (say
>> 200MB) and work out what percentage of your cache that is?  It seems like
>> the 95% high watermark is probably quite low for large caches too?
>
> I agree. Something like that. AFAICT the point of high being less than  
> 100% is to allow X amount of new data to arrive and be stored between  
> collection cycles. 6 GB might be reasonable on a choked-full 100 Mbps  
> pipe with 5-minute cycles. Or it might not.

As I mentioned, we have a 20GB gap by default and are on a 40 Mbps pipe
which is often quite choked.  I can't say we've noticed the collection
cycles, but maybe we're not measuring it right.

I'll probably change the thresholds to 98%,99%.

>>  Would it make more sense for squid
>> to offer absolute watermarks (in MB offset from the total size)?
>
> Yes this is one of the ancient aspects remaining in Squid and different  
> measures may be much better. I'm having a meeting with Alex Rousskov in  
> approx 5 hours on IRC (#squiddev on irc.freenode.net) to discuss the  
> general store improvements for 3.2. This is very likely to be one of the  
> topics.

Please do let us know how you get on :-)

Thanks as always,
Gavin



Re: [squid-users] Long running squid proxy slows way down

2009-04-26 Thread Amos Jeffries
> Hi,
>
> On Sun, 26 Apr 2009, Amos Jeffries wrote:
>
>> Almost. The final one is:
>>  -> aggressive until swap_usage < cache_swap_low
>> which could be only what's currently indexed (cache_swap_log), or could
>> be less, since aggressive removal might re-test objects for staleness
>> and discard them to reach its goal.
>
> I had presumed that squid had a heap or other $STRUCTURE which kept the
> cache objects in order of expiry, so they could be purged as soon as they
> expired.  Thinking about it though, perhaps that would kill off all
> possibility of TCP_IMS_HITs?
>

Squid has several removal methods (replacement policies) built on heaps
and lists.

(I'm groping in the dark here from quick looks at the code, so this is
not authoritative info.)

The first layer seems to be a list of 'discarded' files (FileNums) which
have been declared useless or replaced by newer data but not yet removed
from the physical disk storage. AFAICT that's the difference-list between
the actually stored data and the cache_swap_log index.

Second is the replacement policy, for finding the next round of objects
to remove under aggressive collection if the first round was not enough.
I don't know why at this stage (and can't think of a reason why it
should), but from all appearances it calls refresh_patterns again.

On Squid-2 there is also the header-updating mechanism, where every 3xx
reply copies the store object from disk to either memory or another disk
file, updating the stored info in the process (attempting to fix bug #7).
This has performance impacts and race conditions all of its own which
need solving before it can be used in Squid-3.

> Sorry to be constantly peppering you with these questions, I just find it
> all very interesting :-)

No problem.

>
>>>  Would it be better to calculate an absolute figure (say
>>> 200MB) and work out what percentage of your cache that is?  It seems
>>> like
>>> the 95% high watermark is probably quite low for large caches too?
>>
>> I agree. Something like that. AFAICT the point of high being less than
>> 100% is to allow X amount of new data to arrive and be stored between
>> collection cycles. 6 GB might be reasonable on a choked-full 100 Mbps
>> pipe with 5-minute cycles. Or it might not.
>
> As I mentioned, we have a 20GB gap by default and are on a 40 Mbps pipe
> which is often quite choked.  I can't say we've noticed the collection
> cycles, but maybe we're not measuring it right.
>
> I'll probably change the thresholds to 98%,99%.

My back-of-envelope calculations for a 40 Mbps pipe indicate that
(assuming _everything_ must be cached >0 seconds) a 6GB gap would be
sufficient. That does not account for IMS_HITs and non-cacheable MISSes
though, which reduce the gap needed further.
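
A rough version of that arithmetic, assuming a 5-minute collection cycle
and fully cacheable traffic at line rate:

  40 Mbps ~= 5 MB/s;  5 MB/s x 300 s = 1.5 GB per collection cycle

so a 6GB gap covers roughly four back-to-back cycles before collection
has to keep pace.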

>
>>>  Would it make more sense for squid
>>> to offer absolute watermarks (in MB offset from the total size)?
>>
>> Yes this is one of the ancient aspects remaining in Squid and different
>> measures may be much better. I'm having a meeting with Alex Rousskov in
>> approx 5 hours on IRC (#squiddev on irc.freenode.net) to discuss the
>> general store improvements for 3.2. This is very likely to be one of the
>> topics.
>
> Please do let us know how you get on :-)

We missed meeting up unfortunately. Will be trying again another time.

Amos



Re: [squid-users] Long running squid proxy slows way down

2009-04-27 Thread Wilson Hernandez - MSD, S. A.



I have a similar setup. Squid was slow and crashing after it had been 
running a long time, crashing every three to six days. I never found out 
why it crashed. I looked in the log files and couldn't find anything; it 
just crashed for no reason. There are some posts to the list about it. I 
decided to restart the system every day from a cron job at 4am. I know 
that doesn't sound too stable, as I'm running it on a Linux box, but it 
worked. It hasn't crashed since.


Re: [squid-users] Long running squid proxy slows way down

2009-04-27 Thread Gavin McCullagh
Hi,

On Mon, 27 Apr 2009, Wilson Hernandez - MSD, S. A. wrote:

> I have a similar setup. Squid was slow and crashing after it had been  
> running a long time, crashing every three to six days. I never found out  
> why it crashed. I looked in the log files and couldn't find anything; it  
> just crashed for no reason. There are some posts to the list about it. I  
> decided to restart the system every day from a cron job at 4am. I know  
> that doesn't sound too stable, as I'm running it on a Linux box, but it  
> worked. It hasn't crashed since.

Did you get any message in /var/log/squid/* or /var/log/syslog?

I had a similar experience and it turned out to be down to the RAM usage
of squid exceeding 3GB (the limit for a process on a 32-bit OS).  As the
cache memory filled up, squid's RAM size increased until it restarted, and
began filling up again.  I reduced the mem_cache size and everything has
been fine since then.

Gavin



Re: [squid-users] Long running squid proxy slows way down

2009-04-27 Thread Matus UHLAR - fantomas
> On Mon, 27 Apr 2009, Wilson Hernandez - MSD, S. A. wrote:
> 
> > I have a similar setup. Squid was slow and crashing after it had been  
> > running a long time, crashing every three to six days. I never found out  
> > why it crashed. I looked in the log files and couldn't find anything; it  
> > just crashed for no reason. There are some posts to the list about it. I  
> > decided to restart the system every day from a cron job at 4am. I know  
> > that doesn't sound too stable, as I'm running it on a Linux box, but it  
> > worked. It hasn't crashed since.

On 27.04.09 13:35, Gavin McCullagh wrote:
> Did you get any message in /var/log/squid/* or /var/log/syslog?
> 
> I had a similar experience and it turned out to be down to the RAM usage
> of squid exceeding 3GB (the limit for a process on a 32-bit OS).  As the
> cache memory filled up, squid's RAM size increased until it restarted,
> and began filling up again.  I reduced the mem_cache size and everything
> has been fine since then.

... which most probably happens due to an oversized cache_mem, not 
noticing that it only controls the memory cache, not total memory usage:

http://wiki.squid-cache.org/SquidFaq/SquidMemory
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Christian Science Programming: "Let God Debug It!".


Re: [squid-users] Long running squid proxy slows way down

2009-04-27 Thread Gavin McCullagh
On Mon, 27 Apr 2009, Matus UHLAR - fantomas wrote:

> On 27.04.09 13:35, Gavin McCullagh wrote:
>
> > I had a similar experience and it turned out to be down to the RAM usage
> > of squid exceeding 3GB (the limit for a process on a 32-bit OS).  As the
> > cache memory filled up, squid's RAM size increased until it restarted,
> > and began filling up again.  I reduced the mem_cache size and everything
> > has been fine since then.
> 
> ... which most probably happens due to an oversized cache_mem, not 
> noticing that it only controls the memory cache, not total memory usage:
> 
> http://wiki.squid-cache.org/SquidFaq/SquidMemory

Absolutely.  I meant cache_mem not mem_cache :-)

Gavin