On 2014-02-25 03:40, Mohamad Saeed wrote:
Hi all,

 I'm using squid on a 100Mbps ISP with about 5,000 users.

 I have an Intel Xeon machine, with 8GB of RAM and 500GB of HDD
for the cache.

Squid usually works fine and the memory is OK, but the traffic frequently
drops and then returns to normal.


All my server resources are fine because I don't cache anything.

This is a snapshot of my squid.conf:

logfile_rotate 0

url_rewrite_program /usr/bin/squidGuard
url_rewrite_children 192 startup=150 idle=10 concurrency=0
redirector_bypass on

Is Squid bypassing the helper when the traffic changes (either up or down)?

Is squidGuard coping with the level of traffic passed to it?
... SG log should show whether it is shifting to emergency bypass mode internally.
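For example (assuming the default squidGuard log location, which varies by
distribution), something like this should show whether it has been dropping
into emergency mode:

   grep -i emergency /var/log/squidGuard/squidGuard.log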


What is the distribution of load on the helpers?
 do you actually need 150 to be constantly running?
 is Squid pausing to start 10 new ones at the time of slowdown?

 ... the cache manager "redirector" report shows that information.
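For example, with cachemgr access permitted, something like this should pull
it (using the 8080 port from your config):

   squidclient -p 8080 mgr:redirector

The per-helper request counters in that report show how the load is spread
and how many of the helpers are actually being used.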


http_port 8080
http_port 3129 tproxy
http_port 3127 intercept


wccp2_router x.x.x.x
wccp2_forwarding_method 2
wccp2_return_method 2
wccp2_assignment_method mask
wccp2_service dynamic 80
wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240 ports=80
wccp2_service dynamic 90
wccp2_service_info 90 protocol=tcp flags=dst_ip_hash,ports_source priority=240 ports=80
wccp2_rebuild_wait on

wccp_version 4

http://www.squid-cache.org/Doc/config/wccp_version/

... "otherwise do not specify this parameter."


cache deny all
cache_log /dev/null


The log where Squid reports *critical and serious operational issues* is ... /dev/null ?


Do you have any idea, or any other data I can collect, to try and
track this down?

What does that log file in /dev/null have to say about issues Squid may be encountering?
 ... oops.
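
As a first step, point cache_log back at a real file and watch it while the
slowdown happens, e.g. (the path below is just the usual default, adjust to
suit; ALL,1 is the normal verbosity and is enough to see warnings and errors):

   cache_log /var/log/squid/cache.log
   debug_options ALL,1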


Amos
