Thanks to Adrian. After changing some configuration items in my squid.conf,
I think I have solved the problem. Now squid can handle 2500 requests/second
at 60-70% CPU, which I think is normal.
Here are the conclusions I have drawn from this year of using squid:
1) If requests/second is high and the cached files are small but numerous,
the configuration items cache_mem and memory_pools should be enabled, and
client_persistent_connections should be set to 'on' when you use squid-2.6.X
(which uses the system's epoll);
2) If requests/second is not too high (such as 1000) but the cached data
is very large (such as 100GB), we should turn off memory_pools and set
cache_mem to a lower value (such as 256MB). This lets the OS use more RAM
for its own page cache and touch swap less, which greatly reduces squid's
page faults so squid does not block on disk I/O.
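As a rough sketch, the two cases above might look like this in squid.conf
(the case-1 cache_mem value is my own illustrative guess; the case-2 values
are the examples from my notes, not universal recommendations):

```
# Case 1: high request rate, many small objects (squid-2.6.X with epoll)
cache_mem 1024 MB                  # large in-memory object cache (example value)
memory_pools on
client_persistent_connections on

# Case 2: ~1000 req/s, very large on-disk cache (~100GB)
#memory_pools off
#cache_mem 256 MB                  # leave RAM to the OS page cache, fewer page faults
```

Which case applies depends on your object size distribution and total cache
size, so these settings should be verified against your own workload.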

Because I use squid in a production environment and have made some changes to
it, I can't test the "HEAD squid" version you suggest. Would you please
tell me its priority? Thanks.

Thanks to the squid tech group again.

On 1/29/07, Adrian Chadd <[EMAIL PROTECTED]> wrote:
On Mon, Jan 29, 2007, ShuXin Zheng wrote:
> I use squid-2.6.STABLE2 on Red Hat Linux Advanced Server 4
> and have added some functions to it. I use the epoll method to handle
> large numbers of TCP connections. The functions I added are for local DNS
> resolution (which doesn't use an outer DNS). In squid.conf, I added
> some acl and http_access rules to obtain more security.

Could you grab the latest squid-2-HEAD snapshot from
http://www.squid-cache.org/Versions/v2/HEAD/ and let me know how it goes?

Thanks,


Adrian




--
zsxxsz
