Hi John,

John Moylan wrote:
> Hi,

> I have three memory-only caches set up with 7 GB of memory each (the
> machines have 12 GB of physical memory each). Throughput is fairly
> high, and this setup works well in reducing the number of requests for
> smaller files from my backend storage, with lower latency than a
> disk-and-memory solution.

Do you have statistics comparing fetches served from memory with those served from disk? How much of a performance gain do you see from using a memory-only cache?


> However, the cache on one of the machines fills up every 2-3 days,
> and Squid's CPU usage subsequently goes up to 100% (these are all
> dual-CPU SMP machines, and the system load average remains around
> 0.7). FDs, the number of connections, and swap all look fine when the
> CPU spikes, so the culprit is most likely cache replacement.

> I am using heap GDSF as the policy. The maximum object size in memory
> is set to 96 KB.

Have you tried the LFUDA or the default LRU memory replacement policies?
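For reference, the directives under discussion would look something like this in squid.conf (a sketch based on your description; the cache_mem value is inferred from your 7 GB figure, and the heap policies require Squid built with --enable-removal-policies):

```
# Memory cache size (7 GB, expressed in MB)
cache_mem 7168 MB

# Objects larger than this are never held in the memory cache
maximum_object_size_in_memory 96 KB

# Current policy:
memory_replacement_policy heap GDSF

# Alternatives worth testing, one at a time:
#memory_replacement_policy heap LFUDA
#memory_replacement_policy lru
```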

> I am using squid-2.6.STABLE6-4.el5 on Linux 2.6.

Try upgrading to the latest version of squid.

http://www.squid-cache.org/Versions/v2/2.6/squid-2.6.STABLE16.tar.gz

It likely contains bug fixes and improvements over 2.6.STABLE6.


> Is there anything I can do to make cache replacement less expensive,
> apart from stopping and restarting Squid every day?

By the way, which Linux distro are you using?

Can you post the output of "squidclient mgr:info" or the relevant parts of your squid.conf?
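If it helps, the counters I am after can be pulled like this (assuming squidclient is run on the cache box itself and Squid listens on the default port 3128; adjust -h/-p to match your http_port):

```
# Full cache manager statistics report
squidclient -h localhost -p 3128 mgr:info

# Or just the lines of interest:
squidclient -h localhost -p 3128 mgr:info | egrep -i 'cpu|storage|file desc'
```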

> Thanking you...

> J





--
With best regards and good wishes,

Yours sincerely,

Tek Bahadur Limbu
System Administrator
(TAG/TDG Group)
Jwl Systems Department
Worldlink Communications Pvt. Ltd.
Jawalakhel, Nepal
http://www.wlink.com.np
http://teklimbu.wordpress.com
