On 5/08/2013 11:14 p.m., babajaga wrote:
Sorry, Amos, I don't want to waste too much time here on an off-topic issue, but it is an interesting matter anyway:
Okay. I am running out of time, and the information I'm basing all this on is slightly old, so shall we finish up? Measurements and testing is kind of
On 5/08/2013 12:58 p.m., babajaga wrote:
I ACK your remarks regarding disk controller activity. But, AFAIK, Squid does NOT access the disk controller directly for raw disk I/O; the filesystem is always in between. And that means that a report shows up to 5 % to 7 % cache usage
On 4/08/2013 7:13 p.m., John Joseph wrote:
Thanks, Augustus, for the email.
My information is:
---
[root@proxy squid]# squidclient -h 127.0.0.1 mgr:storedir
HTTP/1.0 200 OK
Server: squid/3.1.10
Mime-Version: 1.0
Date: Sun, 04 Aug 2013 07:01:30 GMT
Content-Type: text/plain
Expires:
As I already guessed in my first reply, you are reaching the maximum number of cached objects in your cache_dir, as Amos explained, which renders part of your disk space ineffective.
However, as an alternative to using rock, you can set up a second ufs/aufs
cache_dir.
(Especially in case you
On 5/08/2013 4:17 a.m., babajaga wrote:
As I already guessed in my first reply, you are reaching the maximum number of cached objects in your cache_dir, as Amos explained, which renders part of your disk space ineffective.
However, as an alternative to using rock, you can set up a second
Erm. On fast or high-traffic proxies Squid uses the disk I/O capacity to the limits of the hardware. If you place two UFS-based cache_dirs on one physical disk spindle with lots of small objects, they will fight for I/O resources, with the result of a dramatic reduction in both performance and disk
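A minimal sketch of how that advice could look in squid.conf, assuming two separate physical disks mounted at the hypothetical paths /cache1 and /cache2 (the sizes and L1/L2 values are illustrative only, not recommendations):

```
# One aufs cache_dir per physical spindle, so the two stores do not
# compete for seeks on the same disk.
# Format: cache_dir aufs <path> <size-in-MB> <L1-dirs> <L2-dirs>
cache_dir aufs /cache1 100000 16 256
cache_dir aufs /cache2 100000 16 256
```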
Requesting guidance and advice.
Thanks,
Joseph John
- Original Message -
From: babajaga <augustus_me...@yahoo.de>
To: squid-users@squid-cache.org
Cc:
Sent: Tuesday, 30 July 2013 12:13 PM
Subject: [squid-users] Re: Squid monitoring, access report shows
On 1/08/2013 6:35 p.m., John Joseph wrote:
Hi Amos, Ahmad, Babajaga,
Thanks for your advice and feedback; I am posting more information:
---
The HIT, MISS, and REFRESH details are:
cat /opt/var/log/squid/access.log | grep -c HIT
13810283
cat /opt/var/log/squid/access.log | grep -c MISS
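From counts like those, the request hit ratio is just hits / (hits + misses). A small self-contained sketch of the arithmetic, using a made-up sample log (on a real proxy you would point the grep commands at /opt/var/log/squid/access.log instead; note that `grep -c HIT` also matches variants such as TCP_MEM_HIT and TCP_REFRESH_HIT):

```shell
# Build a tiny fabricated sample log just to illustrate the arithmetic.
log=$(mktemp)
cat > "$log" <<'EOF'
1375600000.123    45 10.0.0.1 TCP_HIT/200 1234 GET http://example.com/a -
1375600001.456   300 10.0.0.2 TCP_MISS/200 5678 GET http://example.com/b -
1375600002.789    40 10.0.0.1 TCP_MEM_HIT/200 910 GET http://example.com/c -
1375600003.012   250 10.0.0.3 TCP_MISS/404 111 GET http://example.com/d -
EOF

hits=$(grep -c HIT "$log")      # matches TCP_HIT, TCP_MEM_HIT, ...
misses=$(grep -c MISS "$log")   # matches TCP_MISS, ...
total=$((hits + misses))
ratio=$((100 * hits / total))   # integer percentage

echo "hits=$hits misses=$misses hit-ratio=${ratio}%"
# prints: hits=2 misses=2 hit-ratio=50%
rm -f "$log"
```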
The relatively low byte hit rate suggests that somewhere in your squid.conf there is a limit on the maximum object size to be cached. It might be a good idea to raise that to a larger value, because it seems you still have a lot of disk space available for caching.
So you might post
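A minimal sketch of such a change as a squid.conf fragment (the 512 MB value is purely illustrative; pick one that suits your traffic and disk):

```
# maximum_object_size defaults to a small value (4 MB in many builds),
# which keeps large downloads out of the cache and can depress the
# byte hit ratio.
maximum_object_size 512 MB
```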
You should install and use
http://wiki.squid-cache.org/Features/CacheManager
This gives you a lot of information about cache performance, such as the hit rate.
Having 556 GB of cache within one cache_dir might already hit the upper
limit on the maximum number of cached objects, depending upon the avg size
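The arithmetic behind that concern can be sketched as follows, assuming the traditional rule-of-thumb figures of a ~13 KB mean object size and ~14 MB of index RAM per GB of disk cache (both are generic estimates, not measurements from this setup):

```python
# Back-of-the-envelope estimate for a 556 GB cache_dir.
cache_gb = 556
mean_object_kb = 13      # rule-of-thumb mean cached object size
index_mb_per_gb = 14     # rule-of-thumb RAM cost of the in-memory store index

objects = cache_gb * 1024 * 1024 // mean_object_kb   # KB of cache / KB per object
index_ram_mb = cache_gb * index_mb_per_gb

print(f"~{objects:,} objects")            # on the order of 45 million entries
print(f"~{index_ram_mb:,} MB index RAM")  # roughly 7.8 GB just for the index
```

Tens of millions of store entries is exactly the scale at which a single UFS cache_dir starts running into object-count and index-RAM limits.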