Re: [squid-users] Re: Squid monitoring, access report shows upto 5 % to 7 % cache usage

2013-08-06 Thread Amos Jeffries

On 5/08/2013 11:14 p.m., babajaga wrote:

Sorry, Amos, not to waste too much time here for an off-topic issue, but
interesting matter anyways:


Okay. I am running out of time and this is slightly old info I'm basing 
all this on - so shall we finish up? Measurements and testing are kind of 
required to go further and demonstrate anything.



Disclaimer: some of what I know and say below may be complete FUD with 
modern disks. I have not done any testing since 10-20GB was a widely 
available storage device size and SSD layers on drives had not even been 
invented. Shop-talk with people doing testing more recently, though, tells 
me that the basics are probably still completely valid even if the 
tricks added to solve problems are changing rapidly.
 The key take-away should be that Squid's disk I/O pattern for small 
objects blows most of those new tricks into uselessness.



I ACK your remarks regarding disk controller activity. But, AFAIK, squid
does NOT directly access the disk controller for raw disk I/O; the FS is
always in-between instead. And that means that a (lot of) buffering can
occur before real disk-I/O is done.


This depends on two factors:
1) There is RAM available for the buffering required.
 - The higher the traffic load, the less memory is available to the 
system for this.


2) The OS has a chance of advance buffering.
 - Objects up to 64KB (often 4KB or 8KB) can be completely loaded 
into Squid I/O buffers in a single read(), and there is no way for the 
OS to identify which of the surrounding sectors/blocks are objects 
related to the one just loaded (if it guesses and gets it wrong, things 
go even worse than not guessing at all).
 - Also, remember AUFS is preferred for large (over-32KB) objects - 
the ones which will require multiple read()s - and Rock is best for small 
(under-32KB) objects. This OS buffering prediction is a significant part 
of the reason why.
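
As an illustrative sketch only (paths, sizes and the exact 32KB boundary 
below are made-up values, not something tested in this thread), splitting 
along those lines on a Squid-3.2+ box would look roughly like:

  # small objects go to the rock store, read/written in large blocks
  cache_dir rock /cache/rock 10000 max-size=32768
  # everything over 32KB goes to AUFS, where multiple read()s are expected
  cache_dir aufs /cache/aufs 50000 16 256 min-size=32769

The matching min-size/max-size pair keeps the two stores from competing 
for the same objects.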



  Which might even lead to spuriously high
response times, when all of a sudden the OS decides to really flush large
disk-buffers to disk.


Note that this will result in a bursty disk I/O traffic pattern, with 
waves of alternating high and low access speeds for disk accesses. The 
aim with high performance is to flatten the low-speed troughs out as 
much as possible by raising them up to make a constant peak rate of I/O.



  In a good file system (or disk controller,
downstream), request-reordering should happen to allow elevator-style head
movements, or merging of file accesses referencing the same disk blocks.


Exactly. And this is where Squid being partially *network* I/O event 
driven comes into play, affecting the disk I/O pattern. Squid is managing 
N concurrent connections, each of which is potentially servicing a 
distinct *unique* client file fetch (well, mostly - and when collapsed 
forwarding is ready for Squid-3 it will be unique). Every I/O loop Squid 
cycles through all N in order and schedules a cross-sectional slice for 
any which need a disk read/write. So each I/O cycle Squid delivers 
at most one read (HIT/MISS sent to client) and one write (MISS received 
from server) for any given file, with up to N possibly vastly separate 
files on disk being accessed.
 The logic doing that elevator calculation is therefore *not* faced 
with a single set of file operations in one area, but with a 
cross-sectional read/write over potentially the entire disk. At most it 
can reorder those into an elevator up/down cross-section over the disk. But 
in passing those completion events back to Squid it triggers another I/O 
cycle for Squid over the network sockets, and thus another sweep over 
the entire disk space. Worst-case (and best) the spindle heads are 
sweeping the platter from end-to-end reading everything needed 
1-cycle:1-sweep.


That is with _one_ cache_dir sitting on the spindle.

Now if you pay close attention to the elevator sweep, there is a lot of 
time spent scanning between areas of the disk and not so much doing I/O. 
To optimize around this effect and allow even more concurrent file reads, 
Squid load-balances between cache_dirs when it places files. AFAIK 
the theory is that one head can be seeking while another is doing its 
I/O, for the overall effect of a more steady flow of bytes back to 
Squid after the FS software abstraction layer, raising those troughs 
again to a smooth flow. Although, that said, theory is not practice.
 Placing both cache_dir on the one disk, the FS logic will of course 
reorder and interleave the I/O for each cache_dir such that the disk 
behaviour is a single sweep, as for one cache_dir. BUT, as a result, the 
seek lag and bursty nature of read() bytes returned is fully exposed to 
Squid - by the very mechanisms supposedly minimizing it. In turn this 
reflects in the network I/O, as bytes are relayed directly there by Squid 
and TCP gets a bursty peak/trough pattern appearing.


Additionally, and probably more importantly, that reordering of 2 
cache_dir on one disk 

Re: [squid-users] Re: Squid monitoring, access report shows upto 5 % to 7 % cache usage

2013-08-05 Thread Amos Jeffries

On 5/08/2013 12:58 p.m., babajaga wrote:

Erm. On fast or high traffic proxies Squid uses the disk I/O capacity to
the limits of the hardware. If you place 2 UFS based cache_dir on one
physical disk spindle with lots of small objects they will fight for I/O
resources, with the result of a dramatic reduction in both performance and
disk lifetime relative to the traffic speed. Rock and COSS cache types
avoid this by aggregating the small objects into large blocks which get
read/written all at once.

Really that bad?
As squid does not use raw disk-I/O for any cache type, OS/FS-specific
buffering/merging/delayed writes will always happen before cache objects
are really written to disk. So, a priori, I would not see a serious
difference between ufs/aufs/rock/COSS on the same spindle for the same object
size (besides some overhead for creation of FS-info for ufs/aufs).


It is not quite that simple. For a small object from UFS/AUFS/diskd 
there are file seek and load times, and related small files all have the 
same overhead. For Rock/COSS there is file seek and load time only for 
the block in which that file exists (admittedly a little more overhead 
loading the larger number of bytes); once it is loaded the additional 
small objects have zero disk overheads. Since web objects are grouped into 
pages - objects which are very often loaded together within a short 
timeframe - the Rock/COSS method of storing to blocks in request order 
often acts almost as efficiently as compacting the entire page into an 
archive and delivering it in one HTTP request.


AUFS uses threads to get performance out of disk I/O controllers. There 
are only a limited number the OS/controller can handle in parallel 
before it starts to queue them and stall all the others. Placing two such 
directories on one disk doubles the controller load. Each thread will be 
performing some operation on different parts of the disk, so the seek 
times are higher despite OS/FS tricks. If you throw too many reads/writes 
at the controller, buffering appears and slows the traffic down - not 
good for performance. Also, unless you have disks with I/O rates faster 
than your network pipes, it is entirely possible that Squid will max out 
the OS/FS layer capacity despite all the tricks it uses.


If you want to measure it, please do.
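
For example (just one way to watch it - nothing squid-specific assumed here 
beyond the sysstat tools being installed, and the device names are 
placeholders), running something like this on the cache box while traffic 
is flowing shows queue depth, await and %util per spindle:

  iostat -x sda sdb 5

If %util sits near 100% and await climbs whenever a second UFS/AUFS 
cache_dir on the same spindle becomes active, that is the contention 
being described above.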


  COSS is
out-of-favour anyway, because of being unstable, right?


Sort of. COSS is very much in favour for 2.7 installs, but for 3.2+ it 
is out of favour, mostly because Rock was created as an improved 
version instead of simply porting the COSS fixes.


Amos


[squid-users] Re: Squid monitoring, access report shows upto 5 % to 7 % cache usage

2013-08-05 Thread babajaga
Sorry, Amos, not to waste too much time here for an off-topic issue, but
interesting matter anyways:

I ACK your remarks regarding disk controller activity. But, AFAIK, squid
does NOT directly access the disk controller for raw disk I/O; the FS is
always in-between instead. And that means that a (lot of) buffering can
occur before real disk-I/O is done. Which might even lead to spuriously high
response times, when all of a sudden the OS decides to really flush large
disk-buffers to disk. In a good file system (or disk controller,
downstream), request-reordering should happen to allow elevator-style head
movements, or merging of file accesses referencing the same disk blocks.
And all this should happen after squid's activities are completed, but before
the real disk driver/controller starts its work.
BTW, I did some private measurements, not regarding response times because
of various types of cache_dirs, but regarding response times/disk throughput
because of various FS and options thereof. And found that a crippled ext4
works best for me. Default journaling etc. in ext4 has a definite hit on
disk-I/O. Giving up some safety features has a drastic positive influence.
This should be valid for all types of cache_dirs, though.
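
To make that concrete (this is only a guess at the kind of "crippling" 
meant - the exact options were not posted, and every one of these trades 
crash safety for throughput; the device name is a placeholder):

  # create the filesystem without a journal at all
  mkfs.ext4 -O ^has_journal /dev/sdb1
  # and/or mount with relaxed options in /etc/fstab
  /dev/sdb1  /opt/var/spool/squid  ext4  noatime,nodiratime,nobarrier  0 0

Since a Squid cache_dir can always be rebuilt from scratch, losing some 
consistency guarantees on that filesystem is a more acceptable trade 
than it would be elsewhere.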






Re: [squid-users] Re: Squid monitoring, access report shows upto 5 % to 7 % cache usage

2013-08-04 Thread John Joseph
Thanks Augustus for the email 

my information is 

---

[root@proxy squid]# squidclient -h 127.0.0.1 mgr:storedir
HTTP/1.0 200 OK
Server: squid/3.1.10
Mime-Version: 1.0
Date: Sun, 04 Aug 2013 07:01:30 GMT
Content-Type: text/plain
Expires: Sun, 04 Aug 2013 07:01:30 GMT
Last-Modified: Sun, 04 Aug 2013 07:01:30 GMT
X-Cache: MISS from proxy
X-Cache-Lookup: MISS from proxy:3128
Via: 1.0 proxy (squid/3.1.10)
Connection: close

Store Directory Statistics:
Store Entries  : 13649421
Maximum Swap Size  : 58368 KB
Current Store Swap Size: 250112280 KB
Current Capacity   : 43% used, 57% free

Store Directory #0 (aufs): /opt/var/spool/squid
FS Block Size 4096 Bytes
First level subdirectories: 32
Second level subdirectories: 256
Maximum Size: 58368 KB
Current Size: 250112280 KB
Percent Used: 42.85%
Filemap bits in use: 13649213 of 16777216 (81%)
Filesystem Space in use: 264249784/854534468 KB (31%)
Filesystem Inodes in use: 13657502/54263808 (25%)
Flags: SELECTED
Removal policy: lru
LRU reference age: 44.69 days

--

and my squid.conf is as 

--

always_direct allow all
cache_log   /opt/var/log/squid/cache.log
cache_access_log    /opt/var/log/squid/access.log

cache_swap_low 90
cache_swap_high 95

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

acl localnet src 172.16.5.0/24    # RFC1918 possible internal network
acl localnet src 172.17.0.0/22    # RFC1918 possible internal network
acl localnet src 192.168.20.0/24    # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines
always_direct allow local-servers

acl SSL_ports port 443
acl Safe_ports port 80        # http
acl Safe_ports port 21        # ftp
acl Safe_ports port 443        # https
acl Safe_ports port 70        # gopher
acl Safe_ports port 210        # wais
acl Safe_ports port 1025-65535    # unregistered ports
acl Safe_ports port 280        # http-mgmt
acl Safe_ports port 488        # gss-http
acl Safe_ports port 591        # filemaker
acl Safe_ports port 777        # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager

http_access deny !Safe_ports

http_access deny CONNECT !SSL_ports


acl ipgroup src 172.16.5.1-172.16.5.255/32
acl ipgroup src 172.17.0.10-172.17.3.254/32
delay_pools 1
delay_class 1 2 
delay_parameters 1 256/386 14/18
delay_access 1 allow ipgroup
delay_access 1 deny all

http_access allow localnet
http_access allow localhost
http_access allow localnet
http_access allow localhost

http_access deny all

http_port 3128 transparent

hierarchy_stoplist cgi-bin ?

cache_dir aufs /opt/var/spool/squid 57 32 256

coredump_dir /opt/var/spool/squid


maximum_object_size 4 GB


refresh_pattern -i \.flv$ 10080 90% 99 ignore-no-cache override-expire ignore-private

refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90% 432000 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320

refresh_pattern ^ftp:        1440    20%    10080
refresh_pattern ^gopher:    1440    0%    1440
refresh_pattern -i (/cgi-bin/|\?) 0    0%    0
refresh_pattern .        0    40%    40320


visible_hostname proxy

icap_enable on
icap_preview_enable on
icap_preview_size 4096
icap_persistent_connections on
icap_send_client_ip on
icap_send_client_username on
icap_service qlproxy1 reqmod_precache bypass=0 icap://127.0.0.1:1344/reqmod
icap_service qlproxy2 respmod_precache bypass=0 icap://127.0.0.1:1344/respmod
adaptation_access qlproxy1 allow all
adaptation_access qlproxy2 allow all

Guidance and advice requested

Thanks for the reply
Joseph John





- Original Message -
From: babajaga augustus_me...@yahoo.de
To: squid-users@squid-cache.org
Cc: 
Sent: Thursday, 1 August 2013 2:11 PM
Subject: [squid-users] Re: Squid monitoring, access report shows upto 5 % to 7 
%  cache usage

The relatively low byte-hitrate gives the idea that somewhere in your
squid.conf there is a limitation on the max. object size to be cached. It
might be a good idea to modify this one to a larger value,
because it seems you still have a lot of disk space available for caching.
So you might post your squid.conf here.

And, the output of
squidclient -h 127.0.0.1 mgr:storedir






Re: [squid-users] Re: Squid monitoring, access report shows upto 5 % to 7 % cache usage

2013-08-04 Thread Amos Jeffries

On 4/08/2013 7:13 p.m., John Joseph wrote:

Thanks Augustus for the email

my information is

---

[root@proxy squid]# squidclient -h 127.0.0.1 mgr:storedir
HTTP/1.0 200 OK
Server: squid/3.1.10
Mime-Version: 1.0
Date: Sun, 04 Aug 2013 07:01:30 GMT
Content-Type: text/plain
Expires: Sun, 04 Aug 2013 07:01:30 GMT
Last-Modified: Sun, 04 Aug 2013 07:01:30 GMT
X-Cache: MISS from proxy
X-Cache-Lookup: MISS from proxy:3128
Via: 1.0 proxy (squid/3.1.10)
Connection: close

Store Directory Statistics:
Store Entries  : 13649421
Maximum Swap Size  : 58368 KB
Current Store Swap Size: 250112280 KB
Current Capacity   : 43% used, 57% free

Store Directory #0 (aufs): /opt/var/spool/squid
FS Block Size 4096 Bytes
First level subdirectories: 32
Second level subdirectories: 256
Maximum Size: 58368 KB
Current Size: 250112280 KB
Percent Used: 42.85%
Filemap bits in use: 13649213 of 16777216 (81%)
Filesystem Space in use: 264249784/854534468 KB (31%)
Filesystem Inodes in use: 13657502/54263808 (25%)
Flags: SELECTED
Removal policy: lru
LRU reference age: 44.69 days


You appear to have a good case there for upgrading to squid-3.2 or later 
and adding a rock cache_dir.


As you can see, 81% of the Filemap is full. Those are the file number codes 
Squid uses to internally reference stored objects. There is an absolute 
limit of 2^24 (or 16777216 in the above report). That will require an 
average object size of about 35KB to fill your 557 GB storage area. Your 
details earlier said the mean object size actually stored so far was 18KB.
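
Roughly, for anyone wanting to check the arithmetic (taking 557 GB as the 
configured size):

  557 GB ≈ 557 * 1024 * 1024 KB ≈ 584,000,000 KB
  584,000,000 KB / 16,777,216 filemap slots ≈ 35 KB per object

So with the 18KB mean object size reported, the filemap will be exhausted 
with only a little over half of the disk space used - consistent with the 
43% used / 81% filemap figures above.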


If you add a 50GB rock store alongside that UFS directory you should be 
able to double the cached object count.
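
A sketch of what that could look like in squid.conf (the path and the 
numbers are illustrative only; a rock cache_dir needs squid-3.2 or later):

  cache_dir rock /opt/var/spool/squid-rock 51200 max-size=32768

and add min-size=32769 to the existing aufs cache_dir line so the two 
stores do not compete for the same objects (rock in 3.2 only holds 
objects up to 32KB anyway).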



--

and my squid.conf is as

--

always_direct allow all
cache_log   /opt/var/log/squid/cache.log
cache_access_log    /opt/var/log/squid/access.log

cache_swap_low 90
cache_swap_high 95

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

acl localnet src 172.16.5.0/24    # RFC1918 possible internal network
acl localnet src 172.17.0.0/22    # RFC1918 possible internal network
acl localnet src 192.168.20.0/24  # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines
always_direct allow local-servers


You are using "always_direct allow all" above, so this line is never even 
being checked.


Also, always_direct has no meaning when there are no cache_peer lines to 
be overridden (which is the purpose of always_direct). You can remove 
both always_direct lines to make things a bit faster.



acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager

http_access deny !Safe_ports

http_access deny CONNECT !SSL_ports


acl ipgroup src 172.16.5.1-172.16.5.255/32
acl ipgroup src 172.17.0.10-172.17.3.254/32
delay_pools 1
delay_class 1 2
delay_parameters 1 256/386 14/18
delay_access 1 allow ipgroup
delay_access 1 deny all

http_access allow localnet
http_access allow localhost
http_access allow localnet
http_access allow localhost


You have doubled these rules up.


http_access deny all

http_port 3128 transparent


It is a good idea to always have 3128 listening for regular proxy traffic 
and to redirect the intercepted traffic to a separate port. The 
interception port is a private detail only relevant to the NAT 
infrastructure doing the redirection and Squid. It can be firewalled to 
prevent any direct access to the port.
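
A rough sketch of that layout (port 3129 and eth0 are just example values; 
on squid-3.1 the keyword is "transparent", later releases call it 
"intercept"):

  # squid.conf
  http_port 3128                # regular, explicitly configured proxy clients
  http_port 3129 transparent    # only the NAT redirection points at this one

  # on the box doing the NAT redirection
  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3129

With that split, 3129 can then be firewalled off from everything except 
the redirected traffic.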




hierarchy_stoplist cgi-bin ?

cache_dir aufs /opt/var/spool/squid 57 32 256

coredump_dir /opt/var/spool/squid


maximum_object_size 4 GB


Can you try placing this above the cache_dir line please and see if it 
makes any difference?
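
That is, the posted lines simply reordered (no other change):

  maximum_object_size 4 GB
  cache_dir aufs /opt/var/spool/squid 57 32 256

The point is only that maximum_object_size comes before the cache_dir it 
should apply to.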



refresh_pattern -i \.flv$ 10080 90% 99 ignore-no-cache override-expire ignore-private

refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90% 432000 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private


ignore-private and ignore-no-store are actually VERY bad ideas, no 
matter how okay it looks for innocent things like images and archives. 
Even those types

[squid-users] Re: Squid monitoring, access report shows upto 5 % to 7 % cache usage

2013-08-04 Thread babajaga
Like I guessed already in my first reply, you are reaching the max limit of
cached objects in your cache_dir, like Amos explained, which will render
part of your disk space ineffective.

However, as an alternative to using rock, you can set up a second ufs/aufs
cache_dir.
(Especially in case you have a production system, I would suggest 2
ufs/aufs.)

BUT, and this is valid for both alternatives, be careful then to avoid
double caching by applying consistent limits on the size of cached objects.
Note that there are several limits to be considered:
maximum_object_size_in_memory  xx KB
maximum_object_size yyy KB
minimum_object_size 0 KB
cache_dir aufs /var/cacheA/squid27 250 16 256 min-size=0 max-size=
cache_dir aufs /var/cacheB/squid27 250 16 256 min-size=zzz+1 max-size= KB

And when doing this, you should use the newest squid release. Or good, old
2.7 :-)
The reason is that there were a few squid 3.x versions having a bug when
evaluating the combination of different limit options, with the consequence
of not storing certain cacheable objects on disk.
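
As a concrete illustration of that scheme (the 1MB boundary and the sizes 
are made-up numbers, only the pattern matters; the cache_dir min-size and 
max-size options take bytes):

  maximum_object_size_in_memory 64 KB
  maximum_object_size 512 MB
  minimum_object_size 0 KB
  cache_dir aufs /var/cacheA/squid27 250000 16 256 min-size=0 max-size=1048576
  cache_dir aufs /var/cacheB/squid27 250000 16 256 min-size=1048577

Each object then has exactly one cache_dir it can land in, so nothing is 
stored twice.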






Re: [squid-users] Re: Squid monitoring, access report shows upto 5 % to 7 % cache usage

2013-08-04 Thread Amos Jeffries

On 5/08/2013 4:17 a.m., babajaga wrote:

Like I guessed already in my first reply, you are reaching the max limit of
cached objects in your cache_dir, like Amos explained, which will render
part of your disk space ineffective.

However, as an alternative to using rock, you can set up a second ufs/aufs
cache_dir.
(Especially in case you have a production system, I would suggest 2
ufs/aufs.)


Erm. On fast or high traffic proxies Squid uses the disk I/O capacity to 
the limits of the hardware. If you place 2 UFS based cache_dir on one 
physical disk spindle with lots of small objects they will fight for I/O 
resources, with the result of a dramatic reduction in both performance and 
disk lifetime relative to the traffic speed. Rock and COSS cache types 
avoid this by aggregating the small objects into large blocks which get 
read/written all at once.



BUT, and this is valid for both alternatives, be careful then to avoid
double caching by applying
consistent limits on the size of cached objects.


You won't get double caching within one proxy process. This only happens 
with multiple proxies or with SMP workers.

Note that there are several limits to be considered:
maximum_object_size_in_memory  xx KB
maximum_object_size yyy KB
minimum_object_size 0 KB
cache_dir aufs /var/cacheA/squid27 250 16 256 min-size=0 max-size=
cache_dir aufs /var/cacheB/squid27 250 16 256 min-size=zzz+1 max-size= KB

And when doing this, you should use the newest squid release. Or good, old
2.7 :-)
The reason is that there were a few squid 3.x versions having a bug when
evaluating the combination of different limit options, with the consequence
of not storing certain cacheable objects on disk.


That bug still exists; the important thing until it gets fixed is to 
place maximum_object_size above the cache_dir options.


Amos


[squid-users] Re: Squid monitoring, access report shows upto 5 % to 7 % cache usage

2013-08-04 Thread babajaga
Erm. On fast or high traffic proxies Squid uses the disk I/O capacity to
the limits of the hardware. If you place 2 UFS based cache_dir on one
physical disk spindle with lots of small objects they will fight for I/O
resources, with the result of a dramatic reduction in both performance and
disk lifetime relative to the traffic speed. Rock and COSS cache types
avoid this by aggregating the small objects into large blocks which get
read/written all at once.

Really that bad?
As squid does not use raw disk-I/O for any cache type, OS/FS-specific
buffering/merging/delayed writes will always happen before cache objects
are really written to disk. So, a priori, I would not see a serious
difference between ufs/aufs/rock/COSS on the same spindle for the same object
size (besides some overhead for creation of FS-info for ufs/aufs). COSS is
out-of-favour anyway, because of being unstable, right?





Re: [squid-users] Re: Squid monitoring, access report shows upto 5 % to 7 % cache usage

2013-08-01 Thread John Joseph
         5%
Wednesday, 17 July 2013    38    1021    25538    80.74G    6%
Tuesday, 16 July 2013      30    1021    27427    85.19G    5%
Monday, 15 July 2013       31    1021    27369    96.13G    5%
Sunday, 14 July 2013       29    1019    23879    74.26G    5%
Saturday, 13 July 2013     28    1014    20715    70.90G    5%
Friday, 12 July 2013       22    1010    22537    69.59G    5%
Thursday, 11 July 2013     22    1021    23031    86.52G    4%
Wednesday, 10 July 2013    19    1015    22542    73.21G    5%
Tuesday, 09 July 2013      16    1020    22408    74.67G    5%
Monday, 08 July 2013       23    1021    23594    72.99G    5%
Sunday, 07 July 2013       28    1021    23408    71.97G    6%
Saturday, 06 July 2013     17    1006    21390    64.28G    5%
Friday, 05 July 2013       23     994    22685    61.42G    5%
Thursday, 04 July 2013     20    1016    25792    71.54G    5%
Wednesday, 03 July 2013    24    1017    25178    74.03G    5%
Tuesday, 02 July 2013      37    1019    29740    83.19G    5%
Monday, 01 July 2013       22    1020    25175    77.47G    5%
Sunday, 30 June 2013        5     343     7083    10.51G    4%
Saturday, 29 June 2013      1       7     1034     1.05G    2%
Friday, 28 June 2013        1       9     1846     3.62G    1%

Current active users:     155
Current date and time is:     01-08-2013 10:55:24

-
Requesting Guidance and Advice 

thanks 

Joseph John





- Original Message -
From: babajaga augustus_me...@yahoo.de
To: squid-users@squid-cache.org
Cc: 
Sent: Tuesday, 30 July 2013 12:13 PM
Subject: [squid-users] Re: Squid monitoring, access report shows upto 5 % to 7 
%  cache usage

You should install and use
http://wiki.squid-cache.org/Features/CacheManager

This gives you a lot of info regarding cache performance, like hit rate etc.


Having 556 GB of cache within one cache dir might already hit the upper
limit of the max. number of cached objects, depending upon the avg size of
objects in cache.
Which could mean that only part of the 556GB will ever be used.

Solution: Create different cache dirs for various object-size classes.
But before doing this, post the info you get from CacheManager, like avg
object size, cache fill rate etc.
squidclient is the alternative to CacheManager, in case you do not want to
use web access.






Re: [squid-users] Re: Squid monitoring, access report shows upto 5 % to 7 % cache usage

2013-08-01 Thread Amos Jeffries

On 1/08/2013 6:35 p.m., John Joseph wrote:

Hi Amos,Ahmad,Babajaga
Thanks for your advice and feed back, I am posting more information

---
the HIT,MISS,REFRESH details are

cat  /opt/var/log/squid/access.log  | grep -c HIT
13810283
  cat  /opt/var/log/squid/access.log  | grep -c MISS
57874593
cat  /opt/var/log/squid/access.log  | grep -c REFRESH
6760966


Doing a rough calculation here I get just under 18%, which is low but 
inside the 15-40% range normally seen.
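
Spelling that calculation out from the grep counts above:

  HIT + REFRESH + MISS = 13810283 + 6760966 + 57874593 = 78445842 requests
  13810283 / 78445842  ≈ 17.6% logged as some form of HIT
   6760966 / 78445842  ≈  8.6% logged as REFRESH

(REFRESH is counted separately here; folding it in as a HIT variant would 
push the ratio above 26%.)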





squidclient info result

squidclient -h 127.0.0.1 mgr:info
HTTP/1.0 200 OK
Server: squid/3.1.10
Mime-Version: 1.0
Date: Thu, 01 Aug 2013 06:10:06 GMT
Content-Type: text/plain
Expires: Thu, 01 Aug 2013 06:10:06 GMT
Last-Modified: Thu, 01 Aug 2013 06:10:06 GMT
X-Cache: MISS from proxy
X-Cache-Lookup: MISS from proxy:3128
Via: 1.0 proxy (squid/3.1.10)
Connection: close

Squid Object Cache: Version 3.1.10
Start Time:     Wed, 31 Jul 2013 23:08:41 GMT
Current Time:   Thu, 01 Aug 2013 06:10:06 GMT
Connection information for squid:
        Number of clients accessing cache:              574
        Number of HTTP requests received:               508778
        Number of ICP messages received:                0
        Number of ICP messages sent:                    0
        Number of queued ICP replies:                   0
        Number of HTCP messages received:               0
        Number of HTCP messages sent:                   0
        Request failure ratio:                          0.00
        Average HTTP requests per minute since start:   1207.3
        Average ICP messages per minute since start:    0.0
        Select loop called: 62199585 times, 0.407 ms avg
Cache information for squid:
        Hits as % of all requests:      5min: 21.6%, 60min: 27.1%
        Hits as % of bytes sent:        5min: 6.6%, 60min: 5.6%


Thank you. This explains the whole story.

MySAR is reporting your cache bandwidth savings HIT ratio (bytes sent).

What you have is a reasonable HIT ratio of 21-27% by request count; however, 
it appears that is built mostly from small objects. The larger objects 
are mostly MISS events, which drags your byte HIT ratio way down.


However there is more to the story... the log numbers you show above 
indicate that about 9% of all requests through your cache are REFRESH, 
which may be recorded as a HIT with no byte count associated even on the 
largest of objects. With high REFRESH traffic there is also a high 
amount of IMS traffic. Those two drag the byte HIT ratio down while also 
being a good thing - the byte count is down because there actually are 
fewer bytes used.
NP: Squid does not yet record how much savings is gained from REFRESH or 
IMS traffic, which would help show this a bit better.




        Memory hits as % of hit requests:       5min: 5.6%, 60min: 9.0%
        Disk hits as % of hit requests:         5min: 46.9%, 60min: 47.7%


And the breakdown of where those HITs are coming from shows mostly disk 
activity and very little memory benefit.



MySAR results are



DATE                       USERS  HOSTS  SITES  BYTES/TRAFFIC  Cache Percent

Thursday, 01 August 2013    11     797   13899   36.11G         4%
Wednesday, 31 July 2013     42    1024   29862   89.22G         5%
Tuesday, 30 July 2013       19    1023   27096   85.24G         5%
Monday, 29 July 2013        29    1022   26425   82.55G         5%


Hmm. These look like your byte-count HIT ratio percentages.

Amos


[squid-users] Re: Squid monitoring, access report shows upto 5 % to 7 % cache usage

2013-08-01 Thread babajaga
The relatively low byte-hitrate gives the idea that somewhere in your
squid.conf there is a limitation on the max. object size to be cached. It
might be a good idea to modify this one to a larger value,
because it seems you still have a lot of disk space available for caching.
So you might post your squid.conf here.

And, the output of
 squidclient -h 127.0.0.1 mgr:storedir







[squid-users] Re: Squid monitoring, access report shows upto 5 % to 7 % cache usage

2013-07-30 Thread babajaga
You should install and use
 http://wiki.squid-cache.org/Features/CacheManager

This gives you a lot of info regarding cache performance, like hit rate etc.


Having 556 GB of cache within one cache dir might already hit the upper
limit of the max. number of cached objects, depending upon the avg size of
objects in cache.
Which could mean that only part of the 556GB will ever be used.

Solution: Create different cache dirs for various object-size classes.
But before doing this, post the info you get from CacheManager, like avg
object size, cache fill rate etc.
squidclient is the alternative to CacheManager, in case you do not want to
use web access.


