Re: [squid-users] cache hit rate isn't what I'd expect
Here ya go:

26/Sep/2017:20:10:27 137 10.93.3.47 TCP_HIT/200 11265 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_NONE/-
26/Sep/2017:20:10:33 46 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:10:42 3 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:10:47 2 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:10:52 5 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:10:56 234 10.93.3.47 TCP_HIT/200 11265 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_NONE/-
26/Sep/2017:20:11:11 3 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:11:15 3 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:11:19 6 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:11:24 5 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:11:28 3 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:11:32 1 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:11:37 2 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:11:41 2 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:11:48 3 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:11:53 4 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:11:57 6 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:12:01 7 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:12:06 5 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:12:10 4 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:12:14 11 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:12:19 3 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:12:23 6 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:12:28 4 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:12:32 6 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:12:37 96 10.93.3.47 TCP_HIT/200 11265 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_NONE/-
26/Sep/2017:20:12:41 2 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:12:49 225 10.93.3.47 TCP_HIT/200 11266 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_NONE/-
26/Sep/2017:20:12:59 0 10.93.3.47 TCP_HIT/200 11265 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_NONE/-
26/Sep/2017:20:13:03 2 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:13:08 0 10.93.3.47 TCP_HIT/200 11265 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_NONE/-
26/Sep/2017:20:13:13 2 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:13:27 3 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:13:33 2 10.93.3.47 TCP_MISS/200 11259 GET https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr - HIER_DIRECT/192.229.163.180
26/Sep/2017:20:13:37 4 10.93.3.47 TCP_MISS/200
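The pattern above is easier to see in aggregate. A small helper can count each result code; this is a sketch assuming field 4 is where this logformat puts the TCP_HIT/TCP_MISS status, so adjust the field position for your own logformat:

```shell
# Count Squid result codes in an access log excerpt like the one
# above; field 4 holds entries such as TCP_HIT/200 or TCP_MISS/200.
summarize_hits() {
  awk '{ split($4, s, "/"); n[s[1]]++ } END { for (k in n) print k, n[k] }' "$1" | sort
}
```

Usage: `summarize_hits access.log` prints one line per result code with its count.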
Re: [squid-users] cache hit rate isn't what I'd expect
On 29/09/17 11:29, Aaron Turner wrote:
> So this grep through my access logs for this single URL does a good job illustrating a rather interesting problem:
> $ grep -h 'https://static.licdn.com/sc/h/ddzuq7qeny6qn0ysh3hj6pzmr text/css ip_index=0,client=m0078269' access.*.log | sort
> ...
> At first I thought this was because I have a bunch of clients, each of which behaves exactly the same except for one thing: the client includes a unique request header that squid strips off before forwarding to the server (you can see it logged as client=mX_). But in this case I've controlled for that and only grep'd for a single client's requests. I've even tried setting "vary_ignore_expire on", but that doesn't seem to be a complete fix. I can't for the life of me understand the low hit rate, though.

The duration and size fields are quite useful for detecting reasons for HIT/MISS.

Request headers should not affect response caching unless they are listed in the server's Vary header. In this case the server is delivering broken Vary responses. redbot.org says it is using "Vary: Accept-Encoding" sometimes, so both the Vary and Accept-Encoding headers would be useful info to log.

I expect it is the usual problem of clients fighting over whose variant gets cached when this type of server breakage happens - when the Vary header changes or disappears, old variants become unfindable until it changes back.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
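Amos's suggestion to log the Vary and Accept-Encoding headers can be done with a custom logformat. A sketch, assuming standard Squid logformat codes (`%{Vary}<h` logs the reply's Vary header, `%{Accept-Encoding}>h` the request's); the format name and log path here are assumptions:

```shell
# Write a hypothetical squid.conf fragment that logs the headers
# involved in Vary-based cache splitting (file path and names assumed).
cat > ./squid-vary.conf <<'EOF'
logformat varylog %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %un %Sh/%<a %mt "%{Vary}<h" "%{Accept-Encoding}>h"
access_log /var/log/squid/access-vary.log varylog
EOF
grep -c '^logformat' ./squid-vary.conf
```

With those two extra columns in the log, a Vary header that flips between requests shows up immediately next to each MISS.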
Re: [squid-users] squid HIT and Cisco ACL
On 2016-11-07 20:11, Juan C. Crespo R. wrote:
> Hi, thanks for your response and help.
> 1. Cache: Version 3.5.19 Service Name: squid configure options: '--prefix=/usr/local/squid' '--enable-storeio=rock,diskd,ufs,aufs' '--enable-removal-policies=lru,heap' '--disable-pf-transparent' '--enable-ipfw-transparent' '--with-large-files' '--enable-delay-pools' '--localstatedir=/usr/local/squid/var/run' '--disable-select' '--enable-ltdl-convenience' '--enable-zph-qos'
> 2. The only intermediate device is a Cisco 3750G12 switch with no policy or special configuration between the Squid box and the Cisco CMTS.

If 'mls qos' is enabled on your Catalyst, it will clear any QoS marks by default. If that is not the case, you can mirror Squid's traffic (a monitor session on the Catalyst) to a packet analyzer to check whether the QoS marks are applied as expected.

Garri
Re: [squid-users] squid HIT and Cisco ACL
Hi, thanks for your response and help.

1. Cache: Version 3.5.19 Service Name: squid
configure options: '--prefix=/usr/local/squid' '--enable-storeio=rock,diskd,ufs,aufs' '--enable-removal-policies=lru,heap' '--disable-pf-transparent' '--enable-ipfw-transparent' '--with-large-files' '--enable-delay-pools' '--localstatedir=/usr/local/squid/var/run' '--disable-select' '--enable-ltdl-convenience' '--enable-zph-qos'

2. The only intermediate device is a Cisco 3750G12 switch with no policy or special configuration between the Squid box and the Cisco CMTS.

Thanks again

On 07/11/2016 08:17 a.m., Garri Djavadyan wrote:
> On Mon, 2016-11-07 at 06:25 -0400, Juan C. Crespo R. wrote:
>> Good morning guys, I've been trying to make a few ACLs to catch and then improve the BW of the HITs sent from my Squid box to my CMTS, and I can't find any way to do it.
>> Squid.conf: qos_flows tos local-hit=0x30
>> Cisco CMTS: ip access-list extender JC
>> Int giga0/1
>> ip address 172.25.25.30 255.255.255.0
>> ip access-group JC in
>> show access-list JC
>> 10 permit ip any any tos 12
>> 20 permit ip any any dscp af12
>> 30 permit ip any any (64509 matches)
>> Thanks
> Hi,
> 1. What version of Squid are you using? Also, please provide configure options (squid -v).
> 2. Are you sure that intermediate devices don't clear DSCP bits before reaching the router?
I've tested the feature using 4.0.16-20161104-r14917 with almost default configure options:

# sbin/squid -v
Squid Cache: Version 4.0.16-20161104-r14917 Service Name: squid
configure options: '--prefix=/usr/local/squid40' '--disable-optimizations' '--with-openssl' '--enable-ssl-crtd'

And with almost default configuration:

# diff etc/squid.conf.default etc/squid.conf
76a77
> qos_flows tos local-hit=0x30

Using tcpdump I see that the HIT reply has DSCP AF12:

17:14:56.837675 IP (tos 0x30, ttl 64, id 41134, offset 0, flags [DF], proto TCP (6), length 2199) 127.0.0.1.3128 > 127.0.0.1.42848: Flags [P.], cksum 0x068c (incorrect -> 0x478b), seq 1:2148, ack 161, win 350, options [nop,nop,TS val 607416387 ecr 607416387], length 2147
Re: [squid-users] squid HIT and Cisco ACL
On Mon, 2016-11-07 at 06:25 -0400, Juan C. Crespo R. wrote:
> Good morning guys,
>
> I've been trying to make a few ACLs to catch and then improve the BW of the HITs sent from my Squid box to my CMTS, and I can't find any way to do it.
>
> Squid.conf: qos_flows tos local-hit=0x30
>
> Cisco CMTS: ip access-list extender JC
>
> Int giga0/1
> ip address 172.25.25.30 255.255.255.0
> ip access-group JC in
>
> show access-list JC
> 10 permit ip any any tos 12
> 20 permit ip any any dscp af12
> 30 permit ip any any (64509 matches)
>
> Thanks

Hi,

1. What version of Squid are you using? Also, please provide configure options (squid -v).

2. Are you sure that intermediate devices don't clear DSCP bits before reaching the router?

I've tested the feature using 4.0.16-20161104-r14917 with almost default configure options:

# sbin/squid -v
Squid Cache: Version 4.0.16-20161104-r14917 Service Name: squid
configure options: '--prefix=/usr/local/squid40' '--disable-optimizations' '--with-openssl' '--enable-ssl-crtd'

And with almost default configuration:

# diff etc/squid.conf.default etc/squid.conf
76a77
> qos_flows tos local-hit=0x30

Using tcpdump I see that the HIT reply has DSCP AF12:

17:14:56.837675 IP (tos 0x30, ttl 64, id 41134, offset 0, flags [DF], proto TCP (6), length 2199) 127.0.0.1.3128 > 127.0.0.1.42848: Flags [P.], cksum 0x068c (incorrect -> 0x478b), seq 1:2148, ack 161, win 350, options [nop,nop,TS val 607416387 ecr 607416387], length 2147
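To double-check the marking on the wire: DSCP occupies the top six bits of the IP TOS byte, so the configured local-hit=0x30 corresponds to DSCP decimal 12, i.e. AF12. A small sketch of that arithmetic, plus a capture filter (the interface and port in the tcpdump line are assumptions for this sketch):

```shell
# AF12 is DSCP decimal 12; shifted into the TOS byte that is 0x30,
# matching the "qos_flows tos local-hit=0x30" setting above.
printf 'TOS byte for AF12: 0x%02x\n' $((12 << 2))

# Capture only Squid replies carrying that mark (iface/port assumed):
# tcpdump -n -i eth0 -v 'src port 3128 and (ip[1] & 0xfc) == 0x30'
```

If the capture on the Squid box shows tos 0x30 but the CMTS ACL counters don't move, something in between is rewriting the DSCP bits.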
Re: [squid-users] Squid HIT ratio
> Fred, look ;) http://i.imgur.com/UBu13g0.png Store-ID rulez! :)

Yes, very interesting - can you share your byte ratio, please? I will take a look at increasing my cache as I discussed with Amos, but I can't touch the SSL part (no bump for me).

http://wiki.squid-cache.org/Features/StoreID
The Squid configuration example seems wrong - for YT - no? Google has increased its use of HTTPS, and now we can't access YouTube without SSL?

Thanks, I take any advice I can get, especially for delicate users :)
Re: [squid-users] Squid HIT ratio
http://i.imgur.com/3jwftYC.png

The byte ratio is a bit less, of course, but not so dramatically. YT seems not cacheable now; I did some research and AFAIK we can't cache YT now without a VERY special Store-ID rewriter. Also, of course, I use SSL-bump. SSL makes up over 60% of my traffic; without bump I can't cache it. A 40% hit ratio (or lower) is an issue.

On 26.08.15 0:43, FredB wrote:
>> Fred, look ;) http://i.imgur.com/UBu13g0.png Store-ID rulez! :)
> Yes, very interesting - can you share your byte ratio, please? I will take a look at increasing my cache as I discussed with Amos, but I can't touch the SSL part (no bump for me).
> http://wiki.squid-cache.org/Features/StoreID
> The Squid configuration example seems wrong - for YT - no? Google has increased its use of HTTPS, and now we can't access YouTube without SSL?
> Thanks, I take any advice I can get, especially for delicate users :)
Re: [squid-users] Low Hit Rate
On 09/03/2011 16:31, Mark George wrote:
> Hi, I've recently installed Squid 2.7 Stable on CentOS 5 and am experiencing an extremely low hit rate at approximately 5%. If I tail the access log I see a lot of TCP_MISS, and looking at the store log I see a lot of RELEASE and very little SWAP_OUT. Any ideas how I can go about diagnosing the problem?
> Mark

Please read: http://wiki.squid-cache.org/SquidFaq/ConfiguringSquid

--
Marcello Romani
RE: [squid-users] Low Hit Rate
Already read that.

-----Original Message-----
From: Marcello Romani [mailto:mrom...@ottotecnica.com]
Sent: 10 March 2011 09:09
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Low Hit Rate

On 09/03/2011 16:31, Mark George wrote:
> Hi, I've recently installed Squid 2.7 Stable on CentOS 5 and am experiencing an extremely low hit rate at approximately 5%. If I tail the access log I see a lot of TCP_MISS, and looking at the store log I see a lot of RELEASE and very little SWAP_OUT. Any ideas how I can go about diagnosing the problem?
> Mark

Please read: http://wiki.squid-cache.org/SquidFaq/ConfiguringSquid

--
Marcello Romani

This e-mail has been scanned for all viruses by Star. The service is powered by MessageLabs. For more information on a proactive anti-virus service working around the clock, around the globe, visit: http://www.star.net.uk
Re: [squid-users] Low Hit Rate
On Wednesday 9 March 2011 09:31:31, Mark George wrote:
> Hi, I've recently installed Squid 2.7 Stable on CentOS 5 and am experiencing an extremely low hit rate at approximately 5%. If I tail the access log I see a lot of TCP_MISS, and looking at the store log I see a lot of RELEASE and very little SWAP_OUT. Any ideas how I can go about diagnosing the problem?
> Mark

The Squid hit rate depends on many factors. Can you tell us about your environment: how many users, bandwidth consumption for HTTP surfing, etc.?

LD
RE: [squid-users] Low Hit Rate
It's in the testing stage at the moment, so there are about 20 users on the proxy. From the squid graphs I generated a little while back, here's some more info on the usage:

Graph of TCP Access (5 minute total):
Total Accesses: 75095
Average Accesses: 3128.95 per hour
Total Cache Hits: 6960
Average Cache Hits: 290 per hour
% Cache Hits: 9.26 %
Total Cache IMS Hits: 1569
Average Cache IMS Hits: 65.37 per hour
Total Cache Misses: 36863
Average Cache Misses: 1535.95 per hour
% Cache Misses: 49.08 %

Graph of TCP Transfers (5 minute total):
Total Transfers: 608.4 Mb
Average Transfers: 25.3 Mb per hour
Total Cache Hits: 38.5 Mb
Average Cache Hits: 1.6 Mb per hour
% Cache Hits: 6.34 %
Total Cache IMS Hits: 531.1 Kb
Average Cache IMS Hits: 22.1 Kb per hour
Total Cache Misses: 509.7 Mb
Average Cache Misses: 21.2 Mb per hour
% Cache Misses: 83.77 %

-----Original Message-----
From: Luis Daniel Lucio Quiroz [mailto:luis.daniel.lu...@gmail.com]
Sent: 09 March 2011 17:32
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Low Hit Rate

On Wednesday 9 March 2011 09:31:31, Mark George wrote:
> Hi, I've recently installed Squid 2.7 Stable on CentOS 5 and am experiencing an extremely low hit rate at approximately 5%. If I tail the access log I see a lot of TCP_MISS, and looking at the store log I see a lot of RELEASE and very little SWAP_OUT. Any ideas how I can go about diagnosing the problem?
> Mark

The Squid hit rate depends on many factors. Can you tell us about your environment: how many users, bandwidth consumption for HTTP surfing, etc.?

LD
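The percentages above follow directly from the raw counters. As a sketch, the request hit figure is 6960 hits out of 75095 accesses; reproducing the graphed 9.26% requires truncating to two decimal places rather than rounding, which the graphing tool apparently does:

```shell
# 6960 cache hits out of 75095 total accesses, truncated to two
# decimal places as the graph figures appear to be.
awk 'BEGIN { r = 6960 * 100 / 75095; printf "%.2f%%\n", int(r * 100) / 100 }'
# -> 9.26%
```

The byte-ratio figures presumably differ slightly because the graphs compute from exact byte counts rather than the already-rounded Mb totals shown here.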
Re: [squid-users] Low Hit Rate
On Wednesday 9 March 2011 11:42:11, Mark George wrote:

Just guessing: is your disk cache full? I mean, if it is almost empty, the likelihood of a hit is very small. You should also do some fine tuning; don't expect that an out-of-the-box Squid will do the right job.
Re: [squid-users] Object Hit/Byte Hit accounting with Multiple Instances
On 15/12/10 14:38, Michael Hendrie wrote:
> Hello list, I have a server running 3 instances of squid-3.0.STABLE19 using a configuration similar to that documented at http://wiki.squid-cache.org/MultipleInstances. Each instance has all other instances configured as siblings, using the proxy-only directive to allow sharing of cache without duplicating objects. This setup is working very well and has increased server performance by over 50%.
>
> I'm now trying to get an accurate indication of the byte savings I'm achieving with this configuration, however I'm not sure that the calculations I'm using are giving the correct results. Because each instance maintains a separate cache_dir, this seems to be a little difficult to calculate: when instance 1 records a request as a MISS it may in fact be a HIT (from an entire-system point of view) if the object is retrieved from the cache of instance 2 or 3.
>
> Using a combination of squidclient mgr:counters and SNMP, I grab counter values from each instance, tally, and use the following formula to calculate the byte hit ratio:
>
> (mgr:counters:client_http.hit_kbytes_out + snmp:cacheClientHTTPHitKb.sibling_addresses) / (mgr:counters:client_http.kbytes_out - snmp:cacheClientHTTPHitKb.sibling_addresses) * 100 = % cache byte hit ratio
>
> Using this formula, I always seem to get inconsistencies between what squid reports and what my benchmarking tool reports (web-polygraph). In the few cases I've checked so far, squid is always reporting a 4-5% lower byte hit than what web-polygraph reports.

That sounds about the size of header overheads to me.

Give 3.2 workers a try out now and see if that is usable. The stats calculations are fixed there for multiple workers.

Amos

--
Please be using
Current Stable Squid 2.7.STABLE9 or 3.1.9
Beta testers wanted for 3.2.0.3
Re: [squid-users] Object Hit/Byte Hit accounting with Multiple Instances
On 16/12/2010, at 12:44 PM, Amos Jeffries wrote:
> On 15/12/10 14:38, Michael Hendrie wrote:
>> [description of the multi-instance sibling setup and byte hit ratio formula snipped]
>> Using this formula, I always seem to get inconsistencies between what squid reports and what my benchmarking tool reports (web-polygraph). In the few cases I've checked so far, squid is always reporting a 4-5% lower byte hit than what web-polygraph reports.
> That sounds about the size of header overheads to me.
> Give 3.2 workers a try out now and see if that is usable. The stats calculations are fixed there for multiple workers.

Unfortunately I must use this version (for the moment) for reasons beyond my control. Just to clarify:

1). Are you saying that headers aren't counted in any of the hit_kb_out counters, so I would still see the discrepancies in figures between web-polygraph and a single-instance Squid (never had a need to check before now)?

2). Excluding the fact that headers may not be counted, does the formula I'm using sound like the correct way to calculate hit % with a multi-instance setup?

3). From the 3.2 wiki page - http://wiki.squid-cache.org/Features/SmpScale - "Currently, Squid workers do not share and do not synchronize other resources or services, including: object caches (memory and disk) -- there is an active project to allow such sharing." Can 3.2 workers be configured with other workers as siblings to make use of their cache?
Re: [squid-users] Object Hit/Byte Hit accounting with Multiple Instances
On 16/12/10 17:37, Michael Hendrie wrote:
> [earlier discussion of the multi-instance byte hit calculation snipped]
> 1). Are you saying that headers aren't counted in any of the hit_kb_out counters, so I would still see the discrepancies in figures between web-polygraph and a single-instance Squid (never had a need to check before now)?

I'm saying 4-5% is about the header size. I can't find anywhere in code which is eliding them, but I didn't spend much time looking.

> 2). Excluding the fact that headers may not be counted, does the formula I'm using sound like the correct way to calculate hit % with a multi-instance setup?

It makes sense to me. I'd use the SNMP counters for everything, though; calls to cachemgr will add avoidable skew. The local traffic to all clients (peers included) can be found at cacheHttpOutKb, and the local total from all servers (peers included) at cacheServerInKb.

> 3). From the 3.2 wiki page - http://wiki.squid-cache.org/Features/SmpScale - "Currently, Squid workers do not share and do not synchronize other resources or services, including: object caches (memory and disk) -- there is an active project to allow such sharing." Can 3.2 workers be configured with other workers as siblings to make use of their cache?

Yes. They are essentially multiple instances running out of one config file, with some new config tools/settings to make the management far easier.

Amos
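Michael's per-box formula can be sketched with made-up counter values; all numbers below are assumptions, purely to show the arithmetic:

```shell
# Hypothetical per-instance counters, in KB (values are assumptions):
hit_kb=5000          # client_http.hit_kbytes_out from mgr:counters
out_kb=20000         # client_http.kbytes_out from mgr:counters
sibling_hit_kb=1000  # cacheClientHTTPHitKb summed over sibling addresses

# (locally served hits + hits served to siblings) over
# (client output minus the sibling-served bytes), as a percentage:
awk -v h="$hit_kb" -v o="$out_kb" -v s="$sibling_hit_kb" \
  'BEGIN { printf "%.2f%%\n", (h + s) / (o - s) * 100 }'
# -> 31.58%
```

Summing the same expression over all three instances, then dividing, gives the system-wide figure; doing the division per-instance and averaging would weight the instances incorrectly.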
Re: [squid-users] Near Hit
On Sun, 2010-04-11 at 04:53 -0700, nima chavooshi wrote:
> Hi. First of all, thanks to everyone who helps to develop squid. To get information from Squid I use squidclient mgr:info. Does the hit rate number include the near-hit rate?

Partly.

> Another question: what is "Not-Modified Replies"?

304 responses, where the server (or proxy) tells the requesting client (or proxy) that what the client already has in its own cache is up to date.

Regards
Henrik
Re: [squid-users] cache hit 100%
UK SquidUser (AXA-TECH-UK) wrote:
> Hi... we are running squid 2.6 stable 17, built within the last 6 months. The problem we are experiencing is that one of our servers has hit 100% on the proxycache partition. It is configured in its own filesystem and shares with nothing; it has a partition of 36Gb and squid is configured in squid.conf to use 28Gb. Once it has got to 28Gb we see the following messages in cache.log (some entries have been deleted, hopefully I have the relevant ones below):
>
> Rebuilding storage in /proxycache (DIRTY)
> Store rebuilding is 0.1% complete
> 2009/01/06 12:36:11| diskHandleWrite: FD 536: disk write error: (28) No space left on device
> 2009/01/06 12:36:11| storeAufsWriteDone: got failure (-6)
> 2009/01/06 12:36:11| storeSwapOutFileClosed: dirno 0, swapfile 1392, errflag=-6 (28) No space left on device
> 2009/01/06 12:36:29| WARNING: newer swaplog entry for dirno 0, fileno 15D8
> 2009/01/06 12:44:13| Store rebuilding is 92.2% complete
> 2009/01/06 12:44:15| WARNING: Disk space over limit: 13552 KB 13512 KB
> 2009/01/06 12:44:26| WARNING: Disk space over limit: 13552 KB 13512 KB

Huh. Here Squid is saying that the disk usage is ~13MB.

> FATAL: xcalloc: Unable to allocate 1 blocks of 28 bytes!
> Squid Cache (Version 2.6.STABLE17): Terminated abnormally.
> CPU Usage: 531.367 seconds = 482.302 user + 49.066 sys
> Maximum Resident Size: 0 KB
> Page faults with physical i/o: 0
> Memory usage for squid via mallinfo():
> total space in arena: -307440 KB

I don't think that's supposed to be negative...

> Ordinary blocks: -308814 KB 87176 blks
> Small blocks: 0 KB 0 blks
> Holding blocks: 25248 KB 6 blks
> Free Small blocks: 0 KB
> Free Ordinary blocks: 1373 KB
> Total in use: -283566 KB 100%
> Total free: 1373 KB 0%
> 2009/01/06 12:44:27| Not currently OK to rewrite swap log.
> 2009/01/06 12:44:27| storeDirWriteCleanLogs: Operation aborted.
> 2009/01/06 12:44:38| Store rebuilding is 0.0% complete
>
> And this process repeats continually.
> We have taken it out of service, but previous to that it did look like it was processing requests from the cache - i.e. TCP_HITs were appearing in the log. But surely what is appearing in the cache log is incorrect. Squid is configured with the default cache_swap_low and cache_swap_high, and:
>
> cache_mem 1024 MB
> maximum_object_size 16384 KB
> maximum_object_size_in_memory 64 KB
> cache_replacement_policy heap GDSF
> memory_replacement_policy heap GDSF
> cache_dir aufs /proxycache 28000 256 256
>
> Can anyone advise why the proxycache hits 100% and why it appears to have problems when it does?

My first guess would be either swap.state or filesystem corruption. Find and remove the swap.state files (if you didn't specify a location in squid.conf, they should reside in /proxycache) and see if that fixes it. If that doesn't do it, reformat /proxycache, run squid -z to rebuild the directory structure, and start it up.

> Thanks, K.
> Kev Shurmer
> Network Analyst - TS Data Networks
> AXA Technology Services
> kev.shur...@axa-tech.com
> Tel. : +44 1 253 68 4652 - Mob. : +44 7974 83 0090

Chris
Re: [squid-users] Byte Hit Ratio since last restart
On Mon, 2008-05-19 at 16:04 +0530, selvi nandu wrote:
> I would like to know the byte hit ratio (the ratio of total bytes which are hits to the total bytes transferred) since squid last restarted. Is there a way in SNMP to find this?

Sounds like you are looking for cacheRequestHitRatio and cacheRequestByteRatio (requests or bytes, respectively). But I usually graph the relation between cacheHttpOutKb and cacheServerInKb, giving a continuous record while Squid is running.

Regards
Henrik
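Henrik's graphing approach reduces to one division over the two counters. A sketch with example values; the commented snmpget invocation (community, port, MIB name) is an assumption that varies per setup, and the counter values below are made up:

```shell
# Fetch the two counters Henrik graphs (invocation assumed):
# snmpget -v2c -c public localhost:3401 SQUID-MIB::cacheHttpOutKb.0 SQUID-MIB::cacheServerInKb.0

http_out_kb=10000   # example cacheHttpOutKb: KB sent to clients
server_in_kb=7000   # example cacheServerInKb: KB fetched from origin servers
awk -v o="$http_out_kb" -v i="$server_in_kb" \
  'BEGIN { printf "byte hit ratio: %.1f%%\n", (o - i) / o * 100 }'
# -> byte hit ratio: 30.0%
```

Sampling both counters periodically and plotting the delta between samples gives the continuous record Henrik describes, rather than a since-restart average.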
Re: [squid-users] Low HIT ratio with Coss
Hi Usman,

usman wrote:
> Hi everyone, I am getting a very low request hit ratio on my squid cache since I implemented COSS. The caching directories containing the COSS stripe files are filling up very, very slowly.
>
> /dev/amrd1s1d 16G 136M 15G 1% /cache1
> /dev/amrd2s1d 16G 141M 15G 1% /cache2
> /dev/amrd3s1d 33G 5.9G 24G 20% /cache3

From what I understand, COSS by default stores smaller objects in comparison to UFS, AUFS or DISKD. This may explain why the COSS directories are filling up slowly.

> You can see the comparison between the diskd and coss directories. The cache_dir settings are:
>
> cache_dir coss /cache1 12000 max-size=1048576 max-stripe-waste=524288 membufs=500
> cache_dir coss /cache2 12000 max-size=1048576 max-stripe-waste=524288 membufs=500
> cache_dir diskd /cache3 28000 16 256 Q1=72 Q2=64

My COSS cache_dirs are as follows:

cache_dir coss /cache1/squid/coss 8192 max-size=131072 max-stripe-waste=16384 block-size=1024 membufs=500

> On other caches with the same refresh pattern (all diskd or aufs) I get around 45-55% request HIT ratio. Currently it's 12% with COSS. The caching directories are not fully loaded yet, but I still feel this is a very low request hit ratio.

On one of my FreeBSD Squid boxes utilizing COSS with the following uptime:

Squid Object Cache: Version 2.6.STABLE16
Start Time: Sun, 09 Sep 2007 11:31:49 GMT
Current Time: Thu, 25 Oct 2007 16:33:23 GMT

I get the following results:

Request Hit Ratios: 5min: 47.3%, 60min: 46.1%
Byte Hit Ratios: 5min: 17.9%, 60min: 17.5%
Request Memory Hit Ratios: 5min: 0.2%, 60min: 0.3%
Request Disk Hit Ratios: 5min: 55.2%, 60min: 54.5%
Cache Hits: 0.00767 0.00767
Near Hits: 1.38447 1.31166

> Where is something wrong in my config?

I am sure that the low HIT ratio is not a configuration problem.

> Also please suggest the block-size for the coss settings; I am using FreeBSD 6.2 with the UFS2 file system (default file system block size 16384 bytes). RAM is 4 GB, SMP system.

How long has your FreeBSD squid box been running?
My advice is to be a little more patient with COSS. Let the COSS directories get filled up; I am sure that your request HIT ratios will gradually increase.

> Regards, usman

Thanking you...

--
With best regards and good wishes,
Yours sincerely,
Tek Bahadur Limbu
System Administrator (TAG/TDG Group)
Jwl Systems Department
Worldlink Communications Pvt. Ltd.
Jawalakhel, Nepal
http://www.wlink.com.np
http://teklimbu.wordpress.com
Re: [squid-users] Zero hit rate on reverse proxy server with Squid
On Tue, 2006-05-16 at 18:19 -0700, Michael T. Halligan wrote:

a) Authentication was used, and the server did not indicate the content is public (not requiring authentication).

Is there something special that I need to do in Apache to make it say that the data is public once it's been authenticated?

Data requiring authentication is by definition not public; it's limited access. Data which can be considered public (unlimited access) even if the server normally requires authentication can be marked as such by including a Cache-Control: public header in the HTTP response. This tells caches that the content is considered unlimited access even if the request which produced this content included authentication credentials.

b) Reload request (max-age=0)
c) If-Modified-Since can only be cached once the object as such has been cached.

I'm rather Squid-illiterate here. Where do I begin to research these two statements?

b) Don't use the reload button when testing the cache. The reload button tells caches that the client wants a fresh copy by including the above-mentioned criteria in its request.

c) Start with a clean browser cache when testing. Squid can only cache content which has been seen by Squid. Positive cache validations of content not yet seen by Squid are not cached.

A good document explaining how HTTP caching works and how to make proper use of it is the Caching Tutorial for Web Authors and Webmasters, http://www.mnot.net/cache_docs/. It not only explains the concepts involved but also how they map to several common HTTP servers and related technologies.

Regards
Henrik
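Henrik's Cache-Control: public suggestion maps onto an Apache directive roughly like the following sketch (requires mod_headers; the /static path and max-age value are placeholders, not from the thread):

```apache
# Mark authenticated-but-shareable content as cacheable by shared caches.
# Only do this for content that really is safe to serve to everyone.
<Location /static>
    Header set Cache-Control "public, max-age=3600"
</Location>
```

With this in place, a response that was fetched with Authorization credentials still carries an explicit signal that proxies such as Squid may store and reuse it.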
Re: [squid-users] Zero hit rate on reverse proxy server with Squid
Here's what I'm seeing in access.log:

1147659469.909 18 adsl-71-134-224-41.dsl.pltn13.pacbell.net TCP_MISS/304 245 GET http://squidtest.bitpusher.com/ 78/31/00/5db751d7d1355191556c70570974a13793ca2468/9a6c2cb08bc0ea681a88bf [...]

Authorization: Basic Yml0cHVzaGVyOmJwYmVhbnM=\r\nCache-Control: max-age=0\r\n]
[HTTP/1.1 304 Not Modified\r\nDate: Mon, 15 May 2006 02:17:26 GMT\r\nServer: Apache/1.3.34 (Unix) PHP/5.1.2 mod_ssl/2.8.25 OpenSSL/0.9.7d\r\nConnection: Keep-Alive, Keep-Alive\r\nKeep-Alive: timeout=15, max=99\r\nETag: d8e8367-183a-4464ed3d\r\n\r]

This can't be cached due to

a) Authentication was used, and the server did not indicate the content is public (not requiring authentication).

Is there something special that I need to do in Apache to make it say that the data is public once it's been authenticated?

b) Reload request (max-age=0)
c) If-Modified-Since can only be cached once the object as such has been cached.

I'm rather Squid-illiterate here. Where do I begin to research these two statements?
Re: [squid-users] Zero hit rate on reverse proxy server with Squid
On Sun, 2006-05-14 at 19:29 -0700, Michael T. Halligan wrote:

Here's what I'm seeing in access.log:

1147659469.909 18 adsl-71-134-224-41.dsl.pltn13.pacbell.net TCP_MISS/304 245 GET http://squidtest.bitpusher.com/ 78/31/00/5db751d7d1355191556c70570974a13793ca2468/9a6c2cb08bc0ea681a88bf [...]

Authorization: Basic Yml0cHVzaGVyOmJwYmVhbnM=\r\nCache-Control: max-age=0\r\n]
[HTTP/1.1 304 Not Modified\r\nDate: Mon, 15 May 2006 02:17:26 GMT\r\nServer: Apache/1.3.34 (Unix) PHP/5.1.2 mod_ssl/2.8.25 OpenSSL/0.9.7d\r\nConnection: Keep-Alive, Keep-Alive\r\nKeep-Alive: timeout=15, max=99\r\nETag: d8e8367-183a-4464ed3d\r\n\r]

This can't be cached due to

a) Authentication was used, and the server did not indicate the content is public (not requiring authentication).
b) Reload request (max-age=0)
c) If-Modified-Since can only be cached once the object as such has been cached.

Regards
Henrik
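Henrik's three reasons can be restated as a small checker. This is a hypothetical helper for reading log excerpts like the one above, not part of Squid, and reason (a) is simplified (in real HTTP it is overridden by a Cache-Control: public response header):

```python
# Sketch: flag the reasons a request/response pair like the one above
# is uncacheable, mirroring Henrik's a/b/c. Illustrative only.

def uncacheable_reasons(request_headers, status):
    """Return a list of reasons this transaction cannot be cached."""
    reasons = []
    # a) Authorization present (response would need Cache-Control: public)
    if "Authorization" in request_headers:
        reasons.append("a: authenticated, content not marked public")
    # b) client reload, i.e. Cache-Control: max-age=0 on the request
    if "max-age=0" in request_headers.get("Cache-Control", ""):
        reasons.append("b: reload request (max-age=0)")
    # c) a 304 can only refresh an object Squid already holds
    if status == 304:
        reasons.append("c: 304 validation of an object not yet cached")
    return reasons

reasons = uncacheable_reasons(
    {"Authorization": "Basic Yml0cHVzaGVyOmJwYmVhbnM=",
     "Cache-Control": "max-age=0"},
    304,
)
print(reasons)
```

Run against the logged transaction, all three reasons fire at once, which is why no amount of retrying from the same browser state produced a hit.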
Re: [squid-users] Zero hit rate on reverse proxy server with Squid
I'm attempting to set up Squid as a reverse proxy, but appear to be failing rather miserably. Whenever I attempt to get a file, I either get a TCP_MISS/304 or TCP_MISS/200. I've never actually seen a hit, and every time I try to retrieve a file, the file gets pulled from the webserver behind Squid. I'd appreciate any help.
Re: [squid-users] Zero hit rate on reverse proxy server with Squid
I'm attempting to setup squid as a reverse proxy, but appear to be failing rather miserably. Whenever I attempt to get a file, I either get a TCP_MISS/304 or TCP_MISS/200. I've never actually seen a hit, and every time I try to retrieve a file, the file gets pulled from the webserver behind squid. I'd appreciate any help.

Are the objects tested cacheable? Verify with: http://www.ircache.net/cgi-bin/cacheability.py

M.
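The cacheability check M. points at can be approximated with a few response-header rules. This is a simplified sketch in the spirit of that tool, not its actual logic, and it ignores many real HTTP subtleties (Vary, s-maxage, heuristic freshness):

```python
# Rough check: do these response headers allow a shared cache to store
# and reuse the object? Illustrative heuristic only.

def looks_cacheable(headers):
    cc = headers.get("Cache-Control", "").lower()
    if "no-store" in cc or "private" in cc:
        return False
    # An explicit freshness lifetime makes the object plainly cacheable...
    if "max-age" in cc or "Expires" in headers:
        return True
    # ...otherwise a validator at least allows revalidation-based caching.
    return "Last-Modified" in headers or "ETag" in headers

print(looks_cacheable({"Cache-Control": "max-age=3600"}))
print(looks_cacheable({"Cache-Control": "no-store"}))
```

An origin that sends neither a freshness lifetime nor a validator gives the reverse proxy nothing to work with, which matches the all-MISS behaviour described above.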
Re: [squid-users] byte hit rasio problem
hello, I have a problem with my Squid. My Squid sometimes goes weird. You can see the graphs at:

http://www.geocities.com/adilinux/images/traffic.png
http://www.geocities.com/adilinux/images/hit_rate_5min.png
http://www.geocities.com/adilinux/images/request_rate.png
http://www.geocities.com/adilinux/images/in_out_save.png

and a report from cachemgr.cgi: http://www.geocities.com/adilinux/images/cachemgr.html

The point I keep coming back to: every time the graphs look like that, I find the Byte Hit Ratios have become negative. Please give me advice.

http://www.squid-cache.org/Doc/FAQ/FAQ-12.html#ss12.31

M.
RE: [squid-users] cache hit and byte hit ratio
Thanks Chris,

I increased my maximum_object_size to 128 MB (earlier it was 32 MB) and changed the replacement policy as well. I can see my byte hit ratio has increased to 20-25%. How can I increase it further? What other parameters should I consider?

I have noticed a lot of traffic for Windows Update. Is there any way to cache that? I tried using a refresh_pattern for it, but I still get TCP_MISS.

Thanks - LK

-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, September 06, 2005 6:41 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] cache hit and byte hit ratio

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 05, 2005 3:22 AM
To: squid-users@squid-cache.org
Subject: [squid-users] cache hit and byte hit ratio

Hi, I am running squid 2.5.10 stable. I noticed the cache hit ratio on my server is 30% but the byte hit ratio is less than 15%. How can I increase the byte hit ratio? I want to save BW. I am not able to save much.

Thanks LK

What have you done so far? Look into maximum_object_size, and the heap LFUDA cache_replacement_policy (and memory_replacement_policy). They can make a big difference in cache ratios.

Chris

Disclaimer: The information contained in this e-mail, any attached files, and response threads are confidential and may be legally privileged. It is intended solely for the use of individual(s) or entity to which it is addressed and others authorised to receive it. If you are not the intended recipient, kindly notify the sender by return mail and delete this message and any attachment(s) immediately. Save as expressly permitted by the author, any disclosure, copying, distribution or taking action in reliance on the contents of the information contained in this e-mail is strictly prohibited and may be unlawful. Unless otherwise clearly stated, and related to the official business of Accelon Nigeria Limited, opinions, conclusions, and views expressed in this message are solely personal to the author.
Accelon Nigeria Limited accepts no liability whatsoever for any loss, be it direct, indirect or consequential, arising from information made available in this e-mail and actions resulting there from. For more information about Accelon Nigeria Limited, please see our website at http://www.accelonafrica.com **
[squid-users] Balasan: Re: [squid-users] cache hit and byte hit ratio
Maybe you should set reload_into_ims on; a more extreme configuration is to set all cacheable files to ignore-reload. For example:

refresh_pattern -i \.jpg$ 10080 100% 43200 reload-into-ims
refresh_pattern -i \.swf$ 10080 100% 43200 ignore-reload
refresh_pattern . 10 50% 43200 reload-into-ims

But I suggest blocking banners rather than modifying refresh_pattern.

regards,
[EMAIL PROTECTED]

--- Christoph Haas [EMAIL PROTECTED] wrote:

On Mon, Sep 05, 2005 at 12:21:35PM +0100, [EMAIL PROTECTED] wrote:

I am running squid 2.5.10 stable. I noticed cache hit ratio on my server is 30% but byte hit ratio is less than 15%. How can I increase byte hit ratio? I want to save BW. I am not able to save much.

It surely depends on which kind of objects are stored. Assume that half of the objects are cached but each one is just 100 bytes. Then you would have 50% cache hit ratio but perhaps only 0.0001% byte hit ratio. Not every object is cachable. Just use a large disk, and if you have multiple proxies then establish a sibling relationship between them. There's not much else you can do. Tweaking refresh times surely breaks more applications than it helps you. And if you want to save 95% bandwidth: block porn sites. ;)

Regards Christoph
RE: [squid-users] cache hit and byte hit ratio
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 05, 2005 3:22 AM
To: squid-users@squid-cache.org
Subject: [squid-users] cache hit and byte hit ratio

Hi, I am running squid 2.5.10 stable. I noticed the cache hit ratio on my server is 30% but the byte hit ratio is less than 15%. How can I increase the byte hit ratio? I want to save BW. I am not able to save much.

Thanks LK

What have you done so far? Look into maximum_object_size, and the heap LFUDA cache_replacement_policy (and memory_replacement_policy). They can make a big difference in cache ratios.

Chris
Re: [squid-users] cache hit and byte hit ratio
This doesn't depend only on Squid; you should use a firewall too.

On 9/5/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Hi, I am running squid 2.5.10 stable. I noticed the cache hit ratio on my server is 30% but the byte hit ratio is less than 15%. How can I increase the byte hit ratio? I want to save BW. I am not able to save much.

Thanks LK

--
Syed Kashif Ali Bukhari
Jr. Network Officer
Beaconet
RE: [squid-users] cache hit and byte hit ratio
I didn't understand this part. How will a firewall increase the cache byte hit ratio?

Thanks - LK

-Original Message-
From: Kashif Ali Bukhari [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 05, 2005 12:42 PM
To: Lokesh Khanna
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] cache hit and byte hit ratio

this thing not only depend on squid u should use firewall too

On 9/5/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Hi, I am running squid 2.5.10 stable. I noticed the cache hit ratio on my server is 30% but the byte hit ratio is less than 15%. How can I increase the byte hit ratio? I want to save BW. I am not able to save much.

Thanks LK

--
Syed Kashif Ali Bukhari
Jr. Network Officer
Beaconet
Re: [squid-users] cache hit and byte hit ratio
On Mon, Sep 05, 2005 at 12:21:35PM +0100, [EMAIL PROTECTED] wrote:

I am running squid 2.5.10 stable. I noticed cache hit ratio on my server is 30% but byte hit ratio is less than 15%. How can I increase byte hit ratio? I want to save BW. I am not able to save much.

It surely depends on which kind of objects are stored. Assume that half of the objects are cached but each one is just 100 bytes. Then you would have 50% cache hit ratio but perhaps only 0.0001% byte hit ratio. Not every object is cachable.

Just use a large disk, and if you have multiple proxies then establish a sibling relationship between them. There's not much else you can do. Tweaking refresh times surely breaks more applications than it helps you. And if you want to save 95% bandwidth: block porn sites. ;)

Regards Christoph
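Christoph's point about the two ratios diverging is easy to put in numbers. The object counts and sizes below are made up for illustration (many tiny cacheable objects, a few large uncacheable ones):

```python
# Request hit ratio vs byte hit ratio: same traffic, wildly different numbers.
hits_count, hits_bytes = 500, 500 * 100            # 500 hits, ~100-byte objects
misses_count, misses_bytes = 500, 500 * 5_000_000  # 500 misses, ~5 MB objects

request_hit_ratio = hits_count / (hits_count + misses_count)
byte_hit_ratio = hits_bytes / (hits_bytes + misses_bytes)

print(f"request hit ratio: {request_hit_ratio:.0%}")   # 50%
print(f"byte hit ratio:    {byte_hit_ratio:.4%}")      # ~0.002%
```

So a respectable request hit ratio tells you nothing about bandwidth saved; only the byte hit ratio does.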
Re: [squid-users] cache hit and byte hit ratio
On 9/5/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

I didn't understand this part. How will a firewall increase the cache byte hit ratio?

I am not talking about the cache byte hit ratio; I meant blocking intrusions, viruses, and anonymous attacks at your firewall, to prevent extra utilization of your bandwidth.

Thanks - LK

-Original Message-
From: Kashif Ali Bukhari [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 05, 2005 12:42 PM
To: Lokesh Khanna
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] cache hit and byte hit ratio

this thing not only depend on squid u should use firewall too

On 9/5/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Hi, I am running squid 2.5.10 stable. I noticed the cache hit ratio on my server is 30% but the byte hit ratio is less than 15%. How can I increase the byte hit ratio? I want to save BW. I am not able to save much.

Thanks LK

--
Syed Kashif Ali Bukhari
Jr. Network Officer
Beaconet
Re: [squid-users] peer hit data..
On 5/31/05, Kapil [EMAIL PROTECTED] wrote:

I'm new to squid. I have set up 2 squid boxes (mm15, mm16) as each other's peers.

1) Does this info show that peer lookup is working?
2) Sorry for the dumb question, but how did you figure that out?
3) Peer lookup happens over UDP, so where does TCP_NEGATIVE_HIT come from?
4) Does it mean (UDP_HIT - TCP_NEGATIVE_HIT) were the only good responses to the peer?

The TCP_NEGATIVE_HIT 100% indicates that of the 2,980 requests made to a peer, all 2,980 failed. IIRC, the behavior you are seeing is the result of allowing each peer to ICP query the other in an icp_access rule, but omitting to also add an http_access rule to allow the peers to retrieve objects from each other via TCP.

Kevin Kadow
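Kevin's diagnosis can be sketched in squid.conf terms. A sibling relationship needs both an ICP rule and a matching HTTP rule; the hostname, addresses, and ports below are placeholders, not from the thread:

```
# on mm15, pointing at its sibling (placeholders throughout)
cache_peer mm16.example.com sibling 3128 3130 proxy-only

acl farm_peers src 192.168.0.16/255.255.255.255

# Allowing the ICP query alone is not enough...
icp_access allow farm_peers
# ...the peer must also be allowed to fetch the hit over TCP,
# otherwise every UDP_HIT turns into a failed HTTP fetch:
http_access allow farm_peers
```

With only the icp_access line, the sibling answers "I have it" over UDP but then refuses the follow-up TCP request, which is consistent with the 100% failure Kevin describes.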
Re: [squid-users] peer hit data..
IP addresses are in http_access as well as in cache_peer_access peerIPAddress allow all. I tried http_access allow all also.

***
Currently established connections: 0

ICP Requests 5401
UDP_HIT 121 2%
UDP_MISS 5280 98%

HTTP Requests 31
TCP_NEGATIVE_HIT 31 100%

If what you are saying is correct, then what happened to the UDP_HIT - HTTP requests (121 - 31)? Where did they go?

Thanks, ~Kapil.

Kevin wrote:

On 5/31/05, Kapil [EMAIL PROTECTED] wrote: I'm new to squid. I have setup 2 squid boxes (mm15, mm16) as each other's peer. 1) Does this info show that peer look up is working? 2) Sorry for the dumb question, but how did you figure that out? 3) Peer look up is happening over UDP, so from where does TCP_NEGATIVE_HIT come from? 4) Does it mean (UDP_HIT - TCP_NEGATIVE_HIT) were the only good responses to the peer?

The TCP_NEGATIVE_HIT 100% indicates that of the 2,980 requests made to a peer, all 2,980 failed. IIRC, the behavior you are seeing is the result of allowing each peer to ICP query the other in an icp_access rule, but omitting to also add a http_access rule to allow the peers to retrieve objects from each other via TCP.

Kevin Kadow
Re: [squid-users] sibling hit/miss report
hi, please find attached -- you'll need to sed the file, replacing HOSTNAME and HOSTPORT with values that correspond to your network. Additionally, you'll need to turn snmp on in your squid.conf by placing something like:

acl snmp_trusted src 127.0.0.1/255.255.255.255
acl snmp_trusted src 192.168.0.0/255.255.255.0
# snmp information so that we can use mrtg to graph squid's performance
acl snmppublic snmp_community public
snmp_access allow snmppublic snmp_trusted
snmp_access deny all
snmp_incoming_address 0.0.0.0
snmp_outgoing_address 255.255.255.255
snmp_port 3401

bye
charles

On Wed, 2005-03-02 at 13:22 +0500, Askar wrote:

hi list, is there a script (mrtg) to graphically plot the HIT/MISS between sibling cache servers? We are currently using mrtg for monitoring our squid servers, which reports http/req, http/hit etc. regards

# master for squid mrtg monitoring
# useful commands:
# snmpwalk -m/etc/squid/mib.txt -v1 -c public snmp_host:snmp_port 1.3.6.1.4.1.3495
# snmpget -m/etc/squid/mib.txt -v1 -c public snmp_host:snmp_port cacheIcpKbRecv.0
LoadMIBs: /etc/squid/mib.txt
Options[_]: growright

Target[CACHENAME-cacheServerRequests]: cacheServerRequests&cacheServerRequests:[EMAIL PROTECTED]:HOSTPORT
MaxBytes[CACHENAME-cacheServerRequests]: 1000
Title[CACHENAME-cacheServerRequests]: Server Requests @ HOSTNAME
Options[CACHENAME-cacheServerRequests]: growright,nopercent,unknaszero
PageTop[CACHENAME-cacheServerRequests]: <h2>Server Requests @ HOSTNAME</h2>
YLegend[CACHENAME-cacheServerRequests]: requests/sec
ShortLegend[CACHENAME-cacheServerRequests]: req/s
LegendI[CACHENAME-cacheServerRequests]: Requests&nbsp;
LegendO[CACHENAME-cacheServerRequests]:
Legend1[CACHENAME-cacheServerRequests]: Requests
Legend2[CACHENAME-cacheServerRequests]:

Target[CACHENAME-cacheServerErrors]: cacheServerErrors&cacheServerErrors:[EMAIL PROTECTED]:HOSTPORT
MaxBytes[CACHENAME-cacheServerErrors]: 1000
Title[CACHENAME-cacheServerErrors]: Server Errors @ HOSTNAME
Options[CACHENAME-cacheServerErrors]: growright,nopercent,unknaszero
PageTop[CACHENAME-cacheServerErrors]: <h2>Server Errors @ HOSTNAME</h2>
YLegend[CACHENAME-cacheServerErrors]: errors/sec
ShortLegend[CACHENAME-cacheServerErrors]: err/s
LegendI[CACHENAME-cacheServerErrors]: Errors&nbsp;
LegendO[CACHENAME-cacheServerErrors]:
Legend1[CACHENAME-cacheServerErrors]: Errors
Legend2[CACHENAME-cacheServerErrors]:

Target[CACHENAME-cacheServerInOutKb]: cacheServerInKb&cacheServerOutKb:[EMAIL PROTECTED]:HOSTPORT * 1024
MaxBytes[CACHENAME-cacheServerInOutKb]: 10
Title[CACHENAME-cacheServerInOutKb]: Server In/Out Traffic @ HOSTNAME
Options[CACHENAME-cacheServerInOutKb]: growright,nopercent,unknaszero
PageTop[CACHENAME-cacheServerInOutKb]: <h2>Server In/Out Traffic @ HOSTNAME</h2>
YLegend[CACHENAME-cacheServerInOutKb]: Bytes/sec
ShortLegend[CACHENAME-cacheServerInOutKb]: Bytes/s
LegendI[CACHENAME-cacheServerInOutKb]: Server In&nbsp;
LegendO[CACHENAME-cacheServerInOutKb]: Server Out&nbsp;
Legend1[CACHENAME-cacheServerInOutKb]: Server In
Legend2[CACHENAME-cacheServerInOutKb]: Server Out

Target[CACHENAME-cacheProtoClientHttpRequests]: cacheProtoClientHttpRequests&cacheProtoClientHttpRequests:[EMAIL PROTECTED]:HOSTPORT
MaxBytes[CACHENAME-cacheProtoClientHttpRequests]: 1000
Title[CACHENAME-cacheProtoClientHttpRequests]: Client Http Requests @ HOSTNAME
Options[CACHENAME-cacheProtoClientHttpRequests]: growright,nopercent,unknaszero
PageTop[CACHENAME-cacheProtoClientHttpRequests]: <h2>Client Http Requests @ HOSTNAME</h2>
YLegend[CACHENAME-cacheProtoClientHttpRequests]: requests/sec
ShortLegend[CACHENAME-cacheProtoClientHttpRequests]: req/s
LegendI[CACHENAME-cacheProtoClientHttpRequests]: Requests&nbsp;
LegendO[CACHENAME-cacheProtoClientHttpRequests]:
Legend1[CACHENAME-cacheProtoClientHttpRequests]: Requests
Legend2[CACHENAME-cacheProtoClientHttpRequests]:

Target[CACHENAME-cacheHttpHits]: cacheHttpHits&cacheHttpHits:[EMAIL PROTECTED]:HOSTPORT
MaxBytes[CACHENAME-cacheHttpHits]: 1000
Title[CACHENAME-cacheHttpHits]: HTTP Hits @ HOSTNAME
Options[CACHENAME-cacheHttpHits]: growright,nopercent,unknaszero
PageTop[CACHENAME-cacheHttpHits]: <h2>HTTP Hits @ HOSTNAME</h2>
YLegend[CACHENAME-cacheHttpHits]: hits/sec
ShortLegend[CACHENAME-cacheHttpHits]: hits/s
LegendI[CACHENAME-cacheHttpHits]: Hits&nbsp;
LegendO[CACHENAME-cacheHttpHits]:
Legend1[CACHENAME-cacheHttpHits]: Hits
Legend2[CACHENAME-cacheHttpHits]:

Target[CACHENAME-cacheHttpErrors]: cacheHttpErrors&cacheHttpErrors:[EMAIL PROTECTED]:HOSTPORT
MaxBytes[CACHENAME-cacheHttpErrors]: 1000
Title[CACHENAME-cacheHttpErrors]: HTTP Errors @ HOSTNAME
Options[CACHENAME-cacheHttpErrors]: growright,nopercent,unknaszero
PageTop[CACHENAME-cacheHttpErrors]: <h2>HTTP Errors @ HOSTNAME</h2>
YLegend[CACHENAME-cacheHttpErrors]: errors/sec
ShortLegend[CACHENAME-cacheHttpErrors]: err/s
LegendI[CACHENAME-cacheHttpErrors]: Errors&nbsp;
LegendO[CACHENAME-cacheHttpErrors]:
Legend1[CACHENAME-cacheHttpErrors]: Errors
Legend2[CACHENAME-cacheHttpErrors]:
Re: [squid-users] Byte Hit ratio - which data are counted?
On Thu, 11 Nov 2004, Matus UHLAR - fantomas wrote:

I have a farm of 3 squid caches, each of which now has ~10% byte hit ratio. They are all proxy-only neighbours to each other. I'd like to ask: how should I count the whole farm's efficiency?

Tricky when you have intra-farm peerings, more so when you use proxy-only, as there can be a significant amount of traffic in such peerings.

- should I count that 10% of all requests are fetched from cache?
- should I count the total ratio higher?

It can be any of the above depending on the traffic pattern.

- should I count that the ratio is lower?

No, it is at least ~10%, or the average of the peers' hit ratios weighted by the number of requests each peer sees. If there is significant peering traffic then the hit ratio is somewhat higher (but probably not by very much). To get accurate numbers you need to sum the number of requests in hits and misses, excluding the requests forwarded to another peer within the farm.

- if squid fetches an object from its neighbour cache, it may or may not be counted in the byte hit ratio

It is in such a case counted as a cache miss on this proxy and a cache hit on the neighbour.

- if an object is fetched from squid's cache by its neighbour, it may or may not be counted in the byte hit ratio

It is in such a case counted as a cache miss on the neighbour and a cache hit on this proxy. Actually the same situation as the above; the only difference is which of the two received the original request from the client.

Regards Henrik
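Henrik's accounting rule ("sum hits and misses, excluding requests forwarded to another peer within the farm") can be worked as arithmetic. The per-peer figures below are invented for illustration; each tuple gives a peer's logged hits, logged misses, and how many of those misses were actually served by a sibling in the farm:

```python
# Farm-wide request hit ratio, excluding double-counted intra-farm traffic.
peers = [
    # (hits, misses, misses_answered_by_a_farm_sibling)
    (1000, 9000, 300),
    (1200, 8800, 250),
    (900,  9100, 350),
]

hits = sum(h for h, _, _ in peers)
# A miss resolved inside the farm is logged as a miss on the asking proxy
# and a hit on the sibling; drop the miss so the event counts once, as a hit.
intra = sum(p for _, _, p in peers)
misses = sum(m for _, m, _ in peers) - intra

farm_ratio = hits / (hits + misses)
print(f"farm-wide request hit ratio: {farm_ratio:.1%}")
```

With these numbers the farm-wide ratio comes out slightly above the ~10% each box reports individually, matching Henrik's "at least ~10%, probably not much higher".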
RE: [squid-users] Squid HIT analysis, worm DoS mitigation, and general config tweaking
New to the list. I'm sorry if this stuff is covered in a list FAQ somewhere that I'm unable to find. I have 3 main questions about the wonderful squid cache.

FAQ: http://www.squid-cache.org/Doc/FAQ/FAQ.html

1. I want to analyze my squid logs graphically in terms of TCP_HIT, TCP_MEM_HIT and other codes from the logs. I'm sure there's something out there to do it already that I'm just not aware of.

Look at the various tools available at http://www.squid-cache.org/Scripts/. Also check the Squid FAQ on how to use Squid with MRTG.

2. Also, we've been feeling the brunt of all the new Welchia variants that try port 80 attacks through random, high-frequency portscanning, which saps our squid caches of file descriptors. From doing some previous list reading, I have set half_closed_connections to off, as well as client_persistent connections to off. I didn't turn server_persistent to off, because, well, it sounds important. Am I being a pansy for not doing this?

Although this is a personal opinion: I think so, yes. The kind of attacks you describe should be handled by perimeter firewalling infrastructure. If you have a good firewall setup then, for instance, port scans should not be able to reach your Squid box. Also, that in particular is not much related to file descriptor usage, as Squid only listens on one port; resource-exhaustion attacks on Squid would in any case have to be HTTP-based.

I'm also curious how these settings help the file descriptor problem, as they sound like they adjust network connection behaviour as opposed to anything that impacts file descriptors. Can anyone shed light on how this works? Also, would there be any reason a service provider with many diversely screwed-up operating systems and corresponding screwed-up browsers would not want to muck with these Squid settings?

3. Why is the squid cache so slow when I use diskd? What guidelines do all of you use for large caches (20GB) in terms of directory structure, memory options, and diskd/no diskd, ufs/no ufs?

Well, read the FAQ part on diskd. diskd often requires OS-related tuning.

M.

Thanks, Paul
Re: [squid-users] Squid HIT analysis, worm DoS mitigation, and general config tweaking
On Wed, 25 Feb 2004, Paul Seaman wrote:

1. I want to analyze my squid logs graphically in terms of TCP_HIT, TCP_MEM_HIT and other codes from the logs. I'm sure there's something out there to do it already that I'm just not aware of.

The log analysis programs we know about are listed under Log analysis on the squid-cache.org home page.

2. Also, we've been feeling the brunt of all the new Welchia variants that try port 80 attacks through random, high-frequency portscanning, which saps our squid caches of file descriptors. From doing some previous list reading, I have set half_closed_connections to off, as well as client_persistent connections to off. I didn't turn server_persistent to off, because, well, it sounds important.

It is not very important, but with half_closed_connections off you should not need to touch the server_persistent directive.

Am I being a pansy for not doing this? I'm also curious how these settings help the file descriptor problem, as they sound like they adjust network connection behaviour as opposed to anything that impacts file descriptors.

Each open network connection uses one file descriptor.

Can anyone shed light on how this works? Also, would there be any reason a service provider with many diversely screwed-up operating systems and corresponding screwed-up browsers would not want to muck with these Squid settings?

half_closed_clients you want to turn off in such an environment. The others should only be turned off if the load is too high and rebuilding Squid to support more file descriptors is not an option.

3. Why is the squid cache so slow when I use diskd?

Is it?

What guidelines do all of you use for large caches (20GB) in terms of directory structure, memory options, and diskd/no diskd, ufs/no ufs?

Memory is described in the Squid FAQ on memory usage.
As for diskd/aufs, you need one of these as soon as you are going above ca. 30-50 requests/s, as the default ufs cache_dir type quickly gets limited by disk speed and cannot scale beyond the speed of a single drive. Which of diskd or aufs to use depends on what OS you are running (aufs for Linux, diskd for most others).

Regards Henrik
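Henrik's advice condenses into a few squid.conf lines. This is a sketch; the path, sizes, and Q values are placeholders (the diskd Q1/Q2 values echo those used earlier in this digest), and you would pick one cache_dir line, not both:

```
# Conserve file descriptors under portscan-style load:
half_closed_clients off
client_persistent_connections off
# server_persistent_connections can usually stay on.

# Async disk I/O once you pass roughly 30-50 req/s (choose one):
cache_dir aufs /cache 20000 16 256                 # Linux
cache_dir diskd /cache 20000 16 256 Q1=72 Q2=64    # most other OSes
```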
RE: [squid-users] Low hit rate
Kemi,

I increased my hit ratio by running pages and script output through a cacheability tool and taking corrective action as required. The main thing was to add mod_expires and mod_headers to my servers.

http://www.cacheflow.com/technology/tools/friendly/cacheability/index.cfm

John Kent
Webmaster
Naval Research Laboratory
Monterey, CA

-Original Message-
From: Duane Wessels [mailto:[EMAIL PROTECTED]]
Sent: Saturday, February 14, 2004 4:47 PM
To: Kemi Salam-Alada
Cc: [EMAIL PROTECTED]
Subject: Re: [squid-users] Low hit rate

On Sat, 14 Feb 2004, Kemi Salam-Alada wrote:

Hi all, how can I tune my squid so that I can generate a high hit rate? Presently, I am running squid using FreeBSD 4.3 OS and Squid 2.5 STABLE2. The file system used for the disk is aufs.

See the 'refresh_pattern' directive in squid.conf. You can probably increase your hit ratio by increasing the values of the refresh_pattern line(s).

Duane W.
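The mod_expires/mod_headers change John describes looks roughly like the following Apache fragment. A sketch only: the content types and lifetimes are examples, not his actual values:

```apache
# Give static content an explicit freshness lifetime (mod_expires)
# and mark it shareable (mod_headers), so proxies can serve hits.
ExpiresActive On
ExpiresByType image/gif "access plus 1 week"
ExpiresByType image/jpeg "access plus 1 week"
ExpiresByType text/css "access plus 1 day"
Header append Cache-Control "public"
```

Explicit Expires/Cache-Control headers remove the guesswork from the cache's side, which is exactly what a cacheability tool checks for.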
Re: [squid-users] Low hit rate
On Sat, 14 Feb 2004, Kemi Salam-Alada wrote:

> Hi all, How can I tune my squid so that I can generate a high hit rate?
> Presently, I am running squid on FreeBSD 4.3 with Squid 2.5 STABLE2. The
> file system used for the disk is aufs.

See the 'refresh_pattern' directive in squid.conf. You can probably increase your hit ratio by increasing the values of the refresh_pattern line(s).

Duane W.
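Duane's suggestion amounts to raising the min, percent, and max fields of the refresh_pattern lines. A minimal squid.conf sketch (the increased values are illustrative only, not a recommendation):

```
# refresh_pattern <regex> <min minutes> <percent> <max minutes>
# Default catch-all rule:
#refresh_pattern .   0    20%   4320

# More aggressive: treat objects without explicit expiry info as
# fresh for at least 2 hours, and up to a week for old objects.
refresh_pattern .    120  50%   10080
```

The trade-off is staleness: larger values mean more hits but a higher chance of serving outdated content for sites that do not send explicit freshness headers.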
Re: [squid-users] Disk hit ratio question
On Tue, 2 Dec 2003, unixware wrote:

> i am getting a very low Request Disk Hit Ratio: 5min 0.3%, as compared to
> the other proxies in the cache farm, which are getting around 34% disk hit
> ratio in the cache manager. is this normal?

It is not normal for one proxy in a farm to have significantly different hit ratios if all members of the farm receive approximately similar traffic.

> is this a recommended feature when using a cache farm?

Depends on the setup and how requests are distributed among the farm members.

Regards
Henrik
Re: [squid-users] BYTE HIT REQUEST HIT RATIOS
On Tue, 11 Nov 2003, Jose Nathaniel Nengasca wrote:

> I have these results in my cachemgr... here's the full result...
>
> Average HTTP requests per minute since start: 77.5
> Cache information for squid:
>   Request Hit Ratios: 5min: 33.3%, 60min: 27.3%
>   Byte Hit Ratios:    5min: 13.4%, 60min: 9.4%
> Storage Swap size: 676032 KB

Your cache looks quite small for the request load; for a good hit ratio you need at least a few days' worth of cache space. You can also use refresh_pattern to increase the hit ratio, primarily by increasing the time images are considered fresh.

Regards
Henrik
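An image-specific refresh_pattern of the kind Henrik suggests might look like this (a sketch; the extension list and lifetimes are illustrative assumptions):

```
# Match common image extensions case-insensitively (-i) and keep
# them fresh for at least a day, up to a week, without revalidation.
# Must appear BEFORE the catch-all "refresh_pattern ." line.
refresh_pattern -i \.(gif|jpe?g|png)$  1440  50%  10080
```

Since images rarely change once published, they are the safest objects to hold longer than the server's heuristic freshness would otherwise allow.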
Re: [squid-users] memory hit ratio
On Friday 07 November 2003 08:36 pm, [EMAIL PROTECTED] wrote:

> Could someone please give me an idea of what I am doing wrong in my
> squid.conf? I can't get a higher percentage on Request Memory Hit Ratios.

I wouldn't worry about Memory Hit Ratios - you're better off worrying about Request and Byte Hit Ratios (both of which look OK). If you really want to increase the Memory Hit Ratio, about the only way to do it is by increasing the cache_mem setting. But again, it's probably not worth worrying about.

> Also, I am getting a lot of this: This IP does not belong to our network
> and is not in our Squid ACLs. Does this mean that they are trying to use
> our Squid?

Looks like it. Don't worry - Squid denied the request (at least in the example you provided). If you don't want the requests to reach Squid at all, then block the Squid port with IPTables.

Adam
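A minimal sketch of the iptables approach Adam mentions, assuming Squid listens on the default port 3128 and your clients sit in 10.0.0.0/8 (both are assumptions to adapt to your network):

```
# Allow your own clients to reach the Squid port...
iptables -A INPUT -p tcp --dport 3128 -s 10.0.0.0/8 -j ACCEPT
# ...and drop everyone else before the connection ever reaches Squid,
# so no file descriptor is consumed and nothing is logged.
iptables -A INPUT -p tcp --dport 3128 -j DROP
```

Rule order matters: the ACCEPT must precede the DROP, since iptables evaluates rules in sequence and stops at the first match.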
Re: [squid-users] marking HIT pakets (again)
On Wed, 27 Aug 2003, raptor wrote:

> Can I mark HIT packets so that later I can shape this traffic with
> another machine?

With some small amount of coding, yes.

Regards
Henrik
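For what it's worth, this "small amount of coding" was later absorbed into Squid itself as the ZPH patch, which eventually became the qos_flows directive in Squid 3.1 and newer; a sketch, assuming your downstream shaper matches on the IP TOS/DSCP field:

```
# Mark responses served from the local cache with TOS 0x30 so a
# downstream shaper can classify hit traffic separately from misses.
qos_flows local-hit=0x30
```

On the shaping machine, hit traffic can then be matched purely on the TOS value without any knowledge of Squid's cache state.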
RE: [squid-users] no HIT ?
> The proxy seems to have no 'HIT' whatsoever, it continues to give me
> only MISSES.

Post your squid.conf (without blank lines or comments).

Adam
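A quick way to produce such a stripped configuration listing is a generic grep one-liner (nothing Squid-specific; the sample file here is created only for illustration):

```shell
# Create a small sample config for demonstration purposes.
printf '# a comment\n\nhttp_port 3128\ncache_mem 64 MB\n' > squid.conf.sample

# Print the file with comment lines and blank lines removed.
grep -vE '^[[:space:]]*(#|$)' squid.conf.sample
```

This prints only the two active directives, which is usually all that is needed when asking the list to review a configuration.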
Re: [squid-users] no HIT ?
Rully Budisatya wrote:

> Hi, I probably did something wrong with my squid. The proxy seems to have
> no 'HIT' whatsoever, it continues to give me only MISSES. Can somebody
> tell me what happened?

Depending on the methodology used (e.g. a browser reload forces revalidation), testing your proxy can itself lead to this unwanted (adverse) effect. Also make sure the objects you are accessing are cacheable, using e.g.:

http://www.ircache.net/cgi-bin/cacheability.py

(Including your squid version and platform/OS version can be useful.)

M.

> Thanks. ... Rully
> --
> 'Love is truth without any future. (M.E. 1997)