[squid-users] Reply: Re: [squid-users] Squid ReverseProxy with vhost vport - Problem
Hi Amos,

thanks for your answer.

> Every time we 'fix' this we get complaints from people wanting the opposite behaviour or suddenly getting breakage. We for now have this behaviour: Squid should obey Host: port when vport is given, and ignore it when vport is omitted (using http_port value if none is pulled in indirectly by vhost anyway), and override/replace it when vport=N is given. So your config tells Squid to use what Pound supplies (default 80). You can avoid that by either getting Pound to stop adding the unusual port to the header, or using vport=80 in squid.

Ahh ok, so it must work when I put vport=3007 and vport=3008 in my config, right? But this doesn't work. In cache.log I can see that Squid tries to connect to sub3007, which resolves to 127.0.0.1 via /etc/hosts, on port 80. I'm very confused that my config works with squid2.

Tim
Re: [squid-users] Squid ReverseProxy with vhost vport - Problem
On 22/07/11 18:24, tim.schmel...@bechtle.com wrote:
> Hi Amos, thanks for your answer.
>
>> Every time we 'fix' this we get complaints from people wanting the opposite behaviour or suddenly getting breakage. We for now have this behaviour: Squid should obey Host: port when vport is given, and ignore it when vport is omitted (using http_port value if none is pulled

Oops. Typed that around the wrong way. Should have said: ignore Host: port when vport is given, and use it ... etc.

>> in indirectly by vhost anyway), and override/replace it when vport=N is given. So your config tells Squid to use what Pound supplies (default 80). You can avoid that by either getting Pound to stop adding the unusual port to the header, or using vport=80 in squid.
>
> Ahh ok, so it must work when I put vport=3007 and vport=3008 in my config, right?

Correct, it is supposed to work.

> But this doesn't work. In cache.log I can see that Squid tries to connect to sub3007, which resolves to 127.0.0.1 via /etc/hosts, on port 80. I'm very confused that my config works with squid2.

Looks like a regression bug. The squid-3 code seems to skip over the vport when vhost or defaultsite is configured.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.9
RE: [squid-users] I see this error in cache.log file no free membufs
Dear Marcus and Amos,

I have done the changes you proposed. I have dropped the max-size on the COSS partition to 100KB, so the cache_dir lines now read as follows:

cache_dir coss /cache3/coss1 11 max-size=102400 max-stripe-waste=32768 block-size=8192 membufs=100
cache_dir aufs /cache1 115000 16 256 min-size=102401
cache_dir aufs /cache2 115000 16 256 min-size=102401
cache_dir aufs /cache4/cache1 24 16 256 min-size=102401

After doing this I have noticed the following warning every now and then (usually every 1-2 hours) in cache.log:

squidaio_queue_request: WARNING - Queue congestion

What I also noticed using iostat is that the big HDD with an AUFS dir is handling a lot of write requests, while the other 2 HDDs with AUFS dirs rarely have disk writes. Is this normal behavior? Since I have 3 AUFS cache_dirs, shouldn't Squid's disk read and write access be roughly equal between the 3 AUFS partitions? Do you think I should go for a higher max-size on the COSS partition to relieve the extra I/O from the big AUFS cache_dir?

Thanks again for your excellent support.

Sincerely,
Ragheb Rustom
Smart Telecom S.A.R.L

-----Original Message-----
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Thursday, July 21, 2011 2:00 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] I see this error in cache.log file no free membufs

On Wed, 20 Jul 2011 18:23:10 -0300, Marcus Kool wrote:
> The message indicates that the number of membufs should be increased, because there are insufficient membufs to use for caching objects. The reason for having 'insufficient membufs' is explained below.
>
> Given the fact that the average object size is 13 KB, the given configuration effectively puts a very large percentage of objects, most likely more than 90%, in the COSS-based cache dir. This puts high pressure on (the disk with) COSS, and I bet that the disk with COSS (/cache3) is 100% busy while the other three are mostly idle. COSS is very good for small objects and AUFS is fine with larger objects.
> There is one larger disk. But this larger disk is not faster. It will perform worse with more objects on it than the other disks.
>
> To find out more about disk I/O and pressure on the disk with COSS, one can evaluate the output of iostat or 'vmstat -d 5 5'.
>
> I recommend changing the configuration to utilise all disks in a more balanced way. Be sure to also look at the output of iostat. My suggestion is to use COSS only for objects smaller than 64 KB. Depending on the average object size of your cache, this limit may be set lower. So I suggest:
>
> cache_dir coss /cache3 11 max-size=65535 max-stripe-waste=32768 block-size=8192 membufs=15
> cache_dir aufs /cache1 115000 16 256 min-size=65536
> cache_dir aufs /cache2 115000 16 256 min-size=65536
> cache_dir aufs /cache4 115000 16 256 min-size=65536
>
> And then observe the log and the output of iostat. If the disk I/O is balanced, the message about membufs reappears, and you have sufficient free memory, you may increase membufs. If the I/O is not balanced, the limit of 64KB may be decreased to 16KB. Depending on the results and iostat, it may be better to have 2 disks with COSS and 2 disks with AUFS:
>
> cache_dir coss /cache3 11 max-size=16383 max-stripe-waste=32768 block-size=8192 membufs=15
> cache_dir coss /cache1 11 max-size=16383 max-stripe-waste=32768 block-size=8192 membufs=15
> cache_dir aufs /cache2 115000 16 256 min-size=16384
> cache_dir aufs /cache4 115000 16 256 min-size=16384
>
> Marcus

NP: use the cache manager info report to find the average object size:

  squidclient mgr:info

COSS handles things in 1MB slices. This is the main reason max-size=1048575 is a bad idea: one object per file/slice is less efficient than AUFS's one object per file. So a 110GB COSS dir will be juggling a massive ~110,000 slices on and off of disk as things are needed.

I recommend using a smaller COSS overall size and using the remainder of each disk for AUFS storage of the larger objects.
(COSS is the exception to the one-dir-per-spindle guideline.)

Something like this, with ~30GB of COSS on each disk and double that on the big disk, gives ~150GB for small objects:

cache_dir coss /cache1coss 3 max-size=65535 max-stripe-waste=32768 block-size=8192 membufs=15
cache_dir aufs /cache1aufs 10 16 256 min-size=65536
cache_dir coss /cache2coss 3 max-size=65535 max-stripe-waste=32768 block-size=8192 membufs=15
cache_dir aufs /cache2aufs 10 16 256 min-size=65536
cache_dir coss /cache3coss 3 max-size=65535 max-stripe-waste=32768 block-size=8192 membufs=15
cache_dir aufs /cache3aufs 10 16 256 min-size=65536
cache_dir coss /cache4coss1 6 max-size=65535 max-stripe-waste=32768 block-size=8192 membufs=15
cache_dir aufs /cache4aufs 24 16 256 min-size=65536

This last one is a little tricky. You will need to test and see if it is okay this big or needs reducing. On the size multiple
RE: [squid-users] I see this error in cache.log file no free membufs
Sorry, forgot to paste the iostat readings in the email below:

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.70   0.01     1.19     4.64    0.00  93.47

Device:   tps  Blk_read/s  Blk_wrtn/s   Blk_read   Blk_wrtn
sda      3.24      120.01      111.88    6330517    5901960
sda1     0.00        0.04        0.00       1876          8
sda2     3.24      119.96      111.88    6328145    5901952
sdb     10.45      133.64        4.21    7049934     221920
sdb1    10.45      133.62        4.21    7048838     221920
sdc     10.45      143.34        4.23    7561702     223264
sdc1    10.45      143.32        4.23    7560606     223264
sdd     29.48     4451.11      642.59  234804222   33897624
sdd1    29.48     4451.09      642.59  234803126   33897624
sde     91.10      757.60     1533.97   39964886   80919592
sde1    91.10      757.58     1533.97   39963790   80919592
dm-0    15.51      118.73      110.83    6263154    5846352
dm-1     0.29        1.23        1.05      64736      55600

Ragheb Rustom
Smart Telecom S.A.R.L

-----Original Message-----
From: Ragheb Rustom [mailto:rag...@smartelecom.org]
Sent: Friday, July 22, 2011 3:10 PM
To: 'Amos Jeffries'; squid-users@squid-cache.org
Cc: 'Marcus Kool'
Subject: RE: [squid-users] I see this error in cache.log file no free membufs
Importance: High
Re: [squid-users] SQUID Logrotate problem
Hi Amos,

On Fri, Jul 22, 2011 at 12:32 AM, Amos Jeffries <squ...@treenet.co.nz> wrote:
> Can you check the contents of the squid.pid file vs the processes that are actually running between step (6) and (7).

Sure, here are the processes after each step that would cause a change:

Step 3:
 1879 ?  Ss  0:00 /usr/sbin/squid3 -YC -f /etc/squid3/squid.conf
 1881 ?  S   0:00 (squid) -YC -f /etc/squid3/squid.conf
 1890 ?  S   0:00 (digest_pw_auth) -c /etc/squid3/digest

Step 5:
 1879 ?  Ss  0:00 /usr/sbin/squid3 -YC -f /etc/squid3/squid.conf
 1881 ?  S   0:00 (squid) -YC -f /etc/squid3/squid.conf
 1890 ?  S   0:00 (digest_pw_auth) -c /etc/squid3/digest

Step 7:
 1879 ?  Ss  0:00 /usr/sbin/squid3 -YC -f /etc/squid3/squid.conf
 1881 ?  S   0:00 (squid) -YC -f /etc/squid3/squid.conf
 2237 ?  S   0:00 (digest_pw_auth) -c /etc/squid3/digest

And the pid file never changed from 1881. Let me know if you want an strace or anything else.

--Will
Re: [squid-users] Is there any Linux Wifi Hotspot Solution that can be used with squid .. ?
Chillispot is/was kind of dead last time I checked, but the project was forked/continued by others: http://coova.org/CoovaChilli/

Regards and good luck,
Eli

2011/7/21 Hasanen AL-Bana <hasa...@gmail.com>:
> Try Chillispot, although it is old.
>
> On Thu, Jul 21, 2011 at 8:26 PM, Mr Crack <mrcrack...@gmail.com> wrote:
>> Dear Friends, I would like to know if there is any wifi hotspot solution software in Linux (free or commercial). In Windows, that can be done with Antamedia Hotspot software. Thanks in advance
>> MrCrack007
[squid-users] Squid 3 - CentOS 6 - doesn't display flash player
Hello,

On CentOS 6 with squid 3.1.4, when clients surf to a page with a flash player animation, the animations aren't displayed. I tried the default minimum configuration (see below) plus cache_peer, but nothing changed. Can you help me?

--- squid.conf ---

#
# Recommended minimum configuration:
#
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl localhost src ::1/128
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl to_localhost dst ::1/128

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on localhost is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 3128

# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /var/spool/squid 100 16 256

# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

cache_peer proxy_parent.domain.com parent 8080 0 proxy-only default
Re: [squid-users] Squid ReverseProxy with vhost vport - Problem
On 22/07/11 22:30, Amos Jeffries wrote:
> On 22/07/11 18:24, tim.schmel...@bechtle.com wrote:
>> Hi Amos, thanks for your answer.
>>
>>> Every time we 'fix' this we get complaints from people wanting the opposite behaviour or suddenly getting breakage. We for now have this behaviour: Squid should obey Host: port when vport is given, and ignore it when vport is omitted (using http_port value if none is pulled
>
> Oops. Typed that around the wrong way. Should have said: ignore Host: port when vport is given, and use it ... etc.
>
>>> in indirectly by vhost anyway), and override/replace it when vport=N is given. So your config tells Squid to use what Pound supplies (default 80). You can avoid that by either getting Pound to stop adding the unusual port to the header, or using vport=80 in squid.
>>
>> Ahh ok, so it must work when I put vport=3007 and vport=3008 in my config, right?
>
> Correct, it is supposed to work.
>
>> But this doesn't work. In cache.log I can see that Squid tries to connect to sub3007, which resolves to 127.0.0.1 via /etc/hosts, on port 80. I'm very confused that my config works with squid2.
>
> Looks like a regression bug. The squid-3 code seems to skip over the vport when vhost or defaultsite is configured.
>
> Amos

Fixed it. The patch is at
http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-11575.patch

It does not apply cleanly to the 3.1 series, but the section relevant to vhost seems to apply successfully, so it should work for you despite the rejects.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.9
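[Editor's note: for readers hitting the same issue, the kind of accelerator setup under discussion looks roughly like the sketch below. The port and the sub3007 hostname come from the thread; the backend cache_peer line and the ACL/peer names are illustrative assumptions, not the poster's actual config.]

```
# Accept reverse-proxy traffic on port 3007. With vport=3007 Squid
# rebuilds the request URL using port 3007, overriding whatever port
# the front-end balancer (Pound, in this thread) put in Host:.
http_port 3007 accel vhost vport=3007

# Hypothetical backend; in the thread sub3007 resolves via /etc/hosts.
cache_peer sub3007 parent 3007 0 no-query originserver name=srv3007
acl site3007 dstdomain sub3007
cache_peer_access srv3007 allow site3007
cache_peer_access srv3007 deny all
```

Note that the regression described above meant squid-3 ignored the vport value whenever vhost or defaultsite was also set, until the linked patch.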
Re: [squid-users] SQUID Logrotate problem
On 23/07/11 00:50, Will Roberts wrote:
> Hi Amos,
>
> On Fri, Jul 22, 2011 at 12:32 AM, Amos Jeffries <squ...@treenet.co.nz> wrote:
>> Can you check the contents of the squid.pid file vs the processes that are actually running between step (6) and (7).
>
> Sure, here are the processes after each step that would cause a change:
>
> Step 3:
>  1879 ?  Ss  0:00 /usr/sbin/squid3 -YC -f /etc/squid3/squid.conf
>  1881 ?  S   0:00 (squid) -YC -f /etc/squid3/squid.conf
>  1890 ?  S   0:00 (digest_pw_auth) -c /etc/squid3/digest
>
> Step 5:
>  1879 ?  Ss  0:00 /usr/sbin/squid3 -YC -f /etc/squid3/squid.conf
>  1881 ?  S   0:00 (squid) -YC -f /etc/squid3/squid.conf
>  1890 ?  S   0:00 (digest_pw_auth) -c /etc/squid3/digest
>
> Step 7:
>  1879 ?  Ss  0:00 /usr/sbin/squid3 -YC -f /etc/squid3/squid.conf
>  1881 ?  S   0:00 (squid) -YC -f /etc/squid3/squid.conf
>  2237 ?  S   0:00 (digest_pw_auth) -c /etc/squid3/digest
>
> And the pid file never changed from 1881. Let me know if you want an strace or anything else.
>
> --Will

I'm out of ideas for now. I suspected the old issue of the PID file being left pointing at a crashed/gone process, but it seems we really did fix that one. So it comes down to why the automated rotate was crashing in the first place.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.9
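[Editor's note: for reference, log rotation on a Debian/Ubuntu-style squid3 install is typically driven by a logrotate stanza of this general shape. The paths, schedule, and retention count are assumptions; adjust to your packaging. The postrotate step is the part relevant to this thread, since it is what pokes the running master process via the pid file.]

```
/var/log/squid3/*.log {
    weekly
    rotate 4
    compress
    missingok
    postrotate
        # Ask the running Squid (located via its pid file) to close
        # and reopen its log files after logrotate has moved them.
        test -e /var/run/squid3.pid && /usr/sbin/squid3 -k rotate
    endscript
}
```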
Re: [squid-users] Squid 3 - CentOS 6 - doesn't display flash player
On 23/07/11 01:02, Franco, Battista wrote:
> Hello. On CentOS 6 with squid 3.1.4, when clients surf to a page with a flash player animation, the animations aren't displayed. I tried the default minimum configuration (see below) plus cache_peer, but nothing changed. Can you help me?

You should see a bunch of requests in access.log as each animation icon, image, and script gets loaded.

So are the requests happening? If not, the app is broken and is either not making any requests or not making them through the proxy like it should.

If they are, is there a sign of 4xx/5xx status codes? Take a closer look at those, particularly on script requests, to see what's going wrong.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.9
[squid-users] Re: [squid-users] Squid 3 - CentOS 6 - doesn't display flash player
Hello,

In access.log there are many 503s; see the examples below.

P.S. On my LAN there is another server with squid 2.6.STABLE16 and it works without problems.

1311340425.016  63101 10.239.57.89 TCP_MISS/503 4297 GET http://ad.it.doubleclick.net/adj/hp.libero.it/hp;bgarea=hp;adv_sso1=0;adv_sso2=0;adv_sso3=0;adv_np=yes;region=0;dcopt=ist;tile=1;sz=728x90,970x90,970x27;ord=3947939781? - DIRECT/74.125.227.59 text/html
1311340488.306  63169 10.239.57.89 TCP_MISS/503 4251 GET http://ad.it.doubleclick.net/adj/hp.libero.it/hp;fasciahp=1;adv_sso1=0;adv_sso2=0;adv_sso3=0;adv_np=yes;region=0;tile=2;sz=300x250,300x600;ord=3947939781? - DIRECT/74.125.227.59 text/html
1311342983.075  63771 10.239.57.89 TCP_MISS/503 3994 GET http://www.microsoft.com/en-us/homepage/shared/core/2/js/js.ashx? - DIRECT/207.46.131.43 text/html
1311342983.075  63770 10.239.57.89 TCP_MISS/503 4038 GET http://www.microsoft.com/en-us/homepage/shared/core/2/css/css.ashx? - DIRECT/207.46.131.43 text/html

-----Original Message-----
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Friday, 22 July 2011 16:06
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid 3 - CentOS 6 - doesn't display flash player

On 23/07/11 01:02, Franco, Battista wrote:
> Hello. On CentOS 6 with squid 3.1.4, when clients surf to a page with a flash player animation, the animations aren't displayed. I tried the default minimum configuration (see below) plus cache_peer, but nothing changed. Can you help me?

You should see a bunch of requests in access.log as each animation icon, image, and script gets loaded. So are the requests happening? If not, the app is broken and is either not making any requests or not making them through the proxy like it should. If they are, is there a sign of 4xx/5xx status codes? Take a closer look at those, particularly on script requests, to see what's going wrong.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.9
[squid-users] squid 3.1 (w/ TPROXY/WCCP) and increased 502 and 206 codes
I am doing extended testing of a CentOS v6 TPROXY/Squid3/WCCP setup and I am noticing higher than usual TCP_MISS/502 codes. I am also seeing some 206 codes, but it is the 502s that are much higher than normal. I think it is transport related inside the TPROXY/Squid side of things, but I am not sure. I am seeing the 502 codes on both GETs and POSTs.

Can anyone provide more insight on this condition and what/where I should start troubleshooting?

I am running the stock CentOS v6 kernel (2.6.32-71.29.1) and Squid 3.1.10 as packaged by RHEL 6 (specifically a RHEL 6 rebuilt source rpm of squid-3.1.10-1.el6). Should I update to the more recent release of squid 3.1 as a starting point?

Nick
[squid-users] dynamic url based proxy redirect
For a particular use case of ours, we are looking at a scenario where, if the URL matches a particular pattern such as http://myproxy.mysite.com/a/rest_of_uri, we would like to route the traffic to a given set of proxies behind the scenes (and similarly, a different set of proxies for a different site pattern). Let me know if/how that is possible in terms of squid.conf, to begin with. Thanks!
[squid-users] Seeing a lot of RELEASE entries store.log for cache that really shouldn't be releasing much at all.
We have our cache set up in the following manner:

cache_dir ufs /cache_dir 10 16 256
refresh_pattern . 10512000 100% 10512000

While testing, everything looked good, but after putting a bit of load on the cache we're seeing a lot of RELEASE entries in the log (non entries) and I don't understand why this would be the case, since our refresh pattern basically says to keep entries as long as possible. The cache isn't being hit with any special expiry or no-cache headers, just the User-Agent, Connection, Keep-Alive and Host headers.

Here are some examples of the RELEASE messages from our store.log:

1311359725.359 RELEASE 00 00019ACB 324E9C9538B7DE8EA5D8DCB7925E668B 200 1311225933 0 -1 audio/mpeg 80875/80875 GET http://10.0.21.15:8080/TranscodingInterface/preview/RABFaaTI
1311359737.598 RELEASE 00 00015642 312792582E333AD8185F95EDC0C04DCF 200 1311264767 0 -1 audio/aac 72865/72865 GET http://10.0.21.15:8080/TranscodingInterface/preview/RAC2iWNP
1311359737.747 RELEASE 00 000198C3 0A560353D2B2F834CE8225F7A429339E 200 1311357210 -1 1311457210 x-squid-internal/vary -1/0 GET http://10.0.21.15:8080/TranscodingInterface/preview/aHR0cDovL2NvbnRlbnQuOXNxdWFyZWQuY29tL21lZGlhL21wMy9wb2x5LzI3MDg3Ny5tcDM

If the content is hit with a different User-Agent, will this cause the cache to release the object, since the result from the origin server will differ in size/content with a different User-Agent?

-Dan
Re: [squid-users] Seeing a lot of RELEASE entries store.log for cache that really shouldn't be releasing much at all.
On 23/07/11 09:49, Dan Ford wrote:
> We have our cache set up in the following manner:
>
> cache_dir ufs /cache_dir 10 16 256
> refresh_pattern . 10512000 100% 10512000
>
> While testing everything looked good, but after putting a bit of load on the cache we're seeing a lot of RELEASE entries in the log (non entries) and I don't understand why this would be the case, since our refresh pattern basically says to keep entries as long as possible. The cache isn't being hit with any special expiry or no-cache headers, just the User-Agent, Connection, Keep-Alive and Host headers. Here are

The reply headers are more relevant to refresh_pattern. Check what they are, and maybe enable debug level 22,3 to see what refreshCheck is deciding.

> some examples of the RELEASE messages from our store.log:
>
> 1311359725.359 RELEASE 00 00019ACB 324E9C9538B7DE8EA5D8DCB7925E668B 200 1311225933 0 -1 audio/mpeg 80875/80875 GET http://10.0.21.15:8080/TranscodingInterface/preview/RABFaaTI
> 1311359737.598 RELEASE 00 00015642 312792582E333AD8185F95EDC0C04DCF 200 1311264767 0 -1 audio/aac 72865/72865 GET http://10.0.21.15:8080/TranscodingInterface/preview/RAC2iWNP
> 1311359737.747 RELEASE 00 000198C3 0A560353D2B2F834CE8225F7A429339E 200 1311357210 -1 1311457210 x-squid-internal/vary -1/0 GET http://10.0.21.15:8080/TranscodingInterface/preview/aHR0cDovL2NvbnRlbnQuOXNxdWFyZWQuY29tL21lZGlhL21wMy9wb2x5LzI3MDg3Ny5tcDM
>
> If the content is hit with a different User-Agent will this cause the cache to release the object as the result from the origin server will differ in size / content with a different User-Agent?

Depends on which user agents are involved with the cached version and the new request: whether the new agent's details match the Vary: details the cached request was generated from.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.9
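[Editor's note: the debug level mentioned above is set in squid.conf. Debug section 22 covers the refresh/staleness checks; the sketch below leaves everything else at the default verbosity.]

```
# Log section 22 (refresh checks) at level 3 so cache.log records the
# reason refreshCheck gives for each stale/release decision, while all
# other sections stay at the default level 1.
debug_options ALL,1 22,3
```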
Re: [squid-users] dynamic url based proxy redirect
On 23/07/11 07:53, Renjith T wrote:
> For a particular use case of ours, we are looking at a scenario where, if the URL matches a particular pattern such as http://myproxy.mysite.com/a/rest_of_uri, we would like to route the traffic to a given set of proxies behind the scenes (and similarly, a different set of proxies for a different site pattern). Let me know if/how that is possible in terms of squid.conf, to begin with. Thanks!

Yes. Details on configuring a cache hierarchy are here:
http://wiki.squid-cache.org/Features/CacheHierarchy

You need to add to that cache_peer_access and an ACL to do the URL test and decide which peers the request may go to. It is easiest if the "site pattern", as you call it, is separated by sub-domains; the dstdomain ACL type is faster than the url_regex that would be needed to include that /a/ piece.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.9
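[Editor's note: a minimal sketch of the squid.conf pieces involved. The peer hostnames and ACL names are hypothetical; url_regex is shown because the questioner's pattern depends on the /a/ path component, though dstdomain is preferable when sub-domains can be used instead, as noted above.]

```
# Two illustrative upstream proxies serving the /a/ pattern.
cache_peer proxy-a1.example.com parent 3128 0 round-robin no-query name=peerA1
cache_peer proxy-a2.example.com parent 3128 0 round-robin no-query name=peerA2

# Requests whose URL starts with the pattern from the question.
acl pathA url_regex ^http://myproxy\.mysite\.com/a/

# Only pathA traffic may use these peers...
cache_peer_access peerA1 allow pathA
cache_peer_access peerA1 deny all
cache_peer_access peerA2 allow pathA
cache_peer_access peerA2 deny all

# ...and pathA traffic must go through a peer rather than direct.
never_direct allow pathA
```

A second peer set for a different pattern would repeat the same three-part shape: cache_peer lines, an ACL, and matching cache_peer_access rules.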
Re: [squid-users] squid 3.1 (w/ TPROXY/WCCP) and increased 502 and 206 codes
On 23/07/11 04:24, Ritter, Nicholas wrote:
> I should add one important point. When the error occurs, it most often does not affect the entire site or transaction. That is to say, I can visit a site, get content, and then at some point fill out a form on the site, which then generates the 502. I don't want anyone to assume that the 502 is being generated because of an obvious path connectivity error where the site being surfed was down all along. I should also note that I am not running any unique refresh patterns in squid.conf.
>
> -----Original Message-----
> From: Ritter, Nicholas [mailto:nicholas.rit...@americantv.com]
> Sent: Friday, July 22, 2011 11:16 AM
> To: squid-users@squid-cache.org
> Subject: [squid-users] squid 3.1 (w/ TPROXY/WCCP) and increased 502 and 206 codes
>
> I am doing extended testing of a CentOS v6 TPROXY/Squid3/WCCP setup and I am noticing higher than usual TCP_MISS/502 codes. I am also seeing some 206 codes, but it is the 502s that are much higher than normal. I think it is transport related inside the TPROXY/Squid side of things, but I am not sure. I am seeing the 502 codes on both GETs and POSTs. Can anyone provide more insight on this condition and what/where I should start troubleshooting?

With the message presented in that 502 error page. 502 is sent on several outbound connection problems, from TCP connect through to reply parsing.

> I am running the stock CentOS v6 kernel (2.6.32-71.29.1) and Squid 3.1.10 as packaged by RHEL 6 (specifically a RHEL 6 rebuilt source rpm of squid-3.1.10-1.el6). Should I update to the more recent release of squid 3.1 as a starting point?

Always a good choice, to know if it's been fixed. Though I don't recall anything major having changed since .10 regarding connectivity.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.9
Re: [squid-users] Re: [squid-users] Squid 3 - CentOS 6 - doesn't display flash player
On 23/07/11 02:15, Franco, Battista wrote:
> Hello. In access.log there are many 503s; see the examples below.
>
> P.S. On my LAN there is another server with squid 2.6.STABLE16 and it works without problems.
>
> 1311340425.016  63101 10.239.57.89 TCP_MISS/503 4297 GET http://ad.it.doubleclick.net/adj/hp.libero.it/hp;bgarea=hp;adv_sso1=0;adv_sso2=0;adv_sso3=0;adv_np=yes;region=0;dcopt=ist;tile=1;sz=728x90,970x90,970x27;ord=3947939781? - DIRECT/74.125.227.59 text/html
> 1311340488.306  63169 10.239.57.89 TCP_MISS/503 4251 GET http://ad.it.doubleclick.net/adj/hp.libero.it/hp;fasciahp=1;adv_sso1=0;adv_sso2=0;adv_sso3=0;adv_np=yes;region=0;tile=2;sz=300x250,300x600;ord=3947939781? - DIRECT/74.125.227.59 text/html
> 1311342983.075  63771 10.239.57.89 TCP_MISS/503 3994 GET http://www.microsoft.com/en-us/homepage/shared/core/2/js/js.ashx? - DIRECT/207.46.131.43 text/html
> 1311342983.075  63770 10.239.57.89 TCP_MISS/503 4038 GET http://www.microsoft.com/en-us/homepage/shared/core/2/css/css.ashx? - DIRECT/207.46.131.43 text/html

Lots of domains you are unable to connect to. Firewall rules?

Nothing in that set is related to flash.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.9
Re: [squid-users] I see this error in cache.log file no free membufs
On 23/07/11 00:09, Ragheb Rustom wrote:
> Dear Marcus and Amos,
>
> I have done the changes you proposed. I have dropped the max-size on the COSS partition to 100KB, so the cache_dir lines now read as follows:
>
> cache_dir coss /cache3/coss1 11 max-size=102400 max-stripe-waste=32768 block-size=8192 membufs=100
> cache_dir aufs /cache1 115000 16 256 min-size=102401
> cache_dir aufs /cache2 115000 16 256 min-size=102401
> cache_dir aufs /cache4/cache1 24 16 256 min-size=102401
>
> After doing this I have noticed the following warning every now and then (usually every 1-2 hours) in cache.log:
>
> squidaio_queue_request: WARNING - Queue congestion
>
> What I also noticed using iostat is that the big HDD with an AUFS dir is handling a lot of write requests while the other 2 HDDs with AUFS dirs rarely have disk writes. Is this normal behavior? Since I have 3 AUFS cache_dirs, shouldn't Squid's disk read and write access be roughly equal between the 3 AUFS partitions? Do you think I should go for a higher max-size on the COSS partition to relieve the extra I/O from the big AUFS cache_dir?

The default selection picks the directory with the most available space. So for the first 130GB of unique cacheable objects, that would be cache4.
http://www.squid-cache.org/Doc/config/store_dir_select_algorithm/

You can set that to round-robin to level the writes more evenly over the AUFS disks. It won't be perfectly even balancing, due to differences in object size and a few other factors.

The queue congestion is likely a result of everything big going to cache4 initially.
http://wiki.squid-cache.org/KnowledgeBase/QueueCongestion

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.9
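[Editor's note: the change Amos suggests is a single squid.conf directive.]

```
# The default selection tends to favour the cache_dir with the most
# available space, so a new big disk soaks up nearly all writes.
# round-robin hands new objects to each eligible cache_dir in turn.
store_dir_select_algorithm round-robin
```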