RE: [squid-users] Multiple separate caches?
Hi Matus,

> I'd say this is misbehaviour of the solution. I guess a simple Vary:
> header containing the name of the header inserted by the SSL offload
> device should do it.

That did it. Thanks. Apache conf example:

  Header append Vary X-SSL

--
Rgds,
Sean.
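For context, a minimal sketch of where that directive sits in an Apache configuration (the header name X-SSL is an assumption here; use whatever name your SSL offload device actually inserts):

  <VirtualHost *:80>
      # The offload device adds X-SSL to requests that arrived over HTTPS.
      # Appending it to Vary makes Squid store the HTTP and HTTPS variants
      # of each URL as separate cache entries.
      Header append Vary X-SSL
  </VirtualHost>

This requires mod_headers to be enabled.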
Re: [squid-users] url-rewrite & digest authentication not working together
On Wed, 14 Jul 2010 12:07:45 -0700 (PDT), Mike Melson wrote:
> Hi -
>
> I'm having trouble using squid plus a url-rewrite-program as a reverse
> proxy to a system that requires digest authentication.
>
> Digest authentication fails because the uri= in the Authorization
> header isn't rewritten & so it doesn't match the POST URI created by
> url-rewrite-program. Is there a way to also rewrite the uri string in the
> Authorization header before squid sends it to the originserver?

No. This is one of the limits of re-writing the requested URL while it is
in transit. Consider why that URI is in the Authorization header: the
client is passing specific credentials to a security zone identified by
the URI. If the URI is used even in part as the realm, then the digest
itself is salted with the public URI.

> If it helps clarify, I'm using curl to POST to squid as a reverse proxy to
> a custom web server. And, if I eliminate the url-rewrite-program,
> authorization works fine.
>
> e.g. [curl] --> POST /myfile.txt --> [squid (url-rewrite myfile.txt to
> <32-bit hex string>)] --> POST /<32bit-hex-string> --> [originserver]

URL re-writing is a rather nasty violation of HTTP. Where possible you
need to remove it. Squid in reverse proxy mode acts exactly like a client
web browser when contacting the web server. Your web server should always
be aware of its public URIs and able to handle requests for them.

Amos
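To illustrate the mismatch concretely, this is roughly what a digest-authenticated request looks like per RFC 2617 (all field values below are made-up placeholders):

  POST /myfile.txt HTTP/1.1
  Authorization: Digest username="mike", realm="example",
      nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093", uri="/myfile.txt",
      response="6629fae49393a05397450978507c4ef1"

The response= hash is computed over, among other things, the request method and the digest-uri. After the rewriter changes the request line to POST /<32bit-hex-string>, the uri= field and the hash still refer to /myfile.txt, so the origin server's verification of the digest necessarily fails.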
RE: [squid-users] Forcing TCP_REFRESH_HIT to be answered from cache
Hi Dererk!

Add the "ignore-reload" option on cache refresh policies. Looks like this:

  refresh_pattern . 0 20% 4320 ignore-reload

Martin

-----Original Message-----
From: der...@mail.buenosaireslibre.org
[mailto:der...@mail.buenosaireslibre.org]
Sent: Wednesday, July 14, 2010 12:14 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Forcing TCP_REFRESH_HIT to be answered from cache
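Since the original post mentions needing regex flexibility to exempt special objects: refresh_pattern rules are checked in order and the first match wins. A hedged sketch of that layout (the /dynamic/ path and the time values are assumptions):

  # exempted objects keep normal freshness handling
  refresh_pattern -i /dynamic/ 0 20% 1440
  # everything else: treat as fresh for up to a year, ignore client reloads
  refresh_pattern . 525600 100% 525600 override-expire ignore-reload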
Re: [squid-users] SSL Certs on Squid
Does anyone know which server type to select from GoDaddy for an SSL cert
that is going to run on Squid? GoDaddy has a list based on the type of
server I'm using, and Squid isn't an option. Will an Apache cert work?
Other options are things like IIS, Mac, Tomcat, etc.

Thanks,
Phil

On Wed, Jul 14, 2010 at 8:35 AM, Luis Daniel Lucio Quiroz wrote:
> Maybe this helps you:
> http://portal.okay.com.mx/cademia-linux/administracion-de-la-ca
>
> Sorry, it is in Spanish, but the openssl commands are there.
>
> LD
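Both questions come down to key and certificate formats. Squid links against OpenSSL, so the "Apache" server type (PEM files) is typically the one that matches; that is an inference from how such vendors package their downloads, not something Squid documents. A minimal sketch of the usual CSR generation (the file names and the 2048-bit key size are assumptions):

  # generate a new private key and a certificate signing request in one step
  openssl req -new -newkey rsa:2048 -nodes \
      -keyout example.com.key -out example.com.csr

  # inspect the CSR before pasting it into GoDaddy's form
  openssl req -in example.com.csr -noout -text

The signed certificate comes back as PEM, which is the format the cert= and key= arguments of Squid's https_port directive expect.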
[squid-users] Forcing TCP_REFRESH_HIT to be answered from cache
Hi everyone!

I'm running a reverse proxy (1) to help my httpd serve content fast and
avoid going to the origin as much as possible. Doing that, I found I make
a _lot_ of TCP_REFRESH_HIT requests to origin, although I have an insane
10-year-long expiration date set on the http response headers sent back
to squid.

Although I did verify that, using wget -S and some fancy tcpdump lines, I
wanted to get rid of any TCP_REFRESH_HIT request. The main reason is that
there is no way some objects change, so checking for freshness makes no
sense and moreover increases server load (1/7 of requests are
refresh_hits).

I used refresh_pattern with override-expire and extremely high min and
max values, with absolutely no effect. For the record, if I use
offline_mode I obtain partially what I wanted, but unfortunately I lose
the regex flexibility that refresh_pattern has, which I need for
exempting special objects.

I enabled debug for a blink of an eye, and got a request that goes as
TCP_REFRESH_HIT and, as far as I understand, appears to be treated as
stale and requested back from origin:

2010/07/14 13:35:58| parseHttpRequest: Complete request received
2010/07/14 13:35:58| removing 1462 bytes; conn->in.offset = 0
2010/07/14 13:35:58| clientSetKeepaliveFlag: http_ver = 1.0
2010/07/14 13:35:58| clientSetKeepaliveFlag: method = GET
2010/07/14 13:35:58| clientRedirectStart: 'http://foobar.com/object'
2010/07/14 13:35:58| clientRedirectDone: 'http://foobar.com/object' result=NULL
2010/07/14 13:35:58| clientInterpretRequestHeaders: REQ_NOCACHE = NOT SET
2010/07/14 13:35:58| clientInterpretRequestHeaders: REQ_CACHABLE = SET
2010/07/14 13:35:58| clientInterpretRequestHeaders: REQ_HIERARCHICAL = SET
2010/07/14 13:35:58| clientProcessRequest: GET 'http://foobar.com/object'
2010/07/14 13:35:58| clientProcessRequest2: storeGet() MISS
2010/07/14 13:35:58| clientProcessRequest: TCP_MISS for 'http://foobar.com/object'
2010/07/14 13:35:58| clientProcessMiss: 'GET http://foobar.com/object'
2010/07/14 13:35:58| clientCacheHit: http://foobar.com/object = 200
2010/07/14 13:35:58| clientCacheHit: refreshCheckHTTPStale returned 1
2010/07/14 13:35:58| clientCacheHit: in refreshCheck() block
2010/07/14 13:35:58| clientProcessExpired: 'http://foobar.com/object'
2010/07/14 13:35:58| clientProcessExpired: lastmod -1
2010/07/14 13:35:58| clientReadRequest: FD 84: reading request...
2010/07/14 13:35:58| parseHttpRequest: Method is 'GET'
2010/07/14 13:35:58| parseHttpRequest: URI is '/object'

While trying anything that might have an effect, I also tried
ignore-stale-while-revalidate, override-lastmod, override-expire,
ignore-reload and ignore-no-cache, pushed refresh_stale_hit high in the
sky, and again, no effect :-(

What am I doing wrong? Is there any other way to stop REFRESH_HITs from
being performed?

Greetings,

Dererk

ref:
1. Squid Cache: Version 2.7.STABLE7
configure options: '--prefix=/usr/local/squid' '--bindir=/usr/local/bin' '--sbindir=/usr/local/sbin' '--sysconfdir=/etc/squid' '--localstatedir=/var' '--mandir=/usr/local/man' '--infodir=/usr/local/info' '--disable-internal-dns' '--enable-async-io' '--enable-storeio=aufs,ufs,coss' '--with-large-files' '--enable-snmp' '--with-maxfd=8192' '--enable-htcp' '--enable-cache-digests'
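As a reference for the verification step mentioned above, a minimal example of checking what freshness headers Squid is actually serving (the URL is the placeholder from the debug log):

  # -S prints the response headers; check Expires, Cache-Control and Age
  wget -S -O /dev/null http://foobar.com/object

Repeating the request and comparing the headers, including Squid's X-Cache header, shows whether the reply came from cache or was revalidated against the origin, without trusting access.log alone.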
[squid-users] url-rewrite & digest authentication not working together
Hi -

I'm having trouble using squid plus a url-rewrite-program as a reverse
proxy to a system that requires digest authentication.

Digest authentication fails because the uri= in the Authorization header
isn't rewritten & so it doesn't match the POST URI created by
url-rewrite-program. Is there a way to also rewrite the uri string in the
Authorization header before squid sends it to the originserver?

If it helps clarify, I'm using curl to POST to squid as a reverse proxy
to a custom web server. And, if I eliminate the url-rewrite-program,
authorization works fine.

e.g. [curl] --> POST /myfile.txt --> [squid (url-rewrite myfile.txt to
<32-bit hex string>)] --> POST /<32bit-hex-string> --> [originserver]

Thanks,
Mike
[squid-users] Most objects are not cached (reverse proxy Squid 3.1)
Hello Squid Users.~

My squid.conf below.

---
http_port 80 vhost
https_port 443 accel cert=/etc/squid/a.crt key=/etc/squid/a.pem cafile=/etc/squid/a.ca protocol=https

cache_peer 1.1.1.1 parent 80 0 no-query originserver name=my_parent no-digest no-netdb-exchange

acl s1_domain dstdomain img.test.com
http_access allow s1_domain
cache_peer_domain my_parent img.test.com
http_access deny all

# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

# Uncomment and adjust the following to add a disk cache directory.
cache_dir ufs /cache 36000 16 256

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 1440 50% 2880 reload-into-ims
---

I run a script like this:

  curl -o /dev/null http://img.TEST.com/.gif

But most objects are not stored in cache_dir; requests go to the origin
server directly. I ran the script twice, same result (some objects were
cached).

I want to store all objects on disk and stop the requests (304 or 200)
from reaching the origin server. I also wonder what
"TCP_REFRESH_UNMODIFIED" means.

How can I do this? Thank you for reading :-)

PS. In store.log, most objects were RELEASEd:

---
1279163784.168 RELEASE -1 FB25B6CB108A5507EB97BBF1A1846625 304 1279131351 1279113212 -1 image/gif -1/0 GET http://img.TEST.COM/front_2010/images/plan/plan_s3.gif
1279163784.548 RELEASE -1 DF9A50CB378A3E1AB48A0C6900DC7017 304 1279131352 1279113213 -1 image/gif -1/0 GET http://img.TEST.COM/front_2010/images/search/outoLayer01_tit.gif
1279163784.603 RELEASE -1 15EB239DB8F1D5A665D9AA7C3F680823 304 1279131352 1279113206 -1 image/gif -1/0 GET http://img.TEST.COM/front_2010/images/common/logo.gif
1279163784.661 RELEASE -1 617C4C87489B2E89F51E54BE0FE1DF63 304 1279131352 1279113213 -1 image/gif -1/0 GET http://img.TEST.COM/front_2010/images/search/outoLayer02_tit.gif
1279163784.825 RELEASE -1 BA416AF9E47126D86778FB0D0AE17360 304 1279131352 1279113206 -1 image/gif -1/0 GET http://img.TEST.COM/front_2010/images/common/layout/copyright.gif
1279163784.935 RELEASE -1 B9F378103145A56C3ADFB70121235832 304 1279131352 1279113206 -1 image/gif -1/0 GET http://img.TEST.COM/front_2010/images/common/layout/multiSearch.gif
1279163784.991 RELEASE -1 1751939FD3D31BE9F77D3FE7EB807353 304 1279131352 1279113206 -1 image/gif -1/0 GET http://img.TEST.COM/front_2010/images/common/layout/signkorea.gif
1279163785.152 RELEASE -1 2358F9ADB5CFD9D975E2511774039069 304 1279131352 1279113206 -1 image/gif -1/0 GET http://img.TEST.COM/front_2010/images/common/layout/ftc.gif
1279163785.207 RELEASE -1 A0CA9827452E4AA6FF4C8C0A44FCC812 304 1279131353 1279113206 -1 image/png -1/0 GET http://img.TEST.COM/front_2010/images/common/time_bg.png
1279163785.264 RELEASE -1 40C5A8E250058F82B0857035872703ED 304 1279131353 1279113206 -1 image/gif -1/0 GET http://img.TEST.COM/front_2010/images/common/layout/ksnet.gif
1279163785.319 RELEASE -1 C65747391372B5D8A003F30C3123CCBB 304 1279131353 1279113206 -1 image/gif -1/0 GET http://img.TEST.COM/front_2010/images/common/layout/ksnet_txt.gif
---
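For what it's worth, TCP_REFRESH_UNMODIFIED means Squid revalidated a cached object with the origin and the origin answered 304, so the cached copy was reused. A hedged way to watch what Squid does with one of these URLs from the client side (standard curl flags; the URL is taken from the store.log excerpt above):

  # -s silent, -o discard the body, -D - dump response headers to stdout
  curl -s -o /dev/null -D - http://img.TEST.COM/front_2010/images/common/logo.gif

On a repeat request, Squid's X-Cache response header shows HIT or MISS, which tells you whether the object was served from cache without digging through the logs.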
Re: [squid-users] SSL Certs on Squid
On Wednesday 14 July 2010 09:38:10, Phil McDonnell wrote:
> I'm trying to set up squid3 to act as a reverse proxy under https. I
> can't seem to figure out how to sign the certs with GoDaddy's SSL
> product. Does anyone have the couple of commands I need to generate a
> CSR (certificate signing request)? I've seen a lot of things on the
> squid site that show how to generate x509 and PEM format requests, but
> it seems GoDaddy takes CSR format.
>
> I've never set up SSL before, so let me know if I'm misunderstanding
> something here.
>
> Thanks,
> Phil

Maybe this helps you:
http://portal.okay.com.mx/cademia-linux/administracion-de-la-ca

Sorry, it is in Spanish, but the openssl commands are there.

LD
[squid-users] SSL Certs on Squid
I'm trying to set up squid3 to act as a reverse proxy under https. I
can't seem to figure out how to sign the certs with GoDaddy's SSL
product. Does anyone have the couple of commands I need to generate a CSR
(certificate signing request)? I've seen a lot of things on the squid
site that show how to generate x509 and PEM format requests, but it seems
GoDaddy takes CSR format.

I've never set up SSL before, so let me know if I'm misunderstanding
something here.

Thanks,
Phil
Re: [squid-users] Squid2-only plugin from Secure Computing
On 07/14/2010 02:50 AM, Christoph Goeldi wrote:
> On the 15th April 2008 you wrote that you've been in contact with
> Secure Computing (today McAfee):
>
>> FYI: We have started talking to Secure Computing regarding Squid3
>> compatibility of the SmartFilter plugin. I will keep you updated.
>
> Would you mind sharing the outcome of your conversation with Secure
> Computing?

Hi Christoph,

That 2008 discussion stalled, unfortunately. I have just sent another
ping, but with all the corporate changes, it may take some time to find
the person in charge again. I will post when/if there is any final
outcome.

Please consider pushing the issue with McAfee from your end. As a
SmartFilter fan and customer, you may be more effective in triggering the
decision to port the code to eCAP. And feel free to point the McAfee
folks to me if they need help with eCAP in general or with porting their
code to eCAP in particular.

Thank you,

Alex.
Re: [squid-users] transparenty proxy + upstream proxy download limit
NublaII Lists wrote:
> Hi,
>
> I just set up a transparent proxy on my network and enabled an upstream
> proxy as well. Everything is working perfectly, but I discovered a
> problem with downloads: I can only download up to 350M and the
> connection gets chopped after that.

Your next step is to do full-debug cache.log traces and maybe packet
traces to see where the abort is happening. It smells like a network
quota being enforced.

> I don't have any kind of restrictions in the configurations, and the
> upstream proxy config is almost untouched (I only made the changes
> needed to make it talk to the transparent proxy).
>
> Just in case, both servers are running squid, versions:
>   transparent proxy: 2.7.8
>   upstream proxy: 3.0.STABLE19

Please don't use "transparent proxy" when describing a problem. The term
means as many as four completely different concepts and leads people to
assume the wrong things sometimes. I assume you mean "http_port ..
transparent", but it may be one of the others.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.5
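A hedged sketch of the trace steps suggested above (the debug level, interface name, IP placeholder and port are assumptions; ALL,9 is extremely verbose, so enable it only around one failing download):

  # squid.conf: full debugging; alternatively toggle it at runtime
  # on a running Squid with: squid -k debug
  debug_options ALL,9

  # capture the conversation with the upstream proxy over the same window
  tcpdump -i eth0 -s 0 -w upstream.pcap host <upstream-proxy-ip> and port 3128

Correlating the timestamp of the abort in cache.log with the packet trace should show which side closed the connection around the 350M mark.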
RE: [squid-users] Squid2-only plugin from Secure Computing
Hi Alex

On the 15th April 2008 you wrote that you've been in contact with Secure
Computing (today McAfee):

> FYI: We have started talking to Secure Computing regarding Squid3
> compatibility of the SmartFilter plugin. I will keep you updated.

Would you mind sharing the outcome of your conversation with Secure
Computing? I'm still interested in SmartFilter for newer Squid versions
(v3.0 and v3.1). It's the best URL filter I know, or do you know any
better software which would also work with Squid?

Regards,
Christoph

On Tue, 15 Apr 2008 08:49:53 -0700, Alex Rousskov wrote:
> On Thu, 2008-03-20 at 17:46 +1100, Adam Carter wrote:
>>> I would be happy to try to resolve this issue with Secure Computing.
>>> However, I need more information:
>>>
>>> - What exactly is the Secure Computing plugin that supports Squid2 and
>>> does not support Squid3? Does it have a name and a version number?
>>
>> I think SmartFilter patches the squid source, so is tied to specific
>> versions. It certainly adds another option to the configure script.
>> You can download it for free from Secure Computing's website and have
>> a look. Sorry I can't be more helpful, but I'm not a developer.
>>
>> Smartfilter 4.2.1 works with squid 2.6-17.
>>
>> http://www.securecomputing.com/index.cfm?skey=1326
>
> FYI: We have started talking to Secure Computing regarding Squid3
> compatibility of the SmartFilter plugin. I will keep you updated.
>
> Thank you,
>
> Alex.