[squid-users] How squid does Src/Dst IP address matching
Hi, can someone please tell me how squid does the ACL evaluation related to src/dst IP addresses? For example: acl myNet dst 10.0.0.0/255.255.0.0 As I understand it, squid does not get to see the IP-layer information that carries the destination IP address field. But in the HTTP headers we have the name of the server, like Host: mail.yahoo.com, which can be used to determine the destination IP address. Does squid resolve the IP address of mail.yahoo.com before it evaluates the dst address ACLs? Thanks in advance Saurabh
[squid-users] Using multiple auth scheme types in one squid instance?
Hi, I'm interested in using basic authentication for some client IPs and NTLM for others. I'm wondering if it's possible to set this up from within squid using ACLs, so that some clients are prompted for a username/password and others are forced to use the NTLM fakeauth. I have two separate lists of IPs and I wish to force the clients in the lists through two different auth types. I imagine the only alternative is to set up TCP forwarding to separate squid instances running on the same box based on the source IP, but that seems a bit messy. If someone knows how to do it I'd appreciate a tip. Thanks, Adrian.
Re: [squid-users] Using multiple auth scheme types in one squid instance?
Adrian wrote: Hi, I'm interested in using basic authentication for some client IPs and NTLM for others. I'm wondering if it's possible to set this up from within squid using ACLs, so that some clients are prompted for a username/password and others are forced to use the NTLM fakeauth. I have two separate lists of IPs and I wish to force the clients in the lists through two different auth types. I imagine the only alternative is to set up TCP forwarding to separate squid instances running on the same box based on the source IP, but that seems a bit messy. If someone knows how to do it I'd appreciate a tip. Squid does not differentiate the types of auth a user has done. It tries all methods it's configured with (in the order configured) until one succeeds. The common way to do this appears to be to use the least-accepting method first and fail over to the most-accepting, or vice versa depending on the situation. None of the methods will cause a popup unless the user's browser has no record of credentials. Then the browser will be the one asking, regardless of the methods you use. Amos -- Please use Squid 2.6STABLE17+ or 3.0STABLE1+ There are serious security advisories out on all earlier releases.
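For what it's worth, the scheme ordering Amos describes is simply the order of the auth_param blocks in squid.conf. A sketch of what that might look like (helper paths, realm, and the ncsa_auth password file are assumptions for illustration, not a tested config):

```
# Schemes are offered to browsers in the order configured below.
auth_param ntlm program /usr/local/squid/libexec/fakeauth_auth
auth_param ntlm children 5
auth_param basic program /usr/local/squid/libexec/ncsa_auth /etc/squid/passwd
auth_param basic realm proxy
auth_param basic credentialsttl 2 hours
acl authed proxy_auth REQUIRED
http_access allow authed
http_access deny all
```

Note that the browser, not squid, picks which offered scheme to use, which is part of why per-IP scheme selection is awkward.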
Re: [squid-users] How squid does Src/Dst IP address matching
Saurabh Agarwal wrote: Hi, can someone please tell me how squid does the ACL evaluation related to src/dst IP addresses? For example: acl myNet dst 10.0.0.0/255.255.0.0 As I understand it, squid does not get to see the IP-layer information that carries the destination IP address field. But in the HTTP headers we have the name of the server, like Host: mail.yahoo.com, which can be used to determine the destination IP address. Does squid resolve the IP address of mail.yahoo.com before it evaluates the dst address ACLs? With src and dst the method of obtaining the IP differs, but the evaluation is identical. src - performs an OS call to retrieve the IP of the other end of the TCP connection socket it's been given. dst - retrieves the FQDN being looked up from the request headers, and performs a DNS lookup on it to retrieve the address. Both then pass the IP to the ACL processing to be checked. Amos -- Please use Squid 2.6STABLE17+ or 3.0STABLE1+ There are serious security advisories out on all earlier releases.
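In other words, the dst check boils down to "resolve the name, then test the IP against the configured netmask". A rough sketch of that evaluation in Python, with a hypothetical static table standing in for squid's internal DNS lookup (squid actually uses its ipcache, and handles unresolvable names with its own policy):

```python
import ipaddress

# Hypothetical stand-in for squid's internal DNS lookup (ipcache).
FAKE_DNS = {"mail.yahoo.com": "10.0.3.7"}

def dst_acl_matches(hostname, acl_network, resolve=FAKE_DNS.get):
    """Mimic a 'dst' ACL: resolve the Host name to an IP, then test
    membership in the configured network/netmask."""
    ip = resolve(hostname)
    if ip is None:
        return False  # real squid treats unresolvable names specially
    net = ipaddress.ip_network(acl_network)  # accepts "10.0.0.0/255.255.0.0"
    return ipaddress.ip_address(ip) in net

print(dst_acl_matches("mail.yahoo.com", "10.0.0.0/255.255.0.0"))  # True
```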
[squid-users] Logging/Blocking URLs with question marks ?
Dear all, 2.5-Stable-5. I have used squid for probably 8 years. It has recently come to my attention that sites with dynamic content, as denoted by a ? question mark, are not being logged or blocked. So, for example, searches on Google do not show the full URL. Is there any way to switch this on, as it's important to our filtering of unwanted sites? Thanks Rob
[squid-users] Streaming audio header info
This might sound a bit vague, as I have just picked up support for squid. We are currently running squid-2.5.STABLE1 on Solaris 5.5.1, with local password authentication. I am in the process of replacing it with a Solaris 9 box running squid-2.6.STABLE18, authenticating via NTLM through samba off our DCs. Everything seems to work fine in the new scenario, except for a couple of users who use winamp to listen to online music etc. On the old setup, when they listen to a site, they get the track or station info back within winamp - presumably some form of metadata in a header somewhere. But on the new one, that info isn't present - all they see is the URL they are playing. Otherwise, it all works. I can't for the life of me work out where this info is being lost. The only difference in the log files I see is: old working proxy: 1205225734.572 27711 ip address TCP_MISS/600 639404 GET http://mp3-vr-128.smgradio.com/ username DIRECT/85.159.184.42 - 1205226002.900 26061 ip address TCP_MISS/600 582218 GET http://mp3-vr-128.smgradio.com/ username DIRECT/85.159.184.42 - new failing proxy: 1204908920.018 235 ip address TCP_MISS/200 7427 GET http://mp3-vr-128.smgradio.com/ username DIRECT/85.159.184.42 - 1204908920.178 224 ip address TCP_MISS/200 8719 GET http://mp3-vr-128.smgradio.com/ username DIRECT/85.159.184.42 - Is the TCP_MISS/600 vs TCP_MISS/200 significant? It's not really a show stopper, more of an annoyance, but I'm more worried that the problem might manifest itself elsewhere if I have something misconfigured. This is not my area of expertise, and I'm a bit stumped as to where to look next. Alan
Re: [squid-users] Using multiple auth scheme types in one squid instance?
On Mon, Mar 17, 2008 at 9:25 PM, Amos Jeffries [EMAIL PROTECTED] wrote: Squid does not differentiate the types of auth a user has done. It tries all methods it's configured with (in the order configured) until one succeeds. The common way to do this appears to be to use the least-accepting method first and fail over to the most-accepting, or vice versa depending on the situation. I want to put 'trusted' users through NTLM fakeauth so I can capture their usernames without bothering them with a popup auth box. For the 'untrusted' user subnets, I want to give them a popup box and make them authenticate. Since fakeauth will always pass, I can't just configure the schemes in succession. I was thinking of writing my own fakeauth code which rejected anything in my 'untrusted' IP list, forcing it to the next auth scheme, but I don't think the IP address is passed to the authentication scheme by squid to check against? Any other ideas? Thanks, Adrian.
[squid-users] Unable to reach a site
Hello friends, this is a strange problem I am facing. I am able to ping a particular site, and traceroute also finishes without any problem, but I am unable to get the page in my browser. Why is this happening? Is there any way, like traceroute, to find out what is happening in a browser when we access a site? thanks Revathi
[squid-users] Confusing redirection behaviour
In an attempt to generate a login page I was previously using external_acl_type to define a helper program for my acl, and then using deny_info to define a logon page for my users. This failed because the redirected page did not appear to use its own URL as its root and instead substituted the requested URL. This meant that I was unable to call a CGI from my logon form, because the form's CGI was appended to the originally requested (and denied) URL. So, if the user requested toyota.co.za, and was (correctly) sent to my login page at 192.168.60.254/login.html, the CGI called from the login page's form was toyota.co.za/cgi-bin/myperlscript.cgi. Amos suggested that, instead of hosting the CGI script on the server, I place it on the 'net, but I'm afraid this wouldn't suit my purpose. In desperation I'm looking at url_rewrite_program, but it also appears to have redirection issues. If I use the Perl script below, I would expect the requested URL to be replaced by http://localhost/login.html, whatever the user requested. However, 2 results occur. If the requested URL is a simple tld, like http://www.toyota.co.za, then the user is redirected to the Apache default page, which simply proclaims (misleadingly!) that "It Works!". This is in spite of the fact that the default page has been removed and replaced. If the URL takes the form http://www.google.co.za/firefox?client=firefox-a&rls=org.mozilla:en-GB:official then the user is presented with a 404 which says "firefox not found". /var/log/apache/error.log confirms that /var/www/firefox is not found. This behaviour persists if I replace http://localhost with http://192.168.60.254 or with http://news.bbc.co.uk, or whatever.
#!/usr/bin/perl
$|=1;
while (<STDIN>) {
    my ($url, $ip) = split;
    if (1 == 1) {    # always redirect, for testing
        print "302:http://localhost/login.html\n";
    } else {
        print "$url\n";
    }
}
Re: [squid-users] Unable to reach a site
Hi! On Monday 17 March 2008, revathi ganesh wrote: Hello friends, this is a strange problem I am facing. I am able to ping a particular site, and traceroute also finishes without any problem, but I am unable to get the page in my browser. Why is this happening? Is there any way, like traceroute, to find out what is happening in a browser when we access a site? Did you try lynx or wget to make an HTTP connection to the site from the squid server? A simple telnet to the webserver on port 80 will also prove the connection! Always start simple! Cheers Ang -- Angela Williams Enterprise Outsourcing Unix/Linux Cisco spoken here! Bedfordview [EMAIL PROTECTED] Gauteng South Africa Smile!! Jesus Loves You!!
Re: [squid-users] Logging/Blocking URLs with question marks ?
Robin Clayton wrote: Dear all, 2.5-Stable-5. You would do well to update; there are quite a number of known problems with 2.5. I have used squid for probably 8 years. It has recently come to my attention that sites with dynamic content, as denoted by a ? question mark, are not being logged or blocked. So, for example, searches on Google do not show the full URL. Is there any way to switch this on, as it's important to our filtering of unwanted sites? After your upgrade there are a number of options available, from logging and blocking to caching of objects with query strings. The exact ones will depend on your upgraded version. Amos -- Please use Squid 2.6STABLE17+ or 3.0STABLE1+ There are serious security advisories out on all earlier releases.
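For reference, on 2.6+ the usual knobs look something like this (a sketch, not a complete config; the ACL name is made up, and blocking on "has a query string" alone is usually far too broad):

```
# Log full URLs including the ?query portion (stripped by default for privacy):
strip_query_terms off
# Match requests that carry a query string:
acl dynamic urlpath_regex \?
# Example of blocking them outright:
# http_access deny dynamic
```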
Re: [squid-users] squid 2.7 behaviour
Guys, here's the output of the access_log with header logging. Let me know If I can gather some more info. Regards, Pablo 1205758406.127 1324 172.16.254.4 TCP_REFRESH_MISS/200 9580 GET http://listados.deremate.cl/consolas-video-juegos-sony-playstation-3_46745/_srZpricedesc - FIRST_UP_PARENT/172.16.100.22 text/html [Host: listados.deremate.cl\r\nUser-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\r\nAccept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5\r\nAccept-Language: en-us,en;q=0.5\r\nAccept-Encoding: gzip,deflate\r\nAccept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nKeep-Alive: 300\r\nConnection: keep-alive\r\nReferer: http://listados.deremate.cl/consolas-video-juegos-sony-playstation-3_46745/\r\nCookie: __utma=256982686.1388267116.1203594335.1205344397.1205674701.10; __utmz=256982686.1203594335.1.1.utmgclid=CPKY76Ga1ZECFQkdPAodoCgLZg|utmccn=(not+set)|utmcmd=(not+set)|utmctr=deremate.cl; __utmb=256982686; __utmc=256982686; CASTGC=TGT-60112-VM7jHuI5Ra6i72JSxfiGUN50uG2c2lBuLJr-50\r\nX-Forwarded-For: 64.117.132.58\r\n] [HTTP/1.0 200 OK\r\nDate: Mon, 17 Mar 2008 12:53:25 GMT\r\nServer: Apache\r\nExpires: Mon, 17 Mar 2008 10:13:25 -0300\r\nCache-Control: public, max-age=1200\r\nContent-Language: es-CL\r\nX-Apache: search01\r\nVary: Accept-Encoding\r\nContent-Encoding: gzip\r\nContent-Type: text/html;charset=ISO-8859-1\r\nX-Cache: MISS from sq01.dc.dr\r\nVia: 1.0 sq01.dc.dr:80 (squid/2.7.DEVEL0-20080313)\r\nConnection: close\r\n\r] 2008/03/16 11:39:16| ctx: exit level 0 2008/03/16 11:39:16| ctx: enter level 0: 'http://listados.deremate.cl/consolas-video-juegos-sony-playstation-3_46745/_srZpricedesc' 2008/03/16 11:39:16| storeSetPublicKey: unable to determine vary_id for 'http://listados.deremate.cl/consolas-video-juegos-sony-playstation-3_46745/_srZprice desc' 2008/03/16 11:39:16| ctx: exit level 0 On Sat, Mar 15, 2008 at 10:20 PM, J. 
Peng [EMAIL PROTECTED] wrote: On Sun, Mar 16, 2008 at 3:22 AM, Pablo García [EMAIL PROTECTED] wrote: Mark, I can provide network captures for this, I'm using mod_deflate to compress the responses. I'm also using squid-2.7 for apache's mod_deflate. It can work, but I also get lots of the same warnings in cache.log.
Re: [squid-users] Streaming audio header info
[EMAIL PROTECTED] wrote: This might sound a bit vague, as I have just picked up support for squid. We are currently running squid-2.5.STABLE1 on Solaris 5.5.1, with local password authentication. I am in the process of replacing it with a Solaris 9 box running squid-2.6.STABLE18, authenticating via NTLM through samba off our DCs. Everything seems to work fine in the new scenario, except for a couple of users who use winamp to listen to online music etc. On the old setup, when they listen to a site, they get the track or station info back within winamp - presumably some form of metadata in a header somewhere. But on the new one, that info isn't present - all they see is the URL they are playing. Otherwise, it all works. I can't for the life of me work out where this info is being lost. The only difference in the log files I see is: old working proxy: 1205225734.572 27711 ip address TCP_MISS/600 639404 GET http://mp3-vr-128.smgradio.com/ username DIRECT/85.159.184.42 - 1205226002.900 26061 ip address TCP_MISS/600 582218 GET http://mp3-vr-128.smgradio.com/ username DIRECT/85.159.184.42 - new failing proxy: 1204908920.018 235 ip address TCP_MISS/200 7427 GET http://mp3-vr-128.smgradio.com/ username DIRECT/85.159.184.42 - 1204908920.178 224 ip address TCP_MISS/200 8719 GET http://mp3-vr-128.smgradio.com/ username DIRECT/85.159.184.42 - Is the TCP_MISS/600 vs TCP_MISS/200 significant? It's not really a show stopper, more of an annoyance, but I'm more worried that the problem might manifest itself elsewhere if I have something misconfigured. This is not my area of expertise, and I'm a bit stumped as to where to look next. Alan Quite probably it is that 600. HTTP does not define any status codes higher than 599, but that may not stop some servers creating their own to do weird stuff. Can you get a tcpdump/wireshark trace of the headers from the server to squid? That should give us some idea of what squid is trying to cope with.
Amos -- Please use Squid 2.6STABLE17+ or 3.0STABLE1+ There are serious security advisories out on all earlier releases.
Re: [squid-users] Using multiple auth scheme types in one squid instance?
Adrian wrote: On Mon, Mar 17, 2008 at 9:25 PM, Amos Jeffries [EMAIL PROTECTED] wrote: Squid does not differentiate the types of auth a user has done. It tries all methods it's configured with (in the order configured) until one succeeds. The common way to do this appears to be to use the least-accepting method first and fail over to the most-accepting, or vice versa depending on the situation. I want to put 'trusted' users through NTLM fakeauth so I can capture their usernames without bothering them with a popup auth box. For the 'untrusted' user subnets, I want to give them a popup box and make them authenticate. Since fakeauth will always pass, I can't just configure the schemes in succession. I was thinking of writing my own fakeauth code which rejected anything in my 'untrusted' IP list, forcing it to the next auth scheme, but I don't think the IP address is passed to the authentication scheme by squid to check against? Any other ideas? Not really. The kind of thing you are trying to do is not commonly spoken of around here, so we don't have any standard easy way of doing it. Amos -- Please use Squid 2.6STABLE17+ or 3.0STABLE1+ There are serious security advisories out on all earlier releases.
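One avenue, sketched here as an assumption rather than a known-good recipe: squid's basic scheme just hands "user password" lines to a helper on stdin and expects OK/ERR back, so a custom helper is easy to prototype. The credential table below is a placeholder for a real backend. Note this still doesn't see the client IP, which is exactly the limitation Adrian ran into for the NTLM side.

```python
#!/usr/bin/env python
"""Minimal squid basic-auth helper sketch: reads "user password" lines on
stdin and answers OK/ERR. PASSWORDS is a placeholder; a real helper would
consult a file or directory service."""
import sys
import urllib.parse  # newer squid releases URL-encode the credential fields

PASSWORDS = {"adrian": "secret"}  # hypothetical credential store

def check(line):
    parts = line.strip().split(None, 1)
    if len(parts) != 2:
        return "ERR"  # malformed input from squid
    user, pw = (urllib.parse.unquote(p) for p in parts)
    return "OK" if PASSWORDS.get(user) == pw else "ERR"

if __name__ == "__main__":
    for line in sys.stdin:
        print(check(line), flush=True)  # one reply per request line
```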
[squid-users] Support for NTLM web authentication on squid 3.0
Hi, did anyone try proxying of NTLM web authentication on squid 3.0: http://devel.squid-cache.org/ntlm/ Does it come with squid 3.0? If not, what is the roadmap for the support? Regards, John Mok
[squid-users] TCP_HIT and TCP_MISS
Hi everybody, I have been running Squid 2.6.STABLE12 for a few months, and when I look at my access.log I have no TCP_HIT, just TCP_MISS, so it seems to cache nothing. And if I look at cache.log I have almost nothing but entries like: httpReadReply: Excess data from or WARNING! Your cache is running out of filedescriptors 2008/03/17 07:23:54| WARNING: All url_rewriter processes are busy. 2008/03/17 07:23:54| WARNING: up to 27 pending requests queued My configuration is a transparent squid with a WCCP router to redirect without client configuration. So what did I do wrong? Guillaume Chartrand Technicien informatique Cégep régional de Lanaudière Centre administratif, Repentigny (450) 470-0911 poste 7218
Re: [squid-users] Vary object loop
On Mon, Mar 17, 2008 at 3:20 AM, Alex Rousskov [EMAIL PROTECTED] wrote: On Mon, 2008-03-17 at 06:25 +0900, Adrian Chadd wrote: On Fri, Mar 14, 2008, Alex Rousskov wrote: I think it actually is a bug in the Vary handling in Squid-3. The condition: if (!has_vary || !entry->mem_obj->vary_headers) { if (vary) { /* Oops... something odd is going on here.. */ .. needs to be looked at. But it is not the condition getting hit according to Aurimas' log, is it? There are two log messages with almost the same wording, which is confusing things. The messages are indeed poorly written, but they are different: "varyEvaluateMatch: Oops. Not a Vary object on second attempt" and "varyEvaluateMatch: Oops. Not a Vary match on second attempt". That is why I said that it was the other, arguably less suspicious, condition being hit here. Right, since it's the case with "Vary match on second attempt", for now I just moved it to debug level 2 with the help of Alex's patch. Thanks! BTW, let me know if I can help debugging this. // Aurimas
Re: [squid-users] Support for NTLM web authentication on squid 3.0
On Mon, Mar 17, 2008 at 4:19 PM, John Mok [EMAIL PROTECTED] wrote: Hi, Did anyone try Proxying of NTLM web authentication on squid 3.0 :- http://devel.squid-cache.org/ntlm/ Does it come with squid 3.0? If not, what is the any roadmap for the support? Hi John, NTLM authentication works out of the box with Squid 3, with some support provided by Samba. See http://wiki.squid-cache.org/ConfigExamples/WindowsAuthenticationNTLM -- /kinkie
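For completeness, the Samba-backed setup on that wiki page boils down to something like the following (paths vary by distribution, and the squid host must already be joined to the domain; treat the specifics as assumptions):

```
# ntlm_auth ships with Samba and does the actual DC conversation.
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5
acl authed proxy_auth REQUIRED
http_access allow authed
```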
[squid-users] Streaming Audio, Video Video on Demand
There is plenty of noise lately about video on demand being a huge bandwidth burden to cable broadband providers and others. Having worked for a wireless ISP I understand that quite well. End users always thought bandwidth was free for some reason. I was just thinking that almost all major ISPs use web caching to speed up popular websites and save bandwidth. Perhaps by adding some extensions to web caching much of the burden could be reduced. If, say, a large ISP in Detroit has 500+ users listening to the same audio or video stream at the same time, it could be buffered and delayed slightly to allow the ISP to pull only one stream off the Internet and serve all 500+ users. Keep-alives and such could be exchanged between the origin and clients to keep statistics and tracking up to date. Movies on demand could be done in encrypted chunks that are cached, rather than one big file. When all or enough chunks are downloaded, playback could begin. This would not save much traffic in the last mile or loop, but would reduce backbone traffic and perhaps upstream traffic, since p2p would not need to be used. Most cable and wireless last-mile loops are optimised for downstream rather than upstream anyway. There could be a fallback mechanism if a given cache did not support it as well. Disk space is cheap compared to bandwidth. I remember years ago a single T1 was $1300 a month. Then each wireless pop only had so much capacity. Things have improved since then, but not enormously. Just a thought. Matt
[squid-users] How Can I Stop Different Squid Purge Reponse Headers from 3.X than 2.6
Hello. It seems that perhaps with the introduction of Squid Ver 3.X the replies provided while performing a squid PURGE command are different. How can I make squid 3.X purge response replies the same as squid 2.X? As shown below, when performing a purge, the 3.2 server provides a different response than the 2.6 server. We use several scripts that are based around the CPAN squid::purge module. I'm not sure whether it or our scripts do not know how to handle this extra information, causing an error reply. For now, however, it would be easier if I could simply modify squid rather than having to put in a request for script changes here. Thanks! Nicole $host = 1 PURGE http://p0.domain.com/V21.jpg HTTP/1.0 Accept */* HTTP/1.0 200 OK Server: squid/3.0.STABLE2 Mime-Version: 1.0 Date: Mon, 17 Mar 2008 17:25:26 GMT Content-Length: 0 X-Cache: MISS from p0.domain.com Via: 1.0 p0.domain.com (squid/3.0.STABLE2) Connection: close $host = 2 PURGE http://p0.domain.com/V21.jpg HTTP/1.0 Accept */* HTTP/1.0 200 OK Server: squid/2.6.STABLE14 Date: Mon, 17 Mar 2008 17:25:26 GMT Content-Length: 0 -- [EMAIL PROTECTED] - Powered by FreeBSD - -- "The term daemons is a Judeo-Christian pejorative. Such processes will now be known as spiritual guides" - Politically Correct UNIX Page
Re: [squid-users] How Can I Stop Different Squid Purge Reponse Headers from 3.X than 2.6
Nicole wrote: Hello. It seems that perhaps with the introduction of Squid Ver 3.X the replies provided while performing a squid PURGE command are different. How can I make squid 3.X purge response replies the same as squid 2.X? As shown below, when performing a purge, the 3.2 server provides a different response than the 2.6 server. We use several scripts that are based around the CPAN squid::purge module. I'm not sure whether it or our scripts do not know how to handle this extra information, causing an error reply. For now, however, it would be easier if I could simply modify squid rather than having to put in a request for script changes here. Squid 3.2 is not even on the drawing boards yet. Did you mean 3.0.STABLE2? The problem you are seeing below looks to be that HTTP/1.1 support is further along in 3.x than in your 2.6; 3.x adds some headers for increased HTTP support in future. The Squid developers have no plans to decrease squid's standards compliance before the protocols are expired. The best thing for you is to fix your scripts to be able to cope with modern web traffic; the RFCs REQUIRE software to ignore headers it doesn't understand. The fast way is to build squid with HTTP violations enabled and turn off via in squid.conf: http://www.squid-cache.org/Versions/v3/3.0/cfgman/via.html PS: Be warned such action may cause other non-standard behaviour now or in future that affects your web browsing. $host = 1 PURGE http://p0.domain.com/V21.jpg HTTP/1.0 Accept */* HTTP/1.0 200 OK Server: squid/3.0.STABLE2 Mime-Version: 1.0 Date: Mon, 17 Mar 2008 17:25:26 GMT Content-Length: 0 X-Cache: MISS from p0.domain.com Via: 1.0 p0.domain.com (squid/3.0.STABLE2) Connection: close $host = 2 PURGE http://p0.domain.com/V21.jpg HTTP/1.0 Accept */* HTTP/1.0 200 OK Server: squid/2.6.STABLE14 Date: Mon, 17 Mar 2008 17:25:26 GMT Content-Length: 0 Amos -- Please use Squid 2.6STABLE17+ or 3.0STABLE1+ There are serious security advisories out on all earlier releases.
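Since clients are required to ignore headers they don't recognise, a purge script only really needs to look at the status line. A minimal sketch of a tolerant check, with sample responses abridged from the transcripts in this thread:

```python
def purge_status(raw_response):
    """Return the HTTP status code from a raw PURGE response, ignoring
    whatever extra headers the server adds (as the RFCs require)."""
    status_line = raw_response.split("\r\n", 1)[0]
    # e.g. "HTTP/1.0 200 OK" -> 200
    return int(status_line.split()[1])

# Abridged sample responses in both the 3.x and 2.6 styles:
SQUID3 = ("HTTP/1.0 200 OK\r\nServer: squid/3.0.STABLE2\r\n"
          "Mime-Version: 1.0\r\nContent-Length: 0\r\n"
          "X-Cache: MISS from p0.domain.com\r\nConnection: close\r\n\r\n")
SQUID2 = ("HTTP/1.0 200 OK\r\nServer: squid/2.6.STABLE14\r\n"
          "Content-Length: 0\r\n\r\n")

print(purge_status(SQUID3), purge_status(SQUID2))  # 200 200
```

Both styles parse to the same 200, so a script written this way doesn't care which squid answered.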
Re: [squid-users] squid 2.7 behaviour
I submitted this as a bug, at Henrik's suggestion; http://www.squid-cache.org/bugs/show_bug.cgi?id=2269 On 18/03/2008, at 12:09 AM, Pablo García wrote: Guys, here's the output of the access_log with header logging. Let me know If I can gather some more info. Regards, Pablo 1205758406.127 1324 172.16.254.4 TCP_REFRESH_MISS/200 9580 GET http://listados.deremate.cl/consolas-video-juegos-sony-playstation-3_46745/_srZpricedesc - FIRST_UP_PARENT/172.16.100.22 text/html [Host: listados.deremate.cl\r\nUser-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\r\nAccept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/ plain;q=0.8,image/png,*/*;q=0.5\r\nAccept-Language: en-us,en;q=0.5\r\nAccept-Encoding: gzip,deflate\r\nAccept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nKeep-Alive: 300\r\nConnection: keep-alive\r\nReferer: http://listados.deremate.cl/consolas-video-juegos-sony-playstation-3_46745/ \r\nCookie: __utma=256982686.1388267116.1203594335.1205344397.1205674701.10; __utmz=256982686.1203594335.1.1.utmgclid=CPKY76Ga1ZECFQkdPAodoCgLZg| utmccn=(not+set)|utmcmd=(not+set)|utmctr=deremate.cl; __utmb=256982686; __utmc=256982686; CASTGC=TGT-60112-VM7jHuI5Ra6i72JSxfiGUN50uG2c2lBuLJr-50\r\nX- Forwarded-For: 64.117.132.58\r\n] [HTTP/1.0 200 OK\r\nDate: Mon, 17 Mar 2008 12:53:25 GMT\r\nServer: Apache\r\nExpires: Mon, 17 Mar 2008 10:13:25 -0300\r\nCache-Control: public, max-age=1200\r\nContent-Language: es-CL\r\nX-Apache: search01\r\nVary: Accept-Encoding\r\nContent-Encoding: gzip\r\nContent-Type: text/html;charset=ISO-8859-1\r\nX-Cache: MISS from sq01.dc.dr\r\nVia: 1.0 sq01.dc.dr:80 (squid/2.7.DEVEL0-20080313)\r\nConnection: close\r\n\r] 2008/03/16 11:39:16| ctx: exit level 0 2008/03/16 11:39:16| ctx: enter level 0: 'http://listados.deremate.cl/consolas-video-juegos-sony-playstation-3_46745/_srZpricedesc' 2008/03/16 11:39:16| storeSetPublicKey: unable to determine vary_id for 
'http://listados.deremate.cl/consolas-video-juegos-sony-playstation-3_46745/_srZprice desc' 2008/03/16 11:39:16| ctx: exit level 0 On Sat, Mar 15, 2008 at 10:20 PM, J. Peng [EMAIL PROTECTED] wrote: On Sun, Mar 16, 2008 at 3:22 AM, Pablo García [EMAIL PROTECTED] wrote: Mark, I can provide network captures for this, I'm using mod_deflate to compress the responses. I'm also using squid-2.7 for apache's mod_deflate. It can work, but I also get lots of the same warnings in cache.log. -- Mark Nottingham [EMAIL PROTECTED]
Re: [squid-users] TCP_HIT and TCP_MISS
On Mon, 17 Mar 2008, Guillaume Chartrand wrote: Hi everybody, I run Squid2.6Stable12 for few months ago and when I look to my access.log I have no TCP_HIT, I just have TCP_MISS, so it's seem to cache nothing. And if I look to the cache.log I have almost just entry like : httpReadReply: Excess data from or WARNING! Your cache is running out of filedescriptors 2008/03/17 07:23:54| WARNING: All url_rewriter processes are busy. 2008/03/17 07:23:54| WARNING: up to 27 pending requests queued Some information from squid -v and squid.conf please.. My configuration is a transparent squid with WCCP router to redirect without client configuration. So what I did wrong Guillaume Chartrand Technicien informatique Cégep régional de Lanaudière Centre administratif, Repentigny (450) 470-0911 poste 7218 --