[squid-users] Impossible keep-alive header
Hi, what do "Impossible keep-alive header" errors in cache.log mean? Regards
Re: [squid-users] WCCP, Cisco ASA and asymmetric path
Thank you Amos, I will try a topology where the return path doesn't use the ASA.

2012/7/10 Amos Jeffries squ...@treenet.co.nz: On 10.07.2012 00:44, Abdessamad BARAKAT wrote:

In fact the wiki (http://wiki.squid-cache.org/ConfigExamples/Intercept/CiscoAsaWccp2) carries this very important passage from the Cisco manual: "The only topology that the security appliance supports is when client and cache engine are behind the same interface of the security appliance and the cache engine can directly communicate with the client without going through the security appliance."

Then you have very clear documentation from the appliance manufacturer that they do not support your desired configuration.

And I can see the reply was dropped by the ASA. I think that when the ASA makes the WCCP redirect it doesn't record a new connection, so when it sees the reply from the proxy to the client, the SYN is dropped:

Jul 9 14:11:26 192.168.35.250 %ASA-6-106015: Deny TCP (no connection) from Website IP to proxy IP flags SYN ACK on interface PROXY LAN

So does anyone know a workaround for this issue, for when the client and the proxy are not behind the same interface of the ASA firewall?

It does not matter to Squid, or even to routing logic, but apparently the device itself has undefined behaviour when that is done. As I understand it, this may be due to the way the device handles reverse-path (RP) filtering, or it may be hard-wired. All I can say now is good luck figuring out which, and whether you can change the device. It has nothing to do with Squid. Amos
[squid-users] RE: SSLBUMP Issue with SSL websites
Dears, can anyone help me with the error mentioned below?

From: Muhammad Shehata Sent: Tuesday, July 10, 2012 8:55 AM To: squid-users@squid-cache.org Cc: squ...@treenet.co.nz Subject: SSLBUMP Issue with SSL websites

Dears, hope you all are doing well. I was following the replies on the squid-users mailing list about sslbump issues where some websites, such as https://gmail.com and https://facebook.com, render inline without images or CSS style sheets. I have the same issue on squid 3.1.19. I know that when sslbump is enabled it intercepts the CONNECT method and modifies it to a GET; that is why I used a broken-sites ACL to exclude those sites. For the excluded websites I do see the method logged as CONNECT rather than GET as for all the other bumped sites, but the result is still the same:

1341837646.893 45801 x.x.x.x TCP_MISS/200 62017 CONNECT twitter.com:443 - DIRECT/199.59.150.7

acl broken_sites dstdomain .twitter.com
acl broken_sites dstdomain .facebook.com
ssl_bump deny broken_sites
ssl_bump allow all
http_port 192.168.0.1:3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=40MB cert=/etc/pki/tls/certs/sslintercept.crt key=/etc/pki/tls/certs/sslintercept.key
[squid-users] Squid 3.2.0.18 vs 2.7 - apostrophe in access.log - who must escape the client or Squid ?
Dear squid users. When using a client like FF/IE against Squid v3.2 and v2.7 with the default logformat, Squid always escapes apostrophes in the URL as %27 in access.log. When using a client like LINKS against Squid v3.2 and v2.7 with the default logformat, Squid only escapes apostrophes in the URL as %27 with v2.7, not with v3.2. I assume clients like FF/IE escape the URL before passing it to Squid? Is this a possible bug in Squid v3.2, since Squid 2.7 does the proper escaping before logging to access.log? The reason I'm asking is that I have a daemon as logger, and since Squid v3.2 I now need to escape the URL value in the daemon. Who must escape the URL, the client or Squid? Regards Bartel Viljoen e-mail : bar...@ncc.co.za phone : 086 155 5444 fax : 051 448 1214 web: www.ncc.co.za
Re: [squid-users] squid_session problem
I have no idea why, but all of a sudden, after reinstalling and reconfiguring squid 3.2.0.18 from scratch, my Ubuntu Server 12.04 64-bit VM started working. I really did nothing special to get it to run - just the same as I've been doing. As soon as this happened, I quickly followed the same steps on my real Ubuntu Server 12.04, but there it once again refused to work. The squid.conf files on the working VM and my production Ubuntu Server aren't just similar - they have the same md5 checksum. They are exactly the same.

I tried the -d flag, but it had no effect on the level of detail found in cache.log. This is probably to be expected, considering I just spotted this line within the same log file: WARNING: Cannot run '/usr/local/squid/libexec/ext_session_acl' process. Here's the section around that line:

2012/07/12 09:02:20 kid1| Starting Squid Cache version 3.2.0.18 for x86_64-unknown-linux-gnu...
2012/07/12 09:02:20 kid1| Process ID 8871
2012/07/12 09:02:20 kid1| Process Roles: worker
2012/07/12 09:02:20 kid1| With 1024 file descriptors available
2012/07/12 09:02:20 kid1| Initializing IP Cache...
2012/07/12 09:02:20 kid1| DNS Socket created at [::], FD 8
2012/07/12 09:02:20 kid1| DNS Socket created at 0.0.0.0, FD 9
2012/07/12 09:02:20 kid1| Adding nameserver 205.233.109.40 from /etc/resolv.conf
2012/07/12 09:02:20 kid1| Adding nameserver 8.8.8.8 from /etc/resolv.conf
2012/07/12 09:02:20 kid1| helperOpenServers: Starting 1/1 'ext_session_acl' processes
2012/07/12 09:02:20 kid1| commBind: Cannot bind socket FD 10 to [::1]: (99) Cannot assign requested address
2012/07/12 09:02:20 kid1| commBind: Cannot bind socket FD 11 to [::1]: (99) Cannot assign requested address
2012/07/12 09:02:20 kid1| ipcCreate: Failed to create child FD.
2012/07/12 09:02:20 kid1| WARNING: Cannot run '/usr/local/squid/libexec/ext_session_acl' process.
2012/07/12 09:02:20 kid1| Logfile: opening log daemon:/usr/local/squid/var/logs/access.log
2012/07/12 09:02:20 kid1| Logfile Daemon: opening log /usr/local/squid/var/logs/access.log
2012/07/12 09:02:20 kid1| Store logging disabled
2012/07/12 09:02:20 kid1| Swap maxSize 0 + 262144 KB, estimated 20164 objects

I tried changing the ownership of the ext_session_acl file to proxy:proxy, and even set its permissions to 777, but neither helped. It doesn't appear to be a permissions issue. Is there anything obviously wrong here that might indicate why ext_session_acl won't run? Like maybe the line that reads "ipcCreate: Failed to create child FD."? Or is that normal? Tal

On Wed, Jul 11, 2012 at 7:12 PM, Amos Jeffries squ...@treenet.co.nz wrote: On 12.07.2012 12:37, Jack Black wrote: I just tried the same squid configuration on Ubuntu Mini remix 11.04 x64, and Ubuntu Desktop 12.04 x64 live (I didn't even bother installing it). They both work perfectly, just like CentOS. Whatever the problem is, it appears to be specific to Ubuntu 12.04 Server edition for some reason, which just happens to be the exact OS my production server (the one that needs to run squid) is running. There's got to be something that I'm missing...

Since you have the helper from squid-3.2 built, stick with that one. It has a few important bug fixes and the -d flag operating. If you add -d to the helper command-line parameters in squid.conf, it should record in cache.log what is going on with each session lookup. I'm kind of suspicious that the helper is having problems in the background that are not showing up. Also double-check that the squid.conf files are completely identical between the machines. Security in Squid is default-closed, so the expected behaviour with a failing helper would be constant TCP_DENIED to the client on Ubuntu, not access to the Internet. Amos
[squid-users] Using ACLs with ICAP/SquidClamAV
This is my first posting. Please be gentle! I've run Squid in many arrangements but only recently have I been using the ICAP client to invoke SquidClamAV. I've browsed the wiki and searched on Google, but I can't seem to figure out how I might use ACLs to control when a request gets passed to the ICAP server. We have a Windows server that wants to download an update file from windowsupdate.com. That file triggers the known ClamAV false positive W32.Virut.Gen.D-159. I'd like to write an ACL so that objects requested from this machine's IP address are not passed to the ICAP server but sent directly to the requesting machine. I've written lots of ACLs in the past to exempt hosts, URL regexes, and the like, but I can't seem to figure out how to do this with an ICAP request. I've looked at the documentation for configuration file directives like adaptation_access, icap_service, and the like, but I can't seem to find anything that tells me how to use ACLs with those. Can anyone point me to some documentation I might read, or suggest some methods to use ACLs with ICAP? Thanks! Peter
[squid-users] i'm having a little performance trouble with squid + ICAP server.
I am using squid 3.1.19 for testing, and in the next couple of days will use squid 3.2 because it has a couple of new ICAP options. I have added some caching to my ICAP server, and stress tests show an efficiency of about 4000 requests per second. To support the stress testing I raised the Linux FD limit to 65535 for both squid and my server. The ICAP server handles connections to both MySQL and Redis; the Redis connection is persistent, the MySQL one is not, and MySQL is queried only if Redis doesn't have the needed data. So when I'm testing squid using Apache Benchmark, after about 2000 finished connections squid stops passing ICAP requests, as if the ICAP service had failed, until the recovery time (30 secs). I have set bypass=on for the ICAP service to let the stress test continue, so squid continues to serve the requests but stops ICAP queries. Any direction on what to look for? Thanks, Eliezer -- Eliezer Croitoru https://www1.ngtech.co.il IT consulting for Nonprofit organizations eliezer at ngtech.co.il
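A squid.conf fragment matching this description might look like the following sketch; the service name, host/port, and the max-conn note are assumptions for illustration, not values taken from the thread:

```
# ICAP REQMOD service; bypass=1 lets traffic pass when the service is
# judged down, matching the "squid continues to serve" behaviour above.
icap_enable on
icap_service service_req reqmod_precache bypass=1 icap://127.0.0.1:1344/reqmod
adaptation_access service_req allow all

# squid 3.2 adds per-service tuning, e.g. a connection cap:
#   icap_service service_req reqmod_precache bypass=1 max-conn=500 icap://127.0.0.1:1344/reqmod
```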
Re: [squid-users] Using ACLs with ICAP/SquidClamAV
On 7/12/2012 11:17 PM, Peter H. Lemieux wrote: [...] Can anyone point me to some documentation I might read, or suggest some methods to use ACLs with ICAP? Thanks! Peter

Use the logic of ACLs:

##start
# instead of 192.168.0.1 use the machine's IP
acl my_machine src 192.168.0.1
icap_service service_av reqmod_precache bypass=0 icap://clamavserver:1344/reqmod
adaptation_access service_av deny my_machine
adaptation_access service_av allow all
##end

That is all. Best Regards, Eliezer -- Eliezer Croitoru https://www1.ngtech.co.il IT consulting for Nonprofit organizations eliezer at ngtech.co.il
Re: [squid-users] squid_session problem
Eureka! It appears that despite ipv6 being disabled on the Ubuntu Server OS itself, it was still interfering with the helper somehow. All I had to do was change the line in squid.conf that starts with external_acl_type to read:

external_acl_type session ipv4 ...

and all of a sudden cache.log stopped displaying errors and everything started to work. It redirects properly now. Thank you Amos for your time and advice. If we ever meet in person, remind me that I owe you a beer :) Tal

On Thu, Jul 12, 2012 at 9:40 AM, Jack Black secretagent...@gmail.com wrote: [...]
[squid-users] squid 3.2.0.16+ not honoring hierarchy proxy settings on intercept and tproxy mode
I have filed a bug: http://bugs.squid-cache.org/show_bug.cgi?id=3589 and attached the draft to the bug a month ago. Working with a hierarchy of proxies on squid 3.1.19 was fine, but on 3.2.0.16+ I'm having some problems. Compilation options:

www1 ~ # /opt/squid3119/sbin/squid -v
Squid Cache: Version 3.1.19
configure options: '--prefix=/opt/squid3119' '--disable-maintainer-mode' '--disable-dependency-tracking' '--disable-silent-rules' '--enable-inline' '--enable-async-io=8' '--enable-storeio=ufs,aufs' '--enable-removal-policies=lru,heap' '--enable-delay-pools' '--enable-cache-digests' '--enable-underscores' '--enable-icap-client' '--enable-follow-x-forwarded-for' '--enable-digest-auth-helpers=ldap,password' '--enable-arp-acl' '--enable-esi' '--disable-translation' '--with-logdir=/opt/squid3119/var/log' '--with-pidfile=/var/run/squid3119.pid' '--with-filedescriptors=65536' '--with-large-files' '--with-default-user=proxy' '--enable-linux-netfilter' '--enable-ltdl-convenience' '--enable-snmp' --with-squid=/opt/src/squid-3.1.19

www1 ~ # /opt/squid3217/sbin/squid -v
Squid Cache: Version 3.2.0.17
configure options: '--prefix=/opt/squid3217' '--with-default-user=proxy' '--enable-linux-netfilter' '--with-filedescriptors=65536' '--enable-underscores' '--enable-storeio=ufs,aufs' '--enable-delay-pools' '--enable-esi' '--enable-icap-client' '--enable-ssl' '--enable-forw-via-db' '--enable-cache-digests' '--enable-follow-x-forwarded-for' '--enable-ssl-crtd' '--enable-auth' '--disable-translation' '--disable-auto-locale' '--with-large-files'

Thanks, Eliezer -- Eliezer Croitoru https://www1.ngtech.co.il IT consulting for Nonprofit organizations eliezer at ngtech.co.il
Re: [squid-users] Impossible keep-alive header
On 12/07/2012 7:38 p.m., a bv wrote: Impossible keep-alive header

http://www.squid-cache.org/Doc/config/detect_broken_pconn/

The server has sent a header telling Squid to keep the connection alive, but has then sent a response of unknown length which requires a connection close. Amos
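The directive behind that documentation link can be enabled with a one-line squid.conf sketch:

```
# Log (and avoid reusing) server connections that advertise keep-alive
# but then send an unknown-length reply requiring a connection close.
detect_broken_pconn on
```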
Re: [squid-users] Squid 3.2.0.18 vs 2.7 - apostrophe in access.log - who must escape the client or Squid ?
On 12/07/2012 8:53 p.m., Bartel Viljoen wrote: Dear squid users. When using a client like FF/IE against Squid v3.2 and v2.7 with the default logformat, Squid always escapes apostrophes in the URL as %27 in access.log. When using a client like LINKS against Squid v3.2 and v2.7 with the default logformat, Squid only escapes apostrophes in the URL as %27 with v2.7, not with v3.2. I assume clients like FF/IE escape the URL before passing it to Squid?

Yes.

Is this a possible bug in Squid v3.2, since Squid 2.7 does the proper escaping before logging to access.log? The reason I'm asking is that I have a daemon as logger and now need to escape the URL value in the daemon since Squid v3.2. Who must escape the URL, the client or Squid?

Depends on which RFC they are following. RFC 1738 states the apostrophe (%27) is not a reserved character and can be ignored completely: "Thus, only alphanumerics, the special characters $-_.+!*'(),, and reserved characters used for their reserved purposes may be used unencoded within a URL." RFC 3986 names the apostrophe a reserved sub-delim: "characters in the reserved set are protected from normalization and are therefore safe to be used by scheme-specific and producer-specific algorithms for delimiting data subcomponents within a URI" and "URI producing applications should percent-encode data octets that correspond to characters in the reserved set unless these characters are specifically allowed by the URI scheme to represent data in that component." In regard to this character Squid complies with both. Amos
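As a quick illustration of what an RFC 3986-style producer does with this character (Ruby here purely for demonstration; browsers implement their own encoders):

```ruby
# CGI.escape percent-encodes every byte outside its small safe set,
# which includes the apostrophe -- hence the %27 seen in access.log.
require 'cgi'

puts CGI.escape("'")       # prints "%27"
puts CGI.escape("O'Brien") # prints "O%27Brien"
```

A client following this rule hands Squid an already-encoded URL, which is why FF/IE requests log identically on both Squid versions.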
Re: [squid-users] squid_session problem
On 13/07/2012 10:27 a.m., Jack Black wrote: Eureka! It appears that despite ipv6 being disabled on the Ubuntu Server OS itself, it was still interfering with the helper somehow. All I had to do was change the line in squid.conf that starts with external_acl_type to read: external_acl_type session ipv4 ... Ahh. How *exactly* was IPv6 disabled in Ubuntu? Squid is supposed to auto-detect missing IPv6 socket support. Thank you for identifying this. I've added some improvements around that log line to show what Squid is doing. Amos
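For reference, the two usual ways IPv6 gets disabled on Ubuntu of that era behave differently, which may explain the failed auto-detection; both fragments below are illustrative assumptions, not details taken from the thread:

```
# /etc/sysctl.conf: addresses are disabled but the kernel still accepts
# AF_INET6 sockets, so socket() succeeds and bind([::1]) fails later --
# matching the "(99) Cannot assign requested address" lines in the log.
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1

# /etc/default/grub: removes IPv6 support entirely, so socket() itself
# fails and IPv4-only auto-detection can work cleanly.
#   GRUB_CMDLINE_LINUX="ipv6.disable=1"
```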
Re: [squid-users] i'm having a little performance trouble with squid + ICAP server.
Sorry, I am offering no help, but I am interested to know how you set up a stress-test environment. I suppose it's an automated, script-based stress test?
Re: [squid-users] i'm having a little performance trouble with squid + ICAP server.
On 7/13/2012 4:16 AM, Ming-Ching Tiew wrote: Sorry, I am offering no help, but I am interested to know how you set up a stress-test environment. I suppose it's an automated, script-based stress test? Rgds.

Well, it's pretty simple. My setup is like this:

gw\dns\dhcp\cache\icap = server (Intel Atom D510, 2GB RAM, 500GB SATA HD)
windows 7 = client (Core i3, 4GB RAM)
linux = client (Intel Atom D410, 2GB RAM, 160GB)

The network is 1Gbit; the WAN is 5Mbit. I have a VM on the Core i3 with nginx that serves static pages. To test the ICAP server I wrote a Ruby script and changed the Linux system's ulimit to 65535. The test sends a specific ICAP request that involves a filtering query, reads at least one line back, and then closes the connection, because if I got any of the data, the processing by the ICAP server was done. I measure the timestamp before I start the connection and after, calculate the time between them, and report the time only if it's more than 0.1 secs; less than that is more than sufficient. I wrote two scripts, one with Ruby forks and the other with threads. I looped over sets of 1000 to 4000 requests for between 30 and 60 secs. The load then builds up, and connection tracking shows 25000+ connections in TIME_WAIT. So the open-connections limit was about 4000, with about 25000+ in TIME_WAIT (my TIME_WAIT is 15 secs). I ran those tests for hours and it worked great. This is direct ICAP access. Then I tested from the Linux box with Apache Benchmark through the squid proxy with the -X option - not intercepted but forward proxy - and it seems that after about 1000 requests squid won't do ICAP queries (I have a live log on stdout from the ICAP server). If you want some more data I will be happy to give you some. Eliezer -- Eliezer Croitoru https://www1.ngtech.co.il IT consulting for Nonprofit organizations eliezer at ngtech.co.il
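The threaded timing loop described above can be sketched as follows; this is not Eliezer's actual script, and the probe body is a placeholder (the real one opens a TCP connection to the ICAP server, sends the filtering request, and reads one reply line):

```ruby
# Minimal sketch of the threaded ICAP latency test described above.
require 'socket'

SLOW_THRESHOLD = 0.1 # seconds; anything faster is "more than sufficient"

# Run one timed probe and return the elapsed wall-clock seconds.
def timed_probe
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  yield
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
end

# Fire `count` concurrent probes and collect only the slow timings.
def run_batch(count, threshold = SLOW_THRESHOLD)
  slow = []
  lock = Mutex.new
  threads = Array.new(count) do
    Thread.new do
      elapsed = timed_probe do
        # Real probe (placeholder host/port):
        #   sock = TCPSocket.new('icap-server.example', 1344)
        #   sock.write(icap_request)
        #   sock.gets   # one reply line means processing finished
        #   sock.close
      end
      lock.synchronize { slow << elapsed } if elapsed > threshold
    end
  end
  threads.each(&:join)
  slow
end
```

Looping `run_batch` in sets of 1000 to 4000 for 30-60 seconds reproduces the connection build-up (and the TIME_WAIT pile-up) the post describes.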
[squid-users] Posted in the wiki my nice caching method using a coordinator\ICAP
For those who don't want to mess with ICAP, I wrote a Ruby coordinator for the url_rewrite interface. The only problem is that the logs will show one thing while the URL that actually gets into the cache is the rewritten one. This can be verified using an ICP\HTCP client; I wrote an ICP client to verify cache object status from the command line, which can be found at: http://www1.ngtech.co.il/icp_client.rb.txt

The draft of the article is at: http://wiki.squid-cache.org/ConfigExamples/DynamicContent/Coordinator

Contents:
Caching Dynamic Content using a Coordinator
Problem Outline
What is Dynamic Content
File De-Duplication\Duplication
Marks of dynamic content in URL?
CGI-BIN
HTTP and caching
HTTP headers
HTTP 206\partial content
Dynamic-Content|Bandwidth Consumers
Specific Cache Cases analysis
Microsoft Updates Caching
Youtube video\img
CDN\DNS load balancing
Facebook
Caching Dynamic Content|De-duplicated content
Old methods
Store URL Rewrite
Web-server and URL Rewrite
NGINX as a Cache Peer
Summary of the ICAP solution
Implementing ICAP solution
Alternative To ICAP server Using url_rewrite

Best Regards, Eliezer -- Eliezer Croitoru https://www1.ngtech.co.il IT consulting for Nonprofit organizations eliezer at ngtech.co.il
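A url_rewrite helper in the spirit of the coordinator above might be sketched like this; the CDN hostname pattern and the dedup target name are illustrative assumptions, not the wiki's actual rules, and the protocol shown is the squid 3.1-era one (reply with the new URL, or an empty line for "no change"):

```ruby
#!/usr/bin/env ruby
# Sketch of a url_rewrite_program helper that de-duplicates content
# served from many edge hostnames of a hypothetical CDN.
STDOUT.sync = true # squid reads replies line-by-line; don't buffer

# Map every edge-node hostname onto one canonical name, so duplicated
# objects share a single cache key (this is why access.log shows the
# original URL while the cache stores the rewritten one).
def rewrite(url)
  url.sub(%r{\Ahttp://[a-z0-9.-]+\.cdn\.example\.com/},
          'http://dedup.cdn.example.com.squid.internal/')
end

# Helper loop: the first whitespace-separated token of each request
# line is the URL. Squid invokes this file and run_helper($stdin)
# drives the loop for the life of the helper process.
def run_helper(input = $stdin, output = $stdout)
  while (line = input.gets)
    url = line.split(' ').first
    new_url = rewrite(url)
    output.puts(new_url == url ? '' : new_url)
  end
end
```

It would be wired up with a url_rewrite_program line in squid.conf, and the rewritten keys can then be checked with the ICP client linked above.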