Re: [squid-users] [SPAM] [ext] Squid 5.1 memory usage
On 12/10/2021 09:34, Ralf Hildebrandt wrote:
> Quite sure, since I've been testing Squid-5-HEAD before it became 5.2. But to be sure, I'm deploying it right now.
>
> Yep, squid-5.2 is also leaking. :(

I'm now reasonably sure that mine is a recurrence of:
https://bugs.squid-cache.org/show_bug.cgi?id=4526
...which I had thought to have gone away in Squid 5.1. I will apply the patch next week and see if the problem goes away again.

--
 - Steve Hill
   Technical Director | Cyfarwyddwr Technegol
   Opendium
   Online Safety & Web Filtering | Diogelwch Ar-Lein a Hidlo Gwefan
   http://www.opendium.com

 Enquiries | Ymholiadau: sa...@opendium.com   +44-1792-824568
 Support | Cefnogi:      supp...@opendium.com +44-1792-825748

 Opendium Limited is a company registered in England and Wales.
 Mae Opendium Limited yn gwmni sydd wedi'i gofrestru yn Lloegr a Chymru.
 Company No. | Rhif Cwmni: 5465437
 Highfield House, 1 Brue Close, Bruton, Somerset, BA10 0HY, England.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
Re: [squid-users] [SPAM] [ext] Squid 5.1 memory usage
On 08/10/2021 10:24, Ralf Hildebrandt wrote:
>> I'm seeing high memory usage on Squid 5.1. Caching is disabled, so I'd expect memory usage to be fairly low (and it was under Squid 3.5), but some workers are growing pretty large. I'm using ICAP and SSL bump.
> https://bugs.squid-cache.org/show_bug.cgi?id=5132 is somewhat related

I'm not sure if it's the same thing. In that bug, Alex said it looked like Squid wasn't maintaining counters for the leaked memory, whereas in my case the "Total" row in mgr:mem tracks the memory usage reported by top reasonably closely, so it looks like it should be accounted for. There are similarities, though: lots of memory going to HttpHeaderEntry and Short Strings in both cases.
Re: [squid-users] Squid 5.1 memory usage
On 08/10/2021 15:50, Alex Rousskov wrote:
>> Is there a way to list all of the Comm::Connection objects?
> The exact answer is "no", but you can use mgr:filedescriptors as an approximation.

I've had to restart this process now (but I'm sure the problem will be back next week). I did use netstat on it though, and the number of established TCP connections was 1090 - that is obviously made up of client->proxy, proxy->origin and proxy->ICAP connections. My gut feeling was that it wasn't enough connections to account for 200-odd MB of Comm::Connection objects.
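As a back-of-envelope check on the figures above (a sketch added for illustration, not part of the original message): if those ~1090 established connections accounted for the whole 217 MB Comm::Connection pool, each connection object would have to weigh roughly 200 KB, which does look implausibly large:

```python
# Sanity check: how big would each Comm::Connection have to be for
# 1090 established connections to explain a 217 MB pool?
connections = 1090          # from netstat
pool_mb = 217               # Comm::Connection pool size from mgr:mem
per_conn_kb = pool_mb * 1024 / connections
print(f"{per_conn_kb:.0f} KB per connection")
```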
[squid-users] Squid 5.1 memory usage
I'm seeing high memory usage on Squid 5.1. Caching is disabled, so I'd expect memory usage to be fairly low (and it was under Squid 3.5), but some workers are growing pretty large. I'm using ICAP and SSL bump.

I've got a worker using 5 GB which I've collected memory stats from - the things which stand out are:

 - Long Strings: 220 MB
 - Short Strings: 2.1 GB
 - Comm::Connection: 217 MB
 - HttpHeaderEntry: 777 MB
 - MemBlob: 773 MB
 - Entry: 226 MB

What's the best way of debugging this? Is there a way to list all of the Comm::Connection objects?

Thanks.
[squid-users] High memory usage associated with ssl_bump and broken clients
I've identified a problem with Squid 3.5.26 using a lot of memory when some broken clients are on the network. Strictly speaking this isn't really Squid's fault, but it is a denial-of-service mechanism, so I wonder if Squid can help mitigate it.

The situation is this: Squid is set up as a transparent proxy performing SSL bumping. A client makes an HTTPS connection, which Squid intercepts. The client sends a TLS client handshake and Squid responds with a handshake and the bumped certificate. The client doesn't like the bumped certificate, but rather than cleanly aborting the TLS session and then sending a TCP FIN, it just tears down the connection with a TCP RST packet. Ordinarily, Squid's side of the connection would be torn down in response to the RST, so there would be no problem. But unfortunately, under high network loads the RST packet sometimes gets dropped, and as far as Squid is concerned the connection never gets closed. The busted clients I'm seeing the most problems with retry the connection immediately rather than waiting for a retry timer.

Problems:

1. A connection that hasn't completed the TLS handshake doesn't appear to ever time out (in this case, the server handshake and certificate exchange have completed, but the key exchange never starts).

2. If the client sends an RST and the RST is lost, the client won't send another RST until Squid sends some data to it on the aborted connection. In this case, Squid is waiting for data from the client, which will never come, and will not send any new data to the client. Squid will never know that the client aborted the connection.

3. There is a lot of memory associated with each connection - my tests suggest around 1 MB. In normal operation these kinds of dead connections can gradually stack up, leading to a slow but significant memory "leak"; when a really badly behaved client is on the network it can open tens of thousands of connections per minute and the memory consumption brings down the server.

4. We can expect similar problems with devices on flaky network connections, even when the clients are well behaved.

My thoughts: connections should have a reasonably short timeout during the TLS handshake - if a client hasn't completed the handshake and made an HTTP request over the encrypted connection within a few seconds, something is broken and Squid should tear down the connection. These connections certainly shouldn't be able to persist forever with neither side sending any data.

Testing: I wrote a Python script that makes 1000 concurrent connections as quickly as it can and sends a TLS client handshake over each. Once all of the connections are open, it waits for responses from Squid (which would contain the server handshake and certificate) and quits, tearing down all of the connections with an RST. It seems that the RST packets for around 300 of those connections were dropped - this sounds surprising, but since all 1000 connections were aborted simultaneously, there would be a flood of RST packets and it's probably reasonable to expect a significant number to be dropped. The end result was that netstat showed Squid still had about 300 established connections, which would never go away.
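The abort behaviour described above is easy to reproduce from Python: setting SO_LINGER with a zero linger time makes close() send an RST instead of a FIN. Here is a minimal local sketch of the technique (not the original 1000-connection test script; the loopback listener just stands in for Squid's side):

```python
import socket
import struct
import time

def rst_close(sock):
    """Abort the connection with a TCP RST rather than a FIN:
    SO_LINGER with a zero linger time makes close() reset the
    connection, mimicking the broken clients described above."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    sock.close()

# Loopback stand-in for Squid's listening side:
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()

rst_close(cli)
time.sleep(0.2)             # let the RST arrive

try:
    conn.recv(1)
    result = "fin"          # orderly close: recv returns b""
except ConnectionResetError:
    result = "rst"          # the abort was seen as a reset
print(result)
```

In the real test the client sent a TLS ClientHello on each connection first; the point here is only the RST-on-close mechanism, and that when such an RST is dropped in transit the passive side has no way to notice the abort.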
[squid-users] More host header forgery pain with peek/splice
This one just seems to keep coming up, and I'm wondering how other people are dealing with it.

When you peek and splice a transparently proxied connection, the SNI goes through the host validation phase. Squid does a DNS lookup for the SNI, and if it doesn't resolve to the IP address that the client is connecting to, Squid drops the connection. When accessing one of the increasingly common websites that use DNS load balancing, since the DNS results change on each lookup, Squid and the client may not get the same DNS results, so Squid drops perfectly good connections.

Most of this problem goes away if you ensure all the clients use the same DNS server as Squid, but not quite. Because the TTL on DNS records only has a resolution of 1 second, there is a period of up to 1 second when the DNS records Squid knows about don't match the ones that the client knows about - the client and Squid may expire the records up to 1 second apart.

So what's the solution? (Notably, the validation check can't be disabled without hacking the code.)
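The validation being described can be approximated like this (a sketch of the principle only; `sni_matches_dest` is a hypothetical name, and Squid's real check uses its own resolver and cached TTLs, which is exactly where the race above comes from):

```python
import socket

def sni_matches_dest(sni_host, client_dest_ip):
    # Does the name the client presented in its SNI resolve to the
    # address the client actually connected to?  With DNS load
    # balancing, the lookup done here may return a different answer
    # than the client's earlier lookup did - and the connection is
    # then dropped even though it is perfectly legitimate.
    try:
        infos = socket.getaddrinfo(sni_host, 443,
                                   proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        return False
    return client_dest_ip in {info[4][0] for info in infos}
```

For example, `sni_matches_dest("localhost", "127.0.0.1")` passes, while the same SNI paired with an unrelated destination address fails, which is the situation that gets good connections dropped.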
Re: [squid-users] Rock store status
On 19/08/16 08:45, FredB wrote:
> Please can you describe your load and configuration?

We supply Squid-based online safety systems to schools across the UK, utilising Rock store for caching, and peek/splice, external ACLs and ICAP for access control/filtering/auditing. Typically I think our biggest schools probably top out at around 400,000 requests/hour, but I don't have any hard data to hand to back that up at the moment.

The only serious Squid issue we've been tracking recently is the memory leak associated with spliced connections, which we've now fixed (and submitted patches). That said, with the schools currently on holiday those fixes haven't yet been well tested on real-world servers - we'll find out if there are any issues with them when term starts again :)
Re: [squid-users] Rock store status
On 17/08/16 11:50, FredB wrote:
> I tried rock store and SMP a long time ago (squid 3.2, I guess). Unfortunately I dropped SMP because there are some limitations (in my case), and I fell back to diskd because there were many bugs with rock store. FYI, I also switched to aufs without big differences. But now, with the latest 3.5.20? Sadly SMP is still not for me, but rock store? Is there someone who is using rock store with a high load, more than 800 r/s, without any problem? Is there a real difference in this situation: CPU, speed, memory?

We use SMP and Rock under the 3.5 series without problems. But I don't think any of our sites have as high a req/sec load as yours.
Re: [squid-users] Checking SSL bump status in http_access
On 17/08/16 00:12, Amos Jeffries wrote:
>> Is there a way of figuring out if the current request is a bumped request when the http_access ACL is being checked? i.e. can we tell the difference between a GET request that is inside a bumped tunnel, and an unencrypted GET request?
> In Squid-3 a combo of the myportname and proto ACLs should do that.

I think when using a non-transparent proxy you can't tell the difference between:

1. HTTPS requests inside a bumped CONNECT tunnel, and
2. unencrypted "GET https://example.com/ HTTP/1.1" requests made directly to the proxy.
Re: [squid-users] Checking SSL bump status in http_access
On 17/08/16 17:18, Alex Rousskov wrote:
> This configuration problem should be at least partially addressed by the upcoming annotate_transaction ACLs inserted into ssl_bump rules:
> http://lists.squid-cache.org/pipermail/squid-dev/2016-July/006146.html

That looks good. When implementing this, beware the note in comment 3 of bug 4340:
http://bugs.squid-cache.org/show_bug.cgi?id=4340#c3

"for transparent connections, the NotePairs instance used during the step-1 ssl_bump ACL is not the same as the instance used during the http_access ACL, but for non-transparent connections they are the same instance. The upshot is that any notes set by an external ACL when processing the ssl_bump ACL during step 1 are discarded when handling transparent connections."

It would greatly reduce the functionality of your proposed ACLs if the annotations were sometimes discarded part way through a connection or request.

Something I've been wanting to do for a while is attach a unique "connection ID" and "request ID" to requests so that:

1. An ICAP server can make decisions about the connection (e.g. how to authenticate, whether to bump, etc.) and then refer back to the data it knows/generated about the connection when it processes the requests contained within that connection.

2. When multiple ICAP requests will be generated, they can be linked together by the ICAP server - e.g. where a single request will generate a REQMOD followed by a RESPMOD, it would be good for the ICAP server to know which REQMOD and RESPMOD relate to the same request.

It sounds like your annotations plan may address this to some extent. (We can probably already do some of this by having the ICAP server generate unique IDs and store them in ICAP headers to be passed along with the request, but I think the bug mentioned above would cause those headers to be discarded mid-request in some cases.)
Re: [squid-users] Large memory leak with ssl_peek (now partly understood)
On 17/08/16 06:22, Dan Charlesworth wrote:
> Deployed a 3.5.20 build with both of those patches and have noticed a big improvement in memory consumption of squid processes at a couple of splice-heavy sites. Thank you, sir!

We've now started tentatively rolling this out to a few production sites too and are seeing good results so far.
[squid-users] Checking SSL bump status in http_access
Is there a way of figuring out if the current request is a bumped request when the http_access ACL is being checked? i.e. can we tell the difference between a GET request that is inside a bumped tunnel and an unencrypted GET request?

--
 - Steve Hill
   Technical Director
   Opendium Limited
   http://www.opendium.com

 Sales / enquiries: Email: sa...@opendium.com
                    Phone: +44-1792-824568 / sip:sa...@opendium.com
 Support:           Email: supp...@opendium.com
                    Phone: +44-1792-825748 / sip:supp...@opendium.com
Re: [squid-users] Large memory leak with ssl_peek (now partly understood)
> This sounds very similar to Squid bug 4508. Factory proposed a fix for that bug, but the patch is for Squid v4. You may be able to adapt it to v3. Testing (with any version) is very welcomed, of course.

Thanks for that - I'll look into adapting and testing it. (I've been chasing this bug off and on for months - hadn't spotted that there was a bug report open for it. :)
[squid-users] Large memory leak with ssl_peek (now partly understood)
I've been suffering from a significant memory leak on multiple servers running Squid 3.5 for months, but was unable to reproduce it in a test environment. I've now figured out how to reproduce it and have done some investigation.

When using TPROXY, Squid generates fake "CONNECT 192.0.2.1:443" requests, using the IP address that the client connected to. At ssl_bump step 1, we peek and Squid generates another fake "CONNECT example.com:443" request containing the SNI from the client's SSL handshake. At ssl_bump step 2 we splice the connection, and Squid verifies that example.com does actually resolve to 192.0.2.1. If it doesn't, Squid is supposed to reject the connection in ClientRequestContext::hostHeaderVerifyFailed() to prevent clients from manipulating the SNI to bypass ACLs.

Unfortunately, when verification fails, rather than actually dropping the client's connection, Squid just leaves the client hanging. Eventually the client (hopefully) times out and drops the connection itself, but the associated ClientRequestContext is never destroyed. This is testable by repeatedly executing:

  openssl s_client -connect 17.252.76.30:443 -servername courier.push.apple.com

That is a traffic pattern that we see in the real world and is now clearly what is triggering the leak: Apple devices make connections to addresses within the 17.0.0.0/8 network with an SNI of "courier.push.apple.com". courier.push.apple.com resolves to a CNAME pointing to courier-push-apple.com.akadns.net, but courier-push-apple.com.akadns.net doesn't exist. Since Squid can't verify the connection, it won't allow it, and after 30 seconds the client times out. Each Apple device keeps retrying the connection, leaking a ClientRequestContext each time, and before long we've leaked several gigabytes of memory (on some networks I'm seeing 16 GB or more of leaked RAM over 24 hours!).
Unfortunately I'm a bit lost in the Squid code and can't quite figure out how to gracefully terminate the connection and destroy the context.
Re: [squid-users] host_verify_strict and wildcard SNI
On 07/07/16 12:30, Marcus Kool wrote:
> Here things get complicated. Is it correct that Squid enforces apps to follow standards, or should Squid try to proxy connections for apps when it can?

I would say no: where it is possible for Squid to allow an app to work, even where it isn't following standards (without compromising security, other software, etc.), then Squid should try to make the app work.

Unfortunately, end users do not understand the complexities, and if an app works on their home internet connection and doesn't work through their school / office connection (which is routed through Squid), then as far as they are concerned the school / office connection is "broken", even if the problem is actually a broken app. This is made worse by (1) the perception that big businesses such as Microsoft / Apple / Google can never be wrong (even though this is not borne out by experience of their software), and (2) the fact that app developers rarely seem at all interested in acknowledging/fixing such bugs (in my experience).

So in the end you have a choice: live with people accusing Squid of being "broken" and refuse to allow applications that will never be fixed to work, or work around the broken apps within Squid and thereby get them working without the cooperation of the app developers.

--
 - Steve Hill
   Technical Director
   Opendium Limited
   http://www.opendium.com

 Direct contacts:
   Instant messenger: xmpp:st...@opendium.com
   Email: st...@opendium.com
   Phone: sip:st...@opendium.com
 Sales / enquiries contacts:
   Email: sa...@opendium.com
   Phone: +44-1792-824568 / sip:sa...@opendium.com
 Support contacts:
   Email: supp...@opendium.com
   Phone: +44-1792-825748 / sip:supp...@opendium.com
Re: [squid-users] host_verify_strict and wildcard SNI
On 07/07/16 02:07, Alex Rousskov wrote:
> Q1. Is wildcard SNI "legal/valid"? I do not know the answer to that question. The "*.example.com" name is certainly legal in many DNS contexts. RFC 6066 requires HostName SNI to be a "fully qualified domain name", but I failed to find a strict-enough RFC definition of an FQDN that would either accept or reject wildcards as FQDNs. I would not be surprised if FQDN syntax is not defined to the level that would allow one to reject wildcards as FQDNs based on syntax alone.

Wildcards can be specified in DNS zonefiles, but I don't think you can ever look them up directly (rather, you look up "something.example.com" and the DNS server itself decides to use the wildcard record to fulfil that request - you never look up *.example.com itself).

> Q2. Can wildcard SNI "make sense" in some cases? Yes, of course. The client essentially says "I am trying to connect to _any_ example.com subdomain at this IP:port address. If you have any service like that, please connect me". That would work fine in deployment contexts where several servers with different names provide essentially the same service and the central "routing point" would pick the "best" service to use. I am not saying it is a good idea to use wildcard SNIs, but I can see them "making sense" in some cases.

Realistically, shouldn't the SNI reflect the DNS request that was made to find the IP of the server you're connecting to? You would never make a DNS request for '*.example.com', so I don't see a reason why you would send an SNI that has a larger scope than the DNS request you made.
Re: [squid-users] host_verify_strict and wildcard SNI
On 06/07/16 20:54, Eliezer Croitoru wrote:
> There are other options of course, but the first thing to check is if the client is a real browser or some special creature that tries its luck with a special form of SSL.

In this case it isn't a real web browser - it's an iOS app, and the vendor has stated that they have no intention of fixing it. :(
Re: [squid-users] Skype, SSL bump and go.trouter.io
On 07/07/16 11:07, Eliezer Croitoru wrote:
> Can you verify please, using a debug 11,9 trace, that squid is not altering the request in any form? Such as mentioned at:
> http://bugs.squid-cache.org/show_bug.cgi?id=4253

Thanks for this. I've compared the headers and the original contains:

  Upgrade: websocket
  Connection: Upgrade

Unfortunately, since Squid doesn't support WebSockets, I think there's no way around this - by the time we see the request and can identify it as Skype, we've already bumped it, so we're committed to passing it through Squid's HTTP engine. :(
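Spotting such requests is straightforward once you can see the headers; a small sketch (`is_websocket_upgrade` is a hypothetical helper, e.g. for an ICAP service inspecting REQMOD headers):

```python
def is_websocket_upgrade(headers):
    # A WebSocket handshake carries "Upgrade: websocket" plus a
    # Connection header listing the Upgrade token (RFC 6455).
    h = {k.lower(): v for k, v in headers.items()}
    upgrade = h.get("upgrade", "").lower() == "websocket"
    connection = "upgrade" in [t.strip().lower()
                               for t in h.get("connection", "").split(",")]
    return upgrade and connection

print(is_websocket_upgrade({"Upgrade": "websocket",
                            "Connection": "Upgrade"}))   # True
```

Of course, as noted above, detecting the handshake doesn't help once the connection is already bumped - the detection would have to feed into a decision made before bumping.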
Re: [squid-users] Skype, SSL bump and go.trouter.io
On 06/07/16 20:44, Eliezer Croitoru wrote:
> There are a couple of options to the issue, and a bad request can happen if squid transforms or modifies the request. Did you try to use basic debug sections output to verify if you are able to "replicate" the request using a tiny script or curl? I think that section 11 is the right one to start with (http://wiki.squid-cache.org/KnowledgeBase/DebugSections). There were a couple of issues with intercepted https connections in the past, but a 400 means that something is bad, mainly in the expected input rather than the certificate, though other reasons are possible. I have not tried to use skype in a transparent environment for a very long time but I can try to test it later.

I tcpdumped the ICAP REQMOD session to retrieve the request and tried it manually (direct to the Skype server) with openssl s_client. The Skype server (not Squid) returned a 400. But of course, the Skype request contains various data that the server will probably (correctly) see as a replay attack, so it isn't a very good test - all I can really say is that the real Skype client was getting exactly the same error from the server when the connection is bumped, but works fine when it is tunnelled.

Annoyingly, Skype doesn't include an SNI in the handshake, so peeking in order to exclude it from being bumped isn't an option.

The odd thing is that I have had Skype working in a transparent environment previously (with the unprivileged ports unfirewalled), so I wonder if this is something new from Microsoft.
[squid-users] Skype, SSL bump and go.trouter.io
I've been finding some problems with Skype when combined with TProxy and HTTPS interception, and wondered if anyone had seen this before.

Skype works so long as HTTPS interception is not performed and traffic to TCP and UDP ports 1024-65535 is allowed directly out to the internet. Enabling SSL bump seems to break things: when making a call, Skype makes an SSL connection to go.trouter.io, which Squid successfully bumps. Skype then makes a GET request to https://go.trouter.io/v3/c?auth=true=55 over the SSL connection, but the HTTPS server responds with a "400 Bad Request" error and Skype fails to work.

The Skype client clearly isn't rejecting the intercepted connection, since it is making HTTPS requests over it, but I can't see why the server would be returning an error. Obviously I can't see what's going on inside the connection when it isn't being bumped, but it does work then. The only thing I can think is that maybe the server is examining the SSL handshake and returning an error because it knows it isn't talking directly to the Skype client - but that seems like an odd way of doing things, rather than rejecting the SSL handshake in the first place.
[squid-users] host_verify_strict and wildcard SNI
I'm using a transparent proxy and SSL-peek, and have hit a problem with an iOS app which seems to be doing broken things with the SNI. The app is making an HTTPS connection to a server and presenting an SNI with a wildcard in it - i.e. "*.example.com". I'm not sure if this behaviour is actually illegal, but it certainly doesn't seem to make a lot of sense to me.

Squid then internally generates a "CONNECT *.example.com:443" request based on the peeked SNI, which is picked up by hostHeaderIpVerify(). Since *.example.com isn't a valid DNS name, Squid rejects the connection on the basis that *.example.com doesn't match the IP address that the client is connecting to.

Unfortunately, I can't see any way of working around the problem - "host_verify_strict" is disabled, but according to the docs, "For now suspicious intercepted CONNECT requests are always responded to with an HTTP 409 (Conflict) error page."

As I understand it, turning host_verify_strict on causes problems with CDNs which use DNS tricks for load balancing, so I'm not sure I understand the rationale behind preventing it from being turned off for CONNECT requests?
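A wildcard SNI can at least be recognised on syntax alone, before it ever reaches host verification. A rough sketch (hypothetical helper; RFC 6066 requires an FQDN but doesn't pin down the exact syntax, so this just applies the usual DNS-label rule, under which "*" is not a valid label):

```python
import re

# A DNS label: 1-63 alphanumeric/hyphen characters, not starting or
# ending with a hyphen. "*" is not a valid label, so a wildcard SNI
# like "*.example.com" fails this check.
LABEL = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def syntactically_valid_sni(host):
    labels = host.rstrip(".").split(".")
    return all(LABEL.match(label) for label in labels)

print(syntactically_valid_sni("www.example.com"))  # True
print(syntactically_valid_sni("*.example.com"))    # False
```

What Squid should then *do* with such a name - fail open, fail closed, or fall back to the original destination IP - is the policy question this thread is really about.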
Re: [squid-users] Youtube "challenges"
On 25/02/16 03:52, Darren wrote:
> The user visits a page on my server with the YouTube links. Visiting this page triggers a state based ACL (something like the captive portal login). The user then clicks a YouTube link and squid checks this ACL to see if the user is originating the request from my local page and, if it is, allows the splice to YouTube and the video can play.

Squid can't tell that the requests were referred by your page - the iframe itself may have your page as the referrer (although that certainly isn't guaranteed), but the objects that are referred to within that iframe won't have a useful referrer string.

You could dynamically create an ACL that allows the whole of YouTube when the user has your page open, but that is fairly insecure, since they could just open the page and then they would be allowed to access anything through YouTube. In my experience (and this is what we do), to be at all secure you have to analyse the page itself in order to figure out which specific URIs to whitelist (or at least, have those URIs hard-coded somewhere else).

Either way, YouTube uses HTTPS, so unless you're going to blindly allow the whole of YouTube whenever a user visits your page, you're going to need to SSL bump the requests in order to have an ACL based on the referrer and path. And as you know, SSL bumping involves sticking a certificate on each device.
[squid-users] SSL bump memory leak
I'm looking into (what appears to be) a memory leak in the Squid 3.5 series. I'm testing this in 3.5.13, but this problem has been observed in earlier releases too. Unfortunately I haven't been able to reproduce the problem in a test environment yet, so my debugging has been limited to what I can do on production systems (so no valgrind, etc).

These systems are configured to do SSL peek/bump/splice, and I see the Squid workers grow to hundreds or thousands of megabytes in size over a few hours. A configuration reload does not reduce the memory consumption. For debugging purposes, I have set "dynamic_cert_mem_cache_size=0KB" to disable the certificate cache, which should eliminate bug 4005.

I've taken a core dump to analyse and have found: running "strings" on the core, I can see that there are vast numbers of strings that look like certificate subject/issuer identifiers, e.g.:

  /C=GB/ST=Greater Manchester/L=Salford/O=Comodo CA Limited/CN=Secure Certificate Services

The vast majority of these seem to refer to root and intermediate certificates. There are a few that include a host name and are probably server certificates, such as:

  /OU=Domain Control Validated/CN=*.soundcloud.com

But these are very much in the minority. Also, notably, they are mostly duplicates. Compare the total number:

  $ strings -n 10 -t x core.21693 | egrep '^ *[^ ]+ /.{1,3}=' | wc -l
  131599

with the number of unique strings:

  $ strings -n 10 -t x core.21693 | egrep '^ *[^ ]+ /.{1,3}=' | sort -u -k 2 | wc -l
  658

There are also a very small number of lines that look something like:

  /C=US/ST=California/L=San Francisco/O=Wikimedia Foundation, Inc./CN=*.wikipedia.org+Sign=signTrusted+SignHash=SHA256

I think the "+Sign=signTrusted+SignHash=SHA256" part would indicate that this is a Squid database key, which is very confusing, since with the certificate cache disabled I wouldn't expect to see these at all.
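The same duplicate-counting analysis as the shell pipeline above can be done in Python; this is an illustrative sketch over a small made-up sample rather than the real core dump, but the shape of the output (total vs. unique vs. the most duplicated subjects) is the same.

```python
from collections import Counter

def summarise_subjects(lines):
    """Count duplicate certificate-subject strings, as extracted from a
    core dump with `strings`. Heavy duplication of the same issuer DNs
    suggests the certificates are being copied rather than shared."""
    subjects = [l for l in lines if l.startswith('/')]
    counts = Counter(subjects)
    return len(subjects), len(counts), counts.most_common(3)

# Hypothetical sample mimicking the core-dump output quoted above:
sample = [
    '/C=GB/ST=Greater Manchester/L=Salford/O=Comodo CA Limited/CN=Secure Certificate Services',
    '/C=GB/ST=Greater Manchester/L=Salford/O=Comodo CA Limited/CN=Secure Certificate Services',
    '/OU=Domain Control Validated/CN=*.soundcloud.com',
]
total, unique, top = summarise_subjects(sample)
```

On the real core the equivalent numbers were 131599 total against 658 unique - a ratio strongly suggesting the same few CA certificates are duplicated in memory.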
[squid-users] kid registration timed out
  03:43:37 kid1| HTCP Disabled.
  03:43:37 kid1| Configuring Parent [::1]/3129/0
  03:43:37 kid1| Squid plugin modules loaded: 0
  03:43:37 kid1| Adaptation support is on
  03:43:38 kid1| storeLateRelease: released 0 objects
  Squid Cache (Version 3.5.11): Terminated abnormally.
  CPU Usage: 0.177 seconds = 0.124 user + 0.053 sys
  Maximum Resident Size: 83088 KB
  Page faults with physical i/o: 0
  Squid Cache (Version 3.5.11): Terminated abnormally.
  Squid Cache (Version 3.5.11): Terminated abnormally.
  CPU Usage: 0.189 seconds = 0.127 user + 0.062 sys
  Maximum Resident Size: 83072 KB
  Page faults with physical i/o: 0
  CPU Usage: 0.191 seconds = 0.130 user + 0.061 sys
  Maximum Resident Size: 83072 KB
  Page faults with physical i/o: 0
  03:43:43 kid1| Closing HTTP port [::]:3128
  03:43:43 kid1| Closing HTTP port [::]:8080
  03:43:43 kid1| Closing HTTP port [::]:3130
  03:43:43 kid1| Closing HTTPS port [::]:3131
  03:43:43 kid1| storeDirWriteCleanLogs: Starting...
  03:43:43 kid1| Finished. Wrote 0 entries.
  03:43:43 kid1| Took 0.00 seconds ( 0.00 entries/sec).
  FATAL: kid1 registration timed out
  Squid Cache (Version 3.5.11): Terminated abnormally.
  CPU Usage: 0.193 seconds = 0.137 user + 0.056 sys
  Maximum Resident Size: 83104 KB
  Page faults with physical i/o: 0

There are actually 4 workers, but I have excluded the log lines for "kid[2-9]" as they seem to show exactly the same as kid1. I can't see any indication of why it is blowing up, other than "FATAL: kid1 registration timed out" (and identical time-outs for the other workers). I seem to be left with a Squid process still running (so my monitoring doesn't alert me that Squid isn't running), but it doesn't service requests. This isn't too bad if I'm manually restarting squid during the day, but if squid gets restarted in the night due to a package upgrade I can be left with a dead proxy that requires manual intervention.

The second problem, which may or may not be related, is that if Squid crashes (e.g. an assert()), it usually automatically restarts, but sometimes it fails and I see this logged:

  FATAL: Ipc::Mem::Segment::open failed to shm_open(/squidnocache-cf__metadata.shm): (2) No such file or directory

Similar to the first problem, when this happens I'm still left with a squid process running, but it isn't servicing any requests. I realise that it is a bug for Squid to crash in the first place, but it's compounded by the occasional complete loss of service when it happens. Any help would be appreciated. Thanks. :)
Re: [squid-users] sslBump and intercept
On 12/11/15 09:04, Eugene M. Zheganin wrote:
> I decided to intercept the HTTPS traffic on my production squids from proxy-unaware clients, to be able to tell them there's a proxy and they should configure one. So I'm doing it like this (the process of forwarding using FreeBSD pf is not shown here):
>
> ===Cut===
> acl unauthorized proxy_auth stringthatwillnevermatch
> acl step1 at_step sslBump1
> https_port 127.0.0.1:3131 intercept ssl-bump cert=/usr/local/etc/squid/certs/squid.cert.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB dhparams=/usr/local/etc/squid/certs/dhparam.pem
> https_port [::1]:3131 intercept ssl-bump cert=/usr/local/etc/squid/certs/squid.cert.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB dhparams=/usr/local/etc/squid/certs/dhparam.pem
> ssl_bump peek step1
> ssl_bump bump unauthorized
> ssl_bump splice all
> ===Cut===
>
> Almost everything works, except that squid for some reason is generating certificates in this case for IP addresses, not names, so the browser shows a warning about the certificate being valid only for the IP, and not the name.

proxy_auth won't work on intercepted traffic and will therefore always return false, so as far as I can see you're always going to peek and then splice - i.e. you're never going to bump, so Squid should never be generating a forged certificate. You say that Squid _is_ generating a forged certificate, so something else is going on to cause it to do that. My first guess is that Squid is generating some kind of error page due to some http_access rules which you haven't listed, and is therefore bumping.

Two possibilities spring to mind for the certificate being for the IP address rather than for the name:

1. The browser isn't bothering to include an SNI in the SSL handshake (use wireshark to confirm). In this case, Squid has no way to know what name to stick in the cert, so will just use the IP instead.

2. The bumping is happening in step 1 instead of step 2 for some reason. See: http://bugs.squid-cache.org/show_bug.cgi?id=4327
Re: [squid-users] squid http & https intercept based on DNS server
On 12/11/15 12:08, James Lay wrote:
> Some applications (I'm thinking mobile apps) may or may not use a hostname...some may simply connect to an IP address, which makes control over DNS irrelevant at that point. Hope that helps.

Also, redirecting all the DNS records to Squid will break everything that isn't http/https, since there will be nothing on the squid server to handle that traffic. It doesn't sound like a great idea to me - why not just redirect http/https traffic at the gateway (TPROXY) instead of mangling DNS?
[squid-users] Assert, followed by shm_open() fail.
On Squid 3.5.11 I'm seeing occasional asserts:

  2015/11/09 13:45:21 kid1| assertion failed: DestinationIp.cc:41: "checklist->conn() && checklist->conn()->clientConnection != NULL"

More concerning, though, is that usually when a Squid process crashes it is automatically restarted, but following these asserts I'm often seeing:

  FATAL: Ipc::Mem::Segment::open failed to shm_open(/squidnocache-squidnocache-cf__metadata.shm): (2) No such file or directory

After this, Squid is still running, but won't service requests and requires a manual restart. Has anyone seen this before? Cheers.
[squid-users] ICAP response header ACL
Are the latest adaptation (ICAP) response headers available through an ACL? The documentation says that adaptation headers are available in the notes, but this only appears to cover headers set with adaptation_meta, not the ICAP response headers. I had also considered using the "note" directive to explicitly stuff the headers into the notes, but it looks like the note directive doesn't allow you to use format strings (i.e. "note icap_headers %adapt::<last_h" sets the note to the literal string "%adapt::<last_h" rather than substituting the headers).
[squid-users] %un format code doesn't work for external ssl_bump ACLs
Squid 3.5.7

I'm using an external ACL to decide whether to bump traffic during SSL bump step 2. The external ACL needs to know the user's username for requests that have authenticated, but not all requests are authenticated, so I can't use %LOGIN and am therefore using %un instead. However, %un is never being filled in with a user name. The relevant parts of the config are:

  http_access allow proxy_auth
  http_access deny all
  external_acl_type sslpeek children-max=10 concurrency=100 ttl=0 negative_ttl=0 %SRC %un %URI %ssl::sni %ha{User-Agent} /usr/sbin/check_bump.sh
  acl sslpeek external sslpeek
  acl ssl_bump_step_1 at_step SslBump1
  acl ssl_bump_step_2 at_step SslBump2
  acl ssl_bump_step_3 at_step SslBump3
  ssl_bump peek ssl_bump_step_1 #icap_says_peek
  ssl_bump bump ssl_bump_step_2 sslpeek
  ssl_bump splice all
  sslproxy_cert_error allow all

The debug log shows that the request is successfully authenticated:

  Acl.cc(138) matches: checking proxy_auth
  UserData.cc(22) match: user is steve, case_insensitive is 0
  UserData.cc(28) match: aclMatchUser: user REQUIRED and auth-info present.
  Acl.cc(340) cacheMatchAcl: ACL::cacheMatchAcl: miss for 'proxy_auth'. Adding result 1
  Acl.cc(158) matches: checked: proxy_auth = 1

But then later in the log I see:

  external_acl.cc(1416) Start: fg lookup in 'sslpeek' for '2a00:1940:1:8:468a:5bff:fe9a:cd7f - www.hsbc.co.uk:443 www.hsbc.co.uk Mozilla/5.0%20(X11;%20Fedora;%20Linux%20x86_64;%20rv:39.0)%20Gecko/20100101%20Firefox/39.0'

The user name given to the external ACL is "-", even though the request has been authenticated. Setting a->require_auth in parse_externalAclHelper() makes it work, but that obviously just makes %un behave like %LOGIN, so isn't a solution.
[squid-users] Assert(call->dialer.handler == callback)
  in ClientHttpRequest::handleAdaptedHeader (this=0x7ffe1dcda618, msg=Unhandled dwarf expression opcode 0xf3) at client_side_request.cc:1935
  #35 0x7ffe14abbcaa in JobDialerAdaptation::Initiator::dial (this=0x7ffe1ce04990, call=...) at ../../src/base/AsyncJobCalls.h:174
  #36 0x7ffe149bea69 in AsyncCall::make (this=0x7ffe1ce04960) at AsyncCall.cc:40
  #37 0x7ffe149c272f in AsyncCallQueue::fireNext (this=Unhandled dwarf expression opcode 0xf3) at AsyncCallQueue.cc:56
  #38 0x7ffe149c2a60 in AsyncCallQueue::fire (this=0x7ffe16f70bf0) at AsyncCallQueue.cc:42
  #39 0x7ffe1484110c in EventLoop::runOnce (this=0x7fffcb8c4be0) at EventLoop.cc:120
  #40 0x7ffe148412c8 in EventLoop::run (this=0x7fffcb8c4be0) at EventLoop.cc:82
  #41 0x7ffe148ae191 in SquidMain (argc=Unhandled dwarf expression opcode 0xf3) at main.cc:1511
  #42 0x7ffe148af2e9 in SquidMainSafe (argc=Unhandled dwarf expression opcode 0xf3) at main.cc:1243
  #43 main (argc=Unhandled dwarf expression opcode 0xf3) at main.cc:1236

(sorry about the DWARF errors - it looks like I've got a version mismatch between gcc and gdb)
Re: [squid-users] i hope to build web Authentication portal at Tproxy environment recenty , can you give me some advisement .
On 11.03.15 10:22, johnzeng wrote:
> whether php or jquery need send user ip address to squid ? otherwise i worried whether squid can confirm user info and how to identify and controll http traffic ?

I'd do this with an external ACL - when processing a request, Squid would call the external ACL, which would do:

1. If the user is not authenticated, or their last-seen timestamp has expired, return ERR.
2. If the user is authenticated, update their last-seen timestamp and return OK.

Obviously, if the ACL returns ERR, Squid needs to redirect the user to the authentication page. If the ACL returns OK, Squid needs to service the request as normal. The authentication page would update the database which the external ACL refers to.

Identifying the user's traffic would need to be done by MAC address or IP:
- MAC address requires a flat network with no routers between the device and Squid.
- IP has (probably) unfixable problems in a dual-stacked network.

Beware that:
1. Access to the authentication page must be allowed for unauthenticated users (obviously :)
2. Authentication should really be done over HTTPS with a trusted certificate.
3. Clients require access to some external servers to validate HTTPS certs before they have authenticated.
4. If you want to support WISPr then (2) and (3) are mandatory.
5. External ACL caching

You might be able to do it with internal ACLs, but... pain :)
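The two steps above can be sketched as a minimal external ACL helper. This is an illustrative Python sketch, not production code: the in-memory session dict and the 600-second idle timeout are assumptions, and a real helper would share state (e.g. a database) with the authentication page rather than keeping it in-process.

```python
#!/usr/bin/env python3
"""Minimal sketch of the external ACL helper described above."""
import sys
import time

SESSION_TTL = 600   # assumed idle timeout in seconds
sessions = {}       # ip -> last-seen timestamp, updated by the portal

def check(ip, now=None):
    """Return Squid's OK/ERR verdict for one client address."""
    now = now if now is not None else time.time()
    last = sessions.get(ip)
    if last is None or now - last > SESSION_TTL:
        return "ERR"        # not authenticated / expired -> redirect to portal
    sessions[ip] = now      # authenticated: refresh the last-seen timestamp
    return "OK"

if __name__ == "__main__":
    # With concurrency enabled, Squid sends "<channel-id> <ip>" per line
    # and expects "<channel-id> OK" or "<channel-id> ERR" back.
    for line in sys.stdin:
        chan, _, ip = line.strip().partition(' ')
        print(f"{chan} {check(ip)}", flush=True)
```

The deny_info directive would then map the ERR result to a redirect at the portal page.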
Re: [squid-users] Dual-stack IPv4/IPv6 captive portal
On 02.03.15 02:33, Amos Jeffries wrote:
> These people are plain wrong about how the basic protocol works and yet they are treated with must-accept policies by so many networks.

Yep, one of the really big problems we have is the "it works when we're not using the proxy, so the proxy must be broken" attitude, when almost universally the proxy is working fine and the other software is just plain broken. It's really hard to convince a customer that it really isn't our fault when some app breaks, especially when that app is made by someone like Apple or Google (who, of course, can *never* be wrong!). The vast majority of our support time is spent figuring out ways to work around busted end-user software, because we know saying "Apple's software is broken, go and talk to Apple" isn't going to work, because the likes of Apple have no interest in actually supporting their own customers, and somehow this ends up being our fault. (Not just Apple - lots of other companies are equally bad, although Apple have currently hit a nerve with me due to a lot of debugging I recently had to do with their appstore, because they didn't bother to log any errors when things broke, which also seems to be par for the course these days.)

> Imagine what would happen if you MUST-accept all emails delivered? or any kind of DNS response they chose to send you? those are two other major protocols with proxies that work just fine by rejecting bad messages wholesale.

Well, you say that, but we also get "it works at home but not at work" complaints when DNS servers start returning broken data. Admittedly we usually seem to be able to not catch quite so much blame for that one, although I'm not sure how. :)

Basically, in my experience, if it works in situation A and not in situation B, people will assume that the problem is whatever is different in situation B, rather than that both situations are completely valid but their application is broken and can't handle one of them. This becomes a big problem when situation A is the more prevalent one - at that point you either start working around the buggy software, or you lose a customer and get a reputation for selling broken stuff. So whilst I agree with you that in an ideal world we wouldn't work around stuff - we would just report bugs and the broken software would be fixed - in the real world the big mainstream businesses aren't interested in supporting their customers, and yet somehow the rest of us end up having to do it for them, or it reflects badly on *us*. *boggle*

FWIW, I am always happy to work with other people/companies to help them fix their broken stuff. This has been met with a mix of responses - sometimes they are happy to work with me to fix things, which is great, but sadly that's not the most common experience. Often I send a detailed bug report, explaining what's going wrong, referencing standards, etc., and get a "you're wrong, we're right, we're not going to change anything" response, which would be fine if they referenced anything to back up their position, but they never do. Many simply ignore the reports altogether. Then we have people like Microsoft, who I've tried to contact on several occasions to report bugs in their public-facing web servers - there are no suitable contact details ever published, and I've been bounced from department to department with no one quite sure what to do with someone reporting problems with their _public_ servers without having some kind of support contract with them (I've had no resolution to any of the problems I reported to them, because I've never actually managed to get my report to anyone responsible). I've given up reporting bugs to Apple, because they always demand that I spend a lot of my time collecting debug logs, but then they sit on the report and never actually fix it (again, I've never had a resolution to a bug I've reported to Apple, despite supplying them with extensive debugging).

/rant :)
Re: [squid-users] Dual-stack IPv4/IPv6 captive portal
On 27.02.15 17:00, Michele Bergonzoni wrote:
> This is true for v6 if the client uses its MAC as an identifier, which it's not supposed to do and last time I checked was not true for Windows, or if clients or DHCP relays support RFC6939, which is quite new. See for example: https://lists.isc.org/pipermail/kea-dev/2014-June/43.html

Oh, interesting - I hadn't realised that.

> Have you thought about engineering your captive portal with a dual stack DNS name (having both A and AAAA), a v4 only and a v6 only, and having your HTML embed requests with appropriate identifiers to correlate addresses? Of course there are HTTP complications and it is not perfect, but I guess that as long as it's a captive portal, kludginess cannot decrease below some level.

That was one of my options. However, it won't work in the case of WISPr auto-logons, because the page wouldn't be rendered by the client, so you wouldn't expect it to fetch embedded bits either.

> I am really interested to hear what people are doing in the field of squid-powered captive portals, even more when interoperating with iptables/ip6tables.

At the moment, we've written a hybrid captive portal/HTTP-auth system - essentially, we use HTTP proxy auth where we can and a captive portal where we can't. HTTP proxy auth is preferable because every request gets authenticated individually and we can use Kerberos. Unfortunately a lot of software doesn't support it properly (I'm looking at you, Apple and Google, although everyone else is getting pretty bad at it too), and it also can't be used for transparent proxying (and again, a lot of software just doesn't bother to support proxies these days, and it's only getting worse). So we use the user-agent string to try to identify the clients we can safely authenticate, and the rest rely on cached credentials or the captive portal. Yes, it's a horrible bodge, but unfortunately that's where modern software is driving us. :(

For iOS and Android you can pretty much forget using pure HTTP proxy authentication. Luckily iOS can use WISPr to automatically log into a portal; sadly vanilla Android still doesn't include a WISPr client (I'd put money on this being down to patents!).
[squid-users] Dual-stack IPv4/IPv6 captive portal
I'm wondering whether anyone has implemented a captive portal on a dual-stacked network, and whether they can provide any insight into the best way of going about it.

The problems:

- Networks are frequently routed, with the proxy server on the border. This means the proxy doesn't get to see the client's MAC address, so captive portals have to work by associating the IP address with the user's credentials.
- In a dual-stacked environment, a client's requests come from both its IPv4 address and its IPv6 address. Treating them independently of each other would lead to a bad user experience, since the user would need to authenticate separately for each address.
- Where IPv6 privacy extensions are enabled, the client has multiple addresses at the same time, with the preferred address changing at regular intervals. The address rotation interval is typically quite long (e.g. 1 day), but the change-over between addresses will occur spontaneously, with the captive portal not being informed in advance. Again, we don't want to auth each address individually.
- Captive portals often want to support WISPr to allow client devices to perform automated logins.

Possible solutions:

- The captive portal page could include embedded objects from the captive portal server's v4 and v6 addresses. This would allow the captive portal to temporarily link the addresses together, and therefore link the authentication credentials to both. The portal would still have to work correctly when used from single-stacked devices. This also isn't going to work for WISPr clients, since the client will never render the page when doing an automated login, so we wouldn't expect any embedded objects to be requested.
- Using DHCPv6 instead of SLAAC to do the address assignment would disable IPv6 privacy extensions, which would be desirable in this case. However, many devices don't support DHCPv6.
- The DHCP and DHCPv6 servers know the MAC and IPv[46] address of each client and could cooperate with each other to link this data together. However, the proxy does not always have control of the DHCP/DHCPv6 servers.
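The first solution (embedded objects from the portal's v4 and v6 addresses) can be sketched as follows. This is an illustrative Python model, not a real implementation: the portal page would embed one object from a v4-only name and one from a v6-only name, both carrying the same one-time token, and whichever client addresses fetch them get tied to the same login. All names and structures here are assumptions.

```python
"""Toy model of correlating a client's IPv4 and IPv6 addresses via a
one-time token embedded in the portal page's objects."""

logins = {}         # token -> set of client addresses seen presenting it
authorised = set()  # addresses currently allowed through the proxy

def record_fetch(token, client_addr):
    """Called when an embedded object is fetched over v4 or v6."""
    logins.setdefault(token, set()).add(client_addr)

def complete_login(token):
    """Called when the user authenticates: authorise every address
    (v4 and v6) that presented this token, then discard the token."""
    for addr in logins.pop(token, set()):
        authorised.add(addr)
```

As noted above, this only works when the client actually renders the page, which is exactly what a WISPr auto-logon client never does.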
Re: [squid-users] ssl-bump doesn't like valid web server
On 02.02.15 13:23, Eliezer Croitoru wrote:
> On what OS are you running squid? is it self compiled one?

Scientific Linux 6.6. And yes, it's a self-compiled Squid. I'm quite happy to change to using the helper if that is the preferred method (until recently I was unaware that the helper existed), although I've got to admit that I was a bit surprised to be told that the way I've been successfully using Squid is impossible. :)
Re: [squid-users] ssl-bump doesn't like valid web server
On 22.01.15 08:14, Amos Jeffries wrote:
> Squid only *generates* server certificates using that helper. If you are seeing the log lines "Generating SSL certificate" they are incorrect when not using the helper. The non-helper bumping is limited to using the configured http(s)_port cert= and key= contents. In essence only doing client-first or peek+splice SSL-bumping styles.

I'm pretty sure this is incorrect - I'm running Squid 3.4 without ssl_crtd, configured to bump server-first. The cert= parameter on the http_port line points at a CA certificate. When visiting an HTTPS site through the proxy, the certificate sent to the browser is a forged version of the server's certificate, signed by the cert= CA. This definitely seems to be server-first bumping - if the server's CA is unknown, Squid generates an appropriately broken certificate, etc., as you would expect. Am I missing something?
Re: [squid-users] ssl-bump doesn't like valid web server
On 21/01/15 18:39, Eliezer Croitoru wrote:
>> but not using ssl_crtd
> What are using if not ssl_crtd?

Squid generates the certificates internally if ssl_crtd isn't turned on at compile time. I've not seen any information explaining the pros and cons of each approach (I'd welcome any input!).
Re: [squid-users] ssl-bump doesn't like valid web server
On 21.01.15 08:40, Jason Haar wrote:
> I'm running squid-3.4.10 on CentOS-6 and just got hit with ssl-bump blocking/warning access to a website which I can't figure out why

Probably not very helpful, but it works for me (squid-3.4.10, Scientific Linux 6.6, bump-server-first, but not using ssl_crtd). I also can't see anything wrong with the certificate chain.
[squid-users] ssl_crtd
At the moment I'm running Squid 3.4 with bump-server-first, using the internal certificate generation (i.e. not ssl_crtd). I can't find much information about using or not using ssl_crtd, so I was wondering if anyone can give me a run-down of the pros and cons of using it instead of the internal cert generator? Thanks.
Re: [squid-users] Debugging slow access
On 05.01.15 18:15, Amos Jeffries wrote:
> Can you try making the constructor at the top of src/HelperReply.cc look
> like this and see if it resolves the problem?
>
>   HelperReply::HelperReply(char *buf, size_t len) :
>       result(HelperReply::Unknown),
>       notes(),
>       whichServer(NULL)
>   {
>       assert(notes.empty());
>       parse(buf, len);
>   }

This didn't help I'm afraid. Some further debugging so far today:

The notes in HelperReply are indeed empty when the token is added. However, Auth::Negotiate::UserRequest::HandleReply() appends the reply notes to auth_user_request. It fetches a cached user record from proxy_auth_username_cache and then calls absorb() to merge auth_user_request with the cached user record. This ends up adding the new Negotiate token into the cached record. This keeps happening for each new request, so the cached user record gradually accumulates tokens.

As far as I can see, tokens are only ever read from the helper's reply notes, not the user's notes, so maybe the tokens never need to be appended to auth_user_request in the first place? Alternatively, A->absorb(B) could be altered to remove any notes from A that have the same keys as B's notes, before using appendNewOnly() to merge them?
Re: [squid-users] Debugging slow access
On 05.01.15 20:11, Eliezer Croitoru wrote:
> Did you have the chance to take a look at bug 3997:
> http://bugs.squid-cache.org/show_bug.cgi?id=3997

This could quite likely be the same issue. See my other post this morning for details, but I've pretty much tracked this down to the Negotiate tokens being appended to user cache records in an unbounded way. Eventually you end up with so many tokens (several thousand) that the majority of the CPU time is spent traversing them. A quick look at the NTLM code suggests that it would behave in the same way.

The question now is what the correct way to fix it is - we could specifically avoid appending token notes in the Negotiate/NTLM code, or we could do something more generic in the absorb() method. (My preference is the latter unless anyone can think of a reason why it would be a bad idea.)
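To illustrate the two merge strategies being discussed, here's a minimal self-contained sketch. The names mirror Squid's NotePairs methods but the types and shapes are my own stand-ins, not the real API: a "new only" merge skips a source entry only when an identical key/value pair already exists, so a fresh token value slips past the duplicate check every time, whereas a "replace by key" merge keeps the list bounded.

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Hypothetical stand-in for Squid's NotePairs: an ordered key/value list.
using Notes = std::vector<std::pair<std::string, std::string>>;

// appendNewOnly()-style merge: skip a source entry only if an *identical*
// key/value pair already exists.  Each Kerberos round produces a fresh
// token value, so "token" notes keep piling up.
void appendNewOnly(Notes &dst, const Notes &src) {
    for (const auto &e : src) {
        bool dup = false;
        for (const auto &d : dst)
            if (d == e) { dup = true; break; }  // same key *and* value
        if (!dup) dst.push_back(e);
    }
}

// appendAndReplace()-style merge (the generic fix discussed above): first
// drop every destination entry whose key appears in the source, then append.
void appendAndReplace(Notes &dst, const Notes &src) {
    for (const auto &e : src) {
        for (auto it = dst.begin(); it != dst.end();) {
            if (it->first == e.first)
                it = dst.erase(it);   // stale note with the same key
            else
                ++it;
        }
    }
    dst.insert(dst.end(), src.begin(), src.end());
}
```

With two authentication rounds, the "new only" merge leaves two token notes in the cache record while the "replace" merge leaves one (the latest).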
Re: [squid-users] Debugging slow access
On 06.01.15 12:15, Steve Hill wrote:
> Alternatively, A->absorb(B) could be altered to remove any notes from A
> that have the same keys as B's notes, before using appendNewOnly() to
> merge them?

I've implemented this for now in the attached patch and am currently testing it. Initial results suggest it resolves the problem.

It introduces a new method, NotePairs::appendAndReplace(), which iterates through the source NotePairs and removes any NotePairs in the destination that have the same key, then calls append(). This is not the most efficient way of erasing the notes, because Squid's Vector template doesn't appear to have an erase() method.

Index: source/src/Notes.cc
===================================================================
--- source/src/Notes.cc	(revision 354)
+++ source/src/Notes.cc	(working copy)
@@ -221,6 +221,22 @@
 }
 
 void
+NotePairs::appendAndReplace(const NotePairs *src)
+{
+    for (Vector<NotePairs::Entry *>::const_iterator i = src->entries.begin(); i != src->entries.end(); ++i) {
+        Vector<NotePairs::Entry *>::iterator j = entries.begin();
+        while (j != entries.end()) {
+            if ((*j)->name.cmp((*i)->name.termedBuf()) == 0) {
+                entries.prune(*j);
+                j = entries.begin();
+            } else
+                ++j;
+        }
+    }
+    append(src);
+}
+
+void
 NotePairs::appendNewOnly(const NotePairs *src)
 {
     for (Vector<NotePairs::Entry *>::const_iterator i = src->entries.begin(); i != src->entries.end(); ++i) {
Index: source/src/Notes.h
===================================================================
--- source/src/Notes.h	(revision 354)
+++ source/src/Notes.h	(working copy)
@@ -131,6 +131,12 @@
     void append(const NotePairs *src);
 
     /**
+     * Append the entries of the src NotePairs list to our list, replacing any
+     * entries in the destination set that have the same keys.
+     */
+    void appendAndReplace(const NotePairs *src);
+
+    /**
      * Append any new entries of the src NotePairs list to our list.
      * Entries which already exist in the destination set are ignored.
      */
Index: source/src/auth/User.cc
===================================================================
--- source/src/auth/User.cc	(revision 354)
+++ source/src/auth/User.cc	(working copy)
@@ -101,7 +101,7 @@
     debugs(29, 5, HERE << "auth_user '" << from << "' into auth_user '" << this << "'.");
 
     // combine the helper response annotations. Ensuring no duplicates are copied.
-    notes.appendNewOnly(&from->notes);
+    notes.appendAndReplace(&from->notes);
 
     /* absorb the list of IP address sources (for max_user_ip controls) */
     AuthUserIP *new_ipdata;
Re: [squid-users] Debugging slow access
On 10.12.14 17:09, Amos Jeffries wrote:
> I'm looking for advice on figuring out what is causing intermittent
> high CPU usage.

It appears that the connections gradually gain more and more notes with the key "token" (and values containing Kerberos tokens). I haven't been able to reproduce the problem reliably enough to determine if this is the root of the high CPU usage problem, but it certainly doesn't look right.

When an ACL is executed that requires the login name (e.g. the proxy_auth ACL, or an external ACL using the %LOGIN format specifier), Acl.cc:AuthenticateAcl() is called. This, in turn, calls UserRequest.cc:tryToAuthenticateAndSetAuthUser(), which calls UserRequest.cc:authTryGetUser(). Here we get a call to Notes.cc:appendNewOnly(), which appends all the notes from checklist->auth_user_request->user()->notes. I can see the appendNewOnly() call sometimes ends up appending a large number of "token" notes (I've observed requests with a couple of hundred token notes attached to them) - the number of notes increases each time a Kerberos authentication is performed. My suspicion is that this growth is unbounded, and in some cases the number of notes could become large enough to be a significant performance hit.

A couple of questions spring to mind:

1. HelperReply.cc:parse() calls notes.add("token", authToken.content()) (i.e. it adds a token rather than replacing an existing one). As far as I can tell, Squid only ever uses the first token note, so maybe we should be removing the old notes when we add a new one? [Actually, on closer inspection, NotePairs::add() appends to the end of the list but NotePairs::findFirst() finds the note closest to the start of the list. Unless I'm missing something, this means the newer token notes are added but never used?]

2. I'm not sure how the ACL checklists and User objects are shared between connections/requests and how they are supposed to persist. It seems to me that there is something wrong with the sharing/persistence if we're accumulating so many token notes. As well as the performance problems, there could be some race conditions lurking here?
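The add()/findFirst() mismatch described in point 1 is easy to demonstrate with a toy model. This is a hypothetical sketch of the behaviour, not Squid's real NotePairs class: add() pushes to the back of the list while findFirst() returns the match nearest the front, so once a "token" note exists, every newer one is stored but never consulted.

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch of the NotePairs behaviour discussed above
// (names assumed, not the real Squid API).
struct Notes {
    std::vector<std::pair<std::string, std::string>> entries;

    void add(const std::string &key, const std::string &value) {
        entries.emplace_back(key, value);   // newer notes land at the end
    }

    // Returns the value of the *earliest* note with this key, or nullptr.
    const std::string *findFirst(const std::string &key) const {
        for (const auto &e : entries)       // scans from the front
            if (e.first == key)
                return &e.second;
        return nullptr;
    }
};
```

After two add("token", ...) calls, findFirst("token") still returns the first (stale) value even though both are stored.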
Re: [squid-users] Debugging slow access
On 05.01.15 16:35, Eliezer Croitoru wrote:
> Can you share the squid -v output and the OS you are using?

Scientific Linux 6.6; see below for the squid -v output.

I've now more or less confirmed that this is the cause of my performance problems - every so often I see Squid using all the CPU whilst servicing very few requests. Most of the CPU time is being used by the appendNewOnly() function. For example, 228 milliseconds for appendNewOnly() to process a request with 2687 token notes attached to it, and this can happen more than once per request.

Squid Cache: Version 3.4.10
configure options:  '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--exec_prefix=/usr' '--libexecdir=/usr/lib64/squid' '--localstatedir=/var' '--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' '--with-logdir=$(localstatedir)/log/squid' '--with-pidfile=$(localstatedir)/run/squid.pid' '--disable-dependency-tracking' '--enable-arp-acl' '--enable-follow-x-forwarded-for' '--enable-auth' '--enable-auth-basic-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL,DB,POP3,squid_radius_auth' '--enable-auth-ntlm-helpers=smb_lm,no_check,fakeauth' '--enable-auth-digest-helpers=password,ldap,eDirectory' '--enable-auth-negotiate-helpers=squid_kerb_auth' '--enable-external-acl-helpers=file_userip,LDAP_group,unix_group,wbinfo_group' '--enable-cache-digests' '--enable-cachemgr-hostname=localhost' '--enable-delay-pools' '--enable-epoll' '--enable-icap-client' '--enable-ident-lookups' '--enable-linux-netfilter' '--enable-referer-log' '--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl' '--enable-storeio=aufs,diskd,ufs,rock' '--enable-useragent-log' '--enable-wccpv2' '--enable-esi' '--with-aio' '--with-default-user=squid' '--with-filedescriptors=16384' '--with-dl' '--with-openssl' '--with-pthreads' 'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu' 'target_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-fPIE -Os -g -pipe -fsigned-char -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' 'LDFLAGS=-pie' 'CXXFLAGS=-fPIE -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' 'PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/share/pkgconfig' --enable-ltdl-convenience
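To put that 228 ms in context: if every request merges one fresh token note into a cache record by scanning all the notes already accumulated there, the total work grows quadratically with the number of authentication rounds. A back-of-the-envelope model (the function is mine, not Squid's):

```cpp
#include <cassert>
#include <cstddef>

// Model of the cost described above: each request merges the helper's one
// new "token" note into a record that already holds all previous tokens,
// scanning the whole record for duplicates each time.  After n requests
// the total number of note comparisons is 0 + 1 + ... + (n-1) = n(n-1)/2,
// i.e. O(n^2) -- which is why a record with thousands of tokens starts to
// dominate the CPU time.
std::size_t totalComparisons(std::size_t requests) {
    std::size_t notes = 0, comparisons = 0;
    for (std::size_t r = 0; r < requests; ++r) {
        comparisons += notes;  // scan every existing note for a duplicate
        ++notes;               // fresh token value -> appended, never pruned
    }
    return comparisons;
}
```

By 2687 requests the merge has already performed over 3.6 million note comparisons in total, and the per-request scan alone is 2686 string comparisons.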
[squid-users] Assertion failure: DestinationIp.cc:60
I'm seeing a lot of this in both 3.4.6 and 3.4.9:

2014/11/18 15:08:48 kid1| assertion failed: DestinationIp.cc:60: "checklist->conn() && checklist->conn()->clientConnection != NULL"

I've looked through Bugzilla and couldn't see anything regarding this - is this a known bug?
[squid-users] RFC2616 headers in bumped requests
Squid (correctly) inserts Via and X-Forwarded-For headers into requests that it is proxying. However, in the case of encrypted traffic, the server and client are expecting the traffic to reach the other end as-is, since usually this could not be intercepted. With SSL bumped requests this is no longer true - the proxy can (and does) modify the traffic, by inserting these headers. So I'm asking the question: is this behavior considered desirable, or should we be attempting to modify the request as little as possible for compatibility reasons?

I've just come across a web server that throws its toys out of the pram when it sees a Via header in an HTTPS request, and unfortunately it's quite a big one - Yahoo. See this request:

GET /news/degrees-lead-best-paid-careers-141513989.html HTTP/1.1
Host: uk.finance.yahoo.com
Via: 1.1

HTTP/1.1 301 Moved Permanently
Date: Tue, 04 Nov 2014 09:55:40 GMT
Via: http/1.1 yts212.global.media.ir2.yahoo.com (ApacheTrafficServer [c s f ]), http/1.1 r04.ycpi.ams.yahoo.net (ApacheTrafficServer [cMsSfW])
Server: ATS
Strict-Transport-Security: max-age=172800
Location: https://uk.finance.yahoo.com/news/degrees-lead-best-paid-careers-141513989.html
Content-Length: 0
Age: 0
Connection: keep-alive

Compare to:

GET /news/degrees-lead-best-paid-careers-141513989.html HTTP/1.1
Host: uk.finance.yahoo.com

HTTP/1.1 200 OK
...

Note that the 301 that they return when a Via header is present just points back at the same URI, so the client never gets the object it requested. For now I have worked around it with:

request_header_access Via deny https
request_header_access X-Forwarded-For deny https

But it does make me wonder if inserting the headers into bumped traffic is a sensible thing to do.
Re: [squid-users] SSL bump fails accessing .gov.uk servers
On 31/10/14 20:03, Dieter Bloms wrote:
> but when the server is broken, it will not work. Have a look at:
> https://www.ssllabs.com/ssltest/analyze.html?d=www.taxdisc.service.gov.uk
>
>> It works correctly when FireFox connects directly to the web server
>> rather than going through the proxy.
>
> yes the browsers have a workaround and try with different cipher suites,
> when the first connect fails.
>
>> So my question is: is the web server broken, or am I misunderstanding
>> something?
>
> The webserver is broken.

Many thanks for this - I have emailed them, which I fully expect them to ignore :)
Re: [squid-users] leaking memory In Squid 3.4.6
CONNECT
acl https proto https
acl proxy_auth proxy_auth REQUIRED
acl tproxy myportname tproxy
acl tproxy_ssl myportname tproxy_ssl

# The "you have been blocked" page comes from the web server on localhost and
# needs to be excluded from filtering and being forwarded to the upstream proxy.
acl dstdomain_localhost dstdomain localhost

##
# Start of http_access access control.
##
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost

# Unauthenticated access to the local server
http_access allow local_ips
http_access allow !tproxy !tproxy_ssl !https preauth
http_access allow !preauth_done preauth_tproxy
http_access allow need_http_auth need_postauth_sync proxy_auth postauth_sync
http_access allow need_http_auth need_postauth_async proxy_auth postauth_async
http_access allow need_http_auth proxy_auth
http_access deny preauth_ok show_login_page
http_access deny all

##
# Other services
##
icp_access deny all
htcp_access deny all

##
# SSL bumping - http://www.squid-cache.org/mail-archive/squid-dev/201206/0089.html
# When the web filter wants a CONNECT request to be bumped it sets the
# icap_says_bump header on it, which we trap for here. Transparently
# proxied SSL connections are always bumped.
##
acl icap_says_bump req_header X-SSL-Bump -i Yes
ssl_bump server-first icap_says_bump
ssl_bump server-first tproxy_ssl
sslproxy_cert_error allow all

##
# Listening ports
##
http_port 3128 ssl-bump generate-host-certificates=on cert=/etc/pki/tls/certs/squid-sslbump.crt key=/etc/pki/tls/private/squid-sslbump.key
http_port 8080 ssl-bump generate-host-certificates=on cert=/etc/pki/tls/certs/squid-sslbump.crt key=/etc/pki/tls/private/squid-sslbump.key
http_port 3130 tproxy name=tproxy
https_port 3131 ssl-bump generate-host-certificates=on cert=/etc/pki/tls/certs/squid-sslbump.crt key=/etc/pki/tls/private/squid-sslbump.key tproxy name=tproxy_ssl

##
# Set a Netfilter mark on transparently proxied connections so they can have
# special routing
##
tcp_outgoing_mark 0x2 tproxy
tcp_outgoing_mark 0x2 tproxy_ssl

##
# Since we do no caching in this instance of Squid, we use a second instance as
# an upstream caching proxy. For efficiency reasons we try to send uncachable
# traffic directly to the web server rather than via the upstream proxy.
##
cache_peer [::1] parent 3129 0 proxy-only no-query no-digest no-tproxy name=caching
cache_peer_access caching deny CONNECT
cache_peer_access caching deny https
cache_peer_access caching deny tproxy_ssl
cache_peer_access caching deny to_localhost
cache_peer_access caching deny dstdomain_localhost
cache_peer_access caching allow all
cache_mem 0
cache deny all
never_direct deny CONNECT
never_direct deny https
never_direct deny tproxy_ssl
never_direct deny to_localhost
never_direct deny dstdomain_localhost
never_direct allow all

##
# Interface with the web filter
##
icap_enable on
icap_service_revival_delay 30
icap_preview_enable on
icap_preview_size 5
icap_send_client_ip on
icap_send_client_username on
icap_service iceni_reqmod_precache reqmod_precache 0 icap://localhost6:1344/reqmod_precache
icap_service iceni_respmod_postcache respmod_precache 0 icap://localhost6:1344/respmod_postcache
adaptation_service_set iceni_reqmod_precache iceni_reqmod_precache
adaptation_service_set iceni_respmod_postcache iceni_respmod_postcache
adaptation_access iceni_reqmod_precache deny local_ips
adaptation_access iceni_reqmod_precache deny to_localhost
adaptation_access iceni_reqmod_precache deny dstdomain_localhost
adaptation_access iceni_reqmod_precache allow all
adaptation_access iceni_respmod_postcache deny local_ips
adaptation_access iceni_respmod_postcache deny to_localhost
adaptation_access iceni_respmod_postcache deny dstdomain_localhost
adaptation_access iceni_respmod_postcache allow all
Re: [squid-users] leaking memory in squid 3.4.8 and 3.4.7.
: 11.9%
Storage Swap size:      3773012 KB
Storage Swap capacity:  90.0% used, 10.0% free
Storage Mem size:       262144 KB
Storage Mem capacity:   100.0% used,  0.0% free
Mean Object Size:       28.55 KB
Requests given to unlinkd:      3198063

Median Service Times (seconds)  5 min    60 min:
        HTTP Requests (All):   0.02899  0.03241
        Cache Misses:          0.03066  0.03241
        Cache Hits:            0.00405  0.00091
        Near Hits:             0.03066  0.03427
        Not-Modified Replies:  0.00000  0.00000
        DNS Lookups:           0.00000  0.00000
        ICP Queries:           0.00000  0.00000

Resource usage for squid:
        UP Time:        1574985.354 seconds
        CPU Time:       32733.608 seconds
        CPU Usage:      2.08%
        CPU Usage, 5 minute avg:        3.80%
        CPU Usage, 60 minute avg:       3.54%
        Maximum Resident Size: 1025200 KB
        Page faults with physical i/o: 289968

Memory usage for squid via mallinfo():
        Total space in arena:   49616 KB
        Ordinary blocks:        38418 KB  15268 blks
        Small blocks:               0 KB      0 blks
        Holding blocks:         10520 KB      7 blks
        Free Small blocks:          0 KB
        Free Ordinary blocks:   11198 KB
        Total in use:           11198 KB 19%
        Total free:             11198 KB 19%
        Total size:             60136 KB

Memory accounted for:
        Total accounted:        27128 KB  45%
        memPool accounted:      27128 KB  45%
        memPool unaccounted:    33008 KB  55%
        memPoolAlloc calls: 5279809700
        memPoolFree calls:  5314670336

File descriptor usage for squid:
        Maximum number of file descriptors:   16384
        Largest file desc currently in use:      67
        Number of file desc currently in use:    50
        Files queued for open:                    0
        Available number of file descriptors: 16334
        Reserved number of file descriptors:    100
        Store Disk files open:                    0

Internal Data Structures:
        132586 StoreEntries
           444 StoreEntries with MemObjects
          8192 Hot Object Cache Items
        132142 on-disk objects

As a separate note: I'm not sure why the memory footprint of the caching squid is so low - with cache_mem set to 256MB (100% used, apparently) and 8 workers I would expect it to be much more. Something else for me to investigate when I've got time.
:)
[squid-users] SSL Bump and certificate pinning
Mozilla have announced that Firefox 32 does public key pinning: http://monica-at-mozilla.blogspot.co.uk/2014/08/firefox-32-supports-public-key-pinning.html

Obviously this has the potential to render SSL-bump considerably less useful. At the moment it seems to be restricted to a small number of domains, but that's sure to increase. Whilst I support the idea of ensuring that traffic isn't surreptitiously intercepted, there are legitimate instances where interception is necessary *and* the user is fully aware that it is happening (and has therefore imported the proxy's CA certificate into their key chain). So I'm wondering if there is any kind of workaround to keep SSL-bump working with these sites?

1. It seems to me that imported CA certs should have some kind of flag associated with them to indicate that they should be trusted even for pinned domains.

2. I'm guessing that this is not an issue for devices that *always* go through an intercepting proxy, since presumably they would never get to see the real cert, so wouldn't pin it? So this is mainly an issue for devices that move between networks?
Re: [squid-users] External ACL tags
On 29.07.14 06:37, Amos Jeffries wrote:
> The note ACL type should match against values in the tag key name same
> as any other annotation. If that does not work try a different key name
> than tag=.

Perfect, thank you!
[squid-users] External ACL tags
I'm trying to build ACLs based on the tags returned by an external ACL, but I can't get it to work. These are the relevant bits of my config:

external_acl_type preauth children-max=1 concurrency=100 ttl=0 negative_ttl=0 %SRC %{User-Agent} %URI %METHOD /usr/sbin/squid-preauth
acl preauth external preauth
acl need_http_auth tag http_auth
http_access allow !tproxy !tproxy_ssl !https preauth
http_access allow !preauth_done preauth_tproxy
http_access allow proxy_auth postauth

I can see the external ACL is being called and setting various tags:

2014/07/28 17:29:40.634 kid1| external_acl.cc(1503) Start: externalAclLookup: looking up for '2a00:1a90:5::14 Wget/1.12%20(linux-gnu) http://nexusuk.org/%7Esteve/empty GET' in 'preauth'.
2014/07/28 17:29:40.634 kid1| external_acl.cc(1513) Start: externalAclLookup: will wait for the result of '2a00:1a90:5::14 Wget/1.12%20(linux-gnu) http://nexusuk.org/%7Esteve/empty GET' in 'preauth' (ch=0x7f1409a399f8).
2014/07/28 17:29:40.634 kid1| external_acl.cc(871) aclMatchExternal: 2a00:1a90:5::14 Wget/1.12%20(linux-gnu) http://nexusuk.org/%7Esteve/empty GET: return -1.
2014/07/28 17:29:40.634 kid1| Acl.cc(177) matches: checked: preauth = -1 async
2014/07/28 17:29:40.634 kid1| Acl.cc(177) matches: checked: http_access#7 = -1 async
2014/07/28 17:29:40.634 kid1| Acl.cc(177) matches: checked: http_access = -1 async
2014/07/28 17:29:40.635 kid1| external_acl.cc(1371) externalAclHandleReply: reply={result=ERR, notes={message: 53d67a74$2a00:1a90:5::14$baa34e80d2d5fb2549621f36616dce9000767e93b6f86b5dc8732a8c46e676ff; tag: http_auth; tag: cp_auth; tag: preauth_ok; tag: preauth_done; }}

But then when I test one of the tags, it seems that it isn't set:

2014/07/28 17:29:40.636 kid1| Acl.cc(157) matches: checking !preauth_done
2014/07/28 17:29:40.636 kid1| Acl.cc(157) matches: checking preauth_done
2014/07/28 17:29:40.636 kid1| StringData.cc(81) match: aclMatchStringList: checking 'http_auth'
2014/07/28 17:29:40.636 kid1| StringData.cc(85) match: aclMatchStringList: 'http_auth' NOT found
2014/07/28 17:29:40.636 kid1| Acl.cc(177) matches: checked: preauth_done = 0
2014/07/28 17:29:40.636 kid1| Acl.cc(177) matches: checked: !preauth_done = 1

It looks to me like it's probably only looking at the first tag that the ACL returned - is this a known bug? I couldn't spot anything in Bugzilla.
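For anyone hitting this in the archives: as Amos pointed out in his reply, the tag ACL type only sees the first tag, but the note ACL type matches annotation values by key. A sketch of the replacement ACL definitions that should work with the same helper output as above (untested here, and assuming the helper keeps returning tag= key/value annotations):

```
# "note <key> <value...>" matches any annotation with that key, so every
# tag=... returned by the helper is visible, not just the first one.
acl need_http_auth note tag http_auth
acl preauth_done   note tag preauth_done
acl preauth_ok     note tag preauth_ok
```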
Re: [squid-users] Squid in a WiFi Captive portal scenario
On 14/05/14 20:02, JMangia wrote:
> Apple, Google and Microsoft implement a sort of automatic web popup page
> that appears on connecting to a WiFi network that implements a "captive
> portal" solution. This popup appears even if the internet request is not
> coming from the browser but from some other app. On iOS / Mac OS, for
> example, once WiFi is activated the OS makes an HTTP request for
> http://www.apple.com/library/test/success.html with the special user
> agent CaptiveNetworkSupport/1.0, and to get the splash landing page the
> captive solution must simply not return "Success". Microsoft implements
> something similar with WISPr support, and Android tries to contact
> http://clients3.google.com/generate_204 or
> http://www.google.com/blank.html. My squid configuration denies any
> destination and redirects to my landing splash page, but the user needs
> to open the browser to get this splash page. I mean, if the user
> connects to the wifi network and opens any other app that uses the
> connection, no popup login page appears. Is anyone else working with
> this scenario using Squid?

iOS devices make a request to their servers as soon as they associate with a wireless network. If the request doesn't return what they expect then they assume it's a captive portal login page and pop it up. They also support WISPr - if the page has WISPr XML embedded in it then the OS will scrape the user name and password from the POST request the first time you log in and then automatically resubmit it in future rather than popping up the page.

Old iOS devices used http://www.apple.com/library/test/success.html but new versions of iOS probe a variety of URIs. If you return a login page for any request while the user isn't logged in, then it won't matter which URI they use. This works for me - Squid returns a 302 redirecting to a captive portal login page which has embedded WISPr XML. The devices pop up the login page the first time and thereafter use WISPr. Of course, the device must be able to access that page when it isn't logged in to the proxy! This works for both transparent proxy connections and non-transparent connections, but ISTR you *must* return a 302 - anything else (such as a 200 with the login page itself, or a 407) will break.

I've not had much experience with MS's devices so can't really comment. HTC Android devices have supported WISPr for quite a while I believe, but I don't think stock Android has support (or at least if it does, it's a pretty recent addition). The CoovaAX app will do WISPr on Android though. Recent OS X versions also do WISPr, but only when they are using wifi, not for wired connections (which seems an odd distinction!).

Another gotcha is that the WISPr service must be https with a trusted certificate, and ISTR the devices must be able to access the CA's servers even before being authenticated in order to verify the certificate. But I don't believe this affects the actual pop-up login screen, just WISPr automatic authentication.
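For reference, the 302-redirect trick described above can be wired up in squid.conf along these lines. This is only a sketch - the ACL name, helper and portal URL are hypothetical - and note that deny_info must name the ACL that triggers the denial so that its 302 is the one used:

```
# Unauthenticated clients fail this (hypothetical) external ACL lookup.
acl portal_auth external check_portal_session

# When the deny below is triggered by portal_auth, answer with a 302 to
# the captive portal login page instead of a 407 or an error page.
deny_info 302:https://portal.example.com/login portal_auth

http_access deny !portal_auth
```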
[squid-users] Intermittent slowness
I'm trying to debug an intermittent slowness problem with Squid 3.4.4. Unfortunately I haven't been able to figure out how to reproduce the problem, it just occurs every so often on a production server. I've not yet tried Squid 3.4.5, but there's nothing in the change log that would lead me to believe that this problem has been addressed. I've got an example web fetch from this morning: The network traffic shows: 09:01:54.489515 client - server TCP SYN 09:01:54.489541 client - server TCP SYN/ACK 09:01:54.489555 client - server TCP ACK 09:01:54.490059 client - server HTTP GET request 09:01:54.490074 client - server TCP ACK 09:02:09.492576 client - server TCP FIN (client times out, tears down) 09:02:09.53 client - server TCP ACK 09:02:35.371911 client - server TCP FIN (server tears down connection) The client is port 58469, the server is 3128. As you can see, Squid never replies to the GET request (and actually, in this case the GET request didn't require Squid to contact another server - the authentication credentials were invalid, so it should have produced a 407). Examining the Squid logs (http://persephone.nexusuk.org/~steve/cache.log.trimmed), I can see that Squid didn't accept the connection until 09:02:35.370 What seems to be happening is that helperStatefulHandleRead is being called, and taking several seconds to complete - if this happens frequently enough then the incoming connections get queued up and significantly delayed. 
See the log below:

2014/05/08 09:01:53.489 kid1| comm.cc(167) comm_read: comm_read, queueing read for local=[::] remote=[::] FD 66 flags=1; asynCall 0x7f9c174c4c00*1
2014/05/08 09:01:53.489 kid1| ModEpoll.cc(139) SetSelect: FD 66, type=1, handler=1, client_data=0x7f9bf4425328, timeout=0
2014/05/08 09:01:53.489 kid1| AsyncCallQueue.cc(53) fireNext: leaving helperHandleRead(local=[::] remote=[::] FD 66 flags=1, data=0x7f9c0a1dad78, size=120, buf=0x7f9c09f4d8a0)
2014/05/08 09:01:53.489 kid1| AsyncCallQueue.cc(51) fireNext: entering helperStatefulHandleRead(local=[::] remote=[::] FD 53 flags=1, data=0x7f9c08ae5928, size=281, buf=0x7f9bfa3e11d0)
2014/05/08 09:01:53.489 kid1| AsyncCall.cc(30) make: make call helperStatefulHandleRead [call44614375]
2014/05/08 09:01:58.329 kid1| AsyncCall.cc(18) AsyncCall: The AsyncCall helperDispatchWriteDone constructed, this=0x7f9c1a39bd40 [call44614625]
2014/05/08 09:01:58.329 kid1| Write.cc(29) Write: local=[::] remote=[::] FD 68 flags=1: sz 188: asynCall 0x7f9c1a39bd40*1

FDs 66 and 68 are connections to external ACL helpers, but the network traffic shows that the external ACLs are answering immediately, so as far as I can tell this delay isn't caused by the helpers themselves. (FD 66 received a query at 09:01:53.480992 and replied at 09:01:53.484808; FD 68 received a query at 09:01:58.334608 and replied at 09:01:58.334661.) I *think* FD 53 might be a connection to the ICAP service. I have noticed that Squid often seems to use a lot of CPU time when this problem is occurring. Unfortunately I don't know where to go with the debugging now - the current amount of debug logging produces a lot of data, but isn't really detailed enough for me to work out what's going on. And as mentioned, since I can't reproduce this problem in a test environment, I have no choice but to leave debug logging turned on on a production server. Any suggestions / help from people more familiar with the Squid internals would certainly be appreciated.
[squid-users] Broken Apple devices - repeated 407s
Apple devices seem to be pretty broken when it comes to handling authenticated proxies. However, sometimes I see behaviour so broken that it could almost be considered a DoS attack: devices that make a request, get a 407 back from the proxy, and immediately make the same request again, still with no authentication credentials - the proxy returns a 407, of course, and the client requests again... repeatedly, with no kind of back-off timer, going on for hours on end. For example:

28/Apr/2014:07:45:36.194 0 10.203.1.18 TCP_DENIED/407 4660 CONNECT p02-ubiquity.icloud.com:443 - HIER_NONE/- text/html ubd/289 CFNetwork/673.4 Darwin/13.1.0 (x86_64) (Macmini5%2C1)
28/Apr/2014:07:45:36.205 0 10.203.1.18 TCP_DENIED/407 4660 CONNECT p02-ubiquity.icloud.com:443 - HIER_NONE/- text/html ubd/289 CFNetwork/673.4 Darwin/13.1.0 (x86_64) (Macmini5%2C1)
28/Apr/2014:07:45:36.215 0 10.203.1.18 TCP_DENIED/407 4660 CONNECT p02-ubiquity.icloud.com:443 - HIER_NONE/- text/html ubd/289 CFNetwork/673.4 Darwin/13.1.0 (x86_64) (Macmini5%2C1)

(It continues like that, with about 100ms between requests.) And other similar requests:

28/Apr/2014:07:45:28.793 0 10.203.1.18 TCP_DENIED/407 4649 CONNECT keyvalueservice.icloud.com:443 - HIER_NONE/- text/html SyncedDefaults/91.30 (Mac OS X 10.9.2 (13C1021))
28/Apr/2014:07:45:58.358 0 10.203.1.18 TCP_DENIED/407 4630 CONNECT p02-caldav.icloud.com:443 - HIER_NONE/- text/html Mac_OS_X/10.9.2 (13C1021) CalendarAgent/176
28/Apr/2014:07:45:59.114 0 10.203.1.18 TCP_DENIED/407 4612 CONNECT p02-bookmarks.icloud.com:443 - HIER_NONE/- text/html CoreDAV/229.6 (13C1021)

etc... It happens from both OS X and iOS devices every so often (and presumably flattens an iPhone's battery pretty quickly!). Clearly this is a bug in Apple's software (which I have reported, but they seem uninterested in fixing it*), but I'm wondering if anyone else has observed this behaviour and come up with any good ideas to mitigate it on the proxy side?
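One possible mitigation for the loop described above - a hedged sketch only, since exempting endpoints from authentication has obvious policy implications and the ACL name and domain list are illustrative, not taken from any tested config - is to stop those destinations ever receiving a 407:

```
# Illustrative squid.conf fragment: exempt the looping iCloud endpoints from
# proxy authentication so they never receive a 407 challenge.
acl apple_broken dstdomain .icloud.com
http_access allow apple_broken
# ...existing authenticated rules follow...
acl proxy_auth proxy_auth REQUIRED
http_access allow proxy_auth
http_access deny all
```

Because the rules are evaluated in order, the `allow apple_broken` line must come before the `proxy_auth` rule; otherwise the challenge is still sent.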
<rant>
* Apple's bug reporting process seems to be:
1. I report a bug with lots of information about the OS version on the device, how to replicate the problem, etc.
2. They sit on it for a few weeks before asking me to provide lots of logs from the device itself, which I generally can't easily do because I don't personally have the device.
3. I jump through the hoops and provide them with the information they request.
4. They sit on the bug and never bother to respond or fix it.
So given that (3) involves me spending quite a bit of time getting hold of a device and replicating the problem, even though I provided them enough information to do this themselves, and it basically seems to be a complete waste of my time since they then ignore the bug, I've largely given up reporting bugs now... Which is a shame - I don't mind spending time collecting debugging information if it's actually going to help get the bug fixed, but with Apple this doesn't seem to happen.
</rant>
[squid-users] Debugging slowness
I'm trying to debug some slowness issues with Squid 3.4.4. It is currently under reasonably light use, and every so often it becomes very slow. From what I can tell, the client sends a request with Negotiate auth credentials in it. The proxy should respond with a 407 and the Negotiate challenge, but instead it sometimes just sits there for ages... sometimes for a minute or so! (I'm not 100% convinced that this is specific to Negotiate auth though.) Squid does not appear to be running out of file descriptors. The problem is intermittent, which is making debugging a pain. I have a tcpdump running and full logging turned on in Squid, so hopefully I can catch some useful information the next time the problem occurs. My question is: once I've identified a specific request that has experienced the problem and want to track what Squid was doing with it, is there any sensible way of filtering cache.log to exclude the other requests that were happening concurrently? Secondly, does anyone have any suggestions for what specific logging I should turn on, rather than logging everything, since logging everything slows the proxy down significantly? Thanks.
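On the second question, one common approach (a sketch - the section numbers below are from Squid's debug-sections list and are worth double-checking against the 3.4 source, since they vary between versions) is to leave everything at level 1 and raise verbosity only for the subsystems likely involved:

```
# Illustrative squid.conf: selective debug sections instead of ALL,9.
#   5 = socket functions, 29 = authenticator,
#  33 = client-side routines, 84 = helper processes
debug_options ALL,1 5,5 29,9 33,5 84,9
```

For the first question, once a file descriptor or async call id is known (e.g. "FD 66" or "call44614375" in the logs quoted elsewhere in this thread), grepping cache.log for that token is about the only filter available, bearing in mind that FD numbers are reused between connections.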
Re: [squid-users] Destination address rewriting for TPROXY
On 03/12/13 04:01, Amos Jeffries wrote: Does the patch from http://bugs.squid-cache.org/show_bug.cgi?id=3589 fix this for you? Thanks for the reply... The answer is yes and no. :) The patch causes Squid to connect to the right place, but it appears that the ACLs don't necessarily get re-evaluated. The relevant chunk of my Squid config is:

# cache_peer for the local webserver to prevent tproxy spoofing of requests to localhost
cache_peer [::1] parent 80 0 proxy-only no-query no-digest no-tproxy originserver name=localhost_80
cache_peer_access localhost_80 deny !port_80
cache_peer_access localhost_80 allow to_localhost
cache_peer_access localhost_80 deny all
cache_peer [::1] parent 3129 0 proxy-only no-query no-digest no-tproxy name=caching
cache_peer_access caching deny to_localhost
cache_peer_access caching deny CONNECT
cache_peer_access caching deny https
cache_peer_access caching deny tproxy_ssl
cache_peer_access caching allow all
adaptation_access iceni_respmod deny to_localhost
adaptation_access iceni_respmod allow all

During REQMOD, the ICAP server decides whether or not the web request should be blocked. For unblocked requests it either loops back the request unaltered or returns a 204. For blocked requests, it rewrites the request to go to http://localhost/blah and the local webserver does the heavy lifting of presenting an error page to the user. The cache_peer and cache_peer_access lines should cause Squid to send the http://localhost/blah requests directly to the local web server without tproxy spoofing; all other HTTP traffic goes via an upstream caching proxy. The adaptation_access line should prevent the http://localhost/blah requests going through the ICAP RESPMOD service. However, the to_localhost ACL doesn't seem to be working: the rewritten requests are still being sent through RESPMOD and the upstream proxy when the system is used as a transparent proxy, even though this works correctly for non-transparent proxying.
Replacing the to_localhost ACL with one that checks dstdomain = localhost works as expected, so this is a reasonable stop-gap, but it does seem that to_localhost is behaving in an unexpected way, since its behaviour changes depending on whether the proxy is transparent or not.
[squid-users] Destination address rewriting for TPROXY
I'm using an ICAP reqmod service to change the URI of certain requests (including the host name). When running under a non-transparent proxy this works fine. However, when using TPROXY, Squid uses the original destination IP address of the connection rather than the Host header to determine where to connect to, so modifying the request doesn't cause Squid to actually connect to a different host. Is there any way to force Squid to connect to the host in the rewritten request, rather than continuing to connect to the original IP address? I'm aware of the client_dst_passthru off option, which sounds like it would almost do what I want, except the manual says that this option gets forced back on for requests that fail host verification.
Re: [squid-users] CLOSE_WAIT
On 23.01.13 05:12, Amos Jeffries wrote: IIRC we tried that but it resulted in early closure of CONNECT tunnels and a few other bad side effects on the tunnelled traffic. Due to the way tunnel.cc and client_side.cc code interacts (badly) the client-side code cannot know whether the tunnel is still operating or has data buffered when it gets to the point of emitting that message. Revisiting this problem, I can't see why keepaliveNextRequest() would be getting called after a successful CONNECT. As far as I can tell, client_side_request.cc calls tunnelStart() and then does nothing else with the connection. If we can't connect to the remote host for whatever reason, tunnel.cc calls errorSend() and all the code paths seem to lead to the socket being closed; if we can connect, then I don't think client_side_request.cc touches the socket again.
Re: [squid-users] CLOSE_WAIT
On 11.01.13 00:06, Amos Jeffries wrote: So it seems apparent that after Squid delivers the clear-text response, it abandons the socket but never closes it. From looking in the source, this is client_side.cc, and it has a comment: // XXX: Can this happen? CONNECT tunnels have deferredRequest set. It looks to me as if the (conn->flags.readMore) section above should be the bit being executed, although I don't quite understand deferred requests. In either case, it seems like we should close the socket if it ever gets abandoned? Calling conn->clientConnection->close() from the else branch where the connection is abandoned seems the right thing to do. Is there any situation where closing the connection when it is abandoned is the wrong thing to do? However, since the CONNECT and the response were both served with a Connection: keep-alive header, it seems that readMore should really be true at this point anyway. clientProcessRequest() explicitly sets readMore = false for CONNECT requests, so I don't understand how Squid handles keep-alive CONNECT tunnels?
[squid-users] Squid 3.2.6 fails to handle large POSTs when returning errors
I've come across what appears to be a bug in Squid's handling of POST requests when returning an error to the client. If the UA makes a POST request with a reasonably large object (e.g. a file upload of a few megabytes) and Squid needs to return an error (I've tested this for 403 and 407 responses), it returns the response immediately after receiving the request headers. The client continues to send the POST body and Squid continues to read it. However, eventually, Squid logs:

2013/01/17 17:50:50.780 kid1| client_side.cc(2322) maybeMakeSpaceAvailable: request buffer full: client_request_buffer_max_size=524288

Squid stops reading from the client and, since the client is still sending, the socket's rx queue grows. Eventually the rx queue is full and the TCP stack shrinks the TCP window to 0. The client and Squid both sit there doing nothing with an open socket until the client drops the connection (which may be some time). So it seems that Squid is storing the POST body in an internal buffer, waiting to do something with it that will never happen. This is reproducible by setting an ACL:

http_access deny all

Then make a large POST request via the proxy:

curl --proxy http://yourproxy:3128 -v -F foo=@somefile -H "Expect:" http://someurl

where somefile is a file of a few megabytes in size. I've seen this in the field with Internet Explorer and proxy authentication - uploading a file can result in IE making the initial unauthenticated request and then hanging, waiting for the upload to complete before redoing the request with auth credentials.
[squid-users] Marking squid-webserver traffic
I'm setting up some traffic routing to use Squid's TPROXY with a separate router, so the network design looks like:

Clients --- Squid
              |
            Router
              |
           Internet

There will be a GRE tunnel between Squid and the router. So the idea is:
- The router intercepts web requests from the clients, uses iptables to mark them and routes them over the GRE tunnel to Squid.
- The Squid proxy machine intercepts the traffic coming from the GRE interface and redirects it to TPROXY.
- Squid does its thing, probably making a request to a web server.
- The traffic to the web server is routed over the GRE tunnel back to the router.
- The router CONNMARKs the traffic from the GRE tunnel and directs it out to the internet.
- Reply traffic from the webserver has its connmark restored by the router and is sent back over the GRE tunnel to Squid.
- Squid's response to the client is sent over the GRE tunnel to the router.
- The router sends the response on to the client.

I can do everything except identify Squid's requests to the web server and therefore route them back over GRE. I could use tcp_outgoing_tos and then route based on ToS, but I'd prefer to avoid abusing the ToS flags - is there a similar way of setting the fwmark? qos_flows only seems to control the replies to the client rather than the requests to the web server... I've read through the documentation for setting up WCCP, but as far as I can see the example configurations only route client-to-Squid traffic via GRE, and the Squid-to-client and Squid-to-webserver traffic all follows the usual routing instead (which would require Squid to have its own dedicated connection to the router).
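One avenue that might be worth exploring - strictly a sketch, since whether `-m owner` matches TPROXY-spoofed packets needs verifying, and the mark value, table number, interface name and squid username are all made up - is marking locally generated packets by owning user and policy-routing them over the tunnel:

```
# Mark packets generated by the local squid user (assumes Squid runs as
# user "squid"; the mark value and table number are illustrative).
iptables -t mangle -A OUTPUT -m owner --uid-owner squid -j MARK --set-mark 0x10
# Policy-route marked traffic out via the GRE tunnel to the router.
ip rule add fwmark 0x10 table 100
ip route add default dev gre1 table 100
```

The attraction over tcp_outgoing_tos is that the ToS field stays untouched; the caveat is that it marks everything Squid emits, not just webserver-bound requests.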
Re: [squid-users] CLOSE_WAIT
On 11/01/13 00:06, Amos Jeffries wrote: Okay. So the source of the problem is that Squid thinks there is something using the socket, but it never got into the tunnel code which would have closed it? Is the 403 message being generated inside Squid or by ICAP? The 403 isn't generated by Squid. It can be generated in two different ways (both result in the same problem): 1. The REQMOD ICAP server generates a 403 Forbidden HTTP response. 2. The REQMOD ICAP server rewrites the HTTP request to GET from a local webserver, and that generates the 403 Forbidden response. In the case where the server generates the 403 and things hang it will be because Squid is expecting a close to arrive from server or client. Setting the Connection: close HTTP header in the 403 response doesn't help. Once a tunnel is open and transmitting data it is up to those endpoints to ensure keep-alive and other such details, although Squid applies a timeout since last byte transferred. Both the browser and the web server have closed the connection, but Squid isn't closing its side of the connection to the browser. Squid is also not reading from the connection to the browser between the 403 response being sent and the browser dropping the connection (anything the browser sends after the 403 just piles up in the socket's rx buffer).
Re: [squid-users] CLOSE_WAIT
On 09/01/13 21:07, Amos Jeffries wrote: Does the CONNECT request contain Connection:close or Connection:keep-alive? Squid supports keep-alive on CONNECT requests in these situations where the CONNECT size is known and may be waiting for another client request. The client sends Proxy-Connection: keep-alive, so it would indeed be possible for the connection to be reused. I should add that I'm seeing this with a non-transparent configuration (i.e. without TPROXY and without ssl_bump). Please upgrade to 3.2.6 (should anyway for the CVE resolved there) and see if this issue is gone there. I've now upgraded my test server and the problem remains. The 403 Forbidden being sent back to the client didn't have a Content-Length header, which of course caused the client to drop the connection. If I add a Content-Length, the connection stays alive and I think it's a bit more obvious what's happening:

1. Client connects to Squid and sends a CONNECT.
2. Squid passes the request to the ICAP REQMOD service.
3. ICAP rewrites the request.
4. Squid returns a 403 Forbidden to the client in the clear.

At this point, the connection is still alive with empty queues. Squid's cache.log shows:

2013/01/10 17:52:18 kid1| abandoning local=[2a00:1a90:5::9]:3128 remote=[2001:4d48:ad51:501:226:bbff:fe18:f3ff]:49926 FD 19 flags=1

Now, the user retries the connection:

5. Client sends a CONNECT over the existing connection.
6. Squid does not read the socket (netstat shows the rx queue on the socket is 220 octets long, which is the whole request the client just sent).
7. The client sits there spinning.
8. Eventually the client times out and drops the connection. The connection is now in CLOSE_WAIT on the Squid server and will remain like that indefinitely.

So it seems apparent that after Squid delivers the clear-text response, it abandons the socket but never closes it. From looking in the source, this is client_side.cc, and it has a comment: // XXX: Can this happen?
CONNECT tunnels have deferredRequest set. It looks to me as if the (conn->flags.readMore) section above should be the bit being executed, although I don't quite understand deferred requests. In either case, it seems like we should close the socket if it ever gets abandoned? A few notes on the REQMOD rewrite: my ICAP service can generate a 403 Forbidden response during REQMOD, or rewrite the request to a GET to a local web server. In the latter case, the local webserver generates the 403 Forbidden response. In both cases the client sees a similar response to its request, and in both cases this bug manifests. Interestingly, if the ICAP service closes the ICAP connection instead of returning a response, Squid generates a 500 Internal Server Error and does not abandon the socket (the client then drops the connection, which Squid handles correctly, and therefore it doesn't end in CLOSE_WAIT).
[squid-users] CLOSE_WAIT
I have a busy Squid 3.2.3 server that constantly has a huge number of connections tied up in CLOSE_WAIT (at the moment it has 364 ESTABLISHED but 3622 in CLOSE_WAIT):

tcp 1 0 :::172.23.3.254:8080 :::172.23.2.158:49615 CLOSE_WAIT 32303/(squid-1)

All of these sockets have an rx queue containing 1 byte. I'm aware that I can reduce client_lifetime, but I'm wondering if I have a more fundamental problem somewhere, since Squid appears not to be flushing the queue. Can anyone cast any light on this behaviour? Other servers running various versions of Squid (including 3.2.3) don't seem to exhibit the problem to such an extent (I'm still seeing a number of CLOSE_WAIT sockets with an rx queue length of 1 on these servers, but in relatively small quantities).
Re: [squid-users] CLOSE_WAIT
On 09/01/13 10:14, Steve Hill wrote: I have a busy Squid 3.2.3 server that constantly has a huge number of connections tied up in CLOSE_WAIT (i.e. at the moment it has 364 ESTABLISHED but 3622 in CLOSE_WAIT). tcp 1 0 :::172.23.3.254:8080 :::172.23.2.158:49615 CLOSE_WAIT 32303/(squid-1) Further to this, it appears that this is triggered by ICAP REQMOD rewrites of CONNECT requests:

1. Client sends a CONNECT foo.example.com:443 HTTP/1.1 request to the proxy.
2. Squid passes the request to the ICAP REQMOD service.
3. The ICAP REQMOD service wants to deny the request, so rewrites the request.
4. Squid returns a 403 Forbidden response to the client in clear text (this is allowed, as it is seen by the client as a response from the proxy rather than a response from the web server, although very few clients actually display the page contents these days due to security restrictions).
5. The client sends a FIN.

At this point, the socket stays open on the Squid server - Squid never closes it, and there is 1 byte in the socket's rx queue. I have no idea what that 1 byte is though. (Since all requests are terminated with \r\n, maybe Squid doesn't read the \n?)
[squid-users] TPROXY with IPv6
Squid's TPROXY sockets only seem to bind to the IPv4 stack. Some Googling suggests it can be made to work with IPv6, but I've not found anything explaining how. What am I missing? Thanks.
Re: [squid-users] TPROXY with IPv6
On 20.12.12 13:58, Paweł Mojski wrote: Search the list archives. I posted a working config for IPv6 a few months ago. Thanks - I found your config: http://www.squid-cache.org/mail-archive/squid-users/201206/0281.html It didn't explain how it could work when Squid only binds the tproxy socket to the IPv4 stack. However, I just restarted Squid and it has now bound to the IPv6 stack, so I'm not sure what was previously preventing it. Anyway, it looks like the problem is solved - thanks.
[squid-users] Negotiate NTLM authentication broken?, 3.2.3
I've just upgraded a machine from Squid 3.2.0 to 3.2.3 and can't seem to get the Negotiate authenticator to work any more. From the traffic, I can see:

1. The client sends an unauthenticated request.
2. Squid returns a 407 with Proxy-Authenticate: Negotiate.
3. The client resends the request with Proxy-Authorization: Negotiate TlRMTVNTUAABl4II4gAGAbEdDw==
4. Squid returns a 407 with no Proxy-Authenticate header.

Example traffic:

GET http://example.com HTTP/1.1
Proxy-Authorization: Negotiate TlRMTVNTUAABl4II4gAGAbEdDw==

HTTP/1.1 407 Proxy Authentication Required
Server: squid/3.2.3
Mime-Version: 1.0
Date: Fri, 07 Dec 2012 16:22:58 GMT
Content-Type: text/html
Content-Length: 3878
X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
Vary: Accept-Language
Content-Language: en
X-Cache: MISS from foo
X-Cache-Lookup: NONE from foo:3128
Via: 1.1 foo (squid/3.2.3)
Connection: keep-alive

This does not appear to be a problem with negotiate_wrapper itself, as I can see from the logs that Squid has got a challenge string from it:

2012/12/07 16:29:39.051 kid1| UserRequest.cc(170) authenticate: need to challenge client 'TlRMTVNTUAACBgAGADAVgonifVf3m5EEkgIAAC4ALgA2SwBTAEIAAgAGAEsAUwBCAAEACgBJAEMARQBOAEkABAMACgBpAGMAZQBuAGkAAA=='!

Everything I see in the logs indicates that Squid knows it has to send the challenge to the client, but the header never makes it into the response.
I've trimmed my configuration down to a minimum:

debug_options ALL,9
auth_param negotiate program /usr/lib64/squid/negotiate_wrapper_auth -d --ntlm /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --domain=FOO --kerberos /usr/lib64/squid/negotiate_kerberos_auth -s HTTP/foo
auth_param negotiate children 50
auth_param negotiate keep_alive off
auth_param basic program /usr/lib64/squid/basic_pam_auth
auth_param basic children 50
auth_param basic realm Iceni Web Proxy
auth_param basic credentialsttl 2 hours
acl proxy_auth proxy_auth REQUIRED
http_access allow proxy_auth
http_access deny all
icp_access deny all
htcp_access deny all
http_port 3128
hierarchy_stoplist cgi-bin ?
logformat iceni %tg.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt %{User-Agent}h
access_log stdio:/var/log/squid/access.log iceni
cache_log /var/log/squid/cache.log
cache_store_log stdio:/var/log/squid/store.log
pid_filename /var/run/squid.pid
coredump_dir /var/spool/squid-nocache

The appropriate parts of cache.log are available at: http://persephone.nexusuk.org/~steve/cache.log
Re: [squid-users] A way to redirect google/Youtube SSL
On 28.11.12 23:22, David Touzeau wrote: Thanks !!! But what about Youtube ? I'm not aware of anything similar for YouTube, I'm afraid, but if you come across anything I'd be very interested. The other possibility is to ssl-bump the HTTPS sessions, but that's a bit nasty.
[squid-users] Tproxy without spoofed source address
I need to transparently proxy traffic, and the best way to do this seems to be to use tproxy, since that allows IPv6 traffic to be supported. However, when using tproxy, Squid spoofs the client's source address when making the connection to the web server - this is something I don't need, and it breaks requests that end up going to web servers on the local network, since the return traffic from the web server goes straight back to the client instead of back to Squid. So far the only way I've found to disable the spoofing behaviour is to send the traffic via a non-transparent upstream proxy. Is there some way to turn off source address spoofing without using a second proxy?
Re: [squid-users] A way to redirect google/Youtube SSL
On 28.11.12 13:52, David Touzeau wrote:
> Since Google and Youtube force browsers to use SSL we have a lack of statistics and web filtering with Squid. Is there a good way to redirect SSL requests to google/Youtube to non-encrypted requests?

Google allows you to do this by redirecting requests to nosslsearch.google.com:
http://support.google.com/websearch/bin/answer.py?hl=en&answer=186669

Look at the link at the bottom of the page - "Information for school network administrators about the No-SSL option". You can follow their advice and bodge DNS records into your DNS server, add the address to your /etc/hosts file, or use an ICAP server to rewrite the CONNECT requests. Beware that the HTTP traffic itself must be unmodified (e.g. the GET and Host headers must still point at www.google.co.uk or wherever) - just the IP address you connect to changes.

--
 - Steve
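[As a sketch of the /etc/hosts approach mentioned above: the address below is a documentation-range placeholder, not Google's real address - resolve nosslsearch.google.com yourself, since Google's addresses change:

```
# Hypothetical /etc/hosts entries on the proxy box: point Google search
# hostnames at the nosslsearch.google.com address (placeholder shown).
# The Host header sent by clients is untouched; only the connect address changes.
203.0.113.10    www.google.com
203.0.113.10    www.google.co.uk
```

The DNS-server variant is the same idea done with an A/CNAME record so every client on the network is covered, not just the proxy.]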
[squid-users] ICAP breaks HTTP responses with 1 octet bodies
I'm having some problems with Squid's ICAP client breaking on RESPMOD when handling responses with a body exactly 1 octet long:

- The browser makes a request to Squid.
- Squid makes a request to the web server and receives back a response with a Content-Length: 1 header and a 1 octet body.
- The response gets sent to the ICAP server, which replies with a 204 No Modifications.
- Squid sends the response on to the browser with the Content-Length: 1 header intact, but doesn't send a body. The browser sits there indefinitely waiting for the body to appear.

As far as I can tell, the ICAP client successfully copies the body from the virgin response to the adapted response. The problem appears to be with ServerStateData::handleMoreAdaptedBodyAvailable() - the API for StoreEntry::bytesWanted() seems to be that it will return 0 if it wants no data, or up to aRange.end-1 if more data is wanted. This means that if aRange.end == 1, which is the case when we only have 1 octet of data, it is always going to look like the entry can't accept any more data. The fact that it can't accept a single octet is actually noted in ServerStateData::handleMoreAdaptedBodyAvailable() in Server.cc:

    // XXX: bytesWanted API does not allow us to write just one byte!

I've resolved this problem by changing the following lines in ServerStateData::handleMoreAdaptedBodyAvailable():

    const size_t bytesWanted = entry->bytesWanted(Range<size_t>(0, contentSize));
    const size_t spaceAvailable = bytesWanted > 0 ? (bytesWanted + 1) : 0;

To:

    const size_t spaceAvailable = entry->bytesWanted(Range<size_t>(0, contentSize+1));

(Patch attached.) However, I'm not sure if this is a correct fix - what effect does adding 1 to the contentSize given to bytesWanted() actually have? And is this supposed to be handled elsewhere?
--
 - Steve

Index: src/Server.cc
===================================================================
--- src/Server.cc (revision 115)
+++ src/Server.cc (working copy)
@@ -723,8 +723,7 @@
     // XXX: entry->bytesWanted returns contentSize-1 if entry can accept data.
     // We have to add 1 to avoid suspending forever.
-    const size_t bytesWanted = entry->bytesWanted(Range<size_t>(0, contentSize));
-    const size_t spaceAvailable = bytesWanted > 0 ? (bytesWanted + 1) : 0;
+    const size_t spaceAvailable = entry->bytesWanted(Range<size_t>(0, contentSize+1));

     if (spaceAvailable < contentSize ) {
         // No or partial body data consuming
@@ -734,8 +733,7 @@
         entry->deferProducer(call);
     }

-    // XXX: bytesWanted API does not allow us to write just one byte!
-    if (!spaceAvailable && contentSize > 1) {
+    if (!spaceAvailable && contentSize > 0) {
         debugs(11, 5, HERE << "NOT storing " << contentSize << " bytes of adapted " <<
                "response body at offset " << adaptedBodySource->consumedSize());
         return;
Re: [squid-users] Squid 3.0 icap HIT
On Sat, 6 Nov 2010, Luis Enrique Sanchez Arce wrote:
> When squid resolves the resource from cache it does not send the answer to ICAP. How can I change this behavior?

You need a respmod_postcache hook, which unfortunately hasn't been implemented yet.

The workaround I use is to run two separate Squid instances - one of them does all the usual caching stuff and listens only on [::1]:3129. A second Squid instance runs with caching turned off entirely, forwarding requests to [::1]:3129. The second Squid instance is configured to talk to the ICAP service, and all the clients connect to the second instance.

My configuration for the non-caching Squid instance that talks to the ICAP server is here:
https://subversion.opendium.net/trac/free/browser/thirdparty/squid/trunk/extra_sources/squid-nocache.conf

This effectively provides a precache reqmod hook (reqmod_precache) and a postcache respmod hook (respmod_precache). The caching Squid would provide the same precache reqmod hook (reqmod_precache) and a precache respmod hook (respmod_precache), although I don't have a use for these myself.

It's a bit nasty, but it happens to work. :)

--
 - Steve
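[A minimal sketch of the two-instance arrangement described above. This is not the linked squid-nocache.conf - the ports, ICAP service name, and URL are illustrative, and directive spellings follow the Squid 3.1 era:

```
# --- caching instance (listens only on loopback) ---
http_port [::1]:3129
cache_dir ufs /var/spool/squid 10000 16 256

# --- front instance (clients connect here; no caching, talks to ICAP) ---
http_port 3128
cache deny all                          # disable caching entirely
cache_peer ::1 parent 3129 0 no-query default
never_direct allow all                  # force everything via the caching peer
icap_enable on
icap_service filter_resp respmod_precache bypass=0 icap://127.0.0.1:1344/respmod
adaptation_access filter_resp allow all
```

Because the front instance's respmod_precache runs after the caching instance has answered, it behaves like a respmod_postcache hook from the cache's point of view.]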
Re: [squid-users] Leaking ICAP connections
On Mon, 18 Oct 2010, Amos Jeffries wrote:
> Sounds a lot to me like some rare response from ICAP which confuses Squid about the reply size.

This is possible, although shouldn't Squid time out ICAP requests (and close the connection) if the response takes too long to complete?

> Or a persistent connection race leaking FD.

By "persistent connection race" do you mean the ICAP server closing the connection while Squid is trying to use it? This is definitely not what is happening - my ICAP server never tries to close the connection (http://bugs.squid-cache.org/show_bug.cgi?id=2980 suggests it would be unsafe to try to do so) and the connections are still in the established state (according to both netstat and the ICAP server itself). It seems as though Squid just decides to stop using the connection but doesn't actually shut it down. Unfortunately I'm too unfamiliar with the internal structure of Squid to attach a debugger to the process and see what it thinks the ICAP connections are doing.

> Pity it's so rare, the fix is likely to require a trace of the ICAP server response headers.

This is possibly something I can help with. I added debugging code to the ICAP server to maintain a short in-memory buffer of the last few pieces of data written to the socket, so that I could attach gdb to the process and dump it.
The result was: ICAP/1.0 204 No modifications needed\r\nDate: Thu, 23 Sep 2010 08:56:37 +\r\nService: Iceni Webfilter\r\nISTag: SHA1:dff441bfae699b4007b3c5f29ab\r\nEncapsulated: null-body=0\r\n\r\nICAP/1.0 204 No modifications needed\r\nDate: Thu, 23 Sep 2010 08:56:37 +\r\nService: Iceni Webfilter\r\nISTag: SHA1:dff441bfae699b4007b3c5f29ab\r\nEncapsulated: null-body=0\r\n\r\nICAP/1.0 204 No modifications needed\r\nDate: Thu, 23 Sep 2010 08:56:37 +\r\nService: Iceni Webfilter\r\nISTag: SHA1:dff441bfae699b4007b3c5f29ab\r\nEncapsulated: null-body=0\r\n\r\nICAP/1.0 204 No modifications needed\r\nDate: Thu, 23 Sep 2010 08:56:37 +\r\nService: Iceni Webfilter\r\nISTag: SHA1:dff441bfae699b4007b3c5f29ab\r\nEncapsulated: null-body=0\r\n\r\nICAP/1.0 204 No modifications needed\r\nDate: Thu, 23 Sep 2010 08:56:38 +\r\nService: Iceni Webfilter\r\nISTag: SHA1:dff441bfae699b4007b3c5f29ab\r\nEncapsulated: null-body=0\r\n\r\nICAP/1.0 204 No modifications needed\r\nDate: Thu, 23 Sep 2010 08:56:38 +\r\nService: Iceni Webfilter\r\nISTag: SHA1:dff441bfae699b4007b3c5f29ab\r\nEncapsulated: null-body=0\r\n\r\nICAP/1.0 204 No modifications needed\r\nDate: Thu, 23 Sep 2010 08:56:38 +\r\nService: Iceni Webfilter\r\nISTag: SHA1:dff441bfae699b4007b3c5f29ab\r\nEncapsulated: null-body=0\r\n\r\n I can't see anything wrong with these headers. 
The last chunk of data received by the ICAP server was: f0\r\n\304\324\315\216\2330\020\000\340\373\005\227%\255\224%\343?\fTho}\200\366\330V\025\204\t\270\313\217\vN\233\250\332w\257\215vW!\240H\221\252f\016\366`\214\261?\214\217\370\204?\3678\230\340\363\266W\332\004][wY\361nEh$(\243$\226R\362\325\372\316\263\361\'\307]\327c\3421X{\331\316`\237x\241\313U\223\2258$\336\227\261\333\330\325{K]T=\356\022o\365us\314\332b\300\254\337V\217X\244\3047x0i\331e\346\236B\327\332WO\350\017\346\2501\035\306a}\325\224\337\367}\235\252!\320\252}r\035\202m\327\334\323\217\272\352LgkJ\200\330\252\301\302\225:\327j\353\232\200\210\340\207.\375A\017\312`Z\366\231\252\'C\370:e\360\262\266\327\260o\262\023\255\214\321\311f\243\032\376`\366y\340f\215\r\n0\r\n\r\n This looks like the end of a correctly terminated ICAP request, but there isn't much more I can tell from it. I will attempt to capture further data -- - Steve xmpp:st...@nexusuk.org sip:st...@nexusuk.org http://www.nexusuk.org/ Servatis a periculum, servatis a maleficum - Whisper, Evanescence
Re: [squid-users] Leaking ICAP connections
On Fri, 15 Oct 2010, Amos Jeffries wrote:
> First step is upgrading to 3.1.8 to see if it's one of the many found and solved bugs. If it still remains, check bugzilla for any references.

I'll certainly check with the latest Squid, but I haven't found anything in bugzilla to suggest that this bug is either known about or fixed. This bug tends to take several weeks or months to show up (and for some reason only appears on 2 of our servers), so just checking a slightly newer release on the off-chance that the bug has been fixed by accident is going to take a really long time. :(

--
 - Steve
[squid-users] Leaking ICAP connections
I am using Squid 3.1.0.14, configured to make REQMOD and RESPMOD requests to a local ICAP server. Everything seems to work fine, except that the number of connections between Squid and the ICAP server sometimes keeps increasing over the course of days or weeks.

I haven't been able to figure out what is triggering the problem, but it appears that in certain circumstances Squid just stops using one of the ICAP connections - from what I can tell, the ICAP server has finished dealing with a request and is waiting for the next one, but Squid never sends a new request. Squid continues to operate ok, bringing up more ICAP connections to accommodate more client requests whilst the lost connection stays dormant.

The ICAP server is configured to allow a maximum of 100 concurrent connections, and eventually the number of lost connections becomes so great that this limit is reached and the ICAP server starts rejecting the new connections that Squid is bringing up. At this point the users start getting Squid's ICAP error page.

Since I'm unfamiliar with the internal structure of Squid, I'm not really sure where to begin with debugging Squid itself. I think I've done as much debugging as is possible from the ICAP server side; this seems to indicate that the ICAP session itself has been fine - the last ICAP request from Squid looks fine and has terminated, the last ICAP response from the ICAP server looks fine, and the server is sat waiting for a new request that never comes.

This problem isn't something that can be reliably worked around on the ICAP server end - the ICAP server has no way of knowing whether a connection from Squid has been lost (i.e. still open but will never again be used) or is simply idle.
Because of this, having the ICAP server time out idle connections would introduce a race condition - if the connection is just idle, rather than lost, the ICAP server might time it out and close it just as Squid starts sending a new request; in this case the request would fail and the user would get an error page.

Any suggestions on how to debug the problem would be gratefully received. Thanks.

--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com
Re: [squid-users] [PATCH] Raw URL path ACL
On Mon, 21 Jun 2004, Muthukumar wrote:
> One more change is needed in the patch: make that acl available in squid.conf, with your detailed comments for it. If you wish, make that change on the patch and send it to the list with a CC to henrick.

Ok, fixed that - the modified patch is attached.

---
The attached patch against squid-2.5.STABLE5 adds a new ACL type called urlpath_raw_regex. It works in exactly the same way as urlpath_regex except that no unescaping of the URI is done first, which makes it possible to filter specific attacks that escape some characters in the URI without blocking legitimate requests. I.e. you can filter URIs containing "%2easp" (the signature of some attacks) without blocking legitimate requests for ".asp".
---

 - Steve Hill
   Senior Software Developer    Email: [EMAIL PROTECTED]
   Navaho Technologies Ltd.     Tel: +44-870-7034015

        ... Alcohol and calculus don't mix - Don't drink and derive! ...

diff -urN squid-2.5.STABLE5.vanilla/src/acl.c squid-2.5.STABLE5/src/acl.c
--- squid-2.5.STABLE5.vanilla/src/acl.c 2004-02-27 17:36:35.0 +0100
+++ squid-2.5.STABLE5/src/acl.c 2004-06-22 10:23:34.839051573 +0200
@@ -128,6 +128,8 @@
 	return ACL_URLPATH_REGEX;
     if (!strcmp(s, "urlpath_regex"))
 	return ACL_URLPATH_REGEX;
+    if (!strcmp(s, "urlpath_raw_regex"))
+	return ACL_URLPATH_RAW_REGEX;
     if (!strcmp(s, "url_regex"))
 	return ACL_URL_REGEX;
     if (!strcmp(s, "port"))
@@ -204,6 +206,8 @@
 	return "time";
     if (type == ACL_URLPATH_REGEX)
 	return "urlpath_regex";
+    if (type == ACL_URLPATH_RAW_REGEX)
+	return "urlpath_raw_regex";
     if (type == ACL_URL_REGEX)
 	return "url_regex";
     if (type == ACL_URL_PORT)
@@ -746,6 +750,7 @@
     case ACL_URL_REGEX:
     case ACL_URLLOGIN:
     case ACL_URLPATH_REGEX:
+    case ACL_URLPATH_RAW_REGEX:
     case ACL_BROWSER:
     case ACL_REFERER_REGEX:
     case ACL_SRC_DOM_REGEX:
@@ -1474,6 +1479,7 @@
     case ACL_REP_MIME_TYPE:
     case ACL_REQ_MIME_TYPE:
     case ACL_URLPATH_REGEX:
+    case ACL_URLPATH_RAW_REGEX:
     case ACL_URL_PORT:
     case ACL_URL_REGEX:
     case ACL_URLLOGIN:
@@ -1574,6 +1580,12 @@
 	safe_free(esc_buf);
 	return k;
 	/* NOTREACHED */
+    case ACL_URLPATH_RAW_REGEX:
+	esc_buf = xstrdup(strBuf(r->urlpath));
+	k = aclMatchRegex(ae->data, esc_buf);
+	safe_free(esc_buf);
+	return k;
+	/* NOTREACHED */
     case ACL_URL_REGEX:
 	esc_buf = xstrdup(urlCanonical(r));
 	rfc1738_unescape(esc_buf);
@@ -2155,6 +2167,7 @@
     case ACL_URL_REGEX:
     case ACL_URLLOGIN:
     case ACL_URLPATH_REGEX:
+    case ACL_URLPATH_RAW_REGEX:
     case ACL_BROWSER:
     case ACL_REFERER_REGEX:
     case ACL_SRC_DOM_REGEX:
@@ -2570,7 +2583,7 @@
     case ACL_PROXY_AUTH_REGEX:
     case ACL_URL_REGEX:
     case ACL_URLLOGIN:
-    case ACL_URLPATH_REGEX:
+    case ACL_URLPATH_RAW_REGEX:
     case ACL_BROWSER:
     case ACL_REFERER_REGEX:
     case ACL_SRC_DOM_REGEX:
diff -urN squid-2.5.STABLE5.vanilla/src/cf.data.pre squid-2.5.STABLE5/src/cf.data.pre
--- squid-2.5.STABLE5.vanilla/src/cf.data.pre 2004-02-10 22:01:21.0 +0100
+++ squid-2.5.STABLE5/src/cf.data.pre 2004-06-22 10:36:53.516068180 +0200
@@ -2004,6 +2004,7 @@
 	  h1:m1 must be less than h2:m2
 	acl aclname url_regex [-i] ^http:// ...	# regex matching on whole URL
 	acl aclname urlpath_regex [-i] \.gif$ ...	# regex matching on URL path
+	acl aclname urlpath_raw_regex [-i] %2egif$ ...	# regex matching on raw (i.e. not unescaped) URL path
 	acl aclname urllogin [-i] [^a-zA-Z0-9] ...	# regex matching on URL login field
 	acl aclname port 80 70 21 ...
 	acl aclname port 0-1024 ...	# ranges allowed
diff -urN squid-2.5.STABLE5.vanilla/src/enums.h squid-2.5.STABLE5/src/enums.h
--- squid-2.5.STABLE5.vanilla/src/enums.h 2004-02-04 18:42:28.0 +0100
+++ squid-2.5.STABLE5/src/enums.h 2004-06-22 10:23:34.840051427 +0200
@@ -107,6 +107,7 @@
     ACL_DST_DOM_REGEX,
     ACL_TIME,
     ACL_URLPATH_REGEX,
+    ACL_URLPATH_RAW_REGEX,
     ACL_URL_REGEX,
     ACL_URL_PORT,
     ACL_MY_PORT,
Re: [squid-users] [PATCH] Raw URL path ACL
On Mon, 21 Jun 2004, Muthukumar wrote:
>> It works in exactly the same way as urlpath_regex except no unescaping of the URI is done first, which makes it possible to filter specific attacks that escape some characters in the URI without blocking legitimate requests.
> If you use the uri_whitespace option with strip mode, it will be like that.
>> I.e. you can filter URIs containing %2easp (the signature of some attacks) without blocking legitimate requests for .asp
> We can use allow or encode mode there.

As I understand it (from reading the documentation in the example config), uri_whitespace only affects whitespace characters - have I misunderstood? I am talking about normal printable characters, i.e. the character "P" can be sent through a URI as either "P" or "%50". When filtering them using url_regex they will both match a regex containing "P". This is valid behaviour, since the web server will usually unescape the path, so a filter which blocks "PORN" still wants to catch it if someone tries to bypass it by requesting "%50ORN". However, in some situations (such as where URIs containing these escaped printable characters are a signature of a type of attack) you will want to be able to differentiate between the two.

In any case, uri_whitespace is a global option and would affect everything, whereas urlpath_regex and urlpath_raw_regex can be mixed.

(Did that make sense or have I misunderstood? :)

 - Steve Hill
   Senior Software Developer    Email: [EMAIL PROTECTED]
   Navaho Technologies Ltd.     Tel: +44-870-7034015
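[For clarity, this is how the two ACL types would be mixed with the patch applied; the ACL names are illustrative, and urlpath_raw_regex exists only with this patch:

```
# Block the escaped-".asp" attack signature: urlpath_raw_regex sees the
# path before rfc1738 unescaping, so it matches "%2easp" literally.
acl asp_attack urlpath_raw_regex -i %2easp
http_access deny asp_attack

# A plain urlpath_regex matches the *unescaped* path, so it cannot tell
# "/page%2easp" apart from "/page.asp" - both match this ACL.
acl any_asp urlpath_regex -i \.asp$
```

This is the distinction argued above: the raw variant differentiates the escaped form, while the stock variant deliberately normalises it away.]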