Re: [squid-users] [SPAM] [ext] Squid 5.1 memory usage

2021-10-15 Thread Steve Hill

On 12/10/2021 09:34, Ralf Hildebrandt wrote:


Quite sure, since I've been testing Squid-5-HEAD before it became 5.2.
But to be sure, I'm deploying it right now.


Yep, squid-5.2 is also leaking.


:(

I'm now reasonably sure that mine is a recurrence of:
https://bugs.squid-cache.org/show_bug.cgi?id=4526
...which I had thought had gone away in Squid 5.1.  I will apply the 
patch next week and see whether the problem goes away again.





Re: [squid-users] [SPAM] [ext] Squid 5.1 memory usage

2021-10-08 Thread Steve Hill

On 08/10/2021 10:24, Ralf Hildebrandt wrote:


I'm seeing high memory usage on Squid 5.1.  Caching is disabled, so I'd
expect memory usage to be fairly low (and it was under Squid 3.5), but some
workers are growing pretty large.  I'm using ICAP and SSL bump.


https://bugs.squid-cache.org/show_bug.cgi?id=5132
is somewhat related


I'm not sure if it's the same thing.  In that bug, Alex said it looked 
like Squid wasn't maintaining counters for the leaked memory, whereas in 
my case the "Total" row in mgr:mem reasonably closely tracks the memory 
usage reported by top, so it looks like it should be accounted for.


There are similarities though - lots of memory going to HttpHeaderEntry 
and Short Strings in both cases.





Re: [squid-users] Squid 5.1 memory usage

2021-10-08 Thread Steve Hill

On 08/10/2021 15:50, Alex Rousskov wrote:


Is there a way to list all of the Comm::Connection objects?


The exact answer is "no", but you can use mgr:filedescriptors as an
approximation.


I've had to restart this process now (but I'm sure the problem will be 
back next week).  I did use netstat on it though, and the number of 
established TCP connections was 1090 - that is obviously made up of 
client->proxy, proxy->origin and proxy->icap connections - my gut 
feeling was that it wasn't enough connections to account for 200-odd MB 
of Comm::Connection objects.





[squid-users] Squid 5.1 memory usage

2021-10-08 Thread Steve Hill


I'm seeing high memory usage on Squid 5.1.  Caching is disabled, so I'd 
expect memory usage to be fairly low (and it was under Squid 3.5), but 
some workers are growing pretty large.  I'm using ICAP and SSL bump.


I've got a worker using 5 GB which I've collected memory stats from - 
the things which stand out are:

 - Long Strings: 220 MB
 - Short Strings: 2.1 GB
 - Comm::Connection: 217 MB
 - HttpHeaderEntry: 777 MB
 - MemBlob: 773 MB
 - Entry: 226 MB

What's the best way of debugging this?  Is there a way to list all of 
the Comm::Connection objects?


Thanks.



[squid-users] High memory usage associated with ssl_bump and broken clients

2017-09-08 Thread Steve Hill


I've identified a problem with Squid 3.5.26 using a lot of memory when 
some broken clients are on the network.  Strictly speaking this isn't 
really Squid's fault, but it is a denial-of-service mechanism, so I 
wonder if Squid can help mitigate it.


The situation is this:

Squid is set up as a transparent proxy performing SSL bumping.
A client makes an HTTPS connection, which Squid intercepts.  The client 
sends a TLS client handshake and squid responds with a handshake and the 
bumped certificate.  The client doesn't like the bumped certificate, but 
rather than cleanly aborting the TLS session and then sending a TCP FIN, 
it just tears down the connection with a TCP RST packet.


Ordinarily, Squid's side of the connection would be torn down in 
response to the RST, so there would be no problem.  But unfortunately, 
under high network loads the RST packet sometimes gets dropped and as 
far as Squid is concerned the connection never gets closed.


The busted clients I'm seeing the most problems with retry the 
connection immediately rather than waiting for a retry timer.



Problems:
1. A connection that hasn't completed the TLS handshake doesn't appear 
to ever time out (in this case, the server handshake and certificate 
exchange has been completed, but the key exchange never starts).


2. If the client sends an RST and the RST is lost, the client won't send 
another RST until Squid sends some data to it on the aborted connection. 
 In this case, Squid is waiting for data from the client, which will 
never come, and will not send any new data to the client.  Squid will 
never know that the client aborted the connection.


3. There is a lot of memory associated with each connection - my tests 
suggest around 1MB.  In normal operation these kinds of dead connections 
can gradually stack up, leading to a slow but significant memory "leak"; 
when a really badly behaved client is on the network it can open tens of 
thousands of connections per minute and the memory consumption brings 
down the server.


4. We can expect similar problems with devices on flaky network 
connections, even when the clients are well behaved.



My thoughts:
Connections should have a reasonably short timeout during the TLS 
handshake - if a client hasn't completed the handshake and made an HTTP 
request over the encrypted connection within a few seconds, something is 
broken and Squid should tear down the connection.  These connections 
certainly shouldn't be able to persist forever with neither side sending 
any data.



Testing:
I wrote a Python script that makes 1000 concurrent connections as 
quickly as it can and sends a TLS client handshake over each.  Once all 
of the connections are open, it then waits for responses from Squid 
(which would contain the server handshake and certificate) and quits, 
tearing down all of the connections with an RST.
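
For illustration, a minimal sketch of that kind of test (this is not the
original script; the target address is a placeholder, and SO_LINGER with
a zero linger time is the standard trick for making close() emit an RST
rather than a FIN):

#!/usr/bin/env python3
# Sketch: open many concurrent connections, start a TLS handshake on
# each, then abort them all with TCP RSTs.  The target is a placeholder.
import socket
import ssl
import struct
import time

TARGET = ("198.51.100.1", 443)   # placeholder: any intercepted HTTPS host
COUNT = 1000

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

socks = []
for _ in range(COUNT):
    s = socket.create_connection(TARGET)
    s.setblocking(False)
    w = ctx.wrap_socket(s, do_handshake_on_connect=False)
    try:
        w.do_handshake()    # sends the ClientHello; can't complete yet
    except (ssl.SSLWantReadError, ssl.SSLWantWriteError):
        pass                # handshake in progress, as expected
    socks.append(w)

time.sleep(5)   # give Squid time to reply with its handshake/certificate

for w in socks:
    # SO_LINGER with a zero timeout makes close() send an RST, not a FIN
    w.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
    w.close()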


It seems that the RST packets for around 300 of those connections were 
dropped - this sounds surprising, but since all 1000 connections were 
aborted simultaneously, there would be a flood of RST packets and it's 
probably reasonable to expect a significant number to be dropped.  The 
end result was that netstat showed Squid still had about 300 established 
connections, which would never go away.




[squid-users] More host header forgery pain with peek/splice

2016-08-25 Thread Steve Hill


This one just seems to keep coming up and I'm wondering how other people 
are dealing with it:


When you peek and splice a transparently proxied connection, the SNI 
goes through the host validation phase.  Squid does a DNS lookup for the 
SNI, and if it doesn't resolve to the IP address that the client is 
connecting to, Squid drops the connection.


When accessing one of the increasingly common websites that use DNS load 
balancing, since the DNS results change on each lookup, Squid and the 
client may not get the same DNS results, so Squid drops perfectly good 
connections.


Most of this problem goes away if you ensure all the clients use the 
same DNS server as Squid, but not quite.  Because the TTL on DNS records 
only has a resolution of 1 second, there is a period of up to 1 second 
when the DNS records Squid knows about don't match the ones that the 
client knows about - the client and Squid may expire the records up to 1 
second apart.


So what's the solution?  (Notably the validation check can't be disabled 
without hacking the code).




Re: [squid-users] Rock store status

2016-08-19 Thread Steve Hill

On 19/08/16 08:45, FredB wrote:


Please can you describe your load and configurations ?


We supply Squid-based online safety systems to schools across the UK, 
utilising Rock store for caching, plus peek/splice, external ACLs and ICAP 
for access control/filtering/auditing.  Typically I think our biggest 
schools probably top out at around 400,000 requests/hour, but I don't 
have any hard data to hand to back that up at the moment.


The only serious Squid issue we've been tracking recently is the memory 
leak associated with spliced connections, which we've now fixed (and 
submitted patches).  That said, with the schools currently on holiday 
those fixes haven't yet been well tested on real-world servers - we'll 
find out if there are any issues with them when term starts again :)




Re: [squid-users] Rock store status

2016-08-18 Thread Steve Hill

On 17/08/16 11:50, FredB wrote:


I tried rock store and SMP a long time ago (Squid 3.2, I think).  Unfortunately I 
had to drop SMP because of some limitations (in my case), and I fell back to 
diskd because there were many bugs with rock store.  FYI I also 
switched to AUFS without big differences.

But what about the latest 3.5.20?  Sadly SMP is still not for me, but rock store?

Is anyone using rock store under a high load, more than 800 req/s, 
without any problems?  Is there a real difference in that situation (CPU, speed, 
memory)?


We use SMP and Rock under the 3.5 series without problems, but I don't 
think any of our sites have as high a req/sec load as yours.




Re: [squid-users] Checking SSL bump status in http_access

2016-08-18 Thread Steve Hill

On 17/08/16 00:12, Amos Jeffries wrote:


Is there a way of figuring out if the current request is a bumped
request when the http_access ACL is being checked?  i.e. can we tell the
difference between a GET request that is inside a bumped tunnel, and an
unencrypted GET request?


In Squid-3 a combo of the myportname and proto ACLs should do that.


I think when using a non-transparent proxy you can't tell the difference 
between:


1. HTTPS requests inside a bumped CONNECT tunnel, and
2. unencrypted "GET https://example.com/ HTTP/1.1" requests made 
directly to the proxy.





Re: [squid-users] Checking SSL bump status in http_access

2016-08-18 Thread Steve Hill

On 17/08/16 17:18, Alex Rousskov wrote:


This configuration problem should be at least partially addressed by the
upcoming annotate_transaction ACLs inserted into ssl_bump rules:
http://lists.squid-cache.org/pipermail/squid-dev/2016-July/006146.html


That looks good.  When implementing this, beware the note in comment 3 
of bug 4340: http://bugs.squid-cache.org/show_bug.cgi?id=4340#c3
"for transparent connections, the NotePairs instance used during the 
step-1 ssl_bump ACL is not the same as the instance used during the 
http_access ACL, but for non-transparent connections they are the same 
instance.  The upshot is that any notes set by an external ACL when 
processing the ssl_bump ACL during step 1 are discarded when handling 
transparent connections."  - It would greatly reduce the functionality 
of your proposed ACLs if the annotations were sometimes discarded part 
way through a connection or request.


Something I've been wanting to do for a while is attach a unique 
"connection ID" and "request ID" to requests so that:
1. An ICAP server can make decisions about the connection (e.g. how to 
authenticate, whether to bump, etc.) and then refer back to the data it 
knows/generated about the connection when it processes the requests 
contained within that connection.
2. When multiple ICAP requests will be generated, they can be linked 
together by the ICAP server - e.g. where a single request will generate 
a REQMOD followed by a RESPMOD it would be good for the ICAP server to 
know which REQMOD and RESPMOD relate to the same request.


It sounds like your annotations plan may address this to some extent. 
(We can probably already do some of this by having the ICAP server 
generate unique IDs and store them in ICAP headers to be passed along 
with the request, but I think the bug mentioned above would cause those 
headers to be discarded mid-request in some cases)




Re: [squid-users] Large memory leak with ssl_peek (now partly understood)

2016-08-17 Thread Steve Hill

On 17/08/16 06:22, Dan Charlesworth wrote:


Deployed a 3.5.20 build with both of those patches and have noticed a big 
improvement in memory consumption of squid processes at a couple of 
splice-heavy sites.

Thank you, sir!


We've now started tentatively rolling this out to a few production sites 
too and are seeing good results so far.





[squid-users] Checking SSL bump status in http_access

2016-08-16 Thread Steve Hill


Is there a way of figuring out if the current request is a bumped 
request when the http_access ACL is being checked?  i.e. can we tell the 
difference between a GET request that is inside a bumped tunnel, and an 
unencrypted GET request?




Re: [squid-users] Large memory leak with ssl_peek (now partly understood)

2016-08-12 Thread Steve Hill



This sounds very similar to Squid bug 4508. Factory proposed a fix
for that bug, but the patch is for Squid v4. You may be able to adapt it
to v3. Testing (with any version) is very welcomed, of course:


Thanks for that - I'll look into adapting and testing it.

(been chasing this bug off and on for months - hadn't spotted that there 
was a bug report open for it :)





[squid-users] Large memory leak with ssl_peek (now partly understood)

2016-08-11 Thread Steve Hill


I've been suffering from a significant memory leak on multiple servers 
running Squid 3.5 for months, but was unable to reproduce it in a test 
environment.  I've now figured out how to reproduce it and have done 
some investigation:


When using TPROXY, Squid generates fake "CONNECT 192.0.2.1:443" 
requests, using the IP address that the client connected to.  At 
ssl_bump step 1, we peek and Squid generates another fake "CONNECT 
example.com:443" request containing the SNI from the client's SSL handshake.


At ssl_bump step 2 we splice the connection and Squid does verification 
to make sure that example.com does actually resolve to 192.0.2.1.  If it 
doesn't, Squid is supposed to reject the connection in 
ClientRequestContext::hostHeaderVerifyFailed() to prevent clients from 
manipulating the SNI to bypass ACLs.


Unfortunately, when verification fails, rather than actually dropping 
the client's connection, Squid just leaves the client hanging. 
Eventually the client (hopefully) times out and drops the connection 
itself, but the associated ClientRequestContext is never destroyed.


This is testable by repeatedly executing:
openssl s_client -connect 17.252.76.30:443 -servername courier.push.apple.com
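
A rough Python equivalent of that repro loop, as a sketch - it simply
opens connections with the mismatching SNI and waits for the timeout:

#!/usr/bin/env python3
# Sketch: repeatedly open TLS connections with the courier.push.apple.com
# SNI, mimicking the traffic pattern described in the next paragraph.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

for i in range(100):
    try:
        raw = socket.create_connection(("17.252.76.30", 443), timeout=35)
        # server_hostname sets the SNI that Squid peeks at
        conn = ctx.wrap_socket(raw, server_hostname="courier.push.apple.com")
        conn.close()
    except (OSError, ssl.SSLError) as err:
        print(i, err)   # expect timeouts while Squid leaves us hanging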


That is a traffic pattern that we see in the real world and is now 
clearly what is triggering the leak: Apple devices make connections to 
addresses within the 17.0.0.0/8 network with an SNI of 
"courier.push.apple.com".  courier.push.apple.com resolves to a CNAME 
pointing to courier-push-apple.com.akadns.net, but 
courier-push-apple.com.akadns.net doesn't exist.  Since Squid can't 
verify the connection, it won't allow it and after 30 seconds the client 
times out.  Each Apple device keeps retrying the connection, leaking a 
ClientRequestContext each time, and before long we've leaked several 
gigabytes of memory (on some networks I'm seeing 16GB or more of leaked 
RAM over 24 hours!).


Unfortunately I'm a bit lost in the Squid code and can't quite figure 
out how to gracefully terminate the connection and destroy the context.




Re: [squid-users] host_verify_strict and wildcard SNI

2016-07-11 Thread Steve Hill

On 07/07/16 12:30, Marcus Kool wrote:


Here things get complicated.
Is it correct that Squid enforces apps to follow standards, or
should Squid try to proxy connections for apps when it can?


I would say no: where it is possible for Squid to allow an app to work, 
even where it isn't following standards (without compromising security / 
other software / etc.) then Squid needs to try to make the app work.


Unfortunately, end users do not understand the complexities, and if an 
app works on their home internet connection and doesn't work through 
their school / office connection (which is routed through Squid) then as 
far as they are concerned the school / office connection is "broken", 
even if the problem is actually a broken app.


This is made worse by (1) the perception that big businesses such as 
Microsoft / Apple / Google can never be wrong (even though this is not 
borne out by experience of their software), and (2) the fact that app 
developers rarely seem at all interested in acknowledging/fixing such 
bugs (in my experience).


So in the end you have a choice: live with people accusing Squid of 
being "broken" and refuse to allow applications that will never be fixed 
to work, or work around the broken apps within Squid and therefore get 
them working without the cooperation of the app developers.




Re: [squid-users] host_verify_strict and wildcard SNI

2016-07-07 Thread Steve Hill

On 07/07/16 02:07, Alex Rousskov wrote:


Q1. Is wildcard SNI "legal/valid"?

I do not know the answer to that question. The "*.example.com" name is
certainly legal in many DNS contexts. RFC 6066 requires HostName SNI to
be a "fully qualified domain name", but I failed to find a strict-enough
RFC definition of an FQDN that would either accept or reject wildcards
as FQDNs. I would not be surprised if FQDN syntax is not defined to the
level that would allow one to reject wildcards as FQDNs based on syntax
alone.


Wildcards can be specified in DNS zonefiles, but I don't think you can 
ever look them up directly (rather, you look up "something.example.com" 
and the DNS server itself decides to use the wildcard record to fulfil 
that request - you never look up *.example.com itself).



Q2. Can wildcard SNI "make sense" in some cases?

Yes, of course. The client essentially says "I am trying to connect to
_any_ example.com subdomain at this IP:port address. If you have any
service like that, please connect me". That would work fine in
deployment contexts where several servers with different names provide
essentially the same service and the central "routing point" would pick
the "best" service to use. I am not saying it is a good idea to use
wildcard SNIs, but I can see them "making sense" in some cases.


Realistically, shouldn't the SNI reflect the DNS request that was made 
to find the IP of the server you're connecting to?  You would never make 
a DNS request for '*.example.com' so I don't see a reason why you would 
send an SNI that has a larger scope than the DNS request you made.




Re: [squid-users] host_verify_strict and wildcard SNI

2016-07-07 Thread Steve Hill

On 06/07/16 20:54, Eliezer Croitoru wrote:


There are other options of course, but the first thing to check is whether the client 
is a real browser or some special creature that tries its luck with a special 
form of SSL.


In this case it isn't a real web browser - it's an iOS app, and the 
vendor has stated that they have no intention of fixing it :(




Re: [squid-users] Skype, SSL bump and go.trouter.io

2016-07-07 Thread Steve Hill

On 07/07/16 11:07, Eliezer Croitoru wrote:


Can you verify please using a debug 11,9 that squid is not altering the request 
in any form?
Such as mentioned at: http://bugs.squid-cache.org/show_bug.cgi?id=4253


Thanks for this.  I've compared the headers and the original contains:
Upgrade: websocket
Connection: Upgrade

Unfortunately, since Squid doesn't support websockets, I think there's no 
way around this - by the time we see the request and can identify it as 
Skype, we've already bumped it, so we're committed to passing it through 
Squid's HTTP engine.  :(




Re: [squid-users] Skype, SSL bump and go.trouter.io

2016-07-07 Thread Steve Hill

On 06/07/16 20:44, Eliezer Croitoru wrote:


There are a couple of options here, and a bad request can happen if
squid transforms or modifies the request.  Did you try using basic
debug section output to verify whether you are able to "replicate" the
request using a tiny script or curl?  I think that section 11 is the
right one to start with
(http://wiki.squid-cache.org/KnowledgeBase/DebugSections).  There were a
couple of issues with intercepted https connections in the past, but a
400 means that something is bad, mainly in the expected input rather
than a certificate, though other reasons are possible.  I have not
tried to use skype in a transparent environment for a very long time,
but I can try to test it later.


I tcpdumped the ICAP REQMOD session to retrieve the request and tried it
manually (direct to the Skype server) with openssl s_client.  The Skype
server (not Squid) returned a 400.  But of course, the Skype request
contains various data that the server will probably (correctly) see as a
replay attack, so it isn't a very good test - all I can really say is
that the real Skype client gets exactly the same error from the
server when the connection is bumped, but works fine when it is tunnelled.

Annoyingly, Skype doesn't include an SNI in the handshake, so peeking in
order to exclude it from being bumped isn't an option.

The odd thing is that I have had Skype working in a transparent 
environment previously (with the unprivileged ports unfirewalled), so I 
wonder if this is something new from Microsoft.




[squid-users] Skype, SSL bump and go.trouter.io

2016-07-06 Thread Steve Hill


I've been finding some problems with Skype when combined with TProxy and 
HTTPS interception and wondered if anyone had seen this before:


Skype works so long as HTTPS interception is not performed and traffic 
to TCP and UDP ports 1024-65535 is allowed directly out to the internet. 
 Enabling SSL bump seems to break things - when making a call, Skype 
makes an SSL connection to go.trouter.io, which Squid successfully 
bumps.  Skype then makes a GET request to 
https://go.trouter.io/v3/c?auth=true=55 over the SSL connection, 
but the HTTPS server responds with a "400 Bad Request" error and Skype 
fails to work.


The Skype client clearly isn't rejecting the intercepted connection 
since it is making HTTPS requests over it, but I can't see why the 
server would be returning an error.  Obviously I can't see what's going 
on inside the connection when it isn't being bumped, but it does work 
then.  The only thing I can think is maybe the server is examining the 
SSL handshake and returning an error because it knows it isn't talking 
directly to the Skype client - but that seems like an odd way of doing 
things, rather than rejecting the SSL handshake in the first place.




[squid-users] host_verify_strict and wildcard SNI

2016-07-06 Thread Steve Hill


I'm using a transparent proxy and SSL-peek and have hit a problem with 
an iOS app which seems to be doing broken things with the SNI.


The app is making an HTTPS connection to a server and presenting an SNI 
with a wildcard in it - i.e. "*.example.com".  I'm not sure if this 
behaviour is actually illegal, but it certainly doesn't seem to make a 
lot of sense to me.


Squid then internally generates a "CONNECT *.example.com:443" request 
based on the peeked SNI, which is picked up by hostHeaderIpVerify(). 
Since *.example.com isn't a valid DNS name, Squid rejects the connection 
on the basis that *.example.com doesn't match the IP address that the 
client is connecting to.


Unfortunately, I can't see any way of working around the problem - 
"host_verify_strict" is disabled, but according to the docs,
"For now suspicious intercepted CONNECT requests are always responded to 
with an HTTP 409 (Conflict) error page."


As I understand it, turning host_verify_strict on causes problems with 
CDNs which use DNS tricks for load balancing, so I'm not sure I 
understand the rationale behind preventing it from being turned off for 
CONNECT requests?




Re: [squid-users] Youtube "challenges"

2016-02-25 Thread Steve Hill

On 25/02/16 03:52, Darren wrote:


The user visits a page on my server with the YouTube links. Visiting
this page triggers a state based ACL (something like the captive portal
login).

The user then clicks a YouTube link and squid checks this ACL to see if
the user is originating the request from my local page and if it is,
allows the splice to YouTube and the video can play.


Squid can't tell that the requests were referred by your page - the 
iframe itself may have your page as the referrer (although that 
certainly isn't guaranteed), but the objects that are referred within 
that iframe won't have a useful referrer string.


You could dynamically create an ACL that allows the whole of youtube 
when the user has your page open, but that is fairly insecure since they 
could just open the page and then they would be allowed to access 
anything through youtube.


In my experience (and this is what we do), to be at all secure you have 
to analyse the page itself in order to figure out which specific URIs to 
whitelist (or at least, have those URIs hard-coded somewhere else).


Either way, YouTube uses https, so unless you're going to blindly allow 
the whole of youtube whenever a user visits your page, you're going to 
need to ssl bump the requests in order to have an ACL based on the 
referrer and path.  And as you know, ssl bumping involves sticking a 
certificate on each device.




[squid-users] SSL bump memory leak

2016-02-23 Thread Steve Hill


I'm looking into (what appears to be) a memory leak in the Squid 3.5 
series.  I'm testing this in 3.5.13, but this problem has been observed 
in earlier releases too.  Unfortunately I haven't been able to reproduce 
the problem in a test environment yet, so my debugging has been limited 
to what I can do on production systems (so no valgrind, etc).


These systems are configured to do SSL peek/bump/splice and I see the 
Squid workers grow to hundreds or thousands of megabytes in size over a 
few hours.  A configuration reload does not reduce the memory 
consumption.  For debugging purposes, I have set 
"dynamic_cert_mem_cache_size=0KB" to disable the certificate cache, 
which should eliminate bug 4005.  I've taken a core dump to analyse and 
have found:


Running "strings" on the core, I can see that there are vast numbers of 
strings that look like certificate subject/issuer identifiers.  e.g.:
	/C=GB/ST=Greater Manchester/L=Salford/O=Comodo CA Limited/CN=Secure 
Certificate Services


The vast majority of these seem to refer to root and intermediate 
certificates.  There are a few that include a host name and are probably 
server certificates, such as:

/OU=Domain Control Validated/CN=*.soundcloud.com
But these are very much in the minority.

Also, notably they are mostly duplicates.  Compare the total number:
$ strings -n 10 -t x core.21693|egrep '^ *[^ ]+ /.{1,3}='|wc -l
131599
with the number of unique strings:
$ strings -n 10 -t x core.21693|egrep '^ *[^ ]+ /.{1,3}='|sort -u -k 2|wc -l
658

There are also a very small number of lines that look something like:
	/C=US/ST=California/L=San Francisco/O=Wikimedia Foundation, 
Inc./CN=*.wikipedia.org+Sign=signTrusted+SignHash=SHA256
I think the "+Sign=signTrusted+SignHash=SHA256" part would indicate that 
this is a Squid database key, which is very confusing since with the 
certificate cache disabled I wouldn't expect to see these at all.




[squid-users] kid registration timed out

2016-02-08 Thread Steve Hill
03:43:37 kid1| HTCP Disabled.
03:43:37 kid1| Configuring Parent [::1]/3129/0
03:43:37 kid1| Squid plugin modules loaded: 0
03:43:37 kid1| Adaptation support is on
03:43:38 kid1| storeLateRelease: released 0 objects
Squid Cache (Version 3.5.11): Terminated abnormally.
CPU Usage: 0.177 seconds = 0.124 user + 0.053 sys
Maximum Resident Size: 83088 KB
Page faults with physical i/o: 0
Squid Cache (Version 3.5.11): Terminated abnormally.
Squid Cache (Version 3.5.11): Terminated abnormally.
CPU Usage: 0.189 seconds = 0.127 user + 0.062 sys
Maximum Resident Size: 83072 KB
Page faults with physical i/o: 0
CPU Usage: 0.191 seconds = 0.130 user + 0.061 sys
Maximum Resident Size: 83072 KB
Page faults with physical i/o: 0
03:43:43 kid1| Closing HTTP port [::]:3128
03:43:43 kid1| Closing HTTP port [::]:8080
03:43:43 kid1| Closing HTTP port [::]:3130
03:43:43 kid1| Closing HTTPS port [::]:3131
03:43:43 kid1| storeDirWriteCleanLogs: Starting...
03:43:43 kid1|   Finished.  Wrote 0 entries.
03:43:43 kid1|   Took 0.00 seconds (  0.00 entries/sec).
FATAL: kid1 registration timed out
Squid Cache (Version 3.5.11): Terminated abnormally.
CPU Usage: 0.193 seconds = 0.137 user + 0.056 sys
Maximum Resident Size: 83104 KB
Page faults with physical i/o: 0


There are actually 4 workers, but I have excluded the log lines for 
"kid[2-9]" as they seem to show exactly the same as kid1.  I can't see 
any indication of why it is blowing up, other than "FATAL: kid1 
registration timed out" (and identical time outs for the other workers). 
 I seem to be left with a Squid process still running (so my monitoring 
doesn't alert me that Squid isn't running), but it doesn't service 
requests.  This isn't too bad if I'm manually restarting squid during 
the day, but if squid gets restarted in the night due to a package 
upgrade I can be left with a dead proxy that requires manual intervention.



The second problem, which may or may not be related, is that if Squid 
crashes (e.g. an assert()), it usually automatically restarts, but some 
times it fails and I see this logged:


FATAL: Ipc::Mem::Segment::open failed to 
shm_open(/squidnocache-cf__metadata.shm): (2) No such file or directory


Similar to the first problem, when this happens I'm still left with a 
squid process running, but it isn't servicing any requests.  I realise 
that it is a bug for Squid to crash in the first place, but it's 
compounded by the occasional complete loss of service when it happens.


Any help would be appreciated.  Thanks. :)



Re: [squid-users] sslBump and intercept

2015-11-12 Thread Steve Hill

On 12/11/15 09:04, Eugene M. Zheganin wrote:


I decided to intercept the HTTPS traffic on my production squids from
proxy-unaware clients, to be able to tell them there's a proxy and they
should configure one.
So I'm doing it like this (the process of forwarding using FreeBSD pf is not
shown here):

===Cut===
acl unauthorized proxy_auth stringthatwillnevermatch
acl step1 at_step sslBump1

https_port 127.0.0.1:3131 intercept ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
https_port [::1]:3131 intercept ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem

ssl_bump peek step1
ssl_bump bump unauthorized
ssl_bump splice all
===Cut===

Almost everything works, except that squid for some reason is generating
certificates in this case for IP addresses, not names, so the browser
shows a warning about the certificate being valid only for the IP, and not the name.


proxy_auth won't work on intercepted traffic and will therefore always 
return false, so as far as I can see you're always going to peek and 
then splice; i.e. you're never going to bump, so Squid should never be 
generating a forged certificate.


You say that Squid _is_ generating a forged certificate, so something 
else is going on to cause it to do that.  My first guess is that Squid 
is generating some kind of error page due to some http_access rules 
which you haven't listed, and is therefore bumping.


Two possibilities spring to mind for the certificate being for the IP 
address rather than for the name:
1. The browser isn't bothering to include an SNI in the SSL handshake 
(use wireshark to confirm).  In this case, Squid has no way to know what 
name to stick in the cert, so will just use the IP instead.
2. The bumping is happening in step 1 instead of step 2 for some reason. 
 See:  http://bugs.squid-cache.org/show_bug.cgi?id=4327




Re: [squid-users] squid http & https intercept based on DNS server

2015-11-12 Thread Steve Hill

On 12/11/15 12:08, James Lay wrote:


Some applications (I'm thinking mobile apps) may or may not use a
hostname...some may simply connect to an IP address, which makes control
over DNS irrelevant at that point.  Hope that helps.


Also, redirecting all the DNS records to Squid will break everything 
that isn't http/https since there will be nothing on the squid server to 
handle that traffic.


It doesn't sound like a great idea to me - why not just redirect 
http/https traffic at the gateway (TPROXY) instead of mangling DNS?




[squid-users] Assert, followed by shm_open() fail.

2015-11-09 Thread Steve Hill


On Squid 3.5.11 I'm seeing occasional asserts:

2015/11/09 13:45:21 kid1| assertion failed: DestinationIp.cc:41: 
"checklist->conn() && checklist->conn()->clientConnection != NULL"


More concerning though, is that usually when a Squid process crashes, it 
is automatically restarted, but following these asserts I'm often seeing:


FATAL: Ipc::Mem::Segment::open failed to 
shm_open(/squidnocache-squidnocache-cf__metadata.shm): (2) No such file 
or directory


After this, Squid is still running, but won't service requests and 
requires a manual restart.


Has anyone seen this before?

Cheers.



[squid-users] ICAP response header ACL

2015-10-01 Thread Steve Hill


The latest adaptation response headers are available through the 
%adapt::<last_h format code - is there a way to access them through an ACL?


The documentation says that adaptation headers are available in the 
notes, but this only appears to be headers set with adaptation_meta, not 
the ICAP response headers.  I had also considered using the "note" 
directive to explicitly stuff the headers into the notes, but it looks 
like the note directive doesn't allow you to use format strings (i.e. 
"note icap_headers %adapt::<last_h" sets the note to the literal string 
"%adapt::<last_h" rather than substituting the headers.)




[squid-users] %un format code doesn't work for external ssl_bump ACLs

2015-08-28 Thread Steve Hill


Squid 3.5.7

I'm using an external ACL to decide whether to bump traffic during SSL 
bump step 2.  The external ACL needs to know the user's username for 
requests that have authenticated, but not all requests are authenticated 
so I can't use %LOGIN and I'm therefore using %un instead.  However, %un 
is never being filled in with a user name.



The relevant parts of the config are:

http_access allow proxy_auth
http_access deny all
external_acl_type sslpeek children-max=10 concurrency=100 ttl=0 
negative_ttl=0 %SRC %un %URI %ssl::sni %ha{User-Agent} 
/usr/sbin/check_bump.sh

acl sslpeek external sslpeek
acl ssl_bump_step_1 at_step SslBump1
acl ssl_bump_step_2 at_step SslBump2
acl ssl_bump_step_3 at_step SslBump3
ssl_bump peek ssl_bump_step_1 #icap_says_peek
ssl_bump bump ssl_bump_step_2 sslpeek
ssl_bump splice all
sslproxy_cert_error allow all
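
For reference, since concurrency=100 is enabled, each request line that
Squid sends the helper starts with a channel ID which must be echoed
back in the reply.  A skeleton helper matching the external_acl_type
line above might look like the following - illustrative only, the real
check_bump.sh logic is not shown here:

#!/usr/bin/env python3
# Skeleton of a concurrent Squid external ACL helper (illustrative).
import sys

for line in sys.stdin:
    parts = line.strip().split()
    channel = parts[0]
    # parts[1:] are %SRC %un %URI %ssl::sni %ha{User-Agent}, in order;
    # %un arrives as "-" when no username is known (the bug discussed here).
    user = parts[2]
    decision = "ERR" if user == "-" else "OK"   # placeholder decision logic
    sys.stdout.write("%s %s\n" % (channel, decision))
    sys.stdout.flush()   # Squid waits for each reply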


The debug log shows that the request is successfully authenticated:

Acl.cc(138) matches: checking proxy_auth
UserData.cc(22) match: user is steve, case_insensitive is 0
UserData.cc(28) match: aclMatchUser: user REQUIRED and auth-info present.
Acl.cc(340) cacheMatchAcl: ACL::cacheMatchAcl: miss for 'proxy_auth'. 
Adding result 1

Acl.cc(158) matches: checked: proxy_auth = 1

But then later in the log I see:

external_acl.cc(1416) Start: fg lookup in 'sslpeek' for 
'2a00:1940:1:8:468a:5bff:fe9a:cd7f - www.hsbc.co.uk:443 www.hsbc.co.uk 
Mozilla/5.0%20(X11;%20Fedora;%20Linux%20x86_64;%20rv:39.0)%20Gecko/20100101%20Firefox/39.0'



The user name given to the external ACL is "-", even though the request 
has been authenticated.  Setting a->require_auth in 
parse_externalAclHelper() makes it work, but that obviously just makes %un 
behave like %LOGIN, so it isn't a solution.




[squid-users] Assert(call->dialer.handler == callback)

2015-04-30 Thread Steve Hill
 in ClientHttpRequest::handleAdaptedHeader 
(this=0x7ffe1dcda618, msg=Unhandled dwarf expression opcode 0xf3

) at client_side_request.cc:1935
#35 0x7ffe14abbcaa in JobDialerAdaptation::Initiator::dial 
(this=0x7ffe1ce04990, call=...) at ../../src/base/AsyncJobCalls.h:174
#36 0x7ffe149bea69 in AsyncCall::make (this=0x7ffe1ce04960) at 
AsyncCall.cc:40
#37 0x7ffe149c272f in AsyncCallQueue::fireNext (this=Unhandled dwarf 
expression opcode 0xf3

) at AsyncCallQueue.cc:56
#38 0x7ffe149c2a60 in AsyncCallQueue::fire (this=0x7ffe16f70bf0) at 
AsyncCallQueue.cc:42
#39 0x7ffe1484110c in EventLoop::runOnce (this=0x7fffcb8c4be0) at 
EventLoop.cc:120
#40 0x7ffe148412c8 in EventLoop::run (this=0x7fffcb8c4be0) at 
EventLoop.cc:82
#41 0x7ffe148ae191 in SquidMain (argc=Unhandled dwarf expression 
opcode 0xf3

) at main.cc:1511
#42 0x7ffe148af2e9 in SquidMainSafe (argc=Unhandled dwarf expression 
opcode 0xf3

) at main.cc:1243
#43 main (argc=Unhandled dwarf expression opcode 0xf3
) at main.cc:1236

(sorry about the DWARF errors - it looks like I've got a version 
mismatch between gcc and gdb)




Re: [squid-users] i hope to build web Authentication portal at Tproxy environment recenty , can you give me some advisement .

2015-03-11 Thread Steve Hill

On 11.03.15 10:22, johnzeng wrote:


Does PHP or jQuery need to send the user's IP address to Squid?  Otherwise I'm
worried about whether Squid can confirm the user info.

And how do I identify and control HTTP traffic?


I'd do this with an external ACL - when processing a request, Squid 
would call the external ACL which would do:


1. If the user is not authenticated or their last seen timestamp has 
expired, return ERR
2. If the user is authenticated, update their last seen timestamp and 
return OK.


Obviously if the ACL returns ERR, Squid needs to redirect the user to 
the authentication page.  If the ACL returns OK, Squid needs to service 
the request as normal.


The authentication page would update the database which the external ACL 
refers to.
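
A minimal helper implementing that logic might look like the following
sketch (assuming a %SRC format token so the helper receives the client
IP, and an in-memory table standing in for the shared database that the
authentication page would actually update):

#!/usr/bin/env python3
# Sketch of a session-tracking external ACL helper for Squid, used like:
#   external_acl_type session ttl=0 negative_ttl=0 %SRC /usr/local/bin/session_acl.py
#   acl authenticated external session
# Squid writes one lookup per line on stdin and expects OK/ERR on stdout.
import sys
import time

TIMEOUT = 600   # seconds of inactivity before re-authentication is required
sessions = {}   # ip -> last-seen timestamp; the portal would populate this

for line in sys.stdin:
    ip = line.strip().split()[0]
    now = time.time()
    last = sessions.get(ip)
    if last is not None and now - last < TIMEOUT:
        sessions[ip] = now             # update the last-seen timestamp
        sys.stdout.write("OK\n")
    else:
        sys.stdout.write("ERR\n")      # not authenticated, or session expired
    sys.stdout.flush()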


Identifying the user's traffic would need to be done by MAC address or IP:
 - MAC address requires a flat network with no routers between the 
device and Squid.

 - IP has (probably) unfixable problems in a dual-stacked network.

Beware that:
1. Access to the authentication page must be allowed for unauthenticated 
users (obviously :)
2. Authentication should really be done over HTTPS with a trusted 
certificate.
3. Clients require access to some external servers to validate HTTPS 
certs before they have authenticated.

4. If you want to support WISPr then (2) and (3) are mandatory.
5. External ACL caching

You might be able to do it with internal ACLs, but... pain :)



Re: [squid-users] Dual-stack IPv4/IPv6 captive portal

2015-03-03 Thread Steve Hill

On 02.03.15 02:33, Amos Jeffries wrote:


  These people are plain wrong about how the basic protocol works and yet
they are treated with must-accept policies by so many networks.


Yep, one of the really big problems we have is the "it works when we're 
not using the proxy, so the proxy must be broken" attitude, when almost 
universally the proxy is working fine and the other software is just 
plain broken.  It's really hard to convince a customer that it really 
isn't our fault when some app breaks, especially when that app is made 
by someone like Apple or Google (who, of course, can *never* be wrong!)


The vast majority of our support time is spent figuring out ways to work 
around busted end-user software, because we know saying "Apple's 
software is broken, go and talk to Apple" isn't going to work, because 
the likes of Apple have no interest in actually supporting their own 
customers and somehow this ends up being our fault.  (Not just Apple - 
lots of other companies are equally bad, although Apple have currently 
hit a nerve with me due to a lot of debugging I recently had to do with 
their appstore, because they didn't bother to log any errors when things 
broke, which also seems to be par for the course these days).



  Imagine what would happen if you MUST-accept all emails delivered? Or
any kind of DNS response they chose to send you? Those are two other
major protocols with proxies that work just fine by rejecting bad
messages wholesale.


Well, you say that, but we also get "it works at home but not at work" 
complaints when DNS servers start returning broken data.  Admittedly we 
usually seem to be able to not catch quite so much blame for that one, 
although I'm not sure how. :)


Basically, in my experience, if it works in situation A and not in 
situation B people will assume that the problem is whatever is different 
in situation B rather than that both situations are completely valid but 
their application is broken and can't handle one of them.  This becomes 
a big problem when situation A is the more prevalent one - at that point 
you either start working around the buggy software, or you lose a 
customer and get a reputation for selling broken stuff.


So whilst I agree with you that in an ideal world we wouldn't work 
around stuff - we would just report bugs and the broken software would be 
fixed - in the real world the big mainstream businesses aren't interested 
in supporting their customers, and yet somehow the rest of us end up 
having to do it for them or it reflects badly on *us*.  *boggle*



FWIW, I am always happy to work with other people/companies to help them 
fix their broken stuff.  This has been met with a mix of responses - 
sometimes they are happy to work with me to fix things, which is great, 
but sadly not the most common experience.  Often I send a detailed bug 
report, explaining what's going wrong, referencing standards, etc. and 
get a you're wrong, we're right, we're not going to change anything 
response, which would be fine if they referenced anything to back up 
their position, but they never do.  Many simply ignore the reports 
altogether.  Then we have people like Microsoft, who I've tried to 
contact on several occasions to report bugs in their public-facing web 
servers - no suitable contact details are ever published, and I've been 
bounced from department to department with no one quite sure what to do 
with someone reporting problems with their _public_ servers who doesn't 
have some kind of support contract with them (I've had no resolution to 
any of the problems I reported because I never actually managed to get 
my report to anyone responsible).  I've given up reporting bugs to Apple 
because they always demand that I spend a lot of my time collecting 
debug logs, but then they sit on the report and never actually fix it 
(again, I've never had a resolution to a bug I've reported to Apple, 
despite supplying them with extensive debugging).



</rant> :)

--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Dual-stack IPv4/IPv6 captive portal

2015-02-27 Thread Steve Hill

On 27.02.15 17:00, Michele Bergonzoni wrote:


This is true for v6 if the client uses its MAC as an identifier,
which it's not supposed to do and last time I checked was not true
for Windows, or if clients or DHCP relays support RFC6939, which is
quite new. See for example:

https://lists.isc.org/pipermail/kea-dev/2014-June/43.html


Oh, interesting - I hadn't realised that.


Have you thought about engineering your captive portal with a dual
stack DNS name (having both A and AAAA), a v4 only and a v6 only, and
having you HTML embed requests with appropriate identifiers to
correlate addresses? Of course there are HTTP complications and it is
not perfect, but I guess that as long as it's a captive portal,
kludginess cannot decrease below some level.


That was one of my options.  However, it won't work in the case of WISPr 
auto-logons because the page wouldn't be rendered by the client, so you 
wouldn't expect it to fetch embedded bits either.



I am really interested to hear what people are doing in the field of
squid-powered captive portals, even more when interoperating with
iptables/ip6tables.


At the moment, we've written a hybrid captive portal/http-auth system. 
Essentially, we use HTTP proxy auth where we can and a captive portal 
where we can't.  HTTP proxy auth is preferable because every request 
gets authenticated individually and we can use Kerberos.  Unfortunately 
a lot of software doesn't support it properly (I'm looking at you, Apple 
and Google, although everyone else is getting pretty bad at it too) and 
it also can't be used for transparent proxying (and again, a lot of 
software just doesn't bother to support proxies these days, and it's 
only getting worse).  So we use the user-agent string to try and 
identify the clients we can safely authenticate, and the rest rely on 
cached credentials or captive portal.
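
As a very rough sketch of that split - every name here is illustrative 
rather than our real configuration - the squid.conf logic looks 
something like:

# Clients whose User-Agent suggests they handle proxy auth properly
acl auth_capable browser -i Firefox/ Trident/
acl authed proxy_auth REQUIRED
http_access allow auth_capable authed
# Everyone else falls through to a hypothetical helper that checks the
# captive portal's session cache by source IP
external_acl_type portal_session ttl=60 %SRC /usr/local/bin/portal_session_check
acl portal_ok external portal_session
http_access allow portal_ok
http_access deny all

Because "authed" is the last ACL on the allow line, only clients 
matching auth_capable ever get a 407 challenge; everyone else falls 
through to the portal rules.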


Yes, it's a horrible bodge, but unfortunately that's where modern 
software is driving us. :(  For iOS and Android you can pretty much 
forget using pure HTTP proxy authentication.  Luckily, iOS can use WISPr 
to automatically log into a portal; sadly, vanilla Android still doesn't 
include a WISPr client (I'd put money on this being down to patents!).



--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Dual-stack IPv4/IPv6 captive portal

2015-02-27 Thread Steve Hill


I'm wondering whether anyone has implemented a captive portal on a 
dual-stacked network, and whether they can provide any insight into the 
best way of going about it.



The problems:

- Networks are frequently routed with the proxy server on the border. 
This means the proxy doesn't get to see the client's MAC address, so 
captive portals have to work by associating the IP address with the 
user's credentials.


- In a dual-stacked environment, a client's requests come from both its 
IPv4 address and IPv6 address.  Treating them independently of each 
other would lead to a bad user experience since the user would need to 
authenticate separately for each address.


- Where IPv6 privacy extensions are enabled, the client has multiple 
addresses at the same time, with the preferred address changing at 
regular intervals.  The address rotation interval is typically quite 
long (e.g. 1 day) but the change-over between addresses will occur 
spontaneously with the captive portal not being informed in advance. 
Again, we don't want to auth each address individually.


- Captive portals often want to support WISPr to allow client devices to 
perform automated logins.



Possible solutions:

- The captive portal page could include embedded objects from the 
captive portal server's v4 and v6 addresses.  This would allow the 
captive portal to temporarily link the addresses together and therefore 
link the authentication credentials to both.  The portal would still 
have to work correctly when used from single-stacked devices.  This also 
isn't going to work for WISPr clients since the client will never render 
the page when doing an automated login so we wouldn't expect any 
embedded objects to be requested.


- Using DHCPv6 instead of SLAAC to do the address assignment would 
disable IPv6 privacy extensions, which would be desirable in this case. 
 However, many devices don't support DHCPv6.


- The DHCP and DHCPv6 servers know the MAC and IPv[46] address of each 
client and could cooperate with each other to link this data together. 
However, the proxy does not always have control of the DHCP/DHCPv6 servers.



--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ssl-bump doesn't like valid web server

2015-02-04 Thread Steve Hill

On 02.02.15 13:23, Eliezer Croitoru wrote:


On what OS are you running squid? is it self compiled one?


Scientific Linux 6.6.

And yes, it's a self-compiled Squid.

I'm quite happy to change to using the helper if that is the preferred 
method (until recently I was unaware that the helper existed).  Although 
I've got to admit that I was a bit surprised to be told that the way 
I've been successfully using Squid is impossible. :)


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ssl-bump doesn't like valid web server

2015-02-02 Thread Steve Hill

On 22.01.15 08:14, Amos Jeffries wrote:


Squid only *generates* server certificates using that helper. If you
are seeing the log lines "Generating SSL certificate" they are
incorrect when not using the helper.

The non-helper bumping is limited to using the configured http(s)_port
cert= and key= contents. In essence only doing client-first or
peek+splice SSL-bumping styles.


I'm pretty sure this is incorrect - I'm running Squid 3.4 without 
ssl_crtd, configured to bump server-first.  The cert= parameter to the 
http_port line points at a CA certificate.  When visiting an https site 
through the proxy, the certificate sent to the browser is a forged 
version of the server's certificate, signed by the cert= CA.  This 
definitely seems to be server-first bumping - if the server's CA is 
unknown, Squid generates an appropriately broken certificate, etc. as 
you would expect.
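
For reference, the shape of the configuration I'm describing is roughly 
this (paths and port number are illustrative, not my exact config):

http_port 3128 ssl-bump generate-host-certificates=on cert=/etc/squid/bumpCA.pem key=/etc/squid/bumpCA.key
ssl_bump server-first all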


Am I missing something?

--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ssl-bump doesn't like valid web server

2015-01-21 Thread Steve Hill
On 21/01/15 18:39, Eliezer Croitoru wrote:

 but not using ssl_crtd
 What are you using if not ssl_crtd?

Squid generates the certificates internally if ssl_crtd isn't turned on
at compile time.  I've not seen any information explaining the pros and
cons of each approach (I'd welcome any input!).


-- 

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-825748 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-824568 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ssl-bump doesn't like valid web server

2015-01-21 Thread Steve Hill

On 21.01.15 08:40, Jason Haar wrote:


I'm running squid-3.4.10 on CentOS-6 and just got hit with ssl-bump
blocking/warning access to a website which I can't figure out why


Probably not very helpful, but it works for me (squid-3.4.10, Scientific 
Linux 6.6, bump-server-first, but not using ssl_crtd).  I also can't see 
anything wrong with the certificate chain.


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] ssl_crtd

2015-01-20 Thread Steve Hill

At the moment I'm running Squid 3.4 with bump-server-first using the
internal certificate generation stuff (i.e. not ssl_crtd).  I can't find
a lot of information about using/not using ssl_crtd so I was wondering
if anyone can give me a run-down of the pros and cons of using it
instead of the internal cert generator?

Thanks.

-- 

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-825748 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-824568 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Debugging slow access

2015-01-06 Thread Steve Hill

On 05.01.15 18:15, Amos Jeffries wrote:


Can you try making the constructor at the top of src/HelperReply.cc
look like this and see if it resolves the problem?

HelperReply::HelperReply(char *buf, size_t len) :
 result(HelperReply::Unknown),
 notes(),
 whichServer(NULL)
{
 assert(notes.empty());
 parse(buf,len);
}


This didn't help I'm afraid.

Some further debugging so far today:
The notes in HelperReply are indeed empty when the token is added.

However, Auth::Negotiate::UserRequest::HandleReply() appends the reply 
notes to auth_user_request.  It fetches a cached user record from 
proxy_auth_username_cache and then calls absorb() to merge 
auth_user_request with the cached user record.  This ends up adding the 
new Negotiate token into the cached record.  This keeps happening for 
each new request and the cached user record gradually accumulates tokens.


As far as I can see, tokens are only ever read from the helper's reply 
notes, not the user's notes, so maybe the tokens never need to be 
appended to auth_user_request in the first place?


Alternatively, A->absorb(B) could be altered to remove any notes from A 
that have the same keys as B's notes, before using appendNewOnly() to 
merge them?


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Debugging slow access

2015-01-06 Thread Steve Hill

On 05.01.15 20:11, Eliezer Croitoru wrote:


Did you had the chance to take look at bug 3997:
http://bugs.squid-cache.org/show_bug.cgi?id=3997


This could quite likely be the same issue.  See my other post this 
morning for details, but I've pretty much tracked this down to the 
Negotiate tokens being appended to user cache records in an unbounded 
way.  Eventually you end up with so many tokens (several thousand) that 
the majority of the CPU time is spent traversing the tokens.  A quick 
look at the NTLM code suggests that this would behave in the same way.


The question now is what the correct way is to fix it - we could 
specifically avoid appending token notes in the Negotiate/NTLM code, 
or we could do something more generic in the absorb() method.  (My 
preference is the latter unless anyone can think why it would be a bad 
idea).


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Debugging slow access

2015-01-06 Thread Steve Hill

On 06.01.15 12:15, Steve Hill wrote:


Alternatively, A->absorb(B) could be altered to remove any notes from A
that have the same keys as B's notes, before using appendNewOnly() to
merge them?


I've implemented this for now in the attached patch and am currently 
testing it.  Initial results suggest it resolves the problem.


It introduces a new method, NotePairs::appendAndReplace(), which 
iterates through the source NotePairs and removes any NotePairs in the 
destination that have the same key, then calls append().


This is not the most efficient way of erasing the notes, because Squid's 
Vector template doesn't appear to have an erase() method.


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
Index: source/src/Notes.cc
===
--- source/src/Notes.cc	(revision 354)
+++ source/src/Notes.cc	(working copy)
@@ -221,6 +221,22 @@
 }
 
 void
+NotePairs::appendAndReplace(const NotePairs *src)
+{
+    for (Vector<NotePairs::Entry *>::const_iterator i = src->entries.begin(); i != src->entries.end(); ++i) {
+        Vector<NotePairs::Entry *>::iterator j = entries.begin();
+        while (j != entries.end()) {
+            if ((*j)->name.cmp((*i)->name.termedBuf()) == 0) {
+                entries.prune(*j);
+                j = entries.begin();
+            } else
+                ++j;
+        }
+    }
+    append(src);
+}
+
+void
 NotePairs::appendNewOnly(const NotePairs *src)
 {
     for (Vector<NotePairs::Entry *>::const_iterator i = src->entries.begin(); i != src->entries.end(); ++i) {
Index: source/src/Notes.h
===
--- source/src/Notes.h	(revision 354)
+++ source/src/Notes.h	(working copy)
@@ -131,6 +131,12 @@
 void append(const NotePairs *src);
 
 /**
+ * Append the entries of the src NotePairs list to our list, replacing any
+ * entries in the destination set that have the same keys.
+ */
+void appendAndReplace(const NotePairs *src);
+
+/**
  * Append any new entries of the src NotePairs list to our list.
  * Entries which already exist in the destination set are ignored.
  */
Index: source/src/auth/User.cc
===
--- source/src/auth/User.cc	(revision 354)
+++ source/src/auth/User.cc	(working copy)
@@ -101,7 +101,7 @@
     debugs(29, 5, HERE << "auth_user '" << from << "' into auth_user '" << this << "'.");
 
 // combine the helper response annotations. Ensuring no duplicates are copied.
-    notes.appendNewOnly(from->notes);
+    notes.appendAndReplace(from->notes);
 
 /* absorb the list of IP address sources (for max_user_ip controls) */
 AuthUserIP *new_ipdata;
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Debugging slow access

2015-01-05 Thread Steve Hill

On 10.12.14 17:09, Amos Jeffries wrote:


I'm looking for advice on figuring out what is causing intermittent
high CPU usage.


It appears that the connections gradually gain more and more notes with 
the key "token" (and values containing Kerberos tokens).  I haven't been 
able to reproduce the problem reliably enough to determine if this is 
the root of the high CPU usage problem, but it certainly doesn't look right:


When an ACL is executed that requires the login name (e.g. the 
proxy_auth ACL, or an external ACL using the %LOGIN format specifier), 
Acl.cc:AuthenticateAcl() is called.  This, in turn, calls
UserRequest.cc:tryToAuthenticateAndSetAuthUser(), which calls 
UserRequest.cc:authTryGetUser().  Here we get a call to 
Notes.cc:appendNewOnly() which appends all the notes from 
checklist->auth_user_request->user()->notes.


I can see the appendNewOnly() call sometimes ends up appending a large 
number of "token" notes (I've observed requests with a couple of hundred 
"token" notes attached to them) - the number of notes increases each 
time a Kerberos authentication is performed.  My suspicion is that this 
growth is unbounded and in some cases the number of notes could become 
large enough to be a significant performance hit.


A couple of questions spring to mind:

1. HelperReply.cc:parse() calls notes.add("token", authToken.content()) 
(i.e. it adds a token rather than replacing an existing one).  As far as 
I can tell, Squid only ever uses the first "token" note, so maybe we 
should be removing the old notes when we add a new one?


[Actually, on closer inspection, NotePairs::add() appends to the end of 
the list but NotePairs::findFirst() finds the note closest to the start 
of the list.  Unless I'm missing something, this means the newer "token" 
notes are added but never used?]


2. I'm not sure on how the ACL checklists and User objects are shared 
between connections/requests and how they are supposed to persist.  It 
seems to me that there is something wrong with the sharing/persistence 
if we're accumulating so many token notes.  As well as the performance 
problems, there could be some race conditions lurking here?


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Debugging slow access

2015-01-05 Thread Steve Hill

On 05.01.15 16:35, Eliezer Croitoru wrote:


Can you share the squid -v output and the OS you are using?


Scientific Linux 6.6, see below for the squid -v output.

I've now more or less confirmed that this is the cause of my performance 
problems - every so often I see Squid using all the CPU whilst servicing 
very few requests.  Most of the CPU time is being used by the 
appendNewOnly() function.  For example, 228 milliseconds for 
appendNewOnly() to process a request with 2687 token notes attached to 
it, and this can happen more than once per request.



Squid Cache: Version 3.4.10
configure options:  '--build=x86_64-redhat-linux-gnu' 
'--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu' 
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' 
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' 
'--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' 
'--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib' 
'--mandir=/usr/share/man' '--infodir=/usr/share/info' 
'--exec_prefix=/usr' '--libexecdir=/usr/lib64/squid' 
'--localstatedir=/var' '--datadir=/usr/share/squid' 
'--sysconfdir=/etc/squid' '--with-logdir=$(localstatedir)/log/squid' 
'--with-pidfile=$(localstatedir)/run/squid.pid' 
'--disable-dependency-tracking' '--enable-arp-acl' 
'--enable-follow-x-forwarded-for' '--enable-auth' 
'--enable-auth-basic-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL,DB,POP3,squid_radius_auth' 
'--enable-auth-ntlm-helpers=smb_lm,no_check,fakeauth' 
'--enable-auth-digest-helpers=password,ldap,eDirectory' 
'--enable-auth-negotiate-helpers=squid_kerb_auth' 
'--enable-external-acl-helpers=file_userip,LDAP_group,unix_group,wbinfo_group' 
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost' 
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client' 
'--enable-ident-lookups' '--enable-linux-netfilter' 
'--enable-referer-log' '--enable-removal-policies=heap,lru' 
'--enable-snmp' '--enable-ssl' '--enable-storeio=aufs,diskd,ufs,rock' 
'--enable-useragent-log' '--enable-wccpv2' '--enable-esi' '--with-aio' 
'--with-default-user=squid' '--with-filedescriptors=16384' '--with-dl' 
'--with-openssl' '--with-pthreads' 'build_alias=x86_64-redhat-linux-gnu' 
'host_alias=x86_64-redhat-linux-gnu' 
'target_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-fPIE -Os -g -pipe 
-fsigned-char -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions 
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' 
'LDFLAGS=-pie' 'CXXFLAGS=-fPIE -O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m64 -mtune=generic' 
'PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/share/pkgconfig' 
--enable-ltdl-convenience



--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Assertion failure: DestinationIp.cc:60

2014-11-18 Thread Steve Hill
I'm seeing a lot of this in both 3.4.6 and 3.4.9:

2014/11/18 15:08:48 kid1| assertion failed: DestinationIp.cc:60:
checklist->conn() && checklist->conn()->clientConnection != NULL

I've looked through Bugzilla and couldn't see anything regarding this -
is this a known bug?

-- 

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-825748 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-824568 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] RFC2616 headers in bumped requests

2014-11-04 Thread Steve Hill

Squid (correctly) inserts Via and X-Forwarded-For headers into requests
that it is proxying.  However, in the case of encrypted traffic, the
server and client are expecting the traffic to reach the other end
as-is, since usually this could not be intercepted.  With SSL bumped
requests this is no longer true - the proxy can (and does) modify the
traffic, by inserting these headers.

So I'm asking the question: is this behavior considered desirable, or
should we be attempting to modify the request as little as possible for
compatibility reasons?

I've just come across a web server that throws its toys out of the pram
when it sees a Via header in an HTTPS request, and unfortunately it's
quite a big one - Yahoo.  See this request:

-
GET /news/degrees-lead-best-paid-careers-141513989.html HTTP/1.1
Host: uk.finance.yahoo.com
Via: 1.1

HTTP/1.1 301 Moved Permanently
Date: Tue, 04 Nov 2014 09:55:40 GMT
Via: http/1.1 yts212.global.media.ir2.yahoo.com (ApacheTrafficServer [c
s f ]), http/1.1 r04.ycpi.ams.yahoo.net (ApacheTrafficServer [cMsSfW])
Server: ATS
Strict-Transport-Security: max-age=172800
Location:
https://uk.finance.yahoo.com/news/degrees-lead-best-paid-careers-141513989.html
Content-Length: 0
Age: 0
Connection: keep-alive
-

Compare to:

-
GET /news/degrees-lead-best-paid-careers-141513989.html HTTP/1.1
Host: uk.finance.yahoo.com

HTTP/1.1 200 OK
...
-


Note that the 301 that they return when a Via header is present just
points back at the same URI, so the client never gets the object it
requested.

For now I have worked around it with:
  request_header_access Via deny https
  request_header_access X-Forwarded-For deny https
But it does make me wonder if inserting the headers into bumped traffic
is a sensible thing to do.

-- 

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-825748 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-824568 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL bump fails accessing .gov.uk servers

2014-11-04 Thread Steve Hill
On 31/10/14 20:03, Dieter Bloms wrote:

 but when the server is broken, it will not work.
 Have a look at:
 
 https://www.ssllabs.com/ssltest/analyze.html?d=www.taxdisc.service.gov.uk
 
 It works correctly when FireFox connects directly to the web server
 rather than going through the proxy.
 
 yes the browsers have a workaround and try with different cipher suites,
 when the first connect fails.
 
 So my question is: is the web server broken, or am I misunderstanding
 something?
 
 The webserver is broken.

Many thanks for this - I have emailed them, which I fully expect them to
ignore  :)

-- 

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-825748 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-824568 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] leaking memory In Squid 3.4.6

2014-10-09 Thread Steve Hill

On 08.10.14 15:05, Amos Jeffries wrote:


New patch added to bug 4088. Please see if it resolves the
external_acl_type leak.


Despite the external ACL cache leak being plugged, I'm still getting a 
serious memory leak.  This data was captured overnight on a production 
server, graphing memory usage against requests:

  http://persephone.opendium.net/~steve/squid-memory.png

The graph starts at around 18:00 yesterday evening, ending at around 
09:00 this morning.  I've included the yellow requests per minute line 
so you can see how busy the server is - it starts off pretty quiet in 
the evening and gets quieter through the night, but then traffic picks 
up this morning.


The accounted memory increases slightly through the run, but not 
significantly enough for me to worry about for the time being.  My 
concern is the unaccounted memory rapidly increasing.  From the graph, 
it is clear that it is not leaking a fixed amount per request, but I 
can't figure out what correlates with the leak.


Here's an overview of what this Squid is doing:
- Single process - no SMP workers
- External ACLs
- TPROXY
- Kerberos and Basic auth.
- SSL Bump
- ICAP
- No memory caching (cache_mem 0)
- No disk caching (cache_dir isn't set)
- Almost all non-HTTPS traffic is sent to a parent proxy.
- HTTPS traffic is sent direct

Config file:

auth_param negotiate program /usr/lib64/squid/negotiate_wrapper_auth 
--ntlm /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --kerberos 
/usr/lib64/squid/negotiate_kerberos_auth -s HTTP/proxy.example.com

auth_param negotiate children 50
auth_param negotiate keep_alive off

auth_param basic program /usr/lib64/squid/basic_pam_auth -r
auth_param basic children 50
auth_param basic realm Iceni Web Proxy
auth_param basic credentialsttl 2 hours


shutdown_lifetime 3 seconds
forward_max_tries 40
icap_service_failure_limit -1
host_verify_strict off
spoof_client_ip deny all

logformat iceni %tg.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a 
%mt "%{User-Agent}h" %lp

access_log stdio:/var/log/squid-nocache/access.log iceni
cache_log /var/log/squid-nocache/cache.log
cache_store_log none
pid_filename /var/run/squid-nocache.pid
coredump_dir /var/spool/squid-nocache
state_dir /var/run/squid-nocache


##
# ACL definitions
##

external_acl_type preauth cache=0 children-max=1 concurrency=100 ttl=60 
negative_ttl=0 %SRC %{User-Agent} %URI %METHOD /usr/sbin/squid-preauth 
/etc/iceni/authcached/authcached.psk

acl preauth external preauth
acl preauth_tproxy  external preauth transparent
acl preauth_ok  note auth_tag preauth_ok
acl preauth_donenote auth_tag preauth_done
acl need_http_auth  note auth_tag http_auth
acl need_cp_authnote auth_tag cp_auth
acl need_postauth_sync  note auth_tag postauth_sync
acl need_postauth_async note auth_tag postauth_async

external_acl_type postauth_async cache=0 children-max=1 concurrency=100 
ttl=0 grace=100 %SRC %{User-Agent} %LOGIN %EXT_USER 
/usr/sbin/squid-postauth /etc/iceni/authcached/authcached.psk
external_acl_type postauth_sync cache=0 children-max=1 concurrency=100 
ttl=0 grace=0 %SRC %{User-Agent} %LOGIN %EXT_USER 
/usr/sbin/squid-postauth /etc/iceni/authcached/authcached.psk
#external_acl_type postauth_async cache=1 children-max=1 concurrency=100 
ttl=1 negative_ttl=1 grace=100 %SRC %{User-Agent} %LOGIN %EXT_USER 
/usr/sbin/squid-postauth /etc/iceni/authcached/authcached.psk
#external_acl_type postauth_sync cache=1 children-max=1 concurrency=100 
ttl=1 negative_ttl=1 grace=0 %SRC %{User-Agent} %LOGIN %EXT_USER 
/usr/sbin/squid-postauth /etc/iceni/authcached/authcached.psk

acl postauth_async  external postauth_async
acl postauth_sync   external postauth_sync

# Show the captive portal login page (use with http_access deny)
acl show_login_page src all
deny_info 
302:https://%h/webproxy/captive_portal/captive_portal_login?c=%o 
show_login_page


# A bodge to ensure accesses to this machine aren't authenticated or filtered.
# /etc/squid/local_ips is automatically updated by the init script when Squid
# starts or reloads, so Squid should be reloaded whenever the machine's IPs
# change (yuck!).
acl local_ips   dst /etc/squid/local_ips

acl SSL_ports   port 443

acl Safe_ports  port 80 # http
acl Safe_ports  port 21 # ftp
acl Safe_ports  port 443# https
acl Safe_ports  port 70 # gopher
acl Safe_ports  port 210# wais
acl Safe_ports  port 1025-65535 # unregistered ports
acl Safe_ports  port 280# http-mgmt
acl Safe_ports  port 488# gss-http
acl Safe_ports  port 591# filemaker
acl Safe_ports  port 777# multiling http

# CONNECT matches the encrypted tunnel, https matches the decrypted requests
# inside it when it is bumped.
acl CONNECT method CONNECT

Re: [squid-users] leaking memory in squid 3.4.8 and 3.4.7.

2014-09-30 Thread Steve Hill

On 29.09.14 13:39, Eliezer Croitoru wrote:

Hey Steve,

Can you share the basic cache manager requests statistics and the up
time for the service?
(mgr:info)


This is with 8 workers and was restarted this morning, about 6 hours 
ago.  As you can see, it's using about 5GB at the moment - as mentioned, 
there is no caching enabled for this configuration so there doesn't seem 
much reason for it to grow so large.


Squid Object Cache: Version 3.4.6
Build Info:
Start Time: Tue, 30 Sep 2014 08:33:01 GMT
Current Time:   Tue, 30 Sep 2014 14:14:28 GMT
Connection information for squid:
Number of clients accessing cache:  2545
Number of HTTP requests received:   926815
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Number of HTCP messages received:   0
Number of HTCP messages sent:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   2714.3
Average ICP messages per minute since start:0.0
Select loop called: 33693001 times, 5.219 ms avg
Cache information for squid:
Hits as % of all requests:  5min: 0.0%, 60min: 0.0%
Hits as % of bytes sent:5min: 3.5%, 60min: 4.4%
Memory hits as % of hit requests:   5min: 0.0%, 60min: 0.0%
Disk hits as % of hit requests: 5min: 0.0%, 60min: 0.0%
Storage Swap size:  0 KB
Storage Swap capacity:   0.0% used,  0.0% free
Storage Mem size:   2168 KB
Storage Mem capacity:0.0% used,  0.0% free
Mean Object Size:   0.00 KB
Requests given to unlinkd:  0
Median Service Times (seconds)  5 min60 min:
HTTP Requests (All):   0.06557  0.06988
Cache Misses:  0.06629  0.07171
Cache Hits:0.0  0.0
Near Hits: 0.0  0.0
Not-Modified Replies:  0.0  0.0
DNS Lookups:   0.00012  0.0
ICP Queries:   0.0  0.0
Resource usage for squid:
UP Time:20487.255 seconds
CPU Time:   3177.711 seconds
CPU Usage:  15.51%
CPU Usage, 5 minute avg:11.44%
CPU Usage, 60 minute avg:   12.40%
Maximum Resident Size: 19930960 KB
Page faults with physical i/o: 6
Memory usage for squid via mallinfo():
Total space in arena:  5014812 KB
Ordinary blocks:   5004965 KB   2620 blks
Small blocks:   0 KB  0 blks
Holding blocks: 82080 KB 48 blks
Free Small blocks:  0 KB
Free Ordinary blocks:9847 KB
Total in use:9847 KB 0%
Total free:  9847 KB 0%
Total size:5096892 KB
Memory accounted for:
Total accounted:   447132 KB   9%
memPool accounted: 447132 KB   9%
memPool unaccounted:   4649760 KB  91%
memPoolAlloc calls: 528851610
memPoolFree calls:  531832922
File descriptor usage for squid:
Maximum number of file descriptors:   131072
Largest file desc currently in use:209
Number of file desc currently in use:  754
Files queued for open:   0
Available number of file descriptors: 130318
Reserved number of file descriptors:   800
Store Disk files open:   0
Internal Data Structures:
   510 StoreEntries
   510 StoreEntries with MemObjects
   408 Hot Object Cache Items
 0 on-disk objects


For comparison, almost all of the http (but none of the https) requests 
that go through the proxy above are then sent through a second proxy 
which does do some caching, but no fancy stuff like ICAP or ssl bumping 
- the caching proxy has been running for 18 days and has a far smaller 
process size (again, 8 workers):


Squid Object Cache: Version 3.4.6
Build Info:
Start Time: Fri, 12 Sep 2014 08:48:50 GMT
Current Time:   Tue, 30 Sep 2014 14:18:35 GMT
Connection information for squid:
Number of clients accessing cache:  1
Number of HTTP requests received:   19496669
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Number of HTCP messages received:   0
Number of HTCP messages sent:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   742.7
Average ICP messages per minute since start:0.0
Select loop called: 763795228 times, 2.062 ms avg
Cache information for squid:
Hits as % of all requests:  5min: 13.1%, 60min: 12.8%
Hits as % of bytes sent:5min: 7.0%, 60min: 4.7%
Memory hits as % of hit requests:   5min: 0.0%, 60min: 0.0%
Disk hits as % of hit requests: 5min: 16.4%, 60min

[squid-users] SSL Bump and certificate pinning

2014-09-01 Thread Steve Hill


Mozilla have announced that Firefox 32 does public key pinning:
http://monica-at-mozilla.blogspot.co.uk/2014/08/firefox-32-supports-public-key-pinning.html

Obviously this has the potential to render SSL-bump considerably less 
useful.  At the moment it seems to be restricted to a small number of 
domains, but that's sure to increase.


Whilst I support the idea of ensuring that traffic isn't surreptitiously 
intercepted, there are legitimate instances where interception is 
necessary *and* the user is fully aware that it is happening (and has 
therefore imported the proxy's CA certificate into their key chain).  So 
I'm wondering if there is any kind of workaround to keep SSL-bump 
working with these sites?


1. It seems to me that imported CA certs should have some kind of flag 
associated with them to indicate that they should be trusted even for 
pinned domains.
2. I'm guessing that this is not an issue for devices that *always* go 
through an intercepting proxy, since presumably they would never get to 
see the real cert, so wouldn't pin it?  So this is mainly an issue for 
devices that move between networks?


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


Re: [squid-users] External ACL tags

2014-07-29 Thread Steve Hill

On 29.07.14 06:37, Amos Jeffries wrote:


The "note" ACL type should match against values in the "tag" key name same
as any other annotation. If that does not work try a different key name
than "tag=".


Perfect, thank you!
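
For the archives, the working form is a note ACL keyed on "tag", i.e. 
something like:

acl need_http_auth note tag http_auth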


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


[squid-users] External ACL tags

2014-07-28 Thread Steve Hill


I'm trying to build ACLs based on the tags returned by an external ACL, 
but I can't get it to work.


These are the relevant bits of my config:

external_acl_type preauth children-max=1 concurrency=100 ttl=0 
negative_ttl=0 %SRC %{User-Agent} %URI %METHOD /usr/sbin/squid-preauth

acl preauth external preauth
acl need_http_auth tag http_auth
http_access allow !tproxy !tproxy_ssl !https preauth
http_access allow !preauth_done preauth_tproxy
http_access allow proxy_auth postauth



I can see the external ACL is being called and setting various tags:

2014/07/28 17:29:40.634 kid1| external_acl.cc(1503) Start: 
externalAclLookup: looking up for '2a00:1a90:5::14 
Wget/1.12%20(linux-gnu) http://nexusuk.org/%7Esteve/empty GET' in 'preauth'.
2014/07/28 17:29:40.634 kid1| external_acl.cc(1513) Start: 
externalAclLookup: will wait for the result of '2a00:1a90:5::14 
Wget/1.12%20(linux-gnu) http://nexusuk.org/%7Esteve/empty GET' in 
'preauth' (ch=0x7f1409a399f8).
2014/07/28 17:29:40.634 kid1| external_acl.cc(871) aclMatchExternal: 
2a00:1a90:5::14 Wget/1.12%20(linux-gnu) 
http://nexusuk.org/%7Esteve/empty GET: return -1.
2014/07/28 17:29:40.634 kid1| Acl.cc(177) matches: checked: preauth = -1 
async
2014/07/28 17:29:40.634 kid1| Acl.cc(177) matches: checked: 
http_access#7 = -1 async
2014/07/28 17:29:40.634 kid1| Acl.cc(177) matches: checked: http_access 
= -1 async
2014/07/28 17:29:40.635 kid1| external_acl.cc(1371) 
externalAclHandleReply: reply={result=ERR, notes={message: 
53d67a74$2a00:1a90:5::14$baa34e80d2d5fb2549621f36616dce9000767e93b6f86b5dc8732a8c46e676ff; 
tag: http_auth; tag: cp_auth; tag: preauth_ok; tag: preauth_done; }}



But then when I test one of the tags, it seems that it isn't set:

2014/07/28 17:29:40.636 kid1| Acl.cc(157) matches: checking !preauth_done
2014/07/28 17:29:40.636 kid1| Acl.cc(157) matches: checking preauth_done
2014/07/28 17:29:40.636 kid1| StringData.cc(81) match: 
aclMatchStringList: checking 'http_auth'
2014/07/28 17:29:40.636 kid1| StringData.cc(85) match: 
aclMatchStringList: 'http_auth' NOT found

2014/07/28 17:29:40.636 kid1| Acl.cc(177) matches: checked: preauth_done = 0
2014/07/28 17:29:40.636 kid1| Acl.cc(177) matches: checked: 
!preauth_done = 1



It looks to me like it's probably only looking at the first tag that the 
ACL returned - is this a known bug?  I couldn't spot anything in Bugzilla.


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


Re: [squid-users] Squid in a WiFi Captive portal scenario

2014-05-23 Thread Steve Hill
On 14/05/14 20:02, JMangia wrote:

 Apple, Google, Microsoft implement a sort of automatic Web Popup page that
 appear just connecting to a WiFi network that implement a “captive portal”
 solution.  This popup appear even if the internet request is not coming from
 the browser but also from any other App.
 
 On iOS / Mac Os for example once activated the WiFi the OS make an HTTP
 request for http://www.apple.com/library/test/success.html with special user
 agent CaptiveNetworkSupport/1.0 and to get the Splash Landing Page the
 captive solution must just not return Success.
 
 Microsoft implement something similar with WISPr support and Android try to
 contact http://clients3.google.com/generate_204 or
 http://www.google.com/blank.html.
 
 My squid configuration deny any destination and redirect to my landing
 splash page but the user need to open the browser to get this splash page. 
 I mean if the user connect to wifi network and open any other App that use
 the connection no popup login page appear.
 
 Is there anyone else working in this scenario using Squid ?

iOS devices make a request to their servers as soon as they associate
with a wireless network.  If the request doesn't return what they expect
then they assume it's a captive portal login page and pop it up.  They
also support WISPr - if the page has WISPr XML embedded in it then the
OS will scrape the user name and password from the POST request the
first time you log in and then automatically resubmit it in future
rather than popping up the page.

Old iOS devices used http://www.apple.com/library/test/success.html but
new versions of iOS probe a variety of URIs.  If you return a login page
for any request, while the user isn't logged in, then it won't matter
which URI they use.

This works for me - Squid returns a 302 redirecting to a captive portal
login page which has embedded WISPr XML.  The devices pop up the login
page the first time and thereafter use WISPr.  Of course, the devices
must be able to access that page when they're not logged in to the
proxy!  This works for both transparent proxy connections and
non-transparent connections, but ISTR you *must* return a 302 - anything
else (such as a 200 with the login page itself, or a 407) will break.
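
The 302 itself is just a deny_info rule.  A minimal sketch, with a 
placeholder portal URL and a made-up "portal_authed" ACL standing in for 
however you track logged-in clients:

acl show_login_page src all
deny_info 302:https://portal.example.com/login show_login_page
http_access deny !portal_authed show_login_page

The deny_info fires because show_login_page is the last ACL on the 
matching deny rule.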

I've not had too much experience with MS's devices so can't really comment.

HTC Android devices have supported WISPr for quite a while I believe,
but I don't think stock Android has support (or at least if it does it's
a pretty recent addition).  The CoovaAX app will do WISPr on Android though.

Recent OS X versions also do WISPr, but only when using wifi, not for
wired connections (which seems an odd distinction!)

Another gotcha is that the WISPr service must be https with a trusted
certificate, and ISTR the devices must be able to access the CA's
servers even before being authenticated in order to verify the
certificate.  But I don't believe this affects the actual pop up login
screen, just WISPr automatic authentication.

-- 

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


[squid-users] Intermittent slowness

2014-05-08 Thread Steve Hill


I'm trying to debug an intermittent slowness problem with Squid 3.4.4. 
Unfortunately I haven't been able to figure out how to reproduce the 
problem, it just occurs every so often on a production server.  I've not 
yet tried Squid 3.4.5, but there's nothing in the change log that would 
lead me to believe that this problem has been addressed.


I've got an example web fetch from this morning:

The network traffic shows:
09:01:54.489515 client -> server TCP SYN
09:01:54.489541 server -> client TCP SYN/ACK
09:01:54.489555 client -> server TCP ACK
09:01:54.490059 client -> server HTTP GET request
09:01:54.490074 server -> client TCP ACK
09:02:09.492576 client -> server TCP FIN (client times out, tears down)
09:02:09.53 server -> client TCP ACK
09:02:35.371911 server -> client TCP FIN (server tears down connection)
The client is port 58469, the server is 3128.

As you can see, Squid never replies to the GET request (and actually, in 
this case the GET request didn't require Squid to contact another server 
- the authentication credentials were invalid, so it should have 
produced a 407).


Examining the Squid logs 
(http://persephone.nexusuk.org/~steve/cache.log.trimmed), I can see that 
Squid didn't accept the connection until 09:02:35.370


What seems to be happening is that helperStatefulHandleRead is being 
called, and taking several seconds to complete - if this happens 
frequently enough then the incoming connections get queued up and 
significantly delayed.  See the log below:


2014/05/08 09:01:53.489 kid1| comm.cc(167) comm_read: comm_read, 
queueing read for local=[::] remote=[::] FD 66 flags=1; asynCall 
0x7f9c174c4c00*1
2014/05/08 09:01:53.489 kid1| ModEpoll.cc(139) SetSelect: FD 66, type=1, 
handler=1, client_data=0x7f9bf4425328, timeout=0
2014/05/08 09:01:53.489 kid1| AsyncCallQueue.cc(53) fireNext: leaving 
helperHandleRead(local=[::] remote=[::] FD 66 flags=1, 
data=0x7f9c0a1dad78, size=120, buf=0x7f9c09f4d8a0)
2014/05/08 09:01:53.489 kid1| AsyncCallQueue.cc(51) fireNext: entering 
helperStatefulHandleRead(local=[::] remote=[::] FD 53 flags=1, 
data=0x7f9c08ae5928, size=281, buf=0x7f9bfa3e11d0)
2014/05/08 09:01:53.489 kid1| AsyncCall.cc(30) make: make call 
helperStatefulHandleRead [call44614375]
2014/05/08 09:01:58.329 kid1| AsyncCall.cc(18) AsyncCall: The AsyncCall 
helperDispatchWriteDone constructed, this=0x7f9c1a39bd40 [call44614625]
2014/05/08 09:01:58.329 kid1| Write.cc(29) Write: local=[::] remote=[::] 
FD 68 flags=1: sz 188: asynCall 0x7f9c1a39bd40*1


FDs 66 and 68 are connections to external ACL helpers, but the network 
traffic shows that the external ACLs are answering immediately, so as 
far as I can tell this delay isn't caused by the helpers themselves. 
(FD 66 received a query at 09:01:53.480992 and replied at 
09:01:53.484808; FD 68 received a query at 09:01:58.334608 and replied 
at 09:01:58.334661).


I *think* FD 53 might be a connection to the ICAP service.

I have noticed that squid often seems to use a lot of CPU time when this 
problem is occurring.


Unfortunately I don't know where to go with the debugging now - The 
current amount of debug logging produces a lot of data, but isn't really 
detailed enough for me to work out what's going on.  But as mentioned, 
since I can't reproduce this problem in a test environment, I have no 
choice but to just leave debug logging turned on on a production server.


Any suggestions / help from people more familiar with the Squid 
internals would certainly be helpful.


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


[squid-users] Broken Apple devices - repeated 407s

2014-04-29 Thread Steve Hill


Apple devices seem to be pretty broken when it comes to handling 
authenticated proxies.  However, sometimes I see behaviour that is so 
broken that it could almost be considered a DoS attack:  Devices that 
make a request, get a 407 back from the proxy and immediately make the 
same request again, still with no authentication credentials - the proxy 
returns a 407, of course, and the client requests again... repeatedly, 
with no kind of a back-off timer, going on for hours on end.  For example:


28/Apr/2014:07:45:36.194  0 10.203.1.18 TCP_DENIED/407 4660 CONNECT 
p02-ubiquity.icloud.com:443 - HIER_NONE/- text/html ubd/289 
CFNetwork/673.4 Darwin/13.1.0 (x86_64) (Macmini5%2C1)
28/Apr/2014:07:45:36.205  0 10.203.1.18 TCP_DENIED/407 4660 CONNECT 
p02-ubiquity.icloud.com:443 - HIER_NONE/- text/html ubd/289 
CFNetwork/673.4 Darwin/13.1.0 (x86_64) (Macmini5%2C1)
28/Apr/2014:07:45:36.215  0 10.203.1.18 TCP_DENIED/407 4660 CONNECT 
p02-ubiquity.icloud.com:443 - HIER_NONE/- text/html ubd/289 
CFNetwork/673.4 Darwin/13.1.0 (x86_64) (Macmini5%2C1)


(continues like that with about 100ms between requests).

And other similar requests:

28/Apr/2014:07:45:28.793  0 10.203.1.18 TCP_DENIED/407 4649 CONNECT 
keyvalueservice.icloud.com:443 - HIER_NONE/- text/html 
SyncedDefaults/91.30 (Mac OS X 10.9.2 (13C1021))
28/Apr/2014:07:45:58.358  0 10.203.1.18 TCP_DENIED/407 4630 CONNECT 
p02-caldav.icloud.com:443 - HIER_NONE/- text/html Mac_OS_X/10.9.2 
(13C1021) CalendarAgent/176
28/Apr/2014:07:45:59.114  0 10.203.1.18 TCP_DENIED/407 4612 CONNECT 
p02-bookmarks.icloud.com:443 - HIER_NONE/- text/html CoreDAV/229.6 
(13C1021)


etc...  It happens from both OS X and iOS devices every so often 
(presumably flattens the iphone battery pretty quickly!)


Clearly this is a bug in Apple's software (which I have reported, but 
they seem uninterested in fixing it*), but I'm wondering if anyone else 
has observed this behaviour and come up with any good ideas to mitigate 
it on the proxy side?



<rant>
* Apple's bug reporting process seems to be:
1. I report a bug with lots of information regarding the OS version on 
the device, how to replicate the problem, etc.
2. They sit on it for a few weeks before asking me to provide them with 
lots of logs from the device itself, which generally I can't easily do 
because I don't personally have the device.
3. I jump through the hoops and provide them with the information they 
request.

4. They sit on the bug and never bother to respond or fix it.

So given that (3) involves me spending quite a bit of time getting hold 
of a device and replicating the problem, even though I provided them 
enough information to do this themselves, and it basically seems to be a 
complete waste of my time since they then ignore the bug, I've largely 
given up reporting them now...  Which is a shame - I don't mind spending 
time collecting debugging information if it's actually going to help get 
the bug fixed, but with Apple this doesn't seem to happen.

</rant>

--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


[squid-users] Debugging slowness

2014-04-15 Thread Steve Hill

I'm trying to debug some slowness issues with Squid 3.4.4.  This is
currently under reasonably light use and every so often it becomes very
slow.

From what I can tell, the client sends a request with Negotiate auth
credentials in it.  The proxy should respond with a 407 and the
negotiate challenge, but instead it sometimes just sits there for
ages... sometimes for a minute or so!  (I'm not 100% convinced that this
is specific to Negotiate auth though).  Squid does not appear to be
running out of file descriptors.

The problem is intermittent, which is making debugging a pain.  I have a
tcpdump running and full logging turned on in squid so hopefully I can
catch some useful information the next time the problem occurs.

My question is: Once I've identified a specific request that has
experienced the problem, and want to track what Squid was doing with
that request, is there any sensible way of filtering the cache.log to
exclude the other requests that were happening concurrently?

Secondly, does anyone have any suggestions for what specific logging I
should turn on, rather than logging everything, since logging everything
slows the proxy down significantly?
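
(For example - not a definitive list, just the sections that look 
relevant to auth/helper delays - something like:

# 29 = authentication, 84 = helper process I/O, 5 = socket functions
debug_options ALL,1 29,5 84,5 5,3

rather than debug_options ALL,9 across the board.)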

Thanks.

-- 

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


Re: [squid-users] Destination address rewriting for TPROXY

2013-12-03 Thread Steve Hill

On 03/12/13 04:01, Amos Jeffries wrote:


Does the patch from http://bugs.squid-cache.org/show_bug.cgi?id=3589 fix
this for you?


Thanks for the reply...  The answer is yes and no. :)

The patch causes Squid to connect to the right place, but it appears 
that the ACLs don't necessarily get re-evaluated.


The relevant chunk of my Squid config is:

-
# cache_peer for the local webserver to prevent t-proxy spoofing of 
requests to localhost
cache_peer [::1] parent 80 0 proxy-only no-query no-digest no-tproxy 
originserver name=localhost_80

cache_peer_access localhost_80 deny !port_80
cache_peer_access localhost_80 allow to_localhost
cache_peer_access localhost_80 deny all

cache_peer [::1] parent 3129 0 proxy-only no-query no-digest no-tproxy 
name=caching

cache_peer_access caching deny to_localhost
cache_peer_access caching deny CONNECT
cache_peer_access caching deny https
cache_peer_access caching deny tproxy_ssl
cache_peer_access caching allow all

adaptation_access iceni_respmod deny to_localhost
adaptation_access iceni_respmod allow all
-

During REQMOD, the ICAP server decides whether or not the web request 
should be blocked.  For unblocked requests it either loops back the 
request unaltered or returns a 204.  For blocked requests, it rewrites 
the request to go to http://localhost/blah and the local webserver does 
the heavy lifting of presenting an error page to the user.


The cache_peer and cache_peer_access lines should cause Squid to send 
the http://localhost/blah requests directly to the local web server 
without tproxy spoofing, all other HTTP traffic goes via an upstream 
caching proxy.


The adaption_access line should prevent the http://localhost/blah 
requests going through the ICAP RESPMOD service.



However, the to_localhost ACL doesn't seem to be working.  The rewritten 
requests are still being sent through RESPMOD and the upstream proxy 
when the system is used as a transparent proxy, even though this works 
correctly for non-transparent proxying.


Replacing the to_localhost ACL with one that checks dstdomain = 
localhost works as expected, so this is a reasonable stop-gap, but it 
does seem that to_localhost is behaving in an unexpected way, since its 
behaviour changes depending on whether the proxy is transparent or not.
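
For reference, the stop-gap looks something like this (the ACL name is
arbitrary; the cache_peer/adaptation lines are the ones from the config
above with to_localhost swapped out):

-
acl localhost_dom dstdomain localhost

cache_peer_access localhost_80 allow localhost_dom
cache_peer_access caching deny localhost_dom
adaptation_access iceni_respmod deny localhost_dom
-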


--

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


[squid-users] Destination address rewriting for TPROXY

2013-12-02 Thread Steve Hill


I'm using an ICAP reqmod service to change the URI of certain requests 
(including the host name).  When running under a non-transparent proxy 
this works fine.  However, when using TPROXY, Squid uses the original 
destination IP address of the connection rather than the Host header to 
determine where to connect to, so modifying the request doesn't cause 
Squid to actually connect to a different host.


Is there any way to force Squid to connect to the host in the rewritten 
request, rather than continuing to connect to the original IP address?


I'm aware of the client_dst_passthru off option, which sounds like it 
would almost do what I want, except the manual says that this option 
gets forced back on for requests that fail host verification.
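
For clarity, the directive in question is just:

-
# use the Host header / DNS result rather than the client's original
# destination IP (but forced back on when host verification fails)
client_dst_passthru off
-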


--

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


[squid-users] TCP/DENIED 303 redirect firefox 19.0 not working

2013-10-08 Thread steve
Dear all

I compiled squid 3.3.9 on Debian and am trying to get deny_info working
properly.

When I go to HTTPS sites, I get an error message in Firefox/Chrome etc.
and no redirection. It does work with regular http requests.

I have been reading up quite a lot and I found messages saying that
if we return 303:URL, it should work.

What I see in my log file is in fact not the expected result

Log output:

1381242441.978  0 194.78.29.66 TCP_DENIED/303 331 GET
http://www.homerecording.be/ - HIER_NONE/- text/html
1381242442.034  8 194.78.29.66 TCP_MISS/302 328 GET
http://www.c2root.be/ - HIER_DIRECT/46.18.36.231 text/html
1381242442.069 21 194.78.29.66 TCP_MISS/200 7857 GET
http://www.c2root.be/viewpage.php? - HIER_DIRECT/46.18.36.231 text/html
1381242442.349  8 194.78.29.66 TCP_MISS/302 328 GET
http://www.c2root.be/ - HIER_DIRECT/46.18.36.231 text/html
1381242442.381 21 194.78.29.66 TCP_MISS/200 7857 GET
http://www.c2root.be/viewpage.php? - HIER_DIRECT/46.18.36.231 text/html
1381242463.894  0 194.78.29.66 TCP_DENIED/303 331 CONNECT
www.facebook.com:443 - HIER_NONE/- text/html

When I have a TCP_DENIED/303 in combination with a CONNECT, I simply get a
"problem loading page" error.

Any ideas?

My conf has the following modified (and only that)
acl whitelist dstdomain .c2root.be .paypal.com .hln.be
http_access allow whitelist
deny_info 303:http://www.c2root.be CONNECT
deny_info 303:http://www.c2root.be/ whitelist
http_access deny !whitelist !CONNECT
http_access deny !whitelist CONNECT
http_access deny all

Any help welcome

Kind regards

Steve






Re: [squid-users] CLOSE_WAIT

2013-03-06 Thread Steve Hill

On 23.01.13 05:12, Amos Jeffries wrote:


IIRC we tried that but it resulted in early closure of CONNECT tunnels
and a few other bad side effects on the tunnelled traffic.
Due to the way tunnel.cc and client_side.cc code interacts (badly) the
client-side code cannot know whether the tunnel is still operating or
has data buffered when it gets to the point of emitting that message.


Revisiting this problem, I can't see why keepaliveNextRequest() would be 
getting called after a successful CONNECT.


As far as I can tell, client_side_request.cc calls tunnelStart(), and 
then does nothing else with the connection.  If we can't connect to the 
remote host for whatever reason, tunnel.cc calls errorSend() and all the 
code paths seem to lead to the socket being closed; if we can connect
then I don't think client_side_request.cc touches the socket again.


--

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


Re: [squid-users] CLOSE_WAIT

2013-01-21 Thread Steve Hill

On 11.01.13 00:06, Amos Jeffries wrote:


So it seems apparent that after Squid delivers the clear-text
response, it abandons the socket but never closes it.  From looking in
the source, this is client_side.cc, and it has a comment:
// XXX: Can this happen? CONNECT tunnels have deferredRequest set.
It looks to me as if the (conn->flags.readMore) section above should
be the bit being executed, although I don't quite understand deferred
requests.  In either case, it seems like we should close the socket if
it ever gets abandoned?


Calling conn->clientConnection->close() from the else part where the 
connection is abandoned seems the right thing to do.  Is there any 
situation where closing the connection when it is abandoned is the wrong 
thing to do?
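
For concreteness, the change I'm imagining is along these lines (a sketch
against my reading of the code, not a tested patch):

-
} else {
    // connection abandoned: nothing will ever read from it again,
    // so close it rather than leaving it stuck in CLOSE_WAIT
    debugs(33, 3, HERE << "abandoning and closing " << conn->clientConnection);
    conn->clientConnection->close();
}
-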


However, since the CONNECT and the response were both served with a 
Connection: keep-alive header, it seems that readMore should really be 
true at this point anyway.  clientProcessRequest() explicitly sets 
readMore = false for CONNECT requests, so I don't understand how Squid 
handles keep-alive CONNECT tunnels?


--

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


[squid-users] Squid 3.2.6 fails to handle large POSTs when returning errors

2013-01-17 Thread Steve Hill


I've come across what appears to be a bug in Squid's handling of POST 
requests when returning an error to the client.


If the UA makes a POST request with a reasonably large object (e.g. a 
file upload of a few megabytes) and Squid needs to return an error (I've 
tested this for 403 and 407 responses), it returns the response 
immediately after receiving the request headers.  The client continues 
to send the POST body and Squid continues to read it.  However, 
eventually, Squid logs:


2013/01/17 17:50:50.780 kid1| client_side.cc(2322) 
maybeMakeSpaceAvailable: request buffer full: 
client_request_buffer_max_size=524288


Squid stops reading from the client and since the client is still 
sending, the socket's rx queue grows.  Eventually the rx queue is full 
and the TCP stack shrinks the TCP window to 0.  The client and Squid 
both sit there doing nothing with an open socket until the client drops 
the connection (which may be some time).


So, it seems that Squid is storing the POST body in an internal buffer, 
waiting to do something with it that will never happen.


This is reproducible by setting an ACL:
  http_access deny all
Then make a large post request via the proxy:
  curl --proxy http://yourproxy:3128 -v -F foo=@somefile -H Expect: 
http://someurl

Where somefile is a file of a few megabytes in size.

I've seen this in the field with Internet Explorer and proxy 
authentication - uploading a file can result in IE making the initial 
unauthenticated request and then hanging, waiting for the upload to 
complete before redoing the request with auth credentials.


--

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


[squid-users] Marking squid-webserver traffic

2013-01-14 Thread Steve Hill


I'm setting up some traffic routing to use Squid's TPROXY with a 
separate router.  So the network design looks like:



Clients -----.       .----- Squid
              \     /
               \   /
               Router
                 |
                 |
             Internet

There will be a GRE tunnel between Squid and the router.  So the idea is:
- The router intercepts web requests from the clients, uses iptables to 
mark them and routes them over the GRE tunnel to Squid.
- The Squid proxy machine intercepts the traffic coming from the GRE 
interface and redirects it to TPROXY.

- Squid does its thing, probably making a request to a web server.
- The traffic to the web server is routed over the GRE tunnel back to 
the router.
- The router CONNMARKs the traffic from the GRE tunnel and directs it 
out to the internet.
- Reply traffic from the webserver has its connmark restored by the 
router and is sent back over the GRE tunnel to Squid.

- Squid's response to the client is sent over the GRE tunnel to the router.
- The router sends the response on to the client.

I can do everything except identify Squid's requests to the web server 
and therefore route them back over GRE.  I could use tcp_outgoing_tos 
and then route based on ToS, but I'd prefer to avoid abusing the ToS 
flags - is there a similar way of setting the fwmark?  qos_flows only 
seems to control the replies to the client rather than requests to the 
web server...
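
For illustration, the ToS-based bodge I'm trying to avoid would look
something like this (ToS value, table number and tunnel name are all
arbitrary, and it assumes Linux iproute2):

-
# squid.conf: tag Squid's outgoing webserver traffic
tcp_outgoing_tos 0x20 all
-

and on the Squid host:

-
ip rule add tos 0x20 table 100
ip route add default dev gre1 table 100
-

(If memory serves, newer Squids also have a tcp_outgoing_mark directive
that sets the fwmark directly, though I haven't checked which versions
support it.)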


I've read through the documentation for setting up wccp, but as far as I 
can see the example configurations only route client-squid traffic via 
GRE and the squid-client and squid-webserver traffic all follows the 
usual routing instead (which would require Squid to have its own 
dedicated connection to the router).


--

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


Re: [squid-users] CLOSE_WAIT

2013-01-11 Thread Steve Hill

On 11/01/13 00:06, Amos Jeffries wrote:


Okay. So the source of the problem is that Squid thinks there is
something using the socket, but it never got into the tunnel code which
would have closed it?
  Is the 403 message being generated inside Squid or by ICAP?


The 403 isn't generated by Squid.  It can be generated in 2 different 
ways (both result in the same problem):

1. The REQMOD ICAP server generates a 403 Forbidden HTTP response.
2. The REQMOD ICAP server rewrites the HTTP request to GET from a local 
webserver and that generates the 403 Forbidden response.



In the case where the server generates the 403 and things hang it will
be because Squid is expecting a close to arrive from server or client.


Setting the Connection: Close HTTP header in the 403 response doesn't 
help.



Once a tunnel is open and transmitting data it is up to those endpoints
to ensure keep-alive and other such details, although Squid applies a
timeout since last byte transferred.


Both the browser and the web server have closed the connection, but 
squid isn't closing its side of the connection to the browser.  Squid is 
also not reading from the connection to the browser between the 403 
response being sent and the browser dropping the connection (anything 
the browser sends after the 403 just piles up in the socket's rx buffer).



--

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


Re: [squid-users] CLOSE_WAIT

2013-01-10 Thread Steve Hill

On 09/01/13 21:07, Amos Jeffries wrote:


Does the CONNECT request contain Connection:close or
Connection:keep-alive? Squid supports keep-alive on CONNECT requests in
these situations where the CONNECT size is known and may be waiting for
another client request.


The client sends Proxy-Connection: keep-alive, so it would indeed be 
possible for the connection to be reused.  I should add that I'm seeing 
this with a non-transparent configuration (i.e. without Tproxy and 
without ssl_bump).



Please upgrade to 3.2.6 (should anyway for the CVE resolved there) and
see if this issue is gone there.


I've now upgraded my test server and the problem remains.

The 403 Forbidden being sent back to the client didn't have a Content-Length 
header, which of course caused the client to drop the connection. 
 If I add a Content-Length, the connection stays alive and I think it's 
a bit more obvious what's happening:


1. Client connects to Squid and sends a CONNECT.
2. Squid passes the request to the ICAP REQMOD service.
3. ICAP rewrites the request.
4. Squid returns a 403 Forbidden to the client in the clear.  At this 
point, the connection is still alive with empty queues.  Squid's 
cache.log shows:
2013/01/10 17:52:18 kid1| abandoning local=[2a00:1a90:5::9]:3128 
remote=[2001:4d48:ad51:501:226:bbff:fe18:f3ff]:49926 FD 19 flags=1


Now, the user retries the connection:
5. Client sends CONNECT over the existing connection.
6. Squid does not read the socket (netstat shows the rx queue on the 
socket is 220 octets long, which is the whole request the client just sent).

7. The client sits there spinning.
8. Eventually the client times out and drops the connection.
The connection is now in CLOSE_WAIT on the Squid server and will remain 
like that indefinitely.


So it seems apparent that after Squid delivers the clear-text response, 
it abandons the socket but never closes it.  From looking in the source, 
this is client_side.cc, and it has a comment:

// XXX: Can this happen? CONNECT tunnels have deferredRequest set.
It looks to me as if the (conn->flags.readMore) section above should be 
the bit being executed, although I don't quite understand deferred 
requests.  In either case, it seems like we should close the socket if 
it ever gets abandoned?



A few notes on the REQMOD rewrite:
My ICAP service can generate a 403 Forbidden response during REQMOD, 
or rewrite the request to a GET to a local web server.  In the latter 
case, the local webserver generates a 403 Forbidden response.  In both 
cases the client sees a similar response to its request, and in both 
cases this bug manifests.


Interestingly, if the ICAP service closes the ICAP connection instead of 
returning a response, Squid generates a 500 Internal Server Error and 
does not abandon the socket (the client then drops the connection, which 
squid handles correctly, and therefore doesn't end in CLOSE_WAIT).


--

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


[squid-users] CLOSE_WAIT

2013-01-09 Thread Steve Hill


I have a busy Squid 3.2.3 server that constantly has a huge number of 
connections tied up in CLOSE_WAIT (i.e. at the moment it has 364 
ESTABLISHED but 3622 in CLOSE_WAIT).


tcp        1      0 :::172.23.3.254:8080        :::172.23.2.158:49615       CLOSE_WAIT  32303/(squid-1)


All of these sockets have an rx queue containing 1 byte.  I'm aware that 
I can reduce client_lifetime, but I'm wondering if I have a more 
fundamental problem somewhere since Squid appears to not be flushing the 
queue.  Can anyone cast any light onto this behaviour?


Other servers running various versions of Squid (including 3.2.3) don't 
seem to be exhibiting the problem to such an extent (I'm still seeing a 
number of CLOSE_WAIT sockets with an rx queue length of 1 on these 
servers, but in relatively small quantities.)


--

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


Re: [squid-users] CLOSE_WAIT

2013-01-09 Thread Steve Hill

On 09/01/13 10:14, Steve Hill wrote:


I have a busy Squid 3.2.3 server that constantly has a huge number of
connections tied up in CLOSE_WAIT (i.e. at the moment it has 364
ESTABLISHED but 3622 in CLOSE_WAIT).

tcp        1      0 :::172.23.3.254:8080        :::172.23.2.158:49615       CLOSE_WAIT  32303/(squid-1)


Further to this, it appears that this is triggered by ICAP REQMOD 
rewrites of CONNECT requests:


1. Client sends a CONNECT foo.example.com:443 HTTP/1.1 request to the 
proxy.

2. Squid passes the request to the ICAP REQMOD service.
3. The ICAP REQMOD service wants to deny the request, so rewrites the 
request.
4. Squid returns a 403 Forbidden response to the client in clear text 
(this is allowed, as it is seen by the client as a response from the 
proxy rather than a response from the web server, although very few 
clients actually display the page contents these days due to security 
restrictions).

5. The client sends a FIN
At this point, the socket stays open on the Squid server - Squid never 
closes it and there is 1 byte in the socket's rx queue.  I have no idea 
what that 1 byte is though.  (Since all requests are terminated with a 
\r\n, maybe squid doesn't read the \n?)



--

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


[squid-users] TPROXY with IPv6

2012-12-20 Thread Steve Hill


Squid's TPROXY sockets only seem to bind to the IPv4 stack - Some 
Googling suggests it can be made to work with IPv6, but I've not found 
anything explaining how.  What am I missing?


Thanks.

--

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


Re: [squid-users] TPROXY with IPv6

2012-12-20 Thread Steve Hill

On 20.12.12 13:58, Paweł Mojski wrote:


Search the list archives.
I posted working config for ipv6 few months ago.


Thanks - I found your config:
http://www.squid-cache.org/mail-archive/squid-users/201206/0281.html
It didn't explain how it could work when Squid only binds the tproxy 
socket to the IPv4 stack.  However, I just restarted squid and it has 
now bound to the IPv6 stack so I'm not sure what was previously 
preventing it.  Anyway, looks like the problem is solved - thanks.


--

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


[squid-users] Negotiate NTLM authentication broken?, 3.2.3

2012-12-07 Thread Steve Hill


I've just upgraded a machine from Squid 3.2.0 to 3.2.3 and can't seem to 
get the Negotiate authenticator to work any more.


From the traffic, I can see:
1. The client sends an unauthenticated request
2. Squid returns a 407 with Proxy-Authenticate: Negotiate
3. The client resends the request with Proxy-Authorization: Negotiate 
TlRMTVNTUAABl4II4gAGAbEdDw==

4. Squid returns a 407 with no Proxy-Authenticate header

Example traffic:
-
GET http://example.com HTTP/1.1
Proxy-Authorization: Negotiate 
TlRMTVNTUAABl4II4gAGAbEdDw==


HTTP/1.1 407 Proxy Authentication Required
Server: squid/3.2.3
Mime-Version: 1.0
Date: Fri, 07 Dec 2012 16:22:58 GMT
Content-Type: text/html
Content-Length: 3878
X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
Vary: Accept-Language
Content-Language: en
X-Cache: MISS from foo
X-Cache-Lookup: NONE from foo:3128
Via: 1.1 foo (squid/3.2.3)
Connection: keep-alive

-

This does not appear to be a problem with negotiate_wrapper itself as I 
can see from the logs that Squid has got a challenge string from it:
2012/12/07 16:29:39.051 kid1| UserRequest.cc(170) authenticate: need to 
challenge client 
'TlRMTVNTUAACBgAGADAVgonifVf3m5EEkgIAAC4ALgA2SwBTAEIAAgAGAEsAUwBCAAEACgBJAEMARQBOAEkABAMACgBpAGMAZQBuAGkAAA=='!


Everything I see in the logs indicates that Squid knows it has to send 
the challenge to the client, but the header never makes it into the 
response.


I've trimmed my configuration down to a minimum:
-
debug_options ALL,9

auth_param negotiate program /usr/lib64/squid/negotiate_wrapper_auth -d 
--ntlm /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp 
--domain=FOO --kerberos /usr/lib64/squid/negotiate_kerberos_auth -s HTTP/foo

auth_param negotiate children 50
auth_param negotiate keep_alive off

auth_param basic program /usr/lib64/squid/basic_pam_auth
auth_param basic children 50
auth_param basic realm Iceni Web Proxy
auth_param basic credentialsttl 2 hours

acl proxy_auth proxy_auth REQUIRED

http_access allow proxy_auth
http_access deny all

icp_access deny all
htcp_access deny all

http_port 3128

hierarchy_stoplist cgi-bin ?

logformat iceni %tg.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a 
%mt %{User-Agent}h

access_log stdio:/var/log/squid/access.log iceni
cache_log /var/log/squid/cache.log
cache_store_log stdio:/var/log/squid/store.log
pid_filename /var/run/squid.pid

coredump_dir /var/spool/squid-nocache
-

The appropriate parts of cache.log are available at: 
http://persephone.nexusuk.org/~steve/cache.log


--

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


Re: [squid-users] A way to redirect google/Youtube SSL

2012-11-29 Thread Steve Hill

On 28.11.12 23:22, David Touzeau wrote:

Thanks !!! But what about Youtube ?


I'm not aware of anything similar for youtube I'm afraid, but if you 
come across anything I'd be very interested.


The other possibility is to ssl-bump the https sessions, but that's a 
bit nasty.


--

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


[squid-users] Tproxy without spoofed source address

2012-11-28 Thread Steve Hill


I need to transparently proxy traffic, and the best way to do this seems 
to be to use tproxy, since that allows IPv6 traffic to be supported. 
However, when using tproxy, Squid spoofs the client's source address 
when making the connection to the web server - this is something I don't 
need, and breaks requests that end up going to web servers on the local 
network since the return traffic from the web server ends up going 
straight back to the client instead of back to Squid.


So far the only way I've found to disable the spoofing behaviour is to 
send the traffic via a non-transparent upstream proxy.  Is there some 
way to turn off source address spoofing without using a second proxy?


--

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com


Re: [squid-users] A way to redirect google/Youtube SSL

2012-11-28 Thread Steve Hill


On 28.11.12 13:52, David Touzeau wrote:


Since Google and YouTube force browsers to use SSL we have a lack of
statistics and web filtering with Squid.
I would like to know if there is a good way to redirect SSL requests to
Google/YouTube to non-encrypted requests?


Google allow you to do this by redirecting requests to
nosslsearch.google.com:

http://support.google.com/websearch/bin/answer.py?hl=en&answer=186669
Look at the link at the bottom of the page - Information for school
network administrators about the No-SSL option.

You can follow their advice and bodge DNS records into your DNS server,
add the address to your /etc/hosts file, or use an ICAP server to
rewrite the CONNECT requests.  Beware that the http traffic itself must
be unmodified (e.g. the GET and Host headers must still point at
www.google.co.uk or wherever), just the IP address you connect to changes.
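
e.g. a hosts-file bodge would be something like this (the address shown is
a placeholder - resolve nosslsearch.google.com yourself and pin whatever it
returns):

-
# /etc/hosts
192.0.2.53   www.google.com www.google.co.uk
-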

--

 - Steve




[squid-users] ICAP breaks HTTP responses with 1 octet bodies

2012-06-14 Thread Steve Hill


I'm having some problems with Squid's ICAP client breaking on RESPMOD 
when handling responses with a body exactly 1 octet long.


- The browser makes a request to Squid
- Squid makes a request to the web server and receives back a response 
with a Content-Length: 1 header and a 1 octet body.
- The response gets sent to the ICAP server, which replies with a 204 
No Modifications.
- Squid sends the response on to the browser, with the Content-Length: 
1 header intact, but doesn't send a body.  The browser sits there 
indefinitely waiting for the body to appear.


As far as I can tell, the ICAP client successfully copies the body from 
the virgin response to the adapted response.


The problem appears to be with 
ServerStateData::handleMoreAdaptedBodyAvailable() - The API for 
StoreEntry::bytesWanted() seems to be that it will return 0 if it wants 
no data, or up to aRange.end-1 if more data is wanted.  This means that 
if aRange.end == 1, which is the case when we only have 1 octet of data, 
it is always going to look like it can't accept any more data.


The fact that it can't accept a single octet is actually noted in 
ServerStateData::handleMoreAdaptedBodyAvailable() in Server.cc:

  // XXX: bytesWanted API does not allow us to write just one byte!

I've resolved this problem by changing the following lines in 
ServerStateData::handleMoreAdaptedBodyAvailable():
  const size_t bytesWanted = entry->bytesWanted(Range<size_t>(0, contentSize));

  const size_t spaceAvailable = bytesWanted > 0 ? (bytesWanted + 1) : 0;

To:
  const size_t spaceAvailable = entry->bytesWanted(Range<size_t>(0, contentSize+1));


(Patch attached)

However, I'm not sure if this is a correct fix - what effect does adding 
1 to the contentSize given to bytesWanted() actually have?  And is this 
supposed to be handled elsewhere?





--

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com
Index: src/Server.cc
===
--- src/Server.cc   (revision 115)
+++ src/Server.cc   (working copy)
@@ -723,8 +723,7 @@
 
 // XXX: entry->bytesWanted returns contentSize-1 if entry can accept data.
 // We have to add 1 to avoid suspending forever.
-const size_t bytesWanted = entry->bytesWanted(Range<size_t>(0, contentSize));
-const size_t spaceAvailable = bytesWanted > 0 ? (bytesWanted + 1) : 0;
+const size_t spaceAvailable = entry->bytesWanted(Range<size_t>(0, contentSize+1));
 
 if (spaceAvailable < contentSize ) {
 // No or partial body data consuming
@@ -734,8 +733,7 @@
 entry->deferProducer(call);
 }
 
-// XXX: bytesWanted API does not allow us to write just one byte!
-if (!spaceAvailable && contentSize > 1)  {
+if (!spaceAvailable && contentSize > 0)  {
 debugs(11, 5, HERE << "NOT storing " << contentSize << " bytes of adapted "
        << "response body at offset " << adaptedBodySource->consumedSize());
 return;


[squid-users] Squid Processes

2012-02-21 Thread Steve Tatlow
Hi,

We are running squid as a transparent proxy, with dansguardian doing the
content filtering. All traffic will be coming from localhost and no
authentication is required.  Can someone tell me how I ensure there are
enough squid processes to support a large number of users (maybe 250
concurrent users)

Thanks
Steve



[squid-users] Trying to install squid with --enable-ssl-crtd switch

2011-09-23 Thread Steve Mansfield
Hello

I'm trying to build & install squid with the following options, per
http://wiki.squid-cache.org/Features/DynamicSslCert

./configure --enable-ssl --enable-ssl-crtd

and I see from another thread that on linux (Ubuntu) you need to
install the OpenSSL Dev Libs.

I've done this, but it still doesn't work, due to SSL errors, and the
helper application ssl_crtd never gets built. Building without the
--enable-ssl-crtd switch works OK.
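
For reference, what I've done so far is roughly this (libssl-dev being the
OpenSSL dev package on my Ubuntu box; names may differ between releases):

-
sudo apt-get install build-essential libssl-dev
./configure --enable-ssl --enable-ssl-crtd
make
-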

Is there a list of the dependencies?

TIA

Steve


[squid-users] How to diagnose race condition?

2011-04-25 Thread Steve Snyder
I just upgraded from CentOS 5.5 to CentOS 5.6, while running Squid v3.1.12.1 in 
both environments, and somehow created a race condition in the process.  
Besides updating the 200+ software packages that are the difference between 5.5 
and 5.6, I configured and enabled DNSSEC on my nameserver.

What I see now is that Squid started at boot time uses 100% CPU, with no 
traffic at all, and will stay that way seemingly forever.  If I shut down Squid 
and restart it, all is well.  So: Squid started at boot time = bad, Squid 
started post-boot = good.  There is nothing unusual in either the system or 
Squid logs to suggest what the problem is.

Can anyone suggest how to diagnose what Squid is doing/waiting for?
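
In case it helps, the sort of thing I'm after is, e.g. (standard Linux
tools, PID being the spinning worker):

-
# summarise what the process is doing at the syscall level
strace -c -f -p PID      # Ctrl-C to stop and print the summary

# or grab a one-off stack trace
gdb -batch -p PID -ex 'thread apply all bt'
-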

Thanks.


[squid-users] ntlmauthenticator errors

2011-03-22 Thread Steve-Mustafa Ismail Mustafa

Hi,

I've been trying to set up squid to limit access to the internet at the 
local Red Cross hospital because of over-usage. As such, I've set up a 
security group on our AD, InternetUsers, where only members of that group 
can connect to the web; otherwise all their traffic stays within our local 
network.


I've joined Debian Squeeze to the domain without much hassle.  This is 
on a VM Debian Squeeze, Squid 2.7 stable 9.


My squid.conf is:

auth_param ntlm program /usr/lib/squid/ntlm_auth 
--helper-protocol=squid-2.5-ntlmssp 
--require-membership-of=RCH\InternetUsers
auth_param basic program /usr/lib/squid/ntlm_auth 
--helper-protocol=squid-2.5-basic 
--require-membership-of=RCH\InternetUsers

auth_param ntlm children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
external_acl_type nt_group ttl=0 concurrency=5 %LOGIN

/usr/lib/squid/wbinfo_group.pl
#auth_param ntlm max_challenge_reuses 0
#auth_param ntlm max_challenge_lifetime 2 minutes


http_port 3128
acl all src 192.168.10.0/24
acl InternetUsers proxy_auth REQUIRED
http_access allow InternetUsers
http_access deny all


You can see that it needs cleaning up a bit because of the 
experimentation that went on trying to get it to work.  
max_challenge_reuses and max_challenge_lifetime are a carryover from 
when I followed the suggested config on the site (outdated I guess).


Firing up squid through /etc/init.d/squid start gives me unrecognized 
'/usr/lib/squid/wbinfo_group.pl'

Starting it with /usr/sbin/squid -NCd1 comes back with:
WARNING: ntlmauthenticator #2 (FD 9) exited
WARNING: ntlmauthenticator #2 (FD 10) exited
WARNING: ntlmauthenticator #2 (FD 11) exited
Too few ntlmauthenticator processes are running
Aborted

checking the log messages yields:

Squid Parent: child process 24182 started
Squid Parent: child process 24182 exited due to signal 6

Any clues? I'm completely stumped and I've been at this a few days now 
and I'd like to move on.


Thanks in advance

SMIM



[squid-users] Help - Squid 3.2.0.4 - IPV6 - Crashes

2011-03-19 Thread Steve
Hi all,

I wonder if anyone can help. I am using Squid 3.2.0.4 compiled with
--disable-ipv6. Whenever I access hosts that have AAAA records as well as A
records then an attempt is made to access them using IPv6. My network
doesn't have IPv6 so it is no surprise the attempts to connect to IPv6
destinations do not work. Given that I have compiled with the --disable-ipv6
option I would not expect Squid to even bother looking up the AAAA records.
Squid also crashes and automatically restarts after the connect to an IPv6
address fails.

I have included a relevant log extract below showing the problem as well as
output from squid -v.

Is there any way to force Squid to fully ignore IPv6?

Any assistance or pointers would be gratefully received.

Thanks
Steve

Squid Cache: Version 3.2.0.4
configure options:  '--prefix=/usr/arcadia/Squid_0'
'--enable-replacement-policies=heap,lru' '--enable-snmp'
'--enable-delay-pools' '--enable-removal-policies=heap,lru'
'--enable-useragent-log' '--enable-auth' '--enable-auth-basic=LDAP'
'--enable-external-acl-helpers=LDAP_group' '--disable-ipv6'
--enable-ltdl-convenience


2011/03/16 23:27:25.278 kid1| idnsALookup: buf is 39 bytes for
secure.nominet.org.uk, id = 0x0
2011/03/16 23:27:25.296 kid1| idnsRead: starting with FD 8
2011/03/16 23:27:25.296 kid1| commSetSelect(FD
8,type=1,handler=1,client_data=0,timeout=0)
2011/03/16 23:27:25.296 kid1| comm_udp_recvfrom: FD 8 from [::]
2011/03/16 23:27:25.296 kid1| idnsRead: FD 8: received 91 bytes from
10.99.99.55:53
2011/03/16 23:27:25.296 kid1| idnsGrokReply: QID 0xb731, 2 answers
2011/03/16 23:27:25.296 kid1| idnsGrokReply: secure.nominet.org.uk AAAA
query done. Configured to retrieve A now also.
2011/03/16 23:27:25.296 kid1| comm_udp_recvfrom: FD 8 from 10.99.99.55:53
2011/03/16 23:27:25.300 kid1| comm_select(): got FD 8 events=1 monitoring=19
F->read_handler=1 F->write_handler=0
2011/03/16 23:27:25.300 kid1| comm_select(): Calling read handler on FD 8
2011/03/16 23:27:25.300 kid1| idnsRead: starting with FD 8
2011/03/16 23:27:25.300 kid1| commSetSelect(FD
8,type=1,handler=1,client_data=0,timeout=0)
2011/03/16 23:27:25.300 kid1| comm_udp_recvfrom: FD 8 from [::]
2011/03/16 23:27:25.300 kid1| idnsRead: FD 8: received 79 bytes from
10.99.99.55:53
2011/03/16 23:27:25.300 kid1| idnsGrokReply: QID 0xb000, 2 answers
2011/03/16 23:27:25.300 kid1| dns_internal.cc(1197) idnsGrokReply: Merging
DNS results secure.nominet.org.uk AAAA has 2 RR, A has 2 RR
2011/03/16 23:27:25.300 kid1| dns_internal.cc(1221) idnsGrokReply: Sending 4
DNS results to caller.
2011/03/16 23:27:25.300 kid1| comm.cc(1033) connect: to
[2a01:618:300:1:3a0:b555:5db7:f25d]:443
2011/03/16 23:27:25.300 kid1| comm.cc(1163) comm_connect_addr: connecting
socket FD 15 to [2a01:618:300:1:3a0:b555:5db7:f25d]:443 (want family: 2)
2011/03/16 23:27:25.300 kid1| comm.cc(1049) connect: FD 15:
COMM_ERR_PROTOCOL - try again
2011/03/16 23:27:25.300 kid1| commSetSelect(FD
15,type=0,handler=0,client_data=0,timeout=0)
2011/03/16 23:27:25.300 kid1| comm_udp_recvfrom: FD 8 from 10.99.99.55:53
2011/03/16 23:27:28.313 kid1| fd_open() FD 5 /nosave/Squid_0/logs/cache.log
2011/03/16 23:27:28.313 kid1| Starting Squid Cache version 3.2.0.4 for
x86_64-unknown-linux-gnu...
2011/03/16 23:27:28.313 kid1| idnsInit: attempt open DNS socket to: [::]
2011/03/16 23:27:28.313 kid1| comm_open: FD 7 is a new socket
2011/03/16 23:27:28.314 kid1| fd_open() FD 7 DNS Socket IPv6




Re: [squid-users] Squid 3.0 icap HIT

2010-11-08 Thread Steve Hill

On Sat, 6 Nov 2010, Luis Enrique Sanchez Arce wrote:


When squid resolves the resource from cache it does not send the answer to ICAP.
How can I change this behavior?


You need a respmod_postcache hook, which unfortunately hasn't been 
implemented yet.  The workaround I use is to run two separate Squid 
instances - one of them does all the usual caching stuff and listens only 
on [::1]:3129.  A second Squid instance runs with caching turned off 
entirely, forwarding requests to [::1]:3129.  The second squid instance 
is configured to talk to the ICAP service.  All the clients connect to the 
second instance.


My configuration for the non-caching Squid instance that talks to the ICAP 
server is here:

https://subversion.opendium.net/trac/free/browser/thirdparty/squid/trunk/extra_sources/squid-nocache.conf

This effectively provides a precache reqmod hook (reqmod_precache) and a 
postcache respmod hook (respmod_precache).  The caching Squid would 
provide the same precache reqmod hook (reqmod_precache) and a precache 
respmod hook (respmod_precache), although I don't have a use for these 
myself.


It's a bit nasty, but it happens to work. :)
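
The essentials of the front (non-caching) instance boil down to something
like this (a sketch only - directive names vary a little between Squid
versions, this is roughly the 3.1 style):

-
http_port 3128
cache deny all

# forward everything to the caching instance
cache_peer [::1] parent 3129 0 no-query no-digest default
never_direct allow all

# precache reqmod + (effectively postcache) respmod
icap_enable on
icap_service svc_req reqmod_precache 0 icap://127.0.0.1:1344/reqmod
icap_service svc_resp respmod_precache 0 icap://127.0.0.1:1344/respmod
adaptation_access svc_req allow all
adaptation_access svc_resp allow all
-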

--

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com



Re: [squid-users] Leaking ICAP connections

2010-10-19 Thread Steve Hill

On Mon, 18 Oct 2010, Amos Jeffries wrote:


Sounds a lot to me like some rare response from ICAP which confuses Squid
about the reply size.


This is possible.  Although shouldn't Squid time out ICAP requests (and 
close the connection) if the response takes too long to complete?



Or a persistent connection race leaking FD.


By persistent connection race you mean the ICAP server closing the 
connection when squid is trying to use it?  This is definitely not what is 
happening - my ICAP server never tries to close the connection 
(http://bugs.squid-cache.org/show_bug.cgi?id=2980 suggests it would be 
unsafe to try and do so) and the connections are still in the established 
state (according to both netstat and the ICAP server itself).  It seems as 
though Squid just decides to stop using the connection but doesn't 
actually shut it down.


Unfortunately I'm way too unfamiliar with the internal structure of squid 
to attach a debugger to the process and look to see what it thinks the 
ICAP connections are doing.


Pity it's so rare, the fix is likely to require a trace of the ICAP 
server response headers.


This is possibly something I can help with.  I added debugging code to the 
ICAP server to maintain a short in-memory buffer of the last few data 
written to the socket so that I could attach gdb to the process and dump 
it.


The result was:
ICAP/1.0 204 No modifications needed\r\nDate: Thu, 23 Sep 2010 08:56:37 
+\r\nService: Iceni Webfilter\r\nISTag: 
SHA1:dff441bfae699b4007b3c5f29ab\r\nEncapsulated: 
null-body=0\r\n\r\nICAP/1.0 204 No modifications needed\r\nDate: Thu, 23 
Sep 2010 08:56:37 +\r\nService: Iceni Webfilter\r\nISTag: 
SHA1:dff441bfae699b4007b3c5f29ab\r\nEncapsulated: 
null-body=0\r\n\r\nICAP/1.0 204 No modifications needed\r\nDate: Thu, 23 
Sep 2010 08:56:37 +\r\nService: Iceni Webfilter\r\nISTag: 
SHA1:dff441bfae699b4007b3c5f29ab\r\nEncapsulated: 
null-body=0\r\n\r\nICAP/1.0 204 No modifications needed\r\nDate: Thu, 23 
Sep 2010 08:56:37 +\r\nService: Iceni Webfilter\r\nISTag: 
SHA1:dff441bfae699b4007b3c5f29ab\r\nEncapsulated: 
null-body=0\r\n\r\nICAP/1.0 204 No modifications needed\r\nDate: Thu, 23 
Sep 2010 08:56:38 +\r\nService: Iceni Webfilter\r\nISTag: 
SHA1:dff441bfae699b4007b3c5f29ab\r\nEncapsulated: 
null-body=0\r\n\r\nICAP/1.0 204 No modifications needed\r\nDate: Thu, 23 
Sep 2010 08:56:38 +\r\nService: Iceni Webfilter\r\nISTag: 
SHA1:dff441bfae699b4007b3c5f29ab\r\nEncapsulated: 
null-body=0\r\n\r\nICAP/1.0 204 No modifications needed\r\nDate: Thu, 23 
Sep 2010 08:56:38 +\r\nService: Iceni Webfilter\r\nISTag: 
SHA1:dff441bfae699b4007b3c5f29ab\r\nEncapsulated: null-body=0\r\n\r\n


I can't see anything wrong with these headers.  The last chunk of data 
received by the ICAP server was:


f0\r\n\304\324\315\216\2330\020\000\340\373\005\227%\255\224%\343?\fTho}\200\366\330V\025\204\t\270\313\217\vN\233\250\332w\257\215vW!\240H\221\252f\016\366`\214\261?\214\217\370\204?\3678\230\340\363\266W\332\004][wY\361nEh$(\243$\226R\362\325\372\316\263\361\'\307]\327c\3421X{\331\316`\237x\241\313U\223\2258$\336\227\261\333\330\325{K]T=\356\022o\365us\314\332b\300\254\337V\217X\244\3047x0i\331e\346\236B\327\332WO\350\017\346\2501\035\306a}\325\224\337\367}\235\252!\320\252}r\035\202m\327\334\323\217\272\352LgkJ\200\330\252\301\302\225:\327j\353\232\200\210\340\207.\375A\017\312`Z\366\231\252\'C\370:e\360\262\266\327\260o\262\023\255\214\321\311f\243\032\376`\366y\340f\215\r\n0\r\n\r\n

This looks like the end of a correctly terminated ICAP request, but there 
isn't much more I can tell from it.


I will attempt to capture further data.

--

 - Steve
   xmpp:st...@nexusuk.org   sip:st...@nexusuk.org   http://www.nexusuk.org/

 Servatis a periculum, servatis a maleficum - Whisper, Evanescence



Re: [squid-users] Leaking ICAP connections

2010-10-18 Thread Steve Hill


On Fri, 15 Oct 2010, Amos Jeffries wrote:


 First step is upgrading to 3.1.8 to see if it's one of the many found and
 solved bugs.
 If it still remains there, check bugzilla for any references.


I'll certainly check with the latest Squid, but I haven't found anything in 
bugzilla to suggest that this bug is either known about or fixed.  This bug 
tends to take several weeks or months to show up (and for some reason only 
appears on 2 of our servers), so just checking on a slightly newer release on 
the off chance that the bug has been fixed by accident is going to take a really 
long time. :(


--

 - Steve



[squid-users] Leaking ICAP connections

2010-10-14 Thread Steve Hill


I am using Squid 3.1.0.14, configured to make REQMOD and RESPMOD requests 
to a local ICAP server.  Everything seems to work fine, except the number 
of connections between Squid and the ICAP server sometimes keeps 
increasing over the course of days or weeks.


I haven't been able to figure out what is triggering the problem, but it 
appears that in certain circumstances, Squid just stops using one of the 
ICAP connections - from what I can tell, the ICAP server has finished 
dealing with a request and is waiting for the next one, but Squid never 
sends a new request.  Squid continues to operate ok, bringing up more ICAP 
connections to accommodate more client requests whilst the lost 
connection stays dormant.


The ICAP server is configured to allow a maximum of 100 concurrent 
connections and eventually, the number of lost connections becomes so 
great that this limit is reached and the ICAP server starts rejecting the 
new connections that Squid is bringing up.  At this point the users start 
getting Squid's ICAP error page.


Since I'm unfamiliar with the internal structure of Squid, I'm 
not really sure where to begin with debugging Squid itself.  I think I've 
done as much debugging as is possible from the ICAP server side (this 
seems to indicate that the ICAP session itself has been fine - the last 
ICAP request from Squid looks fine and has terminated and the last ICAP 
response from the ICAP server looks fine and the server is sat waiting for 
a new request that never comes).


This problem isn't something that can be reliably worked around on the 
ICAP server end - the ICAP server has no way of knowing if a connection 
from Squid has been lost (i.e. still open but will never again be used) 
or if it is simply idle.  Because of this, having the ICAP server time 
out idle connections would introduce a race condition - if the connection 
is just idle, rather than lost, the ICAP server might time it out and 
close it just as Squid starts sending a new request; in this case the 
request would fail and the user would get an error page.


Any suggestions on how to debug the problem would be gratefully received.

Thanks.

--

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com


Re: [squid-users] Build Fail for 3.1.4 on IBM PowerPC64

2010-06-21 Thread Steve Hall
Silamael

Many Thanks.

I have tested with a later daily build and it works OK.

Cheers
Steve

Sent from my iPad

On 19 Jun 2010, at 10:30, Silamael silam...@coronamundi.de wrote:

 On 06/18/2010 08:27 PM, Steve wrote:
 Hi
 I am getting a compile error with 3.1.4 on IBM Power ppc64 running Red Hat
 Enterprise Linux AS release 4 (Nahant Update 8).
 The error is :-
 mem.cc: In function `void memConfigure()':
 mem.cc:359: warning: converting of negative value `-0x1' to
 `size_t'
 
 src/mem.cc has a change from 3.1.3 from
 new_pool_limit = mem_unlimited_size;
 To
  new_pool_limit = -1;
 which seems to be the problem.
 Compiler is gcc version 3.4.6 20060404 (Red Hat 3.4.6-11).
 Thanks
 Steve
 
 
 Hi Steve,
 
  That's a typo. new_pool_limit should be an ssize_t. I think I already sent
  a patch.
  Seems that our older C++ compilers are more picky about types than the
  newer versions ;)
 
 Greetings,
 Matthias
 


[squid-users] Build Fail for 3.1.4 on IBM PowerPC64

2010-06-18 Thread Steve
Hi
I am getting a compile error with 3.1.4 on IBM Power ppc64 running Red Hat
Enterprise Linux AS release 4 (Nahant Update 8).
The error is :-
mem.cc: In function `void memConfigure()':
mem.cc:359: warning: converting of negative value `-0x1' to
`size_t'

src/mem.cc has a change from 3.1.3 from
 new_pool_limit = mem_unlimited_size;
To
 new_pool_limit = -1;
which seems to be the problem.
Compiler is gcc version 3.4.6 20060404 (Red Hat 3.4.6-11).
Thanks
Steve





RE: [squid-users] agent.log and https clients

2010-05-21 Thread Steve
Henrik

Thanks for that. Explains what I am seeing for this software. Given that my 
browser ACLs only allow specific known user-agents, anything that doesn't 
come into squid with a User-Agent header is going to get the forbidden error. 
As this particular software uses one particular site, maybe a dstdomain acl 
above the browser ones will be the answer. I'll give that a try.

Thanks again.
Steve

-Original Message-
From: Henrik Nordström [mailto:hen...@henriknordstrom.net] 
Sent: 21 May 2010 08:36
To: Steve
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] agent.log and https clients

fre 2010-05-21 klockan 00:20 +0100 skrev Steve:

 I have a quick question about agent.log. Does the user agent get 
 logged for https clients?

Most agents do not indicate who they are in CONNECT requests.

Regards
Henrik




[squid-users] agent.log and https clients

2010-05-20 Thread Steve
Hi all,

I have a quick question about agent.log. Does the user agent get logged for
https clients?

I have some acls to allow only known user-agents and need to add whatever
the user-agent is for Cisco ICM Script Editor which only connects via the
proxy on https.

I have disabled the agent acl to allow this client software to work but
cannot find the user agent in agent.log. Nothing gets logged.

Any clues would be helpful.

Thanks
Steve




Re: [squid-users] Build fail for versions squid-3.0.STABLE21 and later on IBM PowerPC64

2010-04-16 Thread Steve
Amos

Many Thanks. Have tried the latest snapshot and all is fine. Thanks
for swift fix.

Steve

On Fri, Apr 16, 2010 at 2:46 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 Steve wrote:

 Hi
 This is my first post to such a list so please go easy with me and I
 hope I have found the correct place. I have a problem compiling any
 release after  version squid-3.0.STABLE20 on IBM Power ppc64 running
 Red Hat Enterprise Linux AS release 4 (Nahant Update 8).
 I have been using Squid on this platform for about 5 years and been
 compiling previous releases without any issues. I have done the usual
 googling and searching of the list but it seems I am unique in
 experiencing this problem!

 The key errors I am seeing are displayed in the back end of the log
 extract below and seem to relate to rfc1738.c: In function
 `rfc1738_unescape' and the warnings generated relating to comparison
 is always false.
 Compiler is gcc version 3.4.6 20060404 (Red Hat 3.4.6-11).

 mv -f .deps/md5.Tpo .deps/md5.Po
 gcc -DHAVE_CONFIG_H -I. -I../include -I../include -I../include
 -Werror -Wall -Wpointer-arith -Wwrite-strings -Wmissing-prototypes
 -Wmissing-declarations -Wcomments -Wall -g -O2 -MT radix.o -MD -MP -MF
 .deps/radix.Tpo -c -o radix.o radix.c
 mv -f .deps/radix.Tpo .deps/radix.Po
 gcc -DHAVE_CONFIG_H -I. -I../include -I../include -I../include
 -Werror -Wall -Wpointer-arith -Wwrite-strings -Wmissing-prototypes
 -Wmissing-declarations -Wcomments -Wall -g -O2 -MT rfc1035.o -MD -MP
 -MF .deps/rfc1035.Tpo -c -o rfc1035.o rfc1035.c
 mv -f .deps/rfc1035.Tpo .deps/rfc1035.Po
 gcc -DHAVE_CONFIG_H -I. -I../include -I../include -I../include
 -Werror -Wall -Wpointer-arith -Wwrite-strings -Wmissing-prototypes
 -Wmissing-declarations -Wcomments -Wall -g -O2 -MT rfc1123.o -MD -MP
 -MF .deps/rfc1123.Tpo -c -o rfc1123.o rfc1123.c
 mv -f .deps/rfc1123.Tpo .deps/rfc1123.Po
 gcc -DHAVE_CONFIG_H -I. -I../include -I../include -I../include
 -Werror -Wall -Wpointer-arith -Wwrite-strings -Wmissing-prototypes
 -Wmissing-declarations -Wcomments -Wall -g -O2 -MT rfc1738.o -MD -MP
 -MF .deps/rfc1738.Tpo -c -o rfc1738.o rfc1738.c
 rfc1738.c: In function `rfc1738_unescape':
 rfc1738.c:209: warning: comparison is always false due to limited
 range of data type
 rfc1738.c:212: warning: comparison is always false due to limited
 range of data type
 make[2]: *** [rfc1738.o] Error 1
 make[2]: Leaving directory `/arc1/Squid/squid-3.0.STABLE21/lib'
 make[1]: *** [all-recursive] Error 1
 make[1]: Leaving directory `/arc1/Squid/squid-3.0.STABLE21/lib'make:
 *** [all-recursive] Error 1
 Making install in lib
 make[1]: Entering directory `/arc1/Squid/squid-3.0.STABLE21/lib'
 Making all in libTrie
 make[2]: Entering directory `/arc1/Squid/squid-3.0.STABLE21/lib/libTrie'
 make  all-recursive
 make[3]: Entering directory `/arc1/Squid/squid-3.0.STABLE21/lib/libTrie'
 Making all in src
 make[4]: Entering directory
 `/arc1/Squid/squid-3.0.STABLE21/lib/libTrie/src'
 make[4]: Nothing to be done for `all'.
 make[4]: Leaving directory
 `/arc1/Squid/squid-3.0.STABLE21/lib/libTrie/src'
 Making all in test
 make[4]: Entering directory
 `/arc1/Squid/squid-3.0.STABLE21/lib/libTrie/test'
 make[4]: Nothing to be done for `all'.
 make[4]: Leaving directory
 `/arc1/Squid/squid-3.0.STABLE21/lib/libTrie/test'
 make[4]: Entering directory `/arc1/Squid/squid-3.0.STABLE21/lib/libTrie'
 make[4]: Nothing to be done for `all-am'.
 make[4]: Leaving directory `/arc1/Squid/squid-3.0.STABLE21/lib/libTrie'
 make[3]: Leaving directory `/arc1/Squid/squid-3.0.STABLE21/lib/libTrie'
 make[2]: Leaving directory `/arc1/Squid/squid-3.0.STABLE21/lib/libTrie'
 make[2]: Entering directory `/arc1/Squid/squid-3.0.STABLE21/lib'
 gcc -DHAVE_CONFIG_H -I. -I../include -I../include -I../include
 -Werror -Wall -Wpointer-arith -Wwrite-strings -Wmissing-prototypes
 -Wmissing-declarations -Wcomments -Wall -g -O2 -MT rfc1738.o -MD -MP
 -MF .deps/rfc1738.Tpo -c -o rfc1738.o rfc1738.c
 rfc1738.c: In function `rfc1738_unescape':
 rfc1738.c:209: warning: comparison is always false due to limited
 range of data type
 rfc1738.c:212: warning: comparison is always false due to limited
 range of data type
 make[2]: *** [rfc1738.o] Error 1
 make[2]: Leaving directory `/arc1/Squid/squid-3.0.STABLE21/lib'
 make[1]: *** [all-recursive] Error 1
 make[1]: Leaving directory `/arc1/Squid/squid-3.0.STABLE21/lib'
 make: *** [install-recursive] Error 1

 I get the same for  squid-3.0.STABLE21, squid-3.0.STABLE22,
 squid-3.0.STABLE23, squid-3.0.STABLE24 and squid-3.0.STABLE25.
 Squid-3.1.1 also will not compile for me.

 This suggests to me that lib/rfc1738.c is an area to look at and I
 find that the file is different between releases. If I copy the
 version from squid-3.0.STABLE20 to squid-3.0.STABLE21 and try again
 then the build works. I don’t know what the knock on effect of this
 would be but I guess a more permanent fix at source may be possible.

 Any help on getting these working without copying source files between
 releases

[squid-users] Posting Files

2009-10-18 Thread Steve Allen
Hi,

I'm having trouble posting files through squid.

Files below around the 1 MB mark work OK.

Bigger files seem to time out or break mid-stream.

Most of the problems seem to be uploading photos to sites like Facebook and
Megaupload.

I can post to these sites fine if I bypass the proxy and NAT a connection
out directly from the client.

Squid log shows 

TCP_MISS/502 2511 POST http://www617.megaupload.com/upload_done.php? -
DIRECT/174.140.129.22 text/html

Tcpdump

14:53:11.841579 IP (tos 0x0, ttl 127, id 40440, offset 0, flags [DF], proto:
TCP (6), length: 40) bit4.xx.xxx.au.1120 > proxytest.svr.afc.3128: .,
cksum 0x0ed8 (correct), 1368140:1368140(0) ack 4245 win 64484
14:53:11.841605 IP (tos 0x0, ttl 127, id 40442, offset 0, flags [DF], proto:
TCP (6), length: 40) bit4.xx.xxx.au.1120 > proxytest.svr.afc.3128: F,
cksum 0x0ed7 (correct), 1368140:1368140(0) ack 4245 win 64484
14:53:11.841642 IP (tos 0x0, ttl  64, id 62880, offset 0, flags [DF], proto:
TCP (6), length: 40) proxytest.svr.afc.3128 > bit4.xx.xxx.au.1120: .,
cksum 0xe0f8 (incorrect (-> 0x0abd), 4245:4245(0) ack 1368141 win 65534

FreeBSD proxytest.svr.afc 6.4-RELEASE-p5
squid-3.0.19

I've been messing around with these 3 settings to try and get it working.

chunked_request_body_max_size 0
maximum_object_size 500024 KB
request_body_max_size 0 K

Any ideas what I've done wrong?

Squid conf below.

Cheers
Steve

Squid conf
#listen port
http_port 3128

hierarchy_stoplist cgi-bin ?

cache_mem 16 MB
cache_dir ufs /data/squid/cache 5000 24 256
cache_access_log /data/squid/logs/squid-access.log
cache_log /data/squid/logs/squid-cache.log
cache_store_log /data/squid/logs/squid-store.log

forwarded_for transparent
 
pid_filename /data/squid/logs/squid.pid

auth_param ntlm program /usr/local/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 20
auth_param ntlm keep_alive on
auth_param basic program /usr/local/bin/ntlm_auth
--helper-protocol=squid-2.5-basic
auth_param basic children 20
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours

acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80
acl Safe_ports port 21
acl Safe_ports port 443 563
acl Safe_ports port 70
acl Safe_ports port 210
acl Safe_ports port 1025-65535
acl Safe_ports port 280
acl Safe_ports port 488
acl Safe_ports port 591
acl Safe_ports port 777
acl CONNECT method CONNECT
acl QUERY urlpath_regex cgi-bin \?
acl passwordexception src /usr/local/etc/squid/PasswordByPass
acl safesites url_regex -i /usr/local/etc/squid/SafeSites
acl Authenticated proxy_auth REQUIRED

no_cache deny QUERY

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow passwordexception
http_access allow safesites
http_access allow Authenticated
http_access deny !Authenticated
http_access deny all
 
http_reply_access allow all
 
acl FTP proto FTP
always_direct allow FTP

chunked_request_body_max_size 0
maximum_object_size 500024 KB
request_body_max_size 0 K
cache_mgr ad...@xx.xxx.au

coredump_dir /data/squid/cache
debug_options ALL,1






Re: [squid-users] Using Squid/Squidguard on

2009-06-26 Thread Steve Allen


> Do the clients connect to the Squid port or to the Dansguardian port?
Clients connect to Dansguardian, then it connects to Squid, then Squid
goes out to the net.

> Why Dans-Squid-Web and not Squid-Dans-Web?
Because all the configurations I've seen do it this way. I didn't
think it worked the other way.





Re: [squid-users] Using Squid/Squidguard on

2009-06-25 Thread Steve Allen

Hey,

> Do you know if Dansguardian supports NTLM now?
The latest version does, but from what I understand you still need Squid
to do the actual authentication.


Dansguardian just uses the ntlm username for group based filtering.

My Setup is

Dans -- Squid -- Web

You have to choose the option to be compiled in, and uncomment the
'/usr/local/etc/dansguardian/authplugins/proxy-ntlm.conf' line.
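
For reference, that chaining is configured in dansguardian.conf with
something like the following; the values here are illustrative, not taken
from this particular setup:

filterport = 8080       # clients point their browsers at DansGuardian
proxyip = 127.0.0.1     # upstream Squid host
proxyport = 3128        # upstream Squid port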


Cheers
Steve



[squid-users] Squid requiring domain for auth

2009-06-22 Thread Steve Allen

Hi,

I'm setting up a squid proxy to auth against our 2003 ADS

I have ntlm working so it authenticates both transparently 
to the user and using domain\username login.


My problem is getting squid to auth with just the username, not
requiring the domain\ part.


The docs say I need to have 'winbind use default domain = yes', which I do.

With the option set to yes I get

proxyv4# wbinfo -u | grep test99
test99


without the option I get
proxyv4# wbinfo -u | grep test99
AFCT\test99

What am I missing?  I didn't configure anything for kerberos because of this 
line in the samba howto

With both MIT and Heimdal Kerberos, it is unnecessary to configure the /etc/krb5.conf, and it may be detrimental. 


My system hasn't got a krb5.conf at all, and I wonder if the lack of said file is causing me to have to
enter the AFCT\test99 format.


Cheers
Steve

FreeBSD 6.4-RELEASE-p5 AMD64
Squid Cache: Version 3.0.STABLE15
Samba Version 3.3.4
Windows 2003 ADS in what appears to be native mode.

smb.conf

[GLOBAL]
workgroup = AFCT
realm = afct.org.au
Server String = AFC Proxy
security = ads
encrypt passwords = yes
winbind use default domain = yes
wins server = 10.1.1.5


Relevant lines in squid for ntlm

auth_param ntlm program /usr/local/bin/ntlm_auth 
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5
auth_param ntlm keep_alive on
auth_param basic program /usr/local/bin/ntlm_auth 
--helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
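
A quick sanity check outside Squid is to drive the basic helper by hand; a
sketch, assuming the same ntlm_auth binary and a made-up password:

/usr/local/bin/ntlm_auth --helper-protocol=squid-2.5-basic
test99 secret          # typed on stdin; expect OK with no domain prefix
AFCT\test99 secret     # should likewise return OK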




[squid-users] ICP queries for 'dynamic' urls?

2008-11-19 Thread Steve Webb

Hello.

I'm caching dynamic content (URLs with ? and & in them) and everything's 
working fine with one exception.


I'm seeing ICP queries for only static content and not dynamic content 
even though squid is actually caching dynamic content.


Q: Is there a setting somewhere to ask squid to also do ICP queries for 
dynamic content like there was with the no-cache directive to originally 
not cache dynamic content (aka cgi-bin and ? content)?


I'm using squid version 2.5 (I know, I should upgrade to 3.x, but I'm 
trying to stick with the same versions across the board and I don't have 
time to run my config through QA with 3.0 at this time.  Please don't tell 
me to upgrade.)


My cache_peer lines look like:

cache_peer 10.23.14.4   sibling 80  3130  proxy-only

This is for a reverse proxy setup.

Dataflow is:

Customer -> Internet -> Akamai -> LB -> squid -> LB -> apache -> LB -> storage

The apache layer does an image resize (which I want to cache) and the 
url is http://xxx/resize.php?w=xx&h=xx&...


The storage layer is just another group of apache servers that serve up
the raw files.


LB is a load-balancer.

- Steve

--
Steve Webb - Lead System Administrator for Pronto.com
Email: [EMAIL PROTECTED]  (Please send any work requests to: [EMAIL PROTECTED])
Cell: 303-564-4269, Office: 303-497-9367, YIM: scumola


Re: [squid-users] ICP queries for 'dynamic' urls?

2008-11-19 Thread Steve Webb

That did it.  Thanks!

- Steve

On Wed, 19 Nov 2008, Chris Robertson wrote:


Date: Wed, 19 Nov 2008 11:42:07 -0900
From: Chris Robertson [EMAIL PROTECTED]
To: squid-users@squid-cache.org
Subject: Re: [squid-users] ICP queries for 'dynamic' urls?

Steve Webb wrote:

Hello.

I'm caching dynamic content (URLs with ? and & in them) and everything's 
working fine with one exception.


I'm seeing ICP queries for only static content and not dynamic content even 
though squid is actually caching dynamic content.


Q: Is there a setting somewhere to ask squid to also do ICP queries for 
dynamic content like there was with the no-cache directive to originally 
not cache dynamic content (aka cgi-bin and ? content)?


http://www.squid-cache.org/Doc/config/hierarchy_stoplist/
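
In squid.conf terms, it is the stock default below that marks dynamic URLs
as non-hierarchical and so suppresses ICP queries for them; removing or
commenting it out re-enables sibling queries:

# hierarchy_stoplist cgi-bin ?    <- delete or comment out this default line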



I'm using squid version 2.5 (I know, I should upgrade to 3.x, but I'm 
trying to stick with the same versions across the board and I don't have 
time to run my config through QA with 3.0 at this time.  Please don't tell 
me to upgrade.)


My cache_peer lines look like:

cache_peer 10.23.14.4   sibling 80  3130  proxy-only

This is for a reverse proxy setup.

Dataflow is:

Customer -> Internet -> Akamai -> LB -> squid -> LB -> apache -> LB ->
storage


The apache layer does an image resize (which I want to cache) and the url 
is http://xxx/resize.php?w=xx&h=xx&...


The storage layer is just another group of apache servers that serve up
the raw files.


LB is a load-balancer.

- Steve



Chris



--
Steve Webb - Lead System Administrator for Pronto.com
Email: [EMAIL PROTECTED]  (Please send any work requests to: [EMAIL PROTECTED])
Cell: 303-564-4269, Office: 303-497-9367, YIM: scumola


Re: AW: [squid-users] Differences between Squid 2.7 and 3.0

2008-08-27 Thread Steve Bertrand

Altrock, Jens wrote:
So there is no significant change in features, only in programming language, is that right? 


IPv6... ;)

Steve


Re: AW: [squid-users] Differences between Squid 2.7 and 3.0

2008-08-27 Thread Steve Snyder
On Wednesday 27 August 2008 08:48:36 am Steve Bertrand wrote:
 Altrock, Jens wrote:
  So there is no significant change in features, only in programming
  language, is that right?

 IPv6... ;)

There is no IPv6 support in Squid v3.0.  It is scheduled for inclusion in
v3.1.


Re: AW: [squid-users] Differences between Squid 2.7 and 3.0

2008-08-27 Thread Steve Bertrand

Steve Snyder wrote:

On Wednesday 27 August 2008 08:48:36 am Steve Bertrand wrote:

Altrock, Jens wrote:

So there is no significant change in features, only in programming
language, is that right?

IPv6... ;)


There is no IPv6 support in Squid v3.0.  It is scheduled for inclusion in
v3.1.


Oops! My bad...

Steve



Re: [squid-users] Zero Sized Reply / Invalid response

2008-08-26 Thread Steve Bertrand

Henrik Nordstrom wrote:

On tis, 2008-08-26 at 10:22 +0100, Pedro Mansito Pérez wrote:

Hello Henrik,

I have never used wireshark or tshark, so excuse my ignorance.


You need to use the wireshark gui to access the TCP stream analysis
function.

You can run the gui on another host (including Windows) by first making
a packet capture on the server using tshark, tcpdump or another packet
capture tool and then load that in the gui..



- # tcpdump -n -i ethX -s 0 -w /path/to/packet-dump.pcap
(-s 0 specifies capture the entire frame, not just the first XX bytes)
- trigger the event
- stop the pcap
- copy the pcap file from the Squid box to a Windows (or any platform 
with a GUI based Wireshark install)

- open the file in Wireshark
- right click on one of the frames of the conversation
- click on 'Follow TCP Stream'
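
The same capture can equally be taken with tshark, which accepts
tcpdump-style options; the copy destination below is illustrative:

tshark -i ethX -s 0 -w /path/to/packet-dump.pcap
scp /path/to/packet-dump.pcap user@workstation: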

Steve


Re: [squid-users] IPv6-enabled Squid?

2008-08-25 Thread Steve Bertrand

Amos Jeffries wrote:

Steve Snyder wrote:
Is there a current guesstimate as to when an IPv6-capable version of 
Squid will be released?


According to the FAQ, IPv6 support is already checked into the Squid3 
trunk.  It doesn't give any indication, though, of when this 
functionality will be available in a non-development release.


Cluestick, please?  Thanks.


Most of the developers have just revisited the issue in our recent 
get-together. It's looking hopeful for a late October release.


Meanwhile 3-HEAD should be stable enough at present to begin your local 
testing if its urgent.


I've been running 3-HEAD since May 27 for IPv6 support with no
issues whatsoever.


Steve


Re: [squid-users] IPv6-enabled Squid?

2008-08-25 Thread Steve Snyder
On Monday 25 August 2008 09:54:37 am Steve Bertrand wrote:
 Amos Jeffries wrote:
  Steve Snyder wrote:
  Is there a current guesstimate as to when an IPv6-capable version
  of Squid will be released?
 
  According to the FAQ, IPv6 support is already checked into the
  Squid3 trunk.  It doesn't give any indication, though, of when this
  functionality will be available in a non-development release.
 
  Cluestick, please?  Thanks.
 
  Most of the developers have just revisited the issue in our recent
  get-together. It's looking hopeful for a late October release.
 
  Meanwhile 3-HEAD should be stable enough at present to begin your
  local testing if its urgent.

 I've been running 3-HEAD since May 27 for IPv6 support with
 no issues whatsoever.

Another data point for those who may be interested:  I've been running 
the 20080820 snapshot of 3.HEAD for all of 5 days now and it seems to 
be working fine.  I can't comment on the other new 3.HEAD features, but 
IPv6 support is working as advertised.


