Re: [squid-users] [squid-announce] Squid 4.2 is available

2018-08-13 Thread Dan Charlesworth
I'd be all over any Squid 4 RPMs for EL6, for what that's worth.

I had downloaded your source RPM for EL7 at one point and tried to build
one for EL6. Dealing with the compiler issues was a bit beyond me though,
sadly.

On Tue, 14 Aug 2018 at 05:46, Eliezer Croitoru  wrote:

> I need to test it, but I didn't have plans to release the 4.X branch for
> CentOS 6.
> It takes me time to test it and I hope I will have more time for it.
>
> Eliezer
>
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
>
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
> Behalf Of Walter H.
> Sent: Saturday, August 11, 2018 12:47 PM
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] [squid-announce] Squid 4.2 is available
>
> On 10.08.2018 07:41, Amos Jeffries wrote:
> > The Squid HTTP Proxy team is very pleased to announce the availability
> > of the Squid-4.2 release!
> >
> >
>
> will there be a RPM for latest CentOS 6 available?
>
> Walter
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>


-- 
Getbusi
p +61 3 6165 1555
e d...@getbusi.com
w getbusi.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] quiet week

2018-06-03 Thread Dan Charlesworth
Copy, Amos — receiving you loud and clear :)

On Mon, 4 Jun 2018 at 15:47, Amos Jeffries  wrote:

> Hi anyone,
>  just testing to see if the list server is still operational. Things
> have been suspiciously quiet this week.
>
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>


-- 
Getbusi
p +61 3 6165 1555
e d...@getbusi.com
w getbusi.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid 4 EL6 RPMs

2018-03-21 Thread Dan Charlesworth
Hello all,

I'm wondering if anyone can point to a Squid 4 RPM package for CentOS /
RHEL 6.

I've had a search around, but it seems people are only packaging it for EL7.

I did try compiling an EL6 RPM myself, based on an EL7 source RPM, but I'm
not adept in this area and couldn't get past certain unfamiliar errors.
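
For reference, the dance I attempted was roughly the following. The src.rpm
filename is a placeholder, and the devtoolset step is only my assumption that
a newer GCC would clear the compiler errors:

# on CentOS 6: get a modern compiler via Software Collections
yum install centos-release-scl
yum install devtoolset-7-gcc-c++ rpm-build

# rebuild the EL7 source RPM under the newer toolchain
scl enable devtoolset-7 'rpmbuild --rebuild squid-4.el7.src.rpm'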

Any advice welcome!

Best,
Dan
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Rock store size not decreasing

2017-05-19 Thread Dan Charlesworth
Okay, cool — thanks for clarifying.

Guess I'll nuke it myself and reinitialise a blank one.
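
Presumably something along these lines, with squid stopped first (path taken
from my cache_dir line; corrections welcome):

squid -k shutdown
rm -rf /var/spool/squid/rock
squid -z    # recreates the rock database at the configured 10240 MB
service squid start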

Best,
Dan


On 19 May 2017 at 23:29, Amos Jeffries <squ...@treenet.co.nz> wrote:

> On 19/05/17 15:47, Dan Charlesworth wrote:
>
>> Hey all
>>
>> I'm fairly new to rock caching. With aufs, if you reduce the cache size
>> in the config it'll slowly start reducing it down to the new size.
>>
>> I've done that with a ~137GB rock store (reduced it to 10240MB) but it
>> ain't changing after reloading the config.
>>
>
> With UFS/AUFS/diskd the cache is stored in a directory tree with individual
> files per item. Reducing the size results in files being deleted from disk
> and the total size shrinks naturally without any special action by Squid.
>
> Rock on the other hand has all content stored inside one file. That file
> gets initialized with the space configured and maybe grown if needed. But
> there is nothing I'm aware of to reinitialize it on smaller sizes being
> configured. Reducing the size does reduce the size of stuff using space
> *inside* the database file, but AFAIK not the file itself.
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>



-- 
Getbusi
p +61 3 6165 1555
e d...@getbusi.com
w getbusi.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Rock store size not decreasing

2017-05-18 Thread Dan Charlesworth
Hey all

I'm fairly new to rock caching. With aufs, if you reduce the cache size in
the config it'll slowly start reducing it down to the new size.

I've done that with a ~137GB rock store (reduced it to 10240MB) but it
ain't changing after reloading the config.

cache_dir rock /var/spool/squid/rock 10240

# du --max-depth=1 /var/spool/squid/ -h

137G /var/spool/squid/rock

What am I missing?

Best,
Dan
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Access-Control-* headers missing when going through squid

2017-04-19 Thread Dan Charlesworth
Thanks Amos.

As far as I can tell the only device upstream of the proxy is a relatively
basic gateway/firewall. I doubt it's capable of messing with HTTP headers
(and loading the site directly, as opposed to via the proxy, lets it load
fine behind the same gateway).

I've attached the debug output you suggested. Looks like the headers in the
browser are the same as what's arriving at and leaving the proxy?

--
2017/04/20 11:49:47.815 kid1| ctx: enter level  0: 
'http://services.pressreader.com/se2skyservices/social/profiles/current/?accessToken=J5n-snnkwqI60m715mVRMm2ghwgrUBXQBhYBWaSyacJzjKCg5qy6LYJoZnGpRsF4r5qrvwLIp64A5xQWGN5-Aw!!=true'
2017/04/20 11:49:47.815 kid1| 11,2| http.cc(727) processReplyHeader: HTTP 
Server local=172.16.0.250:55706 remote=104.45.159.17:80 FD 469 flags=1
2017/04/20 11:49:47.815 kid1| 11,2| http.cc(728) processReplyHeader: HTTP 
Server REPLY:
-
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 237
Content-Type: application/json; charset=utf-8
Content-Encoding: gzip
Expires: -1
Server: Microsoft-IIS/10.0
Date: Thu, 20 Apr 2017 01:49:47 GMT

^_<8B>^H
--
2017/04/20 11:49:47.815 kid1| ctx: exit level  0
2017/04/20 11:49:47.815 kid1| 11,2| client_side.cc(1393) sendStartOfMessage: 
HTTP Client local=172.16.0.250:8080 remote=172.16.0.84:45721 FD 1797 flags=1
2017/04/20 11:49:47.815 kid1| 11,2| client_side.cc(1394) sendStartOfMessage: 
HTTP Client REPLY:
-
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 237
Content-Type: application/json; charset=utf-8
Content-Encoding: gzip
Expires: -1
Server: Microsoft-IIS/10.0
Date: Thu, 20 Apr 2017 01:49:47 GMT
X-Cache: MISS from livestream.sccs.com.au
X-Cache-Lookup: MISS from livestream.sccs.com.au:8080
Via: 1.1 livestream.sccs.com.au (squid/3.5.22)
Connection: keep-alive
--
2017/04/20 11:55:02.529 kid1| ctx: enter level  0: 
'http://services.pressreader.com/se2skyservices/social/profiles/current/?accessToken=ac-G5GPzpw47p5SU2jJO-kat-eV7P_Jwr8ErpYqSElZi6dekseTXwv8xjCwVl9dj_lbyyFxD-XEMTQSlajf_aQ!!=true'
2017/04/20 11:55:02.529 kid1| 11,2| http.cc(735) processReplyHeader: HTTP 
Server local=10.0.1.15:53762 remote=104.45.159.17:80 FD 60 flags=1
2017/04/20 11:55:02.529 kid1| 11,2| http.cc(736) processReplyHeader: HTTP 
Server REPLY:
-
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 237
Content-Type: application/json; charset=utf-8
Content-Encoding: gzip
Expires: -1
Server: Microsoft-IIS/10.0
Access-Control-Expose-Headers: 
ndstate,X-PD-AProfile,X-PD-Profile,X-PD-Ticket,X-PD-Auth,X-PD-PAuth,X-PD-Token
ndstate: 
{"Sponsor":null,"Catalog":{"Hash":"0ubiHCQUm5xIzgzlKW9Gbw=="},"Ts":636282501023081355}
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: http://sheppartonnews.pressreader.com
ws: 5
svc: 5
ws: azure
Date: Thu, 20 Apr 2017 01:55:02 GMT

^_<8B>^H
--
2017/04/20 11:55:02.539 kid1| ctx: exit level  0
2017/04/20 11:55:02.539 kid1| 11,2| client_side.cc(1408) sendStartOfMessage: 
HTTP Client local=10.0.1.15:3128 remote=10.0.1.66:53293 FD 25 flags=1
2017/04/20 11:55:02.539 kid1| 11,2| client_side.cc(1409) sendStartOfMessage: 
HTTP Client REPLY:
-
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 237
Content-Type: application/json; charset=utf-8
Content-Encoding: gzip
Expires: -1
Server: Microsoft-IIS/10.0
Access-Control-Expose-Headers: 
ndstate,X-PD-AProfile,X-PD-Profile,X-PD-Ticket,X-PD-Auth,X-PD-PAuth,X-PD-Token
ndstate: 
{"Sponsor":null,"Catalog":{"Hash":"0ubiHCQUm5xIzgzlKW9Gbw=="},"Ts":636282501023081355}
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: http://sheppartonnews.pressreader.com
ws: 5
svc: 5
ws: azure
Date: Thu, 20 Apr 2017 01:55:02 GMT
X-Cache: MISS from 10.0.1.15
X-Cache-Lookup: MISS from 10.0.1.15:3128
Via: 1.1 10.0.1.15 (squid/3.5.25)
Connection: keep-alive
Best,
Dan

> On 19 Apr 2017, at 2:41 pm, Amos Jeffries  wrote:
>
> Squid does not touch these headers itself unless you configure it to. So
> something there is altering them. It may be external MITM stuff, or Squid
> coping with broken input.
>
> Try adding "debug_options 11,2" to see what is actually arriving and
> leaving that proxy.
>
> Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Access-Control-* headers missing when going through squid

2017-04-18 Thread Dan Charlesworth
Hi everyone,

This is a super weird one!

This Pressreader site (http://sheppartonnews.pressreader.com/shepparton-news) 
gets a totally different (erroneous) response from the server when accessing it 
through squid on a particular school's network.

It doesn't happen through any other squid box on any other network I've
tried. Yet at this site, bypass squid through the same gateway and it's
fine; use squid and it fails.

The only errors I can see in the browser (when it fails) are CORS errors on
several of the requests. Comparing the headers, it looks like the responses
to the erroneous requests lack these:

Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: http://sheppartonnews.pressreader.com
Access-Control-Expose-Headers: 
ndstate,X-PD-AProfile,X-PD-Profile,X-PD-Ticket,X-PD-Auth,X-PD-PAuth,X-PD-Token

No, the squid config we're using never touches headers. Every HTTP/S request
from the client is allowed and returns a 200/304 in both situations.

(see attached for the full request/response headers)

Make any sense to anyone?

REQUEST
Host: services.pressreader.com
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0
Accept: application/json, text/javascript, */*; q=0.01
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://sheppartonnews.pressreader.com/shepparton-news
Origin: http://sheppartonnews.pressreader.com
Connection: keep-alive

RESPONSE
Cache-Control: no-cache
Connection: keep-alive
Content-Encoding: gzip
Content-Length: 237
Content-Type: application/json; charset=utf-8
Date: Wed, 19 Apr 2017 00:55:46 GMT
Expires: -1
Pragma: no-cache
Server: Microsoft-IIS/10.0
Via: 1.1 livestream.sccs.com.au (squid/3.5.22)
X-Cache: MISS from livestream.sccs.com.au
X-Cache-Lookup: MISS from livestream.sccs.com.au:8080

REQUEST
Host: services.pressreader.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:52.0) 
Gecko/20100101 Firefox/52.0
Accept: application/json, text/javascript, */*; q=0.01
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://sheppartonnews.pressreader.com/shepparton-news
Origin: http://sheppartonnews.pressreader.com
Proxy-Authorization: Basic c3RhZmYzLTIwMDg6cXFxcXFx
Connection: keep-alive

RESPONSE
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: http://sheppartonnews.pressreader.com
Access-Control-Expose-Headers: 
ndstate,X-PD-AProfile,X-PD-Profile,X-PD-Ticket,X-PD-Auth,X-PD-PAuth,X-PD-Token
Cache-Control: no-cache
Connection: keep-alive
Content-Encoding: gzip
Content-Length: 237
Content-Type: application/json; charset=utf-8
Date: Wed, 19 Apr 2017 00:52:41 GMT
Expires: -1
Pragma: no-cache
Server: Microsoft-IIS/10.0
Via: 1.1 10.0.1.15 (squid/3.5.25)
X-Cache: MISS from 10.0.1.15
X-Cache-Lookup: MISS from 10.0.1.15:3128
ndstate: 
{"Sponsor":null,"Catalog":{"Hash":"DHnghbvXeRpe9Rrvt/xjIg=="},"Ts":636281599613822574}
svc: 8
ws: 8, azure
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Introducing delay to HTTP 407 responses

2016-09-13 Thread Dan Charlesworth
I just want to throw my support behind seeking a solution to this problem. 
Luke’s clearly considered it in way more detail than anyone so far, myself 
included.

This affects the squids under my purview every day.
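
For anyone wanting to reproduce Luke's setup below: the helper itself should
only need a few lines. Here's a sketch of what /tmp/delay.pl presumably does,
written as shell (untested; the standard external_acl_type protocol of one
lookup per line, answered with OK):

#!/bin/sh
# read one lookup per line from squid; stall, then grant the match
while read line; do
    sleep 2
    echo "OK"
done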

Best,
Dan

> On 14 Sep. 2016, at 10:18 am, squid-us...@filter.luko.org wrote:
> 
> Hi Squid users,
> 
> Seeking advice on how to slow down 407 responses to broken Apple & MS
> clients, which seem to retry at very short intervals and quickly fill the
> access.log with garbage.  The problem is very similar to this:
> 
> http://www.squid-cache.org/mail-archive/squid-users/201404/0326.html
> 
> However the config below doesn't seem to slow down the response:
> 
> acl delaydomains dstdomain .live.net .apple.com
> acl authresponse http_status 407
> external_acl_type delay ttl=0 negative_ttl=0 cache=0 %SRC /tmp/delay.pl
> acl delay external delay
> http_reply_access deny delaydomains authresponse delay
> http_reply_access allow all
> 
> The helper is never asked by Squid to process the request.  Just wondering
> if http_status ACLs can be used in http_reply_access?
> 
> My other thinking, if this isn't possible, was to mark 407 responses with
> clientside_tos so they could be delayed/throttled with tc or iptables.  Ie,
> 
> acl authresponse http_status 407
> clientside_tos 0x20 authresponse
> 
> However, auth response packets don't get the desired tos markings.  Instead
> the following message appears in cache.log:
> 
> 2016/09/13 11:35:43 kid1| WARNING: authresponse ACL is used in context
> without an HTTP response. Assuming mismatch.
> 
> After reviewing
> http://lists.squid-cache.org/pipermail/squid-users/2016-May/010630.html it
> seems like this has cropped up before.  The suggestion in that thread was to
> exclude 407 responses from the access log.  Fortunately this works.  But I'm
> wondering if there is a way to introduce delay into the 407 response itself?
> Partly to minimise load associated with serving broken clients, and also to
> maintain logging of actual intrusion attempts.  Any suggestions?
> 
> Luke
> 
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Large memory leak with ssl_peek (now partly understood)

2016-08-16 Thread Dan Charlesworth
Hey Steve,

Deployed a 3.5.20 build with both of those patches and have noticed a big 
improvement in memory consumption of squid processes at a couple of 
splice-heavy sites.

Thank you, sir!

Dan

> On 12 Aug 2016, at 7:05 PM, Steve Hill  wrote:
> 
> 
>>This sounds very similar to Squid bug 4508. Factory proposed a fix
>>for that bug, but the patch is for Squid v4. You may be able to adapt it
>>to v3. Testing (with any version) is very welcomed, of course:
> 
> Thanks for that - I'll look into adapting and testing it.
> 
> (been chasing this bug off and on for months - hadn't spotted that there was 
> a bug report open for it :)
> 
> 
> -- 
> - Steve Hill
>   Technical Director
>   Opendium Limited http://www.opendium.com
> 
> Sales / enquiries:
>   Email:sa...@opendium.com
>   Phone:+44-1792-824568 / sip:sa...@opendium.com
> 
> Support:
>   Email:supp...@opendium.com
>   Phone:+44-1792-825748 / sip:supp...@opendium.com
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Large memory leak with ssl_peek (now partly understood)

2016-08-11 Thread Dan Charlesworth
Pretty sure this is affecting our 3.5.x systems as well — we use a very
similar splicing implementation.

I'll keep an eye out in hope someone adapts that patch!

Dan

On 12 August 2016 at 06:22, Alex Rousskov 
wrote:

> On 08/11/2016 10:56 AM, Steve Hill wrote:
>
> > At ssl_bump step 2 we splice the connection and Squid does verification
> ...
> > Unfortunately, when verification fails, rather than actually dropping
> > the client's connection, Squid just leaves the client hanging.
>
> Hi Steve,
>
> This sounds very similar to Squid bug 4508. Factory proposed a fix
> for that bug, but the patch is for Squid v4. You may be able to adapt it
> to v3. Testing (with any version) is very welcomed, of course:
>
>   http://bugs.squid-cache.org/show_bug.cgi?id=4508
>
> Alex.
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Rate limiting bad clients?

2016-08-08 Thread Dan Charlesworth
Hi all,

This is more of a squid-adjacent query. Hopefully relevant enough for someone 
here to help…

I'm sick of all these web apps that take it upon themselves to hammer proxies
when they don't get the response they want, like when they have to
authenticate, for example. On big networks, behind a forward proxy, there are
always a few computers with some software making dozens of identical, failing
requests per second.

- What's a good approach for rate limiting the client computers which are 
doing this?
- Can anyone point to a good tutorial for this using, say, iptables if that’s 
appropriate?

Any advice welcome.
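
To make the question concrete, here's the rough shape of what I'm imagining
on the proxy box itself (thresholds plucked from the air, and 3128 assumed as
the proxy port):

# drop new connections from any single client IP above 25 per second
iptables -A INPUT -p tcp --dport 3128 -m state --state NEW \
  -m hashlimit --hashlimit-above 25/sec --hashlimit-burst 50 \
  --hashlimit-mode srcip --hashlimit-name proxy-flood -j DROP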

Thanks!
Dan
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Empty response from website via proxy

2016-07-06 Thread Dan Charlesworth
It looks like I'm probably going to get fobbed off by this site's
administrators. "It's our load balancer" — "Simply set up a bypass" etc.

Is there any straightforward way to disable the X-Forwarded-For header just
for requests to this one website? What would the implications of that be?
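
Something like this is what I have in mind (untested, and assuming
request_header_access takes effect per-ACL the way the docs suggest):

acl no_xff dstdomain .passporttosafety.com.au
request_header_access X-Forwarded-For deny no_xff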

Dan

On 5 July 2016 at 15:07, Dan Charlesworth <d...@getbusi.com> wrote:

> That’s a super helpful analysis, thanks Amos.
>
> Now to see if I can track down the site admins 
>
> > On 5 Jul 2016, at 3:04 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
> >
> > On 5/07/2016 4:25 p.m., Dan Charlesworth wrote:
> >> This website seems not to send back a proper web page if the request
> >> comes via a (squid?) proxy.
> >>
> >> http://passporttosafety.com.au/
> >>
> >> Can anyone tell what might be going wrong here?
> >>
> >
> > Happens whenever it sees an X-Forwarded-For header.
> >
> > It looks to me like the server or a script in the origin is trying to
> > use that header for something (usually tracking the user by IPs) but
> > very broken and crashing. A sadly common situation.
> >
> > In this case though there is a Varnish proxy in front of it adding a
> > "Content-Length: 0" header to 'fix' the problem when the response
> > payload fails to appear before the origin connection aborts.
> >
> > Amos
> >
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> > http://lists.squid-cache.org/listinfo/squid-users
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Empty response from website via proxy

2016-07-04 Thread Dan Charlesworth
That’s a super helpful analysis, thanks Amos.

Now to see if I can track down the site admins 

> On 5 Jul 2016, at 3:04 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
> 
> On 5/07/2016 4:25 p.m., Dan Charlesworth wrote:
>> This website seems not to send back a proper web page if the request comes 
>> via a (squid?) proxy.
>> 
>> http://passporttosafety.com.au/
>> 
>> Can anyone tell what might be going wrong here?
>> 
> 
> Happens whenever it sees an X-Forwarded-For header.
> 
> It looks to me like the server or a script in the origin is trying to
> use that header for something (usually tracking the user by IPs) but
> very broken and crashing. A sadly common situation.
> 
> In this case though there is a Varnish proxy in front of it adding a
> "Content-Length: 0" header to 'fix' the problem when the response
> payload fails to appear before the origin connection aborts.
> 
> Amos
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Empty response from website via proxy

2016-07-04 Thread Dan Charlesworth
This website seems not to send back a proper web page if the request comes 
via a (squid?) proxy.

http://passporttosafety.com.au/

Can anyone tell what might be going wrong here?

Best,
Dan
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] How to analyse squid memory usage

2016-06-02 Thread Dan Charlesworth
No worries—thanks for following up on it!

That's very interesting, about the concurrent requests, because the “normal” 
server does around 80% fewer requests per day than the “leaky” one — a few 
hundred thousand vs a couple of million.

Does this CLOSE_WAIT sockets issue have a bug being tracked or anything like 
that? I’ve probably overlooked the discussion on the list.
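
In the meantime I'll keep eyeballing the socket counts with plain netstat, in
case anyone wants to compare boxes (3128 assumed as the proxy port):

# count sockets stuck in CLOSE_WAIT on the proxy port
netstat -tan | grep ':3128 ' | awk '$6 == "CLOSE_WAIT"' | wc -l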

> On 1 Jun 2016, at 10:26 PM, Amos Jeffries  wrote:
> 
> Hi Dan,
> sorry RL getting in the way these weeks.
> 
> Two things stand out for me.
> 
> Its a bit odd that exteral ACL entries shodul be so high. But your
> "normal" report has more allocated than the "leaky" report. So thats
> just a sign that your external ACLs are not working very efficiently
> (results being fairly unique, so the lookup cache not being much use there).
> 
> In the "leaky" report there are 10K concurrent requests still active.
> Normal report shows only 1K requests. So up to 10x the state data
> storeage is needed by that proxy.
> 
> 
> I'm a little suspicious you might be seeing another symptom of the issue
> behind what others have been reporting as too many CLOSE_WAIT sockets
> staying open with Squid not doing anything for them.
> 
> Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] How to analyse squid memory usage

2016-05-23 Thread Dan Charlesworth
 1 251683 0.765 0.091 0.000
cbdata CbDataList (42)   963 1 2 0.50 0.000 0 0 2 0.50 0.000 3 1 2 541376 1.646 0.261 0.000
ACLStrategised1362 1 1 8.20 0.000 2 1 1 8.20 100.000 0 0 0 0 0.000 0.000 0.000
ACLStrategised1362 1 1 8.20 0.000 2 1 1 8.20 100.000 0 0 0 0 0.000 0.000 0.000
ACLStrategised1362 1 1 8.20 0.000 2 1 1 8.20 100.000 0 0 0 0 0.000 0.000 0.000
RegexList  883 1 1 8.20 0.000 3 1 1 8.20 100.000 0 0 0 0 0.000 0.000 0.000
cbdata ps_state (27)  2561 1 1 1.19 0.000 0 0 1 1.19 0.000 1 1 1 54661 0.166 0.070 0.000
cbdata RemovalPolicy (7)  1042 1 1 8.20 0.000 2 1 1 8.20 100.000 0 0 0 0 0.000 0.000 0.000
ACLUserData643 1 1 8.20 0.000 3 1 1 8.20 100.000 0 0 0 0 0.000 0.000 0.000
HttpHdrSc  1611 1 1 0.41 0.000 11 1 1 0.41 100.000 0 0 1 8 0.000 0.000 0.000
UFSStoreState::_queued_read   404 1 1 0.44 0.000 0 0 1 0.44 0.000 4 1 1 2828 0.009 0.001 0.000
cbdata IoResult (38)   404 1 1 0.46 0.000 0 0 1 0.46 0.000 4 1 1 251683 0.765 0.051 0.000
cbdata CbDataList (5)   642 1 1 8.20 0.000 2 1 1 8.20 100.000 0 0 0 0 0.000 0.000 0.000
cbdata StoreSearchHashIndex (17)  1041 1 1 8.20 0.000 1 1 1 8.20 100.000 0 0 1 7 0.000 0.000 0.000
ACLHTTPHeaderData  482 1 1 8.20 0.000 2 1 1 8.20 100.000 0 0 0 0 0.000 0.000 0.000
ACLSslErrorData166 1 1 8.20 0.000 6 1 1 8.20 100.000 0 0 0 0 0.000 0.000 0.000
cbdata WriteRequest (39)   801 1 1 8.19 0.000 0 0 1 8.19 0.000 1 1 1 881667 2.681 0.354 0.001
ACLAtStepData  243 1 1 8.20 0.000 3 1 1 8.20 100.000 0 0 0 0 0.000 0.000 0.000
StoreSwapLogData   721 1 1 8.20 0.000 0 0 1 8.20 0.000 1 1 1 95999 0.292 0.035 0.000
ACLMethodData  163 1 1 8.20 0.000 3 1 1 8.20 100.000 0 0 0 0 0.000 0.000 0.000
dwrite_q   481 1 1 8.20 0.000 0 0 1 8.20 0.000 1 1 1 977678 2.973 0.236 0.001
HttpHdrContRange   242 1 1 2.88 0.000 0 0 1 2.88 0.000 2 1 1 345 0.001 0.000 0.000
ACLNoteData401 1 1 8.20 0.000 1 1 1 8.20 100.000 0 0 0 0 0.000 0.000 0.000
cbdata IoResult (40)   401 1 1 8.19 0.000 0 0 1 8.19 0.000 1 1 1 881667 2.681 0.177 0.001
ACLASN 162 1 1 8.20 0.000 2 1 1 8.20 100.000 0 0 0 0 0.000 0.000 0.000
ACLDomainData  162 1 1 8.20 0.000 2 1 1 8.20 100.000 0 0 0 0 0.000 0.000 0.000
ACLHierCodeData321 1 1 8.20 0.000 1 1 1 8.20 100.000 0 0 0 0 0.000 0.000 0.000
ACLTimeData321 1 1 8.20 0.000 1 1 1 8.20 100.000 0 0 0 0 0.000 0.000 0.000
ACLServerNameData  162 1 1 8.20 0.000 2 1 1 8.20 100.000 0 0 0 0 0.000 0.000 0.000
CacheDigest321 1 1 8.20 0.000 1 1 1 8.20 100.000 0 0 0 0 0.000 0.000 0.000
StoreMetaMD5   321 1 1 8.19 0.000 0 0 1 8.19 0.000 1 1 1 9852 0.030 0.002 0.000
StoreMetaSTDLFS321 1 1 8.19 0.000 0 0 1 8.19 0.000 1 1 1 9852 0.030 0.002 0.000
StoreMetaURL   321 1 1 8.19 0.000 0 0 1 8.19 0.000 1 1 1 9852 0.030 0.002 0.000
StoreMetaObjSize   321 1 1 8.19 0.000 0 0 1 8.19 0.000 1 1 1 9852 0.030 0.002 0.000
StoreMetaVary  321 1 1 8.19 0.000 0 0 1 8.19 0.000 1 1 1 1884 0.006 0.000 0.000
HttpHdrRange   321 1 1 2.88 0.000 0 0 1 2.88 0.000 1 1 1 202 0.001 0.000 0.000
FwdServer  241 1 1 8.20 0.000 0 0 1 8.20 0.000 1 1 1 54662 0.166 0.007 0.000
ACLProtocolData161 1 1 8.20 0.000 1 1 1 8.20 100.000 0 0 0 0 0.000 0.000 0.000
HttpHdrRangeSpec   161 1 1 2.88 0.000 0 0 1 2.88 0.000 1 1 1 202 0.001 0.000 0.000
32K Buffer   327680 0 64 0.88 0.000 0 0 64 0.88 -1.000 0 0 64 2 0.000 0.000 0.000
dlink_node 240 0 1 8.19 0.000 0 0 1 8.19 -1.000 0 0 1 70 0.000 0.000 0.000
cbdata nsvc (8)720 0 1 3.36 0.000 0 0 1 3.36 -1.000 0 0 1 4 0.000 0.000 0.000
cbdata RebuildState (15)  6880 0 1 8.20 0.000 0 0 1 8.20 -1.000 0 0 1 0 0.000 0.000 0.000
Total   11770758 488686 488686 0.00 100.000 1768829 485604 485604 0.00 99.369 1929 3083 6006 30974128 94.181 96.746 0.022
Cumulative allocated volume: 19.925 GB
Current overhead: 41966 bytes (0.008%)
Idle pool limit: 5.00 MB
Total Pools created: 134
Pools ever used: 126 (shown above)
Currently in use:95
String Pool Impact
  (%strings) (%volume)
Short Strings69 28
Medium Strings   25 31
Long Strings 5 25
1KB Strings  1 13
4KB Strings  0 0
16KB Strings 0 3
Other Strings0 0

Large buffers: 0 (0 KB)


> On 12 May 2016, at 11:37 AM, Dan Charlesworth <d...@getbusi.com> wrote:
> 
> I’ve now got mgr:mem output from a leaky box for comparison but I’m having a 
> hard time spotting where the problem might be.
> 
> Would anyone more experienced mind taking at these and seeing if anything 
> jumps out as 

Re: [squid-users] How to analyse squid memory usage

2016-05-11 Thread Dan Charlesworth
1KB Strings  0 0
4KB Strings  0 1
16KB Strings 0 5
Other Strings0 0

Large buffers: 0 (0 KB)



Thanks!

> On 11 May 2016, at 2:37 PM, Dan Charlesworth <d...@getbusi.com> wrote:
> 
> Thanks Amos -
> 
> Not sure how self-explanatory the output is, though.
> 
> I’ve attached the output from a site with a 12GB server where top was showing 
> 2.9GB allocated to squid (this is normal e.g. “the control"). But the mem 
> output shows the allocated total as ~1GB, apparently?
> 
> Maybe things will become clearer once I have a “leaky” server’s output to 
> compare with it.
> 
> 
> 
>> On 10 May 2016, at 6:02 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
>> 
>> On 10/05/2016 2:35 p.m., Dan Charlesworth wrote:
>>> A small percentage of deployments of our squid-based product are using 
>>> oodles of memory—there doesn’t seem to be a limit to it.
>>> 
>>> I’m wondering what the best way might be to analyse what squid is reserving 
>>> it all for in the latest 3.5 release?
>>> 
>>> The output of squidclient mgr:cache_mem is completely incomprehensible to 
>>> me.
>> 
>> Try mgr:mem report. It is TSV (tab-separated values) file format.
>> 
>> squidclient mgr:mem > mem.tsv
>> 
>> ... and load mem.tsv using your favourite spreadsheet program. The
>> column titles should then be self-explanatory.
>> 
>> Amos
>> 
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
> 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] How to analyse squid memory usage

2016-05-10 Thread Dan Charlesworth
0 0 1 9.60 -1.000 0 0 1 0 0.000 0.000 0.000
Total   15654759 1016434 1045039 0.76 100.000 5652896 1015122 1041138 0.77 99.871 1863 1313 9295 172333656 96.047 97.847 0.123
Cumulative allocated volume: 91.517 GB
Current overhead: 45448 bytes (0.004%)
Idle pool limit: 5.00 MB
Total Pools created: 145
Pools ever used: 137 (shown above)
Currently in use:104
String Pool Impact
  (%strings) (%volume)
Short Strings57 14
Medium Strings   24 19
Long Strings 19 60
1KB Strings  0 0
4KB Strings  0 1
16KB Strings 0 5
Other Strings0 0

Large buffers: 0 (0 KB)



> On 10 May 2016, at 6:02 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
> 
> On 10/05/2016 2:35 p.m., Dan Charlesworth wrote:
>> A small percentage of deployments of our squid-based product are using 
>> oodles of memory—there doesn’t seem to be a limit to it.
>> 
>> I’m wondering what the best way might be to analyse what squid is reserving 
>> it all for in the latest 3.5 release?
>> 
>> The output of squidclient mgr:cache_mem is completely incomprehensible to me.
> 
> Try mgr:mem report. It is TSV (tab-separated values) file format.
> 
>  squidclient mgr:mem > mem.tsv
> 
> ... and load mem.tsv using your favourite spreadsheet program. The
> column titles should then be self-explanatory.
> 
> Amos
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] How to analyse squid memory usage

2016-05-09 Thread Dan Charlesworth
A small percentage of deployments of our squid-based product are using oodles 
of memory—there doesn’t seem to be a limit to it.

I’m wondering what the best way might be to analyse what squid is reserving it 
all for in the latest 3.5 release?

The output of squidclient mgr:cache_mem is completely incomprehensible to me.

Thanks!
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Bizarrely slow, timing out DNS only via Squid :D

2016-03-07 Thread Dan Charlesworth
For anyone still following along, I've since discovered the resolv.conf 
option "single-request-reopen", which seems to fix the slowness in every 
situation (e.g. curl and dig +trace) except my squidclient tests.

Currently waiting to get access to an actual proxy client to see if it’s any 
better from a real browser.

http://man7.org/linux/man-pages/man5/resolver.5.html
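
For the record, that's a single extra "options" line at the end of
/etc/resolv.conf; ours now looks like this (nameservers as in the resolv.conf
posted further down this thread):

search tceo
nameserver 192.231.203.3
nameserver 172.16.100.5
options single-request-reopen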

> On 8 Mar 2016, at 4:09 AM, Eliezer Croitoru <elie...@ngtech.co.il> wrote:
> 
> dig +trace results against ISP+other dns services shows 65000+ ms response 
> time which means that there is something wrong outside of squid.
> 
> Eliezer
> 
> On 07/03/2016 06:50, Dan Charlesworth wrote:
>> Alright, we’re getting somewhere.
>> 
>> A plain curl is about as slow as a default squid config curl:
>> 
>> P.S. I sent you a Skype request
>> 
>> ---
>> 
>> # time curl http://httpbin.org/ip
>> {
>>   "origin": "59.167.202.249"
>> }
>> 
>> real 0m5.513s
>> user 0m0.002s
>> sys  0m0.001s
>> 
>> # time curl http://httpbin.org/ip --proxy http://localhost:1
>> {
>>   "origin": "::1, 59.167.202.249"
>> }
>> 
>> real 0m5.469s
>> user 0m0.001s
>> sys  0m0.001s
>> 
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
>> 
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Bizarrely slow, timing out DNS only via Squid :D

2016-03-03 Thread Dan Charlesworth
Eliezer,

I haven't had time to put together a current squid.conf, make it readable and 
remove the sensitive stuff. But we don't have any DNS-related directives set; 
it's all just defaults for that stuff.

As for the other things you asked about:

1. The current resolv.conf looks like this:
```
search tceo

nameserver 192.231.203.3
nameserver 172.16.100.5
```

2. Using `dns_v4_first on` and `dns_nameservers 192.231.203.3 172.16.100.5`, 
doesn’t make any difference.


3. Here’s a test to your site with a single IPv4 address:

# time squidclient -h 10.100.128.1 http://ngtech.co.il

HTTP/1.1 200 OK
Server: nginx/1.8.0
Date: Fri, 04 Mar 2016 01:51:34 GMT
Content-Type: text/html
Content-Length: 10167
Last-Modified: Tue, 09 Feb 2016 15:56:55 GMT
Accept-Ranges: bytes
Vary: Accept-Encoding
X-Cache: MISS from livestream.tceo
X-Cache-Lookup: MISS from livestream.tceo:3128
Via: 1.1 livestream.tceo (squid/3.5.13)
Connection: close



real0m16.339s
user0m0.000s
sys 0m0.002s

4. Reverse DNS lookups for both DNS servers

# dig -x 192.231.203.3

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.37.rc1.el6_7.6 <<>> -x 192.231.203.3
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31360
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 4, ADDITIONAL: 8

;; QUESTION SECTION:
;3.203.231.192.in-addr.arpa.   IN  PTR

;; ANSWER SECTION:
3.203.231.192.in-addr.arpa. 149 IN  PTR resolv2.internode.on.net.

;; AUTHORITY SECTION:
203.231.192.in-addr.arpa. 149   IN  NS  ns4.on.net.
203.231.192.in-addr.arpa. 149   IN  NS  ns3.on.net.
203.231.192.in-addr.arpa. 149   IN  NS  ns1.on.net.
203.231.192.in-addr.arpa. 149   IN  NS  ns2.on.net.

;; ADDITIONAL SECTION:
ns1.on.net. 13301   IN  A   203.16.213.172
ns1.on.net. 4681    IN  AAAA    2001:44b8:f020:ff00::80
ns2.on.net. 13906   IN  A   192.231.203.2
ns2.on.net. 12151   IN  AAAA    2001:44b8:8020:ff00::80
ns3.on.net. 13407   IN  A   150.101.197.131
ns3.on.net. 4681    IN  AAAA    2001:44b8:b070:ff00::80
ns4.on.net. 13374   IN  A   192.231.203.4
ns4.on.net. 9533    IN  AAAA    2001:44b8:8060:ff00::80

;; Query time: 23 msec
;; SERVER: 192.231.203.3#53(192.231.203.3)
;; WHEN: Fri Mar  4 12:59:02 2016
;; MSG SIZE  rcvd: 330

# dig -x 172.16.100.5

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.37.rc1.el6_7.6 <<>> -x 172.16.100.5
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 35335
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;5.100.16.172.in-addr.arpa. IN  PTR

;; AUTHORITY SECTION:
16.172.in-addr.arpa. 86400  IN  SOA localhost. root.localhost. 1 604800 86400 2419200 86400

;; Query time: 21 msec
;; SERVER: 192.231.203.3#53(192.231.203.3)
;; WHEN: Fri Mar  4 12:59:14 2016
;; MSG SIZE  rcvd: 93

---

Was there anything else I missed?

> On 4 Mar 2016, at 9:49 AM, Eliezer Croitoru <elie...@ngtech.co.il> wrote:
> 
> This is where you need to share your squid.conf..
> Also what was the result of the query I mentioned?
> 
> Another one to try is:
> http://www.squid-cache.org/Doc/config/dns_v4_first/
> 
> try adding to the end of squid.conf
> dns_v4_first on
> 
> All The Bests,
> Eliezer
> 
> On 04/03/2016 00:42, Dan Charlesworth wrote:
>> Thanks for your input Eliezer.
>> 
>> I've tested against various public DNS servers at this point so I'm
>> ruling out any DNS-server-side problems. The only time there's any
>> timeouts or slowness is when the request is going through squid. Doesn't
>> seem to matter which HTTP server I'm requesting, whether it returns
>> multiple IPs or not.
>> 
>> Also worth noting that this company has about 30 other sites with mostly
>> identical network topologies and equipment where it's completely fine.
>> 
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Bizarrely slow, timing out DNS only via Squid 

2016-03-02 Thread Dan Charlesworth
Here we go:

# time dig -x 10.100.128.1

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.37.rc1.el6_7.6 <<>> -x 10.100.128.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 11319
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;1.128.100.10.in-addr.arpa. IN  PTR

;; AUTHORITY SECTION:
10.in-addr.arpa. 86400  IN  SOA localhost. root.localhost. 1 604800 86400 2419200 86400

;; Query time: 32 msec
;; SERVER: 192.231.203.3#53(192.231.203.3)
;; WHEN: Thu Mar  3 18:07:21 2016
;; MSG SIZE  rcvd: 93

real0m0.037s
user0m0.003s
sys 0m0.001s


> On 3 Mar 2016, at 5:44 PM, Eliezer Croitoru <elie...@ngtech.co.il> wrote:
> 
> can you try the next command:
> dig -x 10.100.128.1
> 
> Eliezer
> 
> On 03/03/2016 08:04, Dan Charlesworth wrote:
>> Like this:
>> 
>> # time nslookup httpbin.org
>> Server:  192.231.203.3
>> Address: 192.231.203.3#53
>> 
>> Non-authoritative answer:
>> Name:    httpbin.org
>> Address: 54.175.222.246
>> 
>> real 0m0.026s
>> user 0m0.001s
>> sys  0m0.004s
>> 
>> 
>> # time dig httpbin.org
>> 
>> ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.37.rc1.el6_7.6 <<>> httpbin.org
>> ;; global options: +cmd
>> ;; Got answer:
>> ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44477
>> ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 4, ADDITIONAL: 4
>> 
>> ;; QUESTION SECTION:
>> ;httpbin.org.    IN  A
>> 
>> ;; ANSWER SECTION:
>> httpbin.org. 577 IN  A   54.175.222.246
>> 
>> ;; AUTHORITY SECTION:
>> httpbin.org. 6161    IN  NS  ns-769.awsdns-32.net.
>> httpbin.org. 6161    IN  NS  ns-1074.awsdns-06.org.
>> httpbin.org. 6161    IN  NS  ns-410.awsdns-51.com.
>> httpbin.org. 6161    IN  NS  ns-1756.awsdns-27.co.uk.
>> 
>> ;; ADDITIONAL SECTION:
>> ns-410.awsdns-51.com.    9966    IN  A   205.251.193.154
>> ns-769.awsdns-32.net.    13639   IN  A   205.251.195.1
>> ns-1074.awsdns-06.org.   11459   IN  A   205.251.196.50
>> ns-1756.awsdns-27.co.uk. 11489   IN  A   205.251.198.220
>> 
>> ;; Query time: 21 msec
>> ;; SERVER: 192.231.203.3#53(192.231.203.3)
>> ;; WHEN: Thu Mar  3 17:03:04 2016
>> ;; MSG SIZE  rcvd: 246
>> 
>> real 0m0.026s
>> user 0m0.004s
>> sys  0m0.001s
>> 
>> 
>>> On 3 Mar 2016, at 4:55 PM, Eliezer Croitoru <elie...@ngtech.co.il> wrote:
>>> 
>>> Hey Dan,
>>> 
>>> What dig+nslookup queries did you test?
>>> 
>>> Eliezer
>>> 
>>> On 03/03/2016 07:39, Dan Charlesworth wrote:
>>>> Right now we have 1 squid box (out of a lot), running 3.5.13, which does 
>>>> something like this for every request, taking about 10 seconds:
>>>> 
>>>> 2016/03/03 16:30:48.883 kid1| 78,3| dns_internal.cc(1794) idnsPTRLookup: 
>>>> idnsPTRLookup: buf is 43 bytes for 10.100.128.1, id = 0x733a
>>>> 2016/03/03 16:30:48.883 kid1| 78,3| dns_internal.cc(1745) idnsALookup: 
>>>> idnsALookup: buf is 29 bytes for httpbin.org, id = 0x8528
>>>> 2016/03/03 16:30:48.883 kid1| 78,3| dns_internal.cc(1683) 
>>>> idnsSendSlaveQuery: buf is 29 bytes for httpbin.org, id = 0x69c2
>>>> 2016/03/03 16:30:48.884 kid1| 78,3| dns_internal.cc(1277) idnsRead: 
>>>> idnsRead: starting with FD 7
>>>> 2016/03/03 16:30:48.884 kid1| 78,3| dns_internal.cc(1323) idnsRead: 
>>>> idnsRead: FD 7: received 93 bytes from 192.231.203.132:53
>>>> 2016/03/03 16:30:48.884 kid1| 78,3| dns_internal.cc(1130) idnsGrokReply: 
>>>> idnsGrokReply: QID 0x733a, -3 answers
>>>> 2016/03/03 16:30:48.884 kid1| 78,3| dns_internal.cc(1195) idnsGrokReply: 
>>>> idnsGrokReply: error Name Error: The domain name does not exist. (3)
>>>> 2016/03/03 16:30:53.884 kid1| 78,3| dns_internal.cc(1384) idnsCheckQueue: 
>>>> idnsCheckQueue: ID dns8 QID 0x8528: timeout
>>>> 2016/03/03 16:30:53.884 kid1| 78,3| dns_internal.cc(1384) idnsCheckQueue: 
>>>> idnsCheckQueue: ID dns0 QID 0x69c2: timeout
>>>> 2016/03/03 16:30:53.885 kid1| 78,3| dns_internal.cc(1277) idnsRead: 
>>>> idnsRead: starting with FD 7
>>>> 2016/03/03 16:30:53.885 kid1| 78,3| dns_internal.cc(1323) idnsRead: 
>>>> idnsRead: FD 7: received 110 bytes f

Re: [squid-users] Bizarrely slow, timing out DNS only via Squid 

2016-03-02 Thread Dan Charlesworth
Like this:

# time nslookup httpbin.org
Server: 192.231.203.3
Address:192.231.203.3#53

Non-authoritative answer:
Name:   httpbin.org
Address: 54.175.222.246

real0m0.026s
user0m0.001s
sys 0m0.004s


# time dig httpbin.org

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.37.rc1.el6_7.6 <<>> httpbin.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44477
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 4, ADDITIONAL: 4

;; QUESTION SECTION:
;httpbin.org.   IN  A

;; ANSWER SECTION:
httpbin.org.    577 IN  A   54.175.222.246

;; AUTHORITY SECTION:
httpbin.org.    6161    IN  NS  ns-769.awsdns-32.net.
httpbin.org.    6161    IN  NS  ns-1074.awsdns-06.org.
httpbin.org.    6161    IN  NS  ns-410.awsdns-51.com.
httpbin.org.    6161    IN  NS  ns-1756.awsdns-27.co.uk.

;; ADDITIONAL SECTION:
ns-410.awsdns-51.com.   9966    IN  A   205.251.193.154
ns-769.awsdns-32.net.   13639   IN  A   205.251.195.1
ns-1074.awsdns-06.org.  11459   IN  A   205.251.196.50
ns-1756.awsdns-27.co.uk. 11489  IN  A   205.251.198.220

;; Query time: 21 msec
;; SERVER: 192.231.203.3#53(192.231.203.3)
;; WHEN: Thu Mar  3 17:03:04 2016
;; MSG SIZE  rcvd: 246

real0m0.026s
user0m0.004s
sys 0m0.001s


> On 3 Mar 2016, at 4:55 PM, Eliezer Croitoru <elie...@ngtech.co.il> wrote:
> 
> Hey Dan,
> 
> What dig+nslookup queries did you test?
> 
> Eliezer
> 
> On 03/03/2016 07:39, Dan Charlesworth wrote:
>> Right now we have 1 squid box (out of a lot), running 3.5.13, which does 
>> something like this for every request, taking about 10 seconds:
>> 
>> 2016/03/03 16:30:48.883 kid1| 78,3| dns_internal.cc(1794) idnsPTRLookup: 
>> idnsPTRLookup: buf is 43 bytes for 10.100.128.1, id = 0x733a
>> 2016/03/03 16:30:48.883 kid1| 78,3| dns_internal.cc(1745) idnsALookup: 
>> idnsALookup: buf is 29 bytes for httpbin.org, id = 0x8528
>> 2016/03/03 16:30:48.883 kid1| 78,3| dns_internal.cc(1683) 
>> idnsSendSlaveQuery: buf is 29 bytes for httpbin.org, id = 0x69c2
>> 2016/03/03 16:30:48.884 kid1| 78,3| dns_internal.cc(1277) idnsRead: 
>> idnsRead: starting with FD 7
>> 2016/03/03 16:30:48.884 kid1| 78,3| dns_internal.cc(1323) idnsRead: 
>> idnsRead: FD 7: received 93 bytes from 192.231.203.132:53
>> 2016/03/03 16:30:48.884 kid1| 78,3| dns_internal.cc(1130) idnsGrokReply: 
>> idnsGrokReply: QID 0x733a, -3 answers
>> 2016/03/03 16:30:48.884 kid1| 78,3| dns_internal.cc(1195) idnsGrokReply: 
>> idnsGrokReply: error Name Error: The domain name does not exist. (3)
>> 2016/03/03 16:30:53.884 kid1| 78,3| dns_internal.cc(1384) idnsCheckQueue: 
>> idnsCheckQueue: ID dns8 QID 0x8528: timeout
>> 2016/03/03 16:30:53.884 kid1| 78,3| dns_internal.cc(1384) idnsCheckQueue: 
>> idnsCheckQueue: ID dns0 QID 0x69c2: timeout
>> 2016/03/03 16:30:53.885 kid1| 78,3| dns_internal.cc(1277) idnsRead: 
>> idnsRead: starting with FD 7
>> 2016/03/03 16:30:53.885 kid1| 78,3| dns_internal.cc(1323) idnsRead: 
>> idnsRead: FD 7: received 110 bytes from 172.16.100.4:53
>> 2016/03/03 16:30:53.885 kid1| 78,3| dns_internal.cc(1130) idnsGrokReply: 
>> idnsGrokReply: QID 0x69c2, 0 answers
>> 2016/03/03 16:30:58.885 kid1| 78,3| dns_internal.cc(1384) idnsCheckQueue: 
>> idnsCheckQueue: ID dns8 QID 0x8528: timeout
>> 2016/03/03 16:30:58.886 kid1| 78,3| dns_internal.cc(1277) idnsRead: 
>> idnsRead: starting with FD 7
>> 2016/03/03 16:30:58.886 kid1| 78,3| dns_internal.cc(1323) idnsRead: 
>> idnsRead: FD 7: received 246 bytes from 172.16.100.5:53
>> 2016/03/03 16:30:58.886 kid1| 78,3| dns_internal.cc(1130) idnsGrokReply: 
>> idnsGrokReply: QID 0x8528, 1 answers
>> 
>> AND YET, every nslookup or dig done at the command line on the same server 
>> is lightning fast. I’ve tried local and ISP-level DNS servers and get the 
>> same result.
>> 
>> What could be going on here?
>> 
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Bizarrely slow, timing out DNS only via Squid 

2016-03-02 Thread Dan Charlesworth
Right now we have 1 squid box (out of a lot), running 3.5.13, which does 
something like this for every request, taking about 10 seconds:

2016/03/03 16:30:48.883 kid1| 78,3| dns_internal.cc(1794) idnsPTRLookup: 
idnsPTRLookup: buf is 43 bytes for 10.100.128.1, id = 0x733a
2016/03/03 16:30:48.883 kid1| 78,3| dns_internal.cc(1745) idnsALookup: 
idnsALookup: buf is 29 bytes for httpbin.org, id = 0x8528
2016/03/03 16:30:48.883 kid1| 78,3| dns_internal.cc(1683) 
idnsSendSlaveQuery: buf is 29 bytes for httpbin.org, id = 0x69c2
2016/03/03 16:30:48.884 kid1| 78,3| dns_internal.cc(1277) idnsRead: idnsRead: 
starting with FD 7
2016/03/03 16:30:48.884 kid1| 78,3| dns_internal.cc(1323) idnsRead: idnsRead: 
FD 7: received 93 bytes from 192.231.203.132:53
2016/03/03 16:30:48.884 kid1| 78,3| dns_internal.cc(1130) idnsGrokReply: 
idnsGrokReply: QID 0x733a, -3 answers
2016/03/03 16:30:48.884 kid1| 78,3| dns_internal.cc(1195) idnsGrokReply: 
idnsGrokReply: error Name Error: The domain name does not exist. (3)
2016/03/03 16:30:53.884 kid1| 78,3| dns_internal.cc(1384) idnsCheckQueue: 
idnsCheckQueue: ID dns8 QID 0x8528: timeout
2016/03/03 16:30:53.884 kid1| 78,3| dns_internal.cc(1384) idnsCheckQueue: 
idnsCheckQueue: ID dns0 QID 0x69c2: timeout
2016/03/03 16:30:53.885 kid1| 78,3| dns_internal.cc(1277) idnsRead: idnsRead: 
starting with FD 7
2016/03/03 16:30:53.885 kid1| 78,3| dns_internal.cc(1323) idnsRead: idnsRead: 
FD 7: received 110 bytes from 172.16.100.4:53
2016/03/03 16:30:53.885 kid1| 78,3| dns_internal.cc(1130) idnsGrokReply: 
idnsGrokReply: QID 0x69c2, 0 answers
2016/03/03 16:30:58.885 kid1| 78,3| dns_internal.cc(1384) idnsCheckQueue: 
idnsCheckQueue: ID dns8 QID 0x8528: timeout
2016/03/03 16:30:58.886 kid1| 78,3| dns_internal.cc(1277) idnsRead: idnsRead: 
starting with FD 7
2016/03/03 16:30:58.886 kid1| 78,3| dns_internal.cc(1323) idnsRead: idnsRead: 
FD 7: received 246 bytes from 172.16.100.5:53
2016/03/03 16:30:58.886 kid1| 78,3| dns_internal.cc(1130) idnsGrokReply: 
idnsGrokReply: QID 0x8528, 1 answers

AND YET, every nslookup or dig done at the command line on the same server is 
lightning fast. I’ve tried local and ISP-level DNS servers and get the same 
result.

What could be going on here? 


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL bump memory leak

2016-02-24 Thread Dan Charlesworth
I’m just catching up with this one, but we’ve observed some memory leaks on a 
small percentage of our boxes, which we migrated to Peek & Splice late last 
year. 

We’re on 3.5.13, about to move to 3.5.15.

What’s the least disruptive way to keep this under control, if there is one?

Is there anything I can do to help get it patched?

> On 25 Feb 2016, at 9:37 AM, Amos Jeffries  wrote:
> 
> On 24/02/2016 11:17 p.m., Steve Hill wrote:
>> On 23/02/16 21:28, Amos Jeffries wrote:
>> 
>>> Ah, you said "a small number" of wiki cert strings with those details. I
>>> took that as meaning a small number of definitely squid generated ones
>>> amidst the 130K indeterminate ones leaking.
>> 
>> Ah, a misunderstanding on my part - sorry.  Yes, there were 302 strings
>> containing "signTrusted" (77 of them unique), all of them appear to be
>> server certificates (i.e. with a CN containing a domain name), so it is
>> possibly reasonable to assume that they were for in-progress sessions
>> and would therefore be cleaned up.
>> 
>> This leaves around 131297 other subject/issuer strings (581 unique)
>> which, to my mind, can't be explained by anything other than a leak
>> (whether that be a "real" leak where the pointers have been discarded
>> without freeing the data, or a "pseudo" leak caused by references to
>> them being held forever).
>> 
> 
> I agree it's almost certainly a leak.
> 
> Christos and William L. have fixed some leaks in the Squid-4 cert
> generator non-caching configs recently. I'm not sure yet if it's
> applicable to 3.5 or not, but from the sounds of this it very well could
> be the same thing.
> Unfortunately the code is quite a bit different in this area now so the
> patches won't directly port. I think you had best get in touch with
> Christos about this.
> 
> Amos
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] any way to get squid-4 compiled on CentOS-6?

2016-02-23 Thread Dan Charlesworth
Thanks Amos, good to know. I didn’t see your original reply for some reason; 
sorry about that.

I thought I had read that these sorts of errors could be avoided in Squid-4:
Error negotiating SSL connection on FD 66: error:1408A0C1:SSL 
routines:SSL3_GET_CLIENT_HELLO:no shared cipher (1/-1)

But now I can't even find a source for that … I need to spend some quality 
time with Google, I think.

> On 24 Feb 2016, at 5:50 AM, Amos Jeffries <squ...@treenet.co.nz> wrote:
> 
> On 23/02/2016 1:05 p.m., Dan Charlesworth wrote:
>> I'm bumping this question back up, because I also would like to know.
>> 
>> We'd rather not require users of our squid-based software to deploy new
>> CentOS 7 servers to run it.
>> 
> 
> My reply to Jason on the 12th has not changed. A full system upgrade
> should not be required, just a parallel compiler installation, or a VM for
> testing with if you do want to go the whole way.
> 
> While there are a lot of TLS/SSL related patches going into Squid-4, the
> ones that stick there should largely be cosmetic code shuffling or
> renaming for later improvements. We are trying to get the bug fixes
> backported to 3.5 still. If you are aware of one that got missed and is
> causing pain please let us/Christos know.
> 
>> 
>> On 12 February 2016 at 19:59, Jason Haar wrote:
>> 
>>> Hi there
>>> 
>>> Given the real work on ssl-bump seems to be in squid-4, I thought to try
>>> it out. Unfortunately, we're using CentOS-6 and the compilers are too
>>> old? (gcc-c++-4.4.7/clang-3.4.2)
>>> 
>>> CentOS-7 should be fine - but replacing an entire system just to have a
>>> play is a bit too much to ask, so has anyone figured out how to get
>>> squid-4 working on such older systems?
>>> 
> 
> Amos
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] any way to get squid-4 compiled on CentOS-6?

2016-02-22 Thread Dan Charlesworth
I'm bumping this question back up, because I also would like to know.

We'd rather not require users of our squid-based software to deploy new
CentOS 7 servers to run it.


On 12 February 2016 at 19:59, Jason Haar  wrote:

> Hi there
>
> Given the real work on ssl-bump seems to be in squid-4, I thought to try
> it out. Unfortunately, we're using CentOS-6 and the compilers are too
> old? (gcc-c++-4.4.7/clang-3.4.2)
>
> CentOS-7 should be fine - but replacing an entire system just to have a
> play is a bit too much to ask, so has anyone figured out how to get
> squid-4 working on such older systems?
>
> Thanks
>
> --
> Cheers
>
> Jason Haar
> Corporate Information Security Manager, Trimble Navigation Ltd.
> Phone: +1 408 481 8171
> PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Delay Pools and HTTPS on Squid 3.x

2016-02-16 Thread Dan Charlesworth
It's been a while since I've looked at this—because the software we use to
generate our squid.conf just works around it now—but we found that Squid 3
would only enforce exactly half the configured rate on HTTP requests while
enforcing the full rate on HTTPS requests.

So we now make two delay pools for every "restriction": one for HTTP which
is x2 the byte rate and one for HTTPS which is normal.
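
Roughly like this (a from-memory sketch rather than our literal config; the
rates are illustrative, and the CONNECT split is the important part):

acl CONNECT method CONNECT
delay_pools 2
# pool 1: plain HTTP, double the target rate to offset the halving
delay_class 1 2
delay_access 1 allow !CONNECT
delay_parameters 1 -1/-1 24456/24456
# pool 2: HTTPS (CONNECT tunnels), the real target rate
delay_class 2 2
delay_access 2 allow CONNECT
delay_parameters 2 -1/-1 12228/12228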

I don't think we looked much more into it or filed a bug, 'cause none of the
developers seem very keen on pushing delay_pools forward, due to there being
more robust network-level approaches these days.

On Wed, 17 Feb 2016 at 12:37 Hery Martin  wrote:

> Hello everybody:
>
> Since a few months ago I'm using squid to provide a solution as small
> business proxy in the network of my work place.
>
> I'm from Cuba; in our country the Internet is a very limited resource. I
> have only one 2Mbps link to share with 20 ~ 25 users (even though my
> network has more than 60, this is the normal concurrent number).
>
> When I started the squid deployment in my network I was using version
> 2.7.STABLE9; I made all the arrangements to get it working with my AD to
> match ACLs using AD groups, and everything worked perfectly.
>
> I defined one class 2 delay pool to limit traffic to approx. 12 KBytes/s
> per user:
>
> delay_pools 1
> delay_class 1 2
> delay_parameters 1 -1/-1 12228/12228
>
> The delay pool worked perfectly; I was checking with the real-time tool
> sqstat and with squidclient mgr:delay.
>
> NOW.
>
> I recently upgraded squid to 3.3.8 and I noticed that the delay pool
> started going wrong when users surf or download using the HTTPS protocol.
>
> I checked in real time, and when the users browse HTTPS the pool goes into
> negative numbers and starts to grow and grow. It's very easy to check: just
> define a delay pool with 5KB and start a download from an HTTPS source,
> then watch it with squidclient mgr:delay; the IP takes a negative pool
> value and keeps growing until the download finishes.
>
> Frustrated with this behavior, I tried different squid versions on a
> virtualization server and confirmed that the problem occurs with squid 3.x
> versions. Today I made a final test, and I think the implementation of
> HTTP/1.1 may be related to the problem (I'm not sure, but tomorrow I will
> run a few tests with squid 3.1, where HTTP/1.1 was not yet implemented).
>
> Please, if you have the opportunity, just test this in a lab environment.
> I decided to write to this mailing list because I asked many people who
> have already implemented squid as a proxy in their networks, and they
> didn't believe me until I demonstrated the issue.
>
> Does anyone have information about this bug? Is there any hope of fixing
> this problem at the code level?
>
> Anyway, I'm a computer systems engineer and I write a lot of C++ every
> week... I'm not involved with squid development (never saw the code in my
> life), but if somebody has any idea how to fix this and wants help, just
> count me in.
>
> Greetings from Cuba and sorry about my English :)
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] using splice just to improve TLS SNI logging

2015-12-03 Thread Dan Charlesworth
It’s been a far superior client experience to bumping on the deployments I’ve 
seen. Obviously MITM-ing a connection is always going to be a less amenable 
situation for clients; technically and ethically.

The only problem I've had with splicing is the Host Header Forgery error 
squid raises when it resolves a different IP for an HTTPS host than the 
client does. It's pretty well minimised by making sure the client and squid 
box are using the same DNS server, but I still get the occasional timeout on 
github.com and missing images/media on twitter.com because of it.
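
For anyone wanting the minimal shape of a splice-everything setup, ours boils
down to something like the following (port and cert path are placeholders; a
certificate is still required by the ssl-bump option even though nothing
actually gets bumped):

https_port 3129 intercept ssl-bump cert=/etc/squid/proxy.pem
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump splice all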

> On 4 Dec 2015, at 2:35 PM, Jason Haar  wrote:
> 
> Hi there
> 
> We just had an incident where I would really have liked to have had
> transparent TLS intercept in place. Currently I'm still in
> "experimental" phase and don't want to go full "bump", but some quick
> testing of just activating "splice" with TLS intercept seems to me to be
> zero risk
> 
> ie instead of allowing direct port 443 Internet access, redirect it back
> onto squid-3.5 set to splice all port 443 traffic. End result is squid
> logfiles containing the following
> 
> .. CONNECT 1.2.3.4:443 blah
> .. CONNECT real.SNI.name:443 blah
> 
> Then at least I can see what HTTPS sites have been visited when I need to.
> 
> Does going "splice" mode avoid all the potential SSL/TLS issues
> surrounding bump? ie it won't care about client certs, weird TLS
> extensions, etc? (ie other than availability, it shouldn't introduce a
> new way of failing?)
> 
> Thanks!
> 
> -- 
> Cheers
> 
> Jason Haar
> Corporate Information Security Manager, Trimble Navigation Ltd.
> Phone: +1 408 481 8171
> PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
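
For reference, the splice-only setup described in this thread needs very
little configuration. A minimal sketch, assuming an intercepting https_port
on 3130 and a placeholder certificate path (the cert= argument is still
required by the port directive even though nothing gets bumped):

https_port 3130 intercept ssl-bump cert=/etc/squid/squidCA.pem
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump splice all

Peeking at step1 reads the ClientHello, which is what puts the SNI into
access.log; splicing then passes the bytes through untouched, so client
certificates, pinning and unusual TLS extensions should be unaffected.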


Re: [squid-users] Host header forgery detected after upgrade from 3.5.8 to 3.5.9

2015-11-24 Thread Dan Charlesworth
Alright, thanks for the hint.

My proxy and clients definitely have the same DNS server (I removed the 
secondary and tertiary ones to make totally sure) but the results definitely 
aren’t matching 99% of the time. Probably more like 90%.

Perhaps it’s 'cause my clients are caching records locally or something? It 
does seem to improve as the day progresses, after joining the intercepted wifi 
network in the morning.

Super annoying though trying to post a comment on GitHub or something and it 
just hangs.

> On 25 Nov 2015, at 11:19 AM, Amos Jeffries <squ...@treenet.co.nz> wrote:
> 
> On 25/11/2015 12:20 p.m., Dan Charlesworth wrote:
>> Thanks for the perspective on this, folks.
>> 
>> Going back to the technical stuff—and this isn’t really a squid thing—but is 
>> there any way I can minimise this using my DNS server? 
>> 
>> Can I force my local DNS to only ever return 1 address from the pool on a 
>> hostname I’m having trouble with?
> 
> That depends on your resolver, but I doubt it.
> 
> The DNS setup I mentioned in my last email to this thread is all I'm
> aware of that gets even close to a fix.
> 
> Note that you may have to intercept clients port 53 traffic (both UDP
> and TCP) to the resolver. That has implications with DNSSEC but should
> still work as long as you do not alter the DNS responses, the resolver
> is just there to ensure the same result goes to both querying parties.
> 
> Amos
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
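
A sketch of the port 53 interception Amos mentions, assuming the Squid box
is also the NAT gateway, clients sit behind eth1, and the shared resolver
listens on the gateway itself (the interface name is a placeholder):

iptables -t nat -A PREROUTING -i eth1 -p udp --dport 53 -j REDIRECT --to-ports 53
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 53 -j REDIRECT --to-ports 53

This drags clients that insist on their own DNS settings onto the resolver
Squid uses, which is the whole point: both parties should see the same
answer for the same name at the same moment.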


Re: [squid-users] Host header forgery detected after upgrade from 3.5.8 to 3.5.9

2015-11-24 Thread Dan Charlesworth
Thanks for the perspective on this, folks.

Going back to the technical stuff—and this isn’t really a squid thing—but is 
there any way I can minimise this using my DNS server? 

Can I force my local DNS to only ever return 1 address from the pool on a 
hostname I’m having trouble with?

> On 30 Oct 2015, at 4:50 AM, Alex Rousskov  
> wrote:
> 
> On 10/29/2015 11:29 AM, Matus UHLAR - fantomas wrote:
>>> On 10/28/2015 10:46 PM, Amos Jeffries wrote:
 NP: these problems do not exist for forward proxies. Only for traffic
 hijacking interceptor proxies.
>> 
>> On 29.10.15 09:05, Alex Rousskov wrote:
>>> For intercepted connections, Squid should, with an admin permission,
>>> connect to the intended IP address without validating whether that IP
>>> address matches the domain name (and without any side effects of such
>>> validation). In interception mode, the proxy should be as "invisible"
>>> (or as "invasive") as the admin wants it to be IMO -- all validations
>>> and protections should be optional. We could still enable them by
>>> default, of course.
>>> 
>>> SslBumped CONNECT-to-IP tunnels are essentially intercepted connections
>>> as well, even if they are using forwarding (not intercepting) http_ports.
> 
>> the "admin permission" is the key qestion here.  
> 
> Agreed. And understanding of what giving that permission implies!
> 
> 
>> There's possible problem
>> where the malicious client can connect to malicious server, ask for any
>> server name and the malicious content could get cached by squid as a proper
>> response.
> 
> Very true, provided that Squid trusts the unverified domain name to do
> caching. Squid does not have to do that. As Amos have noted, there are
> smart ways to minimize most of these problems, but they require more
> development work.
> 
> IMHO, it is important to establish the "do no harm" principle first and
> then use that to guide our development efforts. Unfortunately, some of
> the validation code was introduced under different principles, and we
> may still be debating what "harm" really means in this context while
> adjusting that code to meet varying admin needs.
> 
> Alex.
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Host header forgery detected after upgrade from 3.5.8 to 3.5.9

2015-10-29 Thread Dan Charlesworth
This is happening when my client and proxy are using the same DNS server. In 
this case, a local OS X Server which forwards to my ISP’s DNS servers.

As far as I can tell Google’s DNS isn’t in the equation any more. Even so, if I 
run a `dig watch` on the domain, it happily cycles through a pool of IPs 
apparently at random.

> On 29 Oct 2015, at 3:46 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
> 
> On 29/10/2015 1:16 p.m., Dan Charlesworth wrote:
>> It looks like there’s certain hosts that are designed to load balance (or 
>> something) between a few IPs, regardless of geography.
>> 
>> For example pbs.twimg.com resolves to wildcard.twimg.com which returns two 
>> different IPs each time, from a pool of 5–6, at random. Basically rolling 
>> the dice whether the client and the proxy are going to get the same IPs at 
>> the same time.
>> 
>> What is one to do about that?
> 
> The same thing. Ensuring that the proxy and the clients are using the
> same DNS server.
> 
> The reasoning goes like so:
> * some client does a DNS fetch causing the result to be cached in *that*
> server.
> * then the proxy repeats the query and gets the DNS cached result.
> * those results should match 99% of the time even if the domain DNS is
> playing tricks.
> 
> This falls down with the Google DNS because "8.8.8.8" is not one server
> but an entire farm of servers spread aroudn the globe. The two
> consecutive queries done often go to different physical servers.
> 
> You can of course configure 8.8.8.8 to be an upstream resolver for your
> local DNS server if you think that is a good idea. The key think is
> having the same local-end DNS cache being used by the clients and Squid.
> 
> 
> NP: these problems do not exist for forward proxies. Only for traffic
> hijacking interceptor proxies.
> 
> Amos
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
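
If you take the local-resolver route, a minimal Unbound sketch that forwards
to 8.8.8.8 while giving clients and Squid one shared cache; the interface
and access-control values are assumptions for a 192.168.9.0/24 LAN:

server:
    interface: 192.168.9.1
    access-control: 192.168.9.0/24 allow
forward-zone:
    name: "."
    forward-addr: 8.8.8.8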


Re: [squid-users] Host header forgery detected after upgrade from 3.5.8 to 3.5.9

2015-10-28 Thread Dan Charlesworth
It looks like there’s certain hosts that are designed to load balance (or 
something) between a few IPs, regardless of geography.

For example pbs.twimg.com resolves to wildcard.twimg.com which returns two 
different IPs each time, from a pool of 5–6, at random. Basically rolling the 
dice whether the client and the proxy are going to get the same IPs at the same 
time.

What is one to do about that?

> On 22 Oct 2015, at 10:00 PM, Yuri Voinov <yvoi...@gmail.com> wrote:
> 
> 
> 
> 22.10.15 15:58, Amos Jeffries пишет:
>> On 21/10/2015 4:53 p.m., Dan Charlesworth wrote:
>>> I’m getting these very frequently for api.github.com and github.com
>>> 
>>> I’m using the same DNS servers as my intercepting squid 3.5.10 proxy and 
>>> they only return the one IP when I do an nslookup as well …
>>> 
>>> Any updates from your end, Roel?
>> 
>> I just did a quick test of api.github.com and what I'm seeing is only
>> one IP at a time being delivered. BUT that IP is showing signs of being
>> geo-DNS based result and also has a 60 second TTL.
>> 
>> So ... when using the Google "free" DNS service it changes IP number
>> almost every second. Based on which of the Google servers you happen to
>> be working through with that particular request.
>> 
>> You can watch it cycling if you like:
>>  watch dig A api.github.com @8.8.8.8
>> 
>> 
>> You could run a local bind server and redirect UDP port 53 requests from
> ... or Unbound. ;) I use it.
>> clients to it so they stop using 8.8.8.8 etc and start using a DNS like
>> its supposed to work.
>> 
>> Amos
>> 
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Host header forgery detected after upgrade from 3.5.8 to 3.5.9

2015-10-20 Thread Dan Charlesworth
I’m getting these very frequently for api.github.com and github.com

I’m using the same DNS servers as my intercepting squid 3.5.10 proxy and they 
only return the one IP when I do an nslookup as well …

Any updates from your end, Roel?

> On 8 Oct 2015, at 8:29 PM, Eliezer Croitoru  wrote:
> 
> Since they are using the same dns server there is no need to run some trials.
> The only test you should in any case test is to see how long is the IP list 
> from the DNS request for the domain name.
> 
> Eliezer
> 
> On 08/10/2015 12:12, Roel van Meer wrote:
>> Eliezer Croitoru writes:
>> 
>>> Are the users and proxy using different dns server?
>> 
>> No, they are using the same server.
>> 
>>> Can you run dig from the proxy on this domain and dump the content to
>>> verify that the ip is indeed there?
>> 
>> I'm currently running with 3.5.8 again, so I'll have to find a quiet
>> hour where I can upgrade and check this. I'll get back to you. Thanks!
>> 
>> Roel
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Safari 9 vs. SSL Bump

2015-10-18 Thread Dan Charlesworth
Amos -

I’m going to assume that request was directed at Alex, as I don’t have editor 
access to the wiki. Let me know if not.

> On 16 Oct 2015, at 4:22 PM, Amos Jeffries  wrote:
> 
> Can you please add to the Troubleshooting section at the end of
> ?
> 
> a brief sentence describing the symptom(s), then what what done to
> resolve it would be great.
> 
> Amos
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Safari 9 vs. SSL Bump

2015-10-15 Thread Dan Charlesworth
Great, thanks. Don’t know why I didn’t think of it before but I’ll try 
elevating it from Login -> System keychain and see what happens.

> On 16 Oct 2015, at 11:51 AM, Jason Haar <jason_h...@trimble.com> wrote:
> 
> On 16/10/15 13:34, Dan Charlesworth wrote:
>> Thanks!
>> 
>> So ignoring the “bumpable” helper check, it’s effectively peeking at step1 
>> and then bumping it like my config’s doing.
>> 
>> I wonder what else could be differentiating it. Is your proxy CA just 
>> installed in the Login keychain?
> 
> Nope - did it "properly" at the OS level. Get a PEM version of your
> squidCA pubkey and as root do
> 
> security add-trusted-cert -d -r trustRoot -p ssl -p smime -p IPSec -p eap -p basic /path/squidCA.pem > /dev/null 2>&1 || true
> certtool i "/path/squidCA.pem" k=/System/Library/Keychains/X509Anchors > /dev/null 2>&1 || true
> 
> The "ipsec/smime" stuff is actually not needed - but I don't care ;-) I
> went for the carpet bombing approach for the Mac (which I don't know well)
> 
> -- 
> Cheers
> 
> Jason Haar
> Corporate Information Security Manager, Trimble Navigation Ltd.
> Phone: +1 408 481 8171
> PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Safari 9 vs. SSL Bump

2015-10-15 Thread Dan Charlesworth
Thanks!

So ignoring the “bumpable” helper check, it’s effectively peeking at step1 and 
then bumping it like my config’s doing.

I wonder what else could be differentiating it. Is your proxy CA just installed 
in the Login keychain?

> On 16 Oct 2015, at 11:26 AM, Jason Haar <jason_h...@trimble.com> wrote:
> 
> On 16/10/15 13:08, Dan Charlesworth wrote:
>> ORLY
>> 
>> I seem to recall this happening on 10.10 as well, but it could be an El 
>> Capitan thing. Do you mind reminding me of your squid config Jason?
> 
> With my config I trying to "aggressively" figure out if the transaction
> is safely going to be bump-able. I'm more willing to throw away (ie
> splice) things I'm unsure about than risk a client seeing an error. But
> for the websites you see problems with, I see nice clean bump-ing
> 
> 
> http_port 3128 ssl-bump cert=/etc/squid/squidCA.cert 
> generate-host-certificates=on dynamic_cert_mem_cache_size=256MB options=ALL
> acl DiscoverSNIHost at_step SslBump1
> ssl_bump peek DiscoverSNIHost
> #do we have a SNI? If not, it's not TLS
> acl SNIpresent ssl::server_name_regex .*
> 
> #this file contains https sites that we do not intercept - such as banks
> (because we want the data transfers to remain private)
> #and accounts.google.com (because Chrome uses cert pinning for that domain)
> # in general you will need to add all sites that involve cert pinning
> acl NoSSLIntercept ssl::server_name_regex -i
> "/etc/squid/acl-NoSSLIntercept.txt"
> 
> #this external_acl process will sanity-check HTTPS transactions that
> haven't being spliced yet
> #to ensure only the correct ones end up being bumped
> external_acl_type checkIfHTTPS children-max=20 concurrency=20
> negative_ttl=3600 ttl=3600 grace=90  %SRC %DST %PORT %ssl::>sni
> /usr/local/bin/confirm_https.pl
> acl is_ssl external checkIfHTTPS
> 
> ssl_bump splice !SNIpresent
> ssl_bump splice NoSSLIntercept
> ssl_bump bump is_ssl
> 
> -- 
> Cheers
> 
> Jason Haar
> Corporate Information Security Manager, Trimble Navigation Ltd.
> Phone: +1 408 481 8171
> PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Safari 9 vs. SSL Bump

2015-10-15 Thread Dan Charlesworth
ORLY

I seem to recall this happening on 10.10 as well, but it could be an El Capitan 
thing. Do you mind reminding me of your squid config Jason?

Thanks!

> On 16 Oct 2015, at 11:06 AM, Jason Haar <jason_h...@trimble.com> wrote:
> 
> Just a data point, but I've just got up Safari on Yosemite connecting
> through squid-3.5.10 to https://wikipedia.org/ with full bump-ing with
> no problems.
> 
> Same with twitter.com and github.com. Click on the padlock shows the
> server cert chaining to my squidCA cert (which is trusted of course)
> 
> ie this can't have anything to do with Elliptic Curves or pinning
> 
> Jason
> 
> On 15/10/15 12:19, Alex Rousskov wrote:
>> On 10/14/2015 05:00 PM, Dan Charlesworth wrote:
>> 
>>> I feel like if server-first is working there must be *some*
>>> combination of peek/stare/bump that’ll work too—it can’t be that
>>> “forward secrecy” cipher stuff.
>> 
>> While that feeling is natural, you should resist it. Newer SslBump
>> actions do not simply dissect the old ones into smaller steps. The old
>> actions (e.g., server-first) do not do some of the things that the new
>> actions do (e.g., peek extracts and sends SNI but server-first does
>> not). Doing more sometimes leads to more problems, especially in
>> experiment-driven features such as SslBump. Besides different cipher
>> negotiation patterns, you may be hitting a bug that server-first code
>> path lacks, for example.
>> 
>> 
>>> I really don’t want our customers to have to use server-first if they
>>> decide to employ bumping, so if any of you smart people have any
>>> other suggestions, please send them through.
>> 
>> I second Amos' implied suggestion to try the latest Squid 4.0 as the
>> next step. This does not mean you have to _deploy_ Squid 4.0:
>> 
>> * If Squid 4.0 does not work in your tests, we will not need to suspect
>> newer ciphers and may get more information from newer logs. We will also
>> be slightly more motivated to fix or improve something.
>> 
>> * If Squid 4.0 works, we will know more about your problem and may
>> suggest some other solutions if you have to run an older Squid.
>> 
>> In either case, do collect "debug_options ALL,9" cache logs for an
>> isolated test case.
>> 
>> 
>> Please note that I am not volunteering to examine your logs, and there
>> is no guarantee that this next step will lead to a solution, but it is
>> relatively easy to make that step.
>> 
>> 
>> HTH,
>> 
>> Alex.
>> 
>> 
>> 
>> 
>>>> On 15 Oct 2015, at 1:34 AM, Alex Rousskov 
>>>> <rouss...@measurement-factory.com> wrote:
>>>> 
>>>> On 10/13/2015 09:08 PM, Dan Charlesworth wrote:
>>>> 
>>>>> But in reality ssl_bump peek step1 & ssl_bump bump step3 is actually
>>>>> splicing everything, it seems.
>>>> 
>>>> This may not be related to your specific problem, but I want to clarify
>>>> the above.
>>>> 
>>>> ssl_bump peek step1
>>>> ssl_bump bump step3
>>>> 
>>>> A recent Squid mis-configured using the above sketch should indeed
>>>> splice everything. When Squid reaches bumping step2, no ssl_bump rule
>>>> matches, so Squid uses the previous step rule to decide what to do.
>>>> Since peeking implies splicing, Squid splices at step2 and never gets to
>>>> step3.
>>>> 
>>>> It is possible that, in his "bump at step3" recommendation below, Amos
>>>> was talking about this kind of configuration:
>>>> 
>>>> ssl_bump stare all
>>>> ssl_bump bump all
>>>> 
>>>> Bugs notwithstanding, the above results in bumping at step3.
>>>> 
>>>> Alex.
>>>> 
>>>> 
>>>>>> On 14 Oct 2015, at 1:51 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
>>>>>> 
>>>>>> On 14/10/2015 1:13 p.m., Dan Charlesworth wrote:
>>>>>>> Throwing this out to the list in case anyone else might be trying to 
>>>>>>> get SSL Bump to work with the latest version of Safari.
>>>>>>> 
>>>>>>> Every other browser on OS X (and iOS) is happy with bumping for pretty 
>>>>>>> much all HTTPS sites, so long as the proxy’s CA is trusted. 
>>>>>>> 
>>>>>>> However Safari throws generic “secure connection couldn’t be 
>>>>>>> est

Re: [squid-users] Safari 9 vs. SSL Bump

2015-10-15 Thread Dan Charlesworth
So after all that, it was my choice of keychain that was the problem. Every 
HTTPS site works with the CA cert in the System keychain as opposed to login.

I’ll put that down to OS X probably using some system-level processes to do 
some of Safari’s work, or something.

Thanks Alex, Amos, and Jason for your help on this.

   

> On 16 Oct 2015, at 11:55 AM, Dan Charlesworth <d...@getbusi.com> wrote:
> 
> Great, thanks. Don’t know why I didn’t think of it before but I’ll try 
> elevating it from Login -> System keychain and see what happens.
> 
>> On 16 Oct 2015, at 11:51 AM, Jason Haar <jason_h...@trimble.com> wrote:
>> 
>> On 16/10/15 13:34, Dan Charlesworth wrote:
>>> Thanks!
>>> 
>>> So ignoring the “bumpable” helper check, it’s effectively peeking at step1 
>>> and then bumping it like my config’s doing.
>>> 
>>> I wonder what else could be differentiating it. Is your proxy CA just 
>>> installed in the Login keychain?
>> 
>> Nope - did it "properly" at the OS level. Get a PEM version of your
>> squidCA pubkey and as root do
>> 
>> security add-trusted-cert -d -r trustRoot -p ssl -p smime -p IPSec -p eap -p basic /path/squidCA.pem > /dev/null 2>&1 || true
>> certtool i "/path/squidCA.pem" k=/System/Library/Keychains/X509Anchors > /dev/null 2>&1 || true
>> 
>> The "ipsec/smime" stuff is actually not needed - but I don't care ;-) I
>> went for the carpet bombing approach for the Mac (which I don't know well)
>> 
>> -- 
>> Cheers
>> 
>> Jason Haar
>> Corporate Information Security Manager, Trimble Navigation Ltd.
>> Phone: +1 408 481 8171
>> PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
>> 
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
> 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
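
For anyone hitting the same wall: the Login -> System elevation can also be
done in one command instead of through Keychain Access. A sketch, assuming a
PEM copy of the proxy CA at /path/squidCA.pem:

sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain /path/squidCA.pem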


Re: [squid-users] Safari 9 vs. SSL Bump

2015-10-14 Thread Dan Charlesworth
Thanks for clarifying, Alex. We tried this config but Safari still doesn’t like 
it, sadly.

I feel like if server-first is working there must be *some* combination of 
peek/stare/bump that’ll work too—it can’t be that “forward secrecy” cipher 
stuff. 

I really don’t want our customers to have to use server-first if they decide to 
employ bumping, so if any of you smart people have any other suggestions, 
please send them through.

Thanks

> On 15 Oct 2015, at 1:34 AM, Alex Rousskov <rouss...@measurement-factory.com> 
> wrote:
> 
> On 10/13/2015 09:08 PM, Dan Charlesworth wrote:
> 
>> But in reality ssl_bump peek step1 & ssl_bump bump step3 is actually
>> splicing everything, it seems.
> 
> 
> This may not be related to your specific problem, but I want to clarify
> the above.
> 
>  ssl_bump peek step1
>  ssl_bump bump step3
> 
> A recent Squid mis-configured using the above sketch should indeed
> splice everything. When Squid reaches bumping step2, no ssl_bump rule
> matches, so Squid uses the previous step rule to decide what to do.
> Since peeking implies splicing, Squid splices at step2 and never gets to
> step3.
> 
> It is possible that, in his "bump at step3" recommendation below, Amos
> was talking about this kind of configuration:
> 
>  ssl_bump stare all
>  ssl_bump bump all
> 
> Bugs notwithstanding, the above results in bumping at step3.
> 
> Alex.
> 
> 
>>> On 14 Oct 2015, at 1:51 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
>>> 
>>> On 14/10/2015 1:13 p.m., Dan Charlesworth wrote:
>>>> Throwing this out to the list in case anyone else might be trying to get 
>>>> SSL Bump to work with the latest version of Safari.
>>>> 
>>>> Every other browser on OS X (and iOS) is happy with bumping for pretty 
>>>> much all HTTPS sites, so long as the proxy’s CA is trusted. 
>>>> 
>>>> However Safari throws generic “secure connection couldn’t be established” 
>>>> errors for many popular HTTPS sites, including:
>>>> - wikipedia.org
>>>> - mail.google.com
>>>> - twitter.com
>>>> - github.com
>>>> 
>>>> But quite a number of others work, such as youtube.com.
>>>> 
>>>> This error gets logged to the system whenever it occurs:
>>>> com.apple.WebKit.Networking: NSURLSession/NSURLConnection HTTP load failed 
>>>> (kCFStreamErrorDomainSSL, -9802)
>>>> 
>>>> Apparently this is related to Apple’s new “App Transport Security” 
>>>> protections, in particular, the fact that “the server doesn’t support 
>>>> forward secrecy”. Even though it doesn’t seem to be affecting mobile 
>>>> Safari on iOS 9 at all.
>>>> 
>>>> It’s also notable that Safari seems perfectly happy with legacy 
>>>> server-first SSL bumping. 
>>>> 
>>>> I’m using Squid 3.5.10 and this is my current config: 
>>>> https://gist.github.com/djch/9b883580c6ee84f31cd1
>>>> 
>>>> Anyone have any idea what I can try?
>>> 
>>> You can try bump at step3 (roughly equivalent to server-first) instead
>>> of step2 (aka client-first).
>>> 
>>> 
>>> Amos
>>> 
>>> ___
>>> squid-users mailing list
>>> squid-users@lists.squid-cache.org
>>> http://lists.squid-cache.org/listinfo/squid-users
>> 
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Safari 9 vs. SSL Bump

2015-10-13 Thread Dan Charlesworth
I meant to say “forward secrecy”, which appears to be a list of specific 
ciphers:
https://developer.apple.com/library/watchos/technotes/App-Transport-Security-Technote/index.html

Anyone know how to translate that list of ciphers to use in sslproxy_cipher in 
squid.conf?

> On 14 Oct 2015, at 2:39 PM, Dan Charlesworth <d...@getbusi.com> wrote:
> 
> ¯\_(ツ)_/¯
> 
> All I really have to go on is those errors com.apple.WebKit.Networking is 
> logging which apparently points to a specific thing it’s missing called 
> “forward transport security”. Only the peek@step1 seems to make it as far as 
> any of squid’s logs.
> 
> No other browsers affected that I can find, not even mobile Safari. The sites 
> that do and don’t fail seems random too.
> 
> Fine: instagram.com, getpocket.com, youtube.com
> 
> Not fine: httpbin.org, news.ycombinator.com, basecamp.com, wikipedia.org, 
> dribbble.com, icloud.com, vimeo.com, reddit.com
> 
>> On 14 Oct 2015, at 2:13 PM, Jason Haar <jason_h...@trimble.com> wrote:
>> 
>> On 14/10/15 16:08, Dan Charlesworth wrote:
>>> I thought that fixed it for a second … 
>>> 
>>> But in reality ssl_bump peek step1 & ssl_bump bump step3 is actually 
>>> splicing everything, it seems.
>>> 
>>> Any other advice? :-)
>> Could this simply be a pinning issue? ie does Safari track the CAs used
>> by those sites - thus causing the problem you see? Certainly matches the
>> symptoms
>> 
>> -- 
>> Cheers
>> 
>> Jason Haar
>> Corporate Information Security Manager, Trimble Navigation Ltd.
>> Phone: +1 408 481 8171
>> PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
>> 
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
> 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
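
As a starting point for the question above: Apple's TLS_ECDHE_* names map to
OpenSSL cipher strings by dropping the TLS_ prefix and the WITH_ token and
swapping underscores for hyphens. A subset of the ATS list would then look
roughly like the line below; treat it as an untested sketch, not a vetted
policy:

sslproxy_cipher ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384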


Re: [squid-users] Safari 9 vs. SSL Bump

2015-10-13 Thread Dan Charlesworth
 ¯\_(ツ)_/¯

All I really have to go on is those errors com.apple.WebKit.Networking is 
logging which apparently points to a specific thing it’s missing called 
“forward transport security”. Only the peek@step1 seems to make it as far as 
any of squid’s logs.

No other browsers affected that I can find, not even mobile Safari. The sites 
that do and don’t fail seems random too.

Fine: instagram.com, getpocket.com, youtube.com

Not fine: httpbin.org, news.ycombinator.com, basecamp.com, wikipedia.org, 
dribbble.com, icloud.com, vimeo.com, reddit.com

> On 14 Oct 2015, at 2:13 PM, Jason Haar <jason_h...@trimble.com> wrote:
> 
> On 14/10/15 16:08, Dan Charlesworth wrote:
>> I thought that fixed it for a second … 
>> 
>> But in reality ssl_bump peek step1 & ssl_bump bump step3 is actually 
>> splicing everything, it seems.
>> 
>> Any other advice? :-)
> Could this simply be a pinning issue? ie does Safari track the CAs used
> by those sites - thus causing the problem you see? Certainly matches the
> symptoms
> 
> -- 
> Cheers
> 
> Jason Haar
> Corporate Information Security Manager, Trimble Navigation Ltd.
> Phone: +1 408 481 8171
> PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Safari 9 vs. SSL Bump

2015-10-13 Thread Dan Charlesworth
I thought that fixed it for a second … 

But in reality ssl_bump peek step1 & ssl_bump bump step3 is actually splicing 
everything, it seems.

Any other advice? :-)

> On 14 Oct 2015, at 1:51 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
> 
> On 14/10/2015 1:13 p.m., Dan Charlesworth wrote:
>> Throwing this out to the list in case anyone else might be trying to get SSL 
>> Bump to work with the latest version of Safari.
>> 
>> Every other browser on OS X (and iOS) is happy with bumping for pretty much 
>> all HTTPS sites, so long as the proxy’s CA is trusted. 
>> 
>> However Safari throws generic “secure connection couldn’t be established” 
>> errors for many popular HTTPS sites, including:
>> - wikipedia.org
>> - mail.google.com
>> - twitter.com
>> - github.com
>> 
>> But quite a number of others work, such as youtube.com.
>> 
>> This error gets logged to the system whenever it occurs:
>> com.apple.WebKit.Networking: NSURLSession/NSURLConnection HTTP load failed 
>> (kCFStreamErrorDomainSSL, -9802)
>> 
>> Apparently this is related to Apple’s new “App Transport Security” 
>> protections, in particular, the fact that “the server doesn’t support 
>> forward secrecy”. Even though it doesn’t seem to be affecting mobile Safari 
>> on iOS 9 at all.
>> 
>> It’s also notable that Safari seems perfectly happy with legacy server-first 
>> SSL bumping. 
>> 
>> I’m using Squid 3.5.10 and this is my current config: 
>> https://gist.github.com/djch/9b883580c6ee84f31cd1
>> 
>> Anyone have any idea what I can try?
> 
> You can try bump at step3 (roughly equivalent to server-first) instead
> of step2 (aka client-first).
> 
> 
> Amos
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Safari 9 vs. SSL Bump

2015-10-13 Thread Dan Charlesworth
Throwing this out to the list in case anyone else might be trying to get SSL 
Bump to work with the latest version of Safari.

Every other browser on OS X (and iOS) is happy with bumping for pretty much all 
HTTPS sites, so long as the proxy’s CA is trusted. 

However Safari throws generic “secure connection couldn’t be established” 
errors for many popular HTTPS sites, including:
- wikipedia.org
- mail.google.com
- twitter.com
- github.com

But quite a number of others work, such as youtube.com.

This error gets logged to the system whenever it occurs:
com.apple.WebKit.Networking: NSURLSession/NSURLConnection HTTP load failed 
(kCFStreamErrorDomainSSL, -9802)

Apparently this is related to Apple’s new “App Transport Security” protections, 
in particular, the fact that “the server doesn’t support forward secrecy”. Even 
though it doesn’t seem to be affecting mobile Safari on iOS 9 at all.

It’s also notable that Safari seems perfectly happy with legacy server-first 
SSL bumping. 

I’m using Squid 3.5.10 and this is my current config: 
https://gist.github.com/djch/9b883580c6ee84f31cd1

Anyone have any idea what I can try?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Host header forgery detected after upgrade from 3.5.8 to 3.5.9

2015-10-07 Thread Dan Charlesworth
Same here—I've been meaning to ask the list about this too. I’m still on 3.5.9, 
by the way.

> On 6 Oct 2015, at 10:55 PM, Roel van Meer  wrote:
> 
> Hi everyone,
> 
> I have a Squid setup on a linux box with transparent interception of both 
> http and https traffic. Everything worked fine with Squid 3.5.6. After 
> upgrading to version 3.5.10, I get many warnings about host header forgery:
> 
> SECURITY ALERT: Host header forgery detected on local=104.46.50.125:443 
> remote=192.168.9.126:52588 FD 22 flags=33 (local IP does not match any domain 
> IP)
> SECURITY ALERT: By user agent:
> SECURITY ALERT: on URL: nexus.officeapps.live.com:443
> 
> These warnings all seem to occur for https web sites that use multiple DNS 
> records. The warnings coincide with the fact that the clients are unable to 
> get the requested page.
> 
> I've read the wiki page 
> http://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery
> and I can assert that:
> - we do NAT on the same box that is running Squid
> - both squid and the clients use the same DNS server
> 
> I've also tested 3.5.9, and this version also showed these warnings.
> Version 3.5.7 worked fine, and 3.5.8 did too.
> 
> So, one of the changes in 3.5.9 caused this behaviour.
> 
> Can anyone shed some more light on this? Is this a problem in my setup that 
> surfaced with 3.5.9, or is it a problem in Squid?
> 
> Thanks a lot for any help,
> 
> Roel
> 
> 
> My (abbreviated) config:
> 
> http_port 192.168.9.1:3128 ssl-bump cert=/etc/ssl/certs/server.pem
> http_port 192.168.9.1:3129 intercept
> https_port 192.168.9.1:3130 intercept ssl-bump cert=/etc/ssl/certs/server.pem
> icp_port 0
> 
> acl step1 at_step SslBump1
> acl step2 at_step SslBump2
> acl step3 at_step SslBump3
> 
> acl port-direct myportname 192.168.9.1:3128
> ssl_bump none port-direct
> acl port-trans_https myportname 192.168.9.1:3130
> external_acl_type sni children-max=3 children-startup=1 %URI %SRC %METHOD 
> %ssl::>sni /usr/bin/squidGuard-aclsni
> acl checksni external sni
> 
> ssl_bump peek port-trans_https step1
> ssl_bump terminate port-trans_https step2 checksni
> ssl_bump splice port-trans_https all
> 
> sslproxy_cert_error allow all
> sslproxy_flags DONT_VERIFY_PEER
> 
> 
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] External ACL format tag for origin IP?

2015-10-04 Thread Dan Charlesworth
It seems there’s no way to get the equivalent of the `dst` internal ACL into an 
external ACL. %DST returns the hostname from DNS not the origin IP. 

Am I missing something? Perhaps there's a more creative way to pass the IP to 
an external ACL regardless of what the hostname is?

Thanks!
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] 3.5.8 — SSL Bump questions

2015-09-09 Thread Dan Charlesworth
Thanks for all the info here, people.

This is probably because of some other dumb thing I’m doing in my ssl_bump 
config, but if I change ssl_bump peek step1 to ssl_bump peek all, I get this 
assertion failure:

PeerConnector.cc:747: "!callback"

> On 9 Sep 2015, at 6:59 pm, Amos Jeffries  wrote:
> 
> On 9/09/2015 7:39 p.m., Jason Haar wrote:
>> On 08/09/15 20:32, Amos Jeffries wrote:
>>> The second one is a fake CONNECT generated internally by Squid using
>> Is it too late to propose that intercepted SSL transactions be logged as
>> something besides "CONNECT"? I know I find it confusing - and so do
>> others. I appreciate the logic behind it - but people are people :-)
>> 
> 
> Yeah.  there's people - they need to stop looking at the *HTTP messages
> log* and thinking it says anything about bumping. All it shows is the
> *side effects* of bumping, which happen in the HTTP layer.
> 
> Then there is the actual log processing software. And access.log is an
> HTTP transaction log, the detail being logged is the HTTP method being
> enacted by the HTTP software (Squid).
> 
> 
> TLS/SSL is a different protocol to HTTP. It should not be warped into
> HTTP log syntax. Trying to do so is what is confusing you. And the HTTP
> side effects are not clear.
> 
> 
> Try this (a log for the actual TLS / SSL-bump details):
> 
> logformat tlslog %tS %6tr %>a:%>p %>la:%>lp \
>  %ssl::bump_mode %ssl::>sni %  "%ssl::>cert_subject" "%ssl::>cert_issuer"
> 
> access_log stdio:/var/log/squid/tls.log tlslog SSL_ports
> 
> That is;
> the time things started,
> how long it took in ms,
> the client IP:port,
> server IP:port it was connecting to (might be Squid),
> the bumping mode squid was doing,
> SNI (if any),
> the server actually connected to (FQDN and IP),
> the cert details that server presented.
> 
> I'm not sure which format code gets populated with SSL error details
> when cert validation fails. That should be added on the end too.
> 
> Amos
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] 3.5.8 — SSL Bump questions

2015-09-08 Thread Dan Charlesworth
This:
08/Sep/2015-17:41:38  11049 10.0.1.7 TCP_TUNNEL 200 12871 CONNECT 
api.github.com:443 api.github.com - peek 
Mozilla/5.0%20(Macintosh;%20Intel%20Mac%20OS%20X%2010.10;%20rv:40.0)%20Gecko/20100101%20Firefox/40.0
 HIER_DIRECT/192.30.252.127 -

Compared to this:
08/Sep/2015-17:04:17  13359 10.0.1.7 TCP_TUNNEL 200 13741 CONNECT 
192.30.252.126:443 api.github.com - splice - ORIGINAL_DST/192.30.252.126 -


> On 8 Sep 2015, at 5:39 pm, Dan Charlesworth <d...@getbusi.com> wrote:
> 
> Thanks Amos.
> 
> To clarify about the user agents: I’m talking about anything with a (logged) 
> SSL bump mode of “splice” — I’m not expecting to see one for the synthetic 
> (“peek") connections. In this case it’s actually intercepted spliced 
> connections.
> 
> Wondering why a spliced connection doesn't log a UA when an explicit CONNECT 
> does.
> 
>> On 8 Sep 2015, at 5:17 pm, Amos Jeffries <squ...@treenet.co.nz 
>> <mailto:squ...@treenet.co.nz>> wrote:
>> 
>> On 8/09/2015 5:36 p.m., Dan Charlesworth wrote:
>>> Hello all
>>> 
>>> I’ve been testing out an SSL bumping config using 3.5.8 for the last week 
>>> or so and am scratching my head over a couple of things.
>>> 
>>> First, here’s my config (shout out to James Lay):
>>> 
>>> acl tcp_level at_step SslBump1
>>> acl client_hello_peeked at_step SslBump2
>>> acl bump_bypass_domains ssl::server_name "/path/to/some/domains.txt"
>>> ssl_bump splice client_hello_peeked bump_bypass_domains
>>> ssl_bump bump client_hello_peeked
>>> 
>>> 1. Why don’t spliced connections get a user agent logged like explicit 
>>> CONNECTs do?
>> 
>> If you are talking about the synthetic CONNECT created on intercepted
>> traffic it is because there is no User-Agent header and nothing to
>> create one from.
>> 
>> If you are seeing explicit CONNECT come in and not have a User-Agent
>> header when they are spliced. That would seem to be a bug. The
>> splice/bump stuff should not be affecting the original CONNECT message
>> the client sent.
>> 
>>> 
>>> 2. Safari produces this error visiting all sorts of websites (github, 
>>> wikipedia, gmail):
>>> Error negotiating SSL connection on FD 15: error:140A1175:SSL 
>>> routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback (1/-1)
>>> 
>>> … whereas Chrome and Firefox do not. What’s the story with this one?
>> 
>> "inappropriate fallback" means the client is claiming it has been forced
>> down to the SSLv3 (or some low/insecure TLS version) because no more
>> secure version was permitted. But the server is aware that it does
>> support a higher version.
>> 
>> It can happen two ways:
>> 1) somebody is MITM'ing the connection and performing the POODLE attack.
>> 
>> 2) client has misconfigured TLS/SSL support.
>> 
>> 
>> TLS agents are supposed to support a _continuous_ range of protocol
>> versions from the set { SSLv2, SSLv3, TLSv1.0, TLSv1.1, TLSv1.2, TLSv1.3
>> }, the client states what its highest is and, if it is in the server's set,
>> that gets used. If it gets rejected the client has to fall back to its
>> next-lower version and try again.
>> 
>> (2) happens when somebody pokes a hole by disabling one of the protocol
>> versions in the middle of their otherwise supported range. Usually it is
>> the client, but servers can do it too. When the 'hole' overlaps with the
>> highest supported version of the other end the fallback mechanism breaks
>> with the behaviour you see.
>> 
>> 
>> The solution is to ensure the TLS versions supported by the client are a
>> continuous range.
>> 
>> * SSLv2 should be dead and buried. Disabled everywhere. Kill it ASAP if
>> you see it enabled anywhere.
>> 
>> * SSLv3 _should_ be disabled now too. Using it is actively dangerous. In
>> the event that it cannot be disabled then TLSv1.0 through to the highest
>> supported TLS version also *need* to be enabled. No poking holes to
>> disable TLSv1.0 with SSLv3 still active.
>> 
>> * TLSv1.0 is a good idea to disable. It is not dangerous yet but very
>> soon will be, and there are a lot of its ciphers which _are_ actively
>> dangerous and require disabling if its going to be allowed. The only
>> reasons to have it enabled are old TLSv1.0-only software or when SSLv3
>> is required.
>> 
>> 
>> Amos
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org <mailto:squid-users@lists.squid-cache.org>
>> http://lists.squid-cache.org/listinfo/squid-users
> 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] 3.5.8 — SSL Bump questions

2015-09-08 Thread Dan Charlesworth
Thanks Amos.

To clarify about the user agents: I’m talking about anything with a (logged) 
SSL bump mode of “splice” — I’m not expecting to see one for the synthetic 
(“peek") connections. In this case it’s actually intercepted spliced 
connections.

Wondering why a spliced connection doesn't log a UA when an explicit CONNECT 
does.

> On 8 Sep 2015, at 5:17 pm, Amos Jeffries <squ...@treenet.co.nz> wrote:
> 
> On 8/09/2015 5:36 p.m., Dan Charlesworth wrote:
>> Hello all
>> 
>> I’ve been testing out an SSL bumping config using 3.5.8 for the last week or 
>> so and am scratching my head over a couple of things.
>> 
>> First, here’s my config (shout out to James Lay):
>> 
>> acl tcp_level at_step SslBump1
>> acl client_hello_peeked at_step SslBump2
>> acl bump_bypass_domains ssl::server_name "/path/to/some/domains.txt"
>> ssl_bump splice client_hello_peeked bump_bypass_domains
>> ssl_bump bump client_hello_peeked
>> 
>> 1. Why don’t spliced connections get a user agent logged like explicit 
>> CONNECTs do?
> 
> If you are talking about the synthetic CONNECT created on intercepted
> traffic it is because there is no User-Agent header and nothing to
> create one from.
> 
> If you are seeing explicit CONNECT come in and not have a User-Agent
> header when they are spliced. That would seem to be a bug. The
> splice/bump stuff should not be affecting the original CONNECT message
> the client sent.
> 
>> 
>> 2. Safari produces this error visiting all sorts of websites (github, 
>> wikipedia, gmail):
>> Error negotiating SSL connection on FD 15: error:140A1175:SSL 
>> routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback (1/-1)
>> 
>> … whereas Chrome and Firefox do not. What’s the story with this one?
> 
> "inappropriate fallback" means the client is claiming it has been forced
> down to the SSLv3 (or some low/insecure TLS version) because no more
> secure version was permitted. But the server is aware that it does
> support a higher version.
> 
> It can happen two ways:
> 1) somebody is MITM'ing the connection and performing the POODLE attack.
> 
> 2) client has misconfigured TLS/SSL support.
> 
> 
> TLS agents are supposed to support a _continuous_ range of protocol
> versions from the set { SSLv2, SSLv3, TLSv1.0, TLSv1.1, TLSv1.2, TLSv1.3
> }, the client states what its highest is and, if it is in the server's set,
> that gets used. If it gets rejected the client has to fall back to its
> next-lower version and try again.
> 
> (2) happens when somebody pokes a hole by disabling one of the protocol
> versions in the middle of their otherwise supported range. Usually it is
> the client, but servers can do it too. When the 'hole' overlaps with the
> highest supported version of the other end the fallback mechanism breaks
> with the behaviour you see.
> 
> 
> The solution is to ensure the TLS versions supported by the client are a
> continuous range.
> 
> * SSLv2 should be dead and buried. Disabled everywhere. Kill it ASAP if
> you see it enabled anywhere.
> 
> * SSLv3 _should_ be disabled now too. Using it is actively dangerous. In
> the event that it cannot be disabled then TLSv1.0 through to the highest
> supported TLS version also *need* to be enabled. No poking holes to
> disable TLSv1.0 with SSLv3 still active.
> 
> * TLSv1.0 is a good idea to disable. It is not dangerous yet but very
> soon will be, and there are a lot of its ciphers which _are_ actively
> dangerous and require disabling if its going to be allowed. The only
> reasons to have it enabled are old TLSv1.0-only software or when SSLv3
> is required.
> 
> 
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] 3.5.8 — SSL Bump questions

2015-09-07 Thread Dan Charlesworth
Hello all

I’ve been testing out an SSL bumping config using 3.5.8 for the last week or so 
and am scratching my head over a couple of things.

First, here’s my config (shout out to James Lay):

acl tcp_level at_step SslBump1
acl client_hello_peeked at_step SslBump2
acl bump_bypass_domains ssl::server_name "/path/to/some/domains.txt"
ssl_bump splice client_hello_peeked bump_bypass_domains
ssl_bump bump client_hello_peeked

1. Why don’t spliced connections get a user agent logged like explicit CONNECTs 
do?

2. Safari produces this error visiting all sorts of websites (github, 
wikipedia, gmail):
Error negotiating SSL connection on FD 15: error:140A1175:SSL 
routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback (1/-1)

… whereas Chrome and Firefox do not. What’s the story with this one?

Thanks!

P.S. If it makes any difference, this is using an RPM I built for CentOS 6 
using openssl-1.0.1e-42.el6.x86_64.


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Any plan for an SSL bump mode ACL?

2015-08-27 Thread Dan Charlesworth
I’m trying to figure out if there’s a way to avoid those 0 byte “peeked” 
requests being processed by the rest of our external ACLs etc. by allowing them 
early on in the transaction.

Unfortunately there doesn’t seem to be a way to target just those ones with 
http_access—the TAG_NONE isn’t an actual method and there’s no ACL for the 
bump mode—without also targeting the spliced ones.

Any ideas, denizens of the mailing list?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Why is overlapping dstdomains a FATAL error now?

2015-08-06 Thread Dan Charlesworth
Huh! Thanks for that, Amos. 

Our software actually flags the redundant entries and doesn't write those ones 
out; it’s just that I realised I didn’t really understand “why” squid does that and 
why we need to work around it in the first place.

Doing a `-k parse` before loading any new changes is super good advice though. 
We’ll be implementing that failsafe for sure.

 On 7 Aug 2015, at 1:26 pm, Amos Jeffries squ...@treenet.co.nz wrote:
 
 On 7/08/2015 11:48 a.m., Benjamin E. Nichols wrote:
 Agreed, whoever decided it was a wise decision to make this a stop error
 should be fired or at the very least, slapped in the back of the head.
 
 On 8/6/2015 6:44 PM, Dan Charlesworth wrote:
 This used to just cause a WARNING right? Is this really a good enough
 reason to stop Squid from starting up?
 
  2015/08/07 09:25:43| ERROR: '.ssl.gstatic.com' is a subdomain of '.gstatic.com'
  2015/08/07 09:25:43| ERROR: You need to remove '.ssl.gstatic.com' from the ACL named 'cache_bypass_domains'
  FATAL: Bungled /etc/squid/squid.conf line 149: acl cache_bypass_domains dstdomain "/acls/lists/8/squid_domains"
 
 
 It *seems* very daft. But there actually is a very good reason.
 
 Squid stores these data into a splay tree structure as it goes. Adding
 to a splay tree is a one-way operation. There is no remove short of
 dumping the entire squid.conf and re-configuring.
 
 [ just a side note: we *are* actively in the process of trying to
 eradicate these splay trees and use another type of fast hash we could
 remove from. ]
 
 There are several cases for what the loaded file may contain:
 
 A)
 .gstatic.com
 .ssl.gstatic.com
 
 B)
 .gstatic.com
 .gstatic.com
 
 C)
 .gstatic.com
 gstatic.com
 
 D)
 .ssl.gstatic.com
 .gstatic.com
 
 E)
 gstatic.com
 .gstatic.com
 
 
 Cases (A, B, C) will happily go along and add .gstatic.com to the ACL.
 The next one is an overlap with a more narrow matching range than
 already in the tree.
 These *are* nicely displayed as WARNING, and just ignored by Squid as
 it continues to run. If you put acl all src 0.0.0.0 into a squid-3
 config you can see that happen.
 
 
 Cases (D, E) are special. The first entry already added to the ACL splay
 will match *less* than the second one.
 Now the way using a splay tree works is that for any lookup one of them
 will be 'closer' and thus match. But not always the same one.
 So adding both to the splay will cause domains in the non-overlap area
 to randomly be blocked or allowed.
 These are what you are seeing displayed as ERROR.
 
 So then we get back to how restarting the entire reconfigure process
 from scratch is needed to remove the first entry from the splay tree.
 
 --- Doing a reconfigure will only load the same details in the same
 order unless manually fixed first.
 
 Squid must halt and wait for you to fix this. If it is left running the
 config *will not* be doing what you intended Squid to do. That is FATAL.
 
 
 And BTW, finding these issues in manually edited or third-party lists is
 one of the reasons one should always run squid -k parse before loading
 new config.
 
 Amos
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
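
The failsafe Dan mentions can live in whatever script writes out new ACL
files; a one-line sketch, assuming the default config path:

squid -k parse -f /etc/squid/squid.conf && squid -k reconfigure

If the parse fails, the reconfigure never runs and the currently loaded,
still-working configuration stays in place.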


[squid-users] Why is overlapping dstdomains a FATAL error now?

2015-08-06 Thread Dan Charlesworth
This used to just cause a WARNING right? Is this really a good enough reason to 
stop Squid from starting up?

2015/08/07 09:25:43| ERROR: '.ssl.gstatic.com' is a subdomain of '.gstatic.com'
2015/08/07 09:25:43| ERROR: You need to remove '.ssl.gstatic.com' from the ACL named 'cache_bypass_domains'
FATAL: Bungled /etc/squid/squid.conf line 149: acl cache_bypass_domains dstdomain "/acls/lists/8/squid_domains"

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Detecting clients flooding squid with failed request

2015-08-03 Thread Dan Charlesworth
Thanks Antony. 

Fail2ban looks like a viable option though we would still need to write a regex 
definition to target this sort of behaviour. Their squid example targets 
aggressive hosts where my preference would be to target aggressive applications 
(that could be running on more than one host).

https://github.com/fail2ban/fail2ban/blob/master/config/filter.d/squid.conf

In my case “raise the alarm” would probably mean send an email to somebody and 
there are lots of ways to do that programmatically.

Still open to any other ideas anyone has.

 On 3 Aug 2015, at 5:11 pm, Antony Stone antony.st...@squid.open.source.it 
 wrote:
 
 On Monday 03 August 2015 at 08:06:35 (EU time), Dan Charlesworth wrote:
 
 Probably a lot of forward proxy users here have encountered applications
 which, if they can’t get their web requests through the proxy (because of
 407 Proxy Auth Required or whatever), just start aggressively, endlessly
 spamming requests.
 
 A recent example would be AVG’s “cloud” features generating around 90
 requests per second from one computer. Pretty annoying.
 
 I was wondering if anyone here has any creative ideas for detecting when
 this is happening programmatically?
 
 It’s obviously easy to spot as a human if you’re looking at the access log,
 but it would be awesome if we could somehow parse some squidclient manager
 output and/or the access logs and “raise the alarm” in some way.
 
 Would love to hear anyone’s ideas about how the logic would work for
 something like this.
 
 Depending on what action you want for raising the alarm, I'm pretty sure 
 fail2ban could be configured for this.
 
 
 Antony.
 
 -- 
 Anyone that's normal doesn't really achieve much.
 
 - Mark Blair, Australian rocket engineer
 
   Please reply to the list;
 please *don't* CC me.
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
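
A hypothetical filter along those lines, keying on the 407 status rather
than on any single aggressive host. The failregex assumes the default native
access.log format and is untested:

# /etc/fail2ban/filter.d/squid-auth-flood.conf (hypothetical name)
[Definition]
failregex = ^\d+\.\d+\s+\d+\s+<HOST>\s+TCP_DENIED/407\s
ignoreregex =

Note that fail2ban still acts per host; singling out one misbehaving
application on a shared machine would need the User-Agent in the log line
and a custom failregex to match it.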


[squid-users] Squid 3.4.14

2015-07-29 Thread Dan Charlesworth
Hey folks

Is 3.4.14 going to be a thing or should we be moving to v3.5 if we want new
bug fixes?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] tos miss-mask not working at all squid 3.5.5

2015-06-22 Thread Dan Charlesworth
It's also worth pointing out that your messages are getting flagged as Spam
by Gmail, which probably isn't helping visibility.

On 23 June 2015 at 06:11, mohammad al_luha...@yahoo.com wrote:

 why is no-one answering this ?!!

 BTW, i tried the kernel patch 2.6.35 from ZPH, it worked intermittently,
 and
 stopped working after a squid re-build.

 any help is appreciated



 --
 View this message in context:
 http://squid-web-proxy-cache.1019090.n4.nabble.com/tos-miss-mask-not-working-at-all-squid-3-5-5-tp4671815p4671844.html
 Sent from the Squid - Users mailing list archive at Nabble.com.
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Individual delay pools and youtube

2015-04-30 Thread Dan Charlesworth
Thanks Amos. We're using the CONNECT ACL and everything is working as
expected.

On 29 April 2015 at 20:28, Amos Jeffries squ...@treenet.co.nz wrote:

 On 29/04/2015 5:44 p.m., dan wrote:
  I mentioned last time that we had to x2 all our delay_parameters
  bytes because of a weird bug where squid would apply it at half speed
  for no reason.
 
  It just occurred to me that (obviously) this is why HTTPS downloads
  are going too fast; because this bug must only affect HTTP traffic.
 
  So HTTPS downloads are going at the actual speed we’ve specified and
  HTTP is going at half that.
 
  Therefore, we should be able to work around it by setting different
  delay_parameters for HTTP and HTTPS requests.
 
  So my question is, how best to target only those requests? By the
  CONNECT method?

 Yes, CONNECT ACL matching the method should work.
 Or alternatively:
  acl HTTP proto HTTP

 Amos
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
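
For later readers, a sketch of the CONNECT split, with the plain-HTTP pool
doubled to compensate for the half-speed behaviour described in the thread.
The 12 KByte/s target and the source ACL are placeholders, and CONNECT is
already defined in the default squid.conf:

acl lan src 192.168.0.0/24
acl CONNECT method CONNECT
delay_pools 2
delay_class 1 2
delay_class 2 2
delay_access 1 allow CONNECT lan
delay_access 1 deny all
delay_access 2 allow !CONNECT lan
delay_access 2 deny all
delay_parameters 1 -1/-1 12288/12288
delay_parameters 2 -1/-1 24576/24576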


Re: [squid-users] assertion failed: ../src/ipc/AtomicWord.h:88: Enabled()

2015-03-30 Thread Dan Charlesworth
Hey Amos

This error's still happening on the 3.5.3 RPM I just built. I know nothing 
about “atomics”, mind you.  I’m all ears if you have any other suggestions :-)

Squid Cache: Version 3.5.3
Service Name: squid
configure options:  '--build=x86_64-redhat-linux-gnu' 
'--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu' 
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' 
'--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' 
'--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' 
'--sharedstatedir=/var/lib' '--mandir=/usr/share/man' 
'--infodir=/usr/share/info' '--exec_prefix=/usr' 
'--libexecdir=/usr/lib64/squid' '--localstatedir=/var' 
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' 
'--with-logdir=$(localstatedir)/log/squid' 
'--with-pidfile=$(localstatedir)/run/squid.pid' '--disable-dependency-tracking' 
'--enable-follow-x-forwarded-for' '--enable-auth' 
'--enable-auth-basic=DB,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam' 
'--enable-auth-ntlm=smb_lm,fake' '--enable-auth-digest=file,LDAP' 
'--enable-auth-negotiate=kerberos,wrapper' 
'--enable-external-acl-helpers=wbinfo_group,kerberos_ldap_group' 
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost' 
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client' 
'--enable-ident-lookups' '--enable-linux-netfilter' '--enable-referer-log' 
'--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl' 
'--with-openssl' '--enable-ssl-crtd' '--enable-storeio=aufs,ufs,rock' 
'--with-aio' '--enable-wccpv2' '--enable-esi' '--with-default-user=squid' 
'--with-filedescriptors=16384' '--with-maxfd=65535' '--with-dl' 
'--with-pthreads' '--with-included-ltdl' '--disable-arch-native' 
'--without-nettle' '--disable-optimizations' 
'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu' 
'target_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m64 -mtune=generic' 'CXXFLAGS=-O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' 
'PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/share/pkgconfig' 
--enable-ltdl-convenience


 On 28 Mar 2015, at 3:11 am, Dan Charlesworth d...@getbusi.com wrote:
 
 Roger—thanks for heads up Amos.
 
  
 
 
 On Fri, Mar 27, 2015 at 9:50 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 
 Hi Dan,
 This appears to be a breakage in the 3.5 snapshots' GNU atomics detection.
 Though we are still not sure why the error occurs with atomics disabled.
 
 Snapshots labelled r13783 or later available in a few hrs should be fixed.
 
 Cheers
 Amos
 
 
 On 27/03/2015 11:47 a.m., Dan Charlesworth wrote:
  Bumping this because I think it might have gone into the black hole the 
  other night.
  
  On 23 Mar 2015, at 5:44 pm, Dan Charlesworth d...@getbusi.com wrote:
 
  Turns out it’s also shitting the bed whenever I go to an SSL site now that 
  I’ve added --enable-storeio=rock:
 
  2015/03/23 17:40:13 kid1| assertion failed: ../src/ipc/AtomicWord.h:71: 
  Enabled()
  2015/03/23 17:42:02 kid1| assertion failed: ../src/ipc/AtomicWord.h:74: 
  Enabled()
 
  I feel like I’m definitely missing a dependency or something :-/
 
  On 23 Mar 2015, at 5:28 pm, Dan Charlesworth d...@getbusi.com wrote:
 
  Hey!
 
  Sorry for all the threads lately, folks -
 
  I just recompiled my 3.5 EL6 (64-bit) RPM (using 
  squid-3.5.2-20150321-r13782).
 
  I decided to add rock to my `--enable-storeio` option, so I could try SMP 
  and stuff, which was fine. But when I went to squid -z it, I got this 
  crash:
  assertion failed: ../src/ipc/AtomicWord.h:88: Enabled()
 
  Just using:
  cache_dir rock /var/spool/squid 2
  workers 2
 
  I’m hoping, for a change, this is some obvious thing I’ve missed and not 
  something I need to dig out backtraces for :-)
 
  Thanks, y'all
 
  
  
  
  
  ___
  squid-users mailing list
  squid-users@lists.squid-cache.org
  http://lists.squid-cache.org/listinfo/squid-users
  
 
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users
 
 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] assertion failed: ../src/ipc/AtomicWord.h:88: Enabled()

2015-03-26 Thread Dan Charlesworth
Bumping this because I think it might have gone into the black hole the other 
night.

 On 23 Mar 2015, at 5:44 pm, Dan Charlesworth d...@getbusi.com wrote:
 
 Turns out it’s also shitting the bed whenever I go to an SSL site now that 
 I’ve added --enable-storeio=rock:
 
 2015/03/23 17:40:13 kid1| assertion failed: ../src/ipc/AtomicWord.h:71: 
 Enabled()
 2015/03/23 17:42:02 kid1| assertion failed: ../src/ipc/AtomicWord.h:74: 
 Enabled()
 
 I feel like I’m definitely missing a dependency or something :-/
 
 On 23 Mar 2015, at 5:28 pm, Dan Charlesworth d...@getbusi.com wrote:
 
 Hey!
 
 Sorry for all the threads lately, folks -
 
 I just recompiled my 3.5 EL6 (64-bit) RPM (using 
 squid-3.5.2-20150321-r13782).
 
 I decided to add rock to my `--enable-storeio` option, so I could try SMP and 
 stuff, which was fine. But when I went to squid -z it, I got this crash:
 assertion failed: ../src/ipc/AtomicWord.h:88: Enabled()
 
 Just using:
 cache_dir rock /var/spool/squid 2
 workers 2
 
 I’m hoping, for a change, this is some obvious thing I’ve missed and not 
 something I need to dig out backtraces for :-)
 
 Thanks, y'all
 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] assertion failed: client_side.cc:1515: connIsUsable(http->getConn())

2015-03-25 Thread Dan Charlesworth
Okie dokie! Boxes are crashing all over the place today, so I finally have
some back traces without stuff optimised out.

Here are the details from two of these crashes which occurred on two
separate deployments—please let me know if they actually contain actionable
information now and I will upload them to the bug.

Thanks folks.


On 25 March 2015 at 09:28, Dan Charlesworth d...@getbusi.com wrote:

 Resending this after the last attempt went into the mail server black hole:

 Hey Amos

 I decided I’m not confident enough in 3.5.HEAD, after last time, to go
 back into production with it. Going to do some more local testing first.

 That being said, I now have 3.4.12 in production with optimisations
 disabled and it seems to be doing fine performance and stability-wise. I
 only managed to capture one crash with optimisations disabled, so far, but
 it seemed to have some memory-related corruption, unfortunately.

 Updates to come over the next few days.


 On 23 March 2015 at 16:59, Dan Charlesworth d...@getbusi.com wrote:

 Hey Amos

 I decided I’m not confident enough in 3.5.HEAD, after last time, to go
  back into production with it. Going to do some more local testing first.

 That being said, I now have 3.4.12 in production with optimisations
 disabled and it seems to be doing fine performance and stability-wise. I
 only managed to capture one crash with optimisations disabled, so far, but
 it seemed to have some memory-related corruption, unfortunately.

 More to come tomorrow :-)

  On 20 Mar 2015, at 6:37 pm, Amos Jeffries squ...@treenet.co.nz wrote:
 
  On 20/03/2015 8:34 p.m., Dan Charlesworth wrote:
  Thanks Amos.
 
 
  I'll put together a build with the upcoming snapshot on Monday, might
 even try disabling optimization for it too.
 
  Please do. If you're only getting 40 RPS out of the proxy during the
  test it's hard to see how not optimizing the code could be any worse, and
  it will help identifying some traffic details.
 
  Amos
 



#0  0x003edf832625 in raise (sig=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1  0x003edf833e05 in abort () at abort.c:92
#2  0x0062105f in xassert (msg=0x91a311 
connIsUsable(http->getConn()), file=0x9198b0 client_side.cc, line=1515) at 
debug.cc:566
#3  0x005db79f in clientSocketRecipient (node=0x50f0d008, 
http=0x51075ab8, rep=0x12f95940, receivedData=...) at client_side.cc:1515
#4  0x0061b574 in clientStreamCallback (thisObject=0x451353c8, 
http=0x51075ab8, rep=0x12f95940, replyBuffer=...) at clientStream.cc:186
#5  0x006061eb in clientReplyContext::processReplyAccessResult 
(this=0x5118d098, accessAllowed=...) at client_side_reply.cc:2058
#6  0x006057fd in clientReplyContext::ProcessReplyAccessResult (rv=..., 
voidMe=0x5118d098) at client_side_reply.cc:1961
#7  0x007cd727 in ACLChecklist::checkCallback (this=0x573cd338, 
answer=...) at Checklist.cc:161
#8  0x007ccddb in ACLChecklist::completeNonBlocking (this=0x573cd338) 
at Checklist.cc:46
#9  0x007cddd6 in ACLChecklist::resumeNonBlockingCheck 
(this=0x573cd338, state=0xc6ca20) at Checklist.cc:279
#10 0x0064cc01 in ExternalACLLookup::LookupDone (data=0x573cd338, 
result=...) at external_acl.cc:1623
#11 0x0064bd37 in externalAclHandleReply (data=0x573a3918, reply=...) 
at external_acl.cc:1427
#12 0x0067e01f in helperReturnBuffer (request_number=0, srv=0x17021e8, 
hlp=0x1701fe8, 
msg=0x17023a0 ERR 
log=%7B%22policy_group_id%22%3A%226%22%2C%22categories%22%3A%22%5B28%5D%22%2C%22user%22%3A%2215ifrain%22%2C%22set_id%22%3A%222%22%2C%22user_group%22%3A%22stu2015%22%7D,
 msg_end=0x170244b ) at helper.cc:858
#13 0x0067e9f3 in helperHandleRead (conn=..., 
buf=0x17023a0 ERR 
log=%7B%22policy_group_id%22%3A%226%22%2C%22categories%22%3A%22%5B28%5D%22%2C%22user%22%3A%2215ifrain%22%2C%22set_id%22%3A%222%22%2C%22user_group%22%3A%22stu2015%22%7D,
 len=172, flag=COMM_OK, xerrno=0, data=0x17021e8) at helper.cc:951
#14 0x007e6f2a in CommIoCbPtrFun::dial (this=0x4d838d80) at 
CommCalls.cc:188
#15 0x005fb498 in CommCbFunPtrCallTCommIoCbPtrFun::fire 
(this=0x4d838d50) at CommCalls.h:376
#16 0x007d2b40 in AsyncCall::make (this=0x4d838d50) at AsyncCall.cc:32
#17 0x007d61ff in AsyncCallQueue::fireNext (this=0x15e4c60) at 
AsyncCallQueue.cc:52
#18 0x007d5f5f in AsyncCallQueue::fire (this=0x15e4c60) at 
AsyncCallQueue.cc:38
#19 0x00644bef in EventLoop::dispatchCalls (this=0x7fffad41f1c0) at 
EventLoop.cc:158
#20 0x00644a7f in EventLoop::runOnce (this=0x7fffad41f1c0) at 
EventLoop.cc:135
#21 0x006448b8 in EventLoop::run (this=0x7fffad41f1c0) at 
EventLoop.cc:99
#22 0x006cddbe in SquidMain (argc=3, argv=0x7fffad41f3f8) at 
main.cc:1528
#23 0x006cd32d in SquidMainSafe (argc=3, argv=0x7fffad41f3f8) at 
main.cc:1260
#24 0x006cd308 in main (argc=3, argv=0x7fffad41f3f8) at main.cc:1252
#0  0x003edf832625 in raise (sig=6

Re: [squid-users] assertion failed: client_side.cc:1515: connIsUsable(http->getConn())

2015-03-24 Thread Dan Charlesworth
Resending this after the last attempt went into the mail server black hole:

Hey Amos

I decided I’m not confident enough in 3.5.HEAD, after last time, to go back
into production with it. Going to do some more local testing first.

That being said, I now have 3.4.12 in production with optimisations
disabled and it seems to be doing fine performance and stability-wise. I
only managed to capture one crash with optimisations disabled, so far, but
it seemed to have some memory-related corruption, unfortunately.

Updates to come over the next few days.


On 23 March 2015 at 16:59, Dan Charlesworth d...@getbusi.com wrote:

 Hey Amos

 I decided I’m not confident enough in 3.5.HEAD, after last time, to go
  back into production with it. Going to do some more local testing first.

 That being said, I now have 3.4.12 in production with optimisations
 disabled and it seems to be doing fine performance and stability-wise. I
 only managed to capture one crash with optimisations disabled, so far, but
 it seemed to have some memory-related corruption, unfortunately.

 More to come tomorrow :-)

  On 20 Mar 2015, at 6:37 pm, Amos Jeffries squ...@treenet.co.nz wrote:
 
  On 20/03/2015 8:34 p.m., Dan Charlesworth wrote:
  Thanks Amos.
 
 
  I'll put together a build with the upcoming snapshot on Monday, might
 even try disabling optimization for it too.
 
  Please do. If you're only getting 40 RPS out of the proxy during the
  test it's hard to see how not optimizing the code could be any worse, and
  it will help identifying some traffic details.
 
  Amos
 


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Server-first SSL bump in Squid 3.5.x

2015-03-19 Thread Dan Charlesworth
Right, I see.

So I’ve got a special ACL to always allow that Test URL for the sake of our 
certcheck … but it’s doing it by dstdomain. So if there are rules to say 
“always redirect to the certificate splash page if you can’t connect to the 
URL”, then it will never pass it because the initial CONNECT step can never 
match a dstdomain and will always be DENIED.

So what I really need to do is change that test URL’s ACL to be a dst instead 
(and find a URL that isn’t going to resolve to different IPs over time). Okay.

While we’re at it, is there a Peek & Splice equivalent of the config I posted 
before?
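
For reference, my reading of the release notes is that the rough
Peek & Splice equivalent of that config would be something like the sketch
below. It is untested on our side, so treat it as a starting point rather
than a known-good config:

acl step1 at_step SslBump1

ssl_bump splice localhost
ssl_bump peek step1
ssl_bump bump all
sslproxy_cert_error deny all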

Kind regards
Dan

 On 19 Mar 2015, at 5:18 pm, Amos Jeffries squ...@treenet.co.nz wrote:
 
 On 19/03/2015 6:36 p.m., Dan Charlesworth wrote:
 Hey y’all
 
 Finally got 3.5.2 running. I was under the impression that using 
 server-first SSL bump would still be compatible, despite all the Peek &
 Splice changes, but apparently not. Hopefully someone can explain what might 
 be going wrong here ...
 
 
 Sadly, being compatible with a broken design does not mean working.
 server-first only works nicely if the client, Squid, and server are
 operating with the same TLS features - which is uncommon.
 
 
 Using the same SSL Bump config that we used for 3.4, we now seeing this 
 happen:
 19/Mar/2015-16:21:32 22 d4:f4:6f:71:90:e6 10.0.1.71 TCP_DENIED 200 0 
 CONNECT 94.31.29.230:443 - server-first - HIER_NONE/- - -
 
 
 The CONNECT request in the clear-text HTTP layer is now subject to
 access controls before any bumping takes place. Earlier Squid would let
 the CONNECT through if you were bumping, even if it would have been
 blocked by your access controls normally.
 
 This is unrelated to server-first or any other ssl_bump action.
 
 Instead of this:
 19/Mar/2015-14:42:04736 d4:f4:6f:71:90:e6 10.0.1.71 TCP_MISS 200 96913 
 GET https://code.jquery.com/jquery-1.11.0.min.js - server-first 
 Mozilla/5.0%20(iPhone;%20CPU%20iPhone%20OS%208_2%20like%20Mac%20OS%20X)%20AppleWebKit/600.1.4%20(KHTML,%20like%20Gecko)%20Mobile/12D508
  ORIGINAL_DST/94.31.29.53 application/x-javascript -
 
 
 That is a different HTTP message from inside the encryption.
 
 
 Amos
 
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] assertion failed: client_side.cc:1515: connIsUsable(http->getConn())

2015-03-19 Thread Dan Charlesworth
Well I got 3.5.2 into production for a few hours and Bad Things happened:

1) A hefty performance hit
Load average was maybe a tad higher but CPU, memory and I/O were about the
same. However the system seemed to top out at around 40 requests per second
(on a client that usually hits 100—150 rps) and squid became very slow to
respond to squidclient requests:

[root@proxy-LS5 ~]# time squidclient -p 8080 mgr:utilization | grep client_http.requests
client_http.requests = 40.965955/sec
client_http.requests = 41.168528/sec
client_http.requests = 42.111847/sec
client_http.requests = 166646

real	0m7.163s
user	0m0.002s
sys	0m0.006s

2) Lots of Segment Violations
These obviously suck. Backtrace attached.

Just cannot win. Is it possible these two issues are due to the patch for #4206?

bt full
#0  0x00397e232625 in ?? ()
No symbol table info available.
#1  0x00397e233e05 in ?? ()
No symbol table info available.
#2  0x00bb88a8 in queried_keys ()
No symbol table info available.
#3  0x00bb88b0 in queried_keys ()
No symbol table info available.
#4  0x0039864f32c0 in ?? ()
No symbol table info available.
#5  0x0059000b in operator std::char_traitschar  (this=0x2f89f30) 
at /usr/include/c++/4.4.7/ostream:510
No locals.
#6  FileMap::grow (this=0x2f89f30) at filemap.cc:75
_dbo = @0x8d01b90
old_sz = 0
old_map = 0x8bbb9e0
__FUNCTION__ = grow
#7  0x0002 in ?? ()
No symbol table info available.
#8  0x3ffd091c087442c8 in ?? ()
No symbol table info available.
#9  0x00bb91e0 in queried_keys ()
No symbol table info available.
#10 0x0001 in ?? ()
No symbol table info available.
#11 0x000c6e84 in ?? ()
No symbol table info available.
#12 0x0002 in ?? ()
No symbol table info available.
#13 0x4135 in ?? ()
No symbol table info available.
#14 0x0020 in ?? ()
No symbol table info available.
#15 0x in ?? ()
No symbol table info available.
On 16 Mar 2015, at 6:18 pm, Amos Jeffries squ...@treenet.co.nz wrote:

 On 16/03/2015 7:16 p.m., Dan Charlesworth wrote:
  Hey again Amos -

  Unfortunately the patch for #4206 won’t apply to squid-3.4.12. I was going
  to try creating a new one but couldn’t find an equivalent line in
  client_side.cc for that version.

  I guess the #4206 issue doesn’t apply to v3.4.x after all?

 Correct. Oh well.

  [Not a C programmer]

  Thanks for your time today.

  P.S. I'd love to upgrade to v3.5 but I'm waiting for somebody smarter than
  me to take the lead on a CentOS 6 RPM SPEC file.

 Eliezer to the rescue ;-)
 http://wiki.squid-cache.org/KnowledgeBase/CentOS#Squid-3.5

 Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] WARNING: 1 swapin MD5 mismatches and BUG 3279: HTTP reply without Date:

2015-03-19 Thread Dan Charlesworth
Hey Eliezer

I don't actually use SMP. I could be wrong about the aufs thing; I haven't
personally tested—and don't currently plan to test—any other cache types. I
just gleaned that from the comments in the bug reports.

Kind regards
Dan


On 20 March 2015 at 13:45, Eliezer Croitoru elie...@ngtech.co.il wrote:

 Hey Dan and John,

 If indeed this bug is only for UFS\AUFS cache_dir then I would try to make
 sure that large-rock will not sustain the same issue.

 I have not seen in any of the bug reports anything that would reproduce
 the issue.
 To make sure the issue is understood and can or cannot be reproduced using
 ufs\aufs will give one direction.
 I would try to test large rock in my next testing round with SMP but if
 anyone has some option to test it first I will be glad if it will be done
 to make sure ufs\aufs is the culprit.

 Also, if indeed it's aufs\ufs only with SMP, then it means the issue is
 related to the way SMP can make a ufs\aufs cache_dir dirty, and therefore
 the answer to the issue at hand would be pretty simple.

 Eliezer

 On 20/03/2015 00:32, Dan Charlesworth wrote:

 Hi John

 This bug has been affecting me on an off for a while as well. I believe it
 only affects aufs and, unfortunately, has been around for years.

 See: http://bugs.squid-cache.org/show_bug.cgi?id=3279
 And see: http://bugs.squid-cache.org/show_bug.cgi?id=3483


 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] assertion failed: client_side.cc:1515: connIsUsable(http->getConn())

2015-03-19 Thread Dan Charlesworth
John -

For us the 3.4 series is definitely the stablest.

I was hoping 3.5.2 + plus a patch would avoid the error in this thread’s 
subject—and it might have done—but it introduced two other major problems (for 
us).

 On 20 Mar 2015, at 2:29 pm, johnzeng johnzeng2...@yahoo.com wrote:
 
 
 Hello Dan:
 
  I used squid 2.7.STABLE9 before, and I have been wondering whether
  squid 3.5.2 is the stablest choice for us too.
 
  And you?
 
  Which version do you think is the stablest in the squid 3.x series?
 
 
 
 
 
 
 
 Well I got 3.5.2 into production for a few hours and Bad Things happened:
 
 *1) A hefty performance hit*
 Load average was maybe a tad higher but CPU. memory and I/O were about the 
 same. However the system seemed to top out at around 40 requests per second 
 (on a client that usually hits 100—150 rps) and squid became very slow to 
 respond to squidclient requests:
 [root@proxy-LS5 ~]# time squidclient -p 8080 mgr:utilization | grep 
 client_http.requests
 client_http.requests = 40.965955/sec
 client_http.requests = 41.168528/sec
 client_http.requests = 42.111847/sec
 client_http.requests = 166646
 
 real0m7.163s
 user0m0.002s
 sys0m0.006s
 
 *2) Lots of Segment Violations*
 These obviously suck. Backtrace attached.
 
 Just cannot win. Is it possible these two issues are due to the patch for 
 #4206?
 
 
 
 
 On 16 Mar 2015, at 6:18 pm, Amos Jeffries squ...@treenet.co.nz wrote:
 
 On 16/03/2015 7:16 p.m., Dan Charlesworth wrote:
 Hey again Amos -
 
 Unfortunately the patch for #4206 won’t apply to squid-3.4.12. I was going 
 to try creating a new one but couldn’t find an equivalent line in 
 client_side.cc for that version.
 
 I guess the #4206 issue doesn’t apply to v3.4.x after all?
 
 Correct. Oh well.
 
 
 
 [Not a C programmer]
 
 Thanks for your time today.
 
 P.S. I'd love to upgrade to v3.5 but I'm waiting for somebody smarter than 
 me to take the lead on a CentOS 6 RPM SPEC file.
 
 Eliezer to the rescue ;-)
 http://wiki.squid-cache.org/KnowledgeBase/CentOS#Squid-3.5
 
 
 Amos
 
 
 
 
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] WARNING: 1 swapin MD5 mismatches and BUG 3279: HTTP reply without Date:

2015-03-19 Thread Dan Charlesworth
Ours usually run 50–100 GB.

We don’t see it super frequently. But when it happens it tends to keep 
happening over and over until the swap.state’s rebuilt.

 On 20 Mar 2015, at 2:37 pm, Alberto Perez alberto2pe...@gmail.com wrote:
 
 Another one here not using SMP, and using aufs.
 
 I stopped seeing this issue so frequently when I reduced my cache size
 from 70 GB to 30 GB.
 
 Regards
 
 On 3/19/15, Dan Charlesworth d...@getbusi.com wrote:
 Hey Eliezer
 
 I don't actually use SMP. I could be wrong about the aufs thing; I haven't
 personally tested—and don't currently plan to test—any other cache types. I
 just gleaned that from the comments in the bug reports.
 
 Kind regards
 Dan
 
 
 On 20 March 2015 at 13:45, Eliezer Croitoru elie...@ngtech.co.il wrote:
 
 Hey Dan and John,
 
 If indeed this bug is only for UFS\AUFS cache_dir then I would try to
 make
 sure that large-rock will not sustain the same issue.
 
 I have not seen in any of the bug reports anything that would reproduce
 the issue.
 To make sure the issue is understood and can or cannot be reproduced
 using
 ufs\aufs will give one direction.
 I would try to test large rock in my next testing round with SMP but if
 anyone has some option to test it first I will be glad if it will be done
 to make sure ufs\aufs is the culprit.
 
 Also if indeed it's with aufs\ufs only with SMP then it means that the
 issue is related to the way SMP can make a ufs\aufs cache_dir dirty and
 there for the answer would be pretty simple to the issue in hands.
 
 Eliezer
 
 On 20/03/2015 00:32, Dan Charlesworth wrote:
 
 Hi John
 
 This bug has been affecting me on an off for a while as well. I believe
 it
 only affects aufs and, unfortunately, has been around for years.
 
 See: http://bugs.squid-cache.org/show_bug.cgi?id=3279
 And see: http://bugs.squid-cache.org/show_bug.cgi?id=3483
 
 
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users
 
 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Server-first SSL bump in Squid 3.5.x

2015-03-18 Thread Dan Charlesworth
Hey y’all

Finally got 3.5.2 running. I was under the impression that using server-first 
SSL bump would still be compatible, despite all the Peek & Splice changes, but 
apparently not. Hopefully someone can explain what might be going wrong here ...

Using the same SSL Bump config that we used for 3.4, we now seeing this happen:
19/Mar/2015-16:21:32 22 d4:f4:6f:71:90:e6 10.0.1.71 TCP_DENIED 200 0 
CONNECT 94.31.29.230:443 - server-first - HIER_NONE/- - -

Instead of this:
19/Mar/2015-14:42:04736 d4:f4:6f:71:90:e6 10.0.1.71 TCP_MISS 200 96913 GET 
https://code.jquery.com/jquery-1.11.0.min.js - server-first 
Mozilla/5.0%20(iPhone;%20CPU%20iPhone%20OS%208_2%20like%20Mac%20OS%20X)%20AppleWebKit/600.1.4%20(KHTML,%20like%20Gecko)%20Mobile/12D508
 ORIGINAL_DST/94.31.29.53 application/x-javascript -

This request happens in a little splash page which is designed to test if 
squid’s CA cert is installed on the client and redirect them to some 
instructions if it’s not. This definitely isn’t happening for all intercepted 
HTTPS requests, just this (particularly important) one and some others.

SSL Bump config:
ssl_bump none localhost
ssl_bump server-first all
sslproxy_cert_error deny all

sslcrtd_program /usr/bin/squid_ssl_crtd -s /path/to/squid/ssl_db -M 4MB
sslcrtd_children 32 startup=5 idle=1

DNAT intercepting port config:
https_port 3130 intercept name=3130 ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=4MB cert=/path/to/squid/proxy-cert.cer 
key=/path/to/squid/proxy-key.key

Thanks!
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] v3.5.x RPM for CentOS 6

2015-03-18 Thread Dan Charlesworth
Hi Donny

I gathered that much. I guess what I specifically am asking for is:

- Which CentOS 6 package includes the missing perl modules?
- How do I grant the “pinger” the correct permissions in CentOS 6?
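
My best guesses so far, for anyone searching the archives: the perl module
looks like it comes from EPEL's perl-Crypt-OpenSSL-X509 package, and the
pinger presumably just needs raw-socket rights, e.g.

setcap cap_net_raw+ep /usr/lib64/squid/pinger

(or setuid root where capabilities aren't available), though I haven't
confirmed either on a CentOS 6 box yet.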

Cheers
Dan

 On 18 Mar 2015, at 4:58 pm, Donny Vibianto l4n...@gmail.com wrote:
 
 hi Dan,
 
 Eliezer already made binaries for CentOS 6.x; you just missed the perl 
 modules, and pinger needs to have the correct permissions.
 
 
 
 On Wed, Mar 18, 2015 at 11:54 AM, Dan Charlesworth d...@getbusi.com wrote:
 *Tory — sorry.
 
 On 18 Mar 2015, at 3:49 pm, Dan Charlesworth d...@getbusi.com wrote:
 
 Hi Tony
 
 Yeah, I wouldn’t mind taking a peek at your SRPM or spec file if you can 
 share—thanks!
 
 On 18 Mar 2015, at 3:15 pm, Tory M Blue tmb...@gmail.com wrote:
 
 I've built a 3.5 and have it running. I can look and see if I can share it 
 with you. Don't believe there is anything special.
 
 Tory 
 
 Sent via the wild blue yonder
 
 
 On Mar 17, 2015, at 20:16, Dan Charlesworth d...@getbusi.com wrote:
 
 Hey Eliezer
 
 Do you have any plans to maintain a Squid 3.5.x rpm for CentOS 6? 
 
 I can see you’ve published one for CentOS 7. In fact I tried to use your 
 spec file from the EL7 version to build an EL6 rpm, but ran into errors 
 when updating from 3.4.12:
 
 1. Installing the separate squid-helpers package had a dependency error 
 I’m not sure how to resolve:
 --- Package squid-helpers.x86_64 7:3.5.2-1.el6 will be installed
 -- Processing Dependency: perl(Crypt::OpenSSL::X509) for package: 
 7:squid-helpers-3.5.2-1.el6.x86_64
 -- Processing Dependency: perl(DBI) for package: 
 7:squid-helpers-3.5.2-1.el6.x86_64
 -- Running transaction check
 --- Package perl-DBI.x86_64 0:1.609-4.el6 will be installed
 --- Package squid-helpers.x86_64 7:3.5.2-1.el6 will be installed
 -- Processing Dependency: perl(Crypt::OpenSSL::X509) for package: 
 7:squid-helpers-3.5.2-1.el6.x86_64
 -- Finished Dependency Resolution
 Error: Package: 7:squid-helpers-3.5.2-1.el6.x86_64 (getbusi-dev)
Requires: perl(Crypt::OpenSSL::X509)
  You could try using --skip-broken to work around the problem
 
  You could try running: rpm -Va --nofiles --nodigest
 
 2. Having disabled all the helpers which are missing because of that 
 package everything was okay except for an error regarding the “ICMP 
 Pinger”:
 2015/03/18 14:13:25| pinger: Initialising ICMP pinger ...
 2015/03/18 14:13:25|  icmp_sock: (1) Operation not permitted
 2015/03/18 14:13:25| pinger: Unable to start ICMP pinger.
 2015/03/18 14:13:25|  icmp_sock: (1) Operation not permitted
 2015/03/18 14:13:25| pinger: Unable to start ICMPv6 pinger.
 2015/03/18 14:13:25| FATAL: pinger: Unable to open any ICMP sockets.
 
 Do you have any advice on how to overcome these issues?
 
 Thanks!
 Dan
 
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users
 
 
 
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users
 
 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] v3.5.x RPM for CentOS 6

2015-03-17 Thread Dan Charlesworth
Hi Tony

Yeah, I wouldn’t mind taking a peek at your SRPM or spec file if you can 
share—thanks!

 On 18 Mar 2015, at 3:15 pm, Tory M Blue tmb...@gmail.com wrote:
 
 I've built a 3.5 and have it running. I can look and see if I can share it 
 with you. Don't believe there is anything special.
 
 Tory 
 
 Sent via the wild blue yonder
 
 
 On Mar 17, 2015, at 20:16, Dan Charlesworth d...@getbusi.com wrote:
 
 Hey Eliezer
 
 Do you have any plans to maintain a Squid 3.5.x rpm for CentOS 6? 
 
 I can see you’ve published one for CentOS 7. In fact I tried to use your 
 spec file from the EL7 version to build an EL6 rpm, but ran into errors when 
 updating from 3.4.12:
 
 1. Installing the separate squid-helpers package had a dependency error I’m 
 not sure how to resolve:
 --- Package squid-helpers.x86_64 7:3.5.2-1.el6 will be installed
 -- Processing Dependency: perl(Crypt::OpenSSL::X509) for package: 
 7:squid-helpers-3.5.2-1.el6.x86_64
 -- Processing Dependency: perl(DBI) for package: 
 7:squid-helpers-3.5.2-1.el6.x86_64
 -- Running transaction check
 --- Package perl-DBI.x86_64 0:1.609-4.el6 will be installed
 --- Package squid-helpers.x86_64 7:3.5.2-1.el6 will be installed
 -- Processing Dependency: perl(Crypt::OpenSSL::X509) for package: 
 7:squid-helpers-3.5.2-1.el6.x86_64
 -- Finished Dependency Resolution
 Error: Package: 7:squid-helpers-3.5.2-1.el6.x86_64 (getbusi-dev)
Requires: perl(Crypt::OpenSSL::X509)
  You could try using --skip-broken to work around the problem
 
  You could try running: rpm -Va --nofiles --nodigest
 
 2. Having disabled all the helpers which are missing because of that package 
 everything was okay except for an error regarding the “ICMP Pinger”:
 2015/03/18 14:13:25| pinger: Initialising ICMP pinger ...
 2015/03/18 14:13:25|  icmp_sock: (1) Operation not permitted
 2015/03/18 14:13:25| pinger: Unable to start ICMP pinger.
 2015/03/18 14:13:25|  icmp_sock: (1) Operation not permitted
 2015/03/18 14:13:25| pinger: Unable to start ICMPv6 pinger.
 2015/03/18 14:13:25| FATAL: pinger: Unable to open any ICMP sockets.
 
 Do you have any advice on how to overcome these issues?
 
 Thanks!
 Dan
 
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Random SSL bump DB corruption

2015-03-17 Thread Dan Charlesworth
Bumpity bump

Had this go down exactly the same way this past Monday at Deployment #1.

 On 10 Mar 2015, at 4:51 pm, Dan Charlesworth d...@getbusi.com wrote:
 
 Hey folks
 
 After having many of our systems running Squid 3.4.12 for a couple of weeks 
 now we had two different deployments fail today due to SSL DB corruption.
 
 Never seen this in almost 9 months of SSL bump being in production and there 
 were no problems in either cache log until the “wrong number of fields” 
 lines, apparently.
 
 Anyone else?
 
 Deployment #1 log excerpt:
 wrong number of fields on line 505 (looking for field 6, got 1, '' left)
 (squid_ssl_crtd): The SSL certificate database 
 /usr/local/mwf/mwf13/squid/ssl_db is corrupted. Please rebuild
 2015/03/10 09:04:24 kid1| WARNING: ssl_crtd #Hlpr0 exited
 2015/03/10 09:04:24 kid1| Too few ssl_crtd processes are running (need 1/32)
 2015/03/10 09:04:24 kid1| Starting new helpers
 2015/03/10 09:04:24 kid1| helperOpenServers: Starting 1/32 'squid_ssl_crtd' 
 processes
 2015/03/10 09:04:24 kid1| ssl_crtd helper returned NULL reply.
 wrong number of fields on line 505 (looking for field 6, got 1, '' left)
 (squid_ssl_crtd): The SSL certificate database 
 /usr/local/mwf/mwf13/squid/ssl_db is corrupted. Please rebuild
 
 Deployment #2 log excerpt:
 wrong number of fields on line 2 (looking for field 6, got 1, '' left)
 (squid_ssl_crtd): The SSL certificate database 
 /usr/local/mwf/mwf13/squid/ssl_db is corrupted. Please rebuild
 2015/03/10 15:29:16 kid1| WARNING: ssl_crtd #Hlpr0 exited
 2015/03/10 15:29:16 kid1| Too few ssl_crtd processes are running (need 1/32)
 2015/03/10 15:29:16 kid1| Starting new helpers
 2015/03/10 15:29:16 kid1| helperOpenServers: Starting 1/32 'squid_ssl_crtd' 
 processes
 2015/03/10 15:29:17 kid1| ssl_crtd helper returned NULL reply.
 wrong number of fields on line 2 (looking for field 6, got 1, '' left)
 (squid_ssl_crtd): The SSL certificate database 
 /usr/local/mwf/mwf13/squid/ssl_db is corrupted. Please rebuild
 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Random SSL bump DB corruption

2015-03-09 Thread Dan Charlesworth
Hey folks

After having many of our systems running Squid 3.4.12 for a couple of weeks now 
we had two different deployments fail today due to SSL DB corruption.

Never seen this in almost 9 months of SSL bump being in production and there 
were no problems in either cache log until the “wrong number of fields” lines, 
apparently.

Anyone else?

Deployment #1 log excerpt:
wrong number of fields on line 505 (looking for field 6, got 1, '' left)
(squid_ssl_crtd): The SSL certificate database 
/usr/local/mwf/mwf13/squid/ssl_db is corrupted. Please rebuild
2015/03/10 09:04:24 kid1| WARNING: ssl_crtd #Hlpr0 exited
2015/03/10 09:04:24 kid1| Too few ssl_crtd processes are running (need 1/32)
2015/03/10 09:04:24 kid1| Starting new helpers
2015/03/10 09:04:24 kid1| helperOpenServers: Starting 1/32 'squid_ssl_crtd' 
processes
2015/03/10 09:04:24 kid1| ssl_crtd helper returned NULL reply.
wrong number of fields on line 505 (looking for field 6, got 1, '' left)
(squid_ssl_crtd): The SSL certificate database 
/usr/local/mwf/mwf13/squid/ssl_db is corrupted. Please rebuild

Deployment #2 log excerpt:
wrong number of fields on line 2 (looking for field 6, got 1, '' left)
(squid_ssl_crtd): The SSL certificate database 
/usr/local/mwf/mwf13/squid/ssl_db is corrupted. Please rebuild
2015/03/10 15:29:16 kid1| WARNING: ssl_crtd #Hlpr0 exited
2015/03/10 15:29:16 kid1| Too few ssl_crtd processes are running (need 1/32)
2015/03/10 15:29:16 kid1| Starting new helpers
2015/03/10 15:29:16 kid1| helperOpenServers: Starting 1/32 'squid_ssl_crtd' 
processes
2015/03/10 15:29:17 kid1| ssl_crtd helper returned NULL reply.
wrong number of fields on line 2 (looking for field 6, got 1, '' left)
(squid_ssl_crtd): The SSL certificate database 
/usr/local/mwf/mwf13/squid/ssl_db is corrupted. Please rebuild
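
In case anyone else hits this: as far as I can tell, “rebuild” just means
re-initialising the DB while squid is stopped. Something along these lines,
using our paths; treat it as a sketch rather than a verified recipe:

squid -k shutdown
rm -rf /usr/local/mwf/mwf13/squid/ssl_db
/usr/bin/squid_ssl_crtd -c -s /usr/local/mwf/mwf13/squid/ssl_db -M 4MB
chown -R squid:squid /usr/local/mwf/mwf13/squid/ssl_db
service squid start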

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] assertion failed: client_side.cc:1515: connIsUsable(http->getConn())

2015-02-26 Thread Dan Charlesworth
Alright, I got abrtd on board, finally.

Here’s a backtrace from this morning (bt and bt full versions included
separately):

#0  0x00397e232625 in raise (sig=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1  0x00397e233e05 in abort () at abort.c:92
#2  0x005656ef in xassert (msg=0x80e183 
connIsUsable(http->getConn()), file=0x80d070 client_side.cc, line=1515) at 
debug.cc:566
#3  0x0053e2cf in clientSocketRecipient (node=0x49e03568, 
http=0x46f9fdf8, rep=0x6030a6a0, receivedData=...) at client_side.cc:1515
#4  0x00546207 in clientReplyContext::processReplyAccessResult 
(this=0x46fa2bf8, accessAllowed=value optimized out) at 
client_side_reply.cc:2058
#5  0x00546643 in clientReplyContext::ProcessReplyAccessResult (rv=..., 
voidMe=value optimized out) at client_side_reply.cc:1961
#6  0x006ec9db in ACLChecklist::checkCallback (this=0x31210228, 
answer=value optimized out) at Checklist.cc:161
#7  0x00588daf in externalAclHandleReply (data=value optimized out, 
reply=value optimized out) at external_acl.cc:1427
#8  0x005bd62a in helperReturnBuffer (conn=value optimized out, 
buf=value optimized out, len=value optimized out, flag=value optimized 
out, 
xerrno=value optimized out, data=0x4fadea8) at helper.cc:858
#9  helperHandleRead (conn=value optimized out, buf=value optimized out, 
len=value optimized out, flag=value optimized out, xerrno=value optimized 
out, 
data=0x4fadea8) at helper.cc:951
#10 0x006efde6 in AsyncCall::make (this=0x6031d880) at AsyncCall.cc:32
#11 0x006f2ea2 in AsyncCallQueue::fireNext (this=value optimized out) 
at AsyncCallQueue.cc:52
#12 0x006f31f0 in AsyncCallQueue::fire (this=0x1a45f30) at 
AsyncCallQueue.cc:38
#13 0x00584a34 in EventLoop::runOnce (this=0x7cf244c0) at 
EventLoop.cc:135
#14 0x00584b88 in EventLoop::run (this=0x7cf244c0) at 
EventLoop.cc:99
#15 0x00604918 in SquidMain (argc=value optimized out, 
argv=0x7cf246b8) at main.cc:1528
#16 0x006052a8 in SquidMainSafe (argc=value optimized out, 
argv=value optimized out) at main.cc:1260
#17 main (argc=value optimized out, argv=value optimized out) at 
main.cc:1252
#0  0x00397e232625 in raise (sig=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:64
64return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
(gdb) bt full
#0  0x00397e232625 in raise (sig=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:64
resultvar = 0
pid = value optimized out
selftid = 22170
#1  0x00397e233e05 in abort () at abort.c:92
save_stage = 2
act = {__sigaction_handler = {sa_handler = 0x735573496e6e6f63, 
sa_sigaction = 0x735573496e6e6f63}, sa_mask = {__val = {8391446528407659105, 
95, 246932897408, 
  1515, 11643056, 1613801120, 2116527793, 70, 779769992, 779769992, 
779769904, 779769992, 247066473152, 140737437122128, 140737437122096, 
1613801120}}, 
  sa_flags = -2044089756, sa_restorer = 0xb1a8d0 Debug::CurrentDebug}
sigs = {__val = {32, 0 repeats 15 times}}
#2  0x005656ef in xassert (msg=0x80e183 
connIsUsable(http->getConn()), file=0x80d070 client_side.cc, line=1515) at 
debug.cc:566
__FUNCTION__ = xassert
#3  0x0053e2cf in clientSocketRecipient (node=0x49e03568, 
http=0x46f9fdf8, rep=0x6030a6a0, receivedData=...) at client_side.cc:1515
context = {p_ = 0x46fa1b48}
mustSendLastChunk = value optimized out
#4  0x00546207 in clientReplyContext::processReplyAccessResult 
(this=0x46fa2bf8, accessAllowed=value optimized out) at 
client_side_reply.cc:2058
__FUNCTION__ = processReplyAccessResult
localTempBuffer = {flags = {error = value optimized out}, length = 0, 
offset = value optimized out, data = 0x46fa1cb4 }
buf = 0x46fa1b68 HTTP/1.1 200 OK\r\nServer: Apache\r\nETag: 
\7d861e702caf2102333f5e730b15fa7d:1424978619\\r\nLast-Modified: Thu, 26 Feb 
2015 19:00:47 GMT\r\nAccept-Ranges: bytes\r\nContent-Length: 
18206\r\nContent-Type: image/png...
body_buf = value optimized out
body_size = value optimized out
#5  0x00546643 in clientReplyContext::ProcessReplyAccessResult (rv=..., 
voidMe=value optimized out) at client_side_reply.cc:1961
me = value optimized out
#6  0x006ec9db in ACLChecklist::checkCallback (this=0x31210228, 
answer=value optimized out) at Checklist.cc:161
callback_ = 0x546630 
clientReplyContext::ProcessReplyAccessResult(allow_t, void*)
cbdata_ = 0x46fa2bf8
__FUNCTION__ = checkCallback
#7  0x00588daf in externalAclHandleReply (data=value optimized out, 
reply=value optimized out) at external_acl.cc:1427
cbdata = 0x31210228
state = 0x4cc1d078
__FUNCTION__ = externalAclHandleReply
next = value optimized out
entryData = {result = {code = ACCESS_DENIED, kind = 0}, notes = {Lock 
= {_vptr.Lock = 0xaedb80, count_ = 0}, _vptr.NotePairs = 0xaedb58, 

Re: [squid-users] assertion failed: client_side.cc:1515: connIsUsable(http->getConn())

2015-02-19 Thread Dan Charlesworth
Thanks Eliezer …

We've only ever used `kill` as a very last resort when the squid process wouldn’t 
respond to anything else.

Anyway, I think I missed what led you to think the crash is related to the 
reply_body_max_size rules' external ACL as opposed to the many others we define?

That would certainly narrow it down a lot further than before.

Cheers
Dan

 On 20 Feb 2015, at 2:57 pm, Eliezer Croitoru elie...@ngtech.co.il wrote:
 
 Hey Dan,
 
 I am not the best at reading squid's long debug output, but it is needed
 in order to understand the path the request travels between the ACLs and
 the helper, and to determine whether the connection became unusable
 because of a helper or for some other reason.
 
 And so from what you describe I assume that the needed helper\external ACL is 
 a fake one so a python helper is out of the question for such a purpose.
 The fact that it's crashing describes some kind of failure from the 
 combination of something.
 To test whether the issue is caused by the helper itself, or merely by the
 existence of this specific helper for the reply body max size, try
 disabling this helper and using only the basic limit. While that will
 force you to not show a nice deny info page, it will prevent some of the
 issues from happening.
 
 From a stability point of view, running all these kill -9s and so on is a
 very wrong approach.
 The crashes, besides causing downtime, cause a much deeper issue.
 Assuming that the users' transactions are important, these crashes are in
 many cases even more damaging than any downtime.
 I know that some admins do not agree with my approach but a stable service is 
 one of the basic fundamentals for success and happiness!!
 
 I must admit that there are cases which a kill -9 can help but it has it's 
 price.
 
 Just asking loudly from both CEO + SYSADMIN + CLIENTS + others:
 What would you prefer?
 - stability based on a good product
 - stability based on patches
 - stability based on human resources recruitment's
 - stability based on some unclosed known bugs
 - stability based on 1k programmers work
 - stability based on protocol compatibility
 
 
 And I must stop here with the list since the above list can become very long 
 and which will prove that humans can look at the same picture and see many 
 different things.
 
 Eliezer
 
 * I am almost sure that you may use a fake acl that will match all requests 
 instead of using an external_acl helper that will help you to select the 
 100MB limit.
 
 On 20/02/2015 05:34, Dan Charlesworth wrote:
 Installed v3.4.12 and almost went a whole day without this crash.
 Ended up rearing its head during a spike in traffic after lunch. Seems
 to be more prone to occurring when the HTTP requests per second
 reaches about 100.
 
 I have a script running that runs a squid reload whenever this crash
 occurs and that seems to limit the impact (downtime) to a few
 seconds—but occasionally Squid seems to get deadlocked by the crash
 and needs to be killed (sometimes with -9) before it can be restarted.
 
 In lieu of being able to diagnose and fix this, does anyone have any
 other creative ideas as to limiting its impact?
 
 Thanks
 Dan
 
 
 On 12 February 2015 at 09:51, Dan Charlesworth d...@getbusi.com wrote:
 Hey Eliezer
 
 With the response_size_100 ACL definition:
 - 100 tells the external ACL the limit in MB
 - 192.168.0.10 tells the external ACL the squid IP
 
 I think one or both of these is only needed to build the deny page. You 
 can’t use deny_info with reply_body_max_size so we had to customise the 
 ERR_TOO_BIG source to do a redirect to our own page.
 
 The http_access allow line is because result caching cannot alter the 
 EXT_LOG for fast ACLs as cache lookups include the EXT_LOG, so we need to 
 check the result twice to alter the EXT_LOG and then have the result cached 
 against the altered EXT_LOG.
 
 Cheers
 Dan
 
 On 11 Feb 2015, at 11:09 pm, Eliezer Croitoru elie...@ngtech.co.il wrote:
 
 Hey Dan,
 
 First I must admit that this squid.conf is quite complicated but kind of 
 self explanatory.
 
 I have tried to understand the next lines:
 # File size (download) restrictions
 acl response_size_100 external response_size_type 100 192.168.0.10
 http_access allow response_size_100 response_size_100
 reply_body_max_size 100 MB response_size_100
 
 But I am unsure how it works with external_acl exactly.
 If you wish to deny 100MB size files you should have only one rule for the 
 reply body max size, what are the others for exactly?
 
 Eliezer
 
 * I might be missing some concepts, so sorry in advance.
 
 On 11/02/2015 00:30, Dan Charlesworth wrote:
 Hi Eliezer
 
 Took a while to get this up—sorry about that. Here’s an example of a 
 production config of ours (with some confidential stuff necessarily taken 
 out/edited):
 https://gist.github.com/djch/92cf0b04afbd7917
 
 Let me know if there’s any other info I can provide

Re: [squid-users] assertion failed: client_side.cc:1515: connIsUsable(http->getConn())

2015-02-19 Thread Dan Charlesworth
Thanks Amos -

So then it more than likely is related to our external ACLs that deal with the 
HTTP response?

 On 20 Feb 2015, at 5:06 pm, Amos Jeffries squ...@treenet.co.nz wrote:
 
 On 20/02/2015 5:46 p.m., Eliezer Croitoru wrote:
 Hey Dan,
 
 The basic rule of thumb in programming lands is script vs compiled code.
 Where compiled code can be considered very reliable and in most cases
 tested much more than scripts.
 I am fearing that there is some race between all sorts of things on
 runtime which might lead to this failed test.
 
 There are couple possibilities that can cause the issue you are writing
 about.
 From the compiled side of the code the main suspect is that the
 connection got into a non usable state before squid could do something
 else.
 I have not seen yet the source code for connIsUsable but if you wish I
 can try and look at the function\method\call\code source and start a
 basic lookup to understand the issue a bit more.
 
 Its a simple test to check to ensure the client connection is open when
 writing some response data to it.
 
 Something earlier has caused client connection closure, and something
 else earlier has failed to clean up the state or check that the state was
 sane before getting to the point of assertion.
 
 Amos
 
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] assertion failed: client_side.cc:1515: connIsUsable(http->getConn())

2015-02-19 Thread Dan Charlesworth
Installed v3.4.12 and almost went a whole day without this crash.
Ended up rearing its head during a spike in traffic after lunch. Seems
to be more prone to occurring when the HTTP requests per second
reaches about 100.

I have a script running that runs a squid reload whenever this crash
occurs and that seems to limit the impact (downtime) to a few
seconds—but occasionally Squid seems to get deadlocked by the crash
and needs to be killed (sometimes with -9) before it can be restarted.
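
The watchdog script is nothing clever, by the way; conceptually it is just
this, with the log path and reload command adjusted to your setup:

#!/bin/sh
# Watch cache.log and reload squid when the assertion appears (sketch).
tail -Fn0 /var/log/squid/cache.log | while read line; do
    case "$line" in
        *"assertion failed: client_side.cc:1515"*) service squid reload ;;
    esac
done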

In lieu of being able to diagnose and fix this, does anyone have any
other creative ideas as to limiting its impact?

Thanks
Dan


On 12 February 2015 at 09:51, Dan Charlesworth d...@getbusi.com wrote:
 Hey Eliezer

 With the response_size_100 ACL definition:
 - 100 tells the external ACL the limit in MB
 - 192.168.0.10 tells the external ACL the squid IP

 I think one or both of these is only needed to build the deny page. You can’t 
 use deny_info with reply_body_max_size so we had to customise the ERR_TOO_BIG 
 source to do a redirect to our own page.

 The http_access allow line is because result caching cannot alter the EXT_LOG 
 for fast ACLs as cache lookups include the EXT_LOG, so we need to check the 
 result twice to alter the EXT_LOG and then have the result cached against the 
 altered EXT_LOG.

 Cheers
 Dan

 On 11 Feb 2015, at 11:09 pm, Eliezer Croitoru elie...@ngtech.co.il wrote:

 Hey Dan,

 First I must admit that this squid.conf is quite complicated but kind of 
 self explanatory.

 I have tried to understand the next lines:
 # File size (download) restrictions
 acl response_size_100 external response_size_type 100 192.168.0.10
 http_access allow response_size_100 response_size_100
 reply_body_max_size 100 MB response_size_100

 But I am unsure how it works with external_acl exactly.
 If you wish to deny 100MB size files you should have only one rule for the 
 reply body max size, what are the others for exactly?

 Eliezer

 * I might be missing some concepts, so sorry in advance.

 On 11/02/2015 00:30, Dan Charlesworth wrote:
 Hi Eliezer

 Took a while to get this up—sorry about that. Here’s an example of a 
 production config of ours (with some confidential stuff necessarily taken 
 out/edited):
 https://gist.github.com/djch/92cf0b04afbd7917

 Let me know if there’s any other info I can provide that might point 
 towards the cause of this crash.

 And thanks again for taking a look.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] assertion failed: client_side.cc:1515: connIsUsable(http->getConn())

2015-02-10 Thread Dan Charlesworth
Hi Eliezer

Took a while to get this up—sorry about that. Here’s an example of a production 
config of ours (with some confidential stuff necessarily taken out/edited):
https://gist.github.com/djch/92cf0b04afbd7917

Let me know if there’s any other info I can provide that might point towards 
the cause of this crash.

And thanks again for taking a look.

 On 3 Feb 2015, at 2:49 pm, Dan Charlesworth d...@getbusi.com wrote:
 
 Hi Eliezer
 
 Thanks for paying attention, as always. I’m working on getting an 
 (appropriately censored) example of our squid.conf up for your perusal.
 
 In the meantime I just wanted to point out that when this crash occurs some
 of the busiest external_acl_types appear to crash too, though the exact
 ones seem to vary a bit between occurrences:
 
 2015/02/03 13:03:05 kid1| assertion failed: client_side.cc:1515: 
 connIsUsable(http->getConn())
 Traceback (most recent call last):
   File max_file_size_acl.pyo, line 76, in module
 IOError: [Errno 104] Connection reset by peer
 2015/02/03 13:04:01 kid1| Set Current Directory to /var/spool/squid
 Traceback (most recent call last):
   File set_finder_acl.pyo, line 94, in module
 IOError: [Errno 104] Connection reset by peer
 2015/02/03 13:04:01 kid1| Starting Squid Cache version 3.4.11 for 
 x86_64-redhat-linux-gnu...
 
 Those lines it’s pointing to in the Traceback are just the last line in each 
 ACL e.g. `line = sys.stdin.readline()`
 
 Cheers
 Dan
 
 On 2 Feb 2015, at 11:35 am, Eliezer Croitoru elie...@ngtech.co.il wrote:
 
 Hey Dan,
 
 Just to get a feel for the environment, can you share your squid.conf?
 (censoring confidential data)
 
 Thanks,
 Eliezer
 
 On 02/02/2015 01:14, Dan Charlesworth wrote:
 Bumping this one for the new year 'cause I still don't understand squid
 traces and because it's still happening with v3.4.11.
 
 I would speculate that it's something to do with the External ACLs
 (there's a bunch). Let me know if a more recent traceback (than those
 earlier in the thread) would help.
 
 
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users
 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] assertion failed: client_side.cc:1515: connIsUsable(http->getConn())

2015-02-02 Thread Dan Charlesworth
Hi Eliezer

Thanks for paying attention, as always. I’m working on getting an 
(appropriately censored) example of our squid.conf up for your perusal.

In the meantime I just wanted to point out that when this crash occurs some of 
the busiest external_acl_types appear to crash too, though the exact ones 
seem to vary a bit between occurrences:

2015/02/03 13:03:05 kid1| assertion failed: client_side.cc:1515: 
connIsUsable(http->getConn())
Traceback (most recent call last):
  File max_file_size_acl.pyo, line 76, in module
IOError: [Errno 104] Connection reset by peer
2015/02/03 13:04:01 kid1| Set Current Directory to /var/spool/squid
Traceback (most recent call last):
  File set_finder_acl.pyo, line 94, in module
IOError: [Errno 104] Connection reset by peer
2015/02/03 13:04:01 kid1| Starting Squid Cache version 3.4.11 for 
x86_64-redhat-linux-gnu...

Those lines it’s pointing to in the Traceback are just the last line in each 
ACL e.g. `line = sys.stdin.readline()`
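
For context, each of these helpers is a simple loop over stdin following the
external ACL protocol, conceptually like this stripped-down sketch (the real
decision logic is omitted; check() here is just a placeholder):

import sys

def check(fields):
    # Placeholder; the real helpers make their decision here.
    return True

while True:
    line = sys.stdin.readline()
    if not line:              # squid closed the pipe
        break
    fields = line.strip().split(' ')
    result = "OK" if check(fields) else "ERR"
    sys.stdout.write(result + "\n")
    sys.stdout.flush()        # replies must be flushed or squid stalls

So the IOError above is presumably just a helper dying when its stdin/stdout
pipe is reset by the crashing squid.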

Cheers
Dan

 On 2 Feb 2015, at 11:35 am, Eliezer Croitoru elie...@ngtech.co.il wrote:
 
 Hey Dan,
 
 Just to get a feel for the environment, can you share your squid.conf?
 (censoring confidential data)
 
 Thanks,
 Eliezer
 
 On 02/02/2015 01:14, Dan Charlesworth wrote:
 Bumping this one for the new year 'cause I still don't understand squid
 traces and because it's still happening with v3.4.11.
 
 I would speculate that it's something to do with the External ACLs
 (there's a bunch). Let me know if a more recent traceback (than those
 earlier in the thread) would help.
 
 
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] assertion failed: client_side.cc:1515: connIsUsable(http->getConn())

2015-02-01 Thread Dan Charlesworth
Bumping this one for the new year 'cause I still don't understand squid
traces and because it's still happening with v3.4.11.

I would speculate that it's something to do with the External ACLs
(there's a bunch). Let me know if a more recent traceback (than those
earlier in the thread) would help.

On 2 February 2015 at 10:14, Dan Charlesworth d...@getbusi.com wrote:

 Bumping this one for the new year 'cause I still don't understand squid
 traces and because it's still happening with v3.4.11.

 I would speculate that it's something to do with the External ACLs
 (there's a bunch). Let me know if a more recent traceback (than those
 earlier in the thread) would help.

 On 13 November 2014 at 16:02, d...@getbusi.com wrote:

 Oh sure, sorry:

  Squid Cache: Version 3.4.8
 configure options:  '--build=x86_64-redhat-linux-gnu'
 '--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu'
 '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr'
 '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc'
 '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64'
 '--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib'
 '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--exec_prefix=/usr'
 '--libexecdir=/usr/lib64/squid' '--localstatedir=/var'
 '--datadir=/usr/share/squid' '--sysconfdir=/etc/squid'
 '--with-logdir=$(localstatedir)/log/squid'
 '--with-pidfile=$(localstatedir)/run/squid.pid'
 '--disable-dependency-tracking' '--enable-follow-x-forwarded-for'
 '--enable-auth'
 '--enable-auth-basic=DB,LDAP,MSNT,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam'
 '--enable-auth-ntlm=smb_lm,fake'
 '--enable-auth-digest=file,LDAP,eDirectory'
 '--enable-auth-negotiate=kerberos,wrapper'
 '--enable-external-acl-helpers=wbinfo_group,kerberos_ldap_group,AD_group,session'
 '--enable-cache-digests' '--enable-cachemgr-hostname=localhost'
 '--enable-delay-pools' '--enable-epoll' '--enable-icap-client'
 '--enable-ident-lookups' '--enable-linux-netfilter' '--enable-referer-log'
 '--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl'
 '--enable-ssl-crtd' '--enable-storeio=aufs,diskd,ufs'
 '--enable-useragent-log' '--enable-wccpv2' '--enable-esi' '--with-aio'
 '--with-default-user=squid' '--with-filedescriptors=16384'
 '--with-maxfd=65535' '--with-dl' '--with-openssl' '--with-pthreads'
 '--with-included-ltdl' 'build_alias=x86_64-redhat-linux-gnu'
 'host_alias=x86_64-redhat-linux-gnu' 'target_alias=x86_64-redhat-linux-gnu'
 'CFLAGS=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
 -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'
 'CXXFLAGS=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
 -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC'
 'PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/share/pkgconfig'
 --enable-ltdl-convenience





 On Thu, Nov 13, 2014 at 4:01 PM, Amos Jeffries squ...@treenet.co.nz
 wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 7/11/2014 12:25 p.m., dan wrote:
  Bumping this with another backtrace. Happened at 16:05 this time,
  when the system was not very very busy.
 
  It’s causing squid to crash in such a way that I actually have to
  `kill -9` the process in order to get things restarted properly.
 
  Would really appreciate any feedback at all from anyone who can
  understand these back traces.


 Any hints, like what release version of Squid you are using?

 Amos



Re: [squid-users] HTTPS intercept, simple configuration to avoid bank bumping

2015-01-26 Thread Dan Charlesworth
Wasn't somebody saying that you'd need to write an External ACL to evaluate
the SNI host because dstdomain isn't hooked into that code (yet? ever?)?
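
From what I can tell, the ssl::server_name ACL added in Squid 3.5 is the
piece that does get fed the SNI once a step-1 peek has been done. A minimal,
untested sketch (the acl names and domain list are placeholders):

  acl step1 at_step SslBump1
  acl nobump_sni ssl::server_name .bank.example

  ssl_bump peek step1          # read the client hello to learn the SNI
  ssl_bump splice nobump_sni   # matched: tunnel the connection untouched
  ssl_bump bump all            # everything else gets bumped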

On 27 January 2015 at 08:33, Jason Haar jason_h...@trimble.com wrote:


 Well the documentation says

 #   SslBump1: After getting TCP-level and HTTP CONNECT info.
 #   SslBump2: After getting SSL Client Hello info.
 #   SslBump3: After getting SSL Server Hello info.


 So that means SslBump1 only works for direct proxy (ie CONNECT) sessions,
 it's SslBump2 that peeks into the traffic to discover the client SNI
 hostname. So I think you actually need (I'll use more descriptive acl names
 and comment out those that I think don't add any value):

 acl domains_nobump dstdomain /etc/squid/domains_nobump.acl
 #no added value: acl DiscoverCONNECTHost at_step SslBump1
 acl DiscoverSNIHost at_step SslBump2
 #don't use - breaks bump: acl DiscoverServerHost at_step SslBump3
 #no added value - in fact forces peek for some reason: ssl_bump peek DiscoverCONNECTHost all
 ssl_bump peek DiscoverSNIHost all

 ssl_bump splice domains_nobump
 #DiscoverSNIHost should now mean Squid knows about all the SNI details
 ssl_bump bump all

 Sadly, this doesn't work for me *in transparent mode*. Works fine when
 using squid as a formal proxy, but when used via https_port intercept, we
 end up with IP address certs instead of SNI certs.

 We really need someone who knows more to tell us how to make this work :-(


 --
 Cheers

 Jason Haar
 Corporate Information Security Manager, Trimble Navigation Ltd.
 Phone: +1 408 481 8171
 PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1




Re: [squid-users] Kerberos Authentication Failing for Windows 7+ with BH gss_accept_sec_context() failed

2014-10-25 Thread Dan Charlesworth
I was recently receiving this (incredibly vague) error. Turns out my squid user 
didn’t have permission to read the keytab.
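
A quick way to check for that, assuming squid runs as user "squid" (some
packages, e.g. Ubuntu's squid3, run it as "proxy") and the keytab path used
later in this thread:

  sudo -u squid klist -kt /etc/squid3/PROXY.keytab   # should list the entries
  # if that fails with "Permission denied":
  chown root:squid /etc/squid3/PROXY.keytab
  chmod 640 /etc/squid3/PROXY.keytab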

On Sat, Oct 25, 2014 at 8:37 PM, Pedro Lobo pal...@gmail.com wrote:

 Hi Markus,
 I used msktutil to create the keytab.
   msktutil -c -s HTTP/proxy01tst.fake.net -h proxy01tst.fake.net \
     -k /etc/squid3/PROXY.keytab --computer-name proxy01-tst \
     --upn HTTP/proxy01tst.fake.net --server srv01.fake.net --verbose
 Output of klist -ekt:
  2 10/24/2014 22:59:50 proxy01-tst$@FAKE.NET (arcfour-hmac)
  2 10/24/2014 22:59:50 proxy01-tst$@FAKE.NET (aes128-cts-hmac-sha1-96)
  2 10/24/2014 22:59:50 proxy01-tst$@FAKE.NET (aes256-cts-hmac-sha1-96)
  2 10/24/2014 22:59:50 HTTP/proxy01tst.fake.net@FAKE.NET (arcfour-hmac)
  2 10/24/2014 22:59:50 HTTP/proxy01tst.fake.net@FAKE.NET (aes128-cts-hmac-sha1-96)
  2 10/24/2014 22:59:50 HTTP/proxy01tst.fake.net@FAKE.NET (aes256-cts-hmac-sha1-96)
  2 10/24/2014 22:59:50 host/proxy01tst.fake.net@FAKE.NET (arcfour-hmac)
  2 10/24/2014 22:59:50 host/proxy01tst.fake.net@FAKE.NET (aes128-cts-hmac-sha1-96)
  2 10/24/2014 22:59:50 host/proxy01tst.fake.net@FAKE.NET (aes256-cts-hmac-sha1-96)
 Yep, using MIT Kerberos
 Thanks in advance for any help.
 Cheers,
 Pedro
 On 25 Oct 2014, at 1:26, Markus Moeller wrote:
 Hi Pedro,

  How did you create your keytab?  What does klist -ekt
  squid.keytab show (I assume you use MIT Kerberos)?

 Markus

 Pedro Lobo pal...@gmail.com wrote in message 
 news:40e1e0e7-50c6-4117-94aa-50b065734...@gmail.com...
 Hi Squid Gurus,

 I'm at my wit's end and in dire need of some squid expertise.

 We've got a production environment with a couple of squid 2.7 servers 
 using NTLM and basic authentication. Recently though, we decided to 
 upgrade and I'm now setting up squid 3.3 with Kerberos and NTLM 
 Fallback. I've followed just about every guide I could find and in my 
 testing environment, things were working great. Now that I've hooked 
 it up to the main domain, things are awry.

 If I use a machine that's not part of the domain, NTLM kicks in and I 
 can surf the web fine. If I use a Windows XP or Windows Server 2003
 machine, Kerberos works just fine; however, if I use a Windows 7, 8 or
 Server 2008 machine, I keep getting a popup asking me to authenticate,
 and even then it's an endless loop until it fails. My cache.log is littered
 with:

 negotiate_kerberos_auth.cc(200): pid=1607 :2014/10/24 23:03:01| 
 negotiate_kerberos_auth: ERROR: gss_accept_sec_context() failed: 
 Unspecified GSS failure.  Minor code may provide more information.
 2014/10/24 23:03:01| ERROR: Negotiate Authentication validating user. 
 Error returned 'BH gss_accept_sec_context() failed: Unspecified GSS 
 failure.  Minor code may provide more information. '
 The odd thing is that this has worked before. Help me, Obi-Wan...
 You're my only hope! :)

 Current Setup
 Squid 3.3 running on Ubuntu 14.04 server. It's connected to a 2003 
 server with functional level 2000 (I know, we're trying to phase out the
 older servers).

 krb5.conf

 [libdefaults]
 default_realm = FAKE.NET
 dns_lookup_kdc = yes
 dns_lookup_realm = yes
 ticket_lifetime = 24h
 default_keytab_name = /etc/squid3/PROXY.keytab

 ; for Windows 2003
 default_tgs_enctypes = rc4-hmac des-cbc-crc des-cbc-md5
 default_tkt_enctypes = rc4-hmac des-cbc-crc des-cbc-md5
 permitted_enctypes = rc4-hmac des-cbc-crc des-cbc-md5

 [realms]
 FAKE.NET = {
 kdc = srv01.fake.net
 kdc = srv02.fake.net
 kdc = srv03.fake.net
 admin_server = srv01.fake.net
 default_domain = fake.net
 }

 [domain_realm]
 .fake.net = FAKE.NET
 fake.net = FAKE.NET


 [logging]
 kdc = FILE:/var/log/kdc.log
 admin_server = FILE:/var/log/kadmin.log
 default = FILE:/var/log/krb5lib.log
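
 One thing that stands out: the keytab above carries aes128/aes256 entries,
 but the enctype lines here only permit rc4/des, while Windows 7 and later
 prefer AES by default. If that mismatch is in play, a variant permitting
 both - an untested sketch, and a 2003 KDC will still only issue rc4 - is:

 default_tgs_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96 rc4-hmac
 default_tkt_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96 rc4-hmac
 permitted_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96 rc4-hmac
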
 squid.conf

 auth_param negotiate program /usr/lib/squid3/negotiate_kerberos_auth -d -r -s HTTP/proxy01tst.fake.net
 auth_param negotiate children 20 startup=0 idle=1
 auth_param negotiate keep_alive off

 auth_param ntlm program /usr/bin/ntlm_auth --diagnostics --helper-protocol=squid-2.5-ntlmssp --domain=FAKE.NET
 auth_param ntlm children 10
 auth_param ntlm keep_alive off
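
 A kvno mismatch between the keytab and what the KDC currently holds fails
 with this same gss_accept_sec_context() error, so it is worth comparing
 the two; a quick check, assuming MIT client tools and the paths above:

 kinit someuser@FAKE.NET                   # any valid domain account
 kvno HTTP/proxy01tst.fake.net@FAKE.NET    # kvno the KDC hands out now
 klist -ekt /etc/squid3/PROXY.keytab       # keytab must show the same kvno
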
 Cheers,
 Pedro

 Regards
 Pedro Lobo
 Solutions Architect | System Engineer

 pedro.l...@pt.clara.net
 Tlm.: +351 939 528 827 | Tel.: +351 214 127 314

 Claranet Portugal
 Ed. Parque Expo
 Av. D. João II, 1.07-2.1, 4º Piso
 1998-014 Lisboa
 www.claranet.pt





 Company certified to ISO 9001, ISO 2 and ISO 27001




 