[squid-users] squid on its own server

2017-01-26 Thread John Pearson
hi all, my current setup: laptop (10.0.1.10), squid-box (10.0.1.11), and
Debian router (10.0.1.1).

I am doing wget on laptop

wget squid-cache.org

I am redirecting packets on the router to the squid-box by rewriting the
destination MAC address and the destination IP address and port. I can see
the packets reaching the squid-box, and in the squid log I am seeing many
lines like:

10.0.1.11 TCP_MISS/503 47502 GET http://squid-cache.org/ - ORIGINAL_DST/10.0.1.11 text/html

The log stream is really fast. All I see on the laptop is "HTTP request sent,
awaiting response ..." Any advice? Thanks!
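
For reference, the redirect described above is typically a router-side DNAT
rule along these lines (a sketch, not the poster's actual rule; the LAN
interface name and Squid's intercept port 3128 are assumptions):

  # On the Debian router: rewrite the laptop's web traffic to the squid-box.
  iptables -t nat -A PREROUTING -i eth1 -s 10.0.1.10 -p tcp --dport 80 \
    -j DNAT --to-destination 10.0.1.11:3128

Note that Squid's intercept mode recovers the pre-NAT destination from the
local conntrack table, so a NAT rewrite performed on a different box than
Squid cannot be looked up there.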
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] URL badly rewritten using the "deny_info" directive

2017-01-26 Thread Amos Jeffries
On 26/01/2017 1:00 a.m., javier.sanchez wrote:
> 
> Hi all.
> 
> 
> I am using squid 3.4.8 on Debian as a reverse proxy in order to protect
> with SSL some of our servers.
> 


> 
> How can I solve that? Is this a bug or something that can be solved
> changing configuration.

Please try the Debian backports package of the 3.5 version.

If the issue remains, then you may have to use url_rewrite_program with
a helper to do the redirection instead of deny_info.

Have the helper produce the same 301:https://... redirects that deny_info
should have produced, and configure http_access to allow the HTTP traffic.
That helper interface is a bit slower, but it will definitely work.
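
A minimal sketch of such a helper, assuming the classic one-request-per-line
redirector interface with helper concurrency disabled (the script name and
the blanket http->https rule are illustrative, not from the original post):

  #!/bin/sh
  # Reads "URL extras..." lines on stdin. Answers "301:URL" to redirect,
  # or an empty line to leave the request unchanged.
  while read -r url rest; do
    case "$url" in
      http://*) echo "301:https://${url#http://}" ;;
      *)        echo ;;   # not plain HTTP: no change
    esac
  done

Hooked into squid.conf with something like:

  url_rewrite_program /usr/local/bin/to-https.sh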

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid3 : Really need to use external (slow) acl with cache_peer_access

2017-01-26 Thread Amos Jeffries
On 25/01/2017 10:29 p.m., ho...@free.fr wrote:
> 
> Hi everybody,
> 
> I really tried to find an answer with Google, and within
> the archives of this mailing list but couldn't find anything
> so... here I am...
> 
> I need to select a squid parent based on the login of the
> user (and others things). With squid 2.7, I had a configuration
> like this one :
> 
> -
> cache_peer 169.254.1.1 parent 3128 0 default name=parent1
> cache_peer 169.254.1.2 parent 3128 0 default name=parent2
> [...] (many parents)
> 
> external_acl_type choose_parent ttl=60,children-max=1 %EXT_USER %SRC %LOGIN 
> %ACL /home/user/myhelper.sh
> acl p0 external choose_parent
> 
> external_acl_type myparent1 ttl=60,children-max=1 %ACL %EXT_USER  
> /home/user/another_helper
> acl p1 external myparent1
> external_acl_type myparent2 ttl=60,children-max=1 %ACL %EXT_USER  
> /home/user/another_helper
> acl p2 external myparent2
> [...]
> 
> cache_peer_access parent1 allow p1
> cache_peer_access parent2 allow p2
> [...]
> 
> cache_peer_access parent1 deny all
> cache_peer_access parent2 deny all
> [...]
> 
> ---
> 
> The idea is to deny all squid parents except the one I want this user
> (with this specific IP and so on) to use.
> 
> But with squid3, I just have lots of errors in cache.log:
> 
> 2017/01/25 10:22:16.053 kid1| external_acl.cc(868) aclMatchExternal: 
> myparent1("p1 p1") = lookup needed
> 2017/01/25 10:22:16.053 kid1| external_acl.cc(871) aclMatchExternal: "p1 p1": 
> queueing a call.
> 2017/01/25 10:22:16.053 kid1| Checklist.cc(115) goAsync: 0x7fff415cf470 a 
> fast-only directive uses a slow ACL!
> 2017/01/25 10:22:16.053 kid1| external_acl.cc(873) aclMatchExternal: "p1 p1": 
> no async support!
> 2017/01/25 10:22:16.053 kid1| external_acl.cc(874) aclMatchExternal: "p1 p1": 
> return -1.
> 
> The documentation makes it perfectly clear that "cache_peer_access" is a
> "fast" check that can only use fast ACLs...
> But I really need to use an external "slow" acl. Please, is there a way to
> do it?
> Again, this was working in 2.7 :(


Well, no. 2.7 was just being silent about the situation and guessing
whether you wanted an OK or ERR result, whereas Squid-3 tells you when a
fast-only check cannot handle a slow ACL lookup.

What you need to do is perform the external ACL check during one of the
*_access checks that permits slow lookups, e.g. http_access.

Then use the 'note' ACL type in your fast-only access controls to check
some annotation that the helper returns to Squid.
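
A minimal sketch of that arrangement, assuming Squid 3.4+ helper annotations
(the helper name, the "parent=" kv-pair and the peer names are illustrative,
not from the original post):

  external_acl_type choose_parent ttl=60 children-max=1 %LOGIN %SRC \
      /home/user/myhelper.sh
  acl parent_chosen external choose_parent

  # The slow lookup runs here, where async ACL checks are permitted.
  # The helper replies with e.g.:  OK parent=parent1
  http_access allow parent_chosen

  # Fast-only peer selection then matches the stored annotation.
  acl use_parent1 note parent parent1
  acl use_parent2 note parent parent2
  cache_peer_access parent1 allow use_parent1
  cache_peer_access parent1 deny all
  cache_peer_access parent2 allow use_parent2
  cache_peer_access parent2 deny all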

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Not all html objects are being cached

2017-01-26 Thread Amos Jeffries
On 27/01/2017 9:46 a.m., Yuri Voinov wrote:
> 
> 
> 27.01.2017 2:44, Matus UHLAR - fantomas wrote:
>>> 26.01.2017 2:22, boruc wrote:
>>>> After a little bit of analyzing requests and responses with WireShark I
>>>> noticed that many sites that weren't cached had different
>>>> combinations of the parameters below:
>>>>
>>>> Cache-Control: no-cache, no-store, must-revalidate, post-check,
>>>> pre-check, private, public, max-age, public
>>>> Pragma: no-cache
>>
>> On 26.01.17 02:44, Yuri Voinov wrote:
>>> If the webmaster has done this - he had good reason to. Trying to break
>>> the RFC in this way, you break the Internet.
>>
>> Actually, no. If the webmaster has done the above - he has no damn idea
>> what those mean (private and public?), and how to provide properly
>> cacheable content.
> It was sarcasm.


You may have intended it to be. But you spoke the simple truth.

Other than the stray 'public', there really are situations where there is
"good reason" to send that whole set of controls at once.

For example: any admin who wants a RESTful or SaaS application to
actually work for all their potential customers.


I have been watching the below cycle take place for the past 20 years in
HTTP:

Webmaster: don't cache this please.

  "Cache-Control: no-store"

Proxy Admin: ignore-no-store


Webmaster: I meant it. Don't deliver anything you cached without fetching
an updated version.

  ... "no-store, no-cache"

Proxy Admin: ignore-no-cache


Webmaster: Really, you MUST revalidate before using this data.

 ... "no-store, no-cache, must-revalidate"

Proxy Admin: ignore-must-revalidate


Webmaster: Really I meant it. This is non-storable PRIVATE DATA!

... "no-store, no-cache, must-revalidate, private"

Proxy Admin: ignore-private


Webmaster: Seriously. I'm changing it on EVERY request! Don't store it.

... "no-store, no-cache, must-revalidate, private, max-age=0"
"Expires: -1"

Proxy Admin: ignore-expires


Webmaster: Are you one of those dumb HTTP/1.0 proxies that don't
understand Cache-Control?

"Pragma: no-cache"
"Expires: 1 Jan 1970"

Proxy Admin: hehe! I already ignore-no-cache ignore-expires


Webmaster: F*U!  May your clients batch up their traffic to slam you
with it all at once!

... "no-store, no-cache, must-revalidate, private, max-age=0,
pre-check=1, post-check=1"


Proxy Admin: My bandwidth! I need to cache more!

Webmaster: Doh! Oh well, so I have to write my application to force new
content then.

Proxy Admin: ignore-reload


Webmaster: Now what? Oh, HTTPS won't have any damn proxies in the way.

... the cycle repeats again within HTTPS. Took all of 5 years this time.

... the cycle repeats again within SPDY. That took only ~1 year.

... the cycle repeats again within CoAP. The standards are not even
finished yet and it's already underway.


Stop this cycle of stupidity. It really HAS "broken the Internet".
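
For reference: the overrides traded in this exchange are refresh_pattern
options in squid.conf (several of them were removed during the 3.x series,
so availability depends on the version). A sketch of the abuse, shown only
to identify the knobs, not to recommend them:

  refresh_pattern -i \.html$ 60 20% 1440 ignore-no-store ignore-private ignore-reload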


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Not all html objects are being cached

2017-01-26 Thread Amos Jeffries
On 27/01/2017 9:44 a.m., Matus UHLAR - fantomas wrote:
>> 26.01.2017 2:22, boruc wrote:
>>> After a little bit of analyzing requests and responses with WireShark I
>>> noticed that many sites that weren't cached had different combinations of
>>> the parameters below:
>>>
>>> Cache-Control: no-cache, no-store, must-revalidate, post-check,
>>> pre-check,
>>> private, public, max-age, public
>>> Pragma: no-cache
> 
> On 26.01.17 02:44, Yuri Voinov wrote:
>> If the webmaster has done this - he had good reason to. Trying to break
>> the RFC in this way, you break the Internet.
> 
> Actually, no. If the webmaster has done the above - he has no damn idea
> what those mean (private and public?), and how to provide properly
> cacheable content.
> 


I think boruc has just listed, in one line, all the cache controls he has
noticed - not actually what is being seen ...


> Which is very common and also a reason why many proxy admins tend to ignore
> those controls...
> 

... the URLs used for expanded details show the usual combos webmasters
use to 'fix' the broken behaviour of such proxies. For example, adding
"no-cache, private, max-age=0" to get around proxies that ignore some of
the controls.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ssl_bump - peek & splice logging IP rather than server name

2017-01-26 Thread Eliezer Croitoru
Is it stock Red Hat, CentOS, or something else?
You can use the Squid RPMs I am building, which are quite generic and work
very well.
I have just released 3.5.23 and 4.0.17.
I have built it both for RH and CentOS:
http://wiki.squid-cache.org/KnowledgeBase/CentOS#Squid-3.5
And you can take a peek and browse at:
http://www1.ngtech.co.il/repo/
If what you are running is not there, let me know and I will try to build a
binary for it.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Alex Rousskov
Sent: Friday, January 27, 2017 1:57 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] ssl_bump - peek & splice logging IP rather than 
server name

[...]

Re: [squid-users] Not all html objects are being cached

2017-01-26 Thread Amos Jeffries
On 27/01/2017 11:08 a.m., reinerotto wrote:
>> reply_header_access Cache-Control deny all
> Will this only affect downstream caches, or will this squid itself also
> ignore any Cache-Control header info received from upstream?
> 

It will only affect the clients' caches, e.g. their browser cache.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ssl_bump - peek & splice logging IP rather than server name

2017-01-26 Thread Alex Rousskov
On 01/26/2017 03:38 PM, Mark Hoare wrote:

> To reiterate, my desire is to have Squid running and capable of blocking
> access to http and https sites primarily based on the server name
> requested by the client (so no need to go beyond a peek)

... or even beyond a peek at the client.


> From everything I’ve read, it looks like the following ssl_bump lines
> should provide access to the SNI server name requested by the client. 
> ssl_bump peek all
> ssl_bump splice all

Yes, but you are also telling Squid to peek at the server certificate.
If you want to avoid doing that, then replace "peek all" with "peek
step1" while providing the right step1 ACL. The SNI-based denial you
want should happen earlier anyway, but if you do peek at the server
certificate, then you may also deny later, based on server-supplied
information (e.g., when SNI was missing or was not matching). Whether
more denials based on server info is a good thing is your decision.
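
A minimal sketch of that peek-at-client-only variant (the ACL name "step1"
is conventional; at_step is available in Squid 3.5):

  acl step1 at_step SslBump1
  ssl_bump peek step1
  ssl_bump splice all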


> I can’t help thinking that I must have something wrong with my config:
> - Log output correctly shows 
> - SNI server name via ssl::>sni 
> - Bump mode via ssl::bump_mode 
> - Implies my ssl_bump config is working
> - Works fine for HTTP

Great!


> - Restricting access via a squid ACL doesn’t use the SNI server name for
> an HTTPS request 

You may be right, but not for the reasons you think. The output you have
shown does not necessarily confirm any problems.


> Example ACL:
> acl blocked_sites ssl::server_name .apple.com
> http_access deny blocked_sites

Please note that your Squid version is missing a critical
ssl::server_name fix detailed below.


> Example access log output (custom logformat: %ts.%03tu %6tr %>a
> %Ss/%03>Hs ... %ssl::>sni %ssl::bump_mode %[un %Sh/%<a ...):
> 1485468402.401    575 10.1.0.1 TCP_TUNNEL/200 592 CONNECT 23.63.86.92:443
> store.apple.com peek - ORIGINAL_DST/23.63.86.92 -

> 1485469054.633     51 10.1.0.1 TCP_DENIED/403 3962 GET http://store.apple.com/
> - - - HIER_NONE/- text/html

The above shows that Squid peeked and denied access. To serve the error
page to the client, Squid bumped the client connection first and then
denied the first encrypted HTTP request. This is normal/expected.


> Example cache log output:
> 2017/01/26 21:54:21.745 kid1| 28,5| Acl.cc(138) matches: checking 
> blocked_sites
> 2017/01/26 21:54:21.745 kid1| 28,3| ServerName.cc(42) match: checking 
> '23.63.86.92'
> 2017/01/26 21:54:21.745 kid1| 28,7| ServerName.cc(32) aclHostDomainCompare: 
> Match:23.63.86.92 <> .apple.com
> 2017/01/26 21:54:21.745 kid1| 28,3| ServerName.cc(47) match: '23.63.86.92' 
> NOT found
> 2017/01/26 21:54:21.745 kid1| 28,3| ServerName.cc(42) match: checking 'none'
> 2017/01/26 21:54:21.745 kid1| 28,7| ServerName.cc(32) aclHostDomainCompare: 
> Match:none <> .apple.com
> 2017/01/26 21:54:21.745 kid1| 28,3| ServerName.cc(47) match: 'none' NOT found
> 2017/01/26 21:54:21.745 kid1| 28,5| Checklist.cc(400) bannedAction: Action 
> 'ALLOWED/0' is not banned

If the above was for step1 checks, then it makes sense: The access was
not banned based on TCP level information. Proceed to step2 (extract SNI
and test again). There should be more checks like the above, and then
Squid decided to deny access. However, the timing of that decision and
the sources of information used for that decision may change after the
ssl::server_name fix mentioned below.


> Squid Cache: Version 3.5.20

You are missing the following server_name fix (among other things):

> revno: 14110
> branch nick: 3.5
> timestamp: Mon 2016-11-14 23:51:24 +1300
> message:
>   Fix ssl::server_name ACL badly broken since inception.
>   
>   The original server_name code mishandled all SNI checks and some rare
>   host checks:
>   
>   * The SNI-derived value was pointing to an already freed memory storage.
>   * Missing host-derived values were not detected (host() is never nil).
>   * Mismatches were re-checked with an undocumented "none" value
> instead of being treated as mismatches.
>   
>   Same for ssl::server_name_regex.
>   
>   Also set SNI for more server-first and client-first transactions.
>   
>   This is a Measurement Factory project.

The first rule of SslBumping: Use the latest code.


HTH,

Alex.
P.S. Please avoid HTMLifying your emails, especially when quoting logs.



>> On 3 Jan 2017, at 23:35, Alex Rousskov wrote:
>>
>> On 01/03/2017 04:11 PM, Mark Hoare wrote:
>>
>>> I think these are hangovers from earlier syntax (ssl_bump
>>> server-first all) which shouldn't be required under 3.5.
>>
>> Please note that the deprecated server-first is a "bumping" (not
>> splicing!) action, and you may see a lot more information in the
>> bumping-Squid logs, naturally.
>>
>> Alex.
>>
> 
>> On 3 Jan 2017, at 23:10, Alex Rousskov wrote:
>>
>> On 01/03/2017 03:41 PM, Eliezer Croitoru wrote:
>>
>>> Squid in intercept or tproxy mode will know one thing about the
>>> tunnel\co

Re: [squid-users] ssl_bump - peek & splice logging IP rather than server name

2017-01-26 Thread Mark Hoare
Alex/Eliezer,

Thanks for your earlier comments, and apologies for not responding (and for
not saying thank you sooner; Squid got back-burnered, unfortunately).

Getting logging working with transparent proxying was my initial step prior to 
looking at restricting specific sites via either ACLs or a URL rewriter 
(ufdbGuard, SquidGuard etc - although I don’t think SquidGuard works with SNI) 

To reiterate, my desire is to have Squid running and capable of blocking access 
to http and https sites primarily based on the server name requested by the 
client (so no need to go beyond a peek)
For HTTP requests this is obviously out of the box stuff but for HTTPS it seems 
more complicated.

From everything I’ve read, it looks like the following ssl_bump lines should 
provide access to the SNI server name requested by the client. 
ssl_bump peek all
ssl_bump splice all

I can’t help thinking that I must have something wrong with my config:
- Log output correctly shows 
- SNI server name via ssl::>sni 
- Bump mode via ssl::bump_mode 
- Implies my ssl_bump config is working
- Restricting access via a squid ACL doesn’t use the SNI server name for an 
HTTPS request 
- Works fine for HTTP

Example ACL:
acl blocked_sites ssl::server_name .apple.com
http_access deny blocked_sites

Example access log output (custom logformat: %ts.%03tu %6tr %>a %Ss/%03>Hs
... %ssl::>sni %ssl::bump_mode %[un %Sh/%<a ...):
1485468402.401    575 10.1.0.1 TCP_TUNNEL/200 592 CONNECT 23.63.86.92:443
store.apple.com peek - ORIGINAL_DST/23.63.86.92 -

1485469054.633     51 10.1.0.1 TCP_DENIED/403 3962 GET http://store.apple.com/
- - - HIER_NONE/- text/html

Example cache log output:
2017/01/26 21:54:21.745 kid1| 28,5| Acl.cc(138) matches: checking blocked_sites
2017/01/26 21:54:21.745 kid1| 28,3| ServerName.cc(42) match: checking 
'23.63.86.92'
2017/01/26 21:54:21.745 kid1| 28,7| ServerName.cc(32) aclHostDomainCompare: 
Match:23.63.86.92 <>  .apple.com
2017/01/26 21:54:21.745 kid1| 28,3| ServerName.cc(47) match: '23.63.86.92' NOT 
found
2017/01/26 21:54:21.745 kid1| 28,3| ServerName.cc(42) match: checking 'none'
2017/01/26 21:54:21.745 kid1| 28,7| ServerName.cc(32) aclHostDomainCompare: 
Match:none <>  .apple.com
2017/01/26 21:54:21.745 kid1| 28,3| ServerName.cc(47) match: 'none' NOT found
2017/01/26 21:54:21.745 kid1| 28,3| Acl.cc(158) matches: checked: blocked_sites 
= 0
2017/01/26 21:54:21.745 kid1| 28,3| Acl.cc(158) matches: checked: http_access#5 
= 0
2017/01/26 21:54:21.745 kid1| 28,5| Checklist.cc(400) bannedAction: Action 
'ALLOWED/0' is not banned

squid -v output:
Squid Cache: Version 3.5.20
Service Name: squid
configure options:  '--build=x86_64-redhat-linux-gnu' 
'--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' 
'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' 
'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' 
'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib' 
'--mandir=/usr/share/man' '--infodir=/usr/share/info' 
'--disable-strict-error-checking' '--exec_prefix=/usr' 
'--libexecdir=/usr/lib64/squid' '--localstatedir=/var' 
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' 
'--with-logdir=$(localstatedir)/log/squid' 
'--with-pidfile=$(localstatedir)/run/squid.pid' '--disable-dependency-tracking' 
'--enable-eui' '--enable-follow-x-forwarded-for' '--enable-auth' 
'--enable-auth-basic=DB,LDAP,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,SMB_LM,getpwnam'
 '--enable-auth-ntlm=smb_lm,fake' '--enable-auth-digest=file,LDAP,eDirectory' 
'--enable-auth-negotiate=kerberos' 
'--enable-external-acl-helpers=file_userip,LDAP_group,time_quota,session,unix_group,wbinfo_group'
 '--enable-cache-digests' '--enable-cachemgr-hostname=localhost' 
'--enable-delay-pools' '--enable-epoll' '--enable-ident-lookups' 
'--enable-linux-netfilter' '--enable-removal-policies=heap,lru' '--enable-snmp' 
'--enable-ssl-crtd' '--enable-storeio=aufs,diskd,ufs' '--enable-wccpv2' 
'--enable-esi' '--enable-ecap' '--with-aio' '--with-default-user=squid' 
'--with-dl' '--with-openssl' '--with-pthreads' '--disable-arch-native' 
'--disable-icap-client' 'build_alias=x86_64-redhat-linux-gnu' 
'host_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong 
--param=ssp-buffer-size=4 -grecord-gcc-switches   -m64 -mtune=generic -fpie' 
'LDFLAGS=-Wl,-z,relro  -pie -Wl,-z,relro -Wl,-z,now' 'CXXFLAGS=-O2 -g -pipe 
-Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong 
--param=ssp-buffer-size=4 -grecord-gcc-switches   -m64 -mtune=generic -fpie' 
'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'

Is there anything obvious that I am missing? I’m a bit stumped now.

Thanks again

Mark

> On 3 Jan 2017, at 23:35, Alex Rousskov wrote:
> 
> On 01/03/2017 04:11 PM, Mark Hoare wrote:
> 

>> I think these are hangovers from earlier syntax (ssl_bump
>> server-first all) which shouldn't be required under 3.5.
> 

> Please note that the deprecated server-first is a "bumping" (not
> splicing!) action, and you may see a lot more information in the
> bumping-Squid logs, naturally.

Re: [squid-users] Not all html objects are being cached

2017-01-26 Thread reinerotto
> reply_header_access Cache-Control deny all
Will this only affect downstream caches, or will this squid itself also
ignore any Cache-Control header info received from upstream?




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Not-all-html-objects-are-being-cached-tp4681293p4681339.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Native FTP relay: connection closes (?) after 'cannot assign requested address' error

2017-01-26 Thread Matus UHLAR - fantomas

On 26.01.17 08:41, Alexander wrote:

It seems that I have solved the issue by using nf_conntrack_ftp and
redirecting "NEW,RELATED" traffic to squid:

ftp_port 2121 intercept

modprobe nf_conntrack_ftp ports=2121

iptables -t nat -A PREROUTING -p tcp --dport 21 -j REDIRECT --to-port 2121
iptables -t nat -A PREROUTING -p tcp -m state --state NEW,RELATED -j
REDIRECT


just note that connections may be RELATED to connections other than
FTP...

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
BSE = Mad Cow Desease ... BSA = Mad Software Producents Desease
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Not all html objects are being cached

2017-01-26 Thread Yuri Voinov


27.01.2017 2:44, Matus UHLAR - fantomas wrote:
>> 26.01.2017 2:22, boruc wrote:
>>> After a little bit of analyzing requests and responses with WireShark I
>>> noticed that many sites that weren't cached had different
>>> combinations of the parameters below:
>>>
>>> Cache-Control: no-cache, no-store, must-revalidate, post-check,
>>> pre-check, private, public, max-age, public
>>> Pragma: no-cache
>
> On 26.01.17 02:44, Yuri Voinov wrote:
>> If the webmaster has done this - he had good reason to. Trying to break
>> the RFC in this way, you break the Internet.
>
> Actually, no. If the webmaster has done the above - he has no damn idea
> what those mean (private and public?), and how to provide properly
> cacheable content.
It was sarcasm.
>
> Which is very common and also a reason why many proxy admins tend to
> ignore those controls...
>

-- 
Bugs to the Future


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Not all html objects are being cached

2017-01-26 Thread Matus UHLAR - fantomas

26.01.2017 2:22, boruc wrote:

After a little bit of analyzing requests and responses with WireShark I
noticed that many sites that weren't cached had different combinations of
the parameters below:

Cache-Control: no-cache, no-store, must-revalidate, post-check, pre-check,
private, public, max-age, public
Pragma: no-cache


On 26.01.17 02:44, Yuri Voinov wrote:

If the webmaster has done this - he had good reason to. Trying to break
the RFC in this way, you break the Internet.


Actually, no. If the webmaster has done the above - he has no damn idea what
those mean (private and public?) , and how to provide properly cacheable
content.

Which is very common and also a reason why many proxy admins tend to ignore
those controls...

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
There's a long-standing bug relating to the x86 architecture that
allows you to install Windows.   -- Matthew D. Fuller
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid reverse proxy (accelerator) for MS Exchange OWA

2017-01-26 Thread Alex Rousskov
On 01/26/2017 03:16 AM, Vieri wrote:

> I'm guessing that it
> should be possible for Squid to tell OpenSSL to report what it
> actually said to the server without the need for an admin to do a
> traffic dump and analysis.

You are correct, but, in most cases, it is a lot easier to dump and
analyze traffic than to ask OpenSSL inside Squid to report what it
actually said. This is, in part, Squid's own fault, but OpenSSL does not
make it easy either. More on that below at (**).


> Let's take this simple example into consideration where I use cURL to connect 
> to the same MS Exchange server:
> 
> # curl -k -v https://10.215.144.21
> * Rebuilt URL to: https://10.215.144.21/
> *   Trying 10.215.144.21...
> * Connected to 10.215.144.21 (10.215.144.21) port 443 (#0)
> * ALPN, offering http/1.1
> * Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
> * successfully set certificate verify locations:
> *   CAfile: /etc/ssl/certs/ca-certificates.crt
> CApath: /etc/ssl/certs
> * TLSv1.2 (OUT), TLS header, Certificate Status (22):
> * TLSv1.2 (OUT), TLS handshake, Client hello (1):
> * Unknown SSL protocol error in connection to 10.215.144.21:443 
> * Closing connection 0
> curl: (35) Unknown SSL protocol error in connection to 10.215.144.21:443
> 
> Now that's clear. cURL tried to use TLS1.2 and failed. The nasty server 
> didn't even say hello.

Please note that the information you see above details what Curl did,
not what OpenSSL did.

(**) Squid prints many similar details as well, but because Squid
generally deals with many concurrent transactions over a long time, and
does a lot more than Curl does, those details are not as easy to
find/isolate in Squid debugging logs. You found some of them. Again,
Squid could do a lot better in this area, but nobody is working on that
right now AFAIK, and even my attempts to form consensus on how this
should be done have failed so far.


> It's interesting to note that the following actually DOES give more 
> information (unsupported protocol):

* If the server sent nothing, then Curl gave you potentially incorrect
information (i.e., Curl is just _guessing_ what went wrong).

* If the server sent something, then either Squid situation was
different or I did not see that additional info in the logs you have
posted.

Based on everything I have seen so far, it is probably the former -- the
server sent nothing.


> I haven't looked at the source code but I guess Squid uses the OpenSSL 
> library.

Yes, in your case, it does.


> However, I haven't found any hint of what the client (cache_peer) tried to 
> offer.

Cache_peer is not the client here, but yes, see (**) above.


> Maybe if Squid gets an SSL negotiation error with no apparent reason
> then it might need to retry connecting by being more explicit, just
> like in my cURL and openssl binary examples above.

Sorry, I do not know what "retry connecting by being more explicit"
means. AFAICT, neither Curl nor s_client tried reconnecting in your
examples. Also, an appropriate default for a command-line client is
often a bad default for a proxy. It is complicated.


> I would have understood earlier the reason of the connection failure
> if Squid/OpenSSL had logged how they were actually hitting on the
> server.

Agreed.


> Anyway, it's not a big deal now that I know what to do if this kind
> of connection issue comes back up. It could be useful to others
> though if the logging could be a tad more verbose or if Squid could
> retry connections by explicitly specifying protocols (and logging
> them).

Agreed in general, but the devil is in the details. Improving this is
difficult, and nobody is working on it at the moment AFAIK.

http://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F


Cheers,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Native FTP relay: connection closes (?) after 'cannot assign requested address' error

2017-01-26 Thread Alexander
Well, actually these rules are just a proof of concept, and there is
something to think about later. The redirection rule should be more precise
and include the destination address. Also, the 'NEW' state should probably
be excluded from the list; see the sketch below.
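
A possible tightening along those lines (a sketch; 192.0.2.10 stands in for
the actual FTP server address, which was not given):

  # Intercept only FTP control traffic to the known server...
  iptables -t nat -A PREROUTING -p tcp -d 192.0.2.10 --dport 21 \
    -j REDIRECT --to-port 2121
  # ...and redirect only conntrack-RELATED traffic (e.g. FTP data
  # connections), instead of catching all NEW connections as well.
  iptables -t nat -A PREROUTING -p tcp -d 192.0.2.10 \
    -m state --state RELATED -j REDIRECT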



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Native-FTP-relay-connection-closes-after-cannot-assign-requested-address-error-tp4681208p4681334.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Native FTP relay: connection closes (?) after 'cannot assign requested address' error

2017-01-26 Thread Antony Stone
On Thursday 26 January 2017 at 17:41:21, Alexander wrote:

> It seems that I have solved the issue by using nf_conntrack_ftp and
> redirecting "NEW,RELATED" traffic to squid:

Excellent news.

> ftp_port 2121 intercept
> 
> modprobe nf_conntrack_ftp ports=2121
> 
> iptables -t nat -A PREROUTING -p tcp --dport 21 -j REDIRECT --to-port 2121
> iptables -t nat -A PREROUTING -p tcp -m state --state NEW,RELATED -j
> REDIRECT

Just out of interest, how are you getting the FTP traffic to the Squid box in 
the first place?

I assume you're not routing all Internet-bound traffic via this machine 
(otherwise that second REDIRECT rule would cause problems for SSH, SMTP, IMAP, 
etc), so how are you identifying the FTP traffic to get it from your router to 
the Squid box?


Antony.

-- 
Police have found a cartoonist dead in his house.  They say that details are 
currently sketchy.

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Native FTP relay: connection closes (?) after 'cannot assign requested address' error

2017-01-26 Thread Alexander
It seems that I have solved the issue by using nf_conntrack_ftp and
redirecting "NEW,RELATED" traffic to squid:

ftp_port 2121 intercept

modprobe nf_conntrack_ftp ports=2121

iptables -t nat -A PREROUTING -p tcp --dport 21 -j REDIRECT --to-port 2121
iptables -t nat -A PREROUTING -p tcp -m state --state NEW,RELATED -j
REDIRECT

Thank you for your time.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Native-FTP-relay-connection-closes-after-cannot-assign-requested-address-error-tp4681208p4681332.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid reverse proxy (accelerator) for MS Exchange OWA

2017-01-26 Thread Vieri


- Original Message -
From: Alex Rousskov 
> If my reconstruction of the events was correct, then OpenSSL supplied as
> much information as it could -- the "unsupported TLS/SSL versions" is
> _your_ conclusion based on the information that neither Squid nor
> OpenSSL had access to.
>
> 
>> I'm only supposing that
>> without the ssloptions I posted above, openssl will try TLS 1.2 and
>> silently fail if that doesn't succeed.
>
> It takes two to tango. How silent that failure is depends, in part, on
> the server. AFAICT, your server was 100% silent about the reason behind
> its abrupt connection closure, and OpenSSL correctly declined to
> speculate about those reasons due to lack of info. From OpenSSL/client
> point of view, it could have been anything from an unsupported TLS
> version to a crashed server.


Thanks for taking the time to explain. I understand the point you make, but
I'm still a bit sceptical, probably due to my lack of knowledge in this domain.

I haven't read the RFCs for TLSv1*, SSLv*, etc. However, let's try to give a 
simple and straightforward example, just to clear things up. Suppose you (the 
client) meet someone (the server) and ask her/him out. That person can turn 
away and refuse your proposal without saying a word so you'll never know the 
reason. That's what you explained and I get that. However, you must "obviously" 
know what you told that person. Maybe it's not that "obvious" in the case of 
Squid & OpenSSL, but I'm guessing that it should be possible for Squid to tell 
OpenSSL to report what it actually said to the server without the need for an 
admin to do a traffic dump and analysis.

Let's take this simple example into consideration where I use cURL to connect 
to the same MS Exchange server:

# curl -k -v https://10.215.144.21
* Rebuilt URL to: https://10.215.144.21/
*   Trying 10.215.144.21...
* Connected to 10.215.144.21 (10.215.144.21) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* Unknown SSL protocol error in connection to 10.215.144.21:443 
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to 10.215.144.21:443

Now that's clear. cURL tried to use TLS1.2 and failed. The nasty server didn't 
even say hello.

It's interesting to note that the following actually DOES give more information 
(unsupported protocol):

# curl --tlsv1.1 -k -v https://10.215.144.21/
*   Trying 10.215.144.21...
* Connected to 10.215.144.21 (10.215.144.21) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.1 (OUT), TLS header, Certificate Status (22):
* TLSv1.1 (OUT), TLS handshake, Client hello (1):
* error:14077102:SSL routines:SSL23_GET_SERVER_HELLO:unsupported protocol
* Closing connection 0
curl: (35) error:14077102:SSL routines:SSL23_GET_SERVER_HELLO:unsupported 
protocol


Of course, the following test succeeds:

# curl --tlsv1.0 -k -v https://10.215.144.21/
*   Trying 10.215.144.21...
* Connected to 10.215.144.21 (10.215.144.21) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.0 (OUT), TLS handshake, Client hello (1):
* TLSv1.0 (IN), TLS handshake, Server hello (2):
* TLSv1.0 (IN), TLS handshake, Certificate (11):
* TLSv1.0 (IN), TLS handshake, Server finished (14):
* TLSv1.0 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.0 (OUT), TLS change cipher, Client hello (1):
* TLSv1.0 (OUT), TLS handshake, Finished (20):
* TLSv1.0 (IN), TLS change cipher, Client hello (1):
* TLSv1.0 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.0 / DES-CBC3-SHA


Now with the openssl command-line client:

#  openssl s_client -connect 10.215.144.21:443 -tls1_2
CONNECTED(0003)
3072153276:error:1409E0E5:SSL routines:ssl3_write_bytes:ssl handshake 
failure:s3_pkt.c:656:


#  openssl s_client -connect 10.215.144.21:443 -tls1_1
CONNECTED(0003)
3072530108:error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version 
number:s3_pkt.c:362:


...and it obviously works with -tls1.
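
For reference, the equivalent protocol pinning on the Squid side is done on
the cache_peer line; a sketch in Squid 3.5 syntax (sslversion=4 selects
TLSv1.0; the name= and the other options are illustrative, not the exact
line from the config being discussed):

  cache_peer 10.215.144.21 parent 443 0 no-query originserver ssl \
      sslversion=4 sslflags=DONT_VERIFY_PEER name=owa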

I haven't looked at the source code but I guess Squid uses the OpenSSL library.

I searched the Squid log above and below these lines:

2017/01/24 17:20:28.997 kid1| 83,5| NegotiationHistory.cc(83) 
retrieveNegotiatedInfo: SSL connection info on FD 18 SSL version NONE/0.0 
negotiated cipher 
2017/01/24 17:20:28.997 kid1| Error negotiating SSL on FD