Re: [squid-users] Reverse DNS Lookup for client IPs

2017-06-27 Thread Eliezer Croitoru
Thanks Alex,

Now it makes more sense and I will try to follow up there.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: Alex Rousskov [mailto:rouss...@measurement-factory.com] 
Sent: Tuesday, June 27, 2017 22:44
To: Eliezer Croitoru ; 'Ralf Hildebrandt' 
; squid-us...@squid-cache.org
Subject: Re: [squid-users] Reverse DNS Lookup for client IPs

On 06/27/2017 08:19 AM, Eliezer Croitoru wrote:

> Can you put a link to the thread here?

The best relevant link is probably bug #4575:

  http://bugs.squid-cache.org/show_bug.cgi?id=4575

Alex.


> Are you talking about this thread:
> http://lists.squid-cache.org/pipermail/squid-users/2016-February/008999.html
> 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/Reverse-DNS-Lookup-for-client-IPs-td4675872.html
> 
> Thanks,
> Eliezer
> 
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
> 
> 
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of Ralf Hildebrandt
> Sent: Tuesday, June 20, 2017 14:35
> To: squid-us...@squid-cache.org
> Subject: [squid-users] Reverse DNS Lookup for client IPs
> 
> I have to chime in on the "Reverse DNS Lookup for client IPs" thread back in 
> Feb 2016. I tried redefining the logging format for url_rewrite_extras and 
> store_id_extras in the config, but that wouldn't work.
> 
> I had to change the file src/cf.data.pre and recompiled, after that the 
> number of reverse lookups dropped considerably.
> 




Re: [squid-users] NTLM authentication worked in Squid 2.7.STABLE8 Squid Web Proxy, now need it in v3.5 hosted on Windows server 2k12

2017-06-27 Thread Todd Pearson
I appreciate the input.  Do you (or anyone else) know if a keytab is required in
a Windows-only environment for Kerberos authentication?

  From: Amos Jeffries 
 To: Todd Pearson ; "squid-users@lists.squid-cache.org" 
 
 Sent: Tuesday, June 27, 2017 10:37 AM
 Subject: Re: [squid-users] NTLM authentication worked in Squid 2.7.STABLE8 
Squid Web Proxy, now need it in v3.5 hosted on Windows server 2k12
   
On 28/06/17 05:12, Todd Pearson wrote:
> 
> Thank you for the information.  Is there any place to download the 
> helper binaries for NTLM?  Or do I need to build them myself?
> 

Since you were using the SSPI helper for NTLM you should have the 
Negotiate/Kerberos equivalent already. It is mswin_sspi in Squid-2 or 
negotiate_sspi_auth in Squid-3.2+. The group checking helpers work with 
both auth types.

Diladele provide Squid-3 builds for Windows 
() if you are still going that way.


> Is there additional information on Kerberos configuration in a Windows
> environment? Trying to wrap my head around the keytab and the creation of
> it in a Windows-only environment.


This may be of help understanding what the Kerberos process is:


though the config examples and setup commands we have are all for 
non-Windows Squid machines it seems.


PS. I don't use Windows Squid servers myself, so I can't be much help here. 
Maybe someone more familiar can help out.

Amos




Re: [squid-users] Reverse DNS Lookup for client IPs

2017-06-27 Thread Alex Rousskov
On 06/27/2017 08:19 AM, Eliezer Croitoru wrote:

> Can you put a link to the thread here?

The best relevant link is probably bug #4575:

  http://bugs.squid-cache.org/show_bug.cgi?id=4575

Alex.


> Are you talking about this thread:
> http://lists.squid-cache.org/pipermail/squid-users/2016-February/008999.html
> 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/Reverse-DNS-Lookup-for-client-IPs-td4675872.html
> 
> Thanks,
> Eliezer
> 
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
> 
> 
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of Ralf Hildebrandt
> Sent: Tuesday, June 20, 2017 14:35
> To: squid-us...@squid-cache.org
> Subject: [squid-users] Reverse DNS Lookup for client IPs
> 
> I have to chime in on the "Reverse DNS Lookup for client IPs" thread back in 
> Feb 2016. I tried redefining the logging format for url_rewrite_extras and 
> store_id_extras in the config, but that wouldn't work.
> 
> I had to change the file src/cf.data.pre and recompiled, after that the 
> number of reverse lookups dropped considerably.
> 



Re: [squid-users] HIER_NONE on TCP_MISS?

2017-06-27 Thread bump skier
Hmm. I don't have ICAP/eCAP or collapsed forwarding configured. Are there
any situations where something similar to collapsed forwarding can happen
by default?

On Tue, Jun 27, 2017 at 11:55 AM Amos Jeffries  wrote:

> On 27/06/17 15:28, bump skier wrote:
> > Hi,
> >
> > I'm trying to understand the following behavior I'm seeing with Squid
> > running in accelerator mode. In short, I'm seeing some TCP_MISS for
> > requests to a static javascript file which is initially cached and
> > returned as a cache hit. I suspect the missed cache hits are due to the
> > cache size being too small and the file eventually getting evicted.
> > However, I'm confused about what I'm seeing in the Squid access log. For
> > some of the cache misses I can see in the access log that Squid fetches
> > the file from the configured origin server but for a vast majority of
> > them I see HIER_NONE even though Squid is actually returning the file.
> >
> > Under what situations would Squid fetch content from the origin server
> > during a cache miss but print HIER_NONE?
>
>
> It may happen if you have content adaptation (ICAP/eCAP) providing a
> response instead of either cache or origin server.
>
> Maybe also if the collapsed forwarding feature is in use. AFAIK, we have
> not got the log entries quite right there yet.
>
> Amos


Re: [squid-users] NTLM authentication worked in Squid 2.7.STABLE8 Squid Web Proxy, now need it in v3.5 hosted on Windows server 2k12

2017-06-27 Thread Amos Jeffries

On 28/06/17 05:12, Todd Pearson wrote:


Thank you for the information.  Is there any place to download the 
helper binaries for NTLM?  Or do I need to build them myself?




Since you were using the SSPI helper for NTLM you should have the 
Negotiate/Kerberos equivalent already. It is mswin_sspi in Squid-2 or 
negotiate_sspi_auth in Squid-3.2+. The group checking helpers work with 
both auth types.
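
For illustration, a minimal Negotiate setup with that helper might look like
this (the install path and the exact helper file name here are assumptions,
not something from this thread; check your build):

  auth_param negotiate program c:/squid/libexec/negotiate_sspi_auth.exe
  auth_param negotiate children 20 startup=5 idle=1
  auth_param negotiate keep_alive on

  acl authenticated proxy_auth REQUIRED
  http_access allow authenticated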


Diladele provide Squid-3 builds for Windows 
() if you are still going that way.



Is there additional information on Kerberos configuration in a Windows
environment? Trying to wrap my head around the keytab and the creation of
it in a Windows-only environment.



This may be of help understanding what the Kerberos process is:


though the config examples and setup commands we have are all for 
non-Windows Squid machines it seems.
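
As a rough sketch of that non-Windows setup (all host, realm, and account
names below are hypothetical; note the SSPI helper normally needs no keytab,
since it authenticates with the machine's own credentials):

  REM On the AD domain controller, create the service keytab:
  ktpass -princ HTTP/proxy.example.com@EXAMPLE.COM -mapuser proxysvc@example.com -crypto AES256-SHA1 -ptype KRB5_NT_PRINCIPAL -pass * -out squid.keytab

  # On the (non-Windows) Squid box, export KRB5_KTNAME=/etc/squid/squid.keytab
  # in the startup environment, then configure Negotiate auth:
  auth_param negotiate program /usr/lib/squid/negotiate_kerberos_auth -s HTTP/proxy.example.com@EXAMPLE.COM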



PS. I don't use Windows Squid servers myself, so I can't be much help here. 
Maybe someone more familiar can help out.


Amos


Re: [squid-users] NTLM authentication worked in Squid 2.7.STABLE8 Squid Web Proxy, now need it in v3.5 hosted on Windows server 2k12

2017-06-27 Thread Todd Pearson

Thank you for the information.  Is there any place to download the helper 
binaries for NTLM?  Or do I need to build them myself?
Is there additional information on Kerberos configuration in a Windows
environment? Trying to wrap my head around the keytab and the creation of it in
a Windows-only environment.

  From: Amos Jeffries 
 To: squid-users@lists.squid-cache.org 
 Sent: Tuesday, June 27, 2017 8:40 AM
 Subject: Re: [squid-users] NTLM authentication worked in Squid 2.7.STABLE8 
Squid Web Proxy, now need it in v3.5 hosted on Windows server 2k12
   
On 27/06/17 12:06, Todd Pearson wrote:
> 
> I am hosting the squid proxy on a Windows 2K12 server.  Squid 2.7.STABLE8 
> Squid Web Proxy worked well for authentication until a recent 
> Windows 10 update killed SHA-1.  Now I am upgrading to squid proxy 
> version 3.5.x.x to restore authentication.

FYI: upgrading to Squid-3 will not solve that problem by itself. The 
helpers in both Squid series are performing the same logic, with the 
same crypto limitations.

The core problem is that the NTLM protocol itself is not capable of anything 
actually considered secure these days. It was declared EOL by MS more 
than 11 years ago, so the loss of NTLM-related things in Win10 is hardly a 
surprise.

To solve your auth problem what you need is actually a migration to 
Kerberos authentication (Negotiate auth). You might find that slightly 
easier after the Squid-3 upgrade, but the two are really independent 
changes.


> 
> The below settings are no longer available in the 3.5.x.x version since the 
> programs do not exist for the new version:
> 
> auth_param ntlm program c:/squid/libexec/mswin_ntlm_auth.exe
> 
> external_acl_type win_domain_group %LOGIN 
> c:/squid/libexec/mswin_check_ad_group.exe -G
> 
> 
> What are the equivalent settings for v3.5?  Once again, I am in a Windows 
> environment.

The helpers still exist, they just got renamed to follow a structured 
taxonomy:



Amos




Re: [squid-users] Squid Version 3.5.20

2017-06-27 Thread Amos Jeffries

On 28/06/17 03:46, Cherukuri, Naresh wrote:

Hi,

Thank you for the quick turnaround. As per your request I changed the squid 
config like below, but I can still get to www.google.com


acl CONNECT method CONNECT

acl sslconnect dstdomain -i https://www.google.com

acl GoogleRecaptcha url_regex ^https://www.google.com/recaptcha/$

http_access allow CONNECT sslconnect



Er. That will never work.

* Firstly, because "https://..." URLs are not valid dstdomain values.

* Secondly, because the CONNECT message uses an authority-form URL 
structure, not an absolute-form URL.


Your Squid will simply not see the https:// URL unless you are 
decrypting the TLS tunnel inside the CONNECT payload.  That means 
SSL-Bump functionality is mandatory for what you are attempting to do.


Also, be aware that Google services are using HSTS and certificate 
pinning. So SSL-Bump is much more likely not to work for their URLs.
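
For illustration only, a sketch combining the two fixes — bare domains for
dstdomain, plus SSL-Bump so the https:// URL becomes visible to url_regex.
The certificate path and the peek/bump policy here are assumptions, and the
pinning caveat above still applies:

  # CONNECT is matched against the bare host, never an https:// URL
  acl google_site dstdomain .google.com

  # Decrypt the tunnel so http_access can see full https:// URLs (3.5 syntax)
  http_port 3128 ssl-bump cert=/etc/squid/proxyCA.pem generate-host-certificates=on
  acl step1 at_step SslBump1
  ssl_bump peek step1
  ssl_bump bump google_site

  acl GoogleRecaptcha url_regex ^https://www\.google\.com/recaptcha/
  http_access allow CONNECT google_site
  http_access allow GoogleRecaptcha
  http_access deny google_site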


Amos


Re: [squid-users] Squid caching bad objects

2017-06-27 Thread Razor Cross
On Tue, Jun 27, 2017 at 11:34 AM, Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 06/27/2017 10:11 AM, Razor Cross wrote:
> > On Mon, Jun 26, 2017 at 12:06 PM, Alex Rousskov wrote:
>
> > >I suspect that the COMPLETE_NONPERSISTENT_MSG case in
> > >HttpStateData::processReplyBody() should be changed to call
> > >StoreEntry::lengthWentBad("missing last-chunk") when lastChunk is
> false
> > >and HttpStateData::flags.chunked is true.
>
> > We are able to reproduce the issue. If the server socket is closed
> > after sending the first chunk of data, squid caches the partial object
> > even though it did not receive the remaining chunks.
>
> If you are not going to fix this yourself, please consider filing a bug
> report, citing this email thread.
>
>
> > I feel it has to
> > make sure that the last chunk has been received before caching the data.
>
> That is impossible in general (the response may be too big to buffer)
> but is also unnecessary in most cases (because Squid can stop caching
> and delete the being-cached object in-flight). My paragraph quoted above
> has the blueprint for a possible fix.
>
Thanks for your inputs.
I just want to hear from the official squid forum/owners whether it has been
fixed in any recent squid release so that we can upgrade or patch the fix.

- Cross



Re: [squid-users] Block doc documents

2017-06-27 Thread Amos Jeffries

On 27/06/17 23:53, Daniel Rieken wrote:

Hello,

I would like to block my users from downloading doc- and docm-files,
but not docx.

So this works fine for me:
/etc/squid3/blockExtensions.acl:
\.doc(\?.*)?$
\.docm(\?.*)?$

acl blockExtensions urlpath_regex -i "/etc/squid3/blockExtensions.acl"
http_access deny blockExtensions


But in some cases the URL doesn't contain the extension (e.g. doc).
For URLs like this the above ACL doesn't work:
- http://www.example.org/download.pl?file=wordfile
- http://www.example.org/invoice-5479657415/

Here I need to work with mime-types:
acl blockMime rep_mime_type application/msword
acl blockMime rep_mime_type application/vnd.ms-word.document.macroEnabled.12
http_reply_access deny blockMime

This works fine, too. But I see a problem: the mime-type is defined on
the webserver. So a bad guy could configure his webserver to serve a
doc-file as application/i.am.not.a.docfile and the above ACL isn't
working anymore.



HTTP contains no concept of "file". That is a human concept. All of what 
you mention above are the consequences of that difference.


I recommend you drop this concept of "file" from your thinking and 
concentrate on detecting what HTTP details represent a bad HTTP message. 
The "file"-related things should be dealt with at other layers by other 
software, like AV scanning or, as Brendan suggested, ICAP payload scanners.



Amos


Re: [squid-users] ACLs allow/deny logic

2017-06-27 Thread Alex Rousskov
On 06/27/2017 12:31 AM, Vieri wrote:

> http_access deny denied_restricted1_mimetypes_req 
> !allowed_restricted1_domains !allowed_restricted1_ips
> http_reply_access deny denied_restricted1_mimetypes_rep 
> !allowed_restricted1_domains !allowed_restricted1_ips
> http_access deny intercepted !localnet
> http_access allow localnet
> http_access deny all

> "The reply for POST http://149.154.165.120/api is DENIED, because it matched 
> allowed_restricted1_ips"

Squid "matched ACL" reporting code is badly designed and often leads to
misleading results. In this particular case, Squid wanted to say "it
matched !allowed_restricted1_ips" but could not. Older Squids were
especially broken in this area, but even modern ones suffer from the
same design flaw. This flaw is a known problem:

> // XXX: AclMatchedName does not contain a matched ACL name when the acl
> // does not match. It contains the last (usually leaf) ACL name checked
> // (or is NULL if no ACLs were checked).

You can work around most of these problems by appending an
always-matching ACL to every http_access rule you want to identify and
making sure that at least one rule always matches. The former can be
done using an any-of ACL in older Squids or annotate_transaction ACL in
modern Squids. You are already doing the latter with "deny all".
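
For illustration, a hedged sketch of that workaround against the quoted
config (the marker ACL name and annotation key are made up, and
annotate_transaction needs a reasonably modern Squid):

  # An annotate_transaction ACL always matches and tags the transaction
  acl markRestricted1 annotate_transaction matched_rule=restricted1_mime

  # Appended last, it fires only when every earlier member matched,
  # including the negated ones:
  http_access deny denied_restricted1_mimetypes_req !allowed_restricted1_domains !allowed_restricted1_ips markRestricted1

  # The tag can then be logged via a %note-based logformat to see which
  # rule actually denied the request.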


HTH,

Alex.


Re: [squid-users] Squid caching bad objects

2017-06-27 Thread Razor Cross
On Mon, Jun 26, 2017 at 12:06 PM, Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 06/26/2017 10:11 AM, Razor Cross wrote:
>
> > We are using squid 3.5. for our server. Recently we have noticed that
> > squid is caching incomplete objects in case of chunked response.
> >
> > We have gone through the squid code. It looks like squid is caching
> > incomplete response in case of EOF from the server even though it does
> > not receive the last empty chunk.
> >
> >
> >  if (eof) // already reached EOF
> > return COMPLETE_NONPERSISTENT_MSG;
>
> You are looking at the wrong code. HttpStateData::persistentConnStatus()
> and related *_MSG codes do not determine whether the entire object was
> received. They determine whether
>
> (a) Squid should expect more response bytes and
>
> (b) The connection can be kept open if no more response bytes are expected.
>
> The COMPLETE_NONPERSISTENT_MSG return value is correct here (I am
> ignoring the sad fact that we are abusing the word "complete" to cover
> both whole and truncated responses).
>
>
> > Is this expected? Because of this problem, our server ends up serving
> > bad objects to the user.
>
> >What you describe sounds like a bug, but the exact code you are quoting
> >is not responsible for that bug. I did not study this in detail, but I
> >suspect that the COMPLETE_NONPERSISTENT_MSG case in
> >HttpStateData::processReplyBody() should be changed to call
> >StoreEntry::lengthWentBad("missing last-chunk") when lastChunk is false
> > and HttpStateData::flags.chunked is true.
>
We are able to reproduce the issue. If the server socket is closed
after sending the first chunk of data, squid caches the partial object even
though it did not receive the remaining chunks. I feel it has to make sure
that the last chunk has been received before caching the data.
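
For reference, a hedged C++ sketch of Alex's blueprint above — this is a
paraphrase, not actual Squid source, and the member/method names are taken
on faith from the thread:

  // In HttpStateData::processReplyBody(), COMPLETE_NONPERSISTENT_MSG case:
  case COMPLETE_NONPERSISTENT_MSG:
      // A chunked reply that ended at EOF without the terminating
      // last-chunk is truncated; mark the store entry as bad instead
      // of letting the partial object be cached as complete.
      if (flags.chunked && !lastChunk)
          entry->lengthWentBad("missing last-chunk");
      serverComplete();
      break;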

- Cross


Re: [squid-users] HIER_NONE on TCP_MISS?

2017-06-27 Thread Amos Jeffries

On 27/06/17 15:28, bump skier wrote:

Hi,

I'm trying to understand the following behavior I'm seeing with Squid 
running in accelerator mode. In short, I'm seeing some TCP_MISS for 
requests to a static javascript file which is initially cached and 
returned as a cache hit. I suspect the missed cache hits are due to the 
cache size being too small and the file eventually getting evicted. 
However, I'm confused about what I'm seeing in the Squid access log. For 
some of the cache misses I can see in the access log that Squid fetches 
the file from the configured origin server but for a vast majority of 
them I see HIER_NONE even though Squid is actually returning the file.


Under what situations would Squid fetch content from the origin server 
during a cache miss but print HIER_NONE?



It may happen if you have content adaptation (ICAP/eCAP) providing a 
response instead of either cache or origin server.


Maybe also if the collapsed forwarding feature is in use. AFAIK, we have 
not got the log entries quite right there yet.


Amos


Re: [squid-users] Squid Version 3.5.20

2017-06-27 Thread Cherukuri, Naresh
Hi,

Thank you for the quick turnaround. As per your request I changed the squid
config like below, but I can still get to www.google.com
acl CONNECT method CONNECT
acl sslconnect dstdomain -i https://www.google.com
acl GoogleRecaptcha url_regex ^https://www.google.com/recaptcha/$
http_access allow CONNECT sslconnect
http_access allow backoffice_users GoogleRecaptcha


Thanks & Regards,
Naresh
From: Flashdown [mailto:flashd...@data-core.org]
Sent: Tuesday, June 27, 2017 11:37 AM
To: squid-users@lists.squid-cache.org; Cherukuri, Naresh; Eliezer Croitoru
Subject: Re: [squid-users] Squid Version 3.5.20

Well, I know that issue very well, and Google is the problem since they should put 
their captcha on its own subdomain. Then we could effectively allow only 
access to the captcha.

Until then there is no good way to achieve this. But there is an unreliable way 
of blocking google.com

First allow the CONNECT method for google.com:
acl CONNECT method CONNECT
acl sslconnect dstdomain -i www.google.com
http_access allow CONNECT sslconnect
Then use a url_regex and allow google.com/recaptcha.

This way sometimes www.google.com is blocked, sometimes 
not. But access to recaptcha will always work.

Why can't we block it reliably? Well, when a browser/client wants to connect to 
an https website, the first thing the browser tries is to open an SSL tunnel to 
the FQDN. As soon as the tunnel is up it will request the resource. Maybe it 
helps if you add a url_regex deny between allowing the CONNECT method and 
allowing the URL www.google.com/recaptcha.

Written on my mobile..

Br,
Flashdown


Am 27. Juni 2017 17:07:19 MESZ schrieb "Cherukuri, Naresh" 
>:

Hi Eliezer,

We successfully blocked gmail, google images, google drive and all the rest of 
the google-related services. Now we are allowing www.google.com and 
www.google.com/recaptcha. We still need to block www.google.com 
and just allow www.google.com/recaptcha. Is there a 
way to do that?

Appreciate your quick turnover!

Thanks,
Naresh


-Original Message-
From: Eliezer Croitoru [mailto:elie...@ngtech.co.il]
Sent: Tuesday, June 27, 2017 10:16 AM
To: Cherukuri, Naresh; 
squid-users@lists.squid-cache.org
Subject: RE: [squid-users] Squid Version 3.5.20

Hey,

I can try to help you but I do not have enough logs for it.
Also it's not so simple.
Basically you will need to block gmail and google drive themselves in one rule 
that will not include other google services.

All The Bests,
Eliezer


http://ngtech.co.il/lmgtfy/
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Cherukuri, Naresh
Sent: Friday, June 23, 2017 23:34
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Squid Version 3.5.20

Hello All,

I installed Squid version 3.5.20 on RHEL 7 and generated self-signed CA 
certificates. Can you shed some light on how to "Configure regular expression 
of the Google ReCaptcha URL with ACL"?

My requirement :

This requirement is to allow Google's ReCaptcha URL (HTTPS) so associates can 
successfully use ADP which now utilizes Google's ReCaptcha which is called via 
an HTTPS URL, without allowing users to access other Google-related services 
such as Gmail or Google Drive.

Any ideas much appreciated!

Thanks,
Naresh





Re: [squid-users] NTLM authentication worked in Squid 2.7.STABLE8 Squid Web Proxy, now need it in v3.5 hosted on Windows server 2k12

2017-06-27 Thread Amos Jeffries

On 27/06/17 12:06, Todd Pearson wrote:


I am hosting the squid proxy on a Windows 2K12 server.  Squid 2.7.STABLE8 
Squid Web Proxy worked well for authentication until a recent 
Windows 10 update killed SHA-1.  Now I am upgrading to squid proxy 
version 3.5.x.x to restore authentication.


FYI: upgrading to Squid-3 will not solve that problem by itself. The 
helpers in both Squid series are performing the same logic, with the 
same crypto limitations.


The core problem is that the NTLM protocol itself is not capable of anything 
actually considered secure these days. It was declared EOL by MS more 
than 11 years ago, so the loss of NTLM-related things in Win10 is hardly a 
surprise.


To solve your auth problem what you need is actually a migration to 
Kerberos authentication (Negotiate auth). You might find that slightly 
easier after the Squid-3 upgrade, but the two are really independent 
changes.





The below settings are no longer available in the 3.5.x.x version since the 
programs do not exist for the new version:


auth_param ntlm program c:/squid/libexec/mswin_ntlm_auth.exe

external_acl_type win_domain_group %LOGIN 
c:/squid/libexec/mswin_check_ad_group.exe -G



What are the equivalent settings for v3.5?  Once again, I am in a Windows 
environment.


The helpers still exist, they just got renamed to follow a structured 
taxonomy:
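
The mapping I recall for these two is below (verify against your build, since
I may be misremembering the exact names and paths):

  # Squid-2 name                    Squid-3.2+ name
  # mswin_ntlm_auth.exe         ->  ntlm_sspi_auth.exe
  # mswin_check_ad_group.exe    ->  ext_ad_group_acl.exe

  auth_param ntlm program c:/squid/libexec/ntlm_sspi_auth.exe
  external_acl_type win_domain_group %LOGIN c:/squid/libexec/ext_ad_group_acl.exe -G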




Amos


Re: [squid-users] Squid Version 3.5.20

2017-06-27 Thread Flashdown
Well, I know that issue very well, and Google is the problem since they should put 
their captcha on its own subdomain. Then we could effectively allow only 
access to the captcha.

Until then there is no good way to achieve this. But there is an unreliable way 
of blocking google.com

First allow the CONNECT method for google.com:
acl CONNECT method CONNECT
acl sslconnect dstdomain -i www.google.com
http_access allow CONNECT sslconnect
Then use a url_regex and allow google.com/recaptcha.

This way sometimes www.google.com is blocked, sometimes not. But access to 
recaptcha will always work.

Why can't we block it reliably? Well, when a browser/client wants to connect to 
an https website, the first thing the browser tries is to open an SSL tunnel to 
the FQDN. As soon as the tunnel is up it will request the resource. Maybe it 
helps if you add a url_regex deny between allowing the CONNECT method and 
allowing the URL www.google.com/recaptcha.

Written on my mobile...

Br,
Flashdown



Am 27. Juni 2017 17:07:19 MESZ schrieb "Cherukuri, Naresh" 
:
>Hi Eliezer,
>
>We successfully blocked gmail, google images, google drive and all the rest of
>the google-related services. Now we are allowing www.google.com and
>www.google.com/recaptcha. We still need to block www.google.com and just allow
>www.google.com/recaptcha. Is there a way to do that?
>
>Appreciate your quick turnover!
>
>Thanks,
>Naresh
>
> 
>-Original Message-
>From: Eliezer Croitoru [mailto:elie...@ngtech.co.il] 
>Sent: Tuesday, June 27, 2017 10:16 AM
>To: Cherukuri, Naresh; squid-users@lists.squid-cache.org
>Subject: RE: [squid-users] Squid Version 3.5.20
>
>Hey,
>
>I can try to help you but I do not have enough logs for it.
>Also it's not so simple.
>Basically you will need to block gmail and google drive themselves in
>one rule that will not include other google services.
>
>All The Bests,
>Eliezer
>
>
>http://ngtech.co.il/lmgtfy/
>Linux System Administrator
>Mobile: +972-5-28704261
>Email: elie...@ngtech.co.il
>
>
>From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
>Behalf Of Cherukuri, Naresh
>Sent: Friday, June 23, 2017 23:34
>To: squid-users@lists.squid-cache.org
>Subject: [squid-users] Squid Version 3.5.20
>
>Hello All,
>
>I installed Squid version 3.5.20 on RHEL 7 and generated self-signed CA
>certificates. Can you shed some light on how to "Configure regular
>expression of the Google ReCaptcha URL with ACL"?
>
>My requirement :
>
>This requirement is to allow Google's ReCaptcha URL (HTTPS) so
>associates can successfully use ADP which now utilizes Google's
>ReCaptcha which is called via an HTTPS URL, without allowing users to
>access other Google-related services such as Gmail or Google Drive.
>
>Any ideas much appreciated!
>
>Thanks,
>Naresh
>


Re: [squid-users] Squid Version 3.5.20

2017-06-27 Thread Cherukuri, Naresh
Hi Eliezer,

We successfully blocked gmail, google images, google drive and all the rest of 
the google-related services. Now we are allowing www.google.com and 
www.google.com/recaptcha. We still need to block www.google.com and just allow 
www.google.com/recaptcha. Is there a way to do that?

Appreciate your quick turnover!

Thanks,
Naresh

 
-Original Message-
From: Eliezer Croitoru [mailto:elie...@ngtech.co.il] 
Sent: Tuesday, June 27, 2017 10:16 AM
To: Cherukuri, Naresh; squid-users@lists.squid-cache.org
Subject: RE: [squid-users] Squid Version 3.5.20

Hey,

I can try to help you but I do not have enough logs for it.
Also it's not so simple.
Basically you will need to block gmail and google drive themselves in one rule 
that will not include other google services.

All The Bests,
Eliezer


http://ngtech.co.il/lmgtfy/
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Cherukuri, Naresh
Sent: Friday, June 23, 2017 23:34
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Squid Version 3.5.20

Hello All,

I installed Squid version 3.5.20 on RHEL 7 and generated self-signed CA 
certificates. Can you shed some light on how to "Configure regular expression 
of the Google ReCaptcha URL with ACL"?

My requirement :

This requirement is to allow Google's ReCaptcha URL (HTTPS) so associates can 
successfully use ADP which now utilizes Google's ReCaptcha which is called via 
an HTTPS URL, without allowing users to access other Google-related services 
such as Gmail or Google Drive.

Any ideas much appreciated!

Thanks,
Naresh



Re: [squid-users] Header order in squid proxy

2017-06-27 Thread Sonya Roy
The sites I was talking about don't just target the header order. That's
just one of the things they check. Of course, they have their own system to
protect themselves against DDoS attacks, or use services like akamai or
cloudflare. The header order is just one of the common bot-detection
techniques that they use to filter out unwanted traffic.

For example, akamai's bot detection system checks header order as well
among a lot of other things.

Anyway, after Alex pointed me in the right direction, I managed to edit a
couple of lines in squid to prevent the change in header order.

With regards,

On Tue, Jun 27, 2017 at 8:02 PM, Eliezer Croitoru 
wrote:

> If I may add a word or two:
> If sites are securing their systems based on headers order then I believe
> they are aiming at the wrong target.
> It's a "nice to have" but not an actual deep application-level defense (based
> on my low level of knowledge in the subject).
> One example I have seen of a DOS\DDOS issue is:
> "Hey, We are having high CPU usage, what should we do?"
> - The bot was hammering the service from an AWS instance ... so block it..
> - How many requests per second from a single IP is considered normal?
> - Then, how many *new* cookies requests per second is considered normal?
> - What about NAT? Would a Chinese client be considered legit despite
> being behind one big NAT?
> - Would you be able to differentiate between a specific single ip or
> subnet that is considered legit?
> - What about RBL?
>
> The above are things I heard here or there which I think are more
> important than header order.
> Take my words as coming from a person who is not an expert in the
> security area.
>
> All The Bests,
> Eliezer
>
> 
> http://ngtech.co.il/lmgtfy/
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
>
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
> Behalf Of Sonya Roy
> Sent: Thursday, June 22, 2017 21:54
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Header order in squid proxy
>
> The sites I am talking about check the User-Agent header and make sure
> the user-agent is for a well-known browser, i.e. a browser that they
> support. And any browser like Firefox, Chrome, Safari, or Edge, for example,
> sends the headers in a certain order, and the order depends on the browser.
> This header order covers well-known headers like Accept, Accept-Language,
> Accept-Encoding, Content-Length, Host, Connection, Referer, Cookie, etc.
> And they match the order of the received request with the standard header
> order for the browser for that user-agent.
>
> This detects poorly written bots (i.e. ones that don't consider
> this header order) using python requests, or any language for that matter
> where the requests are handled using a low-level http request library.
>
> So, keeping the header order sent from the client intact would prevent
> them from dropping proxied requests (ones that use squid). I know for a fact
> that they don't intend to block proxies.
>
> Could you point me to where I should look in the
> source code of squid? The part that handles the header data sent from the
> client.
>
> With regards,
> Sonya Roy.
>
> On Fri, Jun 23, 2017 at 12:02 AM, Alex Rousskov
> <rouss...@measurement-factory.com> wrote:
> On 06/22/2017 11:49 AM, Sonya Roy wrote:
>
> > I noticed that squid changes the header order received from the client
> > before sending it to the origin server.
> >
> > I assume this is because squid parses the header data and adds some
> > headers depending on the config file and then recreates the header data.
>
> IIRC, modern Squids change a header field position when the received
> field is deleted and then added back. This is typical for hop-by-hop
> headers such as Connection, but there are other reasons for Squid to
> delete and add a header field. When the value of the added field is the
> same as the value of the removed field, such pointless "editing" looks
> like mindless "reordering" to the outside observer.
>
> The two actions (field deletion and addition) may happen in a single
> piece of code or may be separated by lots of code and even time.
> Preventing pointless editing in the former cases is straightforward, but
> the latter cases are difficult to handle. Correct avoidance of pointless
> editing may improve performance and, if it does, can be considered a
> useful optimization on its own, regardless of your use case.
>
>
> > Is there any way to prevent this?
>
> Not without changing Squid code (or adding more proxies). However,
> before we even talk about code changes, we should clarify the problem we
> are dealing with. The questions below will guide you.
>
> It is probably much easier to ensure some fixed field send order
> (regardless of the received order) than to preserve the received order.
> Will a fixed order (e.g., always alphabetical) address your use case?
> This feature will hurt performance, but you might be able to convince
> others to accept it if you have a very compelling/specific/detailed use
> case because it can be disabled by default.

Re: [squid-users] Header order in squid proxy

2017-06-27 Thread Eliezer Croitoru
If I may add a word or two:
If sites are securing their systems based on headers order then I believe they 
are aiming at the wrong target.
It's a "nice to have" but not an actual deep application-level defense (based on 
my low level of knowledge in the subject).
One example I have seen of a DOS\DDOS issue is:
"Hey, We are having high CPU usage, what should we do?"
- The bot was hammering the service from an AWS instance ... so block it..
- How many requests per second from a single IP is considered normal?
- Then, how many *new* cookies requests per second is considered normal?
- What about NAT? Would a Chinese client be considered legit despite 
being behind one big NAT?
- Would you be able to differentiate between a specific single ip or subnet 
that is considered legit?
- What about RBL?

The above are things I heard here or there which I think are more important 
than header order.
Take my words as coming from a person who is not an expert in the security 
area.

All The Bests,
Eliezer


http://ngtech.co.il/lmgtfy/
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Sonya Roy
Sent: Thursday, June 22, 2017 21:54
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Header order in squid proxy

The sites I am talking about check the User-Agent header and make sure the 
user-agent is for a well-known browser, i.e. a browser that they support. And 
any browser like Firefox, Chrome, Safari, or Edge, for example, sends the headers 
in a certain order, and the order depends on the browser. This header order 
covers well-known headers like Accept, Accept-Language, Accept-Encoding, 
Content-Length, Host, Connection, Referer, Cookie, etc. And they match the 
order of the received request with the standard header order for the browser 
for that user-agent.

This detects poorly written bots (i.e. ones that don't consider this 
header order) using python requests, or any language for that matter where 
the requests are handled using a low-level http request library. 

So, keeping the header order sent from the client intact would prevent them 
from dropping proxied requests (ones that use squid). I know for a fact that 
they don't intend to block proxies.

Could you point me to where I should look in the source 
code of squid? The part that handles the header data sent from the client.

With regards,
Sonya Roy.

On Fri, Jun 23, 2017 at 12:02 AM, Alex Rousskov 
 wrote:
On 06/22/2017 11:49 AM, Sonya Roy wrote:

> I noticed that squid changes the header order received from the client
> before sending it to the origin server.
>
> I assume this is because squid parses the header data and adds some
> headers depending on the config file and then recreates the header data.

IIRC, modern Squids change a header field position when the received
field is deleted and then added back. This is typical for hop-by-hop
headers such as Connection, but there are other reasons for Squid to
delete and add a header field. When the value of the added field is the
same as the value of the removed field, such pointless "editing" looks
like mindless "reordering" to the outside observer.

The two actions (field deletion and addition) may happen in a single
piece of code or may be separated by lots of code and even time.
Preventing pointless editing in the former cases is straightforward, but
the latter cases are difficult to handle. Correct avoidance of pointless
editing may improve performance and, if it does, can be considered a
useful optimization on its own, regardless of your use case.


> Is there any way to prevent this?

Not without changing Squid code (or adding more proxies). However,
before we even talk about code changes, we should clarify the problem we
are dealing with. The questions below will guide you.

It is probably much easier to ensure some fixed field send order
(regardless of the received order) than to preserve the received order.
Will a fixed order (e.g., always alphabetical) address your use case?
This feature will hurt performance, but you might be able to convince
others to accept it if you have a very compelling/specific/detailed use
case because it can be disabled by default.


> I am asking because some sites detect bots using the header order and
> they drop any such connection. So they unintentionally block squid
> proxies even if its not being used by a bot.

Are you implying that bots often change header field order between their
requests? Or that bots often use a different (fixed) header field order
than the (fixed) field order used by non-bots? Preserving received order
may help in the former case but not in the latter case.

Also, do those blocking sites pay attention to all headers or just
end-to-end headers?

Please note that there are many other ways to detect a proxy so if a
site wants to block proxies rather 

Re: [squid-users] Reverse DNS Lookup for client IPs

2017-06-27 Thread Eliezer Croitoru
Hey,

Can you put a link to the thread here?
Are you talking about this thread:
http://lists.squid-cache.org/pipermail/squid-users/2016-February/008999.html

http://squid-web-proxy-cache.1019090.n4.nabble.com/Reverse-DNS-Lookup-for-client-IPs-td4675872.html

Thanks,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Ralf Hildebrandt
Sent: Tuesday, June 20, 2017 14:35
To: squid-us...@squid-cache.org
Subject: [squid-users] Reverse DNS Lookup for client IPs

I have to chime in on the "Reverse DNS Lookup for client IPs" thread back in 
Feb 2016. I tried redefining the logging format for url_rewrite_extras and 
store_id_extras in the config, but that wouldn't work.

I had to change the file src/cf.data.pre and recompiled, after that the number 
of reverse lookups dropped considerably.
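
For context, the reverse lookups come from the %>A (client FQDN) format code
in the default extras strings. A sketch of the in-config change that was
attempted (the defaults shown here are recalled from memory, so double-check
them), which in this case ultimately had to be made in src/cf.data.pre instead:

  # The default includes fqdn=%>A, which costs a reverse DNS lookup:
  #   url_rewrite_extras "ip=%>a fqdn=%>A ident=%ui myip=%la myport=%lp"
  # Dropping the %>A code avoids the lookup:
  url_rewrite_extras "ip=%>a ident=%ui myip=%la myport=%lp"
  store_id_extras "ip=%>a ident=%ui myip=%la myport=%lp"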

-- 
Ralf Hildebrandt   Charite Universitätsmedizin Berlin
ralf.hildebra...@charite.deCampus Benjamin Franklin
https://www.charite.de Hindenburgdamm 30, 12203 Berlin
Geschäftsbereich IT, Abt. Netzwerk fon: +49-30-450.570.155 


Re: [squid-users] Squid Version 3.5.20

2017-06-27 Thread Eliezer Croitoru
Hey,

I can try to help you but I do not have enough logs for it.
Also it's not so simple.
Basically you will need to block gmail and google drive themselves in one
rule that will not include other google services.

All The Bests,
Eliezer


http://ngtech.co.il/lmgtfy/
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
Behalf Of Cherukuri, Naresh
Sent: Friday, June 23, 2017 23:34
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Squid Version 3.5.20

Hello All,

I installed Squid version 3.5.20 on RHEL 7 and generated self-signed CA
certificates. Can you shed some light on how to "Configure regular
expression of the Google ReCaptcha URL with ACL"?

My requirement :

This requirement is to allow Google's ReCaptcha URL (HTTPS) so associates
can successfully use ADP which now utilizes Google's ReCaptcha which is
called via an HTTPS URL, without allowing users to access other
Google-related services such as Gmail or Google Drive.

Any ideas much appreciated!

Thanks,
Naresh



Re: [squid-users] Block doc documents

2017-06-27 Thread brendan kearney
You need an ICAP server intelligent enough to differentiate between the
file types.  Squid is a proxy and can only deal with the protocol.  An ICAP
server can deal with the content.  C-icap and ecap are a couple of options
that seem to be available.  I have no experience with either.

On Jun 27, 2017 7:53 AM, "Daniel Rieken"  wrote:

> Hello,
>
> I would like to block my users from downloading doc- and docm-files,
> but not docx.
>
> So this works fine for me:
> /etc/squid3/blockExtensions.acl:
> \.doc(\?.*)?$
> \.docm(\?.*)?$
>
> acl blockExtensions urlpath_regex -i "/etc/squid3/blockExtensions.acl"
> http_access deny blockExtensions
>
>
> But in some cases the URL doesn't contain the extension (e.g. doc).
> For URLs like this the above ACL doesn't work:
> - http://www.example.org/download.pl?file=wordfile
> - http://www.example.org/invoice-5479657415/
>
> Here I need to work with mime-types:
> acl blockMime rep_mime_type application/msword
> acl blockMime rep_mime_type application/vnd.ms-word.document.macroEnabled.12
> http_reply_access deny blockMime
>
> This works fine, too. But I see a problem: the mime-type is defined on
> the webserver. So a bad guy could configure his webserver to serve a
> doc-file as application/i.am.not.a.docfile and the above ACL isn't
> working anymore.
> Is there any way to make squid block doc- and docm files based on the
> response-headers file-type?
> Or in other words: Is squid able to match the "doc" in the
> Content-Disposition header of the response?
>
> HTTP/1.0 200 OK
> Date: Tue, 27 Jun 2017 11:40:57 GMT
> Server: Apache Phusion_Passenger/4.0.10 mod_bwlimited/1.4
> Cache-Control: no-cache, no-store, max-age=0, must-revalidate
> Pragma: no-cache
> Content-Type: application/baddoc
> Content-Disposition: attachment;
> filename="gescanntes-Dokument-VPPAW-072-JCD3032.doc"
> Content-Transfer-Encoding: binary
> X-Powered-By: PHP/5.3.29
> Connection: close
>
>
> Regards, Daniel


[squid-users] Block doc documents

2017-06-27 Thread Daniel Rieken
Hello,

I would like to block my users from downloading doc- and docm-files,
but not docx.

So this works fine for me:
/etc/squid3/blockExtensions.acl:
\.doc(\?.*)?$
\.docm(\?.*)?$

acl blockExtensions urlpath_regex -i "/etc/squid3/blockExtensions.acl"
http_access deny blockExtensions


But in some cases the URL doesn't contain the extension (e.g. doc).
For URLs like this the above ACL doesn't work:
- http://www.example.org/download.pl?file=wordfile
- http://www.example.org/invoice-5479657415/

Here I need to work with mime-types:
acl blockMime rep_mime_type application/msword
acl blockMime rep_mime_type application/vnd.ms-word.document.macroEnabled.12
http_reply_access deny blockMime

This works fine, too. But I see a problem: the mime-type is defined on
the webserver. So a bad guy could configure his webserver to serve a
doc-file as application/i.am.not.a.docfile and the above ACL isn't
working anymore.
Is there any way to make squid block doc- and docm files based on the
response-headers file-type?
Or in other words: Is squid able to match the "doc" in the
Content-Disposition header of the response?

HTTP/1.0 200 OK
Date: Tue, 27 Jun 2017 11:40:57 GMT
Server: Apache Phusion_Passenger/4.0.10 mod_bwlimited/1.4
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Content-Type: application/baddoc
Content-Disposition: attachment;
filename="gescanntes-Dokument-VPPAW-072-JCD3032.doc"
Content-Transfer-Encoding: binary
X-Powered-By: PHP/5.3.29
Connection: close


Regards, Daniel
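
For what it's worth, on the Content-Disposition question above, a hedged
sketch using Squid's rep_header ACL type (the regex is untested, and a server
that lies about or omits the header still defeats it):

  acl blockDocName rep_header Content-Disposition -i \.docm?("|;|$)
  http_reply_access deny blockDocName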


Re: [squid-users] ACLs allow/deny logic

2017-06-27 Thread Vieri
Please bear with me because I still don't quite grasp the AND logic with ACLs.

Let's consider the logic "http_access deny (if) X (and) Y (and) Z" and the 
following squid configuration section:

[squid.conf - start]
acl denied_restricted1_mimetypes_req req_mime_type -i 
"/usr/local/proxy-settings/denied.restricted1.mimetypes"
acl denied_restricted1_mimetypes_rep rep_mime_type -i 
"/usr/local/proxy-settings/denied.restricted1.mimetypes"
acl allowed_restricted1_domains dstdomain -i 
"/usr/local/proxy-settings/allowed.restricted1.domains"
acl allowed_restricted1_ips dst 
"/usr/local/proxy-settings/allowed.restricted1.ips"

http_access deny denied_restricted1_mimetypes_req !allowed_restricted1_domains 
!allowed_restricted1_ips
http_reply_access deny denied_restricted1_mimetypes_rep 
!allowed_restricted1_domains !allowed_restricted1_ips

http_access deny intercepted !localnet

http_access allow localnet

http_access deny all
[squid.conf - finish]

In particular:

http_reply_access deny (if) denied_restricted1_mimetypes_rep (and not) 
allowed_restricted1_domains (and not) allowed_restricted1_ips

where 

denied_restricted1_mimetypes_rep: matches mime type application/octet-stream
allowed_restricted1_domains: matches DESTINATION domain .telegram.org
allowed_restricted1_ips: matches DESTINATION IP addresses (any one of 
149.154.167.91 or 149.154.165.120)

So, it should translate to something like this:

http_reply_access deny (if) (mime type is application/octet-stream) (and) 
(DESTINATION domain is NOT .telegram.org) (and) (DESTINATION IP address is NOT 
any of 149.154.167.91 or 149.154.165.120)

Correct?
If so, then I'm still struggling to understand the first message in the log:

"The reply for POST http://149.154.165.120/api is DENIED, because it matched 
allowed_restricted1_ips"

I don't think "the server's reply (application/octet-stream) should be denied" 
if it comes from one of 149.154.167.91 or 149.154.165.120.

Anyway, I'll try out the configuration directives you suggested and see if that 
logic applies correctly (at least to my understanding ;-) ).

Thanks for your valuable help,

Vieri