Re: [squid-users] HTTPS site filtering

2017-01-20 Thread roadrage27
>How is that LAN traffic getting to Squid?
Squid is sitting on the internal LAN, not on an external-facing server.

>That is odd. Because Squid ACL logics implicitly use the inverse of the
>last line as the default action.

>So your "allow localhost" should be causing an implicit "deny all" to
>exist at that point in the processing anyway.
>(/me wonders who broke what.)

I read the documentation, which suggested the same, but it went nowhere.  I
thought I had snapped it in half, which is why I burned the server down and
rebuilt it from scratch, then brought the config back over.  I attempted the
implicit deny on the new build with the same result.


On Fri, Jan 20, 2017 at 12:42 PM Amos Jeffries [via Squid Web Proxy Cache] <
ml-node+s1019090n4681230...@n4.nabble.com> wrote:

> On 21/01/2017 7:30 a.m., roadrage27 wrote:
> >> I see no 'localnet' ACL use. If this proxy is supposed to be servicing
> >> LAN clients, that will be needed and the keepgoing and artwork ACLs
> >> probably not needed.
> >
> > I am connecting to it on the LAN now with no issues, and multiple testers
> > on the same subnet can also use it.  Why would I add a directive if it's
> > already working?
>
> Because your config file says the only traffic allowed is those specific
> keepgoing domains, the squid artwork file, and traffic generated by
> localhost (aka. 127.0.0.1, the proxy machine itself).
>
> How is that LAN traffic getting to Squid?
>
> Amos






Re: [squid-users] HTTPS site filtering

2017-01-20 Thread Amos Jeffries
On 21/01/2017 7:30 a.m., roadrage27 wrote:
>> I see no 'localnet' ACL use. If this proxy is supposed to be servicing
>> LAN clients, that will be needed and the keepgoing and artwork ACLs
>> probably not needed.
> 
> I am connecting to it on the LAN now with no issues, and multiple testers on
> the same subnet can also use it.  Why would I add a directive if it's
> already working?

Because your config file says the only traffic allowed is those specific
keepgoing domains, the squid artwork file, and traffic generated by
localhost (aka. 127.0.0.1, the proxy machine itself).

How is that LAN traffic getting to Squid?

Amos



Re: [squid-users] HTTPS site filtering

2017-01-20 Thread Amos Jeffries
On 21/01/2017 6:59 a.m., roadrage27 wrote:
> When I add the final deny all then no traffic traverses squid.  When I
> removed it then squid started passing traffic
> 

That is odd. Because Squid ACL logics implicitly use the inverse of the
last line as the default action.

So your "allow localhost" should be causing an implicit "deny all" to
exist at that point in the processing anyway.
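
As a minimal sketch, these two rule sets should behave identically,
since the implicit default is the inverse of the last line:

  # relying on the implicit default
  http_access allow localhost

  # spelling that default out explicitly
  http_access allow localhost
  http_access deny all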

(/me wonders who broke what.)

Amos



Re: [squid-users] HTTPS site filtering

2017-01-20 Thread roadrage27
>I see no 'localnet' ACL use. If this proxy is supposed to be servicing
>LAN clients, that will be needed and the keepgoing and artwork ACLs
>probably not needed.

I am connecting to it on the LAN now with no issues, and multiple testers on
the same subnet can also use it.  Why would I add a directive if it's
already working?

I uncommented the other lines; I can't recall why I commented them out, but
yes, that was a mistake.

>What's the idea behind this "keepgoing" ACL?
Once I put that in with those domains it allowed them to connect; those are
domains that needed to be accessible via SSL.
>Is this proxy supposed to have reverse-proxy duties for them?
Nope, just a simple proxy that locks out the web unless the ACL allows it.
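
For that use case, a minimal whitelist sketch would look something like
this (the LAN range and domain are placeholders; Safe_ports, SSL_ports
and CONNECT are as already defined in the config):

  acl localnet src 192.168.0.0/16
  acl allowed_sites dstdomain .example.com

  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow localnet allowed_sites
  http_access deny all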

On Fri, Jan 20, 2017 at 12:00 PM Alex Tate  wrote:

> When I add the final deny all then no traffic traverses squid.  When I
> removed it then squid started passing traffic
>

Re: [squid-users] HTTPS site filtering

2017-01-20 Thread roadrage27
When I add the final deny all then no traffic traverses squid.  When I
removed it then squid started passing traffic







Re: [squid-users] HTTPS site filtering

2017-01-20 Thread Amos Jeffries
On 21/01/2017 5:52 a.m., roadrage27 wrote:
> I was able to resolve my issue partially.  I burned down the server and
> rebuilt it clean, so all previous changes made while attempting to make
> SSL work were gone.  Once I reloaded squid and the config files I was able
> to allow SSL traffic using the dstdomain acl type.  I currently have a few
> URLs of the regex type that need to be allowed, so I'm currently cranking
> out those.
> 
> On Fri, Jan 20, 2017 at 8:36 AM roadrage27 wrote:
> 
>>> That tells me either you have screwed up the CONNECT ACL definition. Or
>>> the SSL_ports one.
>> Very possible as I'm pretty green on squid; my current conf file is below.
>>  With that conf the SSL sites just sit and spin until they eventually time
>> out.
>>
>> acl site_squid_art url_regex ^http://www.squid-cache.org/Artwork
>> acl keepgoing dstdomain .plateau.com .skillwsa.com .successfactors.com
>>

What's the idea behind this "keepgoing" ACL?
 Is this proxy supposed to have reverse-proxy duties for them?

>> acl SSL_ports port 443
>> acl Safe_ports port 80 # http
>> acl Safe_ports port 21 # ftp
>> acl Safe_ports port 443 # https
>> acl Safe_ports port 70 # gopher
>> acl Safe_ports port 210 # wais
>> acl Safe_ports port 1025-65535 # unregistered ports
>> acl Safe_ports port 280 # http-mgmt
>> acl Safe_ports port 488 # gss-http
>> acl Safe_ports port 591 # filemaker
>> acl Safe_ports port 777 # multiling http
>> acl CONNECT method CONNECT
>>
>> http_access allow keepgoing
>> http_access deny !Safe_ports
>> http_access deny CONNECT !SSL_ports
>> #http_access allow CONNECT SSL_ports
>> http_access allow localhost manager
>> http_access allow site_squid_art
>> http_access allow localhost
>>

I see no 'localnet' ACL use. If this proxy is supposed to be servicing
LAN clients, that will be needed and the keepgoing and artwork ACLs
probably not needed.

The final "http_access deny all" is missing as well. Squid is just doing
that implicitly anyway, so it is mostly there to remind you of what is
happening and to prevent mistakes later that implicitly allow lots of
unexpected things through the proxy.
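
As a sketch, the conventional skeleton (substitute your real LAN range
for the example one) would be:

  acl localnet src 192.168.0.0/16

  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow localhost manager
  http_access deny manager
  http_access allow localnet
  http_access allow localhost
  http_access deny all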


>>
>> http_port 3132
>>
>>
>> access_log /var/log/squid3/squid3132.log squid
>>
>> pid_filename /var/run/squid3132.pid
>> coredump_dir /var/spool/squid3
>>
>> refresh_pattern ^ftp: 1440 20% 10080
>> refresh_pattern ^gopher: 1440 0% 1440
>> #refresh_pattern -i (/cgi-bin/|\?) 0 0% 0

FYI: The commented-out line above is rather critical to correct
behaviour for dynamic web content.

If the server is not producing the required cache controls, dynamically
changing data should not be stored for even one second, let alone
the default 7 days.

>> refresh_pattern (Release|Packages(.gz)*)$  0   20% 2880
>> #refresh_pattern . 0 20% 4320
>>

What's the point of commenting that out?
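
For reference, the stock squid.conf ships both of those patterns
uncommented:

  refresh_pattern -i (/cgi-bin/|\?) 0  0%   0
  refresh_pattern .                 0  20%  4320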

Amos


Re: [squid-users] Dst and dstdomain ACLs

2017-01-20 Thread Amos Jeffries
On 21/01/2017 3:19 a.m., cred...@eml.cc wrote:
> On Fri, Jan 20, 2017, at 01:42 AM, Amos Jeffries wrote:
>> On 20/01/2017 3:01 p.m., creditu wrote:
>>> Had a question about dst and dstdomain acls.  Given the sample below:
>>>
>>> http_port 192.168.100.1:80 accel defaultsite=www.example.com vhost
>>> acl www dstdomain www.example.com dev.example.com
>>> cache_peer 10.10.10.1 parent 80 0 no-query no-digest originserver
>>> round-robin
>>> cache_peer_access 10.10.10.1 allow www
>>> cache_peer_access 10.10.10.1 deny all
>>> ...
>>> http_access allow www
>>> http_access deny all
>>>
>>> When someone tries to access the site by specifying an IP
>>> (192.168.100.1) instead of the name the client gets a standard access
>>> denied squid page.
>>
>> What is the rDNS for 192.168.100.1 ?
> 
> Shoot, and thanks.  It's an rDNS issue.  We were using vport in a previous
> config and it may not have been noticed because of that.
> 
>>
>> The dstdomain you have configured matches only the exact two domains
>> listed.
>>
>>>  It seems that a separate acl needs to be defined for
>>> when someone tries to access the site using an IP?  For instance:
>>> acl dst www_ip 192.168.100.1
>>
>> You could add the raw-IP to the www ACL:
>>  acl www dstdomain -n 192.168.100.1
>>
>>  ... but what will 10.10.10.1 do when asked for the site hosted at
>> 192.168.100.1 ?
> 
> 10.10.10.1 doesn't allow it, so might as well stop at squid. So, is the
> best way to create an ACL, deny cache peer access, and then do
> something with deny_info?  Something like:
> 
> acl www_ip dstdomain -n 192.168.100.1
> cache_peer_access 10.10.10.1 deny www_ip
> 
> deny_info http:// www_ip
> http_access deny www_ip
> 

Pretty much. But without the cache_peer_access bit. The denied request
never gets near the cache_peer.
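
So the end result would be a sketch along these lines (the deny_info
redirect URL is a placeholder):

  acl www_ip dstdomain -n 192.168.100.1
  deny_info http://www.example.com/ www_ip
  http_access deny www_ip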

Amos



Re: [squid-users] HTTPS site filtering

2017-01-20 Thread roadrage27
I was able to resolve my issue partially.  I burned down the server and
rebuilt it clean, so all previous changes made while attempting to make
SSL work were gone.  Once I reloaded squid and the config files I was able
to allow SSL traffic using the dstdomain acl type.  I currently have a few
URLs of the regex type that need to be allowed, so I'm currently cranking
out those.







Re: [squid-users] SSL Bump

2017-01-20 Thread Antony Stone
On Friday 20 January 2017 at 17:12:04, Mustafa Mohammad wrote:

> What are the steps to setup SSL Bump?

Don't.

Use peek and splice instead.

See http://wiki.squid-cache.org/Features/SslBump for info, then 
http://wiki.squid-cache.org/Features/SslPeekAndSplice for guidance.
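
A minimal peek-and-splice sketch (the certificate path is a placeholder;
the wiki pages above cover the details that matter):

  http_port 3128 ssl-bump cert=/etc/squid/example-ca.pem

  acl step1 at_step SslBump1
  ssl_bump peek step1
  ssl_bump splice all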


Antony.

-- 
If at first you don't succeed, destroy all the evidence that you tried.

   Please reply to the list;
 please *don't* CC me.


Re: [squid-users] SSL Bump

2017-01-20 Thread Giles Coochey



On 20/01/17 16:12, Mustafa Mohammad wrote:

What are the steps to setup SSL Bump?

http://lmgtfy.com/?iie=1&q=What+are+the+steps+to+setup+SSL+Bump%3F

--
Regards,

Giles Coochey
+44 (0) 7584 634 135
+44 (0) 1803 529 451
gi...@coochey.net






[squid-users] SSL Bump

2017-01-20 Thread Mustafa Mohammad
What are the steps to setup SSL Bump?


Thanks,

Mustafa Mohammad


Re: [squid-users] squid reverse proxy (accelerator) for MS Exchange OWA

2017-01-20 Thread Alex Rousskov
On 01/20/2017 02:13 AM, Amos Jeffries wrote:

> The key part is the "Error negotiating SSL on FD 16:
> error::lib(0):func(0):reason(0) (5/0/0)"
> 
> Which is OpenSSL's very obtuse way of telling Squid "an error
> happened". With no helpful details about what error it was.

Actually, this is Squid's very obtuse way of telling us that the peer
closed the connection while violating the SSL protocol (i.e., a
protocol-violating EOF during an SSL_connect() network read).

OpenSSL error reporting is ugly indeed, but we should not blame it for
our own lack of code to render OpenSSL-supplied details in a
human-friendly way (or for losing critical information along the way).


Cheers,

Alex.



Re: [squid-users] HTTPS site filtering

2017-01-20 Thread roadrage27
>That tells me either you have screwed up the CONNECT ACL definition. Or
>the SSL_ports one.
Very possible as I'm pretty green on squid; my current conf file is below.
With that conf the SSL sites just sit and spin until they eventually time
out.

acl site_squid_art url_regex ^http://www.squid-cache.org/Artwork
acl keepgoing dstdomain .plateau.com .skillwsa.com .successfactors.com

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access allow keepgoing
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
#http_access allow CONNECT SSL_ports
http_access allow localhost manager
http_access allow site_squid_art
http_access allow localhost


http_port 3132


access_log /var/log/squid3/squid3132.log squid

pid_filename /var/run/squid3132.pid
coredump_dir /var/spool/squid3

refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
#refresh_pattern -i (/cgi-bin/|\?) 00%  0
refresh_pattern (Release|Packages(.gz)*)$  0   20% 2880
#refresh_pattern .  0   20% 4320





Re: [squid-users] Dst and dstdomain ACLs

2017-01-20 Thread creditu
On Fri, Jan 20, 2017, at 01:42 AM, Amos Jeffries wrote:
> On 20/01/2017 3:01 p.m., creditu wrote:
> > Had a question about dst and dstdomain acls.  Given the sample below:
> > 
> > http_port 192.168.100.1:80 accel defaultsite=www.example.com vhost
> > acl www dstdomain www.example.com dev.example.com
> > cache_peer 10.10.10.1 parent 80 0 no-query no-digest originserver
> > round-robin
> > cache_peer_access 10.10.10.1 allow www
> > cache_peer_access 10.10.10.1 deny all
> > ...
> > http_access allow www
> > http_access deny all
> > 
> > When someone tries to access the site by specifying an IP
> > (192.168.100.1) instead of the name the client gets a standard access
> > denied squid page.
> 
> What is the rDNS for 192.168.100.1 ?

Shoot, and thanks.  It's an rDNS issue.  We were using vport in a previous
config and it may not have been noticed because of that.

> 
> The dstdomain you have configured matches only the exact two domains
> listed.
> 
> >  It seems that a separate acl needs to be defined for
> > when someone tries to access the site using an IP?  For instance:
> > acl dst www_ip 192.168.100.1
> 
> You could add the raw-IP to the www ACL:
>  acl www dstdomain -n 192.168.100.1
> 
>  ... but what will 10.10.10.1 do when asked for the site hosted at
> 192.168.100.1 ?

10.10.10.1 doesn't allow it, so might as well stop at squid. So, is the
best way to create an ACL, deny cache peer access, and then do
something with deny_info?  Something like:

acl www_ip dstdomain -n 192.168.100.1
cache_peer_access 10.10.10.1 deny www_ip

deny_info http:// www_ip
http_access deny www_ip



Re: [squid-users] Users inserted incorrectly in access.log

2017-01-20 Thread Eduardo Carneiro
Amos Jeffries wrote
> Please start by selecting one of round-robin and sourcehash. They are
> very different selection algorithms.
> 
> Given that Kerberos auth requires HTTP/1 multiplexing to be disabled for
> the auth to work I suggest that you drop the round-robin. It forces
> multiplexing to be used.
> 
> If the problem still remains try adding the connection-auth=on to those
> Squid's listening ports as well.

Hi Amos, I disabled round-robin on my frontend server and added the
connection-auth=on config on my parents as you suggested, but the
problem persists.  I use squid 3.5.19.

I have a system I developed myself that inserts the squid logs into a
postgres database and shows them on a PHP page. So, if the accesses are
logged with the wrong user, you can imagine the disorder that causes. None
of the system's reports can be trusted.

Thanks in advance.
Eduardo





Re: [squid-users] squid reverse proxy (accelerator) for MS Exchange OWA

2017-01-20 Thread Amos Jeffries
On 20/01/2017 10:44 p.m., Vieri wrote:
> 
> - Original Message -
> From: Amos Jeffries
> 
>> Firstly remove the ssloptions=ALL from your config.
>>
> 
>> Traffic should be able to go through at that point.
> 
> Thanks for the feedback.
> 
> I tried it again, but this time with a non-OWA IIS HTTPS server.
> 
> Here's the squid.conf:
> 


> 
> I'm not getting any useful debug information, at least none that I can
> understand.
> 
> Maybe I should rebuild Squid?
> 

You could try with a newer Squid version, since the bio.cc code might be
making something else happen in 3.5.23. If that still fails, the 4.0 beta
has different logic and far better debug info in this area.

Amos



Re: [squid-users] Will squid core dump with worker threads? Investigating squid crash, 3.5.23

2017-01-20 Thread Amos Jeffries
On 19/01/2017 10:13 p.m., squid wrote:
> 
>>>
>>> assertion failed: MemBuf.cc:216: "0 <= tailSize && tailSize <= cSize"
>>>
>>
>> This is . We have
> 
> 
> Is there a workaround for this - something that I can put in the config
> perhaps?  I'm getting the same issue a few times a day.  I suspect it's
> mainly due to clients accessing Windows Updates, but difficult to tell.
> 
> I am automatically restarting squid, but the delays for other users
> while all this is happening can generate a poor browsing experience.
> 

All that is known is in that bug report, sorry.

If you can assist with the debugging to find out the cause it would be a
great step toward a fix.

Amos



Re: [squid-users] squid 3.5.23 memory usage

2017-01-20 Thread Amos Jeffries
On 20/01/2017 1:23 p.m., Ivan Larionov wrote:
> Hello.
> 
> I'm pretty sure this question has been asked multiple times already, but
> after reading everything I found I still can't figure out squid memory
> usage patterns.
> 
> We're currently trying to upgrade from squid 2.7 to squid 3.5 and memory
> usage on squid 3 is much much higher compared to squid 2 with the same
> configuration.

One thing to be aware of with this big step in versions is that 3.x has
a lot more things 64-bit enabled where 2.x was more 32-bit oriented. It
is minor in any one place, but does add up when dealing with large
numbers of objects.


> 
> What do I see:
> 
> squid running for several days with low traffic:
> 
> # top
>  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
>  7367 squid 20   0 4780m 4.4g 5224 S  6.0 60.6 105:01.76 squid -N
> 
> So it uses 4.4GB resident memory. Ok, let's see important config options:
> 
> cache_mem 2298756 KB
> maximum_object_size_in_memory 8 KB
> memory_replacement_policy lru
> cache_replacement_policy lru
> 
> cache_dir aufs /mnt/services/squid/cache 445644 16 256
> 
> minimum_object_size 64 bytes # none-zero so we dont cache mistakes
> maximum_object_size 102400 KB
> 
> So we configured 2.2GB memory cache and 500GB disk cache. Disk cache is
> quite big but current usage is only 3GB:
> 
> # du -sh /mnt/services/squid/cache # cache_dir
> 3.0G  /mnt/services/squid/cache
> 
> Now I'm looking into this page
> http://wiki.squid-cache.org/SquidFaq/SquidMemory and see:
> 
> 14 MB of memory per 1 GB on disk for 64-bit Squid
> 

These wiki numbers are based on an average object size of 32KB.

By setting "maximum_object_size_in_memory 8 KB" you reduce that by 3x, so
you need to multiply the per-object overhead by 3 (3x more objects in the
same space) for the cache_mem value to get a better estimate.

So,
 ... up to 100 MB of index for cache_mem
 ... up to 6 GB of index for cache_dir
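
Roughly, with the numbers from your config:

  cache_mem: 2298756 KB ~= 2.2 GB x 14 MB/GB x 3 ~= 90-100 MB of index
  cache_dir: 445644 MB  ~= 435 GB x 14 MB/GB     ~= 6 GB of index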

Another difference is that the buffers in Squid-3 are a bit bigger than
those used in Squid-2: up to 256 KB per FD (Squid-2 stopped at 64KB).
 BUT, your pool details below show only the 16KB buffer seeing much use.
So I doubt it is client-connection related.



> Which means disk cache should use ~50MB of RAM.
> 
> All these means we have ~2.2GB ram used for everything else except
> cache_mem and disk cache index.

No, the index is also in that 2.2 GB which is not being used by the
cache_mem.


> 
> Let's see top pools from mgr:mem:
> 
> Pool                 (KB)     %Tot
> mem_node             2298833  55.082
> Short Strings        622365   14.913
> HttpHeaderEntry      404531   9.693
> Long Strings         284520   6.817
> MemObject            182288   4.368
> HttpReply            155612   3.729
> StoreEntry           73965    1.772
> Medium Strings       71152    1.705
> cbdata MemBuf (12)   35573    0.852
> LRU policy node      30403    0.728
> MD5 digest           11380    0.273
> 16K Buffer           1056     0.025
> 
> These pools consume ~35% of total squid memory usage: Short Strings,
> HttpHeaderEntry, Long Strings, HttpReply. Looks suspicious. On squid 2 the
> same pools use 10 times less memory.


The mem_node is the cache_mem space itself, plus active transactions data.

The StoreEntry is the index entry for each object (cache_dir, cache_mem
and in-transit).
The MemObject is the index entry for each in-memory object (cache_mem
and in-transit).
The HttpReply are those cached objects in parsed format.
The HttpHeaderEntry are all the headers in those reply objects.
The various Strings are the individual words/lines etc in those headers.

So we are down to sub-1% pools by the time we are done eliminating data
stored in cache_mem objects and active transaction data.


> 
> I found a bug which looks similar to our experience:
> http://bugs.squid-cache.org/show_bug.cgi?id=4084.
> 

Since you have configured your cache_mem to be 2.2 GB and total memory
usage is 4.4 GB the report saying 55% of memory is used for mem_node
looks fine to me. 50% of 4.4 GB is your 2.2 GB cache_mem setting, and
the extra 5% is probably active transactions and maybe some nodes for
the cache_dir data.

So I don't think it is the issue I mentioned in comment 8.
That said, we have not fully identified what that bug's problem was.


> I'm attaching our config, mgr:info, mgr:mem and some system info I
> collected.
> 
> Could someone say if this is normal and why it's so much different from
> squid 2?
> 

Well, tentatively yes. The Squid-3 numbers all look reasonably
accurate. So there is no obvious sign of any problem from this one point
in time.

But if it worries you keep an eye on it for a week or so and see if
anything starts to skew. Graphs like Martin had in comment 5 on that bug
report would be a good indicator of whether there is a problem or just a
new "normal" level.

Amos


Re: [squid-users] squid reverse proxy (accelerator) for MS Exchange OWA

2017-01-20 Thread Vieri




- Original Message -
From: Amos Jeffries 

> Firstly remove the ssloptions=ALL from your config.
> 

> Traffic should be able to go through at that point.

Thanks for the feedback.

I tried it again, but this time with a non-OWA IIS HTTPS server.

Here's the squid.conf:

https_port 10.215.144.91:35443 accel cert=/etc/ssl/squid/cert.cer 
key=/etc/ssl/squid/key.pem defaultsite=www.mydomain.org

cache_peer 10.215.144.66 parent 443 0 no-query originserver login=PASS ssl 
sslcert=/etc/ssl/squid/client.cer sslkey=/etc/ssl/squid/client_key.pem 
front-end-https=on name=httpsServer

acl HTTPSACL dstdomain www.mydomain.org
cache_peer_access httpsServer allow HTTPSACL
never_direct allow HTTPSACL

http_access allow HTTPSACL
http_access deny all

And here's the log when trying to connect from a web browser:

2017/01/20 10:31:06.724 kid1| 5,3| comm.cc(553) commSetConnTimeout: 
local=10.215.144.91:57753 remote=10.215.144.66:443 FD 14 flags=1 timeout 30
2017/01/20 10:31:06.724 kid1| 5,5| ModEpoll.cc(116) SetSelect: FD 14, type=1, 
handler=1, client_data=0x80cb86e0, timeout=0
2017/01/20 10:31:06.724 kid1| 93,5| AsyncJob.cc(152) callEnd: 
Ssl::PeerConnector status out: [ FD 14 job16]
2017/01/20 10:31:06.724 kid1| 93,5| AsyncCallQueue.cc(57) fireNext: leaving 
AsyncJob::start()
2017/01/20 10:31:06.724 kid1| 83,5| bio.cc(118) read: FD 14 read 0 <= 7
2017/01/20 10:31:06.724 kid1| Error negotiating SSL on FD 14: 
error::lib(0):func(0):reason(0) (5/0/0)
2017/01/20 10:31:06.724 kid1| TCP connection to 10.215.144.66/443 failed
2017/01/20 10:31:06.724 kid1| 5,5| comm.cc(1038) comm_remove_close_handler: 
comm_remove_close_handler: FD 14, AsyncCall=0x80cd0ff8*2
2017/01/20 10:31:06.724 kid1| 9,5| AsyncCall.cc(56) cancel: will not call 
Ssl::PeerConnector::commCloseHandler [call117] because comm_remove_close_handler
2017/01/20 10:31:06.724 kid1| 17,4| AsyncCall.cc(93) ScheduleCall: 
PeerConnector.cc(742) will call FwdState::ConnectedToPeer(0x80cae868, 
local=10.215.144.91:57753 remote=10.215.144.66:443 FD 14 flags=1, 
0x80cd0ed0/0x80cd0ed0) [call115]
2017/01/20 10:31:06.724 kid1| 93,5| AsyncJob.cc(137) callEnd: 
Ssl::PeerConnector::negotiateSsl() ends job [ FD 14 job16]
2017/01/20 10:31:06.724 kid1| 83,5| PeerConnector.cc(58) ~PeerConnector: Peer 
connector 0x80cb86e0 gone
2017/01/20 10:31:06.724 kid1| 93,5| AsyncJob.cc(40) ~AsyncJob: AsyncJob 
destructed, this=0x80cb8704 type=Ssl::PeerConnector [job16]
2017/01/20 10:31:06.725 kid1| 17,4| AsyncCallQueue.cc(55) fireNext: entering 
FwdState::ConnectedToPeer(0x80cae868, local=10.215.144.91:57753 
remote=10.215.144.66:443 FD 14 flags=1, 0x80cd0ed0/0x80cd0ed0)
2017/01/20 10:31:06.725 kid1| 17,4| AsyncCall.cc(38) make: make call 
FwdState::ConnectedToPeer [call115]
2017/01/20 10:31:06.725 kid1| 17,3| FwdState.cc(415) fail: 
ERR_SECURE_CONNECT_FAIL "Service Unavailable"

I'm not getting any useful debug information, at least none that I can
understand.

Maybe I should rebuild Squid?

# squid -v
Squid Cache: Version 3.5.14
Service Name: squid
configure options:  '--prefix=/usr' '--build=i686-pc-linux-gnu' 
'--host=i686-pc-linux-gnu' '--mandir=/usr/share/man' 
'--infodir=/usr/share/info' '--datadir=/usr/share' '--sysconfdir=/etc' 
'--localstatedir=/var/lib' '--disable-dependency-tracking' 
'--disable-silent-rules' '--libdir=/usr/lib' '--sysconfdir=/etc/squid' 
'--libexecdir=/usr/libexec/squid' '--localstatedir=/var' 
'--with-pidfile=/run/squid.pid' '--datadir=/usr/share/squid' 
'--with-logdir=/var/log/squid' '--with-default-user=squid' 
'--enable-removal-policies=lru,heap' '--enable-storeio=aufs,diskd,rock,ufs' 
'--enable-disk-io' 
'--enable-auth-basic=MSNT-multi-domain,NCSA,POP3,getpwnam,SMB,LDAP,PAM,RADIUS' 
'--enable-auth-digest=file,LDAP,eDirectory' '--enable-auth-ntlm=smb_lm' 
'--enable-auth-negotiate=kerberos,wrapper' 
'--enable-external-acl-helpers=file_userip,session,unix_group,wbinfo_group,LDAP_group,eDirectory_userip,kerberos_ldap_group'
 '--enable-log-daemon-helpers' '--enable-url-rewrite-helpers' 
'--enable-cache-digests' '--enable-delay-pools' '--enable-eui' '--enable-icmp' 
'--enable-follow-x-forwarded-for' '--with-large-files' 
'--disable-strict-error-checking' '--disable-arch-native' 
'--with-ltdl-includedir=/usr/include' '--with-ltdl-libdir=/usr/lib' 
'--with-libcap' '--enable-ipv6' '--disable-snmp' '--with-openssl' 
'--with-nettle' '--with-gnutls' '--enable-ssl-crtd' '--disable-ecap' 
'--disable-esi' '--enable-htcp' '--enable-wccp' '--enable-wccpv2' 
'--enable-linux-netfilter' '--with-mit-krb5' '--without-heimdal-krb5' 
'build_alias=i686-pc-linux-gnu' 'host_alias=i686-pc-linux-gnu' 
'CC=i686-pc-linux-gnu-gcc' 'CFLAGS=-O2 -march=i686 -pipe' 'LDFLAGS=-Wl,-O1 
-Wl,--as-needed' 'CXXFLAGS=-O2 -march=i686 -pipe' 
'PKG_CONFIG_PATH=/usr/lib/pkgconfig'

Thanks,

Vieri


Re: [squid-users] Native FTP relay: connection closes (?) after 'cannot assign requested address' error

2017-01-20 Thread Amos Jeffries
On 20/01/2017 9:40 p.m., Alexander wrote:
> Hello, I have a question regarding a native FTP relay (squid's version is
> 3.5.23).

Have you tried NAT intercept for the FTP port?
TPROXY has some low-level things including socket mapping that might not
go so well with how FTP uses multiple connections.
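
As a sketch, the NAT variant would replace the TPROXY rule for port 21
with a REDIRECT (port numbers as in your setup):

  iptables -t nat -A PREROUTING -p tcp --dport 21 -j REDIRECT --to-ports 2121

and switch the relay port mode in squid.conf:

  ftp_port 2121 intercept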

Amos



Re: [squid-users] squid reverse proxy (accelerator) for MS Exchange OWA

2017-01-20 Thread Amos Jeffries
On 20/01/2017 1:03 p.m., Vieri wrote:
> Hi,
> 
> I'm trying to set up Squid as a reverse proxy on a host with IP address 
> 10.215.144.91 so that web browsers can connect to it on port 443 and request 
> pages from an OWA server at 10.215.144.21:443.
> 
> I have this in my squid.conf:
> 
> https_port 10.215.144.91:443 accel cert=/etc/ssl/squid/owa_cert.cer 
> key=/etc/ssl/squid/owa_key.pem defaultsite=webmail2.mydomain.org
> 
> cache_peer 10.215.144.21 parent 443 0 no-query originserver login=PASS ssl 
> sslcert=/etc/ssl/squid/client.cer sslkey=/etc/ssl/squid/client_key.pem 
> ssloptions=ALL sslflags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN name=owaServer
> # cache_peer 10.215.144.21 parent 80 0 no-query originserver login=PASS 
> front-end-https=on name=owaServer
> 

The ssloptions and sslflags values are bad for debugging.
 * The ssloptions=ALL may be adding unknown new issues OWA does not like,
from all the insecure things it turns on in Squid's TLS handshake.
 * The sslflags disabling of verify actions is hiding from you any issues
Squid will have with the OWA handshake security settings.

Knowing what all that hidden stuff does is important to finding the
right fixes or workarounds.



> acl OWA dstdomain webmail2.mydomain.org
> cache_peer_access owaServer allow OWA
> never_direct allow OWA
> 
> http_access allow OWA
> http_access deny all
> miss_access allow OWA
> miss_access deny all

The miss_access is not useful. Your earlier *_access rules already
prevent the unwanted things happening.

> 
> Note that if I comment out the "cache_peer parent 443" line above and 
> uncomment the "cache_peer parent 80" line then the web browser client 
> successfully connects and can view the OWA pages after logging in.
> 
> However, the connection fails if I use 443 between squid at 10.215.144.91 and 
> the OWA backend at 10.215.144.21. The client views a Squid error page with an 
> SSL handshake error.
> 
> Here's the cache log when I try to connect with a client:
> 
> 2017/01/20 00:10:42.284 kid1| Error negotiating SSL on FD 16: 
> error::lib(0):func(0):reason(0) (5/0/0)
> 2017/01/20 00:10:42.284 kid1| TCP connection to 10.215.144.21/443 failed
> 2017/01/20 00:10:42.285 kid1| 5,5| comm.cc(1038) comm_remove_close_handler: 
> comm_remove_close_handler: FD 16, AsyncCall=0x80d93a00*2
> 2017/01/20 00:10:42.285 kid1| 9,5| AsyncCall.cc(56) cancel: will not call 
> Ssl::PeerConnector::commCloseHandler [call453] because 
> comm_remove_close_handler
> 2017/01/20 00:10:42.285 kid1| 17,4| AsyncCall.cc(93) ScheduleCall: 
> PeerConnector.cc(742) will call FwdState::ConnectedToPeer(0x80d8b9f0, 
> local=10.215.144.91:55948 remote=10.215.144.21:443 FD 16 flags=1, 
> 0x809d49a0/0x809d49a0) [call451]
> 2017/01/20 00:10:42.285 kid1| 93,5| AsyncJob.cc(137) callEnd: 
> Ssl::PeerConnector::negotiateSsl() ends job [ FD 16 job42]
> 2017/01/20 00:10:42.285 kid1| 83,5| PeerConnector.cc(58) ~PeerConnector: Peer 
> connector 0x80d8b590 gone
> 2017/01/20 00:10:42.285 kid1| 93,5| AsyncJob.cc(40) ~AsyncJob: AsyncJob 
> destructed, this=0x80d8b5b4 type=Ssl::PeerConnector [job42]
> 2017/01/20 00:10:42.285 kid1| 17,4| AsyncCallQueue.cc(55) fireNext: entering 
> FwdState::ConnectedToPeer(0x80d8b9f0, local=10.215.144.91:55948 
> remote=10.215.144.21:443 FD 16 flags=1, 0x809d49a0/0x809d49a0)
> 2017/01/20 00:10:42.285 kid1| 17,4| AsyncCall.cc(38) make: make call 
> FwdState::ConnectedToPeer [call451]
> 2017/01/20 00:10:42.285 kid1| 17,3| FwdState.cc(415) fail: 
> ERR_SECURE_CONNECT_FAIL "Service Unavailable"
> https://webmail2.mydomain.org/Exchange2/
> 2017/01/20 00:10:42.285 kid1| TCP connection to 10.215.144.21/443 failed
> 
> I don't understand the "Service Unavailable" bit above.

It is Squid telling the client it cannot proxy the request, because the
cache_peer failed. Just one of the symptoms of whatever the problem is.

The key part is the "Error negotiating SSL on FD 16:
error::lib(0):func(0):reason(0) (5/0/0)"

Which is OpenSSL's very obtuse way of telling Squid "an error
happened". With no helpful details about what error it was.

> I can connect just fine from the command line on the squid server at 
> 10.215.144.91 as you can see below.
> 
> # wget --no-check-certificate -O -  https://10.215.144.21 
> --2017-01-20 00:41:10--  https://10.215.144.21/
> Connecting to 10.215.144.21:443... connected.
> WARNING: cannot verify 10.215.144.21's certificate, issued by 
> '/C=xx/ST=xx/O=xx/OU=xx/CN=xxx/emailAddress=x...@xx.xxx':
> Unable to locally verify the issuer's authority.
> WARNING: certificate common name 'XYZ' doesn't match requested host name 
> '10.215.144.21'.

Firstly remove the ssloptions=ALL from your config.

Traffic should be able to go through at that point. But don't take that
as "working"; the TLS layer is not in any way secure yet.
 - if not, try the front-end-https setting I mentioned earlier.

Then add an sslcafile= option to tell Squid the CA cert which signed the
OWA server's certificate.

Then remove the sslflags option. 
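
Putting those three steps together, the cache_peer line should end up
looking something like this (the sslcafile path is a placeholder for
wherever you keep the CA certificate that signed the OWA server cert):

cache_peer 10.215.144.21 parent 443 0 no-query originserver login=PASS ssl \
    sslcert=/etc/ssl/squid/client.cer sslkey=/etc/ssl/squid/client_key.pem \
    sslcafile=/etc/ssl/squid/owa-ca.pem name=owaServer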

Re: [squid-users] HTTPS site filtering

2017-01-20 Thread Amos Jeffries
On 20/01/2017 9:32 a.m., roadrage27 wrote:
> I was able to solve my previous issue of no connections and now have a
> working squid along with http site filtering and regex working nicely.
> 
> My current issue is the need to allow only certain sites which do include
> some HTTPS sites.  If I leave the line
> 
> http_access deny CONNECT !SSL_ports
> 
> within my conf file, no HTTPS traffic works,

That tells me either you have screwed up the CONNECT ACL definition. Or
the SSL_ports one.

I suspect that whatever you have done is making HTTPS no longer use port
443. That needs to be fixed.


> commenting it out and putting
> in
> 
> http_access allow CONNECT SSL_ports 
> 
> allows SSL but it allows all sites that are available to work with SSL to be
> accessed.  
>

Quite. The security protection intended by that rule is to deny the
identifiably bad things and let your custom rules that follow decide
what is allowed.


> Is there a way to limit this access with an ACL, and if so what is the
> syntax?

The required syntax is the default:

 acl SSL_Ports port 443
 acl CONNECT method CONNECT
 http_access deny CONNECT !SSL_Ports

Since you say that is not working, the problem is elsewhere and ACL
definition will not solve the breakage.

If you still need help, we will need to see what your squid.conf
contains currently. And if you are intercepting, the rules used for
doing that.

Amos



Re: [squid-users] Dst and dstdomain ACLs

2017-01-20 Thread Amos Jeffries
On 20/01/2017 3:01 p.m., creditu wrote:
> Had a question about dst and dstdomain acls.  Given the sample below:
> 
> http_port 192.168.100.1:80 accel defaultsite=www.example.com vhost
> acl www dstdomain www.example.com dev.example.com
> cache_peer 10.10.10.1 parent 80 0 no-query no-digest originserver
> round-robin
> cache_peer_access 10.10.10.1 allow www
> cache_peer_access 10.10.10.1 deny all
> ...
> http_access allow www
> http_access deny all
> 
> When someone tries to access the site by specifying an IP
> (192.168.100.1) instead of the name the client gets a standard access
> denied squid page.

What is the rDNS for 192.168.100.1 ?

The dstdomain you have configured matches only the exact two domains
listed.

>  It seems that a separate acl needs to be defined for
> when someone tries to access the site using an IP?  For instance:
> acl dst www_ip 192.168.100.1

You could add the raw-IP to the www ACL:
 acl www dstdomain -n 192.168.100.1

 ... but what will 10.10.10.1 do when asked for the site hosted at
192.168.100.1 ?


>  
> If we wanted to pass to the backend we would need to add a extra
> cache_peer_access statement
>  cache_peer_access 10.10.10.1 allow www_ip
> 
> Then add:
> http_access allow www_ip
> 
> Is that correct?

Not for matching a raw-IP. The dst type will also match any domain name
that resolves to the given IP.

If you want an ACL that matches the textual representation of the raw-IP
you need to use dstdomain with the -n (no DNS lookup) flag, or the
dstdom_regex type.
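
For example (sketches of both forms):

  acl www_ip dstdomain -n 192.168.100.1
  acl www_ip dstdom_regex ^192\.168\.100\.1$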

>  If we wanted to not allow IP based requests we would
> still define the acl and use a http_access deny www_ip  and then use
> deny_info to redirect or send a TCP Reset?

That is another way, and somewhat better than just passing the raw-IP
URLs through to the backend server.


Amos




[squid-users] Native FTP relay: connection closes (?) after 'cannot assign requested address' error

2017-01-20 Thread Alexander
Hello, I have a question regarding native FTP relay (Squid version
3.5.23).

I've tried to test this feature like this:

[Filezilla Client, 1.1.1.2] <-> [ Router: iptables + squid ]
<-> [vsftpd server, 5.5.5.10]

The router is CentOS 6.5 machine. Firewall settings are:

ip route flush table 100
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 0x01/0x01
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 21 -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 2121
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 3128

No other rules are defined and default policy for INPUT/OUTPUT/FORWARD is
ACCEPT. The rp_filter is disabled.

Squid's configuration file is attached.

With HTTP everything works fine; however, FTP causes a problem. A client
successfully connects and authenticates, but when it tries to execute LIST
or RETR (when a data connection should be established), Filezilla says
"Connection closed by server". Meanwhile squid says the following:

commBind: Cannot bind socket FD 17 to 1.1.1.2: (99) Cannot assign requested
address

What can be wrong with this setup?

