Re: [squid-users] Problems with NTLM authentication

2015-11-24 Thread Amos Jeffries
On 25/11/2015 4:44 a.m., Brendan Kearney wrote:
> On 11/24/2015 10:08 AM, Verónica Ovando wrote:
>> My Squid Version:  Squid 3.4.8
>>
>> OS Version:  Debian 8
>>
>> I have installed Squid on a server using Debian 8 and seem to have the
>> basics operating; at least when I start the squid service, I am
>> no longer getting any error messages.  At this time, the goal is to
>> authenticate users from Active Directory and log the user and the
>> websites they are accessing.

Please ensure you run "squid3 -k parse" to check whether anything minor
is still potentially a problem. I doubt it will help with the current
issue, but you may find some things that make it run more smoothly.

>>
>> I followed the official guide
>> http://wiki.squid-cache.org/ConfigExamples/Authenticate/Ntlm. I
>> verified that samba is properly configured, as the guide suggests, with
>> the basic helper in this way:
>>
>> # /usr/local/bin/ntlm_auth --helper-protocol=squid-2.5-basic
>> domain\user pass
>> OK
>>
>> Here is a part of my squid.conf where I defined my ACLs for the groups
>> in AD:
>>
>> 
>>
>> auth_param ntlm program /usr/local/bin/ntlm_auth
>> --helper-protocol=squid-2.5-ntlmssp --domain=DOMAIN.com
>> auth_param ntlm children 30

Try also using:
  auth_param ntlm keepalive off

>>
>> auth_param basic program /usr/local/bin/ntlm_auth
>> --helper-protocol=squid-2.5-basic
>> auth_param basic children 5
>> auth_param basic realm Servidor proxy-cache de mi Dominio
>> auth_param basic credentialsttl 2 hours
>>
>> external_acl_type AD_Grupos ttl=10 children=10 %LOGIN
>> /usr/lib/squid3/ext_wbinfo_group_acl -d
>>
>> acl AD_Standard external Grupos_AD Standard
>> acl AD_Exceptuados external Grupos_AD Exceptuados
>> acl AD_Bloqueados external Grupos_AD Bloqueados
>>
>> acl face url_regex -i "/etc/squid3/facebook"
>> acl gob url_regex -i "/etc/squid3/gubernamentales"
>>
>> http_access allow AD_Standard
>> http_access allow AD_Exceptuados !face !gob
>> http_access deny AD_Bloqueados
>> 
>>
>>
>> I tested using only the basic scheme (I commented the lines out for
>> NTLM auth) and every time I open the browser it asks me my user and
>> pass. And it works well because I can see in the access.log my
>> username and all the access policies defined are correctly applied.
>>

Good.

>> But if I use NTLM auth, the browser still shows me the pop-up (it must
>> not be shown) and if I enter my user and pass it keeps asking for them
>> until I cancel it.
>>
>> My access.log, in that case, shows a TCP_DENIED/407 as expected.

It should show one with Basic, and two with NTLM. Always.

The popup and 407 are different things.

* The 407 means the client is behaving and not broadcasting credentials
everywhere. Also Squid is now informing it that they do need to be sent
on this connection, using the Basic or NTLM scheme.

* The popup means the browser was unable to find credentials to answer
the 407 with. If some were sent earlier, the proxy rejected them.

 ... that includes the proxy rejecting via "deny AD_Bloqueados". Users
in group Bloqueados may keep getting the popup until they enter the
credentials of somebody else who is not in that group.
Add " all" to the right-hand end of the "deny AD_Bloqueados" line to
prevent that.
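
Applied to the access rules quoted above (ACL names taken from the
original post), the fixed lines would look like this:

```
http_access allow AD_Standard
http_access allow AD_Exceptuados !face !gob
http_access deny AD_Bloqueados all
```

With " all" appended, the deny becomes final for users matching
AD_Bloqueados, instead of triggering another authentication challenge.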


>>
>> What could be the problem? I suppose that both Kerberos and NTLM
>> protocols work together, I mean that they can live together in the
>> same environment and Kerberos is used by default.

You have not configured your Squid to offer Kerberos. Therefore it is not
an option the client can choose, and not part of the equation.

If the client is new enough software with no NTLM support (e.g. most MS
software written since Vista / ~2008), then lack of Kerberos may be the
problem. In that case it should fall back to using Basic.

If the client is pre-empting the initial 407 by sending Kerberos
credentials, it is broken.

FYI: Basic authentication is ironically more secure than NTLM these
days. Even the "secure" NTLMv2 extensions can now be decrypted given a
few hours. At least with Basic the software handling it assumes
insecurity and takes the necessary paranoid steps to protect the
credentials; most NTLM software does not.


>> How can I check that NTLM
>> is really working? Could it be a squid problem in the conf? Or maybe
>> AD is not allowing NTLM traffic?

NTLM does not work. It was designed broken. (Sorry, that is a joke, but
not far from the truth.)

>>
>> Sorry for my English. Thanks in advance.
>>

> make sure Internet Explorer is set to use Integrated Windows
> Authentication (IWA).  Tools --> Internet Options --> Advanced -->
> Security --> Enable Integrated Windows Authentication.

And be aware that sometimes random software on the machine will make
automated HTTP requests to the proxy using the machine's own AD account
credentials. Not a 

Re: [squid-users] 2 way SSL on a non standard SSL Port

2015-11-24 Thread Amos Jeffries
On 25/11/2015 11:41 a.m., Bart Spedden wrote:
> Hello,
> 
> I have a java application that is successfully making REST calls to a 3rd
> party vendor that requires 2 way SSL on port 8184 for some calls and 1 way
> SSL on port 8185 for other calls. However, when I start proxying the calls
> with squid all 1 and 2 way SSL calls fail.
> 

What is "X way SSL" ?

Squid 3.4 supports TLS, SSLv2, and SSLv3.


> I added ports 8184 and 8185 to both SSL_Ports and Safe_ports via the
> following:
> 
> acl SSL_ports port 8184
> 
> acl SSL_ports port 8185
> 
> acl Safe_ports port 8184
> 
> acl Safe_ports port 8185
> 

You don't need to add any ports 1025 or higher to Safe_ports. They are
already included in the range "1025-65535 # unregistered ports".

The change to SSL_ports is correct for allowing CONNECT to those ports.
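
Given that, the only change actually needed in squid.conf is the
SSL_ports addition; the Safe_ports lines for those two ports are
redundant:

```
acl SSL_ports port 8184
acl SSL_ports port 8185
```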

Squid is now relaying traffic between the client and server across blind
tunnels. It has ZERO interaction with them or the data sent.


That said, there are a few major bugs in CONNECT handling that have been
uncovered and fixed since the 3.4.3 release was made. Please try an
upgrade to the latest Squid-3.5 and see if the problem disappears.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] 2 way SSL on a non standard SSL Port

2015-11-24 Thread Eliezer Croitoru

Hey Bart,

What OS are you using? I have just pushed the latest (3.5.11) CentOS 
RPMs; details at: http://wiki.squid-cache.org/KnowledgeBase/CentOS .


Eliezer

On 25/11/2015 02:11, Amos Jeffries wrote:

That said, there are a few major bugs in CONNECT handling that have been
uncovered and fixed since 3.4.3 release was made. Please try an upgrade
to latest Squid-3.5 and see if the problem disappears.

Amos




Re: [squid-users] Host header forgery detected after upgrade from 3.5.8 to 3.5.9

2015-11-24 Thread Amos Jeffries
On 25/11/2015 12:20 p.m., Dan Charlesworth wrote:
> Thanks for the perspective on this, folks.
> 
> Going back to the technical stuff—and this isn’t really a squid thing—but is 
> there any way I can minimise this using my DNS server? 
> 
> Can I force my local DNS to only ever return 1 address from the pool on a 
> hostname I’m having trouble with?

That depends on your resolver, but I doubt it.

The DNS setup I mentioned in my last email to this thread is all I'm
aware of that gets even close to a fix.

Note that you may have to intercept the clients' port 53 traffic (both
UDP and TCP) to the resolver. That has implications with DNSSEC, but
should still work as long as you do not alter the DNS responses; the
resolver is just there to ensure the same result goes to both querying
parties.
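
As a sketch only (the interface name is an assumption, and the resolver
is assumed to run locally on the gateway), that interception could be
done with NAT rules like:

```
# redirect client DNS queries (UDP and TCP) to the local resolver
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53 -j REDIRECT --to-ports 53
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 53 -j REDIRECT --to-ports 53
```

The resolver answers normally, so Squid and the clients end up seeing
the same records for each name.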

Amos



Re: [squid-users] Host header forgery detected after upgrade from 3.5.8 to 3.5.9

2015-11-24 Thread Dan Charlesworth
Alright, thanks for the hint.

My proxy and clients definitely have the same DNS server (I removed the 
secondary and tertiary ones to make totally sure) but the results definitely 
aren’t matching 99% of the time. Probably more like 90%.

Perhaps it’s 'cause my clients are caching records locally or something? It 
does seem to improve as the day progresses, after joining the intercepted wifi 
network in the morning.

Super annoying though trying to post a comment on GitHub or something and it 
just hangs.

> On 25 Nov 2015, at 11:19 AM, Amos Jeffries  wrote:
> 
> On 25/11/2015 12:20 p.m., Dan Charlesworth wrote:
>> Thanks for the perspective on this, folks.
>> 
>> Going back to the technical stuff—and this isn’t really a squid thing—but is 
>> there any way I can minimise this using my DNS server? 
>> 
>> Can I force my local DNS to only ever return 1 address from the pool on a 
>> hostname I’m having trouble with?
> 
> That depends on your resolver, but I doubt it.
> 
> The DNS setup I mentioned in my last email to this thread is all I'm
> aware of that gets even close to a fix.
> 
> Note that you may have to intercept clients port 53 traffic (both UDP
> and TCP) to the resolver. That has implications with DNSSEC but should
> still work as long as you do not alter the DNS responses, the resolver
> is just there to ensure the same result goes to both querying parties.
> 
> Amos
> 



Re: [squid-users] [SOLVED] Transparent HTTPS Squid proxy with upstream parent

2015-11-24 Thread Michael Ludvig

On 24/11/15 18:26, Amos Jeffries wrote:

That is two separate and entirely different traffic types:

A) [client] -> HTTP--(NAT)--> [my_proxy]

B) [client] -> TLS--(NAT)--> [my_proxy]


(A) requires "http_port ... intercept ssl-bump cert=/path/to/cert"

(B) requires "https_port ... intercept ssl-bump cert=/path/to/cert"

Above is the minimum configuration. The generate-* etc. settings you
mention below are useful as well.

In order to impersonate the server you also need to fetch the server 
details (peek or stare at step2), then bump at step3.


Yay, that seems to work! Here is the working config for [my_proxy]:


http_port 3128
http_port 8080 intercept
https_port 8443 intercept ssl-bump generate-host-certificates=on \
dynamic_cert_mem_cache_size=4MB cert=/etc/squid/my-proxy.pem
sslproxy_options NO_SSLv2,NO_SSLv3,SINGLE_DH_USE
sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB
sslcrtd_children 5

acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3

#ssl_bump peek step1   # <- enabling this breaks it
ssl_bump stare step2
ssl_bump bump step3

cache_peer parent.example.com parent 3129 0 no-query ssl
never_direct allow all


And two iptables rules:

iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j REDIRECT \
--to-ports 8080
iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 443 -j REDIRECT \
--to-ports 8443


Now the clients can either go explicitly to the proxy on port 3128, or
those that don't support setting a proxy have [my_proxy] as their default 
gateway and the transparent proxy setup kicks in.


Thanks a lot Amos for your help!

Michael



[squid-users] 2 way SSL on a non standard SSL Port

2015-11-24 Thread Bart Spedden
Hello,

I have a java application that is successfully making REST calls to a 3rd
party vendor that requires 2 way SSL on port 8184 for some calls and 1 way
SSL on port 8185 for other calls. However, when I start proxying the calls
with squid all 1 and 2 way SSL calls fail.

I added ports 8184 and 8185 to both SSL_Ports and Safe_ports via the
following:

acl SSL_ports port 8184

acl SSL_ports port 8185

acl Safe_ports port 8184

acl Safe_ports port 8185

Here's a little config information

squid -v

Squid Cache: Version 3.4.3

Here's my full configuration:

#

# Recommended minimum configuration:

#


# Example rule allowing access from your local networks.

# Adapt to list your (internal) IP networks from where browsing

# should be allowed

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network

acl localnet src 172.16.0.0/12 # RFC1918 possible internal network

acl localnet src 192.168.0.0/16 # RFC1918 possible internal network

acl localnet src fc00::/7   # RFC 4193 local private network range

acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
machines


acl SSL_ports port 443

acl SSL_ports port 8184

acl SSL_ports port 8185

acl Safe_ports port 80 # http

acl Safe_ports port 21 # ftp

acl Safe_ports port 443 # https

acl Safe_ports port 70 # gopher

acl Safe_ports port 210 # wais

acl Safe_ports port 1025-65535 # unregistered ports

acl Safe_ports port 280 # http-mgmt

acl Safe_ports port 488 # gss-http

acl Safe_ports port 591 # filemaker

acl Safe_ports port 777 # multiling http

acl Safe_ports port 8184

acl Safe_ports port 8185

acl CONNECT method CONNECT


#

# Recommended minimum Access Permission configuration:

#

# Deny requests to certain unsafe ports

http_access deny !Safe_ports


# Deny CONNECT to other than secure SSL ports

http_access deny CONNECT !SSL_ports


# Only allow cachemgr access from localhost

http_access allow localhost manager

http_access deny manager


# We strongly recommend the following be uncommented to protect innocent

# web applications running on the proxy server who think the only

# one who can access services on "localhost" is a local user

#http_access deny to_localhost


#

# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

#


# Example rule allowing access from your local networks.

# Adapt localnet in the ACL section to list your (internal) IP networks

# from where browsing should be allowed

http_access allow localnet

http_access allow localhost


# And finally deny all other access to this proxy

http_access deny all


# Squid normally listens to port 3128

http_port 3128


# Uncomment and adjust the following to add a disk cache directory.

#cache_dir ufs /var/spool/squid 100 16 256


# Leave coredumps in the first cache dir

coredump_dir /var/spool/squid


#

# Add any of your own refresh_pattern entries above these.

#

refresh_pattern ^ftp: 1440 20% 10080

refresh_pattern ^gopher: 1440 0% 1440

refresh_pattern -i (/cgi-bin/|\?) 0 0% 0

refresh_pattern . 0 20% 4320

Any help is greatly appreciated!

Thanks!
-- 
Bart Spedden  |  Senior Developer
+1.720.210.7041  |  bart.sped...@3sharecorp.com
3|SHARE  |  Adobe Digital Marketing Experts  |  An Adobe® Business
Plus Level Solution Partner  |  Consulting  |  Training  |  Remote
Operations Management





Re: [squid-users] Duplicate Headers

2015-11-24 Thread Amos Jeffries
On 25/11/2015 6:58 a.m., Benjamin Reed wrote:
> Any idea how my X-Cache, X-Cache-Lookup, and Via: headers are getting
> messed up on my accelerator configuration?
>
> Here's the output from a sample HEAD request:
>
>
> http://paste.opennms.eu/?26c282e7abba631e#oqU/8pAmAUXHhMXPHhr9vWjJAA1FVcgn49W5BWO1vIs=
>

This is a forwarding loop of a slightly unusual kind:

When Squid received the request, it asked its peers who had ability to
reach the object. They all did (X-Cache-Lookup: HIT...), so it picked
the first responder and sent the request there.
Unfortunately the first responder was just another mirror, so when it
received that request ... it did exactly the same thing.

If any mirror sees itself listed in the Via header it will reject the
request with a forwarding loop error, and the mirror that sent the
request to it will move on to the next possible destination.

Eventually the origin will be reached. But possibly after having gone
through all mirrors or some large portion of them.


> The 4 systems are set up as cache peers to each other, with a parent
> host that contains all the upstream content.

Instead of "cache_peer_access X allow all" use:
  cache_peer_access X allow !mirrors

That will ensure that mirrors go to the origin for any request that was
received from another mirror. Mirrors will still be available as
alternative sources for requests sent by clients.
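
Applied to the configuration quoted in this thread, using the peer and
ACL names from that post, that would be:

```
cache_peer_access sf1 allow !mirrors
cache_peer_access uk1 allow !mirrors
cache_peer_access de1 allow !mirrors
```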


PS. you can also remove the "cache allow all" line. It does nothing.

Amos



Re: [squid-users] Duplicate Headers

2015-11-24 Thread Benjamin Reed
On 11/24/15 1:09 PM, Antony Stone wrote:
> squid.conf, minus blank lines and comments, please?

Here you go.  Each system is identical but with itself commented out of
the "cache_peer" and "cache_peer_access" lines.

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
 
acl our_sites dstdomain yum.opennms.org debian.opennms.org maven.opennms.org repo.opennms.org .mirrors.opennms.org .mirrors.opennms.com
acl mirrors src 45.55.163.22/32
acl mirrors src 2604:a880:800:10::60:4001/128
acl mirrors src 104.236.160.233/32
acl mirrors src 2604:a880:1:20::d6:7001/128
acl mirrors src 46.101.6.157/32
acl mirrors src 2a03:b0c0:1:d0::7a:7001/128
acl mirrors src 46.101.211.239/32
acl mirrors src 2a03:b0c0:3:d0::8a:6001/128
 
http_access deny !Safe_ports
 
http_access deny CONNECT
 
# manager access
http_access allow localhost manager
http_access deny manager
 
# proxy access
http_access allow our_sites
http_access allow localhost
http_access deny all
 
# peer access
icp_access allow mirrors
icp_access deny all
icp_port 3130
 
# cache access
cache allow all
 
http_port 80 accel defaultsite=www.mirrors.opennms.org vhost
http_port 8080 accel defaultsite=www.mirrors.opennms.org vhost
#http_port 3128 accel defaultsite=www.mirrors.opennms.org vhost
 
coredump_dir /var/spool/squid3
 
logfile_rotate 10
#cache_store_log stdio:/var/log/squid3/store.log
debug_options rotate=10

client_ip_max_connections 8
 
# how much to cache/keep
minimum_object_size 0
maximum_object_size 600 MB
minimum_expiry_time 60 seconds
refresh_pattern . 900 80% 604800
 
memory_cache_mode disk
memory_replacement_policy heap LFUDA
 
cache_replacement_policy heap LFUDA
cache_peer mirror.internal.opennms.com parent 80 0 no-query originserver name=myAccel
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all
 
#cache_peer ny-1.mirrors.opennms.org sibling 80 3130 name=ny1
cache_peer sf-1.mirrors.opennms.org sibling 80 3130 name=sf1
cache_peer uk-1.mirrors.opennms.org sibling 80 3130 name=uk1
cache_peer de-1.mirrors.opennms.org sibling 80 3130 name=de1
#cache_peer_access ny1 allow all
cache_peer_access sf1 allow all
cache_peer_access uk1 allow all
cache_peer_access de1 allow all
 
cache_dir aufs /var/spool/squid3/cache-small 2000 16 256 min-size=0 max-size=10
cache_dir aufs /var/spool/squid3/cache-large 14000 16 256 min-size=10 max-size=6
 
# cache 404s for 5 minutes
negative_ttl 300 seconds




[squid-users] Duplicate Headers

2015-11-24 Thread Benjamin Reed


Any idea how my X-Cache, X-Cache-Lookup, and Via: headers are getting
messed up on my accelerator configuration?

Here's the output from a sample HEAD request:

http://paste.opennms.eu/?26c282e7abba631e#oqU/8pAmAUXHhMXPHhr9vWjJAA1FVcgn49W5BWO1vIs=

The 4 systems are set up as cache peers to each other, with a parent
host that contains all the upstream content.



Re: [squid-users] Duplicate Headers

2015-11-24 Thread Antony Stone
On Tuesday 24 November 2015 at 18:58:01, Benjamin Reed wrote:

> Any idea how my X-Cache, X-Cache-Lookup, and Via: headers are getting
> messed up on my accelerator configuration?
> 
> Here's the output from a sample HEAD request:
> 
> http://paste.opennms.eu/?26c282e7abba631e#oqU/8pAmAUXHhMXPHhr9vWjJAA1FVcgn4
> 9W5BWO1vIs=
> 
> The 4 systems are set up as cache peers to each other, with a parent
> host that contains all the upstream content.

squid.conf, minus blank lines and comments, please?


Antony.


Re: [squid-users] [Squid 3.5.10] - Unable to cache objects from Cloudflare

2015-11-24 Thread Eliezer Croitoru
Have you tried clearing the local cache of the browser before you run 
your test each time?


Eliezer

On 20/11/2015 01:59, David Touzeau wrote:

Hi

It seems that squid is not able to save in cache objects from CloudFlare
websites.

Here it is the header information:

Connecting to 127.0.0.1:8182... connected.
Proxy request sent, awaiting response...
   HTTP/1.1 200 OK
   Date: Thu, 19 Nov 2015 18:03:31 GMT
   Content-Type: image/png
   Set-Cookie: __cfduid=d1ca8a069c4db15a451d81f2327781ced1447956211;
expires=Fri, 18-Nov-16 18:03:31 GMT; path=/; domain=.mutaz.net; HttpOnly
   Last-Modified: Fri, 23 Oct 2015 11:18:39 GMT
   Vary: Accept-Encoding
   X-Cache: HIT from Backend
   CF-Cache-Status: HIT
   Server: cloudflare-nginx
   CF-RAY: 247dd510143a08fc-CDG
   X-Cache: MISS from MySquid3-5-10
   X-Cache-Lookup: MISS from MySquid3-5-10:3128
   Transfer-Encoding: chunked
   Connection: keep-alive

I have seen the same issue in tracker as 3806
http://bugs.squid-cache.org/show_bug.cgi?id=3806

Can somebody encounter the same behavior with latest squid branch ?

best regards.




Re: [squid-users] Host header forgery detected after upgrade from 3.5.8 to 3.5.9

2015-11-24 Thread Dan Charlesworth
Thanks for the perspective on this, folks.

Going back to the technical stuff—and this isn’t really a squid thing—but is 
there any way I can minimise this using my DNS server? 

Can I force my local DNS to only ever return 1 address from the pool on a 
hostname I’m having trouble with?

> On 30 Oct 2015, at 4:50 AM, Alex Rousskov  
> wrote:
> 
> On 10/29/2015 11:29 AM, Matus UHLAR - fantomas wrote:
>>> On 10/28/2015 10:46 PM, Amos Jeffries wrote:
 NP: these problems do not exist for forward proxies. Only for traffic
 hijacking interceptor proxies.
>> 
>> On 29.10.15 09:05, Alex Rousskov wrote:
>>> For intercepted connections, Squid should, with an admin permission,
>>> connect to the intended IP address without validating whether that IP
>>> address matches the domain name (and without any side effects of such
>>> validation). In interception mode, the proxy should be as "invisible"
>>> (or as "invasive") as the admin wants it to be IMO -- all validations
>>> and protections should be optional. We could still enable them by
>>> default, of course.
>>> 
>>> SslBumped CONNECT-to-IP tunnels are essentially intercepted connections
>>> as well, even if they are using forwarding (not intercepting) http_ports.
> 
>> the "admin permission" is the key qestion here.  
> 
> Agreed. And understanding of what giving that permission implies!
> 
> 
>> There's possible problem
>> where the malicious client can connect to malicious server, ask for any
>> server name and the malicious content could get cached by squid as a proper
>> response.
> 
> Very true, provided that Squid trusts the unverified domain name to do
> caching. Squid does not have to do that. As Amos have noted, there are
> smart ways to minimize most of these problems, but they require more
> development work.
> 
> IMHO, it is important to establish the "do no harm" principle first and
> then use that to guide our development efforts. Unfortunately, some of
> the validation code was introduced under different principles, and we
> may still be debating what "harm" really means in this context while
> adjusting that code to meet varying admin needs.
> 
> Alex.
> 



Re: [squid-users] Squid3.x have issue with some sites, squid2.x not.

2015-11-24 Thread Amos Jeffries
On 24/11/2015 8:53 p.m., Matus UHLAR - fantomas wrote:
> On 24.11.15 15:27, Amos Jeffries wrote:
>> 3.4 has about 12 years of code development difference to 2.7.
>> It is no surprise when they act different (good or bad).
> 
> how do you compare this? 2.7 versions were produced in 2008 to 2010, where
> are those 12 years?
> 

v2.6 and v2.7 were a fork. v3.0 is the next release after 2.5.x. So 9
years of direct descent down the mainline, and there is still a fair
chunk of background functionality work that never got ported back to v3.

Amos



[squid-users] Fwd: LDAP group authorisation not supported

2015-11-24 Thread Serge Tarik
Hello, I am getting an error while trying to configure integration of
squid 3.3.8 with Active Directory via the ext_kerberos_ldap_group_acl
helper: "LDAP group authorisation not supported". I can't find the
solution on the web; any help will do. I've configured the keytab and
built the helper from source, and now I am trying to check whether it
will see the list of groups for users with this command:
ext_kerberos_ldap_group_acl -a -i -g DenyInternet -m 64 -D
EXAMPLE.ORG -u squid -p passWD
usern...@example.org
and I am getting this error.
It is CentOS 7, but I also tried on Ubuntu Server, with the same error.
thnx.


Re: [squid-users] [Squid 3.5.10] - Unable to cache objects from Cloudflare

2015-11-24 Thread Eliezer Croitoru

What version of squid are you using? What squid.conf?
CloudFlare in general is cache friendly, but squid may have a bug here 
and there.

To test a theory I would like you to try the following log format:
logformat cache_headers %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %Sh/%<a "%{Cache-Control}>h" "%{Cache-Control}>ha" "%{Pragma}>h" "%{Pragma}>ha" "%{User-Agent}>h"

access_log daemon:/var/log/squid/access.log cache_headers

Change the log filename and/or path as you need.
Then, when you have enough traffic logged, send me the file privately
for analysis.


Also, have you tried to use REDBOT to test the page for cacheability?
What ideas have you had until now?

Just as a side note, I want to mention that I have seen Varnish users
who often remove the cookies for public JPG pictures, and it helps with
all sorts of things; but in this case the site sent you a cookie.
In order to make this object publicly cacheable the cookie must, in my
opinion, disappear, or else all of the clients will get the same cookie,
which is a bad idea.


I will wait for the data so I can understand the picture better.

Eliezer

On 20/11/2015 01:59, David Touzeau wrote:

Hi

It seems that squid is not able to save in cache objects from CloudFlare
websites.

Here it is the header information:

Connecting to 127.0.0.1:8182... connected.
Proxy request sent, awaiting response...
   HTTP/1.1 200 OK
   Date: Thu, 19 Nov 2015 18:03:31 GMT
   Content-Type: image/png
   Set-Cookie: __cfduid=d1ca8a069c4db15a451d81f2327781ced1447956211;
expires=Fri, 18-Nov-16 18:03:31 GMT; path=/; domain=.mutaz.net; HttpOnly
   Last-Modified: Fri, 23 Oct 2015 11:18:39 GMT
   Vary: Accept-Encoding
   X-Cache: HIT from Backend
   CF-Cache-Status: HIT
   Server: cloudflare-nginx
   CF-RAY: 247dd510143a08fc-CDG
   X-Cache: MISS from MySquid3-5-10
   X-Cache-Lookup: MISS from MySquid3-5-10:3128
   Transfer-Encoding: chunked
   Connection: keep-alive

I have seen the same issue in tracker as 3806
http://bugs.squid-cache.org/show_bug.cgi?id=3806

Can somebody encounter the same behavior with latest squid branch ?

best regards.




Re: [squid-users] Fwd: LDAP group authorisation not supported

2015-11-24 Thread Amos Jeffries
On 24/11/2015 10:06 p.m., Serge Tarik wrote:
> Hello,im getting  this error while trying to configuring
> integration of squid 3.3.8 with Active Directory and by
> ext_kerberos_ldap_group_acl   helper,im getting this error ,LDAP group
> authorisation not supported ?


Where is that message seen?
  cache.log and/or command line testing of the helper, or somewhere else?

And what is the *exact* text? include surrounding details for context.


> cant find the solution on web, any help will
> do. ive configured keytab, and get helper with making from source,and now
> trying to check if it will see the list of groups for users with this
> command *-* ext_kerberos_ldap_group_acl -a -i -g DenyInternet -m 64 -D
> EXAMPLE.ORG -u squid -p passWD
> usern...@example.org
> and im getting this error
> its Cent os 7 ,but also tried on Ubuntu server,with the same error .
> thnx.
> 


Amos



Re: [squid-users] routing to parent using carp

2015-11-24 Thread Amos Jeffries
On 24/11/2015 11:11 p.m., Sreenath BH wrote:
> Hi all,
> 
> We are planning to use carp to route requests based on request URL.
> A part of the URL refers to a part of the file that is being requested
> in the GET request(say a part of a video file)
> 
> However, to make the back-end more efficient, it would be great if all
> requests for a particular file  went to same parent server.
> 
> Is there a way in Squid to make it use a part of the URL when it
> calculates the hash to map the URL to a parent?

See the documentation on CARP options:
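
For reference, a basic CARP parent pool looks like the following sketch
(hostnames are hypothetical); the carp flag makes Squid hash each URL
consistently onto one parent:

```
cache_peer cache1.example.com parent 3128 0 carp no-query
cache_peer cache2.example.com parent 3128 0 carp no-query
```

Whether part of the URL can be excluded from the hash depends on the
CARP options available in your Squid version.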


Amos



[squid-users] routing to parent using carp

2015-11-24 Thread Sreenath BH
Hi all,

We are planning to use carp to route requests based on request URL.
A part of the URL refers to a part of the file that is being requested
in the GET request (say, a part of a video file).

However, to make the back-end more efficient, it would be great if all
requests for a particular file went to the same parent server.

Is there a way in Squid to make it use a part of the URL when it
calculates the hash to map the URL to a parent?

thanks for any tips,
Sreenath


Re: [squid-users] TCP-MISS 503 for wrong destination ip

2015-11-24 Thread Antony Stone
On Tuesday 24 November 2015 at 14:31:15, Ahmad Alzaeem wrote:

> The DNS is not broken, it will resolve some websites to the ip address
> of squid and other websites will resolve to other ips

That sounds pretty broken to me (unless the Squid machine really is the web 
server for those sites whose hostname resolves to this IP address).

DNS might be deliberately broken, but it sure isn't working correctly or 
normally.

> Assume ips are static ips on clients

You have no alternative but to configure the proxy on the clients, then.

As Yuri says, Squid is an HTTP/S proxy - if you tell the clients to use it as 
a proxy (and provided you point Squid itself at a working DNS server), then it 
will work.

If you do not tell the clients to use Squid (ie: you are trying to use it in 
intercept mode) then the clients have to correctly resolve the destination IP, 
and they need to route via the Squid box so that it can intercept the packets.

If neither of those is an available option for you, then Squid can't help deal 
with your very unusual setup.


Regards,


Antony

-- 
Tinned food was developed for the British Navy in 1813.

The tin opener was not invented until 1858.

   Please reply to the list;
 please *don't* CC me.


Re: [squid-users] [Squid 3.5.10] - Unable to cache objects from Cloudflare

2015-11-24 Thread Eliezer Croitoru

Hey,

I do not see any issue.
I analyzed the logs and they seem to show everything working as expected.
The logs, with all personal details removed, are at:
http://paste.ngtech.co.il/p8ncwgnlg


What issue do you see in the logs?
What would you expect instead?
Does the site load more slowly in any way?
What would you expect to be "fixed" in case there are some issues
that do not meet your needs/desires?


All The Bests,
Eliezer Croitoru

On 20/11/2015 01:59, David Touzeau wrote:

Hi

It seems that squid is not able to cache objects from CloudFlare
websites.

Here it is the header information:

Connecting to 127.0.0.1:8182... connected.
Proxy request sent, awaiting response...
   HTTP/1.1 200 OK
   Date: Thu, 19 Nov 2015 18:03:31 GMT
   Content-Type: image/png
   Set-Cookie: __cfduid=d1ca8a069c4db15a451d81f2327781ced1447956211;
expires=Fri, 18-Nov-16 18:03:31 GMT; path=/; domain=.mutaz.net; HttpOnly
   Last-Modified: Fri, 23 Oct 2015 11:18:39 GMT
   Vary: Accept-Encoding
   X-Cache: HIT from Backend
   CF-Cache-Status: HIT
   Server: cloudflare-nginx
   CF-RAY: 247dd510143a08fc-CDG
   X-Cache: MISS from MySquid3-5-10
   X-Cache-Lookup: MISS from MySquid3-5-10:3128
   Transfer-Encoding: chunked
   Connection: keep-alive

I have seen the same issue in tracker as 3806
http://bugs.squid-cache.org/show_bug.cgi?id=3806

Has anybody encountered the same behavior with the latest squid branch?

best regards.
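[Editorial note: an X-Cache: MISS on the very first fetch of an object is normal; cacheability only shows on a repeat request. A hedged squid.conf sketch (the patterns and freshness values are hypothetical, directive options are from the Squid 3.5 refresh_pattern documentation) for giving such responses, which carry Last-Modified but no Expires/Cache-Control, a heuristic freshness window:]

refresh_pattern -i \.(png|jpg|gif)$  1440  50%  10080
refresh_pattern .                    0     20%  4320

[Fetching the URL twice through the proxy and checking whether the second response says X-Cache: HIT is the simplest test.]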




Re: [squid-users] routing to parent using carp

2015-11-24 Thread Sreenath BH
Thanks.

I should have read the documentation completely before posting.

carp-key=key-specification

rgds,
Sreenath
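[Editorial note: for readers landing here, a hedged sketch of the option Sreenath found. The peer hostnames and ports are hypothetical; carp-key restricts which URL components feed the CARP hash, so all segment/range requests for one file map to the same parent:]

# Hash only scheme+host+path, ignoring the query string that
# differs between segments of the same video file.
cache_peer parent1.example.com parent 3128 0 carp carp-key=scheme,host,path
cache_peer parent2.example.com parent 3128 0 carp carp-key=scheme,host,path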


On 11/24/15, Amos Jeffries  wrote:
> On 24/11/2015 11:11 p.m., Sreenath BH wrote:
>> Hi all,
>>
>> We are planning to use carp to route requests based on request URL.
>> A part of the URL refers to a part of the file that is being requested
>> in the GET request(say a part of a video file)
>>
>> However, to make the back-end more efficient, it would be great if all
>> requests for a particular file  went to same parent server.
>>
>> Is there a way in Squid to make it use a part of the URL when it
>> calculates the hash to map the URL to a parent?
>
> See the documentation on CARP options:
> 
>
> Amos
>


[squid-users] Problems with NTLM authentication

2015-11-24 Thread Verónica Ovando

My Squid Version:  Squid 3.4.8

OS Version:  Debian 8

I have installed Squid on a server running Debian 8 and seem to have the basics 
operating; at least when I start the squid service, I am no longer getting 
any error messages.  At this time, the goal is to authenticate users against 
Active Directory and log each user and the websites they are accessing.

I followed the official guide 
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Ntlm. I verified that 
samba is properly configured, as the guide suggests, with the basic helper in 
this way:

# /usr/local/bin/ntlm_auth --helper-protocol=squid-2.5-basic
domain\user pass
OK
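[Editorial note: when the basic helper check above passes but NTLM still loops, winbind itself can be probed first. A hedged diagnostic sketch; the account names are placeholders:]

# Verify the machine trust account and winbind's view of the domain
# before suspecting Squid's NTLM helper.
wbinfo -t                        # check the trust secret with the DC
wbinfo -a 'DOMAIN\user%pass'     # plaintext + challenge/response auth test
ntlm_auth --username=user --domain=DOMAIN.com   # direct helper check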

Here is a part of my squid.conf where I defined my ACLs for the groups in AD:


auth_param ntlm program /usr/local/bin/ntlm_auth 
--helper-protocol=squid-2.5-ntlmssp --domain=DOMAIN.com
auth_param ntlm children 30

auth_param basic program /usr/local/bin/ntlm_auth 
--helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Servidor proxy-cache de mi Dominio
auth_param basic credentialsttl 2 hours

external_acl_type AD_Grupos ttl=10 children=10 %LOGIN 
/usr/lib/squid3/ext_wbinfo_group_acl -d

acl AD_Standard external Grupos_AD Standard
acl AD_Exceptuados external Grupos_AD Exceptuados
acl AD_Bloqueados external Grupos_AD Bloqueados
 
acl face url_regex -i "/etc/squid3/facebook"

acl gob url_regex -i "/etc/squid3/gubernamentales"

http_access allow AD_Standard
http_access allow AD_Exceptuados !face !gob
http_access deny AD_Bloqueados


I tested using only the basic scheme (I commented out the lines for NTLM auth), 
and every time I open the browser it asks me for my user and pass. It works 
well: I can see my username in the access.log, and all the access policies 
defined are correctly applied.

But if I use NTLM auth, the browser still shows me the pop-up (it should not 
be shown), and if I enter my user and pass it keeps asking for them until I 
cancel it.

My access.log, in that case, shows a TCP_DENIED/407 as expected.

What could be the problem? I understand that the Kerberos and NTLM protocols 
can work together, I mean that they can live in the same environment with 
Kerberos used by default. How can I check that NTLM is really working? Could 
it be a problem in the squid conf? Or maybe AD is not allowing NTLM traffic?

Sorry for my English. Thanks in advance.



[squid-users] TCP-MISS 503 for wrong destination ip

2015-11-24 Thread Ahmad Alzaeem
Hi Devs ,

 

I have a server that sends http/https requests to squid with wrong destination IPs.

So assume I want to open google.

The request hits the squid as an https/http packet with payload
www.google.com, but with dst ip 10.0.0.1, not the real
dst ip of google like 74.125.x.x.

The question being asked here is:

Is it possible to let squid do another resolving and check the right
dst ip (74.125.x.x) and reach it?

Or at least let squid skip looking at the dst ip and look only at the payload
(google.com) and try to resolve it and operate?

Is that possible in squid?

thanks



Re: [squid-users] TCP-MISS 503 for wrong destination ip

2015-11-24 Thread Antony Stone
On Tuesday 24 November 2015 at 12:22:40, Ahmad Alzaeem wrote:

> Hi Devs ,
> 
> I have a server that send to squid http/https with wrong destination ips

It has already been recommended that you fix your DNS so that it works 
correctly / normally.

> So assume I want  to open google
> 
> The request hit the squid with https/http packet with payload
> www.google.com, with dst ip 10.0.0.1, not the real
> dst ip of google like 74.125.x.x

Is 10.0.0.1 the IP address of your Squid server?

> The question being asked here is:
> 
> Is it possible to let squid do another resolving and check the
> right dst ip (74.125.x.x) and reach it ?

Yes - turn off intercept mode, and point the client specifically at Squid as a 
configured proxy.  The client will then not attempt a DNS lookup for the 
destination server, but will simply send the entire request to Squid for it to 
look up where to send the request.
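[Editorial note: the configured-proxy behaviour described above can be exercised from a client shell; host and port here are placeholders taken from elsewhere in the thread:]

# With a configured proxy, the client never resolves cnn.com itself;
# the whole URL goes to Squid, which performs the DNS lookup.
export http_proxy=http://10.159.144.206:3128
export https_proxy=http://10.159.144.206:3128
curl -I http://cnn.com/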


Regards,


Antony.

-- 
Atheism is a non-prophet-making organisation.

   Please reply to the list;
 please *don't* CC me.


Re: [squid-users] TCP-MISS 503 for wrong destination ip

2015-11-24 Thread Ahmad Alzaeem
Guys, I understand that.


The question being asked is: can squid fix this issue or not?


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Antony Stone
Sent: Tuesday, November 24, 2015 2:42 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] TCP-MISS 503 for wrong destination ip

On Tuesday 24 November 2015 at 12:22:40, Ahmad Alzaeem wrote:

> Hi Devs ,
> 
> I have a server that send to squid http/https with wrong destination 
> ips

It has already been recommended that you fix your DNS so that it works 
correctly / normally.

> So assume I want  to open google
> 
> The request hit the squid with https/http packet with payload 
> www.google.com, with dst ip 10.0.0.1, not the 
> real dst ip of google like 74.125.x.x

Is 10.0.0.1 the IP address of your Squid server?

> The question being asked here is:
> 
> Is it possible to let squid do another resolving and check the 
> right dst ip (74.125.x.x) and reach it ?

Yes - turn off intercept mode, and point the client specifically at Squid as a 
configured proxy.  The client will then not attempt a DNS lookup for the 
destination server, but will simply send the entire request to Squid for it to 
look up where to send the request.


Regards,


Antony.

--
Atheism is a non-prophet-making organisation.

   Please reply to the list;
 please *don't* CC me.



Re: [squid-users] TCP-MISS 503 for wrong destination ip

2015-11-24 Thread Antony Stone
On Tuesday 24 November 2015 at 13:13:17, Ahmad Alzaeem wrote:

> Guys I understand that
> 
> The question being asked is: can squid fix this issue or not?

Yes, provided you use it in configured-proxy mode, instead of intercept mode.


Antony.

> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
> Behalf Of Antony Stone Sent: Tuesday, November 24, 2015 2:42 PM
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] TCP-MISS 503 for wrong destination ip
> 
> On Tuesday 24 November 2015 at 12:22:40, Ahmad Alzaeem wrote:
> > Hi Devs ,
> > 
> > I have a server that send to squid http/https with wrong destination
> > ips
> 
> It has already been recommended that you fix your DNS so that it works
> correctly / normally.
> 
> > So assume I want  to open google
> > 
> > The request hit the squid with https/http packet with payload
> > www.google.com, with dst ip 10.0.0.1, not the
> > real dst ip of google like 74.125.x.x
> 
> Is 10.0.0.1 the IP address of your Squid server?
> 
> > The question being asked here is:
> > 
> > Is it possible to let squid do another resolving and check the
> > right dst ip (74.125.x.x) and reach it ?
> 
> Yes - turn off intercept mode, and point the client specifically at Squid
> as a configured proxy.  The client will then not attempt a DNS lookup for
> the destination server, but will simply send the entire request to Squid
> for it to look up where to send the request.
> 
> 
> Regards,
> 
> 
> Antony.

-- 
BASIC is to computer languages what Roman numerals are to arithmetic.

   Please reply to the list;
 please *don't* CC me.


Re: [squid-users] TCP-MISS 503 for wrong destination ip

2015-11-24 Thread Yuri Voinov
We do not know, and cannot know, why the server sends such requests; there 
are only assumptions of varying degrees of reliability. The Squid 
configuration alone is absolutely not enough to give a reasonable 
answer.


If the problem is DNS, then what does Squid have to do with it?

24.11.15 17:22, Ahmad Alzaeem wrote:


Hi Devs ,

I have a server that sends http/https requests to squid with wrong destination IPs.

So assume I want to open google.

The request hits the squid as an https/http packet with payload 
www.google.com, but with dst ip 10.0.0.1, not the 
real dst ip of google like 74.125.x.x.


The question being asked here is:

Is it possible to let squid do another resolving and check the 
right dst ip (74.125.x.x) and reach it?


Or at least let squid skip looking at the dst ip and look only at the 
payload (google.com) and try to resolve it and operate?


Is that possible in squid?

thanks







Re: [squid-users] TCP-MISS 503 for wrong destination ip

2015-11-24 Thread Ahmad Alzaeem
Well, what I have done is:

I configured squid with http_port xx and http_port xxy intercept

And used iptables to redirect http & https to the squid ports

But it doesn't work, and I have these logs:

1448121527.423  10.1.1.1 TCP_MISS/503 4183 GET http://cnn.com/ - 
ORIGINAL_DST/10.159.144.206 text/html
1448121554.217  10.1.1.1 TCP_MISS/503 4771 GET http://cnn.com/ - 
ORIGINAL_DST/10.159.144.206 text/html
1448121555.574  10.1.1.1 TCP_MISS/503 4685 GET http://cnn.com/favicon.ico - 
ORIGINAL_DST/10.159.144.206 text/html


As you see, the dst ip is wrong; it is spoofed with 10.159.144.206

So how do I let squid bypass checking it?


Is my way above wrong?


You say we need proxy mode?

How should I implement proxy mode, since the user will not put ip:port in his 
browser?

Thanks a lot for helping

cheers
-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Antony Stone
Sent: Tuesday, November 24, 2015 3:18 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] TCP-MISS 503 for wrong destination ip

On Tuesday 24 November 2015 at 13:13:17, Ahmad Alzaeem wrote:

> Guys I understand that
> 
> The question being asked is: can squid fix this issue or not?

Yes, provided you use it in configured-proxy mode, instead of intercept mode.


Antony.

> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] 
> On Behalf Of Antony Stone Sent: Tuesday, November 24, 2015 2:42 PM
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] TCP-MISS 503 for wrong destination ip
> 
> On Tuesday 24 November 2015 at 12:22:40, Ahmad Alzaeem wrote:
> > Hi Devs ,
> > 
> > I have a server that send to squid http/https with wrong destination 
> > ips
> 
> It has already been recommended that you fix your DNS so that it works 
> correctly / normally.
> 
> > So assume I want  to open google
> > 
> > The request hit the squid with https/http packet with payload 
> > www.google.com, with dst ip 10.0.0.1, not 
> > the real dst ip of google like 74.125.x.x
> 
> Is 10.0.0.1 the IP address of your Squid server?
> 
> > The question being asked here is:
> > 
> > Is it possible to let squid do another resolving and check 
> > the right dst ip (74.125.x.x) and reach it ?
> 
> Yes - turn off intercept mode, and point the client specifically at 
> Squid as a configured proxy.  The client will then not attempt a DNS 
> lookup for the destination server, but will simply send the entire 
> request to Squid for it to look up where to send the request.
> 
> 
> Regards,
> 
> 
> Antony.

--
BASIC is to computer languages what Roman numerals are to arithmetic.

   Please reply to the list;
 please *don't* CC me.



Re: [squid-users] TCP-MISS 503 for wrong destination ip

2015-11-24 Thread Yuri Voinov
In the case of obviously faulty DNS you can, for example, set up your 
own caching DNS (for example, Unbound) which takes data from a known 
clean source - for example, by using DNSCrypt and, possibly, with DNSSEC 
validation - and specify it as the source of name resolution for 
Squid.
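[Editorial note: pointing Squid at such a local resolver is a one-line squid.conf change; this sketch assumes the validating resolver listens on loopback:]

# Ask the local clean resolver instead of the suspect network DNS.
dns_nameservers 127.0.0.1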


24.11.15 17:22, Ahmad Alzaeem wrote:


Hi Devs ,

I have a server that sends http/https requests to squid with wrong destination IPs.

So assume I want to open google.

The request hits the squid as an https/http packet with payload 
www.google.com, but with dst ip 10.0.0.1, not the 
real dst ip of google like 74.125.x.x.


The question being asked here is:

Is it possible to let squid do another resolving and check the 
right dst ip (74.125.x.x) and reach it?


Or at least let squid skip looking at the dst ip and look only at the 
payload (google.com) and try to resolve it and operate?


Is that possible in squid?

thanks







Re: [squid-users] TCP-MISS 503 for wrong destination ip

2015-11-24 Thread Yuri Voinov
The reason may be, for example, DNS cache poisoning, or transparent 
interception of DNS requests. Either case calls for remedies that have 
nothing to do with Squid.


24.11.15 17:22, Ahmad Alzaeem wrote:


Hi Devs ,

I have a server that sends http/https requests to squid with wrong destination IPs.

So assume I want to open google.

The request hits the squid as an https/http packet with payload 
www.google.com, but with dst ip 10.0.0.1, not the 
real dst ip of google like 74.125.x.x.


The question being asked here is:

Is it possible to let squid do another resolving and check the 
right dst ip (74.125.x.x) and reach it?


Or at least let squid skip looking at the dst ip and look only at the 
payload (google.com) and try to resolve it and operate?


Is that possible in squid?

thanks







Re: [squid-users] TCP-MISS 503 for wrong destination ip

2015-11-24 Thread Antony Stone
On Tuesday 24 November 2015 at 13:34:51, Ahmad Alzaeem wrote:

> Well , what I have done is :
> 
> I configured squid http_port xx and http_port xxy intercept
> 
> And uses iptables to redirect http & https to squid ports

1. Have you fixed DNS so that clients are now resolving the correct addresses 
for destination servers?

2. Are you performing NAT *only* on the machine where Squid is running?

> But it don’t work and I have logs :
> 
> 1448121527.423  10.1.1.1 TCP_MISS/503 4183 GET http://cnn.com/ -
> ORIGINAL_DST/10.159.144.206 text/html 1448121554.217  10.1.1.1
> TCP_MISS/503 4771 GET http://cnn.com/ - ORIGINAL_DST/10.159.144.206
> text/html 1448121555.574  10.1.1.1 TCP_MISS/503 4685 GET
> http://cnn.com/favicon.ico - ORIGINAL_DST/10.159.144.206 text/html
> 
> As u see the ds tip is wrong and its spoofed with 10.159.144.206

Do you know where that IP address comes from?  Is your DNS still broken, is 
this the IP address of the Squid server, does it mean anything at all in your 
network?

> So how to let squid bypass checking it ?

It's not a matter of bypassing Squid checking it - it's a matter of making it 
correct so that the checks do not fail.

> Is my way above wrong ?

I think so, but please answer the questions above so we can be more sure.

> U say we need proxy mode ??
> 
> How should I implement proxy mode since user will not put ip:port in his
> browser

Use DHCP options and/or WPAD.
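[Editorial note: WPAD hinges on serving a small PAC file; a minimal hypothetical wpad.dat, with placeholder proxy host/port, looks like this:]

// Served as http://wpad.<your-domain>/wpad.dat with MIME type
// application/x-ns-proxy-autoconfig; browsers discover it via
// DHCP option 252 or the wpad DNS name.
function FindProxyForURL(url, host) {
    return "PROXY 10.159.144.206:3128; DIRECT";
}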

> Thanks a lot for helping

Please do not reply to (or CC) me - please just reply to the list.


Regards,


Antony.

-- 
"Black holes are where God divided by zero."

 - Steven Wright

   Please reply to the list;
 please *don't* CC me.


Re: [squid-users] TCP-MISS 503 for wrong destination ip

2015-11-24 Thread Ahmad Alzaeem
Ok 


1. Have you fixed DNS so that clients are now resolving the correct addresses 
for destination servers?

No, the issue will not be solved; DNS will always resolve the IPs of some 
websites to the IP address of squid (http & https requests with the wrong 
dst ip will hit squid)

Again, I want to solve this issue from squid

2. Are you performing NAT *only* on the machine where Squid is running?


Yes, I have redirect rules that redirect http & https to the port that 
squid listens on.
So I have:
http_port 3128
http_port 10.159.144.206:11611 intercept

iptables:

iptables -t nat -A PREROUTING -p tcp -m tcp --dport 80 -j DNAT --to-destination 
10.159.144.206:11611
iptables -t nat -A PREROUTING -p tcp -m tcp --dport 443 -j DNAT --to-destination 
10.159.144.206:11611
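[Editorial note: for comparison, the usual intercept setup on the Squid box itself uses REDIRECT rather than DNAT, and never forwards port 443 into a plain http_port. A hedged sketch; the interface name is hypothetical:]

# Run on the Squid host: loop LAN port-80 traffic into the intercept
# port. Intercepting HTTPS would additionally require an
# "https_port ... intercept ssl-bump" listener, not a plain http_port.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j REDIRECT --to-ports 11611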


Do you know where that IP address comes from?  Is your DNS still broken, is 
this the IP address of the Squid server, does it mean anything at all in your 
network?

Some IPs are local and some IPs are outside, so we have port forwarding as 
well.

For now, skip the outside users and focus on the inside users.
The DNS is a separate server, different from squid, but both are on the same network.

The DNS is not broken; it will resolve some websites to the IP address of squid 
and other websites to other IPs. So again, I don't want to touch the 
DNS and I want to work with the current state.

> So how to let squid bypass checking it ?

It's not a matter of bypassing Squid checking it - it's a matter of making it 
correct so that the checks do not fail.

I'm open to letting squid do it, so that requests with wrong dst IPs are still forwarded correctly by squid.


> Is my way above wrong ?

I think so, but please answer the questions above so we can be more sure.

> You say we need proxy mode?
> 
> How should I implement proxy mode, since the user will not put ip:port in 
> his browser?

Use DHCP options and/or WPAD.

Assume the IPs on the clients are static.




Thanks again; I'm awaiting your suggestions.

cheers




Re: [squid-users] TCP-MISS 503 for wrong destination ip

2015-11-24 Thread Yuri Voinov



24.11.15 19:31, Ahmad Alzaeem wrote:
> Ok 
>
>
> 1. Have you fixed DNS so that clients are now resolving the correct
addresses for destination servers?
> No , the issues will not be solved and will always dns resolve the ip
of websites to the ip address of squid ( http & https requestst with the
wrong ds tip will hit squid)
>
> Again , I want to solve this issue form squid
Squid can't solve this. Squid is *NOT* a DNS server - neither a DNS server
nor a DNS cache; it is only an HTTP/HTTPS caching proxy.
>
>
> 2. Are you performing NAT *only* on the machine where Squid is running?
>
>
> Yes I have redirect rules  that redirect the http & https to the port
that squid listen  .
> So I have :
> http_port 3128
> http_port 10.159.144.206:11611 intercept
>
> iptables :
>
> iptables -t nat -A PREROUTING -p tcp -m tcp --dport 80 -j DNAT
--to-destination 10.159.144.206:11611
> iptables -t nat -A PREROUTING -p tcp -m tcp --dport 443 -j DNAT
--to-destination 10.159.144.206:11611
>
>
> Do you know where that IP address comes from?  Is your DNS still
broken, is this the IP address of the Squid server, does it mean
anything at all in your network?
>
> Some ips are locally and some ips are  outside  , so we have port
forwarding well
>
> For now , skip the outside users and focus on the inside users
> The dns is a separate server, different from squid, but both on the same network
>
> The DNS is not broken; it will resolve some websites to the ip address of
squid and other websites to other ips, so again I don't want
to touch the DNS and I want to work with the current state
>
>> So how to let squid bypass checking it ?
>
> It's not a matter of bypassing Squid checking it - it's a matter of
making it correct so that the checks do not fail.
>
> Im open to let squid do it and let wrong dstp ips  forwarded well on
squid .
>
>
>> Is my way above wrong ?
>
> I think so, but please answer the questions above so we can be more sure.
>
>> U say we need proxy mode ??
>>
>> How should I implement proxy mode since user will not put ip:port in
>> his browser
>
> Use DHCP options and/or WPAD.
>
> Assume ips are static ips on clients
>
>
>
>
> Thanks again and im awaiting ur suggestions
>
> cheers
>
>




Re: [squid-users] Problems with NTLM authentication

2015-11-24 Thread Brendan Kearney

On 11/24/2015 10:08 AM, Verónica Ovando wrote:

My Squid Version:  Squid 3.4.8

OS Version:  Debian 8

I have installed Squid on a server running Debian 8 and seem to have the 
basics operating; at least when I start the squid service, I am 
no longer getting any error messages.  At this time, the goal is to 
authenticate users against Active Directory and log each user and the 
websites they are accessing.


I followed the official guide 
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Ntlm. I 
verified that samba is properly configured, as the guide suggests, with 
the basic helper in this way:


# /usr/local/bin/ntlm_auth --helper-protocol=squid-2.5-basic
domain\user pass
OK

Here is a part of my squid.conf where I defined my ACLs for the groups 
in AD:


 

auth_param ntlm program /usr/local/bin/ntlm_auth 
--helper-protocol=squid-2.5-ntlmssp --domain=DOMAIN.com

auth_param ntlm children 30

auth_param basic program /usr/local/bin/ntlm_auth 
--helper-protocol=squid-2.5-basic

auth_param basic children 5
auth_param basic realm Servidor proxy-cache de mi Dominio
auth_param basic credentialsttl 2 hours

external_acl_type AD_Grupos ttl=10 children=10 %LOGIN 
/usr/lib/squid3/ext_wbinfo_group_acl -d


acl AD_Standard external Grupos_AD Standard
acl AD_Exceptuados external Grupos_AD Exceptuados
acl AD_Bloqueados external Grupos_AD Bloqueados

acl face url_regex -i "/etc/squid3/facebook"
acl gob url_regex -i "/etc/squid3/gubernamentales"

http_access allow AD_Standard
http_access allow AD_Exceptuados !face !gob
http_access deny AD_Bloqueados
 



I tested using only the basic scheme (I commented out the lines for 
NTLM auth), and every time I open the browser it asks me for my user and 
pass. It works well: I can see my username in the access.log, and 
all the access policies defined are correctly applied.


But if I use NTLM auth, the browser still shows me the pop-up (it should 
not be shown), and if I enter my user and pass it keeps asking for them 
until I cancel it.


My access.log, in that case, shows a TCP_DENIED/407 as expected.

What could be the problem? I understand that the Kerberos and NTLM 
protocols can work together, I mean they can live in the same 
environment with Kerberos used by default. How can I check that NTLM 
is really working? Could it be a problem in the squid conf? Or maybe 
AD is not allowing NTLM traffic?


Sorry for my English. Thanks in advance.

Make sure Internet Explorer is set to use Integrated Windows 
Authentication (IWA): Tools --> Internet Options --> Advanced --> 
Security --> Enable Integrated Windows Authentication.
