Re: [squid-users] [MASSMAIL] Squid: forward to another squid server with authentication

2015-12-16 Thread Amaury Viera Hernández

Thank you. I will follow your instructions.

Amaury.
On 15/12/15 19:03, Amaury Viera Hernández wrote:

Hello everyone. This is a more detailed explanation of my problem:

I have two network cards:

a shared WiFi card (wlp2s0): 10.42.0.1
a network card with access to my LAN (enp4s0): 10.8.77.1

In short, I am looking for a simple way to do the following (please give code 
samples if possible):

Set up and start a transparent proxy server on my computer (on the wifi card, so
squid will listen at 10.42.0.1:3128) that captures all web requests from my
phone. Once an HTTP request from the phone reaches this proxy, it should be
forwarded to the university proxy (say its address is 10.0.0.1:8080, with
username and password authentication).

Note: it is possible that one of the authentication methods of my proxy server
will be NTLM.

Now, more details to fully explain my situation:

In my university, authentication is required to pass through a proxy before we
can connect to the internet. I normally enter my Active Directory
username/password when the authentication pop-up appears in the web browser.

Now, I want to connect my phone to my shared wifi (10.42.0.1) and, using the
network card with access to the LAN (10.8.77.1), forward the HTTP requests from
my phone to the university proxy (10.0.0.1:8080, with user and password
authentication), because some applications on my phone require a direct
connection, without a proxy and without proxy authentication. So I am planning
to set up a transparent proxy on my laptop to catch all requests from my phone.
Of course, I don't need to use the proxy for local domains (uci.cu in this case).

I'm using ubuntu 15.10 with squid3 (3.3.8)

I have this configuration in squid.conf. It works fine for the local domains
(no proxy authentication against the local domains, for example
intranet.uci.cu), but for internet domains I need to authenticate (cache_peer
from my proxy to the university proxy):

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl localdst dstdomain
acl mi_red src 10.42.0.0/24
http_access allow mi_red
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
http_access deny all
http_port 10.42.0.1:3128 transparent
coredump_dir /var/spool/squid3
refresh_pattern ^ftp:   1440    20% 10080
refresh_pattern ^gopher:    1440    0%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern (Release|Packages(.gz)*)$  0   20% 2880
refresh_pattern .   0   20% 4320
cache_mem 512 MB
cache_dir ufs /var/spool/squid3 2048 16 256
cache_effective_user proxy
cache_effective_group proxy
half_closed_clients off
maximum_object_size 1024 KB
cache_swap_low 90
cache_swap_high 95
memory_pools off
error_directory /usr/share/squid3/errors/es/
access_log /var/log/squid3/access.log squid
cache_peer 10.0.0.1 parent 8080 0 no-query default no-digest login=avhernandez:MyPass
never_direct allow all
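
A minimal sketch of the local-domain bypass mentioned above (rule placement is
an assumption, not a tested config; the domain value comes from the description):

# assumed completion of the localdst ACL, placed with the other acl lines
acl localdst dstdomain .uci.cu
# go direct for local domains, everything else must use the university peer
always_direct allow localdst
never_direct deny localdst
never_direct allow all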


I'm using this firewall script

#!/bin/sh
# IP address of the SQUID server
SQUID_SERVER="10.42.0.1"
# Interface connected to the Internet
INTERNET="enp4s0"
# Internal interface
LAN_IN="wlp2s0"
# Squid port
SQUID_PORT="3128"

# Flush the previous rules
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
# Load the IPTABLES modules for NAT and IP with conntrack support
modprobe ip_conntrack
modprobe ip_conntrack_ftp
echo 1 > /proc/sys/net/ipv4/ip_forward
# Default filter policy
iptables -P INPUT DROP
iptables -P OUTPUT ACCEPT
# Unlimited access to loopback
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# Allow UDP, DNS and passive FTP
iptables -A INPUT -i $INTERNET -m state --state ESTABLISHED,RELATED -j ACCEPT
# Set this server up as the router for the network
iptables --table nat --append POSTROUTING --out-interface $INTERNET -j MASQUERADE
iptables --append FORWARD --in-interface $LAN_IN -j ACCEPT
# Unlimited access to the LAN
iptables -A INPUT -i $LAN_IN -j ACCEPT
iptables -A OUTPUT -o $LAN_IN -j ACCEPT
# Redirect requests from the internal network to the proxy
iptables -t nat -A PREROUTING -i $LAN_IN -p tcp --dport 80 -j DNAT --to $SQUID_SERVER:$SQUID_PORT
# Redirect incoming traffic to the proxy
iptables -t nat -A PREROUTING -i $INTERNET -p tcp --dport 80 -j REDIRECT --to-port $SQUID_PORT
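
If the phone's traffic to local web servers should also skip the proxy at the
firewall level, a rule like the one below could be inserted ahead of the DNAT
rule above (the 10.0.0.0/8 destination range is an assumption, not taken from
the original setup):

# Do not redirect web traffic destined for the local/university network
iptables -t nat -A PREROUTING -i $LAN_IN -p tcp -d 10.0.0.0/8 --dport 80 -j ACCEPT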

Best regards. Amaury.
___
squid-users mailing list
squid-users@lists.squid-cache.org

Re: [squid-users] Squid with NTLM auth behind netscaler

2015-12-16 Thread Amos Jeffries
On 17/12/2015 5:34 a.m., Fabio Bucci wrote:
> I'm planning to migrate to Kerberos instead of NTLM. I've got a question for
> you, Amos: sometimes a client reports issues in navigation, and searching the
> log file I cannot see a "username" and all the requests are 407.
> 
> In these cases, is there a way to reset a user session, or is it completely a
> client issue?

Usually it is the client stuck in a loop trying Negotiate/NTLM auth for
some reason. Some old Firefox, most Safari, and older IE can all get
stuck trying those credentials and ignoring the offers of Basic.

It might be possible to figure out some LmCompatibility settings change
that makes the problem just go away (e.g. forcing NTLM of all versions to
disabled on the client).

Other than that, Squid does have some workaround responses it can be made
to send back that might help the client reach the right conclusion:

a) list Basic auth first in the config (see the sketch after this list). Any
properly working client will re-sort the auth types by security level and do
the Kerberos anyway. But the broken ones (particularly IE7 and older) will
have more chance of using Basic.

b) sending a 407 response with no auth headers, such as a deny with 407 status
generated by an external ACL deny, or by a URL-redirector. These tell the
client that auth failed, but that there is no acceptable fallback.

c) sending Connection: close. Sometimes (mostly Firefox v20-v40) it is
the client prematurely attaching the credentials to the connection and
re-using them. That is supposed to have been fixed recently, but I've
not confirmed it.

d) sending a 403 status response, to just flat-out block the client once
it enters the looping state, hoping that later requests will start to
work again.
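
A minimal sketch of option (a), listing the Basic scheme before Negotiate in
squid.conf (the helper paths, base DN, realm text and children counts are
assumptions, not a recommendation for any particular site):

# Basic offered first; well-behaved clients still re-sort and use Negotiate
auth_param basic program /usr/lib/squid/basic_ldap_auth -b "dc=example,dc=com" -h dc.example.com
auth_param basic children 5
auth_param basic realm proxy
auth_param negotiate program /usr/lib/squid/negotiate_kerberos_auth -s HTTP/proxy.example.com@EXAMPLE.COM
auth_param negotiate children 20
auth_param negotiate keep_alive on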


HTH
Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid authentication mechs

2015-12-16 Thread Amos Jeffries
On 17/12/2015 5:35 a.m., Eugene M. Zheganin wrote:
> Hi.
> 
> Is there a way to limit the number of available authentication
> mechanisms (for a client browser) based on the particular squid IP which the
> browser connects to, e.g. using the http_port configuration directive? For
> example, this is needed when one needs to allow non-domain machines to
> pass through authentication/authorization checks using squid with
> full-fledged AD integration (or Kerberos/NTLM, anyway); otherwise they
> are unable to do it. Once they were able to, for example using Chrome < 41, but
> in versions after 41 Chrome has removed all the options to exclude certain
> authentication methods from its CLI sequence (I still wonder what
> genius proposed this).

Theoretically the client browser is fully aware of what credentials it
can use for which schemes (Kerberos, Basic, NTLM, Digest [in that order of
security]), and also remembers which credentials worked or failed
on previous attempts with the offered schemes.

So there is no need to filter them at the proxy. *It* is perfectly able
to authenticate any credentials it is given using any of the schemes
it is offering. You just happen to not like the outcome when validation
prevents login.


> 
> If not(and I believe there isn't) could this message be treated as a
> feature request ?

It has been a feature request for years to allow ACL control of the auth
schemes offered. I even have a design plan laid out for implementing it.
But nobody seems to want it enough to sponsor the addition (if you do,
please contact me directly to discuss).

I am specifically waiting for sponsorship on this one because it needs
someone with an actual use-case and implementation to test that it works
properly with Negotiate and NTLM.

Otherwise please open a feature request bug to track the status and get
notification when somebody does get around to adding it.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Time for cache synchronization between siblings

2015-12-16 Thread Amos Jeffries
On 17/12/2015 3:10 a.m., Sreenath BH wrote:
> Hi,
> 
> Thanks for the tips. After disabling digest I believe performance improved.
> However, I found that randomly requests were being routed to parent
> even when siblings had the data cached.
> 
> From access.log I found TIMEOUT_CARP. I assumed this meant HTCP timed
> out and squid was forced to go to fetch the data. So I increased
> icp_query_timeout to 4000 milliseconds, and the hit rate increased
> further.
> 
> But I still find that sometimes, even after getting a HIT response
> from a sibling, squid, for some reason still decides to go to the
> parent for requested object.
> 
> Are there any other reasons why squid will decide to go to parent servers?

Just quirks of timing, I think. Squid tracks response latency and prefers
the fastest source. If the parent is responding faster than the sibling
for many requests over a short period, then Squid might switch to using
the parent as first choice for a while.


Some traffic is also classified as "non-hierarchical", meaning that it
makes no sense sending it to a sibling unless all parents are down.
Things such as CONNECT, OPTIONS, POST etc., where the response cannot
be cached at the sibling.


> 
> And another question: When the hash key is computed for storing cache
> objects, does Squid use the hostname(or IP address) also as part of
> URL, or just the part that appears after the hostname/IP:port numbers?

No. The primary Store ID/key is the absolute URL alone, unless you are
using the Store-ID feature of Squid to change it to some other explicit
string value.

If the URL produces a reply object with a Vary header, then the expansion
of the Vary header format is appended to the primary Store ID/key.
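
As an illustration, a minimal sketch of that Store-ID feature (helper path,
rules file and domain are assumptions):

store_id_program /usr/lib/squid/storeid_file_rewrite /etc/squid/storeid_rules.txt
store_id_children 5 startup=1
acl rewrite_key dstdomain .cdn.example.com
store_id_access allow rewrite_key
store_id_access deny all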

> 
> For example: the IP addresses of the squid servers are 10.135.85.2 and
> 10.135.85.3, and a request made to the 1st server would have had that IP
> address as part of the URL. However, the next time the same request is made to
> server2, a different IP address would be used. Does this affect the cache
> hit at the sibling server?
> 
> I think it should not, but is this the case?

Correct, the Squid IP has nothing to do with the cache storage.

> 
> We will have a load balancer that sends requests to each squid server,
> and we want cache peering to work correctly in this case.

FYI: the digest and HTCP algorithms you are dealing with are already
load-balancing algorithms. They are just designed for use in a flat
1-layer hierarchy.

If you intend to have a 2-layer hierarchy (frontend LB and backend
caches) I suggest you might want to look into Squid as the frontend LB
using the CARP algorithm (a minimal sketch follows). The CARP algorithm
ensures deterministic storage locations for which URLs get sent to which
caches. So there is no need for sibling communication as they all get
unique URLs.
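
A minimal sketch of such a CARP frontend (backend hostnames and ports are
assumptions):

# frontend: deterministically route each URL to one backend cache
cache_peer backend1.example.com parent 3128 0 carp no-query no-digest
cache_peer backend2.example.com parent 3128 0 carp no-query no-digest
cache_peer backend3.example.com parent 3128 0 carp no-query no-digest
never_direct allow all
cache deny all   # keep objects only on the backends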

 *  has
details of how to split the frontend and backend config. The specific
example is for doing it using SMP workers within a single proxy
instance. But the split can even more easily be done across different
machines.

 *  has
some details on how to add iptables port splitting on top of CARP to get
ridiculously high performance out of a proxy hierarchy. The last numbers
I heard from these setups were pushing just under the Gbps mark.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSTP_DUPLEX_POST method

2015-12-16 Thread Wayne Gillan
Yes, SSTP is a type of SSL VPN. Why behind a reverse proxy? Well, just like other
SSL services, I need to share port 443 on one public IP address.

I've run packet captures on the client, the VPN server and squid. The request is
getting through OK and the VPN server is sending a reply, but I believe squid is
not forwarding the reply on to the client. Here are some snippets of the squid
log:

2015/12/17 14:26:48.550| http.cc(762) processReplyHeader: HTTP Server REPLY:
-
HTTP/1.1 200
Content-Length: 18446744073709551615
Server: Microsoft-HTTPAPI/2.0
Date: Thu, 17 Dec 2015 03:26:48 GMT
--
2015/12/17 14:26:48.556| client_side.cc(1377) sendStartOfMessage: HTTP Client 
local=ip.of.squid:443 remote=1.2.3.4:44582 FD 9 flags=1
2015/12/17 14:26:48.556| client_side.cc(1378) sendStartOfMessage: HTTP Client 
REPLY:
-
HTTP/1.1 200 OK
Content-Length: 18446744073709551615
Server: Microsoft-HTTPAPI/2.0
Date: Thu, 17 Dec 2015 03:26:48 GMT
X-Cache: MISS from 
X-Cache-Lookup: MISS from :443
Connection: keep-alive
--
2015/12/17 14:26:48.557| client_side_reply.cc(1114) storeNotOKTransferDone: 
storeNotOKTransferDone  out.size=240 expectedLength=-9223372036854775569
2015/12/17 14:26:48.557| client_side.cc(1827) stopSending: sending error 
(local=ip.of.squid:443 remote=1.2.3.4:44582 FD 9 flags=1): 
STREAM_UNPLANNED_COMPLETE; old receiving error: none

2015/12/17 14:26:48.673| Server.cc(362) sentRequestBody: sentRequestBody called
2015/12/17 14:26:48.673| Server.cc(423) sendMoreRequestBody: will wait for more 
request body bytes or eof


It seems like the large value of the Content-Length header field is causing
issues: 18446744073709551615 is 2^64-1, and the negative expectedLength in the
log suggests it overflows Squid's signed 64-bit length accounting. Squid waits
for more data, but the server never sends it because it is waiting for
something from the client.

Is there any way to make squid just pass traffic exactly as it comes in?


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Eliezer Croitoru
Sent: Tuesday, 15 December 2015 6:23 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] SSTP_DUPLEX_POST method

Isn't SSTP some kind of secure VPN service, based on SSL?
Why would you want to put a reverse proxy in front of a VPN service?
There are many things to do at the IP level but not much to do at the HTTP
level.

Eliezer

On 15/12/2015 07:20, Wayne Gillan wrote:
> Hi all,
>
> I am trying to configure squid as a reverse proxy in front of a Microsoft
> SSTP VPN server but squid does not appear to be forwarding the requests. I
> think it may have something to do with the custom verb/method that Microsoft
> uses. See https://msdn.microsoft.com/en-us/library/cc247364.aspx. Should it
> work OK? I am running 3.1.19.
>
> Thank you,
> Wayne

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



[squid-users] Configure proxy to multiple physical hosts through one Domain Name.

2015-12-16 Thread PaleBlue
Hi All, 

I am new to squid and have only been aware of it for a few minutes; however, it
looks like what I desperately need to accomplish is doable with squid.

I am trying to accomplish the following 

apple.mydomain.com ---> Server01 (website and Blog)
orange.mydomain.com ---> Server02 (GitLab)
pear.mydomain.com ---> Server03 (Future use)

I want to put a VM with Ubuntu Server and squid behind my firewall to proxy
the requests to the different servers, but I don't even know where to get
started as I am so new to squid. I have read around and believe it is possible
based on this post; however, I would like to stress: *I am not trying to load
balance, I am trying to make one domain name show different things from
different servers.*


If anyone could give me some guidance on getting started with this, I would
be *very* appreciative.
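
For reference, a minimal squid reverse-proxy (accelerator) sketch of the kind
of name-based routing described above (backend IPs and ports are assumptions):

http_port 80 accel vhost
cache_peer 192.168.0.11 parent 80 0 no-query originserver name=server01
cache_peer 192.168.0.12 parent 80 0 no-query originserver name=server02
acl site_apple dstdomain apple.mydomain.com
acl site_orange dstdomain orange.mydomain.com
cache_peer_access server01 allow site_apple
cache_peer_access server01 deny all
cache_peer_access server02 allow site_orange
cache_peer_access server02 deny all
http_access allow site_apple
http_access allow site_orange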



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Configure-proxy-to-multiple-physical-hosts-through-one-Domain-Name-tp4675235.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Time for cache synchronization between siblings

2015-12-16 Thread Sreenath BH
Hi,

Thanks for the tips. After disabling digest I believe performance improved.
However, I found that requests were randomly being routed to the parent
even when siblings had the data cached.

From access.log I found TIMEOUT_CARP. I assumed this meant HTCP timed
out and squid was forced to go to fetch the data. So I increased
icp_query_timeout to 4000 milliseconds, and the hit rate increased
further.

But I still find that sometimes, even after getting a HIT response
from a sibling, squid for some reason still decides to go to the
parent for the requested object.

Are there any other reasons why squid will decide to go to parent servers?

And another question: When the hash key is computed for storing cache
objects, does Squid use the hostname(or IP address) also as part of
URL, or just the part that appears after the hostname/IP:port numbers?

For example: the IP addresses of the squid servers are 10.135.85.2 and
10.135.85.3, and a request made to the 1st server would have had that IP
address as part of the URL. However, the next time the same request is made to
server2, a different IP address would be used. Does this affect the cache
hit at the sibling server?

I think it should not, but is this the case?

We will have a load balancer that sends requests to each squid server,
and we want cache peering to work correctly in this case.

thanks,
Sreenath


On 12/16/15, Amos Jeffries  wrote:
> On 16/12/2015 7:16 a.m., Sreenath BH wrote:
>> Hi,
>>
>> I have a setup with three squid peers (siblings in squid.conf) and
>> three upstream servers(peers with parent and originserver in
>> squid.conf).
>>
>> I am using htcp for the three squid siblings.
>> How much time does it take for one squid server to 'know' that another
>> peer has a particular object cached? I see digests exchanged between
>> the siblings, as logged in cache.log.
>
> When both HTCP and Digests are active between siblings, the maximum time
> is however long it takes for the HTCP packet to reach the sibling, be
> parsed, looked up in the cache, and for the response to get back.
>
> Digests are used to short-circuit the ICP or HTCP process. If the digest
> contains an entry for the URL the peer will be selected as a possible
> destination server. Regardless of whether the object stored for that URL
> is the same one the client is fetching.
>
> Digests are updated every digest_rebuild_period (default 1 hr). You can
> disable digests with either "digest_generation off" or per-peer with the
> cache_peer no-digest option.
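> 
> A minimal example of those two forms (the sibling address and HTCP port are
> assumptions):
> 
>   digest_generation off
>   cache_peer 10.135.85.3 sibling 3128 4827 htcp no-digest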
>
>
>>
>> I have been able to make a request to one sibling and it resulted in a
>> sibling_hit.
>>
>> How I do this test is this:
>> 1. bring up all siblings
>> 2. issue a request to one server (sibling 1)
>> 3. Make sure it is cached in sibling 1
>> 4. Wait for some time (I don't know how long to wait)
>
> Until the log of sibling1 contains a digest fetch from sibling2. A
> restart of sibling2 will make that happen faster.
>
>> 5. Make same request to another sibling, say sibling 2
>> 6. Check if it went to upstream server for the request or it was a sibling
>> hit.
>>
>> My problem is that the sibling hits seem to be random. I am not able
>> to figure out exactly
>> how long it takes for the cache information to propagate to all siblings.
>
> Digest is an old algorithm designed as an optimization of ICP, and
> likewise is based on the URL alone - which is great for HTTP/1.0 traffic. In
> modern HTTP/1.1 traffic the Vary headers have a big part to play and
> HTCP with full-header lookups works much better.
>
> I suggest trying with only HTCP (digests disabled) and see if your
> performance improves at all. YMMV though.
>
> Be aware that there is no guarantee that any object is still in cache,
> even with the more reliable HTCP on-demand lookups. Any object could be
> dropped from sibling1 cache picoseconds after the "i have it" reply
> started being formed for delivery to sibling2 (before it even hits the
> wire on its way back).
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Using cache_peer for transparent NTLM authentication

2015-12-16 Thread Antonio Petrelli
2015-12-16 12:58 GMT+01:00 Amos Jeffries :

> On 16/12/2015 11:12 p.m., Antonio Petrelli wrote:
> > Hello
> > sorry if this question has been asked, and answered, already, I dug the
> > mailing list archive but it is still not clear to me.
> >
> > Is it possible to use a cache_peer that supports NTLM authentication in a
> > "transparent" way, so that the user of the Squid proxy does not need to
> > enter any authentication and Squid does all the authentication by itself?
>
> No, that is not supported. And we have no intention of adding it; NTLM
> has been deprecated for nearly 10 years.
>
> Squid supports performing Negotiate/Kerberos authentication to
> cache_peer instead.
>
>
> > Notice that, until now, I am using CNTLM to do this, however CNTLM has
> > severe problems under Windows 8.
>
> Notice that NTLM has increasingly severe problems under any Windows XP
> SP3 or later system.
>
>
Thank you. Notice that I asked this question because I would like to use
Squid as a personal local proxy for different types of upstream proxy, so
that a user can run their own Squid instance on their machine and the only
thing to do is switch to the correct configuration file when switching
workplace. Currently I do it by using Squid or CNTLM as needed, and I also
wrote a proxy in Java for it; however, it is really memory-consuming,
especially compared to Squid:
https://github.com/apetrelli/scafa

Thank you anyway for your answer and sorry for the rant

Antonio
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid 3.5.10 samba4 kerberos few questions (debain Jessie)

2015-12-16 Thread L . P . H . van Belle
Hai, 

 

Im having the following running. 

Debian Jessie, squid 3.5.10 (recompiled from sid)  with icap and authorisation 
agains a samba 4 AD DC. 

I begin with, this works great !.. so now my questions and the conf part for 
this. 

 

I am using the following authentications. 

First Kerberos:

auth_param negotiate program /usr/lib/squid/negotiate_wrapper_auth -d \

    --kerberos /usr/lib/squid/negotiate_kerberos_auth -s 
HTTP/hostname.domain.tld@KERB.REALM \

    --ntlm /usr/bin/ntlm_auth --helper-protocol=gss-spnego --domain=NTDOMAIN

 

And this works also

#auth_param negotiate program /usr/lib/squid/negotiate_wrapper_auth \

#    --kerberos /usr/lib/squid/negotiate_kerberos_auth -s GSS_C_NO_NAME -d \

#    --ntlm /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --domain= 
NTDOMAIN \

 

I use as fallback  basic auth.

auth_param basic program /usr/lib/squid/basic_ldap_auth -R \

    -b "ou=SOMEOU,dc=internal,dc=domain.dc=tld" \

    -D ldap-bind@ KERB.REALM  -W /etc/squid/private/ldap-bind \

    -f (|(userPrincipalName=%s)(sAMAccountName=%s)) \

    -h samba4-dc2.internal.domain.tld \

    -h samba4-dc1.internal.domain.tld

 

I know the following: 

## 1) Pure Kerberos. Passthrough auth for windows users with windows DOMAIN 
JOINED pc's.

##    Fallback to Ldap for NON WINDOWS NON DOMAIN JOINED Devices.

##    NO NTLM. AKA, a windows pc, NOT JOINED in the domain, with end up in 
always user popup for auth.

##    Which will always fail because of NTLM TYPE 1 and TYPE 2, authorisations.

## 2) NEGOTIATE AUTH, which will do all of above, but also authenticated 
Windows PC's Not domain Joined.

 

When people access websites a see a lot of : TCP_DENIED/407 

Sometimes about 10-12 times the TCP_DENIED/407, even when the user already 
access the website and it authenticated. 

Is this because of pc’s auth, or user auth, or by design as i did read here : 

 

http://www.squid-cache.org/mail-archive/squid-users/201310/0006.html

acl AuthRequest http_status 407 
access_log ... !AuthRequest ...

 

 

is this the only solution to reduce the 407, or am i missing some setting here? 

If you need more info, just ask.. 

 

 

Greetz, 

 

Louis

 

 

 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Using cache_peer for transparent NTLM authentication

2015-12-16 Thread Amos Jeffries
On 16/12/2015 11:12 p.m., Antonio Petrelli wrote:
> Hello
> sorry if this question has been asked, and answered, already, I dug the
> mailing list archive but it is still not clear to me.
> 
> Is it possible to use a cache_peer that supports NTLM authentication in a
> "transparent" way, so that the user of the Squid proxy does not need to
> enter any authentication and Squid does all the authentication by itself?

No, that is not supported. And we have no intention of adding it; NTLM
has been deprecated for nearly 10 years.

Squid supports performing Negotiate/Kerberos authentication to
cache_peer instead.
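
A minimal sketch of that peer-side Negotiate/Kerberos login (the peer address,
port and reliance on the proxy's default Kerberos credentials are assumptions):

cache_peer upstream.example.com parent 8080 0 no-query login=NEGOTIATE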


> Notice that, until now, I am using CNTLM to do this, however CNTLM has
> severe problems under Windows 8.

Notice that NTLM has increasingly severe problems under any Windows XP
SP3 or later system.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 3.5.12 and ecap

2015-12-16 Thread behrad eslami
Hi 
thanks for your response. I compiled squid on Debian Jessie.


On Wednesday, December 16, 2015 6:38 AM, Amos Jeffries 
 wrote:
 

 On 16/12/2015 12:53 a.m., behrad eslami wrote:
> Hi. I added a simple rule to an eCAP module to deny some URLs. In the
> foreground (squid -N -d10) everything works well. When I run the service with
> multiple workers, after a while some URLs are not filtered and users can open
> them in their browsers. I compiled Squid 3.5.12 and libecap 1.0.0. Squid was
> compiled with the options below:
> '--build=x86_64-linux-gnu' '--prefix=/usr' '--includedir=${prefix}/include' 
> '--mandir=${prefix}/share/man' '--infodir=${prefix}/share/info' 
> '--sysconfdir=/etc' '--localstatedir=/var' 
> '--libexecdir=${prefix}/lib/squid3' '--srcdir=.' '--disable-maintainer-mode' 
> '--disable-dependency-tracking' '--disable-silent-rules' 
> '--datadir=/usr/share/squid3' '--sysconfdir=/etc/squid3' 
> '--mandir=/usr/share/man' '--enable-inline' '--disable-arch-native' 
> '--enable-async-io=8' '--enable-storeio=aufs,rock' 
> '--enable-removal-policies=lru,heap' '--enable-delay-pools' 
> '--enable-cache-digests' '--enable-follow-x-forwarded-for' '--enable-eui' 
> '--enable-icmp' '--enable-zph-qos' '--enable-ecap' '--disable-auto-locale' 
> '--disable-translation' '--with-swapdir=/var/spool/squid3' 
> '--with-logdir=/var/log/squid3' '--with-pidfile=/var/run/squid3.pid' 
> '--with-filedescriptors=65536' '--with-large-files' 
> '--with-default-user=proxy' '--enable-build-info= linux' 
> '--enable-linux-netfilter' 'build_alias=x86_64-linux-gnu
' 'CFLAGS=-g -O2 -fPIE -fstack-protector-strong -Wformat 
-Werror=format-security -Wall' 'LDFLAGS=-fPIE -pie -Wl,-z,relro -Wl,-z,now' 
'CPPFLAGS=-D_FORTIFY_SOURCE=2' 'CXXFLAGS=-g -O2 -fPIE -fstack-protector-strong 
-Wformat -Werror=format-security' '--disable-ipv6' '--disable-wccp' 
'--disable-auth-basic' '--disable-auth-digest' '--disable-auth-negotiate' 
'--disable-auth-ntlm' '--disable-external-acl-helpers' 
'--disable-url-rewrite-helpers' '--disable-storeid-rewrite-helpers' 
'--without-mit-krb5' '--without-heimdal-krb5' '--without-gnugss' 
'--disable-unlinkd' '--disable-ident-lookups' '--disable-esi' 
'--disable-select' '--disable-poll' '--disable-kqueue' '--disable-devpoll' 
'--enable-epoll'
> 

This looks like Debian based build settings. With a bunch of extra
things disabled. What OS is this being used on?

NP: you can use --disable-auth to do all of the --disable-auth-*
settings in one simpler option.


> and my ecap config is:
> icap_enable on
> icap_send_client_ip on
> icap_client_username_encode on

None of that is eCAP configuration. The 'i' (not 'e') at the start of
the directive names should give it away.

This is the eCAP part:

> loadable_modules /usr/local/lib/ecap_adapter_MY_processing.so
> ecap_enable on
> ecap_service ecapModifier respmod_precache \
>      uri=ecap://www.deltaglobal.net/adapter_My_processing  \
>        victim=sadeghsalehi\
>        replacement=***
> adaptation_access ecapModifier allow all
> loadable_modules /usr/local/lib/ecap_adapter_My_request.so
> ecap_enable on
> ecap_service eReqmod reqmod_precache bypass=1 ecap://e-cap.org/ecap/services/My/request
> adaptation_access  eReqmod  allow all

Looks correct, and should be working.

Notice that you have "bypass=1" configured, so any problem (even just
a long delay in processing time) can cause the second eCAP module to be
bypassed and do nothing.

Your config looks fine. You will need to dig down into what the module
is actually doing and what it is having trouble with.

NP: -N is not just foreground; it also disables all multi-process
activity by the workers. Perhaps there is a problem with the ecap
module being loaded and/or used by multiple processes simultaneously.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


  ___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 3.5.10 samba4 kerberos few questions (debain Jessie)

2015-12-16 Thread Amos Jeffries
On 16/12/2015 11:48 p.m., L.P.H. van Belle wrote:
> Hai, 
> 
>  
> 
> Im having the following running. 
> 
> Debian Jessie, squid 3.5.10 (recompiled from sid)  with icap and 
> authorisation agains a samba 4 AD DC. 
> 
> I begin with, this works great !.. so now my questions and the conf part for 
> this. 
> 
>  
> 
> I am using the following authentications. 
> 
> First Kerberos:
> 
> auth_param negotiate program /usr/lib/squid/negotiate_wrapper_auth -d \
> 
> --kerberos /usr/lib/squid/negotiate_kerberos_auth -s 
> HTTP/hostname.domain.tld@KERB.REALM \
> 
> --ntlm /usr/bin/ntlm_auth --helper-protocol=gss-spnego --domain=NTDOMAIN
> 
>  
> 
> And this works also
> 
> #auth_param negotiate program /usr/lib/squid/negotiate_wrapper_auth \
> 
> #--kerberos /usr/lib/squid/negotiate_kerberos_auth -s GSS_C_NO_NAME -d \
> 
> #--ntlm /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --domain= 
> NTDOMAIN \
> 
>  
> 
> I use as fallback  basic auth.
> 
> auth_param basic program /usr/lib/squid/basic_ldap_auth -R \
> 
> -b "ou=SOMEOU,dc=internal,dc=domain.dc=tld" \
> 
> -D ldap-bind@ KERB.REALM  -W /etc/squid/private/ldap-bind \
> 
> -f (|(userPrincipalName=%s)(sAMAccountName=%s)) \
> 
> -h samba4-dc2.internal.domain.tld \
> 
> -h samba4-dc1.internal.domain.tld
> 
>  
> 
> I know the following: 
> 
> ## 1) Pure Kerberos. Passthrough auth for windows users with windows DOMAIN 
> JOINED pc's.
> 
> ##Fallback to Ldap for NON WINDOWS NON DOMAIN JOINED Devices.
> 
> ##NO NTLM. AKA, a windows pc, NOT JOINED in the domain, with end up in 
> always user popup for auth.
> 
> ##Which will always fail because of NTLM TYPE 1 and TYPE 2, 
> authorisations.
> 
> ## 2) NEGOTIATE AUTH, which will do all of above, but also authenticated 
> Windows PC's Not domain Joined.
> 

AFAIK #2 is incorrect. Negotiate still needs the *joined* part to be
true. They just do not have to have current connectivity to the DC,
provided the secret-token part of the credentials that comes from the DC
is still current on the client machine.


> 
> When people access websites a see a lot of : TCP_DENIED/407 
> 
> Sometimes about 10-12 times the TCP_DENIED/407, even when the user already 
> access the website and it authenticated. 
> 
> Is this because of pc’s auth, or user auth, or by design as i did read here : 
> 
> 
> http://www.squid-cache.org/mail-archive/squid-users/201310/0006.html


> 
> acl AuthRequest http_status 407 
> access_log ... !AuthRequest ...
> 
> is this the only solution to reduce the 407, or am i missing some setting 
> here? 

It just hides them from the logging. They still happen.

Another workaround suggested in that thread is to bypass, and not
require authentication for, some popular domains (a minimal sketch follows).
That has the added benefit of letting HTTP performance optimizations work;
most HTTP features actually have to be disabled in the presence of NTLM or
Negotiate.
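
A minimal sketch of that bypass (the domain list is an assumption, and
"authenticated" stands for whatever proxy_auth ACL the config already uses):

acl noauth_dst dstdomain .cdn.example.com .updates.example.net
http_access allow noauth_dst
# existing auth-requiring rules follow, for example:
http_access allow authenticated
http_access deny all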

The solution is for clients to actually make use of the connection
persistence that NTLM and Negotiate *require* Squid to set up just to
perform those auth types. Tearing it all down after just one HTTP-level
transaction is very wasteful.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid with NTLM auth behind netscaler

2015-12-16 Thread Fabio Bucci
I'm planning to migrate to Kerberos instead of NTLM. I've got a question for
you, Amos: sometimes a client reports issues in navigation, and searching the
log file I cannot see a "username" and all the requests are 407.

In these cases, is there a way to reset a user session, or is it completely a
client issue?

thanks,
Fabio

2015-12-12 5:00 GMT+01:00 Amos Jeffries :

> On 12/12/2015 3:42 a.m., Fabio Bucci wrote:
> > Thank Amos i know you suggested kerberos. How can i implement it instead
> of
> > LDAP?
>
> 
>
> Amos
>
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid authentication mechs

2015-12-16 Thread Eugene M. Zheganin

Hi.

Is there a way to limit the number of available authentication
mechanisms (for a client browser) based on the particular squid IP which the
browser connects to, e.g. using the http_port configuration directive? For
example, this is needed when one needs to allow non-domain machines to
pass through authentication/authorization checks using squid with
full-fledged AD integration (or Kerberos/NTLM, anyway); otherwise they
are unable to do it. Once they were able to, for example using Chrome < 41, but
in versions after 41 Chrome has removed all the options to exclude certain
authentication methods from its CLI sequence (I still wonder what
genius proposed this).


If not (and I believe there isn't), could this message be treated as a
feature request?


Thanks.
Eugene.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Using cache_peer for transparent NTLM authentication

2015-12-16 Thread Antonio Petrelli
Hello
sorry if this question has been asked, and answered, already; I dug through the
mailing list archive but it is still not clear to me.

Is it possible to use a cache_peer that supports NTLM authentication in a
"transparent" way, so that the user of the Squid proxy does not need to
enter any authentication and Squid does all the authentication by itself?
Notice that, until now, I am using CNTLM to do this, however CNTLM has
severe problems under Windows 8.

Thank you in advance
Antonio Petrelli
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users