Re: [squid-users] Squid with NTLM auth behind netscaler

2015-12-04 Thread Amos Jeffries
On 5/12/2015 5:39 a.m., Fabio Bucci wrote:
> Thanks Amos.
> Actually my load balancing is configured to perform round robin balancing
> between the two nodes. I added session persistence by source IP in order
> to avoid having to log in again on some sites.
> 
> my squid.conf is very simple:
> auth_param ntlm program /usr/bin/ntlm_auth
> --helper-protocol=squid-2.5-ntlmssp
> auth_param ntlm children 100
> auth_param ntlm keep_alive off
> 
> acl auth proxy_auth REQUIRED
> 
> http_access allow auth
> 

Okay. That *should* work. With some NTLM-specific caveats.


> forwarded_for on
> follow_x_forwarded_for allow netscaler
> 

If the LB is touching the traffic enough to add headers then it is a
proxy. NTLM does not work at all well through proxies. NTLM as a whole
is based on the assumption that there is one (and only one) TCP
connection between the client and the proxy - the credentials are tied
to the TCP connection state.

There is one VERY slim hack that lets NTLM pass straight through a
frontend proxy/LB: pinning the LB's inbound and outbound TCP
connections together. This is not just session persistence, but an
absolute prohibition on any other traffic (even from other connections
by the same client) being sent over that outbound LB->proxy connection.
Some LBs can do this, some cannot.


I recommend advertising both/all proxy IPs to the clients and letting
each select the one(s) it wants to contact. That way the client can
perform NTLM directly to the Squid.


On the other hand, NTLM was deprecated back in 2006; you should try
migrating to Negotiate/Kerberos. Kerberos has a bit of a learning curve
and can be tricky with older client software. But it is *way* more
efficient and friendlier to HTTP (though still not fully).


Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] doubts about the squid3

2015-12-04 Thread Amos Jeffries
On 5/12/2015 11:20 a.m., Marcio Demetrio Bacci wrote:
> Hi Amos,
> 
> Thanks for help me.
> 
> Follow my whole squid.conf


> acl manager proto cache_object


I see you still have the old Squid-2 definition for "manager" ACL. If
your Squid is not complaining about that, it means you are using a very
old version and need to upgrade.
The config should work, so I think you are hitting bugs in Squid. With
Squid older than 3.4 it could be bug 2305 and the related nest of
horrible auth code that used to exist in Squid.

Please ensure you are using Squid-3.4 or later. If the problem
remains, you will have to try to isolate some situation that always
causes it. With that, an ALL,9 debug log from Squid could help.


Also, be aware that there is always the possibility of browser bugs
being involved. Firefox 25-40 did not do NTLM properly, and Chrome 47
just had a major regression where it broke all NTLM to a proxy - similar
but less high-profile things have happened before with both of them, and
old IE 0-8 can randomly be a problem as they do their own undocumented
things.



> acl Safe_ports port 80 8080 21 443 563 70 210 280 488 591 777 3001
> 1025-65535

You don't have to add port 8080 or 3001 to that list. They are included
in the 1025-65535 set.
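
i.e. the list could be trimmed to something like:

```
acl Safe_ports port 80 21 443 563 70 210 280 488 591 777 1025-65535
```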


> acl connect_abertas maxconn 8

connect_abertas is unused. You should remove it.

> acl grupo_admins external ad_group gg_admins
> acl grupo_users external ad_group gg_users
> acl extensoes_bloqueadas url_regex -i "/etc/squid3/acls/extensoes-proibidas"
> acl sites_liberados url_regex -i "/etc/squid3/acls/sites-permitidos"
> acl sites_bloqueados url_regex -i "/etc/squid3/acls/sites-proibidos"
> acl palavras_bloqueadas url_regex -i "/etc/squid3/acls/palavras-proibidas"
> acl autenticados proxy_auth REQUIRED
> http_access deny !autenticados
> http_access allow grupo_admins
> http_access deny extensoes_bloqueadas
> http_access allow sites_liberados
> http_access deny sites_bloqueados
> http_access deny palavras_bloqueadas
> http_access allow grupo_users
> http_access allow autenticados

Only autenticados can get past the "deny !autenticados" at the top. So
this "allow autenticados" will always match, and the lines below it do
nothing useful.

So you could replace the above "allow autenticados" with "allow all" and
save some extra auth checking.
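
As a sketch (reusing the ACL names from the config above), the whole
sequence could then read:

```
http_access deny !autenticados
http_access allow grupo_admins
http_access deny extensoes_bloqueadas
http_access allow sites_liberados
http_access deny sites_bloqueados
http_access deny palavras_bloqueadas
http_access allow grupo_users
# everyone left at this point is already authenticated,
# so there is no need to re-check credentials:
http_access allow all
```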


> acl network_servers src 192.168.0.0/25
> acl Lan1 src 192.168.1.0/24
> acl lan2 src 192.168.2.0/23
> http_access allow  network_servers
> http_access allow lan1
> http_access allow lan2
> http_access deny all
> error_directory /usr/share/squid3/errors/pt-br

I hear Brazil is a multi-cultural country. You might want to seriously
consider removing that line, which forces all users and clients to read
Portuguese (Brazil) language messages from the proxy.

Amos



Re: [squid-users] Authentication Problem

2015-12-04 Thread Samuel Anderson
Hi Amos and Dima,

I'm having the exact same problem. After updating Chrome to version
(47.0.2526.73
m) I'm no longer able to authenticate. IE and Firefox still seem to work
fine. I haven't changed anything in my config file for months.

On Fri, Dec 4, 2015 at 5:22 AM, Dima Ermakov  wrote:

> Thank you, Amos.
>
> I checked all, that you wrote.
> It didn't help me.
>
> I have this problem only on google chrome browser.
> Before 2015-12-03 all was good.
> I didn't change my configuration more than one month.
>
> Ten minutes ago "Noel Kelly nke...@citrusnetworks.net" wrote in this
> list that Google Chrome v47 has broken NTLM authentication.
> My clients with problems have Google Chrome v47 (((
>
> Mozilla Firefox clients work fine.
>
> Thank you!
>
> This is message from Noel Kelly:
> "
>
> Hi
>
> For information, the latest version of Google Chrome (v47.0.2526.73M) has
> broken NTLM authentication:
>
> https://code.google.com/p/chromium/issues/detail?id=544255
>
> https://productforums.google.com/forum/#!topic/chrome/G_9eXH9c_ns;context-place=forum/chrome
>
> Cheers
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
> "
>
> On 4 December 2015 at 04:55, Amos Jeffries  wrote:
>
>> On 4/12/2015 9:46 a.m., Dima Ermakov wrote:
>> > Hi!
>> > I have a problem with authentication.
>> >
>> > I use samba ntlm authentication in my network.
>> >
>> > Some users ( not all ) have problems with http traffic.
>> >
>> > They see basic authentication request.
>>
>> Meaning you *don't* have NTLM authentication on your network.
>>
>> Or you are making the mistake of thinking a popup means Basic
>> authentication.
>>
>> > If they enter correct domain login and password, they have auth error.
>> > If this users try to open https sites: all works good, they have not any
>> > type of errors.
>>
>> So,
>>  a) they are probably not going through this proxy, or
>>  b) the browser is suppressing the proxy-auth popups, or
>>  c) the authentication request is not coming from *your* proxy.
>>
>> >
>> > So we have errors only with unencrypted connections.
>> >
>> > I have this error on two servers:
>> > debian8, squid3.4 (from repository)
>> > CentOS7, squid3.3.8 (from repository).
>> >
>>
>> Two things to try:
>>
>> 1) Adding a line like this before the group access controls in
>> frontend.conf. This will ensure that authentication credentials are valid
>> before doing group lookups:
>>  http_access deny !AuthorizedUsers
>>
>>
>> 2) checking up on the Debian winbind issue mentioned in
>> <
>> http://wiki.squid-cache.org/ConfigExamples/Authenticate/Ntlm#winbind_privileged_pipe_permissions
>> >
>>
>> I'm not sure about this; it is likely to be involved on Debian, but CentOS
>> is not known to have that issue.
>>
>>
>> Oh and:
>>  3) remove the "acl manager" line from squid.conf.
>>
>>  4) change your cachemgr_passwd. Commenting it out does not hide it from
>> view when you post it on this public mailing list.
>>
>> You should remove all the commented out directives as well, some of them
>> may be leading to misunderstanding of what the config is actually doing.
>>
>>
>> Amos
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
>>
>
>
>
> --
> Best regards, Дмитрий Ермаков.
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
>


-- 
Samuel Anderson  |  System Administrator  |  International Document Services

IDS  |  11629 South 700 East, Suite 200  |  Draper, UT 84020-4607

-- 
CONFIDENTIALITY NOTICE:
This e-mail and any attachments are confidential. If you are not an 
intended recipient, please contact the sender to report the error and 
delete all copies of this message from your system.  Any unauthorized 
review, use, disclosure or distribution is prohibited.


Re: [squid-users] mail upload problem

2015-12-04 Thread vivek singh
Thanks a lot
1. These logs are from the moment when a Gmail attachment was
initiated on the user machine.
2. Logs have been filtered for that particular user, and hence other
entries were not shown in the previous post.
3. What worries me is that, while the mail attachment was being
initiated on the user machine, no other task was being performed on
that machine, so how did the line
1449226966.745: 0: TCP_DENIED/403: 4090: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/
come into the picture? Of course, access to
http://download.newnext.me/spark.bin is not allowed on the proxy
server, and we do not intend to allow it.
4. If http://download.newnext.me/spark.bin has nothing to do with the
mail attachment problem, then what could be the cause?

I really appreciate your replies. Again, thank you.


-- 




Thanks and Regards
Vivek Kumar Singh
J.T.O./ITPC-Kolkata
Mobile 08902000538
Landline 033-23211548


Re: [squid-users] Understand debug Logs

2015-12-04 Thread Amos Jeffries
On 5/12/2015 5:53 a.m., Patrick Flaherty wrote:
> Hi,
> 
> I have debug level set to 2 (ALL,2) and was wondering if ANY of the
> following messages in the logs below were of concern.

Level "ALL,0" displays only critical issues.

Level "ALL,1" displays the above plus other important issues that should
be attended to, but are not serious enough to panic about.

Everything else is informational. Squid has a long history of people
randomly selecting what level to display things at, so there is not much
consistency.


> I'm new to Squid and
> loving it. Particularly where it says always_direct = DENIED & never_direct
> = DENIED.
> 

The first means the request was not forced to go DIRECT to a DNS-listed
server.

The second means the request was not forced to go to a cache_peer.

Amos



Re: [squid-users] rock storage integrity

2015-12-04 Thread Alex Rousskov
On 12/04/2015 08:37 AM, Hussam Al-Tayeb wrote:
> Since this is a database, is it possible for part of the database to
> get corrupted through a crash or incorrect poweroff?

It depends on your definition of "corruption". Yes, it is possible that
some database updates will be incomplete because of a poweroff. Bugs
notwithstanding, after a restart,

* Squid will not notice that an entry that was supposed to be deleted
was not. Squid will continue to serve such an entry from the cache.

* Assuming atomic single-write disk I/Os, Squid should notice an entry
that was only partially saved and not serve it from the cache. Its slots
will be considered free space.

* In the event a single-write disk I/O was only partially completed,
Squid may or may not notice a partial save, depending on what was
actually written to disk. There is currently no Squid code that detects
non-atomic single-write disk I/Os. AFAICT, this might corrupt up to two
cache entries per cache_dir in such a way that Squid will not notice the
corruption unless there are some OS-level protections against that.
Squid uses regular file system calls for writing entries...

* There should be no effect on entries already fully stored at the time
of the power outage.


> I had an incorrect poweroff yesterday but cache.log did not list
> anything weird.

> Nevertheless, what would be the best way to check if there was some
> damage to the database (unusable slots/cells/whatever)?

IIRC, for Rock, all validation is currently done automagically upon startup.


HTH,

Alex.



[squid-users] Understand debug Logs

2015-12-04 Thread Patrick Flaherty
Hi,

 

I have debug level set to 2 (ALL,2) and was wondering if ANY of the
following messages in the logs below were of concern. I'm new to Squid and
loving it. Particularly where it says always_direct = DENIED & never_direct
= DENIED.

 

Thanks

Patrick

 

CONNECT mydomain.com:443 HTTP/1.1

Host: mydomain.com

Proxy-Connection: keep-alive

User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.19 (KHTML,
like Gecko) Chrome/18.0.1003.1 Safari/535.19 Awesomium/1.7.1

 

 

--

2015/12/04 11:44:59.322 kid1| 85,2| client_side_request.cc(741)
clientAccessCheckDone: The request CONNECT mydomain.com:443 is ALLOWED; last
ACL checked: whitelist

2015/12/04 11:44:59.322 kid1| 85,2| client_side_request.cc(717)
clientAccessCheck2: No adapted_http_access configuration. default: ALLOW

2015/12/04 11:44:59.322 kid1| 85,2| client_side_request.cc(741)
clientAccessCheckDone: The request CONNECT mydomain.com:443 is ALLOWED; last
ACL checked: whitelist

2015/12/04 11:44:59.322 kid1| 44,2| peer_select.cc(258) peerSelectDnsPaths:
Find IP destination for: mydomain.com:443' via smart911.rave411.com

2015/12/04 11:44:59.322 kid1| 44,2| peer_select.cc(280) peerSelectDnsPaths:
Found sources for 'mydomain.com:443'

2015/12/04 11:44:59.322 kid1| 44,2| peer_select.cc(281) peerSelectDnsPaths:
always_direct = DENIED

2015/12/04 11:44:59.322 kid1| 44,2| peer_select.cc(282) peerSelectDnsPaths:
never_direct = DENIED

2015/12/04 11:44:59.322 kid1| 44,2| peer_select.cc(286) peerSelectDnsPaths:
DIRECT = local=0.0.0.0 remote=205.126.126.230:443 flags=1

2015/12/04 11:44:59.322 kid1| 44,2| peer_select.cc(295) peerSelectDnsPaths:
timedout = 0

2015/12/04 11:44:59.457 kid1| 33,2| client_side.cc(815) swanSong:
local=192.168.1.1:3128 remote=192.168.1.233:49352 flags=1

2015/12/04 11:45:09.429 kid1| 5,2| TcpAcceptor.cc(220) doAccept: New
connection on FD 10

2015/12/04 11:45:09.429 kid1| 5,2| TcpAcceptor.cc(295) acceptNext:
connection on local=[::]:3128 remote=[::] FD 10 flags=9

2015/12/04 11:45:09.430 kid1| 11,2| client_side.cc(2345) parseHttpRequest:
HTTP Client local=192.168.1.1:3128 remote=192.168.1.233:49353 FD 58 flags=1

2015/12/04 11:45:09.430 kid1| 11,2| client_side.cc(2346) parseHttpRequest:
HTTP Client REQUEST:

-



Re: [squid-users] Deny Access based on SSL-Blacklists (SHA1-Fingerprint) with ssl_bump

2015-12-04 Thread Alex Rousskov
On 12/04/2015 05:40 AM, Amos Jeffries wrote:
> On 4/12/2015 9:34 p.m., Tom Tom wrote:
>> Why do I need a "full" ssl_bump-configuration to prevent access based
>> on fingerprints?


> Because "deny" in the form you are trying to do it is an HTTP message.
> In order to perform HTTP over a TLS connection you have to decrypt it first.


> What you actually want to be doing is:
> 
>   acl step1 at_step SslBump1
>   acl whitelist ssl::server_name_regex -i "/etc/squid/DENY_SSL_BUMP"
>   acl blacklist server_cert_fingerprint "/etc/squid/SSL_BLACKLIST"
> 
>   ssl_bump splice whitelist
>   ssl_bump peek step1
>   ssl_bump stare all
>   ssl_bump terminate blacklist
>   ssl_bump bump all


Please consider adding this fine example to the SslPeekAndSplice wiki
page at http://wiki.squid-cache.org/Features/SslPeekAndSplice


Please note that if you do not want to bump anything, then the following
should also work (bugs notwithstanding):

ssl_bump splice whitelist
ssl_bump peek all
ssl_bump terminate blacklist
ssl_bump splice all


Thank you,

Alex.



Re: [squid-users] How to limit user traffic quota? (GoGo net)

2015-12-04 Thread Amos Jeffries
On 5/12/2015 4:57 a.m., GoGo net wrote:
> Limiting the rate is another way to limit traffic; I will think
> about it.
> 
> Currently, I prefer to use a script to monitor access.log, and I
> found a problem today:
> 
> From [squid wiki](http://wiki.squid-cache.org/Features/LogFormat):
> 
>> bytes The size is the amount of data delivered to the client. Mind
>> that this does not constitute the net object size, as headers are
>> also counted.
> 
> It seems that **bytes** only includes response size (including http
> header). What I really want is counting both http-request and
> http-response. Is there any way to enable http-request **bytes**
> being logged in access.log?

You need to use the %st code in a custom log format.
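
For example, something like this (a sketch; the format name and log path
are placeholders): %st is the total size in bytes, request plus reply,
headers included, while %>st and %<st give the request and reply sizes
separately.

```
# custom format logging the total transfer size
# (%st = request + reply bytes, including headers)
logformat quotafmt %ts.%03tu %6tr %>a %Ss/%03>Hs %st %rm %ru %un
access_log /var/log/squid/access.log quotafmt
```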


PS. there is another problem you may not have noticed yet. The log
entries are only recorded at the *end* of each transaction, which means
that all transactions started before the user hits their limit will be
allowed to continue consuming bandwidth until they exit naturally - by
which time the counted quota-spent value has gone past the limit you
set. CONNECT tunnels have indefinite lifetimes; some have been seen
lasting for weeks.

This is one of the reasons I recommend QoS controls external to Squid.
The OS can measure as each packet happens and terminate the over-quota
transactions at the TCP level.

Amos


Re: [squid-users] rock storage integrity

2015-12-04 Thread Amos Jeffries
On 5/12/2015 4:37 a.m., Hussam Al-Tayeb wrote:
> Hi. I am using squid with rock storage right now to cache computer
> updates for my Linux computers. It works well.
> Since this is a database, is it possible for part of the database to
> get corrupted through a crash or incorrect poweroff?
> I know from SQL databases that incorrect shutdowns can cause binary
> corruption.
> 
> I had an incorrect poweroff yesterday but cache.log did not list
> anything weird.

Such types of corruption only happen if files were actively in the
process of writing data when the power went out (read does not count).

Because Squid uses a separate memory cache as the front-line I/O area,
with the diskers' cache as a secondary, the risks of that happening are
a bit lower than with DB services that write directly to disk.


Also, unless you tuned them to a different size, the rock slots/cells
are sized to match the OS natural page sizes. So there is almost no
delay in the write(2) within which the corruption can happen.

Your underlying FS can also play a part in preventing the corruption.

> 
> 2015/12/03 01:00:11| Store rebuilding is 0.31% complete
> 2015/12/03 01:01:00| Finished rebuilding storage from disk.
> 2015/12/03 01:01:00|31 Entries scanned
> 2015/12/03 01:01:00| 0 Invalid entries.
> 2015/12/03 01:01:00| 0 With invalid flags.
> 2015/12/03 01:01:00| 55523 Objects loaded.
> 2015/12/03 01:01:00| 0 Objects expired.
> 2015/12/03 01:01:00| 0 Objects cancelled.
> 2015/12/03 01:01:00| 0 Duplicate URLs purged.
> 2015/12/03 01:01:00| 0 Swapfile clashes avoided.
> 2015/12/03 01:01:00|   Took 49.94 seconds (.79 objects/sec).
> 2015/12/03 01:01:00| Beginning Validation Procedure
> 2015/12/03 01:01:00|   Completed Validation Procedure
> 2015/12/03 01:01:00|   Validated 0 Entries
> 2015/12/03 01:01:00|   store_swap_size = 3187216.00 KB
> 
> Nevertheless, what would be the best way to check if there was some
> damage to the database (unusable slots/cells/whatever)?

Squid does that automatically on (every) startup. The log lines you
quote above are the scan happening and its results.

If Squid had found any corruption there would have been a non-0 value on
the lines:

> 2015/12/03 01:01:00| 0 Invalid entries.
> 2015/12/03 01:01:00| 0 With invalid flags.

and possibly some WARNING notices above the "Finished rebuilding" line
if important problems were found.

Amos



Re: [squid-users] How to limit user traffic quota? (GoGo net)

2015-12-04 Thread GoGo net
Limiting the rate is another way to limit traffic; I will think about it.

Currently, I prefer to use a script to monitor access.log, and I found a 
problem today:

From [squid wiki](http://wiki.squid-cache.org/Features/LogFormat):

> bytes The size is the amount of data delivered to the client. Mind that this 
> does not constitute the net object size, as headers are also counted.

It seems that **bytes** only includes the response size (including the HTTP 
header). What I really want is to count both http-request and http-response. 
Is there any way to get http-request **bytes** logged in access.log?


> On Dec 4, 2015, at 12:23 AM, Robert Plamondon  wrote:
> 
> I haven't used delay pools in a while, but I would think that the updated 
> Squid 3 delay pools (with 64-bit counters and per-authenticated-user buckets) 
> would allow such quotas. 
> 
> I'd take the monthly quota and turn it into a per-second rate. If my math 
> isn't failing me, 100 GB/month = 38,500 bytes per second. That would be the 
> refill rate on the delay pool. Users will be guaranteed this rate. Their BW 
> would never be cut off, just throttled to the rate they're paying for.
> 
> Then pick a max value to taste. I like to populate delay pools to support an 
> enormous burst size (the "maximum" parameter in the pool), so the bandwidth 
> limitations will rarely if ever be encountered by the average user. 10% of 
> the monthly allotment, or 10 GB, (3 days' worth of bandwidth) strikes me as a 
> good starting point, but I wouldn't have much resistance to even higher 
> numbers, like 25%.
> 
> Robert
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



[squid-users] rock storage integrity

2015-12-04 Thread Hussam Al-Tayeb
Hi. I am using squid with rock storage right now to cache computer
updates for my Linux computers. It works well.
Since this is a database, is it possible for part of the database to
get corrupted through a crash or incorrect poweroff?
I know from SQL databases that incorrect shutdowns can cause binary
corruption.

I had an incorrect poweroff yesterday but cache.log did not list
anything weird.

2015/12/03 01:00:11| Store rebuilding is 0.31% complete
2015/12/03 01:01:00| Finished rebuilding storage from disk.
2015/12/03 01:01:00|31 Entries scanned
2015/12/03 01:01:00| 0 Invalid entries.
2015/12/03 01:01:00| 0 With invalid flags.
2015/12/03 01:01:00| 55523 Objects loaded.
2015/12/03 01:01:00| 0 Objects expired.
2015/12/03 01:01:00| 0 Objects cancelled.
2015/12/03 01:01:00| 0 Duplicate URLs purged.
2015/12/03 01:01:00| 0 Swapfile clashes avoided.
2015/12/03 01:01:00|   Took 49.94 seconds (.79 objects/sec).
2015/12/03 01:01:00| Beginning Validation Procedure
2015/12/03 01:01:00|   Completed Validation Procedure
2015/12/03 01:01:00|   Validated 0 Entries
2015/12/03 01:01:00|   store_swap_size = 3187216.00 KB

Nevertheless, what would be the best way to check if there was some
damage to the database (unusable slots/cells/whatever)?


Re: [squid-users] using splice just to improve TLS SNI logging

2015-12-04 Thread Alex Rousskov
On 12/03/2015 08:35 PM, Jason Haar wrote:

> Does going "splice" mode avoid all the potential SSL/TLS issues
> surrounding bump? ie it won't care about client certs, weird TLS
> extensions, etc? (ie other than availability, it shouldn't introduce a
> new way of failing?)

Obtaining SNI information requires parsing TLS handshake, so you will be
partially exposed to the dangers of that experimental and changing code.
Splicing at step1 is safer but does not give you SNI.
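
For reference, the SNI-logging setup under discussion would look roughly
like this in squid.conf (a sketch; the format name is a placeholder):
peeking at step1 parses the ClientHello far enough to expose %ssl::>sni,
after which the connection is spliced without decryption.

```
acl step1 at_step SslBump1
ssl_bump peek step1    # parse the TLS ClientHello to obtain SNI
ssl_bump splice all    # then tunnel the traffic untouched

# log the SNI alongside the usual fields
logformat snifmt %ts.%03tu %>a %ssl::>sni %Ss/%03>Hs %rm %ru
access_log /var/log/squid/access.log snifmt
```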

Alex.



Re: [squid-users] Squid with NTLM auth behind netscaler

2015-12-04 Thread Amos Jeffries
On 4/12/2015 11:14 p.m., Fabio Bucci wrote:
> Hi All,
> my task is implementing a squid proxy that allow all my authenticated
> (windows AD) internal users to surf internet without any credential request
> (pop-up).
> 
> Plus, i created two squid nodes and put them behind a citrix netscaler in
> order to perform a load balance service.
> 

How does this LB device work, exactly? When dealing with NTLM the
specifics matter *a lot*.

Some LBs sniff the HTTP traffic and route it on a per-message basis.
This is incompatible with both NTLM and Negotiate authentication, and
can randomly cause bad confusion between the browser and proxy.

Note that HTTP is a stateless protocol. So none of the browser, LB or
proxy are broken when this is going on. It is those two auth schemes that
are broken and incompatible with the stateless design of HTTP that the
LB relies on.


> I configured squid with samba and ntlm helper in order to perform a
> transparent authentication but sometimes some user report me their browsers
> require authentication via pop-up.
> 
> I'm not a deep expert about squid and i'd like to receive your help in
> order to understand if my configuration is correct or not and if there is a
> way to prevent popup.

With HTTP authentication there should only ever be one popup no matter
what type of authentication scheme is used. HTTP, being stateless,
requires that every single message has credentials attached (NTLM
violates that, and some browsers don't always re-send while the connection
is alive; Squid accepts that, the LB may not). It is the browser's
responsibility to remember the credentials that work and continue using
them without annoying the user.


There are some proxy configurations that allow for the proxy to force
the Browser to change credentials. These can result in popups as that
change happens. We will need to see your squid.conf to provide any
specific help on that.

Amos



Re: [squid-users] Deny Access based on SSL-Blacklists (SHA1-Fingerprint) with ssl_bump

2015-12-04 Thread Amos Jeffries
On 5/12/2015 3:32 a.m., Tom Tom wrote:
> Hi Amos
> 
> The configuration you provided above works also fine. Thank you. Which
> configuration is generally proposed or "the way to go"?: The one,
> which terminates SSL-Blacklists with "ssl_bump terminate" or the other
> which denies https-Blacklist with "http_access deny"? Are there some
> speed-/security...-considerations?

terminate is the correct way to go if you are rejecting based on just
the TLS details. Squid may decrypt, but will only do the absolute
minimum necessary to get the error back to the client. Not getting
involved with the client's HTTPS data is a good idea.

Amos



Re: [squid-users] Deny Access based on SSL-Blacklists (SHA1-Fingerprint) with ssl_bump

2015-12-04 Thread Tom Tom
Hi Amos

The configuration you provided above works also fine. Thank you. Which
configuration is generally proposed or "the way to go"?: The one,
which terminates SSL-Blacklists with "ssl_bump terminate" or the other
which denies https-Blacklist with "http_access deny"? Are there some
speed-/security...-considerations?

Kind regards,
Tom

On Fri, Dec 4, 2015 at 1:40 PM, Amos Jeffries  wrote:
> On 4/12/2015 9:34 p.m., Tom Tom wrote:
>> Hi list,
>>
>> I'm trying to implement SSL-Blacklists based on SHA1-Fingerprints
>> (squid 3.5.11). As I know, certificate-fingerprints are one of the
>> parts of a certificate, which are visible in a uncrypted traffic.
>>
>> It seems, that blocking https-sites based on fingerprints is only
>> working with a ssl_bump-enabled configuration. The directive, which
>> denies the access based on the fingerprint is "ssl_bump bump all" in
>> my case.
>>
>> The necessary parts of my config:
>> acl DENY_SSL_BUMP ssl::server_name_regex -i "/etc/squid/DENY_SSL_BUMP"
>> acl tls_s1_connect at_step SslBump1
>> acl SSL_BL server_cert_fingerprint "/etc/squid/SSL_BLACKLIST"
>> http_access deny SSL_BL
>>
>> http_port 3128 ssl-bump generate-host-certificates=on
>> dynamic_cert_mem_cache_size=4MB cert=/usr/local/certs/myCA.pem
>> ssl_bump peek tls_s1_connect all
>> ssl_bump splice DENY_SSL_BUMP
>> ssl_bump bump all
>>
>>
>>
>> Question:
>> Why do I need a "full" ssl_bump-configuration to prevent access based
>> on fingerprints?
>
> Because "deny" in the form you are trying to do it is an HTTP message.
> In order to perform HTTP over a TLS connection you have to decrypt it first.
>
>
>> Why is it not enough with just "peeking" the
>> certificate/connection?
>
> Because peeking is an action done to the TLS layer.
>
>
> What you actually want to be doing is:
>
>   acl step1 at_step SslBump1
>   acl whitelist ssl::server_name_regex -i "/etc/squid/DENY_SSL_BUMP"
>   acl blacklist server_cert_fingerprint "/etc/squid/SSL_BLACKLIST"
>
>   ssl_bump splice whitelist
>   ssl_bump peek step1
>   ssl_bump stare all
>   ssl_bump terminate blacklist
>   ssl_bump bump all
>
>
> Notice how http_access is not part of the TLS ssl_bump processing.
>
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] mail upload problem

2015-12-04 Thread Amos Jeffries
On 5/12/2015 3:07 a.m., vivek singh wrote:
> I suspect http://download.newnext.me/spark.bin to be a virus redirection,
> but I am not sure, and I don't understand how it is so. I have checked the
> computer for any unwanted third-party software and found none.
> 

Well, it is not an upload, and does not visibly have anything to do with
mail. So your earlier report and the subject of this thread were confusing.

These are explicit Access Control denied responses sent by Squid; your
http_access rules are doing "deny".

You are not logging the client IP or the full URL so there is no way to
see if the client is correctly matched with the necessary whitelist, or
if the blacklist is having its desired effect.
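
A sketch of a format that would capture both fields (the format name is
a placeholder; %>a is the client IP, %ru the full request URL):

```
logformat fullfmt %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %Sh/%<a
access_log /var/log/squid/access.log fullfmt
```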

Amos



Re: [squid-users] mail upload problem

2015-12-04 Thread vivek singh
I suspect http://download.newnext.me/spark.bin to be a virus redirection,
but I am not sure, and I don't understand how it is so. I have checked the
computer for any unwanted third-party software and found none.




Thanks and Regards
Vivek Kumar Singh
Mobile +918902000538

On Fri, Dec 4, 2015 at 7:11 PM, vivek singh  wrote:

> please find below the access log while problem occur
> 1449226819.307: 0: TCP_DENIED/403: 4089: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449226828.671: 249222: TCP_TUNNEL/200: 6610: CONNECT:
> clients2.google.com:443: -: HIER_DIRECT/216.58.196.110
> 1449226829.308: 0: TCP_DENIED/403: 4091: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449226839.323: 0: TCP_DENIED/403: 4091: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449226849.216: 0: TCP_DENIED/403: 4090: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449226859.119: 0: TCP_DENIED/403: 4091: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449226868.917: 0: TCP_DENIED/403: 4088: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449226878.635: 0: TCP_DENIED/403: 4089: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449226888.391: 0: TCP_DENIED/403: 4091: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449226898.104: 0: TCP_DENIED/403: 4091: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449226907.951: 0: TCP_DENIED/403: 4090: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449226917.685: 0: TCP_DENIED/403: 4090: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449226927.463: 0: TCP_DENIED/403: 4091: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449226937.162: 0: TCP_DENIED/403: 4090: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449226947.042: 0: TCP_DENIED/403: 4090: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449226956.901: 0: TCP_DENIED/403: 4090: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449226966.745: 0: TCP_DENIED/403: 4090: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449226976.559: 0: TCP_DENIED/403: 4091: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449226986.260: 0: TCP_DENIED/403: 4090: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449226996.214: 0: TCP_DENIED/403: 4091: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449227006.198: 0: TCP_DENIED/403: 4090: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449227016.198: 0: TCP_DENIED/403: 4091: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449227026.184: 0: TCP_DENIED/403: 4091: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449227036.072: 0: TCP_DENIED/403: 4089: GET:
> http://download.newnext.me/spark.bin?: -: HIER_NONE/-
> 1449227042.281: 791782: TCP_TUNNEL/200: 5014: CONNECT:
> mtalk.google.com:443: -: HIER_DIRECT/74.125.130.188
> 1449227042.537: 714649: TCP_TUNNEL/200: 7775: CONNECT: play.google.com:443:
> -: HIER_DIRECT/216.58.196.110
> 1449227042.537: 68131: TCP_TUNNEL/200: 5813: CONNECT:
> lh3.googleusercontent.com:443: -: HIER_DIRECT/216.58.196.97
> 1449227042.538: 70423: TCP_TUNNEL/200: 2303: CONNECT: apis.google.com:443:
> -: HIER_DIRECT/216.58.196.110
> 1449227042.538: 184079: TCP_TUNNEL/200: 698: CONNECT: csi.gstatic.com:443:
> -: HIER_DIRECT/216.58.211.3
> 1449227042.539: 190277: TCP_TUNNEL/200: 3353: CONNECT: ssl.gstatic.com:443:
> -: HIER_DIRECT/216.58.196.99
> 1449227042.539: 143474: TCP_TUNNEL/200: 723: CONNECT:
> clients5.google.com:443: -: HIER_DIRECT/216.58.196.110
> 1449227042.539: 142248: TCP_TUNNEL/200: 5317: CONNECT:
> clients5.google.com:443: -: HIER_DIRECT/216.58.196.110
> 1449227042.540: 165512: TCP_TUNNEL/200: 1107: CONNECT:
> clients1.google.com:443: -: HIER_DIRECT/216.58.196.110
> 1449227042.540: 188929: TCP_TUNNEL/200: 7668: CONNECT: plus.google.com:443:
> -: HIER_DIRECT/216.58.196.110
> 1449227042.540: 388342: TCP_TUNNEL/200: 4996: CONNECT:
> clients6.google.com:443: -: HIER_DIRECT/216.58.196.110
> 1449227042.540: 396197: TCP_TUNNEL/200: 2101: CONNECT: www.google.com:443:
> -: HIER_DIRECT/216.58.196.100
> 1449227042.542: 106590: TCP_TUNNEL/200: 575: CONNECT:
> clients2.google.com:443: -: HIER_DIRECT/216.58.196.110
> 1449227042.542: 88135: TCP_TUNNEL/200: 963: CONNECT: play.google.com:443:
> -: HIER_DIRECT/216.58.196.110
> 1449227042.543: 6778: TCP_TUNNEL/200: 60202: CONNECT: www.google.co.in:443:
> -: HIER_DIRECT/216.58.196.99
> 1449227042.543: 786962: TCP_TUNNEL/200: 16071: CONNECT:
> 0.client-channel.google.com:443: -: HIER_DIRECT/74.125.200.189
> 1449227042.544: 6709: TCP_TUNNEL/200: 234: CONNECT: www.google.co.in:443:
> -: HIER_DIRECT/216.58.196.99
> 1449227042.544: 6630: TCP_TUNNEL/200: 234: CONNECT: www.google.co.in:443:
> -: HIER_DIRECT/216.58.196.99
> 1449227042.544: 6399: TCP_TUNNEL/200: 234: CONNECT: www.google.co.in:443:
> -: HIER_DIRECT/216.58.196.99

Re: [squid-users] mail upload problem

2015-12-04 Thread vivek singh
please find below the access log while problem occur
1449226819.307: 0: TCP_DENIED/403: 4089: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449226828.671: 249222: TCP_TUNNEL/200: 6610: CONNECT:
clients2.google.com:443: -: HIER_DIRECT/216.58.196.110
1449226829.308: 0: TCP_DENIED/403: 4091: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449226839.323: 0: TCP_DENIED/403: 4091: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449226849.216: 0: TCP_DENIED/403: 4090: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449226859.119: 0: TCP_DENIED/403: 4091: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449226868.917: 0: TCP_DENIED/403: 4088: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449226878.635: 0: TCP_DENIED/403: 4089: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449226888.391: 0: TCP_DENIED/403: 4091: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449226898.104: 0: TCP_DENIED/403: 4091: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449226907.951: 0: TCP_DENIED/403: 4090: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449226917.685: 0: TCP_DENIED/403: 4090: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449226927.463: 0: TCP_DENIED/403: 4091: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449226937.162: 0: TCP_DENIED/403: 4090: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449226947.042: 0: TCP_DENIED/403: 4090: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449226956.901: 0: TCP_DENIED/403: 4090: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449226966.745: 0: TCP_DENIED/403: 4090: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449226976.559: 0: TCP_DENIED/403: 4091: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449226986.260: 0: TCP_DENIED/403: 4090: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449226996.214: 0: TCP_DENIED/403: 4091: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449227006.198: 0: TCP_DENIED/403: 4090: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449227016.198: 0: TCP_DENIED/403: 4091: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449227026.184: 0: TCP_DENIED/403: 4091: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449227036.072: 0: TCP_DENIED/403: 4089: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449227042.281: 791782: TCP_TUNNEL/200: 5014: CONNECT: mtalk.google.com:443:
-: HIER_DIRECT/74.125.130.188
1449227042.537: 714649: TCP_TUNNEL/200: 7775: CONNECT: play.google.com:443:
-: HIER_DIRECT/216.58.196.110
1449227042.537: 68131: TCP_TUNNEL/200: 5813: CONNECT:
lh3.googleusercontent.com:443: -: HIER_DIRECT/216.58.196.97
1449227042.538: 70423: TCP_TUNNEL/200: 2303: CONNECT: apis.google.com:443:
-: HIER_DIRECT/216.58.196.110
1449227042.538: 184079: TCP_TUNNEL/200: 698: CONNECT: csi.gstatic.com:443:
-: HIER_DIRECT/216.58.211.3
1449227042.539: 190277: TCP_TUNNEL/200: 3353: CONNECT: ssl.gstatic.com:443:
-: HIER_DIRECT/216.58.196.99
1449227042.539: 143474: TCP_TUNNEL/200: 723: CONNECT:
clients5.google.com:443: -: HIER_DIRECT/216.58.196.110
1449227042.539: 142248: TCP_TUNNEL/200: 5317: CONNECT:
clients5.google.com:443: -: HIER_DIRECT/216.58.196.110
1449227042.540: 165512: TCP_TUNNEL/200: 1107: CONNECT:
clients1.google.com:443: -: HIER_DIRECT/216.58.196.110
1449227042.540: 188929: TCP_TUNNEL/200: 7668: CONNECT: plus.google.com:443:
-: HIER_DIRECT/216.58.196.110
1449227042.540: 388342: TCP_TUNNEL/200: 4996: CONNECT:
clients6.google.com:443: -: HIER_DIRECT/216.58.196.110
1449227042.540: 396197: TCP_TUNNEL/200: 2101: CONNECT: www.google.com:443:
-: HIER_DIRECT/216.58.196.100
1449227042.542: 106590: TCP_TUNNEL/200: 575: CONNECT:
clients2.google.com:443: -: HIER_DIRECT/216.58.196.110
1449227042.542: 88135: TCP_TUNNEL/200: 963: CONNECT: play.google.com:443:
-: HIER_DIRECT/216.58.196.110
1449227042.543: 6778: TCP_TUNNEL/200: 60202: CONNECT: www.google.co.in:443:
-: HIER_DIRECT/216.58.196.99
1449227042.543: 786962: TCP_TUNNEL/200: 16071: CONNECT:
0.client-channel.google.com:443: -: HIER_DIRECT/74.125.200.189
1449227042.544: 6709: TCP_TUNNEL/200: 234: CONNECT: www.google.co.in:443:
-: HIER_DIRECT/216.58.196.99
1449227042.544: 6630: TCP_TUNNEL/200: 234: CONNECT: www.google.co.in:443:
-: HIER_DIRECT/216.58.196.99
1449227042.544: 6399: TCP_TUNNEL/200: 234: CONNECT: www.google.co.in:443:
-: HIER_DIRECT/216.58.196.99
1449227045.855: 0: TCP_DENIED/403: 4091: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449227055.855: 0: TCP_DENIED/403: 4091: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449227065.855: 0: TCP_DENIED/403: 4090: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449227075.855: 0: TCP_DENIED/403: 4090: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-
1449227085.855: 0: TCP_DENIED/403: 4091: GET:
http://download.newnext.me/spark.bin?: -: HIER_NONE/-

Re: [squid-users] Deny Access based on SSL-Blacklists (SHA1-Fingerprint) with ssl_bump

2015-12-04 Thread Amos Jeffries
On 4/12/2015 9:34 p.m., Tom Tom wrote:
> Hi list,
> 
> I'm trying to implement SSL blacklists based on SHA1 fingerprints
> (squid 3.5.11). As far as I know, certificate fingerprints are among the
> parts of a certificate that are visible in unencrypted traffic.
> 
> It seems that blocking HTTPS sites based on fingerprints only works
> with an ssl_bump-enabled configuration. The directive that denies
> access based on the fingerprint is "ssl_bump bump all" in my case.
> 
> The necessary parts of my config:
> acl DENY_SSL_BUMP ssl::server_name_regex -i "/etc/squid/DENY_SSL_BUMP"
> acl tls_s1_connect at_step SslBump1
> acl SSL_BL server_cert_fingerprint "/etc/squid/SSL_BLACKLIST"
> http_access deny SSL_BL
> 
> http_port 3128 ssl-bump generate-host-certificates=on
> dynamic_cert_mem_cache_size=4MB cert=/usr/local/certs/myCA.pem
> ssl_bump peek tls_s1_connect all
> ssl_bump splice DENY_SSL_BUMP
> ssl_bump bump all
> 
> 
> 
> Question:
> Why do I need a "full" ssl_bump-configuration to prevent access based
> on fingerprints?

Because the "deny" in the form you are trying to produce is an HTTP
response message. To deliver HTTP over a TLS connection you have to
decrypt that connection first.


> Why is it not enough to just "peek" at the
> certificate/connection?

Because peeking is an action done to the TLS layer.


What you actually want to be doing is:

  acl step1 at_step SslBump1
  acl whitelist ssl::server_name_regex -i "/etc/squid/DENY_SSL_BUMP"
  acl blacklist server_cert_fingerprint "/etc/squid/SSL_BLACKLIST"

  ssl_bump splice whitelist
  ssl_bump peek step1
  ssl_bump stare all
  ssl_bump terminate blacklist
  ssl_bump bump all


Notice how http_access is not part of the TLS ssl_bump processing.
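As an aside, one way to produce entries for such a SHA1 fingerprint blacklist file is with the openssl CLI. This is only a sketch: it fingerprints a throwaway self-signed certificate so it runs offline, and the file paths are placeholders; for a real site you would first fetch the server certificate (e.g. with `openssl s_client -connect host:443 -servername host`).

```shell
# Create a throwaway self-signed certificate (stand-in for a fetched
# server certificate), then print its SHA-1 fingerprint.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example" \
    -keyout /tmp/bl-key.pem -out /tmp/bl-cert.pem 2>/dev/null
openssl x509 -in /tmp/bl-cert.pem -noout -fingerprint -sha1
# prints a line like: SHA1 Fingerprint=AB:CD:EF:...
```

As I understand it, the acl expects just the colon-separated hex fingerprint, so strip the leading `SHA1 Fingerprint=` label before adding the value to the file.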

Amos


Re: [squid-users] Authentication Problem

2015-12-04 Thread Dima Ermakov
Thank you, Amos.

I checked everything that you wrote.
It didn't help.

I have this problem only in the Google Chrome browser.
Before 2015-12-03 all was good.
I haven't changed my configuration in more than a month.

Ten minutes ago "Noel Kelly nke...@citrusnetworks.net" wrote on this list
that Google Chrome v47 has broken NTLM authentication.
My clients with problems have Google Chrome v47 (((

Mozilla Firefox clients work fine.

Thank you!

This is message from Noel Kelly:
"

Hi

For information, the latest version of Google Chrome (v47.0.2526.73M) has
broken NTLM authentication:

https://code.google.com/p/chromium/issues/detail?id=544255
https://productforums.google.com/forum/#!topic/chrome/G_9eXH9c_ns;context-place=forum/chrome

Cheers

"

On 4 December 2015 at 04:55, Amos Jeffries  wrote:

> On 4/12/2015 9:46 a.m., Dima Ermakov wrote:
> > Hi!
> > I have a problem with authentiation.
> >
> > I use samba ntlm authentication in my network.
> >
> > Some users ( not all ) have problems with http traffic.
> >
> > They see basic authentication request.
>
> Meaning you *don't* have NTLM authentication on your network.
>
> Or you are making the mistake of thinking a popup means Basic
> authentication.
>
> > If they enter correct domain login and password, they have auth error.
> > If this users try to open https sites: all works good, they have not any
> > type of errors.
>
> So,
>  a) they are probably not going through this proxy, or
>  b) the browser is suppressing the proxy-auth popups, or
>  c) the authentication request is not coming from *your* proxy.
>
> >
> > So we have errors only with unencrypted connections.
> >
> > I have this error on two servers:
> > debian8, squid3.4 (from repository)
> > CentOS7, squid3.3.8 (from repository).
> >
>
> Two things to try:
>
> 1) Adding a line like this before the group access controls in
> frontend.conf. This will ensure that authentication credentials are valid
> before doing group lookups:
>  http_access deny !AuthorizedUsers
>
>
> 2) checking up on the Debian winbind issue mentioned in
> <
> http://wiki.squid-cache.org/ConfigExamples/Authenticate/Ntlm#winbind_privileged_pipe_permissions
> >
>
> I'm not sure about this; it is likely to be involved on Debian, but CentOS
> is not known to have that issue.
>
>
> Oh and:
>  3) remove the "acl manager" line from squid.conf.
>
>  4) change your cachemgr_passwd. Commenting it out does not hide it from
> view when you post it on this public mailing list.
>
> You should remove all the commented out directives as well, some of them
> may be leading to misunderstanding of what the config is actually doing.
>
>
> Amos
>



-- 
Best regards, Dmitry Ermakov.


[squid-users] Squid with NTLM auth behind netscaler

2015-12-04 Thread Fabio Bucci
Hi All,
my task is implementing a Squid proxy that allows all my authenticated
(Windows AD) internal users to surf the internet without any credential
request (pop-up).

I created two Squid nodes and put them behind a Citrix NetScaler in
order to provide a load-balanced service.

I configured Squid with Samba and the NTLM helper to perform
transparent authentication, but sometimes users report that their
browsers ask for authentication via a pop-up.

I'm not a deep expert on Squid, and I'd like your help to understand
whether my configuration is correct and whether there is a way to
prevent the pop-up.
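For reference, a minimal NTLM setup of the kind described usually looks something like this (a sketch only; the helper path and child count are assumptions, not the actual settings in use here):

```
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 50
auth_param ntlm keep_alive off

acl auth proxy_auth REQUIRED
http_access allow auth
http_access deny all
```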

Thanks all!

Fabio


[squid-users] Google Chrome v47.0.2526.73M Broken NTLM Authentication

2015-12-04 Thread Noel Kelly

Hi

For information, the latest version of Google Chrome (v47.0.2526.73M) 
has broken NTLM authentication:


https://code.google.com/p/chromium/issues/detail?id=544255
https://productforums.google.com/forum/#!topic/chrome/G_9eXH9c_ns;context-place=forum/chrome

Cheers


[squid-users] Deny Access based on SSL-Blacklists (SHA1-Fingerprint) with ssl_bump

2015-12-04 Thread Tom Tom
Hi list,

I'm trying to implement SSL blacklists based on SHA1 fingerprints
(squid 3.5.11). As far as I know, certificate fingerprints are among the
parts of a certificate that are visible in unencrypted traffic.

It seems that blocking HTTPS sites based on fingerprints only works with
an ssl_bump-enabled configuration. The directive that denies access
based on the fingerprint is "ssl_bump bump all" in my case.

The necessary parts of my config:
acl DENY_SSL_BUMP ssl::server_name_regex -i "/etc/squid/DENY_SSL_BUMP"
acl tls_s1_connect at_step SslBump1
acl SSL_BL server_cert_fingerprint "/etc/squid/SSL_BLACKLIST"
http_access deny SSL_BL

http_port 3128 ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB cert=/usr/local/certs/myCA.pem
ssl_bump peek tls_s1_connect all
ssl_bump splice DENY_SSL_BUMP
ssl_bump bump all



Question:
Why do I need a "full" ssl_bump-configuration to prevent access based
on fingerprints? Why is it not enough to just "peek" at the
certificate/connection?

Thanks a lot.
Kind regards,
Tom