RE: [squid-users] https analyze, squid rpc proxy to rpc proxy ii6 exchange2007 with ntlm

2012-05-14 Thread Clem
Hi Amos,

Thx for your answer.

I'm still trying to work out why my solution works with XP, and on Windows 7 only
when I change 2 settings (lanmanager level, and disabling msstd).
So I am using a cache.log with debug options to analyze more precisely, to see the
difference between these two OSes.

When it doesn't work on Windows 7, the request is "stuck" on RPC_OUT_DATA with
a 200 success HTTP status, a sort of timeout with no info. I've sniffed all I can, and
nothing ...

The only difference I can see in the logs is the Cookie header and the Pragma
"SessionId" on Windows 7. In XP there is no Cookie header and Pragma is
"no-cache" only, no other values.

> Also, request_header_replace requires a previous "request_header_access deny 
> ..." giving permission to remove existing header details before it can replace 
> the content.

Thx for this info, I'll test it today.
If I write:
request_header_access Cookie deny all
request_header_replace Cookie none

Does this disable the Cookie header?

Thx, regards

Clem


-----Original Message-----
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Friday, 11 May 2012 16:28
To: squid-users@squid-cache.org
Subject: Re: [squid-users] https analyze, squid rpc proxy to rpc proxy ii6 
exchange2007 with ntlm

On 12/05/2012 1:50 a.m., Clem wrote:
> Hello,
>
> In my cache.log I have (windows7 client) :
>
> --
> 2012/05/11 13:37:42.493| HTTP Client local=ip_squid:443
> remote=ip_wan_client:60465 FD 11 flags=1
> 2012/05/11 13:37:42.493| HTTP Client REQUEST:
> -
> RPC_OUT_DATA /rpc/rpcproxy.dll?fqdn_exchange_server:6002 HTTP/1.1
> Cache-Control: no-cache
> Connection: Keep-Alive
> Pragma: SessionId=d3deb408-a810-4e85-b3df-1e50e0fe11f7
> Accept: application/rpc
> Cookie: OutlookSession="{B14448C4-3BB4-454E-A09F-CA4705810688}
> Outlook=14.0.6117.5001 OS=6.1.7601"
> User-Agent: MSRPC
> Content-Length: 0
> Host: mail.xx.fr
> Authorization: NTLM 
> TlRMTVNTUAABB4IIogAGAbEdDw==
> --
>
> The difference from the XP client is the Pragma header (no-cache value for 
> XP), and the Cookie header doesn't exist in XP.

You mean no-cache as well as SessionId values? Or just no-cache and no 
SessionId?

>
> So I want to "disable" the Cookie header and replace the value for Pragma; in 
> my squid.conf I've added these lines:
>
> request_header_access Cookie deny all
> request_header_replace Pragma no-cache

"Pragma: no-cache" has been obsoleted by "Cache-Control:no-cache". They do the 
same thing.

Also, request_header_replace requires a previous "request_header_access deny 
..." giving permission to remove existing header details before it can replace 
the content.

>
> But that doesn't work, the Cookie header is still there, and Pragma isn't
> changed.

Make sure you are looking at the right things. "HTTP Client REQUEST" is 
the raw data received from the client. No changes made by Squid will 
show up in those details (except some minor auto-corrections by the 
parser). The "HTTP Server REQUEST" details later on with the same URL are 
the Squid->Server information after all Squid manipulations.

The response headers are in a pair of "HTTP foo REPLY".

Amos



Re: [squid-users] https analyze, squid rpc proxy to rpc proxy ii6 exchange2007 with ntlm

2012-05-14 Thread Amos Jeffries

On 14/05/2012 7:42 p.m., Clem wrote:

Hi Amos,

Thx for your answer.

I'm still trying to work out why my solution works with XP, and on Windows 7 only
when I change 2 settings (lanmanager level, and disabling msstd).
So I am using a cache.log with debug options to analyze more precisely, to see the
difference between these two OSes.

When it doesn't work on Windows 7, the request is "stuck" on RPC_OUT_DATA with
a 200 success HTTP status, a sort of timeout with no info. I've sniffed all I can, and
nothing ...

The only difference I can see in the logs is the Cookie header and the Pragma
"SessionId" on Windows 7. In XP there is no Cookie header and Pragma is "no-cache"
only, no other values.


Hmm. Hanging usually means that something, somewhere, is waiting for data it 
expects to arrive.


Could be an HTTP object sent with the wrong body size. Or another side 
channel somewhere that is expected to be working but not operating. Things like 
unexpected side channels seem to happen a lot with MS software IME.



Also, request_header_replace requires a previous "request_header_access deny 
..." giving permission to remove existing header details before it can replace the 
content.

Thx for this info, I'll test it today.
If I write:
request_header_access Cookie deny all
request_header_replace Cookie none

Does this disable the Cookie header?


It erases all existing Cookie values and creates the header "Cookie: none".
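
A minimal squid.conf sketch of the two patterns in question, i.e. dropping Cookie
outright versus denying Pragma and substituting a fixed value:

# Drop the Cookie request header entirely; with no matching
# request_header_replace line, nothing is sent in its place.
request_header_access Cookie deny all

# Deny the original Pragma header, then substitute a fixed value
# (request_header_replace only affects headers denied above).
request_header_access Pragma deny all
request_header_replace Pragma no-cache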

Amos



RE: [squid-users] Squid Restarting

2012-05-14 Thread Justin Lawler
Thanks Amos - we have heap dumps, but unfortunately we cannot share them with the 
wider community as they're taken from a customer production environment. 
However, we can send on information taken from the heap dump - like output from 
pflags/pstack/etc. Would this be sufficient to investigate the issue?

Thanks and regards,
Justin

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Sunday, May 06, 2012 7:16 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid Restarting

On 4/05/2012 9:59 p.m., Justin Lawler wrote:
> Hi,
>
> We're running squid 3.1.19 - and have seen it restarting from the logs, just 
> after the below error:
>
> 2012/04/19 12:12:28| assertion failed: forward.cc:496: "server_fd == fd"
> 2012/04/19 12:12:59| Starting Squid Cache version 3.1.19 for 
> sparc-sun-solaris2.10...
>
> Is this a known issue? any workaround?

Seems to be new and a bit strange. Squid opened one connection to the server to 
fetch content; sometime later a connection was closed, but it was not the one which 
was opened to begin with.

Do you have a core dump or stack trace available to identify what the fd and 
server_fd values actually were during the crash?

>
> It's been in production for 6 weeks now, and we have only seen it once, but we 
> need to have an answer for the customer. We're worried it'll happen more 
> frequently as traffic goes up.

Being the first report over a month after the release, it would seem to be very 
rare.

Amos



RE: [squid-users] https analyze, squid rpc proxy to rpc proxy ii6 exchange2007 with ntlm

2012-05-14 Thread Clem
In the log, exactly the same sequence: on W7 it hangs, on XP it continues:

:: Win7

2012/05/14 10:14:15.090| ctx: enter level  0: 
'https://mail.x.fr/rpc/rpcproxy.dll?fqdn_exchange_server:6002'
2012/05/14 10:14:15.090| HTTP Server local=ip_squid:49014 
remote=ip_exchange_server:443 FD 12 flags=1
2012/05/14 10:14:15.090| HTTP Server REPLY:
-
HTTP/1.1 200 OK
Date: Mon, 14 May 2012 10:15:09 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Content-Type: application/rpc
Content-Length:20
Connection: Keep-Alive


--
2012/05/14 10:14:15.091| ctx: exit level  0
2012/05/14 10:14:15.091| The reply for RPC_OUT_DATA 
https://mail.x.fr/rpc/rpcproxy.dll?fqdn_exchange_server:6002 is 1, because it 
matched 'all'
2012/05/14 10:14:15.091| HTTP Client local=ip_squid:443 
remote=ip_wan_client:51556 FD 11 flags=1
2012/05/14 10:14:15.091| HTTP Client REPLY:
-
HTTP/1.1 200 OK
Date: Mon, 14 May 2012 10:15:09 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Content-Type: application/rpc
Content-Length: 20
X-Cache: MISS from mail.x.fr
Via: 1.1 mail.x.fr (squid/3.2.0.17-20120415-r11555)
Connection: keep-alive


--
2012/05/14 10:14:15.092| FilledChecklist.cc(100) ~ACLFilledChecklist: 
ACLFilledChecklist destroyed 0x8dff1c8
2012/05/14 10:14:15.092| ACLChecklist::~ACLChecklist: destroyed 0x8dff1c8

And it hangs there ...

:: Win7


:: WinXP

2012/05/11 13:22:33.452| ctx: enter level  0: 
'https://mail.x.fr/rpc/rpcproxy.dll?fqdn_exchange_server:6002'
2012/05/11 13:22:33.452| HTTP Server local=ip_squid:46111 
remote=ip_exchange_server:443 FD 12 flags=1
2012/05/11 13:22:33.452| HTTP Server REPLY:
-
HTTP/1.1 200 OK
Date: Fri, 11 May 2012 13:23:13 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Content-Type: application/rpc
Content-Length:20
Connection: Keep-Alive


--
2012/05/11 13:22:33.452| ctx: exit level  0
2012/05/11 13:22:33.452| The reply for RPC_OUT_DATA 
https://mail.x.fr/rpc/rpcproxy.dll?fqdn_exchange_server:6002 is 1, because it 
matched 'all'
2012/05/11 13:22:33.452| HTTP Client local=ip_squid:443 
remote=ip_wan_client:1162 FD 11 flags=1
2012/05/11 13:22:33.452| HTTP Client REPLY:
-
HTTP/1.1 200 OK
Date: Fri, 11 May 2012 13:23:13 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Content-Type: application/rpc
Content-Length: 20
X-Cache: MISS from mail.x.fr
Via: 1.1 mail.x.fr (squid/3.2.0.17-20120415-r11555)
Connection: keep-alive


--
2012/05/11 13:22:33.454| FilledChecklist.cc(100) ~ACLFilledChecklist: 
ACLFilledChecklist destroyed 0x8dccea8
2012/05/11 13:22:33.454| ACLChecklist::~ACLChecklist: destroyed 0x8dccea8
2012/05/11 13:22:33.512| HTTP Client local= ip_squid:443 
remote=ip_wan_client:1160 FD 8 flags=1
2012/05/11 13:22:33.512| HTTP Client REQUEST:
-
RPC_IN_DATA /rpc/rpcproxy.dll? fqdn_exchange_server:6002 HTTP/1.1
Accept: application/rpc
User-Agent: MSRPC
Host: mail.x.fr
Content-Length: 1073741824
Connection: Keep-Alive
Cache-Control: no-cache
Pragma: no-cache

 and that continues ...

:: WinXP


And no more info on why it's hanging.



Clem


-----Original Message-----
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, 14 May 2012 12:17
To: squid-users@squid-cache.org
Subject: Re: [squid-users] https analyze, squid rpc proxy to rpc proxy ii6 
exchange2007 with ntlm

On 14/05/2012 7:42 p.m., Clem wrote:
> Hi Amos,
>
> Thx for your answer.
>
> I'm still trying to work out why my solution works with XP, and on Windows 7 only 
> when I change 2 settings (lanmanager level, and disabling msstd).
> So I am using a cache.log with debug options to analyze more precisely, to see the 
> difference between these two OSes.
>
> When it doesn't work on Windows 7, the request is "stuck" on RPC_OUT_DATA 
> with a 200 success HTTP status, a sort of timeout with no info. I've sniffed all I 
> can, and nothing ...
>
> The only difference I can see in the logs is the Cookie header and the Pragma 
> "SessionId" on Windows 7. In XP there is no Cookie header and Pragma is 
> "no-cache" only, no other values.

Hmm. Hanging usually means that something, somewhere, is waiting for data it 
expects to arrive.

Could be an HTTP object sent with the wrong body size. Or another side channel 
somewhere that is expected to be working but not operating. Things like unexpected 
side channels seem to happen a lot with MS software IME.

>> Also, request_header_replace requires a previous "request_header_access deny 
>> ..." giving permission to remove existing header details before it can 
>> replace the content.
> Thx for this info, I'll test it today.
> If I write:
> request_header_access Cookie deny all
> request_header_replace Cookie none
>
> Does this disable the Cookie header?

It erases all existing Cookie values and creates the header "Cookie: none".

Amos



Re: [squid-users] Squid Restarting

2012-05-14 Thread Amos Jeffries

On 14/05/2012 11:03 p.m., Justin Lawler wrote:

Thanks Amos - we have heap dumps, but unfortunately we cannot share them with the 
wider community as they're taken from a customer production environment. 
However, we can send on information taken from the heap dump - like output from 
pflags/pstack/etc. Would this be sufficient to investigate the issue?


Private data should not be a problem. Initially we just need a backtrace 
from the dump to find which function calls led to it, and the values of the 
two FDs involved (fd and server_fd).
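
A rough sketch of pulling those values from a Solaris core dump; the binary and
core paths are placeholders, and the frame number has to be picked from the bt
output (this assumes the binary was built with debug symbols):

pstack /path/to/core                            # quick view of the call chain
gdb /usr/local/squid/sbin/squid /path/to/core
(gdb) bt                                        # locate the forward.cc:496 frame
(gdb) frame N                                   # N = that frame's number from bt
(gdb) print fd
(gdb) print server_fd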


Amos


Thanks and regards,
Justin

-Original Message-
From: Amos Jeffries

On 4/05/2012 9:59 p.m., Justin Lawler wrote:

Hi,

We're running squid 3.1.19 - and have seen it restarting from the logs, just 
after the below error:

2012/04/19 12:12:28| assertion failed: forward.cc:496: "server_fd == fd"
2012/04/19 12:12:59| Starting Squid Cache version 3.1.19 for 
sparc-sun-solaris2.10...

Is this a known issue? any workaround?

Seems to be new and a bit strange. Squid opened one connection to the server to 
fetch content; sometime later a connection was closed, but it was not the one which 
was opened to begin with.

Do you have a core dump or stack trace available to identify what the fd and 
server_fd values actually were during the crash?


It's been in production for 6 weeks now, and we have only seen it once, but we 
need to have an answer for the customer. We're worried it'll happen more frequently 
as traffic goes up.

Being the first report over a month after the release, it would seem to be very 
rare.

Amos





[squid-users] High load squid setup

2012-05-14 Thread Timur Irmatov
Hi!


I would like to receive some advice from people more experienced with
squid than me. :)

We are trying to set up a fully transparent squid proxy (with TPROXY) for
about 6000 clients, according to the instructions on the wiki.

At the moment the system is configured and working well at half of the
planned load - 3000 clients, with 120 Mbit/s peak traffic at 1000
requests/sec max. The system is Ubuntu 12.04, Squid version 3.1.19.
The server has two 2.6 GHz Xeon CPUs and 6 SCSI drives.

What is the way to double that load on the server? I suppose there could
be several bottlenecks:

== CPU load ==

The current stable Squid version does not take advantage of several CPUs,
but we can work around this by configuring a second squid instance on
another port. Then half of the clients will be served by one instance
and half by the other. Both instances will be configured as siblings and
proxy-only.
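
A rough sketch of one instance's squid.conf under that layout; the ports, paths
and ICP ports are placeholders, and instance B would mirror this with its own
ports and paths plus a cache_peer line pointing back at instance A:

# instance A
http_port 3128 tproxy
icp_port 3130
pid_filename /var/run/squid-a.pid
access_log /var/log/squid/access-a.log
cache_dir ufs /var/spool/squid-a 20000 16 256

# treat instance B as a sibling and never store objects fetched through it
cache_peer 127.0.0.1 sibling 3129 3131 proxy-only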

== Disk access performance ==

Well, this is what we just need to test, to see if the current setup will
be enough or not. If not, then more spindles is the way to go. Another
option would be to use a couple of SSDs, but I am not sure whether they are
reliable enough for this kind of load or which models we should use.
(We just haven't used any SSDs before; advice would be greatly
appreciated.)

== Outgoing connections number ==

As you can see from the output of the following one-liner, there are only two
local ports that have more than one connection on them:

netstat -tn|awk '/ESTABLISHED/ && NR > 2 {print $4}'|perl -pe 's/.*://'|sort|uniq -c |awk '$1!=1 {print}'
  2 22
  16959 80

There are two connections to the ssh port (22) and almost 17k connections
to local port 80 (which is TPROXYed to Squid). So, all other outgoing
connections are using unique random ports. Then there is a limit of
65k outgoing connections for this box as a whole. Am I right? Is there
anything we can do?
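
On Linux, the outgoing-port ceiling for a single local address is the ephemeral
port range, which can be inspected and widened via sysctl; the values below are
typical defaults and a typical widened range, not measurements from this box:

sysctl net.ipv4.ip_local_port_range                   # e.g. "32768 61000"
sysctl -w net.ipv4.ip_local_port_range="10240 65535"  # persist in /etc/sysctl.conf if kept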

Anything else I missed?

Any other performance/reliability recommendations for our setup would
be greatly appreciated.


-- 
Timur Irmatov, xmpp:irma...@jabber.ru


[squid-users] Expire time cache

2012-05-14 Thread lucas coudures
Hello,
I installed squid 3.1 on a Debian squeeze 2.6.32-5-686 system. I need to know
if there is a way to set the time for the cache to expire, or if squid already
does this.

I will cache http pages and also files like .mp3 .jpg, etc...


thank you for your help



Lucas Coudures
Linux User #442566

Blog: http://lucas-coudures.blogspot.com/

Dead is a matter of definition. Free software only dies when the last
copy of the source code is erased.


[squid-users] problem with logging to mysql

2012-05-14 Thread Jan Malaník
Good day,
I have a problem with logging to mysql. I tried this configuration:
logformat squid %tl;%>a;%>A;%ru;%un;%Ss
access_log daemon:/127.0.0.1/report/surf/squid/squid squid
logfile_daemon /usr/local/sbin/log_mysql_daemon.pl

But during startup it wrote:
 /etc/init.d/squid3 start

Starting Squid HTTP Proxy 3.x: squid3Creating Squid HTTP Proxy 3.x
cache structure ... (warning).
2012/05/14 15:16:58| cache_cf.cc(363) parseOneConfigFile:
squid.conf:2309 unrecognized: 'logfile_daemon'
2012/05/14 15:16:58| cache_cf.cc(363) parseOneConfigFile:
squid.conf:2309 unrecognized: 'logfile_daemon'
Why does this happen?

Then I found the log_db_daemon directive, but I don't know how to configure it.

Please can someone help me?

Thank you Jan Malanik


Re: [squid-users] squid: (131) connection reset by peer / (145) Connection timed out

2012-05-14 Thread Giles Coochey

On 14/05/2012 19:06, ql li wrote:

搏�'濈-妷雤黔o*^z皑瀢湺*'�)瀡嫮��//==

Hi,

I don't know if you can try English (however bad it might be?).

Thanks

Giles




[squid-users] Re: Expire time cache

2012-05-14 Thread RW
On Mon, 14 May 2012 10:09:43 -0300
lucas coudures wrote:

> Hello,
> I installed squid 3.1 on a Debian squeeze 2.6.32-5-686 system. I need to know
> if there is a way to set the time for the cache to expire, or if squid already
> does this.

Squid maintains its caches, whether memory or disk, within the sizes
you specify. If you need more or less retention then adjust the size. 
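
A minimal sketch of the size controls in question; the values are arbitrary
examples, not recommendations:

# in-memory object cache
cache_mem 256 MB
# on-disk cache: ufs store, 10240 MB under /var/spool/squid
cache_dir ufs /var/spool/squid 10240 16 256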



[squid-users] Squid load balancing access log

2012-05-14 Thread Ibrahim Lubis
Squid guru,

I load balance 2 CentOS servers with ucarp and haproxy, with cache peering of 
all squid servers as siblings. I use squid for caching. The problem is that every 
log line I see in the access log file shows the IP of the squid cache, not of the 
user who requested the web access. Before I did the load balancing, with only one 
squid box, I saw the IP of the requesting user in the access log file.
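
One common shape for this situation, assuming haproxy inserts an X-Forwarded-For
header (e.g. via its forwardfor option) and Squid was built with support for
following it; the load-balancer addresses below are placeholders:

# trust X-Forwarded-For only from the load balancers, and log the indirect client
acl lb_hosts src 192.0.2.10 192.0.2.11
follow_x_forwarded_for allow lb_hosts
log_uses_indirect_client on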

Thx

RE: [squid-users] FTP option ftp_epsv

2012-05-14 Thread Nil Nik

Please reply, I need help on this.


> From: nil_fe...@hotmail.com
> To: squid-users@squid-cache.org
> Date: Wed, 9 May 2012 11:15:05 +
> Subject: [squid-users] FTP option ftp_epsv
>
>
>
> I have configured the browser (http and FTP options) to use the squid proxy.
> Some FTP sites open through the browser, but not 
> "ftp://ftp.uar.net/"
>
> If I use "ftp_epsv off" then it works fine.
> I am using squid-3.1.9.
>
> What is the exact problem?
> Please tell me the consequences of "ftp_epsv off".
> Will it affect some other settings?
>
> Thanks in advance!!!
>
  

Re: [squid-users] problem with logging to mysql

2012-05-14 Thread Amos Jeffries

On 15/05/2012 1:56 a.m., Jan Malaník wrote:

Good day,
I have a problem with logging to mysql. I tried this configuration:
logformat squid %tl;%>a;%>A;%ru;%un;%Ss
access_log daemon:/127.0.0.1/report/surf/squid/squid squid
logfile_daemon /usr/local/sbin/log_mysql_daemon.pl

But during startup it wrote:
  /etc/init.d/squid3 start

Starting Squid HTTP Proxy 3.x: squid3Creating Squid HTTP Proxy 3.x
cache structure ... (warning).
2012/05/14 15:16:58| cache_cf.cc(363) parseOneConfigFile:
squid.conf:2309 unrecognized: 'logfile_daemon'
2012/05/14 15:16:58| cache_cf.cc(363) parseOneConfigFile:
squid.conf:2309 unrecognized: 'logfile_daemon'
Why does this happen?

Then I found the log_db_daemon directive, but I don't know how to configure it.

Please can someone help me?


You need squid 2.7 or 3.2+ to use logfile_daemon.

It looks like you are running a packaged Squid from Debian or a 
derivative, which would be a 3.1-series Squid, not 3.2 yet.
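
An illustrative sketch of how those directives fit together once a 2.7 or 3.2+
Squid is in place, reusing the helper path and log target from the original post
(whether that Perl helper speaks the stock logfile-daemon protocol is an
assumption to verify):

# keep the custom format under its own name rather than redefining "squid"
logformat dblog %tl;%>a;%>A;%ru;%un;%Ss
access_log daemon:/127.0.0.1/report/surf/squid/squid dblog
logfile_daemon /usr/local/sbin/log_mysql_daemon.pl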


Amos


Re: [squid-users] squid: (131) connection reset by peer / (145) Connection timed out

2012-05-14 Thread Amos Jeffries

On 15/05/2012 8:37 a.m., Giles Coochey wrote:

On 14/05/2012 19:06, ql li wrote:

搏�'濈-妷雤黔o*^z皑瀢湺*'�)瀡嫮��//==

Hi,

I don't know if you can try English (however bad it might be?).

Thanks

Giles



Google explains...

> "
> Squid can not detect connection reset by the peer and the Connection 
> timed out the error made classified

 This automatic response from the above agent with a parent?
 "


Those errors are TCP protocol errors from the network system HTTP 
travels over. Squid already does everything it possibly can to avoid them.




 "
 Can escape the national anti / fire / wall / smart? CHINA friends?
 "




Amos


[squid-users] Squid not keeping authenticated NTLM session open

2012-05-14 Thread infernalis
Hi all,
 
I'm having considerable trouble getting Squid to work well with
NTLM/Kerberos and was hoping someone here would be able to help.
 
My ultimate goal is to be able to connect to an IIS server through Squid
using a computer that is not a member of the AD domain. I would like to
enter my credentials once to the proxy, and then have Squid save the
authentication token in order to use it against other servers that require
authentication.
 
The problem I'm facing is that no matter what I've tried, I'm forced to
authenticate manually six times while loading sites requiring
authentication. This is much worse than the behavior prior to adding Squid. 
 
First, is it possible for Squid to cache the credentials and then
authenticate on behalf of the client to an upstream server? If this isn't
the best way to go about doing this, what would you suggest?
 
Second, what could be the problem with my configuration?
 
I'm running Squid 3.1.10.
 
Thanks in advance!
 
 
 
 
Here is my current config:
 
http_port 80 accel defaultsite=webservername connection-auth=on
cache_peer x.x.x.x parent 80 0 no-query login=PASS originserver
connection-auth=on name=serv
 
auth_param ntlm program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10
auth_param ntlm keep_alive on
 
auth_param basic program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Domain Proxy Server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
 
acl auth  proxy_auth REQUIRED
 
http_access allow auth
http_access deny all
 
 
acl our_sites dstdomain webservername proxy_auth REQUIRED
client_persistent_connections on
server_persistent_connections on
debug_options ALL,2
 
http_access allow our_sites
cache_peer_access serv allow our_sites
cache_peer_access serv deny all
 
 
 
 
 
 
If it helps, here is part of the cache.log file with debug level 2 applied.
 
When I request the website through the proxy, there is an initial 5-second
delay that is not present when accessing the site directly. Then I get the
following:
 
[2012/05/14 22:32:00.549309,  3] libsmb/ntlmssp.c:65(debug_ntlmssp_flags)
  Got NTLMSSP neg_flags=0xa2088207
2012/05/14 22:32:05.555| AuthNTLMUserRequest::authenticate: need to
challenge client 'Tl...AA'!
2012/05/14 22:32:05.556| The request GET http://webservername/testsite/ is
DENIED, because it matched 'auth'
2012/05/14 22:32:05.556| The reply for GET http://webservername/testsite/ is
ALLOWED, because it matched 'auth'
[2012/05/14 22:32:05.560165,  3] libsmb/ntlmssp.c:747(ntlmssp_server_auth)
  Got user=[me] domain=[DOMAIN] workstation=[WKS_NAME] len1=24 len2=24
[2012/05/14 22:32:05.565952,  3]
libsmb/ntlmssp_sign.c:343(ntlmssp_sign_init)
  NTLMSSP Sign/Seal - Initialising with flags:
[2012/05/14 22:32:05.566021,  3] libsmb/ntlmssp.c:65(debug_ntlmssp_flags)
  Got NTLMSSP neg_flags=0xa2088205
2012/05/14 22:32:05.566| The request GET http://webservername/testsite/ is
ALLOWED, because it matched 'auth'
2012/05/14 22:32:05.566| client_side_request.cc(547) clientAccessCheck2: No
adapted_http_access configuration.
2012/05/14 22:32:05.566| The request GET http://webservername/testsite/ is
ALLOWED, because it matched 'auth'
2012/05/14 22:32:05.578| The reply for GET http://webservername/testsite/ is
ALLOWED, because it matched 'our_sites'
 
## After authenticating, I get this, followed by a few more authentications
and a lot more http requests:
 
2012/05/14 22:33:09.880| connReadWasError: FD 12: got flag -1
2012/05/14 22:33:09.880| ConnStateData::swanSong: FD 12
[2012/05/14 22:33:09.884534,  3] libsmb/ntlmssp.c:65(debug_ntlmssp_flags)
  Got NTLMSSP neg_flags=0xa2088207
2012/05/14 22:33:14.891| AuthNTLMUserRequest::authenticate: need to
challenge client 'Tl...AA'!
2012/05/14 22:33:14.891| The request GET http://webservername/testsite/ is
DENIED, because it matched 'auth'
2012/05/14 22:33:14.891| The reply for GET http://webservername/testsite/ is
ALLOWED, because it matched 'auth'
[2012/05/14 22:33:14.894114,  3] libsmb/ntlmssp.c:747(ntlmssp_server_auth)
  Got user=[me] domain=[DOMAIN] workstation=[WKS_NAME] len1=24 len2=24
[2012/05/14 22:33:14.899355,  3]
libsmb/ntlmssp_sign.c:343(ntlmssp_sign_init)
  NTLMSSP Sign/Seal - Initialising with flags:
[2012/05/14 22:33:14.899521,  3] libsmb/ntlmssp.c:65(debug_ntlmssp_flags)
  Got NTLMSSP neg_flags=0xa2088205
2012/05/14 22:33:14.899| The request GET http://webservername/testsite/ is
ALLOWED, because it matched 'auth'
2012/05/14 22:33:14.899| client_side_request.cc(547) clientAccessCheck2: No
adapted_http_access configuration.
2012/05/14 22:33:14.899| The request GET http://webservername/testsite/ is
ALLOWED, because it matched 'auth'
