Re: [RFC] Breaking forwarding loops

2011-01-11 Thread Henrik Nordström
mån 2011-01-10 klockan 16:19 -0700 skrev Alex Rousskov:

 I seem to recall reports that the proxy was still working fine despite
 the lookup errors so we should be careful not to disable useful
 functionality. There is probably more than one way to lookup the
 destination, and as long as one way is working, we should probably keep
 going.

That's mainly when people use external NAT to redirect traffic to the
proxy.

 Do you think we should break all loops because it is difficult to be
 sure which ones are going to be fixed by going direct? The alternative
 is to try harder when identifying the situations where a loop can and
 should be broken.

We always try to break loops when they are detected. And loops are always
detected unless Via is disabled.

There are two ways of breaking a loop:

a) Go direct.

b) Return an error.

If we always return an error then we need to be careful to absorb the
error as well, which involves quite fragile code paths today because our
forwarding logic is rather messy.

But yes, we should be better at this. A proposal is to always return an
error if Via indicates that we have already processed this request twice
(i.e. when the same request is received a third time). This will break
actual loops, while keeping sibling loops silent.
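Sketched in C++ (the helper name and the parsing are illustrative, not Squid's actual code), counting how many times our unique name already appears in the request's Via header might look like:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Hypothetical helper: count how many times this proxy's unique name
// appears in a Via header value (comma-separated entries per RFC 2616).
// A result >= 2 means we have already processed the request twice, so a
// third arrival is an actual loop and should get an error reply.
static int viaOccurrences(const std::string &via, const std::string &ourName)
{
    int count = 0;
    std::istringstream entries(via);
    std::string entry;
    while (std::getline(entries, entry, ','))
        if (entry.find(ourName) != std::string::npos)
            ++count;
    return count;
}
```

For example, viaOccurrences("1.1 cache1.example.com, 1.1 cache1.example.com", "cache1.example.com") would return 2, triggering the error on the next pass.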

Regards
Henrik



Re: [RFC] threadsafe internal profiler

2011-01-11 Thread Kinkie
On Tue, Jan 11, 2011 at 8:03 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 The profiler aufs assertions are caused by the profiler not being thread
 safe but attempting to account operations inside each of the AIO threads.
 Lack of thread safety is due to the stack the profiler maintains of what
 states are currently being counted.

 I don't believe we should be maintaining such a stack. Instead I think we
 should leverage the existing system stack by having the PROF_start(X) macro
 create a counter object in the local scope being counted. When the local
 scope exits for any reason the system stack object will be destructed and
 the destructor can accumulate the counters back into the global data.

+1. I like this.
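As a rough illustration of that RAII idea (all names here are hypothetical, not the real profiler's API), the counter object starts timing in its constructor and accumulates into per-probe totals in its destructor, so leaving the scope by any path stops the probe:

```cpp
#include <cassert>
#include <chrono>

// Per-probe accumulated totals; in a thread-safe design these would be
// atomic or per-thread and merged later.
struct ProbeTotals {
    long long calls = 0;
    long long nanoseconds = 0;
};

// Stack-allocated probe: construction starts the clock, destruction
// (on return, break, or exception) accumulates into the totals.
class ScopedProbe {
public:
    explicit ScopedProbe(ProbeTotals &totals)
        : totals_(totals), start_(std::chrono::steady_clock::now()) {}
    ~ScopedProbe() {
        ++totals_.calls;
        totals_.nanoseconds +=
            std::chrono::duration_cast<std::chrono::nanoseconds>(
                std::chrono::steady_clock::now() - start_).count();
    }
private:
    ProbeTotals &totals_;
    std::chrono::steady_clock::time_point start_;
};

// PROF_start(X) could then expand to something like:
//   ScopedProbe prof_probe_X(globalTotals[X]);
// removing the need for the profiler's own explicit stack.
```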



-- 
    /kinkie


Re: NTLM authentication broken for Mozilla/3.0 User-Agents

2011-01-11 Thread Henrik Nordström
tis 2011-01-11 klockan 11:37 +0100 skrev Fabian Hugelshofer:

 What do you think about removing the special handling for Mozilla/3 and 
 Netscape/3 agents from HttpMsg.cc?

+1 from me.

 How large is the chance that there is still an affected browser in use? 

Pretty close to none. And if there are, those can be fixed in their local
configuration to disable the use of persistent connections.

Regards
Henrik



Re: Init script failure

2011-01-11 Thread Henrik Nordström
tis 2011-01-11 klockan 08:06 -0800 skrev Alex Ray:

 Then I used chkconfig to register it.  I can confirm in
 /var/log/messages that the script does run on boot, however it
 immediately crashes.  The root cause seems to be that the ssl_crtd
 instances (I am using squid with ssl-bump, ssl-crtd, and icap) crash
 right after starting:
 
 squid[2335]: Squid Parent: child process  exited with status 1
 (squid-1): The ssl_crtd helpers are crashing too rapidly, need help!
 squid[2335]: Exiting due to repeated, frequent failures

What does cache.log say?

Do you have SELinux enabled? If so then also
check /var/log/security/audit.log

Regards
Henrik



Re: NTLM authentication broken for Mozilla/3.0 User-Agents

2011-01-11 Thread Amos Jeffries

On 12/01/11 12:14, Henrik Nordström wrote:

tis 2011-01-11 klockan 11:37 +0100 skrev Fabian Hugelshofer:


What do you think about removing the special handling for Mozilla/3 and
Netscape/3 agents from HttpMsg.cc?


+1 from me.


How large is the chance that there is still an affected browser in use?


Pretty close to none. And if there are, those can be fixed in their local
configuration to disable the use of persistent connections.

Regards
Henrik



There are two cases here. The Netscape one, yes, is close to none. 
However, as you pointed out, there are download agents using Mozilla/3.0. 
How certain are we that the second hack case for that agent string is 
not aimed at a popular one of them?


FWIW: +1 from me, I'm game to try and kill this on performance grounds 
and push to get any remaining broken agents fixed.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.10
  Beta testers wanted for 3.2.0.4


Re: [RFC] Breaking forwarding loops

2011-01-11 Thread Amos Jeffries

On 12/01/11 13:53, Alex Rousskov wrote:

On 01/07/2011 06:04 PM, Amos Jeffries wrote:


Note that a great many hostnames are localhost or
localhost.localdomain or localhost.local due to certain distros
hard-coding localhost into their packages.

We also use localhost as a backup when the gethostname() call fails to
provide anything with rDNS. (IMO that hard rDNS requirement is a bit naive)


Good point!

On 01/11/2011 01:16 AM, Henrik Nordström wrote:

A proposal is to always return an error if Via indicates
that we have already processed this request twice
(on third time the same request is received). This will break actual
loops, while keeping sibling loops silent.


Sounds like a good approach to me. I would even take it a few steps
further to address Amos' concern using the same technique. How about this
plan:

If we have detected a forwarding loop and our name appeared N times,
then respond with an error provided at least one of the conditions below
is true:

1) N > 2 and our name is not localhost or similar.
2) N > 10.

No checks for the port mode or transaction flags (intercepted,
accelerated, etc.).

In addition to the above, do a startup check for the name and warn the
user if our name is localhost or similar.


I've done a quick search and failed to find the patch I made for this 
(bug 2654). Please go ahead with just the warning anyway, I'll resolve 
any conflict when I get back into the bug fix.



As for the 10: why 10, and not 1000? Or 5?

Consider: what is the use case for allowing a request through the same 
Squid three times?
The only valid one, as Henrik said, is accidental peer loops, due to us 
not checking whether the source IP matches the peer IP. That is resolved 
on loop #2 by not passing to peers once we have already looped. So 
breaking on the third pass makes a strong case.
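Alex's two-condition rule could be sketched as follows (assuming the thresholds are strict > comparisons; helper names are illustrative, not Squid code):

```cpp
#include <cassert>
#include <string>

// Hypothetical check for the "localhost or similar" names that many
// distro packages hard-code, making Via matches unreliable.
static bool isGenericName(const std::string &name)
{
    return name == "localhost" ||
           name == "localhost.localdomain" ||
           name == "localhost.local";
}

// Return true when the loop should be broken with an error response:
// either our unique name really appeared more than twice in Via, or even
// a generic name appeared so often that it cannot be a false positive.
static bool shouldBreakLoop(int timesSeen, const std::string &ourName)
{
    if (timesSeen > 2 && !isGenericName(ourName))
        return true;        // a real loop through a uniquely named proxy
    return timesSeen > 10;  // even a "localhost" chain this long is a loop
}
```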


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.10
  Beta testers wanted for 3.2.0.4


Re: Squid sends conflicting headers to origin when If-Unmodified-Since header is present from client

2011-01-11 Thread Amos Jeffries

On 12/01/11 11:38, Guy Bashkansky wrote:

I have to modify the behavior of a customized version of Squid 2.4
STABLE6 code, either by configuration or by coding.  Currently I can
not switch to any other Squid version, because of the customizations.


Problem description:

- When a client sends a byte-range request with an If-Unmodified-Since
header AND the object in Squid's cache is stale, then this Squid
version generates a request to origin with both IUMS and IMS headers,
a conflicting combination left undefined by RFC 2616. The origin throws
an error.


Proposed solution:

- On an IMS check for content that was requested with an IUMS header,
Squid should only insert the IMS header, not the IUMS header.  (If
only the IUMS header was added, then the origin would return origin
content unnecessarily, since it hasn't changed from the cached
version.)

- Once the origin check is complete, then Squid's cache should compute
the IUMS calculations as defined in RFC 2616, possibly returning a 206
Partial Content or a 412 Precondition Failed.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
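The local IUMS evaluation described above might, in outline, look like this (an illustrative helper following RFC 2616 14.28, not Squid 2.4 code):

```cpp
#include <cassert>
#include <ctime>

// After revalidation, evaluate If-Unmodified-Since locally: if the
// cached entry was modified after the date the client supplied, the
// precondition fails (412); otherwise serve the content, partial (206)
// for a range request or full (200) otherwise.
static int iumsStatus(time_t entryLastModified, time_t ifUnmodifiedSince,
                      bool rangeRequest)
{
    if (entryLastModified > ifUnmodifiedSince)
        return 412;                  // Precondition Failed
    return rangeRequest ? 206 : 200; // Partial Content or OK
}
```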


Questions:

- Is there any possibility to facilitate such behavior using Squid 2.4
STABLE6 configuration?

- If not, then where in the code should I start to look to make the
necessary code change, and approximately how?

- I could not find any notion of If-Unmodified-Since in the Squid 2.4
STABLE6 code.  What's the best way to handle this?


You will need to:
 * register the header in src/HttpHeader.c and src/enums.h (duplicating 
the registration for If-Modified-Since).
 * add the relevant logic to httpBuildRequestHeader() in src/http.c to 
consider it when passing on or adding HDR_IF_MODIFIED_SINCE.
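As a very rough sketch of that idea (using a std::map stand-in rather than Squid's HttpHeader structures, and a hypothetical function name): the revalidation request drops the client's If-Unmodified-Since and carries an If-Modified-Since for the cached entry, so the origin never sees the conflicting pair.

```cpp
#include <cassert>
#include <map>
#include <string>

typedef std::map<std::string, std::string> Headers;

// Build the upstream revalidation headers: hold back the client's
// If-Unmodified-Since (to be evaluated locally afterwards) and add an
// If-Modified-Since based on the cached entry's Last-Modified date.
static Headers buildRevalidationHeaders(const Headers &clientHeaders,
                                        const std::string &cachedLastModified)
{
    Headers out = clientHeaders;
    out.erase("If-Unmodified-Since");              // evaluate locally later
    out["If-Modified-Since"] = cachedLastModified; // revalidate cache entry
    return out;
}
```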




Have a good look through the behaviour added by those customizations 
anyway. It's highly likely that they are irrelevant in the later 
versions. If not, we are interested in merging useful features to avoid 
exactly this type of version problem in future.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.10
  Beta testers wanted for 3.2.0.4


Re: [RFC] Breaking forwarding loops

2011-01-11 Thread Alex Rousskov
On 01/11/2011 07:13 PM, Amos Jeffries wrote:
 On 12/01/11 13:53, Alex Rousskov wrote:
 On 01/07/2011 06:04 PM, Amos Jeffries wrote:

 Note that a great many hostnames are localhost or
 localhost.localdomain or localhost.local due to certain distros
 hard-coding localhost into their packages.

 We also use localhost as a backup when the gethostname() call fails to
 provide anything with rDNS. (IMO that hard rDNS requirement is a bit
 naive)

 Good point!

 On 01/11/2011 01:16 AM, Henrik Nordström wrote:
 A proposal is to always return an error if Via indicates
 that we have already processed this request twice
 (on third time the same request is received). This will break actual
 loops, while keeping sibling loops silent.

 Sounds like a good approach to me. I would even take it a few steps
 further to address Amos concern using the same technique. How about this
 plan:

 If we have detected a forwarding loop and our name appeared N times,
 then respond with an error provided at least one of the conditions below
 is true:

 1) N > 2 and our name is not localhost or similar.
 2) N > 10.

 No checks for the port mode or transaction flags (intercepted,
 accelerated, etc.).

 In addition to the above, do a startup check for the name and warn the
 user if our name is localhost or similar.
 
 I've done a quick search and failed to find the patch I made for this
 (bug 2654). Please go ahead with just the warning anyway, I'll resolve
 any conflict when I get back into the bug fix.
 
 
 As for the 10. why 10 and not 1000? or 5?


 Consider: what is the use case for allowing a request through the same
 Squid three times?

The use case is a request going (without any loops!) through several
independent proxies, each auto-configured to be localhost, triggering
a false loop detection.

I am fine with N > 2 for all cases if others think that is sufficient.

Thank you,

Alex.

 the only valid one as Henrik said was peer loops by accident. Due to us
 not checking if the source IP matches the peer IP. This is resolved on
 loop #2 by not passing to peers if we have already looped. So breaking
 on 3 makes a strong case.
 
 Amos



Re: Updates to configure.ac for netfilter marking

2011-01-11 Thread Andrew Beverley
 
  Personally I am quite fine with requiring pkg-config as a build
  requirement for automatic detection of libcap, openssl, openldap and a
  couple more. My only requirement is that a minimal build should be
  possible even without pkg-config.
 
  pkg-config is often available even on those old OS versions even if not
  normally installed.
 

 Aye, I think similar.
 
 Andrew,
   If you want to make the patch it should be fine for trunk, scheduled 
 to go out with 3.3.

No problem, will do (although I'm away for 3 weeks shortly). Should I
keep the above patch separate from the patch previously posted? Or
combine everything together? It would be nice to get the previous
patch into 3.2 to prevent anyone having the same problems that I had.
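For reference, the pkg-config approach Henrik describes could be sketched in configure.ac roughly as follows (the macros are standard autoconf/pkg.m4 ones, but this fragment and its variable names are an assumption, not Squid's actual configure.ac):

```
# Sketch only: prefer pkg-config for libcap detection when available,
# but fall back to a plain library probe so a minimal build still
# works on systems without pkg-config installed.
PKG_PROG_PKG_CONFIG
AS_IF([test -n "$PKG_CONFIG"],
  [PKG_CHECK_MODULES([LIBCAP], [libcap],
     [have_libcap=yes], [have_libcap=no])],
  [AC_CHECK_LIB([cap], [cap_init],
     [have_libcap=yes], [have_libcap=no])])
```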

Andy