Build failed in Jenkins: 3.HEAD-amd64-CentOs-icc #443

2011-10-04 Thread noc
See 

Changes:

[Automatic source maintenance] SourceFormat Enforcement

--
[...truncated 15877 lines...]
libtool: link: ranlib .libs/libsmblib.a
libtool: link: ( cd ".libs" && rm -f "libsmblib.la" && ln -s "../libsmblib.la" 
"libsmblib.la" )
make[3]: Leaving directory 
`
make[3]: Entering directory 
`
if /bin/sh ../libtool --tag=CC --mode=compile icc -no-gcc -DHAVE_CONFIG_H  
-I../.. -I../../include -I../../lib -I../../src -I../include  -Werror  -g 
-MT base64.lo -MD -MP -MF ".deps/base64.Tpo" -c -o base64.lo 
../../lib/base64.c; \
then mv -f ".deps/base64.Tpo" ".deps/base64.Plo"; else rm -f 
".deps/base64.Tpo"; exit 1; fi
if /bin/sh ../libtool --tag=CC --mode=compile icc -no-gcc -DHAVE_CONFIG_H  
-I../.. -I../../include -I../../lib -I../../src -I../include  -Werror  -g 
-MT charset.lo -MD -MP -MF ".deps/charset.Tpo" -c -o charset.lo 
../../lib/charset.c; \
then mv -f ".deps/charset.Tpo" ".deps/charset.Plo"; else rm -f 
".deps/charset.Tpo"; exit 1; fi
libtool: compile:  icc -no-gcc -DHAVE_CONFIG_H -I../.. -I../../include 
-I../../lib -I../../src -I../include -Werror -g -MT base64.lo -MD -MP -MF 
.deps/base64.Tpo -c ../../lib/base64.c -o base64.o
libtool: compile:  icc -no-gcc -DHAVE_CONFIG_H -I../.. -I../../include 
-I../../lib -I../../src -I../include -Werror -g -MT charset.lo -MD -MP -MF 
.deps/charset.Tpo -c ../../lib/charset.c -o charset.o
if /bin/sh ../libtool --tag=CC --mode=compile icc -no-gcc -DHAVE_CONFIG_H  
-I../.. -I../../include -I../../lib -I../../src -I../include  -Werror  -g 
-MT html_quote.lo -MD -MP -MF ".deps/html_quote.Tpo" -c -o html_quote.lo 
../../lib/html_quote.c; \
then mv -f ".deps/html_quote.Tpo" ".deps/html_quote.Plo"; else rm -f 
".deps/html_quote.Tpo"; exit 1; fi
libtool: compile:  icc -no-gcc -DHAVE_CONFIG_H -I../.. -I../../include 
-I../../lib -I../../src -I../include -Werror -g -MT html_quote.lo -MD -MP -MF 
.deps/html_quote.Tpo -c ../../lib/html_quote.c -o html_quote.o
if /bin/sh ../libtool --tag=CC --mode=compile icc -no-gcc -DHAVE_CONFIG_H  
-I../.. -I../../include -I../../lib -I../../src -I../include  -Werror  -g 
-MT md5.lo -MD -MP -MF ".deps/md5.Tpo" -c -o md5.lo ../../lib/md5.c; \
then mv -f ".deps/md5.Tpo" ".deps/md5.Plo"; else rm -f ".deps/md5.Tpo"; 
exit 1; fi
libtool: compile:  icc -no-gcc -DHAVE_CONFIG_H -I../.. -I../../include 
-I../../lib -I../../src -I../include -Werror -g -MT md5.lo -MD -MP -MF 
.deps/md5.Tpo -c ../../lib/md5.c -o md5.o
if /bin/sh ../libtool --tag=CC --mode=compile icc -no-gcc -DHAVE_CONFIG_H  
-I../.. -I../../include -I../../lib -I../../src -I../include  -Werror  -g 
-MT rfc1738.lo -MD -MP -MF ".deps/rfc1738.Tpo" -c -o rfc1738.lo 
../../lib/rfc1738.c; \
then mv -f ".deps/rfc1738.Tpo" ".deps/rfc1738.Plo"; else rm -f 
".deps/rfc1738.Tpo"; exit 1; fi
libtool: compile:  icc -no-gcc -DHAVE_CONFIG_H -I../.. -I../../include 
-I../../lib -I../../src -I../include -Werror -g -MT rfc1738.lo -MD -MP -MF 
.deps/rfc1738.Tpo -c ../../lib/rfc1738.c -o rfc1738.o
if /bin/sh ../libtool --tag=CC --mode=compile icc -no-gcc -DHAVE_CONFIG_H  
-I../.. -I../../include -I../../lib -I../../src -I../include  -Werror  -g 
-MT rfc2617.lo -MD -MP -MF ".deps/rfc2617.Tpo" -c -o rfc2617.lo 
../../lib/rfc2617.c; \
then mv -f ".deps/rfc2617.Tpo" ".deps/rfc2617.Plo"; else rm -f 
".deps/rfc2617.Tpo"; exit 1; fi
libtool: compile:  icc -no-gcc -DHAVE_CONFIG_H -I../.. -I../../include 
-I../../lib -I../../src -I../include -Werror -g -MT rfc2617.lo -MD -MP -MF 
.deps/rfc2617.Tpo -c ../../lib/rfc2617.c -o rfc2617.o
if /bin/sh ../libtool --tag=CC --mode=compile icc -no-gcc -DHAVE_CONFIG_H  
-I../.. -I../../include -I../../lib -I../../src -I../include  -Werror  -g 
-MT uudecode.lo -MD -MP -MF ".deps/uudecode.Tpo" -c -o uudecode.lo 
../../lib/uudecode.c; \
then mv -f ".deps/uudecode.Tpo" ".deps/uudecode.Plo"; else rm -f 
".deps/uudecode.Tpo"; exit 1; fi
libtool: compile:  icc -no-gcc -DHAVE_CONFIG_H -I../.. -I../../include 
-I../../lib -I../../src -I../include -Werror -g -MT uudecode.lo -MD -MP -MF 
.deps/uudecode.Tpo -c ../../lib/uudecode.c -o uudecode.o
if /bin/sh ../libtool --tag=CC --mode=compile icc -no-gcc -DHAVE_CONFIG_H  
-I../.. -I../../include -I../../lib -I../../src -I../include  -Werror  -g 
-MT hash.lo -MD -MP -MF ".deps/hash.Tpo" -c -o hash.lo ../../lib/hash.c; \
then mv -f ".deps/hash.Tpo" ".deps/hash.Plo"; else rm -f 
".deps/hash.Tpo"; exit 1; fi
libtool: compile:  icc -no-gcc -DHAVE_CONFIG_H -I../.. -I../../include 
-I../../lib -I../../src -I../include -Werror -g -MT hash.lo -MD -MP -MF 
.deps/hash.Tpo -c ../../lib/hash.c -o hash.o
if /bin/sh ../

Jenkins build is back to normal : website-builds #724

2011-10-04 Thread noc
See 




Re: [MERGE] c++-refactor HttpHdrCc

2011-10-04 Thread Amos Jeffries

On 05/10/11 16:58, Alex Rousskov wrote:

On 10/04/2011 07:05 PM, Amos Jeffries wrote:

On Tue, 04 Oct 2011 17:39:48 -0600, Alex Rousskov wrote:

On 10/04/2011 04:19 PM, Amos Jeffries wrote:

On Tue, 04 Oct 2011 08:30:47 -0600, Alex Rousskov wrote:

We got a malformed max-stale value. We have only
two options when it comes to that value interpretation, I think:

A1. Treat it as valueless max-stale.
A2. Ignore it completely.



we have three options when it comes to forwarding the
malformed directive:

B1. Forward our own valid directive.
B2. Forward nothing.
B3. Forward the malformed directive.




I was going by the RFC 2616 section 14.9.3 clause on max-stale:
  "If no value is assigned to max-stale, then the client is willing to
accept a stale response of any age."

Treating it compliantly as valueless when a value was sent (but not
understood or malformed) means the client will get incorrect
overly-stale objects.


Not sure why you call such treatment "compliant", but yes, that is
exactly why I oppose A1: I do not want to make things worse by serving
overly-stale objects.


That's what the RFC said. valueless == anything ->  very,very,very stale
is fine.

Very nasty, but compliant.


Since the client has not sent a valueless directive, we cannot really
apply the valueless directive meaning to what we got. All we know is
that there is a value that we cannot parse. The value could be garbage,
a very long integer, or even a poorly specified custom extension. It is
probably not worth the trouble detecting the difference.



   A0. Treat it as max-stale=0.
   A1. Treat it as valueless max-stale.
   A2. Ignore it completely.



If we do A1, the RFC requires "anything" and our max_stale directives
will always be smaller or equal to that. Usually smaller (1 week) so a
slight gain in worst-case freshness over a bare assumption of
"anything".


but still possibly a lot more stale than the client can tolerate. I see
no reason to do A1 if we can do A0. Do you?



I can't. Agreed.


Glad we reached consensus and even improved the implementation plan
after all. A0 it is!

Are you OK with B2? Do you think B3 is worth implementing?


I'm okay with either. Overall I think B3 is better on the outgoing just 
in case it was an extension or big integer. But not strongly enough to 
want it right now.
 B3 will probably be needed when we re-enable "transparent" traffic 
flag as meaning fully transparent/invisible proxy (3.4?). Until then 
there is no real need.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.15
  Beta testers wanted for 3.2.0.12


Re: [MERGE] c++-refactor HttpHdrCc

2011-10-04 Thread Alex Rousskov
On 10/04/2011 07:05 PM, Amos Jeffries wrote:
> On Tue, 04 Oct 2011 17:39:48 -0600, Alex Rousskov wrote:
>> On 10/04/2011 04:19 PM, Amos Jeffries wrote:
>>> On Tue, 04 Oct 2011 08:30:47 -0600, Alex Rousskov wrote:
>> We got a malformed max-stale value. We have only
>> two options when it comes to that value interpretation, I think:
>>
>>A1. Treat it as valueless max-stale.
>>A2. Ignore it completely.
>>
>> we have three options when it comes to forwarding the
>> malformed directive:
>>
>>B1. Forward our own valid directive.
>>B2. Forward nothing.
>>B3. Forward the malformed directive.


>>> I was going by the RFC 2616 section 14.9.3 clause on max-stale:
>>>  "If no value is assigned to max-stale, then the client is willing to
>>> accept a stale response of any age."
>>>
>>> Treating it compliantly as valueless when a value was sent (but not
>>> understood or malformed) means the client will get incorrect
>>> overly-stale objects.
>>
>> Not sure why you call such treatment "compliant", but yes, that is
>> exactly why I oppose A1: I do not want to make things worse by serving
>> overly-stale objects.
> 
> That's what the RFC said. valueless == anything -> very,very,very stale
> is fine.
> 
> Very nasty, but compliant.

Since the client has not sent a valueless directive, we cannot really
apply the valueless directive meaning to what we got. All we know is
that there is a value that we cannot parse. The value could be garbage,
a very long integer, or even a poorly specified custom extension. It is
probably not worth the trouble detecting the difference.


>>   A0. Treat it as max-stale=0.
>>   A1. Treat it as valueless max-stale.
>>   A2. Ignore it completely.

>>> If we do A1, the RFC requires "anything" and our max_stale directives
>>> will always be smaller or equal to that. Usually smaller (1 week) so a
>>> slight gain in worst-case freshness over a bare assumption of
>>> "anything".
>>
>> but still possibly a lot more stale than the client can tolerate. I see
>> no reason to do A1 if we can do A0. Do you?
>>
> 
> I can't. Agreed.

Glad we reached consensus and even improved the implementation plan
after all. A0 it is!

Are you OK with B2? Do you think B3 is worth implementing?


Thank you,

Alex.


Re: [PATCH] Re: Adding a reset option to the session helper

2011-10-04 Thread Amos Jeffries

On Tue, 04 Oct 2011 18:47:21 -0600, Alex Rousskov wrote:

On 10/04/2011 04:21 PM, Andrew Beverley wrote:

On Tue, 2011-10-04 at 18:59 +0100, Andrew Beverley wrote:
> However, I'm now having problems with multiple instances of the session
> helper writing to the same database. I thought I had fixed this with the
> ->sync option, but it appears not. If I open multiple instances of
> ext_session_acl in terminal windows on the same database, then I do not
> get consistent results between each process. If I do a "LOGIN" on one,
> then I can still get "No session available" on the other. Any ideas why
> this might be? Is it a bug with db-185? Debugging shows that the key
> does not exist in the second process, despite having been written in the
> first.



After further investigation, the problem appears to be that the *read*
database process is being cached (I'm not sure where). So even if the
writing process syncs its changes to the DB file, these aren't reflected
in the process that is reading the DB file.

After a lot of Googling, the only way I can see to fix this is to close
and re-open the database on every read. This is very messy, but does
anyone have any better suggestions?


Hi Andrew,

FWIW, the ssl_crtd daemon which stores generated SSL certificates on
disk does [lock and] reopen the database every time it needs to read it.
This is not efficient, but avoids conflicts and stale info.

If you do not like that simple but inefficient approach, you can use a
better database that can handle concurrency (and leave all caching to
it)  OR  implement your own custom and efficient database, possibly
using shared memory.

Disclaimer: I do not remember what database the session helper is using
so I may be overlooking simpler/better options.


An ancient version of BerkeleyDB. Mostly because the *.db API became
much less simple and potentially slower in later versions.

An upgrade is well overdue, but only really worth doing if this sync
bug is fixed.


Amos


Re: [MERGE] c++-refactor HttpHdrCc

2011-10-04 Thread Amos Jeffries

On Tue, 04 Oct 2011 17:39:48 -0600, Alex Rousskov wrote:

On 10/04/2011 04:19 PM, Amos Jeffries wrote:

On Tue, 04 Oct 2011 08:30:47 -0600, Alex Rousskov wrote:

We got a malformed max-stale value. We have only
two options when it comes to that value interpretation, I think:

   A1. Treat it as valueless max-stale.
   A2. Ignore it completely.



we have three options when it comes to forwarding the
malformed directive:

   B1. Forward our own valid directive.
   B2. Forward nothing.
   B3. Forward the malformed directive.



A1 has a side effect of serving a possibly "too stale" object to the
client. You may be confused by the old source code comments (now fixed
by Kinkie) that valueless max-stale means "always stale". It actually
means "never stale". Thus, A1 and B1 convert "possibly stale at some
unknown point in time" to "never stale" which feels like a bad idea.


I was going by the RFC 2616 section 14.9.3 clause on max-stale:
 "If no value is assigned to max-stale, then the client is willing to
accept a stale response of any age."

Treating it compliantly as valueless when a value was sent (but not
understood or malformed) means the client will get incorrect
overly-stale objects.


Not sure why you call such treatment "compliant", but yes, that is
exactly why I oppose A1: I do not want to make things worse by serving
overly-stale objects.


That's what the RFC said. valueless == anything -> very,very,very stale
is fine.


Very nasty, but compliant.





If we want the client _never_ to get stale objects then it must be
treated as max-stale=0 to override both the local max_stale directives
and the RFC "anything" behaviour of valueless field.


Indeed. I think that would be even better than A2! Let me call it A0 to
avoid conflicts with previously mentioned options:

  A0. Treat it as max-stale=0.
  A1. Treat it as valueless max-stale.
  A2. Ignore it completely.

A0 is the safest, most "conservative" option IMO.



I think we might have to get around this by using the max_stale
directive (or refresh_pattern override value) to limit the max-stale
value relayed upstream from the current Squid. If we consider this to be
an administrative override determining that this Squid will not emit
objects older than max_stale it's reasonable to shrink the requested
staleness range from the valueless "anything" and ensure we get an
object as current as we want, within the wider range of what the client
wants.


When max-stale=value is malformed, we do not know whether we shrink or
extend it by supplying our own value. That is why I think A2 is the only
acceptable option for Squid internal interpretation and B2 and B3 are
the only acceptable options for forwarding.



If we do A2, ignoring the header text entirely results in using the
max_stale directives local values.


I did not think of a local config taking over in the simulated absence
of a max-stale directive. If the client-supplied max-stale always takes
priority over a local config, then A0 becomes even more attractive!



The result of what I ported from squid-2 was supposed to be:
  max_stale overridden by refresh_pattern max-stale=, overridden by CC 
header max-stale[=], overridden by CC no-cache.





If we do A1, the RFC requires "anything" and our max_stale directives
will always be smaller or equal to that. Usually smaller (1 week) so a
slight gain in worst-case freshness over a bare assumption of "anything".


but still possibly a lot more stale than the client can tolerate. I see
no reason to do A1 if we can do A0. Do you?



I can't. Agreed.

Amos


Re: [PATCH] Re: Adding a reset option to the session helper

2011-10-04 Thread Alex Rousskov
On 10/04/2011 04:21 PM, Andrew Beverley wrote:
> On Tue, 2011-10-04 at 18:59 +0100, Andrew Beverley wrote:
>> > However, I'm now having problems with multiple instances of the session
>> > helper writing to the same database. I thought I had fixed this with the
>> > ->sync option, but it appears not. If I open multiple instances of
>> > ext_session_acl in terminal windows on the same database, then I do not
>> > get consistent results between each process. If I do a "LOGIN" on one,
>> > then I can still get "No session available" on the other. Any ideas why
>> > this might be? Is it a bug with db-185? Debugging shows that the key
>> > does not exist in the second process, despite having been written in the
>> > first.


> After further investigation, the problem appears to be that the *read*
> database process is being cached (I'm not sure where). So even if the
> writing process syncs its changes to the DB file, these aren't reflected
> in the process that is reading the DB file.
> 
> After a lot of Googling, the only way I can see to fix this is to close
> and re-open the database on every read. This is very messy, but does
> anyone have any better suggestions?

Hi Andrew,

FWIW, the ssl_crtd daemon which stores generated SSL certificates on
disk does [lock and] reopen the database every time it needs to read it.
This is not efficient, but avoids conflicts and stale info.

If you do not like that simple but inefficient approach, you can use a
better database that can handle concurrency (and leave all caching to
it)  OR  implement your own custom and efficient database, possibly
using shared memory.

Disclaimer: I do not remember what database the session helper is using
so I may be overlooking simpler/better options.


HTH,

Alex.
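The lock-and-reopen pattern Alex describes can be illustrated with a minimal, self-contained C sketch. A plain line-per-key text file stands in for the real Berkeley DB store, and all names here (session_active, dbfile) are hypothetical, not the helper's actual API:

```c
#include <stdio.h>
#include <string.h>

/* Illustrative stand-in for the reopen-per-read pattern: instead of
 * keeping one long-lived handle (whose cached pages can go stale when
 * another process writes), open the store, read, and close on every
 * lookup. A line-per-key text file replaces the real database here. */
static int session_active(const char *dbfile, const char *key)
{
    char line[256];
    int found = 0;
    FILE *fp = fopen(dbfile, "r"); /* fresh handle: no stale cache */
    if (!fp)
        return 0;
    while (fgets(line, sizeof(line), fp)) {
        line[strcspn(line, "\n")] = '\0';
        if (strcmp(line, key) == 0) {
            found = 1;
            break;
        }
    }
    fclose(fp); /* drop the handle so the next call rereads from disk */
    return found;
}
```

Each lookup pays an open/close, which is exactly the inefficiency Alex notes; the benefit is that a concurrent writer's synced changes are always visible.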


Re: [MERGE] c++-refactor HttpHdrCc

2011-10-04 Thread Alex Rousskov
On 10/04/2011 04:19 PM, Amos Jeffries wrote:
> On Tue, 04 Oct 2011 08:30:47 -0600, Alex Rousskov wrote:
 We got a malformed max-stale value. We have only
 two options when it comes to that value interpretation, I think:

A1. Treat it as valueless max-stale.
A2. Ignore it completely.

 we have three options when it comes to forwarding the
 malformed directive:

B1. Forward our own valid directive.
B2. Forward nothing.
B3. Forward the malformed directive.


>> A1 has a side effect of serving a possibly "too stale" object to the
>> client. You may be confused by the old source code comments (now fixed
>> by Kinkie) that valueless max-stale means "always stale". It actually
>> means "never stale". Thus, A1 and B1 convert "possibly stale at some
>> unknown point in time" to "never stale" which feels like a bad idea.
> 
> I was going by the RFC 2616 section 14.9.3 clause on max-stale:
>  "If no value is assigned to max-stale, then the client is willing to
> accept a stale response of any age."
> 
> Treating it compliantly as valueless when a value was sent (but not
> understood or malformed) means the client will get incorrect
> overly-stale objects.

Not sure why you call such treatment "compliant", but yes, that is
exactly why I oppose A1: I do not want to make things worse by serving
overly-stale objects.


> If we want the client _never_ to get stale objects then it must be
> treated as max-stale=0 to override both the local max_stale directives
> and the RFC "anything" behaviour of valueless field.

Indeed. I think that would be even better than A2! Let me call it A0 to
avoid conflicts with previously mentioned options:

  A0. Treat it as max-stale=0.
  A1. Treat it as valueless max-stale.
  A2. Ignore it completely.

A0 is the safest, most "conservative" option IMO.


>>> I think we might have to get around this by using the max_stale
>>> directive (or refresh_pattern override value) to limit the max-stale
>>> value relayed upstream from the current Squid. If we consider this to be
>>> an administrative override determining that this Squid will not emit
>>> objects older than max_stale its reasonable to shrink the requested
>>> staleness range from the valueless "anything" and ensure we get an
>>> object as current as we want, within the wider range of what the client
>>> wants.
>>
>> When max-stale=value is malformed, we do not know whether we shrink or
>> extend it by supplying our own value. That is why I think A2 is the only
>> acceptable option for Squid internal interpretation and B2 and B3 are
>> the only acceptable options for forwarding.
>>
> 
> If we do A2, ignoring the header text entirely results in using the
> max_stale directives local values.

I did not think of a local config taking over in the simulated absence
of a max-stale directive. If the client-supplied max-stale always takes
priority over a local config, then A0 becomes even more attractive!


> If we do A1, the RFC requires "anything" and our max_stale directives
> will always be smaller or equal to that. Usually smaller (1 week) so a
> slight gain in worst-case freshness over a bare assumption of "anything".

but still possibly a lot more stale than the client can tolerate. I see
no reason to do A1 if we can do A0. Do you?


Thank you,

Alex.
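For illustration, the agreed option A0 (treat a malformed max-stale value as max-stale=0) could look roughly like the following. This is a hedged sketch in plain C with a hypothetical function name, not Squid's actual HttpHdrCc parsing code:

```c
#include <errno.h>
#include <stdlib.h>

/* Sketch of option A0: parse a max-stale delta-seconds value; on any
 * malformed input (garbage, trailing junk, overflow, negative), fall
 * back to 0 ("never stale") rather than to the valueless "any age"
 * meaning. The truly valueless case (no "=" sent at all) is handled
 * elsewhere and keeps its RFC 2616 "any age" semantics. */
static long parse_max_stale(const char *value)
{
    char *end = NULL;
    long secs;

    if (value == NULL || *value == '\0')
        return 0; /* A0: empty value counts as malformed */

    errno = 0;
    secs = strtol(value, &end, 10);
    if (errno != 0 || *end != '\0' || secs < 0)
        return 0; /* A0: not a clean non-negative delta-seconds */

    return secs;
}
```

The key property is that a client sending an unparseable value never ends up being served content staler than it asked for.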


Re: [PATCH] Re: Adding a reset option to the session helper

2011-10-04 Thread Amos Jeffries

On Tue, 04 Oct 2011 23:21:13 +0100, Andrew Beverley wrote:

On Tue, 2011-10-04 at 18:59 +0100, Andrew Beverley wrote:
However, I'm now having problems with multiple instances of the session
helper writing to the same database. I thought I had fixed this with the
->sync option, but it appears not. If I open multiple instances of
ext_session_acl in terminal windows on the same database, then I do not
get consistent results between each process. If I do a "LOGIN" on one,
then I can still get "No session available" on the other. Any ideas why
this might be? Is it a bug with db-185? Debugging shows that the key
does not exist in the second process, despite having been written in the
first.


After further investigation, the problem appears to be that the *read*
database process is being cached (I'm not sure where). So even if the
writing process syncs its changes to the DB file, these aren't reflected
in the process that is reading the DB file.

After a lot of Googling, the only way I can see to fix this is to close
and re-open the database on every read. This is very messy, but does
anyone have any better suggestions?


I also found another bug with the session helper when used in active
mode (which might have been created by me). I'll submit a patch once
I've got a way ahead with the DB.


The problem here was that the "LOGIN" was being written to the database,
as well as the desired key (even though the correct string length was
specified). So it was never actually matching a subsequent query.

Please find attached a patch that fixes both of these. The fix for the
first is messy; the fix for the second is simple, just using strtok
instead of strrchr, so as to force the key to be correctly truncated.

Andy


The reason it was doing strrchr() was that people are free to specify 
multi-word keys. We need to scan from the end of the line backwards for 
the token.
 I think strrchr() is still probably the best to use. The buffer is 
local so if you can map the strrchr output to an offset it can be freely 
adjusted to add \0 in the space before "LOGIN".
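A minimal sketch of that approach, assuming the buffer is writable; split_action is a hypothetical helper for illustration, not the actual ext_session_acl.cc code:

```c
#include <string.h>

/* Keep strrchr() so multi-word keys still work: find the last space,
 * check whether the final token is LOGIN/LOGOUT, and if so write '\0'
 * over that space so only the key part of the buffer remains.
 * Returns 1 for LOGIN, -1 for LOGOUT, 0 for a plain lookup. */
static int split_action(char *detail)
{
    int action = 0;
    char *lastdetail = strrchr(detail, ' '); /* last space -> final token */
    if (lastdetail) {
        if (strcmp(lastdetail, " LOGIN") == 0)
            action = 1;
        else if (strcmp(lastdetail, " LOGOUT") == 0)
            action = -1;
        if (action != 0)
            *lastdetail = '\0'; /* truncate in place: buffer is local */
    }
    return action;
}
```

This keeps the multi-word-key behaviour the original strrchr() code was protecting, while still stripping the trailing action token before the key is used.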


Amos


[PATCH] Re: Adding a reset option to the session helper

2011-10-04 Thread Andrew Beverley
On Tue, 2011-10-04 at 18:59 +0100, Andrew Beverley wrote:
> However, I'm now having problems with multiple instances of the session
> helper writing to the same database. I thought I had fixed this with the
> ->sync option, but it appears not. If I open multiple instances of
> ext_session_acl in terminal windows on the same database, then I do not
> get consistent results between each process. If I do a "LOGIN" on one,
> then I can still get "No session available" on the other. Any ideas why
> this might be? Is it a bug with db-185? Debugging shows that the key
> does not exist in the second process, despite having been written in the
> first.

After further investigation, the problem appears to be that the *read*
database process is being cached (I'm not sure where). So even if the
writing process syncs its changes to the DB file, these aren't reflected
in the process that is reading the DB file.

After a lot of Googling, the only way I can see to fix this is to close
and re-open the database on every read. This is very messy, but does
anyone have any better suggestions?

> I also found another bug with the session helper when used in active
> mode (which might have been created by me). I'll submit a patch once
> I've got a way ahead with the DB.

The problem here was that the "LOGIN" was being written to the database,
as well as the desired key (even though the correct string length was
specified). So it was never actually matching a subsequent query.

Please find attached a patch that fixes both of these. The fix for the
first is messy; the fix for the second is simple, just using strtok
instead of strrchr, so as to force the key to be correctly truncated.

Andy

This patch fixes two problems with the session helper:

1. When using multiple processes with the DB, it did not properly re-read the
database file when the other process makes a change.

2. Active mode was not properly working, due to the wrong key being written to
the database after a login.

=== modified file 'helpers/external_acl/session/ext_session_acl.cc'
--- helpers/external_acl/session/ext_session_acl.cc	2011-09-21 00:19:19 +
+++ helpers/external_acl/session/ext_session_acl.cc	2011-10-04 22:10:31 +
@@ -79,11 +79,16 @@
 DBT key, data;
 key.data = (void *)details;
 key.size = len;
+/* This is very messy, but appears to be the only way to force a reload of the DB.
+   Failure to do this causes sync problems when multiple helper processes are running. */
+shutdown_db();
+init_db();   
 if (db->get(db, &key, &data, 0) == 0) {
 time_t timestamp;
 if (data.size != sizeof(timestamp)) {
 fprintf(stderr, "ERROR: %s: CORRUPTED DATABASE (%s)\n", program_name, details);
 db->del(db, &key, 0);
+db->sync(db, 0);
 return 0;
 }
memcpy(&timestamp, data.data, sizeof(timestamp));
@@ -111,6 +116,7 @@
 key.data = (void *)details;
 key.size = len;
 db->del(db, &key, 0);
+db->sync(db, 0);
 }
 
 static void usage(void)
@@ -156,21 +162,19 @@
 while (fgets(request, HELPER_INPUT_BUFFER, stdin)) {
 int action = 0;
 const char *channel_id = strtok(request, " ");
-const char *detail = strtok(NULL, "\n");
+const char *detail = strtok(NULL, " \n");
 if (detail == NULL) {
 // Only 1 paramater supplied. We are expecting at least 2 (including the channel ID)
 fprintf(stderr, "FATAL: %s is concurrent and requires the concurrency option to be specified.\n", program_name);
 exit(1);
 }
-const char *lastdetail = strrchr(detail, ' ');
+const char *lastdetail = strtok(NULL, " \n");
 size_t detail_len = strlen(detail);
 if (lastdetail) {
-if (strcmp(lastdetail, " LOGIN") == 0) {
+if (strcmp(lastdetail, "LOGIN") == 0) {
 action = 1;
-detail_len = (size_t)(lastdetail-detail);
-} else if (strcmp(lastdetail, " LOGOUT") == 0) {
+} else if (strcmp(lastdetail, "LOGOUT") == 0) {
 action = -1;
-detail_len = (size_t)(lastdetail-detail);
 }
 }
 if (action == -1) {



Re: [MERGE] c++-refactor HttpHdrCc

2011-10-04 Thread Amos Jeffries

On Tue, 04 Oct 2011 08:30:47 -0600, Alex Rousskov wrote:

On 10/03/2011 08:49 PM, Amos Jeffries wrote:


We got a malformed max-stale value. We have only
two options when it comes to that value interpretation, I think:

   A1. Treat it as valueless max-stale.
   A2. Ignore it completely.


BTW, we have three options when it comes to forwarding the malformed
directive:

   B1. Forward our own valid directive.
   B2. Forward nothing.
   B3. Forward the malformed directive.




The new updated RFC texts from HTTPbis are now stating:

"
   A proxy, whether or not it implements a cache, MUST pass cache
   directives through in forwarded messages, regardless of their
   significance to that application, since the directives might be
   applicable to all recipients along the request/response chain.  It is
   not possible to target a directive to a specific cache.
"

So we do not have B2 as a compliant option. If we receive it, it MUST be
relayed.


I disagree. The above HTTPbis text does not talk about malformed
directives, which are the only subject of the current discussion. B2 is
acceptable for malformed directives, IMO, even though B3 would match the
spirit of the above spec better.



Erasing it could be an option, but not a compliant one. Stripping the
unknown-value to make it less strict than we received would be more
likely to add problems in downstream software which might understand it.


Squid can either drop the malformed directive completely (B2) or forward
the original malformed directive (B3). B2 is easy to implement. B3 is
not so easy and is probably not very useful in practice so I would not
spend cycles on it, but it is Kinkie's call.

Forwarding something other than the original is a bad idea, IMO, because
it may result in more stale objects than the client wants to accept (B1).



That said, the spec clearly outlines it as delta-seconds so if it's not
parseable as delta-seconds it's clearly malformed and B1 probably should
not pass "max-stale".


Exactly. Although, technically, it is easy to construct a well-formed
max-stale value that Squid would consider malformed:

  max-stale=

The same is true for virtually all integer-based values in HTTP. We can
hope such values will never be found in real requests until we meet
aliens that write software which can cache objects for eternity.



Or the science and archival communities. They have particular accuracy 
requirements on the meaning of "forever".





Which leaves us in a mess because:
 A1->B1 seems intuitively the right thing, but has bad side effects.

Neither A1 nor B1 are the right things, IMO, because they may lead to
responses that are more stale than what the client is willing to accept.



 A1->B3 seems like a reasonable alternative to avoid the effects, but
gambles on delta-seconds never changing.


A1 has a side effect of serving a possibly "too stale" object to the
client. You may be confused by the old source code comments (now fixed
by Kinkie) that valueless max-stale means "always stale". It actually
means "never stale". Thus, A1 and B1 convert "possibly stale at some
unknown point in time" to "never stale" which feels like a bad idea.


I was going by the RFC 2616 section 14.9.3 clause on max-stale:
 "If no value is assigned to max-stale, then the client is willing to 
accept a stale response of any age."


Treating it compliantly as valueless when a value was sent (but not 
understood or malformed) means the client will get incorrect 
overly-stale objects.


If we want the client _never_ to get stale objects then it must be 
treated as max-stale=0 to override both the local max_stale directives 
and the RFC "anything" behaviour of valueless field.





I think we might have to get around this by using the max_stale
directive (or refresh_pattern override value) to limit the max-stale
value relayed upstream from the current Squid. If we consider this to be
an administrative override determining that this Squid will not emit
objects older than max_stale it's reasonable to shrink the requested
staleness range from the valueless "anything" and ensure we get an
object as current as we want, within the wider range of what the client
wants.


When max-stale=value is malformed, we do not know whether we shrink or
extend it by supplying our own value. That is why I think A2 is the only
acceptable option for Squid internal interpretation and B2 and B3 are
the only acceptable options for forwarding.



If we do A2, ignoring the header text entirely results in using the 
max_stale directives local values.


If we do A1, the RFC requires "anything" and our max_stale directives
will always be smaller or equal to that. Usually smaller (1 week) so a
slight gain in worst-case freshness over a bare assumption of "anything".


Amos


Re: [MERGE] c++-refactor HttpHdrCc

2011-10-04 Thread Kinkie
On Tue, Oct 4, 2011 at 11:05 PM, Amos Jeffries  wrote:
> On Tue, 4 Oct 2011 14:00:14 +0200, Kinkie wrote:
>>
>> If you guys are OK, I'll merge the current implementation, then we can
>> alter it post-merge.
>>
>> Right now what Squid does is A1->B2 for all directives except for
>> max-stale, for which it does B4 (forward valueless). This is probably
>> the worst thing that can be done.
>>
>> IMHVO the best thing we could do is forward things we can't parse
>> as-is (A2-B3). If we change nothing we can probably also keep a copy
>> of the raw directives, saves some cycles to reassemble it at pack-time
>> .
>
>
> I'm okay with an intermediary polish-only merge as long as it's not adding or
> changing the behaviour mess.
>
> If it is changing behaviour then may as well make sure there are no known
> bugs in the post-merge result.

No behaviour changes.
These will have to wait until a consensus is reached.

Ok, merging.
Thanks


-- 
    /kinkie


Re: [MERGE] c++-refactor HttpHdrCc

2011-10-04 Thread Amos Jeffries

On Tue, 4 Oct 2011 14:00:14 +0200, Kinkie wrote:
If you guys are OK, I'll merge the current implementation, then we 
can

alter it post-merge.

Right now what Squid does is A1->B2 for all directives except for
max-stale, for which it does B4 (forward valueless). This  is 
probably

the worst thing that can be done.

IMHVO the best thing we could do is forward things we can't parse
as-is (A2-B3). If we change nothing we can probably also keep a copy
of the raw directives, saves some cycles to reassemble it at 
pack-time

.



I'm okay with an intermediary polish-only merge as long as it's not 
adding or changing the behaviour mess.


If it is changing behaviour then may as well make sure there are no 
known bugs in the post-merge result.


Amos



Re: squid disk cache

2011-10-04 Thread Alex Rousskov
On 10/04/2011 12:03 PM, pavi wrote:

> Whenever a client sends a request, how does the proxy server identify
> whether the requested object is in the disk cache or not?

Squid computes a "key" using Request-URI, Request method, and other
transaction properties and then looks up that key in one or more storage
indexes. If there is a match, the requested object is probably in the
cache.  Additional checks are performed as the identified stored object
entry is located and swapped in.

With SMP caching, there are up to three kinds of storage indexes:

  * shared memory cache index (if memory caching is enabled);
  * shared disk cache indexes (one per Rock cache_dir, if any); and
  * local index of in-transit objects (unused to the extent possible).

Without SMP caching, there is just one global storage index where all
stored objects are recorded.


HTH,

Alex.


squid disk cache

2011-10-04 Thread pavi
Whenever a client sends a request, how does the proxy server identify
whether the requested object is in the disk cache or not?

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-disk-cache-tp3871834p3871834.html
Sent from the Squid - Development mailing list archive at Nabble.com.


Re: Adding a reset option to the session helper

2011-10-04 Thread Andrew Beverley
On Sat, 2011-10-01 at 15:51 +1300, Amos Jeffries wrote:
[...]
>  When the LOGIN/LOGOUT ACLs are tested they perform their action on the 
>  session state. The ACLs leading up to them have to be crafted to avoid 
>  testing them at all unless you want LOGIN/LOGOUT to happen on that 
>  request
[...]

Thanks, I've kind-of got that working, and will update the wiki in due
course.

However, I'm now having problems with multiple instances of the session
helper writing to the same database. I thought I had fixed this with the
->sync option, but it appears not. If I open multiple instances of
ext_session_acl in terminal windows on the same database, then I do not
get consistent results between each process. If I do a "LOGIN" on one,
then I can still get "No session available" on the other. Any ideas why
this might be? Is it a bug with db-185? Debugging shows that the key
does not exist in the second process, despite having been written in the
first.

I also found another bug with the session helper when used in active
mode (which might have been created by me). I'll submit a patch once
I've got a way ahead with the DB.

Andy




Re: Can not include my code into squid

2011-10-04 Thread Alex Rousskov
On 10/04/2011 09:21 AM, Arthur Tumanyan wrote:
> Hi all. I have some trouble. When I try to make Squid with my sources
> (SquidShell.cc, SquidShell.h), I get this error:
> /root@dreambox:/home/arthur/workspace/squid# make
> Making all in compat
> make[1]: Entering directory `/home/arthur/workspace/squid/compat'
> g++ -DHAVE_CONFIG_H  -I.. -I../include -I../lib -I../src -I../include 
> -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT
> -g -O2 -MT assert.o -MD -MP -MF .deps/assert.Tpo -c -o assert.o assert.cc
> In file included from ../include/config.h:68:0,
>  from assert.cc:34:
> ../src/shell/SquidShell.h:9:18: error: ‘StoreEntry’ has not been declared
> make[1]: *** [assert.o] Error 1
> make[1]: Leaving directory `/home/arthur/workspace/squid/compat'
> make: *** [all-recursive] Error 1
> root@dreambox:/home/arthur/workspace/squid# 
> /
> Then I add #include "Store.h" to SquidShell.h
> and get this:
> /Making all in compat
> make[1]: Entering directory `/home/arthur/workspace/squid/compat'
> g++ -DHAVE_CONFIG_H  -I.. -I../include -I../lib -I../src -I../include 
> -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT
> -g -O2 -MT assert.o -MD -MP -MF .deps/assert.Tpo -c -o assert.o assert.cc
> In file included from ../include/snmp.h:74:0,
>  from ../include/cache_snmp.h:19,
>  from ../src/squid.h:158,
>  from ../src/Store.h:40,
>  from ../src/shell/SquidShell.h:4,
>  from ../include/config.h:68,
>  from assert.cc:34:
> ../include/snmp_debug.h:7:56: error: expected initializer before
> ‘PRINTF_FORMAT_ARG2’
...
> make[1]: *** [assert.o] Error 1
> make[1]: Leaving directory `/home/arthur/workspace/squid/compat'
> make: *** [all-recursive] Error 1
> root@dreambox:/home/arthur/workspace/squid# 
> /
> Can't get rid of this trouble. Can someone point out what I'm doing wrong?
> Thanks.
> 
> My source files are already included in Makefile and configure.in. I have no
> problem with that. I could even make a Squid example with empty SquidShell.cc
> and SquidShell.h files.


Hard to say without seeing your code (you should post a diff), but it
sounds like you are #including SquidShell.h header file in
include/config.h, which is wrong. Virtually nothing outside src/shell/
(and most certainly not config.h!) should know about SquidShell.h.

For example, squid/compat should build fine even if you put unparsable
garbage inside SquidShell.h.


HTH,

Alex.


Can not include my code into squid

2011-10-04 Thread Arthur Tumanyan
Hi all. I have some trouble. When I try to make Squid with my sources
(SquidShell.cc, SquidShell.h), I get this error:
/root@dreambox:/home/arthur/workspace/squid# make
Making all in compat
make[1]: Entering directory `/home/arthur/workspace/squid/compat'
g++ -DHAVE_CONFIG_H  -I.. -I../include -I../lib -I../src -I../include 
-Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT
-g -O2 -MT assert.o -MD -MP -MF .deps/assert.Tpo -c -o assert.o assert.cc
In file included from ../include/config.h:68:0,
 from assert.cc:34:
../src/shell/SquidShell.h:9:18: error: ‘StoreEntry’ has not been declared
make[1]: *** [assert.o] Error 1
make[1]: Leaving directory `/home/arthur/workspace/squid/compat'
make: *** [all-recursive] Error 1
root@dreambox:/home/arthur/workspace/squid# 
/
Then I add #include "Store.h" to SquidShell.h
and get this:
/Making all in compat
make[1]: Entering directory `/home/arthur/workspace/squid/compat'
g++ -DHAVE_CONFIG_H  -I.. -I../include -I../lib -I../src -I../include 
-Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT
-g -O2 -MT assert.o -MD -MP -MF .deps/assert.Tpo -c -o assert.o assert.cc
In file included from ../include/snmp.h:74:0,
 from ../include/cache_snmp.h:19,
 from ../src/squid.h:158,
 from ../src/Store.h:40,
 from ../src/shell/SquidShell.h:4,
 from ../include/config.h:68,
 from assert.cc:34:
../include/snmp_debug.h:7:56: error: expected initializer before
‘PRINTF_FORMAT_ARG2’
In file included from ../include/util.h:66:0,
 from ../src/squid.h:165,
 from ../src/Store.h:40,
 from ../src/shell/SquidShell.h:4,
 from ../include/config.h:68,
 from assert.cc:34:
../include/SquidNew.h: In function ‘void* operator new(size_t)’:
../include/SquidNew.h:47:24: error: ‘xmalloc’ was not declared in this scope
../include/SquidNew.h: In function ‘void operator delete(void*)’:
../include/SquidNew.h:51:18: error: ‘xfree’ was not declared in this scope
../include/SquidNew.h: In function ‘void* operator new [](size_t)’:
../include/SquidNew.h:55:24: error: ‘xmalloc’ was not declared in this scope
../include/SquidNew.h: In function ‘void operator delete [](void*)’:
../include/SquidNew.h:59:18: error: ‘xfree’ was not declared in this scope
In file included from ../include/Array.h:40:0,
 from ../include/Stack.h:37,
 from ../include/splay.h:11,
 from ../include/MemPool.h:26,
 from ../src/squid.h:167,
 from ../src/Store.h:40,
 from ../src/shell/SquidShell.h:4,
 from ../include/config.h:68,
 from assert.cc:34:
../include/fatal.h: At global scope:
../include/fatal.h:5:47: error: expected initializer before
‘PRINTF_FORMAT_ARG1’
In file included from ../include/Stack.h:37:0,
 from ../include/splay.h:11,
 from ../include/MemPool.h:26,
 from ../src/squid.h:167,
 from ../src/Store.h:40,
 from ../src/shell/SquidShell.h:4,
 from ../include/config.h:68,
 from assert.cc:34:
../include/Array.h: In static member function ‘static void*
Vector::operator new(size_t)’:
../include/Array.h:124:24: error: there are no arguments to ‘xmalloc’ that
depend on a template parameter, so a declaration of ‘xmalloc’ must be
available
../include/Array.h:124:24: note: (if you use ‘-fpermissive’, G++ will accept
your code, but allowing the use of an undeclared name is deprecated)
../include/Array.h: In static member function ‘static void
Vector::operator delete(void*)’:
../include/Array.h:131:19: error: there are no arguments to ‘xfree’ that
depend on a template parameter, so a declaration of ‘xfree’ must be
available
In file included from ../src/HttpHeaderRange.h:40:0,
 from ../src/HttpHeader.h:37,
 from ../src/structs.h:38,
 from ../src/squid.h:169,
 from ../src/Store.h:40,
 from ../src/shell/SquidShell.h:4,
 from ../include/config.h:68,
 from assert.cc:34:
../src/Packer.h: At global scope:
../src/Packer.h:62:47: error: expected initializer before
‘PRINTF_FORMAT_ARG2’
In file included from ../src/squid.h:169:0,
 from ../src/Store.h:40,
 from ../src/shell/SquidShell.h:4,
 from ../include/config.h:68,
 from assert.cc:34:
../src/structs.h:127:5: error: ‘regex_t’ does not name a type
In file included from ../src/squid.h:169:0,
 from ../src/Store.h:40,
 from ../src/shell/SquidShell.h:4,
 from ../include/config.h:68,
 from assert.cc:34:
../src/structs.h:1067:5: error: ‘regex_t’ does not name a type
In file included from ../src/protos.h

Re: [MERGE] c++-refactor HttpHdrCc

2011-10-04 Thread Alex Rousskov
On 10/03/2011 08:49 PM, Amos Jeffries wrote:

>> We got a malformed max-stale value. We have only
>> two options when it comes to that value interpretation, I think:
>>
>>A1. Treat it as valueless max-stale.
>>A2. Ignore it completely.

>> BTW, we have three options when it comes to forwarding the malformed
>> directive:
>>
>>B1. Forward our own valid directive.
>>B2. Forward nothing.
>>B3. Forward the malformed directive.


> The new updated RFC texts from HTTPbis are now stating:
> 
> "
>A proxy, whether or not it implements a cache, MUST pass cache
>directives through in forwarded messages, regardless of their
>significance to that application, since the directives might be
>applicable to all recipients along the request/response chain.  It is
>not possible to target a directive to a specific cache.
> "
> 
> So we do not have B2 as a compliant option. If we receive it, it MUST be
> relayed.

I disagree. The above HTTPbis text does not talk about malformed
directives, which are the only subject of the current discussion. B2 is
acceptable for malformed directives, IMO, even though B3 would match the
spirit of the above spec better.


> Erasing it could be an option, but not a compliant one. Stripping the
> unknown-value to make it less strict than we received would be more
> likely to add problems in downstream software which might understand it.

Squid can either drop the malformed directive completely (B2) or forward
the original malformed directive (B3). B2 is easy to implement. B3 is
not so easy and is probably not very useful in practice so I would not
spend cycles on it, but it is Kinkie's call.

Forwarding something other than the original is a bad idea, IMO, because
it may result in more stale objects than the client wants to accept (B1).


> That said, the spec clearly outlines it as delta-seconds, so if it's not
> parseable as delta-seconds it's clearly malformed and B1 probably should
> not pass "max-stale".

Exactly. Although, technically, it is easy to construct a well-formed
max-stale value that Squid would consider malformed:

  max-stale=

The same is true for virtually all integer-based values in HTTP. We can
hope such values will never be found in real requests until we meet
aliens that write software which can cache objects for eternity.


> Which leaves us in a mess because;
>  A1->B1 seems intuitively the right thing, but has bad side effects.

Neither A1 nor B1 are the right things, IMO, because they may lead to
responses that are more stale than what the client is willing to accept.


>  A1->B3 seems like a reasonable alternative to avoid the effects, but
> gambles on delta-seconds never changing.

A1 has a side effect of serving a possibly "too stale" object to the
client. You may be confused by the old source code comments (now fixed
by Kinkie) that valueless max-stale means "always stale". It actually
means "never stale". Thus, A1 and B1 convert "possibly stale at some
unknown point in time" to "never stale" which feels like a bad idea.


> I think we might have to get around this by using the max_stale
> directive (or refresh_pattern override value) to limit the max-stale
> value relayed upstream from the current Squid. If we consider this to be
> an administrative override determining that this Squid will not emit
> objects older than max_stale its reasonable to shrink the requested
> staleness range from the valueless "anything" and ensure we get an
> object as current as we want, within the wider range of what the client
> wants.

When max-stale=value is malformed, we do not know whether we shrink or
extend it by supplying our own value. That is why I think A2 is the only
acceptable option for Squid internal interpretation and B2 and B3 are
the only acceptable options for forwarding.


Cheers,

Alex.



Re: [MERGE] c++-refactor HttpHdrCc

2011-10-04 Thread Kinkie
If you guys are OK, I'll merge the current implementation, then we can
alter it post-merge.

Right now what Squid does is A1->B2 for all directives except for
max-stale, for which it does B4 (forward valueless). This is probably
the worst thing that can be done.

IMHVO the best thing we could do is forward things we can't parse
as-is (A2-B3). If we change nothing we can probably also keep a copy
of the raw directives, saves some cycles to reassemble it at pack-time
.


On Tue, Oct 4, 2011 at 4:49 AM, Amos Jeffries  wrote:
> On Mon, 03 Oct 2011 18:02:38 -0600, Alex Rousskov wrote:
>>
>> On 09/30/2011 09:30 PM, Amos Jeffries wrote:
>>>
>>> On Thu, 29 Sep 2011 17:59:13 +0200, Kinkie wrote:

 On Thu, Sep 29, 2011 at 5:55 PM, Alex Rousskov wrote:
>>
>>> We got a malformed max-stale value. We have only
>>> two options when it comes to that value interpretation, I think:
>>>
>>>   A1. Treat it as valueless max-stale.
>>>   A2. Ignore it completely.
>>>
>>> Since valueless max-stale results in possibly very stale content
>>> being
>>> served to an unsuspecting client, I think #2 is much safer (and
>>> compliant with HTTP).
>>
>>> BTW, we have three options when it comes to forwarding the malformed
>>> directive:
>>>
>>>   B1. Forward our own valid directive.
>>>   B2. Forward nothing.
>>>   B3. Forward the malformed directive.
>>>
>>> While option B3 could be the best, it requires more parsing changes.
>>> Option B2 is much better than B1, IMO, and requires minimal changes.
>>>
>>> We currently implement A1 and B1.
>>
>>
>> Hi Amos,
>>
>>    I sense miscommunication. Please keep in mind that A* options are
>> concerned with internal interpretation of a malformed CC directive by
>> Squid while B* options are concerned with forwarding of a malformed CC
>> directive by Squid. We treat those two areas separately, and we do not
>> discuss a whole CC header.
>
> Understood.
>
>>
>>
>>> B2 is not a valid option AFAIK.
>>
>> Why not? We can drop a CC directive if we consider it malformed, I
>> think. And if it is the only directive, we would drop the whole CC
>> header, of course.
>>
>> In fact, don't we already implement B2 for all other malformed
>> integer-valued CC directives?
>
> Possibly. This does not make it a full case though. It seems to be one
> instance of B1 where our fixed valid value happens to be nil.
>
> ie  A1->B2 is not valid.  and for the reasons outlined below A2->B2 is not
> valid.
>
>>
>>> HTTP does not offer the option A2 as you are discussing right now.
>>> Since we know the header we must either treat it as an option known (A1)
>>> or something unknown (A3).
>>
>> A2 is "ignore it", which (from interpretation point of view) is
>> equivalent to "treat it as something unknown" because Squid ignores
>> unknown CC directives.
>>
>>
>>>  * A3 means we can ignore it for operational purposes (like A2), the
>>> natural result being B3.
>>
>> I do not understand the difference between A2 and A3. A2 does not imply
>> B3 though. A2 implies (either B2 or B3).
>
> Possible miscommunication here. I was reading your A* as what to do with the
> values from the point of parser sighting them onwards.
>
> A2 as I see it means produce 'other' key-value token from the parse.
> Ignoring it internally. The only compliant action on output is B3. This
> could lead to accepting a second instance of max-stale as the real
> max-stale. So I'm kind of against choosing this action.
>
>
> A3 as I see it means parse and accept as a unknown *non-nil* value of
> max-stale. Treating it internally as valueless (or zero value to be very
> conservative?), BUT the compliant output is B3. Without the problems of
> accidentally emitting 2 max-stale values, because even if ignored we record
> in the state that a max-stale was seen.
>
>
>>
>>> A1 means we have to decide betrween B1 or B3. Treating it as a known
>>> directive which has been extended with some unknown extension value
>>> (B3). Or as definitely malformed and invalid (B1).
>>
>> A1 can be combined with all three Bs.  I would not recommend A1 at all
>> though because we may end up serving stale content to a client that does
>> not want it. If we have to gamble, it is better to serve "too fresh"
>> than "too stale" content, IMO (it is also compliant!).
>
> The new updated RFC texts from HTTPbis are now stating:
>
> "
>   A proxy, whether or not it implements a cache, MUST pass cache
>   directives through in forwarded messages, regardless of their
>   significance to that application, since the directives might be
>   applicable to all recipients along the request/response chain.  It is
>   not possible to target a directive to a specific cache.
> "
>
> So we do not have B2 as a compliant option. If we receive it, it MUST be
> relayed.
>
> Erasing it could be an option, but not a compliant one. Stripping the
> unknown-value to make it less strict than we received would be more likely
> 

Re: [PATCH 2/2] Remove entryLimitAllowed() TODO from RockSwapDirRr::run().

2011-10-04 Thread Amos Jeffries

On 03/10/11 18:19, Dmitry Kurochkin wrote:


RockSwapDirRr::run() calls sd->entryLimitAllowed() when sd does not
have a map yet, which causes entryLimitAllowed() to use zero for the
lower limit.  But that is OK, so we can remove the TODO.
---
  src/fs/rock/RockSwapDir.cc |1 -
  1 files changed, 0 insertions(+), 1 deletions(-)



+1. Both these look okay to me. Please combine on merge though.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.15
  Beta testers wanted for 3.2.0.12


Re: API inconsistency in HttpHeader.cc?

2011-10-04 Thread Kinkie
Found the issue. I had inadvertently flipped a test in
HttpStateData::httpBuildRequestHeader, which caused Cc to be defined
but not to be filled-in.
So now that code-path is not taken anymore, and I will then merge the
playground branch soon.
However, these four methods IMVHO have a weakness (not
security-related), which will hopefully be solved by StringNg.

Thanks for the eyeballs,
  Kinkie


On Tue, Oct 4, 2011 at 3:18 AM, Amos Jeffries  wrote:
> On Mon, 03 Oct 2011 17:31:30 -0600, Alex Rousskov wrote:
>>
>> On 10/03/2011 04:42 PM, Amos Jeffries wrote:
>>>
>>> On Mon, 03 Oct 2011 10:09:42 -0600, Alex Rousskov wrote:

 On 10/03/2011 08:37 AM, Kinkie wrote:
>
> Hi all,
>  while working on playground (HttpHdrCc c++-ification, mostly), I
> stumbled upon something which worries me a bit, and I wonder why it is
> not causing issues.
>
> HttpHeader.cc defines a few functions which add headers, I'm zoning in
> onto HttpHeader::putCc as is the one I've been looking at the most.
> Here it is:
>
> void
> HttpHeader::putCc(const HttpHdrCc * cc)
> {
>    MemBuf mb;
>    Packer p;
>    assert(cc);
>    /* remove old directives if any */
>    delById(HDR_CACHE_CONTROL);
>    /* pack into mb */
>    mb.init();
>    packerToMemInit(&p, &mb);
>    cc->packInto(&p);
>    /* put */
>    addEntry(new HttpHeaderEntry(HDR_CACHE_CONTROL, NULL, mb.buf));
>    /* cleanup */
>    packerClean(&p);
>    mb.clean();
> }
>
> The problem is that addEntry is initialized from the raw storage of a
> MemBuf (filled-in via packer), expecting a NULL-terminated c-string
> value.
> Which may very well not be the case. For instance, if noone as written
> anything to the MemBuf.

 I suspect it was impossible for the old code to call putCc with an empty
 Cache-Control header under normal conditions. Abnormal conditions were
 rare, [nearly] never fatal because the buffer is [usually] zeroed on
 allocation, and so the bug was never noticed.
>>
>> I should have made my comments less specific to the empty CC header
>> case. What Kinkie is asking about is null-termination which is not
>> specific to empty CC. Kinkie just used an empty CC as an obvious example
>> of a not explicitly terminated mb.buf.
>>
>
> Ah, Right.
>
>>
>>> Empty CC is invalid HTTP. If you are creating an empty CC header
>>> something is wrong elsewhere in the code.
>>
>> While it is true that empty CC on the wire is syntactically invalid, I
>> can think of three cases where an object representing it may appear
>> inside Squid:
>>
>>  1. Squid trying to preserve original headers instead of trying to
>> "fix" them. I do not think we do that to CC now, but many proxy admins
>> want their proxies to be minimally invasive. Yes, this creates
>> smuggling-like exploitation opportunities but so is fixing headers (it
>> is essentially two sides of the same coin).
>
> For this case I think we want to have a header-blob construct. Essentially
> what I'm expecting the HeaderParser to produce after SBuf conversion:
>  - a 'parent' SBuf pointing to the start of the header line and running to
> terminal '\n'.
>  - a 'label' SBuf pointing to the start of the header name and running to
> terminal ':'.
>  - a 'header-value' SBuf pointing to the start of the non-whitespace header
> values and running to terminal '\n'.
>
> When preserving originals we decide whether to output the second two as a
> pair, like we are supposed to for compliant syntax cleaning, resulting in
> trimmed whitespace garbage. Or for fully transparent, dump the first back
> onto the wire.
>
>>
>>  2. Squid obeying configuration options or some kind of internal logic
>> that removes an existing CC directive. I am not sure there are cases
>> like that now, but that is not important for this thread. There is no
>> code that checks that removing a directive does not lead to an empty
>> cache control header. Moreover, I do not think we need such code as it
>> would make callers life difficult.
>
> I think we may have this as a unreported problem. There was a comment in
> HTTPbis about proxies emitting empty headers questioning what to do about
> storage in that case. So likely there is a portion of the user base causing
> or experiencing this brokenness.
>
>>
>>  3. Debugging. We may print an "under construction" CC header that is
>> currently empty. I doubt we do that now, but I doubt an audit would
>> catch this if we start doing it later. And debugging code is not the
>> place to check for header validity anyway.
>>
>> Until we start doing #1, we can simply check for an empty CC header when
>> packing it to solve #2 and #3 (but this is not what Kinkie is asking
>> about, I think).
>>
>>
>>> Same for range and SC headers.
>>>
>>> In the case of SC replacing a CC with nothing there will still be a  '
>>> field="" ' string value to set.
>>>

> A possible solution