Re: [PATCH] rock fixes and improvements

2014-04-29 Thread Amos Jeffries

+1. All good apart from one minor nit:

src/tests/stub_store.cc
 * the new checkCacheable() needs to be STUB_RETVAL(false).

That can be added on merge.

Thank you for all these
Amos



Re: [PATCH] cache_peer standby=N

2014-04-29 Thread Amos Jeffries
On 29/04/2014 8:46 a.m., Alex Rousskov wrote:
 On 04/27/2014 10:02 PM, Amos Jeffries wrote:
 
 We should state the problem with idles clearly (yes it is difficult to
 word),
 
 We already do that:
 
 +max-conn limit works poorly when there is a relatively
 +large number of idle persistent connections with the
 +peer because the limiting code does not know that
 +Squid can often reuse some of those idle connections.
 

Oh, that's what that was about.

 Do you want us to add a "This poor idle connection management is a
 problem." sentence to the above?

That would make it worse IMO.

What about:

max-conn works poorly with persistent connections and may prevent a peer
being selected when there are idle connections because the limiting code
does not know whether Squid can reuse some of those idle connections.


 
 or we fix that problem (see below) and update the documentation
 
 The change is not trivial, so I do not think we should be forced to do
 that as a part of this project. There are many problems with idle
 connections, and we are not making them worse by adding the standby
 pools, quite the opposite. It feels like we are being penalized for
 improving documentation of ancient problems.
 

If you want to do it as a followup fine. I just do not see a particular
need to delay fixing a bug with a (now) known solution.

Amos



Re: [PATCH] ConnStateData flexible transport support

2014-04-29 Thread Amos Jeffries
On 29/04/2014 9:12 a.m., Alex Rousskov wrote:
 On 04/28/2014 07:10 AM, Amos Jeffries wrote:
 
 * ssl-bump transforms the transportVersion from whatever it was
 previously (usually HTTP or HTTPS) to HTTPS.
 
 +/// the transport protocol currently being spoken on this connection
 +AnyP::ProtocolVersion transportVersion;
 
 
 If I recall our earlier discussions correctly, transport is TCP, UDP,
 and such while HTTP and HTTPS are transfer protocols. It sounds like
 you want to use the new data member for transfer, not transport
 protocols. If yes, the member should be renamed accordingly.
 
 Also, if this is often about the protocol name rather than just its
 version, let's remove Version from the data member name (PortCfg
 already uses that approach).
 
 Finally, the patch does not actually use the version part of the
 transportVersion data member AFAICT. Perhaps the data member type should
 be changed from ProtocolVersion to ProtocolType?


I'm fine with "transferProtocol" or just "protocol" (although that may
get as common as "conn" has become). Whichever suits you.

The version is not used yet only because ssl-bump does not need it
(AFAIK; possibly if ssl-bump identified that the client is using HTTP/0.9
inside the CONNECT tunnel it might want to do special things, but we don't
today).
 The 2.0 code needs the version to make several key logic decisions,
like which parser to allocate/use on reads, how to manage the
pipeline, what to do with 1xx messages etc.


 
 -debugs(33, 5, HERE << "converting " << clientConnection << " to SSL");
 +debugs(33, 5, "converting " << clientConnection << " to SSL");
 
 To reduce merge conflicts, please do not remove HERE from debugging
 lines that otherwise do not need to be changed.
 

Okay.

 
 This variable can
 be altered whenever necessary to cause an on-wire protocol change.
 
 Altering the data member does not cause an on-wire protocol change in
 the patched code AFAICT. Perhaps you meant that the data member should
 always reflect the current wire protocol?
 

Yes. However I was planning to use it to decide which Packer was
allocated. So it is both a reflection of the transfer protocol and a
cause of how messages are converted to bytes.
 I am fine with documenting it as a reflection though.

Amos


Re: [PATCH] cache_peer standby=N

2014-04-29 Thread Alex Rousskov
On 04/29/2014 05:48 AM, Amos Jeffries wrote:
 On 29/04/2014 8:46 a.m., Alex Rousskov wrote:
 On 04/27/2014 10:02 PM, Amos Jeffries wrote:

 We should state the problem with idles clearly (yes it is difficult to
 word),

 We already do that:

 +   max-conn limit works poorly when there is a relatively
 +   large number of idle persistent connections with the
 +   peer because the limiting code does not know that
 +   Squid can often reuse some of those idle connections.


 What about:
 
 max-conn works poorly with persistent connections and may prevent a peer
 being selected when there are idle connections because the limiting code
 does not know whether Squid can reuse some of those idle connections.
 

Sure, I would just emphasize that a peer may be excluded only when it
reached the limit, not just when it has some pconns:


max-conn currently works poorly with idle persistent connections: When a
peer reaches its max-conn limit, and there are idle persistent
connections to the peer, the peer may not be selected because the
limiting code does not know whether Squid can reuse those idle connections.
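The interaction being documented arises in configurations like this hypothetical fragment (host and numbers invented for illustration):

```
# hypothetical squid.conf fragment
# standby=N keeps N unused connections ready, while max-conn caps the
# total; idle persistent connections count against the max-conn limit,
# so a peer at its limit may be skipped even though pconns are reusable
cache_peer peer.example.com parent 3128 0 standby=10 max-conn=50
```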



 or we fix that problem (see below) and update the documentation

 The change is not trivial, so I do not think we should be forced to do
 that as a part of this project. There are many problems with idle
 connections, and we are not making them worse by adding the standby
 pools, quite the opposite. It feels like we are being penalized for
 improving documentation of ancient problems.


 If you want to do it as a followup fine. I just do not see a particular
 need to delay fixing a bug with a (now) known solution.

The solution you have outlined is incomplete. The correct solution will
involve more work than you think. There are worse problems to work on.
This problem is old, and the new feature being reviewed does not make it
worse. IMO, this situation clearly falls into the "quality patches
welcomed" category, not the "if you want the standby feature to be accepted,
you must [promise to] fix the idle connection problem" category.

Alex.



Re: [PATCH] ConnStateData flexible transport support

2014-04-29 Thread Alex Rousskov
On 04/29/2014 06:03 AM, Amos Jeffries wrote:

 I'm fine with "transferProtocol" or just "protocol" (although that may
 get as common as "conn" has become). Whichever suits you.

Let's avoid "protocol" for the reasons you mentioned and use
"transferProtocol".


 This variable can
 be altered whenever necessary to cause an on-wire protocol change.

 Altering the data member does not cause an on-wire protocol change in
 the patched code AFAICT. Perhaps you meant that the data member should
 always reflect the current wire protocol?


 Yes. However I was planning to use it to decode which Packer was
 allocated.

Sure, and the new data member is already used for other things. To the
extent possible, the description should focus on the meaning, not use,
especially when the use cases are diverse.


Thank you,

Alex.



How long is a domain or url can be?

2014-04-29 Thread Eliezer Croitoru
I am working on an external_acl helper and I want to work with a DB of
URLs and domains.


I know that there is a limit on domain and URL sizes but I am not
sure where to look for it (an RFC?) or what it is.
Since the DB at hand is MySQL I have the option to use one of two or
three column types:
TEXT, which is for big text, and VARCHAR with a fixed maximum size I can
define.


If you can assist me it will help me.

Eliezer



Re: /bzr/squid3/trunk/ r13384: Bug 1961: pt1: URL handling redesign

2014-04-29 Thread Tsantilas Christos
On 04/28/2014 04:57 AM, Amos Jeffries wrote:
 On 28/04/2014 5:35 a.m., Tsantilas Christos wrote:
 Unfortunately this does not build with eCAP.

 The eCAP code uses HttpMsg::protocol to describe the protocol for both
 requests and responses.
 It looks like HttpReply::protocol was never set (am I missing something?).
 
 That is correct. HttpReply::protocol no longer exists.

 

 Is it a bad idea to replace HttpMsg::protocol with a virtual method
 which returns HttpRequest::url.getScheme() for HttpRequest objects, and
 HttpReply::sline.protocol for HttpReply objects?

 
 Those two protocol details should be kept separate because replies do
 not have a URL or Scheme in their first-line. It is possible for the
 request URL scheme protocol to be anything, but the reply message
 syntax/protocol should always be one of HTTP 0.9/1.0/1.1 or ICY 1.0 at
 present.

Exactly. But for ECAP we are sending both requests and replies.


 
 What the eCAP field needs to be set to depends on its definition:
 
 * If it is sending the scheme/protocol of the URL *in* the message then
 it should be the url.getScheme() string on requests and a value
 signifying non-existent on replies

We are passing both request and reply HTTP messages to eCAP. The HTTP reply
message may have a different scheme/protocol from the HTTP request that
caused the reply.
For example, for an https:// request we may pass eCAP an HTTP/1.0
or HTTP/1.1 reply because of SSL-bump.

The protocol information is included in HTTP replies. Currently we can
recognize only HTTP 0.9/1.0/1.1 or ICY replies, but these are the only
replies which can be sent to eCAP...

 
 * If it is sending the scheme of the URL which the message was generated
 *for* then the reply has a request member which can be used to access
 the URL details of the request which triggered this reply.
 
 * If it is sending an indicator of what syntax message to parse, then
 the http_ver member should be used instead for both requests and replies.
 
 * If it is sending the on-wire protocol used to communicate with the
 current client or peer/server. Then we have nothing currently to signal
 that. I have one untested patch coming up but for now the best
 workaround that can be used is http_ver for both requests and replies.

My opinion is that we need to send to eCAP the scheme of the URL for
HTTP requests and the protocol information included in the headers for
replies. At least as a temporary solution to allow eCAP to compile for
now, which is also compatible with the current behaviour.

A virtual method providing this information for HttpMsg objects is, I
believe, a good solution.
If we have a problem with definitions, we can give it a name
different from HttpMsg::protocol(), e.g.
HttpMsg::estimatedProtocol() or something like this...



 
 Amos
 
 



Re: [PATCH] cache_peer standby=N

2014-04-29 Thread Amos Jeffries
On 30/04/2014 2:10 a.m., Alex Rousskov wrote:
 On 04/29/2014 05:48 AM, Amos Jeffries wrote:
 On 29/04/2014 8:46 a.m., Alex Rousskov wrote:
 On 04/27/2014 10:02 PM, Amos Jeffries wrote:

 We should state the problem with idles clearly (yes it is difficult to
 word),

 We already do that:

 +  max-conn limit works poorly when there is a relatively
 +  large number of idle persistent connections with the
 +  peer because the limiting code does not know that
 +  Squid can often reuse some of those idle connections.
 
 
 What about:
 
 max-conn works poorly with persistent connections and may prevent a peer
 being selected when there are idle connections because the limiting code
 does not know whether Squid can reuse some of those idle connections.
 
 
 Sure, I would just emphasize that a peer may be excluded only when it
 reached the limit, not just when it has some pconns:
 
 
 max-conn currently works poorly with idle persistent connections: When a
 peer reaches its max-conn limit, and there are idle persistent
 connections to the peer, the peer may not be selected because the
 limiting code does not know whether Squid can reuse those idle connections.
 
 

+1.

Amos



Re: How long is a domain or url can be?

2014-04-29 Thread Kinkie
http://www.boutell.com/newfaq/misc/urllength.html

Squid defines MAX_URL at 8KiB (in src/defines.h)


On Tue, Apr 29, 2014 at 4:40 PM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 I am working on an external_acl helper and I want to work with a DB of URLs
 and domains.

 I know that there is a limit on domain and URL sizes but I am not
 sure where to look for it (an RFC?) or what it is.
 Since the DB at hand is MySQL I have the option to use one of two or
 three column types:
 TEXT, which is for big text, and VARCHAR with a fixed maximum size I can
 define.

 If you can assist me it will help me.

 Eliezer




-- 
Francesco


Re: [PATCH] cache_peer standby=N

2014-04-29 Thread Tsantilas Christos
To summarize, the requested changes are:

*in cf.data.pre:
replace:
max-conn limit works poorly when there is a relatively
large number of idle persistent connections with the
peer because the limiting code does not know that
Squid can often reuse some of those idle connections.

with:
max-conn currently works poorly with idle persistent connections: When a
peer reaches its max-conn limit, and there are idle persistent
connections to the peer, the peer may not be selected because the
limiting code does not know whether Squid can reuse those idle connections.


* in src/cache_cf.cc:
  please use "ERROR:" instead of "parse_peer:" in 'fatalf(parse_peer:'...

* in pconn.h:
document closeN as follows:
   /// closes any n connections, regardless of their destination
void closeN(int n);

* in neighbors.cc
add the following comment at the beginning of peerDNSConfigure:
// TODO: connections to no-longer valid IP addresses should be
// closed when we can detect such IP addresses.


Is it OK?



Re: How long is a domain or url can be?

2014-04-29 Thread Eliezer Croitoru

Thanks,

I have seen this URL and I remembered that there was something inside Squid.

I assumed that, based on this link, it should be pretty high.
I noticed that the Apache web server has a limit of 128KB for a URL.

So I am a bit confused, but I am thinking about a URL filtering DB.

Eliezer

On 04/29/2014 06:28 PM, Kinkie wrote:

http://www.boutell.com/newfaq/misc/urllength.html

Squid defines MAX_URL at 8KiB (in src/defines.h)




Re: [PATCH] cache_peer standby=N

2014-04-29 Thread Amos Jeffries
On 30/04/2014 4:00 a.m., Tsantilas Christos wrote:
 To summarize, the requested changes are:
 
 *in cf.data.pre:
 replace:
 max-conn limit works poorly when there is a relatively
 large number of idle persistent connections with the
 peer because the limiting code does not know that
 Squid can often reuse some of those idle connections.
 
 with:
 max-conn currently works poorly with idle persistent connections: When a
 peer reaches its max-conn limit, and there are idle persistent
 connections to the peer, the peer may not be selected because the
 limiting code does not know whether Squid can reuse those idle connections.
 
 
 * in src/cache_cf.cc:
   please use "ERROR:" instead of "parse_peer:" in 'fatalf(parse_peer:'...
 
 * in pconn.h:
 document closeN as follows:
/// closes any n connections, regardless of their destination
 void closeN(int n);
 
 * in neighbors.cc
 add the following comment at the beginning of peerDNSConfigure:
 // TODO: connections to no-longer valid IP addresses should be
 // closed when we can detect such IP addresses.
 
 
 Is it OK?
 

Yes. Since it's all documentation I don't think it needs another review.

Amos


Re: How long is a domain or url can be?

2014-04-29 Thread Francesco
Well, Squid can't send you more than 8 KiB, so I'd consider that as the limit.
Most browsers will send much shorter URLs than that, if the page I referenced
is to be believed.

Kinkie

On 29 Apr 2014, at 18:02, Eliezer Croitoru elie...@ngtech.co.il wrote:

 Thanks,
 
 I have seen this URL and I remembered that there was something inside Squid.
 
 I assumed that, based on this link, it should be pretty high.
 I noticed that the Apache web server has a limit of 128KB for a URL.
 
 So I am a bit confused, but I am thinking about a URL filtering DB.
 
 Eliezer
 
 On 04/29/2014 06:28 PM, Kinkie wrote:
 http://www.boutell.com/newfaq/misc/urllength.html
 
 Squid defines MAX_URL at 8KiB (in src/defines.h)
 



Re: How long is a domain or url can be?

2014-04-29 Thread Amos Jeffries
On 30/04/2014 4:02 a.m., Eliezer Croitoru wrote:
 Thanks,
 
 I have seen this URL and I remembered that there was something inside
 Squid.
 
 I assumed that, based on this link, it should be pretty high.
 I noticed that the Apache web server has a limit of 128KB for a URL.
 
 So I am a bit confused, but I am thinking about a URL filtering DB.
 

HTTP defines no limit.
 - squid defines MAX_URL of 8KB, along with a header line limit of 64KB
total, and a helper line limit of 32KB total.

DNS defines the X.Y.Z name segments (labels) as being no longer than 63
bytes *each*, with a whole name limited to 255 bytes (RFC 1035).


Amos

 Eliezer
 
 On 04/29/2014 06:28 PM, Kinkie wrote:
 http://www.boutell.com/newfaq/misc/urllength.html

 Squid defines MAX_URL at 8KiB (in src/defines.h)
 



Re: /bzr/squid3/trunk/ r13384: Bug 1961: pt1: URL handling redesign

2014-04-29 Thread Amos Jeffries
On 30/04/2014 2:47 a.m., Tsantilas Christos wrote:
 On 04/28/2014 04:57 AM, Amos Jeffries wrote:
 On 28/04/2014 5:35 a.m., Tsantilas Christos wrote:
 Unfortunately this does not build with eCAP.

 The eCAP code uses HttpMsg::protocol to describe the protocol for both
 requests and responses.
 It looks like HttpReply::protocol was never set (am I missing something?).

 That is correct. HttpReply::protocol no longer exists.
 


 Is it a bad idea to replace HttpMsg::protocol with a virtual method
 which returns HttpRequest::url.getScheme() for HttpRequest objects, and
 HttpReply::sline.protocol for HttpReply objects?


 Those two protocol details should be kept separate because replies do
 not have a URL or Scheme in their first-line. It is possible for the
 request URL scheme protocol to be anything, but the reply message
 syntax/protocol should always be one of HTTP 0.9/1.0/1.1 or ICY 1.0 at
 present.
 
 Exactly. But for ECAP we are sending both requests and replies.
 
 

 What the eCAP field needs to be set to depends on its definition:

 * If it is sending the scheme/protocol of the URL *in* the message then
 it should be the url.getScheme() string on requests and a value
 signifying non-existent on replies
 
 We are passing both request and reply HTTP messages to eCAP. The HTTP reply
 message may have a different scheme/protocol from the HTTP request that
 caused the reply.
 For example, for an https:// request we may pass eCAP an HTTP/1.0
 or HTTP/1.1 reply because of SSL-bump.
 
 The protocol information is included in HTTP replies. Currently we can
 recognize only HTTP 0.9/1.0/1.1 or ICY replies, but these are the only
 replies which can be sent to eCAP...

Do you mean eCAP is supposed to be receiving the syntax/protocol type of
the headers?

That means ...


 

 * If it is sending the scheme of the URL which the message was generated
 *for* then the reply has a request member which can be used to access
 the URL details of the request which triggered this reply.

 * If it is sending an indicator of what syntax message to parse, then
 the http_ver member should be used instead for both requests and replies.


 ... this ^^^  HttpMsg::http_ver ??


 * If it is sending the on-wire protocol used to communicate with the
 current client or peer/server. Then we have nothing currently to signal
 that. I have one untested patch coming up but for now the best
 workaround that can be used is http_ver for both requests and replies.
 
 My opinion is that we need to send to eCAP the scheme of the URL for
 HTTP requests and the protocol information included in the headers for
 replies. At least as a temporary solution to allow eCAP to compile for
 now, which is also compatible with the current behaviour.
 
 A virtual method providing this information for HttpMsg objects is, I
 believe, a good solution.
 If we have a problem with definitions, we can give it a name
 different from HttpMsg::protocol(), e.g.
 HttpMsg::estimatedProtocol() or something like this...
 
 
 

 Amos


 



Build failed in Jenkins: 3.HEAD-amd64-FreeBSD-10 #55

2014-04-29 Thread noc
See http://build.squid-cache.org/job/3.HEAD-amd64-FreeBSD-10/55/changes

Changes:

[Amos Jeffries] Fix order dependency between cache_dir and maximum_object_size

parse_cachedir() has a call to update_maxobjsize() which limits the
store_maxobjsize variable (used as the internal maximum_object_size
variable of the store data structure) to the value of maximum_object_size
defined at the moment of execution of this function, for all stores (all
store directories). So if the cache_dir parser is called before
maximum_object_size, we get the effect of the default 4 MB.

BUT, when we get to parse maximum_object_size line(s) after the last
cache_dir, the maximum_object_size option is processed and only shown on
the cachemgr config page without having updated store_maxobjsize.

--
[...truncated 3561 lines...]
/usr/local/bin/ccache g++ -DHAVE_CONFIG_H  -I../../../.. -I../../../../include 
-I../../../../lib  -I../../../../src -I../../../include  -I/usr/local/include 
-I/usr/include  -I/usr/include -I../../../../libltdl 
-I../../../../helpers/basic_auth/NCSA   -I/usr/include  -I/usr/include -Wall 
-Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe -D_REENTRANT 
-g -O2 -I/usr/local/include -MT crypt_md5.o -MD -MP -MF .deps/crypt_md5.Tpo -c 
-o crypt_md5.o ../../../../helpers/basic_auth/NCSA/crypt_md5.cc
mv -f .deps/crypt_md5.Tpo .deps/crypt_md5.Po
--- basic_ncsa_auth.o ---
mv -f .deps/basic_ncsa_auth.Tpo .deps/basic_ncsa_auth.Po
--- basic_ncsa_auth ---
/bin/sh ../../../libtool  --tag=CXX--mode=link /usr/local/bin/ccache g++ 
-Wall -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe 
-D_REENTRANT  -g -O2 -I/usr/local/include  -g -L/usr/local/lib 
-Wl,-R/usr/local/lib -pthread -o basic_ncsa_auth basic_ncsa_auth.o  crypt_md5.o 
../../../lib/libmisccontainers.la  ../../../lib/libmiscencoding.la  
../../../compat/libcompat-squid.la   -lnettle  -lcrypt -lm 
libtool: link: /usr/local/bin/ccache g++ -Wall -Wpointer-arith -Wwrite-strings 
-Wcomments -Wshadow -Werror -pipe -D_REENTRANT -g -O2 -I/usr/local/include -g 
-Wl,-R/usr/local/lib -pthread -o basic_ncsa_auth basic_ncsa_auth.o crypt_md5.o  
-L/usr/local/lib ../../../lib/.libs/libmisccontainers.a 
../../../lib/.libs/libmiscencoding.a ../../../compat/.libs/libcompat-squid.a 
-lnettle -lcrypt -lm -pthread
Making all in PAM
--- basic_pam_auth.o ---
/usr/local/bin/ccache g++ -DHAVE_CONFIG_H  -I../../../.. -I../../../../include 
-I../../../../lib  -I../../../../src -I../../../include  -I/usr/local/include 
-I/usr/include  -I/usr/include -I../../../../libltdl   -I/usr/include  
-I/usr/include -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow 
-Werror -pipe -D_REENTRANT -g -O2 -I/usr/local/include -MT basic_pam_auth.o -MD 
-MP -MF .deps/basic_pam_auth.Tpo -c -o basic_pam_auth.o 
../../../../helpers/basic_auth/PAM/basic_pam_auth.cc
mv -f .deps/basic_pam_auth.Tpo .deps/basic_pam_auth.Po
--- basic_pam_auth ---
/bin/sh ../../../libtool  --tag=CXX--mode=link /usr/local/bin/ccache g++ 
-Wall -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe 
-D_REENTRANT  -g -O2 -I/usr/local/include  -g -L/usr/local/lib 
-Wl,-R/usr/local/lib -pthread -o basic_pam_auth basic_pam_auth.o 
../../../lib/libmiscencoding.la  ../../../compat/libcompat-squid.la   -lpam  
-lm 
libtool: link: /usr/local/bin/ccache g++ -Wall -Wpointer-arith -Wwrite-strings 
-Wcomments -Wshadow -Werror -pipe -D_REENTRANT -g -O2 -I/usr/local/include -g 
-Wl,-R/usr/local/lib -pthread -o basic_pam_auth basic_pam_auth.o  
-L/usr/local/lib ../../../lib/.libs/libmiscencoding.a 
../../../compat/.libs/libcompat-squid.a -lpam -lm -pthread
Making all in POP3
--- basic_pop3_auth ---
sed -e 's,[@]PERL[@],/usr/bin/perl,g' 
../../../../helpers/basic_auth/POP3/basic_pop3_auth.pl.in basic_pop3_auth || 
(/bin/rm -f -f basic_pop3_auth ; exit 1)
Making all in RADIUS
--- basic_radius_auth.o ---
--- radius-util.o ---
--- basic_radius_auth.o ---
/usr/local/bin/ccache g++ -DHAVE_CONFIG_H  -I../../../.. -I../../../../include 
-I../../../../lib  -I../../../../src -I../../../include  -I/usr/local/include 
-I/usr/include  -I/usr/include -I../../../../libltdl 
-I../../../../helpers/basic_auth/RADIUS   -I/usr/include  -I/usr/include -Wall 
-Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe -D_REENTRANT 
-g -O2 -I/usr/local/include -MT basic_radius_auth.o -MD -MP -MF 
.deps/basic_radius_auth.Tpo -c -o basic_radius_auth.o 
../../../../helpers/basic_auth/RADIUS/basic_radius_auth.cc
--- radius-util.o ---
/usr/local/bin/ccache g++ -DHAVE_CONFIG_H  -I../../../.. -I../../../../include 
-I../../../../lib  -I../../../../src -I../../../include  -I/usr/local/include 
-I/usr/include  -I/usr/include -I../../../../libltdl 
-I../../../../helpers/basic_auth/RADIUS   -I/usr/include  -I/usr/include -Wall 
-Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe -D_REENTRANT 
-g -O2 -I/usr/local/include -MT radius-util.o -MD -MP -MF .deps/radius-util.Tpo 

Build failed in Jenkins: 3.HEAD-amd64-FreeBSD-10-clang #55

2014-04-29 Thread noc
See http://build.squid-cache.org/job/3.HEAD-amd64-FreeBSD-10-clang/55/changes

Changes:

[Amos Jeffries] Fix order dependency between cache_dir and maximum_object_size

parse_cachedir() has a call to update_maxobjsize() which limits the
store_maxobjsize variable used as the internal maximum_object_size
variable of the store data structure) to the value of maximum_object_size
defined at the moment of execution of this function, for all stores (all
store directories). So if parse for cache_dir is called before
maximum_object_size, we get the effect of the default 4 MB.

BUT, when we get to parse maximum_object_size line(s) after the last
cache_dir, the maximum_object_size option is processed and only shown on
the cachemgr config page without having updated store_maxobjsize.

--
[...truncated 6013 lines...]
--- wccp.o ---
depbase=`echo wccp.o | sed 's|[^/]*$|.deps/|;s|\.o$||'`; ccache clang++ 
-DHAVE_CONFIG_H 
-DDEFAULT_CONFIG_FILE=\/usrhttp://build.squid-cache.org/job/3.HEAD-amd64-FreeBSD-10-clang/ws/btlayer-00-default/squid-3.HEAD-BZR/_inst/etc/squid.conf\;
  
-DDEFAULT_SQUID_DATA_DIR=\/usrhttp://build.squid-cache.org/job/3.HEAD-amd64-FreeBSD-10-clang/ws/btlayer-00-default/squid-3.HEAD-BZR/_inst/share\;
  
-DDEFAULT_SQUID_CONFIG_DIR=\/usrhttp://build.squid-cache.org/job/3.HEAD-amd64-FreeBSD-10-clang/ws/btlayer-00-default/squid-3.HEAD-BZR/_inst/etc\;
  -I../.. -I../../include -I../../lib  -I../../src -I../include  
-I/usr/local/include -I/usr/include  -I/usr/include -I../../libltdl  -I../src 
-I../../libltdl -I/usr/include  -I/usr/include   -I/usr/include  -I/usr/include 
-Werror -Qunused-arguments  -D_REENTRANT -g -O2 -I/usr/local/include -MT wccp.o 
-MD -MP -MF $depbase.Tpo -c -o wccp.o ../../src/wccp.cc  mv -f $depbase.Tpo 
$depbase.Po
--- wccp2.o ---
depbase=`echo wccp2.o | sed 's|[^/]*$|.deps/|;s|\.o$||'`; ccache clang++ 
-DHAVE_CONFIG_H 
-DDEFAULT_CONFIG_FILE=\/usrhttp://build.squid-cache.org/job/3.HEAD-amd64-FreeBSD-10-clang/ws/btlayer-00-default/squid-3.HEAD-BZR/_inst/etc/squid.conf\;
  
-DDEFAULT_SQUID_DATA_DIR=\/usrhttp://build.squid-cache.org/job/3.HEAD-amd64-FreeBSD-10-clang/ws/btlayer-00-default/squid-3.HEAD-BZR/_inst/share\;
  
-DDEFAULT_SQUID_CONFIG_DIR=\/usrhttp://build.squid-cache.org/job/3.HEAD-amd64-FreeBSD-10-clang/ws/btlayer-00-default/squid-3.HEAD-BZR/_inst/etc\;
  -I../.. -I../../include -I../../lib  -I../../src -I../include  
-I/usr/local/include -I/usr/include  -I/usr/include -I../../libltdl  -I../src 
-I../../libltdl -I/usr/include  -I/usr/include   -I/usr/include  -I/usr/include 
-Werror -Qunused-arguments  -D_REENTRANT -g -O2 -I/usr/local/include -MT 
wccp2.o -MD -MP -MF $depbase.Tpo -c -o wccp2.o ../../src/wccp2.cc  mv -f 
$depbase.Tpo $depbase.Po
--- whois.o ---
depbase=`echo whois.o | sed 's|[^/]*$|.deps/|;s|\.o$||'`; ccache clang++ 
-DHAVE_CONFIG_H 
-DDEFAULT_CONFIG_FILE=\/usrhttp://build.squid-cache.org/job/3.HEAD-amd64-FreeBSD-10-clang/ws/btlayer-00-default/squid-3.HEAD-BZR/_inst/etc/squid.conf\;
  
-DDEFAULT_SQUID_DATA_DIR=\/usrhttp://build.squid-cache.org/job/3.HEAD-amd64-FreeBSD-10-clang/ws/btlayer-00-default/squid-3.HEAD-BZR/_inst/share\;
  
-DDEFAULT_SQUID_CONFIG_DIR=\/usrhttp://build.squid-cache.org/job/3.HEAD-amd64-FreeBSD-10-clang/ws/btlayer-00-default/squid-3.HEAD-BZR/_inst/etc\;
  -I../.. -I../../include -I../../lib  -I../../src -I../include  
-I/usr/local/include -I/usr/include  -I/usr/include -I../../libltdl  -I../src 
-I../../libltdl -I/usr/include  -I/usr/include   -I/usr/include  -I/usr/include 
-Werror -Qunused-arguments  -D_REENTRANT -g -O2 -I/usr/local/include -MT 
whois.o -MD -MP -MF $depbase.Tpo -c -o whois.o ../../src/whois.cc  mv -f 
$depbase.Tpo $depbase.Po
--- wordlist.o ---
depbase=`echo wordlist.o | sed 's|[^/]*$|.deps/|;s|\.o$||'`; ccache clang++ 
-DHAVE_CONFIG_H 
-DDEFAULT_CONFIG_FILE=\/usrhttp://build.squid-cache.org/job/3.HEAD-amd64-FreeBSD-10-clang/ws/btlayer-00-default/squid-3.HEAD-BZR/_inst/etc/squid.conf\;
  
-DDEFAULT_SQUID_DATA_DIR=\/usrhttp://build.squid-cache.org/job/3.HEAD-amd64-FreeBSD-10-clang/ws/btlayer-00-default/squid-3.HEAD-BZR/_inst/share\;
  
-DDEFAULT_SQUID_CONFIG_DIR=\/usrhttp://build.squid-cache.org/job/3.HEAD-amd64-FreeBSD-10-clang/ws/btlayer-00-default/squid-3.HEAD-BZR/_inst/etc\;
  -I../.. -I../../include -I../../lib  -I../../src -I../include  
-I/usr/local/include -I/usr/include  -I/usr/include -I../../libltdl  -I../src 
-I../../libltdl -I/usr/include  -I/usr/include   -I/usr/include  -I/usr/include 
-Werror -Qunused-arguments  -D_REENTRANT -g -O2 -I/usr/local/include -MT 
wordlist.o -MD -MP -MF $depbase.Tpo -c -o wordlist.o ../../src/wordlist.cc  
mv -f $depbase.Tpo $depbase.Po
--- LoadableModule.o ---
depbase=`echo LoadableModule.o | sed 's|[^/]*$|.deps/|;s|\.o$||'`; ccache 
clang++ -DHAVE_CONFIG_H 
-DDEFAULT_CONFIG_FILE=\/usrhttp://build.squid-cache.org/job/3.HEAD-amd64-FreeBSD-10-clang/ws/btlayer-00-default/squid-3.HEAD-BZR/_inst/etc/squid.conf\;
  

Build failed in Jenkins: 3.HEAD-amd64-OpenBSD-5.4 #58

2014-04-29 Thread noc
See http://build.squid-cache.org/job/3.HEAD-amd64-OpenBSD-5.4/58/

--
Started by upstream project 3.HEAD-amd64-centos-6 build number 317
originally caused by:
 Started by an SCM change
Building remotely on ypg-openbsd-54 (gcc farm amd64-openbsd 5.4 openbsd-5.4 
openbsd amd64-openbsd-5.4 amd64) in workspace 
http://build.squid-cache.org/job/3.HEAD-amd64-OpenBSD-5.4/ws/
$ bzr revision-info -d 
http://build.squid-cache.org/job/3.HEAD-amd64-OpenBSD-5.4/ws/
info result: bzr revision-info -d 
http://build.squid-cache.org/job/3.HEAD-amd64-OpenBSD-5.4/ws/ returned 0. 
Command output: 13384 squ...@treenet.co.nz-20140427075917-90sedongfbg7du97
 stderr: 
[3.HEAD-amd64-OpenBSD-5.4] $ bzr pull --overwrite 
http://bzr.squid-cache.org/bzr/squid3/trunk/
bzr: ERROR: Connection error: Couldn't resolve host 'bzr.squid-cache.org' 
[Errno -5] no address associated with name
ERROR: Failed to pull
Since BZR itself isn't crash safe, we'll clean the workspace so that on the 
next try we'll do a clean pull...
Retrying after 10 seconds
Cleaning workspace...
$ bzr branch http://bzr.squid-cache.org/bzr/squid3/trunk/ 
http://build.squid-cache.org/job/3.HEAD-amd64-OpenBSD-5.4/ws/
bzr: ERROR: Connection error: Couldn't resolve host 'bzr.squid-cache.org' 
[Errno -5] no address associated with name
ERROR: Failed to branch http://bzr.squid-cache.org/bzr/squid3/trunk/
Retrying after 10 seconds
Cleaning workspace...
$ bzr branch http://bzr.squid-cache.org/bzr/squid3/trunk/ 
http://build.squid-cache.org/job/3.HEAD-amd64-OpenBSD-5.4/ws/
bzr: ERROR: Connection error: Couldn't resolve host 'bzr.squid-cache.org' 
[Errno -5] no address associated with name
ERROR: Failed to branch http://bzr.squid-cache.org/bzr/squid3/trunk/



Build failed in Jenkins: 3.HEAD-amd64-FreeBSD-10 #56

2014-04-29 Thread noc
See http://build.squid-cache.org/job/3.HEAD-amd64-FreeBSD-10/56/changes

Changes:

[Amos Jeffries] Resolve 'dying from an unhandled exception: c'

CbcPointer is used from code outside of Job protection where it is not
safe to use Must(). In order to get a useful backtrace we need to assert
immediately at the point of failure. Particularly necessary since these
are in generic operators used everywhere in the code.

--
[...truncated 3555 lines...]
sed -e 's,[@]PERL[@],/usr/bin/perl,g' 
../../../../helpers/basic_auth/MSNT-multi-domain/basic_msnt_multi_domain_auth.pl.in
 > basic_msnt_multi_domain_auth || (/bin/rm -f -f basic_msnt_multi_domain_auth ; 
exit 1)
Making all in NCSA
--- basic_ncsa_auth.o ---
--- crypt_md5.o ---
--- basic_ncsa_auth.o ---
/usr/local/bin/ccache g++ -DHAVE_CONFIG_H  -I../../../.. -I../../../../include 
-I../../../../lib  -I../../../../src -I../../../include  -I/usr/local/include 
-I/usr/include  -I/usr/include -I../../../../libltdl 
-I../../../../helpers/basic_auth/NCSA   -I/usr/include  -I/usr/include -Wall 
-Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe -D_REENTRANT 
-g -O2 -I/usr/local/include -MT basic_ncsa_auth.o -MD -MP -MF 
.deps/basic_ncsa_auth.Tpo -c -o basic_ncsa_auth.o 
../../../../helpers/basic_auth/NCSA/basic_ncsa_auth.cc
--- crypt_md5.o ---
/usr/local/bin/ccache g++ -DHAVE_CONFIG_H  -I../../../.. -I../../../../include 
-I../../../../lib  -I../../../../src -I../../../include  -I/usr/local/include 
-I/usr/include  -I/usr/include -I../../../../libltdl 
-I../../../../helpers/basic_auth/NCSA   -I/usr/include  -I/usr/include -Wall 
-Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe -D_REENTRANT 
-g -O2 -I/usr/local/include -MT crypt_md5.o -MD -MP -MF .deps/crypt_md5.Tpo -c 
-o crypt_md5.o ../../../../helpers/basic_auth/NCSA/crypt_md5.cc
mv -f .deps/crypt_md5.Tpo .deps/crypt_md5.Po
--- basic_ncsa_auth.o ---
mv -f .deps/basic_ncsa_auth.Tpo .deps/basic_ncsa_auth.Po
--- basic_ncsa_auth ---
/bin/sh ../../../libtool  --tag=CXX --mode=link /usr/local/bin/ccache g++ 
-Wall -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe 
-D_REENTRANT  -g -O2 -I/usr/local/include  -g -L/usr/local/lib 
-Wl,-R/usr/local/lib -pthread -o basic_ncsa_auth basic_ncsa_auth.o  crypt_md5.o 
../../../lib/libmisccontainers.la  ../../../lib/libmiscencoding.la  
../../../compat/libcompat-squid.la   -lnettle  -lcrypt -lm 
libtool: link: /usr/local/bin/ccache g++ -Wall -Wpointer-arith -Wwrite-strings 
-Wcomments -Wshadow -Werror -pipe -D_REENTRANT -g -O2 -I/usr/local/include -g 
-Wl,-R/usr/local/lib -pthread -o basic_ncsa_auth basic_ncsa_auth.o crypt_md5.o  
-L/usr/local/lib ../../../lib/.libs/libmisccontainers.a 
../../../lib/.libs/libmiscencoding.a ../../../compat/.libs/libcompat-squid.a 
-lnettle -lcrypt -lm -pthread
Making all in PAM
--- basic_pam_auth.o ---
/usr/local/bin/ccache g++ -DHAVE_CONFIG_H  -I../../../.. -I../../../../include 
-I../../../../lib  -I../../../../src -I../../../include  -I/usr/local/include 
-I/usr/include  -I/usr/include -I../../../../libltdl   -I/usr/include  
-I/usr/include -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow 
-Werror -pipe -D_REENTRANT -g -O2 -I/usr/local/include -MT basic_pam_auth.o -MD 
-MP -MF .deps/basic_pam_auth.Tpo -c -o basic_pam_auth.o 
../../../../helpers/basic_auth/PAM/basic_pam_auth.cc
mv -f .deps/basic_pam_auth.Tpo .deps/basic_pam_auth.Po
--- basic_pam_auth ---
/bin/sh ../../../libtool  --tag=CXX --mode=link /usr/local/bin/ccache g++ 
-Wall -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe 
-D_REENTRANT  -g -O2 -I/usr/local/include  -g -L/usr/local/lib 
-Wl,-R/usr/local/lib -pthread -o basic_pam_auth basic_pam_auth.o 
../../../lib/libmiscencoding.la  ../../../compat/libcompat-squid.la   -lpam  
-lm 
libtool: link: /usr/local/bin/ccache g++ -Wall -Wpointer-arith -Wwrite-strings 
-Wcomments -Wshadow -Werror -pipe -D_REENTRANT -g -O2 -I/usr/local/include -g 
-Wl,-R/usr/local/lib -pthread -o basic_pam_auth basic_pam_auth.o  
-L/usr/local/lib ../../../lib/.libs/libmiscencoding.a 
../../../compat/.libs/libcompat-squid.a -lpam -lm -pthread
Making all in POP3
--- basic_pop3_auth ---
sed -e 's,[@]PERL[@],/usr/bin/perl,g' 
../../../../helpers/basic_auth/POP3/basic_pop3_auth.pl.in > basic_pop3_auth || 
(/bin/rm -f -f basic_pop3_auth ; exit 1)
Making all in RADIUS
--- basic_radius_auth.o ---
--- radius-util.o ---
--- basic_radius_auth.o ---
/usr/local/bin/ccache g++ -DHAVE_CONFIG_H  -I../../../.. -I../../../../include 
-I../../../../lib  -I../../../../src -I../../../include  -I/usr/local/include 
-I/usr/include  -I/usr/include -I../../../../libltdl 
-I../../../../helpers/basic_auth/RADIUS   -I/usr/include  -I/usr/include -Wall 
-Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe -D_REENTRANT 
-g -O2 -I/usr/local/include -MT basic_radius_auth.o -MD -MP -MF 
.deps/basic_radius_auth.Tpo -c -o basic_radius_auth.o 

Build failed in Jenkins: 3.HEAD-amd64-OpenBSD-5.4 #59

2014-04-29 Thread noc
See http://build.squid-cache.org/job/3.HEAD-amd64-OpenBSD-5.4/59/

--
Started by upstream project 3.HEAD-amd64-centos-6 build number 318
originally caused by:
 Started by an SCM change
Building remotely on ypg-openbsd-54 (gcc farm amd64-openbsd 5.4 openbsd-5.4 
openbsd amd64-openbsd-5.4 amd64) in workspace 
http://build.squid-cache.org/job/3.HEAD-amd64-OpenBSD-5.4/ws/
Cleaning workspace...
$ bzr branch http://bzr.squid-cache.org/bzr/squid3/trunk/ 
http://build.squid-cache.org/job/3.HEAD-amd64-OpenBSD-5.4/ws/
bzr: ERROR: Connection error: Couldn't resolve host 'bzr.squid-cache.org' 
[Errno -5] no address associated with name
ERROR: Failed to branch http://bzr.squid-cache.org/bzr/squid3/trunk/
Retrying after 10 seconds
Cleaning workspace...
$ bzr branch http://bzr.squid-cache.org/bzr/squid3/trunk/ 
http://build.squid-cache.org/job/3.HEAD-amd64-OpenBSD-5.4/ws/
bzr: ERROR: Connection error: Couldn't resolve host 'bzr.squid-cache.org' 
[Errno -5] no address associated with name
ERROR: Failed to branch http://bzr.squid-cache.org/bzr/squid3/trunk/
Retrying after 10 seconds
Cleaning workspace...
$ bzr branch http://bzr.squid-cache.org/bzr/squid3/trunk/ 
http://build.squid-cache.org/job/3.HEAD-amd64-OpenBSD-5.4/ws/
bzr: ERROR: Connection error: Couldn't resolve host 'bzr.squid-cache.org' 
[Errno -5] no address associated with name
ERROR: Failed to branch http://bzr.squid-cache.org/bzr/squid3/trunk/