Re: Build failed in Hudson: 3.HEAD-i386-FreeBSD-6.4 #444
This is on west:

checking sasl/sasl.h usability... no
checking sasl/sasl.h presence... no
checking for sasl/sasl.h... no
checking sasl.h usability... no
checking sasl.h presence... no
checking for sasl.h... no
checking for sasl_errstring in -lsasl2... no
checking for sasl_errstring in -lsasl... no
configure: error: Neither SASL nor SASL2 found
buildtest.sh result is 1
BUILD: .././test-suite/buildtests/layer-00-default.opts
configure: error: Neither SASL nor SASL2 found
Build FAILED.

Is it a configure problem (configure fails to detect SASL), or is SASL really not installed on west? If it's the latter, could someone with root access install it?

Thanks!
-- 
/kinkie
Re: Build failed in Hudson: 3.HEAD-i386-FreeBSD-6.4 #444
On Thu, 19 Aug 2010 08:10:01 +0200, Kinkie gkin...@gmail.com wrote:
> This is on west:
>
> checking sasl/sasl.h usability... no
> checking sasl/sasl.h presence... no
> checking for sasl/sasl.h... no
> checking sasl.h usability... no
> checking sasl.h presence... no
> checking for sasl.h... no
> checking for sasl_errstring in -lsasl2... no
> checking for sasl_errstring in -lsasl... no
> configure: error: Neither SASL nor SASL2 found
> buildtest.sh result is 1
> BUILD: .././test-suite/buildtests/layer-00-default.opts
> configure: error: Neither SASL nor SASL2 found
> Build FAILED.
>
> Is it a configure problem (configure fails to detect SASL), or is SASL
> really not installed on west? If it's the latter, could someone with
> root access install it? Thanks!

/usr/local/include/sasl/sasl.h currently exists on west. Unless someone answered your plea and installed it between now and the test, it appears to be a configure problem.

Amos
Hudson build is back to normal: 3.HEAD-i386-FreeBSD-6.4 #446
See http://build.squid-cache.org/job/3.HEAD-i386-FreeBSD-6.4/446/changes
Re: Build failed in Hudson: 3.HEAD-i386-FreeBSD-6.4 #444
> /usr/local/include/sasl/sasl.h currently exists on west. Unless someone
> answered your plea and installed it between now and the test, it appears
> to be a configure problem.

Worked around by specifying CPPFLAGS and LDFLAGS with the extra include and library paths (in the node's properties, not in the projects' properties). Strangely, even though there are compilation errors in the kerberos_ldap_group helper, the build test succeeds.
-- 
/kinkie
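For readers hitting the same problem: the workaround amounts to pointing configure at the third-party install prefix via the preprocessor and linker flag variables. The paths below are an assumption (the conventional FreeBSD ports `/usr/local` layout), not the actual values configured on west.

```shell
# Hypothetical sketch of the node-level workaround: make configure's
# SASL header/library probes look under the ports prefix.
CPPFLAGS="-I/usr/local/include" \
LDFLAGS="-L/usr/local/lib" \
./configure
```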
Build failed in Hudson: 3.HEAD-amd64-CentOS-5.3 #762
See http://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/762/changes

Changes:

[Francesco Chemolli kin...@squid-cache.org] configure.in fix: properly pass default hosts_file option around during build.

[Amos Jeffries amosjeffr...@squid-cache.org] Bundle the purge and hexd tools with Squid sources. Fixes the remaining known errors with purge tool building within Squid source tree. This adds the auto-tools changes necessary to bundle the tool.

--
[...truncated 4667 lines...]

make[4]: Entering directory `http://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/ws/btlayer-00-default/squid-3.HEAD-BZR/_build/helpers/basic_auth/PAM'
if g++ -DHAVE_CONFIG_H -I../../../.. -I../../../../include -I../../../../src -I../../../include -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -MT basic_pam_auth.o -MD -MP -MF .deps/basic_pam_auth.Tpo -c -o basic_pam_auth.o ../../../../helpers/basic_auth/PAM/basic_pam_auth.cc; \
then mv -f .deps/basic_pam_auth.Tpo .deps/basic_pam_auth.Po; else rm -f .deps/basic_pam_auth.Tpo; exit 1; fi
/bin/sh ../../../libtool --tag=CXX --mode=link g++ -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -g -o basic_pam_auth basic_pam_auth.o -L../../../lib -lmiscutil ../../../compat/libcompat.la -lpam -lm -lnsl -ldl -ldl
libtool: link: g++ -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -g -o basic_pam_auth basic_pam_auth.o -Lhttp://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/ws/btlayer-00-default/squid-3.HEAD-BZR/_build/lib -lmiscutil ../../../compat/.libs/libcompat.a -lpam -lm -lnsl -ldl
make[4]: Leaving directory `http://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/ws/btlayer-00-default/squid-3.HEAD-BZR/_build/helpers/basic_auth/PAM'
Making all in POP3
make[4]: Entering directory `http://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/ws/btlayer-00-default/squid-3.HEAD-BZR/_build/helpers/basic_auth/POP3'
sed -e 's,[...@]perl[@],/usr/bin/perl,g' <../../../../helpers/basic_auth/POP3/basic_pop3_auth.pl.in >basic_pop3_auth || (/bin/rm -f -f basic_pop3_auth ; exit 1)
make[4]: Leaving directory `http://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/ws/btlayer-00-default/squid-3.HEAD-BZR/_build/helpers/basic_auth/POP3'
Making all in RADIUS
make[4]: Entering directory `http://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/ws/btlayer-00-default/squid-3.HEAD-BZR/_build/helpers/basic_auth/RADIUS'
if g++ -DHAVE_CONFIG_H -I../../../.. -I../../../../include -I../../../../src -I../../../include -I../../../../helpers/basic_auth/RADIUS -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -MT basic_radius_auth.o -MD -MP -MF .deps/basic_radius_auth.Tpo -c -o basic_radius_auth.o ../../../../helpers/basic_auth/RADIUS/basic_radius_auth.cc; \
then mv -f .deps/basic_radius_auth.Tpo .deps/basic_radius_auth.Po; else rm -f .deps/basic_radius_auth.Tpo; exit 1; fi
if g++ -DHAVE_CONFIG_H -I../../../.. -I../../../../include -I../../../../src -I../../../include -I../../../../helpers/basic_auth/RADIUS -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -MT radius-util.o -MD -MP -MF .deps/radius-util.Tpo -c -o radius-util.o ../../../../helpers/basic_auth/RADIUS/radius-util.cc; \
then mv -f .deps/radius-util.Tpo .deps/radius-util.Po; else rm -f .deps/radius-util.Tpo; exit 1; fi
/bin/sh ../../../libtool --tag=CXX --mode=link g++ -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -g -o basic_radius_auth basic_radius_auth.o radius-util.o -L../../../lib -lmiscutil ../../../compat/libcompat.la -lm -lnsl -ldl -ldl
libtool: link: g++ -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -g -o basic_radius_auth basic_radius_auth.o radius-util.o -Lhttp://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/ws/btlayer-00-default/squid-3.HEAD-BZR/_build/lib -lmiscutil ../../../compat/.libs/libcompat.a -lm -lnsl -ldl
make[4]: Leaving directory `http://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/ws/btlayer-00-default/squid-3.HEAD-BZR/_build/helpers/basic_auth/RADIUS'
Making all in fake
make[4]: Entering directory `http://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/ws/btlayer-00-default/squid-3.HEAD-BZR/_build/helpers/basic_auth/fake'
if g++ -DHAVE_CONFIG_H -I../../../.. -I../../../../include -I../../../../src -I../../../include -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -MT fake.o -MD -MP -MF .deps/fake.Tpo -c -o fake.o ../../../../helpers/basic_auth/fake/fake.cc; \
then mv -f .deps/fake.Tpo .deps/fake.Po; else rm -f .deps/fake.Tpo; exit 1; fi
/bin/sh ../../../libtool --tag=CXX --mode=link g++ -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -g -o basic_fake_auth fake.o
Re: [PATCH] Compliance: rename Trailers header to Trailer everywhere.
On 08/18/2010 10:09 PM, Alex Rousskov wrote:
> Compliance: rename Trailers header to Trailer everywhere.
>
> RFC 2616 section 13.5.1 has a typo in the Trailers header name.
> The correct name is Trailer.
> See http://trac.tools.ietf.org/wg/httpbis/trac/ticket/9
>
> Co-Advisor test cases:
>     test_case/rfc2616/hopHdr-Trailer-toClt
>     test_case/rfc2616/hopHdr-Trailer-toSrv

It looks like I did not include the patch. Now attached. Sorry,

Alex.

Compliance: rename Trailers header to Trailer everywhere.

RFC 2616 section 13.5.1 has a typo in the Trailers header name.
The correct name is Trailer.
See http://trac.tools.ietf.org/wg/httpbis/trac/ticket/9

Co-Advisor test cases:
    test_case/rfc2616/hopHdr-Trailer-toClt
    test_case/rfc2616/hopHdr-Trailer-toSrv

=== modified file 'src/HttpHeader.cc'
--- src/HttpHeader.cc 2010-05-31 19:51:06 +0000
+++ src/HttpHeader.cc 2010-08-18 17:21:51 +0000
@@ -106,41 +106,41 @@ static const HttpHeaderFieldAttrs Header
     {"Last-Modified", HDR_LAST_MODIFIED, ftDate_1123},
     {"Link", HDR_LINK, ftStr},
     {"Location", HDR_LOCATION, ftStr},
     {"Max-Forwards", HDR_MAX_FORWARDS, ftInt64},
     {"Mime-Version", HDR_MIME_VERSION, ftStr},    /* for now */
     {"Pragma", HDR_PRAGMA, ftStr},
     {"Proxy-Authenticate", HDR_PROXY_AUTHENTICATE, ftStr},
     {"Proxy-Authentication-Info", HDR_PROXY_AUTHENTICATION_INFO, ftStr},
     {"Proxy-Authorization", HDR_PROXY_AUTHORIZATION, ftStr},
     {"Proxy-Connection", HDR_PROXY_CONNECTION, ftStr},
     {"Proxy-support", HDR_PROXY_SUPPORT, ftStr},
     {"Public", HDR_PUBLIC, ftStr},
     {"Range", HDR_RANGE, ftPRange},
     {"Referer", HDR_REFERER, ftStr},
     {"Request-Range", HDR_REQUEST_RANGE, ftPRange},    /* usually matches HDR_RANGE */
     {"Retry-After", HDR_RETRY_AFTER, ftStr},    /* for now (ftDate_1123 or ftInt!) */
     {"Server", HDR_SERVER, ftStr},
     {"Set-Cookie", HDR_SET_COOKIE, ftStr},
     {"TE", HDR_TE, ftStr},
     {"Title", HDR_TITLE, ftStr},
-    {"Trailers", HDR_TRAILERS, ftStr},
+    {"Trailer", HDR_TRAILER, ftStr},
     {"Transfer-Encoding", HDR_TRANSFER_ENCODING, ftStr},
     {"Translate", HDR_TRANSLATE, ftStr},    /* for now. may need to crop */
     {"Unless-Modified-Since", HDR_UNLESS_MODIFIED_SINCE, ftStr},    /* for now ignore. may need to crop */
     {"Upgrade", HDR_UPGRADE, ftStr},    /* for now */
     {"User-Agent", HDR_USER_AGENT, ftStr},
     {"Vary", HDR_VARY, ftStr},    /* for now */
     {"Via", HDR_VIA, ftStr},    /* for now */
     {"Warning", HDR_WARNING, ftStr},    /* for now */
     {"WWW-Authenticate", HDR_WWW_AUTHENTICATE, ftStr},
     {"Authentication-Info", HDR_AUTHENTICATION_INFO, ftStr},
     {"X-Cache", HDR_X_CACHE, ftStr},
     {"X-Cache-Lookup", HDR_X_CACHE_LOOKUP, ftStr},
     {"X-Forwarded-For", HDR_X_FORWARDED_FOR, ftStr},
     {"X-Request-URI", HDR_X_REQUEST_URI, ftStr},
     {"X-Squid-Error", HDR_X_SQUID_ERROR, ftStr},
     {"Negotiate", HDR_NEGOTIATE, ftStr},
 #if X_ACCELERATOR_VARY
     {"X-Accelerator-Vary", HDR_X_ACCELERATOR_VARY, ftStr},
 #endif
 #if USE_ADAPTATION
@@ -232,41 +232,41 @@ static http_hdr_type ReplyHeadersArr[] =
 #endif
 #if USE_ADAPTATION
     HDR_X_NEXT_SERVICES,
 #endif
     HDR_X_SQUID_ERROR,
     HDR_SURROGATE_CONTROL
 };

 static HttpHeaderMask RequestHeadersMask;    /* set run-time using RequestHeaders */
 static http_hdr_type RequestHeadersArr[] = {
     HDR_AUTHORIZATION, HDR_FROM, HDR_HOST,
     HDR_IF_MATCH, HDR_IF_MODIFIED_SINCE, HDR_IF_NONE_MATCH,
     HDR_IF_RANGE, HDR_MAX_FORWARDS, HDR_PROXY_CONNECTION,
     HDR_PROXY_AUTHORIZATION, HDR_RANGE, HDR_REFERER,
     HDR_REQUEST_RANGE, HDR_USER_AGENT, HDR_X_FORWARDED_FOR,
     HDR_SURROGATE_CAPABILITY
 };

 static HttpHeaderMask HopByHopHeadersMask;
 static http_hdr_type HopByHopHeadersArr[] = {
     HDR_CONNECTION, HDR_KEEP_ALIVE, /*HDR_PROXY_AUTHENTICATE,*/ HDR_PROXY_AUTHORIZATION,
-    HDR_TE, HDR_TRAILERS, HDR_TRANSFER_ENCODING, HDR_UPGRADE, HDR_PROXY_CONNECTION
+    HDR_TE, HDR_TRAILER, HDR_TRANSFER_ENCODING, HDR_UPGRADE, HDR_PROXY_CONNECTION
 };

 /* header accounting */
 static HttpHeaderStat HttpHeaderStats[] = {
     {"all"},
 #if USE_HTCP
     {"HTCP reply"},
 #endif
     {"request"},
     {"reply"}
 };
 static int HttpHeaderStatCount = countof(HttpHeaderStats);

 static int HeaderEntryParsedCount = 0;

 /*
  * local routines
  */

 #define assert_eid(id) assert((id) >= 0 && (id) < HDR_ENUM_END)

=== modified file 'src/HttpHeader.h'
--- src/HttpHeader.h 2010-03-05 07:10:40 +0000
+++ src/HttpHeader.h 2010-08-18 17:22:27 +0000
@@ -87,41 +87,41 @@ typedef enum {
     HDR_LAST_MODIFIED,
     HDR_LINK,
     HDR_LOCATION,
     HDR_MAX_FORWARDS,
     HDR_MIME_VERSION,
     HDR_PRAGMA,
     HDR_PROXY_AUTHENTICATE,
     HDR_PROXY_AUTHENTICATION_INFO,
     HDR_PROXY_AUTHORIZATION,
     HDR_PROXY_CONNECTION,
     HDR_PROXY_SUPPORT,
     HDR_PUBLIC,
     HDR_RANGE,
     HDR_REQUEST_RANGE,    /** some clients use this, sigh */
     HDR_REFERER,
     HDR_RETRY_AFTER,
     HDR_SERVER,
     HDR_SET_COOKIE,
     HDR_TE,
     HDR_TITLE,
-    HDR_TRAILERS,
+    HDR_TRAILER,
     HDR_TRANSFER_ENCODING,
     HDR_TRANSLATE,
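The effect of the rename is easiest to see with a toy version of the header lookup table. Everything below (names, ids, the lookup function) is a simplified sketch for illustration, not Squid's actual HeadersAttrs API: after the patch, the registered hop-by-hop header is "Trailer" (per the HTTPbis erratum), and the misspelled "Trailers" no longer matches.

```cpp
#include <strings.h> // strcasecmp (POSIX)

// Toy analogue of a header-attributes table after the rename.
struct HeaderAttr {
    const char *name;
    int id;
};

static const HeaderAttr headers[] = {
    {"TE", 1},
    {"Trailer", 2}, // was {"Trailers", 2} before the patch
    {"Transfer-Encoding", 3},
};

// HTTP header field names are case-insensitive (RFC 2616 section 4.2).
int lookupHeader(const char *name) {
    for (const HeaderAttr &h : headers)
        if (strcasecmp(h.name, name) == 0)
            return h.id;
    return -1; // unknown names fall through to a generic "other" bucket
}
```

With the table fixed, a server sending the correct `Trailer:` header is recognized as hop-by-hop, while the misspelling is treated as an unknown header.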
[PREVIEW] 1xx response forwarding
On Mon, 16 Aug 2010 15:53:42 -0600, Alex Rousskov rouss...@measurement-factory.com wrote:
> Hello,
>
> We need to forward 1xx control messages from servers to clients. I see
> two implementation options:
>
> 1. Use Store. The Squid client side expects responses via
> storeClientCopy, so we will be using the usual/normal code paths.
> Multiple 1xx responses may be handled with relative ease. The 1xx
> responses in Store will be treated kind of as regular response headers,
> except they will not be cached and such. The code will need to skip
> them until they reach the socket-writing client.
>
> 2. Bypass Store. Contact the fwdStart caller (e.g., clientReplyContext)
> directly and give it a 1xx response to forward. Store code remains
> unchanged. It may be difficult to get from the fwdStart caller to the
> client socket and comm_write. It will be difficult to handle multiple
> 1xx responses, or a regular response that arrives before we are done
> writing a 1xx response (all unusual, but they can happen!).
>
> Both approaches may have to deal with crazy offset management,
> clientStreams manipulations, and other client-side mess.
>
> Do you see any other options? Which option is the best?

On 08/16/2010 04:06 PM, Amos Jeffries wrote:
> My earlier plan, if I did it, was to do (2). The complication only
> occurs at one point: finding the client FD. comm_write() should not be
> altering the offset of the higher-level store stuff directly. If it is,
> that is a bug to be fixed. Pipelining the responses one at a time, with
> a simple block on further reply passing-on until the existing header
> set has been finished with, gets around any trickiness with multiple or
> early real responses.

The block on further reply passing is far from simple because it needs to deal with two async jobs.

On 08/18/2010 02:11 PM, Henrik Nordström wrote:
> On Mon, 2010-08-16 at 15:53 -0600, Alex Rousskov wrote:
> > Both approaches may have to deal with crazy offset management,
> > clientStreams manipulations, and other client-side mess.
>
> Yes. For now I think we need to bypass store to make this sane, and
> it's probably also a step in the right direction in general.

Thank you both for your feedback!

The attached patch implements the "Bypass Store" design and forwards 1xx control messages to clients that are likely to be able to handle such messages. The patch appears to pass initial tests, but more testing and a sync with trunk are needed. There is also one XXX that I still need to resolve, but it requires some code from the bug #2583 (pure virtual call) fix. I will switch to committing that fix now.

Meanwhile, if you have a chance, please review the overall direction of the patch. The preamble has more notes.

The patch removes the ignore_expect_100 feature because we now forward 100 Continue messages. Is everybody OK with that removal?

Thank you,
Alex.

Compliance: Forward 1xx control messages to clients that support them.

Take 0, which needs more work.

The patch removes the ignore_expect_100 squid.conf option because we can now safely forward Expect: 100-continue headers to servers: we can forward 100 Continue control messages to the expecting clients.

We now forward 1xx control messages to all HTTP/1.1 clients and to HTTP/1.0 clients that sent an Expect: 100-continue header. RFC 2616 requires clients to accept 1xx control messages, even if they did not send Expect headers. We still respond with 417 Expectation Failed to requests with expectations other than 100-continue.

Implementation notes: We forward control messages one at a time and stop processing the server response while the 1xx message is being written to the client, to avoid server-driven DoS attacks with a large number of 1xx messages. 1xx forwarding is done via async calls from HttpStateData to ConnStateData/ClientSocketContext. The latter then calls back to notify HttpStateData that the message was written out to the client. If either of the two async messages is not fired, HttpStateData will get stuck unless it is destroyed due to an external event/error. The code assumes such an event/error will always happen because when ConnStateData/ClientSocketContext is gone, the HttpStateData job should be terminated. This requires more testing/thought.

XXX: The patch is not finished. We need to cbdata-protect the HttpRequest::clientConnection member and re-sync with trunk.

=== added file 'src/HttpControlMsg.h'
--- src/HttpControlMsg.h 1970-01-01 00:00:00 +0000
+++ src/HttpControlMsg.h 2010-08-18 19:36:04 +0000
@@ -0,0 +1,57 @@
+/*
+ * $Id$
+ */
+
+#ifndef SQUID_HTTP_CONTROL_MSG_H
+#define SQUID_HTTP_CONTROL_MSG_H
+
+#include "HttpReply.h"
+#include "base/AsyncCall.h"
+
+class HttpControlMsg;
+
+/*
+ * This API exists to throttle forwarding of 1xx messages from the server
+ * side (Source == HttpStateData) to the client side (Sink == ConnStateData).
+ *
+ * Without throttling, Squid would have to drop some 1xx responses to
+ * avoid DoS attacks that send many 1xx responses without reading them.
+ * Dropping 1xx responses
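The one-at-a-time throttling described in the implementation notes can be sketched as a small producer/consumer handshake. The class and method names below are invented for illustration and do not match the actual HttpControlMsg/HttpStateData API; the point is only the shape of the protocol: queue incoming 1xx messages, keep at most one "in flight" toward the client, and release the next only when the write-completion callback fires.

```cpp
#include <functional>
#include <queue>
#include <string>
#include <utility>

// Sketch of throttled 1xx forwarding (invented names). The server side
// queues control messages; only one is in flight toward the client at a
// time. The next message is released when the client side calls back to
// report that the previous write completed. This bounds the work a
// malicious server can force on us with a flood of 1xx responses.
class ControlMsgForwarder {
public:
    // In real code the sink would be an async call to the client side;
    // here it is just a callable that consumes the message.
    explicit ControlMsgForwarder(std::function<void(const std::string &)> sink)
        : sink_(std::move(sink)), writing_(false) {}

    // Server side received another 1xx control message.
    void receive1xx(const std::string &msg) {
        pending_.push(msg);
        pump();
    }

    // Analogue of the "message was written to the client" callback.
    void wroteControlMsg() {
        writing_ = false;
        pump();
    }

private:
    void pump() {
        if (writing_ || pending_.empty())
            return; // block until the previous write finishes
        writing_ = true;
        const std::string msg = pending_.front();
        pending_.pop();
        sink_(msg);
    }

    std::function<void(const std::string &)> sink_;
    std::queue<std::string> pending_;
    bool writing_;
};
```

The real patch has the extra complication the thread discusses: the two ends are separate async jobs, so each callback must survive (or detect) the other job going away.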
[PATCH] Compliance: respond to OPTIONS requests with zero Max-Forwards value.
Compliance: respond to OPTIONS requests with zero Max-Forwards value.

RFC 2616 section 9.2 says that a proxy MUST NOT forward requests with a zero Max-Forwards value. RFC 2616 does not define proper OPTIONS responses, so we consider successful responses optional and reply with 501 Not Implemented.

While TRACE and OPTIONS are similar with regard to Max-Forwards, we handle them in different places because OPTIONS does not need to echo the request via Store.

Co-Advisor test case: test_case/rfc2616/maxForwardsZero-OPTIONS-absolute

Compliance: do not forward OPTIONS requests with zero Max-Forwards value.

RFC 2616 section 9.2 says that a proxy MUST NOT forward requests with a zero Max-Forwards value. RFC 2616 does not define proper OPTIONS responses, so we consider successful responses optional and reply with 501 Not Implemented.

While TRACE and OPTIONS are similar with regard to Max-Forwards, we handle them in different places because OPTIONS does not need to echo the request via Store.

Co-Advisor test case: test_case/rfc2616/maxForwardsZero-OPTIONS-absolute

=== modified file 'src/client_side.cc'
--- src/client_side.cc 2010-08-07 14:22:54 +0000
+++ src/client_side.cc 2010-08-19 15:31:38 +0000
@@ -2356,40 +2356,41 @@ ConnStateData::clientAfterReadingRequest
     if (fd_table[fd].flags.socket_eof) {
         if ((int64_t)in.notYetUsed < bodySizeLeft()) {
             /* Partial request received. Abort client connection! */
             debugs(33, 3, "clientAfterReadingRequests: FD " << fd << " aborted, partial request");
             comm_close(fd);
             return;
         }
     }

     clientMaybeReadData (do_next_read);
 }

 static void
 clientProcessRequest(ConnStateData *conn, HttpParser *hp, ClientSocketContext *context, const HttpRequestMethod& method, HttpVersion http_ver)
 {
     ClientHttpRequest *http = context->http;
     HttpRequest *request = NULL;
     bool notedUseOfBuffer = false;
     bool tePresent = false;
     bool deChunked = false;
+    bool mustReplyToOptions = false;
     bool unsupportedTe = false;

     /* We have an initial client stream in place should it be needed */
     /* setup our private context */
     context->registerWithConn();

     if (context->flags.parsed_ok == 0) {
         clientStreamNode *node = context->getClientReplyContext();
         debugs(33, 1, "clientProcessRequest: Invalid Request");
         clientReplyContext *repContext = dynamic_cast<clientReplyContext *>(node->data.getRaw());
         assert (repContext);
         switch (hp->request_parse_status) {

         case HTTP_HEADER_TOO_LARGE:
             repContext->setReplyToError(ERR_TOO_BIG, HTTP_HEADER_TOO_LARGE, method, http->uri, conn->peer, NULL, conn->in.buf, NULL);
             break;

         case HTTP_METHOD_NOT_ALLOWED:
             repContext->setReplyToError(ERR_UNSUP_REQ, HTTP_METHOD_NOT_ALLOWED, method, http->uri, conn->peer, NULL, conn->in.buf, NULL);
             break;

         default:
             repContext->setReplyToError(ERR_INVALID_REQ, HTTP_BAD_REQUEST, method, http->uri, conn->peer, NULL, conn->in.buf, NULL);
@@ -2481,42 +2482,46 @@ clientProcessRequest(ConnStateData *conn
 #if USE_SQUID_EUI
     request->client_eui48 = conn->peer_eui48;
     request->client_eui64 = conn->peer_eui64;
 #endif
 #if FOLLOW_X_FORWARDED_FOR
     request->indirect_client_addr = conn->peer;
 #endif /* FOLLOW_X_FORWARDED_FOR */

     request->my_addr = conn->me;
     request->http_ver = http_ver;

     tePresent = request->header.has(HDR_TRANSFER_ENCODING);
     deChunked = conn->in.dechunkingState == ConnStateData::chunkReady;
     if (deChunked) {
         assert(tePresent);
         request->setContentLength(conn->in.dechunked.contentSize());
         request->header.delById(HDR_TRANSFER_ENCODING);
         conn->finishDechunkingRequest(hp);
     } else
         conn->cleanDechunkingRequest();

+    if (method == METHOD_TRACE || method == METHOD_OPTIONS)
+        request->max_forwards = request->header.getInt64(HDR_MAX_FORWARDS);
+
+    mustReplyToOptions = (method == METHOD_OPTIONS) && (request->max_forwards == 0);
     unsupportedTe = tePresent && !deChunked;
-    if (!urlCheckRequest(request) || unsupportedTe) {
+    if (!urlCheckRequest(request) || mustReplyToOptions || unsupportedTe) {
         clientStreamNode *node = context->getClientReplyContext();
         clientReplyContext *repContext = dynamic_cast<clientReplyContext *>(node->data.getRaw());
         assert (repContext);
         repContext->setReplyToError(ERR_UNSUP_REQ, HTTP_NOT_IMPLEMENTED, request->method, NULL, conn->peer, request, NULL, NULL);
         assert(context->http->out.offset == 0);
         context->pullData();
         conn->flags.readMoreRequests = false;
         goto finish;
     }

     if (!clientIsContentLengthValid(request)) {
         clientStreamNode *node = context->getClientReplyContext();
         clientReplyContext *repContext = dynamic_cast<clientReplyContext *>(node->data.getRaw());
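The decision logic the patch adds is compact enough to restate in isolation. The helper and enum names below are invented for illustration (they are not Squid's), but the rule is the one from RFC 2616 section 9.2: Max-Forwards only constrains TRACE and OPTIONS, and a zero value means the proxy must answer itself rather than forward.

```cpp
#include <cstdint>

// Invented names for illustration; not Squid's actual types.
enum Method { METHOD_GET, METHOD_OPTIONS, METHOD_TRACE };

// maxForwards == -1 stands for "no Max-Forwards header present".
// OPTIONS with Max-Forwards: 0 gets a local 501 Not Implemented,
// since RFC 2616 defines no proper OPTIONS response for a proxy.
bool mustReplyToOptions(Method method, int64_t maxForwards) {
    return method == METHOD_OPTIONS && maxForwards == 0;
}

// A proxy MUST NOT forward TRACE/OPTIONS whose Max-Forwards is 0.
bool mayForward(Method method, int64_t maxForwards) {
    if (method != METHOD_OPTIONS && method != METHOD_TRACE)
        return true; // Max-Forwards only constrains these two methods
    return maxForwards != 0; // absent (-1) or positive: forwarding is OK
}
```

TRACE with a zero Max-Forwards is handled elsewhere in Squid because, as the preamble notes, TRACE must echo the request via Store while OPTIONS does not.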
Build failed in Hudson: 3.HEAD-amd64-CentOS-5.3 #763
See http://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/763/changes

Changes:

[Automatic source maintenance squid...@squid-cache.org] SourceFormat Enforcement

[Francesco Chemolli kin...@squid-cache.org] configure.in fix: properly pass default hosts_file option around during build.

--
[...truncated 3176 lines...]

checking if gcc supports -c -o file.o... (cached) yes
checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking whether -lc should be explicitly linked in... no
checking dynamic linker characteristics... GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking for shl_load... no
checking for shl_load in -ldld... no
checking for dlopen... no
checking for dlopen in -ldl... yes
checking whether a program can dlopen itself... yes
checking whether a statically linked program can dlopen itself... no
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... yes
checking how to run the C++ preprocessor... g++ -E
checking for ld used by g++... /usr/bin/ld -m elf_x86_64
checking if the linker (/usr/bin/ld -m elf_x86_64) is GNU ld... yes
checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking for g++ option to produce PIC... -fPIC -DPIC
checking if g++ PIC flag -fPIC -DPIC works... yes
checking if g++ static flag -static works... yes
checking if g++ supports -c -o file.o... yes
checking if g++ supports -c -o file.o... (cached) yes
checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking dynamic linker characteristics... (cached) GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking for library containing dlopen... -ldl
checking for dlerror... yes
checking for shl_load... (cached) no
checking for shl_load in -ldld... (cached) no
checking for dld_link in -ldld... no
checking whether compiler accepts -fhuge-objects... no
checking iostream usability... yes
checking iostream presence... yes
checking for iostream... yes
checking for an ANSI C-conforming const... yes
checking for size_t... yes
configure: creating ./config.status
config.status: creating Makefile
config.status: creating include/Makefile
config.status: creating src/Makefile
config.status: creating test/Makefile
config.status: creating config.h
config.status: executing depfiles commands
config.status: executing libtool commands
make[1]: Entering directory `http://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/ws/btlayer-00-default/squid-3.HEAD-BZR/_build'
Making all in compat
make[2]: Entering directory `http://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/ws/btlayer-00-default/squid-3.HEAD-BZR/_build/compat'
if /bin/sh ../libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H -I../.. -I../../include -I../../src -I../include -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -MT assert.lo -MD -MP -MF .deps/assert.Tpo -c -o assert.lo ../../compat/assert.cc; \
then mv -f .deps/assert.Tpo .deps/assert.Plo; else rm -f .deps/assert.Tpo; exit 1; fi
if /bin/sh ../libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H -I../.. -I../../include -I../../src -I../include -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -MT compat.lo -MD -MP -MF .deps/compat.Tpo -c -o compat.lo ../../compat/compat.cc; \
then mv -f .deps/compat.Tpo .deps/compat.Plo; else rm -f .deps/compat.Tpo; exit 1; fi
libtool: compile: g++ -DHAVE_CONFIG_H -I../.. -I../../include -I../../src -I../include -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -MT compat.lo -MD -MP -MF .deps/compat.Tpo -c ../../compat/compat.cc -fPIC -DPIC -o .libs/compat.o
libtool: compile: g++ -DHAVE_CONFIG_H -I../.. -I../../include -I../../src -I../include -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -MT assert.lo -MD -MP -MF .deps/assert.Tpo -c ../../compat/assert.cc -fPIC -DPIC -o .libs/assert.o
libtool: compile: g++ -DHAVE_CONFIG_H -I../.. -I../../include -I../../src -I../include -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -MT compat.lo -MD -MP -MF .deps/compat.Tpo -c ../../compat/compat.cc -o compat.o >/dev/null 2>&1
libtool: compile: g++ -DHAVE_CONFIG_H -I../.. -I../../include -I../../src -I../include -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -MT assert.lo -MD -MP -MF .deps/assert.Tpo -c ../../compat/assert.cc -o assert.o >/dev/null 2>&1
if /bin/sh ../libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H -I../.. -I../../include -I../../src -I../include -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -MT debug.lo
compat/unsafe.h
Stumbled over compat/unsafe.h again when trying to compile trunk after the purge merge.

IMHO these rules in compat/unsafe.h should be dropped, replaced by coding standards for the different sections and auditing.

- The rules originally come from laziness in Squid-2, where we did not want to check the return code of malloc() or whether data had been allocated before free().
- The way they are implemented (#define) causes issues with perfectly valid code such as system headers.
- These rules make it harder to integrate other code.

Regards
Henrik
[MERGE] Kill compat/unsafe.h, not really needed and causes more grief than gain
# Bazaar merge directive format 2 (Bazaar 0.90)
# revision_id: hen...@henriknordstrom.net-20100820031034-\
#   0o3f9jw06pqkgmwa
# target_branch: http://www.squid-cache.org/bzr/squid3/trunk/
# testament_sha1: 7c770c668bbf0875624a280061c125890faeda6d
# timestamp: 2010-08-20 05:10:39 +0200
# base_revision_id: hen...@henriknordstrom.net-20100820023828-\
#   kguboyrr0hxkhj1g
#
# Begin patch
=== modified file 'compat/GnuRegex.c'
--- compat/GnuRegex.c 2010-07-28 20:16:31 +0000
+++ compat/GnuRegex.c 2010-08-20 03:10:34 +0000
@@ -32,7 +32,6 @@
 #define _GNU_SOURCE 1
 #endif

-#define SQUID_NO_ALLOC_PROTECT 1
 #include "config.h"

 #if USE_GNUREGEX /* only if squid needs it. Usually not */

=== modified file 'compat/Makefile.am'
--- compat/Makefile.am 2010-07-25 08:10:12 +0000
+++ compat/Makefile.am 2010-08-20 03:10:34 +0000
@@ -29,7 +29,6 @@
 	strtoll.h \
 	tempnam.h \
 	types.h \
-	unsafe.h \
 	valgrind.h \
 	\
 	os/aix.h \

=== modified file 'compat/compat.h'
--- compat/compat.h 2010-08-10 15:37:53 +0000
+++ compat/compat.h 2010-08-20 03:10:34 +0000
@@ -108,7 +108,4 @@
  */
 #include "compat/GnuRegex.h"

-/* some functions are unsafe to be used in Squid. */
-#include "compat/unsafe.h"
-
 #endif /* _SQUID_COMPAT_H */

=== modified file 'compat/os/dragonfly.h'
--- compat/os/dragonfly.h 2010-03-21 03:08:26 +0000
+++ compat/os/dragonfly.h 2010-08-20 03:10:34 +0000
@@ -20,11 +20,5 @@
 #undef HAVE_MALLOC_H
 #endif

-/* Exclude CPPUnit tests from the allocator restrictions. */
-/* BSD implementation uses these still */
-#if defined(SQUID_UNIT_TEST)
-#define SQUID_NO_ALLOC_PROTECT 1
-#endif
-
 #endif /* _SQUID_DRAGONFLY_ */
 #endif /* SQUID_OS_DRAGONFLY_H */

=== modified file 'compat/os/freebsd.h'
--- compat/os/freebsd.h 2010-07-25 08:10:12 +0000
+++ compat/os/freebsd.h 2010-08-20 03:10:34 +0000
@@ -27,12 +27,6 @@
 #define _etext etext

-/* Exclude CPPUnit tests from the allocator restrictions. */
-/* BSD implementation uses these still */
-#if defined(SQUID_UNIT_TEST)
-#define SQUID_NO_ALLOC_PROTECT 1
-#endif
-
 /*
  * This OS has at least one version that defines these as private
  * kernel macros commented as being 'non-standard'.

=== modified file 'compat/os/netbsd.h'
--- compat/os/netbsd.h 2010-07-25 08:10:12 +0000
+++ compat/os/netbsd.h 2010-08-20 03:10:34 +0000
@@ -13,12 +13,6 @@
  *--* /

-/* Exclude CPPUnit tests from the allocator restrictions. */
-/* BSD implementation uses these still */
-#if defined(SQUID_UNIT_TEST)
-#define SQUID_NO_ALLOC_PROTECT 1
-#endif
-
 /* NetBSD does not provide sys_errlist global for strerror */
 #define NEED_SYS_ERRLIST 1

=== modified file 'compat/os/openbsd.h'
--- compat/os/openbsd.h 2010-07-25 08:10:12 +0000
+++ compat/os/openbsd.h 2010-08-20 03:10:34 +0000
@@ -20,12 +20,6 @@
 #undef HAVE_MALLOC_H
 #endif

-/* Exclude CPPUnit tests from the allocator restrictions. */
-/* BSD implementation uses these still */
-#if defined(SQUID_UNIT_TEST)
-#define SQUID_NO_ALLOC_PROTECT 1
-#endif
-
 /*
  * This OS has at least one version that defines these as private
  * kernel macros commented as being 'non-standard'.

=== modified file 'compat/os/solaris.h'
--- compat/os/solaris.h 2010-08-11 00:12:56 +0000
+++ compat/os/solaris.h 2010-08-20 03:10:34 +0000
@@ -82,12 +82,6 @@
 #define __FUNCTION__
 #endif

-/* Exclude CPPUnit tests from the allocator restrictions. */
-/* BSD implementation uses these still */
-#if defined(SQUID_UNIT_TEST)
-#define SQUID_NO_STRING_BUFFER_PROTECT 1
-#endif
-
 /* Bug 2500: Solaris 10/11 require s6_addr* defines. */
 //#define s6_addr8 _S6_un._S6_u8
 //#define s6_addr16 _S6_un._S6_u16

=== removed file 'compat/unsafe.h'
--- compat/unsafe.h 2010-03-21 03:08:26 +0000
+++ compat/unsafe.h 1970-01-01 00:00:00 +0000
@@ -1,33 +0,0 @@
-#ifndef SQUID_CONFIG_H
-#include "config.h"
-#endif
-
-#ifndef _SQUID_COMPAT_UNSAFE_H
-#define _SQUID_COMPAT_UNSAFE_H
-
-/*
- * Trap unintentional use of functions unsafe for use within squid.
- */
-
-#if !SQUID_NO_ALLOC_PROTECT
-#ifndef free
-#define free(x) ERROR_free_UNSAFE_IN_SQUID(x)
-#endif
-#ifndef malloc
-#define malloc ERROR_malloc_UNSAFE_IN_SQUID
-#endif
-#ifndef calloc
-#define calloc ERROR_calloc_UNSAFE_IN_SQUID
-#endif
-#endif /* !SQUID_NO_ALLOC_PROTECT */
-
-#if !SQUID_NO_STRING_BUFFER_PROTECT
-#ifndef sprintf
-#define sprintf ERROR_sprintf_UNSAFE_IN_SQUID
-#endif
-#ifndef strdup
-#define strdup ERROR_strdup_UNSAFE_IN_SQUID
-#endif
-#endif /* SQUID_NO_STRING_BUFFER_PROTECT */
-
-#endif /* _SQUID_COMPAT_UNSAFE_H */

=== modified file 'helpers/basic_auth/LDAP/basic_ldap_auth.cc'
--- helpers/basic_auth/LDAP/basic_ldap_auth.cc 2010-07-08 11:58:30 +0000
+++ helpers/basic_auth/LDAP/basic_ldap_auth.cc 2010-08-20 03:10:34 +0000
@@ -82,7 +82,6 @@
 * - Allow full filter specifications in -f
 */

-#define SQUID_NO_ALLOC_PROTECT 1
 #include "config.h"
Build failed in Hudson: 3.HEAD-amd64-CentOS-5.3 #764
See http://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/764/changes

Changes:

[Henrik Nordstrom hen...@henriknordstrom.net] Kill redundant hexd program from purge. There is too many other tools for producing a readable hexdump of a file.

[Henrik Nordstrom hen...@henriknordstrom.net] Also fix up hexd to Squid coding standards

[Henrik Nordstrom hen...@henriknordstrom.net] Adjust purge sources to Squid coding standard (xmalloc, xfree etc)

[Henrik Nordstrom hen...@henriknordstrom.net] Clean up DEFAULT_PID_FILE in similar manner

[Henrik Nordstrom hen...@henriknordstrom.net] Kill recursive DEFAULT_HOSTS. Automake automatically adds expansions to Makefile.in, no need for us to wrongly try to reference them..

[Automatic source maintenance squid...@squid-cache.org] SourceFormat Enforcement

--
[...truncated 23971 lines...]

/bin/sh ../../../libtool --tag=CXX --mode=link g++ -I/usr/include/libxml2 -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -D_FILE_OFFSET_BITS=64 -g -O2 -g -o ext_ldap_group_acl ext_ldap_group_acl.o -L../../../lib -lmiscutil ../../../compat/libcompat.la -lldap -llber -ldl -lm -lnsl -ldl -ldl
libtool: link: g++ -I/usr/include/libxml2 -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -D_FILE_OFFSET_BITS=64 -g -O2 -g -o ext_ldap_group_acl ext_ldap_group_acl.o -Lhttp://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/ws/btlayer-02-maximus/squid-3.HEAD-BZR/_build/lib -lmiscutil ../../../compat/.libs/libcompat.a -lldap -llber -lm -lnsl -ldl
make[4]: Leaving directory `http://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/ws/btlayer-02-maximus/squid-3.HEAD-BZR/_build/helpers/external_acl/LDAP_group'
Making all in file_userip
make[4]: Entering directory `http://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/ws/btlayer-02-maximus/squid-3.HEAD-BZR/_build/helpers/external_acl/file_userip'
if g++ -DHAVE_CONFIG_H -I../../../.. -I../../../../include -I../../../../src -I../../../include-I/usr/include/libxml2-I/usr/include/libxml2 -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -D_FILE_OFFSET_BITS=64 -g -O2 -MT ext_file_userip_acl.o -MD -MP -MF .deps/ext_file_userip_acl.Tpo -c -o ext_file_userip_acl.o ../../../../helpers/external_acl/file_userip/ext_file_userip_acl.cc; \
then mv -f .deps/ext_file_userip_acl.Tpo .deps/ext_file_userip_acl.Po; else rm -f .deps/ext_file_userip_acl.Tpo; exit 1; fi
/bin/sh ../../../libtool --tag=CXX --mode=link g++ -I/usr/include/libxml2 -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -D_FILE_OFFSET_BITS=64 -g -O2 -g -o ext_file_userip_acl ext_file_userip_acl.o -L../../../lib -lmiscutil ../../../compat/libcompat.la -ldl -lm -lnsl -ldl -ldl
libtool: link: g++ -I/usr/include/libxml2 -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -D_FILE_OFFSET_BITS=64 -g -O2 -g -o ext_file_userip_acl ext_file_userip_acl.o -Lhttp://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/ws/btlayer-02-maximus/squid-3.HEAD-BZR/_build/lib -lmiscutil ../../../compat/.libs/libcompat.a -lm -lnsl -ldl
make[4]: Leaving directory `http://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/ws/btlayer-02-maximus/squid-3.HEAD-BZR/_build/helpers/external_acl/file_userip'
Making all in kerberos_ldap_group
make[4]: Entering directory `http://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/ws/btlayer-02-maximus/squid-3.HEAD-BZR/_build/helpers/external_acl/kerberos_ldap_group'
make[5]: Entering directory `http://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/ws/btlayer-02-maximus/squid-3.HEAD-BZR/_build/helpers/external_acl/kerberos_ldap_group'
if g++ -DHAVE_CONFIG_H -I../../../.. -I../../../../include -I../../../../src -I../../../include -I../../../.. -I../../../../include -I../../../../src -I../../../include -I../../../../helpers/external_acl/kerberos_ldap_group -I/usr/include/libxml2-I/usr/include/libxml2 -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -D_FILE_OFFSET_BITS=64 -g -O2 -MT kerberos_ldap_group.o -MD -MP -MF .deps/kerberos_ldap_group.Tpo -c -o kerberos_ldap_group.o ../../../../helpers/external_acl/kerberos_ldap_group/kerberos_ldap_group.cc; \
then mv -f .deps/kerberos_ldap_group.Tpo .deps/kerberos_ldap_group.Po; else rm -f .deps/kerberos_ldap_group.Tpo; exit 1; fi
if g++ -DHAVE_CONFIG_H -I../../../.. -I../../../../include -I../../../../src -I../../../include -I../../../.. -I../../../../include -I../../../../src -I../../../include -I../../../../helpers/external_acl/kerberos_ldap_group -I/usr/include/libxml2-I/usr/include/libxml2 -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -D_FILE_OFFSET_BITS=64 -g -O2 -MT support_group.o -MD -MP -MF .deps/support_group.Tpo -c -o support_group.o ../../../../helpers/external_acl/kerberos_ldap_group/support_group.cc; \
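A side note on the log above: the compile lines contain run-together include flags such as `-I../../../include-I/usr/include/libxml2-I/usr/include/libxml2`. gcc treats everything after `-I` as one path, so that argument becomes a single bogus directory and the libxml2 headers are never actually added to the search path. One plausible cause (an assumption on my part, not confirmed anywhere in this thread) is flag variables joined without a separating space in a Makefile.am or configure substitution, roughly:

```make
# Hypothetical Makefile.am fragment illustrating the symptom; the
# variable names here are invented for the example.
AM_CPPFLAGS = $(local_includes)$(XML_CFLAGS)    # broken: emits ...include-I/usr/include/libxml2
AM_CPPFLAGS = $(local_includes) $(XML_CFLAGS)   # fixed: flags stay separate arguments
```

This would also fit the observation later in the thread that the kerberos_ldap_group helper shows compilation errors even when the overall build test passes.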
Re: [MERGE] Kill compat/unsafe.h, not really needed and causes more grief than gain
+1

--
/kinkie