Re: gzip+squid3 code
On Fri, 2006-09-01 at 18:00 -0300, Gonzalo Arana wrote: You may want to check this: http://webs.sinectis.com.ar/garana/patches/squid/1-add-gz.patch.dpatch.CVS Unless I am mistaken, that is the latest patch I've used for content compression. I will dig into this during the weekend and grab the latest patch. It would be excellent if you could look into syncing the gzip branch at devel.squid-cache.org with your latest changes. The code there is based on a version from 2004-06-25, somewhat cleaned up and adjusted for HEAD. Or better yet, you could participate in the Squid-3 bugfest taking place on irc.freenode.net #squiddev this weekend ;-) Regards Henrik signature.asc Description: This is a digitally signed message part
Re: squid3 nt port - mainline
On Thu, 2006-08-31 at 09:50 +0200, Guido Serassio wrote: Visual Studio and MinGW share the same compatibility problems: IPC, fd != socket stuff, etc. I know. But Visual Studio has two main pros: - It's a well-known (the best known, I think) development environment for Windows (here I'm thinking of Windows-only developers) Yes, this is what I mean with Visual Studio being Visual Studio.. - The runtime libraries used are recent, while MinGW uses the old msvcrt.dll, born in 1998 How much are we actually dependent on this runtime library? The major cons are the lack of autoconf/automake support and the project files bandwagon. Are those working with MinGW? Regards Henrik
Re: squid3 nt port - mainline
On Wed, 2006-08-30 at 10:38 +0200, Guido Serassio wrote: I think that we could try to add MinGW to the natively supported Squid platforms, leaving only the Visual Studio support in the nt branch. +1 here. I'd love to see MinGW as an official platform. Btw, what do you see as the main benefits of the Visual Studio build compared to MinGW? Regards Henrik
Re: OpenSSL and Bug #1716
On Sun, 2006-08-27 at 20:21 +1000, Robert Collins wrote: Or we could require OpenSSL 0.9.8.x? For that part of the code, yes. It's not normally needed, and it's fine if it's not compiled for all OpenSSL versions. These lines of code exist mainly for being able to decrypt the SSL traffic using ssldump or a similar tool while debugging; they are not needed in production. Regards Henrik
Re: OpenSSL and Bug #1716
On Sun, 2006-08-27 at 20:48 +0200, Guido Serassio wrote: I have published a patch with a configure check in bug #1716: http://www.squid-cache.org/bugs/attachment.cgi?id=1198. What is your opinion? See bug. Regards Henrik
Re: [PATCH] Make URN support an optional component
Looks quite nice, I think. And while I agree with Adrian that perhaps we should try not to do additional refactoring and API changes in Squid-3.0, I do not think this kind of thing should be sitting in a private branch. If it's finished and looking good but not a candidate for 3.0, then that's a good reason for forking Squid-3.0. Now, this change wasn't that intrusive. Mostly reshuffling to better isolate things, so I don't object to having it in 3.0. Regards Henrik
Why HttpMsg manual locking instead of refcounting
Why are HttpReply/HttpRequest (HttpMsg subclasses) using manual reference counting via the HTTPMSGLOCK/UNLOCK macros instead of automatic reference counting via RefCount? I find this design quite error prone, especially considering that several of the LOCK/UNLOCK pairs cross code boundaries. One glaring one is ACLCheckList->reply, assigned by the caller and freed by the acl processing.. (why this is assigned by the caller is another mystery, but a separate one). I remember discussing this before, with Duane I think, but I have no clear memory of the outcome; I think it was just confusion about how the RefCount template works.. Regards Henrik
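To illustrate the point being made: below is a minimal sketch of an intrusive refcounted smart pointer in the spirit of what an automatic RefCount-style scheme does. The names (RefCountable, Pointer) are illustrative, not the actual Squid-3 API. The point is that the count is maintained by the pointer's constructors, destructor, and assignment, so LOCK/UNLOCK pairs cannot be forgotten or mismatched across code boundaries.

```cpp
#include <cassert>

// Illustrative base class: objects carry their own reference count.
class RefCountable {
public:
    RefCountable() : count_(0) {}
    virtual ~RefCountable() {}
    void lock() { ++count_; }
    // Returns true when the last reference is gone and *this must be deleted.
    bool unlock() { return --count_ == 0; }
    int refCount() const { return count_; }
private:
    int count_;
};

// Illustrative smart pointer: every copy locks, every destruction unlocks.
template <class T>
class Pointer {
public:
    Pointer(T *p = 0) : p_(p) { if (p_) p_->lock(); }
    Pointer(const Pointer &o) : p_(o.p_) { if (p_) p_->lock(); }
    ~Pointer() { release(); }
    Pointer &operator=(const Pointer &o) {
        if (o.p_)
            o.p_->lock();   // lock first: safe for self-assignment
        release();
        p_ = o.p_;
        return *this;
    }
    T *operator->() const { return p_; }
    T *getRaw() const { return p_; }
private:
    void release() {
        if (p_ && p_->unlock())
            delete p_;
        p_ = 0;
    }
    T *p_;
};
```

With this scheme, an ACL check that holds a reply simply holds a Pointer member; when the check object is destroyed, the reference goes with it, and no caller/callee free-ing contract is needed.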
Re: OpenSSL and Bug #1716
On Sat, 2006-08-26 at 17:58 +0200, Guido Serassio wrote: For testing, I have replaced the PEM_ASN1_write() call with the PEM_write_SSL_SESSION() macro, and this seems to fix all the errors that I have listed in bug #1716. What is the history of this comment? It depends on the version of OpenSSL, I think. And maybe GCC warning flags... Which version of OpenSSL are you using? If I do this here then I get the following errors:
squid-2.6/src/client_side.c: In function 'clientNegotiateSSL':
squid-2.6/src/client_side.c:4616: warning: function declaration isn't a prototype
squid-3/src/client_side.cc: In function 'void clientNegotiateSSL(int, void*)':
squid-3/src/client_side.cc:2933: error: invalid conversion from 'int (*)()' to 'int (*)(void*, unsigned char**)'
squid-3/src/client_side.cc:2933: error: initializing argument 1 of 'int PEM_ASN1_write(int (*)(void*, unsigned char**), const char*, FILE*, char*, const EVP_CIPHER*, unsigned char*, int, int (*)(char*, int, int, void*), void*)'
Fedora Core 5 Linux on x86_64, OpenSSL 0.9.8a-5.2, gcc-4.1.1-1.fc5 Regards Henrik
Re: Wikimedia having issues
On Thu, 2006-08-24 at 10:27 +0800, Adrian Chadd wrote: Mark from Wikimedia has popped up and noted that there have been some issues with the Squid-2.6 deployment. Specifically, people are being prompted to download pages rather than display them. It only started after they installed Squid-2.6. Quite likely there is a broken gzip content encoder involved, not giving correct ETag headers. Regards Henrik
Re: Wikimedia having issues
On Thu, 2006-08-24 at 15:14 +0800, Adrian Chadd wrote: Well, what's changed between squid-2.5 and squid-2.6 which may play a part? The major change related to this is that we now support server-side content negotiation, using ETag and If-None-Match to find which cached entity variant (identity vs gzip encoding, Swedish vs English, etc.) to send to the client. Which means that if the server is not sending correct ETags AND responds to If-None-Match, then clients will not be given the correct entities. Apache mod_gzip is an example. There is a directive to try to work around such broken servers (broken_vary_encoding or something like that), but quite likely it needs additional work as the matrix of server brokenness out there is figured out.. Regards Henrik
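A hypothetical sketch of the negotiation idea described above, under my reading of the mail (not Squid's actual code): the cache stores one entity per ETag under a Vary'ing URL, offers the known ETags to the origin in If-None-Match, and expects a 304 naming which ETag matches. If the server answers with a bogus or unknown ETag, the wrong variant (or none) is resolved, which is exactly the mod_gzip-style breakage being discussed. All names below are illustrative.

```cpp
#include <cassert>
#include <map>
#include <string>

// ETag -> cache key for each stored variant of one URL (illustrative).
typedef std::map<std::string, std::string> VariantMap;

// Build the If-None-Match header value offered to the origin server,
// listing every variant ETag we already have cached.
std::string ifNoneMatchHeader(const VariantMap &variants) {
    std::string h;
    for (VariantMap::const_iterator i = variants.begin(); i != variants.end(); ++i) {
        if (!h.empty())
            h += ", ";
        h += i->first;
    }
    return h;
}

// Resolve a 304 reply: the ETag it carries must name a stored variant.
// Returns the cache key, or "" when the server's ETag is unknown
// (a broken server: fall back to fetching a fresh entity instead).
std::string resolveVariant(const VariantMap &variants, const std::string &replyETag) {
    VariantMap::const_iterator i = variants.find(replyETag);
    return i == variants.end() ? std::string() : i->second;
}
```

The failure mode Henrik describes maps onto the "" return: a server that echoes a wrong or synthetic ETag in its 304 forces the cache to guess or refetch.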
Squid-2.6 maintenance status
Squid-2.6 looks pretty decent at the moment, with no major issues known. The only larger bug scheduled to be fixed is delay pool fairness, where connections currently can get completely starved when throttled by shared pools. Apart from that, Squid-2 maintenance is now switching to pure bugfixing only, fixing either stability issues, protocol issues, or similar misbehavior. Documentation fixes are also accepted. Regards Henrik
Re: Squid 2.6 crash
On Fri, 2006-08-25 at 07:07 +1000, Robert Collins wrote: - Add to ERR_INVALID_REQ error file User: %aP - reload error messages ERR_INVALID_URL crash also confirmed. What's strange is that we're seeing error pages with no requests. client_side is set up to create an empty request if there is no fully parsed one for error construction. Not in 2.6, which is what Guido was looking at.. there the request only exists if we could delimit the message and parse the URL. Regards Henrik
Re: Squid 2.6 crash
On Fri, 2006-08-25 at 08:07 +1000, Robert Collins wrote: Ok, this is another thing to forward-port then - 3.0 still makes a stub request object (which is yucky). If it does, then that's something new. In Squid-2 the request_t is created by urlParse()... The client-side request exists earlier, but that's not what it failed on.. Regards Henrik
Re: Wikimedia having issues
On Thu, 2006-08-24 at 09:35 -0300, Giancarlo Razzolini wrote: I may be talking a lot of s... but I remember seeing something about gzip compression being broken for some sites on Internet Explorer after Microsoft released the monthly patches. Brokenness exists on many fronts, both servers and clients. Only Squid is right ;-) I don't think it is a Squid-specific issue. In this case the issue was the server, and triggered by Squid. By far the most common case.. servers not sending correct ETag headers. We already have a workaround implemented for Apache mod_deflate and mod_gzip (both broken), but Wikimedia is using another server so our workaround doesn't trigger there. Check if some people use Firefox and if it gets broken on it too. Firefox has full support for gzip, so the problem won't be noticed there.. But using this to promote open source browsers isn't a bad thing. Regards Henrik
Squid developer configuration assignments
From time to time I and a few other developers get more requests than we can handle, and I wonder if there is interest in setting up an internal mailing list where we can ask if there is another Squid developer who can take the assignment. If there is interest in being on such a mailing list, please notify me either here on squid-dev or privately. Regards Henrik
Re: FAQ ToC
On Wed, 2006-08-23 at 11:52 +0200, Kinkie wrote: Yes.. can't do much about it currently. I will probably have to write a custom macro. At this time two clicks are required, unfortunately. The problem is not the two clicks, it's two searches to find the correct entry in long lists of questions before the URL can be mailed to users who can't find the information themselves.. Regards Henrik
Re: EventLoop shutdown procedure..
On Mon, 2006-08-21 at 22:17 +1000, Robert Collins wrote: I'd like to make the shutdown procedure async as well - allowing things like swap log writing to use calls elsewhere in the codebase that are async - like the disk engines. Makes sense. Here's my proposed API:
- on EventLoop you can call addShutdownEvent(callback, callbackdata); this builds up a vector of pending shutdown callbacks.
- when the event loop detects it's time to stop - i.e. because it's had stop() called, or because it's gone idle - it invokes the last registered callback. This has a signature like 'void SHUTDOWNEVENT(callback, callbackdata);'. The event loop will invoke it with the loop as the callback data - so when that routine completes, the loop will move on to the next shutdown event and so on.
- When the last shutdown event returns, the loop will return to the caller of run().
Sounds reasonable. My first thought was why not do this with a broadcast message bus asking the subsystems to prepare for shutdown, but this simple scheme is easier. This has a couple of ramifications:
- we'll need to teach various layers like comms that their core can be shut down *while* the event loop is still running - but this is conceptually quite easy IMO.
Yes.
- we'll have a mismatch between some parts of reconfigure and this approach for a bit - but that's reasonably easy to deal with in the short term. Long term, if reconfigure becomes async itself it will be non-problematic.
Agreed. Should not be a problem in itself. But getting the clean store index management async while still allowing the comm loop to run will be a bit of a challenge... Regards Henrik
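The proposed scheme above can be sketched in a few lines. This is an illustration of the LIFO ordering being described, not the actual Squid-3 EventLoop code; in a real implementation each callback would signal completion asynchronously, letting the loop keep servicing disk I/O between callbacks, whereas here they are run synchronously for brevity.

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Shutdown callbacks take an opaque data pointer (the loop itself, in
// Robert's proposal). Illustrative type name from the mail.
typedef void SHUTDOWNEVENT(void *data);

class EventLoop {
public:
    // Register a callback to run at shutdown; builds up a vector.
    void addShutdownEvent(SHUTDOWNEVENT *cb, void *data) {
        callbacks_.push_back(std::make_pair(cb, data));
    }
    // Called when stop() was requested or the loop went idle:
    // the last registered callback runs first, then the next, and so on.
    void runShutdownEvents() {
        while (!callbacks_.empty()) {
            std::pair<SHUTDOWNEVENT *, void *> cb = callbacks_.back();
            callbacks_.pop_back();
            cb.first(cb.second);
        }
    }
private:
    std::vector<std::pair<SHUTDOWNEVENT *, void *> > callbacks_;
};

// Tiny demo: two subsystems register; shutdown runs them in LIFO order.
static std::vector<int> shutdownOrder;
static void stopSubsystemA(void *) { shutdownOrder.push_back(1); }
static void stopSubsystemB(void *) { shutdownOrder.push_back(2); }

std::vector<int> demoShutdownOrder() {
    shutdownOrder.clear();
    EventLoop loop;
    loop.addShutdownEvent(stopSubsystemA, 0);
    loop.addShutdownEvent(stopSubsystemB, 0);
    loop.runShutdownEvents();
    return shutdownOrder; // B registered last, so it runs first
}
```

The LIFO order mirrors teardown depending on construction order: subsystems started last (and possibly depending on earlier ones) shut down first.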
logdaemon and shutdown or reconfigure?
Was reading the new logfile stuff, and logfileClose() in combination with logfileFlush() looks dangerous. To me it looks like the last log segment is lost. Regards Henrik
Re: logdaemon and shutdown or reconfigure?
On Mon, 2006-08-21 at 12:50 +0200, Henrik Nordstrom wrote: Was reading the new logfile stuff, and logfileClose() in combination with logfileFlush() looks dangerous. To me it looks like the last log segment is lost. Never mind that. I was mixing up the logfileFlush and logfileFlushEvent functions.. Regards Henrik
Re: FAQ TOC
On Mon, 2006-08-21 at 17:23 +0200, Kinkie wrote: This said, I'm not opposing the idea; I'd just like to get more reasons to do it :) The main reason is that the FAQ is not a document, it's a collection of very many small notes, loosely coupled together in sections to try to make some order out of them. Very often it is not at all obvious in which section the answer to a given question is to be found. The summary currently missing from the expanded index is mainly a matter of moving it there. Preferably automated (see below). Having it fully automated would be great, and the section summary should be snarfed from the subsection page title, I think. The FAQ does not really have a concise order of chapters. It's a bag of things. Ideally I would see the chapters as category indexes to the questions, with a single question belonging to N categories. But that's probably outside what can be done nicely in a wiki. Regards Henrik
Re: Low portability of logfile-daemon support
On Sat, 2006-08-19 at 04:18 -0600, Adrian Chadd wrote: It's just a malloc() and then sprintf(). Feel free to do what you think you have to, if you haven't already. Perhaps replace it with a memBufPrintf to be consistent with the rest of the code? Regards Henrik
Re: Delay pools starvation
On Fri, 2006-08-18 at 02:01 +0200, Henrik Nordstrom wrote: Ideas on how to avoid starvation when there are multiple connections competing for a delay pool in the generic comm loop framework are welcome. Currently the comm loops take an overly simplistic view of delay pools, and simply kick all deferred connections alive once per second to have them react to delay pools being filled up. But, not too unexpectedly, this leads to a situation where some connections easily get starved. In the old comm loops (still used for poll/select), delayed connections are dealt with separately, randomizing their order. Hmm.. the exact same starvation is seen with poll. Regards Henrik
2.6.STABLE3 released
2.6.STABLE3 has now been released. Regards Henrik
Re: ccache and distcc, take two
On Thu, 2006-08-17 at 01:07 +0200, Kinkie wrote: On Thu, 2006-08-17 at 00:03 +0200, Henrik Nordstrom wrote: On Wed, 2006-08-16 at 01:36 +0200, Kinkie wrote: Hi all, this is a second attempt at getting autoconf/automake to understand our desire to get builds faster, and this attempt works for me. What I don't quite get is why we should need to have configure look for ccache? It's trivial to do this outside configure, and many installations of ccache already set up the system in such a manner that ccache is used by default without the need to do anything. Not all do, and the main reason to me is convenience (admittedly, it should be on by default to get the maximum gain). I just don't really see the point of having this in the package. Making ccache trivial to use is imho more of a business for ccache, and a trivial thing to do at that level. Ideally I'd like to see an interface like the following: ccache make install to have the compile run with ccache. And it's a trivial thing for ccache to provide, by simply adding a wrapper directory to PATH and then invoking make. This borders on selection of which compiler to use, which isn't something there should be configure flags for, so why have configure flags for ccache? Now, your configure magic is nothing magical as such, and there is nothing technically stopping this. But I question adding yet another configure option, especially one which neither has anything to do with the package as such (the results are the same) and which can trivially be done externally, either globally on the whole system, for a specific user, or per invocation. I just don't see it as your task to make ccache easier. It's already trivial to use if you ask me. So I only see the negative aspects of having yet more configure options which people have to understand.. Regards Henrik
Delay pools starvation
Ideas on how to avoid starvation when there are multiple connections competing for a delay pool in the generic comm loop framework are welcome. Currently the comm loops take an overly simplistic view of delay pools, and simply kick all deferred connections alive once per second to have them react to delay pools being filled up. But, not too unexpectedly, this leads to a situation where some connections easily get starved. In the old comm loops (still used for poll/select), delayed connections are dealt with separately, randomizing their order. Regards Henrik
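The anti-starvation trick the old poll/select loops use can be sketched as follows (illustrative code, not Squid's): if deferred connections are always kicked in the same order, the same early descriptors drain the refilled delay pool every second and the tail of the list starves. Shuffling the list before each kick gives every connection an equal chance at the pool. A Fisher-Yates shuffle is shown.

```cpp
#include <cstdlib>
#include <vector>

// Shuffle the list of deferred descriptors in place before kicking them,
// so no fixed position in the list is systematically favoured when the
// delay pool refills. (Classic Fisher-Yates; rand() used for brevity.)
void shuffleDeferred(std::vector<int> &fds) {
    for (size_t i = fds.size(); i > 1; --i) {
        size_t j = (size_t)(rand() % i);  // pick from fds[0..i-1]
        int tmp = fds[i - 1];
        fds[i - 1] = fds[j];
        fds[j] = tmp;
    }
}
```

This only equalizes the long-run chances; a stricter fix would track per-connection wait time and kick the longest-waiting connections first.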
Re: Squid 2.6 and ISA Proxy authentication NTLM.
On Wed, 2006-08-16 at 18:41 +0300, Tsachi wrote: Looking at netstat, it shows that squid holds many open sockets (even 180 and more) for only one client. You should not see this with 2.6. I would like to know if anyone has had a chance to check a scenario similar to this with version 2.6. My machine configuration is: 1. Squid configured for full transparent. 2. Cache parent is an upstream ISA proxy which requires NTLM authentication. 3. Client browser is configured to work with the ISA server on port X. 4. An iptables tproxy rule which intercepts and redirects client destination port X to squid port 3128. Should work, I think, but interception of proxy connections is not something which has been actively tested.. Regards Henrik
Re: ccache and distcc, take two
On Wed, 2006-08-16 at 01:36 +0200, Kinkie wrote: Hi all, this is a second attempt at getting autoconf/automake to understand our desire to get builds faster, and this attempt works for me. What I don't quite get is why we should need to have configure look for ccache? It's trivial to do this outside configure, and many installations of ccache already set up the system in such a manner that ccache is used by default without the need to do anything. For example, ccache on Fedora Core sets up a new PATH element which activates ccache for all the supported compilers, and this PATH element is added in the default user profile automatically. Nothing needs to be done in any application or makefile to use ccache, it's all automagic.
$ echo $PATH
/usr/kerberos/bin:/usr/lib64/ccache:/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:/home/henrik/bin
$ ls -l /usr/lib64/ccache/
total 0
lrwxrwxrwx 1 root root 15 May 30 13.07 c++ -> /usr/bin/ccache
lrwxrwxrwx 1 root root 15 May 30 13.07 cc -> /usr/bin/ccache
lrwxrwxrwx 1 root root 15 May 30 13.07 g++ -> /usr/bin/ccache
lrwxrwxrwx 1 root root 15 May 30 13.07 gcc -> /usr/bin/ccache
lrwxrwxrwx 1 root root 15 May 30 13.07 gcc32 -> /usr/bin/ccache
lrwxrwxrwx 1 root root 15 May 30 13.07 x86_64-redhat-linux-c++ -> /usr/bin/ccache
lrwxrwxrwx 1 root root 15 May 30 13.07 x86_64-redhat-linux-g++ -> /usr/bin/ccache
lrwxrwxrwx 1 root root 15 May 30 13.07 x86_64-redhat-linux-gcc -> /usr/bin/ccache
lrwxrwxrwx 1 root root 15 May 30 13.07 x86_64-redhat-linux-gcc32 -> /usr/bin/ccache
Regards Henrik
Re: Bug review request
On Mon, 2006-08-14 at 10:32 +0200, Guido Serassio wrote: err->request = requestLink(request); But err->request is not always linked to a request; could it be a problem? You have found places where err->request is assigned without a link? ->src_addr: Some confusion here, sometimes: err->src_addr = conn->peer.sin_addr; or err->src_addr = http->conn->peer.sin_addr; some other times: err->src_addr = request->client_addr; What is correct? request, when you have a request. In the few places where errorCon is called before we have a request, NULL can be passed, and the caller has to fill in the available details afterwards, as is done now. In client_side.c (clientAccessCheckDone()) there is: if (http->conn->auth_user_request) err->auth_user_request = http->conn->auth_user_request; else if (http->request->auth_user_request) err->auth_user_request = http->request->auth_user_request; where http is a clientHttpRequest. I think we always have request->auth_user_request now, even with connection-oriented auth. If not, that should be easy to fix. Regards Henrik
Re: Update to 2.6 changeset 10924
On Tue, 2006-08-15 at 20:12 +0200, Pawel Worach wrote: Henrik pointed out on IRC that the changeset at http://www.squid-cache.org/Versions/v2/2.6/changesets/10924.patch doesn't actually work; that is because debug() can't be used that early. Applied.
Re: Bug review request
On Tue, 2006-08-15 at 22:46 +0200, Guido Serassio wrote: In client_side.c at lines 3932, 3943, 4073 and 4285 (unused code) I can't see any err->request assignment. All of these are before the request has been parsed, so there is no request... So like this? err->auth_user_request = request->auth_user_request; I think so. See client_side.c line 463. Regards Henrik
2.6.STABLE3 approaching
Think we are more or less set now for a stable3 release. Please test the tree some extra. And if you know of any issues you think should be fixed for stable3, then now is a good time to speak up. Regards Henrik
Re: 2.6 HEAD doesn't compile
On Sun, 2006-08-13 at 10:27 +0800, Adrian Chadd wrote: On Sun, Aug 13, 2006, Henrik Nordstrom wrote: /home/henrik/SRC/squid/commit-2.6/src/comm.c: In function 'CommWriteStateCallbackAndFree': /home/henrik/SRC/squid/commit-2.6/src/comm.c:85: warning: comparison between pointer and integer I assume this is from Adrian's comm_write optimizations.. Eww. How'd this happen? I wasn't getting any errors here.. I'll check it out in a minute. http://www.squid-cache.org/Versions/v2/2.6/changesets/10918.patch NULL is a (void *)... Regards Henrik
Re: Bug review request
On Sun, 2006-08-13 at 22:37 +0200, Guido Serassio wrote: Hi, I'm trying to close Bug #212. Could someone review the proposed patch? It may still be incomplete; some errors still need to be verified. Looked at it, and my gut feeling is that errorCon should be extended with a request_t argument, pulling all the common details from the request..
- request
- src_addr
- auth_user_request
and maybe more.. there is a bit too much poorly duplicated code surrounding this function today.. Note: fwdFail() tries to patch some of the blank spots, but not all, and is only relevant late in the processing.. Regards Henrik
Re: epoll - funky copied code?
On Sat, 2006-08-12 at 09:51 +1000, Robert Collins wrote: That is - isn't the for loop completely irrelevant? It is.. just a matter of programming style (complex if condition vs smaller conditions with break). Regards Henrik
Re: epoll - funky copied code?
On Sat, 2006-08-12 at 11:22 +0200, Henrik Nordstrom wrote: On Sat, 2006-08-12 at 09:51 +1000, Robert Collins wrote: That is - isn't the for loop completely irrelevant? It is.. just a matter of programming style (complex if condition vs smaller conditions with break). This said, I agree with the change. More consistent with the rest of the code. Regards Henrik
Re: CONNECT invalid-request
On Wed, 2006-08-09 at 13:35 +0300, Liudas Bakšys wrote: http_port 192.168.0.254:8080 transparent vhost Don't use vhost here... vhost is for accelerator mode, not Internet proxy mode. transparent is all you need for a transparently intercepting Internet proxy. In accelerator mode CONNECT doesn't make sense (it's only relevant to web proxies, not web servers) and is therefore blocked to avoid nasty security implications. Regards Henrik
Re: Download hanging
On Tue, 2006-08-08 at 07:51 +0800, Steven wrote: I did have one other thought. If you set max-object-size on a cache_dir, it effectively allows an object to use up to max-object-size worth of RAM before it will start to swap out. I can understand this being needed if there is no Content-Length header, but should the following code also do a check against the Content-Length header if the header has given us a size? store.c near line 200: if ((e->store_status != STORE_OK) && (swapout_size > store_maxobjsize)) { Not sure. I think COSS wants the object to be spooled to the cache_dir in one chunk, not small pieces at a time over an extended period of time. But if COSS can handle an object being swapped out slowly, then sure. If e->mem_obj->reply->content_length is > 0, I don't believe that we need to wait for the object to grow too large before we can make a decision on where to store it. Agreed. But that part of the code needs to be audited for the change. Regards Henrik
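The decision being discussed can be sketched as a small predicate (hypothetical helper names, not Squid code): when the reply carries a Content-Length, the "too big for this store?" decision can be made immediately, instead of buffering up to max-object-size worth of RAM before finding out. A negative content length here stands in for "no Content-Length header".

```cpp
// Hypothetical sketch of the early size check proposed above.
// contentLength < 0 means the reply carried no Content-Length header.
bool canDecideEarly(long contentLength) {
    return contentLength >= 0; // header present: final size is known up front
}

// true  -> the object may be swapped out to this cache_dir
// false -> it exceeds the dir's max-object-size; don't buffer RAM for it
bool fitsStore(long contentLength, long curSize, long storeMaxObjSize) {
    if (canDecideEarly(contentLength))
        return contentLength <= storeMaxObjSize;
    // Unknown length: can only reject once the object has grown too large.
    return curSize <= storeMaxObjSize;
}
```

With a known length, an oversized object is rejected at byte zero; without one, the rejection still has to wait until curSize crosses the limit, which is the RAM cost Steven describes.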
Re: logfile daemon stuff
On Tue, 2006-08-08 at 17:29 +0800, Adrian Chadd wrote: * Reimplement logfile-daemon.c to not use stdio. I've noticed via strace that the Linux stdio implements its reading using one-byte read()s. That depends on your stdio buffer setting. To get bigger reads you need block/full buffering on stdin. The default for pipes and terminals is unbuffered. One note: I assume you do know that ipcCreate() currently redirects stderr to cache.log at the time the ipc channel is created. This is why all helpers are restarted on squid -k rotate. I think it would be a good idea to change this to have stderr from helpers returned back to Squid, who then logs it to the debug log.. This would allow us to not restart helpers on squid -k rotate. This is currently a little problem in setups using unlinkd, as the initial cache.log then gets locked by the unlinkd process, which is never restarted. If I am not mistaken, your new log helper shares this property.. Not a big deal on UNIXes, as there the file becomes anonymous when the last name goes away, but not all platforms are this forgiving in handling renames/deletes of open files.. Regards Henrik
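The buffering fix Henrik suggests is a one-liner with setvbuf(): request block (full) buffering on the daemon's input stream so stdio reads arrive in large chunks rather than byte at a time. A minimal sketch, with an arbitrary example buffer size; setvbuf() must be called before the first I/O on the stream, and the buffer must outlive the stream.

```cpp
#include <cstdio>

// Buffer for the stream; static so its lifetime covers the stream's.
// 32 KB is an arbitrary example size, not a value from the Squid source.
static char inbuf[32768];

// Switch a stream (e.g. the log daemon's stdin) to full buffering.
// Returns true on success. Must run before the first read on the stream.
bool enableBlockBuffering(FILE *stream) {
    return setvbuf(stream, inbuf, _IOFBF, sizeof(inbuf)) == 0;
}
```

In the log daemon, calling enableBlockBuffering(stdin) as the first thing in main() would address the one-byte read() pattern seen under strace without abandoning stdio.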
Re: Download hanging
On Mon, 2006-08-07 at 17:50 +0800, Steven wrote: Once an object reaches store_maxobjsize (in my case 128K), it falls through and eventually runs storeCheckCachable() and figures out that the object is not cachable. The problem here is that if all data has been sent to the client, there is nothing to kick the store entry into freeing the memory. Makes sense. The attached patch is one solution. Hmm. I found the code not very obvious to follow here, and your patch didn't exactly improve this.. What do you think about having the call at the end of storeCheckCachable() just after the object is marked uncacheable, or perhaps better yet moving that part of the logic up to storeSwapOut, making the conditions easier to follow. I.e. something like the attached variant of the patch. Regards Henrik

Index: src/store.c
===================================================================
RCS file: /cvsroot/squid/squid/src/store.c,v
retrieving revision 1.565
diff -u -p -r1.565 store.c
--- src/store.c	17 Jul 2006 02:32:00 -0000	1.565
+++ src/store.c	7 Aug 2006 10:51:22 -0000
@@ -1230,8 +1230,6 @@ storeCheckCachable(StoreEntry * e)
 	store_check_cachable_hist.yes.Default++;
 	return 1;
     }
-    storeReleaseRequest(e);
-    EBIT_CLR(e->flags, ENTRY_CACHABLE);
     return 0;
 }
Index: src/store_swapout.c
===================================================================
RCS file: /cvsroot/squid/squid/src/store_swapout.c,v
retrieving revision 1.94
diff -u -p -r1.94 store_swapout.c
--- src/store_swapout.c	2 Jun 2006 00:07:40 -0000	1.94
+++ src/store_swapout.c	7 Aug 2006 10:51:22 -0000
@@ -241,11 +241,16 @@ storeSwapOut(StoreEntry * e)
     if (e->swap_status == SWAPOUT_NONE) {
 	assert(mem->swapout.sio == NULL);
 	assert(mem->inmem_lo == 0);
-	if (storeCheckCachable(e))
+	if (storeCheckCachable(e)) {
 	    storeSwapOutStart(e);
-	else
+	} else {
+	    storeReleaseRequest(e);
+	    /* free any unwanted mem data to ensure it doesn't get
+	     * stuck in memory
+	     */
+	    storeSwapOutMaintainMemObject(e);
 	    return;
-	/* ENTRY_CACHABLE will be cleared and we'll never get here again */
+	}
     }
     if (NULL == mem->swapout.sio)
 	return;
Re: Another defer reference
On Mon, 2006-08-07 at 18:26 +0800, Steven wrote:
2006/08/04 16:01:42| WARNING! Your cache is running out of filedescriptors
2006/08/04 16:01:42| comm_call_handlers(): WARNING defer handler for fd=12 (desc=HTTP Socket) does not call commDeferFD() - backing off manually
Thinking.. what is the rationale behind having the defer functions call commDeferFD instead of letting the comm loop do this? I.e. making that check the default rather than the exception.. This patch should fix the second message. Thanks. Applied. Regards Henrik
Re: Commit changelogs
On Mon, 2006-08-07 at 13:04 +0200, Guido Serassio wrote: Yes, I also prefer a solution without any special markup, but this solution requires that the committer knows what he is doing, and it's not so big an effort as the old 2.5 patch management :-) What about the following simple rule: the first paragraph must be >= 20 characters and <= 120. If it doesn't start with "Bug" then it must additionally be >= 40 characters long, and it must be followed by a second paragraph further explaining the impact of the change. I.e. a title and a description are required, separated. And to further guide the committer there may be a commit template message looking something like the following:
Bug:
Title:
Description:
where the keywords are automatically transformed into the kind of text we have been using up to now.. [Bug #xxx: ]Title.. Description.. Regards Henrik
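The proposed rule is mechanical enough to state as a checker. Below is a sketch of it as one function (purely illustrative; the length limits follow my reading of the rule above, and no such checker exists in the Squid tree): the title must be 20-120 characters; unless it starts with "Bug" it must additionally be at least 40 characters and be accompanied by a description paragraph.

```cpp
#include <string>

// Validate a commit message against the rule proposed in the mail.
// title: the first paragraph; description: the second paragraph ("" if absent).
bool commitMessageOk(const std::string &title, const std::string &description) {
    if (title.size() < 20 || title.size() > 120)
        return false;                   // title length out of bounds
    if (title.compare(0, 3, "Bug") == 0)
        return true;                    // bug reference: title alone may do
    // No bug reference: require a longer title plus a description paragraph.
    return title.size() >= 40 && !description.empty();
}
```

Such a check could run from a server-side commit hook, rejecting the commit with a pointer to the template when it fails.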
Re: Another defer reference
On Mon, 2006-08-07 at 19:24 +0800, Steven wrote: If you let the defer function call commDeferFD, you don't run through the comm loop 2 times to cause an FD to back off (once to set the defer/backoff flag, and once to actually back off). The code in the comm loop is there as a fail-safe, but is not the most optimal solution. But the defer function is only called from there.. how could the number of loops differ between the defer function calling commDeferFD itself and its only caller doing this automatically when the defer function indicates the fd should be deferred? The current flow is:
1. FD is registered for read.
2. I/O event arrives.
3. Defer function defers the FD when called from the comm loop while processing read events.
4. FD stays deferred, not touched by the comm loop again until kicked alive.
And what I propose would give a very similar flow:
1. FD is registered for read.
2. I/O event arrives.
3. If the defer function says the FD should be deferred, then defer it from the comm loop while processing read events.
4. FD stays deferred, not touched by the comm loop again until kicked alive.
Regards Henrik
Re: Download hanging
On Mon, 2006-08-07 at 19:38 +0800, Steven wrote: From what I can tell, the attached patch will break the current behaviour. storeCheckCachable() needs to remove the ENTRY_CACHABLE flag so that other parts of the code know that they can free the memory. There are only two users of storeCheckCachable(). One is the one we are discussing and mucking with, and there is a storeReleaseRequest() call which automatically clears ENTRY_CACHABLE. The patch moved this from storeCheckCachable() to the call in storeSwapOut() in an attempt to make the flow in storeSwapOut() easier to follow. The other is a call when the swapout file has been closed, and there any storeReleaseRequest() should be redundant, as it has already been done elsewhere in the code if there was any problem.. (bad content length etc). That part also already calls storeSwapOutMaintainMemObject(). But reading storeCheckCachable() again, there is one case which my patch didn't handle: negatively cached objects. My patch threw those out.. Is the attached patch what you meant (free the memory as soon as we clear the flag in storeCheckCachable())? Yes, that was my original thought. I just wanted to try to clean up the code flow in storeSwapOut while messing with this, but continuing to do it the ugly way is simpler, so let's stick to that. So let's go for your first patch, as it's closest to how the code is used today, which should result in the least surprises.. Hopefully nobody will need to go near this function again in 2.x. Regards Henrik
Re: Download hanging
Steven, do you think this could be the cause of Bug #1304? Never got to the bottom of that bug, and it seemed to disappear, but it feels like this could be the root cause... Regards Henrik
Re: Commit changelogs
mån 2006-08-07 klockan 14:44 +0200 skrev Guido Serassio: What about an optional tag allowing a minimum length override? If we go to the tagged/guided form there will most likely be no lower limits other than that there must be a title and a following description. For now I think I'll just ignore the issue, and instead remind everyone that it's a good idea to write sane commit messages. For me it's good enough, and if all goes well the commit rate should continue to decline rapidly in 2.6 now. For Squid-3 it isn't even set in stone which version control tool should be used. The fight is currently between CVS (or maybe SVN, to keep with the same philosophy but a better engine) and Baz. With CVS we have a server enforcing rules on us; with baz it's less controlled, with the benefit of being able to easily branch anywhere in any manner desired without losing track. monotone has already been disqualified, at least in its current incarnation. Its main benefit compared to baz would be the multihead support, which allows delayed conflict resolution (any developer can resolve the conflict, not only the last one to commit) and seamless flow of information. But it fell badly on cross-branch merge support, with no cherrypicking and a failure to handle files added on two branches in any sane manner, not even providing sane diff capabilities to compare the two copies in such cases. Also the seamless flow of information feels a bit scary for the uninvited. Regards Henrik
Re: Download hanging
mån 2006-08-07 klockan 17:50 +0800 skrev Steven: Once an object reaches store_maxobjsize (in my case 128K), it falls through and eventually runs storeCheckCachable() and figures out that the object is not cachable. The problem here is that if all data has been sent to the client, there is nothing to kick the store entry into freeing the memory. The attached patch is one solution. The other option I can see would be to call storeCheckCachable() just before line 212: swapout_able = storeSwapOutMaintainMemObject(e); I think the attached patch is the better option. Applied. Regards Henrik
Re: Occasional DNS error in Squid 2.6
lör 2006-07-29 klockan 09:35 +0200 skrev Guido Serassio: I have found an occasional DNS resolution error when browsing www.microsoft.com. I have seen the error only a few times, fewer than 10, but the odd thing is that it always happens only with www.microsoft.com, and sometimes it also happened on 2.5: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=367963 A page reload is always successful. Can you test if you see the same with the DNS patch in Bug #1602? Regards Henrik
Re: Download hanging
lör 2006-08-05 klockan 15:49 +0800 skrev Adrian Chadd: When I went through this exercise a while ago my solution was to make sure I 'kicked' (ie, started another comm read event) the server side if all the clients had read all the data they could (and thus -someone- had to kickstart an IO event.) We kind of do today as well, with the defer function being mainly responsible for when to back off, and I/O kicked alive again by other functions when they think I/O needs to happen. The main reason why the defer function exists is actually delay pools, where there is no practical signal going back to the filedescriptor when the pools have been filled up. The store input backoff can very easily be moved to storeAppend() today, eliminating that aspect of the defer function. Ideally delay pools would kick the I/O going again when the pool has been filled up, and we could then get rid of the periodic repoll of deferred connections, relying almost exclusively on storeDeferRead/storeResumeRead signaling to indicate when I/O should take place. (Not sure delay pools should be using these.. probably the low level commDefer/ResumeFD like today, but maybe..) Regards Henrik
Re: Download hanging
lör 2006-08-05 klockan 15:45 +0800 skrev Steven:

 if (fd >= 0 && mem->inmem_hi - mem->inmem_lo >
     SM_PAGE_SIZE + Config.Store.maxInMemObjSize + Config.readAheadGap) {
-    storeDeferRead(e, fd);
-    return 1;
+    if (storeLowestMemReaderOffset(e) != mem->inmem_hi) {
+        storeDeferRead(e, fd);
+        return 1;
+    }
 }

Hmm.. first I thought this effectively disables the whole block, but maybe you are right. Let's see. The purpose of this block is:

/* Just a small safety cap to defer storing more data into the object
 * if there already is way too much. This handles the case when there
 * is disk clients pending on a too large object being fetched and a
 * few other corner cases. */

Under these conditions, what does storeLowestMemReaderOffset return? Well, it depends. If there are no clients at all it returns inmem_hi + 1. If there are no other clients it returns inmem_hi + 1. If the memory client is still around it returns the offset of the memory client.. so no, I don't think the above will work out (even if fixed for the +1). It both allows memory usage to explode and still allows the stall to happen, making the code even more unpredictable.. It's then better to keep it as it is and look into why data didn't get freed, or delete the block and accept that memory usage may explode under these conditions.. Note: as long as we allow multiple clients on the same incomplete object we will run into corner cases like this, and it's far from obvious how to handle them all.. In the specific case you are seeing, the correct fix is to look into why memory had not been freed. It should have been freed. The condition above is there to avoid blowing up memory usage out of all proportion, which would very negatively impact everything, not just this request, so disabling it is not a good option. Regards Henrik
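For reference, the safety-cap test quoted above can be read as a standalone predicate. This is a sketch with simplified types; SM_PAGE_SIZE and the two config values are passed as parameters rather than taken from the real Squid structures:

```c
#include <assert.h>

/* Defer storing more data when the in-memory window (inmem_hi - inmem_lo)
 * already exceeds one page plus the max in-memory object size plus the
 * read-ahead gap. fd < 0 means there is no network fd to defer. */
static int
should_defer_store_read(int fd, long inmem_hi, long inmem_lo,
                        long page_size, long max_inmem, long read_ahead_gap)
{
    if (fd < 0)
        return 0;
    return (inmem_hi - inmem_lo) > page_size + max_inmem + read_ahead_gap;
}
```

With e.g. a 4 KB page, 128 KB max in-memory object and a 16 KB read-ahead gap, the cap sits at 151552 bytes of in-memory window.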
Re: [Fwd: squid.conf stuff]
On Tue, 2006-08-01 at 01:53 +1200, Reuben Farrelly wrote: I wrote this a while ago...does it seem suitable to put in -CVS or is it a bit long? Length is OK I think, but Adrian is the guy to speak to for COSS content. COSS is still experimental, so there is no feature freeze in the COSS implementation. From memory the bit about specifying the number of threads in configure can now come out, yes? I think so, yes. COSS now gets a few threads automatically (6 + 3 per additional COSS store). If so, I'm happy to create a diff for it if that makes it easier to merge. Diffs are preferred for submissions; text is easier in discussion. So both, with text inline and diff attached, is optimal. Regards Henrik
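Reading "6 + 3 per additional COSS store" as arithmetic gives the thread counts below. The numbers come from the mail above; the formula shape is my interpretation, not taken from the COSS code:

```c
#include <assert.h>

/* Threads allocated for n COSS cache_dirs: 6 for the first store,
 * plus 3 for each additional one. */
static int
coss_thread_count(int n_coss_dirs)
{
    if (n_coss_dirs < 1)
        return 0;                     /* no COSS stores, no COSS threads */
    return 6 + 3 * (n_coss_dirs - 1);
}
```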
Re: Patch for squid short expiration time
mån 2006-07-31 klockan 11:25 +0200 skrev Eduard Veleba: the patch we sent you contains another one small change - possibility of setting expiration time in refresh_pattern in seconds, not only minutes. It simply doesn't multiply the time with 60, when there's letter 's' right after the number. Is there any possibility to add this feature to squid? Yes. See the discussion thread on squid-dev regarding this. http://www.squid-cache.org/mail-archive/squid-dev/200607/0060.html regards Henrik signature.asc Description: Detta är en digitalt signerad meddelandedel
Re: cvs commit: squid mkrelease.sh
mån 2006-07-31 klockan 12:43 +0200 skrev Guido Serassio: Hi Henrik, At 12.39 31/07/2006, Henrik Nordstrom wrote: Hmm.. ah. cvsps got a bit confused. Not by this change but by the tag being moved after the changesets had been updated (minor packaging bugfix, had forgot to update ChangeLog release date). Fixed. One problem still left :-) http://www.squid-cache.org/Versions/v2/2.6/changesets/SQUID_2_6_STABLE2.html Fixed. Really should redo this to automatically render the release file when finding the tag.. but no time now. Regards Henrik signature.asc Description: Detta är en digitalt signerad meddelandedel
Re: Squid 2.6 STABLE2 + COSS
mån 2006-07-31 klockan 21:49 +0800 skrev Adrian Chadd: On Mon, Jul 31, 2006, Steven Wilton wrote: The code does only touch the coss directory (except for the comments in cf.data.pre) if that makes any difference. Or maybe I should check whether it's been released before posting :) Why? There'll be a squid-2.6stable3 at this rate. :) Sure. In a few weeks (or earlier if we have messed up badly somewhere). So it's best to submit any changes soon allowing proper review. Regards Henrik signature.asc Description: Detta är en digitalt signerad meddelandedel
Re: How test HTCP ?
lör 2006-07-29 klockan 19:45 +0200 skrev Guido Serassio: I want to test the forward port of HTCP changes from 2.6 to 3.0, but I don't know how to check all the HTCP functionality. Most of the code is tested by peering with 2.5 and 2.6 in both directions. The CLR support needs an HTCP client. Unfortunately we don't have any.. Perhaps one should be written for HTCP and ICP. Regards Henrik
Re: Any takers on Bug #1703? (diskd stuck at 100% CPU)
sön 2006-07-30 klockan 09:56 +0200 skrev Guido Serassio: I think you have likely found the problem; look at the log attached to this Squid bug on Debian, opened by the same people: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=380020 2006/07/26 16:52:21| ipcCreate: /usr/lib/squid/diskd_daemon: (2) No such file or directory Ah, explains a bit... The problem seems to be in ipcCreate(): currently only the child can know if the program to be launched exists. What about adding a check for the existence of the program before fork()? Thinking. Why are we using sysv ipc for diskd? Why not the normal ipc channel created by ipcCreate? But we should at least monitor it for close... The path check already exists for other directives. requirePathnameExists(). Regards Henrik
Re: How test HTCP ?
sön 2006-07-30 klockan 19:12 +0200 skrev Guido Serassio: Any suggestion on the peering setup?, excluding the htcp option... :-) Nothing else particular I can think of.. The CLR support needs an HTCP client. Unfortunately we don't have any.. Perhaps one should be written for HTCP and ICP. Yes, something like squidclient. Actually it shouldn't be hard to extend squidclient, I think. Regards Henrik
Re: COSS Crash + WCCP while rebuilding
tor 2006-07-27 klockan 16:00 +0800 skrev Steven Wilton: The first is a config option for wccp2 to make squid wait until all cache_dirs have finished rebuilding before squid will register itself with WCCP. This will allow the rebuild to happen quickly, and avoid slow web requests while the cache rebuilds. Applied. The second is a trivial patch to the storeSwapMetaUnpack() function to make it initialise a variable. This was causing crashes with COSS when the first object in the buffer was broken. Got applied a few days ago as Bug #1685 Regards Henrik signature.asc Description: Detta är en digitalt signerad meddelandedel
Re: Any takers on Bug #1703? (diskd stuck at 100% CPU)
sön 2006-07-30 klockan 09:56 +0200 skrev Guido Serassio: 2006/07/26 16:52:21| ipcCreate: /usr/lib/squid/diskd_daemon: (2) No such file or directory A quite large but nonintrusive patch applied to detect this early, refusing to start at all. The problem seems to be in ipcCreate(): Yes and no. Helper existence is supposed to be verified early in the startup to alert on stupid errors. But ipcCreate() could do better.. Currently only the child can know if the program to be launched exists. What about adding a check for the existence of the program before fork()? Did something similar to what you proposed in the patch, but as a fatal error. Regards Henrik
Re: Patch for the getpwnam_auth helper
mån 2006-07-03 klockan 21:33 -0300 skrev Giancarlo Razzolini: Here it is. Please send any comments, critics or suggestions. Did the patch against the getpwnam.c from 2.6.STABLE1. Also i've started today writing the documentation for it. Applied to Squid-3. Regards Henrik signature.asc Description: Detta är en digitalt signerad meddelandedel
Re: Occasional DNS error in Squid 2.6
lör 2006-07-29 klockan 09:35 +0200 skrev Guido Serassio: Hi, I have found an occasional DNS resolution error when browsing www.microsoft.com. I have seen the error only a few times, fewer than 10, but the odd thing is that it always happens only with www.microsoft.com, and sometimes it also happened on 2.5: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=367963 A page reload is always successful. Could be Bug #1602. www.microsoft.com is dangerously close to the UDP DNS size limit of 512 octets.. Regards Henrik
Re: Occasional DNS error in Squid 2.6
lör 2006-07-29 klockan 12:47 +0200 skrev Henrik Nordstrom: lör 2006-07-29 klockan 09:35 +0200 skrev Guido Serassio: Hi, I have found an occasional DNS resolution error when browsing www.microsoft.com. I have seen the error only a few times, fewer than 10, but the odd thing is that it always happens only with www.microsoft.com, and sometimes it also happened on 2.5: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=367963 A page reload is always successful. Could be Bug #1602. www.microsoft.com is dangerously close to the UDP DNS size limit of 512 octets.. Could also be a broken bind version.. some versions of bind have had problems with CNAME chains and TTL expiry. An Ethereal trace of port 53 traffic will tell what the problem is. Regards Henrik
Re: Any takers on Bug #1703? (diskd stuck at 100% CPU)
lör 2006-07-29 klockan 18:05 +0800 skrev Steven: I could reproduce the bug if I had a COSS cache_dir enabled without any aufs cache_dirs. I've updated the bug with a patch to fix this scenario. I think the COSS issue is separate. Based on your patch, that problem should be seen immediately on startup, and not after a squid -k rotate. Also, are you sure the symptoms are really the same? In bug #1703 Squid seems to be stuck calling msgrcv() repeatedly. Regards Henrik
Re: [squid-users] tproxy2 patch for squid3
lör 2006-07-29 klockan 09:44 +0200 skrev Jan Engelhardt: Hello, Regular client-side transparent proxying is easily accomplished by redirecting network traffic using -j DNAT, -j REDIRECT, or -j TPROXY (I do not know why this seems needed). However, server-side transparency requires a little more kick. https://lists.balabit.hu/pipermail/tproxy/2006-July/000273.html This patch actually brewed in my homemade version of squid3 and worked long before tproxy even hit the squid2.6 scene. CAP_NET_ADMIN must be accounted for by the user, and in my case is easily done through the MultiAdmin linux kernel module. Comments: Your patch does not handle persistent connections. If there are multiple clients talking to the same server, their requests may get intermixed, no longer keeping the source IP binding. Why the commConnectStart2 function instead of extending commConnectStart? Regards Henrik

diff --fast -Ndpru squid-3.0.PRE4-20060727~/src/cf.data.pre squid-3.0.PRE4-20060727/src/cf.data.pre
--- squid-3.0.PRE4-20060727~/src/cf.data.pre	2006-07-02 18:53:46.0 +0200
+++ squid-3.0.PRE4-20060727/src/cf.data.pre	2006-07-28 15:56:59.629577000 +0200
@@ -2852,6 +2852,16 @@ DOC_START
 	the correct result.
 DOC_END
 
+NAME: tproxy
+TYPE: onoff
+DEFAULT: off
+LOC: Config.onoff.tproxy
+DOC_START
+	If you have Linux with iptables and TPROXY2 support, you can enable
+	this option to have SQUID make outgoing connections using the original
+	IP address of the client.
+DOC_END
+
 NAME: tcp_outgoing_tos tcp_outgoing_ds tcp_outgoing_dscp
 TYPE: acl_tos
 DEFAULT: none
diff --fast -Ndpru squid-3.0.PRE4-20060727~/src/comm.cc squid-3.0.PRE4-20060727/src/comm.cc
--- squid-3.0.PRE4-20060727~/src/comm.cc	2006-05-30 23:15:58.0 +0200
+++ squid-3.0.PRE4-20060727/src/comm.cc	2006-07-28 15:57:02.299577000 +0200
@@ -39,8 +39,10 @@
 #include "StoreIOBuffer.h"
 #include "comm.h"
 #include "fde.h"
+#include "forward.h"
 #include "CommIO.h"
 #include "ConnectionDetail.h"
+#include "HttpRequest.h"
 #include "MemBuf.h"
 #include "pconn.h"
 #include "SquidTime.h"
@@ -52,6 +54,7 @@
 #include <netinet/tcp.h>
 #endif
 
+#include <ip_tproxy.h>
 
 class ConnectStateData
 {
@@ -66,7 +69,7 @@
 public:
     char *host;
     u_short port;
-    struct sockaddr_in S;
+    struct sockaddr_in S, src_addr;
     CallBackCNCB callback;
     struct IN_ADDR in_addr;
@@ -1150,6 +1153,26 @@ ConnectStateData::operator delete (void
     cbdataFree(address);
 }
 
+void commConnectStart2(int fd, const char *host, u_short port, CNCB *callback,
+    FwdState *fs)
+{
+    ConnectStateData *cs;
+
+    cs = new ConnectStateData;
+    cs->fd = fd;
+    cs->host = xstrdup(host);
+    cs->port = port;
+    cs->callback = CallBackCNCB(callback, fs);
+    if (fs->request != NULL) {
+        cs->src_addr.sin_addr = fs->request->client_addr;
+        cs->src_addr.sin_port = htons(fs->request->client_port);
+    } else {
+        memset(&cs->src_addr, 0, sizeof(cs->src_addr));
+    }
+    comm_add_close_handler(fd, commConnectFree, cs);
+    ipcache_nbgethostbyname(host, commConnectDnsHandle, cs);
+}
+
 void
 commConnectStart(int fd, const char *host, u_short port, CNCB * callback, void *data)
 {
@@ -1353,7 +1376,7 @@ ConnectStateData::connect()
     if (S.sin_addr.s_addr == 0)
         defaults();
 
-    switch (comm_connect_addr(fd, &S)) {
+    switch (comm_connect_addr(fd, &S, &src_addr)) {
 
     case COMM_INPROGRESS:
         debug(5, 5) ("ConnectStateData::connect: FD %d: COMM_INPROGRESS\n", fd);
@@ -1406,9 +1429,45 @@ commSetTimeout(int fd, int timeout, PF *
     return F->timeout;
 }
 
-int
+static void do_tproxy(int sock, const struct sockaddr_in *src,
+    const struct sockaddr_in *dest)
+{
+    struct in_tproxy itp;
+    int ret;
 
-comm_connect_addr(int sock, const struct sockaddr_in *address)
+    memset(&itp, 0, sizeof(itp));
+    itp.v.addr.faddr = src->sin_addr; // fix endianness
+    itp.v.addr.fport = 0; //src->sin_port;
+    itp.op = TPROXY_ASSIGN;
+
+    if ((ret = setsockopt(sock, SOL_IP, IP_TPROXY, &itp, sizeof(itp))) != 0) {
+        debug(5, 3) ("setsockopt IP_TPROXY/TPROXY_ASSIGN failed\n");
+        return;
+    }
+
+    memset(&itp, 0, sizeof(itp));
+    itp.v.addr.faddr = dest->sin_addr;
+    itp.v.addr.fport = dest->sin_port;
+    itp.op = TPROXY_CONNECT;
+    if ((ret = setsockopt(sock, SOL_IP, IP_TPROXY, &itp, sizeof(itp))) != 0) {
+        debug(5, 3) ("setsockopt IP_TPROXY/TPROXY_CONNECT failed\n");
+        return;
+    }
+
+    memset(&itp, 0, sizeof(itp));
+    itp.v.flags = ITP_CONNECT;
+    itp.op = TPROXY_FLAGS;
+    if ((ret = setsockopt(sock, SOL_IP, IP_TPROXY, &itp, sizeof(itp))) != 0) {
+        debug(5, 3) ("setsockopt IP_TPROXY/TPROXY_FLAGS failed\n");
+        return;
+    }
+
+    return;
+}
+
+int
+comm_connect_addr(int sock, const struct sockaddr_in *address,
+    const struct sockaddr_in *src)
Re: [squid-users] tproxy2 patch for squid3
lör 2006-07-29 klockan 17:05 +0200 skrev Jan Engelhardt: The relevant parts of the code to fix this is in FwdState::pconnPush and FwdState::connectStart fwdPconnPool-pop(). What would I have to add? You would need to extend the key used in these functions with at least the source IP of the client (== source IP of connection) when tproxied.. 0.0.0.0 otherwise. Regards Henrik signature.asc Description: Detta är en digitalt signerad meddelandedel
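The suggestion above can be sketched as a key-building helper: fold the client source IP into the idle-connection (pconn) key when tproxy is active, "0.0.0.0" otherwise, so a persistent server connection is only reused for the same spoofed source. The key format and function name here are illustrative only, not the actual pconn code:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build a pconn lookup key. client_ip is NULL when the connection is
 * not tproxied, in which case the neutral "0.0.0.0" is used so all
 * non-tproxied requests share the same pool entry. */
static void
pconn_key_sketch(char *buf, size_t len, const char *host, int port,
                 const char *client_ip)
{
    snprintf(buf, len, "%s:%d-%s", host, port,
             client_ip ? client_ip : "0.0.0.0");
}
```

Two tproxied clients behind the proxy then hash to different pool entries, so a spoofed-source connection can never be handed to the wrong client.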
Re: Any takers on Bug #1703? (diskd stuck at 100% CPU)
lör 2006-07-29 klockan 23:16 +0800 skrev Steven: I was seeing the msgrcv() calls while running strace, but it wasn't in the same loop as reported in the bug. Looks like I just found another bug while trying to reproduce this one :) Was not aware there were msgrcv() calls in pthreads. We don't have a backtrace in the bug, so it could be the same and I was chasing down the wrong path... Guess we have to wait for Ralf to answer about the details of his setup. Regards Henrik
Re: Patch for squid short expiration time
fre 2006-07-28 klockan 10:55 +0200 skrev Eduard Veleba: Dear Squid developers, in our company Seznam.cz we have used your application Squid for a long time, for example at our service http://wiki.mapy.cz, and we are very satisfied with it. There was recently a demand to cache pages that have an expiration time shorter than 1 minute, which your system doesn't allow. So we modified the code a little bit. This modification is an easy patch and we would very much appreciate it if you use it in Squid for other users. The patch is sent as an attachment and is a patch for the Debian source squid 2.5.9-10sarge2. Thanks, but we have already added a similar directive in squid-3.

NAME: minimum_expiry_time
COMMENT: (seconds)
TYPE: time_t
LOC: Config.minimum_expiry_time
DEFAULT: 60 seconds
DOC_START
	The minimum caching time according to (Expires - Date) headers
	Squid honors if the object can't be revalidated, defaults to
	60 seconds. In reverse proxy environments it might be desirable
	to honor shorter object lifetimes. It is most likely better to
	make your server return a meaningful Last-Modified header
	however. In ESI environments where page fragments often have
	short lifetimes, this will often be best set to 0.
DOC_END

Regards Henrik
Re: Patch for squid short expiration time
fre 2006-07-28 klockan 22:33 +0200 skrev Henrik Nordstrom: and we are very satisfied with it. There was recently a demand to cache pages that has set expiration time shorter than 1 minute, that your system doesn´t allow. So we modified the code a Thanks, but we have already added a similar directive in squid-3. NAME: minimum_expiry_time Now also backported to Squid-2.6. Regards Henrik signature.asc Description: Detta är en digitalt signerad meddelandedel
Any takers on Bug #1703? (diskd stuck at 100% CPU)
Looked at it briefly, but ran out of ideas. http://www.squid-cache.org/bugs/show_bug.cgi?id=1703 Regards Henrik
Re: Patch for squid short expiration time
fre 2006-07-28 klockan 22:24 +0100 skrev Michael Pye: However this backport doesn't allow you to specify the minimum refresh time in seconds. You may have a situation where you want the minimum refresh time to be 20 seconds, but still have some other refresh_patterns that use 40 seconds. Right, that was what those parts were about... must be a bit tired.. Would it be possible to add this? Yes. Eduard's patch seems to allow you to add an 's' after the minimum refresh time to indicate seconds, so either this or another directive like minimum_refresh_seconds on could be used, where seconds is assumed in refresh_pattern min times. I would be happy to submit something for that. As it's a new feature, however, I guess it needs to go into squid3 first? Correct, but the code is basically the same in this area, so there should not be any issues porting the changes to squid3. You are welcome to give it a shot. Should be sufficient to just extend the refresh_pattern directive with the ability to specify seconds. Internally it's seconds anyway, as you probably noticed; it's the config parser which translates from minutes to seconds.. I am not very fond of the format. Would prefer refresh_pattern min [unit] pct max [unit] where unit is seconds/minutes/days, and defaults to minutes if not specified. Makes the parser slightly more complex, but it's worth it I think. Regards Henrik
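A sketch combining the two ideas above: a bare number keeps today's minutes default, while a unit suffix overrides it. Only the single-token form ("30s", "2d") is handled here, and the accepted suffixes are my guess at a reasonable set; the real parser would also need to handle a separate unit word as in Henrik's "min [unit]" proposal. Returns seconds:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Parse a refresh_pattern time token into seconds. Internally Squid
 * already works in seconds; only the config parser does minute math. */
static long
parse_refresh_time(const char *tok)
{
    char *end;
    long n = strtol(tok, &end, 10);

    if (!strcmp(end, "s") || !strcmp(end, "seconds"))
        return n;
    if (!strcmp(end, "h") || !strcmp(end, "hours"))
        return n * 3600;
    if (!strcmp(end, "d") || !strcmp(end, "days"))
        return n * 86400;
    return n * 60;      /* bare number: minutes, the current default */
}
```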
Re: COSS Crash + WCCP while rebuilding
tor 2006-07-27 klockan 11:02 +0200 skrev Henrik Nordstrom: tor 2006-07-27 klockan 16:40 +0800 skrev Adrian Chadd: On Thu, Jul 27, 2006, Steven wrote: They make me use Outlook at work :) Here I go again. Cool! I'm happy for you to commit the store_swapmeta patch. You are welcome to commit both patches. They look fine to me. Only comment is that I maybe would use a smaller retry interval in the WCCP patch to have registration more timely after rebuild have finished. and have it default on.. Regards Henrik signature.asc Description: Detta är en digitalt signerad meddelandedel
WCCPv2 issues
Bug #1696 could use a couple of other eyes. Turns out the router capability processing has never been tested (return instead of break in a switch statement.. see the bug), and something seems fishy there (squid complaining about what to me look like correct packets).. See also http://www.squid-cache.org/mail-archive/squid-users/200607/0450.html, as there were some question marks while decoding the WCCP2_HERE_I_AM sent by Squid. There may be other bugs as well in these WCCP2 exchanges. I don't quite get some of the IOS error messages, but maybe it's just incomplete data. Regards Henrik
Re: WCCPv2 issues
tis 2006-07-25 klockan 23:09 +0200 skrev Henrik Nordstrom: Turns out the router capability processing has never been tested See also http://www.squid-cache.org/mail-archive/squid-users/200607/0450.html as there were some question marks while decoding the WCCP2_HERE_I_AM sent by Squid. There may be other bugs as well in these WCCP2 exchanges. I don't quite get some of the IOS error messages, but maybe it's just incomplete data. I think all pieces of the WCCP2 issue have now fallen into place, except the question marks in the message referenced above. Awaiting testing of the proposed patch before commit. From what it looks like, the equipment he uses is simply not compatible with Squid at the moment, as it seems it only supports mask based assignment, and we only support hash based assignment. Regarding the mask based assignments: I wonder how complex mask arrays typical equipment supports. In the specs it is an unbounded array, only limited by the UDP packet size. The mask specifications can build very powerful expressions, but are also harder to distribute among the farm members. Regards Henrik
Re: WCCPv2 issues
ons 2006-07-26 klockan 11:27 +1200 skrev Reuben Farrelly: Regarding the mask based assignments, I wonder how complex mask arrays typical equipment supports. In the specs it is an unbounded array. Does this help? http://www.cisco.com/en/US/products/hw/switches/ps4324/products_configuration_guide_chapter09186a008062cfc6.html especially WCCP version 2 standard allows for support of up to 256 distinct masks. However, a Catalyst 4500 series switch only supports mask assignment table with a single mask. I suppose this means a single mask/value set. Wonder if there is any limit on the size of the value set.. hopefully not too small, or load distribution will be a bit troublesome. A single mask is sufficient provided the value set can be sufficiently large. But in the worst case this means a single mask value per cache member server, which kind of limits possible setups to power-of-2 cache servers, as all other configurations will have IP areas which can't be redirected if only a single mask value per cache server is supported. Some quick math: they say it supports up to 32 cache servers. Then hopefully they support at least 64 mask values to allow for some kind of distribution. Btw, not sure where they found the supposed standard limit of 256.. nothing is mentioned in the last draft I have access to, and the field width is 32 bits... The only 256 limit mentioned is the hash size for hash distribution, which uses 256 buckets (fixed size). Regards Henrik
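The distribution math above can be checked directly: a single k-bit mask yields 2^k mask values, and spreading them over n caches is only even when n divides 2^k, which is what pushes single-mask setups toward power-of-two farm sizes. A tiny sketch of that arithmetic (the function names are mine, not from the WCCP code):

```c
#include <assert.h>

/* Number of distinct values a single k-bit mask can produce. */
static int
mask_values(int mask_bits)
{
    return 1 << mask_bits;
}

/* Even load distribution is only possible when the cache count divides
 * the value count; otherwise some caches get one extra bucket of IPs. */
static int
distribution_is_even(int mask_bits, int n_caches)
{
    return mask_values(mask_bits) % n_caches == 0;
}
```

With the 64-value / 32-server numbers from the mail, any power-of-two farm up to 32 caches distributes evenly, while e.g. 3 caches cannot.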
Re: Make epoll() a run-time choice?
tis 2006-07-11 klockan 14:34 +0200 skrev Luigi Gangitano: I've just packaged squid-2.6.STABLE1 and squid-3.0.PRE4 for debian, enabling epoll() support at build time. This obviously makes squid fail with kernels older than 2.6.x, which are still supported by debian. During discussions on the debian-devel mailing list, it was proposed to enable a fallback if epoll() is not available at startup, switching to the old poll() behaviour. I think the best solution to this is to write a libevent comm loop. Having the switching in Squid is a little troublesome, and probably not worth the effort with libevent already existing.. I assume libevent is supported by Debian? I admit I don't have the skill to do that. Is this possible in the comm framework? Adding a libevent comm loop certainly is possible. Actually not very different from the native epoll and kqueue comm loops already there. Just a little more per-fd state than epoll/kqueue. The native poll/select loops are very different and are best left ignored when thinking about this.. Regards Henrik
Re: 2.6.STABLE2 approaching
ons 2006-07-19 klockan 21:53 +0200 skrev Guido Serassio: I think that Bug #1681 should also be investigated before STABLE2. The problem is happening on different platforms with the same symptom, too frequently. I will try to do more investigation during the next weekend. Unfortunately this is a bug I can't test very much myself, but 2.6.STABLE is always open to bugfixes. To diagnose this the helper cachemgr stats may be helpful:
busy == waiting for domain
reserved == waiting for client
If there are Squid bugs, I can think of two different scenarios: a) Helper getting stuck as reserved with no client b) Helper getting stuck in some unusable state after -k rotate. Regards Henrik
Re: fd patch to squid
ons 2006-07-19 klockan 15:13 +0200 skrev Martin Stransky: Hello guys, As you probably know, we have one RH specific patch to squid. Yes, found this out by accident some time ago.. It's for dynamic configuration of the number of filedescriptors, via squid.conf. I reviewed it for the 2.6 branch, fixed some bugs (which came with squid reconfiguration), so I think it's stable enough now. A quick review of the patch indicates it's a bit overcomplicated. And I'm not that convinced you really need it either.. A simple directive limiting Squid_MaxFD and a high limit specified to configure would do nearly as well. The parts eliminating SQUID_MAXFD constant dependencies are good. Note: The patch does not account for comm_select_win32.c, causing problems for the win32 platform.. The parts messing with comm_select.c don't feel warranted. Systems using select() are generally not good at handling more than FD_SETSIZE connections anyway, and the select interface as such crumbles a bit when used for many concurrent connections. Also not that keen on using a custom type for the fd_set given to select(), even if we already make very strict assumptions about fd_set being a bit array.. (but I have heard of some systems using an array of ints...) The bit array stuff doesn't feel needed, as it's only used by the changes to comm_select.c. Renaming Squid_MaxFD to SQUID_NUMFD does not feel right. Our naming convention uses all caps for preprocessor constants only, and even with the patch it's still the max limit this Squid can handle after startup.. (same as Squid_MaxFD). What I'd like to see for doing this properly is:
* The stuff eliminating SQUID_MAXFD dependencies in arrays etc.
* The directive for setting the fd limit.
* Checks in comm_select_init or similar to lower Squid_MaxFD if it's above what the select loop implementation can handle. This means comm_select.c and comm_select_win32.c limiting Squid_MaxFD to FD_SETSIZE (eliminating the similar check from main.c). Some thinking needed to get rid of comm_select dependencies.. (probably a new comm_select_fdlimit call needed before comm_init)
* Moving the FD_SETSIZE override from squid.h to comm_select.c, before the include of squid.h..
* After this, SQUID_MAXFD should only be referenced from the FD_SETSIZE override in comm_select.c.
Regards Henrik
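The comm_select_fdlimit idea from the list above can be sketched as follows. Only the call's name and purpose come from the mail; the body is illustrative: the select-based loop caps the configured limit at FD_SETSIZE, while other comm loops would return the request unchanged.

```c
#include <assert.h>
#include <sys/select.h>   /* provides FD_SETSIZE */

/* Called before comm_init: lets the comm loop implementation lower the
 * configured Squid_MaxFD to what it can actually handle. */
static int
comm_select_fdlimit(int requested_maxfd)
{
    if (requested_maxfd > FD_SETSIZE)
        return FD_SETSIZE;    /* select() cannot watch fds beyond fd_set width */
    return requested_maxfd;
}
```

An epoll or kqueue loop would implement the same hook as a no-op, which is what makes the check removable from main.c.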
2.6.STABLE2 approaching
The 2.6.STABLE2 release is approaching with many critical bug fixes, but Bug #1677 is kind of a blocker for the release.. not due to it being a significant Squid bug, but due to it causing some broken web sites which worked in 2.5 to fail..

Regards
Henrik
RE: Tproxy patch
Mon 2006-07-17 at 14:43 +0800, Steven Wilton wrote: I'm compiling under Debian (stable and unstable), and we are still seeing the fd set limited to 1024. It's coming in via the following path: /usr/include/sys/capability.h /usr/include/linux/types.h /usr/include/linux/posix_types.h Is it safe to include all these headers in squid.h before __FD_SETSIZE is redefined? Or is this specific to the Debian include files?

That would be a bit messy. The problem is that those two linux headers aren't supposed to be included at all in userspace applications (only in the kernel). glibc provides its own types. I suppose we could use the same glue as Fedora already has in its sys/capability.h...

--- libcap-1.10/libcap/include/sys/capability.h.foo	Fri Nov  9 16:26:25 2001
+++ libcap-1.10/libcap/include/sys/capability.h	Fri Nov  9 16:28:47 2001
@@ -21,6 +21,16 @@
  */
 #include <sys/types.h>
+#include <stdint.h>
+
+/*
+ * Make sure we can be included from userland by preventing
+ * capability.h from including other kernel headers
+ */
+#define _LINUX_TYPES_H
+#define _LINUX_FS_H
+typedef unsigned int __u32;
+
 #include <linux/capability.h>

Or maybe move all capability-related code out to a separate file.

Regards
Henrik
RE: Tproxy patch
Mon 2006-07-17 at 16:31 +0800, Steven Wilton wrote: That would be a bit messy. The problem is that those two linux headers aren't supposed to be included at all in userspace applications (only in the kernel). glibc provides its own types. I suppose we could use the same glue as Fedora already has in its sys/capability.h...

--- libcap-1.10/libcap/include/sys/capability.h.foo	Fri Nov  9 16:26:25 2001
+++ libcap-1.10/libcap/include/sys/capability.h	Fri Nov  9 16:28:47 2001
@@ -21,6 +21,16 @@
  */
 #include <sys/types.h>
+#include <stdint.h>
+
+/*
+ * Make sure we can be included from userland by preventing
+ * capability.h from including other kernel headers
+ */
+#define _LINUX_TYPES_H
+#define _LINUX_FS_H
+typedef unsigned int __u32;
+
 #include <linux/capability.h>

Or maybe move all capability-related code out to a separate file.

You mean something similar to the attached patch?

Yes, plus the above #define and typedef trickery before including sys/capability.h to shield us from the linux kernel types..

#define _LINUX_TYPES_H
#define _LINUX_FS_H
typedef uint32_t __u32;
#include <sys/capability.h>

Regards
Henrik
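The reason predefining `_LINUX_TYPES_H` works is that the kernel header protects itself with an `#ifndef _LINUX_TYPES_H` include guard; defining the guard macro up front turns the later include into a no-op. A self-contained illustration of the pattern (the "kernel header" body is simulated inline here, so this sketch compiles without libcap or the real kernel headers):

```c
/* Predefine the include guards, exactly as the capability.h glue in
 * the mail does, so the guarded block below (standing in for the body
 * of linux/types.h) is skipped entirely. */
#define _LINUX_TYPES_H
#define _LINUX_FS_H

#ifndef _LINUX_TYPES_H
/* Stand-in for linux/types.h; never compiled, because the guard
 * macro is already defined above. */
typedef long __kernel_fd_set;
#define PULLED_IN_KERNEL_TYPES 1
#endif

/* Supply the one kernel type the userspace header still expects. */
typedef unsigned int __u32;

/* Returns nonzero if the simulated kernel header body was skipped. */
int
guard_trick_worked(void)
{
#ifdef PULLED_IN_KERNEL_TYPES
    return 0;
#else
    return 1;
#endif
}
```

The same idea applies to `_LINUX_FS_H`: any header whose entire body sits inside a conventional include guard can be suppressed this way from the including side.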
RE: Tproxy patch
Mon 2006-07-17 at 11:04 +0200, Henrik Nordstrom wrote: You mean something similar to the attached patch? Yes, plus the above #define and typedef trickery before including sys/capability.h to shield us from the linux kernel types..

#define _LINUX_TYPES_H
#define _LINUX_FS_H
typedef uint32_t __u32;
#include <sys/capability.h>

Applied.

Regards
Henrik
Re: logfile writing
Sun 2006-07-16 at 21:49 +0800, Adrian Chadd wrote: I've written some code to push logfile writing into an external process, freeing up the main squid process from the potentially blocking stdio writes.

Good.

This code isn't ready to be merged into squid-2.6. It's meant more as a prototype. I haven't even reviewed it for 'correctness' besides running it under a reasonable load (200 req/sec) for half a day.

Some comments: Instead of the linked list of separately maintained buffers, have you considered using MemBufs?

* rework the socket IO scheduling code to try and hold onto a buffer until it's full or 1 second has passed. It might not really matter, but it'll be interesting to profile it at very high (1500/sec) request rates.

Not sure I agree with this. It should even out anyway under load, and under low load it's just annoying to have logs delayed.

* figure out what to do about rotate (as it stands, this code only re-opens the logfiles and therefore pushes logfile maintenance into an external script.)

You flush any pending buffers first, right?

Regards
Henrik
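The coalescing buffer discussed above could look roughly like this. This is a minimal sketch using an invented logbuf type, not Squid's MemBuf API; flushing before rotate (as raised later in the thread) is what keeps a single record from being split across files:

```c
#include <string.h>

/* Minimal sketch of a coalescing log buffer: lines accumulate in one
 * buffer and are handed to the writer in a single flush when the
 * buffer fills (or before a rotate).  All names here are invented for
 * illustration; this is not Squid code. */
#define LOGBUF_SZ 4096

struct logbuf {
    char data[LOGBUF_SZ];
    size_t used;        /* bytes currently buffered */
    size_t flushes;     /* how many writes reached the log fd */
};

void
logbuf_flush(struct logbuf *b)
{
    if (b->used == 0)
        return;
    /* a real daemon would write(fd, b->data, b->used) here */
    b->used = 0;
    b->flushes++;
}

void
logbuf_append(struct logbuf *b, const char *line)
{
    size_t len = strlen(line);
    if (len > LOGBUF_SZ)
        return;                  /* oversized record; real code would grow the buffer */
    if (b->used + len > LOGBUF_SZ)
        logbuf_flush(b);         /* buffer full: write it out first */
    memcpy(b->data + b->used, line, len);
    b->used += len;
}
```

Calling logbuf_flush unconditionally on rotate gives the "logs are whole" property: a record is only ever written in one piece, to one file.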
Re: logfile writing
Mon 2006-07-17 at 09:25 +1000, Robert Collins wrote: I did a patch for squid 2.x/early 3.0 that did something similar - and it suffered from dropped log contents during high load... which is something I never got *satisfactorily* addressed - it's the key thing I think needs considering.

From what I can tell this is already addressed in Adrian's patch by buffering log events when writing is not possible.

Regards
Henrik
Re: logfile writing
Mon 2006-07-17 at 09:07 +0800, Adrian Chadd wrote: Robert: I do remember your work having that annoying problem of dropping logfile entries. I've got an idea on how to test it (I'll write a script to throw lots of consecutively-numbered requests at my code and make sure they all appear in the logs.)

Also try adding a bit of delay in the logger daemon, making things buffer up...

Regards
Henrik
Re: logfile writing
Mon 2006-07-17 at 09:07 +0800, Adrian Chadd wrote: You flush any pending buffers first, right? This patch doesn't do that yet. It wasn't that important as buffered data doesn't get lost during a rotate.

The main reason to flush on rotate is to ensure the logs are whole. It is not desirable to have a single log record split over multiple log files...

Regards
Henrik
Re: gcc -rdynamic question
Fri 2006-07-14 at 16:59 +0200, Guido Serassio wrote: Hi, A question regarding the -rdynamic gcc option: it should go into LDFLAGS, correct?

Most likely yes.

Regards
Henrik
Re: Tproxy patch
Wed 2006-07-12 at 09:36 -0300, Gonzalo Arana wrote: Cool, this fixes what some people have been telling me about FD limits under Squid-2.6 being stuck at 1024. Good find! If the limit comes from select(2) limits,

It doesn't. It comes from Linux include header stupidities. The glibc (and in this case Linux) headers do not support redefining FD_SETSIZE in a sane manner. The Linux kernel as such has not had any FD_SETSIZE limitations since way back.

Regards
Henrik
Re: Patch for the getpwnam_auth helper
Mon 2006-07-03 at 20:07 -0300, Giancarlo Razzolini wrote: First, sorry for the delay; it was 2 weeks of tests at my university, so I was busier than ever. Now that I'm on vacation from university, I have more free time.

No problem.

Now, to the patch. To do what you want, I'll have to make major changes to the code, because if using shadow I declare a pointer to the spwd struct, and if using getpwnam I declare a pointer to the passwd struct. To fall back, I'll have to make some big changes to the code. I'm even thinking of using functions inside the code to do the auth. I want to know if you agree with it or not.

Didn't look that big to me... a few lines only.

Regards
Henrik
Re: Occasional Zero Sized Reply error with 2.6
Thu 2006-06-29 at 10:40 -0300, Gonzalo Arana wrote: ...

} else if (flag == COMM_OK && len == 0 && !flags.headers_parsed) {
    fwd->fail(errorCon(ERR_ZERO_SIZE_OBJECT, HTTP_BAD_GATEWAY));
    [..]
} else if (flag == COMM_OK && len == 0) {
    [..]
    if (!flags.headers_parsed)
        /*
         * When we called processReplyHeader() before, we
         * didn't find the end of headers, but now we are
         * definitely at EOF, so we want to process the reply
         * headers.
         */
        processReplyHeader();

0) The 'if (!flags.headers_parsed)' is dead code. Any ideas what the original idea was?

Yes.. to handle broken servers not sending complete headers.

1) Is a 'zero size object' a good reply in this case?

Only if no data at all has been received, I think. Maybe not if partial headers have been received. But it's a border case.

If the server closed the connection before we even got reply headers, this could mean that the server actually closed the connection before it got our last request.

Yes, but this is handled in the FwdState. Or at least it should be... retrying the request on certain classes of errors. This is also why local errors are not pushed to the StoreEntry by the protocol handlers, but merely registered with the FwdState..

2) In fact, sniffing with ethereal I've 'verified' this. Not a foolproof check, but I have about 220ms of RTT to osnews.com, and you can see that between packets 88 and 89 of the attached pcap file there are about 50ms. This means that: the server closed the connection, my squid3 sent the request, my squid3 received the FIN packet for the very same TCP connection.

Ok. So the retry logic has gone defunct, pretty much like the abort logic had...

4) How to react if the server closes the connection before we even get reply headers?

If it's the first request on a new connection, an ERR_ZERO_SIZED_REPLY. Else FwdState should retry the request on a new connection (preferably a brand new connection, to not run into the same issue again). All the logic for doing this is there already. The question is why it doesn't get triggered. See FwdState::serverClosed.

I believe it would be great if we could check the last TCP ACK value from the server side when read(2) returns 0. This way, we could know if the server actually got the request and only retry if it didn't. Is there any way to read that? The tcp(7) manpage does not show anything useful :( (at least on Linux).

Unfortunately not that I know of.

Regards
Henrik
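The retry rule spelled out in point (4) above amounts to a small decision function. A hedged sketch with invented names (the real logic lives in FwdState::serverClosed and the fwdFail paths, not in a helper like this):

```c
#include <stdbool.h>

/* Possible reactions when the server closes before a full reply. */
enum fwd_action {
    FWD_COMPLETE,           /* headers parsed: EOF just ends the body */
    FWD_RETRY,              /* reused connection: likely an idle-close race */
    FWD_ZERO_SIZED_REPLY    /* fresh connection: genuinely empty reply */
};

/* Invented helper mirroring the rule in the mail: retry on a brand
 * new connection if the close hit a reused (persistent) connection
 * before any reply headers arrived; on a first request over a new
 * connection, report ERR_ZERO_SIZED_REPLY instead. */
enum fwd_action
on_server_close(bool headers_parsed, bool reused_connection)
{
    if (headers_parsed)
        return FWD_COMPLETE;
    if (reused_connection)
        return FWD_RETRY;
    return FWD_ZERO_SIZED_REPLY;
}
```

The pcap evidence in the thread (FIN racing the request on a kept-alive connection) is exactly the reused_connection case, which is why the retry path, not the error page, should fire there.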
Re: Occasional Zero Sized Reply error with 2.6
Wed 2006-06-28 at 08:32 +0200, Guido Serassio wrote: No more errors, so it seems that the problem is fixed.

But it has spun off into more issues... (Bug #1638).. This connection pinning business has been a bit of a nightmare to get correct in all configurations.. Hopefully no additional gremlins will pop up in there.

I should probably write a document on it while all the detail requirements are fresh in memory.. Could be quite useful when implementing pinning support in Squid-3...

Regards
Henrik
Re: Open issues for 2.6
Wed 2006-06-28 at 10:19 +0800, Adrian Chadd wrote: I do keep getting these during polygraph testing:

041.64| Xaction.cc:74: error: 8/10 (c14) premature end of msg body
1151460879.419466# obj: http://host:8080/w0e4091fe.60215fc4:0006/t01/_000186bd flags: basic,GET,chb, size: 39200/41790 xact: 0e4091fe.60215fc4:0009b992 [no data to dump]

Polygraph people: is that a zero sized reply, or a close before all the data was transferred?

From the message I would guess the second.. ("premature end of msg body", and "size: 39200/41790"). This is also confirmed by reading the source.

Regards
Henrik
RE: Bug #1616: assertion failed: comm_generic.c:65: "F->flags.open" on storeResumeFD
Tue 2006-06-27 at 08:16 +0800, Steven Wilton wrote: In the original code for 2.5, I had the following logic in commResumeFD() to handle this situation:

if (!(F->read_handler) || !(F->epoll_backoff)) {
    debug(5, 2) ("commResumeFD: fd=%d ignoring read_handler=%p, epoll_backoff=%d\n", fd, F->read_handler, F->epoll_backoff);
    F->epoll_backoff = 0;
    return;
}

"Handle" is a matter of definition.. the above ignores the condition.

What about removing the backoff flag from the fde struct in fd_close(), and removing the assert(flags.open) from commResumeFD? This way, if a fd is backed off, then closed, the backoff flag will be removed. We are then left with 2 possible situations:

Removing the assert is fine. I'll replace it with a debug statement. Clearing the backoff flag is redundant, as the whole fde is cleared a few lines down, and the flag is never looked at unless there are also some handlers set..

But I seriously think this bug has been closed for some time.

Regards
Henrik
Re: squid-3 vs 2.6
Tue 2006-06-27 at 08:22 +0800, Adrian Chadd wrote: Which reminds me, I should really spend some time making sensible defaults when COSS is using AUFS - right now the aiops code doesn't create any threads because n_aufs_dirs (or whatever it is) is set to 0; one has to override it with the ./configure option (--with-aufs-threads=). That will get rid of at least one runtime problem (ie, no IO threads, so COSS doesn't do anything :)

I've been thinking many times about splitting the aiops threads per cache_dir, which besides solving the above problem also gives much better load calculation.

Regards
Henrik
Re: Occasional Zero Sized Reply error with 2.6
Tue 2006-06-27 at 10:25 +0200, Guido Serassio wrote: Could you set a breakpoint on the ERR_ZERO_SIZE_OBJECT fwdFail call in http.c httpReadReply, and print read_sz from there.. (non-optimized build please..) Obviously during a failed request?

Yes.. this code is only reached on such requests...

I can run Squid under gdb on the Linux machine, tell me what other info you need.

backtrace
print read_sz
print len

maybe more..

Regards
Henrik
Re: Occasional Zero Sized Reply error with 2.6
Tue 2006-06-27 at 12:39 +0200, Guido Serassio wrote: Hi, At 10.40 27/06/2006, Henrik Nordstrom wrote: Could you set a breakpoint on the ERR_ZERO_SIZE_OBJECT fwdFail call in http.c httpReadReply, and print read_sz from there.. (non-optimized build please..) Obviously during a failed request? Yes.. this code is only reached on such requests... I can run Squid under gdb on the Linux machine, tell me what other info you need. backtrace, print read_sz, print len, maybe more.. OK, I will run it tonight at home.

Excellent.. One question: are you using NTLM/Negotiate proxy authentication?

Regards
Henrik
Re: Occasional Zero Sized Reply error with 2.6
Tue 2006-06-27 at 13:30 +0200, Henrik Nordstrom wrote: One question: are you using NTLM/Negotiate proxy authentication?

If you are, then the problem should be fixed now.

Regards
Henrik
Re: Occasional Zero Sized Reply error with 2.6
Sat 2006-06-24 at 08:48 +0800, Adrian Chadd wrote: I'm seeing it in my polygraph testing under 2.6 + COSS. It may be related.

Most likely not... there was a nasty bug related to persistent connections and NTLM authentication..

Regards
Henrik
Open issues for 2.6
None are critical / blockers for the release, but still:

761: diskd unstable
1584: WCCPv2 unable to register with more than one router on Linux (IP ID IOS issue)
1602: Need to implement TCP fallback for DNS
1638: Connection pinning a bit too aggressive. Should not pin due to local authentication, only forwarded authentication..

Regards
Henrik
Re: Fwdstats serverfd (Patch)
Mon 2006-06-26 at 11:40 +0800, Steven Wilton wrote: When I did the original serverfd work for backed off connections, I assumed serverfd=0 was invalid. This patch fixes the code so serverfd=-1 is the default when there is no backed-off server fd.

Thanks. Applied.

I've also added an assert into fd_close to catch any cases where a fd is closed while it is backed off. This condition can cause crashes later on (Bug #1616 is an example) if not detected.

Not applied, as I don't think it will trap 1616 at all: that bug is a fd being closed while it's still registered as the serverfd of a deferred StoreEntry, not the fd as such being deferred. The other situations the assert could maybe trap are all harmless.

Regards
Henrik
Re: squid-3 vs 2.6
Sun 2006-06-25 at 17:43 +1200, Doug Dixon wrote:

- Parts of the SSL cleanup. Main requirements being the pending bits in the comm code, or refactoring solving this I/O operation independence differently. Bug #1633.
- Reasonable read defer management. Bug #1634.
- COSS ng (or whatever to call what Adrian is doing to COSS). Some of this actually fits better in Squid-3 with its re-factored I/O framework. This is already on Adrian's schedule. Not sure there needs to be a bugzilla entry..
- 2GB objects. Bug #437.
- HTCP updates. Bug #1635.
- Many small bugfixes here and there in the new features common to both trees.
- probably a bit more

We have to take these on a case by case basis as they are encountered in 3.0, I think.

Regards
Henrik
Re: Occasional Zero Sized Reply error with 2.6
Sun 2006-06-25 at 09:56 +0200, Guido Serassio wrote: Hi Henrik, At 23.39 24/06/2006, Henrik Nordstrom wrote: Could you try collecting a tcpdump -s 1600 of that traffic? I have done an ethereal capture saved in tcpdump format.

Thanks, but it didn't make me very much wiser. But maybe.. Is this with or without delay pools?

Could you set a breakpoint on the ERR_ZERO_SIZE_OBJECT fwdFail call in http.c httpReadReply, and print read_sz from there.. (non-optimized build please..)

Regards
Henrik