Squid Developer? (fwd)
Duane: I'm a second year student at the Harvard Business School. Prior to coming back to school for my MBA I was the co-founder of something called Project Honey Pot (www.projecthoneypot.org). Project Honey Pot tracks malicious behavior online. We work with law enforcement agencies worldwide and have been instrumental in several of the high profile cases shutting down spam gangs and other online criminals. Generally, Project Honey Pot has operated as a public service project with contributions from members in more than 120 countries around the world. Talking with some of the entrepreneurial faculty around HBS I'm being coaxed into believing that there's a potentially interesting and disruptive business opportunity that could be spun off of the Project's data in order to protect websites. They're very interested in making the introductions to get this idea funded. Imagine something like a CDN for any website with the benefits of advanced, threat-based firewall++ protection. I've poked around with my technical team from Project Honey Pot as well as some CS friends I have over at MIT. Squid has repeatedly come up as a potential base platform onto which we could build the layers of the service. I was wondering if in the process of managing the Squid development these days you'd run across any talented, eager programmers who might be searching for a startup opportunity that has significant backing, a large market opportunity, and where they could play a meaningful role in growing a new company. I'm agnostic as to geographic locations so long as the person is talented and self-motivated. If any names spring to mind, I hope you won't hesitate shooting them my way. Best wishes, Matthew Prince. Email: matthew -at- mba2009 -dot- hbs -dot- edu
Re: [noc] translation toolkit
On Tue, 5 Aug 2008, Amos Jeffries said: Can someone with admin access to squid-cache.org please install the translation toolkit. done. textproc/translate-toolkit has been installed
some debug message cleanup in squid-2
FYI I'm planning to fix and commit all cases that I can find where debugging messages contain the wrong function name. For example:

@@ -3959,8 +3959,8 @@ clientTryParseRequest(ConnStateData * co
     /* Limit the number of concurrent requests to 2 */
     for (n = conn->reqs.head, nrequests = 0; n; n = n->next, nrequests++);
     if (nrequests >= (Config.onoff.pipeline_prefetch ? 2 : 1)) {
-        debug(33, 3) ("clientReadRequest: FD %d max concurrent requests reached\n", fd);
-        debug(33, 5) ("clientReadRequest: FD %d defering new request until one is done\n", fd);
+        debug(33, 3) ("clientTryParseRequest: FD %d max concurrent requests reached\n", fd);
+        debug(33, 5) ("clientTryParseRequest: FD %d defering new request until one is done\n", fd);
         conn->defer.until = squid_curtime + 100;  /* Reset when a request is complete */
         return 0;
     }
Re: Problem with CVS pserver?
On Wed, 9 Apr 2008, Benno Rice wrote: I've started getting this today: cvs -z9 -d :pserver:[EMAIL PROTECTED]:/squid co squid open /dev/null failed Operation not supported Has something broken on the CVS server? It was broken when I upgraded the OS from FreeBSD-5 to -6. Fixed now. DW
ESI on Solaris
On Sat, 10 Nov 2007, Randall DuCharme wrote: Ok this is strange. It looks like it's trying to compile ESI-specific support in, yet I've not done --enable-esi. In autoconf.h #define ESI 0 is present. I've removed the -Werror flag for now so the multiple inclusion warning shouldn't be stopping anything: I'm pretty sure something else on Solaris is #defining ESI behind our backs. I've committed a change to rename ESI to SQUID_ESI Duane W.
Re: ESI on Solaris
On Fri, 16 Nov 2007, Amos Jeffries wrote: Could you make that 'USE_ESI' instead? that seems to be a defacto standard within squid for enabling components. Easier to keep things consistent. I could make it USE_SQUID_ESI if you like. I'm still concerned that USE_ESI is too generic and likely to be chosen by some other project or operating system that uses an ESI acronym.
Re: caching dynamic content
On Thu, 15 Nov 2007, Adrian Chadd wrote: What about default refresh_pattern to not cache cgi-bin and/or ? URLs? I assume you mean to always refresh (validate) cgi-bin and/or ? Because if you don't want them to be cached then the 'cache' access list is the place to do that. yes, I could support default refresh_pattern lines for ? and cgi-bin, and then remove the default 'cache' rules I suppose. While we're at it we could probably also get rid of the silly gopher refresh_pattern line. Duane W.
Re: caching dynamic content
On Thu, 15 Nov 2007, Adrian Chadd wrote: Ideally I'd like to cache cgi-bin / ? content if cache information is given (max-age, Expires, etc; henrik knows more about the options than right. I'm not sure my current refresh patterns handle this:

    refresh_pattern ^ftp:     1440  20%  10080
    refresh_pattern ^gopher:  1440   0%   1440
    refresh_pattern cgi-bin      0   0%      0
    refresh_pattern \?           0   0%      0
    refresh_pattern .            0  20%   4320

You also have to remove these:

    # We recommend you to use the following two lines.
    acl QUERY urlpath_regex cgi-bin \?
    cache deny QUERY

Thats what I'm supporting/suggesting: remove the default 'cache deny' lines and add some default refresh_pattern lines. Duane W.
Re: caching dynamic content
On Thu, 15 Nov 2007, Adrian Chadd wrote: I'd like to see something default in the next Squid release, so we can release it with a few interesting tag lines like Can cache google maps! I can support removing '?' from the default QUERY acl definition. I cannot support adding default 'rep_header' ACL types. Duane W.
Re: [squid-users] Solaris/OpenSSL/MD5 Issues
The patches to make MD5 work on Solaris have broken things on FreeBSD (at least) which also has a sys/md5.h. Compile fails with error: `MD5_DIGEST_LENGTH' was not declared in this scope It seems to me that the original problem was just that Squid's own MD5 routines are using names that collide with some system libraries/headers. Since we already ship a public domain MD5 implementation with Squid, why not just change our names to be unique and then always use them? Why go through this yucky configure maybe-find-some-libraries-here or maybe-find-them-there stuff? DW
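What Duane proposes here might be sketched roughly as below; the SquidMD5* identifiers and struct layout are illustrative stand-ins, not necessarily the names or definitions the project eventually used. The point is that a unique prefix can never collide with a system sys/md5.h or an OpenSSL header.

```cpp
/* Hypothetical sketch: give Squid's bundled MD5 implementation unique names.
 * SQUID_MD5_DIGEST_LENGTH and SquidMD5_CTX are illustrative, not the
 * project's actual identifiers. */
#define SQUID_MD5_DIGEST_LENGTH 16

typedef struct {
    unsigned int state[4];      /* A, B, C, D chaining values */
    unsigned int count[2];      /* bit count */
    unsigned char buffer[64];   /* input block buffer */
} SquidMD5_CTX;

/* Declarations only; the interesting part is the unique prefix,
 * not the algorithm itself. */
void SquidMD5Init(SquidMD5_CTX *ctx);
void SquidMD5Update(SquidMD5_CTX *ctx, const void *data, unsigned int len);
void SquidMD5Final(unsigned char digest[SQUID_MD5_DIGEST_LENGTH], SquidMD5_CTX *ctx);
```

With names like these, configure no longer needs to probe for a system MD5 library at all; the bundled implementation is always used.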
Re: positive_dns_ttl
On Wed, 10 Oct 2007, Mark Nottingham wrote: From ipcache.c;

    if (ttl == 0 || ttl > Config.positiveDnsTtl)
        ttl = Config.positiveDnsTtl;
    if (ttl < Config.negativeDnsTtl)
        ttl = Config.negativeDnsTtl;
    i->expires = squid_curtime + ttl;

As I read this, if the TTL from an upstream resolver happens to be '0', it changes it to whatever positive_dns_ttl is -- even though that also acts as a ceiling for DNS TTLs. I think this is partly left over from the old days when Squid always used the external dnsserver programs. 'dnsserver' could only report TTLs if the O/S had the libresolv _dns_ttl hack. So ttl == 0 meant that dnsserver didn't have any TTL value, so it should be set to positive_dns_ttl. The problem is that this plays havoc with DNS-based load balancers, which will be '0' more often than other DNS entries by nature. Any chance of either; The only thing I'm worried about is that with true 0 TTL squid will have to make multiple lookups for a single HTTP request. For example, if someone had a long list of 'dst' ACLs then each one could result in a new DNS lookup. AFAIK, the ipcache is the only place where DNS lookups are cached and Squid may refer to the ipcache multiple times for a given HTTP transaction. DW
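The clamping quoted from ipcache.c can be restated as a standalone function, which makes the 0-TTL behavior easy to see; DnsConfig and clampTtl are illustrative names (mirroring the squid.conf directives), not actual Squid symbols.

```cpp
// Sketch of the ipcache TTL clamping discussed above.
struct DnsConfig {
    int positiveDnsTtl;   // ceiling for positive answers (and fallback for ttl == 0)
    int negativeDnsTtl;   // floor, applied after the ceiling
};

int clampTtl(int ttl, const DnsConfig &cfg) {
    if (ttl == 0 || ttl > cfg.positiveDnsTtl)
        ttl = cfg.positiveDnsTtl;    // a genuine 0 TTL is inflated here
    if (ttl < cfg.negativeDnsTtl)
        ttl = cfg.negativeDnsTtl;
    return ttl;
}
```

With the stock values (positive_dns_ttl 6 hours, negative_dns_ttl 1 minute), a load balancer's 0-second TTL becomes 6 hours, which is exactly the complaint.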
Re: RC1 time
On Sun, 30 Sep 2007, Duane Wessels wrote: Alex tells me that its time to release Squid-3 RC1. I plan to start the process tonight. If all goes well it should be done and announceable in 24-48 hours. squid-3.0.RC1 is now on the master web/ftp sites for download. Is now a good time to branch the CVS tree? Or would folks rather wait until the first stable release? DW
RC1 time
Alex tells me that its time to release Squid-3 RC1. I plan to start the process tonight. If all goes well it should be done and announceable in 24-48 hours. DW
Re: cvs commit: www2/content/Download mirrors.dyn
On Sun, 5 Aug 2007, Amos Jeffries wrote: Don't know what the bug was (background-color?) Missing $

    RCS file: /server/cvs-server/squid/www2/content/Download/mirrors.dyn,v
    retrieving revision 1.11
    retrieving revision 1.12
    diff -u -3 -p -r1.11 -r1.12
    --- mirrors.dyn  4 Aug 2007 01:55:12 -  1.11
    +++ mirrors.dyn  4 Aug 2007 21:03:00 -  1.12
    @@ -7,7 +7,7 @@ $table = 'ftp';
     function td($cdata) { return "<td>$cdata</td>"; }
     function anchor($anchor, $href) {
    -    if(anchor)
    +    if($anchor)
             return "<a href=\"$href\">$anchor</a>";
         else
             return $anchor;

but I'm not seeing any DB records on that page anymore. My fault. I removed the wrong cron job. I intended to remove the job that created the .html files, but instead I removed the job that checked the mirror sites. Duane W.
Re: Event order
On Thu, 26 Jul 2007, Alex Rousskov wrote: I am leaning towards (2) for now because it minimizes the modifications and risk. The attached patch implements that option. Your patch is simple enough that I do not find it offensive :-) My only suggestion is to add more comments, in particular about protecting against backwards clocks, and maybe a note that it is okay for timestamp to be zero. (1) still sounds like a good idea, although we might find some problems with it after implementation. DW
Re: tarball help
On Mon, 16 Jul 2007, [EMAIL PROTECTED] wrote: hey Guys, I'm thinking its about time I got around to making a source D/L tarball of the IPv6 branch. Am I correct in assuming that I can run bootstrap locally and bundle the results for people just to run configure and make themselves? or is there OS/system specific stuff the bootstrap needs to do? there are some scripts in the top source directory that can help you. see 'mkrelease.sh' and 'mksnapshot.sh'
Re: Squid 3 download page stuck
On Sun, 13 May 2007, Henrik Nordstrom wrote: Sun 2007-05-13 at 10:39 +0200, Guido Serassio wrote: The Squid 3 download page is stuck at 9 May, may be related to PRE6 release. Checking... yes. The -CVS part of the version tag should not be removed. It's removed automatically by the mkrelease script. It's there so we know when people use the CVS version.. Fixed. sorry, my mistake it's as simple as /path/to/squid-3/mkrelease.sh 3.0.PRE6 /path/to/www.squid-cache.org/Versions/v3/3.0/ I was going to see about updating release notes and such. Now done, but I have not updated the web pages, or copied the files to the FTP area. I updated the web page and copied it to the FTP area. Any good reason that Squid-3 packages are in /pub/squid-2 instead of /pub/squid-3? I created the squid-3 directory and cross linked the old files.
Re: [squid-users] Question about authenticateNegotiateHandleReply
On Wed, 9 May 2007, Markus Moeller wrote: I have written a helper program for the negotiate protocol (only the Kerberos part of it). I can get it to determine the correct userid but somehow the reply doesn't get back to squid. I don't get any debug from authenticateNegotiateHandleReply. What triggers authenticateNegotiateHandleReply to read the output of the helper program ? obvious question: is your helper using unbuffered I/O? In C: setbuf(stdout, NULL); In perl: $|=1; Duane W.
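A minimal skeleton of such a helper, with the unbuffered-output fix applied, might look like the sketch below. The replyFor logic and the "OK"/"ERR" strings are placeholders, not the actual Negotiate helper protocol; the essential line is the setbuf() call, without which replies sit in stdio's buffer and Squid never sees them.

```cpp
#include <cstdio>
#include <cstring>

// Hypothetical decision function standing in for the real
// Negotiate/Kerberos processing.
const char *replyFor(const char *line) {
    return (line && *line) ? "OK" : "ERR";
}

// Helper main loop: one line in, one reply out, forever.
void helperLoop() {
    char buf[8192];
    setbuf(stdout, NULL);                  // unbuffered, as in the C advice above
    while (fgets(buf, sizeof buf, stdin)) {
        buf[strcspn(buf, "\n")] = '\0';    // strip trailing newline
        printf("%s\n", replyFor(buf));
    }
}
```

Alternatively, calling fflush(stdout) after each reply achieves the same effect as full unbuffering.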
Re: Removed some uses of RefCount::getRaw() in DelayPool.cc
On Sun, 22 Apr 2007, Tsantilas Christos wrote: Hi Duane, I think there is an error in DelayPool.cc file and squid3 does not compile if delay pools are enabled. Thanks, I committed your fix. Strange that it compiled okay for me with --enable-delay-pools on FreeBSD (gcc 3.4.2). DW
Re: cvs commit: squid3/src Store.h protos.h store.cc store_client.cc store_swapout.cc
On Wed, 18 Apr 2007, Tsantilas Christos wrote: Hi Duane, the StoreEntry::swapOut() does not compile when SIZEOF_OFF_T==4 I am using the following patch, but I am not sure if it is OK ... Thanks, I've applied the patch.
Re: squid3-largeobj and %lld
On Thu, 19 Apr 2007, Robert Collins wrote: On Wed, 2007-04-18 at 20:52 +0200, Guido Serassio wrote: I think that here should be used the standard ISO C99 macro PRId64 like in Squid 2.6. This allows the portability of the code: on Windows %lld is not available, while on some Unix 64-bit platforms (HP Tru64 is one) %ld must be used instead of %lld. In squid3 we should be using stream formatting, not % based formatting, which eliminates this class of problems. Currently in this squid3-largeobj branch we have %lld in just 3 places: access_log.cc for writing access.log store_log.cc for writing store.log ftp.cc for sending REST commands I've converted them to use PRId64 for now.
Re: some help with the website please
On Wed, 4 Apr 2007, Adrian Chadd wrote: I hate to ask for help when I said I'd do it, but I'm running very short on spare time at the moment and I'd appreciate some help in finishing off the new.squid-cache.org website so its ready to be made live. Someone with server access: help, please? Adrian, you can coordinate with me directly and I'll get it done.
Re: cvs commit: squid3/src main.cc
On Thu, 12 Apr 2007, Alex Rousskov wrote: rousskov 2007/04/12 08:51:10 MDT Modified files: src main.cc Log: This change should fix bug #1837: Segfault on configuration error When quitting on a fatal error, such as a configuration error, Squid may need to write clean state/log files. Squid uses comm_ routines to do so. Thus, we must initialize comm_ before such fatal errors are discovered. Perhaps a better fix would be to avoid writing clean state/log files until the old ones become dirty? The last comment is correct. Squid should not write clean state files until existing state files have been entirely read. I fixed this bug a couple of days ago in the following revisions: 1.88 +2 -2 squid3/src/store_rebuild.cc 1.157 +9 -2 squid3/src/store_dir.cc store_dirs_rebuilding should be initialized to 1 store_dirs_rebuilding is initialized to _1_ as a hack so that storeDirWriteCleanLogs() doesn't try to do anything unless _all_ cache_dirs have been read. For example, without this hack, Squid will try to write clean log files if -kparse fails (because it calls fatal()). Sorry I did not realize there was a bugzilla entry for it. I'd suggest backing out your patch. DW
Re: cvs commit: squid/src stat.c tools.c
On Thu, 2 Nov 2006, Henrik Nordstrom wrote: Wed 2006-11-01 at 13:58 -0700, Duane Wessels wrote: struct kb_t uses squid_off_t, which might be signed. That means that kb_t.kb could overflow and become negative in kb_incr(). If we detect that it is negative, add increasing powers of two until the value becomes positive again. This should work if squid_off_t is 32 or 64 bits. Duane. can you explain the reasoning behind this? Gut feeling is that this may actually cause more headaches with a counter bouncing around unpredictable like this. The reasoning is that when Squid reports bandwidth as negative, it really screws up bandwidth graphs. I think it would be better if the counter were guaranteed to be unsigned, but squid_off_t is signed on some systems at least. I believe in squid-3 the use of size_t means it will always be unsigned. I don't know why you say bouncing around. With the patch it should overflow properly, as though the counter were 2^31 bits (or 2^63 I suppose). I considered just setting the counter to zero if it became negative, but I thought it would be nice to preserve the amount carried over after the overflow, if possible. Duane W.
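The power-of-two correction described above can be sketched as below. This is a rough illustration of the idea, not the committed patch; it uses unsigned arithmetic internally so the additions are well-defined even when they wrap, and it preserves the amount carried past the overflow point rather than resetting to zero.

```cpp
// Sketch: repair a signed counter that overflowed negative by adding
// increasing powers of two until it is positive again.
long long fixNegativeCounter(long long v) {
    unsigned long long u = (unsigned long long)v;   // reinterpret, no UB
    unsigned long long p = 1;
    while ((long long)u < 0) {
        u += p;        // add 1, 2, 4, 8, ... (wraps harmlessly, unsigned)
        p <<= 1;
    }
    return (long long)u;
}
```

For example, a counter that wrapped to -5 comes back as 2 (it absorbs 1+2+4 = 7), so the post-overflow remainder survives instead of being discarded.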
Re: font for the new artwork buttons?
On Wed, 27 Sep 2006, Robert Collins wrote: Duane, do you know what font was used in making the sample buttons for the new artwork ? It is called Allspeed. I added the font package file to the website cvs under share/Fonts
proposal to remove port 563 from default ACLs
Our default ACL configuration allows CONNECT requests to port 563, which is for NNTP over SSL. Assuming that nobody really uses NNTP over SSL, especially through an HTTP proxy, I suggest that we remove it from the defaults.
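For reference, the change amounts to something like the following squid.conf fragment; the exact stock lines vary by release, so treat this as a from-memory sketch of the defaults rather than a literal diff.

```
# current default: CONNECT allowed to HTTPS and NNTPS
acl SSL_ports port 443 563

# proposed default: HTTPS only
acl SSL_ports port 443

acl CONNECT method CONNECT
http_access deny CONNECT !SSL_ports
```

Admins who genuinely tunnel NNTP over SSL through their proxy could simply add 563 back to SSL_ports locally.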
Re: 3.0 branding - release plans - etc
On Sun, 10 Sep 2006, Robert Collins wrote: So, chatting with Adrian today, and some friends, I have some thoughts about what precisely 3.0 should be. I think 3.0 STABLE1 when release should be: * more functional than 2.6 STABLEX - there should be no regressions in functionality. * within 10-15% of the speed of 2.6 STABLEX. Does this mean +/- 10-15%? In order to meet the goal we'll need a measurable definition for speed. I assume that you're thinking along the lines of sustained requests/second within some response time window? Which OS, and which filesystem options? these two points are the primary things I can think of that will stop people adopting squid-3.0. And what we want is for developers to feel I would probably put stability ahead of performance, but yes, assuming squid-3 is stable enough then people will expect it to perform at least as well as the old. DW
Re: Current -cvs crashing when invalid hostname in client requested URL
On Sun, 7 May 2006, Reuben Farrelly wrote: Seems to be new breakage in the last few days, but if I try to surf to a URL which is invalid eg www.firfox.org, squid-3/CVS dies an ugly death: Looks like this was caused by one of my bugfixes yesterday. I backed it out and will look for another solution. Duane W.
Re: HTTPMSGLOCK/UNLOCK
On Sat, 29 Apr 2006, Robert Collins wrote: I'm unclear why we have these macros rather than smart pointers... having these macros as a pattern means we'll need to learn LOCK/UNLOCK macros for every class that is in use. I tried and found it very difficult. Perhaps you can do better. Heres the reasons I gave Henrik back in March: You can reference a Foo* without including Foo.h, but Foo::Pointer requires including Foo.h (and its dependencies). ::Pointer doesn't work for derived classes. For example, HttpMsg is a base class for HttpRequest and HttpReply and you can pass an HttpRequest* to a function that has HttpMsg* as a parameter. But it doesn't work with HttpMsg::Pointer. To get around it you have to use the getRaw() hack. These are of course solvable, but annoying. Its been too long for me to remember exactly the straw that broke the camel's back when I tried converting to ::Pointer. Duane W.
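The derived-to-base problem described above can be illustrated with a stripped-down sketch. This Pointer class is a hypothetical simplification of Squid's RefCount template (the actual reference counting is omitted); the point is only the conversion behavior.

```cpp
// Minimal stand-in for a RefCount-style smart pointer.
template <class T>
class Pointer {
public:
    Pointer(T *p = nullptr) : raw(p) {}
    T *getRaw() const { return raw; }
private:
    T *raw;
};

struct HttpMsg { virtual ~HttpMsg() {} };
struct HttpRequest : HttpMsg {};

// Raw pointers convert derived-to-base implicitly...
bool takesRawMsg(HttpMsg *m) { return m != nullptr; }

// ...but Pointer<HttpRequest> is an unrelated type to Pointer<HttpMsg>,
// so a caller holding Pointer<HttpRequest> cannot pass it directly and
// must write takesPtrMsg(req.getRaw()) -- the "getRaw() hack" above.
bool takesPtrMsg(const Pointer<HttpMsg> &m) { return m.getRaw() != nullptr; }
```

Modern C++ smart pointers solve this with a converting constructor template, but that machinery did not exist in the RefCount class being discussed here.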
Re: inline the ICAP Makefile ?
On Sun, 30 Apr 2006, Robert Collins wrote: Duane, how do you feel about me inlining the ICAP Makefile into the src Makefile ? It makes it easier for automake to track dependencies, particular when building individual files - less recursion etc. Um, I guess. I certainly don't have a strong enough opinion to stop it. Sometimes its nice to run make from a subdir and not have to worry about wasting time compiling other junk. In general I think the source tree needs more subdirs, but I care less about how the makefiles work. So you also plan to do away with libicap.a? DW
Re: Squid-3 errors and blank pages
On Tue, 2 May 2006, Reuben Farrelly wrote: Squid-3 cvs seems to be functioning for me now - it's usable again but I am seeing lots of errors like this logged on random pages: 2006/05/01 23:07:12| http.cc(1866) Transaction aborted while reading HTTP body This corresponds with this code in http.cc:

    HttpStateData::requestBodyHandler(MemBuf &mb)
    {
        if (eof || fd < 0) {
            debugs(11, 1, HERE << "Transaction aborted while reading HTTP body");
            return;
        }

Has anyone got any clues as to what I could do to narrow this problem down? It's happening frequently but not on every page. The debugging level is low because I was recently mucking with this code. As Henrik said you should only see this for POST/PUT requests and a likely cause is that the user aborts some file transfer or something. Or perhaps the origin server is returning a preemptive error without reading the entire POST/PUT body. If you see it for all or most POST/PUT requests then I've probably introduced a bug with request body processing. One way to narrow it down is grep the access.log for PUT and POST and compare that to how many times the error message occurs in cache.log. If you can find a reproducible case then we can set the debug_options to get some good debugging info. Duane W.
Re: refresh.cc - refreshIsCachable()
The return value of refreshIsCachable() can be calculated without making a call to refreshCheck(). I.e. you can remove the call to refreshCheck() from refreshIsCachable(), and refreshIsCachable() will still return the correct result. Sorry, still not following you. refreshIsCachable() uses the 'reason' value in an early if statement:

    if (reason < 200)
        /* Does not need refresh. This is certainly cachable */
        return 1;

And there are numerous cases where refreshCheck() would return FRESH_* (i.e. < 200) values. DW
from IRC
(I don't get why xassert is disabled when PURIFY is set.. Duane?) I don't remember exactly any more. Maybe because assert() interfered with purify's ability to get a good stack trace. It can be removed as far as I'm concerned.
Re: BodyReader
On Fri, 28 Apr 2006, Henrik Nordstrom wrote: Thu 2006-04-27 at 19:27 +0000, [EMAIL PROTECTED] wrote: Replacing ClientBody class with BodyReader. Seems to be some issues there, maybe 64-bit related.

    BodyReader.cc: In member function void BodyReader::read(void (*)(MemBuf &, void *), void *):
    BodyReader.cc:51: error: cast from size_t (*)(void *, MemBuf &, size_t) to int loses precision

strange. I removed the cast.
squid3 commit heads up: ClientBody - BodyReader
I'm about ready to commit some changes related to the way Squid handles request bodies. The ClientBody class didn't work very well with ICAP, which also needs to read message bodies. I've replaced it with a BodyReader class and made a number of related changes. DW
Re: refresh.cc - refreshIsCachable()
On Tue, 25 Apr 2006, Doug Dixon wrote: It looks like the call to refreshCheck() is only used to short-circuit the function (if you can call it a short circuit, given the size of the callout) and to update some statistics. The main, and original, purpose of refreshCheck() is to check the request against the 'refresh_pattern' configuration values from squid.conf. I can't see the reason stats being used anywhere, but maybe I overlooked something. (And anyway... should this function even be updating the refreshCounts[rcStore].total stat, given the fact this just checks a couple of flags?) The 'reason' return value and related statistics are definitely used. You can see refresh statistics in the cache manager 'refresh' page. However, the return value can be calculated without calling it, and the statistic it updates is never used. Would something like this be cleaner/quicker and still correct? I'm not sure which function it refers to. We can't NOT call refreshCheck(), ala your suggested refreshIsCachable() replacement. refreshIsCachable() could be made slightly shorter and cleaner, but I don't think its worth the trouble. Duane W.
Re: wikis...
That's not how I remember it. From what I remember the objection from Duane wrt Kinkies wiki was more of a control and backup issue. dokuwiki was selected by Duane as it's trivial to back up, and additionally the back-end content isn't hard to reuse for other purposes later on. I'm happy for Kinkie to host the official wiki. I would prefer it to be wiki.squid-cache.org. I agree Moin looks nicer on the outside. dokuwiki was pretty easy to set up, but it lacks nice features. I did like that it used plain text files which meant that we could store changes in CVS, but thats minor I guess.
Re: config.test fragments
On Sun, 23 Apr 2006, Guido Serassio wrote: Hi Robert, At 13.17 23/04/2006, Robert Collins wrote: seems to me that if we exported all the autoconf values during a config.test run, they would be more portable: see for instance Guido's recent commit to probe /usr/local as well: unless CPPFLAGS are set correctly that test won't help (because while its on the system, its not available). Yes, correct. On the machine where I have identified the problem, CPPFLAGS is set correctly, but on other systems this might not be true. I guess this is why ./configure now fails for me:

    Basic auth helpers built: LDAP MSNT NCSA PAM SASL YP getpwnam multi-domain-NTLM
    NTLM auth helpers built: SMB fakeauth no_check
    Digest auth helpers built: ldap password
    External acl helpers built: ip_user ldap_group session unix_group
    checking sasl/sasl.h usability... no
    checking sasl/sasl.h presence... no
    checking for sasl/sasl.h... no
    checking sasl.h usability... no
    checking sasl.h presence... no
    checking for sasl.h... no
    ERROR: Neither SASL nor SASL2 found
Re: so what is involved in calling squid-3.0 'stable'?
My own criteria: be able to deploy Squid3 with ESI as a reverse accelerator under real load without it falling over (I don't run Squid as a forward-cache at all). Can somebody point to a current how to help with Squid3 development document? E.g., I don't even know how to get a current checkout any longer (arch / bzr / cvs, whatever). You can get daily tarball snapshots from http://www.squid-cache.org/Versions/v3/3.0/ If you would rather, you can also use CVS

    # export CVSROOT=:pserver:[EMAIL PROTECTED]:/squid
    # cvs login (anoncvs/anoncvs)
    # cvs checkout squid3

See also http://dokuwiki.squid-cache.org/dev/cvsinstructions If you use CVS then you have to run 'bootstrap.sh' from the top directory to create all the Makefile.in's. This gets a little yucky because particular versions of the autotools packages are required. Also you may have to run bootstrap.sh from time to time as you develop, i.e. if you have a yet-uncommitted Makefile.am change. DW
Re: --disable-internal-dns
On Sat, 14 Jan 2006, Rafael Martinez Torres wrote: Hi: I need transiently to bypass squid3's own dns subsystem, so I compiled with ./configure --disable-internal-dns... Things use to go well, unless you get an alias address, i.e. http://www.google.es bash-2.04$ host www.google.es www.google.es is an alias for www.google.com. www.google.com is an alias for www.l.google.com. www.l.google.com has address 64.233.183.104 www.l.google.com has address 64.233.183.147 www.l.google.com has address 64.233.183.99 Things go well if you type http://www.l.google.com but in case you type http://www.google.es you will get the attached error You've disabled internal DNS so Squid should be using the 'dnsserver' binary to do lookups. You can test it like this: echo www.google.es | ./dnsserver $addr 226 66.102.7.99 66.102.7.147 66.102.7.104 Thats what I get, which looks like it works. If you look at dnsserver.cc you'll see a for-loop where Squid does try up to three times to resolve the IP address. If you have a repeatable case where dnsserver cannot resolve www.google.es then its time to open a bug report I guess. Duane W.
Re: Logo
On Mon, 6 Feb 2006, Andrew Pantyukhin wrote: Some time ago I posted to the squid-users list, saying that the old logo looks great whith just a little shadow: Thanks! I've put your shaded logo in place.
Re: [squid2.5-icap] patch: X-Server-IP support
On Wed, 14 Dec 2005, olivier wrote: Hi all, I recently made a small patch to add the X-Server-IP feature in Squid2.5. Basically: send the origin server ip in the ICAP headers if it's available from the ip cache. I've been using it in (pre)production for some weeks now without any problems. Can someone give a look at http://labs.biniou.info/squid-icap-2_5.diff ? The patch is nice and simple, so I don't have any problems with it. There is this, however:

    + if (Config.icapcfg.send_server_ip || service->flags.need_x_server_ip)

The Squid admin might believe that setting 'icap_send_server_ip off' means Squid would never send the IP address to ICAP. But Squid will in fact send the IP if the ICAP server asks for it. Its probably not a big deal because most people don't care about the privacy of an origin server IP address. But some might. I suggest adding a comment to squid.conf to explain that the server IP address would be sent regardless of the icap_send_server_ip setting if the ICAP server OPTIONS response says X-Include: X-Server-IP Otherwise, maybe the logic should be && instead of || ? Duane W.
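The two policies being weighed can be written out as pure predicates (illustrative only, not the patch's actual code), which makes the privacy difference explicit.

```cpp
// With ||, the ICAP server asking for the header is enough to leak the
// origin IP, even when the admin has icap_send_server_ip off.
bool sendsIpOrPolicy(bool adminEnabled, bool serverAsked) {
    return adminEnabled || serverAsked;   // what the posted patch does
}

// With &&, the admin's setting becomes a hard gate; the ICAP server's
// OPTIONS request alone can never enable the header.
bool sendsIpAndPolicy(bool adminEnabled, bool serverAsked) {
    return adminEnabled && serverAsked;   // the suggested alternative
}
```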
make distclean annoyance
Does anyone know how to make 'make distclean' work again for squid3? Making distclean in auth ... rm -rf basic/.deps digest/.deps negotiate/.deps ntlm/.deps ... Making distclean in . Makefile, line 2358: Could not find auth/basic/.deps/basicScheme.Po Makefile, line 2359: Could not find auth/digest/.deps/digestScheme.Po Makefile, line 2360: Could not find auth/negotiate/.deps/negotiateScheme.Po Makefile, line 2361: Could not find auth/ntlm/.deps/ntlmScheme.Po Makefile, line 2363: Could not find fs/coss/.deps/StoreFScoss.Po Makefile, line 2365: Could not find fs/null/.deps/StoreFSnull.Po Makefile, line 2366: Could not find fs/ufs/.deps/StoreFSufs.Po make: fatal errors encountered -- cannot continue
Re: cvs commit: squid3/src/ICAP ICAPModXact.cc
In squid 3 we should not need to use printf, which reduces the weight of 64 bit support a lot.

    find . -name '*.cc' | xargs grep -i printf | wc -l
    1609
Re: .cvsignore files
On Tue, 8 Nov 2005, Serassio Guido wrote: Hi, What should be the standard content of a .cvsignore file ? As Alex said, basically any file that gets created by running 'make' on a clean tree. DW
Re: Summary of Squid-2.6 opinions
No answer yet: * Duane Wessels * Robert Collins I have mixed feelings about 2.6. On one hand I think 2.5 has lived too long and it looks bad that we have not incremented the stable branch number for years. But on the other hand I feel cheated because I remember being scolded for adding things to the squid-2-head branch when others had decided that it would become a dead end. Like you I suppose, I have a number of little 2.5 features and fixes that I use on my own squids, which I have been reluctant to commit for those reasons. My company has taken on development projects with the understanding that all future work will go into squid-3. As part of that work we have promised to spend time on making squid-3 stable. We still intend to do that. Looking at the current wishlist for 2.6 I think it is too long and too ambitious. I would rather that people spend time on squid-3, but that is perhaps a selfish reason. Duane W.
Re: LRU frequency
On Mon, 18 Jul 2005, Lucas Brasilino wrote: Hi! I'm looking around LRU information to get into the code. I read a doc, I really don't remember where, that says LRU replacement policy runs every second. Is that correct ? So do heap LFUDA and GDSF ? It is true. This is the call in store.c:

    /* Reregister a maintain event .. */
    eventAdd("MaintainSwapSpace", storeMaintainSwapSpace, NULL, 1.0, 1);

If so, why LRU threshold is about days (1 to 10 days as said here[1]) First, current Squid versions don't really use the notion of LRU threshold anymore. and not minutes or hours ? Ok.. should be some overhead... but The LRU threshold represents how long it takes the cache to go from empty to full. It should be a long time (days). If the cache becomes full in only a few minutes (for example) then the cache is probably not big enough. I've experienced some very busy caches that quit due to a full cache directory... in other words.. without efficiently swapping out objects... Actually, it means that Squid cannot delete old objects fast enough to make room for new ones. What is the cache size and request rate of these caches that quit due to a full cache directory? Duane W.
Re: Dijjerizer redirector for squid
On Fri, 15 Jul 2005, daniele wrote: Hi! I don't know if that is the right place. Anyway, I wrote a simple redirect_program for Squid to redirect dijjerizable urls to a dijjer server. Maybe someone is interested: you can find the description and the code here: - http://dijjer.org/wiki/SquiDJ Hi Daniele, Please visit http://www.squid-cache.org/cgi-bin/related-submit.pl and fill out the little form for your redirector. After that it will appear on the Squid site's pages. Duane W.
Re: Need help with dns_query patch
On Wed, 13 Jul 2005, Luigi Gangitano wrote: Hi, I packaged an updated squid 2.4.STABLE6 for Debian woody with the backported squid-2.5.STABLE9-dns_query from RedHat RHSA-2005-489, which is quite straightforward. With this patch squid fails[1] with

    rfc1035.c:410: rfc1035RRUnpack: Assertion `(*off) <= sz' failed

which can be reproduced accessing http://62.26.121.2:80/dat/bgf/trpix.gif This seems to happen on SuSE squid-2.5.STABLE1[2] too. I cannot understand the RFC1035 code enough to debug it, can you please help? Hi Luigi, I can (mostly) understand the RFC1035 code, but I cannot reproduce this bug. Can you reproduce it? If you can get a core file and stack trace that would be helpful. Also if you know anything about the type of nameserver that Squid is using in this case (BIND, dnscache, etc)? Since the URL contains an IP address, Squid should not issue a name-to-address DNS query. Perhaps Squid is configured to make address-to-name (PTR) queries and this is why the rfc1035.c code gets called for this request. Duane W.
Re: Time to merge squid3-ipv6 into HEAD ?
On Mon, 27 Jun 2005, Rafael Martinez Torres wrote: I think it's a good opportunity to merge now. Then my merge with HEAD will become simpler. I am willing to attempt the merge and let you know how it goes... Duane W.
Squid-3 and gcc 2.95
I spent half a day trying to figure out why Squid-3-cvs was core dumping in the debugs() macro. Alex suggested that GCC 2.x may have bugs in its support for stringstream. With GCC 3.3 it no longer dumps core at that spot. Given this, shouldn't we be checking the GCC version in ./configure? DW
Job opportunity with The Measurement Factory
More info at http://www.measurement-factory.com/jobs.html Duane W.
Re: squid-2.5 / ICAP patch
I've finally committed your patch to the sourceforge CVS. The changes look good to me, but I have not tested them. Thanks a lot! Duane W.
Re: Microsoft NTLM proxy authentication
On Mon, 7 Jun 2004, dilox wrote: I'm looking for Microsoft NTLM proxy authentication, but http://devel.squid-cache.org/cgi-bin/diff2/ntlm?auth_rewrite says: Sorry, patch for ntlm branch of auth_rewrite in squid is not yet available. Please try again in a few hours If the problem persists write to [EMAIL PROTECTED] where can I find it? I believe all the NTLM code from sourceforge has been merged to the primary Squid source tree. That means you will find it in the source files that you download from www.squid-cache.org. The NTLM pages on sourceforge haven't been updated for a few years now. Duane W.
Re: How does it work?
On Thu, 13 May 2004, Mati wrote: hi, I was wondering if there are some tools that you use to test squid? At my day job we have a functionality/compliance tool called Co-Advisor. We would be happy to give you access to the on-line version so you can test your features. In order to use the on-line version you would have to set up your Squid with a public IP address. Let me know... Duane W.
Re: Secure basic authentication. Is it possible?
On Fri, 21 May 2004, Slivarez wrote: Hi, ALL. I'm using squid-2.5.STABLE5+basic_auth(ncsa_auth). But a simple sniffer can extract the USERID and PASSWORD from the tcp packets. Is there any possibility to make basic authentication more secure? Basic authentication is fundamentally insecure. If you need to secure it, then you would have to use a technique like SSL port-forwarding or IPsec encryption. Duane W.
Re: icap support in squid
On Wed, 31 Mar 2004, Peter V. Saveliev wrote: ... I'm the author of the drweb-icapd server, and I am interested in ICAP support in the Squid proxy. It would be very nice if somebody from the developers would agree to help me on this topic. You can mail me: peet at drweb.com ('cause of moderated access, I'm not sure I can read your answers in the maillist) Hi Pete, I have been maintaining the 'icap-2.5' branch of Squid that lives at sourceforge (devel.squid-cache.org). Have you tried using it with your icap server yet? DW
Re: Squid-2.5.STABLE5 still not ready
To make things even more strange, this problem apparently was introduced some time in November, but there have not been any ipcache or dlink related changes from what I can see.. Really? src/ipcache.c had a minor change (+7 -3 lines) on Nov 28 and a very big change (+89 -81 lines) on Dec 6. Duane W.
Re: Squid-2.5.STABLE5 still not ready
This change looks suspicious to me:

@@ -199,11 +199,15 @@
 static void
 ipcacheAddEntry(ipcache_entry * i)
 {
-    hash_link *e = hash_lookup(ip_table, i->hash.key);
+    ipcache_entry *e = (ipcache_entry *) hash_lookup(ip_table, i->hash.key);
     if (NULL != e) {
-	/* avoid colission */
-	ipcache_entry *q = (ipcache_entry *) e;
-	ipcacheRelease(q);
+	/* avoid collision */
+	if (i->flags.negcached && !e->flags.negcached && e->expires > squid_curtime) {
+	    /* Don't waste good information */
+	    ipcacheFreeEntry(i);
+	    return;
+	}
+	ipcacheRelease(e);
     }
     hash_join(ip_table, &i->hash);
     dlinkAdd(i, &i->lru, lru_list);

Previously we were freeing e (aka q), but now we are freeing i, then inserting it into ip_table? DW
Re: Squid logfile 2GB problem
On Wed, 4 Feb 2004, Luigi Gangitano wrote: Hi, I'm the Debian maintainer for squid and got a bug report about the 2GB limit on logfiles. I saw bug #319 and noticed that no solution was implemented except the --enable-large-files option in HEAD. Is there any way to backport that patch to 2.5.STABLE4? Hi Luigi, It looks to me like the --enable-large-files trick simply adds -D_FILE_OFFSET_BITS=64 to the compiler flags, so it seems relatively harmless. I don't mind adding it to the squid-2.5 branch. Anyone else? However, I still think that squid users will be happier in the long run if they install a simple cron job to rotate their log files instead of letting them grow to 2GB in size. Duane W.
Re: Why is no-cache ignored on pending objects?
On Sun, 21 Dec 2003, Henrik Nordstrom wrote: There is a section in clientProcessRequest2() on cache hit processing relating to the no-cache flags on requests for STORE_PENDING objects which I do not quite understand, and the CVS log comment does not make me any wiser.. What is done simply looks wrong to me, or is at least missing some condition explaining when it should be done. The only thing I can think of is this: On the IRCache proxies I used to see some very aggressive (probably non-browser) user agents. Every request had the 'no-cache' directive and it seems like it was re-issuing the request often, say after a short timeout. When the URL was some big file on a far away server, Squid would accumulate numerous parallel downloads of the same object. Depending on quick_abort settings, that could be bad news. So the hack was put in place to accommodate this broken client (if it is the situation I am thinking of). Duane W.
Re: permanently enabling some -DPURIFY features.
On Wed, 3 Sep 2003, Robert Collins wrote: How do folk feel about us (post 3.0) permanently enabling the tidy cleanup stuff currently enabled by -DPURIFY, leaving the assert() changes and the mem pools disabling alone? I see no harm in having squid behave well on shutdown, and it will help prevent bitrot in that code. Some functions, like storeFreeMemory(), may take more time to execute than some users would like (because there are so many pointers to free). Sometimes when I shut down Squid I want it to exit quickly and start up again ASAP. DW
Re: squid-2.5 and coss
Here's a fun suggestion from our friend Robert - how about creating a _directory_ instead of a file for the COSS cachedir? Then we can place the store logfile in there, the coss storefile in there _and_ any other metadata. It'd make life a whole lot easier and mean you won't get bitten by the requirement to specify swaplog paths.. The only reason not to do that is if we want to eventually support using raw devices/partitions for COSS storage. I think it was one of the original plans. However, to do that, the current code would need some enhancements to make sure that I/Os occur on 512-byte boundaries and are always a multiple of 512 in size. That might get around some 2GB file-size limits in some cases too. Performance-wise, I suspect it doesn't matter. Using the filesystem seems to give darn good throughput already. DW
Re: squid-2.5 and coss
Well, theoretically, I should be able to easily forward-port my COSS changes to 3.0 once I know they work. Believe me, once it's running I'm going to be looking to move to 3.0. kqueue and epoll are my next things to sell. :) Adrian, FYI, my coss changes have not been committed to squid-3 yet because - I was going to wait until the time when it is okay to commit non-bugfix changes to the tree. - I haven't figured out the new magic OO way of doing cache_dir options yet. DW
Re: Squid-2.5 bugs to kill
On Sun, 10 Aug 2003, Henrik Nordstrom wrote: There are now 4 bugs on the list of Squid-2.5 issues classified as worth fixing during the 2.5 cycle, preferably soonish so they can be included in the upcoming 2.5.STABLE4 release. Not in your list is a relatively minor ICP timeout bug: #736: ICP dynamic timeout algorithm ignores multicast I guess there are no comments/objections, so I will probably commit my proposed patch. Duane W.
MAGIC1 in src/fs/aufs/store_asyncufs.h
We have: #define NUMTHREADS (Config.cacheSwap.n_configured*16) #define MAGIC1 (NUMTHREADS*Config.cacheSwap.n_configured*5) which means: #define MAGIC1 (Config.cacheSwap.n_configured*16*Config.cacheSwap.n_configured*5) It seems wrong to me that MAGIC1 is proportional to the SQUARE of the number of cache_dir's. I could even argue that it should be logarithmic, since performance doesn't really scale linearly with #disks or cache_dirs. Also, I would suggest that these calculations use the number of AUFS cache_dirs, which might be less than the total number. Kind of hard to put that in a #define though. Duane W.
Re: Making statCounter.syscalls.disk counters more consistent
On Thu, 18 Jul 2003, Robert Collins wrote: This won't increment the counters: the statCounter used by cachemanager is not the statCounter available to the external unlinkd process. yeah, my mistake, it belongs in the other part. None of the above are safe. They are all potentially racey on SMP machines. You need to use interlocked increments to safely perform such counts. Okay. There is already a call to increment disk.unlinks in aiops.c, so if it is unsafe I guess it should be removed. However, aufs -does- record the syscall counts: during the scheduling operation, not during the worker threads' actual call. That is thread safe today.. You mean as reported in the squidaio_counts cachemgr page? I'm trying to make the '5min' etc counts more consistent among all storage schemes. In my tests with AUFS, they are all zero, except for writes (which must be going through file_write() I guess?): syscalls.polls = 122.796291/sec syscalls.disk.opens = 0.00/sec syscalls.disk.closes = 0.00/sec syscalls.disk.reads = 0.00/sec syscalls.disk.writes = 180.793150/sec syscalls.disk.seeks = 0.00/sec syscalls.disk.unlinks = 0.00/sec So I should either increment the counters somewhere else, or not at all and say that it's too hard for AUFS. - statCounter.syscalls.disk.reads++; if (FD_READ_METHOD(fd, hdr_buf, SM_PAGE_SIZE) < 0) { debug(47, 1) ("storeAufsDirRebuildFromDirectory: read(FD %d): %s\n", fd, xstrerror()); Why are you removing this one? (Is it double counted?) Looks like I didn't give it enough thought/testing. It might be double counted, but I'm not sure. I can leave it.
of possible interest to ICAP developers
http://shweby.sourceforge.net/ I find their logo interesting, especially since they badmouth Squid in http://shweby.sourceforge.net/doc.php
Re: changes to make mib.txt work with current net-snmp
On Fri, 7 Feb 2003, David Luyer wrote: You can make that more compact as: SQUID-MIB DEFINITIONS ::= BEGIN enterprises OBJECT IDENTIFIER ::= { iso org(3) dod(6) internet(1) private(4) 1 } nlanr OBJECT IDENTIFIER ::= { enterprises 3495 } (you can merge those two lines also, but that's going a bit far) Although you shouldn't have to do it due to: IMPORTS enterprises [...] FROM SNMPv2-SMI And in fact if you are defining enterprises yourself, you shouldn't also import it. Are you sure your MIB path includes SNMPv2-SMI.txt? I'm not sure about anything when it comes to SNMP. After copying mib.txt to $prefix/share/snmp/mibs, then running snmpwalk (v5.0.7) I get: % /tmp/local/bin/snmpwalk -v 1 -c public -m SQUID-MIB bo2:3401 squid squid: Unknown Object Identifier (Sub-id not found: (top) - squid) ktrace shows that snmpwalk reads my SQUID-MIB.txt file, but apparently ignores it for some reason. If I change the file so that it looks like this at the top: SQUID-MIB DEFINITIONS ::= BEGIN enterprises OBJECT IDENTIFIER ::= { iso org(3) dod(6) internet(1) private(4) 1 } nlanr OBJECT IDENTIFIER ::= { enterprises 3495 } then it works better: % /tmp/local/bin/snmpwalk -v 1 -c public -m SQUID-MIB bo2:3401 squid | head SQUID-MIB::cacheSysVMsize.0 = INTEGER: 7996 SQUID-MIB::cacheSysStorage.0 = INTEGER: 19353283 SQUID-MIB::cacheUptime.0 = Timeticks: (80748) 0:13:27.48 ... (I guess the -m SQUID-MIB is unnecessary anyway) Also note that if I use -v 2c instead, I get no response from Squid. Actually, tcpdump shows Squid sending replies, but snmpwalk ignores them and reports: Timeout: No Response from bo2:3401