tproxy caching?
I've got tproxy + squid-2.7 here and I noticed that some content wasn't being cached even after I made it unambiguously cacheable. The problem is "repaired" here:

Index: forward.c
===================================================================
RCS file: /cvsroot/squid/squid/src/forward.c,v
retrieving revision 1.131
diff -u -r1.131 forward.c
--- forward.c	5 Sep 2007 20:03:08 -0000	1.131
+++ forward.c	20 Jan 2008 06:47:17 -0000
@@ -712,7 +712,7 @@
      * peer, then don't cache, and use the IP that the client's DNS lookup
      * returned
      */
-    if (fwdState->request->flags.transparent && fwdState->n_tries && (NULL == fs->peer)) {
+    if (fwdState->request->flags.transparent && (fwdState->n_tries > 1) && (NULL == fs->peer)) {
	storeRelease(fwdState->entry);
	commConnectStart(fd, host, port, fwdConnectDone, fwdState, &fwdState->request->my_addr);
    } else {

The problem is that n_tries is always going to be 1 at this point, even before Squid attempts a new connection, so content suddenly becomes uncacheable. Am I on the right track?

Adrian

--
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
Re: cvs [server aborted]: "rtag" requires write access to the repository
On Fri 2008-01-18 at 04:51 -0800, Arthur Tumanyan wrote:
> Adrian Chadd wrote:
> >
> > Wait, this is to which repository? The squid-cache.org one or the
> > development one at sourceforge?
> > Adrian
>
> cvs on squid-cache.org.

That CVS repository is for the main release tree only, not development. Development is on devel.squid-cache.org and its CVS repository. Changes in the main repository are automatically reflected in the developer repository with only some hours' delay.

> should I be registered?

On devel.squid-cache.org / sourceforge, yes.

Regards
Henrik
Hi, I have a problem with my Squid
Hello. When I start Squid with the command /etc/init.d/squid start it tells me "Starting Squid HTTP proxy: squid." but when I look in webmin it tells me Squid is down :( . In my daemon.log there is this:

Ready to serve requests.
WARNING: url_rewriter #2 (FD 8) exited
WARNING: url_rewriter #5 (FD 11) exited
WARNING: url_rewriter #4 (FD 10) exited
Too few url_rewriter processes are running
The url_rewriter helpers are crashing too rapidly, need help!

Do you know why I get this message? Thank you for your answer.
Become a developer...
Hi, all! I'm interested in working on Squid 3. I have already created patches for:

bug 2101: Reuse pconns using LIFO
http://www.squid-cache.org/bugs/show_bug.cgi?id=2101

bug 1923: hop-by-hop headers MUST NOT be ICAP-encapsulated
http://www.squid-cache.org/bugs/show_bug.cgi?id=1923

bug 2038: check reply_body_max_size before ICAP
http://www.squid-cache.org/bugs/show_bug.cgi?id=2038

bug 2110: Send "Proxy-Connection: close" when shutting down
http://www.squid-cache.org/bugs/show_bug.cgi?id=2110

bug 1933: Mem::Init debugging lies?
http://www.squid-cache.org/bugs/show_bug.cgi?id=1933

bug 2168: ICAP and tcp_outgoing_address
http://www.squid-cache.org/bugs/show_bug.cgi?id=2168

bug 226: Pass through non-standard HTTP methods
http://www.squid-cache.org/bugs/show_bug.cgi?id=226

--
Thanks,
Alexey.
Re: cvs commit: squid3/src cf.data.pre ftp.cc structs.h
Amos Jeffries wrote:

amosjeffries	2008/01/19 00:15:30 MST

Modified files:
  src		cf.data.pre ftp.cc structs.h

Log:
EPSV support for FTP and other fixes.
- Adds full EPSV method support for FTP server connections
- Fixes debugging in FTP state machine into specific levels:
  * 0: critical problems
  * 1: non-critical problems
  * 2: FTP protocol chatter
  * 3: FTP logic flow debugging
  * 5: FTP data parsing flows
- Adds code documentation to some FTP functions.

Forgot to mention the addition of the 'ftp_epsv_all' squid.conf option to enable/disable FTP 'EPSV ALL' extension capabilities for NAT traversal. It is documented in squid.conf.default anyway.

Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.
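For readers who don't want to dig through squid.conf.default: based on the commit description, usage of the new directive would look roughly like the fragment below. The exact syntax and default are assumptions here; check squid.conf.default for the authoritative form.

```
# squid.conf fragment (syntax assumed from the commit message above):
# allow Squid to send "EPSV ALL" to FTP servers, which helps FTP
# data connections traverse NAT by committing to extended passive mode
ftp_epsv_all on
```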
diskd stability stuff
I've been looking at the diskd code a little for someone, to see if I can mitigate the crashes under load. I didn't feel like trying to fix the main code paths to support reentry :) So here's what I've got thus far:

http://www.creative.net.au/diffs/20080119-diskd-2.diff

* Track the number of opened storeIOState's per swap dir;
* Limit magic1 to the number of open files in that swapdir, rather than the number of away messages;
* Disable using diskd for unlink; just use unlinkd.

These are all an attempt to constrain the queue size to be somewhat related to the number of open storeIOState's for a given swapdir. Unfortunately it's not -quite- related, as I'm still seeing 3x and 4x the number of away messages versus storeIOState's for a given swapdir, but it doesn't reach magic2 anywhere near as often now and doesn't end up having to call storeDirCallback() recursively under high load. Magic1 = 64, Magic2 = 128 here.

I think that's about as good a solution as I can come up with in the short term. I'm not going to commit it in its entirety -- I may just commit the unlinkd change, as that by itself may mitigate issues enough to be worth it -- but if diskd is going to hang around in the future then it needs to be a way of dispatching queued disk events rather than being the queue itself (ie, how aio works).

(Hopefully it works for the poor guy who is stuck with diskd and the crashes!)

Adrian

--
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -