Re: refresh_pattern review
Following some IRC chat, I thought I'd start a discussion on a possible improvement of refresh_pattern in Squid3.

The starting point for this discussion is the fact that refresh_pattern is a source of confusion for many users, even expert admins. It's not obvious what it does, how to achieve certain things, or under what circumstances different bits of it apply or don't apply.

Currently refresh_pattern means different things depending on how the response freshness was calculated: whether by explicit header set by the origin server (Cache-Control, Expires), by invoking the Last-Modified algorithm (if it had a Last-Modified header), or whether it could not calculate a freshness by either of these methods. It's quite complicated.

I don't know what the right answer is. Here is an idea though: we could separate the configuration out into standard and HTTP-violating parts.

Let us define standard as the two mechanisms that are most semantically transparent:

1. Explicit expiration set by server (Cache-Control, Expires)
2. Heuristic expiration based on Last-Modified

And let's define HTTP-violating as anything that either overrides these, or anything that enforces cacheability in the absence of any of these headers.

What configuration options do we need for each of these two categories?

For the standard configuration: we don't need any options for the explicit expiry mechanism, as it's... explicit :)

However, we do need a couple of global options for the Last-Modified factor algorithm:

TAG: refresh_lastmod_factor (percent)
Default: 20

TAG: refresh_lastmod_max (minutes)
Default: 10080

These, then, are the only refresh options I propose for a non-HTTP-violating setup.

Now for the HTTP-violating overrides, which are more complicated.
Defaults are set first:

TAG: refresh_override_default options
Default: none

These can be refined by regex:

TAG: refresh_override_match [-i] pattern options
Default: none

where options can be any of:

min=xxx                 minimum amount of time this object will be considered fresh
max=xxx                 maximum amount of time this object will be considered fresh
ignore-reload=on|off    ignore all client headers that prevent serving a cached response
reload-into-ims=on|off  client reload is downgraded from unconditional to conditional GET
ignore-no-cache=on|off  ignore all server headers that prevent caching a response
ignore-no-store=on|off  ignore Cache-Control: no-store server header
ignore-private=on|off   ignore Cache-Control: private server header
ignore-auth=on|off      cache authorized responses, even if server didn't specify Cache-Control: public
refresh-ims=on|off      always pass client IMS requests through to the origin, even if we think our copy is fresh

For example:

refresh_override_default max=4320 reload-into-ims=on
refresh_override_match http://host/ ignore-reload=on ignore-no-cache=on ignore-no-store=on
refresh_override_match /path/ reload-into-ims=off
refresh_override_match \.jpe?g$ min=1440
refresh_override_match \.css$ max=60

Main differences in usage:

1. The overrides would always apply, regardless of how the expiration time was arrived at - whether by explicit headers or Last-Modified algorithm heuristics. Currently the Min, Max and Percent settings only apply in different specific circumstances, e.g. Max and Percent only apply to L-M requests, and Min only applies in the absence of L-M, Expires and CC max-age.

2. The refresh_override_default would always apply (although its options may be overridden by those of a refresh_override_match). Currently the default refresh_pattern only applies if no patterns match the request, meaning you can't ever override default behaviour, you can only fall back to it.

3. There is no way of setting the Last-Modified factor percentage by regex!
This is perhaps a big problem, and it could be added as an option. But then it would be the only non-HTTP-violating directive possible in the options... and so would spoil it slightly.

4. No need for global counterparts of refresh_pattern directives, e.g. refresh_all_ims and reload_into_ims.

5. Frequently used override options could be stated in the default instead of on every subsequent line.

This may be completely the wrong way of looking at it, or it may be just going too far. A smaller, but still helpful, step might be to introduce a refresh_pattern_default whose values would be inherited by any subsequent refresh_pattern match.

Any help or input into this would be very welcome indeed.

Doug

On 1 Jun 2006, at 20:06, Doug Dixon wrote:

Hi

I'm fixing bug 1202 (it's a simple fix) and am cleaning up refresh.cc at the same time. I'd like to review the various refresh_pattern options,
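To make the smaller step concrete, here is one way the inheritance could look in squid.conf. This is entirely hypothetical syntax for the proposed refresh_pattern_default directive, not anything Squid currently accepts:

```
# Hypothetical: values set here are inherited by every later
# refresh_pattern line unless that line overrides them.
refresh_pattern_default      0  20%  4320

# Inherits the 20% factor and 4320-minute max from the default:
refresh_pattern -i \.jpe?g$  1440  20%  10080
refresh_pattern .            0     20%  4320
```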
2 fixes for the squid icap implementation
Hi,

Could someone add the 2 attached patches to the squid icap client? Here's a description of what they fix...

15minutefix.patch
=================
I noticed that if an icap server replied with a 204 response, the download would continue without being forwarded to icap, but it would then abort after 15 minutes. This was because the read timeout on the icap connection wasn't being cancelled. This fix cancels the timeout, which avoids the aborted downloads.

squidcrashfix.patch
===================
A fix that I submitted some time ago (squidgrowthfix.patch) caused squid to crash under heavy load. This was because the deferred read check I added relied on the store entry object being present but didn't lock it. Under heavy load the store entry was sometimes freed before the deferred read check, which caused squid to crash. This patch maintains a lock on the store entry object and also has some other code to make sure the store entry object is unlocked and freed when it is no longer required.

Thanks,
Graeme
Re: 2 fixes for the squid icap implementation
Hi Graeme,

I have not verified the problems myself, but it looks like they exist.

About the first patch, I am proposing a somewhat different approach. When squid-icap receives a 204 response, it immediately releases the connection and puts it into the connection pool so it can be used for other icap requests. The icap connection's file descriptor is then no longer related to the IcapStateData structure, and we only have to free this structure when we have all the data from the http server. The patch is attached if you want to test it. I must verify and correct it before applying it to cvs. If it causes problems, I will proceed with your patch.

About the second patch: if there isn't any problem with it, I want to test it more and, if possible, move most of its code to the icap_*.c files. This will save us time when merging with the main branch.

Regards,
Christos

Graeme Bisset wrote:
diff -u -r1.1.2.65 icap_respmod.c
--- icap_respmod.c	25 May 2006 16:04:55 -0000	1.1.2.65
+++ icap_respmod.c	15 Jun 2006 18:01:52 -0000
@@ -500,6 +500,8 @@
 #if SUPPORT_ICAP_204 || ICAP_PREVIEW
     if (204 == status) {
 	debug(81, 3) ("got 204 status from ICAP server\n");
+	icapRespModKeepAliveOrClose(icap, 0);
+	debug(81, 3) ("setting icap->flags.no_content\n");
 	icap->flags.no_content = 1;
 	/*
@@ -511,7 +513,8 @@
 	    icap->respmod.resp_copy.buf, icap->respmod.resp_copy.size);
 	icap->respmod.resp_copy.size = 0;
 	if (icapReadReply2(icap) < 0)
-	    comm_close(fd);
+	    icapStateFree(-1, icap);
+//	    comm_close(fd);
 	/*
 	 * XXX ideally want to clean icap->respmod.resp_copy here
 	 * XXX ideally want to close ICAP server connection here
@@ -801,26 +804,29 @@
  * transaction.
  */
 static void
-icapRespModKeepAliveOrClose(IcapStateData * icap)
+icapRespModKeepAliveOrClose(IcapStateData * icap, int free_icap_state)
 {
     int fd = icap->icap_fd;
     if (fd < 0)
 	return;
-    if (!icap->flags.keep_alive) {
-	debug(81, 3) ("%s:%d keep_alive not set, closing\n", __FILE__,
-	    __LINE__);
-	comm_close(fd);
-	return;
-    }
     debug(81, 3) ("%s:%d FD %d looks good, keeping alive\n", __FILE__,
 	__LINE__, fd);
     commSetDefer(fd, NULL, NULL);
     commSetTimeout(fd, -1, NULL, NULL);
     commSetSelect(fd, COMM_SELECT_READ, NULL, NULL, 0);
     comm_remove_close_handler(fd, icapStateFree, icap);
-    pconnPush(fd, icap->current_service->hostname, icap->current_service->port);
     icap->icap_fd = -1;
-    icapStateFree(-1, icap);
+    if (free_icap_state)
+	icapStateFree(-1, icap);
+    if (!icap->flags.keep_alive) {
+	debug(81, 3) ("%s:%d keep_alive not set, closing\n", __FILE__,
+	    __LINE__);
+	comm_close(fd);
+	return;
+    }
+    else {
+	pconnPush(fd, icap->current_service->hostname, icap->current_service->port);
+    }
 }
@@ -1042,7 +1048,10 @@
 	comm_close(fd);
     } else if (icapPconnTransferDone(fd, icap)) {
 	storeComplete(entry);
-	icapRespModKeepAliveOrClose(icap);
+	if (icap->flags.no_content)
+	    icapStateFree(-1, icap);
+	else
+	    icapRespModKeepAliveOrClose(icap, 1);
     } else if (!icap->flags.no_content) {
 	/* Wait for EOF condition */
 	commSetSelect(fd, COMM_SELECT_READ, icapReadReply, icap, 0);
Re: Merge some other enh bugs in 2.6 ?
On Thu, Jun 15, 2006, Guido Serassio wrote:

- #1235: New options to ignore Cache-Control headers when necessary
- #1423: Could use TCP_CORK to optimize cache hits

I'd leave these until squid-2.6 has been made stable. (I'll get back into squid coding next week and I'll concentrate on fixing squid-2.6 for production.)

Adrian
Talking to an upstream ISA server that requires NTLM authentication.
Hi All,

I am working with squid version 2.5 stable 7. I would like to try to get squid talking to an upstream ISA server that requires NTLM authentication. Squid works as a transparent proxy with an ISA parent (login=PASS). I have also applied the connection pinning patch from Nov 7 2005. The connection is marked as pinned for both WWW and PROXY authentication. WWW NTLM authentication works well!

In order to support proxy authentication, I changed httpReadReply() (http.c) to refer to orig_request->pinned_connection instead of request->pinned_connection. Now the proxy NTLM negotiation goes well, but right after it I am getting an assert. I believe this is because I need better support for FD handling and pinning. It seems that the FD to the other proxy is being closed twice?!

I would really appreciate it if you could share some tips/guidelines about what should be taken into consideration when trying to support proxy connection pinning. Did anyone try to support NTLM for a proxy before?

Thank you in advance,
Tsachi Kimel

Here are the last lines from the debug file (comm_close: FD 16 is called twice!?):
2006/06/15 16:22:32| CommWriteStateCallbackAndFree ## ENTER ### fd=13
2006/06/15 16:22:32| commCallCloseHandlers: FD 13
2006/06/15 16:22:32| commCallCloseHandlers: ch->handler=0x805c7b0
2006/06/15 16:22:32| comm_close: FD 16
2006/06/15 16:22:32| CommWriteStateCallbackAndFree ## ENTER ### fd=16
2006/06/15 16:22:32| commCallCloseHandlers: FD 16
2006/06/15 16:22:32| commCallCloseHandlers: ch->handler=0x807793c
2006/06/15 16:22:32| commCallCloseHandlers: ch->handler=0x806bc44
2006/06/15 16:22:32| fwdServerClosed: FD 16 http://www.google.co.il/
2006/06/15 16:22:32| fwdStateFree: 0x847d308
2006/06/15 16:22:32| commCallCloseHandlers: ch->handler=0x8062268
2006/06/15 16:22:32| clientPinnedConnectionClosed ## ENTER ### fd=16
2006/06/15 16:22:32| commCallCloseHandlers: ch->handler=0x806d2d4
2006/06/15 16:22:32| fd_close FD 16 http://www.google.co.il/
2006/06/15 16:22:32| fd_close FD 13 http://www.google.co.il/
2006/06/15 16:22:32| comm_close: FD 16
2006/06/15 16:22:32| assertion failed: comm.c:746: F->flags.open