--- On Sun, 7/1/12, Amos Jeffries <squ...@treenet.co.nz> wrote:

> From: Amos Jeffries <squ...@treenet.co.nz>
> Subject: Re: [squid-users] a miss threshold for certain times of a specified 
> webpages
> To: squid-users@squid-cache.org
> Date: Sunday, July 1, 2012, 8:45 AM
> On 1/07/2012 1:04 a.m., Mustafa Raji
> wrote:
> > hello
> > 
> > is there an option that limits the number of accesses to a
> > webpage before it is considered cacheable and Squid caches
> > the webpage?
> >
> > Example: some option like a (miss threshold) = 30, so the
> > user's first 30 requests for the page are treated as miss
> > requests, and once the request count reaches this threshold
> > (30), Squid considers the webpage's objects cacheable and
> > begins to cache them.
> 
> Uhm, why are you even considering this?  What benefit
> can you gain by wasting bandwidth and server CPU time?
> 
> HTTP servers send out Cache-Control details specifying
> whether and for how long each object can be cached.
> Replacing these controls (which are often carefully chosen
> by the webmaster) with arbitrary other algorithms like the
> one you suggest is where all the trouble people have with
> proxies comes from.
> 
> Amos
> 
> 
Thanks, Amos, for your reply.

What about an option that treats the first 60 HTTP requests for the Google
webpage as misses, and only after those 60 requests allows the Google webpage
to be cached? Is there any option in Squid to do this, without any time
limitation of course?
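
As a minimal sketch of the Cache-Control mechanism Amos mentions above (the
values below are made-up examples, not taken from any real site):

    HTTP/1.1 200 OK
    Content-Type: text/html
    Cache-Control: public, max-age=3600

Here max-age says the object may be stored and reused by a cache for up to
3600 seconds, while a response carrying "Cache-Control: no-store" must not be
cached at all. Squid honours these response headers by default, so
cacheability is normally decided per object by the origin server's headers.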
