> ...lookup for what's expiring, just sleep
> until something needs to be regenerated.
> Bye
>
> Sergio
>
> -Original Message-
> From: Parin Shah [mailto:[EMAIL PROTECTED]
> Sent: Friday, 22 July 2005 8.02
> To: dev@httpd.apache.org
> Subject: Re: mod-cache-requestor plan
Thanks Ian, Graham and Sergio for your help.
For the past couple of days I have been trying to figure out how our
mod-cache-requester should spawn a thread (or set of threads).
Currently I am considering the following option; please let me know
what you think about this approach.
- mod-cache-requester would be
Parin Shah wrote:
2. how mod-cache-requester can generate the sub request just to reload
the content in the cache.
Look inside mod_include - it uses subrequests to be able to embed pages
within other pages.
Regards,
Graham
--
Sent: Wednesday, 20 July 2005 8.34
To: dev@httpd.apache.org
Subject: Re: mod-cache-requestor plan
Hi All,
We are now almost at consensus about this new mod-cache-requester
module's mechanism, and I believe it's now a good time to start
implementing the module.
But before I can do that, I need some help from you guys.
- I am now comfortable with mod-cache, mod-mem-cache, cache_storage.c,
cache_
> you should be using a mix of
>
> # requests
> last access time
> cost of reproducing the request.
>
Just to double check: we would insert an entry into the 'refresh queue'
only if the page is requested and the page is soon to expire. Once it
is in the queue we would use the above parameters to calculate its priority.
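As an illustration only (the weights, function names, and the enqueue threshold below are my own assumptions, not anything decided on this list), the mix of parameters above could be folded into a single score like this:

```python
import time

def refresh_priority(hits, last_access, regen_cost_ms, now=None):
    """Illustrative priority score for the 'refresh queue': more hits,
    more recent access, and a higher cost of reproducing the response
    all raise priority. The combination is an arbitrary placeholder."""
    now = time.time() if now is None else now
    recency = 1.0 / (1.0 + (now - last_access))  # decays as the URL sits idle
    return hits * recency * regen_cost_ms

def should_enqueue(expires_at, threshold_s=60, now=None):
    """Enqueue only pages that are requested and soon to expire."""
    now = time.time() if now is None else now
    return 0 <= expires_at - now <= threshold_s
```

A recently hit, expensive page would then outrank a long-idle one, which matches the intent of the three parameters quoted above.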
Parin Shah wrote:
- I would prefer the approach where we maintain priority queue to keep
track of popularity. But again you guys have more insight and
understanding. so whichever approach you guys decide, I am ready to
work on it! ;-)
Beware of scope creep - we can always start with something
On Fri, Jul 15, 2005 at 01:23:29AM -0500, Parin Shah wrote:
> - we need to maintain a counter for url in this case which would
> decide the priority of the url. But maintaining this counter should be a
> low overhead operation, I believe.
Is a counter strictly speaking the right approach? Why not a t
Thanks all for your thoughts on this issue.
> > The priority re-fetch would make sure the
> > popular pages are always in cache, while others are allowed to die at
> > their expense.
>
>
> So every request for an object would update a counter for that url?
>
- we need to maintain a counter
On 7/14/05 9:59 AM, "Ian Holsman" <[EMAIL PROTECTED]> wrote:
>
> that wouldn't keep track of the popularity of the given url, only when
> it is stored.
Which would be a useful input to something like htcacheclean so that it does
not have to scan directories.
> The priority re-fetch would m
This was a private message. I will continue this one offline.
On 7/13/05 6:41 PM, "Ian Holsman" <[EMAIL PROTECTED]> wrote:
> a pool of threads read the queue and start fetching the content, and
> re-filling the cache with fresh responses.
>
How is this better than simply having an external cron job to fetch the
urls? You have total control of throttling
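Brian's alternative (an external job re-fetching the URL list, with full control of throttling) might look roughly like this sketch; the function names are illustrative, and the fetch and sleep callables are injectable here only so the loop stays testable — a real cron job would use an HTTP GET and a real delay:

```python
import time

def refresh_urls(urls, fetch, delay_s=0.0, sleep=time.sleep):
    """Re-fetch each cached URL in turn, pausing between requests so the
    backend never sees more than one refresh at a time (crude throttling;
    a token bucket would also work). `fetch` would be an HTTP GET in a
    real cron job."""
    results = {}
    for url in urls:
        results[url] = fetch(url)
        sleep(delay_s)
    return results
```

The trade-off being debated above is that this runs outside httpd, so it cannot see cache internals such as per-entry expiry times.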
On 7/13/05 6:36 PM, "Ian Holsman" <[EMAIL PROTECTED]> wrote:
> Hi There.
>
> just remember that this project is Parin's SoC project, and he is
> expected to do the code on it.
sure. I am expected to do what's best for my employer and the httpd
project.
> While normally I think it would be gr
What my initial idea for this was:
we feed the 'soon to be expired' URLs into a priority queue (similar to
mod-mem-cache's).
a pool of threads reads the queue and starts fetching the content,
re-filling the cache with fresh responses.
the benefit of this method would be that we control e
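The two pieces of Ian's plan — a priority queue of soon-to-expire URLs and a pool of worker threads draining it — can be sketched as follows. This is a minimal illustration, not mod_mem_cache's actual queue; all names are assumptions:

```python
import heapq
import threading

class RefreshQueue:
    """'Soon to be expired' URLs ordered by priority; worker threads
    drain it and re-fetch each URL, re-filling the cache."""
    def __init__(self):
        self._heap = []
        self._lock = threading.Lock()
        self._available = threading.Semaphore(0)

    def put(self, priority, url):
        with self._lock:
            # negate so the highest priority pops first from the min-heap
            heapq.heappush(self._heap, (-priority, url))
        self._available.release()

    def get(self):
        self._available.acquire()   # block until an entry is available
        with self._lock:
            return heapq.heappop(self._heap)[1]

def start_workers(queue, refetch, n=4):
    """Spawn n daemon threads that pull URLs and refresh the cache."""
    def worker():
        while True:
            refetch(queue.get())
    for _ in range(n):
        threading.Thread(target=worker, daemon=True).start()
```

Because the workers are the only ones hitting the backend, their count `n` is also the natural place to apply the bounds discussed elsewhere in this thread.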
On 7/13/05 2:43 PM, "Graham Leggett" <[EMAIL PROTECTED]> wrote:
> This was one of the basic design goals of the new cache, but the code
> for it was never written.
>
> It was logged as a bug against the original v1.3 proxy cache, which
> suffered from thundering herd when cache entries expired
Parin Shah wrote:
- In this case, what would be the criteria to determine which pages
should be refreshed and which should be left out? Initially I thought
that all the pages that are about to expire and have been requested
should be refreshed. But, if we consider keeping non-popular but
Akins, Brian wrote:
This avoids the "thundering herd" to the backend server/database/whatever
handler.
Trust me, it works :)
This was one of the basic design goals of the new cache, but the code
for it was never written.
It was logged as a bug against the original v1.3 proxy cache, which
suffered from thundering herd when cache entries expired.
On 7/12/05 10:27 PM, "Parin Shah" <[EMAIL PROTECTED]> wrote:
>
>> Also, one of the flaws of mod_disk_cache (at least the version I am looking
>> at) is that it deletes objects before reloading them. It is better for many
>> reasons to only replace them. That's the best way to accomplish what
> We have been down this road. The way one might solve it is to allow
> mod_cache to be able to reload an object while serving the "old" one.
>
> Example:
>
> cache /A for 600 seconds
>
> after 500 seconds, request /A with special header (or from special client,
> etc) and cache does not serve
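The rest of Brian's example is cut off in the archive, but the mechanism he describes — replace rather than delete, with a special near-expiry request refreshing the entry while normal clients keep getting the cached copy — could look like this sketch (all names and the header-flag mechanism are illustrative, not the mod_cache API):

```python
import time

class Entry:
    def __init__(self, body, expires_at):
        self.body = body
        self.expires_at = expires_at

def lookup(cache, key, refresh_flag, regenerate, ttl=600, now=None):
    """Serve the cached copy to ordinary clients until it expires. A
    request carrying the special refresh flag (sent near end-of-life,
    e.g. after 500 of 600 seconds) bypasses the cache, hits the backend
    once, and stores the fresh copy *in place of* the old one -- the
    entry is never deleted first, so clients never wait on the backend."""
    now = time.time() if now is None else now
    entry = cache.get(key)
    if refresh_flag or entry is None or now >= entry.expires_at:
        body = regenerate(key)               # single backend request
        cache[key] = Entry(body, now + ttl)  # replace, never delete-first
        return body
    return entry.body                        # serve the current copy
```

The key property is that between the refresh request and expiry, ordinary lookups still return the old body, so there is no window with an empty cache slot.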
On 7/11/05 11:48 PM, "Parin Shah" <[EMAIL PROTECTED]> wrote:
> should be
> refreshed. but, if we consider keeping non-popular but expensive pages in
> the cache, in that case how would the mod-cache-requester make the
> decision?
>
We have been down this road. The way one might solve it is
> I believe the basic idea of forwarding multiple requests on the back
> end can be a very good idea, but needs some bounds as Graham suggests.
It's an interesting thought. But after Graham's opinion, I am not too
sure about the ratio of performance improvement to overhead incurred
by the threads. if we could ga
> - Cache freshness of an URL is checked on each hit to the URL. This runs
> the risk of allowing non-popular (but possibly expensive) URLs to expire
> without the chance to be refreshed.
>
> - Cache freshness is checked in an independent thread, which monitors the
> cached URLs for freshness at pred
Hi all
I basically agree with Graham, with just one observation on multi-threaded
subrequests.
I believe the basic idea of forwarding multiple requests on the back end can be
a very good idea, but needs some bounds as Graham suggests.
In my opinion you can define a mod_cache_requester connection
Parin Shah said:
> When the page expires from the cache, it is removed from cache and
> thus next request has to wait until that page is reloaded by the
> back-end server.
This is not strictly true - when a page expires from the cache, a
conditional request is sent to the backend server, and if a
management
Bye
Sergio
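The correction a few messages up — an expired page triggers a conditional request to the backend rather than a full reload — can be sketched like this; the dict layout and the `backend_fetch` signature are illustrative stand-ins for the real conditional HTTP request, not mod_cache code:

```python
def revalidate(entry, backend_fetch, now):
    """An expired entry is not dropped. A conditional request
    (If-Modified-Since style) goes to the backend: a 304 means the body
    is unchanged and only the expiry is extended, while a 200 replaces
    the body. `backend_fetch(last_modified)` is assumed to return
    (status, body, new_expiry)."""
    if now < entry["expires_at"]:
        return entry                      # still fresh, no backend traffic
    status, body, new_expiry = backend_fetch(entry["last_modified"])
    if status == 304:                     # unchanged: keep cached body
        entry["expires_at"] = new_expiry
        return entry
    entry["body"] = body                  # changed: store fresh response
    entry["expires_at"] = new_expiry
    return entry
```

This is why expiry is cheaper than the original description suggests: when the content has not changed, only headers cross the wire.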
> From: Parin Shah <[EMAIL PROTECTED]>
> Date: Sun, 10 Jul 2005 23:24:10 -0500
> To: dev@httpd.apache.org
> Subject: mod-cache-requestor plan
>
> Hi All,
>
> I am a newbie. I am going to work on mod-cache and a new module
> mod-cache-reque
Hi All,
I am a newbie. I am going to work on mod-cache and a new module
mod-cache-requester as a part of Soc program.
A small description of the module follows.
When the page expires from the cache, it is removed from cache and
thus next request has to wait until that page is reloaded by the