On Mon, 1 May 2006, Davi Arnaut wrote:
More important, if we stick with the key/data concept it's possible to
implement the header/body relationship under single or multiple keys.
I've been hacking on mod_disk_cache to make it:
* Only store one set of data when one uncached item is accessed
On Tue, May 2, 2006 11:22 am, Niklas Edmundsson said:
I've been hacking on mod_disk_cache to make it:
* Only store one set of data when one uncached item is accessed
simultaneously (currently all requests cache the file and the last
finished cache process wins).
* Don't wait until
-Original Message-
From: Graham Leggett
* Don't block the requesting thread when requesting a large uncached
item, cache in the background and reply while caching
(currently it
stalls).
This is great, in doing this you've been solving a proxy bug that was first
On 5/1/06, Greg Ames [EMAIL PROTECTED] wrote:
Jeff Trawick wrote:
after more thought, there is a simpler patch that should do the job. the key
to both of these is how threads in SERVER_DEAD state with a pid in the
scoreboard are treated. this means that p_i_s_m forked on a previous timer
On Tue, May 2, 2006 12:16 pm, Plüm, Rüdiger, VF EITO said:
This is great, in doing this you've been solving a proxy bug that was
first reported in 1998 :).
This already works in the case you get the data from the proxy backend. It
does not work for local files that get cached (the scenario
On Tue, 2 May 2006, Graham Leggett wrote:
I've been hacking on mod_disk_cache to make it:
* Only store one set of data when one uncached item is accessed
simultaneously (currently all requests cache the file and the last
finished cache process wins).
* Don't wait until the whole item
On Tue, 2 May 2006, Graham Leggett wrote:
This is great, in doing this you've been solving a proxy bug that was
first reported in 1998 :).
This already works in the case you get the data from the proxy backend. It
does not work for local files that get cached (the scenario Niklas uses the
-Original Message-
From: Graham Leggett
The reason it does not work currently is that a local file usually is
delivered in one brigade with, depending on the size of the file, one or
more file buckets.
Hmmm - ok, this makes sense.
Something I've never
Jeff Trawick wrote:
On 5/1/06, Greg Ames [EMAIL PROTECTED] wrote:
after more thought, there is a simpler patch that should do the job. the key
to both of these is how threads in SERVER_DEAD state with a pid in the
scoreboard are treated. this means that p_i_s_m forked on a previous
On Tue, May 2, 2006 2:18 pm, Niklas Edmundsson said:
Exactly what is the thundering herd problem? I can guess the general
problem, but without a more precise definition I can't really say if
my patch fixes it or not.
If it's:
* Link to latest GNOME Live CD gets published on Slashdot.
* A
On Tue, 2 May 2006, Plüm, Rüdiger, VF EITO wrote:
Another thing: I guess on systems with no mmap support the current
implementation of mod_disk_cache will eat up a lot of memory if you cache a
large local file, because it transforms the file bucket(s) into heap
buckets in this case. Even if
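The memory blow-up described above can be avoided by streaming the file in
bounded chunks rather than materializing the whole thing in heap buffers at
once. A minimal Python sketch of the idea (mod_disk_cache itself is C
operating on APR buckets; the function name here is purely illustrative):

```python
def copy_in_chunks(src_path, dst_path, chunk_size=64 * 1024):
    """Stream src_path into dst_path without ever holding more than
    chunk_size bytes in memory, regardless of the file's total size.
    Returns the number of bytes copied."""
    total = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break  # end of file
            dst.write(chunk)
            total += len(chunk)
    return total
```

Peak memory stays at chunk_size even for a multi-gigabyte file, which is the
property the heap-bucket conversion loses.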
Graham Leggett wrote:
- the cache says "cool, will send my copy upstream. Oops, where has my
data gone?"
So, the cache says, "okay, must get content the old fashioned way (proxy,
filesystem, magic fairies, etc.)."
Where's the issue?
--
Brian Akins
Lead Systems Engineer
CNN Internet
On Tue, May 2, 2006 2:03 pm, Niklas Edmundsson said:
This is great, in doing this you've been solving a proxy bug that was
first reported in 1998 :).
OK. Stuck in the "File under L for Later" pile? ;)
Er no, it was under the "redesign the entire code to fix it" class of
bugs. :)
The v2.0
On Tue, 2 May 2006, Graham Leggett wrote:
If it's:
* Link to latest GNOME Live CD gets published on Slashdot.
* A gazillion users click the link to download it.
* mod_disk_cache starts a new instance of caching the file for each
request, until someone has completed caching the file.
Then
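The herd can be thinned by letting exactly one request become the cacher
while the rest serve without caching. One common trick, sketched here in
Python under the assumption of a lock file per cache key (this is an
illustration of the general technique, not mod_disk_cache's actual
mechanism), relies on the atomicity of O_CREAT|O_EXCL:

```python
import os

def try_become_cacher(lockdir, key):
    """Return True for exactly one caller per key: O_CREAT|O_EXCL makes
    lock-file creation atomic, so concurrent requests for the same
    uncached item get False and can simply serve without re-caching."""
    path = os.path.join(lockdir, key + ".lock")
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

def finish_caching(lockdir, key):
    """Release the lock once the cache entry is complete (or abandoned)."""
    os.unlink(os.path.join(lockdir, key + ".lock"))
```

With this in place, a Slashdotted Live CD is written to the cache once, not
once per concurrent request.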
On Tue, 2 May 2006 11:22:31 +0200 (MEST)
Niklas Edmundsson [EMAIL PROTECTED] wrote:
On Mon, 1 May 2006, Davi Arnaut wrote:
More important, if we stick with the key/data concept it's possible to
implement the header/body relationship under single or multiple keys.
I've been hacking on
On 5/2/06, Chris Darroch [EMAIL PROTECTED] wrote:
If you can bear with me for a day or two more, I should have
a collection of patches ready. These tackle the issue by
tracking the start and listener threads in a nice new spot in
the scoreboard, and also clean up various issues and bugs
On Tue, May 2, 2006 3:24 pm, Brian Akins said:
- the cache says "cool, will send my copy upstream. Oops, where has my
data gone?"
So, the cache says, "okay, must get content the old fashioned way (proxy,
filesystem, magic fairies, etc.)."
Where's the issue?
To rephrase it, a whole lot of
On Tue, 2 May 2006, Graham Leggett wrote:
The need-size-issue goes for retrievals as well.
If you are going to read from partially cached files, you need a total
size field as well as a flag to say "give up, this attempt at caching
failed"
Are there partially cached files? If I request the
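The total-size field and give-up flag discussed above could live in a small
per-entry metadata record written before any body data arrives. A
hypothetical sketch (JSON for readability; the field names and format are
illustrative, not mod_disk_cache's on-disk header layout):

```python
import json

def write_cache_meta(meta_path, total_size):
    """Record the expected total size up front, plus a status flag so a
    reader of a partially written cache file can tell "still growing"
    apart from "caching failed, give up"."""
    meta = {"total_size": total_size, "status": "caching"}
    with open(meta_path, "w") as f:
        json.dump(meta, f)
    return meta

def mark_cache_status(meta_path, status):
    """Flip the entry to "complete" or "failed" when the writer finishes."""
    with open(meta_path) as f:
        meta = json.load(f)
    meta["status"] = status
    with open(meta_path, "w") as f:
        json.dump(meta, f)
    return meta
```

A reader that reaches end-of-file before total_size bytes checks the status:
"caching" means wait for more data, "failed" means fall back to the backend.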
-Original Message-
From: Niklas Edmundsson
Correct. When caching a 4.3GB file on a 32bit arch it gets so bad that
mmap eats all your address space and the thing segfaults. I initially
thought it was eating memory, but that's only if you have mmap disabled.
Ahh, good
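Mapping a 4.3GB file whole can never work in a 32-bit process, since the
entire address space tops out at 4GB. The usual workaround is to map
fixed-size windows sequentially. A rough Python sketch of the windowing idea
(mmap offsets must be multiples of mmap.ALLOCATIONGRANULARITY, which a 1MB
window satisfies on common platforms):

```python
import mmap
import os

def checksum_via_windows(path, window=1 << 20):
    """Walk a potentially huge file through fixed-size mmap windows
    instead of one whole-file mapping: each window needs only `window`
    bytes of address space. Sums all byte values as a stand-in for
    whatever per-byte work a cache would really do."""
    size = os.path.getsize(path)
    total = 0
    with open(path, "rb") as f:
        offset = 0
        while offset < size:
            length = min(window, size - offset)
            with mmap.mmap(f.fileno(), length, access=mmap.ACCESS_READ,
                           offset=offset) as m:
                total += sum(m[:])
            offset += length
    return total
```

Address-space use is bounded by the window size, so a 4.3GB file is no worse
than a 4.3MB one.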
On Tue, 2 May 2006 15:40:30 +0200 (SAST)
Graham Leggett [EMAIL PROTECTED] wrote:
On Tue, May 2, 2006 3:24 pm, Brian Akins said:
- the cache says "cool, will send my copy upstream. Oops, where has my
data gone?"
So, the cache says, okay must get content the old fashioned way (proxy,
On Tue, May 2, 2006 3:50 pm, Niklas Edmundsson said:
Are there partially cached files? If I request the last 200 bytes of a
4.3GB DVD image, the bucket brigade contains the complete file... The
headers say ranges and all sorts of things but they don't match
what's cached.
By partially
On Tue, May 2, 2006 7:06 pm, Davi Arnaut said:
There is no such scenario. I will simulate a request using the disk_cache
format:
The way HTTP caching works is a lot more complex than in your example, you
haven't taken into account conditional HTTP requests.
A typical conditional scenario
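A typical conditional exchange is straightforward to sketch: the cache
revalidates its entry with the origin, and a 304 Not Modified means "refresh
the stored headers, keep the stored body", while anything else replaces the
entry. A simplified dict-based illustration (not mod_cache's actual
structures or function names):

```python
def revalidate(cached, status, headers, body=None):
    """Fold a revalidation response into a cached entry.

    304 Not Modified: the stored body is still valid, so only the
    stored headers are refreshed (e.g. a new Expires or Date).
    Any other status replaces headers and body outright."""
    if status == 304:
        cached["headers"].update(headers)
    else:
        cached["headers"] = dict(headers)
        cached["body"] = body
    return cached
```

This is why a cache backend cannot be fully protocol-ignorant: the 304 case
requires amending stored headers independently of the stored body.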
Graham Leggett wrote:
The way HTTP caching works is a lot more complex than in your example, you
haven't taken into account conditional HTTP requests.
...
Still not sure how this is different from what we are proposing. We
really want to separate protocol from cache stuff. If we have a
On Tue, May 2, 2006 5:27 pm, Brian Akins said:
Still not sure how this is different from what we are proposing. We
really want to separate protocol from cache stuff. If we have a
revalidate for the generic cache it should address all your concerns.
???
To be HTTP compliant, and to solve
Graham Leggett wrote:
To be HTTP compliant, and to solve thundering herd, we need the following
from a cache:
This seems more like a wish list. I just want to separate out the cache
and protocol stuff.
- The ability to amend a subkey (the headers) on an entry that is already
cached.
An open forward from your friendly security team.
Original Message
Subject: 2.2+ security page empty?
Date: Tue, 2 May 2006 14:53:53 +0100 (BST)
From: Per Olausson [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
There is nothing on the security page any more for 2.2, is there a
On Tue, May 2, 2006 5:50 pm, Brian Akins said:
This seems more like a wish list. I just want to separate out the cache
and protocol stuff.
HTTP compliance isn't a wish, it's a requirement. A patch that breaks
compliance will end up being -1'ed.
The thundering herd issues are also a
Seems to me that the thundering herd / performance degradation is
inherent to the Apache design: all threads/processes are exact clones.
A more suitable design for this task, I think, would be to make each
process have a special purpose: cache maintenance (purging expired
entries, purging entries
On Tue, 2 May 2006 17:22:00 +0200 (SAST)
Graham Leggett [EMAIL PROTECTED] wrote:
On Tue, May 2, 2006 7:06 pm, Davi Arnaut said:
There is no such scenario. I will simulate a request using the disk_cache
format:
The way HTTP caching works is a lot more complex than in your example, you
Gonzalo Arana wrote:
A more suitable design for this task, I think, would be to make each
process have a special purpose: cache maintenance (purging expired
entries, purging entries to make room for new ones, creating new
entries, and so on), request processing (network/disk I/O, content
On 5/2/06, Brian Akins [EMAIL PROTECTED] wrote:
Gonzalo Arana wrote:
A more suitable design for this task, I think, would be to make each
process have a special purpose: cache maintenance (purging expired
entries, purging entries to make room for new ones, creating new
entries, and so on),
Gonzalo Arana wrote:
What problems have you seen with this approach? Postfix uses this
architecture, for instance.
Postfix implements SMTP, which is an asynchronous protocol.
Excuse my ignorance, what does "event mpm ... keep the balance very
good" mean?
Not all your threads are tied up
On 5/2/06, Brian Akins [EMAIL PROTECTED] wrote:
Gonzalo Arana wrote:
What problems have you seen with this approach? Postfix uses this
architecture, for instance.
Postfix implements SMTP, which is an asynchronous protocol.
and which problems may bring this approach?
Excuse my
Davi Arnaut wrote:
The way HTTP caching works is a lot more complex than in your example, you
haven't taken into account conditional HTTP requests.
I've taken into account the actual mod_disk_cache code!
mod_disk_cache doesn't contain any of the conditional HTTP request code,
which is why
On Tue, 02 May 2006 23:31:13 +0200
Graham Leggett [EMAIL PROTECTED] wrote:
Davi Arnaut wrote:
The way HTTP caching works is a lot more complex than in your example, you
haven't taken into account conditional HTTP requests.
I've taken into account the actual mod_disk_cache code!
Davi Arnaut wrote:
Graham, what I want is to be able to write a mod_cache backend _without_
having to worry about HTTP.
Then you will end up with code that does not meet the requirements of
HTTP, and you will have wasted your time.
Please go through _all_ of the mod_cache architecture, and
On Wed, 03 May 2006 01:09:03 +0200
Graham Leggett [EMAIL PROTECTED] wrote:
Davi Arnaut wrote:
Graham, what I want is to be able to write a mod_cache backend _without_
having to worry about HTTP.
Then you will end up with code that does not meet the requirements of
HTTP, and you will