On Thu, 18 Jan 2007, Giuliano Gavazzi wrote:
Actually, I don't. Either you or I have misunderstood how buckets work,
since the rest of the code should be syntactically equivalent. Or I'm
missing some fine detail somewhere.
Perhaps I do not fully understand buckets (and brigades), but this seems
clear enough.
The fine detail is in the original code (sorry for repeating myself):
while (e != APR_BRIGADE_SENTINEL(bb)) {
Ugh. I see it now.
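For the archives, the loop in question is just the standard brigade walk
with the sentinel as terminator, roughly like this (my simplified sketch,
not the actual mod_disk_cache code):

#include "apr_buckets.h"

/* Minimal sketch of the usual brigade walk. The sentinel is not a real
 * bucket, it only marks the end of the ring, so the loop must stop when
 * e reaches it. Illustrative only, not the mod_disk_cache code. */
static void walk_brigade(apr_bucket_brigade *bb)
{
    apr_bucket *e;

    for (e = APR_BRIGADE_FIRST(bb);
         e != APR_BRIGADE_SENTINEL(bb);
         e = APR_BUCKET_NEXT(e)) {
        if (APR_BUCKET_IS_EOS(e)) {
            break;  /* end of stream, nothing more to process */
        }
        /* inspect or process bucket e here */
    }
}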
The version on trunk solves the read-while-caching problem differently
from my patch, and that solution depends on other stuff in trunk. If I
remember correctly you grafted the trunk version onto 2.2.4, and that's
bound to fail.
Either test trunk, or 2.2.4. Don't mix files freely between them and
expect stuff to work ;)
I have also tested your patch
(httpd-2.2.4-mod_disk_cache-jumbo20070117.patch) and in my limited test it
works for SSI, but it does not seem any less prone than my r470455 patch to
hammering the back-end. It is actually a tad worse.
A test on localhost with an SSI calling this script:
#!/bin/sh
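# log each invocation so back-end hits can be counted, then stall so
# concurrent requests overlap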
echo `date` >> foo.log
sleep 10
echo bar
with:
/usr/local/apache2/bin/ab -c 10 -n 20 URL
gives 13 calls to the backend with yours and 12 with mine, and 18 failures
out of 20 (for length) with yours versus none with mine. It actually seems
that yours confuses ab: it reported a length two bytes short, not matching
the one in the header file.
The throughput is about the same.
What's your update timeout? If you have a sleep 10 in the script,
you'll need an update timeout longer than that or you'll always fail.
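Concretely, something like this in httpd.conf (the CacheUpdateTimeout name
and its millisecond unit are from my memory of the jumbo patch, so
double-check against the patch itself):

CacheRoot /usr/local/apache2/cache
CacheEnable disk /
# Assumed directive from the jumbo patch: how long other requests wait for
# the caching instance. With a 10 s backend delay it must be well above
# 10000 ms.
CacheUpdateTimeout 15000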
It shouldn't report different lengths though.
Enable debug logging in httpd and review the debug log in order to
find out exactly where it falls short.
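In 2.2 that is just the global log level, e.g.:

# httpd.conf: have mod_cache/mod_disk_cache log why a response was or was
# not served from (or saved to) the cache.
LogLevel debug
ErrorLog logs/error_log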
Regarding it hitting the backend many times, that's probably due to the
small window between "I have no cached copy, I need to cache it so I
let it travel along the filter chain" and "I have stuff to write,
let's create a cache file". ab hits the page at exactly the same time,
so it will trigger it. My patches try hard to detect when this is
happening so that only one instance does the actual caching, but since
I haven't looked at the particular issues with dynamic content, the
code leans towards correctness (old behaviour) rather than performance.
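The detection is conceptually just "first one to create the cache file
wins"; roughly like this (a sketch using plain APR calls, with a made-up
file name, not the actual patch code):

#include "apr_file_io.h"

/* Sketch: whoever manages to create the header file exclusively becomes
 * the caching instance; everyone else keeps the old behaviour (or, for
 * known-size content, reads the cache instead). Not the patch itself. */
static int become_caching_instance(const char *hdrfile, apr_pool_t *p,
                                   apr_file_t **fd)
{
    apr_status_t rv = apr_file_open(fd, hdrfile,
                                    APR_WRITE | APR_CREATE | APR_EXCL,
                                    APR_OS_DEFAULT, p);
    return (rv == APR_SUCCESS);  /* 1: we cache, 0: someone else does */
}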
It replaces the brigade with the cached file when it detects that the
response is already being cached. For content of unknown size (usually
dynamic content) it can't do this, so it's bound to hit your backend.
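For known-length content the "replace the brigade" step amounts to dropping
the buckets we were handed and substituting a file bucket for the cached
body; a rough sketch (again illustrative, not the patch itself):

#include "apr_buckets.h"
#include "apr_file_io.h"

/* Sketch: discard the backend's buckets and serve the already-cached body.
 * Only possible when the full length is known up front, which is exactly
 * what dynamic content usually lacks. */
static void serve_from_cache(apr_bucket_brigade *bb, apr_file_t *cachefd,
                             apr_off_t len, apr_pool_t *p)
{
    apr_bucket *e;

    apr_brigade_cleanup(bb);                  /* drop the backend's buckets */
    e = apr_bucket_file_create(cachefd, 0, (apr_size_t)len, p,
                               bb->bucket_alloc);
    APR_BRIGADE_INSERT_TAIL(bb, e);
    APR_BRIGADE_INSERT_TAIL(bb, apr_bucket_eos_create(bb->bucket_alloc));
}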
I have no clue how to solve this with the current cache design, but I'm
sure there are people here more clued up on caching and dynamic content.
/Nikke
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Niklas Edmundsson, Admin @ {acc,hpc2n}.umu.se | [EMAIL PROTECTED]
---------------------------------------------------------------------------
Want to forget all your troubles? Wear tight shoes.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=