Re: cvs commit: httpd-test/perl-framework/t/apache chunkinput.t

2004-03-23  Bill Stoddard
Geoffrey Young wrote:
[EMAIL PROTECTED] wrote:
stoddard    2004/03/23 12:50:41
  Modified:    perl-framework/t/apache chunkinput.t
  Log:
  adjust for 1.3 fooness in handling this test case

hi
since you seem to understand chunked foo, care to take a look at this?
  http://marc.theaimsgroup.com/?l=apache-test-dev&m=107108438330280&w=2
I still see 1.3 failing on t/apache/limit.t without that patch, but I wanted
to be sure that the change is warranted before committing.
thanks
--Geoff
The analysis of the problem pointed to by the URL above looks right to me. I came
to the same conclusion: 1.3 and 2.0 are fundamentally different in how they
handle/respond to chunked request bodies. 1.3 is not going to change, so we may as
well work around it. I am not a perl programmer, so +1 in concept to the limits.t patch.
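
For anyone not steeped in the chunked foo, here is a generic illustration of what a
chunked request body looks like on the wire (this is just standard HTTP/1.1 chunked
transfer-coding with a made-up URI and data, not the exact request limit.t sends):
each chunk is a hex length line followed by that many bytes, terminated by a
zero-length chunk.

    POST /some/handler HTTP/1.1
    Host: example.com
    Transfer-Encoding: chunked

    5
    hello
    7
    , world
    0
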

Bill


Re: gettimeofday calls

2003-01-24  Bill Stoddard

In fact, I tried this out yesterday (having the one global_time variable); it
gives me around a 3-4% improvement. But occasionally I do get some
non-conforming results, and I'm trying to figure out whether it's because of the
time stamp.
You probably need to mutex updates to your global variable, which will 
probably suck out most of your performance gains.
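
As a rough sketch of what that mutexing might look like with APR (my illustration,
not code from httpd or the poster's patch; the names cached_now, now_lock, and
cached_time_update are hypothetical):

    #include "apr_time.h"
    #include "apr_thread_mutex.h"

    static apr_time_t cached_now;          /* the shared "global time" value */
    static apr_thread_mutex_t *now_lock;   /* created once with
                                            * apr_thread_mutex_create(),
                                            * e.g. at child init */

    /* Refresh the cached timestamp.  Without the lock, other threads can
     * read a torn or stale value; with it, contention on every update can
     * eat much of what was saved by avoiding time()/gettimeofday(). */
    static void cached_time_update(void)
    {
        apr_thread_mutex_lock(now_lock);
        cached_now = apr_time_now();
        apr_thread_mutex_unlock(now_lock);
    }
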

Anyway, moving away from time(), here's what I've been thinking. I'm sure many
of you have gone through this list, so can you please give me your feedback on
the following:
1. Why do we need to do the apr_stat() for static files each time a request
comes in? Can it be done during the module_init() phase, with the values put
in an array of some sort?
Files change. Why not use mod_file_cache? It will (or should, if it does not
have a bug) eliminate the stat. Or we could spend time rewriting the code to
just do an open (followed by a less expensive fstat) rather than a stat/open.
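
A hedged sketch of that open-then-fstat idea with APR (mine, not the actual
httpd code; serve_static_file and its arguments are hypothetical):

    #include "apr_file_io.h"
    #include "apr_file_info.h"

    static apr_status_t serve_static_file(const char *path, apr_pool_t *pool)
    {
        apr_file_t *fd;
        apr_finfo_t finfo;
        apr_status_t rv;

        /* Open first: one path lookup instead of the stat + open pair. */
        rv = apr_file_open(&fd, path, APR_READ, APR_OS_DEFAULT, pool);
        if (rv != APR_SUCCESS) {
            return rv;
        }

        /* fstat the already-open descriptor, which is cheaper than walking
         * the path a second time. */
        rv = apr_file_info_get(&finfo, APR_FINFO_SIZE | APR_FINFO_MTIME, fd);
        if (rv != APR_SUCCESS) {
            apr_file_close(fd);
            return rv;
        }

        /* ... send finfo.size bytes from fd, set mtime-based headers ... */

        apr_file_close(fd);
        return APR_SUCCESS;
    }

(mod_file_cache takes the other route: its CacheFile directive pre-opens the
listed files at server startup, so neither the stat nor the open happens per
request.)
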

Bill


Re: cvs commit: httpd-test/specweb99/specweb99-2.0 mod_specweb99.c

2002-06-05  Bill Stoddard


 Brian Pane wrote:

  Do you have any profile data that shows where the bottlenecks are?

 No, sorry.  At the moment I'm focusing on mod_specweb99.

   From recent tests with other workloads, I anticipate that the most
  expensive operations are likely to be: reading the HTTP headers,
  directory_walk and file_walk, and possibly mod_mime.

 I cannot confirm or deny at the moment.  I do see stat()s for URIs that exist
 only in Location containers, which can't help.

 The IBM Linux kernel hackers I'm working with have seen long hold/spin times
 on the dcache_lock in their kernel profiles, and asked about what files
 Apache was opening.  I asked if some of this might be due to stat()s as well
 as open()s but didn't get a response yet.  Anyway, I suggested they try
 mod_cache, which should cut down on both open()s and stat()s.

mod_cache will bypass directory_walk and file_walk as well. Makes a big
difference on Windows...
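
For context on how it bypasses them: in 2.0 mod_cache hooks the quick_handler
phase, which the core runs before translate_name and the directory_walk/file_walk
in map_to_storage. A bare-bones sketch of such a hook (my illustration, not
mod_cache itself; the module and function names are made up and the cache lookup
is only indicated by a comment):

    #include "httpd.h"
    #include "http_config.h"

    static int example_quick_handler(request_rec *r, int lookup)
    {
        (void)lookup;
        /* Look r->uri up in the cache here.  On a hit, send the cached
         * response and return OK; directory_walk/file_walk (and the
         * per-directory config merge they drive) never run.  On a miss: */
        return DECLINED;   /* fall through to the normal request cycle */
    }

    static void example_register_hooks(apr_pool_t *p)
    {
        ap_hook_quick_handler(example_quick_handler, NULL, NULL, APR_HOOK_FIRST);
    }

    module AP_MODULE_DECLARE_DATA example_cache_module = {
        STANDARD20_MODULE_STUFF,
        NULL, NULL, NULL, NULL, NULL,
        example_register_hooks
    };
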

Bill