More bad/good news:

Bad News (?): With the added debugging turned on, we did not manage to recreate the problem. In fact, we saw no output from the debugging code (which appears to fire only when there is a problem), and I even added an abort() specifically after the incrementing of nvec if it went above 16. However, all the added checking made our run queues get out of hand, so we only ran this for a few hours (plus it was the day before Thanksgiving!).
Good News: We seem to have mitigated the problem by tuning some params, namely ProxyIOBufferSize. (Our setup proxies some *very* large files between some front-end Apache instances and some back-end ones.) So I don't think we've resolved our problem (seemingly hung threads that gobble up CPU), but we've tweaked things such that it doesn't seem to happen now. I am curious to see where the 'Worker MPM, stranded processes' thread goes, as that might be related.

Thanks for the suggestions!
Ron

On Tue, 2004-11-23 at 15:31 -0500, Cliff Woolley wrote:
> On Tue, 23 Nov 2004, Ronald Park wrote:
>
> > Now, for the bad news: we don't know if there's a specific single
> > request that creates the problem. In fact, given that it mostly runs
>
> Blah, I was afraid of that. :-/
>
> > Now for the good news: we can probably run in production with debugging
> > enabled (the overhead for extra debug code executing is probably
> > negligible compared to overall download time for these huge files :D).
> > So I'll see about getting that out there and report back with any
> > findings.
>
> Okay, cool.
>
> You know, it would be really useful if there were a module that could
> track memory leaks related to buckets. I'm envisioning something like
> this:
>
> It has an input filter and an output filter. The input filter logs the
> request headers (let's just say no content body for the sake of
> simplicity), and the output filter logs the response. But for each one,
> what I'm more interested in than the content itself is that
> dump_brigade-style knowledge of what data was in what buckets, how big and
> of what type those buckets are, and which buckets were in the same brigade
> as opposed to separate brigades. So basically the output filter would
> just insert itself right above the core_output_filter and log all the
> buckets that fly by.
> It wouldn't be able to log the data contents of all
> bucket types without modifying the bucket (e.g., file buckets), but like I
> say, I'm more interested in knowing the structure of the data than the
> data itself. The additional useful bit of information would be a log of
> what the RSS of the process was at the end of each request. I don't know
> if there's an easy way to get that information, but man would it be
> useful...
>
> --Cliff

-- 
Ronald Park <[EMAIL PROTECTED]>
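P.S. For anyone following along, the mitigation above is just a directive in the proxy vhost config. The value here is an assumption for illustration, not the one we actually settled on; ProxyIOBufferSize itself is a real mod_proxy directive (minimum 512 bytes, default 8192):

```
# Hypothetical tuning sketch -- value is an assumption, tune for your workload.
# Larger buffers mean fewer read/write round trips when proxying huge files.
ProxyIOBufferSize 65536
```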