On Tue, 23 Nov 2004, Ronald Park wrote:

> Now, for the bad news: we don't know if there's a specific single
> request that creates the problem. In fact, given that it mostly runs

Blah, I was afraid of that.  :-/

> Now for the good news: we can probably run in production with debugging
> enabled (the overhead for extra debug code executing is probably
> negligible compared to overall download time for these huge files :D).
> So I'll see about getting that out there and report back with any
> findings.

Okay, cool.

You know, it would be really useful if there were a module that could
track memory leaks related to buckets.  I'm envisioning something like
this:

It has an input filter and an output filter.  The input filter logs the
request headers (let's just say no content body for the sake of
simplicity), and the output filter logs the response.  But for each one,
what I'm more interested in than the content itself is that
dump_brigade-style knowledge of what data was in what buckets, how big and
of what type those buckets are, and which buckets were in the same brigade
as opposed to separate brigades.  So basically the output filter would
just insert itself right above the core_output_filter and log all the
buckets that fly by.  It wouldn't be able to log the data contents of all
bucket types without modifying the bucket (e.g., file buckets), but like I
say, I'm more interested in knowing the structure of the data than the
data itself.  The additional useful bit of information would be a log of
what the RSS of the process was at the end of each request.  I don't know
if there's an easy way to get that information, but man would it be
useful...
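A rough sketch of what that output filter might look like, assuming the httpd 2.x filter API (the module and filter names here are made up; registering at AP_FTYPE_NETWORK - 1 is the same trick mod_logio uses to sit directly above core_output_filter):

```c
/* Hypothetical bucket-structure-logging output filter.  It logs the
 * type and length of every bucket in every brigade that passes by,
 * without touching the data, then hands the brigade on untouched. */
#include "httpd.h"
#include "http_config.h"
#include "http_log.h"
#include "util_filter.h"
#include "apr_buckets.h"

static apr_status_t log_buckets_out(ap_filter_t *f, apr_bucket_brigade *bb)
{
    apr_bucket *b;
    int i = 0;

    for (b = APR_BRIGADE_FIRST(bb); b != APR_BRIGADE_SENTINEL(bb);
         b = APR_BUCKET_NEXT(b), ++i) {
        /* Structure only, not contents: bucket type name and length.
         * Note length is (apr_size_t)-1 for indeterminate buckets. */
        ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, f->r,
                      "brigade %pp bucket %d: type=%s len=%" APR_SIZE_T_FMT,
                      (void *)bb, i, b->type->name, b->length);
    }
    return ap_pass_brigade(f->next, bb);
}

static void register_hooks(apr_pool_t *p)
{
    /* AP_FTYPE_NETWORK - 1 puts us just above core_output_filter. */
    ap_register_output_filter("LOG_BUCKETS", log_buckets_out, NULL,
                              AP_FTYPE_NETWORK - 1);
}

module AP_MODULE_DECLARE_DATA log_buckets_module = {
    STANDARD20_MODULE_STUFF,
    NULL, NULL, NULL, NULL, NULL,
    register_hooks
};
```

The input side would be the same idea with an ap_register_input_filter() registration and the corresponding input-filter signature.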
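For the RSS bit: on Linux at least, it's not too painful -- the second field of /proc/self/statm is the resident set in pages. A minimal standalone sketch (the rss_bytes name is just illustrative; in the module above you'd call something like this from a log_transaction hook):

```c
/* Minimal sketch: current RSS via /proc/self/statm (Linux-specific).
 * Fields are sizes in pages: total program size, then resident set. */
#include <stdio.h>
#include <unistd.h>

long rss_bytes(void)
{
    FILE *fp = fopen("/proc/self/statm", "r");
    long size, resident;

    if (!fp)
        return -1;
    if (fscanf(fp, "%ld %ld", &size, &resident) != 2)
        resident = -1;
    fclose(fp);
    if (resident < 0)
        return -1;
    /* Convert pages to bytes. */
    return resident * sysconf(_SC_PAGESIZE);
}

int main(void)
{
    printf("RSS: %ld bytes\n", rss_bytes());
    return 0;
}
```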

--Cliff
