On Wed, 2002-12-18 at 21:55, Justin Erenkrantz wrote:
> --On Wednesday, December 18, 2002 3:23 PM -0800 Brian Pane 
> <[EMAIL PROTECTED]> wrote:
> 
> > My proposed changes are:
> >   - Create each top-level request pool as a free-standing pool,
> >     rather than a subpool of the connection pool.
> >   - After generating the response, don't destroy the request pool.
> >     Instead, create a metadata bucket that points to the request_rec
> >     and send this bucket through the output filter chain.
> >   - In the core_output_filter, once everything before this metadata
> >     bucket is sent, run the logger and then destroy the request
> > pool.
> 
> Obvious question, but what happens if we don't get to that metadata 
> bucket?  If we get an error, do we have to go to the end of the brigade? 
> What if we get to an abort situation before we see an EOS?  When 
> would we clean it up?

I think ap_process_request() is the best place to create
the metadata bucket, since we pass through there for both
successful and failed requests.

If the connection is aborted after the metadata bucket is
pushed into the output filter chain, it would be the core
output filter's job to consume and destroy all remaining
buckets until it had processed the request-cleanup metadata
bucket.
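To make the drain-on-abort behavior concrete, here's a rough, self-contained
sketch of that loop in plain C.  The types and names (bucket, B_REQ_CLEANUP,
drain_after_abort) are stand-ins for illustration only; the real filter would
walk an APR brigade and use apr_bucket_destroy()/apr_pool_destroy():

```c
#include <stdlib.h>

/* Stand-ins for apr_bucket / apr_pool_t; illustrative only. */
enum bucket_type { B_DATA, B_EOS, B_REQ_CLEANUP };

struct bucket {
    enum bucket_type type;
    struct bucket *next;
    void *request_pool;          /* set only on B_REQ_CLEANUP */
};

static int pools_destroyed = 0;

static void destroy_pool(void *pool)
{
    (void)pool;
    pools_destroyed++;           /* models apr_pool_destroy() */
}

/* On an aborted connection, consume and destroy every remaining bucket
 * until the request-cleanup metadata bucket has been processed, then
 * destroy the request pool (the logger run is omitted here). */
static struct bucket *drain_after_abort(struct bucket *head)
{
    while (head) {
        struct bucket *next = head->next;
        int done = (head->type == B_REQ_CLEANUP);
        if (done) {
            /* ap_run_log_transaction(r) would go here */
            destroy_pool(head->request_pool);
        }
        free(head);              /* models apr_bucket_destroy() */
        head = next;
        if (done)
            break;               /* later requests' buckets stay queued */
    }
    return head;
}
```

The point is just that the filter owns cleanup once the bucket is in
the chain, even on abort.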

> Would such a bucket come before or after the EOS bucket?  Hmm, if we 
> had a bucket type extension system (something we kicked around at 
> AC), we could create a 'super' EOS bucket which had the pool 
> associated with it.

If it's a separate bucket, it should come after the EOS
bucket.  I think that will make the core_output_filter
logic simpler: EOS means "flush the output unless this
is a keepalive connection," and the metadata bucket means
"we're now completely finished with this request, so it's
safe to delete it."
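In filter terms, the split in responsibilities could be sketched like this
(hypothetical type and action names, not the actual httpd API):

```c
/* Hypothetical bucket types and filter actions, for illustration. */
enum bucket_type { B_DATA, B_EOS, B_REQ_CLEANUP };
enum action { PASS_DATA, FLUSH, HOLD, FINISH_REQUEST };

/* EOS: flush the output unless this is a keepalive connection.
 * Request-cleanup metadata bucket: the request is completely finished,
 * so run the logger and destroy the request pool. */
static enum action core_output_decide(enum bucket_type t, int keepalive)
{
    switch (t) {
    case B_EOS:
        return keepalive ? HOLD : FLUSH;
    case B_REQ_CLEANUP:
        return FINISH_REQUEST;   /* logger + apr_pool_destroy(r->pool) */
    default:
        return PASS_DATA;
    }
}
```

Keeping the two meanings in separate buckets is what keeps this dispatch
simple.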

I'll put together a prototype patch for testing...

Brian

