> > > The problem with this type of optimization is that the brigade code
> > > cannot know the optimal size of a bucket -- only the next
> > > filter in the chain can know, since what is optimal will depend
> > > on what kind of processing is done next.
> >
> > However, the programmer knows what kind of data they are dealing with.
> > If it is bufferable, then it should be written to the brigade using a
> > buffering API. Otherwise, it should be written using a direct bucket
> > API, IMO.
>
> Hmmm... but I don't want to write to the brigade. I want to write to
> the filter stack just like I would write to any file handle. All of
> the complexity should be handled within the filter implementations
> and not exposed to the users of the filter, IMO.
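The "write to the filter stack like a file handle" idea can be sketched in plain C. This is only an illustrative model, not the real API: the names `filter_buf`, `filter_write`, and `filter_flush` are hypothetical stand-ins for whatever buffering layer ends up hidden inside the filter code, and `filter_pass` stands in for handing data to the next filter.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch of a buffered-write layer: the module writer
 * calls a simple write routine, and the coalescing/brigade details
 * stay hidden inside the filter implementation. */

#define FILTER_BUF_SIZE 8192

typedef struct {
    char buf[FILTER_BUF_SIZE];
    int  len;        /* bytes currently buffered */
    long passed;     /* total bytes handed downstream (stand-in) */
} filter_buf;

/* Stand-in for passing a completed block to the next filter. */
static void filter_pass(filter_buf *f, const char *data, int n)
{
    (void)data;
    f->passed += n;
}

static void filter_flush(filter_buf *f)
{
    if (f->len > 0) {
        filter_pass(f, f->buf, f->len);
        f->len = 0;
    }
}

/* Small writes are coalesced into the buffer; a write that would
 * overflow the buffer flushes it first; a write larger than the
 * buffer is passed straight through without copying. */
static void filter_write(filter_buf *f, const char *data, int n)
{
    if (n >= FILTER_BUF_SIZE) {
        filter_flush(f);
        filter_pass(f, data, n);
        return;
    }
    if (f->len + n > FILTER_BUF_SIZE)
        filter_flush(f);
    memcpy(f->buf + f->len, data, n);
    f->len += n;
}
```

The point of the sketch is only that the caller sees a file-handle-style write, while the decision of when data actually moves down the chain belongs to the buffering layer.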
Great, you and I agree here. I have a patch that I am working on that
uses the apr_brigade_* calls, but those are kept hidden from the
programmer. Obviously those are exported from apr-util, but the module
writer should never actually use them unless they absolutely want to,
which most people won't.

> > > I think that if we are suffering from wasted cycles and pallocs
> > > due to premature brigade formations, then we should try it the other
> > > way -- always allocate the bucket structure off the stack, use a
> > > simple next pointer to connect brigades, and force the filter that
> > > needs to setaside the data to do so in a way that coalesces the
> > > bucket data. That was the main difference between the design we
> > > are using now and the one Greg proposed prior to the filters meeting.
> >
> > However, we don't always want to coalesce bucket data. I am picturing
> > this case:
> >
> >     file bucket 9k -> 10 byte pool bucket -> 10 byte pool bucket
> >
> > We want to coalesce the two 10 byte buckets, but we don't want to
> > coalesce the file bucket. If the buckets are allocated off the stack,
> > how do you keep the file bucket around?
>
> I don't. Every buffering mechanism needs a threshold against which
> it writes to the next output (if it isn't blocked by waiting for
> something like an end-of-record) or writes to a large processing
> buffer if it is blocked. That prevents latency from getting too high,
> and provides the intermediate files that can be identified and
> cached just like a proxy does caching.

Okay, I think I see what you are saying, and I think I see how this
works. I need to think about it more. Thanks for the clarification.

I may implement some small pieces of this in the code I am currently
working on, but not the buckets allocated on the stack. That will have
to wait.

Ryan

_______________________________________________________________________________
Ryan Bloom                          [EMAIL PROTECTED]
406 29th St.
San Francisco, CA 94131
-------------------------------------------------------------------------------
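The selective coalescing discussed above (a 9k file bucket followed by two 10-byte pool buckets, where only the small buckets should be merged) can be modeled with a toy sketch. These are hypothetical stand-in types, not the real APR bucket structures: the in-memory case copies adjacent small buckets into one heap block, while the file bucket is held by reference and passes through untouched.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy bucket model (hypothetical; not the APR types). */
typedef enum { B_FILE, B_MEM } btype;

typedef struct bucket {
    btype type;
    const char *data;      /* B_MEM: bytes; B_FILE: unused here */
    long len;
    struct bucket *next;
} bucket;

/* Setaside pass: coalesce runs of adjacent in-memory buckets into one
 * heap copy; leave file buckets alone, since their data lives outside
 * the request and there is nothing that must be copied.  Returns the
 * (unchanged) head.  Absorbed buckets are simply unlinked; a real
 * implementation would also manage the old storage. */
static bucket *setaside_coalesce(bucket *head)
{
    bucket *b = head;
    while (b) {
        if (b->type == B_MEM && b->next && b->next->type == B_MEM) {
            bucket *n = b->next;
            char *merged = malloc(b->len + n->len);
            memcpy(merged, b->data, b->len);
            memcpy(merged + b->len, n->data, n->len);
            b->data = merged;
            b->len += n->len;
            b->next = n->next;     /* drop the absorbed bucket */
        } else {
            b = b->next;
        }
    }
    return head;
}
```

Run against the example chain, this merges the two 10-byte buckets into a single 20-byte bucket while the 9k file bucket keeps its reference, which is the behavior the message argues for.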
