From: "Justin Erenkrantz" <[EMAIL PROTECTED]>
Sent: Wednesday, October 31, 2001 2:00 PM


> Yes, they are worthless because the core input filter has no way of 
> knowing if it is working on a request, header, or body line.  

It never needs to know.

> There is simply a call to ap_get_brigade with *readbytes == 0.  That 
> code has no way of knowing what the maximum request fields size is 
> (it has no knowledge of HTTP).  

It doesn't need to know.  It reads, and returns what it's got.  If that
is 5 bytes on the socket, fine, then the consumer calls back and
accumulates more if that wasn't enough.  Your choice of blocking versus
non-blocking should only determine whether any data is returned at all,
not how much.
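Something like this rough sketch of the consumer side, assuming the
ap_get_brigade() signature we've been converging on for 2.0 (filter,
brigade, mode, block, readbytes) -- the 8K target and the brigade names
are purely for illustration:

    apr_status_t rv;
    apr_off_t have = 0;
    apr_off_t wanted = 8192;   /* whatever the consumer actually needs */
    apr_bucket_brigade *acc = apr_brigade_create(r->pool,
                                                 r->connection->bucket_alloc);
    apr_bucket_brigade *tmp = apr_brigade_create(r->pool,
                                                 r->connection->bucket_alloc);

    while (have < wanted) {
        rv = ap_get_brigade(r->input_filters, tmp, AP_MODE_READBYTES,
                            APR_BLOCK_READ, wanted - have);
        if (rv != APR_SUCCESS) {
            return rv;
        }
        /* the filter gave us whatever it had -- maybe 5 bytes -- so just
         * tack it onto the accumulator and go around again if we're short */
        APR_BRIGADE_CONCAT(acc, tmp);
        apr_brigade_length(acc, 1, &have);
        if (!APR_BRIGADE_EMPTY(acc)
            && APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(acc))) {
            break;             /* no more data is coming */
        }
    }

The filter underneath is free to hand back 5 bytes or 5000; the loop,
not the filter, decides when it has enough.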

> This point was brought up by OtherBill 
> and Aaron when we discussed the input filtering changes (i.e. maybe 
> we should have a maximum).  I think readline should be a distinct 
> mode not an interpretation of the length (and the passed-in length
> can now be the max to read).

My point from day one is that this model is flawed.  Whatever we have
read should simply be returned.  If that's too many or too few bytes,
so be it.  The consumer [the guy trying to interpret this line of data]
is the one who should handle it.  This merely reiterates the need for
better design.

The most efficient model is for the consumer to keep calling the chain
until it has sufficient bytes for what it is trying to query [be it one
line, one n-byte block, one null-delimited record, or whatever] and push
back on the filter chain the buckets it refuses [for whatever reason] to
consume.
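There is no literal push-back call today, so treat this only as a sketch
of the idea: the refusing filter splits at its own boundary and stashes
the tail in its ctx so the next call sees it first.  The filter signature
is the one from the current input redesign; line_ctx, find_eol(), and the
ctx setup are made up for the example:

    typedef struct {
        apr_bucket_brigade *pushed_back;   /* what we refused last time */
    } line_ctx;                            /* hypothetical filter ctx   */

    static apr_status_t line_in_filter(ap_filter_t *f,
                                       apr_bucket_brigade *bb,
                                       ap_input_mode_t mode,
                                       apr_read_type_e block,
                                       apr_off_t readbytes)
    {
        line_ctx *ctx = f->ctx;  /* assume ctx + brigade set up at init */
        apr_status_t rv;

        if (!APR_BRIGADE_EMPTY(ctx->pushed_back)) {
            /* hand back what we refused before going down the chain again */
            APR_BRIGADE_CONCAT(bb, ctx->pushed_back);
        }
        else {
            rv = ap_get_brigade(f->next, bb, mode, block, readbytes);
            if (rv != APR_SUCCESS) {
                return rv;
            }
        }

        /* split at *this* filter's boundary (say end-of-line) and keep
         * the tail for the next call -- that's the "push back" */
        apr_bucket *after = find_eol(bb);          /* hypothetical helper */
        if (after != NULL) {
            apr_bucket_brigade *rest = apr_brigade_split(bb, after);
            APR_BRIGADE_CONCAT(ctx->pushed_back, rest);
        }
        return APR_SUCCESS;
    }

The leftover stays with the party that refused it, not with the core;
that is the whole point.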

Pull only works when everyone in the chain agrees on boundary conditions,
and a good filtering design doesn't impose one filter's boundary condition
on another filter; those should be transparent.

> As Greg Ames pointed out, I'm not a fan of this commit, but it is 
> an appropriate stopgap until something cleaner comes along.  -- justin

Agreed, better a running server than no server a'tall.
