Here is why I'm asking. I wrote a SOCKS proxy module. It has two connection records, one for the client and one for the backend server.
When I receive data, I pass it to the other side with a flush at the end. It works fine, with one problem: the core output filter splits the brigade after the flush bucket, creating a new bucket brigade. This brigade is never destroyed, so each one consumes 16 bytes of memory (apr_brigade.c, line 84). That may not matter for a short-lived connection, but if the connection is long-lived, the pool keeps getting bigger and bigger. Is there any way around this?
So I thought I should use an EOS bucket instead (maybe not a good idea), but I found that the core output filter was setting aside my buckets. This section in core.c looks bogus to me:
core.c, line 3884:

    if (nbytes + flen < AP_MIN_BYTES_TO_WRITE
        && ((!fd && !more && !APR_BUCKET_IS_FLUSH(last_e))
            || (APR_BUCKET_IS_EOS(last_e)
                && c->keepalive == AP_CONN_KEEPALIVE))) {
        /* set aside the buckets */
    }
What is weird about this code is that if the last bucket in the brigade is an EOS, the first part, (!fd && !more && !APR_BUCKET_IS_FLUSH(last_e)), will still evaluate to true as long as you are not serving a file and no more brigades are pending.
But it seems that you would want the set-aside to happen only if the connection is a keep-alive connection. Right?
Juan,
Congratulations! You have homed right in on one of the trickiest new parts of Apache 2.0. I believe the intention here is to hang on to the data across HTTP connections, so we can take advantage of keepalives and pipelining and potentially pack more data into fewer TCP segments. Usually what will happen is that at the end of a request, we look for more inbound data (the next request), don't find any, and send a FLUSH bucket which causes the network write.
Any suggestions how to deal with this?
Deal with what? Are you saying that we leak memory with your SOCKS module?
Greg