Plüm wrote:
William A. Rowe, Jr. wrote:

Ruediger Pluem wrote:


[..cut..]


Quick consideration;

Rather than look for the HTTP_BAD_GATEWAY error bucket, we can actually
generalize the problem.  ANY metadata bucket that isn't recognized and
handled by an intermediate filter probably indicates a problem; and
therefore the result is a non-cacheable, broken response.

Actually two cases. In the error bucket case, it's non-cacheable, and broken.
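A rough sketch of what that generalized scan could look like (untested;
the bucket-type macros are the stock APR/httpd ones, the helper name and
return convention are made up):

    #include "httpd.h"
    #include "http_protocol.h"
    #include "apr_buckets.h"

    /* Return 0 if the brigade looks clean, 1 for unknown metadata
     * (non-cacheable), 2 for an error bucket (non-cacheable AND broken).
     */
    static int scan_for_unhandled_metadata(apr_bucket_brigade *bb)
    {
        apr_bucket *b;
        int verdict = 0;

        for (b = APR_BRIGADE_FIRST(bb);
             b != APR_BRIGADE_SENTINEL(bb);
             b = APR_BUCKET_NEXT(b)) {
            if (!APR_BUCKET_IS_METADATA(b)) {
                continue;                 /* plain data, fine */
            }
            if (APR_BUCKET_IS_EOS(b) || APR_BUCKET_IS_FLUSH(b)) {
                continue;                 /* metadata we understand */
            }
            if (AP_BUCKET_IS_ERROR(b)) {
                return 2;                 /* broken: worst case, stop */
            }
            verdict = 1;                  /* unknown metadata: 'complex' */
        }
        return verdict;
    }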

So what do you think should be done in this case with respect to the
'brokenness'?

1. set r->connection->keepalive to AP_CONN_CLOSE
2. Do not send the last-chunk marker in the case of a chunked encoding response
3. Do 1. and 2.

The straightforward thing is to close the client socket.  Obviously it's
not that trivial; unix can reuse the same fd almost immediately.  Perhaps
close the write side?  In any case, the connection should be marked aborted.
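In code, marking it might be as small as this (a sketch; the helper name
is invented, and an actual half-close of the write side would
additionally need a handle on the client socket, e.g. for
apr_socket_shutdown(s, APR_SHUTDOWN_WRITE)):

    static void mark_client_conn_broken(request_rec *r)
    {
        r->connection->keepalive = AP_CONN_CLOSE;   /* option 1 */
        r->connection->aborted = 1;                 /* flag it as busted */
    }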

Number 2 follows from 1, of course we don't 'finish' the response.

Remember, the back end is *busted* midstream.  We have to convey that, but
we don't have to maintain the integrity of every byte sent by the (errant)
server, IMHO.

Next question is: Do we need to stop sending further data to the client
immediately?

Yup, again IMHO.
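With c->aborted set, downstream filters can bail out up front; a typical
guard at the top of an output filter would look something like (sketch):

    static apr_status_t some_output_filter(ap_filter_t *f,
                                           apr_bucket_brigade *bb)
    {
        if (f->c->aborted) {
            apr_brigade_cleanup(bb);      /* drop whatever is queued */
            return APR_ECONNABORTED;      /* and stop sending */
        }
        return ap_pass_brigade(f->next, bb);
    }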

In the unrecognized bucket type case, it's non-cacheable (a 'complex'
response), but it is likely servable to the front end client.  In both
cases, if mod_cache doesn't grok what it sees, then something
'interesting' is going on and we would not want to deposit it into the
cache.

I agree with the goals, but making it non-cacheable is not easy to add
to the current patch, because the HTTP_OUTERROR filter is a protocol
filter that is run after the CACHE_SAVE filter.
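(For context: output filters run in ascending ap_filter_type order, and
CACHE_SAVE is registered at AP_FTYPE_CONTENT_SET, so a registration
along these lines places HTTP_OUTERROR after it:)

    /* PROTOCOL runs after CONTENT_SET, i.e. after CACHE_SAVE has
     * already seen (and possibly stored) the brigade. */
    ap_register_output_filter("HTTP_OUTERROR", http_outerror_filter,
                              NULL, AP_FTYPE_PROTOCOL);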

My comments were meant to illustrate that a new filter is a waste of
resources, stack setup and teardown.  We've overengineered what (should
be) some simple code.  Delegating error handling that should occur in
the (existing) filter stack off to its own filter is (again IMHO) sort
of lame :)

But apart from the case where no Content-Length is present, the
CACHE_SAVE filter itself does not iterate over the brigade.
So we would need to add an additional loop over the brigade inside
the CACHE_SAVE filter to scan for these meta buckets, along the lines
of the sketch below.
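Roughly like this (untested; scan_for_unhandled_metadata() is the helper
sketched earlier, and abandoning the half-written cache entry is
hand-waved):

    /* Early in cache_save_filter(), before saving the brigade: */
    int verdict = scan_for_unhandled_metadata(bb);

    if (verdict != 0) {
        /* ... abandon the in-progress cache entry here ... */
        ap_remove_output_filter(f);           /* stop caching for good */
        return ap_pass_brigade(f->next, bb);  /* but keep serving */
    }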

Yes; there is some complexity here.  Didn't suggest it was trivial :-)

Furthermore I think we need to keep in mind that, if we think that this
response is not worth caching, we should probably make any upstream
proxies think the same. In the case of a broken backend this is
achieved (depending on the transfer encoding) by

1. sending less content than the Content-Length header announces, or
2. not sending the last-chunk marker, as illustrated below.
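On the wire, 2. looks to an upstream proxy like this (an HTTP/1.1 cache
has to treat a chunked body without the terminating last-chunk as
incomplete, and may at most store it as a partial response, cf. RFC
2616):

    HTTP/1.1 200 OK
    Transfer-Encoding: chunked

    400
    ...partial data...
                 <-- connection drops here: no "0" last-chunk,
                     so the response is visibly truncated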

right.

But in the case of an unrecognized bucket type we must let the upstream
proxies know via headers that it is not cacheable.
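E.g. (only possible while the headers are still open):

    /* Mark the response as uncacheable for downstream caches. */
    apr_table_mergen(r->headers_out, "Cache-Control", "no-store");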

as you might have guessed, my comments were aimed at those 'interesting'
applications that were otherwise cacheable - e.g. those folks who keep
arguing that things like mod_authnz_hostname should interact with the cache
(which yes I disagree with, but this would provide basic mechanisms to handle
this case.)

> But this could be impossible if the headers have already been sent.

exactly so.
