Re: http_filter.c r1524770 open issue?
On Thu, Dec 12, 2013 at 12:34 AM, William A. Rowe Jr. wr...@rowe-clan.net wrote:

On Sat, 23 Nov 2013 19:10:21 +0100 Yann Ylavic ylavic@gmail.com wrote:

On Sat, Nov 23, 2013 at 6:52 PM, Yann Ylavic ylavic@gmail.com wrote:

On Tue, Nov 19, 2013 at 3:27 PM, Yann Ylavic ylavic@gmail.com wrote:

On Mon, Nov 18, 2013 at 6:28 PM, William A. Rowe Jr. wr...@rowe-clan.net wrote:

By closing our write-end of the connection, we can signal to the server that we can't efficiently forward their response to the client (owing to the fact that the server believes this to be a keep-alive connection, and that we can't know where this specific response ends until the server has given up on receiving another keep-alive request).

This would be a good way to ensure the connection is closed by the origin, but half-closes are sometimes (even often) mishandled: the origin might consider the connection fully closed (unwritable) once its read end is closed, and that could be an issue too. Otherwise, the following patch could do the trick:

Index: modules/http/http_filters.c
===================================================================
--- modules/http/http_filters.c    (revision 1541907)
+++ modules/http/http_filters.c    (working copy)
@@ -291,13 +291,19 @@ apr_status_t ap_http_filter(ap_filter_t *f, apr_bu
          * denoted by C-L or T-E: chunked.
          *
          * Note that since the proxy uses this filter to handle the
-         * proxied *response*, proxy responses MUST be exempt.
+         * proxied *response*, proxy responses MUST be exempt, but
+         * ensure the connection is closed after the response.
          */
-        if (ctx->state == BODY_NONE && f->r->proxyreq != PROXYREQ_RESPONSE) {
-            e = apr_bucket_eos_create(f->c->bucket_alloc);
-            APR_BRIGADE_INSERT_TAIL(b, e);
-            ctx->eos_sent = 1;
-            return APR_SUCCESS;
+        if (ctx->state == BODY_NONE) {
+            if (f->r->proxyreq != PROXYREQ_RESPONSE) {
+                e = apr_bucket_eos_create(f->c->bucket_alloc);
+                APR_BRIGADE_INSERT_TAIL(b, e);
+                ctx->eos_sent = 1;
+                return APR_SUCCESS;
+            }
+            f->r->connection->keepalive = AP_CONN_CLOSE;
+            apr_socket_shutdown(ap_get_conn_socket(f->r->connection),
+                                APR_SHUTDOWN_WRITE);
         }

Actually the shutdown would break SSL (which may need to write during a read, roughly speaking). Maybe just closing the connection at the end of the response is enough, which is assured by setting connection->keepalive to AP_CONN_CLOSE here and by ap_proxy_http_process_response() setting backend->close below in that case.

Forget about that (chicken-and-egg problem): the end of the response *is* the close on the backend side, so there is no way to forcibly do it, just trust...

No, the linger_close logic does all of the read-till-end logic. Yes, closing the write end may cause the connection to forcibly terminate, but we were already in the process of closing that connection. The backend SSL should give up on its failure to read-before-write due to the closed socket.

We are indeed in the process of closing the connection, but aren't we supposed to forward the response first? Sorry to ramble on, but for this case the RFC does not state that the backend must, nor should, use Connection: close to signify to the peer that it will close (it's implicit enough), whereas it does state that proxies must do their best to forward valid responses (I've no particular section to quote for that statement, though :)).

All we need to do is to return a 400-class error response to the client,

I guess you meant a 500-class error (the client isn't responsible for anything here),

mark the backend connection closed, and close the write side of our backend connection, letting the linger_close pump do the rest of our work.

That is, provided that the linger_close logic in the proxy worker pool behaves as expected. Why bother reading (more of) a connection when finally we won't use anything from it on the other side? Just close it as soon as we don't see any C-L, chunked T-E, or Connection: close in the header, and respond with a 5xx immediately (note this is not what I agree with; it just looks the same as your proposal).
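The half-close being debated above can be seen in isolation. Below is a minimal sketch (plain POSIX sockets rather than the APR wrappers; `apr_socket_shutdown` with `APR_SHUTDOWN_WRITE` behaves analogously): after `shutdown(SHUT_WR)` on one end, the peer reads EOF, yet can still write its response back, which is exactly what a well-behaved origin should do — and what a "mishandled half-close" origin would not.

```c
#include <assert.h>
#include <sys/socket.h>
#include <unistd.h>

/* Demonstrate TCP half-close on a socket pair. Returns 0 on the
 * expected behavior, -1 otherwise. */
int demo_half_close(void)
{
    int sv[2];
    char buf[16];

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return -1;

    /* Proxy side: signal "no more requests" without closing our read end. */
    shutdown(sv[0], SHUT_WR);

    /* Origin side: read sees EOF -- the peer's write channel is gone... */
    if (read(sv[1], buf, sizeof(buf)) != 0)
        return -1;

    /* ...but the origin can still send its response body back. */
    if (write(sv[1], "body", 4) != 4)
        return -1;
    if (read(sv[0], buf, sizeof(buf)) != 4)
        return -1;

    close(sv[0]);
    close(sv[1]);
    return 0;
}
```

An origin that treats the EOF as a full close (the failure mode Yann worries about) would drop the connection instead of completing the `write`.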
Re: http_filter.c r1524770 open issue?
On Sat, 23 Nov 2013 19:10:21 +0100 Yann Ylavic ylavic@gmail.com wrote:

On Sat, Nov 23, 2013 at 6:52 PM, Yann Ylavic ylavic@gmail.com wrote:

On Tue, Nov 19, 2013 at 3:27 PM, Yann Ylavic ylavic@gmail.com wrote:

On Mon, Nov 18, 2013 at 6:28 PM, William A. Rowe Jr. wr...@rowe-clan.net wrote:

By closing our write-end of the connection, we can signal to the server that we can't efficiently forward their response to the client (owing to the fact that the server believes this to be a keep-alive connection, and that we can't know where this specific response ends until the server has given up on receiving another keep-alive request).

This would be a good way to ensure the connection is closed by the origin, but half-closes are sometimes (even often) mishandled: the origin might consider the connection fully closed (unwritable) once its read end is closed, and that could be an issue too. Otherwise, the following patch could do the trick:

Index: modules/http/http_filters.c
===================================================================
--- modules/http/http_filters.c    (revision 1541907)
+++ modules/http/http_filters.c    (working copy)
@@ -291,13 +291,19 @@ apr_status_t ap_http_filter(ap_filter_t *f, apr_bu
          * denoted by C-L or T-E: chunked.
          *
          * Note that since the proxy uses this filter to handle the
-         * proxied *response*, proxy responses MUST be exempt.
+         * proxied *response*, proxy responses MUST be exempt, but
+         * ensure the connection is closed after the response.
          */
-        if (ctx->state == BODY_NONE && f->r->proxyreq != PROXYREQ_RESPONSE) {
-            e = apr_bucket_eos_create(f->c->bucket_alloc);
-            APR_BRIGADE_INSERT_TAIL(b, e);
-            ctx->eos_sent = 1;
-            return APR_SUCCESS;
+        if (ctx->state == BODY_NONE) {
+            if (f->r->proxyreq != PROXYREQ_RESPONSE) {
+                e = apr_bucket_eos_create(f->c->bucket_alloc);
+                APR_BRIGADE_INSERT_TAIL(b, e);
+                ctx->eos_sent = 1;
+                return APR_SUCCESS;
+            }
+            f->r->connection->keepalive = AP_CONN_CLOSE;
+            apr_socket_shutdown(ap_get_conn_socket(f->r->connection),
+                                APR_SHUTDOWN_WRITE);
         }

Actually the shutdown would break SSL (which may need to write during a read, roughly speaking). Maybe just closing the connection at the end of the response is enough, which is assured by setting connection->keepalive to AP_CONN_CLOSE here and by ap_proxy_http_process_response() setting backend->close below in that case.

Forget about that (chicken-and-egg problem): the end of the response *is* the close on the backend side, so there is no way to forcibly do it, just trust...

No, the linger_close logic does all of the read-till-end logic. Yes, closing the write end may cause the connection to forcibly terminate, but we were already in the process of closing that connection. The backend SSL should give up on its failure to read-before-write due to the closed socket.

All we need to do is to return a 400-class error response to the client, mark the backend connection closed, and close the write side of our backend connection, letting the linger_close pump do the rest of our work. That is, provided that the linger_close logic in the proxy worker pool behaves as expected.
Re: http_filter.c r1524770 open issue?
On Tue, Nov 19, 2013 at 3:27 PM, Yann Ylavic ylavic@gmail.com wrote:

On Mon, Nov 18, 2013 at 6:28 PM, William A. Rowe Jr. wr...@rowe-clan.net wrote:

By closing our write-end of the connection, we can signal to the server that we can't efficiently forward their response to the client (owing to the fact that the server believes this to be a keep-alive connection, and that we can't know where this specific response ends until the server has given up on receiving another keep-alive request).

This would be a good way to ensure the connection is closed by the origin, but half-closes are sometimes (even often) mishandled: the origin might consider the connection fully closed (unwritable) once its read end is closed, and that could be an issue too. Otherwise, the following patch could do the trick:

Index: modules/http/http_filters.c
===================================================================
--- modules/http/http_filters.c    (revision 1541907)
+++ modules/http/http_filters.c    (working copy)
@@ -291,13 +291,19 @@ apr_status_t ap_http_filter(ap_filter_t *f, apr_bu
          * denoted by C-L or T-E: chunked.
          *
          * Note that since the proxy uses this filter to handle the
-         * proxied *response*, proxy responses MUST be exempt.
+         * proxied *response*, proxy responses MUST be exempt, but
+         * ensure the connection is closed after the response.
          */
-        if (ctx->state == BODY_NONE && f->r->proxyreq != PROXYREQ_RESPONSE) {
-            e = apr_bucket_eos_create(f->c->bucket_alloc);
-            APR_BRIGADE_INSERT_TAIL(b, e);
-            ctx->eos_sent = 1;
-            return APR_SUCCESS;
+        if (ctx->state == BODY_NONE) {
+            if (f->r->proxyreq != PROXYREQ_RESPONSE) {
+                e = apr_bucket_eos_create(f->c->bucket_alloc);
+                APR_BRIGADE_INSERT_TAIL(b, e);
+                ctx->eos_sent = 1;
+                return APR_SUCCESS;
+            }
+            f->r->connection->keepalive = AP_CONN_CLOSE;
+            apr_socket_shutdown(ap_get_conn_socket(f->r->connection),
+                                APR_SHUTDOWN_WRITE);
         }

Actually the shutdown would break SSL (which may need to write during a read, roughly speaking).

Maybe just closing the connection at the end of the response is enough, which is assured by setting connection->keepalive to AP_CONN_CLOSE here

    /* Since we're about to read data, send 100-Continue if needed. */

and by ap_proxy_http_process_response() setting backend->close below in that case:

Index: modules/proxy/mod_proxy_http.c
===================================================================
--- modules/proxy/mod_proxy_http.c    (revision 1541907)
+++ modules/proxy/mod_proxy_http.c    (working copy)
@@ -1681,6 +1681,7 @@ int ap_proxy_http_process_response(apr_pool_t * p,
             continue;
         }
         else if (rv == APR_EOF) {
+            backend->close = 1;
             break;
         }
         else if (rv != APR_SUCCESS) {
@@ -1825,6 +1826,11 @@ int ap_proxy_http_process_response(apr_pool_t * p,
         return DONE;
     }
 
+    /* If the underlying backend connection is to be closed, close it. */
+    if (origin->keepalive == AP_CONN_CLOSE) {
+        backend->close = 1;
+    }
+
     return OK;
 }

[END]

Regards, Yann.
Re: http_filter.c r1524770 open issue?
On Sat, Nov 23, 2013 at 6:52 PM, Yann Ylavic ylavic@gmail.com wrote:

On Tue, Nov 19, 2013 at 3:27 PM, Yann Ylavic ylavic@gmail.com wrote:

On Mon, Nov 18, 2013 at 6:28 PM, William A. Rowe Jr. wr...@rowe-clan.net wrote:

By closing our write-end of the connection, we can signal to the server that we can't efficiently forward their response to the client (owing to the fact that the server believes this to be a keep-alive connection, and that we can't know where this specific response ends until the server has given up on receiving another keep-alive request).

This would be a good way to ensure the connection is closed by the origin, but half-closes are sometimes (even often) mishandled: the origin might consider the connection fully closed (unwritable) once its read end is closed, and that could be an issue too. Otherwise, the following patch could do the trick:

Index: modules/http/http_filters.c
===================================================================
--- modules/http/http_filters.c    (revision 1541907)
+++ modules/http/http_filters.c    (working copy)
@@ -291,13 +291,19 @@ apr_status_t ap_http_filter(ap_filter_t *f, apr_bu
          * denoted by C-L or T-E: chunked.
          *
          * Note that since the proxy uses this filter to handle the
-         * proxied *response*, proxy responses MUST be exempt.
+         * proxied *response*, proxy responses MUST be exempt, but
+         * ensure the connection is closed after the response.
          */
-        if (ctx->state == BODY_NONE && f->r->proxyreq != PROXYREQ_RESPONSE) {
-            e = apr_bucket_eos_create(f->c->bucket_alloc);
-            APR_BRIGADE_INSERT_TAIL(b, e);
-            ctx->eos_sent = 1;
-            return APR_SUCCESS;
+        if (ctx->state == BODY_NONE) {
+            if (f->r->proxyreq != PROXYREQ_RESPONSE) {
+                e = apr_bucket_eos_create(f->c->bucket_alloc);
+                APR_BRIGADE_INSERT_TAIL(b, e);
+                ctx->eos_sent = 1;
+                return APR_SUCCESS;
+            }
+            f->r->connection->keepalive = AP_CONN_CLOSE;
+            apr_socket_shutdown(ap_get_conn_socket(f->r->connection),
+                                APR_SHUTDOWN_WRITE);
         }

Actually the shutdown would break SSL (which may need to write during a read, roughly speaking).

Maybe just closing the connection at the end of the response is enough, which is assured by setting connection->keepalive to AP_CONN_CLOSE here and by ap_proxy_http_process_response() setting backend->close below in that case.

Forget about that (chicken-and-egg problem): the end of the response *is* the close on the backend side, so there is no way to forcibly do it, just trust...
Re: http_filter.c r1524770 open issue?
On Mon, Nov 18, 2013 at 6:28 PM, William A. Rowe Jr. wr...@rowe-clan.net wrote:

On Thu, 14 Nov 2013 00:22:55 +0100 Yann Ylavic ylavic@gmail.com wrote:

On Thu, Nov 14, 2013 at 12:05 AM, William A. Rowe Jr. wr...@rowe-clan.net wrote:

On Wed, 13 Nov 2013 17:14:15 -0500 Jim Jagielski j...@jagunet.com wrote:

On Nov 13, 2013, at 2:25 AM, William A. Rowe Jr. wr...@rowe-clan.net wrote:

Here we've unset C-L and T-E, but it makes no sense to wait if the origin server has no immediate plan to close the connection.

I cannot grok the above. The RFC itself does not make the differentiation between keepalive connections or not. So what exactly is the issue? Are you saying we should handle keepalive connections in this path differently? How is that supported by the spec?

Keep-alive (implicit in HTTP/1.1 absent a Connection: close header) is orthogonal to an unknown message body. Think about it :)

Regarding connection persistence, http://tools.ietf.org/id/draft-ietf-httpbis-p1-messaging-24.html#rfc.section.6.3 states: "In order to remain persistent, all messages on a connection MUST have a self-defined message length (i.e., one not defined by closure of the connection), as described in Section 3.3." So no keepalive is possible if the message length (content-coding) is undefined.

Correct. It is NOT kept alive by the client (httpd) in this proxy case. But (to Jim's statement) the next-hop server claims it *is* kept alive and will handle the socket as kept-alive...

I don't see how the origin (next-hop) can claim the connection is kept alive by using a *non* keepalive-able HTTP response. My understanding is that a non-chunked T-E is something usable by the origin, *but* in that case it gives up keepalive-ability, and hence must close the connection at the end of the response. This could be a valid HTTP response:

HTTP/1.1 200 OK
Transfer-Encoding: x-cleverness
Content-Length: 1000

but still the C-L must be ignored by the receiver, and a read-until-close be used by it.

[Thank you for correcting me; the issue at hand is strictly with proxy response body processing in the first place.] Because the server claims it is keep-alive. It will send its thousand bytes (decorated by whatever x-cleverness might frame it with), and will then enter a keepalive loop. WE won't enter a keepalive loop, because we won't reuse the connection. But the next-hop server is keeping it alive for whatever duration it believes it should wait for the next request from the client (httpd). And in the interim, we are hung and holding our client's response open. At best you can describe this as a waste of worker requests, and at worst as a mild DoS.

See my first comment: I don't think the origin server is allowed to keep its connection alive. But indeed, if the origin is something not RFC compliant (or Alice), mod_proxy has no way to know (just like with the no-C-L/T-E read-until-close case, the end of the response is the close), and the scenario you describe above is clearly what happens.

We need to perform a lingering close (reading until our read end was closed by the next-hop server). The question is whether we tolerate the forwarding of keep-alive, other-than-'chunked' T-E proxy responses?

There is nothing like keep-alive forwarding with mod_proxy_http. Currently, when the origin server's response is read-until-close and HTTP/1.1 is used (which is a T-E requirement too), the client-side response will be chunked (as per ap_set_keepalive), and can then be kept alive regardless of the origin server's connection state.

By closing our write-end of the connection, we can signal to the server that we can't efficiently forward their response to the client (owing to the fact that the server believes this to be a keep-alive connection, and that we can't know where this specific response ends until the server has given up on receiving another keep-alive request).

This would be a good way to ensure the connection is closed by the origin, but half-closes are sometimes (even often) mishandled: the origin might consider the connection fully closed (unwritable) once its read end is closed, and that could be an issue too. Otherwise, the following patch could do the trick:

Index: modules/http/http_filters.c
===================================================================
--- modules/http/http_filters.c    (revision 1541907)
+++ modules/http/http_filters.c    (working copy)
@@ -291,13 +291,19 @@ apr_status_t ap_http_filter(ap_filter_t *f, apr_bu
          * denoted by C-L or T-E: chunked.
          *
          * Note that since the proxy uses this filter to handle the
-         * proxied *response*, proxy responses MUST be exempt.
+         * proxied *response*, proxy responses MUST be exempt, but
+         * ensure the connection is closed after the response.
          */
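Yann's point above that a read-until-close backend response can still be forwarded keep-alive to the client rests on re-framing the body as chunked (ap_set_keepalive handles this inside httpd). A minimal sketch of that chunk framing, independent of httpd's bucket brigades (the helper name is illustrative):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Wrap one buffer of body data as a single HTTP/1.1 chunk:
 * hex length, CRLF, data, CRLF. Returns bytes written, or -1 if
 * the output buffer is too small. */
static int chunk_frame(char *out, size_t outlen, const char *data, size_t len)
{
    int n = snprintf(out, outlen, "%zX\r\n", len);
    if (n < 0 || (size_t)n + len + 2 >= outlen)
        return -1;
    memcpy(out + n, data, len);            /* chunk payload */
    memcpy(out + n + len, "\r\n", 2);      /* chunk terminator */
    out[n + len + 2] = '\0';
    return n + (int)len + 2;
}
```

Each backend read becomes one chunk on the client side; when the backend finally closes, the proxy emits the last-chunk `0\r\n\r\n`, so the client connection stays self-delimited and reusable.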
Re: http_filter.c r1524770 open issue?
On Thu, 14 Nov 2013 00:22:55 +0100 Yann Ylavic ylavic@gmail.com wrote:

On Thu, Nov 14, 2013 at 12:05 AM, William A. Rowe Jr. wr...@rowe-clan.net wrote:

On Wed, 13 Nov 2013 17:14:15 -0500 Jim Jagielski j...@jagunet.com wrote:

On Nov 13, 2013, at 2:25 AM, William A. Rowe Jr. wr...@rowe-clan.net wrote:

Here we've unset C-L and T-E, but it makes no sense to wait if the origin server has no immediate plan to close the connection.

I cannot grok the above. The RFC itself does not make the differentiation between keepalive connections or not. So what exactly is the issue? Are you saying we should handle keepalive connections in this path differently? How is that supported by the spec?

Keep-alive (implicit in HTTP/1.1 absent a Connection: close header) is orthogonal to an unknown message body. Think about it :)

Regarding connection persistence, http://tools.ietf.org/id/draft-ietf-httpbis-p1-messaging-24.html#rfc.section.6.3 states: "In order to remain persistent, all messages on a connection MUST have a self-defined message length (i.e., one not defined by closure of the connection), as described in Section 3.3." So no keepalive is possible if the message length (content-coding) is undefined.

Correct. It is NOT kept alive by the client (httpd) in this proxy case. But (to Jim's statement) the next-hop server claims it *is* kept alive and will handle the socket as kept-alive...

This could be a valid HTTP response:

HTTP/1.1 200 OK
Transfer-Encoding: x-cleverness
Content-Length: 1000

but still the C-L must be ignored by the receiver, and a read-until-close be used by it.

[Thank you for correcting me; the issue at hand is strictly with proxy response body processing in the first place.] Because the server claims it is keep-alive. It will send its thousand bytes (decorated by whatever x-cleverness might frame it with), and will then enter a keepalive loop. WE won't enter a keepalive loop, because we won't reuse the connection. But the next-hop server is keeping it alive for whatever duration it believes it should wait for the next request from the client (httpd). And in the interim, we are hung and holding our client's response open. At best you can describe this as a waste of worker requests, and at worst as a mild DoS.

We need to perform a lingering close (reading until our read end was closed by the next-hop server). The question is whether we tolerate the forwarding of keep-alive, other-than-'chunked' T-E proxy responses?

By closing our write-end of the connection, we can signal to the server that we can't efficiently forward their response to the client (owing to the fact that the server believes this to be a keep-alive connection, and that we can't know where this specific response ends until the server has given up on receiving another keep-alive request).

My vote on this patch right now is -0. I don't believe it must be applied to 2.4.x this moment for this coming release, and I would prefer that we alter the behavior as of a specific release, not incrementally across several different releases (e.g. 2.4.6 with the old behavior, 2.4.7 with an interim behavior, and 2.4.8 with yet a third behavior). But I won't block the patch further if it has the votes for backport now.
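The "lingering close" mentioned above can be sketched as a small drain loop: shut down our write side, then read and discard until the peer closes. This is a simplified illustration (the helper name is hypothetical; httpd's real logic in ap_lingering_close also bounds the loop with a timeout, omitted here):

```c
#include <assert.h>
#include <sys/socket.h>
#include <unistd.h>

/* Half-close our write side, then drain the socket until the peer
 * closes (read() == 0). Returns bytes discarded, or -1 on error.
 * Production code must add a timeout so a silent peer can't pin us. */
static long linger_drain(int fd)
{
    char scratch[512];
    long total = 0;
    ssize_t n;

    shutdown(fd, SHUT_WR);   /* tell the peer we're done sending */
    while ((n = read(fd, scratch, sizeof(scratch))) > 0)
        total += n;          /* discard whatever is still in flight */
    return n == 0 ? total : -1;
}
```

Without the timeout, this loop is exactly the hang Bill describes: a keep-alive origin that never closes keeps the worker parked in `read()`.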
Re: http_filter.c r1524770 open issue?
On Nov 13, 2013, at 6:05 PM, William A. Rowe Jr. wr...@rowe-clan.net wrote:

On Wed, 13 Nov 2013 17:14:15 -0500 Jim Jagielski j...@jagunet.com wrote:

On Nov 13, 2013, at 2:25 AM, William A. Rowe Jr. wr...@rowe-clan.net wrote:

Here we've unset C-L and T-E, but it makes no sense to wait if the origin server has no immediate plan to close the connection.

I cannot grok the above. The RFC itself does not make the differentiation between keepalive connections or not. So what exactly is the issue? Are you saying we should handle keepalive connections in this path differently? How is that supported by the spec?

Keep-alive (implicit in HTTP/1.1 absent a Connection: close header) is orthogonal to an unknown message body. Think about it :)

STUFF /thisaction HTTP/1.1
Transfer-Encoding: x-cleverness
Content-Length: 1000

The above would not be kept alive. It can't be.
Re: http_filter.c r1524770 open issue?
On Wed, Nov 13, 2013 at 8:25 AM, William A. Rowe Jr. wr...@rowe-clan.net wrote:

Looking at the (f->r->proxyreq == PROXYREQ_RESPONSE) code path, the comments note:

 * http://tools.ietf.org/html/draft-ietf-httpbis-p1-messaging-23
 * Section 3.3.3.3: If a Transfer-Encoding header field is
 * present in a response and the chunked transfer coding is not
 * the final encoding, the message body length is determined by
 * reading the connection until it is closed by the server.

All well and good. However, that statement makes almost no sense if the response is not Connection: close (or HTTP/1.0, absent keep-alive).

    ap_log_rerror(APLOG_MARK, APLOG_INFO, 0, f->r, APLOGNO(01586)
                  "Unknown Transfer-Encoding: %s; using read-until-close");

Here we've unset C-L and T-E, but it makes no sense to wait if the origin server has no immediate plan to close the connection. Jim, Yann, was this case thought through? It seems premature to commit the backport without considering that edge case.

When the origin gives no C-L and no T-E, ap_http_filter() already assumes read-until-close, even if Connection: close is not specified, as per RFC 2616, Section 4.4 (Message Length), point 5, or draft-ietf-httpbis-p1-messaging-24, Section 3.3.3 (Message Body Length), point 7. IMHO this is the same case here: if the origin gives a T-E which does not end with chunked (something likely shared with the client), it is supposed to close the connection when done (as per the RFC/httpbis drafts), and the filter has to trust that...

Regards, Yann.
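The framing rules Yann cites (RFC 2616 4.4 / httpbis-p1 3.3.3) can be summarized as a decision function. This is an illustrative sketch only (the helper name and flags are not httpd API), and it omits the status-code special cases (1xx/204/304, HEAD responses):

```c
#include <assert.h>

enum body_framing { FRAME_CHUNKED, FRAME_CONTENT_LENGTH, FRAME_UNTIL_CLOSE };

/* Response body framing, simplified:
 * - T-E present, final coding chunked -> chunked framing
 * - T-E present, final coding unknown -> read until the server closes
 * - C-L only                          -> exactly that many bytes
 * - neither                           -> read until close
 * A C-L alongside any T-E MUST be ignored. */
static enum body_framing response_framing(int has_te, int te_final_chunked,
                                          int has_cl)
{
    if (has_te)
        return te_final_chunked ? FRAME_CHUNKED : FRAME_UNTIL_CLOSE;
    if (has_cl)
        return FRAME_CONTENT_LENGTH;
    return FRAME_UNTIL_CLOSE;
}
```

Note that both the "no C-L, no T-E" case and the "non-chunked T-E" case land on FRAME_UNTIL_CLOSE, which is exactly why Yann argues the filter already trusts the origin to close in an equivalent situation.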
Re: http_filter.c r1524770 open issue?
On Nov 13, 2013 8:22 AM, Yann Ylavic ylavic@gmail.com wrote:

On Wed, Nov 13, 2013 at 8:25 AM, William A. Rowe Jr. wr...@rowe-clan.net wrote:

Looking at the (f->r->proxyreq == PROXYREQ_RESPONSE) code path, the comments note:

 * http://tools.ietf.org/html/draft-ietf-httpbis-p1-messaging-23
 * Section 3.3.3.3: If a Transfer-Encoding header field is
 * present in a response and the chunked transfer coding is not
 * the final encoding, the message body length is determined by
 * reading the connection until it is closed by the server.

All well and good. However, that statement makes almost no sense if the response is not Connection: close (or HTTP/1.0, absent keep-alive).

    ap_log_rerror(APLOG_MARK, APLOG_INFO, 0, f->r, APLOGNO(01586)
                  "Unknown Transfer-Encoding: %s; using read-until-close");

Here we've unset C-L and T-E, but it makes no sense to wait if the origin server has no immediate plan to close the connection. Jim, Yann, was this case thought through? It seems premature to commit the backport without considering that edge case.

When the origin gives no C-L and no T-E, ap_http_filter() already assumes read-until-close, even if Connection: close is not specified, as per RFC 2616, Section 4.4 (Message Length), point 5, or draft-ietf-httpbis-p1-messaging-24, Section 3.3.3 (Message Body Length), point 7. IMHO this is the same case here: if the origin gives a T-E which does not end with chunked (something likely shared with the client), it is supposed to close the connection when done (as per the RFC/httpbis drafts), and the filter has to trust that...

Except, in -this- case, we unset the C-L returned by the origin server.
Re: http_filter.c r1524770 open issue?
On Wed, Nov 13, 2013 at 5:16 PM, William A. Rowe Jr. wmr...@gmail.com wrote:

On Nov 13, 2013 8:22 AM, Yann Ylavic ylavic@gmail.com wrote:

On Wed, Nov 13, 2013 at 8:25 AM, William A. Rowe Jr. wr...@rowe-clan.net wrote:

Looking at the (f->r->proxyreq == PROXYREQ_RESPONSE) code path, the comments note:

 * http://tools.ietf.org/html/draft-ietf-httpbis-p1-messaging-23
 * Section 3.3.3.3: If a Transfer-Encoding header field is
 * present in a response and the chunked transfer coding is not
 * the final encoding, the message body length is determined by
 * reading the connection until it is closed by the server.

All well and good. However, that statement makes almost no sense if the response is not Connection: close (or HTTP/1.0, absent keep-alive).

    ap_log_rerror(APLOG_MARK, APLOG_INFO, 0, f->r, APLOGNO(01586)
                  "Unknown Transfer-Encoding: %s; using read-until-close");

Here we've unset C-L and T-E, but it makes no sense to wait if the origin server has no immediate plan to close the connection. Jim, Yann, was this case thought through? It seems premature to commit the backport without considering that edge case.

When the origin gives no C-L and no T-E, ap_http_filter() already assumes read-until-close, even if Connection: close is not specified, as per RFC 2616, Section 4.4 (Message Length), point 5, or draft-ietf-httpbis-p1-messaging-24, Section 3.3.3 (Message Body Length), point 7. IMHO this is the same case here: if the origin gives a T-E which does not end with chunked (something likely shared with the client), it is supposed to close the connection when done (as per the RFC/httpbis drafts), and the filter has to trust that...

Except, in -this- case, we unset the C-L returned by the origin server.

This is what is required by the RFC (and draft-httpbis): when both C-L and T-E are received, the former must be ignored (by the reader) and removed (by the sender, in the proxy case).

Actually, r1524770 does remove the spurious C-L in ap_read_request (where, anyhow, a non-chunked T-E is not allowed from the client), but not in ap_http_filter, where it is just ignored (read-until-close is used instead, but [backend->]r->headers_in is untouched). The one that will unset the C-L is ap_proxy_http_process_response (after backend->r->headers_in is merged into r->headers_out), as per (line 1429):

    /* can't have both Content-Length and Transfer-Encoding */
    if (apr_table_get(r->headers_out, "Transfer-Encoding")
            && apr_table_get(r->headers_out, "Content-Length")) {
        /*
         * 2616 section 4.4, point 3: "if both Transfer-Encoding
         * and Content-Length are received, the latter MUST be
         * ignored";
         *
         * To help mitigate HTTP Splitting, unset Content-Length
         * and shut down the backend server connection
         * XXX: We aught to treat such a response as uncachable
         */
        apr_table_unset(r->headers_out, "Content-Length");
        ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r, APLOGNO(01107)
                      "server %s:%d returned Transfer-Encoding and "
                      "Content-Length",
                      backend->hostname, backend->port);
        backend->close = 1;
    }

But that long predates r1524770, and I think the core HTTP filter should do the same to be RFC compliant by itself... Anyway, the origin can send a non-chunked T-E and close the connection when done if it is sharing a special coding with the client, but it cannot send both a non-chunked T-E and a C-L and expect the latter to be used; the RFC says it will not be, in any case. I don't see any compatibility issue here: if the origin has no immediate plan to close the connection, it should use either a T-E (ultimately) chunked or a C-L alone, as it would without a special T-E...

Regards, Yann.
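The mod_proxy_http logic quoted above can be mirrored on a toy header pair instead of an apr_table_t (the struct and function names below are illustrative, not httpd API): when both Transfer-Encoding and Content-Length arrive, drop the C-L and flag the backend connection for close, as a request-splitting/smuggling mitigation.

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for the relevant response headers; NULL means absent. */
struct resp_hdrs {
    const char *transfer_encoding;
    const char *content_length;
};

/* Returns 1 if the backend connection should be marked for close. */
static int sanitize_framing(struct resp_hdrs *h)
{
    int backend_close = 0;
    if (h->transfer_encoding && h->content_length) {
        /* 2616 section 4.4, point 3: the C-L MUST be ignored */
        h->content_length = NULL;
        backend_close = 1;
    }
    return backend_close;
}
```

The close flag matters as much as the unset: once the C-L is discarded, the only trustworthy end-of-response marker left is the connection close itself.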
Re: http_filter.c r1524770 open issue?
On Wed, 13 Nov 2013 22:19:37 +0100 Yann Ylavic ylavic@gmail.com wrote:

On Wed, Nov 13, 2013 at 5:16 PM, William A. Rowe Jr. wmr...@gmail.com wrote:

On Nov 13, 2013 8:22 AM, Yann Ylavic ylavic@gmail.com wrote:

On Wed, Nov 13, 2013 at 8:25 AM, William A. Rowe Jr. wr...@rowe-clan.net wrote:

Looking at the (f->r->proxyreq == PROXYREQ_RESPONSE) code path, the comments note:

 * http://tools.ietf.org/html/draft-ietf-httpbis-p1-messaging-23
 * Section 3.3.3.3: If a Transfer-Encoding header field is
 * present in a response and the chunked transfer coding is not
 * the final encoding, the message body length is determined by
 * reading the connection until it is closed by the server.

All well and good. However, that statement makes almost no sense if the response is not Connection: close (or HTTP/1.0, absent keep-alive).

    ap_log_rerror(APLOG_MARK, APLOG_INFO, 0, f->r, APLOGNO(01586)
                  "Unknown Transfer-Encoding: %s; using read-until-close");

Here we've unset C-L and T-E, but it makes no sense to wait if the origin server has no immediate plan to close the connection. Jim, Yann, was this case thought through? It seems premature to commit the backport without considering that edge case.

When the origin gives no C-L and no T-E, ap_http_filter() already assumes read-until-close, even if Connection: close is not specified, as per RFC 2616, Section 4.4 (Message Length), point 5, or draft-ietf-httpbis-p1-messaging-24, Section 3.3.3 (Message Body Length), point 7. IMHO this is the same case here: if the origin gives a T-E which does not end with chunked (something likely shared with the client), it is supposed to close the connection when done (as per the RFC/httpbis drafts), and the filter has to trust that...

Except, in -this- case, we unset the C-L returned by the origin server.

This is what is required by the RFC (and draft-httpbis): when both C-L and T-E are received, the former must be ignored (by the reader) and removed (by the sender, in the proxy case).

Actually, r1524770 does remove the spurious C-L in ap_read_request (where, anyhow, a non-chunked T-E is not allowed from the client), but not in ap_http_filter, where it is just ignored (read-until-close is used instead, but [backend->]r->headers_in is untouched). The one that will unset the C-L is ap_proxy_http_process_response (after backend->r->headers_in is merged into r->headers_out), as per (line 1429):

    /* can't have both Content-Length and Transfer-Encoding */
    if (apr_table_get(r->headers_out, "Transfer-Encoding")
            && apr_table_get(r->headers_out, "Content-Length")) {
        /*
         * 2616 section 4.4, point 3: "if both Transfer-Encoding
         * and Content-Length are received, the latter MUST be
         * ignored";
         *
         * To help mitigate HTTP Splitting, unset Content-Length
         * and shut down the backend server connection
         * XXX: We aught to treat such a response as uncachable
         */
        apr_table_unset(r->headers_out, "Content-Length");
        ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r, APLOGNO(01107)
                      "server %s:%d returned Transfer-Encoding and "
                      "Content-Length",
                      backend->hostname, backend->port);
        backend->close = 1;
    }

But that long predates r1524770, and I think the core HTTP filter should do the same to be RFC compliant by itself... Anyway, the origin can send a non-chunked T-E and close the connection when done if it is sharing a special coding with the client, but it cannot send both a non-chunked T-E and a C-L and expect the latter to be used; the RFC says it will not be, in any case. I don't see any compatibility issue here: if the origin has no immediate plan to close the connection, it should use either a T-E (ultimately) chunked or a C-L alone, as it would without a special T-E...

Yann, you just reiterated everything I've already conceded. You didn't mention keep-alives once. You used the word 'trust' before, but that is not something easily granted when server resources are carelessly abused.

Please keep in mind that not all mod_proxy applications are reverse proxies, and this exception is problematic for arbitrary origin servers requested by the client. I'll accept the silence as fact that this was not considered and will go ahead and veto the backports for the time being, until we add the appropriate guard against such origin-server keepalive behavior. I would expect that patch to be very straightforward (trivial, in simply dropping the response if keepalive is requested) and will start looking at it after finishing my reviews of various apr/httpd releases here.

Bill
Re: http_filter.c r1524770 open issue?
On Nov 13, 2013, at 4:50 PM, William A. Rowe Jr. wr...@rowe-clan.net wrote: I'll accept the silence as fact that this was not considered and will go ahead and veto the backports for the time being

That is not the case. The silence is because you are mentioning something which isn't applicable. It certainly isn't, from what I can tell, an issue with this patch directly. Please don't assume, especially when it results in unsubstantiated vetoes.
Re: http_filter.c r1524770 open issue?
On Nov 13, 2013, at 2:25 AM, William A. Rowe Jr. wr...@rowe-clan.net wrote: Here we've unset C-L and T-E, but it makes no sense to wait if the origin server has no immediate plan to close the connection.

I cannot grok the above. The RFC itself does not differentiate between keep-alive connections and others. So what exactly is the issue? Are you saying we should handle keep-alive connections in this path differently? How is that supported by the spec?
Re: http_filter.c r1524770 open issue?
On Wed, 13 Nov 2013 17:14:15 -0500 Jim Jagielski j...@jagunet.com wrote: On Nov 13, 2013, at 2:25 AM, William A. Rowe Jr. wr...@rowe-clan.net wrote: Here we've unset C-L and T-E, but it makes no sense to wait if the origin server has no immediate plan to close the connection. I cannot grok the above. The RFC itself does not differentiate between keep-alive connections and others. So what exactly is the issue? Are you saying we should handle keep-alive connections in this path differently? How is that supported by the spec?

Keep-alive (implicit in HTTP/1.1 absent a Connection: close header) is orthogonal to an unknown message body. Think about it :)

    STUFF /thisaction HTTP/1.1
    Transfer-Encoding: x-cleverness
    Content-Length: 1000

was a perfectly valid body, but if we drop the C-L, we will pass the thousand bytes and then sit indefinitely in an HTTP/0.9 body-handling loop, waiting an indeterminate amount of time for the close of the stream (which isn't immediately forthcoming); something is busted. And I'm not claiming existing behavior was any more correct :)
Re: http_filter.c r1524770 open issue?
On Thu, Nov 14, 2013 at 12:05 AM, William A. Rowe Jr. wr...@rowe-clan.net wrote: On Wed, 13 Nov 2013 17:14:15 -0500 Jim Jagielski j...@jagunet.com wrote: On Nov 13, 2013, at 2:25 AM, William A. Rowe Jr. wr...@rowe-clan.net wrote: Here we've unset C-L and T-E, but it makes no sense to wait if the origin server has no immediate plan to close the connection. I cannot grok the above. The RFC itself does not differentiate between keep-alive connections and others. So what exactly is the issue? Are you saying we should handle keep-alive connections in this path differently? How is that supported by the spec?

Keep-alive (implicit in HTTP/1.1 absent a Connection: close header) is orthogonal to an unknown message body. Think about it :)

Regarding connection persistence, http://tools.ietf.org/id/draft-ietf-httpbis-p1-messaging-24.html#rfc.section.6.3 states:

    In order to remain persistent, all messages on a connection MUST
    have a self-defined message length (i.e., one not defined by
    closure of the connection), as described in Section 3.3.

So no keep-alive is possible if the message length is undefined.

    STUFF /thisaction HTTP/1.1
    Transfer-Encoding: x-cleverness
    Content-Length: 1000

was a perfectly valid body, but if we drop the C-L, we will pass the thousand bytes and then sit indefinitely in an HTTP/0.9 body-handling loop, waiting an indeterminate amount of time for the close of the stream (which isn't immediately forthcoming); something is busted.

This is not a valid HTTP request: the T-E must end with chunked. And even if it were valid, the C-L must be ignored by the receiver. This could be a valid HTTP response:

    HTTP/1.1 200 OK
    Transfer-Encoding: x-cleverness
    Content-Length: 1000

but still the C-L must be ignored by the receiver, and a read-until-close used instead.

And I'm not claiming existing behavior was any more correct :)
http_filter.c r1524770 open issue?
Looking at the (f->r->proxyreq == PROXYREQ_RESPONSE) code path, the comments note:

    * http://tools.ietf.org/html/draft-ietf-httpbis-p1-messaging-23
    * Section 3.3.3.3: "If a Transfer-Encoding header field is
    * present in a response and the chunked transfer coding is not
    * the final encoding, the message body length is determined by
    * reading the connection until it is closed by the server."

All well and good. However, that statement makes almost no sense if the response is not Connection: close (or HTTP/1.0, absent keep-alive).

    ap_log_rerror(APLOG_MARK, APLOG_INFO, 0, f->r, APLOGNO(01586)
                  "Unknown Transfer-Encoding: %s; using read-until-close",
                  tenc);

Here we've unset C-L and T-E, but it makes no sense to wait if the origin server has no immediate plan to close the connection.

Jim, Yann, was this case thought through? It seems premature to commit the backport without considering that edge case.