Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-10 Thread William A. Rowe, Jr.

Plüm wrote:

William A. Rowe, Jr. wrote:


Ruediger Pluem wrote:



[..cut..]



Quick consideration;

Rather than look for HTTP_BAD_GATEWAY error bucket, we can actually
generalize the problem.  ANY metadata bucket that isn't recognized and
handled by an intermediate filter probably indicates a problem; and
therefore the result is a non-cacheable, broken response.


Actually two cases.  In the error bucket case, it's non-cacheable, and broken.


So what do you think should be done in this case with respect to the
'brokenness'?

1. Set r->connection->keepalive to AP_CONN_CLOSE.
2. Do not send the last-chunk marker in the case of a chunked encoding response.
3. Do 1. and 2.


The straightforward thing is to close the client socket.  Obviously it's
not that trivial; unix can reuse the same fd almost immediately.  Perhaps
close the write side?  In any case, the connection should be marked aborted.

Number 2 follows from 1, of course we don't 'finish' the response.

Remember, the back end is *busted* midstream.  We have to convey that, but
we don't have to maintain the integrity of every byte sent by the (errant)
server, IMHO.
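
(An illustrative sketch only, not the committed patch: a protocol output
filter could react to the error bucket roughly as below. The filter name is
invented, and whether simply flipping c->aborted is the right way to drop
the connection is exactly what is being debated here.)

    #include "httpd.h"
    #include "http_protocol.h"
    #include "util_filter.h"
    #include "apr_buckets.h"

    /* Sketch: on a HTTP_BAD_GATEWAY error bucket, forbid keepalive and
     * mark the client connection aborted so nothing further is written. */
    static apr_status_t broken_backend_abort_filter(ap_filter_t *f,
                                                    apr_bucket_brigade *bb)
    {
        apr_bucket *e;

        for (e = APR_BRIGADE_FIRST(bb);
             e != APR_BRIGADE_SENTINEL(bb);
             e = APR_BUCKET_NEXT(e)) {
            if (AP_BUCKET_IS_ERROR(e)
                && ((ap_bucket_error *)e->data)->status == HTTP_BAD_GATEWAY) {
                f->c->keepalive = AP_CONN_CLOSE; /* never reuse this connection */
                f->c->aborted = 1;               /* stop sending to the client */
            }
        }
        return ap_pass_brigade(f->next, bb);
    }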


Next question is: Do we need to stop sending further data to the client
immediately?


Yup, again IMHO.

In the unrecognized bucket type case, it's non-cacheable (a 'complex' response),
but it is likely servable to the front end client.  In both cases, if mod_cache
doesn't grok what it sees, then something 'interesting' is going on and we would
not want to deposit into the cache.


I agree with the goals, but making it non-cacheable is not easy to add to the
current patch, because the HTTP_OUTERROR filter is a protocol filter that is
run after the CACHE_SAVE filter.


My comments were meant to illustrate that a new filter is a waste of resources,
stack setup and teardown.  We've overengineered what (should be) some simple
code.  Delegating error handling, which previously occurred in the (existing)
filter stack, off to its own filter is (again IMHO) sort of lame :)


But apart from the case where no Content-Length is present, the CACHE_SAVE
filter itself does not iterate over the brigade.
So we would need to add an additional loop over the brigade inside
the CACHE_SAVE filter to scan for these meta buckets.


Yes; there is some complexity here.  Didn't suggest it was trivial :-)


Furthermore I think we need to keep in mind that, if we think that this
response is not worth caching, we should make any upstream proxies
think the same. In the case of a broken backend this is achieved (depending
on the transfer encoding) by

1. sending less content than the Content-Length header announces
2. not sending the last-chunk marker.


right.


But in the case of an unrecognized bucket type we must let the upstream
proxies know that it is not cacheable via headers.


as you might have guessed, my comments were aimed at those 'interesting'
applications that were otherwise cacheable - e.g. those folks who keep
arguing that things like mod_authnz_hostname should interact with the cache
(which yes I disagree with, but this would provide basic mechanisms to handle
this case.)

 But this could be impossible if the headers had already been sent.

exactly so.
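
(Editor's sketch for illustration: if the brokenness is detected while the
response headers are still open, the header-based signalling could look
roughly like this; the helper name is invented, and the whole point above is
that this is impossible once the headers are on the wire.)

    #include "httpd.h"

    /* Sketch: mark a response uncacheable both locally and for any
     * upstream caches -- only works before the headers have been sent. */
    static void mark_uncacheable(request_rec *r)
    {
        r->no_cache = 1;  /* keep the local mod_cache from storing it */
        apr_table_setn(r->headers_out, "Cache-Control", "no-store");
        apr_table_unset(r->headers_out, "Expires");
    }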


Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-10 Thread Ruediger Pluem


On 01/10/2006 02:11 PM, William A. Rowe, Jr. wrote:
 Plüm wrote:
 
 William A. Rowe, Jr. wrote:

 Ruediger Pluem wrote:

[..cut..]

 
 
 The straightforward thing is to close the client socket.  Obviously it's
 not that trivial; unix can reuse the same fd almost immediately.  Perhaps
 close the write side?  In any case, the connection should be marked
 aborted.
 
 Number 2 follows from 1, of course we don't 'finish' the response.
 
 Remember, the back end is *busted* midstream.  We have to convey that, but
 we don't have to maintain the integrity of every byte sent by the (errant)
 server, IMHO.

In the case of the broken backend the patch already does this. Do I understand
you correctly that the behaviour currently done for a broken backend should be
applied to *any* type of error bucket?

[..cut..]

 as you might have guessed, my comments were aimed at those 'interesting'
 applications that were otherwise cacheable - e.g. those folks who keep
 arguing that things like mod_authnz_hostname should interact with the cache
 (which yes I disagree with, but this would provide basic mechanisms to
 handle
 this case.)

I think we disagree on this as I also belong to those folks, but that's another
story and thread :-).
Furthermore I do not see right now how this helps in the 'mod_authnz_hostname
case', as the response should still be *cached*. It shouldn't be delivered from
the cache without checking. But as said this point does not belong to this
thread.

[..cut..]

Apart from getting very welcome improvement feedback I also wanted to check if
someone is -1 on the patch. As I regard the partial pages problem as something
that should be fixed in 2.2.1, I will propose it for backport once Brian has
done a successful test with the patch and I do not see any -1 on it.

Regards

Rüdiger


Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-09 Thread William A. Rowe, Jr.

Ruediger Pluem wrote:


Thanks to Jim for reviewing the patch. He detected one missed patch
and made some comments in the code clearer. The new patch list is now:


Quick consideration;

Rather than look for HTTP_BAD_GATEWAY error bucket, we can actually
generalize the problem.  ANY metadata bucket that isn't recognized and
handled by an intermediate filter probably indicates a problem; and
therefore the result is a non-cacheable, broken response.


Thoughts?

Bill


Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-09 Thread William A. Rowe, Jr.

William A. Rowe, Jr. wrote:

Ruediger Pluem wrote:



Thanks to Jim for reviewing the patch. He detected one missed patch
and made some comments in the code clearer. The new patch list is now:



Quick consideration;

Rather than look for HTTP_BAD_GATEWAY error bucket, we can actually
generalize the problem.  ANY metadata bucket that isn't recognized and
handled by an intermediate filter probably indicates a problem; and
therefore the result is a non-cacheable, broken response.


Actually two cases.  In the error bucket case, it's non-cacheable, and broken.
In the unrecognized bucket type case, it's non-cacheable (a 'complex' response),
but it is likely servable to the front end client.  In both cases, if mod_cache
doesn't grok what it sees, then something 'interesting' is going on and we would
not want to deposit into the cache.

This would guard against new designs causing new troublesome interactions with
mod_cache.  As some new bucket type is 'taught' to mod_cache, it can grow to
handle otherwise harmless requests.  But I suspect that the module should err
on the side of paranoia.
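
(A minimal sketch of that generalization, editor's illustration only; the
filter name and the set of 'recognized' bucket types are assumptions, not
committed code.)

    #include "httpd.h"
    #include "http_protocol.h"
    #include "util_filter.h"
    #include "apr_buckets.h"

    /* Sketch: any metadata bucket this filter does not recognize marks the
     * response as 'complex': never cache it; error buckets also break it. */
    static apr_status_t meta_guard_filter(ap_filter_t *f,
                                          apr_bucket_brigade *bb)
    {
        apr_bucket *e;

        for (e = APR_BRIGADE_FIRST(bb);
             e != APR_BRIGADE_SENTINEL(bb);
             e = APR_BUCKET_NEXT(e)) {
            if (!APR_BUCKET_IS_METADATA(e)) {
                continue;                       /* data buckets are fine */
            }
            if (APR_BUCKET_IS_EOS(e) || APR_BUCKET_IS_FLUSH(e)) {
                continue;                       /* well-known metadata */
            }
            if (AP_BUCKET_IS_ERROR(e)) {
                /* error bucket: non-cacheable *and* broken */
                f->r->no_cache = 1;
                f->c->keepalive = AP_CONN_CLOSE;
                continue;
            }
            /* unknown metadata: non-cacheable, but still servable */
            f->r->no_cache = 1;
        }
        return ap_pass_brigade(f->next, bb);
    }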

Bill


Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-09 Thread Graham Leggett

William A. Rowe, Jr. wrote:


Rather than look for HTTP_BAD_GATEWAY error bucket, we can actually
generalize the problem.  ANY metadata bucket that isn't recognized and
handled by an intermediate filter probably indicates a problem; and
therefore the result is a non-cacheable, broken response.


+1.

Regards,
Graham
--




Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-07 Thread Ruediger Pluem


On 01/05/2006 08:59 PM, Ruediger Pluem wrote:
 
 On 01/05/2006 01:51 PM, Ruediger Pluem wrote:
 
 [..cut..]
 
 I finally merged all the commits done to the trunk on this issue

Thanks to Jim for reviewing the patch. He detected one missed patch
and made some comments in the code clearer. The new patch list is now:

r354628
r354636
r357461
r357519
r358022  - new
r365374
r366181
r366554  - new
r366558  - new

I created a new merged patch that works with 2.2.x. So the same procedure
again please :).

I would like Brian (Akins) to give it a try, and of course if someone else has
a look at it, it wouldn't hurt.

Regards

Rüdiger

Index: modules/http/http_core.c
===
--- modules/http/http_core.c(Revision 366698)
+++ modules/http/http_core.c(Arbeitskopie)
@@ -39,6 +39,7 @@
 AP_DECLARE_DATA ap_filter_rec_t *ap_http_input_filter_handle;
 AP_DECLARE_DATA ap_filter_rec_t *ap_http_header_filter_handle;
 AP_DECLARE_DATA ap_filter_rec_t *ap_chunk_filter_handle;
+AP_DECLARE_DATA ap_filter_rec_t *ap_http_outerror_filter_handle;
 AP_DECLARE_DATA ap_filter_rec_t *ap_byterange_filter_handle;
 
 static const char *set_keep_alive_timeout(cmd_parms *cmd, void *dummy,
@@ -202,6 +203,8 @@
                                     NULL, r, r->connection);
         ap_add_output_filter_handle(ap_http_header_filter_handle,
                                     NULL, r, r->connection);
+        ap_add_output_filter_handle(ap_http_outerror_filter_handle,
+                                    NULL, r, r->connection);
 }
 
 return OK;
@@ -237,6 +240,9 @@
     ap_chunk_filter_handle =
         ap_register_output_filter("CHUNK", ap_http_chunk_filter,
                                   NULL, AP_FTYPE_TRANSCODE);
+    ap_http_outerror_filter_handle =
+        ap_register_output_filter("HTTP_OUTERROR", ap_http_outerror_filter,
+                                  NULL, AP_FTYPE_PROTOCOL);
     ap_byterange_filter_handle =
         ap_register_output_filter("BYTERANGE", ap_byterange_filter,
                                   NULL, AP_FTYPE_PROTOCOL);
Index: modules/http/http_filters.c
===
--- modules/http/http_filters.c (Revision 366698)
+++ modules/http/http_filters.c (Arbeitskopie)
@@ -1318,3 +1318,29 @@
 return bufsiz;
 }
 
+/* Filter to handle any error buckets on output */
+apr_status_t ap_http_outerror_filter(ap_filter_t *f,
+                                     apr_bucket_brigade *b)
+{
+    request_rec *r = f->r;
+    apr_bucket *e;
+
+    for (e = APR_BRIGADE_FIRST(b);
+         e != APR_BRIGADE_SENTINEL(b);
+         e = APR_BUCKET_NEXT(e))
+    {
+        if (AP_BUCKET_IS_ERROR(e)) {
+            /*
+             * Start of error handling state tree. Just one condition
+             * right now :)
+             */
+            if (((ap_bucket_error *)(e->data))->status == HTTP_BAD_GATEWAY) {
+                /* stream aborted and we have not ended it yet */
+                r->connection->keepalive = AP_CONN_CLOSE;
+            }
+        }
+    }
+
+    return ap_pass_brigade(f->next, b);
+}
+
+
Index: modules/http/mod_core.h
===
--- modules/http/mod_core.h (Revision 366698)
+++ modules/http/mod_core.h (Arbeitskopie)
@@ -42,6 +42,7 @@
 extern AP_DECLARE_DATA ap_filter_rec_t *ap_http_input_filter_handle;
 extern AP_DECLARE_DATA ap_filter_rec_t *ap_http_header_filter_handle;
 extern AP_DECLARE_DATA ap_filter_rec_t *ap_chunk_filter_handle;
+extern AP_DECLARE_DATA ap_filter_rec_t *ap_http_outerror_filter_handle;
 extern AP_DECLARE_DATA ap_filter_rec_t *ap_byterange_filter_handle;
 
 /*
@@ -54,6 +55,10 @@
 /* HTTP/1.1 chunked transfer encoding filter. */
 apr_status_t ap_http_chunk_filter(ap_filter_t *f, apr_bucket_brigade *b);
 
+/* Filter to handle any error buckets on output */
+apr_status_t ap_http_outerror_filter(ap_filter_t *f,
+                                     apr_bucket_brigade *b);
+
 char *ap_response_code_string(request_rec *r, int error_index);
 
 /**
Index: modules/http/chunk_filter.c
===
--- modules/http/chunk_filter.c (Revision 366698)
+++ modules/http/chunk_filter.c (Arbeitskopie)
@@ -39,6 +39,12 @@
 
 #include "mod_core.h"
 
+/*
+ * A pointer to this is used to memorize in the filter context that a bad
+ * gateway error bucket had been seen. It is used as an invented unique
+ * pointer.
+ */
+static char bad_gateway_seen;
+
 apr_status_t ap_http_chunk_filter(ap_filter_t *f, apr_bucket_brigade *b)
 {
 #define ASCII_CRLF  "\015\012"
@@ -67,6 +73,16 @@
                 eos = e;
                 break;
             }
+            if (AP_BUCKET_IS_ERROR(e)
+                && (((ap_bucket_error *)(e->data))->status
+                    == HTTP_BAD_GATEWAY)) {
+                /*
+                 * We had a broken backend. Memorize this in the filter
+                 * context.

Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-06 Thread Jim Jagielski

Still not sure why you are using a specific error detection
filter rather than the generic one in -trunk

On Jan 5, 2006, at 2:59 PM, Ruediger Pluem wrote:


@@ -146,13 +162,20 @@
      *   2) the trailer
      *   3) the end-of-chunked body CRLF
      *
-     * If there is no EOS bucket, then do nothing.
+     * If there is no EOS bucket, or if we had seen an error bucket with
+     * status HTTP_BAD_GATEWAY then do nothing. We have memorized an
+     * error bucket that we had seen in the filter context.
+     * The error bucket with status HTTP_BAD_GATEWAY indicates that the
+     * connection to the backend (mod_proxy) broke in the middle of the
+     * response. In order to signal the client that something went wrong
+     * we do not create the last-chunk marker and set c->keepalive to
+     * AP_CONN_CLOSE in the core output filter.
      *
      * XXX: it would be nice to combine this with the end-of-chunk
      * marker above, but this is a bit more straight-forward for
      * now.
      */
-    if (eos != NULL) {
+    if (eos && !f->ctx) {


Code logic doesn't match comment.


+        /*
+         * Ensure that we sent an EOS bucket thru the filter chain, if we
+         * already have sent some data. Maybe ap_proxy_backend_broke was
+         * called and added one to the brigade already. So we should not
+         * do this in this case.
+         */
+        if (data_sent && !r->eos_sent && APR_BRIGADE_EMPTY(output_brigade)) {
+            e = apr_bucket_eos_create(r->connection->bucket_alloc);
+            APR_BRIGADE_INSERT_TAIL(output_brigade, e);
+        }
+



Also, if data_sent is true, then ap_proxy_backend_broke() already
sent the EOS, so why are we checking if it's true again? I
think the logic is wrong...


Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-06 Thread Jim Jagielski


On Jan 6, 2006, at 1:47 PM, Jim Jagielski wrote:


Still not sure why you are using a specific error detection
filter rather than the generic one in -trunk

On Jan 5, 2006, at 2:59 PM, Ruediger Pluem wrote:


@@ -146,13 +162,20 @@
      *   2) the trailer
      *   3) the end-of-chunked body CRLF
      *
-     * If there is no EOS bucket, then do nothing.
+     * If there is no EOS bucket, or if we had seen an error bucket with
+     * status HTTP_BAD_GATEWAY then do nothing. We have memorized an
+     * error bucket that we had seen in the filter context.
+     * The error bucket with status HTTP_BAD_GATEWAY indicates that the
+     * connection to the backend (mod_proxy) broke in the middle of the
+     * response. In order to signal the client that something went wrong
+     * we do not create the last-chunk marker and set c->keepalive to
+     * AP_CONN_CLOSE in the core output filter.
      *
      * XXX: it would be nice to combine this with the end-of-chunk
      * marker above, but this is a bit more straight-forward for
      * now.
      */
-    if (eos != NULL) {
+    if (eos && !f->ctx) {


Code logic doesn't match comment.



I should clarify that: when the comment says "or" yet the
code does an "and", it causes undue confusion, even
if the two do sync up.


Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-06 Thread Ruediger Pluem


On 01/06/2006 07:47 PM, Jim Jagielski wrote:
 Still not sure why you are using a specific error detection
 filter rather than the generic one in -trunk
 

Arghh. Sorry, I must have missed including your patch that changed this.
Thanks for catching this. I will provide a new patch.

[..cut..]

 +        /*
 +         * Ensure that we sent an EOS bucket thru the filter chain, if we
 +         * already have sent some data. Maybe ap_proxy_backend_broke was
 +         * called and added one to the brigade already. So we should not
 +         * do this in this case.
 +         */
 +        if (data_sent && !r->eos_sent && APR_BRIGADE_EMPTY(output_brigade)) {
 +            e = apr_bucket_eos_create(r->connection->bucket_alloc);
 +            APR_BRIGADE_INSERT_TAIL(output_brigade, e);
 +        }
 +

 
 Also, if data_sent is true, then ap_proxy_backend_broke() already
 sent the EOS, so why are we checking if it's true again? I
 think the logic is wrong...

No, there is also the case that the client aborted the connection.
In this case

status == APR_SUCCESS

So the outer if condition around ap_proxy_backend_broke is not true and it is
not called. Anyway, since we had sent some data we should send an EOS if we
have not already done so, to make all filters on the chain aware of it.
Especially mod_disk_cache needs to know in order to remove the temporary
file created for the cache entry.
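
Restated as a compact sketch (same logic as the hunk quoted above;
data_sent, output_brigade and e are the patch's own variables):

    /* After data has been streamed, guarantee exactly one EOS goes down
     * the chain so filters like mod_disk_cache can clean up (e.g. remove
     * the temporary file for the cache entry). Skip it when
     * ap_proxy_backend_broke() already appended one. */
    if (data_sent && !r->eos_sent && APR_BRIGADE_EMPTY(output_brigade)) {
        e = apr_bucket_eos_create(r->connection->bucket_alloc);
        APR_BRIGADE_INSERT_TAIL(output_brigade, e);
    }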

Regards

Rüdiger



Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-06 Thread Ruediger Pluem


On 01/06/2006 08:03 PM, Jim Jagielski wrote:
 
 On Jan 6, 2006, at 1:47 PM, Jim Jagielski wrote:
 

[..cut..]


 
 I should clarify that: when the comment says "or" yet the
 code does an "and", it causes undue confusion, even
 if the two do sync up.

I think with the adjustments you made to the comments it is now much
clearer what is done and this point is closed. Thanks for doing this.

Regards

Rüdiger



Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-06 Thread Jim Jagielski
Ruediger Pluem wrote:
 
 I think with the adjustments you made to the comments it is now much
 clearer what is done and this point is closed. Thanks for doing this.
 

np :)

-- 
===
   Jim Jagielski   [|]   [EMAIL PROTECTED]   [|]   http://www.jaguNET.com/
If you can dodge a wrench, you can dodge a ball.


Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-05 Thread Ruediger Pluem


On 01/03/2006 06:15 PM, Jim Jagielski wrote:
 
 On Jan 2, 2006, at 4:18 PM, Ruediger Pluem wrote:
 

 1. Proposal
 If a subrequest has a broken backend also set r->no_cache for the
 main request and ensure that the chunk filter does not send the last
 chunk marker in this case.

 2. Proposal
 If a subrequest has a broken backend do not send the error bucket.
 Only set r->no_cache to ensure that this subrequest response does not
 get cached.

 

I am currently working on proposal #1. In order to ensure that the chunk
filter does not send a last-chunk marker in this case I try to memorize
a seen error bucket in the filter context.
But I remember that there had been casting issues in the past that
created compiler warnings, especially with gcc 4. The patch below compiles
fine with my gcc 3.2.2 with -Wall. So if someone could comment on whether

f->ctx = (void *)(1)

is fine, it would be great.

Regards

Rüdiger


Index: chunk_filter.c
===
--- chunk_filter.c  (Revision 366160)
+++ chunk_filter.c  (Arbeitskopie)
@@ -47,7 +47,6 @@
 apr_bucket_brigade *more;
 apr_bucket *e;
 apr_status_t rv;
-int bad_gateway_seen = 0;

 for (more = NULL; b; b = more, more = NULL) {
 apr_off_t bytes = 0;
@@ -71,8 +70,11 @@
         if (AP_BUCKET_IS_ERROR(e)
             && (((ap_bucket_error *)(e->data))->status
                 == HTTP_BAD_GATEWAY)) {
-            /* We had a broken backend. Memorize this. */
-            bad_gateway_seen = 1;
+            /*
+             * We had a broken backend. Memorize this in the filter
+             * context.
+             */
+            f->ctx = (void *)(1);
             continue;
         }
 if (APR_BUCKET_IS_FLUSH(e)) {
@@ -155,7 +157,8 @@
  *   3) the end-of-chunked body CRLF
  *
  * If there is no EOS bucket, or if we had seen an error bucket with
- * status HTTP_BAD_GATEWAY then do nothing.
+ * status HTTP_BAD_GATEWAY then do nothing. We have memorized an
+ * error bucket that we had seen in the filter context.
  * The error bucket with status HTTP_BAD_GATEWAY indicates that the
  * connection to the backend (mod_proxy) broke in the middle of the
  * response. In order to signal the client that something went wrong
@@ -166,7 +169,7 @@
  * marker above, but this is a bit more straight-forward for
  * now.
  */
-    if (eos && !bad_gateway_seen) {
+    if (eos && !f->ctx) {
 /* XXX: (2) trailers ... does not yet exist */
 e = apr_bucket_immortal_create(ASCII_ZERO ASCII_CRLF
/* trailers */






Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-05 Thread Joe Orton
On Thu, Jan 05, 2006 at 12:48:25PM +0100, Ruediger Pluem wrote:
 But I remember that there had been casting issues in the past that
 created compiler warnings, especially with gcc 4. The patch below compiles
 fine with my gcc 3.2.2 with -Wall. So if someone could comment on whether
 
 f->ctx = (void *)(1)
 
 is fine, it would be great.

André's trick of using invented unique pointers by doing:

static char sentinel;  (in global scope)

f->ctx = &sentinel;

is neater and avoids the casting mess.  (pick a suitable variable name
though :)

joe


Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-05 Thread Ruediger Pluem


On 01/05/2006 12:58 PM, Joe Orton wrote:
 On Thu, Jan 05, 2006 at 12:48:25PM +0100, Ruediger Pluem wrote:

[..cut..]

 
 André's trick of using invented unique pointers by doing:
 
 static char sentinel;  (in global scope)
 
 f->ctx = &sentinel;
 
 is neater and avoids the casting mess.  (pick a suitable variable name
 though :)

Thanks Joe for *pointing* me to this. :-)
This is a really nice trick. I will use it.

Regards

Rüdiger



Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-05 Thread Nick Kew
On Thursday 05 January 2006 11:58, Joe Orton wrote:
 On Thu, Jan 05, 2006 at 12:48:25PM +0100, Ruediger Pluem wrote:
  But I remember that there had been casting issues in the past that
  created compiler warnings, especially with gcc 4. The patch below compiles
  fine with my gcc 3.2.2 with -Wall. So if someone could comment on whether
 
  f->ctx = (void *)(1)
 
  is fine, it would be great.

 André's trick of using invented unique pointers by doing:

 static char sentinel;  (in global scope)

 f->ctx = &sentinel;

 is neater and avoids the casting mess.  (pick a suitable variable name
 though :)

Why go to the trouble of inventing a pointer?  You already have unique
pointers in global scope.  An obvious example:

module some_module;

  f->ctx = &some_module;

-- 
Nick Kew


Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-05 Thread Ruediger Pluem


On 01/05/2006 01:51 PM, Ruediger Pluem wrote:

[..cut..]

I finally merged all the commits done to the trunk on this issue

r354628
r354636
r357461
r357519
r365374
r366181

into one patch that works with 2.2.x. From my current point of view all
aspects of this issue should be covered by this patch.
I would like Brian (Akins) to give it a try, and of course if someone else has
a look at it, it wouldn't hurt.

Regards

Rüdiger
Index: modules/http/http_core.c
===
--- modules/http/http_core.c(Revision 357457)
+++ modules/http/http_core.c(Arbeitskopie)
@@ -39,6 +39,7 @@
 AP_DECLARE_DATA ap_filter_rec_t *ap_http_input_filter_handle;
 AP_DECLARE_DATA ap_filter_rec_t *ap_http_header_filter_handle;
 AP_DECLARE_DATA ap_filter_rec_t *ap_chunk_filter_handle;
+AP_DECLARE_DATA ap_filter_rec_t *ap_broken_backend_filter_handle;
 AP_DECLARE_DATA ap_filter_rec_t *ap_byterange_filter_handle;
 
 static const char *set_keep_alive_timeout(cmd_parms *cmd, void *dummy,
@@ -237,6 +238,10 @@
     ap_chunk_filter_handle =
         ap_register_output_filter("CHUNK", ap_http_chunk_filter,
                                   NULL, AP_FTYPE_TRANSCODE);
+    ap_broken_backend_filter_handle =
+        ap_register_output_filter("BROKEN_BACKEND",
+                                  ap_http_broken_backend_filter,
+                                  NULL, AP_FTYPE_PROTOCOL);
     ap_byterange_filter_handle =
         ap_register_output_filter("BYTERANGE", ap_byterange_filter,
                                   NULL, AP_FTYPE_PROTOCOL);
Index: modules/http/http_filters.c
===
--- modules/http/http_filters.c (Revision 357457)
+++ modules/http/http_filters.c (Arbeitskopie)
@@ -1043,6 +1043,9 @@
      */
         ap_add_output_filter("CHUNK", NULL, r, r->connection);
     }
+    /* If we have a Proxy request, add the BROKEN_BACKEND filter now */
+    if (r->proxyreq != PROXYREQ_NONE)
+        ap_add_output_filter("BROKEN_BACKEND", NULL, r, r->connection);
 
     /* Don't remove this filter until after we have added the CHUNK filter.
      * Otherwise, f->next won't be the CHUNK filter and thus the first
@@ -1318,3 +1321,23 @@
 return bufsiz;
 }
 
+apr_status_t ap_http_broken_backend_filter(ap_filter_t *f,
+                                           apr_bucket_brigade *b)
+{
+    request_rec *r = f->r;
+    apr_bucket *e;
+
+    for (e = APR_BRIGADE_FIRST(b);
+         e != APR_BRIGADE_SENTINEL(b);
+         e = APR_BUCKET_NEXT(e))
+    {
+        if (AP_BUCKET_IS_ERROR(e)
+            && (((ap_bucket_error *)(e->data))->status == HTTP_BAD_GATEWAY)) {
+            /* stream aborted and we have not ended it yet */
+            r->connection->keepalive = AP_CONN_CLOSE;
+        }
+    }
+
+    return ap_pass_brigade(f->next, b);
+}
+
Index: modules/http/mod_core.h
===
--- modules/http/mod_core.h (Revision 357457)
+++ modules/http/mod_core.h (Arbeitskopie)
@@ -42,6 +42,7 @@
 extern AP_DECLARE_DATA ap_filter_rec_t *ap_http_input_filter_handle;
 extern AP_DECLARE_DATA ap_filter_rec_t *ap_http_header_filter_handle;
 extern AP_DECLARE_DATA ap_filter_rec_t *ap_chunk_filter_handle;
+extern AP_DECLARE_DATA ap_filter_rec_t *ap_broken_backend_filter_handle;
 extern AP_DECLARE_DATA ap_filter_rec_t *ap_byterange_filter_handle;
 
 /*
@@ -54,6 +55,10 @@
 /* HTTP/1.1 chunked transfer encoding filter. */
 apr_status_t ap_http_chunk_filter(ap_filter_t *f, apr_bucket_brigade *b);
 
+/* Filter to close the connection to the client if backend broke */
+apr_status_t ap_http_broken_backend_filter(ap_filter_t *f,
+                                           apr_bucket_brigade *b);
+
 char *ap_response_code_string(request_rec *r, int error_index);
 
 /**
Index: modules/http/chunk_filter.c
===
--- modules/http/chunk_filter.c (Revision 357457)
+++ modules/http/chunk_filter.c (Arbeitskopie)
@@ -39,6 +39,12 @@
 
 #include "mod_core.h"
 
+/*
+ * A pointer to this is used to memorize in the filter context that a bad
+ * gateway error bucket had been seen. It is used as an invented unique
+ * pointer.
+ */
+static char bad_gateway_seen;
+
 apr_status_t ap_http_chunk_filter(ap_filter_t *f, apr_bucket_brigade *b)
 {
 #define ASCII_CRLF  "\015\012"
@@ -67,6 +73,16 @@
                 eos = e;
                 break;
             }
+            if (AP_BUCKET_IS_ERROR(e)
+                && (((ap_bucket_error *)(e->data))->status
+                    == HTTP_BAD_GATEWAY)) {
+                /*
+                 * We had a broken backend. Memorize this in the filter
+                 * context.
+                 */
+                f->ctx = &bad_gateway_seen;
+                continue;
+            }
             if (APR_BUCKET_IS_FLUSH(e)) {
                 flush = e;
                 more = apr_brigade_split(b, APR_BUCKET_NEXT(e));
@@ 

Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-03 Thread Ruediger Pluem


On 01/03/2006 03:52 AM, Justin Erenkrantz wrote:
 On Mon, Jan 02, 2006 at 10:18:19PM +0100, Ruediger Pluem wrote:
 

[..cut..]

 
2. Proposal
If a subrequest has a broken backend do not send the error bucket. Only
set r->no_cache to ensure that this subrequest response does not get
cached.
 
 
 I think we still need to ensure that an error bucket is sent too, right?
 Otherwise, the connection will be reused - what am I missing?  -- justin

No, you are not missing anything. The question to me was: Do we need to close
a keepalive on the main request just because a subrequest failed in the middle
of the response?
Or to be more precise: Should cutting off the keepalive be the default
behaviour in such cases, with the chance for subrequest creators to remove the
error bucket and to make the response cacheable again, or should it be the
other way round, with the subrequest creator responsible for preventing
caching and closing the keepalive by sending the error bucket himself if he
thinks that this is needed?
While writing this I personally come to the conclusion that the first proposal
(sending the error bucket) is safer as a default behaviour.


Regards

Rüdiger



Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-03 Thread Justin Erenkrantz
On 1/3/06, Ruediger Pluem [EMAIL PROTECTED] wrote:
 2. Proposal
 If a subrequest has a broken backend do not send the error bucket. Only
 set r->no_cache to ensure that this subrequest response does not get
 cached.
 
 
  I think we still need to ensure that an error bucket is sent too, right?
  Otherwise, the connection will be reused - what am I missing?  -- justin
 
 No, you are not missing anything. The question to me was: Do we need to close
 a keepalive on the main request just because a subrequest failed in the middle
 of the response?
 Or to be more precise: Should cutting off the keepalive be the default
 behaviour in such cases, with the chance for subrequest creators to remove
 the error bucket and to make the response cacheable again, or should it be
 the other way round, with the subrequest creator responsible for preventing
 caching and closing the keepalive by sending the error bucket himself if he
 thinks that this is needed?
 While writing this I personally come to the conclusion that the first
 proposal (sending the error bucket) is safer as a default behaviour.

Oh, I didn't realize you intended it as an either/or scenario.  Then,
yes, I agree that #1 is correct.  =)  -- justin


Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-03 Thread Jim Jagielski


On Jan 2, 2006, at 4:18 PM, Ruediger Pluem wrote:


1. Proposal
If a subrequest has a broken backend also set r->no_cache for the
main request and ensure that the chunk filter does not send the last
chunk marker in this case.


2. Proposal
If a subrequest has a broken backend do not send the error bucket.
Only set r->no_cache to ensure that this subrequest response does not
get cached.



Proposal #1


Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-02 Thread Ruediger Pluem


On 12/20/2005 10:14 PM, Ruediger Pluem wrote:

[..cut..]

 But you pointed me to an interesting thing: If the main response is
 T-E chunked and the backend error happened during the subrequest, the
 chunk filter may sometimes add the last-chunk marker (if the brigade
 containing the error bucket does *not* contain an eos bucket) and
 sometimes not (if the brigade containing the error bucket does contain
 an eos bucket).
 In the case that the broken backend happened on the main request the brigade
 always contains both buckets as they both get added to the brigade. But in
 the subrequest case I guess the eos bucket (of the subrequest) gets removed
 and the main request adds its own one later, maybe to a different brigade
 than the error bucket.

Meanwhile I took some time to investigate this further. I checked with the
subrequests caused by mod_include's #include virtual tag. In this case the
error bucket and the eos bucket seem to get passed to the chunk filter in
different brigades. This means the chunk filter does send the last-chunk
marker in this case. OTOH the error bucket causes the keepalive connection
on the main request to be closed, and the partially generated page of
mod_include gets cached.
Before I take any further actions I would like to discuss the desired
behaviour in the subrequest case:

1. Proposal
If a subrequest has a broken backend also set r->no_cache for the main request
and ensure that the chunk filter does not send the last-chunk marker in this
case.

2. Proposal
If a subrequest has a broken backend do not send the error bucket. Only set
r->no_cache to ensure that this subrequest response does not get cached.

Further proposals are welcome.
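
(Editor's sketch of what proposal 1's flagging could look like; the helper
name is invented, and walking r->main is an assumption about where the flag
would be set.)

    #include "httpd.h"

    /* Sketch for proposal 1: when a subrequest's backend breaks, flag the
     * subrequest *and* the main request as non-cacheable. */
    static void flag_broken_subrequest(request_rec *r)
    {
        request_rec *mr = r;

        r->no_cache = 1;       /* the subrequest's own response */
        while (mr->main) {     /* climb to the main request */
            mr = mr->main;
        }
        mr->no_cache = 1;      /* keep the assembled page out of the cache */
    }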

Furthermore I am wondering if the current behaviour of mod_include is correct.
Shouldn't we prevent caching if any of the mod_include operations went wrong?
The broken backend for the #include virtual tag is only one example of this.
And if we decide that we should not cache in this situation, how do we let
external caches know? Should we also not send the last-chunk marker in this
case?


 I am even getting unsure if the brigade always contains error and eos bucket
 in the main request case, as a brigade split might happen on the way
 to the chunk filter. Does anybody know if this can happen?
 

Anybody found an answer to this question? If this is not certain, it may be a
good idea to memorize the fact that the error bucket was seen in the context
of the chunk filter.

Regards

Rüdiger




Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2006-01-02 Thread Justin Erenkrantz
On Mon, Jan 02, 2006 at 10:18:19PM +0100, Ruediger Pluem wrote:
 Before I take any further actions I would like to discuss the desired 
 behaviour
 in the subrequest case:
 
 1. Proposal
 If a subrequest has a broken backend also set r->no_cache for the main
 request and ensure that the chunk filter does not send the last chunk
 marker in this case.

+1.

 2. Proposal
 If a subrequest has a broken backend do not send the error bucket. Only
 set r->no_cache to ensure that this subrequest response does not get
 cached.

I think we still need to ensure that an error bucket is sent too, right?
Otherwise, the connection will be reused - what am I missing?  -- justin


Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2005-12-20 Thread Brian Akins

In my random tests I discovered something:

We do not want to close the connection to the client if the proxy request is
a sub-request.  We somehow have to flag an error in that case, but we do
not want to kill the entire connection.


This just bit me in some QA stuff.  I had to revert!

Ideas?

--
Brian Akins
Lead Systems Engineer
CNN Internet Technologies


Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2005-12-20 Thread Ruediger Pluem


On 12/20/2005 09:15 PM, Brian Akins wrote:
 In my random tests I discovered something:
 
 We do not want to close connection to client if the proxy request is a
 sub-request.  We somehow have to flag an error in that case, but we do
 not want to kill entire connection.

I guess this is technically possible, but is it really useful? The
subrequest returns an incomplete response and thus could make the whole
response invalid. But maybe I just miss the correct usecase for this. So
could you please explain to me where this bit you?
But you pointed me to an interesting thing: If the main response is
T-E chunked and the backend error happened during the subrequest, the
chunk filter may sometimes add the last-chunk marker (if the brigade
containing the error bucket does *not* contain an eos bucket) and
sometimes not (if the brigade containing the error bucket does contain
an eos bucket).
In the case that the broken backend happened on the main request the brigade
always contains both buckets as they both get added to the brigade. But in
the subrequest case I guess the eos bucket (of the subrequest) gets removed
and the main request adds its own one later, maybe to a different brigade
than the error bucket.
I am even getting unsure if the brigade always contains error and eos bucket
in the main request case, as a brigade split might happen on the way
to the chunk filter. Does anybody know if this can happen?

Regards

Rüdiger


Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2005-12-19 Thread Jim Jagielski


On Dec 18, 2005, at 5:12 PM, Ruediger Pluem wrote:




On 12/18/2005 06:21 PM, Jim Jagielski wrote:

[..cut..]



My thoughts were something more like a ap_http_error_ofilter
which simply checks for the error bucket and does appropriate
things; something very general that the full http chain can
use.


Also an interesting idea. Do you already have further applications for this in
mind apart from the current problem?



Not yet, but certainly a generic solution provides greater
flexibility for other things that might want to use it in
the future :)


Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2005-12-18 Thread Ruediger Pluem


On 12/17/2005 11:22 PM, Ruediger Pluem wrote:
 
 On 12/17/2005 06:30 PM, Jim Jagielski wrote:

[..cut..]

 
 
Even so, I say let's fold this into trunk as is, and
then work on abstracting it out.
 

Done as r357461.
Attached a patch that

- fixes the same problem for mod_proxy_ajp
- puts the common code in proxy_util
- fixes a little return code issue that is related to Justin's original patch
  in r354628


Regards

Rüdiger
Index: modules/proxy/mod_proxy_ajp.c
===
--- modules/proxy/mod_proxy_ajp.c   (Revision 357461)
+++ modules/proxy/mod_proxy_ajp.c   (Arbeitskopie)
@@ -138,6 +138,7 @@
 int havebody = 1;
 int isok = 1;
 apr_off_t bb_len;
+int data_sent = 0;
 #ifdef FLUSHING_BANDAID
 apr_int32_t conn_poll_fd;
 apr_pollfd_t *conn_poll;
@@ -348,6 +349,7 @@
                          "proxy: error processing body");
             isok = 0;
         }
+        data_sent = 1;
         apr_brigade_cleanup(output_brigade);
     }
     else {
@@ -363,6 +365,7 @@
                           "proxy: error processing body");
             isok = 0;
         }
+        data_sent = 1;
         break;
     default:
         isok = 0;
@@ -400,8 +403,6 @@
     }
     apr_brigade_destroy(input_brigade);
 
-    apr_brigade_destroy(output_brigade);
-
     if (status != APR_SUCCESS) {
         /* We had a failure: Close connection to backend */
         conn->close++;
@@ -409,9 +410,20 @@
                      "proxy: send body failed to %pI (%s)",
                      conn->worker->cp->addr,
                      conn->worker->hostname);
+        /*
+         * If we already sent data, signal a broken backend connection
+         * upwards in the chain.
+         */
+        if (data_sent) {
+            ap_proxy_backend_broke(r, output_brigade);
+            /* Return DONE to avoid error messages being added to the stream */
+            return DONE;
+        }
         return HTTP_SERVICE_UNAVAILABLE;
     }
 
+    apr_brigade_destroy(output_brigade);
+
 /* Nice we have answer to send to the client */
 if (result == CMD_AJP13_END_RESPONSE  isok) {
 ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r-server,
Index: modules/proxy/proxy_util.c
===
--- modules/proxy/proxy_util.c  (Revision 357461)
+++ modules/proxy/proxy_util.c  (Arbeitskopie)
@@ -2129,3 +2129,18 @@
 lb_workers_limit = proxy_lb_workers + PROXY_DYNAMIC_BALANCER_LIMIT;
 return lb_workers_limit;
 }
+
+PROXY_DECLARE(apr_status_t) ap_proxy_backend_broke(request_rec *r,
+                                                   apr_bucket_brigade *brigade)
+{
+    apr_bucket *e;
+    conn_rec *c = r->connection;
+
+    r->no_cache = 1;
+    e = ap_bucket_error_create(HTTP_BAD_GATEWAY, NULL, c->pool,
+                               c->bucket_alloc);
+    APR_BRIGADE_INSERT_TAIL(brigade, e);
+    e = apr_bucket_eos_create(c->bucket_alloc);
+    APR_BRIGADE_INSERT_TAIL(brigade, e);
+    return ap_pass_brigade(r->output_filters, brigade);
+}
Index: modules/proxy/mod_proxy_http.c
===
--- modules/proxy/mod_proxy_http.c  (Revision 357461)
+++ modules/proxy/mod_proxy_http.c  (Arbeitskopie)
@@ -1199,6 +1199,7 @@
      * are being read. */
     int pread_len = 0;
     apr_table_t *save_table;
+    int backend_broke = 0;
 
 bb = apr_brigade_create(p, c-bucket_alloc);
 
@@ -1485,13 +1486,8 @@
              */
             ap_log_cerror(APLOG_MARK, APLOG_ERR, rv, c,
                           "proxy: error reading response");
-            r->no_cache = 1;
-            e = ap_bucket_error_create(HTTP_BAD_GATEWAY, NULL,
-                                       c->pool, c->bucket_alloc);
-            APR_BRIGADE_INSERT_TAIL(bb, e);
-            e = apr_bucket_eos_create(c->bucket_alloc);
-            APR_BRIGADE_INSERT_TAIL(bb, e);
-            ap_pass_brigade(r->output_filters, bb);
+            ap_proxy_backend_broke(r, bb);
+            backend_broke = 1;
             backend->close = 1;
             break;
         }
@@ -1559,7 +1555,7 @@
 } while (interim_response);
 
 /* If our connection with the client is to be aborted, return DONE. */
-    if (c->aborted) {
+    if (c->aborted || backend_broke) {
         return DONE;
     }
 
Index: modules/proxy/mod_proxy.h
===
--- modules/proxy/mod_proxy.h   (Revision 357461)
+++ modules/proxy/mod_proxy.h   (Arbeitskopie)
@@ -669,6 +669,16 @@
 PROXY_DECLARE(int) ap_proxy_connection_create(const char *proxy_function,
   

Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2005-12-18 Thread Ruediger Pluem


On 12/18/2005 02:39 PM, Ruediger Pluem wrote:
 

[..cut..]

 Done as r357461.
 Attached a patch that
 
 - fixes the same problem for mod_proxy_ajp
 - puts the common code in proxy_util
 - fixes a little return code issue that is related to Justins original patch 
 in r354628

Forget about this one for a while. I detected some problems with it and I will
repost it once I have solved these problems.

Regards

Rüdiger



Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2005-12-18 Thread Ruediger Pluem

On 12/17/2005 06:30 PM, Jim Jagielski wrote:

[..cut..]

 
 
 I still think that having this http specific error mode
 hidden within the core output filter is misguided. Instead,
 a specific http output filter is, imo, a better place.
 

How about the attached patch?
It moves the code into a separate http protocol specific filter that
is only inserted if the request is a proxy request.
Of course it adds the effort of another loop over the brigade.

Regards

Rüdiger
Index: server/core_filters.c
===
--- server/core_filters.c   (Revision 357461)
+++ server/core_filters.c   (Arbeitskopie)
@@ -315,10 +315,8 @@
  apr_size_t *bytes_written,
  conn_rec *c);
 
-static void detect_error_bucket(apr_bucket *bucket, conn_rec *c);
+static void remove_empty_buckets(apr_bucket_brigade *bb);
 
-static void remove_empty_buckets(apr_bucket_brigade *bb, conn_rec *c);
-
 static apr_status_t send_brigade_blocking(apr_socket_t *s,
   apr_bucket_brigade *bb,
   apr_size_t *bytes_written,
@@ -489,7 +487,7 @@
 if (bb == NULL) {
 return;
 }
-remove_empty_buckets(bb, c);
+remove_empty_buckets(bb);
 if (!APR_BRIGADE_EMPTY(bb)) {
         c->data_in_output_filters = 1;
 if (make_a_copy) {
@@ -528,7 +526,7 @@
 struct iovec vec[MAX_IOVEC_TO_WRITE];
 apr_size_t nvec = 0;
 
-remove_empty_buckets(bb, c);
+remove_empty_buckets(bb);
 
 for (bucket = APR_BRIGADE_FIRST(bb);
  bucket != APR_BRIGADE_SENTINEL(bb);
@@ -598,26 +596,16 @@
 }
 }
 
-remove_empty_buckets(bb, c);
+remove_empty_buckets(bb);
 
 return APR_SUCCESS;
 }
 
-static void detect_error_bucket(apr_bucket *bucket, conn_rec *c)
+static void remove_empty_buckets(apr_bucket_brigade *bb)
 {
-    if (AP_BUCKET_IS_ERROR(bucket)
-        && (((ap_bucket_error *)(bucket->data))->status == HTTP_BAD_GATEWAY)) {
-        /* stream aborted and we have not ended it yet */
-        c->keepalive = AP_CONN_CLOSE;
-    }
-}
-
-static void remove_empty_buckets(apr_bucket_brigade *bb, conn_rec *c)
+{
     apr_bucket *bucket;
     while (((bucket = APR_BRIGADE_FIRST(bb)) != APR_BRIGADE_SENTINEL(bb))
            && (APR_BUCKET_IS_METADATA(bucket) || (bucket->length == 0))) {
-        detect_error_bucket(bucket, c);
         APR_BUCKET_REMOVE(bucket);
         apr_bucket_destroy(bucket);
     }
@@ -690,7 +678,6 @@
     for (i = offset; i < nvec; ) {
         apr_bucket *bucket = APR_BRIGADE_FIRST(bb);
         if (APR_BUCKET_IS_METADATA(bucket)) {
-            detect_error_bucket(bucket, c);
             APR_BUCKET_REMOVE(bucket);
             apr_bucket_destroy(bucket);
         }
Index: modules/http/http_core.c
===
--- modules/http/http_core.c(Revision 357461)
+++ modules/http/http_core.c(Arbeitskopie)
@@ -39,6 +39,7 @@
 AP_DECLARE_DATA ap_filter_rec_t *ap_http_input_filter_handle;
 AP_DECLARE_DATA ap_filter_rec_t *ap_http_header_filter_handle;
 AP_DECLARE_DATA ap_filter_rec_t *ap_chunk_filter_handle;
+AP_DECLARE_DATA ap_filter_rec_t *ap_broken_backend_filter_handle;
 AP_DECLARE_DATA ap_filter_rec_t *ap_byterange_filter_handle;
 
 static const char *set_keep_alive_timeout(cmd_parms *cmd, void *dummy,
@@ -242,6 +243,10 @@
     ap_chunk_filter_handle =
         ap_register_output_filter("CHUNK", ap_http_chunk_filter,
                                   NULL, AP_FTYPE_TRANSCODE);
+    ap_broken_backend_filter_handle =
+        ap_register_output_filter("BROKEN_BACKEND",
+                                  ap_http_broken_backend_filter,
+                                  NULL, AP_FTYPE_PROTOCOL);
     ap_byterange_filter_handle =
         ap_register_output_filter("BYTERANGE", ap_byterange_filter,
                                   NULL, AP_FTYPE_PROTOCOL);
Index: modules/http/http_filters.c
===
--- modules/http/http_filters.c (Revision 357461)
+++ modules/http/http_filters.c (Arbeitskopie)
@@ -1055,6 +1055,9 @@
      */
         ap_add_output_filter("CHUNK", NULL, r, r->connection);
     }
+    /* If we have a Proxy request, add the BROKEN_BACKEND filter now */
+    if (r->proxyreq != PROXYREQ_NONE)
+        ap_add_output_filter("BROKEN_BACKEND", NULL, r, r->connection);
 
     /* Don't remove this filter until after we have added the CHUNK filter.
      * Otherwise, f->next won't be the CHUNK filter and thus the first
@@ -1330,3 +1333,23 @@
 return bufsiz;
 }
 
+apr_status_t ap_http_broken_backend_filter(ap_filter_t *f,
+                                           apr_bucket_brigade *b)
+{
+    request_rec *r = f->r;
+    apr_bucket *e;
+
+    for (e = APR_BRIGADE_FIRST(b);
+         e != 

Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2005-12-18 Thread Jim Jagielski


On Dec 18, 2005, at 10:59 AM, Ruediger Pluem wrote:


How about the attached patch?
It moves the code into a separate http protocol specific filter that
is only inserted if the request is a proxy request.
Of course it adds the effort of another loop over the brigade.



My thoughts were something more like a ap_http_error_ofilter
which simply checks for the error bucket and does appropriate
things; something very general that the full http chain can
use.

Again, since we're working on trunk, let's fold these in
and save tuning and semantics with the actual codebase.


Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2005-12-18 Thread Ruediger Pluem


On 12/18/2005 06:21 PM, Jim Jagielski wrote:

[..cut..]

 
 My thoughts were something more like a ap_http_error_ofilter
 which simply checks for the error bucket and does appropriate
 things; something very general that the full http chain can
 use.

Also an interesting idea. Do you already have further applications for this in
mind apart from the current problem?

 
 Again, since we're working on trunk, let's fold these in
 and save tuning and semantics with the actual codebase.

Committed as r357519. I will try to commit to the trunk earlier in the future.

Regards

Rüdiger





Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2005-12-17 Thread Ruediger Pluem


On 12/16/2005 09:41 AM, Plüm wrote:

[..cut..]

 
 Currently I am away from my development env. I hope I can post
 a complete patch with all my ideas by tomorrow.
 

I worked out a new version of the patch. It is attached. I checked
it again with my jsp and it seems to work well. If nobody has further
objections I would commit to the trunk. After that I would do the following
next steps:

- Examine mod_proxy_ajp for the same problem.
- Put the common code of mod_proxy_ajp and mod_proxy_http that signals a broken
  connection into a new function in proxy_util

Regards

Rüdiger
Index: server/core_filters.c
===
--- server/core_filters.c   (Revision 357328)
+++ server/core_filters.c   (Arbeitskopie)
@@ -315,8 +315,10 @@
  apr_size_t *bytes_written,
  conn_rec *c);
 
-static void remove_empty_buckets(apr_bucket_brigade *bb);
+static void detect_error_bucket(apr_bucket *bucket, conn_rec *c);
 
+static void remove_empty_buckets(apr_bucket_brigade *bb, conn_rec *c);
+
 static apr_status_t send_brigade_blocking(apr_socket_t *s,
   apr_bucket_brigade *bb,
   apr_size_t *bytes_written,
@@ -487,7 +489,7 @@
 if (bb == NULL) {
 return;
 }
-remove_empty_buckets(bb);
+remove_empty_buckets(bb, c);
 if (!APR_BRIGADE_EMPTY(bb)) {
         c->data_in_output_filters = 1;
 if (make_a_copy) {
@@ -526,7 +528,7 @@
 struct iovec vec[MAX_IOVEC_TO_WRITE];
 apr_size_t nvec = 0;
 
-remove_empty_buckets(bb);
+remove_empty_buckets(bb, c);
 
 for (bucket = APR_BRIGADE_FIRST(bb);
  bucket != APR_BRIGADE_SENTINEL(bb);
@@ -596,16 +598,26 @@
 }
 }
 
-remove_empty_buckets(bb);
+remove_empty_buckets(bb, c);
 
 return APR_SUCCESS;
 }
 
-static void remove_empty_buckets(apr_bucket_brigade *bb)
+static void detect_error_bucket(apr_bucket *bucket, conn_rec *c)
 {
+    if (AP_BUCKET_IS_ERROR(bucket)
+        && (((ap_bucket_error *)(bucket->data))->status == HTTP_BAD_GATEWAY)) {
+        /* stream aborted and we have not ended it yet */
+        c->keepalive = AP_CONN_CLOSE;
+    }
+}
+
+static void remove_empty_buckets(apr_bucket_brigade *bb, conn_rec *c)
+{
     apr_bucket *bucket;
     while (((bucket = APR_BRIGADE_FIRST(bb)) != APR_BRIGADE_SENTINEL(bb))
            && (APR_BUCKET_IS_METADATA(bucket) || (bucket->length == 0))) {
+        detect_error_bucket(bucket, c);
         APR_BUCKET_REMOVE(bucket);
         apr_bucket_destroy(bucket);
     }
@@ -678,6 +690,7 @@
     for (i = offset; i < nvec; ) {
         apr_bucket *bucket = APR_BRIGADE_FIRST(bb);
         if (APR_BUCKET_IS_METADATA(bucket)) {
+            detect_error_bucket(bucket, c);
             APR_BUCKET_REMOVE(bucket);
             apr_bucket_destroy(bucket);
         }
Index: modules/http/chunk_filter.c
===
--- modules/http/chunk_filter.c (Revision 357328)
+++ modules/http/chunk_filter.c (Arbeitskopie)
@@ -47,6 +47,7 @@
 apr_bucket_brigade *more;
 apr_bucket *e;
 apr_status_t rv;
+int bad_gateway_seen = 0;
 
 for (more = NULL; b; b = more, more = NULL) {
 apr_off_t bytes = 0;
@@ -67,6 +68,13 @@
                 eos = e;
                 break;
             }
+            if (AP_BUCKET_IS_ERROR(e)
+                && (((ap_bucket_error *)(e->data))->status
+                    == HTTP_BAD_GATEWAY)) {
+                /* We had a broken backend. Memorize this. */
+                bad_gateway_seen = 1;
+                continue;
+            }
 if (APR_BUCKET_IS_FLUSH(e)) {
 flush = e;
 more = apr_brigade_split(b, APR_BUCKET_NEXT(e));
@@ -146,13 +154,19 @@
  *   2) the trailer
  *   3) the end-of-chunked body CRLF
  *
- * If there is no EOS bucket, then do nothing.
+ * If there is no EOS bucket, or if we had seen an error bucket with
+ * status HTTP_BAD_GATEWAY then do nothing.
+ * The error bucket with status HTTP_BAD_GATEWAY indicates that the
+ * connection to the backend (mod_proxy) broke in the middle of the
+ * response. In order to signal the client that something went wrong
+ * we do not create the last-chunk marker and set c->keepalive to
+ * AP_CONN_CLOSE in the core output filter.
  *
  * XXX: it would be nice to combine this with the end-of-chunk
  * marker above, but this is a bit more straight-forward for
  * now.
  */
-    if (eos != NULL) {
+    if (eos && !bad_gateway_seen) {
 /* XXX: (2) trailers ... does not yet exist */
 e = apr_bucket_immortal_create(ASCII_ZERO ASCII_CRLF

Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2005-12-17 Thread Jim Jagielski


On Dec 17, 2005, at 11:32 AM, Ruediger Pluem wrote:


Index: server/core_filters.c
===
--- server/core_filters.c   (Revision 357328)
+++ server/core_filters.c   (Arbeitskopie)
@@ -315,8 +315,10 @@
                                           apr_size_t *bytes_written,
                                           conn_rec *c);

-static void remove_empty_buckets(apr_bucket_brigade *bb);
+static void detect_error_bucket(apr_bucket *bucket, conn_rec *c);

+static void remove_empty_buckets(apr_bucket_brigade *bb, conn_rec *c);
+
 static apr_status_t send_brigade_blocking(apr_socket_t *s,
                                           apr_bucket_brigade *bb,
                                           apr_size_t *bytes_written,
@@ -487,7 +489,7 @@
     if (bb == NULL) {
         return;
     }
-    remove_empty_buckets(bb);
+    remove_empty_buckets(bb, c);
     if (!APR_BRIGADE_EMPTY(bb)) {
         c->data_in_output_filters = 1;
         if (make_a_copy) {
@@ -526,7 +528,7 @@
     struct iovec vec[MAX_IOVEC_TO_WRITE];
     apr_size_t nvec = 0;

-    remove_empty_buckets(bb);
+    remove_empty_buckets(bb, c);

     for (bucket = APR_BRIGADE_FIRST(bb);
          bucket != APR_BRIGADE_SENTINEL(bb);
@@ -596,16 +598,26 @@
     }
 }

-    remove_empty_buckets(bb);
+    remove_empty_buckets(bb, c);

     return APR_SUCCESS;
 }

-static void remove_empty_buckets(apr_bucket_brigade *bb)
+static void detect_error_bucket(apr_bucket *bucket, conn_rec *c)
 {
+    if (AP_BUCKET_IS_ERROR(bucket)
+        && (((ap_bucket_error *)(bucket->data))->status == HTTP_BAD_GATEWAY)) {
+        /* stream aborted and we have not ended it yet */
+        c->keepalive = AP_CONN_CLOSE;
+    }
+}
+
+static void remove_empty_buckets(apr_bucket_brigade *bb, conn_rec *c)
+{
     apr_bucket *bucket;
     while (((bucket = APR_BRIGADE_FIRST(bb)) != APR_BRIGADE_SENTINEL(bb))
            && (APR_BUCKET_IS_METADATA(bucket) || (bucket->length == 0))) {
+        detect_error_bucket(bucket, c);
         APR_BUCKET_REMOVE(bucket);
         apr_bucket_destroy(bucket);
     }
@@ -678,6 +690,7 @@
     for (i = offset; i < nvec; ) {
         apr_bucket *bucket = APR_BRIGADE_FIRST(bb);
         if (APR_BUCKET_IS_METADATA(bucket)) {
+            detect_error_bucket(bucket, c);
             APR_BUCKET_REMOVE(bucket);
             apr_bucket_destroy(bucket);
         }



I still think that having this http specific error mode
hidden within the core output filter is misguided. Instead,
a specific http output filter is, imo, a better place.

Even so, I say let's fold this into trunk as is, and
then work on abstracting it out.


Re: AW: AW: AW: 2.2 mod_http_proxy and partial pages

2005-12-17 Thread Ruediger Pluem


On 12/17/2005 06:30 PM, Jim Jagielski wrote:
 
 On Dec 17, 2005, at 11:32 AM, Ruediger Pluem wrote:
 

[..cut..]

 
 I still think that having this http specific error mode
 hidden within the core output filter is misguided. Instead,
 a specific http output filter is, imo, a better place.

Sounds reasonable. I think the code can be placed in http_filters.c
and the filter itself could be added by ap_http_header_filter once
it has finished its own work (just the way it currently does with the chunk
filter).

 
 Even so, I say let's fold this into trunk as is, and
 then work on abstracting it out.

Yes, I will do so. Even more, I will propose it for backport.
Although this fix is not perfect with respect to your remarks
above, it fixes an actual problem without being a total nasty hack
and thus should be included in the 2.2.x branch. If we manage to
extract this into a separate filter before 2.2.1 I would be even more
happy.


Regards

Rüdiger