On 10/19/2015 06:05 PM, Yann Ylavic wrote:
[From users@]

On Mon, Oct 19, 2015 at 11:44 PM, Andy Wang <aw...@ptc.com> wrote:

The issue is currently reproduced using Apache httpd 2.4.16, mod_jk 1.2.41
and tomcat 8.0.28.

I've created a very very simple JSP page that does nothing but print a small
string, but I've tried changing the jsp page to print a very very large
string (10000+ characters) and no difference.

If I POST to this JSP page, and something like mod_deflate is in place to
force a chunked transfer the TCP packet capture looks like this:

No.     Time           Source  Destination   Protocol Length Info
    1850 4827.762721000 client  server        TCP      66     54131→2280
[SYN] Seq=0 Win=8192 Len=0 MSS=1460 WS=256 SACK_PERM=1
    1851 4827.764976000 server  client        TCP      66     2280→54131
[SYN, ACK] Seq=0 Ack=1 Win=8192 Len=0 MSS=1460 WS=256 SACK_PERM=1
    1852 4827.765053000 client  server        TCP      54     54131→2280
[ACK] Seq=1 Ack=1 Win=131328 Len=0
    1853 4827.765315000 client  server        HTTP     791    POST
/JSPtoPostTo HTTP/1.1
    1854 4827.777981000 server  client        TCP      466    [TCP segment of
a reassembled PDU]
    1855 4827.982961000 client  server        TCP      54     54131→2280
[ACK] Seq=738 Ack=413 Win=130816 Len=0
    1856 4832.770458000 server  client        HTTP     74     HTTP/1.1 200 OK
(text/html)
    1857 4832.770459000 server  client        TCP      60     2280→54131
[FIN, ACK] Seq=433 Ack=738 Win=65536 Len=0
    1858 4832.770555000 client  server        TCP      54     54131→2280
[ACK] Seq=738 Ack=434 Win=130816 Len=0
    1859 4832.770904000 client  server        TCP      54     54131→2280
[FIN, ACK] Seq=738 Ack=434 Win=130816 Len=0
    1860 4832.774200000 server  client        TCP      60     2280→54131
[ACK] Seq=434 Ack=739 Win=65536 Len=0

Specifically, note the 5-second delay between the first data segment (No. 1854)
and the second data segment (No. 1856).

This is the deferred write triggering *after* the keepalive timeout,
since no subsequent request was pipelined.
I wonder if we shouldn't issue a flush at the end of each request when
the next one is not already there, i.e.:

Index: modules/http/http_request.c
===================================================================
--- modules/http/http_request.c    (revision 1708095)
+++ modules/http/http_request.c    (working copy)
@@ -228,8 +228,9 @@ AP_DECLARE(void) ap_die(int type, request_rec *r)
      ap_die_r(type, r, r->status);
  }

-static void check_pipeline(conn_rec *c, apr_bucket_brigade *bb)
+static int check_pipeline(conn_rec *c, apr_bucket_brigade *bb)
  {
+    c->data_in_input_filters = 0;
      if (c->keepalive != AP_CONN_CLOSE && !c->aborted) {
          apr_status_t rv;

@@ -236,17 +237,12 @@ AP_DECLARE(void) ap_die(int type, request_rec *r)
          AP_DEBUG_ASSERT(APR_BRIGADE_EMPTY(bb));
          rv = ap_get_brigade(c->input_filters, bb, AP_MODE_SPECULATIVE,
                              APR_NONBLOCK_READ, 1);
-        if (rv != APR_SUCCESS || APR_BRIGADE_EMPTY(bb)) {
-            /*
-             * Error or empty brigade: There is no data present in the input
-             * filter
-             */
-            c->data_in_input_filters = 0;
-        }
-        else {
+        if (rv == APR_SUCCESS && !APR_BRIGADE_EMPTY(bb)) {
              c->data_in_input_filters = 1;
+            return 1;
          }
      }
+    return 0;
  }


@@ -287,11 +283,30 @@ AP_DECLARE(void) ap_process_request_after_handler(
       * already by the EOR bucket's cleanup function.
       */

-    check_pipeline(c, bb);
+    if (!check_pipeline(c, bb)) {
+        apr_status_t rv;
+
+        b = apr_bucket_flush_create(c->bucket_alloc);
+        APR_BRIGADE_INSERT_HEAD(bb, b);
+        rv = ap_pass_brigade(c->output_filters, bb);
+        if (APR_STATUS_IS_TIMEUP(rv)) {
+            /*
+             * Notice a timeout as an error message. This might be
+             * valuable for detecting clients with broken network
+             * connections or possible DoS attacks.
+             *
+             * It is still safe to use r / r->pool here as the eor bucket
+             * could not have been destroyed in the event of a timeout.
+             */
+            ap_log_cerror(APLOG_MARK, APLOG_INFO, rv, c, APLOGNO(01581)
+                          "Timeout while flushing data to the client");
+        }
+    }
      apr_brigade_destroy(bb);
-    if (c->cs)
+    if (c->cs) {
          c->cs->state = (c->aborted) ? CONN_STATE_LINGER
                                      : CONN_STATE_WRITE_COMPLETION;
+    }
      AP_PROCESS_REQUEST_RETURN((uintptr_t)r, r->uri, r->status);
      if (ap_extended_status) {
          ap_time_process_request(c->sbh, STOP_PREQUEST);
@@ -373,33 +388,10 @@ void ap_process_async_request(request_rec *r)

  AP_DECLARE(void) ap_process_request(request_rec *r)
  {
-    apr_bucket_brigade *bb;
-    apr_bucket *b;
-    conn_rec *c = r->connection;
-    apr_status_t rv;
-
      ap_process_async_request(r);

-    if (!c->data_in_input_filters) {
-        bb = apr_brigade_create(c->pool, c->bucket_alloc);
-        b = apr_bucket_flush_create(c->bucket_alloc);
-        APR_BRIGADE_INSERT_HEAD(bb, b);
-        rv = ap_pass_brigade(c->output_filters, bb);
-        if (APR_STATUS_IS_TIMEUP(rv)) {
-            /*
-             * Notice a timeout as an error message. This might be
-             * valuable for detecting clients with broken network
-             * connections or possible DoS attacks.
-             *
-             * It is still safe to use r / r->pool here as the eor bucket
-             * could not have been destroyed in the event of a timeout.
-             */
-            ap_log_rerror(APLOG_MARK, APLOG_INFO, rv, r, APLOGNO(01581)
-                          "Timeout while writing data for URI %s to the"
-                          " client", r->unparsed_uri);
-        }
-    }
      if (ap_extended_status) {
+        conn_rec *c = r->connection;
          ap_time_process_request(c->sbh, STOP_PREQUEST);
      }
  }
--

Thoughts?


I'll take a look at this patch tomorrow and apply it to see if it works.
Thanks for taking a look at it.

Andy
