Re: [PATCH] lingering close and event

2011-06-13 Thread Stefan Fritsch
Hi Jeff,

On Tuesday 26 April 2011, Jeff Trawick wrote:
 has anyone played with this before?  I've seen it mentioned, and
 joe s had a patch to create a linger thread for worker back in
 2004
 
 the attached patch hasn't been seriously tested (or even seriously
 coded)
 
 if somebody has looked at it seriously, perhaps you can save me
 some time :)

I have looked at limiting the maximum connections per-process for 
event (see STATUS) and think that would be easier to implement if the 
lingering close would be done by the listener thread. Two questions:

Did you have a chance to work on this further? If yes, can you post 
the latest version?

Is your patch based on the work by joe s? I mean if I commit something 
based on your patch, should I mention him in the credit, too?

Cheers,
Stefan


Re: [PATCH] lingering close and event

2011-06-13 Thread Jeff Trawick
On Mon, Jun 13, 2011 at 2:56 PM, Stefan Fritsch s...@sfritsch.de wrote:
 Hi Jeff,

 On Tuesday 26 April 2011, Jeff Trawick wrote:
 has anyone played with this before?  I've seen it mentioned, and
 joe s had a patch to create a linger thread for worker back in
 2004

 the attached patch hasn't been seriously tested (or even seriously
 coded)

 if somebody has looked at it seriously, perhaps you can save me
 some time :)

 I have looked at limiting the maximum connections per-process for
 event (see STATUS) and think that would be easier to implement if the
 lingering close would be done by the listener thread. Two questions:

 Did you have a chance to work on this further? If yes, can you post
 the latest version?

not yet; would love to

but even better would be to have someone else take it up :)


 Is your patch based on the work by joe s? I mean if I commit something
 based on your patch, should I mention him in the credit, too?

not based on his work

I saw Joe S's patch when looking for prior conversations on the topic;
ISTR that it has a totally different implementation predating event
and its particular connection state


Re: [PATCH] lingering close and event

2011-06-13 Thread Joe Schaefer
Yeah my patch was based on worker, not event.  Not sure
what I wrote any more, but it was likely my first crack
at thread programming, so it probably needed work.



Re: [PATCH] lingering close and event

2011-06-13 Thread Stefan Fritsch
On Monday 13 June 2011, Jeff Trawick wrote:
  I have looked at limiting the maximum connections per-process for
  event (see STATUS) and think that would be easier to implement if
  the lingering close would be done by the listener thread. Two
  questions:
  
  Did you have a chance to work on this further? If yes, can you
  post the latest version?
 
 not yet; would love to

NP

 but even better would be to have someone else take it up :)

I hope I will have some time in the next 1-2 weeks.


Re: [PATCH] lingering close and event

2011-05-07 Thread Greg Ames
Hey, I like the concept a lot!  A quick peek at
http://apache.org/server-status shows 15 "C"s (closing connection) for threads
tied up in lingering close, something like 50 keepalive threads, and only 13
threads actually reading or writing.

On Mon, Apr 25, 2011 at 8:53 PM, Jeff Trawick traw...@gmail.com wrote:

 has anyone played with this before?  I've seen it mentioned, and joe s
 had a patch to create a linger thread for worker back in 2004

 +            apr_thread_mutex_lock(timeout_mutex);
 +            APR_RING_INSERT_TAIL(&recv_fin_timeout_head, cs, conn_state_t, timeout_list);
 +            apr_thread_mutex_unlock(timeout_mutex);

I see where the cs is removed from the ring for the timeout flow, but what
about the normal non-timeout flow?
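
Something like this might be needed in process_lingering_close() before the
pool gets recycled (an untested sketch on my part, assuming the cs is still
linked on recv_fin_timeout_head at that point):

    apr_thread_mutex_lock(timeout_mutex);
    APR_RING_REMOVE(cs, timeout_list);  /* unlink from recv_fin_timeout_head
                                         * before recycling cs->p */
    apr_thread_mutex_unlock(timeout_mutex);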

     rv = apr_pollset_create(&event_pollset,
-                            threads_per_child,
+                            threads_per_child, /* XXX don't we need more, to handle
+                                                * connections in K-A or lingering
+                                                * close?
+                                                */

IIRC the second arg to apr_pollset_create determines the size of the revents
array used to report ready file descriptors.  If there are ever more ready
fds than slots in the array, it's no big deal.  They get reported as ready
on the next apr_pollset_poll call.  So using threads_per_child is just
picking a number out of the air which happens to go up automatically as the
worker process is configured for higher workloads.
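
To spell that out, here is a tiny standalone sketch against the APR poll API
(not code from the patch; poll_once() and its 1-second timeout are made up
for the example):

#include <apr_poll.h>
#include <apr_time.h>

/* The "size" passed to apr_pollset_create() only bounds how many ready
 * descriptors a single apr_pollset_poll() call hands back; anything still
 * ready is simply reported on a later call. */
static apr_status_t poll_once(apr_pool_t *p, apr_uint32_t size)
{
    apr_pollset_t *pollset;
    const apr_pollfd_t *ready;
    apr_int32_t num_ready;
    apr_status_t rv;

    rv = apr_pollset_create(&pollset, size, p, 0);
    if (rv != APR_SUCCESS) {
        return rv;
    }

    /* ... apr_pollset_add() the listener and connection sockets here ... */

    /* at most "size" descriptors come back in ready[0 .. num_ready-1] */
    rv = apr_pollset_poll(pollset, apr_time_from_sec(1), &num_ready, &ready);
    return rv;
}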

Greg


[PATCH] lingering close and event

2011-04-25 Thread Jeff Trawick
has anyone played with this before?  I've seen it mentioned, and joe s
had a patch to create a linger thread for worker back in 2004

the attached patch hasn't been seriously tested (or even seriously coded)

if somebody has looked at it seriously, perhaps you can save me some time :)
Index: server/mpm/event/event.c
===================================================================
--- server/mpm/event/event.c    (revision 1096609)
+++ server/mpm/event/event.c    (working copy)
@@ -147,6 +147,11 @@
 #define apr_time_from_msec(x) (x * 1000)
 #endif
 
+#ifndef MAX_SECS_TO_LINGER
+#define MAX_SECS_TO_LINGER 30
+#endif
+#define SECONDS_TO_LINGER  2
+
 /*
  * Actual definitions of config globals
  */
@@ -172,7 +177,8 @@
 
 static apr_thread_mutex_t *timeout_mutex;
 APR_RING_HEAD(timeout_head_t, conn_state_t);
-static struct timeout_head_t timeout_head, keepalive_timeout_head;
+static struct timeout_head_t timeout_head, keepalive_timeout_head,
+    recv_fin_timeout_head;
 
 static apr_pollset_t *event_pollset;
 
@@ -659,6 +665,7 @@
     long conn_id = ID_FROM_CHILD_THREAD(my_child_num, my_thread_num);
     int rc;
     ap_sb_handle_t *sbh;
+    apr_status_t rv;
 
     ap_create_sb_handle(&sbh, p, my_child_num, my_thread_num);
 
@@ -782,10 +789,45 @@
     }
 
     if (cs->state == CONN_STATE_LINGER) {
-        ap_lingering_close(c);
-        apr_pool_clear(p);
-        ap_push_pool(worker_queue_info, p);
-        return 0;
+        if (ap_start_lingering_close(c)) {
+            ap_log_error(APLOG_MARK, APLOG_INFO, 0, ap_server_conf,
+                         "lingering-close finished immediately");
+            apr_pool_clear(p);
+            ap_push_pool(worker_queue_info, p);
+            return 0;
+        }
+        else {
+            apr_socket_t *csd = ap_get_module_config(cs->c->conn_config, &core_module);
+
+            rv = apr_socket_timeout_set(csd, 0);
+            AP_DEBUG_ASSERT(rv == APR_SUCCESS);
+            cs->state = CONN_STATE_LINGER_WAITING;
+            /*
+             * If some module requested a shortened waiting period, only wait for
+             * 2s (SECONDS_TO_LINGER). This is useful for mitigating certain
+             * DoS attacks.
+             */
+            if (apr_table_get(c->notes, "short-lingering-close")) {
+                cs->expiration_time =
+                    apr_time_now() + apr_time_from_sec(SECONDS_TO_LINGER);
+            }
+            else {
+                cs->expiration_time =
+                    apr_time_now() + apr_time_from_sec(MAX_SECS_TO_LINGER);
+            }
+            apr_thread_mutex_lock(timeout_mutex);
+            APR_RING_INSERT_TAIL(&recv_fin_timeout_head, cs, conn_state_t, timeout_list);
+            apr_thread_mutex_unlock(timeout_mutex);
+            cs->pfd.reqevents = APR_POLLIN | APR_POLLHUP | APR_POLLERR;
+            rv = apr_pollset_add(event_pollset, &cs->pfd);
+            if (rv != APR_SUCCESS) {
+                ap_log_error(APLOG_MARK, APLOG_ERR, rv, ap_server_conf,
+                             "process_socket: apr_pollset_add failure");
+                AP_DEBUG_ASSERT(rv == APR_SUCCESS);
+            }
+            ap_log_error(APLOG_MARK, APLOG_INFO, 0, ap_server_conf,
+                         "queued a socket to lingering-close");
+        }
     }
     else if (cs->state == CONN_STATE_CHECK_REQUEST_LINE_READABLE) {
         apr_status_t rc;
@@ -888,6 +930,7 @@
 
     APR_RING_INIT(&timeout_head, conn_state_t, timeout_list);
     APR_RING_INIT(&keepalive_timeout_head, conn_state_t, timeout_list);
+    APR_RING_INIT(&recv_fin_timeout_head, conn_state_t, timeout_list);
 
     for (lr = ap_listeners; lr != NULL; lr = lr->next) {
         apr_pollfd_t *pfd = apr_palloc(p, sizeof(*pfd));
@@ -1063,6 +1106,38 @@
     return APR_SUCCESS;
 }
 
+static int process_lingering_close(conn_state_t *cs, const apr_pollfd_t *pfd)
+{
+    apr_socket_t *csd = ap_get_module_config(cs->c->conn_config, &core_module);
+    char dummybuf[512];
+    apr_size_t nbytes;
+    apr_status_t rv;
+
+    /* socket is already in non-blocking state */
+    do {
+        nbytes = sizeof(dummybuf);
+        rv = apr_socket_recv(csd, dummybuf, &nbytes);
+        ap_log_error(APLOG_MARK, APLOG_INFO, rv, ap_server_conf,
+                     "read on lingering socket: %d/%ld",
+                     rv, (long)nbytes);
+    } while (rv == APR_SUCCESS);
+
+    if (!APR_STATUS_IS_EOF(rv)) {
+        return 0;
+    }
+
+    rv = apr_pollset_remove(event_pollset, pfd);
+    AP_DEBUG_ASSERT(rv == APR_SUCCESS);
+
+    rv = apr_socket_close(csd);
+    AP_DEBUG_ASSERT(rv == APR_SUCCESS);
+
+    apr_pool_clear(cs->p);
+    ap_push_pool(worker_queue_info, cs->p);
+
+    return 1;
+}
+
 static void * APR_THREAD_FUNC listener_thread(apr_thread_t * thd, void *dummy)
 {
     timer_event_t *ep;
@@ -1178,17 +1253,29 @@
             apr_thread_mutex_unlock(g_timer_ring_mtx);
         }
 
-        while (num && get_worker(&have_idle_worker)) {
+        while (num) {
             pt = (listener_poll_type *)