one word synchronize once more

2007-06-14 Thread Dmytro Fedonin

Hi all,

I've received some responses which show that I was not clear enough in my previous 
post. A foolproof solution would be:


Index: server/mpm/worker/worker.c
===
--- server/mpm/worker/worker.c  (revision 545597)
+++ server/mpm/worker/worker.c  (working copy)
@@ -892,7 +887,7 @@
 bucket_alloc = apr_bucket_alloc_create(ptrans);
 process_socket(ptrans, csd, process_slot, thread_slot, bucket_alloc);
 worker_sockets[thread_slot] = NULL;
-requests_this_child--; /* FIXME: should be synchronized - aaron */
+apr_atomic_dec32(&requests_this_child); /* much slower than important */
 apr_pool_clear(ptrans);
 last_ptrans = ptrans;
 }
Because we don't really care exactly when 'if (requests_this_child <= 0)' fires, 
this would be enough. But it is way too slow, and the counter is not that important.

So, the question is:
If on x86 we have a native atomic 'subl $0x1,mem', which is almost as fast as the 
optimized mov/dec/mov sequence, why not use this platform-specific feature? As of 
now, x86 represents the majority of hosts.


PS I have found 21 atomic dec/inc calls in trunk, and more than half of them are 
for counters similar to the case above.


--
Best regards,
Dmytro


one word synchronize

2007-06-14 Thread Dmytro Fedonin

Hi all,

Looking through 'server/mpm/worker/worker.c' I have found such a 
combination of TODO/FIXME comments:

1)
/* TODO: requests_this_child should be synchronized - aaron */
if (requests_this_child <= 0) {
2)
requests_this_child--; /* FIXME: should be synchronized - aaron */

And I cannot see any point here. These are one-word CPU operations, so there is 
no way to preempt inside this kind of operation. Thus a single CPU is safe by the 
nature of the basic operation. If we have several CPUs, they will synchronize 
their caches anyway, so we will never get an inconsistent state here. We can only 
lose time trying to synchronize it in code. Am I not right?


PS My assumption is that several threads of the same process are dealing 
with one word of shared memory.


--
Best regards,
Dmytro


keepalive connection broken

2007-06-12 Thread Dmytro Fedonin - Sun Microsystems

Hi,

I have researched a problem with broken keepalive connections which is 
similar to bug #41109. I have found that in worker.c the function 
'int ap_graceful_stop_signalled(void)' returns listener_may_exit. 
Basically the whole function is just:

int ap_graceful_stop_signalled(void)
{
    /* note: for a graceful termination, listener_may_exit will be set before
     *       workers_may_exit, so check listener_may_exit
     */
    return listener_may_exit;
}
This causes keepalive connections to terminate abruptly once the server 
process exceeds MaxRequestsPerChild.


In prefork.c, for instance, this function is different and simply reads:

int ap_graceful_stop_signalled(void)
{
/* not ever called anymore... */
return 0;
}

Since it is called only from 'static int 
ap_process_http_connection(conn_rec *c)', as

'if (ap_graceful_stop_signalled())
break;' out of the serving loop,
I believe making 'ap_graceful_stop_signalled' return 0 will solve the problem.
I have tried this with trunk and the 2.2.x branch, with the same positive result.

Patch for trunk is attached.

--
Best regards,
Dmytro
Index: server/mpm/worker/worker.c
===
--- server/mpm/worker/worker.c  (revision 545597)
+++ server/mpm/worker/worker.c  (working copy)
@@ -513,14 +513,9 @@
  */
 
 int ap_graceful_stop_signalled(void)
-/* XXX this is really a bad confusing obsolete name
- * maybe it should be ap_mpm_process_exiting?
- */
 {
-/* note: for a graceful termination, listener_may_exit will be set before
- *   workers_may_exit, so check listener_may_exit
- */
-return listener_may_exit;
+/* not ever called anymore... */
+return 0;
 }
 
 /*


[PATCH]: prefork Bug# 17792 MaxClients invalid prior to ServerLimit

2007-06-06 Thread Dmytro Fedonin - Sun Microsystems

The directives:

  MaxClients  512
  ServerLimit 512

in that order result in

  WARNING: MaxClients of 512 exceeds ServerLimit value of 256 servers,
  lowering MaxClients to 256.  To increase, please see the ServerLimit
  directive.

Reversing the directives fixes the problem.

I have tested the following variants with changes in a plain configuration:

Presence   |   |
---|---|
MaxClients | x  x  x  x  x x o |
ServerLimit| x  x  x  x x x  o |
MC > SL| x x x  x  |
Order (*)  | x  x  |
===|===|
Results|   |
---|---|
Values | ok ok ok ok ok ok ok ok ok|
Verbosity  | 3  0  3  2  1  1  0  0  0 |
Comments   |c c  c  c  c c |

Verbosity -- the number of WARNINGs that appear.
* - An x in the Order row means ServerLimit appears before MaxClients.
c - I would say the situations marked 'c' are quite normal; only one of them 
has verbosity 2, when MaxClients comes first and with a value greater than 
the default one (256).


--
Best regards,
Dmytro
Index: server/mpm/prefork/prefork.c
===
--- server/mpm/prefork/prefork.c(revision 539710)
+++ server/mpm/prefork/prefork.c(working copy)
@@ -96,6 +96,7 @@
 static int ap_daemons_min_free=0;
 static int ap_daemons_max_free=0;
 static int ap_daemons_limit=0;  /* MaxClients */
+static int ap_daemons_limit_tmp=0;  /* to check MaxClients after server_limit change */
 static int server_limit = DEFAULT_SERVER_LIMIT;
 static int first_server_limit = 0;
 static int changed_limit_at_restart;
@@ -1371,6 +1372,40 @@
 return NULL;
 }
 
+static void check_max_clients ()
+{
+static unsigned char count = 0;
+if (ap_daemons_limit_tmp) {
+ap_daemons_limit = ap_daemons_limit_tmp;
+}
+if (count > 0) {
+ap_log_error(APLOG_MARK, APLOG_STARTUP, 0, NULL,
+"WARNING: MaxClients is %d now.", ap_daemons_limit);
+ap_log_error(APLOG_MARK, APLOG_STARTUP, 0, NULL,
+" This message is printed probably because of ServerLimit");
+ap_log_error(APLOG_MARK, APLOG_STARTUP, 0, NULL,
+" directive comes after MaxClients in Your config file.");
+}
+
+if (ap_daemons_limit > server_limit) {
+ap_log_error(APLOG_MARK, APLOG_STARTUP, 0, NULL,
+"WARNING: MaxClients of %d exceeds ServerLimit value "
+"of %d servers,", ap_daemons_limit, server_limit);
+ap_log_error(APLOG_MARK, APLOG_STARTUP, 0, NULL,
+" lowering MaxClients to %d.  To increase, please "
+"see the ServerLimit", server_limit);
+ap_log_error(APLOG_MARK, APLOG_STARTUP, 0, NULL,
+" directive.");
+ap_daemons_limit = server_limit;
+count++;
+}
+else if (ap_daemons_limit < 1) {
+ap_log_error(APLOG_MARK, APLOG_STARTUP, 0, NULL,
+ "WARNING: Require MaxClients > 0, setting to 1");
+ap_daemons_limit = 1;
+}
+}
+
static const char *set_max_free_servers(cmd_parms *cmd, void *dummy, const char *arg)
 {
 const char *err = ap_check_cmd_context(cmd, GLOBAL_ONLY);
@@ -1389,23 +1424,12 @@
 return err;
 }
 
-ap_daemons_limit = atoi(arg);
-if (ap_daemons_limit > server_limit) {
-   ap_log_error(APLOG_MARK, APLOG_STARTUP, 0, NULL,
-"WARNING: MaxClients of %d exceeds ServerLimit value "
-"of %d servers,", ap_daemons_limit, server_limit);
-   ap_log_error(APLOG_MARK, APLOG_STARTUP, 0, NULL,
-" lowering MaxClients to %d.  To increase, please "
-"see the ServerLimit", server_limit);
-   ap_log_error(APLOG_MARK, APLOG_STARTUP, 0, NULL,
-" directive.");
-   ap_daemons_limit = server_limit;
-}
-else if (ap_daemons_limit < 1) {
-ap_log_error(APLOG_MARK, APLOG_STARTUP, 0, NULL,
- "WARNING: Require MaxClients > 0, setting to 1");
-ap_daemons_limit = 1;
-}
+ap_daemons_limit_tmp = atoi(arg);
+
+/* check max_clients even if server_limit has not been set yet
+ * will recheck if server_limit changed
+ */
+check_max_clients();
 return NULL;
 }
 
@@ -1446,6 +1470,9 @@
  "WARNING: Require ServerLimit > 0, setting to 1");
 server_limit = 1;
 }
+
+/* recheck max_clients since server_limit has been changed */
+check_max_clients();
 return NULL;
 }