[EMAIL PROTECTED] wrote:
brianp      2003/01/03 10:32:59

  Modified:    server/mpm/worker fdqueue.c
  Log:
  Replace most of the mutex locking in the worker MPM's "queue info"
  object with atomic compare-and-swap loops.

A nice New Year's present :-)

Presumably this gets rid of the O(ThreadsPerProcess) overhead in _pthread_alt_unlock on Linux SMPs. Unfortunately I don't have such a box to test on.

  +    /* Atomically increment the count of idle workers */
  +    for (;;) {
  +        prev_idlers = queue_info->idlers;
  +        if (apr_atomic_cas(&(queue_info->idlers), prev_idlers + 1, prev_idlers)
  +            == prev_idlers) {
  +            break;
  +        }

apr_atomic_inc?

Some architectures provide lighter weight ways of doing this (e.g. a locked XADD on a 486/Pentium is supposed to be cheaper than a locked CMPXCHG + result test + loop overhead in software). Plus the source code shrinks and is easier to read.

<takes a look at apr_atomic.h> Oh, cripe, apr_atomic_inc doesn't return anything, so we don't have a reliable way of knowing the old or new value. We should fix that. Looking at your code above, the old value would be more convenient. It would also work well for atomically allocating a slot in an array.

  +    /* Atomically decrement the idle worker count */
  +    for (;;) {
  +        apr_uint32_t prev_idlers = queue_info->idlers;
  +        if (apr_atomic_cas(&(queue_info->idlers), prev_idlers - 1, prev_idlers)
  +            == prev_idlers) {
  +            break;
  +        }

apr_atomic_dec? That does return something.

Greg


