> Why do mutex locking/unlocking around the cond_wait()? In the original
> implementation, the worker thread just locked the mutex once at startup
> and then let the cond_wait() unlock and re-lock it for each connection.
> With this new code, we're now doing twice as many lock/unlock operations.
>
> With a single listener port (I'll run multi-listener tests later today),
> the results were:
>
MPM      Requests     Mean resp.   CPU    CPU
type     per second   time (ms)    load   utilization
------   ----------   ----------   ----   -----------
worker   1250         37.4         6.1
Also, it looks like the tweaks to worker to reduce the time spent
in mutex-protected code may have worked. In this test case, the
mutex lock/wakeup calls aren't as prominent as they used to be.
syscall      seconds      calls    errors
read         21.2236      11902
open
I just ran some tests of the latest code base (including all of Aaron's
and my changes to worker and threadpool) to compare the performance of
the thread-based Unix MPMs.
The test config is the same one I used for my last round of tests,
except that I reduced the file size from 10KB to 0KB. Some
[EMAIL PROTECTED] wrote:
> - Fix the call when the worker thread waits for a connection to use
> the new state variable and use mutexes around the cond_wait() call.
>
Why do mutex locking/unlocking around the cond_wait()? In the original
implementation, the worker thread just locked the mutex once at startup
and then let the cond_wait() unlock and re-lock it for each connection.
With this new code, we're now doing twice as many lock/unlock operations.
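The standard idiom looks something like the sketch below, written against
plain POSIX threads rather than the apr_thread_cond_* wrappers the MPMs
use (all names here are illustrative, not the actual threadpool code):

    #include <pthread.h>

    static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv  = PTHREAD_COND_INITIALIZER;
    static int have_connection = 0;      /* hypothetical state variable */

    static void wait_for_connection(void)
    {
        pthread_mutex_lock(&mtx);
        /* The mutex must be held here: it makes the predicate check and
         * the decision to sleep atomic with respect to the signaler.
         * cond_wait() releases the mutex while the thread sleeps and
         * re-acquires it before returning. */
        while (!have_connection) {
            pthread_cond_wait(&cv, &mtx);
        }
        have_connection = 0;
        pthread_mutex_unlock(&mtx);
    }

The point is that cond_wait() drops the lock atomically as the thread goes
to sleep, so holding the mutex across the predicate check is exactly what
prevents a signal from arriving between the check and the sleep.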
This ugly thing kills off threads that are running long-lived
requests when we want to do a graceless shutdown of the server.
I tested it by running ab with concurrency of 100 against a 10MB
file and running "bin/apachectl stop ; ps -ef | grep httpd". I
get a bunch of [info] errors in the error log.
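For illustration only, a force-kill along these lines could be structured
as below; the names and thread count are made up, and the real threadpool
code is more involved:

    #include <pthread.h>
    #include <stddef.h>

    #define NUM_WORKERS 25               /* hypothetical thread count */
    static pthread_t workers[NUM_WORKERS];

    static void kill_workers_gracelessly(void)
    {
        int i;
        /* Ask each worker to terminate at its next cancellation point
         * (read(), write(), poll(), ...), which is what catches threads
         * stuck serving long-lived requests. */
        for (i = 0; i < NUM_WORKERS; i++) {
            pthread_cancel(workers[i]);
        }
        /* Reap the threads so the process can exit cleanly. */
        for (i = 0; i < NUM_WORKERS; i++) {
            pthread_join(workers[i], NULL);
        }
    }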
[EMAIL PROTECTED] wrote:
>aaron 02/04/28 14:35:13
>
> Modified: server/mpm/experimental/threadpool threadpool.c
> Log:
> When we signal a condition variable, we need to own the lock that
> is associated with that condition variable. This isn't necessary
> for Solaris, but for Posix
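The hazard that log entry is about, sketched with plain POSIX threads (an
illustration under assumed names, not the threadpool code itself): if the
signaler doesn't hold the mutex, a waiter can check its predicate, decide
to sleep, and have the signal land in the gap, i.e. a lost wakeup.
Holding the lock while signaling closes that window:

    #include <pthread.h>

    static pthread_mutex_t mtx   = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv    = PTHREAD_COND_INITIALIZER;
    static int             ready = 0;    /* hypothetical predicate */

    static void wake_one_waiter(void)
    {
        pthread_mutex_lock(&mtx);
        ready = 1;                   /* update the predicate... */
        pthread_cond_signal(&cv);    /* ...and signal while still holding
                                        the lock, so a waiter can't slip
                                        in between its predicate check and
                                        its cond_wait() and miss the
                                        wakeup. */
        pthread_mutex_unlock(&mtx);
    }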
-----BEGIN PGP SIGNED MESSAGE-----
Greetings!
This message is being sent specifically to the Apache httpd
development list. You may or may not have received it through
some other list; I'm trying to get as wide an exposure as
possible. Feel free to forward.
Three pieces of Big News here:
1.
Just a cleanup of the function declarations.
Cheers,
-Thom
--
Thom May -> [EMAIL PROTECTED]
Index: include/http_config.h
===================================================================
RCS file: /home/cvspublic/httpd-2.0/include/http_config.h,v
On Sun, Apr 28, 2002 at 10:55:59AM -0700, Justin Erenkrantz wrote:
> That said, there are a disturbing number of PRs related to suexec in
> bugzilla - we really need someone to go through and verify these
> bugs - otherwise, I'm slightly afraid that suexec is teetering
> towards being abandoned.
On Sun, Apr 28, 2002 at 12:03:57AM -0700, Brian Pane wrote:
> I'm posting this for comments before committing...
>
> Here are some more changes to the worker MPM code to reduce
> the time spent in mutex-protected regions.
>
> This patch takes advantage of a new property of the worker
> design that
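The general shape of that optimization, as a sketch with assumed names
rather than the actual patch: hold the mutex only for the queue operation
itself, and run the per-connection work outside it.

    #include <pthread.h>
    #include <stddef.h>

    typedef struct conn_t conn_t;            /* hypothetical connection type */

    extern pthread_mutex_t queue_lock;       /* assumed queue primitives;   */
    extern pthread_cond_t  queue_nonempty;   /* not the worker MPM's actual */
    extern int     queue_is_empty(void);     /* fdqueue API                 */
    extern conn_t *queue_pop(void);
    extern void    process_connection(conn_t *c);

    static void *worker_loop(void *arg)
    {
        (void)arg;
        for (;;) {
            conn_t *c;

            pthread_mutex_lock(&queue_lock);
            while (queue_is_empty()) {
                pthread_cond_wait(&queue_nonempty, &queue_lock);
            }
            c = queue_pop();                 /* the only work under the lock */
            pthread_mutex_unlock(&queue_lock);

            process_connection(c);           /* the long part, lock not held */
        }
        return NULL;
    }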
Bill Stoddard wrote:
>
>>On Sat, Apr 27, 2002 at 07:30:51PM -0700, Justin Erenkrantz wrote:
>>
>>>>+qi = apr_palloc(pool, sizeof(*qi));
>>>>+memset(qi, 0, sizeof(*qi));
>>>As we said, if you are concerned about the performance aspect
>>>of apr_pcalloc, then we should fix apr_pcalloc NOT attempt to
I'm seeing a number of PRs reporting that suexec, when used under worker,
doesn't work at all (it fails silently!). Can anyone reproduce this?
That said, there are a disturbing number of PRs related to suexec in
bugzilla - we really need someone to go through and verify these
bugs - otherwise, I'm slightly afraid that suexec is teetering
towards being abandoned.
[Bugzilla report table: Bug ID and Status columns (UNC=Unconfirmed,
NEW=New, ASS=Assigned); the table body was not preserved.]
On Sun, 28 Apr 2002, Bill Stoddard wrote:
> I have no problem with using apr_palloc()/memset() in place of apr_pcalloc().
Do you have a problem with making apr_pcalloc() a macro?
> > > > +rv = apr_thread_mutex_unlock(queue_info->idlers_mutex);
> > > > +if (rv != APR_SUCCESS) {
> > >
> On Sat, Apr 27, 2002 at 07:30:51PM -0700, Justin Erenkrantz wrote:
> > > +qi = apr_palloc(pool, sizeof(*qi));
> > > +memset(qi, 0, sizeof(*qi));
> >
> > As we said, if you are concerned about the performance aspect
> > of apr_pcalloc, then we should fix apr_pcalloc NOT attempt to
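For reference, the macro version under discussion could look roughly like
this (a sketch; the exact form in APR's headers may differ):

    #include <string.h>          /* memset */
    #include <apr_pools.h>       /* apr_palloc */

    /* Sketch of apr_pcalloc() as a macro over apr_palloc(), per the
     * question above.  Caveat: 'size' is evaluated twice, so callers
     * must not pass an expression with side effects. */
    #define apr_pcalloc(pool, size) memset(apr_palloc((pool), (size)), 0, (size))

memset() returns its first argument, so the macro still evaluates to the
allocated pointer, which keeps call sites source-compatible with the
function version.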