I don't think that a major problem comes from the "racy" 
notification of queuing events to the connection threads. 
This has advantages (it makes the OS responsible, which 
handles this very efficiently, and requires fewer mutexes) 
and disadvantages (little control).

While the current architecture with the cond-broadcast is 
certainly responsible for the problem of simultaneously dying 
threads (the OS determines which thread sees the condition 
first, hence the round-robin behavior), a list of the linked 
connection threads does not help to determine how many 
threads are actually needed, how bursty thread creation 
should be, or how to handle short resource crunches (e.g. 
caused by locks, etc.). With a conn-thread-queue, the 
threads have to update this queue with their status 
information (being created, warming up, free, busy, 
will-die), which requires some overhead and more mutex locks 
on the driver. Currently, the thread-status handling happens 
automatically; a "busy" thread simply ignores the condition, 
etc.

On the good side, we would have more control over the 
threads. When a dying thread notifies the conn-thread-queue, 
one can control thread creation via this hook in the same 
way as in situations where requests are queued. Another good 
aspect is that the thread-idle-timeout starts to make sense 
again on busy sites. Currently, thread reduction works via a 
counter, since unneeded threads die and are not recreated 
unless the traffic requires it (which works quite well in 
practice). For busy sites, the thread-idle-timeout is not 
needed this way.

Currently we have a one-way communication from the driver to 
the conn-threads. With the conn-thread-list (or array), one 
has a two-way communication ... at least, that is how I 
understand it for now.

-gustaf neumann

On 11.10.12 14:02, Stephen Deasey wrote:
> On Wed, Oct 10, 2012 at 9:44 PM, Jeff Rogers <dv...@diphi.com> wrote:
>> It is possible to get into a situation where there are connections
>> queued but no conn threads running to handle them, meaning nothing
>> happens until a new connection comes in.  When this happens the server
>> will also not shut down cleanly.  As far as I can figure, this can only
>> happen if the connection queue is larger than connsperthread and the
>> load is very bursty (i.e., a load test);  all the existing conn threads
>> can hit their cpt and exit, but a new conn thread only starts when a new
>> connection is queued.  I think the solution here is to limit
>> maxconnections to no more than connsperthread.  Doing so exposes a less
>> severe problem where connections waiting in the driver thread don't get
>> queued for some time; it's less of a problem because there is a timeout
>> and the driver thread will typically wake up on a closing socket fairly
>> soon, but it can still result in a simple request taking ~3s to
>> complete.  I don't know how to fix this latter problem.
> I think this is racy because all conn threads block on a single
> condition variable. The driver thread and conn threads must cooperate
> to manage the whole life cycle and the code to manage the state is
> spread around.
>
> If instead all conn threads were in a queue, each with its own
> condition variable, the driver thread could have sole responsibility
> for choosing which conn thread to run by signalling it directly,
> probably in LIFO order rather than the current semi-round-robin order
> which tends to cause all conn threads to expire at once. Conn threads
> would return to the front of the queue, unless wishing to expire in
> which case they'd go on the back of the queue, and the driver would
> signal when it was convenient to do so. Something like that...
>
>
> _______________________________________________
> naviserver-devel mailing list
> naviserver-devel@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/naviserver-devel

