A quick update: after some more testing with the new code, I still 
think the version is promising, but it needs a few tweaks. I have 
started to address thread creation.

To sum up the thread creation behavior/configuration of naviserver-tip:

   - minthreads (try to keep at least minthreads threads idle)
   - spread (fights thread mass-extinction caused by round robin)
   - threadtimeout (effectively useless due to round robin)
   - connsperthread (the only parameter effectively controlling the 
lifespan of a conn thread)
   - maxconnections (controls the maximum number of connections in the 
waiting queue, including running threads)
   - concurrentcreatethreshold (percentage of the waiting queue that 
must be filled before threads are created concurrently)

Due to the policy of keeping at least minthreads threads idle, threads 
are preallocated when the load is high, and by construction the number 
of threads never falls below minthreads. Threads stop mostly due to 
connsperthread.
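As a rough sketch of this preallocation rule (the struct and function 
names below are invented for illustration and do not appear in the 
actual NaviServer sources):

```c
/* Sketch of the tip preallocation policy: create a new conn thread
 * whenever fewer than minthreads threads are idle, so the pool never
 * falls below minthreads by construction. Hypothetical names. */
typedef struct {
    int current;     /* conn threads currently alive */
    int idle;        /* conn threads currently idle */
    int minthreads;  /* configured lower bound */
} TipPool;

static int TipWantCreate(const TipPool *p) {
    return p->idle < p->minthreads;
}
```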

NaviServer with thread queue (fork):

   - minthreads (try to keep at least minthreads threads idle)
   - threadtimeout (works effectively; default 120 secs)
   - connsperthread (as before, just no longer varied via spread)
   - maxconnections (as before; maybe rename to "queuesize")
   - lowwatermark (new)
   - highwatermark (was concurrentcreatethreshold)

The parameter "spread" has already been deleted, since the enqueueing 
takes care of a certain distribution, at least when several threads 
are created. Threads are now often deleted before reaching 
connsperthread due to the timeout. Furthermore, experiments show that 
the rather aggressive preallocation policy of keeping minthreads 
threads idle now causes many more thread-destroy and thread-create 
operations than before. With OpenACS, thread creation is 
compute-intensive (about 1 sec).

In the experimental version, connections are only queued when no 
connection thread is available (the tip version places every 
connection into the queue). Queueing happens with "bulky" requests, 
e.g. when a view causes a bunch (on average 5, often 10+, sometimes 
50+) of requests for embedded resources (style sheets, JavaScript 
files, images). Permitting a few queued requests often seems to be a 
good idea, since the connection threads typically pick these up very 
quickly.
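The dispatch path of the experimental version amounts to something 
like the following (a minimal model with invented names; the real 
code of course hands off actual Conn structures and signals condition 
variables, and the queue-full handling here is an assumption):

```c
#define QUEUE_MAX 8  /* stand-in for maxconnections */

typedef struct {
    int idleThreads;  /* conn threads waiting for work */
    int queued;       /* connections in the waiting queue */
} Pool;

/* Returns 1 if the connection was handed to an idle thread directly,
 * 0 if it had to be queued, -1 if the queue is full. */
static int DispatchConn(Pool *p) {
    if (p->idleThreads > 0) {
        p->idleThreads--;        /* wake one idle thread directly */
        return 1;
    }
    if (p->queued < QUEUE_MAX) {
        p->queued++;             /* no thread free: queue the conn */
        return 0;
    }
    return -1;
}
```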

To make the aggressiveness of the thread-creation policy easier to 
configure, the experimental version bases this decision solely on the 
number of queued requests, using two parameters:

   - lowwatermark (if the actual queue fill is below this value, don't 
try to create threads; default 5%)
   - highwatermark (if the actual queue fill is above this value, allow 
parallel thread creates; default 80%)
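The two watermarks can be read as a three-way decision on the queue 
fill percentage. The sketch below uses invented names and an assumed 
formula for the fill level (queued requests relative to the queue 
capacity); it is not the actual NaviServer code:

```c
typedef struct {
    int queued;    /* requests currently in the waiting queue */
    int maxqueue;  /* queue capacity (maxconnections) */
    int lowpct;    /* lowwatermark in percent; default 5 */
    int highpct;   /* highwatermark in percent; default 80 */
} WaitQueue;

/* 0 = don't create a thread, 1 = create one thread,
 * 2 = allow parallel thread creates. */
static int CreatePolicy(const WaitQueue *q) {
    int fillpct = (q->queued * 100) / q->maxqueue;
    if (fillpct <= q->lowpct)  return 0;  /* below lowwatermark */
    if (fillpct >= q->highpct) return 2;  /* above highwatermark */
    return 1;                             /* in between */
}
```

Setting lowpct to 0 in this model makes any queued connection trigger 
a create, matching the "more aggressive" configuration described 
below.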

To increase the aggressiveness, one can set lowwatermark to e.g. 0, 
causing a thread create whenever a connection is queued. Increasing 
lowwatermark reduces the willingness to create new threads. The 
highwatermark might be useful in benchmark situations, where the 
queue fills up quickly.

The default values seem to work quite well; they are currently used on 
http://next-scripting.org. However, we still need more experiments on 
different sites to get a better understanding.

Hmm, a final comment: for the regression test, I had to add a policy 
to create threads when all connection threads are busy. The config 
file of the regression test uses connsperthread 0 (which is the 
default, but not very good as such), causing every connection thread 
to exit after a single request. So when a request comes in, we may 
have a thread busy but nothing queued, and there would thus be no 
apparent need to create a new thread. However, once the busy conn 
thread exits, that single request would never be processed.
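The extra rule boils down to a condition like the following (function 
name invented for illustration): with connsperthread 0, a conn thread 
exits right after serving its request, so a request arriving while no 
thread is idle must trigger a create even though nothing is queued 
yet.

```c
/* Sketch of the additional creation rule for the regression test:
 * create a thread whenever all existing conn threads are busy,
 * regardless of the queue length. Hypothetical names. */
static int MustCreateThread(int busyThreads, int idleThreads,
                            int queuedRequests) {
    (void)queuedRequests;  /* queue length alone is not sufficient */
    return busyThreads > 0 && idleThreads == 0;
}
```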

So, much more testing is needed.
-gustaf neumann

On 01.11.12 20:17, Gustaf Neumann wrote:
> Dear all,
>
> There is now a version on bitbucket which works quite
> nicely and stably, as far as I can tell. I have split up the rather
> coarse lock of all pools and introduced finer locks for
> the waiting queue (wqueue) and thread queue (tqueue) per pool.
> The changes lead to significantly finer lock granularity and
> improve scalability.
>
> I have tested this new version with a synthetic load of 120
> requests per second, some slower requests and some faster
> ones, and it appears to be pretty stable. This load keeps
> about 20 connection threads quite busy on my home machine.
> The contention on the new locks is very low: in this test
> we saw 12 busy locks out of 217,000 locks on the waiting queue,
> and 9 busy locks out of 83,000 locks on the thread queue.
> These figures are much better than in current NaviServer,
> which on the same test has 248,000 locks on the queue,
> 190 of them busy. The total waiting time for locks is reduced
> by a factor of 10. One has to add that it was not so bad
> before either. The benefit will be larger when multiple
> pools are used.
>
> Finally, I think the code is clearer than before, where the
> lock duration was quite tricky to determine.
>
> opinions?
> -gustaf neumann
>
> PS: For the changes, see:
> https://bitbucket.org/gustafn/naviserver-connthreadqueue/changesets
>
> PS2: I have not addressed the server exit signaling yet.
>
> On 29.10.12 13:41, Gustaf Neumann wrote:
>> A version of this is in the following fork:
>>
>> https://bitbucket.org/gustafn/naviserver-connthreadqueue/changesets
>>
>> So far, the contention on the pool mutex is quite high, but
>> I think it can be improved. Currently the pool mutex is used
>> primarily for conn-thread life-cycle management, and it
>> is needed by the main/driver/spooler threads as well as by the
>> connection threads to update the idle/running/... counters
>> needed for controlling thread creation etc. Differentiating
>> these mutexes should help.
>>
>> I have not addressed the termination signaling, but that is
>> rather simple.
>>
>> -gustaf neumann
>>
>> On 28.10.12 03:08, Gustaf Neumann wrote:
>>> I've just implemented a lightweight version of the above (just
>>> a few lines of code) by extending the connThread Arg
>>> structure; ....
>> _______________________________________________
>> naviserver-devel mailing list
>> naviserver-devel@lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/naviserver-devel
>


-- 
Univ.Prof. Dr. Gustaf Neumann
Institute of Information Systems and New Media
WU Vienna
Augasse 2-6, A-1090 Vienna, AUSTRIA


