Interesting, but I wonder if we're not thinking this through
correctly. My suggestion, yours here, and Gustaf's recent work are
all aimed at refining the model as it currently is, but I wonder if
we're even attempting to do the right thing?

> So I'm assuming that the available processing power - the number of
> threads - should correlate to how busy the server is.  A server that is
> 50% busy should have 50% of its full capacity working.

But what does "busy" mean here, CPU load?  There needs to be an
appropriate max number of threads to handle the max expected load,
given the capabilities of the machine. Too many and the machine will
run slower. But why kill them when we're no longer busy?

- naviserver conn threads use a relatively large amount of memory
because there tends to be one or more tcl interps associated with each
one

- killing threads kills interps which frees memory

But this is only useful if you can use the memory more profitably
somewhere else, and I'm not sure you can.

It is incoming load which drives conn thread creation, and therefore
memory usage, not availability of memory. So if you kill off some conn
threads when they're not needed, freeing up some memory for some other
system, how do you get the memory back when you create conn threads
again? There needs to be some higher mechanism which has a global view
of, say, your database and web server requirements, and can balance the
memory needs between them.

I think it might be better to drop min/max conn threads and just have
n conn threads, always (a rough config sketch follows the list):

- simpler code

- predictable memory footprint

- bursty loads aren't delayed waiting for conn threads/interps to be created

- interps can be fully pre-warmed without delaying requests

- could back-port aolserver's ns_pools command to dynamically set the
nconnthreads setting
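
Fwiw, you can already approximate the fixed-n behaviour with today's
config by pinning min and max to the same value. A rough sketch only,
assuming the usual minthreads/maxthreads parameters and a server named
"server1" (adjust to your actual server/pool section):

    # Sketch: pin the conn thread pool so it never shrinks or grows.
    ns_section "ns/server/server1"
    ns_param minthreads 20   ;# never let the pool drop below 20 threads
    ns_param maxthreads 20   ;# ...and never grow beyond 20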

With ns_pools you could, for example, use a scheduled proc to drop
nconnthreads from 20 down to 10 between 3-5am when your database is
taking a hefty dump, along the lines of the sketch below.
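
Something like this, assuming ns_pools gets back-ported; the subcommand
and option names below ("set", "-threads") and the pool name "default"
are just assumptions to make the idea concrete, not the actual
aolserver syntax:

    # Sketch only; "ns_pools set <pool> -threads <n>" is a hypothetical
    # interface for the back-ported command.
    ns_schedule_daily 3 0 {
        # 03:00: nightly database dump starts, shrink the conn pool
        ns_pools set default -threads 10
    }
    ns_schedule_daily 5 0 {
        # 05:00: dump done, restore full capacity
        ns_pools set default -threads 20
    }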


Thread pools are used throughout the server: multiple pools of conn
threads, driver spool threads, scheduled proc threads, job threads,
etc., so one clean way to tackle this might be to create a new
nsd/pools.c which implements a very simple generic thread pool: n
threads, fifo ordering for requests, a tcl interface for dynamically
setting the number of threads, and thread recycling after n requests.
Then try to implement conn threads in terms of it.
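
The tcl interface for such a generic pool might look something like the
following. Every command and option here is hypothetical; it's only
meant to sketch the shape of "n threads, fifo, resizable from tcl,
recycled after n requests":

    # Hypothetical API for the proposed nsd/pools.c; nothing below
    # exists today.
    ns_pool create sched -threads 4 -maxrequests 10000  ;# recycle after 10k
    ns_pool configure sched -threads 8     ;# resize at runtime
    ns_pool stats sched                    ;# queued/running/served counters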


btw. an idea for pre-warming conn thread interps: generate a synthetic
request to /_ns/pool/foo/warmup (or whatever) when the thread is
created, before it is added to the queue. This would cause the tcl
source code to be byte-compiled, and it could be controlled precisely
by registering a proc for that path.
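
The handler side of that could be as simple as the sketch below; the
path and proc name are placeholders, but ns_register_proc and ns_return
are existing commands:

    # Placeholder warmup handler: touching the hot procs here forces the
    # new conn thread's interp to byte-compile them before real traffic.
    proc pool_warmup {args} {
        # call whatever procs/adp templates you want pre-compiled here
        ns_return 200 text/plain warmed
    }
    ns_register_proc GET /_ns/pool/foo/warmup pool_warmup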
