On 18.11.12 20:34, Stephen Deasey wrote:
On Sun, Nov 18, 2012 at 1:22 PM, Gustaf Neumann <neum...@wu.ac.at> wrote:
Here are some actual figures with a comparable number of requests:

with minthreads==maxthreads==2
    requests 10182 queued 2695 connthreads 11 cpu 00:05:27 rss 415

below are the previous values, complemented by the number of queuing operations
and the RSS size in MB

with minthreads=2, create when queue >= 2
    requests 10104 queued 1584 connthreads 27 cpu 00:06:14 rss 466

As anticipated, thread creations and CPU consumption went down, but the
number of queued requests (requests that could not be executed immediately)
increased significantly.
I was thinking of the opposite: make min/max threads equal by
increasing min threads. Requests would never stall in the queue,
unlike the experiment you ran with max threads reduced to min threads.
On the site, we have maxthreads 10, so setting minthreads to 10 as well has the consequence of a larger memsize (and a substantially reduced number of queued requests).
But there's another benefit: unlike the dynamic scenario, requests
would also never stall in the queue waiting for a new thread to be
started, which can happen when min < max threads.
You are talking about NaviServer before 4.99.4. Both the version in the tip of the naviserver repository and the forked version already provide warmed-up threads. The version on the main tip starts to listen to the wakeup signals only once it is warmed up; the version in the fork likewise adds a thread to the conn queue only after its startup is complete. So in both cases there is no stall. In earlier versions, it was as you describe.
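To make the warm-up behaviour concrete, here is a rough sketch in C of the pattern (all structures and names are made up for illustration, not the actual NaviServer code): the conn thread builds its expensive Tcl interp first, and only afterwards registers with the pool and starts listening for wakeup signals, so a queued request is never handed to a half-initialized thread.

#include <pthread.h>

typedef struct ConnPool {
    pthread_mutex_t lock;
    pthread_cond_t  wakeup;      /* signaled when a request is queued */
    int             idleThreads; /* threads ready to take a request   */
    /* ... queue of pending connections ... */
} ConnPool;

static void WarmUpInterp(void)
{
    /* Create the Tcl interp and evaluate the blueprint here: the
     * expensive part a queued request should never have to wait for. */
}

static void *ConnThread(void *arg)
{
    ConnPool *pool = arg;

    WarmUpInterp();                          /* 1. warm up first           */

    pthread_mutex_lock(&pool->lock);
    pool->idleThreads++;                     /* 2. only now become visible */
    for (;;) {
        /* 3. wait for wakeup signals and serve queued requests */
        pthread_cond_wait(&pool->wakeup, &pool->lock);
        /* ... dequeue a connection and run it ... */
    }
    /* not reached */
}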
What is the down side to increasing min threads up to max threads?
Higher memory consumption, possibly more open database connections, and in general allocating resources which are not needed. The degree of wastefulness certainly depends on maxthreads. I would assume that for an admin carefully watching the server's needs, setting minthreads==maxthreads to the "right" value can lead to slight improvements, as long as the load is rather constant over time.

Maybe the most significant benefit of a low maxthreads value is the reduced
memory consumption. On this machine we are using plain Tcl with its "zippy
malloc", which does not release memory (once allocated to its pool) back to
the OS. So, the measured memsize depends on the max number of threads with
Tcl interps, especially with large blueprints (as in the case of OpenACS).
Right: the max number of threads *ever*, not just currently. So by
killing threads you don't reduce memory usage, but you do increase
latency for some requests which have to wait for a thread+interp to be
created.
Not really, given the warm-up feature.
Is it convenient to measure latency distribution (not just average)? I
guess not: we record conn.startTime when a connection is taken out of
the queue and passed to a conn thread, but we don't record the time
when a socket was accepted.
We could record the socket accept time and measure the difference to the start of the connection runtime; if we output this to the access log (like logreqtime), we could run whatever statistics we want.
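Sketched in C (the field and function names here are hypothetical, not the current Conn structure), the bookkeeping would be just two timestamps and a subtraction:

#include <sys/time.h>

typedef struct Conn {
    struct timeval acceptTime;   /* set right after accept()           */
    struct timeval startTime;    /* set when a conn thread picks it up */
    /* ... */
} Conn;

/* Time the request spent waiting in the queue, in milliseconds. */
static double QueueLatencyMs(const Conn *connPtr)
{
    double sec  = (double)(connPtr->startTime.tv_sec  - connPtr->acceptTime.tv_sec);
    double usec = (double)(connPtr->startTime.tv_usec - connPtr->acceptTime.tv_usec);

    return sec * 1000.0 + usec / 1000.0;
}

/*
 * At accept time:   gettimeofday(&connPtr->acceptTime, NULL);
 * At dequeue time:  gettimeofday(&connPtr->startTime, NULL);
 * At log time:      write QueueLatencyMs(connPtr) as an extra
 *                   access-log field and post-process it with any
 *                   statistics tool.
 */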
Actually, managing request latency is another area we don't handle so
well. You can influence it by adjusting the OS listen socket accept
queue length, you can adjust the length of the naviserver queue, and
with the proposed change here you can change how aggressively new
threads are created to process requests in the queue. But queue depth
is a roundabout way of specifying milliseconds of latency. And not
just roundabout but inherently imprecise, as different URLs are going
to require different amounts of time to complete, and which URLs are
requested is a function of current traffic. If instead of queue size
you could specify a target latency then we could maybe do smarter
things with the queue, such as pull requests off the back of the queue
which have been waiting longer than the target latency, making room
for fresh requests on the front of the queue.
The idea of controlling the number of running threads via queuing latency is interesting, but I have to look into the details before I can comment on this.
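Just to make the idea concrete, a rough sketch in C of pulling overdue requests off the back of the queue against a target latency (structures and names are purely illustrative, not NaviServer code):

#include <time.h>
#include <stddef.h>

typedef struct QueuedReq {
    struct QueuedReq *next;
    time_t            enqueued;   /* when the request was accepted */
    /* ... connection data ... */
} QueuedReq;

typedef struct Queue {
    QueuedReq *head;              /* oldest request (back of the queue) */
    QueuedReq *tail;              /* newest request                     */
    double     targetLatency;     /* seconds a request may wait         */
} Queue;

/* Remove and return the oldest request if it has already waited
 * longer than the target latency; NULL otherwise. */
static QueuedReq *EvictStale(Queue *q, time_t now)
{
    QueuedReq *oldest = q->head;

    if (oldest != NULL
        && difftime(now, oldest->enqueued) > q->targetLatency) {
        q->head = oldest->next;
        if (q->head == NULL) {
            q->tail = NULL;
        }
        return oldest;            /* caller rejects it, e.g. with a 503 */
    }
    return NULL;
}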

-gustaf
