On Sun, Nov 18, 2012 at 1:22 PM, Gustaf Neumann <neum...@wu.ac.at> wrote:
> On 14.11.12 09:51, Gustaf Neumann wrote:
>
> On 13.11.12 15:02, Stephen Deasey wrote:
>
> On Tue, Nov 13, 2012 at 11:18 AM, Gustaf Neumann <neum...@wu.ac.at> wrote:
>
> minthreads = 2
>
> creating threads, when idle == 0
>     10468 requests, connthreads 267
>     total cputime 00:10:32
>
> creating threads, when queue >= 5
>     requests 10104 connthreads 27
>     total cputime 00:06:14
>
> What if you set minthreads == maxthreads?
>
> The number of thread create operations will go further down.
>
> Here are some actual figures with a comparable number of requests:
>
> with minthreads==maxthreads==2
>    requests 10182 queued 2695 connthreads 11 cpu 00:05:27 rss 415
>
> below are the previous values, completed with the number of queuing operations
> and the rss size in MB
>
> with minthreads=2, create when queue >= 2
>    requests 10104 queued 1584 connthreads 27 cpu 00:06:14 rss 466
>
> as anticipated, thread creations and cpu consumption went down, but the
> number of queued requests (requests that could not be executed immediately)
> increased significantly.

I was thinking of the opposite: make min/max threads equal by raising
min threads, rather than lowering max threads as in the experiment you
ran. Requests would then never stall in the queue waiting for capacity.
And there's another benefit: unlike the dynamic scenario, requests would
also never stall in the queue while a new thread is being started, which
happens whenever min < max threads.

What is the downside to increasing min threads up to max threads?

> Maybe the most significant benefit of a low maxthreads value is the reduced
> memory consumption. On this machine we are using plain Tcl with its "zippy
> malloc", which does not release memory (once allocated to its pool) back to
> the OS. So, the measured memsize depends on the max number of threads with
> tcl interps, especially with large blueprints (as in the case of OpenACS).

Right: the max number of threads *ever*, not just currently. So by
killing threads you don't reduce memory usage, but you do increase
latency for some requests which have to wait for a thread+interp to be
created.

Is it convenient to measure latency distribution (not just average)? I
guess not: we record conn.startTime when a connection is taken out of
the queue and passed to a conn thread, but we don't record the time
when a socket was accepted.
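To illustrate the point, here is a minimal sketch (in Python, purely
illustrative; the field names and helper are hypothetical, not NaviServer
API): if the accept time were recorded alongside conn.startTime, the
queue wait could be reported as a distribution rather than an average.

```python
import statistics

def queue_wait_stats(conns):
    """conns: list of (accept_time, start_time) pairs in seconds.

    start_time - accept_time is the time a request spent queued
    before a conn thread picked it up.
    """
    waits = [start - accept for accept, start in conns]
    pct = statistics.quantiles(waits, n=100)  # cut points p1..p99
    return {
        "mean": statistics.fmean(waits),
        "p50": pct[49],
        "p95": pct[94],
        "p99": pct[98],
    }
```

With percentiles available, a handful of slow outliers no longer hides
behind a healthy-looking average.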


Actually, managing request latency is another area we don't handle so
well. You can influence it by adjusting the OS listen-socket accept
queue length, by adjusting the length of the naviserver queue, and,
with the change proposed here, by tuning how aggressively new threads
are created to process queued requests. But queue depth is a roundabout
way of specifying milliseconds of latency. And not just roundabout but
inherently imprecise: different URLs take different amounts of time to
complete, and which URLs are requested is a function of current
traffic. If instead of a queue size you could specify a target latency,
then we could maybe do smarter things with the queue, such as pulling
requests off the back of the queue which have been waiting longer than
the target latency, making room for fresh requests at the front.
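A rough sketch of that idea (in Python for brevity; the class and method
names are invented for illustration, not an existing NaviServer
interface): new requests enter at the front of a deque, and before
handing work to a conn thread we shed any requests at the back that have
already exceeded the target latency.

```python
import collections

class LatencyQueue:
    """Request queue bounded by a target latency, not a depth."""

    def __init__(self, target_latency_ms):
        self.target = target_latency_ms
        self.q = collections.deque()  # oldest requests sit at the right

    def enqueue(self, req_id, now_ms):
        # Fresh requests go on the front (left) of the queue.
        self.q.appendleft((req_id, now_ms))

    def shed_expired(self, now_ms):
        """Drop back-of-queue requests that waited past the target."""
        dropped = []
        while self.q and now_ms - self.q[-1][1] > self.target:
            dropped.append(self.q.pop()[0])
        return dropped

    def dequeue(self, now_ms):
        """Hand the oldest still-fresh request to a conn thread."""
        self.shed_expired(now_ms)
        return self.q.pop()[0] if self.q else None
```

Whether a shed request gets an immediate error response or a retry hint
is a separate policy question, but the queue itself would then be
expressed directly in milliseconds.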

_______________________________________________
naviserver-devel mailing list
naviserver-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/naviserver-devel
