Sam Horrocks wrote:
say they take two slices, and interpreters 1 and 2 get pre-empted and
go back into the queue. So then requests 5/6 in the queue have to use
other interpreters, and you expand the number of interpreters in use.
But still, you'll wind up using the smallest number of interpreters required.
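The MRU argument above can be sketched with a toy interpreter pool (my own sketch, not SpeedyCGI code; names invented). With MRU selection the free list is a stack, so the most-recently-used interpreter handles the next request and only a few interpreters ever run; with FIFO selection the pool rotates through every interpreter:

```python
from collections import deque

def serve(requests, free, mru):
    """Serve requests one at a time; return the set of interpreters used."""
    used = set()
    for _ in range(requests):
        interp = free.pop() if mru else free.popleft()  # MRU = stack, FIFO = queue
        used.add(interp)
        free.append(interp)  # request finished: interpreter returns to the pool
    return used

pool = list(range(10))                           # 10 idle interpreters
print(len(serve(100, deque(pool), mru=True)))    # → 1  (MRU reuses one)
print(len(serve(100, deque(pool), mru=False)))   # → 10 (FIFO touches all)
```

With requests served one at a time, MRU keeps the working set (and hence resident memory) minimal, while FIFO keeps every interpreter's pages warm in the page sense but cold in the cache sense.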
There's only one run queue in the kernel. The first task ready to run is put
at the head of that queue, and anything arriving afterwards waits. Only
if that first task blocks on a resource or takes a very long time, or
a higher-priority process becomes able to run due to an interrupt, does
another task get the CPU.
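A toy model of that single run queue (my own sketch, not kernel code): the task at the head runs for one time slice; if it finishes it leaves, otherwise it is preempted and rejoins the tail:

```python
from collections import deque

def schedule(tasks, quantum):
    """tasks: list of (name, remaining_time). Returns completion order."""
    queue = deque(tasks)                   # the single run queue
    done = []
    while queue:
        name, remaining = queue.popleft()  # head task gets the CPU
        remaining -= quantum               # consumes one time slice
        if remaining <= 0:
            done.append(name)              # finished: leaves the queue
        else:
            queue.append((name, remaining))  # preempted: back to the tail
    return done

print(schedule([("a", 3), ("b", 1), ("c", 2)], quantum=1))  # → ['b', 'c', 'a']
```

Note how the long task "a" finishes last even though it arrived first, which is exactly why several tasks are in flight at once.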
This doesn't affect the argument, because the core of it is that:

a) the CPU will not completely process a single task all at once;
   instead, it will divide its time _between_ the tasks
b) tasks do not arrive at regular intervals
c) tasks take varying amounts of time to complete
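Points (b) and (c) can be illustrated with a small sweep-line sketch (invented numbers; it assumes each request holds an interpreter from arrival until completion): irregular arrivals plus varying durations mean requests overlap, so more than one interpreter is busy at the peak:

```python
def peak_concurrency(jobs):
    """jobs: list of (arrival, duration). Returns the maximum number
    of jobs in progress at any instant."""
    events = []
    for arrival, duration in jobs:
        events.append((arrival, +1))             # job starts
        events.append((arrival + duration, -1))  # job ends
    busy = peak = 0
    for _, delta in sorted(events):              # sweep time left to right
        busy += delta
        peak = max(peak, busy)
    return peak

print(peak_concurrency([(0, 5), (1, 2), (2, 4)]))  # → 3 (all overlap at t=2)
```

So even with MRU selection, bursty traffic forces the pool to grow to the peak concurrency; MRU only guarantees it grows no further than that.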
There seems to be a lot of talk here, and analogies, and zero real-world
benchmarking.
Now it seems to me from reading this thread, that speedycgi would be
better where you run 1 script, or only a few scripts, and mod_perl might
win where you have a large application with hundreds of different scripts.
You know, I had brief look through some of the SpeedyCGI code yesterday,
and I think the MRU process selection might be a bit of a red herring.
I think the real reason Speedy won the memory test is the way it spawns
processes.
Please take a look at that code again. There's no smoke and mirrors.
----- Original Message -----
From: "Sam Horrocks" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: "mod_perl list" [EMAIL PROTECTED]; "Stephen Anderson"
[EMAIL PROTECTED]
Sent: Thursday, January 18, 2001 10:38 PM
Subject: Re: Fwd: [speedycgi] Speedycgi scales bet