>  > This doesn't affect the argument, because the core of it is that:
>  > 
>  > a) the CPU will not completely process a single task all at once;
>  >    instead, it will divide its time _between_ the tasks
>  > b) tasks do not arrive at regular intervals
>  > c) tasks take varying amounts of time to complete
>  > 
[snip]

>  I won't agree with (a) unless you qualify it further - what do you
>  claim is the method or policy for (a)?

I think this has been answered already ... basically, resource conflicts
(including I/O), interrupts, long-running tasks, higher-priority tasks and,
of course, the process yielding can all cause the CPU to switch processes
(which of these apply depends very much on the OS in question).

This is why, despite the efficiency of single-task running, you can usefully
run more than one process on a UNIX system. Otherwise, if you ran a single
Apache process and had no traffic, you couldn't run a shell at the same time
- Apache would consume practically all your CPU in its select() loop 8-)
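To be fair to reality: a select() with nothing ready blocks in the kernel
rather than spinning, and that blocking is exactly what frees the CPU for
your shell. A tiny stand-alone demo (my own, nothing Apache-specific):

    /* A process blocked in select() is parked by the kernel and costs
     * ~0% CPU; this is why an idle server and a shell coexist happily. */
    #include <stdio.h>
    #include <sys/select.h>

    int main(void)
    {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(0, &rfds);  /* watch stdin, as a server watches its socket */
        select(1, &rfds, 0, 0, 0);  /* sleeps here - check top(1) meanwhile */
        puts("fd 0 readable");
        return 0;
    }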

>  Apache httpd's are scheduled on an LRU basis.  This was discussed
>  early in this thread.  Apache uses a file-lock for its mutex around
>  the accept call, and file-locking is implemented in the kernel using
>  a round-robin (fair) selection in order to prevent starvation.  This
>  results in incoming requests being assigned to httpd's in an LRU
>  fashion.

I'll apologise, and say, yes, of course you're right, but I do have a query:

There are (IIRC) five methods that Apache uses to serialize requests:
fcntl(), flock(), SysV semaphores, uslock (IRIX only) and Pthreads
(reliably only on Solaris). Do they _all_ result in LRU?
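For the fcntl() case, the mechanics look something like the sketch below -
my own minimal reconstruction, not Apache's actual source (the lock path,
port and child count are invented). The point is that whichever blocked
child the kernel hands the lock to gets the next request, so it's the
kernel's wakeup policy that decides LRU-or-otherwise:

    /* Sketch of an fcntl()-serialized accept loop, in the spirit of
     * Apache 1.3's child main loop.  Not Apache's actual code. */
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int lock_fd;

    static void accept_mutex(short type)  /* F_WRLCK to get, F_UNLCK to drop */
    {
        struct flock lk;
        memset(&lk, 0, sizeof lk);
        lk.l_type = type;
        lk.l_whence = SEEK_SET;            /* lock the whole file */
        while (fcntl(lock_fd, F_SETLKW, &lk) < 0)
            ;                              /* retry if interrupted */
    }

    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa;
        memset(&sa, 0, sizeof sa);
        sa.sin_family = AF_INET;
        sa.sin_port = htons(8080);
        bind(listener, (struct sockaddr *)&sa, sizeof sa);
        listen(listener, 128);
        lock_fd = open("/tmp/accept.lock", O_CREAT | O_WRONLY, 0600);

        for (int i = 0; i < 4; i++)        /* a few "httpd children" */
            if (fork() == 0)
                for (;;) {
                    accept_mutex(F_WRLCK); /* only one child at a time  */
                    int conn = accept(listener, 0, 0);
                    accept_mutex(F_UNLCK); /* release before handling   */
                    write(conn, "hi\n", 3);
                    close(conn);
                }
        for (;;) pause();                  /* parent just waits */
    }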

>  Remember that the httpd's in the speedycgi case will have very little
>  un-shared memory, because they don't have perl interpreters in them.
>  So the processes are fairly indistinguishable, and the LRU isn't as 
>  big a penalty in that case.


Yessss...._but_, interpreter for interpreter, won't the equivalent speedycgi
process have roughly as much unshared memory as the mod_perl one? I've had a
lot of (dumb) discussions with people who complain about the size of
Apache+mod_perl without realising that the interpreter code is all shared,
and that with pre-loading a lot of the perl code can be too. While I _can_
see speedycgi having an advantage (because it's got a much better overview
of what's happening, and can manage the situation intelligently), I don't
think the advantage is as large as you're suggesting. I think this needs
serious benchmarking to settle....

>  other interpreters, and you expand the number of interpreters in use.
>  But still, you'll wind up using the smallest number of interpreters
>  required for the given load and timeslice.  As soon as those 1st and
>  2nd perl interpreters finish their run, they go back at the beginning
>  of the queue, and the 7th/8th or later requests can then use them,
>  etc.  Now you have a pool of maybe four interpreters, all being used
>  on an MRU basis.  But it won't expand beyond that set unless your
>  load goes up or your program's CPU time requirements increase beyond
>  another timeslice.  MRU will ensure that whatever the number of
>  interpreters in use, it is the lowest possible, given the load, the
>  CPU-time required by the program and the size of the timeslice.

Yep...no arguments here. SpeedyCGI should result in fewer interpreters.
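To make that concrete, here's a toy model (entirely my own, not taken from
either codebase) of an idle-interpreter free list under light load, where
each request finishes before the next arrives. MRU reuse is a stack, LRU
reuse is a queue:

    /* Toy MRU-vs-LRU free list.  Interpreters are just slot numbers. */
    #include <stdio.h>

    #define POOL 8
    #define REQS 100

    int main(void)
    {
        int stack[POOL], top = 0;             /* MRU free list (LIFO) */
        int queue[POOL], head = 0, tail = 0;  /* LRU free list (FIFO) */
        int hit_mru[POOL] = {0}, hit_lru[POOL] = {0};
        int i, nm = 0, nl = 0;

        for (i = 0; i < POOL; i++) {          /* 8 idle interpreters  */
            stack[top++] = i;
            queue[tail] = i; tail = (tail + 1) % POOL;
        }
        for (i = 0; i < REQS; i++) {
            int m = stack[--top];             /* MRU: most recently idle */
            int l = queue[head]; head = (head + 1) % POOL;
            hit_mru[m] = hit_lru[l] = 1;
            stack[top++] = m;                 /* both come straight back */
            queue[tail] = l; tail = (tail + 1) % POOL;
        }
        for (i = 0; i < POOL; i++) { nm += hit_mru[i]; nl += hit_lru[i]; }
        printf("interpreters touched: MRU=%d LRU=%d\n", nm, nl);
        return 0;
    }

It prints MRU=1 LRU=8: under light load the MRU pool collapses to the
working set, while LRU keeps every interpreter (and its unshared pages)
warm. That's the claim above in miniature.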


I will say that there are a lot of convincing reasons to follow the
SpeedyCGI model rather than the mod_perl model, but I've generally thought
that the performance gain on offer is small enough not to warrant the
extra layer... thoughts, anyone?

Stephen.
