----- Original Message -----
From: "Sam Horrocks" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: "mod_perl list" <[EMAIL PROTECTED]>; "Stephen Anderson"
<[EMAIL PROTECTED]>
Sent: Thursday, January 18, 2001 10:38 PM
Subject: Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with
scripts that contain un-shared memory


>  There's only one run queue in the kernel.  The first task ready to run is
>  put at the head of that queue, and anything arriving afterwards waits.  Only
>  if that first task blocks on a resource or takes a very long time, or
>  a higher priority process becomes able to run due to an interrupt, is that
>  process taken out of the queue.

Note that any I/O request that isn't completely handled by buffers will
trigger the 'blocks on a resource' clause above, which means that
jobs doing any real work will complete in an order determined by
something other than the CPU, and not strictly serialized.  Also, most
of my web servers are dual-CPU, so even CPU-bound processes may
complete out of order.

>  > Similarly, because of the non-deterministic nature of computer systems,
>  > Apache doesn't service requests on an LRU basis; you're comparing
>  > SpeedyCGI against a straw man. Apache's servicing algorithm approaches
>  > randomness, so you need to build a comparison between forced-MRU and
>  > random choice.
>
>  Apache httpd's are scheduled on an LRU basis.  This was discussed early
>  in this thread.  Apache uses a file-lock for its mutex around the accept
>  call, and file-locking is implemented in the kernel using a round-robin
>  (fair) selection in order to prevent starvation.  This results in
>  incoming requests being assigned to httpd's in an LRU fashion.
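
(For the record, the pattern being described is roughly the sketch below -
hypothetical Perl, not Apache's actual C code, which does this with fcntl
locking; the lock file path and pool size here are made up.)

  #!/usr/bin/perl -w
  # Sketch of the pre-fork accept-mutex pattern (not Apache's real code).
  use strict;
  use IO::Socket::INET;
  use Fcntl qw(:flock);

  my $listen = IO::Socket::INET->new(LocalPort => 8000, Listen => 128, Reuse => 1)
      or die "listen: $!";

  for (1 .. 5) {                          # pre-fork a fixed pool of children
      defined(my $pid = fork) or die "fork: $!";
      next if $pid;                       # parent just keeps forking
      # each child opens its own handle so its lock is independent
      open my $mutex, '>', '/tmp/accept.lock' or die "lock: $!";
      while (1) {
          flock $mutex, LOCK_EX;          # blocked children queue in the kernel;
          my $conn = $listen->accept;     # the lock is handed out fairly, so
          flock $mutex, LOCK_UN;          # waiting children get requests in turn
          next unless $conn;
          print $conn "HTTP/1.0 200 OK\r\n\r\nhello\n";
          close $conn;
      }
  }
  1 while wait() != -1;                   # parent just waits on the children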

But if you are running a front/back end Apache with a small number
of spare servers configured on the back end, there really won't be
any idle perl processes during the busy times you care about.  That
is, the backends will all be running or Apache will shut them down,
and there won't be any difference between MRU and LRU (the
difference would be which idle process waits longer - if none are
idle there is no difference).

>  Once the httpd's get into the kernel's run queue, they finish in the
>  same order they were put there, unless they block on a resource, get
>  timesliced or are pre-empted by a higher priority process.

Which means they don't finish in the same order if (a) you have
more than one CPU, (b) they do any I/O (including delivering the
output back, which they all do), or (c) some of them run long enough
to consume a timeslice.

>  Try it and see.  I'm sure you'll run more processes with speedycgi, but
>  you'll probably run a whole lot fewer perl interpreters and need less ram.

Do you have a benchmark that does some real work (at least a dbm
lookup) to compare against a front/back end mod_perl setup?
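
Something along the lines of this sketch is what I mean - one dbm lookup
per hit, runnable as plain CGI, under speedycgi, or under Apache::Registry
(the dbm file name and key handling are made up, and the dbm file would
need to be built beforehand):

  #!/usr/bin/perl -w
  # Hypothetical benchmark script: one dbm lookup per request.
  use strict;
  use Fcntl;
  use SDBM_File;

  # assumes /tmp/bench.dir and /tmp/bench.pag already exist
  tie my %db, 'SDBM_File', '/tmp/bench', O_RDONLY, 0644
      or die "tie: $!";
  my $key   = $ENV{QUERY_STRING} || 'default';
  my $value = defined $db{$key} ? $db{$key} : 'not found';
  untie %db;

  print "Content-type: text/plain\r\n\r\n";
  print "$key => $value\n";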

>  Remember that the httpd's in the speedycgi case will have very little
>  un-shared memory, because they don't have perl interpreters in them.
>  So the processes are fairly indistinguishable, and the LRU isn't as
>  big a penalty in that case.
>
>  This is why the original designers of Apache thought it was safe to
>  create so many httpd's.  If they all have the same (shared) memory,
>  then creating a lot of them does not have much of a penalty.  mod_perl
>  applications throw a big monkey wrench into this design when they add
>  a lot of unshared memory to the httpd's.

This is part of the reason the front/back end mod_perl configuration
works well, keeping the backend numbers low.  The real win when serving
over the internet, though, is that the perl memory is no longer tied
up while delivering the output back over frequently slow connections.
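
For reference, the skeleton of that kind of setup looks something like
this (directive values are illustrative only, not tuned recommendations):

  # frontend httpd.conf - lightweight, no mod_perl, talks to the slow clients
  Listen 80
  ProxyPass        /perl/ http://127.0.0.1:8080/perl/
  ProxyPassReverse /perl/ http://127.0.0.1:8080/perl/

  # backend httpd.conf - mod_perl, kept small and bound to localhost
  Listen 127.0.0.1:8080
  MaxClients      10
  MinSpareServers  1
  MaxSpareServers  1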

   Les Mikesell
       [EMAIL PROTECTED]

