> >  There's only one run queue in the kernel.  The first task ready to run is
 > >  put
 > >  at the head of that queue, and anything arriving afterwards waits.  Only
 > >  if that first task blocks on a resource or takes a very long time, or
 > >  a higher priority process becomes able to run due to an interrupt is that
 > >  process taken out of the queue.
 > 
 > Note that any I/O request that isn't completely handled by buffers will
 > trigger the 'blocks on a resource' clause above, which means that
 > jobs doing any real work will complete in an order determined by
 > something other than the CPU, and not strictly serialized.  Also, most
 > of my web servers are dual-CPU, so even CPU-bound processes may
 > complete out of order.

 I think it's much easier to visualize how MRU helps when you look at one
 thing running at a time.  MRU works best when every process runs to
 completion instead of blocking, but even if processes get timesliced or
 blocked, MRU still degrades gracefully.  You'll get more processes in
 use, but the numbers will still remain small.
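
 Here's a toy model of what I mean (just a sketch, not a benchmark; the
 pool size, request rate and service times are all made up).  With a pool
 of idle interpreters, MRU keeps handing work to the same few, while LRU
 cycles through every one of them:

   #!/usr/bin/perl
   # Toy simulation: one request arrives per tick, each takes 3 ticks,
   # and there are 10 idle interpreters to choose from.  MRU takes the
   # most-recently-freed interpreter, LRU the least-recently-freed one.
   use strict;

   sub simulate {
       my ($policy) = @_;
       my @free = (1 .. 10);                 # idle interpreters
       my (@busy, %used);
       for my $tick (1 .. 100) {
           my $worker = $policy eq 'MRU' ? pop @free : shift @free;
           $used{$worker} = 1;
           push @busy, [ $worker, 3 ];       # 3 ticks of "work"
           $_->[1]-- for @busy;              # everyone does one tick
           push @free, map { $_->[0] } grep { $_->[1] == 0 } @busy;
           @busy = grep { $_->[1] > 0 } @busy;
       }
       return scalar keys %used;
   }

   printf "MRU touched %d interpreters, LRU touched %d\n",
          simulate('MRU'), simulate('LRU');

 With those invented numbers it reports 3 interpreters touched under MRU
 and all 10 under LRU, which is the effect I'm describing.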

 > >  > Similarly, because of the non-deterministic nature of computer systems,
 > >  > Apache doesn't service requests on an LRU basis; you're comparing
 > >  > SpeedyCGI against a straw man.  Apache's servicing algorithm approaches
 > >  > randomness, so you need to build a comparison between forced-MRU and
 > >  > random choice.
 > >
 > >  Apache httpd's are scheduled on an LRU basis.  This was discussed early
 > >  in this thread.  Apache uses a file-lock for its mutex around the accept
 > >  call, and file-locking is implemented in the kernel using a round-robin
 > >  (fair) selection in order to prevent starvation.  This results in
 > >  incoming requests being assigned to httpd's in an LRU fashion.
 > 
 > But, if you are running a front/back-end Apache with a small number
 > of spare servers configured on the back end, there really won't be
 > any idle Perl processes during the busy times you care about.  That
 > is, the backends will all be running or Apache will shut them down,
 > and there won't be any difference between MRU and LRU (the
 > difference would be which idle process waits longer; if none are
 > idle there is no difference).

 If you can tune it just right so you never run out of RAM, then I think
 you could get the same performance as MRU on something like hello-world.
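
 To make the accept-mutex point above concrete, the pattern being
 described is roughly this (a simplified sketch, not Apache's actual
 code, which is C and picks its mutex type per platform; the port, lock
 file and child count are made up):

   #!/usr/bin/perl
   # Pre-forked children serialize accept() with an exclusive lock on a
   # shared lock file, so only one idle child is in accept() at a time.
   # Which waiting child gets the lock next is the kernel's choice; the
   # claim above is that this choice is round-robin, i.e. LRU.
   use strict;
   use Socket;
   use Fcntl qw(:flock);

   my $port      = 8080;               # made-up values
   my $lock_path = '/tmp/accept.lock';
   my $nchildren = 5;

   socket(my $listener, PF_INET, SOCK_STREAM, getprotobyname('tcp'))
       or die "socket: $!";
   setsockopt($listener, SOL_SOCKET, SO_REUSEADDR, 1) or die "setsockopt: $!";
   bind($listener, sockaddr_in($port, INADDR_ANY))    or die "bind: $!";
   listen($listener, SOMAXCONN)                       or die "listen: $!";

   for (1 .. $nchildren) {
       next if fork;                   # parent keeps forking, children fall through
       open my $lock, '>', $lock_path or die "open $lock_path: $!";
       while (1) {
           flock($lock, LOCK_EX) or die "flock: $!";   # queue up for the mutex
           my $ok = accept(my $client, $listener);
           flock($lock, LOCK_UN);
           next unless $ok;
           # ... read the request and write a response here ...
           close $client;
       }
   }
   wait() for 1 .. $nchildren;         # parent just waits on the children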

 > >  Once the httpd's get into the kernel's run queue, they finish in the
 > >  same order they were put there, unless they block on a resource, get
 > >  timesliced or are pre-empted by a higher priority process.
 > 
 > Which means they don't finish in the same order if (a) you have
 > more than one CPU, (b) they do any I/O (including delivering the
 > output back, which they all do), or (c) some of them run long enough
 > to consume a timeslice.
 > 
 > >  Try it and see.  I'm sure you'll run more processes with SpeedyCGI, but
 > >  you'll probably run a whole lot fewer Perl interpreters and need less RAM.
 > 
 > Do you have a benchmark that does some real work (at least a dbm
 > lookup) to compare against a front/back end mod_perl setup?

 No, but if you send me one, I'll run it.
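
 If it helps, the per-request script I'd expect is something along these
 lines (just a guess at what you mean by real work; the dbm file, key
 space and paths are invented):

   #!/usr/bin/perl
   # One dbm lookup per request.  Swap the first line for
   # #!/usr/bin/speedy to run it under SpeedyCGI, or pull it in via
   # Apache::Registry for the mod_perl case.
   use strict;

   my %db;
   dbmopen(%db, '/tmp/bench-db', 0644) or die "dbmopen: $!";
   my $key   = 'key' . int(rand(1000));      # one of 1000 invented keys
   my $value = defined $db{$key} ? $db{$key} : 'MISSING';
   dbmclose(%db);

   print "Content-type: text/plain\n\n";
   print "$key => $value\n";

 Something like
   perl -e 'dbmopen(%d,"/tmp/bench-db",0644); $d{"key$_"}="value$_" for 0..999; dbmclose(%d)'
 would populate the dbm file first.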
