According to [EMAIL PROTECTED]:
> > 
> > This is basically what you get with the 'two-apache' mode.
> 
> To be frank... it's not.  Not even close.

It is the same to the extent that you get a vast reduction in the
number of backend mod_perl processes.  As I mentioned before, I
see a fairly constant ratio of about 10:1, but it really depends
on how fast your script can deliver its output back to the front
end (some of mine are slow).  It is difficult to benchmark this on
a LAN, because the thing that determines the number of front-end
connections is the speed at which the content can be delivered back
to the client.  On internet connections you will see many slow
links, and tying up a heavyweight mod_perl process while a slow
client drains the response is the real problem.

> Especially in the case that
> the present site I'm working on where they have certain boxes for
> dynamic, others for static.

This is a perfect setup.  Let the box handling static content
also proxy the dynamic requests to the backend.
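As a sketch of that setup, the front end can hand dynamic URLs to the
backend with mod_proxy.  The hostnames, port, and URL prefix here are
made-up examples, not anything from the original setup:

```apache
# httpd.conf on the static-content box (front end).
# Requires mod_proxy; everything under /perl/ is forwarded to the
# mod_perl backend, while static files are served locally.
ProxyPass        /perl/ http://dynamic.example.com:8080/perl/
ProxyPassReverse /perl/ http://dynamic.example.com:8080/perl/
```

The `ProxyPassReverse` line rewrites redirects issued by the backend so
clients never see the internal hostname.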

> This is useful when you have one box
> running dynamic/static requests..., but it's not a solution, it's a
> workaround.  (I should say we're moving to have some boxes static,
> some dynamic... at present it's all jumbled up ;-()

Mod_rewrite is your friend when you need to spread things over
an arbitrary mix of boxes.  And it doesn't hurt much to
run an extra front end on your dynamic box either - it will
almost always be a win if clients are hitting it directly.
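A minimal mod_rewrite sketch of spreading requests over an arbitrary
mix of boxes; the hostnames and URL patterns are hypothetical:

```apache
# httpd.conf on the front end.  The [P] flag hands the rewritten
# request to mod_proxy, so different URL spaces can live on
# different backend boxes.
RewriteEngine On
RewriteRule ^/perl/(.*)$    http://dynamic1.example.com:8080/perl/$1    [P,L]
RewriteRule ^/reports/(.*)$ http://dynamic2.example.com:8080/reports/$1 [P,L]
```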

A fun way to convince yourself that the front/back end setup is
working is to run something called 'lavaps' (at least under Linux;
you can find this at www.freshmeat.net).  It shows your processes
as colored blobs floating around, with size proportional to memory
use and motion and brightness reflecting processor use.  It is
pretty dramatic on a box typically running 200 1MB frontends and
20 10MB backends.  You quickly get the idea of what would happen
with 200 10MB processes instead - or trying to funnel everything
through one perl backend.
  
> Well, now you're discussing threaded perl... a whole separate bag of
> tricks :).  That's not what I'm talking about... I'm talking about
> running a standard perl inside of a threaded environment.  I've done
> this, and thrown tens of thousands of requests at it with no problems.

You could simulate this by configuring the mod_perl backend to run
only one child and letting the backlog sit in the listen queue.  But
you will end up with the same problem: one slow request blocks
everything queued behind it.
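That simulation is just prefork tuning; a hypothetical httpd.conf
fragment for an Apache 1.3 mod_perl backend might look like:

```apache
# One child serializes all requests; the kernel's listen queue holds
# the backlog, exactly like a single-engine queue would.
StartServers    1
MinSpareServers 1
MaxSpareServers 1
MaxClients      1
ListenBacklog   511
```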

> I
> believe threaded perl is an attempt to allow multiple simultaneous
> requests going into a single perl engine that is "multi threaded".
> There are problems with this... and it's difficult to accomplish, and
> altogether a slower approach than queuing because of the context
> switching type overhead.  Not to mention the I/O issue of this...
> yikes! makes my head spin.

What happens in your model - or any single-threaded, single-process
model - when something takes longer than you expect?  If you are
just doing internal CPU processing and never have an error in your
programs you will be fine, but much of my mod_perl work involves
database connections and network I/O to yet another server for the
data to be displayed.  Some of these are slow, and I can't allow
other requests to block until all prior ones have finished.  The
apache/mod_perl model automatically keeps the right number of
processes running to handle the load, and since I mostly run
dual-processor machines I want at least a couple running all the
time.
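Keeping a couple of processes alive while letting the pool grow under
load is standard prefork tuning; a sketch with illustrative numbers
(not the actual values from this setup):

```apache
# Backend httpd.conf (Apache 1.3 prefork directives): keep at least
# two mod_perl children running on a dual-CPU box, and let Apache
# spawn more as the load rises.
StartServers     2
MinSpareServers  2
MaxSpareServers  5
MaxClients      20
```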

   Les Mikesell
    [EMAIL PROTECTED]
