According to Perrin Harkins:
> On Tue, 25 Apr 2000 [EMAIL PROTECTED] wrote:
> > With mod_proxy you really only need a few mod_perl processes because
> > no longer is the mod_perl ("heavy") apache process i/o bound.  It's
> > now CPU bound.  (or should be under heavy load)
> 
> I think for most of us this is usually not the case, since most web apps
> involve using some kind of external data source like a database or search
> engine.  They spend most of their time waiting on that resource rather
> than using the CPU.

If you have tried it and it didn't work for you, please post the
details to help us understand your real bottleneck.  Most of my hits
involve both another data source and a database, and I still see a
10:1 reduction in mod_perl processes with the proxy model.  The real
problem is slow client connections over the internet.  If you are
only serving a local LAN, a proxy won't help, but then you won't
have slow clients either.
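For anyone who hasn't tried it, the setup is roughly the following
(a minimal httpd.conf sketch; the port, URL prefix, and MaxClients
numbers here are placeholders, not a recommendation).  A stripped-down
Apache talks to the slow clients and relays finished responses, while
the mod_perl Apache only ever talks to the proxy on localhost:

    # frontend httpd.conf: lightweight Apache, no mod_perl compiled in
    Listen 80
    # cheap processes, so plenty of them for slow clients
    MaxClients 256
    ProxyPass        /app http://127.0.0.1:8042/app
    ProxyPassReverse /app http://127.0.0.1:8042/app

    # backend httpd.conf: the mod_perl server, reachable only from the proxy
    Listen 127.0.0.1:8042
    # heavy processes; a handful is usually enough
    MaxClients 20

The backend hands its response to the proxy at LAN speed and is free
for the next request; the cheap frontend process is the one that sits
there spoon-feeding a modem user.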

> Isn't it common wisdom that parallel processing is better for servers than
> sequential anyway, since it means most people don't have to wait as long
> for a response?

Only up to the point where the processes continue to run in parallel.
If you are CPU bound, this will be the number of CPUs.  If you are
doing disk access, it will be the number of heads that work
independently.  Going to a database server you will have the same
constraints, plus any transaction processing that forces
serialization.

> The sequential model is great if you're the next in line,
> but terrible if there are 50 big requests in front of you and yours is
> very small.  Parallelism evens things out.

Or it just adds more overhead.  Once you have enough parallelism to
keep your bottleneck busy, switching among jobs more often can only
make that 50th request finish later.  Anyway, with the proxy model
the cheap way to increase parallelism is to spread jobs across
different backend machines.
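One way to do that spreading from the same frontend is mod_rewrite's
random map feeding mod_proxy (again just a sketch; the host names,
port, and map file path are made up):

    # frontend httpd.conf: hand each /app request to a random backend
    RewriteEngine On
    RewriteMap  backends  rnd:/usr/local/apache/conf/backends.map
    RewriteRule ^/app/(.*)$  http://${backends:dynamic}/app/$1  [P,L]

    # /usr/local/apache/conf/backends.map is one line, with "|"
    # separating the choices:
    #   dynamic  backend1:8042|backend2:8042|backend3:8042

The [P] flag hands the rewritten URL to mod_proxy, so the frontend
still does the buffering for the slow clients; only the backend host
changes from request to request.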

  Les Mikesell
   [EMAIL PROTECTED] 
