Issac Goldstand wrote:
> CC-ing to [EMAIL PROTECTED] in the hopes that someone 
> (**cough**wrowe?**cough**)
> might shed some deeper insight into why things were/are done the way
> they are, and what, if anything, would be needed to be done to make
> things better.
> 
> I don't think that the problem is mod_perl, as much as the winnt MPM in
> Apache2.  The bottom line is that if anything goes wrong, you need the
> singleton child process to recycle itself, and very often in the case of
> mod_perl that can take a long time.  But the essential problem still
> exists with PHP, mod_python, even theoretically in a minimal vanilla
> httpd install.

As Mladen hints, this was just 'the way it was done' since the Apache 1.3
MPM was first created.  I began to set up more of the structures for running
parallel httpd child processes, and Mladen took this a step further with his
winxp mpm, but the bottom line is that resource sharing simply doesn't work
the same way on Windows as on Unix; things like interlocked parallel writes
to a single log file by two processes, for example, still need additional
work before they behave without a huge performance hit.

So you are really asking whether winxp will ever become more tolerant of
processes crashing due to bad code?  Well, the answer is yes: both
parallel-running child processes and hot standby have always been on the
table, provided someone has time to design them (as Mladen did with his
winxp mpm) and enough Windows folk have time to review the design
(something that hasn't happened yet with Mladen's).

Even that will never solve the problem that a dying process takes down dozens
of in-flight parallel requests with it.  And no, Windows will never perform all
that well with one worker thread per process.  If you want a very insecure MPM,
it would be possible to defer the process exit after a fault on one thread
(suspending that thread while the others run out), but that makes root exploits
far easier to design.
