Now, I've always wondered why it takes 30-60 seconds or more for the test suite to start under the worker MPM, compared to only about 10 under the prefork MPM. I did a simple
# term1 % tail -F t/logs/error_log
# term2 % env MOD_PERL_TRACE=all t/TEST -start
and voila, you can see with the naked eye that we spend 5 seconds or more (depending on your CPU) in each call to modperl_interp_new(), and there are several of these calls. That looks very impractical to me.
Applying a simple trace to modperl_interp.c:

+ MP_TRACE_i(MP_FUNC, "perl_clone start\n");
  interp->perl = perl_clone(perl, clone_flags);
+ MP_TRACE_i(MP_FUNC, "perl_clone end\n");

easily shows that perl_clone() is the call that takes an impractical amount of time to complete.
Where does this bring us? When a new request comes in while all interpreters are busy and the pool quota is not yet full, a new interpreter will be cloned, and your request will be served in 6-10 seconds instead of 50 msecs. Now imagine a busy webserver with no spare CPU cycles: perl_clone() may take 20 seconds or more. Indeed, running `perl -le 'do { $a=5; $b = $a**5 } while 1'` in parallel slows perl_clone() down by a factor of two. Running a few of those CPU hogs slows it down in proportion to the number of processes fighting for CPU slices.
I hope I'm crying wolf and things are better than they look to me; please tell me that I'm totally wrong.
The only bright side I can see at the moment is that most people won't have 150 modules loaded. But even 50 modules will scale badly enough.
Randy, do you see the same slow behavior under win32? I expect it to be the case.
__________________________________________________________________
Stas Bekman            JAm_pH ------> Just Another mod_perl Hacker
http://stason.org/     mod_perl Guide ---> http://perl.apache.org
mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org   http://ticketmaster.com
