Re: load average: 24.07, 14.76, 9.20
Philip Mak wrote:
> Hi all, I've been having the following problem with my machine
> (400MHz, 192 MB RAM, 8.4 GB SCSI disk):
>
>   1:27am up 3 days, 7:33, 8 users, load average: 24.07, 14.76, 9.20
>
> Every once in a while, the load average gets up to a very high level
> (at which point programs start getting "Out of memory!" errors, etc.).
> I don't really know what to do to fix this, other than typing
> /sbin/reboot. Looking at "top" doesn't show any very big processes,
> so I suspect it might be caused by a large number of small processes.

Use Apache::Resource, PerlModule everything you can (especially Apache::ASP), and use Apache::ASP->Loader() to precompile your scripts. If you are getting out-of-memory errors, make sure your MaxRequestsPerChild is low (200 is common) and MaxClients is low (100 is common), and put a mod_proxy server in front to help offload requests.

If you are really curious about what your httpd is doing, fire it up with -X (standalone mode) and strace it in a development environment, replaying a representative sample of requests from your access_log.

--Josh

P.S. If you need a blitz tune of your environment, I could make some time for consulting work; I deal well with Linux, Solaris, MySQL, and Oracle.

--
Joshua Chamas
Chamas Enterprises Inc.
NodeWorks - free web link monitoring
Huntington Beach, CA USA
http://www.nodeworks.com    1-714-625-4051
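For concreteness, a minimal sketch of that tuning under Apache 1.3 / mod_perl 1.x; the paths, port, and limit values below are illustrative placeholders, not settings from Josh's setup:

    ## httpd.conf (mod_perl backend, e.g. listening on port 8080)
    MaxClients          100
    MaxRequestsPerChild 200
    PerlRequire /usr/local/apache/conf/startup.pl

    ## startup.pl -- preload shared code and precompile ASP scripts
    use Apache::ASP ();   # "PerlModule everything you can"
    # Precompile all .asp/.htm scripts under the doc root at server start,
    # per the Apache::ASP docs (doc root path is illustrative)
    Apache::ASP->Loader('/usr/local/apache/htdocs', "(asp|htm)\$", Debug => 1);
    1;

    ## httpd.conf (lightweight mod_proxy front end on port 80)
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/

The point of the front-end proxy is that it serves static files and buffers slow clients itself, so the heavyweight mod_perl children finish each request quickly and go back into the pool.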
Re: load average: 24.07, 14.76, 9.20
Joshua Chamas wrote:
> Use Apache::Resource, PerlModule everything you can (especially
> Apache::ASP), and use Apache::ASP->Loader() to precompile your
> scripts. If you are getting out-of-memory errors, make sure your
> MaxRequestsPerChild is low (200 is common) and MaxClients is low
> (100 is common), and put a mod_proxy server in front to help
> offload requests.

I find Apache::SizeLimit is more effective than setting MaxRequestsPerChild, because it won't kill off your well-behaved processes, and thus spares you the extra process spawning, re-opening of databases, etc.

Make sure you don't set Apache::Resource to kill anything that's close to "normal". It does a harsh kill, which can leave your users with a "document contains no data" error and possibly mess up open dbm files, etc. It works well for catching runaways, though.

- Perrin
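A minimal sketch of the Apache::SizeLimit setup Perrin describes, for mod_perl 1.x; the 12 MB ceiling is just an example value:

    ## startup.pl
    use Apache::SizeLimit ();
    # Sizes are in KB. A child that exceeds this exits cleanly after
    # finishing its current request instead of being hard-killed mid-request.
    $Apache::SizeLimit::MAX_PROCESS_SIZE = 12000;

    ## httpd.conf -- run the size check at the fixup phase of each request
    PerlFixupHandler Apache::SizeLimit

Because the over-limit child is asked to exit after the current request completes (via child_terminate) rather than being killed outright, the client never sees a half-written response, which is exactly the failure mode Perrin warns about with Apache::Resource.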
Re: load average: 24.07, 14.76, 9.20
How much of a performance penalty does using Apache::SizeLimit incur? Is there some quantitative way of setting how often to check the process size with "$Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 10;" that results in the best performance?

Perrin Harkins wrote:
> I find Apache::SizeLimit is more effective than setting
> MaxRequestsPerChild, because it won't kill off your well-behaved
> processes, and thus spares you the extra process spawning,
> re-opening of databases, etc.
>
> Make sure you don't set Apache::Resource to kill anything that's
> close to "normal". It does a harsh kill, which can leave your users
> with a "document contains no data" error and possibly mess up open
> dbm files, etc. It works well for catching runaways, though.
>
> - Perrin
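If the overhead does matter on a busy box, one knob (using the same package variable named in the question) is to check only every Nth request per child; a sketch with illustrative values:

    ## startup.pl -- check process size only on every 10th request,
    ## trading a coarser limit for fewer /proc reads
    use Apache::SizeLimit ();
    $Apache::SizeLimit::MAX_PROCESS_SIZE       = 12000;  # KB, illustrative
    $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 10;

The trade-off is that a runaway child can grow unchecked for up to N-1 requests, so N is effectively a dial between per-request overhead and how promptly a bloated child gets reaped.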
Re: load average: 24.07, 14.76, 9.20
Perrin Harkins wrote:

Buddy Lee Haystack wrote:
> How much of a performance penalty does using Apache::SizeLimit incur?

Not enough that you'll notice it. It really depends on two things:

  - what OS you're on
  - how complex your scripts are

Here's the code that does the size check, which varies depending on your OS:

    # return process size (in KB)
    sub linux_size_check {
        my $size = 0;
        local (*FH);
        if (open(FH, "/proc/self/status")) {
            while (<FH>) {
                last if (($size) = (/^VmRSS:\s+(\d+)/));
            }
            close(FH);
        }
        else {
            error_log("Fatal Error: couldn't access /proc/self/status");
        }
        return ($size);
    }

    sub solaris_2_6_size_check {
        my $size = -s "/proc/self/as"
            or error_log("Fatal Error: /proc/self/as doesn't exist or is empty");
        $size = int($size / 1024);  # to get it into KB
        return ($size);
    }

    sub bsd_size_check {
        return ((BSD::Resource::getrusage())[2]);
    }

As you can see, your mod_perl handler would have to be _extremely_ simple for this code to take a non-trivial proportion of its run-time.
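If you want an actual number for your own box, a standalone sketch like this (run outside Apache, using the core Benchmark module and the same Linux check logic Perrin posted; the iteration count is arbitrary) times one check in isolation:

    # bench_sizecheck.pl -- rough per-call cost of the Linux size check
    use strict;
    use Benchmark qw(timethis);

    sub linux_size_check {
        my $size = 0;
        if (open(my $fh, "<", "/proc/self/status")) {
            while (<$fh>) {
                last if ($size) = /^VmRSS:\s+(\d+)/;
            }
            close($fh);
        }
        return $size;
    }

    timethis(10_000, \&linux_size_check);

Divide the reported wallclock time by the iteration count and compare that with your average request time; that ratio is the real answer to "how much of a penalty", and it's what CHECK_EVERY_N_REQUESTS further divides down.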