On 11.03.2010 at 22:41, ARTHUR GOLDBERG wrote:

> Running Perl programs in mod_perl in Apache (2.2) on RHEL:
>
>> [10 a...@virtualplant:/etc]$ cat /etc/redhat-release
>> Red Hat Enterprise Linux Server release 5.4 (Tikanga)
>> [11 a...@virtualplant:/etc]$ uname -r
>> 2.6.18-164.11.1.el5
>
> Occasionally a process grows so large that it freezes the system:
I had a similar problem, ending with the kernel message "no more processes to kill, giving up". So I installed Apache2::Resource and set:

    # set limit to 500 MB
    PerlSetEnv PERL_RLIMIT_AS 500
    PerlChildInitHandler Apache2::Resource

and the problem hasn't appeared again. I don't think Apache2::SizeLimit can handle this issue, since it checks the size after the request. That may help if you have a memory leak, but my problem was caused by a single request: the process grew very fast during that one request, eating up all memory. The RLIMIT is enforced by the OS, so the process gets killed when it grows too large.

>> several of them will use so much memory that kswapd takes all the CPU:
>>
>>   PID USER PR NI VIRT RES SHR S %CPU %MEM   TIME+ COMMAND
>>   349 root 10 -5    0   0   0 R 37.7  0.0 5:53.56 kswapd1
>>   348 root 20 -5    0   0   0 R 35.8  0.0 5:57.67 kswapd0
>
> and
>
>> from /etc/httpd/logs/error_log
>> Feb 24 14:35:32 virtualplant setroubleshoot: SELinux is preventing the http daemon from connecting to network port 3306. For complete SELinux messages, run sealert -l 0afcfa46-07b8-48eb-aec3-e7dda9872b84
>> Feb 24 14:35:34 virtualplant avahi-daemon[3133]: Invalid query packet.
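The mechanism behind PERL_RLIMIT_AS is the OS-level address-space limit, so it can be sketched outside mod_perl entirely. A minimal Python illustration (the stdlib `resource` module wraps the same setrlimit(2) call; the 1 GiB / 2 GiB sizes are made up for the demo and assume a 64-bit Linux box):

```python
import resource

# Cap this process's address space (soft and hard) at 1 GiB -- the same
# mechanism PERL_RLIMIT_AS uses, except the values here are bytes, not MB.
LIMIT = 1 * 1024 ** 3
resource.setrlimit(resource.RLIMIT_AS, (LIMIT, LIMIT))

refused = False
try:
    blob = bytearray(2 * 1024 ** 3)  # 2 GiB -- well beyond the cap
except MemoryError:
    # The kernel refused the mapping at allocation time; no swap storm,
    # no OOM killer, the process simply gets an allocation failure.
    refused = True

print("allocation refused:", refused)
```

The point is that the limit is enforced synchronously at allocation time, which is why it stops a single runaway request where an after-the-request check like Apache2::SizeLimit cannot.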
>> Feb 24 14:55:06 virtualplant last message repeated 6 times
>> Feb 24 15:00:44 virtualplant last message repeated 3 times
>> Feb 24 15:00:55 virtualplant last message repeated 5 times
>> Feb 24 15:01:21 virtualplant dhclient: DHCPREQUEST on eth0 to 128.122.128.24 port 67
>> Feb 24 15:09:51 virtualplant kernel: hald-addon-stor invoked oom-killer: gfp_mask=0xd0, order=0, oomkilladj=0
>> Feb 24 15:09:51 virtualplant kernel:
>> Feb 24 15:09:51 virtualplant kernel: Call Trace:
>> Feb 24 15:09:51 virtualplant kernel: [<ffffffff800c6076>] out_of_memory+0x8e/0x2f3
>> Feb 24 15:09:51 virtualplant kernel: [<ffffffff8000f487>] __alloc_pages+0x245/0x2ce
>> Feb 24 15:09:51 virtualplant kernel: [<ffffffff80017812>] cache_grow+0x133/0x3c1
>> Feb 24 15:09:51 virtualplant kernel: [<ffffffff8005c2e5>] cache_alloc_refill+0x136/0x186
>> Feb 24 15:09:51 virtualplant kernel: [<ffffffff8000ac12>] kmem_cache_alloc+0x6c/0x76
>> Feb 24 15:09:51 virtualplant kernel: [<ffffffff80012658>] getname+0x25/0x1c2
>> Feb 24 15:09:51 virtualplant kernel: [<ffffffff80019cba>] do_sys_open+0x17/0xbe
>> Feb 24 15:09:51 virtualplant kernel: [<ffffffff8005d28d>] tracesys+0xd5/0xe0
>
> Then I need to cycle the box's power.
>
> I'm implementing a multi-layer defense against this.
>
> 1) Try to prevent input that might blow up a process. However, this will be imperfect.
>
> 2) Kill Apache httpd processes occasionally, to control the effect of slow Perl memory leaks. I'll do this by setting the MPM Worker MaxRequestsPerChild to some modest value. (I'll try 100.)
>
> 3) Kill processes that grow too big, which is what this message is about.
>
> In bash, ulimit sets user resource limits. With mod_perl on Apache, Apache2::Resource controls the size of httpd processes. Both eventually call setrlimit(int resource, const struct rlimit *rlim).
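For defense 2) above, a minimal worker-MPM fragment might look like the following (directive name as in Apache 2.2; the value 100 is the poster's own suggestion, and in practice you would tune it against how fast the children leak):

```apache
<IfModule mpm_worker_module>
    # Recycle each child process after it has served 100 requests, so
    # slow per-request leaks cannot accumulate. 0 would mean "never exit".
    MaxRequestsPerChild 100
</IfModule>
```

The trade-off is fork/startup cost versus leak containment: a smaller value bounds leaked memory more tightly but recycles children more often.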
> With Apache2::Resource one can put this in httpd.conf:
>
>> PerlModule Apache2::Resource
>> # set child memory limit in megabytes
>> # RLIMIT_AS (address space) will work to limit the size of a process on Linux
>> PerlSetEnv PERL_RLIMIT_AS 1000:1100
>>
>> # this loads Apache2::Resource for each new httpd; that will set the ulimits from the Perl environment variables
>> PerlChildInitHandler Apache2::Resource
>
> OK, that kills big processes. What happens next is that Perl runs out of memory (outputs "Out of memory!") and calls the __DIE__ signal handler. So my plan is to catch the signal, redirect the browser to an error page, and finally kill the process. Before the HTTP request handler is called, I say:
>
>> $SIG{__DIE__} = \&oversizedHandler;
>
> Then when __DIE__ fires, the code below runs.
>
>> use CGI;
>> use English;
>> use BSD::Resource qw(setrlimit getrlimit get_rlimits getrusage RLIMIT_AS);
>>
>> # SIG handler called when __DIE__ fires
>> sub oversizedHandler {
>>     my $msg = shift;
>>     chomp $msg;
>>     print STDERR "handler in process $PID called with '$msg'\n";
>>
>>     # raise the soft AS limit to the hard limit, so that we have some RAM
>>     # to work with; here that frees up 100 MB, much more than needed
>>     my $success = setrlimit(RLIMIT_AS, 1100 * 1024 * 1024, 1100 * 1024 * 1024);
>>     if ($success) {
>>         print STDERR "raised soft limit to 1100*1024*1024\n";
>>     }
>>
>>     my $cgi = CGI->new;
>>     print $cgi->redirect(-location =>
>>         'http://website.com/program.cgi?param1=value1&param2=value2');
>>
>>     CORE::exit();
>> }
>
> Here's the problem: nothing goes to STDOUT, so I cannot write to the browser.
>
> Thus, my question is: how can one kill an oversized process and still provide feedback to the user at the browser?
>
> One alternative seems to be to use the special variable $^M (see the perlvar manpage for more details), as recommended by the mod_perl book. How does one determine whether -DPERL_EMERGENCY_SBRK is defined, and if it is, does STDOUT still work?
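The "raise the soft limit back up inside the handler" trick can be demonstrated outside mod_perl. A Python analogue of the idea, using the stdlib `resource` module (the sizes, the handler name, and the handler body are illustrative only, standing in for the Perl `oversizedHandler` above; assumes 64-bit Linux):

```python
import resource

# Soft limit 1 GiB, hard limit 1.25 GiB -- analogous to
# "PerlSetEnv PERL_RLIMIT_AS 1000:1100" (soft:hard, in MB there).
SOFT = 1 * 1024 ** 3
HARD = SOFT + 256 * 1024 ** 2
resource.setrlimit(resource.RLIMIT_AS, (SOFT, HARD))

def oversized_handler():
    # Raise the soft limit up to the hard limit so the handler itself has
    # headroom to allocate (format an error page, log, send a redirect ...).
    # A process may always raise its soft limit up to its hard limit.
    resource.setrlimit(resource.RLIMIT_AS, (HARD, HARD))
    scratch = bytearray(1024 * 1024)  # small allocation now succeeds again
    return "would send the browser a redirect to an error page here"

recovered = None
try:
    blob = bytearray(2 * 1024 ** 3)  # blows past the soft limit
except MemoryError:
    recovered = oversized_handler()

print(recovered)
```

This only shows the resource-limit half of the question; it does not answer why STDOUT is already unusable in the Perl `__DIE__` handler, which is the part the poster is actually asking about.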
> BR
>
> A
>
> Arthur P. Goldberg
>
> Research Scientist in Bioinformatics
> Plant Systems Biology Laboratory
> www.virtualplant.org
>
> Visiting Academic
> Computer Science Department
> Courant Institute of Mathematical Sciences
> www.cs.nyu.edu/artg
>
> a...@cs.nyu.edu
> New York University
> 212 995-4918
> Coruzzi Lab
> 8th Floor Silver Building
> 1009 Silver Center
> 100 Washington Sq East
> New York NY 10003-6688

Best regards (Mit freundlichen Grüßen),

Rolf Schaufelberger
Managing Director (Geschäftsführer)

plusW GmbH
Vorstadtstr. 61-67        Tel. 07181 47 47 305
73614 Schorndorf          Fax. 07181 47 45 344

www.plusw.de
www.mypixler.com
www.calendrino.de