Re: Callback called exit.
Brian Hirt wrote:
> I've typed up my suggestions to the troubleshooting doc, and incorporated Glenn's suggestions too. Stas wants me to post to the list to see if there are any comments / corrections. I wasn't sure if I should put a comment in about __DIE__ handlers and their use with evals; it seemed like that might be too general Perl.

Thanks Brian, very late but now committed.

Index: src/docs/1.0/guide/troubleshooting.pod
===================================================================
RCS file: /home/cvspublic/modperl-docs/src/docs/1.0/guide/troubleshooting.pod,v
retrieving revision 1.28
diff -u -r1.28 troubleshooting.pod
--- src/docs/1.0/guide/troubleshooting.pod	5 May 2004 03:29:38 -	1.28
+++ src/docs/1.0/guide/troubleshooting.pod	6 May 2004 22:40:07 -
@@ -589,27 +589,45 @@
 If something goes really wrong with your code, Perl may die with an
 "Out of memory!" message and/or "Callback called exit". Common causes of this
-are never-ending loops, deep recursion, or calling an
-undefined subroutine. Here's one way to catch the problem: See Perl's
-INSTALL document for this item:
+are never-ending loops, deep recursion, or calling an undefined subroutine.
 
-  =item -DPERL_EMERGENCY_SBRK
+If you are using perl 5.005 or later, and perl is compiled to use its own
+malloc routines, you can trap out of memory errors by setting aside an extra
+memory pool in the special variable $^M. By default perl uses the operating
+system malloc on many popular systems, so unless you build perl with
+'usemymalloc=y' you probably won't be able to use $^M. To check if your mod_perl
+was compiled to use perl's internal malloc(), stick the following code in a
+handler and see if usemymalloc is set to 'y':
+
+  use Config;
+
+  print Config::myconfig();
+
+Here is an explanation of $^M from perlvar(1):
+
+  $^M   By default, running out of memory is an untrap-
+        pable, fatal error. However, if suitably built,
+        Perl can use the contents of $^M as an emergency
+        memory pool after die()ing. Suppose that your
+        Perl were compiled with -DPERL_EMERGENCY_SBRK and
+        used Perl's malloc. Then
+
+            $^M = 'a' x (1 << 16);
+
+        would allocate a 64K buffer for use in an emer-
+        gency. See the INSTALL file in the Perl distribu-
+        tion for information on how to enable this option.
+        To discourage casual use of this advanced feature,
+        there is no English long name for this variable.
 
-  If PERL_EMERGENCY_SBRK is defined, running out of memory need not be a
-  fatal error: a memory pool can allocated by assigning to the special
-  variable $^M. See perlvar(1) for more details.
-
-If you compile with that option and add 'C<use Apache::Debug level
+If your perl installation supports $^M and you add 'C<use Apache::Debug level
 =E<gt> 4;>' to your Perl script, it will allocate the C<$^M> emergency
 pool and the C<$SIG{__DIE__}> handler will call C<Carp::confess>, giving
 you a stack trace which should reveal where the problem is. See the
 C<Apache::Resource> module for ways to control httpd processes.
 
-Note that Perl 5.005 and later have C<PERL_EMERGENCY_SBRK> turned on
-by default.
-
-The other trick is to have a startup script initialize
+Another trick is to have a startup script initialize
 C<Carp::confess>, like so:
 
   use Carp ();
@@ -617,6 +635,24 @@
 this way, when the real problem happens, C<Carp::confess> doesn't eat
 memory in the emergency pool (C<$^M>).
+
+Some other mod_perl users have reported that this works well for them:
+
+  ## Allocate 64K as an emergency memory pool for use in out of memory situation
+  $^M = 0x00 x 65536;
+
+  ## Little trick to initialize this routine here so that in the case of OOM,
+  ## compiling this routine doesn't eat memory from the emergency memory pool $^M
+  use CGI::Carp ();
+  eval { CGI::Carp::confess('init') };
+
+  ## Importing CGI::Carp sets $main::SIG{__DIE__} = \&CGI::Carp::die;
+  ## Override that to additionally give a stack backtrace
+  $main::SIG{__DIE__} = \&CGI::Carp::confess;
+
+Discussion of $^M has come up on PerlMonks, and there is speculation that $^M is a
+forgotten feature that's not well supported. See
+http://perlmonks.org/index.pl?node_id=287850 for more information.
 
 =head2 server reached MaxClients setting, consider raising the MaxClients setting

-- 
__________________________________________________________________
Stas Bekman            JAm_pH ------> Just Another mod_perl Hacker
http://stason.org/     mod_perl Guide ---> http://perl.apache.org
mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org   http://ticketmaster.com

--
Report problems: http://perl.apache.org/bugs/
Mail list info: http://perl.apache.org/maillist/modperl.html
List etiquette: http://perl.apache.org/maillist/email-etiquette.html
Re: Callback called exit.
On Thu, May 06, 2004 at 10:55:14AM -0600, Brian Hirt wrote:
> On May 6, 2004, at 10:27 AM, Perrin Harkins wrote:
>> On Wed, 2004-05-05 at 22:11, Brian Hirt wrote:
>>> I've been running across a problem lately where a child process terminates because of an out of memory error. It prints "Out of Memory" once, then the process sucks up all available cpu printing "Callback called exit." to the log file until it hits its 2GB max size.
>> I'm just guessing here, but this is probably because apache is trying to spawn new processes, and they keep dying because there's no memory.
> Thanks for the response, interesting insight into the history of $^M. When I've seen this happen, it's the same PID spewing the messages; there is no forking going on. The system isn't actually out of memory, and there is plenty of it available for the parent httpd process to fork. The child process has an rlimit set, which is why it's getting an out of memory error. I initially set the rlimit because at one point in the past the ImageMagick module would every now and then go crazy and consume all available memory, which would bring down everything.

Yes, thanks as well. I didn't know how ineffective that was, and am glad I wasn't setting aside too much memory for it.

Brian, if you can trigger the OOM and "Callback called exit" loop, would you try my example mod_perl_startup.pl and use this at the end:

  ## Importing CGI::Carp sets $main::SIG{__DIE__} = \&CGI::Carp::die;
  ## Override that to additionally give a stack backtrace
  $main::SIG{__DIE__} = sub { undef $^M; CGI::Carp::confess; };

The 'undef $^M' will mark the memory as unused (as long as nothing else has a reference to it), and if Perl garbage collection kicks in before the looping problem, then you might have some memory to work with. I don't know the threshold offhand that Perl uses to trigger freeing the memory back to the system when using the system malloc, but a couple of MBs would most likely do it.

Of course, this assumes that the loop is occurring somewhere after die() is called, and after this routine is called. Worth a shot...

Cheers, Glenn
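[Editor's sketch] Glenn's `undef $^M` suggestion, combined with his earlier startup snippet, might look like the following as a complete mod_perl_startup.pl. This is only a sketch, not a tested configuration: it assumes a perl built with usemymalloc=y (otherwise assigning to $^M is a no-op) and that CGI::Carp is installed.

```perl
use strict;

## Allocate 64K as an emergency memory pool;
## only effective when perl uses its own malloc (usemymalloc=y)
$^M = 0x00 x 65536;

## Compile CGI::Carp::confess now, so that hitting OOM later
## doesn't have to eat memory from the emergency pool to compile it
use CGI::Carp ();
eval { CGI::Carp::confess('init') };

## On die(), release the reserve pool (so confess has memory to
## work with) and emit a stack backtrace to the error log
$main::SIG{__DIE__} = sub {
    undef $^M;
    CGI::Carp::confess(@_);
};

1;
```

Loaded from httpd.conf with `PerlRequire /path/to/mod_perl_startup.pl`, as in Glenn's original message.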
Re: Callback called exit.
On Wed, 2004-05-05 at 22:11, Brian Hirt wrote:
> I've been running across a problem lately where a child process terminates because of an out of memory error. It prints "Out of Memory" once, then the process sucks up all available cpu printing "Callback called exit." to the log file until it hits its 2GB max size.

I'm just guessing here, but this is probably because apache is trying to spawn new processes, and they keep dying because there's no memory.

> = See Perl's INSTALL document for this item:

This might have been true at one point. Newer versions of perl (5.6 and 5.8) have no reference to this option in the INSTALL document.

There was discussion about this on PerlMonks: http://perlmonks.org/index.pl?node_id=287850

The most informative quote was this one:

> $^M was an Ilya-thing, which he added in during his work on perl's built-in memory allocation system. Nobody else ever really cared about it so far as I remember from p5p, so it suffers some from both neglect and unfinishedness. I remember in the 5.004/5.005 days that it used to work, but I honestly don't remember how, nor whether it ever worked for anyone other than Ilya.

- Perrin
Re: Callback called exit.
I too followed the advice, but it did nothing but lead me down the wrong path. The advice should be updated. My point is that $^M does absolutely nothing unless you use perl's malloc, which isn't true for most common perl installations these days. Compiling with PERL_EMERGENCY_SBRK doesn't help either, because it's the default if you usemymalloc, and useless if you don't. You MUST compile perl with -Dusemymalloc=y. A simple grep in the perl hints directory shows that many popular systems such as linux, freebsd and openbsd default to the system malloc, which disables the functionality of $^M.

I'd simply like to see the documentation updated, and I'm happy to do it. I know it would have saved me hours and hours of headaches. The documentation as it stands now is misleading. Of course, if you use perl's malloc, the advice helps.

I'd still like to know why mod_perl can get into an infinite loop writing "Callback called exit." In perl.c, when that happens my_exit_jump() is called, which should presumably exit the process, but somehow that doesn't happen and some sort of infinite loop occurs outside of my code that fills the log up with gigabytes of 'Callback called exit' messages.

regards, brian

On May 5, 2004, at 11:00 PM, Glenn wrote:
> On Wed, May 05, 2004 at 08:11:56PM -0600, Brian Hirt wrote:
>> I've been running across a problem lately where a child process terminates because of an out of memory error. It prints "Out of Memory" once, then the process sucks up all available cpu printing "Callback called exit." to the log file until it hits its 2GB max size. I have some Apache::Resource limits set, and they probably need to be raised, but the way the error is handled is not very graceful. I'd expect the child to just terminate after reporting the first error message. I'm not sure if this is a perl problem or a mod_perl problem. I'd still like to figure out how to prevent the repeating message from happening.
>> Anyway, I've been pulling my hair out trying to prevent this, and I've finally figured out how to trap this. I have some suggestions for the documentation, because the following url could use some help: http://perl.apache.org/docs/1.0/guide/troubleshooting.html#Callback_called_exit
>
> I've followed that advice and explicitly allocated memory into $^M. I have the following in my mod_perl_startup.pl, which I run from httpd.conf with PerlRequire /path/to/mod_perl_startup.pl
>
> If 64K is not enough for you, try increasing the allocation.
>
> Cheers, Glenn
>
>   use strict;
>
>   ## --
>   ## This section is similar in scope to Apache::Debug.
>   ## Delivers a stack backtrace to the error log when perl code dies.
>   ## --
>
>   ## Allocate 64K as an emergency memory pool for use in out of memory situation
>   $^M = 0x00 x 65536;
>
>   ## Little trick to initialize this routine here so that in the case of OOM,
>   ## compiling this routine doesn't eat memory from the emergency memory pool $^M
>   use CGI::Carp ();
>   eval { CGI::Carp::confess('init') };
>
>   ## Importing CGI::Carp sets $main::SIG{__DIE__} = \&CGI::Carp::die;
>   ## Override that to additionally give a stack backtrace
>   $main::SIG{__DIE__} = \&CGI::Carp::confess;
Re: Callback called exit.
On May 6, 2004, at 10:27 AM, Perrin Harkins wrote:
> On Wed, 2004-05-05 at 22:11, Brian Hirt wrote:
>> I've been running across a problem lately where a child process terminates because of an out of memory error. It prints "Out of Memory" once, then the process sucks up all available cpu printing "Callback called exit." to the log file until it hits its 2GB max size.
> I'm just guessing here, but this is probably because apache is trying to spawn new processes, and they keep dying because there's no memory.

Thanks for the response, interesting insight into the history of $^M. When I've seen this happen, it's the same PID spewing the messages; there is no forking going on. The system isn't actually out of memory, and there is plenty of it available for the parent httpd process to fork. The child process has an rlimit set, which is why it's getting an out of memory error. I initially set the rlimit because at one point in the past the ImageMagick module would every now and then go crazy and consume all available memory, which would bring down everything.
Re: Callback called exit.
Brian Hirt wrote:
> I too followed the advice, but it did nothing but lead me down the wrong path. The advice should be updated. My point is that $^M does absolutely nothing unless you use perl's malloc, which isn't true for most common perl installations these days. Compiling with PERL_EMERGENCY_SBRK doesn't help either, because it's the default if you usemymalloc, and useless if you don't. You MUST compile perl with -Dusemymalloc=y. A simple grep in the perl hints directory shows that many popular systems such as linux, freebsd and openbsd default to the system malloc, which disables the functionality of $^M.
> I'd simply like to see the documentation updated, and I'm happy to do it. I know it would have saved me hours and hours of headaches. The documentation as it stands now is misleading. Of course, if you use perl's malloc, the advice helps.

Doc patches are always welcome here. Please patch against the source pod. http://perl.apache.org/download/docs.html#Download

> I'd still like to know why mod_perl can get into an infinite loop writing "Callback called exit." In perl.c, when that happens my_exit_jump() is called, which should presumably exit the process, but somehow that doesn't happen and some sort of infinite loop occurs outside of my code that fills the log up with gigabytes of 'Callback called exit' messages.

Normally that happens when perl gets its call stack messed up and it starts to loop. I know I hit that myself while developing mp2, when I was trying to write my own version of die/exit/etc, which I quickly gave up on. It is possible that there is a bug in perl, which gets triggered only in certain situations. If you can give p5p a reproducible case, I'm sure it'll be fixed.
Re: Callback called exit.
Glenn wrote:
[...]
> http://perl.apache.org/docs/1.0/guide/troubleshooting.html#Callback_called_exit
> I've followed that advice and explicitly allocated memory into $^M. I have the following in my mod_perl_startup.pl, which I run from httpd.conf with PerlRequire /path/to/mod_perl_startup.pl
> If 64K is not enough for you, try increasing the allocation.
[...]

Brian, you may want to include Glenn's useful tips as well in the patch.
Re: Callback called exit.
I've typed up my suggestions to the troubleshooting doc, and incorporated Glenn's suggestions too. Stas wants me to post to the list to see if there are any comments / corrections. I wasn't sure if I should put a comment in about __DIE__ handlers and their use with evals; it seemed like that might be too general Perl.

[patch against src/docs/1.0/guide/troubleshooting.pod, quoted in full in Stas's commit reply above]

regards, Brian
Re: Callback called exit.
Brian Hirt wrote:
> I've typed up my suggestions to the troubleshooting doc, and incorporated Glenn's suggestions too. Stas wants me to post to the list to see if there are any comments / corrections. I wasn't sure if I should put a comment in about __DIE__ handlers and their use with evals; it seemed like that might be too general Perl.

While true for the rest of the documentation, there is no limit to what you can put into the troubleshooting section, as long as it helps to troubleshoot the issue :)
Callback called exit.
I've been running across a problem lately where a child process terminates because of an out of memory error. It prints "Out of Memory" once, then the process sucks up all available cpu printing "Callback called exit." to the log file until it hits its 2GB max size. I have some Apache::Resource limits set, and they probably need to be raised, but the way the error is handled is not very graceful. I'd expect the child to just terminate after reporting the first error message. I'm not sure if this is a perl problem or a mod_perl problem. I'd still like to figure out how to prevent the repeating message from happening.

Anyway, I've been pulling my hair out trying to prevent this, and I've finally figured out how to trap this. I have some suggestions for the documentation, because the following url could use some help: http://perl.apache.org/docs/1.0/guide/troubleshooting.html#Callback_called_exit

= Note that Perl 5.005 and later have PERL_EMERGENCY_SBRK turned on by default.

This is only true if perl was built to use its own malloc. However, usemymalloc=y is not the default for many systems, because they assume the OS version is probably a better implementation (which could be true). And when perl's internal malloc is not used, none of the suggestions for solving the out of memory problem or the repeated "Callback called exit" messages work.

= See Perl's INSTALL document for this item:

This might have been true at one point. Newer versions of perl (5.6 and 5.8) have no reference to this option in the INSTALL document.

= =item -DPERL_EMERGENCY_SBRK

A better quotation would be from perlvar.pod, which states the crux of the matter: "Suppose that your Perl were compiled with -DPERL_EMERGENCY_SBRK and used Perl's malloc ..."
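[Editor's note] The Apache::Resource limits Brian mentions are normally configured through PERL_RLIMIT_* environment variables in httpd.conf. A minimal sketch for mod_perl 1.0 follows; the 64:96 soft:hard megabyte values are arbitrary examples chosen for illustration, not recommendations:

```
PerlModule Apache::Resource
# soft:hard limit, in MB, on the data segment of each child process
PerlSetEnv PERL_RLIMIT_DATA 64:96
PerlChildInitHandler Apache::Resource
```

Hitting such an rlimit inside a child is what produces the "Out of memory!" condition discussed in this thread, even when the machine as a whole has memory to spare.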