Re: avoiding child death by size limit
On Thu, Dec 10, 2009 at 11:28 AM, E R <pc88m...@gmail.com> wrote:
> Hi, I have a problem where a mod_perl handler will allocate a lot of memory
> when processing a request, and this causes Apache to kill the child due to
> exceeding the configured child size limit. Are there any special techniques
> people use to avoid this situation? Does SizeLimit count actual memory used,
> or does it just look at the process size?

You can make sure the variables that the memory was malloc'd for have gone out of scope and that there are no trailing references to them. Perl will then reuse that memory. Occasionally Perl will free memory it is not using, but why, where and when can be hard to determine. It's not always as clear-cut as undef'ing the variable.

In terms of managing memory in your mod_perl process you can use any of the standard techniques:

* change/fix the code (fix the memory leak, tie the structure to the filesystem/db, ...)
* recycle your children after N requests have been completed
* offload the memory-intensive work into a different process space (another instance of Apache/mod_perl, or even just plain Perl)
* use a system-set resource limit (not so good, because it can kill the process mid-request)
* use Apache2::SizeLimit (kill the process after it has finished serving the request, once it grows too big)

These were off the top of my head. There may be other techniques I missed.

-wjt
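[A minimal sketch of the Apache2::SizeLimit option William mentions, for a prefork MPM. The thresholds are illustrative, not recommendations, and the exact handler phase to register has varied between versions of the module, so check its documentation for your setup.]

```perl
# startup.pl -- loaded via "PerlRequire /path/to/startup.pl" in httpd.conf.
# All sizes below are in KB and purely illustrative.
use Apache2::SizeLimit;

# Kill the child after the *current* request finishes if its total
# process size exceeds ~200 MB ...
Apache2::SizeLimit->set_max_process_size(200 * 1024);

# ... or if its unshared (per-child) memory exceeds ~64 MB.
Apache2::SizeLimit->set_max_unshared_size(64 * 1024);

# Only check every 5th request to keep the per-request overhead down.
Apache2::SizeLimit->set_check_interval(5);

# In httpd.conf, the module is then registered as a late-phase handler,
# commonly:
#   PerlCleanupHandler Apache2::SizeLimit
```

Because the check runs after the response is sent, the client that triggered the big allocation still gets its answer; only the bloated child is retired.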
bytes malloc-ed less bytes free-ed
As kind of a follow-up to my last question... is there a way to determine this quantity: the amount of memory malloc'd minus the amount of memory freed? It seems that this would be easy for malloc()/free() to keep track of. I would like to compare that value with the process size to get an idea of how much memory my program needs vs. how much it is actually taking up.

Thanks,
ER
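[The stock system malloc does not expose a malloc'd-minus-freed counter to Perl, so the usual fallback is to watch the process size itself around a suspect allocation. A rough, Linux-only sketch reading /proc/self/statm; the 10 MB figure is just for demonstration:]

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(sysconf _SC_PAGESIZE);

# Linux-only: return (virtual size, resident size) of this process in KB.
# /proc/self/statm reports sizes in pages.
sub process_size_kb {
    open my $fh, '<', '/proc/self/statm'
        or die "cannot read /proc/self/statm: $!";
    my ($vsz_pages, $rss_pages) = split ' ', scalar <$fh>;
    my $page_kb = sysconf(_SC_PAGESIZE) / 1024;
    return ($vsz_pages * $page_kb, $rss_pages * $page_kb);
}

my ($vsz_before, $rss_before) = process_size_kb();

# Simulate a request that allocates a large chunk of memory.
my $buf = 'x' x (10 * 1024 * 1024);    # ~10 MB

my ($vsz_after, $rss_after) = process_size_kb();

printf "VSZ grew by %d KB, RSS grew by %d KB\n",
    $vsz_after - $vsz_before, $rss_after - $rss_before;
```

This measures what the OS has handed the process, not what Perl has live, so it is an upper bound on "memory my program needs" rather than the malloc'd-minus-freed number you asked for.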
Re: avoiding child death by size limit
E R wrote:
> Hi, I have a problem where a mod_perl handler will allocate a lot of memory
> when processing a request, and this causes Apache to kill the child due to
> exceeding the configured child size limit.

Chances are that a child does not exceed this memory right away, at the first request. More likely, it uses more and more memory with each request it processes, and finally, after a number of requests, it exceeds the maximum memory and gets killed. In other words, it is leaking. So, paraphrasing someone else: don't treat the symptom, treat the cause.

> However, the memory allocated will get freed up or re-used by the next
> request -

If it is a real leak, then no, it will not.

> I think the memory is just fragmented enough to be automatically reclaimed
> by the memory allocator (I've heard that some mallocs can return memory to
> the OS in 1 MB chunks.)

See William's answer: unlikely.

> Are there any special techniques people use to avoid this situation? Does
> SizeLimit count actual memory used or does it just look at the process size?

In a previous similar exercise, in despair, I used the module Devel::Leak as follows:

    use Devel::Leak;

    my $DEBUGMem = 1;
    my ($SVTable, $prevSVCount, $lastSVCount);

    if ($DEBUGMem) {
        $prevSVCount = Devel::Leak::NoteSV($SVTable);
        warn "[$$] before something, total SVs : $prevSVCount";
    }

    do_something();    # .. which could be leaking

    if ($DEBUGMem) {
        $lastSVCount = Devel::Leak::CheckSV($SVTable);
        warn "[$$] after something, total SVs : $lastSVCount";
        warn "[$$] new SVs : " . ($lastSVCount - $prevSVCount);
    }

It doesn't require a specially-compiled perl. It does not actually print the memory size used; it just provides a count of new things (SVs) that have been allocated and not freed by do_something(). It is very rough, but it was very helpful to me in finding out exactly which piece of code was leaking "things", which is basically an alias for memory.
The point is, if it keeps on growing around the same piece of code each time you process a request, then you at least know where the bad things happen. If it happens in your code, then once you know where, it should be possible to fix it. If it happens in someone else's module that you are using, there are usually several alternative modules for just about anything on CPAN. If there aren't and you cannot do without it, /then/ maybe you should consider limiting the number of requests that each child handles before it gets killed. But that should not be the first choice, because it is the least efficient.
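[For completeness: that last-resort recycling of children is built into Apache itself, no Perl required. A minimal prefork sketch; the value is illustrative:]

```apache
# httpd.conf (prefork MPM) -- the value is illustrative.
# Each child exits after serving 1000 requests and is replaced by a
# fresh one, so slow per-request growth is bounded without ever
# killing a request in flight.
MaxRequestsPerChild 1000
```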
Re: avoiding child death by size limit
On Fri, Dec 11, 2009 at 11:37 AM, William T <dietbud...@gmail.com> wrote:
> You can make sure the variables that the memory was malloc'd for have gone
> out of scope, and there are not trailing references to them. Perl will then
> reuse that memory.

It will keep the memory allocated to the out-of-scope variables unless you undef them. There's a summary of many PerlMonks discussions on this topic here:

http://www.perlmonks.org/?node_id=803515

- Perrin
Re: avoiding child death by size limit
Perrin Harkins wrote:
> On Fri, Dec 11, 2009 at 11:37 AM, William T <dietbud...@gmail.com> wrote:
>> You can make sure the variables that the memory was malloc'd for have gone
>> out of scope, and there are not trailing references to them. Perl will
>> then reuse that memory.
>
> It will keep the memory allocated to the out-of-scope variables unless you
> undef them. There's a summary of many PerlMonks discussions on this topic
> here: http://www.perlmonks.org/?node_id=803515

Perrin, that is, in the end, in my personal opinion, a rather confusing discussion. So, to an extent, is your phrase above. When you say "It (perl) will keep the memory", do you mean that

- the perl interpreter embedded in this Apache child will keep the memory (and not return it to the OS), but will re-use it if possible for other variable allocations happening within its lifetime?

or

- the perl interpreter embedded in this Apache child will keep the memory (and not return it to the OS), and never re-use it again for any other variable allocation happening within its lifetime (in other words, this is a leak)?

Does any guru care to provide some simple real-world examples of when memory once allocated to a variable is/is not being re-used, in a mod_perl handler context? Or pointers to same? (Maybe at first independently of whether the memory also may, or may not, be returned by Perl to the OS.)

Maybe on this last subject: what I gather from this and other discussions I've seen is that once the Perl interpreter has obtained some memory from the OS, it rarely returns it (before it exits); and if it does, it is in any case not practically predictable in a cross-platform way, so one cannot rely on it. Is that a fair interpretation?

Would it be preferable/easier if I construct some simple examples myself and present them here, asking whether memory allocated to my $a is being leaked or not?
Re: avoiding child death by size limit
On Fri, Dec 11, 2009 at 5:56 PM, André Warnier <a...@ice-sa.com> wrote:
> When you say "It (perl) will keep the memory", do you mean that
> - the perl interpreter embedded in this Apache child will keep the memory
> (and not return it to the OS), but will re-use it if possible for other
> variable allocations happening within its lifetime?
> or
> - the perl interpreter embedded in this Apache child will keep the memory
> (and not return it to the OS), and never re-use it again for any other
> variable allocation happening within its lifetime (in other words, this is
> a leak)?

Option 3: Perl will keep the memory and reuse it for that exact same lexical variable the next time you enter that section of code. It's a performance optimization that is usually a good thing, unless you put a lot of data in one lexical in some code that you rarely run. It's not a leak: Perl tracks the memory and will reuse it, just not for any other variables.

> Does any guru care to provide some simple real-world examples of when
> memory once allocated to a variable is/is not being re-used, in a mod_perl
> handler context?

It's simple to see. Slurp a file into a lexical. Let it go out of scope. There are many posts on this subject in the archives here, as well as on PerlMonks and the p5p archives.

> Maybe on this last subject, what I gather from this and other discussions
> I've seen, is that once the Perl interpreter obtained some memory from the
> OS, it rarely returns it (before it exits); and if it does, it is in any
> case not practically predictable in a cross-platform way, so one cannot
> rely on it. Is that a fair interpretation?

Yes, you can't expect to get memory back.

- Perrin
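[Perrin's "option 3" can be watched directly. A hedged, Linux-only sketch (it reads /proc/self/statm, and the 20 MB figure is just for demonstration): the lexical's buffer survives the sub returning, so the process does not shrink, but a second call re-uses the pad's existing allocation instead of growing the process again:]

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(sysconf _SC_PAGESIZE);

# Linux-only: resident size of this process, in KB.
sub rss_kb {
    open my $fh, '<', '/proc/self/statm'
        or die "cannot read /proc/self/statm: $!";
    my (undef, $rss_pages) = split ' ', scalar <$fh>;
    return $rss_pages * sysconf(_SC_PAGESIZE) / 1024;
}

# $buf goes out of scope when the sub returns, but perl keeps its
# string buffer attached to the sub's pad for the next invocation.
sub handler_body {
    my $buf = 'x' x (20 * 1024 * 1024);    # ~20 MB of "request data"
    return length $buf;
}

my $before       = rss_kb();
handler_body();
my $after_first  = rss_kb();
handler_body();
my $after_second = rss_kb();

printf "first call grew RSS by %d KB; second call by only %d KB\n",
    $after_first - $before, $after_second - $after_first;
```

The first call permanently grows the process; the second call grows it little or not at all, because the same lexical gets the same buffer back. That is reuse, not a leak, exactly as described above.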
[error] Software caused connection abort at
Dear all,

Recently I've been noticing an error in the Apache error log that states:

    [error] Software caused connection abort at [script path] line [line number].

Subsequently, I've been noticing another error:

    Apache IO flush: (103) Software caused connection abort at -e line 0

It always fails upon printing to stdout. Can anyone point me in the right direction?

Thanks,

Apache Server version: Apache/2.0.52
Server built: Nov 11 2009 03:49:40
Platform: RHEL5 on x86_64
Re: [error] Software caused connection abort at
discobeta wrote:
> Recently I've been noticing an error in the Apache error log that states:
> [error] Software caused connection abort at [script path] line [line
> number]. Subsequently, I've been noticing another error: Apache IO flush:
> (103) Software caused connection abort at -e line 0. It always fails upon
> printing to stdout. Can anyone point me in the right direction?

I started having the same error about a week ago. I don't have any answers, only questions. Here's an earlier thread on this:

http://marc.info/?l=apache-modperl&m=125879662802023&w=2

I put the eval in as suggested and write it to my log -- I don't think that fixes things, though. See my message from earlier today describing my setup. I failed to mention that I am on an x86_64 machine, too.

What version of mod_perl are you using? Do you use Apache::DBI? Version? Apache2::Request?

Just looking for leads myself.

Take care,

Kurt Hansen
khan...@charityweb.net
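[For what it's worth, errno 103 is ECONNABORTED: the client hung up before the response was flushed, and the write from the handler then dies. The eval workaround Kurt mentions can be sketched like this; MockAbortedClient is a hypothetical stand-in for the real request object, just so the failure path is reproducible outside Apache:]

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Errno qw(ECONNABORTED);

# Hypothetical stand-in for a mod_perl request object whose client has
# gone away: any attempt to write to it dies, as $r->print can when the
# connection was aborted (errno 103, ECONNABORTED).
package MockAbortedClient;
sub new   { bless {}, shift }
sub print {
    $! = ECONNABORTED;
    die "Software caused connection abort";
}

package main;

# In a real handler this would be the incoming request object.
my $r = MockAbortedClient->new;

# Wrap the write in eval so an aborted client is logged, not fatal.
my $ok = eval { $r->print("response body"); 1 };
if (!$ok) {
    warn "client went away, dropping response: $@";
}
print "handler finished cleanly\n";
```

This doesn't stop the aborts from happening (that is the client's doing), it just keeps the handler from blowing up mid-response; whether Apache still logs its own flush error afterwards is a separate question.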