Dave,

 Actually, in each fork.  I was tarring up information from another
server with Net::SSH::Perl.  The full tarball was written to stdout and
I would gzip it on the originating server.  If I did not fork each tar,
the server would crash in a matter of minutes.  But with the forks I
actually connected to and grabbed tarballs from 100 servers, 8 at a
time.
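
Roughly, the fork-per-server pattern looks like this (a minimal sketch;
the host names and the fetch step are stand-ins for the real
Net::SSH::Perl / tar / gzip work, which needs live servers):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical host list -- stand-ins for the ~100 real servers.
my @hosts        = map { "server$_" } 1 .. 20;
my $max_children = 8;
my %kids;    # pid => host, for bookkeeping in the parent

sub fetch_tarball {
    my ($host) = @_;
    # Placeholder for the real work: connect with Net::SSH::Perl, run
    # tar on the remote side, and pipe the stream through gzip locally,
    # e.g. open(my $gz, '|-', "gzip -c > $host.tar.gz").  Whatever
    # memory the tar/ssh buffers chew up here dies with this child.
    my $buf = 'x' x 1_000;    # simulate a buffer the child holds
    return length $buf;
}

for my $host (@hosts) {
    # Throttle: never more than $max_children in flight at once.
    if ( keys(%kids) >= $max_children ) {
        my $done = waitpid( -1, 0 );
        delete $kids{$done};
    }
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ( $pid == 0 ) {        # child
        fetch_tarball($host);
        exit 0;               # child's memory goes back to the OS here
    }
    $kids{$pid} = $host;      # parent remembers the child
}

# Reap whatever is still running.
while ( keys %kids ) {
    my $done = waitpid( -1, 0 );
    delete $kids{$done};
}
print "all hosts done\n";
```

The parent only ever holds pids and host names, so it stays small; all
the heavy buffers live and die inside the short-lived children.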

The OS was Solaris x86, but I find that the script runs better on Red
Hat 7.3.

   I did not use top when monitoring the script; I actually used
vmstat 1

   I'm kinda new with Perl.  The script was my first script over 200
lines.  And like I said, it would crash in under an hour without the
forks.

   It just made sense that when the child died, the memory the child
held for the tar was released.  I actually saw free memory drop real
low, then after the write to disk it would jump back up.

Sound right to you?  Or am I missing something?

--chad   

On Fri, 2002-09-20 at 14:07, david wrote:
> Chad Kellerman wrote:
> 
> > 
> > here's my $.02 on this subject.  Correct me if I am wrong.
> > Once perl uses memory it does not want to let it go back to the system.
> > I believe I have read that the developers are working on this.  Since you
> > have your script running as a daemon, it will not release a lot of
> > memory back to the system, if any at all.
> 
> currently, the memory will not be released back to the OS. your OS most 
> likely does not support that. many languages that handle memory management 
> internally have the same problem. in C/C++, memory management is the job of 
> the programmer, but if you put your data on the stack, it won't be 
> released back to the OS until your program exits. if, however, you request 
> something from the heap, you will have the chance to release it back to 
> the OS. that's nice because you actually release what you don't need back 
> to the OS, not just to your process's pool.
> 
> > 
> > I had a similar problem.  The way I worked around it is:
> > I knew where my script was eating up memory.  So at these point I fork()
> > children.  Once the child completes and dies the memory is released back
> > into the system.
> > 
> 
> i don't know if what you describe really works. when you fork, you are 
> making an exact copy of the running process. the child process will include 
> the parent process's code, data, stack, etc. if the fork succeeds, you will 
> have 2 pretty much identical processes. they are not related other than the 
> parent-child relation the kernel keeps track of. so if your child process 
> exits, it should release all the memory of its own but shouldn't take 
> anything with it from its parent. this means your child process's exit 
> should not cause your parent process's memory pool to be returned back to 
> the OS.
> 
> but you said you really see a dramatic decrease in memory consumption. if 
> you check your process's memory footprint (let's say, simply look at it 
> from the top command), does its size reduce at all?
> 
> david
> 
-- 
Chad Kellerman
Jr. Systems Administrator
Alabanza Inc
410-234-3305
