Chad Kellerman wrote:

> Dave,
> 
>  Actually, in each fork.  I was tarring up information from another
> server with Net::SSH::Perl.  The full tarball was written to stdout and
> I would gzip it on the originating server.  If I did not fork each tar,
> the server would crash in a matter of minutes.  But with the forks I
> actually connected and grabbed tarballs from 100 servers, 8 at a
> time.
> 

yes, now that makes sense to me. the tarring portion of your code is 
likely to create a huge heap (like the shared-memory module, i think it's 
called IPC::ShareLite or something like that: every time it pulls something 
back from shared memory, it creates a heap allocation for it. the heap will 
not go away (because Perl does its own memory management) until the client 
exits.). you should verify whether Net::SSH::Perl does something similar; it 
seems like it could be the case. now, if you create a child process for 
that, it's the child process that creates the heap, not the parent, so when 
the child process exits, everything is destroyed (code, data, stack, heap, 
etc.). that's why you don't see your parent eating up a lot of memory.
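for what it's worth, here is a minimal sketch of that fork-per-task
pattern. the host list and grab_tarball() are hypothetical stand-ins, and
it runs the children one at a time instead of 8 in parallel like yours;
the point is only that the memory-hungry work happens in the child, so
whatever heap it grows goes back to the OS the moment the child exits:

    #!/usr/bin/perl
    # do the heavy lifting in a child so its heap dies with it.
    # grab_tarball() is a made-up placeholder for the
    # Net::SSH::Perl tar-over-stdout work.
    use strict;
    use warnings;

    my @hosts = qw(host1 host2 host3);

    for my $host (@hosts) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;

        if ($pid == 0) {             # child
            grab_tarball($host);     # any heap grown here belongs to the child
            exit 0;                  # OS reclaims all of the child's memory
        }
        waitpid($pid, 0);            # parent never grew, so it stays small
    }

    sub grab_tarball {
        my ($host) = @_;
        # placeholder for the real work
    }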

i could be totally wrong, but in my experience with ShareLite the 
situation is similar. indeed, we use a solution much like your forking, 
except that we don't fork; we simply exec().
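(roughly like this, with next_task.pl as a made-up worker script: exec()
replaces the current process image outright, so the bloated heap is
thrown away without a child ever being created.)

    # exec() never returns on success; the new program keeps the pid
    # but gets a fresh address space, so the old heap is gone.
    exec('/usr/bin/perl', 'next_task.pl', $host)
        or die "exec failed: $!";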

david
