I'm running a nightly job that forks off several FTP sessions to retrieve
files.  The job spawns ~100 FTP "drones" and hits about 3700 sites
across our WAN.  Twice it has hung because one FTP drone refused to
die.  The drone's log showed it had just issued an FTP GET
($ftp->get('myfile')) and simply never came back.  Net::FTP seems to time
out on almost everything else, providing a reasonable way to recover when
a get fails, but somehow this case is not getting caught.  Killing the
hung child lets the entire process finish cleanly, but since much of the
data coming back is time sensitive, having me come in and kill it at 8am
is not acceptable.
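To automate what I'm currently doing by hand, I've been thinking about a
parent-side watchdog along these lines.  This is only a rough sketch —
spawn_drone, reap_and_kill, and $MAX_SECS are placeholder names of mine,
and it assumes the parent records when each child was forked:

```perl
use strict;
use warnings;
use POSIX ':sys_wait_h';

my $MAX_SECS = 600;     # placeholder ceiling on a drone's lifetime
my %started;            # pid => epoch time the drone was forked

sub spawn_drone {
    my ($site) = @_;
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # ... child: run the FTP session against $site ...
        exit 0;
    }
    $started{$pid} = time;
    return $pid;
}

# Called periodically from the parent's main loop.
sub reap_and_kill {
    # Reap drones that finished on their own.
    while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
        delete $started{$pid};
    }
    # Kill anything that has been running too long.
    for my $pid (keys %started) {
        next unless time - $started{$pid} > $MAX_SECS;
        kill 'TERM', $pid;                   # ask nicely first
        sleep 2;
        kill 'KILL', $pid if kill 0, $pid;   # still alive? force it
    }
}
```

The SIGKILL fallback matters here, since a drone wedged inside a blocking
read may never see the SIGTERM.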

Has anyone encountered this problem with Net::FTP?  Even if you haven't,
can anyone suggest a way to make an FTP session that can't hang?  Would
using $ftp->retr be a better solution, since it gives finer-grained
control over the retrieval process?  Can anyone suggest how to build a
process watchdog that just kills off a child after a certain amount of
elapsed time?  Can I wrap a $ftp->get in an alarm?
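The alarm-wrapped get I have in mind would look something like this
(host, file, and timeout values are made up, and I know alarm() isn't
guaranteed to interrupt every blocking syscall on every platform):

```perl
use strict;
use warnings;
use Net::FTP;

my $host = 'ftp.example.com';   # placeholder
my $file = 'myfile';            # placeholder

my $ftp = Net::FTP->new($host, Timeout => 60)
    or die "connect failed: $@";
$ftp->login('anonymous', 'me@example.com')
    or die "login failed: ", $ftp->message;

# Put a hard ceiling on the whole transfer, not just each socket op.
my $ok = eval {
    local $SIG{ALRM} = sub { die "ftp get timed out\n" };
    alarm(300);
    my $r = $ftp->get($file);
    alarm(0);
    $r;
};
alarm(0);   # clear any pending alarm if the eval died

if (!defined $ok) {
    warn $@ ? $@ : "get failed: " . $ftp->message;
    # recover here: retry, skip the site, or exit the drone
}
```

Does that look sane, or is there a gotcha with signals inside Net::FTP?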

Any ideas?

Chuck

_______________________________________________
Perl-Unix-Users mailing list. To unsubscribe go to 
http://listserv.ActiveState.com/mailman/subscribe/perl-unix-users
