[EMAIL PROTECTED] wrote:
> 
> I'm running a nightly job to fork off several FTP sessions to retrieve
> files.  The job spawns ~100 FTP "drones" and hits about 3700 sites
> across our WAN.  Twice it has become hung when one FTP drone refused to
> die.  Looking in the drone's log showed it had just requested an FTP GET
> ($ftp->get(myfile)) and never came back.  Net::FTP seems to time out
> on almost everything, providing a reasonable way to recover when a get
> fails, but somehow something is not getting caught.  Killing the hung
> child allows the entire process to finish cleanly.  But since much of the
> data coming back is time sensitive, having me come in and kill it at 8am
> is not acceptable.
> 
> Has anyone encountered this problem with Net::FTP?  And even if you
> haven't, can anyone suggest a way to make a non-hangable FTP?  Would
> using retr be a better solution, since it gives finer-grained control
> over the retrieval process?  Can anyone suggest how to make a process
> watchdog that could just kill off the child after a certain amount of
> elapsed time?  Can I wrap a $ftp->get in an alarm?

Luckily you are on UNIX, so alarm should take care of it for you.

See the alarm entry in perlfunc for an example.  You could probably
sleep and try again once or twice after the initial failure.
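
For the wrapping itself, here is a minimal sketch of the perlfunc
alarm/eval idiom around Net::FTP's get(), with the sleep-and-retry
folded in.  The host, login, file name, and timeout values are made
up for illustration -- substitute whatever your drones already use:

#!/usr/bin/perl
use strict;
use warnings;
use Net::FTP;

# Placeholder host/login -- use your real site list values.
my $ftp = Net::FTP->new('ftp.example.com', Timeout => 60)
    or die "connect failed: $@\n";
$ftp->login('anonymous', 'nightly@example.com')
    or die "login failed: ", $ftp->message;

# Wrap a single get() in alarm so a wedged transfer dies with a
# signal instead of hanging the drone forever.
sub get_with_timeout {
    my ($ftp, $file, $secs) = @_;
    my $ok = eval {
        local $SIG{ALRM} = sub { die "alarm\n" };  # NB: "\n" required
        alarm($secs);
        my $rc = $ftp->get($file);
        alarm(0);                # got it -- cancel the timer
        $rc;
    };
    alarm(0);                    # clear the timer if get() died early
    return $ok;
}

# Sleep and try again once or twice after the initial failure.
my $got;
for my $try (1 .. 3) {
    $got = get_with_timeout($ftp, 'myfile', 300);
    last if $got;
    warn "get failed (attempt $try), retrying\n";
    sleep 30;
}
$ftp->quit;
die "giving up on myfile\n" unless $got;

Net::FTP's own Timeout catches most stalls, but as you've seen, get()
can still block past it; the alarm is the backstop for exactly that
case.  If you'd rather have a parent-side watchdog instead, you can
track each child's start time in the forking loop and kill() any pid
that is overdue, but an alarm inside each drone is simpler.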

-- 
  ,-/-  __      _  _         $Bill Luebkert   ICQ=14439852
 (_/   /  )    // //       DBE Collectibles   Mailto:[EMAIL PROTECTED] 
  / ) /--<  o // //      http://dbecoll.tripod.com/ (Free site for Perl)
-/-' /___/_<_</_</_     Castle of Medieval Myth & Magic http://www.todbe.com/