Chas. Owens told me on 02/12/2008 12:30 PM:
> On Feb 12, 2008 1:08 PM, Kashif Salman <[EMAIL PROTECTED]> wrote:
>> On Feb 12, 2008 9:58 AM, Chas. Owens <[EMAIL PROTECTED]> wrote:
>>> On Feb 12, 2008 12:38 PM, Michael Barnes <[EMAIL PROTECTED]> wrote:
>>> snip
>>>> I'm the new kid and this is a beginners forum, so I welcome all ideas
>>>> and options.  Forgive my ignorance, but would you mind giving an example
>>>> of how I would do this with lsof?
>>> snip
>>>
>>> This only works on systems with lsof.  Checking for file size change
>>> is a fairly portable, if inexact, way of checking whether a file is
>>> still being written to (a size-based sketch follows the lsof example
>>> below).  Of course, lsof also has problems: what if the file is
>>> repeatedly opened, written to, and closed, and lsof happens to run
>>> while the file is closed?
>>>
>>> #!/usr/bin/perl
>>>
>>> use warnings;
>>> use strict;
>>>
>>> my $lsof = "/usr/sbin/lsof";
>>> my $file = "/tmp/foo.$$";
>>> my $pid  = fork;
>>> die "could not fork" unless defined $pid;
>>>
>>> unless ($pid) {
>>>         #child writes to the file for 5 seconds
>>>         open my $fh, ">", $file
>>>                 or die "could not open $file: $!\n";
>>>         for my $i (1 .. 5) {
>>>                 print $fh "$i\n";
>>>                 sleep 1;
>>>         }
>>>         exit;
>>> }
>>>
>>> #parent monitors file; sleep first so the child has a chance to open it
>>> sleep 1;
>>> my $file_holder_pid = qx($lsof -t $file);
>>> chomp($file_holder_pid);
>>> while ($file_holder_pid) {
>>>         print "$file is currently being held open by $file_holder_pid\n";
>>>         sleep 1;
>>>         $file_holder_pid = qx($lsof -t $file);
>>>         chomp($file_holder_pid);
>>> }
>>>
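>>> For comparison, a minimal sketch of the size-change check mentioned
>>> above.  It is only a sketch, and inexact for the reason given: a
>>> writer that pauses longer than the polling interval will fool it.
>>>
>>> #!/usr/bin/perl
>>>
>>> use warnings;
>>> use strict;
>>>
>>> my $file = shift or die "usage: $0 file\n";
>>> my $prev = -1;
>>>
>>> #poll until the size stops changing between two consecutive checks
>>> while (1) {
>>>         my $size = -s $file;
>>>         die "$file does not exist\n" unless defined $size;
>>>         last if $size == $prev; #no growth since last check: assume done
>>>         $prev = $size;
>>>         sleep 2;
>>> }
>>>
>>> print "$file appears to be finished (size $prev bytes)\n";
>>>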
>> Yes, it does tie the script mostly to UNIX/Linux systems.  I assumed
>> that once the file is dumped in, it won't be accessed again by the same
>> process, so there are no repeated write/close cycles on the file.
>>
> 
> Yeah, the lsof solution isn't bad, it just isn't the right tool all of
> the time, and neither is the size-monitoring solution.  They both have
> their own strengths and weaknesses, and it is important to know what
> they are.  There is a third option that solves the problems both of the
> previous solutions have: a signal file.  You can be sure the file is
> finished being written if you can arrange for the upstream process to
> write a zero-byte file when it is finished writing.  Of course, this
> approach has the drawback that you need to be able to change the
> upstream code to send the signal file (which is often impractical).
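>
> A minimal sketch of the watcher side, assuming the upstream code can be
> changed to create a zero-byte marker (the "$file.done" name below is
> just an illustration) when it finishes writing:
>
> #!/usr/bin/perl
>
> use warnings;
> use strict;
>
> my $file   = "/path/to/dropbox/incoming.wav"; #file being delivered
> my $signal = "$file.done";                    #assumed zero-byte marker
>
> #block until the writer creates the signal file
> sleep 1 until -e $signal;
>
> #the marker exists, so the data file is complete and safe to process
> unlink $signal or warn "could not remove $signal: $!\n";
> print "$file is ready\n";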
> 

The signal file has been suggested before.  However, a signal file is
useful when there is an upstream process you can control.  In this case,
the "upstream process" is actually a number of users dragging and
dropping files into a folder.  Depending on the file and the user, it
may be a 2 KB text file or a 300+ MB wav file.  Depending on where they
are and what equipment they are using, it might take a while for that
big file to get copied over.  Nothing should be touching those files
other than my dropbox script, which pulls them out and puts them
someplace else based on what they are.

Thanks for all the alternative ideas.

Michael

