On Sat, 2004-12-04 at 17:25 -0800, John W. Krahn wrote:
> Dan Jones wrote:
> > On Thu, 2004-12-02 at 00:13 -0800, Mr M senthil kumar wrote:
> > 
> >><SNIP>
> >>
> >>>I have a file with thousands of lines like:
> >>>/abc/def/ijk/test.txt
> >>>/pqr/lmn/test1.t
> >>>I want to get the directory where the files test.txt and test1.txt are
> >>>located.
> >>
> >></SNIP>
> >>
> >>Hi,
> >>You can try the following:
> >>
> >>#!/usr/bin/perl
> >>open (IN,"<input_file") || die "Cannot open file: $!";
> >>open (OUT,">output_file") || die "Cannot send the output: $!";
> > 
> > 
> > I see this a lot.  One thing that immediately occurs to me is that if
> > opening IN succeeds but opening OUT fails, the program dies without
> > closing IN.  Is this acceptable code in the Perl world or should the
> > code close all open files before dying?
> 
> The operating system handles all resources like files and memory, so when
> the program exits the operating system frees up all file handle resources
> and the memory allocated for the strings "<input_file" and
> "Cannot open file: $!".  If the operating system didn't do this then
> programming would be a *LOT* harder and a *LOT* less robust!
> 
> As an analogy:  If you rent a hotel/motel room you *could* clean it up 
> yourself before you check out, but most people don't.  :-)

I understand that the system will usually clean up the messes you leave
behind.  However, in application programming with higher level
languages, it's considered extremely poor programming practice to rely
on this behavior.  My question isn't whether the system will close the
file, it's whether this is considered acceptable program behavior.
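For what it's worth, a common middle ground in modern Perl is to use lexical
filehandles with the three-arg form of open: a lexical handle is released
automatically when it goes out of scope, so dying after the first open
succeeds doesn't leak anything, while an explicit close on the output handle
is still worthwhile because it reports deferred write errors (disk full,
quota) that print() alone may not.  A minimal sketch along those lines,
reusing the placeholder filenames from the original post (the sample-input
scaffolding at the top is just there to make the example self-contained):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Scaffolding for the example only: create a small input file to copy.
open my $tmp, '>', 'input_file' or die "Cannot create sample: $!";
print {$tmp} "/abc/def/ijk/test.txt\n";
close $tmp or die "Cannot close sample: $!";

# The two opens from the original post, rewritten with lexical
# filehandles and three-arg open.  If the second open fails, $in is
# closed automatically when the program dies and the handle goes away.
open my $in,  '<', 'input_file'  or die "Cannot open file: $!";
open my $out, '>', 'output_file' or die "Cannot send the output: $!";

print {$out} $_ while <$in>;

close $in  or die "Cannot close input: $!";
close $out or die "Cannot close output: $!";   # catches deferred write errors
```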


-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
<http://learn.perl.org/> <http://learn.perl.org/first-response>

