One word of caution... it looks to me like this will only catch duplicate 
lines when the duplicates follow each other. You may want to do some kind 
of sort prior to running this line of code.
The only reason I bring this up is that I've been bitten by this same 
problem in the past :)
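
If you do sort first, David's compare-with-previous trick then works 
across the whole file. Here's a minimal sketch of that approach (untested 
on my end; "theFile" is just the name from David's message, and the output 
filename is made up):

    sort theFile | perl -ne 'print unless $_ eq $last; $last = $_;' > theFile.new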
I've used rpsort (I think it's freeware/shareware) to sort on the entire 
line (not exactly efficient for larger files, but it seems to do the trick).
I also got some help from someone on this list who provided the following, 
which doesn't require sorting first:

# Assumes INPUTFILE and OUTFILE have already been open()ed.
while (<INPUTFILE>) {
    # %seen remembers each line the first time it appears, so any
    # later duplicate is skipped, no matter where it shows up.
    unless ($seen{$_}) {
        $seen{$_} = 1;
        print OUTFILE $_;
    }
}

I wish I could tell you why/how it works (I'm *still* working my way up to 
newbie status), but it does. (Magic?) It'll take longer on big files, 
but again, it does the trick.
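
For what it's worth, the same idea also squeezes into a one-liner in the 
style of David's original (a sketch only; %seen works the same way as in 
the loop above):

    perl -ni.bak -e 'print unless $seen{$_}++' theFile

Like the loop, it keeps every distinct line in the %seen hash, so it uses 
more memory on big files, but it catches duplicates anywhere in the file, 
sorted or not.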

HTH
Carl

At 12:30 PM 7/30/2001 -0500, David Blevins wrote:
>Here is a one-liner I just wrote to delete duplicate lines in a file.
>
>perl -ni.bak -e 'unless ($last eq $_){print $_};$last=$_;' theFile
>
>Going with the TMTOWTDI credo, I was just curious if anyone knew of a better
>way.
>
>Thanks,
>David

