Hi All,

As below, and in most solutions for removing duplicate records from a simple list,
a hash is most frequently used.  A hash introduces another dimension (keys as well
as values), whereas a plain foreach $var (@array) loop seems like it would be
simpler and get the job done.

I know there is a good reason why hashes are used to de-dupe simple one-dimensional
lists, but could someone tell me why?
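
To make clear what I mean, the foreach version I had in mind looks roughly like
this (untested):

# dedupe with just an array and a foreach
foreach $in (@input) {
        chomp($in);
        $found = 0;
        foreach $kept (@deduped) {
                if ($kept eq $in) { $found = 1; last; }
        }
        push(@deduped, $in) unless $found;
}
# works, but rescans @deduped for every input line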

In the example below I wondered why $outray{$in} was being set to 1.

Any guidance is much appreciated!

Sincerely,

Wormy


-----Original Message-----
From: Charles Maier [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, October 03, 2000 5:48 PM
To: Perl; [EMAIL PROTECTED]
Subject: RE: Removing evil people, also leading spaces when copying an array to file

Your BEST bet for de-duping any file is to put the data in a hash.

## untested!!!
# open the list of ip addresses
$file1 = 'c:\ipaddresses.txt';
$file2 = 'c:\deduped.txt';   # output file -- name it whatever you like

open (handle1, "<$file1") || die "can't open $file1 $!";
while ($in = <handle1>){
        chomp($in);
        # hash keys are unique, so storing a duplicate address just
        # overwrites the same entry; the value (1 here) is arbitrary --
        # any true value would do, the key is what does the de-duping
        $outray{$in} = 1;
}
close handle1;
# %outray now has deduped data
# time to write back

open (handle2, ">$file2") || die "can't open $file2 $!";
foreach $i (keys(%outray)){
        print handle2 "$i\n";    # write to the file, not STDOUT
}
close handle2;
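
If you need to keep the original order of the addresses, a variation on the same
idea (again untested) filters through a %seen hash on the way in:

## untested!!! same hash trick, but keeps input order
open (handle1, "<$file1") || die "can't open $file1 $!";
while ($in = <handle1>){
        chomp($in);
        # $seen{$in}++ is false only the first time a key turns up
        push(@unique, $in) unless $seen{$in}++;
}
close handle1;
# @unique holds the deduped lines, first occurrence first

keys(%outray) comes back in no particular order, so the plain hash version can
shuffle the file; this one keeps the lines in the order they arrived.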

HTH
Chuck


@things = <handle1>;
# put ip addresses into an array


$file2 = 'c:\spaces.txt';
# declare variable for filename


open (handle2, ">$file2") || die "can't open file $!";

# low-precedence "or" so the die checks print itself, not the string
print handle2 "@things" or die "no file to print to? $!";
# print array to file spaces.txt

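Why "or" rather than "||" on that print (a side note -- double-check perlop if in
doubt): "||" binds more tightly than a list operator's arguments, so the two
spellings parse differently:

print handle2 ("@things" || die "no file to print to? $!");   # the "||" parse
(print handle2 "@things") or die "no file to print to? $!";   # the "or" parse

With "||" the die can only fire when @things interpolates to an empty string, and
a print that actually fails is never caught.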

_______________________________________________
Perl-Win32-Web mailing list
[EMAIL PROTECTED]
http://listserv.ActiveState.com/mailman/listinfo/perl-win32-web
