From: Chas Owens <[EMAIL PROTECTED]>
> On Mon, 2002-03-04 at 10:32, Brett W. McCoy wrote:
> > On Mon, 4 Mar 2002, Dave Adams wrote:
> >
> > > I have a problem which I would like to use perl to resolve,
> > > however I'm not sure if it is possible to do.
> > >
> > > I need to scan a file and check some conditions, first if field 9 is
> > > duplicated on 1 or more rows [...]
A quick dissection of what is going on here:
On Mon, 2002-03-04 at 10:20, Nikola Janceski wrote:
> Yes you can do it with perl, and I suggest using hashes.
>
> open(FILE, "$file");
This opens a file and associates it with the filehandle FILE; you really
should say something like 'or die "Could not open $file: $!"' so the
script reports why the open failed instead of silently reading nothing.
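To illustrate the point, here is a minimal sketch (not from the thread): it opens the script's own path via $0 so there is always a readable file to demonstrate with, and it uses the three-argument open with a lexical filehandle, the more modern form of the same idiom.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# $0 is this script's own path -- a stand-in for your real data file.
my $file = $0;

# Die with the operating-system error ($!) if the open fails.
open(my $fh, '<', $file) or die "Could not open $file: $!";

my $count = 0;
$count++ while <$fh>;
close $fh;

print "read $count lines from $file\n";
```

If the file is missing or unreadable, the script stops immediately with a message such as "Could not open ...: No such file or directory" rather than looping over an empty filehandle.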
On Mon, 4 Mar 2002, Dave Adams wrote:
> I have a problem which I would like to use perl to resolve, however I'm not
> sure if it is possible to do.
>
> I need to scan a file and check some conditions, first if field 9 is
> duplicated on 1 or more rows, then I need to check field 10 to see
> which is [...]
Yes you can do it with perl, and I suggest using hashes.

open(FILE, "$file");
my %seen;
while(<FILE>){
    my ($item9, $item10) = (split /\|/, $_)[8,9];
    if(exists $seen{$item9}){
        if($item10 > $seen{$item9}){
            # the new $item10 is larger than the last one, so keep it
            $seen{$item9} = $item10;
        }
    } else {
        $seen{$item9} = $item10;
    }
}
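The hash approach can be run end-to-end as a self-contained sketch. The pipe delimiter and the "fields 9 and 10" positions come from the thread; the sample rows in __DATA__ and the report loop at the end are invented for illustration.

```perl
use strict;
use warnings;

# %seen maps each field-9 value to the largest field-10 value
# encountered so far (fields are 1-based in the thread, so they are
# indices 8 and 9 after split).
my %seen;
while (my $line = <DATA>) {
    chomp $line;
    my ($item9, $item10) = (split /\|/, $line)[8, 9];
    if (!exists $seen{$item9} or $item10 > $seen{$item9}) {
        $seen{$item9} = $item10;
    }
}

# Report the surviving pairs.
for my $key (sort keys %seen) {
    print "$key => $seen{$key}\n";
}

__DATA__
a|b|c|d|e|f|g|h|X|10
a|b|c|d|e|f|g|h|X|30
a|b|c|d|e|f|g|h|Y|20
a|b|c|d|e|f|g|h|X|25
```

Run against the sample rows this prints "X => 30" and "Y => 20": the three rows sharing X in field 9 collapse to the one whose field 10 is largest.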
Dave Adams wrote:
>
> Hi there,
>
> I have a problem which I would like to use perl to resolve, however I'm not
> sure if it is possible to do.
>
> I need to scan a file and check some conditions, first if field 9 is
> duplicated on 1 or more rows, then I need to check field 10 to see which is