Thanks, your solution is one way to solve this problem! I found another way ...
sub remove_ary_dupes {
    my @ary = @_;
    my %saw;                      # define the hash
    @saw{@ary} = ();              # hash keys overwrite duplicated values automatically! ;)
    my @out = sort keys %saw;     # remove sort if undesired
    return @out;
}
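A quick sanity check of the dedup sub (the input list here is just made-up sample data):

    my @uniq = remove_ary_dupes(qw(b a c a b));
    print "@uniq\n";              # prints: a b c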
sub grep_log_file {
    my @sparam = @_;
    #if (!@sparam) { print "No records found with @ARGV\n"; exit 0; }
    my ($first, $match);
    my $block_size = 20480;
    my ($pos_correct, @ary_match, @res_ary);
    my $tmp_block = "";

    open(F, "$LOG_FILE") or die "Error opening $LOG_FILE: $!";

    # Read the LOG file block by block
    while (read(F, $first, $block_size)) {
        my $end_pos = rindex($first, "\n");
        # $first =~ /(.*)\n(.*)$/;   # not used; too slow for long lines!

        # Compare the number of bytes read with the block size;
        # if they differ, this is the last block of the file.
        if (length($first) != $block_size) {
            $match = $tmp_block . $first;
            $tmp_block = "";
            $pos_correct = 0;
        } else {
            # Keep everything up to the last newline; carry the
            # incomplete tail line over into the next block.
            $match = $tmp_block . substr($first, 0, $end_pos);
            $pos_correct = $block_size - $end_pos;   # offset correction (computed but not used below)
            $tmp_block = substr($first, $end_pos, $block_size - $end_pos);
        }

        my @log_lines = split("\n", $match);
        @log_lines = remove_ary_dupes(@log_lines);   # remove duplicated log lines!

        # Grep the lines for every search parameter
        foreach my $j (@sparam) {
            push @ary_match, grep /$j/, @log_lines;
        }
        #push @res_ary, @ary_match;
    }
    close(F);
    return @ary_match;
}
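For example, you can call it like this. Note that $LOG_FILE must be set before the call, because the sub reads it as a global; the log path and the search patterns below are made-up examples:

    $LOG_FILE = "/var/log/maillog";                  # global used inside the sub
    my @hits = grep_log_file("postfix", "relay=");   # patterns are just examples
    print scalar(@hits) . " matching lines\n";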
Try this code if you have a similar problem!
On Fri, 26 Sep 2003 20:13:21 +0530, Ramprasad A Padmanabhan <[EMAIL PROTECTED]> wrote:
Juris wrote: I have one small problem! How do I read text from a large text file in binary mode?
If I want to read the whole file, I use this code:
    my @LOG_FILE;
    open(FL, "/var/log/maillog");
    @LOG_FILE = <FL>;
    close(FL);
    # After this code executes, the file contents are stored in the array @LOG_FILE
    @LOG_FILE = grep /$something/, @LOG_FILE;
I don't want to read the whole file line by line; it is too slow when the file is large and contains more than 1000000 records!
I need to read it 500000 bytes at a time, but how?
Please help!
When you are opening big files, never do

    @array = <FILE>

This essentially reads the entire file into an array and is very expensive in memory.
You could do something like:

    while (<FILE>) {
        push @arr, $_ if /$something/;
    }
But IMHO this still may not be the best way.
What I would do is:

    # *Nothing* beats GNU grep when you parse large files
    system("grep $something $filename > $tempfile");

    # Now, if you really want the lines in an array:
    open(FILE, $tempfile);
    @lines = <FILE>;
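As a side note, if $something can contain shell metacharacters, a piped open avoids both the shell and the temp file. A minimal sketch, assuming Perl 5.8's list form of piped open is available:

    # Run grep directly (no shell) and read its output line by line
    open(GREP, '-|', 'grep', $something, $filename)
        or die "Cannot run grep: $!";
    my @lines = <GREP>;
    close(GREP);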
Ram
--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/