Yes, you're right. It seems to me there has to be a more efficient way to accomplish what is being attempted. Right now, the solution makes a copy of the file, then checks every line of the original against every line of the copy. That method grows quadratically more processor-intensive as lines are added to the file.

Perhaps a better approach would be to loop through the lines, give each one a line number, and write them to a copy. Then sort the copy alphanumerically and loop through it, checking each line against the next; if the next line is a duplicate, throw it away, and keep checking against the following line until they differ. As soon as they differ, take the line that is different and begin checking it against its successor, and so on. Because every identical line will end up right next to its duplicates, you don't have to check each line against every other line, just against its neighbor. After you're done eliminating the duplicates, sort the file again by line number (only if it's necessary to keep the lines in their original order), and you're done. Granted, this method requires some efficient way of sorting the lines in a large text file, but if somebody knows of something that can do that, voila.
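A minimal sketch of that idea in Perl, assuming the file fits in memory (the sample data here is made up; Perl's built-in sort does the sorting step):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sample input standing in for the lines of the file.
my @lines = ("apple\n", "banana\n", "apple\n", "cherry\n", "banana\n");

# 1. Tag each line with its original line number.
my @numbered = map { [ $_, $lines[$_] ] } 0 .. $#lines;

# 2. Sort by line content so duplicates become adjacent.
my @sorted = sort { $a->[1] cmp $b->[1] } @numbered;

# 3. One pass over the sorted list: keep a line only if it
#    differs from the previous kept line (no nested loop).
my @unique;
for my $pair (@sorted) {
    push @unique, $pair
        if !@unique or $unique[-1][1] ne $pair->[1];
}

# 4. Restore the original order by re-sorting on line number.
my @result = map { $_->[1] } sort { $a->[0] <=> $b->[0] } @unique;

print @result;    # apple, banana, cherry -- original order, duplicates gone
```

For files too big to hold in memory, the external Unix sort(1) utility can handle the sorting step (and `sort -u` will even drop the duplicates itself, at the cost of losing the original line order).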
Regards,
David

----- Original Message -----
From: "Bob Showalter" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Monday, July 15, 2002 12:00 PM
Subject: RE: Why "Premature end of script headers"?

> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> Sent: Monday, July 15, 2002 12:23 PM
> To: 'Octavian Rasnita'; [EMAIL PROTECTED]
> Subject: Re: Why "Premature end of script headers"?
>
> > Probably your web server is timing out the request and killing your
> > script. Web servers don't like to run long-running processes like
> > this. Perhaps you can fork off a child and have the child take care
> > of it.
>
> Another solution is to have something like the following in your loop:
> --------------------------------------------------
> local $| = 1;  # print things right away (no buffering)
> # $ltime and $ctime should be defined before the loop
> $ctime = time();
> if ($ltime ne $ctime) {
>     print " . ";
>     $ltime = $ctime;
> }
> --------------------------------------------------
> Then the browser does not time out, because it continually gets
> information: the script simply prints another period to the browser
> for every second that it keeps working. This also lets you know it's
> still running.

True. But there are two possible downsides to consider:

1. It ties up a server process for an extended period.

2. The server will kill the CGI script if the client goes away or the
connection is lost.

--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
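For reference, the "fork off a child" idea quoted above might be sketched like this; the file name and the worker routine are hypothetical, and a real script would also want to detach the child more thoroughly (e.g. setsid):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical scratch file where the child will leave its result.
my $result_file = "/tmp/longjob.$$";

defined(my $pid = fork()) or die "fork failed: $!";

if ($pid == 0) {
    # Child: close the handles the web server is watching, so the
    # request can finish while the slow work carries on here.
    close STDIN;
    close STDOUT;
    do_long_running_work($result_file);
    exit 0;
}

# Parent (the CGI process): answer the browser immediately and return,
# so the server does not time the request out.
print "Content-type: text/plain\n\n";
print "Working on it; check back shortly.\n";

sub do_long_running_work {
    my ($file) = @_;
    open my $fh, '>', $file or die "open $file: $!";
    print {$fh} "done\n";    # placeholder for the real job's output
    close $fh;
}
```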