Bill's solution is no less "computationally complex" than any of the correct 
solutions, all of which involved reading the file and skipping the first 300 
lines. The original poster didn't say he wanted to feed the result to the STDIN 
of another process, but rather:
Grant Kelly wrote:
I have a text file, it's about 2.3 GB. I need to delete the first 300
lines, and I don't want to have to load the entire thing into an
editor.

Given this information, the proposed solutions were all reasonable. Once we 
had more information, more elegant solutions were available.
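
The "reasonable" solutions referred to above are presumably the classic one-pass trims. A minimal sketch (the filenames are stand-ins; the original post named no files, and `seq` here just fabricates a small test input in place of the 2.3 GB file):

```shell
set -eu
seq 1 400 > bigfile.txt                  # small stand-in for the 2.3 GB file
# Drop the first 300 lines; neither command loads the whole file into memory.
tail -n +301 bigfile.txt > trimmed.txt   # start output at line 301
sed '1,300d' bigfile.txt > trimmed2.txt  # delete lines 1 through 300
wc -l trimmed.txt trimmed2.txt           # both should leave 100 lines
```

Both commands stream the input a line at a time, so memory use stays constant regardless of file size.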

True, but I was making an assumption about the program's operation based on previous comments in this thread: namely, that the resulting file was to be fed into a database, with no mention of the file being reused afterwards.


"Computational complexity" has little to do with what the command line "looks 
like", but rather with what goes on inside the program.


We obviously have enough information to fully describe Grant's problem now, so from here on we're just arguing over the semantics of the solution based on yesterday's description of the problem. Please note that I'm not in disagreement on any front... but...

First off, I'm familiar with the definition of computational complexity.

If we assume a complete solution to Grant's problem (file is read, 300 lines removed, result fed into a database), and that each proposed solution has a _similar_ _cost_ per pass, then reading and writing the file twice will cost more than performing those operations once. Right? The asymptotic complexity is the same either way; the difference is an extra full read and write of a 2.3 GB file. I agree that the previous solutions can be modified to perform similarly to Bill's suggested solution, but, as they stand, they carry that greater running cost.
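
The two-pass versus one-pass distinction can be sketched as follows. Here `wc -c` stands in for the hypothetical database loader reading STDIN (the thread names no actual loader), and `seq` fabricates a small test input in place of the 2.3 GB file:

```shell
set -eu
seq 1 400 > bigfile.txt                  # small stand-in for the 2.3 GB file

# Two passes over the data: trim into an intermediate file, then feed
# that file to the consumer (wc -c stands in for the database loader).
tail -n +301 bigfile.txt > trimmed.txt
wc -c < trimmed.txt

# One pass: stream straight into the consumer, reading the input once
# and never writing an intermediate copy to disk.
tail -n +301 bigfile.txt | wc -c
```

Both print the same byte count; the piped form simply avoids the second trip through the filesystem.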

- Sebastian

_______________________________________________
RLUG mailing list
[email protected]
http://lists.rlug.org/mailman/listinfo/rlug