On Nov 4, 2013, at 8:30 AM, Amal Thomas <amalthomas...@gmail.com> wrote:

> Yes I have found that after loading to RAM and then reading lines by lines 
> saves a huge amount of time since my text files are very huge.
> 

[huge snip]

> -- 
> AMAL THOMAS
> Fourth Year Undergraduate Student
> Department of Biotechnology
> IIT KHARAGPUR-721302
> _______________________________________________
> Tutor maillist  -  Tutor@python.org
> To unsubscribe or change subscription options:
> https://mail.python.org/mailman/listinfo/tutor

How long are the lines in your file?  In particular, are they many hundreds or 
thousands of characters long, or only a few hundred characters, say 200 or 
less?

Unless they are long enough to exceed the normal size of your OS's read-ahead 
buffer, I strongly suspect that the big time sink in your attempt to read 
line-by-line was some inadvertent inefficiency you introduced.  Normally, when 
reading from a text file, Python buffers the reads (or uses the host OS's 
buffering).  Those reads pull in large chunks of text WAY ahead of where the 
actual Python processing is going on, and are VERY efficient.
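You can check this on your own machine.  Here is a minimal sketch (the file 
name, line count, and line length are made up for the demo) that builds a 
throwaway file and counts its lines two ways: slurping the whole file into 
RAM, and iterating line by line.  Because Python's line iterator reads in big 
buffered chunks under the hood, the streaming version is usually competitive 
while using a tiny fraction of the memory:

```python
import os
import tempfile
import time

def make_test_file(path, n_lines=100_000, line_len=80):
    # Write n_lines lines, each line_len characters including the newline.
    with open(path, "w") as f:
        line = "A" * (line_len - 1) + "\n"
        f.writelines(line for _ in range(n_lines))

def count_lines_whole(path):
    # Load the entire file into RAM at once, then split into lines.
    with open(path) as f:
        return len(f.read().splitlines())

def count_lines_streaming(path):
    # Iterate line by line; Python pulls in large buffered chunks,
    # so this does NOT mean one system call per line.
    with open(path) as f:
        return sum(1 for _ in f)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        path = os.path.join(d, "big.txt")   # hypothetical test file
        make_test_file(path)

        t0 = time.perf_counter()
        n_whole = count_lines_whole(path)
        t1 = time.perf_counter()
        n_stream = count_lines_streaming(path)
        t2 = time.perf_counter()

        print(f"whole-file: {n_whole} lines in {t1 - t0:.3f}s")
        print(f"streaming:  {n_stream} lines in {t2 - t1:.3f}s")
```

Timings will vary with your OS and disk cache, but if streaming comes out 
dramatically slower than this on your real data, the slowdown is probably in 
your per-line processing, not in the reading itself.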

-Bill
