I wish to read a large data file (file size is around 1.8 MB) and manipulate the data in this file. Just reading and writing the first 500 lines of this file is causing a problem. I wrote:
I wrote:

fin = open('gene-GS00471-DNA_B01_1101_37-ASM.tsv')
count = 0
for i in fin.readlines():
    print i
    count += 1
    if count >= 500:
        break

and got this error message:

Traceback (most recent call last):
  File "H:\genome_4_omics_study\GS000003696-DID\GS00471-DNA_B01_1101_37-ASM\GS00471-DNA_B01\ASM\gene-GS00471-DNA_B01_1101_37-ASM.tsv\test.py", line 3, in <module>
    for i in fin.readlines():
MemoryError

Is there a way to stop Python from slurping all the file contents at once?
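The MemoryError comes from readlines(), which builds a list of every line in the file before the loop starts. Iterating over the file object itself reads one line at a time instead. A minimal sketch of that approach (the head() helper and the test path are illustrative, not from the original post):

```python
from itertools import islice

def head(path, n=500):
    """Yield at most the first n lines of path without loading the whole file."""
    with open(path) as fin:
        # The file object is itself an iterator over lines, so only one
        # line is held in memory at a time; islice stops after n lines.
        for line in islice(fin, n):
            yield line

# Hypothetical usage on the poster's file:
# for line in head('gene-GS00471-DNA_B01_1101_37-ASM.tsv'):
#     print(line.rstrip('\n'))
```

This also removes the need for the manual count/break bookkeeping in the original loop.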
_______________________________________________ Tutor maillist - Tutor@python.org To unsubscribe or change subscription options: http://mail.python.org/mailman/listinfo/tutor