@Dave: thanks. By the way, I am running my code on a server with about 100 GB of RAM, but I can't afford for my code to use 4-5 times the size of the text file. I am now using read() / readlines(); these seem to be more memory-efficient than io.StringIO(f.read()).
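For what it's worth, a minimal sketch of line-by-line processing (using a throwaway temp file here in place of the real output.txt, and a character count as a stand-in for the actual per-line work) keeps peak memory near the size of one line rather than the whole file:

```python
import os
import tempfile

# Create a small sample file so the sketch is self-contained;
# in practice this would be the existing output.txt.
with tempfile.TemporaryDirectory() as tmpdir:
    path = os.path.join(tmpdir, "output.txt")
    with open(path, "w", encoding="utf-8") as f:
        f.write("line one\nline two\n")

    # Iterating over the file object yields one line at a time,
    # so only a single buffered line is held in memory at once.
    total_chars = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            total_chars += len(line)  # placeholder for real processing

print(total_chars)  # 18 for the sample file above
```

By contrast, f.read() (with no size argument) pulls the entire file into one string before any processing starts, and wrapping that in io.StringIO adds another copy on top.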
On Mon, Nov 4, 2013 at 9:23 PM, Steven D'Aprano <st...@pearwood.info> wrote:

> On Mon, Nov 04, 2013 at 02:48:11PM +0000, Dave Angel wrote:
> > > Now I understand. Processing line by line is slower because it actually
> > > reads the whole file. The code you showed earlier:
> >
> > > > I am currently using this method to load my text file:
> > > > *f = open("output.txt")
> > > > content=io.StringIO(f.read())
> > > > f.close()*
> > > > But I have found that this method uses 4 times the size of text file.
> >
> > will only read a tiny portion of the file. You don't have any loop on
> > the read() statement, you just read the first buffer full. So naturally
> > it'll be much faster.
>
> Dave, do you have a reference for that? As far as I can tell, read()
> will read to EOF unless you open the file in non-blocking mode.
>
> http://docs.python.org/3/library/io.html#io.BufferedIOBase.read
>
> > I am of course assuming you don't have a machine with 100+ gig of RAM.
>
> There is that, of course. High-end servers can have multiple hundreds of
> GB of RAM, but desktop and laptop machines rarely have anywhere near
> that.
>
> --
> Steven
>
> _______________________________________________
> Tutor maillist - Tutor@python.org
> To unsubscribe or change subscription options:
> https://mail.python.org/mailman/listinfo/tutor

--
*AMAL THOMAS
Fourth Year Undergraduate Student
Department of Biotechnology
IIT KHARAGPUR-721302*