Re: Working with Huge Text Files

2005-03-19 Thread cwazir
John Machin wrote:
> More meaningful names wouldn't go astray either :-)

I heartily concur! Instead of starting with:

    fields = line.strip().split(',')

you could use something like:

    (f_name, f_date, f_time, ...) = line.strip().split(',')

Of course then you won't be able to use ', '.join(fields) ...
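A hedged sketch of the trade-off described above, with illustrative guesses at names for the nine columns in the sample rows quoted later in the thread (the names are assumptions, not from the thread):

    # Sample row in the format quoted elsewhere in the thread.
    line = "XYZ,04JAN1993,9:30:27,28.87,7600,40,0,Z,N\n"

    # Unpacking into named variables reads better than fields[8] everywhere:
    (f_symbol, f_date, f_time, f_price, f_volume,
     f_col6, f_col7, f_code, f_flag) = line.strip().split(',')

    # ...but there is no longer a list to rejoin. Keeping the list and naming
    # only the fields you test gives you both:
    fields = line.strip().split(',')
    f_flag = fields[8]
    rebuilt = ','.join(fields)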

Re: Working with Huge Text Files

2005-03-19 Thread John Machin
[EMAIL PROTECTED] wrote:
> Lorn Davies wrote:
> > if (fields[8] == 'N' or 'P') and (fields[6] == '0' or '1'):
> > ## This line above doesn't work, can't figure out how to struct?
>
> In Python you would need to phrase that as follows:
>
>     if (fields[8] == 'N' or fields[8] == 'P') and (fields[6] == '0' or fields[6] == '1'):
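A small demonstration (not from the thread) of why the original line misbehaves: Python parses it as (fields[8] == 'N') or ('P'), and the non-empty string 'P' is always truthy, so the test never filters anything.

    # fields[8] is deliberately 'Q' here, matching neither 'N' nor 'P'.
    fields = "XYZ,04JAN1993,9:30:27,28.87,7600,40,0,Z,Q".split(',')

    print(bool(fields[8] == 'N' or 'P'))           # True -- the bug
    print(fields[8] == 'N' or fields[8] == 'P')    # False -- the intended test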

Re: Working with Huge Text Files

2005-03-19 Thread cwazir
Lorn Davies wrote:
> if (fields[8] == 'N' or 'P') and (fields[6] == '0' or '1'):
> ## This line above doesn't work, can't figure out how to struct?

In Python you would need to phrase that as follows:

    if (fields[8] == 'N' or fields[8] == 'P') and (fields[6] == '0' or fields[6] == '1'):

or alternatively ...
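The snippet cuts off at "or alternatively"; one common equivalent (an assumption about where the message was heading, but a standard idiom in any case) is a membership test:

    line = "XYZ,04JAN1993,9:30:27,28.87,7600,40,0,Z,N"
    fields = line.split(',')

    # Same filter written as membership tests -- shorter and harder to get wrong.
    if fields[8] in ('N', 'P') and fields[6] in ('0', '1'):
        print("row matches")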

Re: Working with Huge Text Files

2005-03-19 Thread Lorn Davies
Thank you all very much for your suggestions and input... they've been very helpful. I found the easiest approach, as a beginner to this, was working with Chirag's code. Thanks Chirag, I was actually able to read and make some edits to the code and then use it... woohooo! My changes are annotated below.

Re: Working with Huge Text Files

2005-03-18 Thread Al Christians
Michael Hoffman wrote:
> [EMAIL PROTECTED] wrote:
> > It will only be this simple if you can guarantee that the original
> > file is actually sorted by the first field. And if not you can either
> > sort the file ahead of time, or just keep reopening the files in
> > append mode when necessary.
>
> You could sort them in memory in your Python program ...
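A minimal sketch (not Al's or Michael's actual code) of the "reopen in append mode" option, which works even when the input is not sorted; the filenames are assumptions:

    # Append mode ('a') preserves rows already written for the same symbol,
    # so the input's order doesn't matter.
    for line in open('huge_input.txt'):
        symbol = line.split(',', 1)[0]
        out = open(symbol + '.txt', 'a')
        out.write(line)
        out.close()

Reopening on every row is slow; caching the handles in a dictionary (see the sketch at the end of the thread) trades a little memory for far fewer open() calls.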

Re: Working with Huge Text Files

2005-03-18 Thread Al Christians
I did some similar stuff way back about 12-15 years ago -- in 640k MS-DOS with gigabyte files on 33 MHz machines. I got good performance, able to bring up any record out of 10 million or so on the screen in a couple of seconds (not using Python, but that should not make much difference, maybe even ...
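A hedged sketch of the classic technique behind numbers like that (my reconstruction, not Al's code): scan the file once to record each line's byte offset, then seek() straight to any record on demand.

    # One pass to build an index of byte offsets, one seek per lookup.
    offsets = []
    f = open('huge_input.txt', 'rb')
    while True:
        offsets.append(f.tell())    # position of the line about to be read
        if not f.readline():
            offsets.pop()           # the final tell() pointed at EOF
            break

    def get_record(n):
        """Return record n in O(1), regardless of file size."""
        f.seek(offsets[n])
        return f.readline()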

Re: Working with Huge Text Files

2005-03-18 Thread cwazir
Hi,
Lorn Davies wrote:
> ... working with text files that range from 100MB to 1G in size.
> ...
> XYZ,04JAN1993,9:30:27,28.87,7600,40,0,Z,N
> XYZ,04JAN1993,9:30:28,28.87,1600,40,0,Z,N
> ...

I've found that for working with simple large text files like this, nothing beats the plain old built-in ...
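The snippet is cut off, but iterating over the built-in file object directly is presumably the kind of thing meant; a minimal sketch (the filename is made up):

    # Looping over the file object reads one line at a time, so memory use
    # stays flat even for a 1G input.
    count = 0
    for line in open('ticks.txt'):
        fields = line.strip().split(',')
        if fields[0] == 'XYZ':      # e.g. count the rows for one symbol
            count += 1
    print(count)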

Re: Working with Huge Text Files

2005-03-18 Thread [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
> Lorn Davies wrote:
> > Hi there, I'm a Python newbie hoping for some direction in working with
> > text files that range from 100MB to 1G in size. Basically certain rows,
> > sorted by the first (primary) field maybe second (date), need to be
> > copied and written to their own file ...

Re: Working with Huge Text Files

2005-03-18 Thread Michael Hoffman
[EMAIL PROTECTED] wrote:
> It will only be this simple if you can guarantee that the original file
> is actually sorted by the first field. And if not you can either sort
> the file ahead of time, or just keep reopening the files in append mode
> when necessary.

You could sort them in memory in your Python program ...
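A hedged sketch of the in-memory sort being suggested (feasible only while the whole file fits in RAM, which at 100MB-1G was marginal on 2005-era hardware):

    # Read everything, sort by the first comma-separated field, write back out.
    lines = open('huge_input.txt').readlines()
    lines.sort(key=lambda line: line.split(',', 1)[0])
    open('sorted_output.txt', 'w').writelines(lines)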

Re: Working with Huge Text Files

2005-03-18 Thread [EMAIL PROTECTED]
Lorn Davies wrote:
> Hi there, I'm a Python newbie hoping for some direction in working with
> text files that range from 100MB to 1G in size. Basically certain rows,
> sorted by the first (primary) field maybe second (date), need to be
> copied and written to their own file, and some string manipulations
> need to happen as well ...

Working with Huge Text Files

2005-03-18 Thread Lorn Davies
Hi there, I'm a Python newbie hoping for some direction in working with text files that range from 100MB to 1G in size. Basically certain rows, sorted by the first (primary) field maybe second (date), need to be copied and written to their own file, and some string manipulations need to happen as well ...
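For reference, a minimal sketch of the task as posed, combining the thread's suggestions: one pass over the input, a dictionary of cached output handles so each file is opened only once, and a placeholder for the string manipulations (the filenames and transform() are assumptions, not code from the thread).

    def transform(line):
        # Hypothetical stand-in for the string manipulations mentioned above.
        return line

    handles = {}
    for line in open('huge_input.txt'):
        key = line.split(',', 1)[0]         # first (primary) field
        if key not in handles:
            handles[key] = open(key + '.txt', 'w')
        handles[key].write(transform(line))
    for f in handles.values():
        f.close()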