On Sat, Feb 20, 2010 at 5:32 PM, Vincent Davis wrote:
Thanks again for the comment; I'm not sure I will implement all of it, but I
will separate out the "if not row" check. The files have some extraneous blank
rows in the middle that I need to be sure not to import as blank rows.
I am actually having trouble with this filling my system memory; I posted a
separate question about it.
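(Not from the thread, but one way to keep memory use down is to turn the
reader into a generator. A rough sketch -- iter_data_rows is a made-up name,
and the blank-row and '[MASKS]' handling just follow the descriptions above:)

import csv

def iter_data_rows(filename):
    # Hypothetical generator variant: yields rows one at a time so the
    # whole file never sits in memory at once.
    with open(filename, "U") as f:
        for row in csv.reader(f, delimiter='\t'):
            if '[MASKS]' in row:
                break
            if row:             # skip the extraneous blank rows
                yield row

You can then loop over the rows directly, or wrap the call in list() only
when you really need everything in memory at once.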
On Sat, Feb 20, 2010 at 4:21 PM, Vincent Davis wrote:
Thanks for the help; this is considerably faster and easier to read (see
below). I changed it to avoid the "break", and I think that makes it easier to
understand. Checking the conditions on every row slows it down, but it is
worth it to me at this time.
Thanks again
Vincent
def read_data_file(filename):
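(The rest of this function is cut off in the archive. Judging from the
description above, the break-free version might have looked roughly like
this -- the flag name and the combined condition are guesses, not Vincent's
actual code:)

import csv

def read_data_file(filename):
    reader = csv.reader(open(filename, "U"), delimiter='\t')
    data = []
    keep = True                 # guessed flag name
    for row in reader:
        if '[MASKS]' in row:
            keep = False        # stop collecting, but the loop carries on
        if keep and row:        # checked on every row, as described above
            data.append(row)
    return data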
On Fri, Feb 19, 2010 at 1:58 PM, Vincent Davis wrote:
In reference to the several comments that "[x for x in read] is basically a
copy of the entire list. This isn't necessary." (or list(read)): I had thought
I had a problem with having iterators in the takewhile() statement. I thought
I had tested it and it didn't work. It seems I was wrong; it clearly works.
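For reference, passing the reader straight into takewhile() -- the approach
the comments describe -- looks something like this (a sketch assuming the
same tab-delimited files and '[MASKS]' sentinel as the rest of the thread):

import csv
from itertools import takewhile

def read_data_file(filename):
    reader = csv.reader(open(filename, "U"), delimiter='\t')
    # takewhile() consumes the reader lazily, so there is no need to
    # copy it into a list first with [x for x in reader] or list(reader).
    return list(takewhile(lambda row: '[MASKS]' not in row, reader))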
On 2/19/2010 3:02 PM, MRAB wrote:
Is this any better?
import csv

def read_data_file(filename):
    # Collect tab-delimited rows, stopping at the '[MASKS]' marker row.
    reader = csv.reader(open(filename, "U"), delimiter='\t')
    data = []
    for row in reader:
        if '[MASKS]' in row:
            break
        data.append(row)
    return data
As noted in another thread recently, you
Vincent Davis wrote:
I have some (~50) text files that have about 250,000 rows each. I am reading
them in using the following, which gets me what I want, but it is not fast. Is
there something I am missing that would help? This is mostly a question to
help me learn more about Python. It takes about 4 min right now.
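(Vincent's code is cut off in the archive. Going by the follow-up comments
about "[x for x in read]", the slow version presumably looked something like
this reconstruction -- a guess, not the original:)

import csv
from itertools import takewhile

def read_data_file(filename):
    read = csv.reader(open(filename, "U"), delimiter='\t')
    # Copying the whole reader into a list first is the step the
    # replies call out as unnecessary:
    rows = [x for x in read]
    return list(takewhile(lambda r: '[MASKS]' not in r, rows))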