> > I'm trying to read a large CSV file (some 40000 records) with
> > DBD::CSV version 0.2002 and perl v5.8.1 built for
> > i586-linux-thread-multi
> > 
> > Reading smaller CSV files (e.g. around 5000 records) works like a
> > charm. Only the large ones fail.
> 
> If all you want to do is read all the records, it would IMHO be more
> efficient to use Text::CSV_XS. You'd have to implement any filtering
> yourself, though, but there should be no limit on the number of rows.

That's indeed all I want to do (at this point in time :) and
Text::CSV_XS is way more efficient. Thanks a lot for the pointer.
The program is now working as it should.
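
For the archives, the loop that now does the job is roughly along
these lines (a minimal sketch; the file name 'large.csv' and the
filter on the first column are made up for illustration):

  #!/usr/bin/perl
  use strict;
  use warnings;
  use Text::CSV_XS;

  my $file = 'large.csv';    # placeholder name
  my $csv  = Text::CSV_XS->new({ binary => 1 })
      or die "Cannot create Text::CSV_XS parser";

  open my $fh, '<', $file or die "Cannot open $file: $!";
  while (my $row = $csv->getline($fh)) {
      # $row is an array ref of the record's fields; any filtering
      # is done by hand, e.g. keep rows with a numeric first field:
      next unless $row->[0] =~ /^\d+$/;
      print join("\t", @$row), "\n";
  }
  close $fh;

Since Text::CSV_XS just parses one record per getline() call, memory
use stays flat no matter how many rows the file has.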

However, since I came across it:
_If_ I wanted to use DBD::CSV on large (i.e. 50000 or more records)
CSV files, what would I have to do?
Is there a built-in limit on the number of records DBD::CSV can handle?
And if so, would it be easy to increase?
(And if so, it should probably be mentioned in the POD...)
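
To make the question concrete, I mean usage along these lines (the
directory and table name here are invented for illustration):

  #!/usr/bin/perl
  use strict;
  use warnings;
  use DBI;

  # f_dir points at the directory holding the CSV files.
  my $dbh = DBI->connect('dbi:CSV:f_dir=/data/csv')
      or die "Cannot connect: $DBI::errstr";

  # The table name maps to a file of the same name in f_dir.
  my $sth = $dbh->prepare('SELECT * FROM big');
  $sth->execute or die $sth->errstr;
  while (my @row = $sth->fetchrow_array) {
      print join("\t", @row), "\n";
  }
  $dbh->disconnect;

It is exactly this kind of full-table SELECT that failed for me once
the file grew past a few tens of thousands of records.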

Best,
Michael
-- 
 Vote against SPAM - see http://www.politik-digital.de/spam/
 Michael Gerdau       email: [EMAIL PROTECTED]
 GPG-keys available on request or at public keyserver
