Text::CSV_XS will handle what you want to do just fine. You could do:

    while (my $rec = $sth->fetchrow_arrayref()) {
        $csv->combine(@$rec) and print OUTFILE $csv->string(), $/;
    }

(Note that combine() returns a success flag, not the combined string; the assembled line comes from string().)

If you are pulling large amounts of data across your network, look at optimizing by setting RowCacheSize in the DBI to a higher number. I have found 1200 to be optimal for my stuff; record size does make a difference with this number.

>>> amonotod <[EMAIL PROTECTED]> 09/20 11:23 am >>>
Hello,
  I am wondering if there is any kind of switch that can be passed to DBD::CSV that will enable it to parse only one EOL at a time. We are using DBD::CSV to parse files into databases, and it is working beautifully. Unfortunately, some of these files are in excess of 100MB, and none of them are slated to stop growing. When parsing files under ~10MB it goes fairly quickly, but the parse time on very large files can be up to 30 minutes on a P4 1.7GHz with 1GB RAM. I am currently parsing a 122MB file; perl's memory use went to over 300MB, with a parse time of 32 minutes for this file.

What I'd like to see is DBD::CSV create the handle, allow me to "select * from table", but then wait on actually executing the select statement, while allowing me to call $sth->fetchrow_array against it. In the background, after the initial statement and after each fetchrow_array or fetchrow, DBD::CSV would read the next line of data, parsing to the next EOL...

So, I know I'm dreaming, but is this possible with the present DBD::CSV? Yes, I could manually open() the file, parse to an EOL, and then call DBD::CSV against the in-memory values, but I'd rather not. DBD::CSV is very good about finding the next EOL and making sure it is not part of a quoted field, and I'd much rather rely on that... Maybe this needs a separate module, like DBD::CSV::Loader or something like that, but that would be beyond my l33t skillz... :-(

Ideas? Tips? Flames?
Thanks,
    amonotod

--
    `\|||/     amonotod@     | sun|perl|windows
     (@@)      charter.net   | sysadmin|dba
_____|_____|_____|_____|_____|_____|_____|_____|
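For the record-at-a-time behavior the original poster asks for, a minimal sketch of the manual approach using Text::CSV_XS directly: getline() reads exactly one record per call, honoring EOLs embedded inside quoted fields, so memory stays flat regardless of file size. The filename big.csv is a placeholder.

```perl
use strict;
use warnings;
use Text::CSV_XS;

# binary => 1 lets fields contain embedded newlines and other
# non-ASCII bytes; getline() still stops at the true record boundary.
my $csv = Text::CSV_XS->new({ binary => 1 })
    or die Text::CSV_XS->error_diag;

open my $fh, '<', 'big.csv' or die "big.csv: $!";
while (my $row = $csv->getline($fh)) {
    # $row is an arrayref of fields for one record; only one record
    # is held in memory at a time.
}
close $fh;
```

This gives up the SQL interface, but it is exactly the "parse to the next EOL, one fetch at a time" loop described above, with the quoted-field EOL handling delegated to Text::CSV_XS.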
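On the RowCacheSize tip above, a sketch of where the attribute goes. RowCacheSize is a hint to the driver, and not every driver honors it (DBD::Oracle does; DBD::CSV does not); the DSN, credentials, and table name here are placeholders, so this fragment is not runnable as-is.

```perl
use strict;
use warnings;
use DBI;

# RowCacheSize asks the driver to fetch rows from the server in
# batches of roughly this size; drivers that don't support it
# silently ignore it. All connection details are placeholders.
my $dbh = DBI->connect('dbi:Oracle:mydb', 'user', 'pass',
                       { RaiseError => 1, RowCacheSize => 1200 })
    or die $DBI::errstr;

my $sth = $dbh->prepare('SELECT * FROM big_table');
$sth->execute;
while (my $rec = $sth->fetchrow_arrayref) {
    # each fetch is served from the row cache; the network round
    # trip happens only once per batch
}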