"Pablo Fischer" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > Hi > > I have 10 files, each file has 3000 lines. I have two options but I need to > know which option is better in disk space and in cpu (the fastest one when I > need th edata): > > 1. Keep the 10 files in one Zip, and When its time to use them, unzip them, > and start parsing and processing each one. > > 2. Save each file in row of a table of my Database (using MySql, so I need to > use DBI), and when Its time to use it, connect to the database, and retrieve > row by row.
It depends on how you want to process the files' records once you have opened them:

- sequentially or at random
- frequently or infrequently
- read/write or read-only

and so on. I would vote for option 1 unless you're doing it once a minute and need read/write random access indexed by content, but even then there are other ways (3 .. 42) :)

How do you want to use this stuff, Pablo?

Rob
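
P.S. Just to make option 1 concrete, here is a minimal sketch using Archive::Zip
(one way to read a zip from Perl; the archive name files.zip and the per-line
processing are placeholders, not anything from your setup):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Archive::Zip qw( :ERROR_CODES );

    my $zipfile = 'files.zip';          # placeholder name for your archive

    my $zip = Archive::Zip->new();
    $zip->read($zipfile) == AZ_OK
        or die "Cannot read $zipfile\n";

    # Each member is one of the ten files; pull its contents and
    # process it line by line.
    foreach my $member ( $zip->members() ) {
        my $text = $member->contents(); # uncompressed contents as a string
        foreach my $line ( split /\n/, $text ) {
            # ... parse and process $line here ...
        }
    }

With only 10 files of 3000 lines each, the whole archive fits comfortably in
memory, which is part of why I'd lean toward the zip over a round trip to MySQL.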