> I have a simple table with five columns and 450,000 rows. In SQLiteSpy,
> I can run "SELECT * FROM trend_data" and get all 450,000 rows in 4.5
> seconds. But in my program, if I use sqlite3_prepare() and
> sqlite3_step() until I run out of data, it takes 55 seconds to get
> through all rows.
450,000 rows in 4.5 seconds works out to 100K rows/second, which seems
reasonable for a full scan without ORDER BY on a fast machine.
What timing do you get for this command?
time sqlite3 your.db "SELECT * FROM trend_data" | wc -l
Run the command more than once; the first timing is always slower because
the operating system's file cache starts out cold.
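For comparison, here is a minimal sketch of the same benchmark in Python,
whose stdlib sqlite3 module drives the same prepare/step machinery
underneath. The table and column definitions are invented to mimic the
poster's description (five columns, 450,000 rows); only the row-per-second
figure it prints is the point.

```python
import sqlite3
import time

# Throwaway in-memory database mimicking the poster's table
# (table name from the post; column names/types are made up).
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE trend_data (a INTEGER, b INTEGER, c REAL, d REAL, e TEXT)"
)
con.executemany(
    "INSERT INTO trend_data VALUES (?, ?, ?, ?, ?)",
    ((i, i * 2, i * 0.5, i * 0.25, "x") for i in range(450_000)),
)
con.commit()

# Time a full scan, stepping one row at a time as sqlite3_step() would.
start = time.perf_counter()
rows = 0
for _ in con.execute("SELECT * FROM trend_data"):
    rows += 1
elapsed = time.perf_counter() - start
print(f"{rows} rows in {elapsed:.2f}s ({rows / elapsed:,.0f} rows/sec)")
```

If this loop runs an order of magnitude faster than your C program on the
same data, the time is probably going into per-row work around the step
loop (column conversions, allocations, UI updates) rather than into SQLite
itself.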