Hi,
We are in the process of migrating our .NET desktop applications' database
from SQL Server Express to SQLite (System.Data.SQLite provider). As part of
this task we converted one of the large client databases to SQLite using an
open source tool and tested some of the common queries.
A simple "select * from table" query takes about twice as long in SQLite as
in SQL Server Express. Both use the same data structure and exactly the same
code, except for the connection and command objects. This particular table
has just over a million rows and is related to a lot of other tables. The
total database is about 450 MB across 130 tables.
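For reference, the read side of the test is essentially the standard ADO.NET
pattern sketched below (simplified; the database file name, table name, and
the Stopwatch timing are illustrative only, and the real code maps each row
to an object):

    using System;
    using System.Data.SQLite;
    using System.Diagnostics;

    class ReadTest
    {
        static void Main()
        {
            // "client.db" and "Orders" are placeholder names for this sketch.
            using (var conn = new SQLiteConnection("Data Source=client.db;Version=3;"))
            {
                conn.Open();
                var sw = Stopwatch.StartNew();
                using (var cmd = new SQLiteCommand("SELECT * FROM Orders", conn))
                using (var reader = cmd.ExecuteReader())
                {
                    long rows = 0;
                    while (reader.Read())
                    {
                        rows++;   // the real code reads each column and builds an object here
                    }
                    Console.WriteLine("{0} rows in {1} ms", rows, sw.ElapsedMilliseconds);
                }
            }
        }
    }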
My question is: would reducing the number of rows (by changing the database
design) help in any way, or is the query time a function of the total
database size, irrespective of the number of rows in a table? Related to
that, is the performance of a simple direct query affected by the
constraints on the table in SQLite? Is there any way we can optimize the
performance?
Thanks, I appreciate your input.