Depending on the nature of the data and queries, increasing the page
size (SQLite's term for the block size) may help.
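
For example (just a sketch, untested here; 8192 is a guess, try a few
sizes against your own data):

   -- page_size only takes effect on a new database, or after VACUUM
   PRAGMA page_size = 8192;
   VACUUM;
   -- a larger page cache (counted in pages) can also help a
   -- read-only workload
   PRAGMA cache_size = 100000;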

   Posting some information about your schema and queries is the only
way to get truly good advice on this though, I think.  There is no
"-runfast" switch you can include on the command line to fix things.
:)  The answers are almost guaranteed to be in how you are using
SQLite, not in the database engine itself.
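
If you want a quick check yourself, EXPLAIN QUERY PLAN shows whether a
query uses an index or scans the whole table.  Something like the
following (the table, column, and index names are made up, since we
haven't seen your schema):

   -- does the lookup hit an index, or scan all 30 million rows?
   CREATE INDEX IF NOT EXISTS idx_parts_partno ON parts(part_number);
   EXPLAIN QUERY PLAN
     SELECT price FROM parts WHERE part_number = 'ABC123';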

   -T

On Fri, Feb 20, 2009 at 12:22 AM, Kim Boulton <k...@jesk.co.uk> wrote:
> Hello,
>
> I'm trying out SQLite3 with an eye to improving the performance of
> queries on an existing MySQL database.
>
> I've imported the data into SQLite; it's approx. 30 million rows of
> part numbers, each with a price.
>
> So far, it's approx. four times slower than the MySQL version, and the
> size of the sqlite database is too big to fit in memory (several GB)
> whereas I can get the MySQL data down to 900MB if it's compressed and
> read only.
>
> I would appreciate some tips or pointers on getting SQLite3
> performance up and the data size down. I googled but couldn't find much.
>
> I don't need concurrency or inserts, it's single user, read only.
>
> TIA
>
> kimb