Hi all,

I use an SQLite database with the rtree extension to hold information
about genetic polymorphisms (SNPs), based on UCSC's mapping work and
their MySQL table dumps.  My database is write-once: after the load it
is only ever read.

One of the tables has almost 19 million rows.  This is the table on
which I build my 2-D rtree index.

I read the data from the tab-delimited MySQL dumps into the basic
tables, then run a query that pulls the data I want to rtree-index out
of its table, cleans up one of the columns, and inserts it into the
rtree index.  Finally I add indices on several columns of the big
table.
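
Roughly, that middle step looks like the sketch below.  The table and
column names are made up for illustration, and the chromosome cleanup
shown is just a stand-in for the real one:

    -- 2-D rtree: one dimension for chromosome, one for position.
    CREATE VIRTUAL TABLE snp_rtree USING rtree(
        id, minChrom, maxChrom, minPos, maxPos);

    -- Pull from the base table, clean the chromosome column, and
    -- insert into the rtree, all in one statement.
    INSERT INTO snp_rtree
    SELECT rowid,
           CAST(replace(chrom, 'chr', '') AS INTEGER),  -- cleaned column
           CAST(replace(chrom, 'chr', '') AS INTEGER),  -- (stand-in cleanup)
           chromStart,
           chromEnd
    FROM snp;

    -- Then the plain indices on the big table.
    CREATE INDEX snp_name_idx ON snp(name);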

I tried using .import to move the data into the tables, but gave up
after it had run well past the time the plain inserts took.  It didn't
seem to help with the index/rtree creation time either.
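
For reference, the .import attempt was essentially this (file and
table names hypothetical):

    .separator "\t"
    .import snp.txt snp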

I'm turning synchronous off and doing the inserts inside a single
transaction.
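
Concretely, the load runs like this; the commented-out pragmas are
knobs I've seen suggested but haven't verified myself:

    PRAGMA synchronous = OFF;
    -- PRAGMA journal_mode = MEMORY;  -- keep the rollback journal in RAM
    -- PRAGMA cache_size = 100000;    -- enlarge the page cache (in pages)
    BEGIN;
    -- ... all ~19 million INSERTs ...
    COMMIT;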

I was wondering if there is some way to populate the database inside
a ':memory:' database and then dump/copy/... the results into a file.
I've seen posts suggesting that I can select from the in-memory tables
and insert into the persistent ones, but it seems like it would take
extra work to get the indexes rebuilt.
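
The version of that I've seen sketched uses ATTACH, roughly as below
(names hypothetical again).  The catch is the part I mentioned:
CREATE TABLE ... AS SELECT doesn't carry indices across, and the
rtree has to be re-created and re-populated on the disk side:

    -- Build everything in the :memory: database first, then:
    ATTACH DATABASE 'snp.db' AS disk;

    -- Plain tables copy over in one statement...
    CREATE TABLE disk.snp AS SELECT * FROM main.snp;

    -- ...but the indices and the rtree must be rebuilt:
    CREATE VIRTUAL TABLE disk.snp_rtree USING rtree(
        id, minChrom, maxChrom, minPos, maxPos);
    INSERT INTO disk.snp_rtree SELECT * FROM main.snp_rtree;
    CREATE INDEX disk.snp_name_idx ON snp(name);

    DETACH disk;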

I'd be interested in any suggestions for making my loading go faster.

Thanks,

g.