Firstly, I have just released a new Perl DBD::SQLite module, which
fixes a few Perl-side bugs and moves the bundled SQLite from 3.7.7 to 3.7.9.

Secondly, given a collection of arbitrary SQLite database files with
unknown schemas that were created before 3.7.8, is it possible to
bulk-process the databases to gain the speed improvements of the new
merge-sort index creation in 3.7.8?

My reading of the release notes is that connecting to each database
file and issuing a REINDEX might be enough, perhaps followed by a
VACUUM and an ANALYZE for good measure.

But I can't see anything that says explicitly whether this is enough,
or whether I need to drop all of the indexes entirely and create them
again.

The latter is obviously more problematic when the schemas are
arbitrary.

Thanks for your advice.

Adam Kennedy