On Tue, 6 Aug 2013 10:44:49 -0400
Richard Hipp <d...@sqlite.org> wrote:

> On Tue, Aug 6, 2013 at 10:21 AM, Simon Slavin <slav...@bigfraud.org> wrote:
> 
> >
> > On 6 Aug 2013, at 3:12pm, Bo Peng <ben....@gmail.com> wrote:
> >
> > > The problems we have only happen with large databases, and it is
> > > difficult to create test cases to reproduce them.
> >
> > SQLite is not optimised for databases which contain a very large number of
> > tables (hundreds).  It has to scan the long list of tables repeatedly,
> > which will slow things down.
> >
> >
> SQLite uses hashing to find tables - no list scanning.
> 
> However, when you do sqlite3_open(), SQLite has to read and parse the
> entire schema.  SQLite is very efficient about that, but it does take time
> proportional to the size of the schema.  Once the database is open, the
> schema is not read again (all schema lookups are via hash) unless some
> other process changes the schema, in which case the previous schema parse
> must be discarded and the whole thing read and parsed again.
> 
> Detail:  The schema parse does not happen during the call to sqlite3_open().
> Instead, sqlite3_open() sets a flag that indicates that the schema needs to
> be read, but the actual reading and parsing is deferred until it is
> really needed.
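
[Editor's note: a rough illustration of the deferred schema parse described above, sketched with Python's built-in sqlite3 module (which wraps the C API). The table count and file location are arbitrary choices for the demo; connecting is cheap, and the schema is read when the first statement that needs it is prepared.]

```python
import os
import sqlite3
import tempfile
import time

# Build a throwaway database with a few hundred tables so the
# schema parse has something to chew on (300 is arbitrary).
path = os.path.join(tempfile.mkdtemp(), "many_tables.db")
conn = sqlite3.connect(path)
conn.executescript(";".join(
    f"CREATE TABLE t{i} (id INTEGER PRIMARY KEY, v TEXT)"
    for i in range(300)
))
conn.close()

# Opening the connection only sets the "schema needed" flag...
t0 = time.perf_counter()
conn = sqlite3.connect(path)
t_open = time.perf_counter() - t0

# ...the first prepared statement forces the schema to be
# read and parsed, so this step pays the proportional cost.
t0 = time.perf_counter()
n_tables = conn.execute(
    "SELECT count(*) FROM sqlite_master WHERE type='table'"
).fetchone()[0]
t_first_stmt = time.perf_counter() - t0

print(n_tables)  # 300
conn.close()
```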

Couldn't autovacuum in incremental mode help with large databases that have
large schemas? The db file is open read-only, so no updates to the ptrmap
pages should happen.

---   ---
Eduardo Morras <emorr...@yahoo.es>
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users