On 13 Jan 2016, at 11:43am, Olivier Mascia <om at integral.be> wrote:

> Is there any known structural performance issue working with a schema made of 
> about 100 tables, about 80 foreign key constraints, and some indexes in 
> addition to those implied by the primary keys and foreign keys?  In my book 
> it does not qualify as a complex schema: some tables would have 30 to 40 
> columns, 4 or 5 tables are candidates for a moderate number of rows 
> (rarely more than 1 million), and one of the tables could receive about 10 
> million rows after some years of data collection (so again nothing really 
> fancy).

None of these numbers are anywhere near the limits of SQLite.  If you want 
specific numbers (which you don't need to worry about with this database) see 
this:

<https://www.sqlite.org/limits.html>

> Does SQLite have to reparse the schema text often to execute the queries? Or 
> is the schema somehow translated internally to a stored, digested 
> ('compiled') format, to ease its work?

The first time SQLite needs the database schema it reads it from the file and, 
as you suspected, stores it internally in a more convenient and efficient 
format.  That internal copy is only trusted until a transaction ends and the 
database is unlocked, because after that another process might create or 
delete a table, create a UNIQUE index, or make some other schema change.  
However, a schema-version counter is stored in the database header and if, on 
the next command, SQLite sees that this counter hasn't changed, it continues 
to use its internal copy instead of re-reading and re-parsing the schema text.
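
If you want to look at that counter yourself, PRAGMA schema_version reads it.  
Here is a minimal sketch using the SQLite C API; the file name "app.db" is 
just a placeholder:

#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    sqlite3_stmt *stmt = NULL;

    if (sqlite3_open("app.db", &db) != SQLITE_OK) {   /* placeholder file name */
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    /* PRAGMA schema_version reads the schema cookie stored in the database
     * header; SQLite bumps it whenever the schema changes, and compares it
     * against its cached value before reusing the parsed schema. */
    if (sqlite3_prepare_v2(db, "PRAGMA schema_version;", -1, &stmt, NULL) == SQLITE_OK
        && sqlite3_step(stmt) == SQLITE_ROW) {
        printf("schema_version = %d\n", sqlite3_column_int(stmt, 0));
    }

    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}

(If you prepare statements with sqlite3_prepare_v2(), a schema change is also 
handled for you: the statement is re-prepared automatically instead of the 
call failing with SQLITE_SCHEMA.)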

> The application which would use this schema is a server-side application 
> (quite along the lines described in http://sqlite.org/whentouse.html).  We 
> have started experimenting and things look very good, excellent I should 
> say, so the above question is more about which details to keep an eye on 
> and which traps to avoid.  I'm pretty sure there are people here with 
> valuable experience with similar datasets.

I have written server-side front ends to SQLite databases without problems.  
In my experience any problems that do turn up come from the network 
communications, not from SQLite.

My only concern with what you wrote is that you mention 100 tables.  In order 
to find the table you specify, SQLite has to go through a list of (hashed, I 
think) table names, and going through an average of 50 of them per command 
can be slow.  If you have many tables with the same columns it's more 
efficient to merge them into one table with an extra column for the original 
table name (a sketch of that layout follows below).  Not only can this speed 
things up, it also makes the database file smaller, since each table and 
index in the database has its own list of pages of data.
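A minimal sketch of that merged-table idea, again using the C API; the table, 
column, and file names here ("readings", "source", "app.db") are made up for 
illustration:

#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    char *err = NULL;

    /* Instead of one table per data source (readings_a, readings_b, ...),
     * keep a single table and record the former table name in a column. */
    const char *ddl =
        "CREATE TABLE IF NOT EXISTS readings("
        "  source   TEXT NOT NULL,"            /* was the table name */
        "  taken_at INTEGER NOT NULL,"
        "  value    REAL"
        ");"
        "CREATE INDEX IF NOT EXISTS readings_source"
        "  ON readings(source, taken_at);";    /* keeps per-source queries fast */

    if (sqlite3_open("app.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }
    if (sqlite3_exec(db, ddl, NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "exec failed: %s\n", err);
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return 0;
}

Queries that used to read from readings_a then become 
SELECT ... FROM readings WHERE source = 'a', and all the data shares one 
table's (and one index's) set of pages.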

Simon.
