On Wed, Jan 13, 2016 at 12:43 PM, Olivier Mascia <om at integral.be> wrote:

> Is there any known structural performance issue working with a schema made
> of [many] tables, [...] foreign keys constraints, [...] indexes [...]
> primary keys and foreign keys. [...] tables would have [many] columns and
> [...] rows
>

SQLite scales well in general.


> Does sqlite have to reparse the schema text often to execute the queries?
> Or is the schema somehow translated internally to a, stored, digested
> ('compiled') format, to ease its working?
>

Don't know for sure, but I've never heard of a 'compiled' form for the
schema. Statements, yes, that's the VDBE code, but I believe the schema is
reparsed from the text in sqlite_master when necessary. The SQL parser in
SQLite is quite fast BTW, and as long as you cache your prepared statements
instead of recreating (and thus reparsing) them all the time, parsing is
unlikely to be an issue IMHO.
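To illustrate the statement-caching point, here's a minimal sketch using Python's stdlib sqlite3 module (my choice of binding, not the OP's): the module keeps an internal cache of compiled statements, sized by the `cached_statements` argument, so executing the same SQL text repeatedly reuses the prepared VDBE program rather than reparsing it.

```python
import sqlite3

# sqlite3 caches compiled statements per connection; reusing one SQL
# string means the INSERT is parsed once, then replayed from the cache.
conn = sqlite3.connect(":memory:", cached_statements=128)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

insert_sql = "INSERT INTO t (val) VALUES (?)"  # one SQL string, reused
with conn:
    for i in range(1000):
        conn.execute(insert_sql, (f"row{i}",))  # cached after first parse

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 1000
```

The same idea applies in C with sqlite3_prepare_v2(): prepare once, then bind/step/reset in a loop.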


> The application which would use this schema is a server-side application
> (quite along the lines described in http://sqlite.org/whentouse.html).
> We have started experimentation and things look very good, excellent should
> I say, so the above question is more about which details to supervise,
> which traps to avoid.  I'm pretty sure there are people here with valuable
> background with similar datasets.
>

In a server app, it depends on what kind of concurrency you want to
support. Write locks are exclusive and database-wide, and SQLite doesn't
implement MVCC, so write concurrency could be an issue. WAL mode is
supposed to be better for concurrency too, since readers don't block the
writer.
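Enabling WAL is a one-line pragma; a minimal sketch (WAL needs a real file, not :memory:):

```python
import sqlite3, tempfile, os

# PRAGMA journal_mode=WAL returns the resulting mode, so you can check
# it actually took effect on this database file.
path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path)
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # "wal"
```

The mode is persistent: it sticks to the database file, so later connections inherit it.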
If your data can be naturally partitioned into several database files (all
using the same schema), you can achieve better concurrency by opening more
than one DB file simultaneously. This is an approach we're using
internally. It assumes your FKs don't cross those DB file boundaries.
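A hypothetical sketch of that partitioning idea: one database file per partition (here, per region), all sharing the same schema. Each file has its own write lock, so writers to different partitions don't block each other. The table and names are purely illustrative.

```python
import sqlite3, tempfile, os

base = tempfile.mkdtemp()

def open_partition(name):
    # One file per partition; same schema in each. Writers to different
    # partitions hold independent locks and proceed concurrently.
    conn = sqlite3.connect(os.path.join(base, f"{name}.db"))
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, total REAL)")
    return conn

eu, us = open_partition("eu"), open_partition("us")
with eu:
    eu.execute("INSERT INTO orders (total) VALUES (?)", (10.0,))
with us:
    us.execute("INSERT INTO orders (total) VALUES (?)", (20.0,))

eu_count = eu.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(eu_count)  # 1
```

Note the caveat from above: this only works cleanly if no foreign key needs to reference a row in a different partition's file.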
Another gotcha is related to large blobs, if you need to partially update
them: SQLite must rewrite the whole blob. That may not apply to you,
though. The same issue arises with ALTER TABLE ADD COLUMN if you have
large blobs: SQLite must rewrite the rows (and the blobs), which can be
very expensive. The typical work-around (a hack IMHO) is to put your blobs
in separate aux tables.
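A sketch of that aux-table work-around, with illustrative names: row metadata stays in a small main table, while each large blob lives in a 1:1 side table keyed by the main table's rowid, so schema changes and updates to the main table never have to touch the blob-bearing rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE doc (
        id    INTEGER PRIMARY KEY,
        title TEXT
    );
    -- 1:1 side table: the blob is the only payload here.
    CREATE TABLE doc_blob (
        doc_id  INTEGER PRIMARY KEY REFERENCES doc(id),
        content BLOB
    );
""")
with conn:
    conn.execute("INSERT INTO doc (id, title) VALUES (1, 'spec')")
    conn.execute("INSERT INTO doc_blob VALUES (1, ?)", (b"\x00" * 1024,))

# Altering the main table leaves the blob rows in doc_blob untouched.
conn.execute("ALTER TABLE doc ADD COLUMN author TEXT")
size = conn.execute(
    "SELECT length(content) FROM doc_blob WHERE doc_id = 1").fetchone()[0]
print(size)  # 1024
```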

There are other small gotchas (untyped / dynamically typed columns, legacy
non-standard behaviors, etc.), but the above are the two things that come
to mind given what you describe. --DD
