On Tue, Aug 6, 2013 at 11:03 AM, Simon Slavin <slav...@bigfraud.org> wrote:

>
> On 6 Aug 2013, at 3:44pm, Richard Hipp <d...@sqlite.org> wrote:
>
> > On Tue, Aug 6, 2013 at 10:21 AM, Simon Slavin <slav...@bigfraud.org>
> wrote:
> >
> >> SQLite is not optimised for databases which contain a very large number
> of
> >> tables (hundreds).  It has to scan the long list of tables quite a lot.
> >> This will lead to low speeds.
> >
> > SQLite uses hashing to find tables - no list scanning.
>
> I sit corrected.  Thanks.
>

I need to correct myself, slightly.

Tables, indices, triggers, and views are looked up via hash.  However, the
names of columns in a table are stored in an array, which is scanned.  So
if you have tables with vast numbers of columns, there could be a slowdown
in code generation (sqlite3_prepare) as that vast array of column names is
scanned.

However, we have never noticed a performance issue there, and hence have
never felt the need to hash the column names.
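
For anyone who wants to see the effect, here is a quick sketch (purely
illustrative: the table name "t", the 2000-column count, and the
10000-iteration timing loop are arbitrary choices, and you would compile
with -lsqlite3) that times sqlite3_prepare_v2() against a very wide
table.  Selecting a column near the end of the array forces the full
linear scan of column names on every prepare:

    #include <sqlite3.h>
    #include <stdio.h>
    #include <time.h>

    #define NCOLS 2000   /* a "vast number of columns" for the test */

    int main(void){
      sqlite3 *db;
      sqlite3_stmt *stmt;
      static char ddl[NCOLS*16 + 64];  /* room for the CREATE TABLE text */
      char *p = ddl;
      int i;

      sqlite3_open(":memory:", &db);

      /* Build "CREATE TABLE t(c0,c1,...,c1999)" */
      p += sprintf(p, "CREATE TABLE t(c0");
      for(i=1; i<NCOLS; i++) p += sprintf(p, ",c%d", i);
      sprintf(p, ")");
      sqlite3_exec(db, ddl, 0, 0, 0);

      /* Resolving "c1999" requires a linear scan of the column-name
      ** array inside sqlite3_prepare_v2(). */
      clock_t t0 = clock();
      for(i=0; i<10000; i++){
        sqlite3_prepare_v2(db, "SELECT c1999 FROM t", -1, &stmt, 0);
        sqlite3_finalize(stmt);
      }
      printf("10000 prepares of a %d-column table: %.3f seconds\n",
             NCOLS, (double)(clock()-t0)/CLOCKS_PER_SEC);

      sqlite3_close(db);
      return 0;
    }

Even at 2000 columns the per-prepare cost should come out tiny, which is
consistent with point (2) below.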

Key points:

(1)  The size of the schema affects start-up time.  Larger schemas take
longer to start up, but run just as quickly once start-up is complete.

(2)  Tables with vast numbers of columns (thousands) can take longer to
sqlite3_prepare() than reasonably-sized well-normalized tables.  But
nobody has ever noticed the difference, or at least nobody has reported
it.

(3)  The minimum size of a database file is one page per table and index
(including indices that are implied by UNIQUE constraints).  So if you are
using a 4K page size and you have a schema with 1000 tables and indices, an
empty database file will be about 4MB in size.  (See the sketch below.)
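
To put numbers on point (3), here is a minimal sketch (assuming a POSIX
system for stat(), a scratch file named "pages.db", and default journal
settings) that creates 1000 empty one-column tables at a 4K page size
and reports the resulting file size.  Expect a bit over 4MB, since page 1
and the pages holding the sqlite_master entries also count:

    #include <sqlite3.h>
    #include <stdio.h>
    #include <sys/stat.h>

    int main(void){
      sqlite3 *db;
      struct stat st;
      char sql[64];
      int i;

      remove("pages.db");              /* start from an empty file */
      sqlite3_open("pages.db", &db);
      sqlite3_exec(db, "PRAGMA page_size=4096;", 0, 0, 0);

      /* One transaction so the schema is written in a single pass. */
      sqlite3_exec(db, "BEGIN;", 0, 0, 0);
      for(i=0; i<1000; i++){
        sprintf(sql, "CREATE TABLE t%d(x);", i);
        sqlite3_exec(db, sql, 0, 0, 0);
      }
      sqlite3_exec(db, "COMMIT;", 0, 0, 0);
      sqlite3_close(db);

      /* Expect 1000 root pages for the tables, plus page 1 and the
      ** pages holding the sqlite_master entries. */
      stat("pages.db", &st);
      printf("file size: %lld bytes (~%lld pages of 4096 bytes)\n",
             (long long)st.st_size, (long long)st.st_size/4096);
      return 0;
    }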
-- 
D. Richard Hipp
d...@sqlite.org