On Wed, Jun 24, 2009 at 02:21:09PM -0500, Matthew O'Keefe wrote:
> 
> 
> We are using SQLite to index a huge number (i.e., 100 million to 1
> billion) of key-value pairs, where each key is an 88-byte string. We
> are using a single table with a very large number of rows (one for
> each data chunk).
> 
> The table has two columns.  One is of type "text" and the other is
> type "integer".
> 
> The table is created with:
> 
> CREATE TABLE chunks
> (
>   name text primary key,
>   pid integer not null
> );
> 
> As expected, as the table grows, the underlying B-tree implementation
> in SQLite means that the number of disk accesses needed to (a) find,
> and (b) add a chunk grows larger and larger. We've tested up to 20
> million chunks represented in the table: as expected, performance
> drops off sharply as the number of table entries grows.
> 
> We wanted to post to the mailing list to see if there are any obvious,
> first-order things
> we can try to improve performance for such a large table.

Bit late to the game...
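
Assuming your "find" and "add" boil down to single-row statements like
the following (the exact SQL here is my guess, not something from your
post):

    SELECT pid FROM chunks WHERE name = ?;          -- (a) find a chunk
    INSERT INTO chunks (name, pid) VALUES (?, ?);   -- (b) add a chunk

then each statement descends the unique index SQLite builds for your
text primary key, so the cost in page reads grows with the depth of
that B-tree.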

Try increasing your page size. A larger page size gives greater
fan-out in the B-tree, which means a shallower tree and fewer I/O
requests per lookup.
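
A minimal sketch, assuming a default 1024-byte page (the sizes are
illustrative, not a recommendation tuned to your workload):

    -- page_size must be a power of two; it only takes effect on a
    -- freshly created database, or after a VACUUM rebuilds the file.
    PRAGMA page_size = 32768;
    VACUUM;   -- rewrite the existing pages at the new size

With 88-byte keys, a 32768-byte page holds roughly 32 times as many
cells as a 1024-byte one, so each interior node covers far more of the
key space and the tree ends up several levels shallower.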

Christian