Hi,
I have an application where a table is filled with elements whose primary key
is specified at insertion, and is actually a combination of several independent
IDs. Example: ItemID = (id0 << 32) + (id1 << 16) + id2.
The range of each ID guarantees that the combined value never exceeds the
64-bit maximum of an SQLite INTEGER primary key. The advantage of this
approach is that a function performing a SELECT can pre-compute the key it
needs to retrieve from the database. This is what I call a sparse table: the
table will never hold more than X rows, but the primary key range is much
larger than X. Sorry if my definitions are not standard; SQL is not my native
language :)
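For what it's worth, here is a minimal sketch of the packing I mean. The bit
widths (31/16/16) and the function name are assumptions for illustration; the
asserts just keep the result inside SQLite's signed 64-bit INTEGER range:

```python
def make_item_id(id0: int, id1: int, id2: int) -> int:
    """Pack three independent IDs into one 64-bit key.

    Assumed layout: id0 in bits 32..62, id1 in bits 16..31, id2 in bits 0..15.
    """
    assert 0 <= id0 < (1 << 31), "id0 must fit in 31 bits (signed 64-bit key)"
    assert 0 <= id1 < (1 << 16), "id1 must fit in 16 bits"
    assert 0 <= id2 < (1 << 16), "id2 must fit in 16 bits"
    return (id0 << 32) | (id1 << 16) | id2
```

Because the fields occupy disjoint bit ranges, OR-ing the shifted values is
equivalent to adding them, and the mapping is reversible.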
This scheme is used because IDs are usually inserted all at once in a single
batch, and then they have to be regularly updated over a very long time. So
there are relatively few INSERTs and a LOT of UPDATEs and SELECTs.
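To make the access pattern concrete, here is a sketch using Python's sqlite3
module. The table and column names (`items`, `ItemID`, `value`) and the bit
layout are assumptions for illustration, not part of my actual schema:

```python
import sqlite3

def make_item_id(id0: int, id1: int, id2: int) -> int:
    # Assumed layout: id0 in bits 32..62, id1 in bits 16..31, id2 in bits 0..15.
    return (id0 << 32) | (id1 << 16) | id2

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (ItemID INTEGER PRIMARY KEY, value INTEGER)")

# One-time batch insert: keys are computed up front, not auto-assigned.
rows = [(make_item_id(0, 0, n), n) for n in range(100)]
conn.executemany("INSERT INTO items VALUES (?, ?)", rows)

# Afterwards, many point UPDATEs/SELECTs hit a pre-computed key directly.
key = make_item_id(0, 0, 42)
conn.execute("UPDATE items SET value = value + 1 WHERE ItemID = ?", (key,))
(val,) = conn.execute(
    "SELECT value FROM items WHERE ItemID = ?", (key,)
).fetchone()
```

Since ItemID is an INTEGER PRIMARY KEY, it aliases the rowid, so these point
lookups go straight through the table's B-tree with no extra index.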
I'm wondering if the advantage in search speed obtained by this ID assignment
scheme may somehow be offset by other factors like additional memory usage,
less efficient inserts, etc. Can anyone offer counterarguments, or recommend a
better scheme?
Regards
David
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users