I have an application with the following behavior:
- Data is large -- gigabytes or tens of gigabytes (will never fit into memory)
- the indexes for the data are small -- will definitely fit into memory
- it will be relatively rare that any particular data is accessed more often than
I have a little experience with a somewhat similar setup: a typical real
file is 200 MB to 2 GB. I am making a snapshot of the data structure (a few
hundred KB) into an attached in-memory database.
I've seen that the time required to create that snapshot depends largely on
the size of the entire table, even if only selected columns go into the
snapshot.
John Bachir wrote:
I've read other posts on this list that say that we can't guess what SQLite
will do with the cache.
It uses a simple LRU algorithm to determine which pages to kick out of
the page cache first (so at least it's somewhat deterministic).
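The size of the cache that LRU operates over is also tunable at runtime, so
you can at least bound how much SQLite keeps around. A minimal sketch (the
value here is an arbitrary example, not a recommendation):

  PRAGMA cache_size;           -- current limit, in pages (or negative KiB)
  PRAGMA cache_size = -65536;  -- cap the page cache at roughly 64 MB

A negative value is interpreted as kibibytes; a positive value is a number
of pages.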
However, could I be relatively confident
There is also the compile-time option
SQLITE_DIRECT_OVERFLOW_READ
(see here: http://www.sqlite.org/compile.html)
that makes content in overflow pages bypass the SQLite page cache. In my
understanding, that should help if the majority of the data consists of
large BLOB/string cells.
(It would be
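You can check from SQL whether a given build already has that option
compiled in (a small sketch; sqlite_compileoption_used() is a standard
built-in, the option name is just passed through):

  SELECT sqlite_compileoption_used('SQLITE_DIRECT_OVERFLOW_READ');
  -- returns 1 if the library was built with -DSQLITE_DIRECT_OVERFLOW_READ

If it returns 0, you would need to rebuild (e.g. from the amalgamation)
with that define.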
On 21 Sep 2012, at 12:51pm, _ph_ hauptma...@yahoo.com wrote:
I've seen that the time required to create that snapshot depends largely on
the size of the entire table, even if only selected columns go into the
snapshot.
I.e.
ATTACH ':memory:' AS mem;
-- (statement was truncated in the quote; table/column names here are
--  illustrative, see the sketch below)
INSERT INTO mem.Snapshot SELECT id, name, size FROM item;
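That matches how SQLite lays out rows: all columns of a row are stored
together in one record, and large values spill into overflow page chains,
so a scan that only needs a few small columns can still pull the big rows
through the pager. If the snapshot columns are fixed, one possible
workaround (a sketch only; the schema below is invented for illustration)
is to keep the large values in a separate table so the snapshot scan never
visits them:

  -- small, frequently snapshotted columns
  CREATE TABLE item(id INTEGER PRIMARY KEY, name TEXT, size INTEGER);
  -- large payloads in a side table, joined by id only when actually needed
  CREATE TABLE item_data(id INTEGER PRIMARY KEY REFERENCES item(id),
                         data BLOB);

  ATTACH ':memory:' AS mem;
  -- the scan over item only touches small pages
  CREATE TABLE mem.Snapshot AS SELECT id, name, size FROM item;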