My application uses SQLite to cache data. Specifically, the cache isn't the
storage of record for anything; it's just a local copy of data that's stored
remotely on servers of record. The cache primarily facilitates rapid searching
of the data locally, via FTS.
I'm seeking specific advice on how to tune SQLite for this application when
deployed on a target with extremely poor write performance. On this target,
writing in many small chunks is much more expensive than writing in a single
big sequential chunk.
On 17 Feb 2016, at 2:37pm, Dave Baggett wrote:
> I'm seeking specific advice on how to tune SQLite for this application when
> deployed on a target with extremely poor write performance. On this target
> writing in many small chunks is much more expensive than writing in a single
> big sequential chunk.
I should have clarified: the problematic locking is happening in the OS layer.
I've completely disabled SQLite (thread) locking by building with
SQLITE_THREADSAFE=0.
(And, yes, I'm only using SQLite from a single thread!)
Regarding transactions, I'm bundling write operations into transactions, but
not optimally.
On 17 Feb 2016, at 2:48pm, Dave Baggett wrote:
> Regarding transactions, I'm bundling write operations into transactions, but
> not optimally. If, say, I do a huge write as a single transaction, will that
> cause SQLite to keep everything in memory until I COMMIT? Perhaps that's the
> right s
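For what it's worth, the bundling pattern being discussed looks roughly like this. A sketch using Python's sqlite3 module; the table name, row contents, and batch size are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the on-disk cache file
conn.execute("CREATE TABLE cache (id INTEGER PRIMARY KEY, body TEXT)")

rows = [(i, "item %d" % i) for i in range(10_000)]
BATCH = 1_000  # rows per transaction; tune to the target's write cost

# One transaction per BATCH rows instead of one implicit transaction per row:
# SQLite pays the journal/sync overhead once per COMMIT, not once per INSERT.
for start in range(0, len(rows), BATCH):
    with conn:  # BEGIN ... COMMIT around the whole batch
        conn.executemany("INSERT INTO cache VALUES (?, ?)",
                         rows[start:start + BATCH])

count = conn.execute("SELECT COUNT(*) FROM cache").fetchone()[0]  # 10000
```

Larger batches mean fewer COMMITs (and fewer small writes), at the cost of more dirty pages held in the page cache until each COMMIT.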
On 17/02/16 06:37, Dave Baggett wrote:
> I'd welcome any suggestions
How about two databases? Create an in-memory database for the cache.
Then whenever it hits a certain size (e.g. 64MB) or enough time has passed
(e.g. 5 minutes), copy/move data from the memory database to the disk database.
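A minimal sketch of that spill step, using ATTACH from the in-memory connection; the `items` schema and the `spill` alias are hypothetical:

```python
import sqlite3, os, tempfile

# On-disk database (the eventual home of the data).
disk_path = os.path.join(tempfile.mkdtemp(), "cache.db")
disk = sqlite3.connect(disk_path)
disk.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, body TEXT)")
disk.commit()

# In-memory cache database with the same schema.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, body TEXT)")
mem.executemany("INSERT INTO items VALUES (?, ?)", [(1, "a"), (2, "b")])
mem.commit()  # ATTACH below must run outside a transaction

# Spill: attach the on-disk database and move the rows in one transaction.
mem.execute("ATTACH DATABASE ? AS spill", (disk_path,))
with mem:
    mem.execute("INSERT INTO spill.items SELECT * FROM items")
    mem.execute("DELETE FROM items")
mem.execute("DETACH DATABASE spill")
```

Because the whole spill is one transaction, the disk sees one large sequential burst rather than many small writes, which matches the target's cost model.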
On 17 Feb 2016, at 3:34pm, Simon Slavin wrote:
> A lot of operations on the database file are done at the beginning and end of
> every transaction. If your journal is in memory, then you can dramatically
> reduce disk usage by using large transactions. So I think you are right and you
> should che
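Putting the journal in memory is a one-line PRAGMA. A sketch (file path is illustrative; note the durability trade-off):

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "cache.db")
conn = sqlite3.connect(path)

# Keep the rollback journal in RAM: transaction bookkeeping never touches disk.
# Trade-off: a crash mid-transaction can corrupt the database file, which is
# acceptable here because the cache can be rebuilt from the servers of record.
mode = conn.execute("PRAGMA journal_mode=MEMORY").fetchone()[0]  # "memory"

# With the journal in memory, a transaction's disk cost is dominated by the
# page writes at COMMIT, so fewer, larger transactions mean fewer disk touches.
with conn:
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.execute("INSERT INTO t VALUES (1)")
```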
That's a great suggestion. One issue, though, is that I'd have to run two FTS
searches -- one on the disk-based database, and one on the
memory-based one. I also already have the database split into 8 .dat files for
scaling purposes. :)
But this may be workable -- thanks. (BTW, I am us
OK, that helps -- thank you.
One clarification: is it the case that transaction bundling ONLY affects
write/delete operations -- i.e., those operations that alter the database?
Another clarification: is it the case that writes within a single transaction
will remain in the in-memory page cache until I COMMIT?
Clarification to my own post:
When I talk about buffering, say, 16MB of write transactions in memory, I want
the effects of those transactions to be visible to readers immediately (once
they are "committed" to RAM). This seems akin to WAL mode, where SQLite scans
the .wal file prior to scanning the .dat file.
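The WAL behavior described above can be seen directly: a commit appends to the -wal file rather than rewriting the main database, and readers consult the WAL first, so committed data is immediately visible. A small sketch with an illustrative file path:

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "wal.db")
writer = sqlite3.connect(path)
mode = writer.execute("PRAGMA journal_mode=WAL").fetchone()[0]  # "wal"
writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("INSERT INTO t VALUES (1)")
writer.commit()  # appended to wal.db-wal; the main file isn't rewritten yet

# A second connection sees the committed row immediately, because readers
# check the WAL before the main database file.
reader = sqlite3.connect(path)
seen = reader.execute("SELECT x FROM t").fetchone()[0]  # 1
```

The main file is only updated later, at checkpoint time, which turns many small commits into one larger sequential write.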
Just FYI, FTS writes each transaction's index data in a segment, then does
segment merges over time. So there's some advantage to bulk updates versus
one-at-a-time updates in terms of index fragmentation and write overhead.
Having an in-memory FTS table which you spill to the on-disk table(s) as a
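To illustrate the segment point: a batched FTS insert produces one index segment per transaction, and the documented 'optimize' special command merges the accumulated segments. A sketch using FTS4 with a made-up `docs` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts4(body)")

# Bulk insert in one transaction: FTS writes one new index segment for the
# whole batch, instead of one segment per single-row transaction.
with conn:
    conn.executemany("INSERT INTO docs(body) VALUES (?)",
                     [("apple pie",), ("apple tart",), ("pear tart",)])

# Merge all index segments into one with the 'optimize' special command.
conn.execute("INSERT INTO docs(docs) VALUES ('optimize')")
conn.commit()

hits = conn.execute("SELECT body FROM docs WHERE docs MATCH 'apple'").fetchall()
```

Fewer, larger transactions therefore reduce both index fragmentation and the number of merge passes FTS has to do later.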
On 17/02/16 08:20, Dave Baggett wrote:
> One issue, though, is that I'd have to run two FTS searches --
> one on the disk-based database, and one on the
> memory-based one
You see issues, I see features :-)
The memory based cache would conta
On 17 Feb 2016, at 4:27pm, Dave Baggett wrote:
> One clarification: is it the case that transaction bundling ONLY affects
> write/delete operations -- i.e., those operations that alter the database?
A transaction must be created for read operations too. Otherwise the database
might be changed in the middle of your read.
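That read-transaction behavior is easy to demonstrate in WAL mode: once a read transaction takes its snapshot, later commits by a writer are invisible to it until the reader's transaction ends. A sketch with illustrative names:

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "snap.db")
w = sqlite3.connect(path)
w.execute("PRAGMA journal_mode=WAL")
w.execute("CREATE TABLE t (x INTEGER)")
w.execute("INSERT INTO t VALUES (1)")
w.commit()

r = sqlite3.connect(path)
r.execute("BEGIN")                              # explicit read transaction
r.execute("SELECT COUNT(*) FROM t").fetchone()  # first read pins the snapshot

w.execute("INSERT INTO t VALUES (2)")
w.commit()                                      # lands after the reader's snapshot

stale = r.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # still 1
r.execute("COMMIT")                             # end the read transaction
fresh = r.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # now 2
```

Without that implicit (or explicit) read transaction, a query could observe a half-applied change, which is why SQLite wraps reads in transactions as well.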