Hello there.

I need some insight into how SQLite's caching works. I have a fairly large
database (5 GB) sitting on a production server whose I/O is severely taxed.
This makes my SQLite database perform very poorly: most of the time the
application just sits there using about 10% of a CPU, where it would use
100% on test systems with idle I/O. Effectively, all the application does is
perform lookups as fast as it can.

To counteract this I increased the page size to 8192 (a Unix server with
advfs, which has 8K block sizes) and increased the MAX_PAGES value in
sqliteInt.h to 512000. This worked: the application starts at low memory
usage and gradually gains more memory; as it does, it uses more CPU, until
it finally reaches 100% CPU and 5 GB of RAM.
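In case the details matter, the runtime equivalent of that setup (using
PRAGMAs instead of the compile-time MAX_PAGES edit) is roughly the sketch
below; the database name is a placeholder and error handling is minimal:

    /* Rough sketch: cache sizing via PRAGMAs at runtime.
     * "lookup.db" is a placeholder name. */
    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db;
        if (sqlite3_open("lookup.db", &db) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }

        /* page_size only takes effect on a new database (or after VACUUM) */
        sqlite3_exec(db, "PRAGMA page_size = 8192;", 0, 0, 0);

        /* cache_size is counted in pages: 512000 * 8K is roughly 4 GB */
        sqlite3_exec(db, "PRAGMA cache_size = 512000;", 0, 0, 0);

        /* ... lookups happen here ... */

        sqlite3_close(db);
        return 0;
    }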

Every now and then the lookup table is updated. As soon as the application
does this, however, performance goes back to a crawl and slowly builds up
again as described in the previous paragraph, even though memory usage stays
at 5 GB. All I can think of is that the update invalidates the cache. The
update is not very big: roughly 200,000 rows in a table of about 45 million
rows.

What exactly is happening here?

Regards.

