Nope, you're not missing anything. That's how it works.

It acquires the exclusive lock when it must spill its dirty page cache to disk!

What happens is that the journal file holds the original pages, while the new 
writes (inserts) go to the database file itself. The writing thread/connection 
takes out an exclusive lock, copies the original pages into the journal, and 
then writes the modified pages into the database file. 
Upon commit the journal file is deleted; a rollback copies the pages from the 
journal back into the original database file.
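
For illustration, here is a minimal sketch of that sequence using the
plain C API (error checking omitted, and the file/table names are made
up for the example):

#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    sqlite3_open("test.db", &db);
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x)", 0, 0, 0);

    /* A deferred BEGIN takes no lock; the first INSERT acquires a
       RESERVED lock, which still lets other connections read. */
    sqlite3_exec(db, "BEGIN", 0, 0, 0);
    sqlite3_exec(db, "INSERT INTO t VALUES (1)", 0, 0, 0);

    /* Original page images go to test.db-journal.  When the page
       cache spills, or at COMMIT, the connection escalates to an
       EXCLUSIVE lock and writes the modified pages into test.db;
       COMMIT then deletes the journal. */
    sqlite3_exec(db, "COMMIT", 0, 0, 0);

    sqlite3_close(db);
    return 0;
}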


You are probably going to have to commit more frequently to allow readers 
access.
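
Something along these lines, for example (just a sketch; the table
name, batch size, and omitted error handling are placeholders):

#include <sqlite3.h>

/* Commit every `batch` rows so the exclusive lock is released
   periodically and readers get a window between chunks. */
static void bulk_insert(sqlite3 *db, int total, int batch) {
    sqlite3_exec(db, "BEGIN", 0, 0, 0);
    for (int i = 0; i < total; i++) {
        char sql[64];
        sqlite3_snprintf(sizeof sql, sql,
                         "INSERT INTO t VALUES (%d)", i);
        sqlite3_exec(db, sql, 0, 0, 0);
        if ((i + 1) % batch == 0) {
            sqlite3_exec(db, "COMMIT", 0, 0, 0);  /* drop locks */
            sqlite3_exec(db, "BEGIN", 0, 0, 0);   /* next chunk */
        }
    }
    sqlite3_exec(db, "COMMIT", 0, 0, 0);
}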

SQLite does not have any kind of multi-versioning of database pages, so you 
cannot have one connection reading while another writes to the same database. 
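
Since your readers' infinitely waiting busy handlers are part of the
pain, a bounded busy timeout on the reader connections at least keeps
them from hanging forever while the writer holds the exclusive lock.
A sketch (the helper name is made up):

#include <sqlite3.h>

sqlite3 *open_reader(const char *path) {
    sqlite3 *db;
    if (sqlite3_open(path, &db) != SQLITE_OK)
        return 0;
    /* Retry for up to 5 seconds, then SELECTs return SQLITE_BUSY
       instead of blocking indefinitely. */
    sqlite3_busy_timeout(db, 5000);
    return db;
}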

HTH
Ken



--- On Tue, 1/27/09, Ian Frosst <frossto...@gmail.com> wrote:

> From: Ian Frosst <frossto...@gmail.com>
> Subject: [sqlite]  Transaction behavior with large numbers of inserts
> To: sqlite-users@sqlite.org
> Date: Tuesday, January 27, 2009, 6:49 PM
> I have an application which, on a single thread, writes data to a
> database.  This writing can result in millions of writes to the database
> in a single transaction (started with BEGIN TRANSACTION).  I also have
> other threads and processes which read the same database, but never
> write to it, and never start transactions (they just issue SELECT
> commands against it).  In the writing application, I have one thread
> which originally opens the database, but then hands it off to the
> writer thread, and never touches it again.
> 
> The issue that I am seeing is that when the transaction is started,
> everything behaves as expected; the writer happily calls inserts within
> his connection, and they go to wherever the isolated modification data
> goes, which I presume is the journal file.  Initially, other processes
> and threads can freely read the database, and all is good.  Fast
> forward a bit, to when a large number of inserts have occurred.  It
> seems that when the page cache gets filled up, SQLite tries to find
> some free pages, which causes an exclusive lock to be obtained within
> the context of the transaction that is not relinquished until the
> completion of the transaction.  This causes some pain, because my
> readers that were happily able to read from the original database are
> now stopped up until the potentially long-running transaction completes
> (they either get "database is locked" errors, or in my case, their
> infinitely waiting busy handlers do just that).
> 
> My question is: am I assessing the situation correctly, and if so, is
> there anything I can do to avoid this rather nasty situation?  I would
> rather not take the periodic-commit approach, and increasing the page
> cache size would potentially cause the use of more memory than I can
> spare.  Disk usage, however, is not a problem; I can use as much as is
> necessary.
> 
> Thanks very much.