Hi,

Do your reader threads access the same tables as your writer?

Maybe (I'm not 100% sure it works) you could try this: keep a single
database file, set PRAGMA read_uncommitted=1 on the reader connections
(this is not a good idea if readers and writers use the same tables; it
depends on what degree of reliability you need for your data), and
enable the shared cache with sqlite3_enable_shared_cache(1).
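Here is a minimal sketch in C of what I mean. It assumes everything runs
in one process (the shared cache does not cross process boundaries), the
file name "app.sqlite" is just a placeholder, and note that
read_uncommitted only has an effect between connections that share a
cache:

    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *writer = 0, *reader = 0;

        /* Shared cache must be enabled before any connection is opened,
           and it only applies to connections within this one process. */
        sqlite3_enable_shared_cache(1);

        if (sqlite3_open("app.sqlite", &writer) != SQLITE_OK) return 1;
        if (sqlite3_open("app.sqlite", &reader) != SQLITE_OK) return 1;

        /* The reader gives up the usual shared-cache read locks; it may
           then see pages the writer has modified but not yet committed. */
        sqlite3_exec(reader, "PRAGMA read_uncommitted = 1;", 0, 0, 0);

        /* ... writer does BEGIN / INSERTs, reader issues SELECTs ... */

        sqlite3_close(reader);
        sqlite3_close(writer);
        return 0;
    }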

Maybe it won't work because it's in the same file. Then another idea
would be: create two database files, for_writer.sqlite and
for_reader.sqlite. If the writer and/or reader threads need to access
the other database (for_reader resp. for_writer), they can do so
through an ATTACH command, e.g. ATTACH "for_reader.sqlite" AS
forreader.

These are just suggestions; think them over and perhaps they will lead
you to a solution!

regards,

Vincent

> I have an application which on a single thread writes data to a
> database.  This writing can result in millions of writes to the
> database in a single transaction (started with BEGIN TRANSACTION.)  I
> also have other threads and processes which read the same database,
> but never write to it, and never start transactions (they just issue
> select commands against it.)  In the writing application, I have one
> thread which originally opens the database, but then hands it off to
> the writer thread, and never touches it again.
>
> The issue that I am seeing is that when the transaction is started,
> everything behaves as expected; the writer happily calls inserts
> within his connection, and they go to wherever the isolated
> modification data goes, which I presume is the journal file.
> Initially, other processes and threads can freely read the database,
> and all is good.  Fast forward a bit, to when a large number of
> inserts have occurred.  It seems that when the page cache gets filled
> up, SQLite tries to find some free pages, which causes an exclusive
> lock to be obtained within the context of the transaction that is not
> relinquished until the completion of the transaction.  This causes
> some pain, because my readers that were happily able to read from the
> original database are now stopped up until the potentially long
> running transaction completes (they either get database is locked
> errors, or in my case, their infinitely waiting busy handlers do just
> that.)
>
> My question is: am I assessing the situation correctly, and if so, is
> there anything I can do to avoid this rather nasty situation?  I
> would rather not take the periodic-commit approach, and increasing
> the page cache size would potentially cause the use of more memory
> than I can spare.  Disk usage, however, is not a problem; I can use
> as much as is necessary.
>
> Thanks very much.



_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
