On Mon, Nov 28, 2011 at 3:26 PM, Liam Kenny <l...@druidsoftware.com> wrote:

>
> Hi,
>
> When working with queries that produce very large result sets, I would
> like to be able to return these results in manageable batches to the
> requester.
> My first thought was to hold a handle to the result set and return the
> responses in batches - but that doesn't seem too good, since it holds a
> read lock and interferes with writers when it goes on too long.
> Shared-cache mode with the read_uncommitted isolation option also looked
> promising and seems to work well, but the warning that results may be
> "inconsistent" is a bit of a worry.
>

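[Editor's note: on the batching itself, one common pattern - a sketch only, not something proposed in this thread; the table and column names are made up - is keyset pagination, which re-issues a bounded query per batch so no read transaction stays open between batches:]

```python
import sqlite3

# Hypothetical demo schema (not from this thread): fetch a large result
# set in batches of 1000 by keying each batch off the last row's id,
# instead of holding one cursor (and its read lock) open the whole time.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events(id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events(payload) VALUES (?)",
                 [("e%d" % i,) for i in range(2500)])
conn.commit()

def batches(conn, size=1000):
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, payload FROM events WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, size)).fetchall()   # read finishes before the next batch
        if not rows:
            break
        last_id = rows[-1][0]
        yield rows

sizes = [len(b) for b in batches(conn)]
print(sizes)  # -> [1000, 1000, 500]
```

The trade-off is that each batch sees the database as of its own query, so rows inserted or deleted between batches can appear or vanish - which is exactly the consistency question raised below.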
You would do better to use the Write-Ahead Log (WAL) mode.  See
http://www.sqlite.org/wal.html for additional information.  WAL is a more
recent innovation than read_uncommitted, and it works better since you
never have to worry about inconsistency.


>
> What level of inconsistency could we be talking about here ?
>
> - Rows that have already been deleted still appear in a result set ?
> - Some columns that have been updated appear in the result with other
> columns from the older version ?
> - Reading from /dev/random ?
>
> Any advice ?
>
> Many thanks for any thoughts/suggestions.
>
> All the best,
> Liam.
>
>
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
D. Richard Hipp
d...@sqlite.org