Hi,

When working with queries that produce very large result sets, I would like to 
return the results to the requester in manageable batches.
My first thought was to hold a handle to the result set open and return the 
rows in batches, but that doesn't seem like a good idea: it holds a read lock 
for the duration and interferes with writers when it goes on too long.
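To make that concrete, here is roughly what I had in mind - a minimal C
sketch, assuming an already-prepared statement; the column handling is
just a placeholder:

#include <stdio.h>
#include <sqlite3.h>

/* Step an already-prepared statement at most batch_size rows further.
 * Returns SQLITE_ROW if more rows may remain, SQLITE_DONE when the
 * query is exhausted, or an error code. */
static int fetch_batch(sqlite3_stmt *stmt, int batch_size)
{
    int rc = SQLITE_DONE;
    for (int i = 0; i < batch_size; i++) {
        rc = sqlite3_step(stmt);
        if (rc != SQLITE_ROW)
            break;
        /* ...hand the column values back to the requester here... */
        printf("id=%lld\n", (long long)sqlite3_column_int64(stmt, 0));
    }
    return rc;
}

Between calls the statement stays active, which is exactly what keeps the
read transaction - and its lock - open for as long as the batching goes on.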
Shared-cache mode with the read-uncommitted isolation level looked like an 
alternative, and it seems to work well, but the documentation's warning that 
results may be "inconsistent" is a bit of a worry.

What level of inconsistency could we be talking about here?

- Rows that have already been deleted still appearing in the result set?
- Updated values for some columns appearing in a row alongside other columns 
from the older version?
- Something as bad as reading from /dev/random?

Any advice?

Many thanks for any thoughts/suggestions.

All the best,
Liam.

