On 20 Sep 2009, at 8:06pm, Mohit Sindhwani wrote:

> Kees Nuyt wrote:
>> On Sun, 20 Sep 2009 12:02:17 +0800, Mohit Sindhwani
>> <m...@onghu.com> wrote:
>>
>>> Hi!  An embedded SQL-based database that we used earlier had a  
>>> concept
>>> of packed fetches - this would mean that we could create a certain
>>> buffer for results, prepare a query, execute it and read back the
>>> results in "groups" of 10 or 20 or 1000 (or "n") results per  
>>> call.. this
>>> was significantly faster than reading the results one at a time.

You can fetch the entire result set in one call with sqlite3_get_table(),  
or have sqlite3_exec() invoke a callback for each row.  Or you can  
write your own routine which uses _prepare, _step, etc. but parcels  
up the results of _step in batches.  There is no low-level difference  
in what happens, since all of these solutions come down to calls to  
_step, and if your own code is reasonably efficient, none of these  
solutions will be any faster than any other.

>> Have a look at:
>> http://www.sqlite.org/cvstrac/wiki?p=ScrollingCursor
>>
>> It is not exactly what you are looking for, but it may apply
>> to your use case.
>
> I'm just trying to see if there's a way to move more data per request.
> But it seems not (for now).

By all means write your own routine that calls _step lots of times.   
But it won't be faster or more efficient.  Other database engines have  
a large overhead for each call into their libraries, so fewer calls  
result in improvements in efficiency and speed.  That overhead can come  
from having to do authentication (username/password) or from contacting a  
server running on another computer for each function call.  SQLite  
does not have to do these things and has almost no overhead for each  
call to _step, so there's no opportunity for savings.

Simon.
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users