
Eberhard, Markus (external) wrote:
> In other
> words: minimize the number of accesses to database file (but I'm not
> sure if this is possible). 

You could use the SQLite backup API to read the entire file into a
:memory: database (quickly and sequentially) and then issue the query
against the memory database.  That obviously only makes sense if the
select query returns a substantial portion of the database.

You can also use the operating system file API to read the entire file
(discarding what you read), which warms the operating system file
cache, meaning fewer disk accesses during the actual query.  (This
assumes you aren't doing something like using XP with its default 10MB
file cache.)

Finally, you could access the blobs later, on demand.  Rather than
having the select return the blobs, fetch just the rowids.  Later on
you can use the incremental blob I/O API to read the blob contents.
This approach is useful if you don't need all the blobs at once (eg you
have a user interface with a scroll bar) and they are non-trivial in
size (ie at least several database pages).
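The two-pass pattern can be sketched as below; the table name "docs" and
blob column "body" are made up for illustration.  In the C API the
second step would be sqlite3_blob_open()/sqlite3_blob_read() for truly
incremental reads; here each blob is simply fetched whole by rowid when
it is first needed:

```python
import sqlite3

def fetch_rowids(conn):
    # First pass: rowids only -- cheap, no blob pages are touched.
    return [row[0] for row in conn.execute("SELECT rowid FROM docs")]

def fetch_blob(conn, rowid):
    # On demand: pull one blob when its row actually comes into view.
    (data,) = conn.execute(
        "SELECT body FROM docs WHERE rowid = ?", (rowid,)
    ).fetchone()
    return data
```

A scroll-bar UI would call fetch_rowids() once up front, then
fetch_blob() only for the rows currently visible.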

Roger