Asif Lodhi uttered:

> Hi Kees,
>
> Thanks for replying.
>
> On 6/17/07, Kees Nuyt <[EMAIL PROTECTED]> wrote:
>>> ... thankful if you experts would give me an "accurate" and fair
>>> picture of the crash-recovery aspects of SQLite - without any hype.
>>
>> I'm not sure if you would qualify this as hype, but SQLite is
>> used in many end-user products, ranging from operating systems ..

> Basically, I intend to use SQLite's data capacity as well - I mean
> 2^41 bytes - for reasonably sized databases. Well, not as much as 2^41,
> but somewhere around 2^32 to 2^36 bytes. I would like to know if the
> "crash-recovery" feature will still work and the high performance
> mentioned will still hold even with this kind of data volume. And
> yes, I am talking about highly normalized database schemas with the
> number of tables exceeding 80. Please reply assuming I tend to come up
> with optimized db & query designs - keeping in view the general rules
> for database/query optimization.


SQLite is not optimised for large datasets. Crash recovery will work, as advertised, in the general case including large datasets, but the memory footprint of the library increases as the size of the database grows, because during a write transaction SQLite tracks which pages have been journalled using in-memory structures that scale with the total page count.

Consider using a larger page size than the default 1024 bytes to limit the number of pages SQLite must track; a sketch follows.
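For example (a minimal sketch; the 8192-byte figure is just an illustrative choice, any power of two up to the build's maximum is legal, and the pragma only takes effect on a freshly created database or after a VACUUM):

    -- Must run before the first table is created; on an existing
    -- database the new size only takes effect after a VACUUM.
    PRAGMA page_size = 8192;   -- instead of the 1024-byte default
    CREATE TABLE t(id INTEGER PRIMARY KEY, payload BLOB);
    PRAGMA page_size;          -- should now report 8192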

Other than that, performance should degrade predictably as the dataset grows, given that SQLite uses the same B-tree based algorithms used by most database engines: a lookup touches O(log N) pages, so each doubling of the data adds roughly one extra page read per lookup.
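As a rough back-of-the-envelope illustration (the 8192-byte page size and the fanout of ~100 keys per interior page are made-up round numbers, not measurements):

    2^36 bytes / 8192 bytes per page = 2^23 = ~8.4 million pages
    tree depth = log_100(8.4 million) = ~3.5 levels

so even at the top of your stated range, a key lookup should only have to descend four levels or so of the tree.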



> --
> Thanks again and best regards,
>
> Asif

--
    /"\
    \ /    ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
     X                           - AGAINST MS ATTACHMENTS
    / \
