On Fri 18/06/2004 at 03:23, [EMAIL PROTECTED] wrote:
> I'm using sqlite in a heavily loaded system consisting of database files 
> that are created over a two-day period. The average database file is about 
> 800 meg. After extensive testing early in the piece I concluded that only 
> sqlite was suitable. All alternative technologies I tested were far too 
> slow for my (now aging) Sun hardware. I tested sqlite, postgres, mysql, 
> and sapdb. Anecdotally, I'd have to say that SQLite is ideal for targeted 
> mid-size databases.

Same here. I work on backup solutions, and use sqlite for indexing whole
filesystems. The typical db is between 200 and 700 MB in size and comprises
around 1.5 million rows, and while polling the FS for indexing purposes,
I sustain 150-300 FS stat() + read + insert/update operations per second(!)
on average hardware using Perl DBI. Read performance is also incredible,
even with LIKE clauses! Neither postgres nor mysql was able to sustain
such rates; both always ended up with corrupted memory or db files, and even
hard lockups (though that may be my mistake, as I still don't quite
understand postgres memory management). SQLite has never failed me, not
even once.
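
For illustration, here is a minimal sketch of that kind of indexing loop,
written with Python's built-in sqlite3 module rather than the Perl DBI
mentioned above. The table name and columns (files: path, size, mtime) are
assumptions for the example, not the actual schema:

    import os
    import sqlite3

    def index_tree(db_path, root):
        # Open (or create) the index database and make sure the table
        # exists. The schema here is hypothetical: one row per file.
        con = sqlite3.connect(db_path)
        con.execute(
            "CREATE TABLE IF NOT EXISTS files ("
            " path TEXT PRIMARY KEY,"
            " size INTEGER,"
            " mtime INTEGER)"
        )
        # One transaction around the whole walk; committing per row
        # would be far slower than the rates quoted above.
        with con:
            for dirpath, _dirs, names in os.walk(root):
                for name in names:
                    full = os.path.join(dirpath, name)
                    try:
                        st = os.stat(full)   # the stat() step
                    except OSError:
                        continue             # file vanished mid-walk
                    # The insert/update step: new paths are inserted,
                    # already-indexed paths are replaced.
                    con.execute(
                        "INSERT OR REPLACE INTO files (path, size, mtime)"
                        " VALUES (?, ?, ?)",
                        (full, st.st_size, int(st.st_mtime)),
                    )
        con.close()

    if __name__ == "__main__":
        index_tree("index.db", "/home")

Lookups afterwards can then use a LIKE clause on the path column, e.g.
SELECT path FROM files WHERE path LIKE '/home/%.mp3'.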

David Morel

-- 
***********************************************
[EMAIL PROTECTED]
OpenPGP public key: http://www.amakuru.net/dmorel.asc
