> A few hundred blocks of raw data? Block size approx. 300K bytes?
> Database created and dropped by the same process? 500 blocks is
> approx. 150M bytes; why not keep it in a hash table in memory? If you
> keep it in a database or the file system, it's going to be washed
> through your real memory and pagefile-aka-swap-partition anyway, so
> just cut out the middlemen :-)

You're right, but who said I have only one DB at a time? :-)
In fact, I have several DBs, and I do not know in advance how large
each will be; perhaps 500MB. I also need RAM for other things, so the
simplest approach is to use "normal" on-disk DBs. Using memory DBs and
dumping them to disk afterwards would not be smooth.
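
For completeness: recent SQLite versions (3.6.11 and later) do provide
an online backup API that can copy a :memory: database to an on-disk
file. It is still an extra copy step, so it does not change the
conclusion above, but a minimal sketch, with "save_to_disk" as a
hypothetical helper name, would look like this:

    #include "sqlite3.h"

    /* Copy an in-memory database to an on-disk file with the online
    ** backup API (sqlite3_backup_*, SQLite 3.6.11+). */
    static int save_to_disk(sqlite3 *mem_db, const char *filename){
      sqlite3 *file_db;
      sqlite3_backup *b;
      int rc = sqlite3_open(filename, &file_db);
      if( rc!=SQLITE_OK ){ sqlite3_close(file_db); return rc; }
      b = sqlite3_backup_init(file_db, "main", mem_db, "main");
      if( b ){
        sqlite3_backup_step(b, -1);   /* -1 = copy all pages at once */
        sqlite3_backup_finish(b);
      }
      rc = sqlite3_errcode(file_db);
      sqlite3_close(file_db);
      return rc;
    }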

But we are not answering my initial question!

Can I expect some gain from:

- recompiling SQLite (which options/DEFINEs would help?)
- using custom memory allocators (I am on Win32, in a multi-threaded
  environment, and yes, "it's bad")
- using compression

(Rough sketches of each idea follow.)
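
On recompiling: a few compile-time defines are commonly cited as
worthwhile, though the actual gain depends entirely on your workload,
so treat this as a sketch, not a benchmark result. Assuming you build
from the amalgamation with MSVC, something like:

    cl /O2 /DNDEBUG ^
       /DSQLITE_THREADSAFE=1 ^
       /DSQLITE_DEFAULT_PAGE_SIZE=4096 ^
       /DSQLITE_DEFAULT_CACHE_SIZE=4000 ^
       /DSQLITE_TEMP_STORE=3 ^
       /c sqlite3.c

NDEBUG removes the assert() overhead. SQLITE_THREADSAFE=1 keeps the
library safe for your multi-threaded use (0 is faster but drops all
locking, so it is out for you). SQLITE_DEFAULT_PAGE_SIZE and
SQLITE_DEFAULT_CACHE_SIZE trade RAM for I/O and can also be set at
runtime with PRAGMA page_size / PRAGMA cache_size. SQLITE_TEMP_STORE=3
forces temporary tables and indices into memory.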
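On custom allocators: if your SQLite is recent enough to have
sqlite3_config() (3.6.0 and later), you can plug in your own allocator
without patching the source, via SQLITE_CONFIG_MALLOC. A minimal
sketch, where the my_* wrappers are hypothetical and here just sit on
top of malloc/free with a size header so that xSize can work:

    #include <stdlib.h>
    #include "sqlite3.h"

    /* Hypothetical wrappers: an 8-byte header records the allocation
    ** size so that xSize can report it back to SQLite. */
    static void *my_malloc(int n){
      char *p = malloc(n + 8);
      if( p==0 ) return 0;
      *(int*)p = n;
      return p + 8;
    }
    static void my_free(void *p){
      if( p ) free((char*)p - 8);
    }
    static void *my_realloc(void *p, int n){
      char *q = realloc(p ? (char*)p - 8 : 0, n + 8);
      if( q==0 ) return 0;
      *(int*)q = n;
      return q + 8;
    }
    static int my_size(void *p){ return p ? *(int*)((char*)p - 8) : 0; }
    static int my_roundup(int n){ return (n + 7) & ~7; }
    static int my_init(void *pAppData){ return SQLITE_OK; }
    static void my_shutdown(void *pAppData){}

    int install_allocator(void){
      static const sqlite3_mem_methods mem = {
        my_malloc, my_free, my_realloc,
        my_size, my_roundup, my_init, my_shutdown, 0
      };
      /* Must be called before sqlite3_initialize() or any other
      ** SQLite API; afterwards all sqlite3_malloc() traffic goes
      ** through the wrappers above. */
      return sqlite3_config(SQLITE_CONFIG_MALLOC, &mem);
    }

On older versions there is no such hook, and the only route is
recompiling with your allocator hard-wired into the source, which is
considerably messier.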
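On compression: stock SQLite has no built-in compression (there are
commercial add-ons), so the usual approach is to compress each blob
yourself before binding it. A sketch with zlib; the "blocks" table and
its columns are made up for the example:

    #include <stdlib.h>
    #include <zlib.h>
    #include "sqlite3.h"

    /* Compress a raw block with zlib and insert it as a blob. The
    ** uncompressed size is stored alongside it so the reader knows
    ** how large a buffer to pass to uncompress(). */
    static int insert_compressed(sqlite3 *db, sqlite3_int64 id,
                                 const unsigned char *data, uLong n){
      uLongf zn = compressBound(n);
      unsigned char *z = malloc(zn);
      sqlite3_stmt *stmt;
      int rc;
      if( z==0 ) return SQLITE_NOMEM;
      if( compress2(z, &zn, data, n, Z_BEST_SPEED)!=Z_OK ){
        free(z);
        return SQLITE_ERROR;
      }
      rc = sqlite3_prepare_v2(db,
             "INSERT INTO blocks(id, rawsize, data) VALUES(?,?,?)",
             -1, &stmt, 0);
      if( rc==SQLITE_OK ){
        sqlite3_bind_int64(stmt, 1, id);
        sqlite3_bind_int64(stmt, 2, (sqlite3_int64)n);
        sqlite3_bind_blob(stmt, 3, z, (int)zn, SQLITE_TRANSIENT);
        rc = sqlite3_step(stmt)==SQLITE_DONE ? SQLITE_OK : SQLITE_ERROR;
        sqlite3_finalize(stmt);
      }
      free(z);
      return rc;
    }

Whether this is a win depends on how compressible the blocks are: for
300K blocks of already-dense data you may just burn CPU, while for
redundant data you cut both file size and I/O. Z_BEST_SPEED keeps the
CPU cost low; measure before committing to it.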

Regards,

Pierre Chatelier
