John Machin wrote:
> On 16/03/2009 5:48 PM, Pierre Chatelier wrote:
>>> A few hundred blocks of raw data? Blocksize approx 300K bytes? Database
>>> created and dropped by the same process? 500 blocks is approx 150M
>>> bytes; why not keep it in a hash table in memory? If you keep it in a
>>> database or the file system, it's going to be washed through your real
>>> memory and pagefile-aka-swap-partition anyway, so just cut out the
>>> middlemen :-)
>> You're right, but who said I have only 1 DB at a time :-) ?
> 
> You didn't say anything at all about how many DBs you have, so it wasn't 
> you.
> 
> 
>> In fact, I have several DBs and I do not know in advance what size it  
>> will represent.
> 
> What is "it"?
> 
>> Perhaps 500MB. And I need RAM for other stuff, so the  
>> simplest thing is to use "normal" DBs.
> 
> You've lost me now. You need RAM for your working set of whatever you 
> are accessing at the time; it doesn't matter whether it came from a file 
> or a DB (which is just a structured file, probably not optimised for 
> 300KB BLOBs) or you built it in memory, and what's not being used at the 
> time will be in your filesystem or in your swap partition.
> 
> Please re-read what I wrote, to which your response was "You're right", 
> then consider that the total amount of data is not very relevant; what 
> matters is the size of your working set, mostly irrespective of its source.
> 
> However, the overhead of packing/unpacking 300KB BLOBs into/out of a 
> database can't be overlooked.
> 
> I would suggest giving serious thought to a variant of an earlier 
> poster's suggestion: keep each BLOB in its own file in the file 
> system, but mmap the files.
> 
> 
>> Using memory DBs and swapping  
>> them afterwards would not be smooth.
>>
>> But we are not answering my initial question !
>>
>> Can I expect some gain in
>> -recompiling SQLite (which options/DEFINEs would help?)
>> -using custom memory allocators (I am on Win32, in a multi-threaded  
>> environment, and yes, "it's bad")
>> -using compression
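
On the custom-allocator question: I haven't tried it myself, but the hook 
SQLite exposes for this is sqlite3_config(SQLITE_CONFIG_MALLOC, ...), which 
has to be called before any other SQLite API. A sketch, where the my_* 
functions are placeholders that simply forward to the CRT:

  #include <sqlite3.h>
  #include <malloc.h>   /* _msize() on the MSVC CRT */
  #include <stdlib.h>

  static void *my_malloc(int n)           { return malloc((size_t)n); }
  static void  my_free(void *p)           { free(p); }
  static void *my_realloc(void *p, int n) { return realloc(p, (size_t)n); }
  static int   my_size(void *p)           { return (int)_msize(p); }
  static int   my_roundup(int n)          { return (n + 7) & ~7; }
  static int   my_init(void *unused)      { (void)unused; return SQLITE_OK; }
  static void  my_shutdown(void *unused)  { (void)unused; }

  int install_my_allocator(void)
  {
      static sqlite3_mem_methods mem = {
          my_malloc, my_free, my_realloc, my_size,
          my_roundup, my_init, my_shutdown, NULL
      };
      /* Must run before sqlite3_initialize() / the first sqlite3_open(). */
      return sqlite3_config(SQLITE_CONFIG_MALLOC, &mem);
  }
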
> 
> Compression? You tell us. What percentage compression do you get with 
> these 300KB BLOBs with (say) bz2? How long does it take to read in a 
> bz2-compressed BLOB and uncompress it compared to reading in an 
> uncompressed BLOB?
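
If it helps, a quick way to measure exactly that with libbzip2; blob and 
blob_len are assumed to hold one of the 300KB blocks already in memory:

  #include <bzlib.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  /* Time one compress/decompress round trip and report the ratio. */
  void measure_bz2(char *blob, unsigned int blob_len)
  {
      unsigned int comp_len = blob_len + blob_len / 100 + 600; /* bzlib worst case */
      unsigned int back_len = blob_len;
      char *comp = malloc(comp_len);
      char *back = malloc(blob_len);
      clock_t t0, t1, t2;
      int rc;

      t0 = clock();
      rc = BZ2_bzBuffToBuffCompress(comp, &comp_len, blob, blob_len,
                                    9 /* blockSize100k */, 0, 0);
      t1 = clock();
      if (rc == BZ_OK)
          rc = BZ2_bzBuffToBuffDecompress(back, &back_len, comp, comp_len, 0, 0);
      t2 = clock();

      if (rc == BZ_OK)
          printf("%u -> %u bytes (%.1f%%), compress %.0f ms, decompress %.0f ms\n",
                 blob_len, comp_len, 100.0 * comp_len / blob_len,
                 1000.0 * (t1 - t0) / CLOCKS_PER_SEC,
                 1000.0 * (t2 - t1) / CLOCKS_PER_SEC);
      free(comp);
      free(back);
  }
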
> 
> Cheers,
> John
> 

I'm doing nearly the exact same thing, except my BLOBs are about 8k, and 
they compressed to about 2k. It turned out in my testing to be 
significantly faster to skip the compression step (disk space is cheap, 
right?) and write the data directly to the database.
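
For reference, the "write it directly" path is just an ordinary 
bound-parameter insert; a minimal sketch, assuming a table created as 
CREATE TABLE blobs(id INTEGER PRIMARY KEY, data BLOB) (the table and column 
names are mine):

  #include <sqlite3.h>

  /* Store one raw block straight into a BLOB column. */
  int store_blob(sqlite3 *db, sqlite3_int64 id, const void *data, int nbytes)
  {
      sqlite3_stmt *stmt = NULL;
      int rc = sqlite3_prepare_v2(db,
               "INSERT INTO blobs(id, data) VALUES(?1, ?2)", -1, &stmt, NULL);
      if (rc != SQLITE_OK) return rc;

      sqlite3_bind_int64(stmt, 1, id);
      /* SQLITE_STATIC: the buffer stays valid until sqlite3_step() returns */
      sqlite3_bind_blob(stmt, 2, data, nbytes, SQLITE_STATIC);

      rc = sqlite3_step(stmt);
      sqlite3_finalize(stmt);
      return (rc == SQLITE_DONE) ? SQLITE_OK : rc;
  }

Batching many of these inside a single explicit transaction is usually where 
the big wins are.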

As to your other questions, I have no answers, and indeed would be 
interested to hear them if you come up with some!

Mark
