Hi there,

I'm wondering how larger BLOBs in a database affect the performance of accessing 
the non-blob data. We've got a database with potentially a few million 
records in the main tables (just strings and ints), but joined to that data 
set we want to store up to thousands (maybe 75000 max) of blobs ranging in size 
from 75 kB to 4 MB. If it comes down to it, we can use the filesystem for these, 
but we'd prefer to store them in the database if it doesn't bog things down 
(cleaner to manage, both implementation-wise and user-experience-wise).

Now, I'm guessing that storing all those blobs will slow down access to the main 
tables (assuming records are added gradually, most without associated blobs, 
some with), because records would be spread out over many more pages (more 
seeking / disk access). Is that correct? Would performance likely be better if 
the blob table were stored in a separate database file? Also, is it possible to 
adjust the page size on a per-database basis? (I'm thinking larger pages for 
the blob database would be better, right?) Any other suggestions for doing 
this efficiently?
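In case it helps clarify what I mean by "separate database file", here's a rough 
sketch of what I had in mind, using Python's sqlite3 with ATTACH. The file names, 
table names, and the 32 KiB page size are just made up for illustration:

```python
import os
import sqlite3
import tempfile

# Sketch: keep the large blobs in a separate attached database so the
# main tables stay densely packed. Names and page size are assumptions.
tmp = tempfile.mkdtemp()
main_path = os.path.join(tmp, "main.db")
blob_path = os.path.join(tmp, "blobs.db")

# The blob database gets its own page size; PRAGMA page_size only takes
# effect on a brand-new (or vacuumed) database, so set it before writing.
blobdb = sqlite3.connect(blob_path)
blobdb.execute("PRAGMA page_size = 32768")
blobdb.execute("CREATE TABLE blobs (record_id INTEGER PRIMARY KEY, data BLOB)")
blobdb.commit()
blobdb.close()

# The main database keeps only the small string/int columns.
main = sqlite3.connect(main_path)
main.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT, value INT)")

# ATTACH lets one connection join across both files.
main.execute("ATTACH DATABASE ? AS blobdb", (blob_path,))
main.execute("INSERT INTO records (id, name, value) VALUES (1, 'example', 42)")
main.execute("INSERT INTO blobdb.blobs (record_id, data) VALUES (1, ?)",
             (b"\x00" * 1024,))
main.commit()

row = main.execute(
    "SELECT r.name, length(b.data) "
    "FROM records r JOIN blobdb.blobs b ON b.record_id = r.id"
).fetchone()
print(row)  # ('example', 1024)
main.close()
```

Is that the right shape, or is there a better way to split things up?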

Sorry if this is answered elsewhere; when searching, most of the BLOB 
performance posts I've seen are about storing / retrieving the blobs. We're 
not so worried about the time to store / retrieve the blobs as about the time 
to access the main tables.

Thanks!
Paul
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
