I have a little experience with a somewhat similar setup: a typical "real"
file is 200MB...2GB, and I am making a "snapshot" of the data structure (a
few hundred KB) into an attached in-memory database. 

I've seen that the time required to create that snapshot depends largely on
the size of the entire table, even if only selected columns go into the
snapshot. 

I.e. 

   ATTACH ':memory:' AS mem;
   CREATE TABLE mem.Snapshot (col1, col2);
   INSERT INTO mem.Snapshot SELECT col1, col2 FROM Data;

is much slower if 'Data' contains an additional column with large data.
Moving my item metadata (small) into a table separate from the possibly
large blobs helped immensely. 
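
For reference, the split I ended up with looks roughly like this (table and
column names are made up for illustration); the snapshot query then only ever
has to read the small metadata table:

   CREATE TABLE ItemMeta (
       id    INTEGER PRIMARY KEY,
       col1  TEXT,
       col2  TEXT
   );
   CREATE TABLE ItemBlob (
       id      INTEGER PRIMARY KEY REFERENCES ItemMeta(id),
       payload BLOB
   );
   -- the snapshot now touches only ItemMeta pages, never the blob pages
   ATTACH ':memory:' AS mem;
   CREATE TABLE mem.Snapshot AS SELECT col1, col2 FROM ItemMeta;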

Note: I haven't investigated this much, as separating the large data column
into its own table makes sense for other reasons anyway. It could be that the
significant difference, even though it was very consistent to observe with
multiple files, was due more to OS/disk caching than to SQLite itself. 
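
If someone wants to check the caching hypothesis, a quick (and admittedly
crude) test in the sqlite3 shell is to run the same snapshot twice in one
session and compare the timings; if the second run is far faster, the OS page
cache is likely doing most of the work (names as in the example above):

   .timer on
   ATTACH ':memory:' AS mem;
   CREATE TABLE mem.Snapshot (col1, col2);
   INSERT INTO mem.Snapshot SELECT col1, col2 FROM Data;   -- first run: cold cache
   DELETE FROM mem.Snapshot;
   INSERT INTO mem.Snapshot SELECT col1, col2 FROM Data;   -- second run: warm cache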



