On Fri, 29 Mar 2013 15:17:52 -0400
Jeff Archer <jsarc...@nanotronicsimaging.com> wrote:

> I have previously made an apparently bad assumption about this so now I
> would like to go back to the beginning of the problem and ask the most
> basic question first without any preconceived ideas.
> 
> This use case is from an image processing application.  I have a large
> amount of intermediate data (way exceeds physical memory on my 24GB
> machine).  So, I need to store it temporarily on disk until getting to next
> phase of processing.  I am planning to use a large SSD dedicated to holding
> this temporary data.  I do not need any recoverability in case of hardware,
> power or other failure.  Each item to be stored is 9 DWORDs, 4 doubles,
> and 2 variable-sized BLOBs, which are images.
> 
> I could write directly to a file myself.  But I would need to provide some
> minimal indexing, some housekeeping to manage the variable-sized BLOBs,
> and some minimal synchronization so that multiple instances of
> the same application could operate simultaneously on a single set of data.
> 
> So then I thought that SQLite could manage these things nicely for me, so
> that I don't have to write and debug indexing and housekeeping code that
> already exists in SQLite.
> 
> So, the question is: what is the way to get the fastest possible
> performance from SQLite when I am willing to give up all recoverability
> guarantees?  Or is it simply that I should just write directly to a file
> myself?

Piping the data through gzip -6 or xz -2 will minimize the bytes written
to the SSD. If you are working with 5D images, xz (the 7-Zip LZMA fork)
will compress best.

For processing, you do zcat file | processing_application, or
xzcat file | processing_application if you compressed with xz.
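
For example, a sketch of the full round trip, where stage1_generate,
stage2_process, and the /ssd path are placeholders for your own programs
and mount point:

  # write side: compress the intermediate data on its way to the SSD
  stage1_generate | xz -2 > /ssd/intermediate.xz

  # read side: decompress it straight into the next processing phase
  xzcat /ssd/intermediate.xz | stage2_process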
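
If you stay with SQLite instead, the usual way to give up the
recoverability guarantees is through pragmas issued on each connection
when it opens. A minimal sketch, as a starting point rather than
something tuned for your workload:

  PRAGMA journal_mode = OFF;     -- no rollback journal: no crash
                                 -- recovery, and ROLLBACK stops working
  PRAGMA synchronous = OFF;      -- never wait on fsync()
  PRAGMA temp_store = MEMORY;    -- keep temporary tables/indices in RAM
  PRAGMA cache_size = -1000000;  -- ~1 GB page cache (negative = KiB)

Since several instances will share one database, journal_mode = WAL with
synchronous = OFF may be the better trade: with the journal fully OFF,
nothing can repair a half-written transaction if one process dies
mid-write.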

---   ---
Eduardo Morras <emorr...@yahoo.es>