Brannon King wrote:
> The benefits I'm trying to get out of sqlite are the data queries. I
> collect a large, sparse 2D array from hardware. The hardware device is
> giving me a few GB of data at 200MB/s. Future hardware versions
> will be four times that fast and give me terabytes of data. After I
> have the data, I then have to go through and make calculations on
> sub-boxes of that data. (I'll post some more about that in a different
> response.) I was trying to avoid coding my own
> sparse-matrix-file-stream mess, which I would have to do if I didn't
> have a nice DB engine. I think sqlite will work. I think it will be
> fast enough. I'll have some nice RAID controllers on the production
> machines with 48-256MB caches.

Hello Brannon,

    I am simply curious.  This sounds like an amazing engineering
challenge.  If it is not a secret, can you describe what this data
represents and how it will be used? 

    What is the ultimate source of this data? 

    How many days/weeks/eons of it do you plan to accumulate?   How much
raw disk space is that?

    If backups and journaling are not important, then is it safe to
assume that you can always regenerate that data on demand?  Is each
"set" of data identical, or only statistically similar to prior sets?

    Your project sounds like fun though, from what little I've read of
this thread.  Sure beats writing boring financial software ;)
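P.S. For anyone following along, a minimal sketch of the sparse-matrix-in-SQLite idea Brannon described might look like the following. The table layout, column names, and box bounds are my own guesses for illustration, not his actual schema:

```python
import sqlite3

# Store the sparse 2D array as (row, col, value) triples; only the
# nonzero cells are ever written, so storage tracks the data's sparsity.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (row INTEGER, col INTEGER, value REAL)")
# A composite index lets sub-box queries range-scan instead of doing
# a full table scan.
conn.execute("CREATE INDEX idx_rc ON samples (row, col)")

# A few sparse points for demonstration.
conn.executemany(
    "INSERT INTO samples VALUES (?, ?, ?)",
    [(0, 0, 1.5), (10, 20, 2.5), (100, 200, 3.5), (11, 21, 4.0)],
)

# Pull everything inside the sub-box rows 10..99, cols 20..99.
box = conn.execute(
    "SELECT row, col, value FROM samples "
    "WHERE row BETWEEN 10 AND 99 AND col BETWEEN 20 AND 99"
).fetchall()
print(sorted(box))
```

At the data rates he mentions, batching the inserts inside explicit transactions (rather than one implicit transaction per row) would matter a great deal for throughput.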
