You don't seem to need a data manipulation system like SQLite, so much
as a form of high-volume storage. Do you really need elaborate SQL,
journalling, ROLLBACK, and assured disk storage?
Did you consider some form of hashed storage, perhaps linear hashing, to
build a compact and high-performance associative array for your sparsely
keyed data?
Do you really need the overhead of B-trees if you are just storing a
sparse array?
JS
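The hashed-storage idea above can be sketched with an ordinary hash map
keyed by (row, col) pairs — a minimal stand-in for true linear hashing,
shown in Python for brevity (the class and names are illustrative, not
from any message in this thread):

```python
# Sparse 2D array backed by a hash map keyed on (row, col).
# Illustrates the hashed-storage suggestion: O(1) expected lookup,
# no B-tree, no journalling, and unset cells cost nothing.

class SparseArray2D:
    def __init__(self, default=0):
        self._cells = {}          # (row, col) -> value
        self._default = default

    def set(self, row, col, value):
        self._cells[(row, col)] = value

    def get(self, row, col):
        # Unset cells fall back to the default without storing anything.
        return self._cells.get((row, col), self._default)

    def __len__(self):
        # Number of populated cells, not the logical extent.
        return len(self._cells)

a = SparseArray2D()
a.set(1_000_000, 2_000_000, 3.5)   # huge logical extent, tiny storage
print(a.get(1_000_000, 2_000_000))  # 3.5
print(a.get(0, 0))                  # 0 (default for unset cells)
print(len(a))                       # 1
```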
Brannon King wrote:
The benefits I'm trying to get out of sqlite are the data queries. I
collect a large, sparse 2D array from hardware. The hardware device is
giving me a few GB of data at 200MB/s. Future hardware versions
will be four times that fast and give me terabytes of data. After I have
the data, I then have to go through and make calculations on sub-boxes
of that data. (I'll post some more about that in a different response.)
I was trying to avoid coding my own sparse-matrix-file-stream mess that
I would have to do if I didn't have a nice DB engine. I think sqlite
will work. I think it will be fast enough. I'll have some nice RAID
controllers on the production machines with 48-256MB caches.
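The sub-box calculations described above are the kind of query an index
makes cheap. A minimal sketch using Python's stdlib sqlite3 binding (the
table, column, and index names here are made up for illustration; the
original poster's schema is not shown in the thread):

```python
import sqlite3

# Sparse 2D samples stored as (x, y, value) rows; an index on (x, y)
# lets SQLite answer rectangular sub-box queries without a full scan.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE samples (x INTEGER, y INTEGER, v REAL)")
con.executemany("INSERT INTO samples VALUES (?, ?, ?)",
                [(1, 1, 2.0), (2, 3, 4.0), (50, 60, 8.0)])
con.execute("CREATE INDEX idx_xy ON samples (x, y)")

# Aggregate over the sub-box [0,10] x [0,10]; only the first two
# points fall inside it.
(total,) = con.execute(
    "SELECT COALESCE(SUM(v), 0) FROM samples "
    "WHERE x BETWEEN 0 AND 10 AND y BETWEEN 0 AND 10").fetchone()
print(total)  # 6.0
```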
From experimentation, the "UNIQUE INDEX" command is only 20% slower
than the "INDEX" command. That doesn't make sense to me. I would think
the UNIQUE INDEX creation would take longer because it has to do
redundancy checks. And I still feel there should be a way to create a
UNIQUE INDEX without waiting for the redundancy check.
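The redundancy check in question is observable: building a UNIQUE INDEX
over data that already contains a duplicate key fails, whereas a plain
INDEX does not care. A small sqlite3 sketch (names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER, y INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, 1), (1, 1), (2, 2)])   # note the duplicate row

# A plain index allows duplicates, so this succeeds.
con.execute("CREATE INDEX idx_plain ON t (x, y)")

# A UNIQUE index must verify every key is distinct, so this fails.
try:
    con.execute("CREATE UNIQUE INDEX idx_uniq ON t (x, y)")
except sqlite3.IntegrityError as e:
    print("redundancy check fired:", e)
```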
Dennis Cote wrote:
A more general question is why are you trying to use sqlite? If you
need the maximum possible speed you may be better off using in-memory
vectors and maps and highly tuned routines like the C++ STL algorithms
instead of sqlite. Then you can go back to using binary file I/O.
SQLite adds overhead above and beyond binary file I/O; it will always
be slower. If the benefits of sqlite don't outweigh the costs, you
should stick with binary I/O. What benefit are you hoping to get out
of using sqlite?
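The in-memory-container-plus-binary-I/O alternative can be sketched like
this (Python's dict and struct module stand in for the C++ maps and raw
file I/O being suggested; the file layout and names are illustrative
assumptions, not anything specified in the thread):

```python
import os
import struct
import tempfile

# Sparse (x, y) -> value samples held in an ordinary in-memory map.
samples = {(1, 1): 2.0, (2, 3): 4.0, (50, 60): 8.0}

# Dump to a flat binary file: fixed-size little-endian records,
# no SQL layer, no journal -- just sequential writes.
record = struct.Struct("<qqd")       # int64 x, int64 y, float64 v
path = os.path.join(tempfile.mkdtemp(), "samples.bin")
with open(path, "wb") as f:
    for (x, y), v in samples.items():
        f.write(record.pack(x, y, v))

# Read it back using the same fixed record layout.
loaded = {}
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(record.size), b""):
        x, y, v = record.unpack(chunk)
        loaded[(x, y)] = v

print(loaded == samples)  # True
```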