That's not my case at all ...
Obviously I don't write records one by one; I insert blocks of data
(arrays of structs) through virtual-table wrappers and "insert ... select".
This way I can reach >200k rec/s, or at least 100k when there are a few
more fields.
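For anyone curious about the batching idea, here is a minimal sketch in Python's sqlite3 module, standing in for the C virtual-table + "insert ... select" approach described above (the table and column names are made up for illustration). The point is the same: one transaction per block of records, not per record.

```python
# Sketch: insert a whole block of records in one transaction,
# instead of committing row by row. Table/column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (t INTEGER, ch1 INTEGER, ch2 INTEGER)")

# one "block" of 10000 records, analogous to an array of structs
block = [(i, i * 2, i * 3) for i in range(10000)]

with conn:  # a single transaction wraps the whole block
    conn.executemany("INSERT INTO samples VALUES (?, ?, ?)", block)

count = conn.execute("SELECT COUNT(*) FROM samples").fetchone()[0]
```

The real C code avoids even the per-row bind/step overhead by exposing the block through a virtual table and letting a single "insert ... select" drain it.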
Right now I'm completely CPU-bound: 100% load at high rates. I/O is
hardly a factor at <10 MB/s, and I use an 8 kB page size and, of course,
synchronous off and WAL mode...
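For reference, those settings can be applied like this (a sketch in Python's sqlite3; the file path is arbitrary — note that page_size must be set before the database file is initialized):

```python
# Sketch: the PRAGMA settings mentioned above, on a fresh database.
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "fast.db")
# autocommit mode, so the PRAGMAs run outside any transaction
conn = sqlite3.connect(path, isolation_level=None)

conn.execute("PRAGMA page_size = 8192")     # must precede first write to the db
conn.execute("CREATE TABLE t (x INTEGER)")  # initializes the file at that page size
conn.execute("PRAGMA journal_mode = WAL")   # write-ahead logging
conn.execute("PRAGMA synchronous = OFF")    # skip fsync: speed over durability

page_size = conn.execute("PRAGMA page_size").fetchone()[0]
mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
```

With synchronous=OFF a power loss can corrupt the database, which is usually acceptable for this kind of high-rate acquisition workload but worth stating.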
Another type of data (fewer fields, but with a 2-32 kB blob inside)
easily reaches ~40 MB/s, but only a few thousand rec/s.
Performance drops abruptly once there are more fields (I don't remember
the magic threshold); it seems most of the CPU load goes into field
coding? I use only integers for space optimization (varint encoding);
that also suits my data's high dynamic range.
Multi-core certainly helps to leave enough CPU power for the rest
(hardware connection, pre-processing, etc.).
I would definitely like to get more performance, but I can live with the
current numbers. One can always use a high-end CPU if one really needs
such high rates (the surrounding hardware costs ~100x more :) ).
BTW, I've asked a few times already: is it possible to get or compile a
Windows DLL of SQLite4 (just for evaluation)?
Last time I checked, it didn't compile on Windows at all.
Gabriel
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users