Andre du Plessis uttered:
How can one optimize the creation of the journal file? The problem is this: our system is an event-based one, and each message needs to be inserted and committed to the database (guaranteed), which results in a commit per insert. That was obviously unacceptably slow, and according to the list you are, by design, meant to do more bulk transactions. With some effort I modified it to use a combination of bulk inserts and temporary tables to do quite a few at a time, but the problem remains that committing is terribly slow.
What sort of latency is acceptable for commits? How many event sources are there? How do you track commits against the event source?
If you require guaranteed single inserts, a client/server database may perform better, as writes can be optimised more aggressively in the redo/undo log used by client/server systems than in the undo (rollback) journal used by SQLite.
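If you do stay with SQLite, the usual way to amortise the commit cost is to group as many inserts as you can into one explicit transaction, so the journal is created, synced and deleted once per batch rather than once per row. A minimal sketch against the C API follows; the events table and payload column are made up for illustration:

    #include <stdio.h>
    #include <sqlite3.h>

    /* Insert n rows inside a single transaction: one journal
     * create/sync/delete per batch instead of per insert. */
    int insert_batch(sqlite3 *db, const char **payloads, int n)
    {
        sqlite3_stmt *stmt = NULL;
        int i, rc;

        rc = sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
        if (rc != SQLITE_OK) return rc;

        rc = sqlite3_prepare_v2(db, "INSERT INTO events(payload) VALUES(?)",
                                -1, &stmt, NULL);
        if (rc != SQLITE_OK) goto rollback;

        for (i = 0; i < n; i++) {
            sqlite3_bind_text(stmt, 1, payloads[i], -1, SQLITE_STATIC);
            if (sqlite3_step(stmt) != SQLITE_DONE) { rc = SQLITE_ERROR; goto rollback; }
            sqlite3_reset(stmt);
        }
        sqlite3_finalize(stmt);
        return sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);  /* the one fsync */

    rollback:
        sqlite3_finalize(stmt);
        sqlite3_exec(db, "ROLLBACK", NULL, NULL, NULL);
        return rc;
    }

The trade-off is durability granularity: a crash loses at most the current uncommitted batch, which may or may not be acceptable for your "guaranteed" requirement.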
I'm thinking the journal file, the fact that it is created written and deleted each time which is slowing it down, Is there not a way to create and set up a permanent journal file pre-allocated to a certain size, so sqlite does not have to go through the OS each time to create and delete this file?
If you require multi-process access to the database, then Dan's suggestion may not work, and you'll have to optimise the sync case. You can do this, as suggested elsewhere, by turning off synchronous updates. This leaves you at the mercy of the OS to guard against crashes.
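For reference, the knob in question is PRAGMA synchronous; a minimal sketch, with connection setup omitted:

    #include <sqlite3.h>

    /* Relax durability for this connection. With synchronous=OFF, SQLite
     * skips fsync() entirely, so an OS crash or power loss can corrupt the
     * database; an application crash alone is still safe. NORMAL is a
     * middle ground, FULL is the default. */
    static int relax_sync(sqlite3 *db)
    {
        return sqlite3_exec(db, "PRAGMA synchronous = OFF;", NULL, NULL, NULL);
    }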
Depending on your OS, you may be able to optimise OS sync. Solaris ZFS is a tree-based FS, a bit like NetApp's WAFL. Synchronous writes are aggregated and written to minimise seeks. I've not done any benchmarking on ZFS, so YMMV (time to power up the Solaris Express partition, methinks.)
Linux ext3 can write file data to the same journal that FS metadata is written to, which can greatly enhance single-insert speeds because the journal is written at disk IO speed without seeks. Tests I've done indicate a doubling of performance over ordered data writing (data=ordered), which is the default for ext3.
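That ext3 behaviour is selected with the data=journal mount option. A hypothetical /etc/fstab entry (device name and mount point are made up, and assume the database lives on its own partition) might look like:

    # journal file data as well as metadata: synchronous writes land in the
    # FS journal sequentially instead of seeking to their final location first
    /dev/sdb1  /var/lib/eventdb  ext3  data=journal,noatime  0 2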
Finally, on NetBSD (where LFS is still actively developed) you may see performance improvements using LFS, for similar reasons to the ext3 case above. I've not, however, tried that recently, so again YMMV.
Of course, if you're not running Solaris, Linux or NetBSD, you may be stuck as not many other OS/FS support such optimisations.
Along the same lines as this question, is there a way to fix the initial size of the DB, i.e. pre-allocate 700 MB for instance, so that no growing of the DB file is needed until that space is exceeded? That may also speed things up.
Probably won't improve speed that much, especially as you approach your working database size. Avoid vacuuming your database, so that free pages are recycled and the database size will stabilise. If your dataset is likely to constantly grow without bounds, then SQLite may not be your optimal choice in the long run, and a client/server database may provide better performance over the long term.
You might also try increasing your page size, up to the maximum of 32768 bytes, so that new page allocations are needed less often.
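For instance, something along these lines, bearing in mind that the page size only takes effect on a new, empty database, so set it before creating any tables:

    #include <sqlite3.h>

    /* Set a 32 KB page size on a freshly created (still empty) database,
     * reducing how often the file has to grow by a page. */
    static int set_page_size(sqlite3 *db)
    {
        return sqlite3_exec(db, "PRAGMA page_size = 32768;", NULL, NULL, NULL);
    }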
Thank you very much in advance.
-- /"\ \ / ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL X - AGAINST MS ATTACHMENTS / \ ----------------------------------------------------------------------------- To unsubscribe, send email to [EMAIL PROTECTED] -----------------------------------------------------------------------------