I'm working on an open source messaging server (MsgSrv) that currently uses SQLite. MsgSrv nodes form a network of P2P applications that communicate with one another. Each communication is a single message, which is archived in a local SQLite database.

When messaging traffic increases there are a lot of disk accesses, since each message is written to disk individually. One way to address this is to use transactions... but that would involve caching record data until a point in time when it can be flushed to the disk DB inside a single transaction block, so many writes share one commit.
{this, I suspect, might be the best approach...}
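Something along these lines is what I have in mind. This is only a sketch: the messages(peer, body) schema, the flush_batch name, and the batch handling are made up for illustration, but it shows the cached records being written under one BEGIN/COMMIT:

#include <sqlite3.h>

typedef struct { const char *peer; const char *body; } Msg;

/* Flush a batch of cached messages inside one transaction so the disk
   is synced once per batch instead of once per message. The caller
   would accumulate, say, 64 messages and then call this. */
static int flush_batch(sqlite3 *db, const Msg *batch, int count)
{
    sqlite3_stmt *stmt = NULL;
    int i;

    if (sqlite3_exec(db, "BEGIN", NULL, NULL, NULL) != SQLITE_OK)
        return SQLITE_ERROR;

    if (sqlite3_prepare_v2(db,
            "INSERT INTO messages(peer, body) VALUES(?1, ?2)",
            -1, &stmt, NULL) != SQLITE_OK) {
        sqlite3_exec(db, "ROLLBACK", NULL, NULL, NULL);
        return SQLITE_ERROR;
    }

    for (i = 0; i < count; i++) {
        sqlite3_bind_text(stmt, 1, batch[i].peer, -1, SQLITE_STATIC);
        sqlite3_bind_text(stmt, 2, batch[i].body, -1, SQLITE_STATIC);
        if (sqlite3_step(stmt) != SQLITE_DONE) {
            sqlite3_finalize(stmt);
            sqlite3_exec(db, "ROLLBACK", NULL, NULL, NULL);
            return SQLITE_ERROR;
        }
        sqlite3_reset(stmt);  /* reuse the prepared statement */
    }

    sqlite3_finalize(stmt);
    return sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
}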


Another approach might be to use two SQLite databases. One would be a normal disk database and the other would be a memory-only database. The idea is to write to the memory database and to periodically flush the memory database to the file database.
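In SQLite terms I'm picturing something like the sketch below. Again, this is only illustrative: the 'msgsrv.db' filename, the table layout, and the function names are placeholders. The working connection is a :memory: database, the disk file is ATTACHed once, and the periodic flush is just a couple of SQL statements:

#include <sqlite3.h>

/* Open the in-memory working database and attach the on-disk archive. */
static int open_mem_with_disk(sqlite3 **pdb)
{
    if (sqlite3_open(":memory:", pdb) != SQLITE_OK)
        return SQLITE_ERROR;

    return sqlite3_exec(*pdb,
        "CREATE TABLE messages(peer TEXT, body TEXT);"
        "ATTACH DATABASE 'msgsrv.db' AS disk;"
        "CREATE TABLE IF NOT EXISTS disk.messages(peer TEXT, body TEXT);",
        NULL, NULL, NULL);
}

/* Periodic flush: copy everything accumulated in memory into the disk
   archive and clear the memory copy, all inside one transaction. */
static int flush_to_disk(sqlite3 *db)
{
    return sqlite3_exec(db,
        "BEGIN;"
        "INSERT INTO disk.messages SELECT * FROM main.messages;"
        "DELETE FROM main.messages;"
        "COMMIT;",
        NULL, NULL, NULL);
}

Queries that need the full history could still be plain SQL, e.g. a UNION ALL across main.messages and disk.messages.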

Clearly the latter approach requires more processing power, but it has considerably more benefits. For one, my MsgSrv application is free to use SQL queries throughout, while the movement of data from the memory DB to the disk DB happens transparently to the application.

With the former approach (managing memory structures in code), I end up with a more tightly coupled solution; however, using hash maps would be wickedly fast!

I'm wondering if anyone has advice or concepts I should be considering, or could perhaps point out something I've missed?

Carlos
--
EMAIL: [EMAIL PROTECTED], [EMAIL PROTECTED]
WEB: http://www.chessbrain.net
AOLIM: carlosjustiniano
MSNIM: [EMAIL PROTECTED]
YIM: [EMAIL PROTECTED]
PGP Key ID: 0x99AA9E49
RSS: http://www.chessbrain.net/cjus.rss