My general suggestion to you, Mark, would be to change your database engine and use something other than SQLite.
An application that continuously writes thousands of rows to the database (I wonder what size the database can reach that way?) from one process while another process tries to read from it is DOA with SQLite. This is even documented at http://www.sqlite.org/whentouse.html (see the "High Concurrency" section near the end).

You may have some luck with this kind of application and SQLite if you work from a single process with shared cache turned on (and maybe read_uncommitted would also have to be turned on). But if you want to do it from different processes, you are better off with MySQL or the like.

Pavel

On Mon, Oct 26, 2009 at 4:21 AM, Mark Flipphi <[email protected]> wrote:
> Hello,
>
> We have an SQLite database that is used to store measurement data.
>
> One application runs on a server and continuously stores data to the
> database. We use:
>
> BEGIN
> INSERT
> INSERT
> ....
> COMMIT
>
> The COMMIT is issued when there are more than 100,000 inserts or 1
> second has elapsed.
>
> Another application runs on a desktop PC and opens the database from
> the server hard disk (a shared drive). We need to be able to access
> the data in the database, but the database is almost always locked.
> This application reads the data and occasionally needs to store some
> fields.
>
> I think it has something to do with the BEGIN that locks the database.
>
> Are there any suggestions on how to solve this?
>
> The server is Windows 2008 R2; the desktop is Windows Vista.
>
> With kind regards,
>
> Mark Flipphi
>
> _______________________________________________
> sqlite-users mailing list
> [email protected]
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
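For the archives, the single-process workaround Pavel describes (shared cache plus read_uncommitted) can be sketched like this. This is a minimal Python illustration, not Mark's actual application; the file name, table schema, and row count are invented, and it assumes an SQLite build with shared-cache support:

```python
import os
import sqlite3
import tempfile

# Two connections in ONE process sharing a cache (cache=shared in the URI).
# With PRAGMA read_uncommitted on, the reader is not blocked by the
# writer's open transaction and can even see its uncommitted rows.
path = os.path.join(tempfile.mkdtemp(), "measurements.db")
uri = "file:%s?cache=shared" % path

# isolation_level=None puts the connection in autocommit mode, so we can
# issue BEGIN/COMMIT ourselves, as in Mark's writer loop.
writer = sqlite3.connect(uri, uri=True, isolation_level=None)
writer.execute("CREATE TABLE samples (ts REAL, value REAL)")

reader = sqlite3.connect(uri, uri=True, isolation_level=None)
reader.execute("PRAGMA read_uncommitted = 1")  # allow dirty reads

# Writer batches inserts inside one transaction, like the server app does.
writer.execute("BEGIN")
writer.executemany("INSERT INTO samples VALUES (?, ?)",
                   [(i, i * 0.5) for i in range(1000)])

# The writer has NOT committed yet, but the reader already sees the rows
# because both connections share one cache and dirty reads are allowed.
count = reader.execute("SELECT COUNT(*) FROM samples").fetchone()[0]
writer.execute("COMMIT")
print(count)
```

Note this only helps because both connections live in the same process and the same shared cache; two separate processes on a shared network drive, as in Mark's setup, still contend for file locks in the usual way.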

