If you create a file on disk and set PRAGMA synchronous = OFF, you should get pretty close to the performance of a shared in-memory database on most modern desktop operating systems - maybe close enough that you won't care to do anything beyond that. If you also look at the recent discussion/patch to disable journaling entirely, you should get even closer. Going this route means you won't have to worry so much about the case where someone accidentally pumps 4 GB of data into your database and eats all your RAM.
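For concreteness, here is a minimal sketch of those two settings using Python's sqlite3 module. (Current SQLite exposes the "disable journaling" switch as PRAGMA journal_mode = OFF; whether that matches the exact patch under discussion is an assumption, and the filename is just an example.)

```python
import sqlite3

# Open (or create) an on-disk database file.
conn = sqlite3.connect("fast.db")

# Trade durability for speed: don't wait for fsync after writes.
conn.execute("PRAGMA synchronous = OFF")

# Disable the rollback journal entirely. A crash mid-transaction
# can now leave the file corrupt, hence the delete-on-startup advice.
conn.execute("PRAGMA journal_mode = OFF")

conn.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
conn.commit()
print(conn.execute("SELECT count(*) FROM t").fetchone()[0])
conn.close()
```

With both pragmas set, inserts avoid the per-transaction journal and sync overhead, which is where most of the on-disk cost comes from.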
Keep in mind that if you do these things, it is quite trivial to generate corrupt database files if your app or OS crashes. So you need to arrange to delete database files on app start-up to reset your state (an in-memory database wouldn't have that problem!). On a Unix-based system you may be able to open the database and then delete the underlying path, but that may not work for however you are sharing things.

-scott

On Fri, Apr 18, 2008 at 11:25 AM, James Gregurich <[EMAIL PROTECTED]> wrote:
>
> I'm working on a commercial, boxed, desktop product. I can't be
> creating new mounted disks on a customer's system every time he uses
> my application.
>
> How about this...
>
> Suppose I create a temporary db file on disk. Each task (a thread)
> opens a connection to the temp file and attaches an in-memory db to
> it. The task then writes to tables in the attached in-memory db. When
> the task is done, the tables in the in-memory db are merged into the
> disk file and the attached in-memory db is closed. Reader connections
> would only read from the disk file.
>
> Will such a design give me full concurrency on my writer tasks until
> they are ready to flush their results to the disk file? As I
> understand it, the attached db won't be locked by reading done on the
> disk file.
>
> thanks,
> James
>
> On Apr 18, 2008, at 10:33:39 AM, Dennis Cote wrote:
>
>> James Gregurich wrote:
>>> If the sqlite statement had a temporary storage area so that I could
>>> load up a bunch of rows and then commit them in one shot, so that
>>> the lock on the db was not held very long by a single transaction,
>>> that would probably work.
>>
>> Using a RAM disk you could insert rows into one database as they are
>> generated. This would be your batch.
>>
>> Then periodically attach that database to the main database and copy
>> all the new rows to the main DB table in one atomic transaction.
>>
>> attach "batch.db" as batch;
>> begin;
>> insert into main.tbl select * from batch.tbl;
>> delete from batch.tbl;
>> commit;
>> detach batch;
>>
>> This will only lock the main database for a short period while it is
>> updated.
>>
>>> However, my reading of the documentation leads me to believe that
>>> using the bind functions in a loop with an insert statement will
>>> lock the entire in-memory DB until the bulk insert is done... which
>>> means I would get no benefit from concurrency.
>>>
>>> Is this correct?
>>
>> Readers are blocked by a writer until the write transaction is
>> committed.
>>
>>> BTW: does the question I posed on modifying the library to add a
>>> feature to attach an in-memory data store to another one via the C
>>> API belong on the sqlite-dev list?
>>
>> That list doesn't get much traffic. Your question was fine here.
>>
>> It would be fairly involved to change the handling of in-memory
>> databases. They don't have names to use with the attach command, and
>> they don't do any locking since they can currently only be accessed
>> from a single connection. The locking in SQLite is done with POSIX
>> file locks, which can't be used for in-memory databases since they
>> aren't files. You're welcome to try, of course, but it seems like a
>> lot of work for little return when there are other ways to do what
>> you want.
>>
>> HTH
>> Dennis Cote
>>
>> _______________________________________________
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
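For anyone following along, here is a runnable sketch of Dennis's attach-and-merge batching, using Python's sqlite3 module. An ordinary file stands in for the RAM-disk batch database, and the names (main.db, batch.db, tbl) are just the ones from his example.

```python
import sqlite3

# Main on-disk database holding the long-lived table.
# isolation_level=None keeps the Python wrapper out of transaction
# management so the explicit BEGIN/COMMIT below behave as written.
main = sqlite3.connect("main.db", isolation_level=None)
main.execute("CREATE TABLE IF NOT EXISTS tbl (x INTEGER)")

# Batch database (a RAM disk in Dennis's setup). Rows accumulate
# here without ever taking a lock on main.db.
batch = sqlite3.connect("batch.db")
batch.execute("CREATE TABLE IF NOT EXISTS tbl (x INTEGER)")
batch.executemany("INSERT INTO tbl VALUES (?)", [(i,) for i in range(100)])
batch.commit()
batch.close()

# Periodically: attach the batch, copy its rows across, and empty it,
# all inside one transaction so main.db is locked only briefly.
main.execute('ATTACH "batch.db" AS batch')
main.execute("BEGIN")
main.execute("INSERT INTO main.tbl SELECT * FROM batch.tbl")
main.execute("DELETE FROM batch.tbl")
main.execute("COMMIT")
main.execute("DETACH batch")

print(main.execute("SELECT count(*) FROM tbl").fetchone()[0])
main.close()
```

The writers and the merge step touch the main database only inside that short transaction, which is the whole point of the pattern.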