James Gregurich wrote:
> If the sqlite statement had a temporary storage area so that I could  
> load up a bunch of rows and then commit them in one shot so that the  
> lock on the db was not held very long by a single transaction, that  
> would probably work.
> 

Using a RAM disk, you could insert rows into a separate database as 
they are generated. This would be your batch.

Then periodically attach that database to the main database and copy all 
the new rows to the main DB table in a single transaction.

     attach "batch.db" as batch;
     begin;
     insert into main.tbl select * from batch.tbl;
     delete from batch.tbl;
     commit;
     detach batch;

This will only lock the main database for a short period while it is 
updated.
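
From the C API the whole merge can be run with one sqlite3_exec call. 
Something like this (just a sketch, with the RAM disk path and table 
names made up to match the example above, and only minimal error 
handling):

     #include <stdio.h>
     #include <sqlite3.h>

     /* Merge the batch database into the main database. The main
        database is only write locked between the begin and commit. */
     int flush_batch(sqlite3 *db)   /* db is open on the main database */
     {
         const char *sql =
             "attach '/ramdisk/batch.db' as batch;"
             "begin;"
             "insert into main.tbl select * from batch.tbl;"
             "delete from batch.tbl;"
             "commit;"
             "detach batch;";
         char *err = 0;

         /* sqlite3_exec runs each semicolon separated statement in turn */
         if (sqlite3_exec(db, sql, 0, 0, &err) != SQLITE_OK) {
             fprintf(stderr, "batch flush failed: %s\n", err);
             sqlite3_free(err);
             return 1;
         }
         return 0;
     }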

> However, my reading of the documentation leads me to believe that  
> using the bind functions in a loop with an insert statement will lock  
> the entire in-memory DB until the bulk insert is done....which means I  
> would get no benefit from concurrency.
> 
> Is this correct?

Readers are blocked by a writer until the write transaction is committed.
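
So yes, if the whole bind loop runs inside one big transaction, the 
write lock is held from the first insert until the commit. The usual 
pattern looks something like this (a sketch only, with a made up table):

     #include <sqlite3.h>

     /* Insert n rows in a single transaction. Readers are blocked
        until the commit at the end. */
     int insert_rows(sqlite3 *db, int n)
     {
         sqlite3_stmt *stmt;
         int i, rc;

         rc = sqlite3_prepare_v2(db,
                 "insert into tbl(a, b) values (?1, ?2)", -1, &stmt, 0);
         if (rc != SQLITE_OK) return rc;

         sqlite3_exec(db, "begin", 0, 0, 0);
         for (i = 0; i < n; i++) {
             sqlite3_bind_int(stmt, 1, i);
             sqlite3_bind_int(stmt, 2, 2 * i);
             rc = sqlite3_step(stmt);    /* SQLITE_DONE on success */
             sqlite3_reset(stmt);        /* rearm for the next row */
             if (rc != SQLITE_DONE) break;
         }
         sqlite3_exec(db, rc == SQLITE_DONE ? "commit" : "rollback",
                 0, 0, 0);
         sqlite3_finalize(stmt);
         return rc == SQLITE_DONE ? SQLITE_OK : rc;
     }

Keeping each batch small, or staging it on the RAM disk as above, is 
what keeps that lock window short.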

> 
> BTW: does the question I posed on modifying the library to add a  
> feature to attach an in-memory data store to another one via the C  
> API belong on the sqlite-dev list?
> 

That list doesn't get much traffic. Your question was fine here.

It would be fairly involved to change the handling of in-memory 
databases. They don't have names to use with the attach command, and 
they don't do any locking, since currently they can only be accessed 
from a single connection. The locking in SQLite is done with POSIX file 
locks, which can't be used for in-memory databases since they aren't 
files. You're welcome to try, of course, but it seems like a lot of work 
for little return when there are other ways to do what you want.
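
For completeness, attach does accept the special name ':memory:', but 
that doesn't help here:

     /* attaching ':memory:' by name works, but it always creates a
        brand new empty database, not a handle to an existing one */
     sqlite3_exec(db, "attach ':memory:' as scratch", 0, 0, 0);

So there is still no way to reach an in-memory database that is already 
open on another connection.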

HTH
Dennis Cote

