Hello.

Just wondering what the group opinion is on something.

I have a bunch of home-grown C++ wrappers for SQLite, and when my app needs to 
use the DB, most routines just instance one of these DB wrapper objects on the 
stack and go for it. The constructor creates a new DB connection, which is 
closed at destructor time. Very convenient.

All is well, EXCEPT, I have ONE big, long transaction that collects and stores 
a lot of data from some sensors. If this data is big enough, it will eventually 
cause that connection to obtain an exclusive lock on the DB. Now if the data 
collection code subsequently calls any subroutine that instances up a quick and 
dirty DB object on the stack and tries to use it, a "database is locked" (or 
"database busy" on write) error will be returned.

My app is single threaded.

I have a couple of options:

1. Pass the DB object containing the connection with the open data collection 
transaction to the subroutines that need a DB. (Which is what I have been 
doing.)

2. Secretly keep a single, global connection open and let all the instances of 
the wrapper object use it.

3. Switch from rollback to WAL (or something) to keep the connections out of 
each other's way. That seems kind of a big deal since I then have to arrange 
the COMMITs.

Generally, I've had a "get in, get out" philosophy, but I can see the 
advantages of a single DB connection that exists for the life of the app. 
(Still, that seems like it might be more fragile than actually closing the DB 
file, dumping caches, and updating the directory.)

So what do all you hot-shots think is best practice? In/out, long-lived or 
something else?

Thanks

-- Ward



_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
