I'll start with my question, then add detail about the environment and scenario motivating it:
,----
| In shared-cache mode, is it possible for two different connections
| (both connected to the shared cache) to mutate two different tables at
| the same time?
`----

My reading of the documentation[1] on shared-cache mode says no, but my
understanding is further confused by discussion on this list in 2009.
One thread from November 2009[2] touches on the same question but never
reaches a definitive resolution:

,----
| From: Nicolas Rivera <nicolas.rivera-tirg1qow...@public.gmane.org>
| Subject: multiple threads with shared cache mode
| Date: Tue, 24 Nov 2009 13:22:39 -0500
| Message-ID: <4b0c246f.7050...@ac4s.com>
`----

In that thread, two authors agree that only one write can occur at a
time, but "Ken" suggests[3] that two writes may occur concurrently. In
another discussion, John Crenshaw suggests[4] that two threads on
separate connections to a shared cache can update separate tables
concurrently:

,----
| From: "John Crenshaw" <johncrenshaw-tup11gostmhbdgjk7y7...@public.gmane.org>
| Subject: Re: shared cache mode and 'LOCKED'
| Date: Wed, 28 Oct 2009 15:38:18 -0400
| Message-ID: <457544406303854089089d6c1bb232c002467...@mail093.mail.lan>
`----

My application currently uses shared-cache mode, with several threads
each using its own connection against that same cache. In my confusion
-- and in the hope that this concurrency is possible -- I attempted to
manage contention with per-table locks in my program: each thread
looking to do a bulk insert against a single table acquires that
table's lock, issues a series of INSERT statements within a bounding
transaction, and then commits. No two threads can then mutate the same
table at the same time. However, I've noticed errors arising out of the
mediating JDBC library[5], announcing that "database table is locked,"
which I take to mean that SQLITE_LOCKED is being returned from the
"step" calls for some of the INSERT operations.
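For concreteness, here is a minimal sketch of the per-table locking
scheme I described, written against Python's sqlite3 module rather than
my actual JDBC stack; the table name, lock table, and helper are
illustrative only, and the in-memory shared-cache URI stands in for my
real database file.

```python
# Sketch of per-table write locking over a shared cache, assuming
# Python's sqlite3 in place of the JDBC library; names are illustrative.
import sqlite3
import threading

DB_URI = "file:demo?mode=memory&cache=shared"

# One lock per table, so at most one thread mutates a given table at a time.
table_locks = {"events": threading.Lock()}

def bulk_insert(table, rows):
    with table_locks[table]:  # serialize writers to this table
        conn = sqlite3.connect(DB_URI, uri=True, isolation_level=None)
        try:
            conn.execute("BEGIN")  # bounding transaction for the bulk insert
            conn.executemany("INSERT INTO %s (v) VALUES (?)" % table, rows)
            conn.execute("COMMIT")
        finally:
            conn.close()

# Keep one connection open so the shared in-memory cache persists.
keeper = sqlite3.connect(DB_URI, uri=True)
keeper.execute("CREATE TABLE events (v INTEGER)")

# Two threads, each with its own connection, bulk-inserting into the
# same table; the table lock keeps their transactions from overlapping.
threads = [threading.Thread(target=bulk_insert,
                            args=("events", [(i,) for i in range(100)]))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

count = keeper.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 200: both bulk inserts completed, serialized by the lock
```

Because the lock is held for the whole transaction, same-table writers
never collide; the errors I'm seeing arise only when threads write to
*different* tables concurrently, which this scheme deliberately permits.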
This suggests that my partitioning by table is insufficient. Should my
application instead hold a single lock across all of these connections
to the shared cache, so that at most one thread at a time issues an
INSERT statement or commits a transaction?

Like the others in the aforementioned threads, I too am confused by the
promised benefit of table-level lock granularity. Does it really just
allow readers to continue against one table while a writer writes to
some other table?

Footnotes:
[1] http://www.sqlite.org/sharedcache.html
[2] http://thread.gmane.org/gmane.comp.db.sqlite.general/52767
[3] http://article.gmane.org/gmane.comp.db.sqlite.general/52784
[4] http://thread.gmane.org/gmane.comp.db.sqlite.general/51381/focus=51885
[5] http://www.xerial.org/trac/Xerial/wiki/SQLiteJDBC

-- 
Steven E. Harris

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users