Dangerous and disturbing this puzzle is. Only a bug could have locked
those connections.

If I discover anything useful, I'll report it separately (no need to
hijack this topic for that).

John

-----Original Message-----
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
Sent: Thursday, October 22, 2009 1:27 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] commit time


On Oct 22, 2009, at 11:37 AM, John Crenshaw wrote:

> An open cursor will block. I've watched it. It was a major problem,  
> and
> I spent many, many hours stepping through SQLite before I finally  
> figured
> it out. Once I carefully closed out cursors, the problem went away.  
> (In
> my case I had a long running write process trying to commit a
> transaction so it could yield to another connection in a separate  
> thread
> that wanted to write. If cursors were open on a table, the other
> connection would refuse to grab a write lock on that table, even  
> though
> the transaction was committed and there were no open writers.)
>
> I don't remember where for sure (it may have been in
> sqlite3BtreeBeginTrans), but SQLite specifically checks for open cursors.
>
> The write lock doesn't stop you from reading, but an open cursor DOES
> stop you from writing. You have to check for SQLITE_LOCKED, no way
> around it.
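
For illustration, a minimal C sketch of the "close out the cursors, then
commit" fix described above. The statement handle is hypothetical and
error handling is omitted; the point is only that resetting (or
finalizing) a stepped SELECT releases its cursor so another shared-cache
connection can take the write lock.

#include <sqlite3.h>

/* Sketch: release any open cursors before committing, so that a
** second shared-cache connection can obtain the write lock. */
static int commit_after_closing_cursors(sqlite3 *db, sqlite3_stmt *pRead)
{
    sqlite3_reset(pRead);     /* or sqlite3_finalize() if no longer needed */
    return sqlite3_exec(db, "COMMIT", 0, 0, 0);
}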

I don't understand the situation described in the first paragraph. But
the statement above is at least not universally true. Tcl test cases
"shared-[01].3.11" and "shared-[01].3.1.2" (see around line 229 of
the test/shared.test file in the source distribution) are examples of
one connection writing to a table while a second connection is scanning
through the same table using an open cursor. In this case the
"second connection" is operating in read_uncommitted mode.

Dan.




>
> John
>
> -----Original Message-----
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
> Sent: Thursday, October 22, 2009 12:06 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] commit time
>
>
> On Oct 22, 2009, at 5:21 AM, Tom Broadbent wrote:
>
>> if thread 1 opens a read cursor in read uncommitted mode it can
>> block a write lock?  i thought the read happens w/o a lock?
>
> If using read-uncommitted mode, a reader thread will not block a
> writer thread that is using the same shared-cache. The exception is
> that it does still block a writer from modifying the database schema.
>
> Dan.
>
>
>>
>> -----Original Message-----
>> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org
>> ] On Behalf Of John Crenshaw
>> Sent: Wednesday, October 21, 2009 12:03 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] commit time
>>
>> Good, a single write thread saves you all the hassle involved with
>> yielding. Unfortunately, even without multiple writers, blocking is
>> still
>> possible. If thread 1 opens a cursor, and thread 2 tries to write
>> before
>> that cursor has been closed, it will return SQLITE_LOCKED. Since any
>> read query will return a cursor, there is always a possibility for
>> blocking, and you need to handle SQLITE_LOCKED.
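
For illustration, one way a caller might handle that SQLITE_LOCKED result
is a simple retry loop; this is only a sketch (the back-off is a
placeholder, and the sqlite3_unlock_notify approach discussed elsewhere
in this thread is the more robust option).

#include <sqlite3.h>

/* Sketch: retry a write that fails with SQLITE_LOCKED because another
** connection on the same shared cache still holds an open cursor. */
static int exec_with_locked_retry(sqlite3 *db, const char *zSql)
{
    int rc;
    do {
        rc = sqlite3_exec(db, zSql, 0, 0, 0);
        if (rc == SQLITE_LOCKED) {
            sqlite3_sleep(10);   /* crude back-off; see sqlite3_unlock_notify */
        }
    } while (rc == SQLITE_LOCKED);
    return rc;
}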
>>
>> John
>>
>> -----Original Message-----
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
>> Sent: Wednesday, October 21, 2009 2:09 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] commit time
>>
>> very good.  i don't anticipate multiple writers so this should be
>> pretty
>> simple.
>>
>> -----Original Message-----
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Crenshaw
>> Sent: Wednesday, October 21, 2009 9:15 AM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] commit time
>>
>> Yes, you have to call sqlite3_enable_shared_cache before opening any
>> database connections, then execute "PRAGMA read_uncommitted = true;"
>> on
>> each connection. Blocking can still happen in some situations, but  
>> you
>> can handle it as I described in my original reply.
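
For reference, that setup order might look roughly like the following in
C (the function name and error handling are placeholders); the key point
is that sqlite3_enable_shared_cache() has to be called before the
connections are opened.

#include <sqlite3.h>

/* Sketch: enable shared-cache mode, then open a connection and put it
** into read-uncommitted mode. Repeat the open + PRAGMA for each thread. */
static int open_read_uncommitted(const char *zFilename, sqlite3 **ppDb)
{
    int rc;

    sqlite3_enable_shared_cache(1);           /* before any sqlite3_open() */
    rc = sqlite3_open(zFilename, ppDb);
    if (rc != SQLITE_OK) return rc;
    return sqlite3_exec(*ppDb, "PRAGMA read_uncommitted = true;", 0, 0, 0);
}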
>>
>> John
>>
>> -----Original Message-----
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
>> Sent: Wednesday, October 21, 2009 12:05 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] commit time
>>
>>
>> it sounds like this is the feature you recommend using:
>>
>> "A database connection in read-uncommitted mode _does not attempt to
>> obtain read-locks before reading_ from database tables as described
>> above. This can lead to inconsistent query results if another  
>> database
>> connection modifies a table while it is being read, but it also means
>> that a read-transaction opened by a connection in read-uncommitted
>> mode
>> can neither block nor be blocked by any other connection."
>>
>> this is precisely what i need.  thanks very much.
>>
>> ________________________________________
>> From: sqlite-users-boun...@sqlite.org [sqlite-users-
>> boun...@sqlite.org]
>> On Behalf Of John Crenshaw [johncrens...@priacta.com]
>> Sent: Tuesday, October 20, 2009 7:18 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] commit time
>>
>> Sounds like a great candidate for shared cache with PRAGMA
>> read_uncommitted = true.
>>
>> If other threads may also need a write lock on that table, you should
>> handle SQLITE_LOCKED by incrementing a waiter count and calling
>> sqlite3_unlock_notify. The thread doing the inserting can check to  
>> see
>> if anybody is waiting (blocked) and yield by committing the current
>> transaction and waiting for the blocked thread to unblock. Be aware,
>> you
>> should also close any open cursors before yielding, because open
>> cursors
>> will prevent write locks and you'll waste time yielding for nothing.
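
A rough sketch of that yield pattern, assuming SQLite was built with
SQLITE_ENABLE_UNLOCK_NOTIFY. The waiter counter and the helper names
(UnlockWait, wait_for_unlock, maybe_yield) are hypothetical, and a real
implementation would need an atomic counter and proper error handling.

/* Requires an SQLite build with SQLITE_ENABLE_UNLOCK_NOTIFY. */
#include <pthread.h>
#include <sqlite3.h>

static volatile int g_waiters = 0;   /* hypothetical count of blocked threads */

typedef struct UnlockWait {
    int fired;
    pthread_mutex_t mutex;
    pthread_cond_t cond;
} UnlockWait;

/* Invoked by SQLite once the blocking connection finishes its transaction. */
static void unlock_cb(void **apArg, int nArg)
{
    int i;
    for (i = 0; i < nArg; i++) {
        UnlockWait *p = (UnlockWait *)apArg[i];
        pthread_mutex_lock(&p->mutex);
        p->fired = 1;
        pthread_cond_signal(&p->cond);
        pthread_mutex_unlock(&p->mutex);
    }
}

/* Blocked thread: after SQLITE_LOCKED, register for a notification, bump
** the waiter count, and sleep until the writer yields. */
static int wait_for_unlock(sqlite3 *db)
{
    UnlockWait w;
    int rc;

    w.fired = 0;
    pthread_mutex_init(&w.mutex, 0);
    pthread_cond_init(&w.cond, 0);

    g_waiters++;
    rc = sqlite3_unlock_notify(db, unlock_cb, (void *)&w);
    if (rc == SQLITE_OK) {            /* SQLITE_LOCKED here means deadlock */
        pthread_mutex_lock(&w.mutex);
        while (!w.fired) pthread_cond_wait(&w.cond, &w.mutex);
        pthread_mutex_unlock(&w.mutex);
    }
    g_waiters--;

    pthread_cond_destroy(&w.cond);
    pthread_mutex_destroy(&w.mutex);
    return rc;
}

/* Writer thread: between batches, check the counter and yield by
** committing (after resetting any open cursors) so the waiter can run. */
static void maybe_yield(sqlite3 *db)
{
    if (g_waiters > 0) {
        sqlite3_exec(db, "COMMIT", 0, 0, 0);
        sqlite3_exec(db, "BEGIN", 0, 0, 0);
    }
}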
>>
>> John
>>
>> -----Original Message-----
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tom Broadbent
>> Sent: Tuesday, October 20, 2009 8:05 PM
>> To: sqlite-users@sqlite.org
>> Subject: [sqlite] commit time
>>
>> i have a simple join table containing two ids from two other
>> tables.  i
>> have an index on each of the ids in the join table.
>>
>> CREATE TABLE ContentWordItem (word_id INT, item_id INT);
>>
>> CREATE INDEX idx_ContentWordItem_word ON ContentWordItem(word_id);
>> -- index to perform fast queries by word_id
>>
>> CREATE INDEX idx_ContentWordItem_item ON ContentWordItem(item_id);
>> -- index to perform fast deletes by item_id
>>
>> i have a large amount of data to insert into this join table on a  
>> slow
>> embedded device.  i need to avoid locking this join table for more
>> than
>> a second or two at a time so that i can make queries on this table.
>>
>> so here's the question:  how do i insert small chunks of data into
>> this
>> table w/o taking a hit each time i commit?
>>
>> what i'm doing is (see the sketch after this list):
>>
>> *         read a chunk of data from flat data file into vector of id
>> pairs
>>
>> *         begin transaction
>>
>> *         loop thru vector of id pairs binding and inserting
>>
>> *         commit transaction
>>
>> *         repeat until data is exhausted
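
For concreteness, the per-chunk loop above sketched in C; the IdPair type
stands in for the vector of id pairs, and error handling is minimal.

#include <sqlite3.h>

typedef struct { int word_id; int item_id; } IdPair;   /* hypothetical */

/* Sketch: insert one chunk of id pairs inside a single transaction.
** The commit at the end is where the index updates are flushed. */
static int insert_chunk(sqlite3 *db, const IdPair *aPair, int nPair)
{
    sqlite3_stmt *pIns = 0;
    int i, rc;

    rc = sqlite3_prepare_v2(db,
        "INSERT INTO ContentWordItem(word_id, item_id) VALUES(?1, ?2)",
        -1, &pIns, 0);
    if (rc != SQLITE_OK) return rc;

    sqlite3_exec(db, "BEGIN", 0, 0, 0);
    for (i = 0; i < nPair; i++) {
        sqlite3_bind_int(pIns, 1, aPair[i].word_id);
        sqlite3_bind_int(pIns, 2, aPair[i].item_id);
        rc = sqlite3_step(pIns);           /* expect SQLITE_DONE */
        sqlite3_reset(pIns);
    }
    rc = sqlite3_exec(db, "COMMIT", 0, 0, 0);   /* the slow part observed below */
    sqlite3_finalize(pIns);
    return rc;
}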
>>
>> i'm seeing that the reading, binding, and inserting is very fast (300
>> ms) but the commit is taking upwards of 3 seconds.  when i increase  
>> my
>> chunk size by a factor of 10 the insert doesn't appear to take 10x
>> longer but the commit still takes upwards of 3 seconds.  the point is
>> that the commit hit appears to be much greater than the insert hit  
>> but
>> doesn't appear to scale directly.
>>
>> it appears that the commit is updating the indexes and taking a long
>> time.  is this a correct evaluation?
>>
>> it also appears that the commit takes longer as the size of the table
>> grows (i.e. the index is getting bigger).  is this expected?
>>
>> what i'm worried about is that by reducing the chunk size (to avoid
>> locking the db for a long time) i add a significant amount of time to
>> the insert process because the commits are costing several seconds.
>> however, locking the db for a long time is not desirable.
>>
>> i'm also concerned about the commit time increasing over time as the
>> amount of data in the table increases.
>>
>> is there a better approach?
>>
>> thanks
>> tom
>>

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
