[ https://issues.apache.org/jira/browse/DERBY-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12661580#action_12661580 ]
Knut Anders Hatlen commented on DERBY-2991:
-------------------------------------------

Thanks for clarifying the comments about the tests, Mike. I'll make the modifications and come back with the results.

> Maybe you could add a short comment about in which cases we will
> research the tree with the new scheme vs. the old scheme.

The old scheme (current Derby) will re-search the tree in these situations:

a) Locks (scan lock, row lock or previous key lock) could not be obtained immediately when positioning at the start of the scan
b) The transaction was committed after we released the latch (then there's no scan lock protecting the position anymore)
c) Some other operation within the same transaction caused a split on the page we were positioned on after we released the latch (the scan lock doesn't protect us against changes made by the same transaction)

The new scheme will re-search the tree in situation (a) above, and additionally when:

d) A row was moved off the page after we released the latch (caused by a split or a purge)
e) The current page was updated (any update, as long as the page version changed) *and* evicted from the page cache after we released the latch

> Do you know if in the char/varchar case if the source DVD is already
> in String form (vs raw character array).

It is still in raw form when the value is copied, but it will be converted to string form, since SQLChar.setFrom(DVD) calls getString() on the DVD. In the tests I've run so far it shouldn't matter, since we'll end up calling DVD.getString() from EmbedRS.getString() on all the strings anyway, and the two SQLChar instances share the same immutable String instance. In other cases it could matter, so it would be better if we could share the char array between the two SQLChar objects and avoid the allocation of the String. I'm not sure if this is safe, though: SQLChar treats the raw char array as an immutable data type in most cases, but not in normalize().

It's the same with the DECIMAL tests. The source DVD is in raw form (byte[] + int) and is converted to a BigDecimal that's shared between the two SQLDecimal instances.

The cheapest way to copy the key would probably be to have a method in the Page interface that could copy the raw row to a byte array. We would only have to deserialize the key if repositioning was needed, and we would only need one extra object allocation per scan in the normal case (unless the byte buffer we allocated the first time was too small). Is that an option? The methods to serialize and deserialize DVDs are public methods of all DVDs (writeExternal() and readExternal()), so it's not really like it would be exposing implementation details in interface methods.
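To make the new scheme's resume path a bit more concrete, here is a minimal, self-contained sketch of the kind of check it performs when a scan wakes up and tries to reposition. All names here (PageHandle, BTree, SavedPosition, latchPage(), searchFromRoot(), ...) are hypothetical stand-ins rather than Derby's store API, and the sketch ignores the cached-vs-evicted distinction in (e); it only shows the general shape: reuse the saved position if the page version is unchanged, stay on the page if the key is still there, otherwise re-search from the root.

    import java.util.Optional;

    interface PageHandle {
        long pageVersion();                  // bumped whenever the page is updated
        boolean containsKey(byte[] key);     // is the saved key still on this page?
        void unlatch();
    }

    interface BTree {
        Optional<PageHandle> latchPage(long pageNumber);  // empty if the page is gone
        PageHandle searchFromRoot(byte[] key);            // full re-search of the tree
    }

    final class SavedPosition {
        final long pageNumber;   // page we were positioned on
        final long pageVersion;  // page version when the latch was released
        final byte[] key;        // copy of the current key, kept for re-search

        SavedPosition(long pageNumber, long pageVersion, byte[] key) {
            this.pageNumber = pageNumber;
            this.pageVersion = pageVersion;
            this.key = key;
        }
    }

    final class Repositioner {
        static PageHandle reposition(BTree tree, SavedPosition pos) {
            Optional<PageHandle> page = tree.latchPage(pos.pageNumber);
            if (page.isPresent() && page.get().pageVersion() == pos.pageVersion) {
                // Page untouched since the latch was released: position is still valid.
                return page.get();
            }
            if (page.isPresent() && page.get().containsKey(pos.key)) {
                // Page changed, but the row is still on it: reposition within the page.
                return page.get();
            }
            // The page is gone or the saved key is no longer on it:
            // re-search the tree from the root using the saved key.
            page.ifPresent(PageHandle::unlatch);
            return tree.searchFromRoot(pos.key);
        }
    }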
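As for whether sharing the char array would be safe, the concern is plain aliasing. The toy class below is a made-up stand-in, not SQLChar; it only illustrates that once two value holders share a mutable char[], any in-place operation (such as a normalize() that blank-pads the raw array) becomes visible through both copies.

    // Hypothetical stand-in (not Derby's SQLChar) showing why sharing the raw
    // char[] between two value holders is only safe while nothing mutates it.
    final class CharValue {
        private final char[] raw;

        CharValue(char[] raw) { this.raw = raw; }

        // Cheap copy: shares the backing array instead of materialising a String.
        CharValue shallowCopy() { return new CharValue(raw); }

        // An in-place operation like this silently changes every holder
        // that shares the array.
        void blankPadInPlace(int from) {
            java.util.Arrays.fill(raw, from, raw.length, ' ');
        }

        String asString() { return new String(raw); }
    }

For example, after CharValue b = a.shallowCopy(), a call to a.blankPadInPlace(1) also changes what b.asString() returns.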
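And to illustrate the save-now/deserialize-later idea: the rough sketch below round-trips a key column through the public writeExternal()/readExternal() methods, using plain java.io streams. KeySnapshot is a hypothetical helper, and Derby's store would presumably use its own stream classes; the Page method proposed above could even copy the stored bytes directly without calling writeExternal() at all. The sketch assumes the column implements java.io.Externalizable and that the caller supplies a fresh instance of the right DVD class to restore().

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.Externalizable;
    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;

    // Hypothetical helper: snapshot a key column into a byte[] when the latch
    // is released, and only deserialize it again if repositioning is needed.
    final class KeySnapshot {
        static byte[] save(Externalizable column) throws IOException {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                column.writeExternal(out);   // serialize the raw value
            }
            return bytes.toByteArray();
        }

        static void restore(Externalizable column, byte[] saved)
                throws IOException, ClassNotFoundException {
            try (ObjectInputStream in =
                     new ObjectInputStream(new ByteArrayInputStream(saved))) {
                column.readExternal(in);     // deserialize into a fresh instance
            }
        }
    }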
> Index split deadlock
> --------------------
>
>                 Key: DERBY-2991
>                 URL: https://issues.apache.org/jira/browse/DERBY-2991
>             Project: Derby
>          Issue Type: Bug
>          Components: Store
>    Affects Versions: 10.2.2.0, 10.3.1.4
>         Environment: Windows XP, Java 6
>            Reporter: Bogdan Calmac
>            Assignee: Knut Anders Hatlen
>         Attachments: d2991-preview-1a.diff, d2991-preview-1a.stat, d2991-preview-1b.diff, d2991-preview-1b.stat, d2991-preview-1c.diff, d2991-preview-1c.stat, d2991-preview-1d.diff, d2991-preview-1d.stat, derby.log, InsertSelectDeadlock.java, perftest.diff, Repro2991.java, stacktraces_during_deadlock.txt
>
>
> After doing some research on the mailing list, it appears that the index split deadlock is a known behaviour, so I will start by describing the theoretical problem first and then follow with the details of my test case.
> If you have concurrent select and insert transactions on the same table, the observed locking behaviour is as follows:
> - the select transaction acquires an S lock on the root block of the index and then waits for an S lock on some uncommitted row of the insert transaction
> - the insert transaction acquires X locks on the inserted records and, if it needs to do an index split, creates a sub-transaction that tries to acquire an X lock on the root block of the index
> In summary: INDEX LOCK followed by ROW LOCK + ROW LOCK followed by INDEX LOCK = deadlock
> In the case of my project this is an important issue (lack of concurrency after being forced to use table-level locking), and I would like to contribute to the project and fix this issue (if possible). I was wondering if someone who knows the code could give me a few pointers on the implications of this issue:
> - Is this a limitation of the top-down algorithm used?
> - Would fixing it require using a bottom-up algorithm for better concurrency (which is certainly non-trivial)?
> - Trying to break the circular locking above, I would first question why the select transaction needs to acquire (and hold) a lock on the root block of the index. Would it be possible to ensure the consistency of the select without locking the index?
> -----
> The attached test (InsertSelectDeadlock.java) tries to simulate a typical data collection application. It consists of:
> - an insert thread that inserts records in batch
> - a select thread that 'processes' the records inserted by the other thread: 'select * from table where id > ?'
> The derby.log provides detail about the deadlock trace, and stacktraces_during_deadlock.txt shows that the insert thread is doing an index split.
> The test was run on 10.2.2.0 and 10.3.1.4 with identical behaviour.
> Thanks,
> Bogdan Calmac.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.