[ https://issues.apache.org/jira/browse/HBASE-16698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15521114#comment-15521114 ]

Yu Li commented on HBASE-16698:
-------------------------------

Checked the failed UT cases below from the HadoopQA report and confirmed they 
all pass locally:
{noformat}
org.apache.hadoop.hbase.client.TestReplicasClient
org.apache.hadoop.hbase.client.TestFromClientSide
org.apache.hadoop.hbase.client.TestIncrementFromClientSideWithCoprocessor
org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient
org.apache.hadoop.hbase.client.TestMobSnapshotCloneIndependence
{noformat}
I have seen some of these failures in HadoopQA reports for several JIRAs; not 
sure whether any JIRA already tracks them.

Regarding the findbugs issue:
{noformat}
Bug type UL_UNRELEASED_LOCK (click for details) 
In class org.apache.hadoop.hbase.regionserver.HRegion
In method 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion$BatchOperation)
At HRegion.java:[line 3262]
{noformat}
I think it's a findbugs false positive similar to this one on 
[stackoverflow|http://stackoverflow.com/questions/5408940/possible-findbugs-false-positive-of-ul-unreleased-lock-exception-path].
 I can see some methods suppress the findbugs warning through 
{{@edu.umd.cs.findbugs.annotations.SuppressWarnings}}, such as 
{{HRegion#doClose}}, but I don't think it's a good idea to do the same for 
{{doMiniBatchMutate}}. Any suggestions [~stack]?
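
For reference, below is a minimal, self-contained sketch (class and method 
names are made up, not HBase code) of how such a method-level suppression 
would look, in the same style as {{HRegion#doClose}}. The lock hand-off is 
just an illustrative pattern that findbugs cannot follow, so the warning 
fires even though the release is guaranteed:
{code:java}
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical example class, only to show where the annotation would sit.
public class SuppressionSketch {
  private final ReentrantLock updatesLock = new ReentrantLock();

  // Suppress the UL_UNRELEASED_LOCK report on this method only; the
  // justification text here is illustrative.
  @edu.umd.cs.findbugs.annotations.SuppressWarnings(value = "UL_UNRELEASED_LOCK",
      justification = "Lock is released in finishMutation(), which callers "
          + "invoke on every exit path")
  public void startMutation() {
    updatesLock.lock();
    // No unlock() here on purpose: the release happens in finishMutation()
    // below, which static analysis cannot see, hence the false positive.
  }

  public void finishMutation() {
    updatesLock.unlock();
  }
}
{code}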

> Performance issue: handlers stuck waiting for CountDownLatch inside 
> WALKey#getWriteEntry under high writing workload
> --------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-16698
>                 URL: https://issues.apache.org/jira/browse/HBASE-16698
>             Project: HBase
>          Issue Type: Improvement
>          Components: Performance
>    Affects Versions: 1.1.6, 1.2.3
>            Reporter: Yu Li
>            Assignee: Yu Li
>         Attachments: HBASE-16698.patch, HBASE-16698.v2.patch, 
> hadoop0495.et2.jstack
>
>
> As titled, in our production environment we observed 98 out of 128 handlers 
> stuck waiting for the CountDownLatch {{seqNumAssignedLatch}} inside 
> {{WALKey#getWriteEntry}} under a high writing workload.
> After digging into it, we found the problem is mainly caused by advancing 
> mvcc in the append logic. Below is some detailed analysis:
> Under the current branch-1 code, all batch puts call 
> {{WALKey#getWriteEntry}} after appending the edit to the WAL, and 
> {{seqNumAssignedLatch}} is only released when the corresponding append call 
> is handled by the RingBufferEventHandler (see 
> {{FSWALEntry#stampRegionSequenceId}}). Because we currently use a single 
> event handler for the ring buffer, the append calls are handled one by one 
> (a lot of our current logic actually depends on this sequential handling), 
> and this becomes a bottleneck under a high writing workload.
> The worst part is that by default we only use one WAL per RS, so appends for 
> all regions are handled sequentially, which causes contention among 
> different regions...
> To fix this, we could also make use of the "sequential appends" mechanism: 
> grab the WriteEntry before publishing the append onto the ring buffer and 
> use it as the sequence id; we only need to add a lock to make "grab 
> WriteEntry" and "append edit" a single transaction. This still causes 
> contention inside a region but avoids contention between different regions. 
> This solution has already been verified in our online environment and proved 
> effective.
> Note that for the master (2.0) branch, since we already changed the write 
> pipeline to sync before writing the memstore (HBASE-15158), this issue only 
> exists for the ASYNC_WAL write scenario.
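
A rough, self-contained sketch of the idea described above (simplified class 
names, not the actual patch): take the sequence id from mvcc under a short 
region-level lock before publishing the append to the ring buffer, instead of 
parking the handler on a CountDownLatch until the single 
RingBufferEventHandler stamps it.
{code:java}
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;

class WriteEntrySketch {
  // Stand-in for the mvcc instance: hands out monotonically increasing
  // sequence ids.
  static class SimpleMvcc {
    private final AtomicLong seq = new AtomicLong();
    long begin() { return seq.incrementAndGet(); } // "grab WriteEntry"
  }

  private final SimpleMvcc mvcc = new SimpleMvcc();
  // Region-level lock that turns "grab WriteEntry" + "append edit" into one
  // atomic step, so entries reach the ring buffer in sequence-id order for
  // this region.
  private final ReentrantLock appendLock = new ReentrantLock();

  long appendToWal(byte[] edit) {
    appendLock.lock();
    try {
      long writeEntrySeq = mvcc.begin();         // sequence id known up front
      publishToRingBuffer(edit, writeEntrySeq);  // no wait on the consumer
      return writeEntrySeq;
    } finally {
      appendLock.unlock();
    }
  }

  private void publishToRingBuffer(byte[] edit, long seqId) {
    // In the real code this would be a disruptor publish; elided in this sketch.
  }
}
{code}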



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
