[ https://issues.apache.org/jira/browse/HBASE-15716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15358380#comment-15358380 ]

stack commented on HBASE-15716:
-------------------------------

I think it is fine. I went to hack on your class because I thought you'd left 
off consideration of IsolationLevel... but I think the way you do it is clean. 
The other method is for outside consumers and should probably go away anyway 
(it seems to be used by Scanners only...).

Let me make up a patch that has your nice class [~ikeda] and the other changes 
too... 

I spent a while testing this patch. Looks good. JFR no longer reports this as a 
point of contention. In thread dumps, the region scanner init no longer shows 
up at safepoints. Comparing perf runs, I see we are doing the same throughput, 
perhaps slightly more.

Looking before and after with honest profiler 
(https://github.com/RichardWarburton/honest-profiler), 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl now appears MORE 
often in stack traces when sampling, but as the endpoint of a stack trace its 
incidence is lower. The percentages are small -- less than 1% -- which would 
seem to indicate that the focus on this as a point of contention is probably 
overblown: it shows up because my manual thread dumps happen at safepoints 
(see "Why (Most) Sampling Java Profilers Are Fucking Terrible" 
http://psy-lob-saw.blogspot.co.za/2016/02/why-most-sampling-java-profilers-are.html
 -- though that post says JFR does not rely on safepoints...).
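For context, the contention pattern under discussion can be sketched roughly as below. This is a minimal illustration with hypothetical names (ScannerReadPointsSketch, registerScanner, etc. are not the actual HBase code): scanner read points kept in a region-scoped ConcurrentSkipListMap, with every scanner open, close, and minimum-read-point query funneling through a synchronize on the map itself.

```java
import java.util.concurrent.ConcurrentSkipListMap;

// Minimal sketch (hypothetical names, not the real HBase implementation) of
// the pattern described in this issue: a region-scoped CSLM of scanner read
// points, guarded by synchronizing on the map itself.
public class ScannerReadPointsSketch {
  // Map of scanner id -> read point taken when the scanner was constructed.
  private final ConcurrentSkipListMap<Long, Long> scannerReadPoints =
      new ConcurrentSkipListMap<>();
  private volatile long readPoint = 0L;

  long registerScanner(long scannerId) {
    // Every concurrent scanner open serializes on this one monitor --
    // the lock contention flight recorder flagged on random reads.
    synchronized (scannerReadPoints) {
      long rp = readPoint;
      scannerReadPoints.put(scannerId, rp);
      return rp;
    }
  }

  long smallestReadPoint() {
    // Scan the open scanners for the minimum read point (sketch only;
    // falls back to the current read point when no scanners are open).
    synchronized (scannerReadPoints) {
      Long min = null;
      for (Long v : scannerReadPoints.values()) {
        if (min == null || v < min) {
          min = v;
        }
      }
      return min == null ? readPoint : min;
    }
  }

  void closeScanner(long scannerId) {
    synchronized (scannerReadPoints) {
      scannerReadPoints.remove(scannerId);
    }
  }
}
```

With many handler threads opening scanners at once, they all queue on the same monitor, which is why moving this bookkeeping into a dedicated class with cheaper synchronization helps.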



> HRegion#RegionScannerImpl scannerReadPoints synchronization constrains random 
> read
> ----------------------------------------------------------------------------------
>
>                 Key: HBASE-15716
>                 URL: https://issues.apache.org/jira/browse/HBASE-15716
>             Project: HBase
>          Issue Type: Bug
>          Components: Performance
>            Reporter: stack
>            Assignee: stack
>         Attachments: 
> 15716.implementation.using.ScannerReadPoints.branch-1.patch, 
> 15716.prune.synchronizations.patch, 15716.prune.synchronizations.v3.patch, 
> 15716.prune.synchronizations.v4.patch, 15716.prune.synchronizations.v4.patch, 
> 15716.wip.more_to_be_done.patch, HBASE-15716.branch-1.001.patch, 
> HBASE-15716.branch-1.002.patch, HBASE-15716.branch-1.003.patch, 
> HBASE-15716.branch-1.004.patch, ScannerReadPoints.java, Screen Shot 
> 2016-04-26 at 2.05.45 PM.png, Screen Shot 2016-04-26 at 2.06.14 PM.png, 
> Screen Shot 2016-04-26 at 2.07.06 PM.png, Screen Shot 2016-04-26 at 2.25.26 
> PM.png, Screen Shot 2016-04-26 at 6.02.29 PM.png, Screen Shot 2016-04-27 at 
> 9.49.35 AM.png, TestScannerReadPoints.java, before_after.png, 
> current-branch-1.vs.NoSynchronization.vs.Patch.png, hits.png, 
> remove.locks.patch, remove_cslm.patch
>
>
> Here is a [~lhofhansl] special.
> When we construct the region scanner, we get our read point and then store it 
> with the scanner instance in a Region scoped CSLM. This is done under a 
> synchronize on the CSLM.
> Synchronizing on this region-scoped Map while creating region scanners is 
> the outstanding point of lock contention according to flight recorder (my 
> workload is workload c, random reads).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)