[ https://issues.apache.org/jira/browse/HBASE-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024368#comment-15024368 ]

ramkrishna.s.vasudevan commented on HBASE-13082:
------------------------------------------------

bq.Could also have the method internally do these checks for null and check for 
zero count?
Done.
bq.closeAfterCompaction seems misnamed. The return is whether there are 
references outstanding and if the file can be safely removed/closed?
Would isReferencedInReads() be a better name?
bq.Does StoreFileScanner have reference to StoreFileManager?
No, there is no reference.
Coming to the StoreFileManager and ref counting: the SFM has the list of store 
files under the current store, but the StoreFileScanner is created by the 
StoreFile itself. That is, the StoreScanner creates a StoreFileScanner on each 
of the store files returned by the StoreFileManager. The decrement of the ref 
count has to happen when the scanner on the StoreFile is done (i.e. in 
StoreFileScanner.close()). So how can the SFM do this ref counting? The SFM is 
not involved in the scan process except for maintaining the list of store files 
eligible for scanning. What do you think? If we need to support it that way, 
then the SFM has to be modified so that scanner creation also happens through 
the SFM.
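Roughly the ref counting I have in mind, as an illustrative sketch only (the 
class and method names below are placeholders, not the actual patch): the 
StoreFile keeps an atomic count that is bumped when a StoreFileScanner is 
opened and dropped in StoreFileScanner.close(), and isReferencedInReads() just 
reports whether any reader is still outstanding.
{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Placeholder class: stands in for the ref-counting state kept per StoreFile.
class StoreFileRefCount {
  private final AtomicInteger refCount = new AtomicInteger(0);

  // Called when a StoreFileScanner is opened against this file.
  void incrementRefCount() {
    refCount.incrementAndGet();
  }

  // Called from StoreFileScanner.close() once the scan over this file is done.
  void decrementRefCount() {
    refCount.decrementAndGet();
  }

  // True while any open scanner still references this (possibly compacted) file,
  // i.e. the file cannot be safely removed/archived yet.
  boolean isReferencedInReads() {
    return refCount.get() > 0;
  }
}
{code}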
bq.Gets complicated here on the end in closeAndArchiveCompactedFiles where 
there is a lock for the Store being used to close out storefiles.... And, you 
are just following on from what is in here already. Ugh.
I thought it is better to have the lock there (though it is a lock at the store 
level) where the compacted files are updated. One thing that can be done is to 
maintain a new lock only for the compacted files, but then addCompactionResults 
needs to be split into two steps: first hold the current lock and update the 
storeFile list, then hold the new compaction lock and update the compactedFiles 
list. We would also use this compacted-files lock every time we remove the 
compacted files (in write mode) and select them (in read mode).
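To make that split concrete, here is a rough sketch of what I mean (the lock 
and method names are illustrative only, not what the patch does):
{code:java}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Placeholder class: sketches the two-lock variant of addCompactionResults.
class CompactedFilesSketch<F> {
  // Existing store-level lock guarding the active storefile list.
  private final ReentrantReadWriteLock storeLock = new ReentrantReadWriteLock();
  // New lock guarding only the compactedFiles list.
  private final ReentrantReadWriteLock compactedLock = new ReentrantReadWriteLock();
  private final List<F> storeFiles = new ArrayList<>();
  private final List<F> compactedFiles = new ArrayList<>();

  void addCompactionResults(Collection<F> compacted, Collection<F> results) {
    // First half: update the active storefile list under the current lock.
    storeLock.writeLock().lock();
    try {
      storeFiles.removeAll(compacted);
      storeFiles.addAll(results);
    } finally {
      storeLock.writeLock().unlock();
    }
    // Second half: record the compacted files under the new compaction lock.
    compactedLock.writeLock().lock();
    try {
      compactedFiles.addAll(compacted);
    } finally {
      compactedLock.writeLock().unlock();
    }
  }

  // Read mode: select the compacted files (e.g. for archiving).
  List<F> getCompactedFiles() {
    compactedLock.readLock().lock();
    try {
      return new ArrayList<>(compactedFiles);
    } finally {
      compactedLock.readLock().unlock();
    }
  }

  // Write mode: remove the files once no scanner references them any more.
  void removeCompactedFiles(Collection<F> removed) {
    compactedLock.writeLock().lock();
    try {
      compactedFiles.removeAll(removed);
    } finally {
      compactedLock.writeLock().unlock();
    }
  }
}
{code}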
bq.On how often to call checkResetHeap, we have to be prompt because we are 
carrying a snapshot until we do reset?
Yes. Since we are carrying the snapshot, I thought it is better to check on 
every scan API call so that the snapshot can be released as soon as possible.
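Just to illustrate the idea (again, the names below are placeholders, not the 
actual patch): every public scan call first checks whether a flush/compaction 
changed the file set and, if so, resets the heap so the snapshot taken earlier 
is dropped right away.
{code:java}
import java.util.List;

// Placeholder class: sketches calling checkResetHeap on every scan API.
class ScanSnapshotSketch<CELL> {
  // Set by the flush/compaction path whenever the store's file set changes.
  private volatile boolean fileSetChanged;

  void notifyFileSetChanged() {
    fileSetChanged = true;
  }

  // Checked on every scan API call; in the real scanner this would run under
  // the RegionScanner-level lock, so the check-and-reset is not racy there.
  private void checkResetHeap() {
    if (fileSetChanged) {
      resetHeap();            // rebuild the scanner heap over the new file set
      fileSetChanged = false; // the old snapshot is no longer referenced
    }
  }

  boolean next(List<CELL> outResult) {
    checkResetHeap();
    return doNext(outResult);
  }

  private void resetHeap() {
    // placeholder: would recreate the KeyValue heap from the current storefiles
  }

  private boolean doNext(List<CELL> outResult) {
    // placeholder for the actual next() logic
    return false;
  }
}
{code}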


> Coarsen StoreScanner locks to RegionScanner
> -------------------------------------------
>
>                 Key: HBASE-13082
>                 URL: https://issues.apache.org/jira/browse/HBASE-13082
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Lars Hofhansl
>            Assignee: ramkrishna.s.vasudevan
>         Attachments: 13082-test.txt, 13082-v2.txt, 13082-v3.txt, 
> 13082-v4.txt, 13082.txt, 13082.txt, HBASE-13082.pdf, HBASE-13082_1.pdf, 
> HBASE-13082_12.patch, HBASE-13082_13.patch, HBASE-13082_14.patch, 
> HBASE-13082_15.patch, HBASE-13082_16.patch, HBASE-13082_1_WIP.patch, 
> HBASE-13082_2.pdf, HBASE-13082_2_WIP.patch, HBASE-13082_3.patch, 
> HBASE-13082_4.patch, HBASE-13082_9.patch, HBASE-13082_9.patch, 
> HBASE-13082_withoutpatch.jpg, HBASE-13082_withpatch.jpg, 
> LockVsSynchronized.java, gc.png, gc.png, gc.png, hits.png, next.png, next.png
>
>
> Continuing where HBASE-10015 left off.
> We can avoid locking (and memory fencing) inside StoreScanner by deferring to 
> the lock already held by the RegionScanner.
> In tests this shows quite a scan improvement and reduced CPU (the fences make 
> the cores wait for memory fetches).
> There are some drawbacks too:
> * All calls to RegionScanner need to remain synchronized
> * Implementors of coprocessors need to be diligent in following the locking 
> contract. For example, Phoenix does not lock RegionScanner.nextRaw() as 
> required in the documentation (not picking on Phoenix, this one is my fault 
> as I told them it's OK)
> * Possible starving of flushes and compactions under heavy read load: 
> RegionScanner operations would keep getting the locks and the 
> flushes/compactions would not be able to finalize the set of files.
> I'll have a patch soon.


