[ https://issues.apache.org/jira/browse/HBASE-16608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15531893#comment-15531893 ]

ramkrishna.s.vasudevan commented on HBASE-16608:
------------------------------------------------

I have yet to fully understand the issue, but I am fairly sure it is because the merge and the snapshot happen in parallel. Previously there was a wait() here that got removed. How do merge and snapshot work together? Can you check that part once again?
{code}
org.apache.hadoop.hbase.DroppedSnapshotException: region: TestTable,00000000000000000010695414,1475147500030.2c0a4161a4462d9070921ceb0fe22390.
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2538)
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2223)
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2192)
        at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2083)
        at org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2009)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:502)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:472)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:75)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:259)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: offset (808910118) + length (8) exceed the capacity of the array: 2097152
        at org.apache.hadoop.hbase.util.Bytes.explainWrongLengthOrOffset(Bytes.java:840)
        at org.apache.hadoop.hbase.util.Bytes.toLong(Bytes.java:814)
        at org.apache.hadoop.hbase.util.Bytes.toLong(Bytes.java:799)
        at org.apache.hadoop.hbase.KeyValue.getTimestamp(KeyValue.java:1511)
        at org.apache.hadoop.hbase.KeyValue.getTimestamp(KeyValue.java:1502)
        at org.apache.hadoop.hbase.regionserver.querymatcher.ScanQueryMatcher.preCheck(ScanQueryMatcher.java:192)
        at org.apache.hadoop.hbase.regionserver.querymatcher.MinorCompactionScanQueryMatcher.match(MinorCompactionScanQueryMatcher.java:40)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:564)
        at org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:132)
        at org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:75)
        at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:880)

{code}
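For reference, the root failure is the sanity check in Bytes.toLong. Here is a minimal sketch of that check (simplified and not the real Bytes code; the method body and message format are approximations), showing how a garbage offset, e.g. one read from a cell whose backing chunk was concurrently reused, trips it against a 2 MB chunk:

```java
// Simplified sketch of the bounds check that throws above.
// Approximates org.apache.hadoop.hbase.util.Bytes.toLong; not the actual code.
public class BoundsCheckSketch {
    static long toLong(byte[] bytes, int offset, int length) {
        if (length != 8 || offset < 0 || offset + length > bytes.length) {
            throw new IllegalArgumentException(
                "offset (" + offset + ") + length (" + length
                + ") exceed the capacity of the array: " + bytes.length);
        }
        long l = 0;
        for (int i = offset; i < offset + length; i++) {
            l <<= 8;
            l ^= bytes[i] & 0xFF;
        }
        return l;
    }

    public static void main(String[] args) {
        byte[] chunk = new byte[2 * 1024 * 1024]; // 2097152, as in the log
        try {
            // The offset from the log is far past the chunk's capacity,
            // so the check fails before any bytes are read.
            toLong(chunk, 808910118, 8);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

So the offset itself (808910118) is nonsense for a 2 MB chunk, which points at the cell's backing buffer being changed underneath the flush rather than at the check itself.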
I was able to reproduce this quite easily. In fact, I tried running with in-memory compaction disabled and did not hit the issue. I am still debugging and will get back here once I am clear on how this is caused; just updating in case you have time to look at the problem.
Also, addIntoPooledChunks now adds the chunks from all the segments into one segment. For each segment there is a chance its pooled chunk queue is full, and you add all those full chunk queues into another chunk queue which can hold at most chunkPool.getMaxCount() chunks.
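To illustrate the capacity concern, here is a hedged sketch (class and method names are hypothetical, not the actual MemStoreChunkPool API): if every segment contributes a full queue but the shared pool is capped at maxCount, the overflow chunks are simply never pooled:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hypothetical sketch of merging per-segment chunk queues into one
// bounded pool queue; names do not match the real HBase classes.
public class ChunkPoolSketch {
    private final Queue<byte[]> pooled = new ArrayDeque<>();
    private final int maxCount;

    ChunkPoolSketch(int maxCount) { this.maxCount = maxCount; }

    // Drain each segment's queue into the shared pool, never exceeding
    // maxCount; chunks beyond the cap are dropped (left to GC).
    int addIntoPooledChunks(List<Queue<byte[]>> segmentQueues) {
        int dropped = 0;
        for (Queue<byte[]> q : segmentQueues) {
            byte[] chunk;
            while ((chunk = q.poll()) != null) {
                if (pooled.size() < maxCount) {
                    pooled.add(chunk);
                } else {
                    dropped++; // pool already at chunkPool.getMaxCount()
                }
            }
        }
        return dropped;
    }

    int size() { return pooled.size(); }

    public static void main(String[] args) {
        ChunkPoolSketch pool = new ChunkPoolSketch(10);
        List<Queue<byte[]>> segments = new ArrayList<>();
        for (int s = 0; s < 3; s++) {
            Queue<byte[]> q = new ArrayDeque<>();
            for (int i = 0; i < 4; i++) {
                q.add(new byte[8]);
            }
            segments.add(q);
        }
        int dropped = pool.addIntoPooledChunks(segments);
        System.out.println("pooled=" + pool.size() + " dropped=" + dropped);
    }
}
```

With three segments of four chunks each and maxCount = 10, two chunks are dropped rather than pooled; whether the real merge path is allowed to lose chunks this way is exactly what needs checking.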

> Introducing the ability to merge ImmutableSegments without copy-compaction or 
> SQM usage
> ---------------------------------------------------------------------------------------
>
>                 Key: HBASE-16608
>                 URL: https://issues.apache.org/jira/browse/HBASE-16608
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Anastasia Braginsky
>            Assignee: Anastasia Braginsky
>         Attachments: HBASE-16417-V02.patch, HBASE-16417-V04.patch, 
> HBASE-16417-V06.patch, HBASE-16417-V07.patch, HBASE-16417-V08.patch, 
> HBASE-16417-V10.patch, HBASE-16608-V01.patch, HBASE-16608-V03.patch, 
> HBASE-16608-V04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
