[ 
https://issues.apache.org/jira/browse/HBASE-12320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14179973#comment-14179973
 ] 

qian wang commented on HBASE-12320:
-----------------------------------

No idea; maybe we can adjust the order of the index entries so that the long 
key does not come first in the index block.

> HFile index can't be flushed from the memstore to disk when the key portion 
> of a KV is larger than 128KB and the HFile builds a two-level (or deeper) index
> ---------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-12320
>                 URL: https://issues.apache.org/jira/browse/HBASE-12320
>             Project: HBase
>          Issue Type: Bug
>          Components: HFile
>    Affects Versions: 0.94.6, 0.98.1
>         Environment: cdh4.5.0
> cdh5.1.0
>            Reporter: qian wang
>         Attachments: TestLongIndex.java
>
>
> When you put a KeyValue with a large key portion, for example a big rowkey, a 
> big family, or a big qualifier: in other words, once kv.getKeyLength() of the 
> KV you put is larger than 128KB and that KV is the first KV of a data block, 
> and you go on writing new data blocks so that a two-level index is built in 
> the HFile, the memstore can no longer be flushed. The RegionServer hosting the 
> corresponding region stays in the flushing state and the new HFile in the 
> region's .tmp directory is written forever; you can't stop it without killing 
> the RS.
> In my view, the index generation logic of HFile causes this bug: when an 
> intermediate index block grows larger than 128KB, the next index level is 
> generated, but a first key above 128KB causes an endless loop of generating 
> next-level indexes.
> My test code is attached; you can try it on an empty table in your test 
> cluster.
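For anyone without the attachment, below is a minimal sketch of the scenario 
described above, assuming a plain 0.94/0.98-era HBase client; the table name, 
family, and sizes are made up here and are not taken from TestLongIndex.java. 
The idea is to put one KV whose key portion exceeds 128KB so that it sorts 
first and becomes the first key of the first data block, add enough ordinary 
KVs to produce several more data blocks, and then trigger a flush.

import java.util.Arrays;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class LongKeyRepro {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "t_longkey");   // hypothetical empty test table
        byte[] family = Bytes.toBytes("f");

        // One KV whose key portion is well above 128KB. A 200KB rowkey of 'a's
        // sorts before the "row-..." keys below, so it becomes the first key
        // of the first data block when the memstore is flushed.
        byte[] bigRow = new byte[200 * 1024];
        Arrays.fill(bigRow, (byte) 'a');
        Put bigPut = new Put(bigRow);
        bigPut.add(family, Bytes.toBytes("q"), Bytes.toBytes("v"));
        table.put(bigPut);

        // Ordinary KVs so the flushed HFile contains several data blocks and
        // needs an intermediate (multi-level) index. Lowering the column
        // family BLOCKSIZE makes this happen with less data.
        for (int i = 0; i < 100000; i++) {
            Put p = new Put(Bytes.toBytes(String.format("row-%08d", i)));
            p.add(family, Bytes.toBytes("q"), new byte[1024]);
            table.put(p);
        }
        table.flushCommits();

        // Force the memstore flush; with the bug the RegionServer hangs here,
        // endlessly creating new index levels for the oversized first key.
        new HBaseAdmin(conf).flush("t_longkey");
        table.close();
    }
}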



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
