[ 
https://issues.apache.org/jira/browse/HBASE-15241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15241:
--------------------------
    Description: We can only load 100k blocks from a file. If you have 256 GB 
of SSD and blocks are 4 KB in size to align with SSD block reads, and you want 
it all in cache, the 100k limit gets in the way (the 100k may be an absolute 
limit... checking; in the UI I see 100k only). There is a configuration that 
lets you raise the per-file number, hbase.ui.blockcache.by.file.max. This 
helps.  (was: We can only load 100k blocks from a file. If 256Gs of SSD and 
blocks are 4k in size to align with SSD block read, and you want it all in 
cache, the 100k  limit gets in the way (The 100k may be absolute limit... 
checking. In UI I see 100k only).)

> Blockcache only loads 100k blocks from a file
> ---------------------------------------------
>
>                 Key: HBASE-15241
>                 URL: https://issues.apache.org/jira/browse/HBASE-15241
>             Project: HBase
>          Issue Type: Sub-task
>          Components: BucketCache
>            Reporter: stack
>
> We can only load 100k blocks from a file. If you have 256 GB of SSD and 
> blocks are 4 KB in size to align with SSD block reads, and you want it all 
> in cache, the 100k limit gets in the way (the 100k may be an absolute 
> limit... checking; in the UI I see 100k only). There is a configuration that 
> lets you raise the per-file number, hbase.ui.blockcache.by.file.max. This 
> helps.
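
For reference, the property mentioned above would be set in hbase-site.xml 
like any other HBase configuration (a sketch; the value 1000000 is an 
illustrative example, not a tested recommendation):

```xml
<!-- hbase-site.xml: raise the per-file block count limit.
     1000000 is an arbitrary example value, not a recommendation. -->
<property>
  <name>hbase.ui.blockcache.by.file.max</name>
  <value>1000000</value>
</property>
```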



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)