[ https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15793319#comment-15793319 ]
Ted Yu commented on HBASE-14061:
--------------------------------

lgtm

{code}
169	 * @return Storage policy name.
{code}

Add a note that the returned policy name may be null.

> Support CF-level Storage Policy
> -------------------------------
>
>                 Key: HBASE-14061
>                 URL: https://issues.apache.org/jira/browse/HBASE-14061
>             Project: HBase
>          Issue Type: Sub-task
>          Components: HFile, regionserver
>    Environment: hadoop-2.6.0
>            Reporter: Victor Xu
>            Assignee: Yu Li
>         Attachments: HBASE-14061-master-v1.patch, HBASE-14061.v2.patch
>
> After reading [HBASE-12848|https://issues.apache.org/jira/browse/HBASE-12848]
> and [HBASE-12934|https://issues.apache.org/jira/browse/HBASE-12934], I wrote
> a patch to implement CF-level storage policy.
> My main purpose is to improve random-read performance for some really hot
> data, which usually resides in a particular column family of a big table.
> Usage:
> $ hbase shell
> > alter 'TABLE_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}
> > alter 'TABLE_NAME', {NAME => 'CF_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}}
> HDFS's setStoragePolicy can only take effect when a new hfile is created in a
> configured directory, so I had to create sub-directories (one per CF) under the
> region's .tmp directory and set the storage policy on them.
> Besides, I had to upgrade the Hadoop version to 2.6.0 because
> dfs.getStoragePolicy cannot easily be invoked via reflection, and I needed
> this API to finish my unit test.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
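As an aside on the reflection point above: calling a newer filesystem API such as setStoragePolicy without a compile-time dependency is typically done by looking the method up at runtime and skipping it when absent. The following is only an illustrative sketch of that general pattern using the JDK's reflection API; `invokeIfPresent` is a hypothetical helper, and `StringBuilder` stands in for the filesystem object purely so the example is self-contained.

```java
import java.lang.reflect.Method;

public class ReflectiveCall {

    /**
     * Invoke target.methodName(arg) via reflection.
     * Returns true if the method exists and was invoked,
     * false if the method is not present on this class
     * (e.g. an older library version without the API).
     */
    static boolean invokeIfPresent(Object target, String methodName,
                                   Class<?> argType, Object arg) {
        try {
            Method m = target.getClass().getMethod(methodName, argType);
            m.invoke(target, arg);
            return true;
        } catch (NoSuchMethodException e) {
            // API not available in this version: degrade gracefully.
            return false;
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("reflective call failed", e);
        }
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        // StringBuilder#append(String) exists, so this succeeds.
        boolean appended = invokeIfPresent(sb, "append", String.class, "HOT");
        // No such method on StringBuilder, so this returns false.
        boolean missing = invokeIfPresent(sb, "setStoragePolicy", String.class, "HOT");
        System.out.println(appended + " " + missing + " " + sb);
    }
}
```

The drawback noted in the issue is that this only works smoothly for setters; an API like dfs.getStoragePolicy returns a Hadoop-specific type, which is awkward to handle reflectively and was part of the reason for requiring Hadoop 2.6.0 directly.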