[jira] [Assigned] (HBASE-14061) Support CF-level Storage Policy

2016-04-11 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li reassigned HBASE-14061:
-

Assignee: Yu Li  (was: Victor Xu)

> Support CF-level Storage Policy
> ---
>
> Key: HBASE-14061
> URL: https://issues.apache.org/jira/browse/HBASE-14061
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile, regionserver
> Environment: hadoop-2.6.0
>Reporter: Victor Xu
>Assignee: Yu Li
> Attachments: HBASE-14061-master-v1.patch
>
>
> After reading [HBASE-12848|https://issues.apache.org/jira/browse/HBASE-12848] 
> and [HBASE-12934|https://issues.apache.org/jira/browse/HBASE-12934], I wrote 
> a patch to implement cf-level storage policy. 
> My main purpose is to improve random-read performance for some really hot 
> data, which usually resides in a certain column family of a big table.
> Usage:
> $ hbase shell
> > alter 'TABLE_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 
> > 'POLICY_NAME'}
> > alter 'TABLE_NAME', {NAME=>'CF_NAME', METADATA => 
> > {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}}
> HDFS's setStoragePolicy can only take effect when a new hfile is created in a 
> configured directory, so I had to make sub-directories (one per cf) under the 
> region's .tmp directory and set the storage policy on them.
> Besides, I had to upgrade the hadoop version to 2.6.0 because 
> dfs.getStoragePolicy cannot easily be called via reflection, and I needed 
> this API to finish my unit test.
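The reflection constraint above is the crux of the hadoop-2.6.0 requirement: setStoragePolicy(Path, String) only exists in the FileSystem API from 2.6.0 on, so code that must run against older Hadoop has to probe for the method at runtime. A minimal, hedged sketch of that probe-and-fall-back pattern follows; MockFs and the path string are stand-ins invented for illustration, not the real org.apache.hadoop.fs.FileSystem or Path:

```java
import java.lang.reflect.Method;

public class StoragePolicySketch {

  // Stand-in for org.apache.hadoop.fs.FileSystem on Hadoop >= 2.6.0,
  // exposing the same method shape (a String here replaces Path) so the
  // sketch is runnable without a Hadoop dependency.
  public static class MockFs {
    public String lastPolicy = null;

    public void setStoragePolicy(String path, String policy) {
      lastPolicy = policy;
    }
  }

  /**
   * Tries to apply a storage policy reflectively.
   * Returns true if the API exists and was invoked, false if the
   * runtime (e.g. Hadoop < 2.6.0) does not provide the method.
   */
  public static boolean trySetStoragePolicy(Object fs, String path, String policy) {
    try {
      Method m = fs.getClass().getMethod("setStoragePolicy", String.class, String.class);
      m.invoke(fs, path, policy);
      return true;
    } catch (NoSuchMethodException e) {
      return false; // API absent: keep the default storage policy
    } catch (ReflectiveOperationException e) {
      throw new RuntimeException("setStoragePolicy failed for " + path, e);
    }
  }

  public static void main(String[] args) {
    MockFs fs = new MockFs();
    // Hypothetical per-cf tmp directory, as described in the issue.
    System.out.println(trySetStoragePolicy(fs, "/hbase/data/default/t1/r1/.tmp/cf", "ONE_SSD"));
    System.out.println(fs.lastPolicy);
  }
}
```

Reading the policy back (dfs.getStoragePolicy) returns a BlockStoragePolicy object rather than a String, which is why the patch author found it much harder to wrap in reflection and chose to require 2.6.0 outright.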



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-14061) Support CF-level Storage Policy

2015-07-12 Thread Victor Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victor Xu reassigned HBASE-14061:
-

Assignee: Victor Xu

 Support CF-level Storage Policy
 ---

 Key: HBASE-14061
 URL: https://issues.apache.org/jira/browse/HBASE-14061
 Project: HBase
  Issue Type: Improvement
  Components: HFile, regionserver
 Environment: hadoop-2.6.0
Reporter: Victor Xu
Assignee: Victor Xu

 After reading [HBASE-12848|https://issues.apache.org/jira/browse/HBASE-12848] 
 and [HBASE-12934|https://issues.apache.org/jira/browse/HBASE-12934], I wrote 
 a patch to implement cf-level storage policy. 
 My main purpose is to improve random-read performance for some really hot 
 data, which usually resides in a certain column family of a big table.
 Usage:
 $ hbase shell
  alter 'TABLE_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 
  'POLICY_NAME'}
  alter 'TABLE_NAME', {NAME => 'CF_NAME', METADATA => 
  {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}}
 HDFS's setStoragePolicy can only take effect when a new hfile is created in a 
 configured directory, so I had to make sub-directories (one per cf) under the 
 region's .tmp directory and set the storage policy on them.
 Besides, I had to upgrade the hadoop version to 2.6.0 because 
 dfs.getStoragePolicy cannot easily be called via reflection, and I needed 
 this API to finish my unit test.


