For #1, can you clarify whether your workload is read-heavy, write-heavy, or
a mixed read/write load?
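
The answer matters because the footprint splits differently: writes sit in the
memstore, reads in the block cache. If it is the memstore you are after, one
quick check (a sketch; on 1.x-era shells 'status' prints memstoreSizeMB per
region, exact field names may differ on your version) is:

  hbase(main):001:0> status 'detailed'

Block cache usage is tracked per RegionServer rather than per table, so for
that part you would look at each RegionServer's web UI.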

For #2, have you run a major compaction after the second bulk load?
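
For background on why the size doubles: each bulk load moves a complete set of
HFiles into the region directories, so loading the same file twice leaves two
copies of the data on HDFS, and with VERSIONS => '1' the duplicate cells are
only dropped when a major compaction rewrites the store files. A sketch of
what to run (table name and path taken from your mail):

  hbase(main):002:0> major_compact 'qimei_info'

  $ hdfs dfs -du -s -h /user/hbase/data/default/qimei_info

Once the compaction finishes (it runs asynchronously, and the replaced files
linger in the archive until the cleaner removes them), the footprint should
drop back toward the original 101.7 G.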

On Thu, Apr 28, 2016 at 9:16 PM, Jone Zhang <joyoungzh...@gmail.com> wrote:

> *1. How can I find out how much memory an HBase table is using?*
> *2. Why does the HDFS size of an HBase table double when I bulk-load the
> same file twice?*
>
> bulkload a file into qimei_info
>
> 101.7 G  /user/hbase/data/default/qimei_info
>
> bulkload the same file into qimei_info again
>
> 203.3 G  /user/hbase/data/default/qimei_info
>
> hbase(main):001:0> describe 'qimei_info'
> DESCRIPTION
>  'qimei_info', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW',
>  REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'LZO',
>  MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false',
>  BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> 1 row(s) in 1.4170 seconds
>
>
> *Best wishes.*
> *Thanks.*
>
