Does HBase bulk load support incremental data?
How does it work if the incremental data's key range overlaps with data that
already exists?
Thanks
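In case it helps, a minimal sketch of how an incremental load is usually
driven; the HFile directory /tmp/hfiles_increment and the table name my_table
are hypothetical, and this assumes the HFiles were produced by
HFileOutputFormat2:

  # run from the command line; moves the HFiles into the table's regions
  hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
      /tmp/hfiles_increment my_table

As far as I know, bulk load does not merge or rewrite existing data:
overlapping cells simply become additional KeyValues, and the normal
timestamp/version semantics decide what a read returns.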
COMPRESSION => 'LZO' or COMPRESSION => 'NONE'
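For illustration, a minimal HBase shell sketch of where that scheme is
declared; table 't1' and family 'f1' are hypothetical:

  create 't1', {NAME => 'f1', COMPRESSION => 'LZO'}
  describe 't1'

The COMPRESSION attribute in the describe output shows which codec, if any,
the table's store files use.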
2016-07-20 12:09 GMT+08:00 Ted Yu :
> What format are the one billion records saved in at the moment ?
>
> The answer would depend on the compression scheme used for the table:
>
> http://hbase.apache.org/bo
There is 100G of data holding one billion records.
If I save it to HBase,
what will the size reported by "hadoop fs -ls /user/hbase/data/default/table_name" be?
Best wishes.
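One way to measure it directly, assuming the path above is literal:

  # total on-disk footprint of the table (includes all store files)
  hadoop fs -du -s -h /user/hbase/data/default/table_name

The result depends on the table's COMPRESSION and DATA_BLOCK_ENCODING
settings, and on HBase's per-cell key overhead, so it can land well below or
above the raw 100G.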
second bulk load ?
>
> On Thu, Apr 28, 2016 at 9:16 PM, Jone Zhang
> wrote:
>
> > *1. How can I get the memory used by an HBase table?*
> > *2. Why does the HDFS size of an HBase table double when I use bulk load?*
> >
> > bulkload file to qimei_info
> >
> > 101.7 G /user
*1. How can I get the memory used by an HBase table?*
*2. Why does the HDFS size of an HBase table double when I use bulk load?*
bulkload file to qimei_info
101.7 G /user/hbase/data/default/qimei_info
bulkload same file to qimei_info again
203.3 G /user/hbase/data/default/qimei_info
hbase(main):001:0> describe 'qi
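A hedged sketch for checking both points, assuming the table keeps a single
version per cell: loading the same HFiles twice simply adds a second set of
store files, so the HDFS footprint doubles until a major compaction rewrites
the stores and should collapse the duplicate cells. Memstore and heap figures
are visible from the shell:

  # in the HBase shell
  major_compact 'qimei_info'
  status 'detailed'    # per-regionserver heap and per-region memstore sizes
  # afterwards, from the command line
  hadoop fs -du -s -h /user/hbase/data/default/qimei_info

The exact field names in the status output vary between releases, so treat the
metrics as version-dependent.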