Does HBase bulkload support incremental data?

2018-03-27 Thread Jone Zhang
Does HBase bulkload support incremental data? How does it work if the incremental data's key range overlaps with the data that already exists? Thanks
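For context, a minimal sketch of how an incremental bulk load is typically completed with the HBase 1.x client API; the table name and HFile directory below are hypothetical. LoadIncrementalHFiles moves pre-generated HFiles into the table's region directories, splitting any file that spans a region boundary; cells with overlapping keys are then resolved at read time by timestamp and sequence id, like any other write.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

    public class IncrementalBulkLoad {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            TableName name = TableName.valueOf("qimei_info"); // hypothetical table
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin();
                 Table table = conn.getTable(name);
                 RegionLocator locator = conn.getRegionLocator(name)) {
                // Moves the HFiles under /tmp/hfiles (hypothetical path) into the
                // table's region directories, splitting files that cross region
                // boundaries. Existing data is untouched; overlapping cells coexist.
                LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
                loader.doBulkLoad(new Path("/tmp/hfiles"), admin, table, locator);
            }
        }
    }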

Re: How can I estimate HBase storage space?

2016-07-20 Thread Jone Zhang
COMPRESSION => 'LZO' or COMPRESSION => 'NONE'

2016-07-20 12:09 GMT+08:00 Ted Yu:
> What format are the one billion records saved in at the moment?
>
> The answer would depend on the compression scheme used for the table:
>
> http://hbase.apache.org/bo
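Since the answer hinges on the table's compression scheme, a minimal sketch of one way to check it with the HBase 1.x client API (the table name is hypothetical); COMPRESSION is a per-column-family setting:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ShowCompression {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                HTableDescriptor desc = admin.getTableDescriptor(TableName.valueOf("qimei_info"));
                for (HColumnDescriptor family : desc.getColumnFamilies()) {
                    // Prints e.g. "f => LZO" or "f => NONE" per column family.
                    System.out.println(family.getNameAsString() + " => " + family.getCompressionType());
                }
            }
        }
    }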

How can I estimate HBase storage space?

2016-07-19 Thread Jone Zhang
There is 100G of data comprising one billion records. If I save it to HBase, what will be the size shown by "hadoop fs -ls /user/hbase/data/default/table_name"? Best wishes.
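For a rough estimate, assuming the figures above: 100G across one billion records is about 100 bytes of raw data per record. HBase stores each cell together with its full key (row key, column family, qualifier, timestamp, type), so the uncompressed on-disk size is usually larger than the raw input, while LZO or Snappy compression can pull it back down; loading a representative sample and measuring is the only reliable way to project the final size. For the measurement itself, "hadoop fs -du -s -h /user/hbase/data/default/table_name" reports the directory's total size more directly than -ls.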

Re: How can I get the memory used by an HBase table? Why does the HDFS size of an HBase table double when I use bulkload?

2016-05-02 Thread Jone Zhang
second bulk load ?

On Thu, Apr 28, 2016 at 9:16 PM, Jone Zhang wrote:

> 1. How can I get the memory used by an HBase table?
> 2. Why does the HDFS size of an HBase table double when I use bulkload?
>
> bulkload file to qimei_info
>
> 101.7 G /user
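One plausible explanation for the doubling, sketched here rather than confirmed in the thread: bulkloading the same file twice leaves two physical copies of every cell until a major compaction rewrites the store files, and if the column family keeps only one version, the duplicates are dropped at that point. A minimal way to trigger it with the HBase 1.x client API (the request is asynchronous):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class MajorCompact {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                // Major compaction rewrites each store's HFiles into one,
                // discarding duplicate and expired cell versions along the way.
                admin.majorCompact(TableName.valueOf("qimei_info"));
            }
        }
    }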

How can I get the memory used by an HBase table? Why does the HDFS size of an HBase table double when I use bulkload?

2016-04-28 Thread Jone Zhang
1. How can I get the memory used by an HBase table?
2. Why does the HDFS size of an HBase table double when I use bulkload?

bulkload file to qimei_info

101.7 G /user/hbase/data/default/qimei_info

bulkload same file to qimei_info again

203.3 G /user/hbase/data/default/qimei_info

hbase(main):001:0> describe 'qi
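On the first question, one way to approximate a table's memory footprint is to sum the memstore sizes of its regions; a minimal sketch against the HBase 1.x client API, assuming the table lives in the default namespace. Note this covers only the memstore write buffers; the block cache is shared across tables and is not broken out per table here.

    import org.apache.hadoop.hbase.ClusterStatus;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.RegionLoad;
    import org.apache.hadoop.hbase.ServerLoad;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TableMemstoreSize {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                long totalMb = 0;
                ClusterStatus status = admin.getClusterStatus();
                for (ServerName server : status.getServers()) {
                    ServerLoad load = status.getLoad(server);
                    for (RegionLoad region : load.getRegionsLoad().values()) {
                        // Region names start with "<table>," for tables in the
                        // default namespace; filter to the table of interest.
                        if (region.getNameAsString().startsWith("qimei_info,")) {
                            totalMb += region.getMemStoreSizeMB();
                        }
                    }
                }
                System.out.println("qimei_info memstore total: " + totalMb + " MB");
            }
        }
    }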