Re: Compacting same table

2016-04-03 Thread Sumit Nigam
Hi Ted, Thank you for your reply. Yes, the source is an internal team, which seems to have compacted the same table twice in succession and made this observation. To me, that is counter-intuitive, because the HBase code also has clear logic showing that it does not re-compact store
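[Editor's note: a minimal sketch of how one might time two back-to-back major compactions to check this observation, assuming an HBase 1.x client on the classpath; the table name test_table is hypothetical, not from the thread.]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.GetRegionInfoResponse.CompactionState;

    public class CompactTwice {
      // Hypothetical table name; substitute your own.
      private static final TableName TABLE = TableName.valueOf("test_table");

      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          for (int run = 1; run <= 2; run++) {
            long start = System.currentTimeMillis();
            admin.majorCompact(TABLE);   // request is asynchronous
            Thread.sleep(2000);          // give the request time to start
            // Poll until the table reports no compaction in progress.
            while (admin.getCompactionState(TABLE) != CompactionState.NONE) {
              Thread.sleep(1000);
            }
            System.out.println("Run " + run + " took "
                + (System.currentTimeMillis() - start) + " ms");
          }
        }
      }
    }

If the second run truly has no store files to rewrite, most of its measured time should be request/poll overhead rather than I/O.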

Re: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray error in newly built HBase

2016-04-03 Thread beeshma r
Hi Ted, Is any configuration modification needed to solve this issue, or do I need to upgrade the Hadoop version? Please advise :) Thanks, Beeshma On Sat, Apr 2, 2016 at 4:05 PM, beeshma r wrote: > Hi Ted/Jeremy > > My HBase version is HBase 2.0.0-SNAPSHOT > Hadoop version is
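[Editor's note: since the error comes from Hadoop's native CRC32 path, a first diagnostic is usually whether the native Hadoop library loads at all. A minimal sketch, assuming hadoop-common is on the classpath.]

    import org.apache.hadoop.util.NativeCodeLoader;

    public class NativeCheck {
      public static void main(String[] args) {
        // false typically means libhadoop was not found on
        // java.library.path, so native checksum code cannot be used.
        System.out.println("Native hadoop loaded: "
            + NativeCodeLoader.isNativeCodeLoaded());
      }
    }

Running 'hadoop checknative -a' gives a similar report from the command line.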

Indefinite pause while trying to cleanup data

2016-04-03 Thread Jorge Figueira
Hi, I have reported https://issues.apache.org/jira/browse/HBASE-15573. If you didn't try to find the cause, it is probably my fault, because from the start I didn't explain all the facts clearly and in detail. After 2 months with HBase

Connecting to HBase 1.0.3 via Java client stuck at zookeeper.ClientCnxn: Session establishment complete on server

2016-04-03 Thread Sachin Mittal
I am stuck connecting to HBase 1.0.3 via a simple Java client. The program hangs at: [main] zookeeper.ZooKeeper: Initiating client connection, connectString=127.0.0.1:2181 sessionTimeout=9 watcher=hconnection-0x1e67b8720x0, quorum=127.0.0.1:2181, baseZNode=/hbase
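[Editor's note: for comparison, a minimal client sketch assuming HBase 1.0.x client jars; the quorum settings mirror the log line above, and the table name my_table is hypothetical.]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Table;

    public class HBaseClientTest {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        // Fail fast instead of retrying indefinitely.
        conf.setInt("hbase.client.retries.number", 3);

        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("my_table"))) {
          // Hypothetical table; replace with one that exists.
          System.out.println("Connected, table: " + table.getName());
        }
      }
    }

A hang after "Session establishment complete" often means ZooKeeper is reachable but the master or region servers are not (for example, a hostname the client cannot resolve), so capping hbase.client.retries.number makes the failure visible instead of an indefinite wait.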

Re: Compacting same table

2016-04-03 Thread Ted Yu
bq. I have been informed. Can you disclose the source of such information? For hbase.hstore.compaction.kv.max, hbase-default.xml has: The maximum number of KeyValues to read and then write in a batch when flushing or compacting. Set this lower if you have big KeyValues and problems with
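[Editor's note: for reference, a small sketch of reading that property from a client-side Configuration; the fallback 10 matches the shipped hbase-default.xml default. The property takes effect on the region servers, normally via hbase-site.xml, not in client code.]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionKvMax {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Number of KeyValues read and written per batch while
        // flushing or compacting; lower it for very large cells.
        System.out.println("hbase.hstore.compaction.kv.max = "
            + conf.getInt("hbase.hstore.compaction.kv.max", 10));
      }
    }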

Compacting same table

2016-04-03 Thread Sumit Nigam
Hi, I have been informed that manually compacting the same HBase table takes the same amount of time even when done in quick succession. This seems counter-intuitive, because an already-compacted table should not take the same amount of time. Also, what is the use of hbase.hstore.compaction.kv.max
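[Editor's note: one way to check whether a second compaction actually has work to do is to compare per-region store file counts before and after each run. A sketch, assuming an HBase 1.x client; the cluster-status API shown is the 1.x one.]

    import java.util.Map;
    import org.apache.hadoop.hbase.ClusterStatus;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.RegionLoad;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class StoreFileCounts {
      public static void main(String[] args) throws Exception {
        try (Connection conn =
                 ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          ClusterStatus status = admin.getClusterStatus();
          for (ServerName server : status.getServers()) {
            for (Map.Entry<byte[], RegionLoad> e
                : status.getLoad(server).getRegionsLoad().entrySet()) {
              RegionLoad rl = e.getValue();
              // One line per region: name and current store file count.
              System.out.println(rl.getNameAsString()
                  + " storefiles=" + rl.getStorefiles());
            }
          }
        }
      }
    }

If every region already shows a single store file after the first run, the second run should have little to rewrite, which is why an identical runtime looks suspicious.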