There are different ways for you to achieve this:
1. preScannerOpen -> Here you will get the Scan object. You can add a new
Filter to the Scan object and pass the Scan object (or the
attribute you are looking for) into this Filter. The later scan op will use
this Filter, and within it you can filter cells.
2.
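A minimal sketch of option 1 against the HBase 1.x coprocessor API (the class name, the "filter-value" scan attribute, and the "cf"/"q" column are all hypothetical; adapt them to your schema):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.regionserver.RegionScanner;
import org.apache.hadoop.hbase.util.Bytes;

// Injects an extra Filter into every Scan before the region scanner opens.
public class AttributeFilterObserver extends BaseRegionObserver {
  @Override
  public RegionScanner preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> ctx,
      Scan scan, RegionScanner s) throws IOException {
    // "filter-value" is a hypothetical attribute the client sets on its Scan.
    byte[] wanted = scan.getAttribute("filter-value");
    if (wanted != null) {
      Filter extra = new SingleColumnValueFilter(
          Bytes.toBytes("cf"), Bytes.toBytes("q"), CompareOp.EQUAL, wanted);
      Filter existing = scan.getFilter();
      // Preserve any filter the client already set on the Scan.
      scan.setFilter(existing == null ? extra : new FilterList(existing, extra));
    }
    return s; // returning the passed-in scanner keeps default scanner creation
  }
}
```

Returning the RegionScanner unchanged lets HBase build the scanner as usual; only the Scan's filter chain is modified, so cells are dropped server-side during the scan.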
I have created a cluster with 3 region servers and used YCSB to test the
performance of the cluster:
https://github.com/brianfrankcooper/YCSB/tree/master/hbase098
I created the table as mentioned in the link with equal (pre-)splits across
the regions, but I noted that the load isn't equally distributed.
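For reference, the linked YCSB hbase098 README pre-splits the table in the hbase shell roughly like this (the split count of 30 here is an assumption; a common rule of thumb is about 10 regions per region server):

```
hbase(main):001:0> n_splits = 30
hbase(main):002:0> create 'usertable', 'family', {SPLITS => (1..n_splits).map {|i| "user#{1000+i*(9999-1000)/n_splits}"}}
```

Since YCSB generates keys in the user1000-user9999 range, splits computed this way should spread the load evenly; if the load is still skewed, it is worth checking that the split points actually match the key distribution of the chosen workload.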
*1. How can I get the memory used by an HBase table?*
*2. Why does the HDFS size of an HBase table double when I use bulkload?*
Bulkload the file to qimei_info:
101.7 G /user/hbase/data/default/qimei_info
Bulkload the same file to qimei_info again:
203.3 G /user/hbase/data/default/qimei_info
hbase(main):001:0> describe
Hi, I'm thinking of setting up cluster replication in HBase, but I didn't find
any information on version compatibility between the clusters. Is there any
information on that?
I.e., can I replicate from a 0.94 cluster to a 1.2.1 cluster?
The reason I'd want to is to perform an upgrade at the same time.
Phoenix does not support HBase 1.2 clusters right now; only the 4.8 release of
Phoenix will support running Phoenix with HBase 1.2.
See https://issues.apache.org/jira/browse/PHOENIX-2833
What version of Phoenix did you deploy? It might be the case that the Phoenix
coprocessors are just throwing exceptions.
Here is sample scan output from a working cluster:
hbase:namespace,,146075636.acc7841bcbacafacf336e48bb14794de. column=info:regioninfo, timestamp=1460756360969, value={ENCODED => acc7841bcbacafacf336e48bb14794de, NAME => 'hbase:namespace,,146075636.acc7841bcbacafacf336e48bb14794de.',
Hi,
I have the following entries in my hbase:meta table:
SYSTEM.CATALOG,,1461831992343.a6daf63bde1f1456ca4acee228b8f5fe. column=info:regioninfo, timestamp=1461831993549, value={ENCODED => a6daf63bde1f1456ca4acee228b8f5fe, NAME => 'SYSTEM