I gave it a few more shots and it was back to normal...
Bulk loading is faster but, more importantly (for us), it's more stable and
doesn't cause full GCs in the region server, even when loading more than
usual.
The map time remains the same. For the reduce step we chose to write out a
sequence file, so it's quite
I think you can call the setWriteToWAL() method with false to reduce the
amount of log info. But you risk losing data if your cluster goes down.
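For illustration, a minimal sketch assuming a 0.94-era HBase client API; the table, column family, and qualifier names are placeholders, not from the thread:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "mytable");   // "mytable" is a placeholder

Put put = new Put(Bytes.toBytes(1L));
put.setWriteToWAL(false); // skip the write-ahead log: less log traffic and IO,
                          // but edits not yet flushed to disk are lost if a
                          // region server dies
put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
table.put(put);
```

The trade-off is exactly what the reply warns about: the WAL is what makes a put recoverable after a crash, so disabling it is only sensible for data you can re-load.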
regards!
yong
On Fri, Nov 23, 2012 at 7:58 AM, iwannaplay games
wrote:
> Hi,
>
> Everytime i query hbase or hive ,there is a significant growth in my
> log file
Hi,
If your row keys really are longs, i.e. stored as Bytes.toBytes(long)
and NOT Long.toString().getBytes(), then you could just use:
Filter rowFilter = new RowFilter(CompareOp.LESS, new
BinaryComparator(Bytes.toBytes(20L)));
You can verify how row keys are stored by doing:
scan '', {LIMIT =>
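To see why the two storage formats behave so differently, here is a plain-Java sketch (no HBase on the classpath; the helper longToBytes is a stand-in that mirrors what HBase's Bytes.toBytes(long) produces, an 8-byte big-endian encoding):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class RowKeyEncoding {
    // Stand-in for HBase's Bytes.toBytes(long): 8-byte big-endian encoding.
    static byte[] longToBytes(long v) {
        return ByteBuffer.allocate(8).putLong(v).array();
    }

    public static void main(String[] args) {
        byte[] asLong = longToBytes(20L);                // key stored as a long
        byte[] asString = Long.toString(20L).getBytes(); // key stored as "20"

        System.out.println(Arrays.toString(asLong));   // [0, 0, 0, 0, 0, 0, 0, 20]
        System.out.println(Arrays.toString(asString)); // [50, 48]
    }
}
```

With long-encoded keys the bytes sort numerically, so a BinaryComparator against Bytes.toBytes(20L) gives you "rowkey < 20" directly; with string-encoded keys a PrefixFilter on "1" matches 1, 10, 11, ... 19, because they all share the leading byte '1'.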
Hello users,
I have 20 rows in an HBase table, and the row key is in long format,
running from 1 to 20. I want to query this table with a condition
like "the row key starts with 1". I tried |PrefixFilter| and
|BinaryPrefixComparator|, but they work fine only if the row key is in
string