Re: scan is slower after bulk load

2012-11-23 Thread Amit Sela
I gave it a few more shots and it was back to normal... Bulk loading is faster but, more importantly (for us), it's more stable and doesn't cause full GCs in the region server even when loading more than usual. The map time remains the same. For the reduce we chose to write out a sequence file, so it's quit…

Re: Log files occupy lot of Disk size

2012-11-23 Thread yonghu
I think you can call setWriteToWAL(false) to reduce the amount of log info, but you risk losing data if your cluster goes down. regards! yong On Fri, Nov 23, 2012 at 7:58 AM, iwannaplay games wrote: > Hi, > > Every time I query hbase or hive, there is significant growth in my > log file…
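A minimal sketch of that suggestion with the 0.94-era Java client (the table, family, and qualifier names below are placeholders, not from the thread); the Put is written without a WAL entry, so it is faster but is lost if the region server crashes before the memstore is flushed:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class NoWalPut {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable");   // placeholder table name
        Put put = new Put(Bytes.toBytes("row1"));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
        put.setWriteToWAL(false);   // skip the write-ahead log for this edit
        table.put(put);
        table.close();
    }
}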

Re: PrefixFilter is not working with long in HBase

2012-11-23 Thread David Koch
Hi, If your row keys really are longs, i.e. stored as Bytes.toBytes(someLong) and NOT Long.toString().getBytes(), then you could just use: Filter rowFilter = new RowFilter(CompareOp.LESS, new BinaryComparator(Bytes.toBytes(20L))); You can verify how row keys are stored by doing: scan '', {LIMIT =>…
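Fleshed out, that suggestion looks roughly like the sketch below (0.94-era client API; the table name "mytable" is a placeholder). The RowFilter with CompareOp.LESS and a BinaryComparator keeps only rows whose 8-byte big-endian key sorts below Bytes.toBytes(20L):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class RowFilterScan {
    public static void main(String[] args) throws Exception {
        HTable table = new HTable(HBaseConfiguration.create(), "mytable");
        // Keep rows whose binary key is less than the 8-byte encoding of 20L.
        Filter rowFilter = new RowFilter(CompareOp.LESS,
                new BinaryComparator(Bytes.toBytes(20L)));
        Scan scan = new Scan();
        scan.setFilter(rowFilter);
        ResultScanner scanner = table.getScanner(scan);
        try {
            for (Result r : scanner) {
                System.out.println(Bytes.toLong(r.getRow()));
            }
        } finally {
            scanner.close();
            table.close();
        }
    }
}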

PrefixFilter is not working with long in HBase

2012-11-23 Thread vishnu
Hello users, I have 20 rows in an HBase table and the row key is a long, running from 1 to 20. I want to query this table with a condition like "the row key starts with 1". I tried PrefixFilter and BinaryPrefixComparator, but it works fine only if the row key is in stri…
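For context on why a "starts with 1" prefix does not match here, a small illustration (mine, not from the thread): Bytes.toBytes(long) produces a fixed 8-byte big-endian array, so the binary keys for 1L and 12L share only leading zero bytes; a textual '1' prefix only exists if the keys were written via Long.toString().getBytes():

import org.apache.hadoop.hbase.util.Bytes;

public class LongKeyBytes {
    public static void main(String[] args) {
        // Binary encodings of the longs 1 and 12: no shared "1" prefix.
        System.out.println(Bytes.toStringBinary(Bytes.toBytes(1L)));   // \x00\x00\x00\x00\x00\x00\x00\x01
        System.out.println(Bytes.toStringBinary(Bytes.toBytes(12L)));  // \x00\x00\x00\x00\x00\x00\x00\x0C
        // The single byte of the character '1', which a string-keyed table would store.
        System.out.println(Bytes.toStringBinary(Bytes.toBytes("1")));  // 1
    }
}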