There is a property you can tune to lower the default number of retries from 10 to
any number you like, such as 2.
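For reference, this is usually done through hbase.client.retries.number in the
client-side hbase-site.xml (or set programmatically on the Configuration); the
value 2 below is just an illustrative choice, not a recommendation:

```xml
<!-- client-side hbase-site.xml: lower the retry count from the default -->
<property>
  <name>hbase.client.retries.number</name>
  <value>2</value>
</property>
```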
On Wednesday, April 9, 2014, kanwal wrote:
> I'm currently running into an issue on my local setup where my application
> is
> unable to connect to the HBase table, but I'm successfully able to query th
Bear in mind that each region will return its top N; you will then have to run
another top-N pass in your client code. This introduces a numerical error: a
top-N of top-Ns.
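The client-side merge step can be sketched like this (plain Java, independent of
the HBase API; the per-region lists and sizes are purely illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.PriorityQueue;

public class TopNMerge {
    // Merge each region's local top-N into a global top-N using a
    // min-heap bounded at n entries.
    static List<Integer> globalTopN(List<List<Integer>> perRegionTopN, int n) {
        PriorityQueue<Integer> heap = new PriorityQueue<>(); // min-heap
        for (List<Integer> regionResult : perRegionTopN) {
            for (int v : regionResult) {
                heap.offer(v);
                if (heap.size() > n) heap.poll(); // drop the smallest
            }
        }
        List<Integer> top = new ArrayList<>(heap);
        top.sort(Collections.reverseOrder());
        return top;
    }

    public static void main(String[] args) {
        List<List<Integer>> regions = Arrays.asList(
            Arrays.asList(90, 70, 40),   // region 1's local top-3
            Arrays.asList(85, 80, 10),   // region 2's local top-3
            Arrays.asList(95, 20, 15));  // region 3's local top-3
        System.out.println(globalTopN(regions, 3)); // [95, 90, 85]
    }
}
```

Note this merge is exact when ranking raw values, but if the top-N is over
counts aggregated across regions, a per-region cutoff can drop items that would
have made the global top-N, which is the "top on top" error mentioned above.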
On Thursday, April 10, 2014, Bogala, Chandra Reddy
wrote:
> Hi,
> I am planning to write an endpoint coprocessor to calculate top-N results for
Here was the change I made to pom.xml in order to build against
0.98.1-hadoop1:
http://pastebin.com/JEX3A0kR
I still got some compilation errors, such as:
[ERROR]
/Users/tyu/twitbase/src/main/java/HBaseIA/TwitBase/hbase/RelationsDAO.java:[156,14]
cannot find symbol
[ERROR] symbol : method
coproc
Generally (and this is database lore, not just HBase), if you use an LRU-type
cache, your working set does not fit into the cache, and you repeatedly scan
this working set, you have created the worst-case scenario. The database does
all the work of caching the blocks, and subsequent scans will need bl
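That worst case is easy to reproduce in miniature with a toy LRU built on
LinkedHashMap (not HBase's BlockCache): with a 10-entry cache and an 11-block
working set scanned in order, every single access misses, because each insert
evicts exactly the block the scan will ask for next.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruScanThrash {
    // Count cache misses for `passes` sequential scans over `workingSet`
    // blocks, with an LRU cache of the given capacity.
    static int countMisses(final int capacity, int workingSet, int passes) {
        Map<Integer, Boolean> cache =
            new LinkedHashMap<Integer, Boolean>(16, 0.75f, true) { // access-order
                @Override
                protected boolean removeEldestEntry(Map.Entry<Integer, Boolean> eldest) {
                    return size() > capacity; // evict least-recently-used
                }
            };
        int misses = 0;
        for (int p = 0; p < passes; p++) {
            for (int b = 0; b < workingSet; b++) {
                if (cache.get(b) == null) { // miss: load the block
                    misses++;
                    cache.put(b, Boolean.TRUE);
                }
            }
        }
        return misses;
    }

    public static void main(String[] args) {
        // Working set one block larger than the cache: all 110 accesses miss.
        System.out.println(countMisses(10, 11, 10)); // 110
        // Working set that fits: only the 10 cold misses of the first pass.
        System.out.println(countMisses(10, 10, 10)); // 10
    }
}
```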
It should be the newest version of each value.
Cheers
On Thu, Apr 10, 2014 at 9:55 AM, gortiz wrote:
> Another little question: with the filter I'm using, do I check all the
> versions, or just the newest? Because I'm wondering whether, when I do a scan
> over the whole table, I look for the value "5"
Another little question: with the filter I'm using, do I check all
the versions, or just the newest? Because I'm wondering whether, when I do a
scan over the whole table, I'm looking for the value "5" in the entire dataset
or just in the newest version of each value.
On 10/04/14 16:52, gor
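The distinction can be shown with a toy model (plain Java, not the HBase client
API; it assumes the scan surfaces only the newest version of each cell unless
max versions is raised, which matches the answer above):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class VersionScanToy {
    // Each row maps to its versions, stored newest-first. A "scan" only
    // surfaces at most maxVersions per row, and the "filter" (a value
    // match) only sees the surfaced versions.
    static List<String> scan(Map<String, List<String>> table, int maxVersions, String match) {
        List<String> hits = new ArrayList<>();
        for (Map.Entry<String, List<String>> row : table.entrySet()) {
            List<String> versions = row.getValue();
            List<String> visible = versions.subList(0, Math.min(maxVersions, versions.size()));
            if (visible.contains(match)) {
                hits.add(row.getKey());
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        Map<String, List<String>> table = new LinkedHashMap<>();
        table.put("row1", Arrays.asList("7", "5", "5")); // newest first: 7, older 5s
        table.put("row2", Arrays.asList("5", "3"));
        System.out.println(scan(table, 1, "5"));   // [row2]: only newest versions checked
        System.out.println(scan(table, 100, "5")); // [row1, row2]: all versions checked
    }
}
```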
I was trying to check the behaviour of HBase. The cluster is a group of
old computers: one master and five slaves, each with 2 GB, so 12 GB in
total.
The table has a column family with 1000 columns, and each column has 100
versions.
There's another column family with four columns and one image o
Can you give us a bit more information:
HBase release you're running
What filters are used for the scan
Thanks
On Apr 10, 2014, at 2:36 AM, gortiz wrote:
> I got this error when I execute a full scan with filters on a table.
>
> Caused by: java.lang.RuntimeException:
> org.apache.hadoop.h
Here is a reference implementation for aggregation:
http://search-hadoop.com/c/HBase:hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/AggregateImplementation.java||Hbase+aggregation+endpoint
You can find it in hbase source code.
Cheers
On Apr 10, 2014, at 4:29 AM, "Bogala, Chandra
Hi,
I am planning to write an endpoint coprocessor to calculate top-N results for my
use case. I got confused by the old APIs and the new APIs.
I followed the links below and tried to implement it, but it looks like the APIs
have changed a lot. I don't see many of these classes in the HBase jars. We are
using HBase 0.96.
Can any
I got this error when I execute a full scan with filters on a table.
Caused by: java.lang.RuntimeException:
org.apache.hadoop.hbase.regionserver.LeaseException:
org.apache.hadoop.hbase.regionserver.LeaseException: lease
'-4165751462641113359' does not exist
at
org.apache.hadoop.hbase.r
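For what it's worth, a LeaseException during a slow, filter-heavy full scan
usually means the client took longer than the scanner lease period between
next() calls, so the region server expired the lease. One common mitigation
(besides lowering scanner caching so each next() does less work) is to raise
the lease period; the property name below is the 0.94-era one (0.96+ splits the
client side into hbase.client.scanner.timeout.period), and the value is
illustrative, not a recommendation:

```xml
<!-- hbase-site.xml: allow more time between scanner next() calls -->
<property>
  <name>hbase.regionserver.lease.period</name>
  <value>300000</value> <!-- 5 minutes; the default is 60000 -->
</property>
```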
Hi
Found the solution. I had used the non-Hortonworks Hadoop lib "hadoop-core-1.2.1.jar".
I removed hadoop-core-1.2.1.jar and copied:
cp /usr/lib/hadoop/hadoop-common-2.2.0.2.0.6.0-76.jar ./libs/
[hbase@sandbox hbase_connect]$ javac -cp
./libs/*:./libs/hbase-0.96.2-hadoop2/lib/* Hbase_connect.java
[hbase@sandb
Hi
I have java code:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.