Yes there is:
<groupId>org.apache.hbase</groupId>
<artifactId>hbase</artifactId>
<version>0.92.1</version>
Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE
But I think there's a direct relation between improving performance for
large scans and the memory given to the memstore. As far as I understand,
the memstore only acts as a cache for write operations.
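That understanding is right: the memstore buffers writes, while scans are served from the read-side block cache. A sketch of the two hbase-site.xml settings involved, assuming the 0.9x property names; the values below are illustrative, not recommendations:

```xml
<!-- hbase-site.xml: illustrative values only -->
<property>
  <name>hfile.block.cache.size</name>
  <!-- fraction of region server heap given to the read-side block cache -->
  <value>0.4</value>
</property>
<property>
  <name>hbase.regionserver.global.memstore.upperLimit</name>
  <!-- fraction of heap all memstores together may use (0.9x name) -->
  <value>0.35</value>
</property>
```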
On 09/04/14 23:44, Ted Yu wrote:
Didn't quite get what you mean, Asaf.
If you're talking about HBASE-5349, please
Hi
I have java code:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import
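The import list above suggests table creation through HBaseAdmin. A minimal sketch of what such a class might look like against the 0.96 client API; the table name "test_table" and family "cf" are made up for illustration, and a running cluster plus the HBase jars on the classpath are assumed:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class HbaseConnect {
    public static void main(String[] args) throws Exception {
        // Picks up hbase-site.xml (ZooKeeper quorum etc.) from the classpath.
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
            HTableDescriptor desc =
                new HTableDescriptor(TableName.valueOf("test_table"));
            desc.addFamily(new HColumnDescriptor("cf"));
            if (!admin.tableExists("test_table")) {
                admin.createTable(desc);
            }
        } finally {
            admin.close();
        }
    }
}
```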
Hi
Found the solution. I had used the non-Hortonworks Hadoop lib hadoop-core-1.2.1.jar.
I removed hadoop-core-1.2.1.jar and copied:
cp /usr/lib/hadoop/hadoop-common-2.2.0.2.0.6.0-76.jar ./libs/
[hbase@sandbox hbase_connect]$ javac -cp
./libs/*:./libs/hbase-0.96.2-hadoop2/lib/* Hbase_connect.java
I got this error when I executed a full scan with filters over a table.
Caused by: java.lang.RuntimeException:
org.apache.hadoop.hbase.regionserver.LeaseException:
org.apache.hadoop.hbase.regionserver.LeaseException: lease
'-4165751462641113359' does not exist
at
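A LeaseException during a long scan usually means the client spent longer processing a batch of rows than the region server's scanner lease allows (`hbase.regionserver.lease.period` on the server side in 0.9x). One hedged mitigation, assuming the 0.9x client API, is to fetch fewer rows per RPC so each `next()` round-trip completes well inside the lease period:

```java
import org.apache.hadoop.hbase.client.Scan;

public class LeaseFriendlyScan {
    public static Scan build() {
        Scan scan = new Scan();
        // Fewer rows per next() call means less client-side work per batch,
        // so the scanner lease is renewed more often. 100 is an example value.
        scan.setCaching(100);
        return scan;
    }
}
```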
Hi,
I am planning to write an endpoint coprocessor to calculate top-N results for my
use case. I got confused between the old APIs and the new APIs.
I followed the links below and tried to implement it, but it looks like the APIs
changed a lot. I don't see many of these classes in the HBase jars. We are using HBase 0.96.
Can
Here is a reference implementation for aggregation :
http://search-hadoop.com/c/HBase:hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/AggregateImplementation.java||Hbase+aggregation+endpoint
You can find it in hbase source code.
Cheers
On Apr 10, 2014, at 4:29 AM, Bogala, Chandra
Can you give us a bit more information:
HBase release you're running
What filters are used for the scan
Thanks
On Apr 10, 2014, at 2:36 AM, gortiz gor...@pragsis.com wrote:
I got this error when I executed a full scan with filters over a table.
Caused by: java.lang.RuntimeException:
I was trying to check the behaviour of HBase. The cluster is a group of
old computers: one master and five slaves, each with 2 GB, so 12 GB in
total.
The table has a column family with 1000 columns, each column with 100
versions.
There's another column family with four columns and one image.
Another little question: with the filter I'm using, do I check all
the versions, or just the newest? I'm wondering whether, when I do a
scan over the whole table, I look for the value 5 in the whole dataset or
only in the newest version of each value.
On 10/04/14 16:52,
It should be the newest version of each value.
Cheers
On Thu, Apr 10, 2014 at 9:55 AM, gortiz gor...@pragsis.com wrote:
Another little question is, when the filter I'm using, Do I check all the
versions? or just the newest? Because, I'm wondering if when I do a scan
over all the table, I look
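For reference, a Scan returns only the newest version of each cell unless you explicitly ask for more. A small sketch against the 0.9x client API:

```java
import org.apache.hadoop.hbase.client.Scan;

public class VersionScanSketch {
    public static Scan allVersions() {
        Scan scan = new Scan();
        // By default a scan sees only the newest version per cell;
        // raise the limit to have filters checked against older versions too.
        scan.setMaxVersions(100); // 100 matches the versions kept in this table
        return scan;
    }
}
```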
Generally (and this is database lore, not just HBase), if you use an LRU-type
cache, your working set does not fit into the cache, and you repeatedly scan
that working set, you have created the worst-case scenario: the database does
all the work of caching the blocks, and subsequent scans will need
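For repeated full scans of a working set larger than the cache, one common mitigation (a sketch, assuming the 0.9x client API) is to stop the scan from going through the block cache at all:

```java
import org.apache.hadoop.hbase.client.Scan;

public class FullScanSketch {
    public static Scan fullTableScan() {
        Scan scan = new Scan();
        // Large sequential scans should not evict the hot blocks
        // that point reads depend on; bypass the LRU block cache.
        scan.setCacheBlocks(false);
        return scan;
    }
}
```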
Here was the change I made to pom.xml in order to build against
0.98.1-hadoop1:
http://pastebin.com/JEX3A0kR
I still got some compilation errors, such as:
[ERROR]
/Users/tyu/twitbase/src/main/java/HBaseIA/TwitBase/hbase/RelationsDAO.java:[156,14]
cannot find symbol
[ERROR] symbol : method