Siva, you may want to take a look at Hannibal (https://github.com/sentric/hannibal).
Kim
On Wed, Apr 8, 2015 at 1:12 PM, Siva sbhavan...@gmail.com wrote:
Thanks Geovanie for your response.
On Mon, Apr 6, 2015 at 3:20 PM, Geovanie Marquez
geovanie.marq...@gmail.com
wrote:
Cloudera Manager if you are using a
Not sure if this will help, but it is worth taking a look at the master
hostname/IP used by ZK and making sure the same hostname/IP is in your
/etc/hosts. For example,
hbase zkcli
get /hbase/master
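If it is easier to script, the same check can be done from a small Java program. This is only a sketch: the quorum address is an assumption (take the real one from hbase-site.xml), and /hbase is just the default zookeeper.znode.parent.

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class CheckMasterZnode {
    public static void main(String[] args) throws Exception {
        // Assumed quorum address; use the value from hbase.zookeeper.quorum.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, new Watcher() {
            public void process(WatchedEvent event) { /* no-op */ }
        });
        // Same znode the shell command above reads.
        byte[] data = zk.getData("/hbase/master", false, null);
        // The payload is a serialized ServerName; the master hostname/IP is
        // readable inside it and should match what /etc/hosts resolves.
        System.out.println(new String(data));
        zk.close();
    }
}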
Kim
On Wed, Jan 29, 2014 at 12:11 PM, Fernando Iwamoto - Plannej
fernando.iwam...@plannej.com.br
Hello all,
Does HBase 0.94.x support Hadoop 2.2?
Because I got this exception when trying to connect to a table:
Exception in thread "main" java.lang.NoSuchMethodError:
org.apache.hadoop.net.NetUtils.getInputStream(Ljava/net/Socket;)Ljava/io/InputStream;
at
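For context on the NoSuchMethodError above: the stock HBase 0.94 artifacts were built against Hadoop 1, and Hadoop 2 changed the signature of NetUtils.getInputStream, so this is the classic symptom of running a hadoop-1 build of 0.94 on a Hadoop 2.2 cluster. The usual remedy at the time was to rebuild HBase against the Hadoop 2 profile, roughly mvn clean install -DskipTests -Dhadoop.profile=2.0 (the exact invocation is from memory, not from this thread), so that the client and the cluster agree on the Hadoop APIs.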
So maybe you will get 6 GB and 4 GB instead of 5 and 5.
Now, add some deletes, some compactions, some manual splits, and you will
end up with a scenario like the one you sent.
hth.
JM
2013/12/18 Kim Chew kchew...@gmail.com
Sorry if it may sound like an open-ended question, but I am wondering why
this scenario happened after many region splits:
https://github.com/sentric/hannibal/wiki/Usage#wiki-region_splits
It seems to me that the writes are concentrated in the first two
bars (regions) after the splits.
Thanks.
I am wondering if there is a constraint on the number of regions that a table
can have. For example, if I have a table that grows very fast, so its regions
keep splitting, is it possible that the table ends up with as many regions as
it can get until all the resources run out?
Thanks.
Kim
the backing array only holds one KeyValue (and the buffer size and the
KeyValue length should match).
Does that make sense? I know this can be a bit confusing.
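To make the one-KeyValue-per-array case concrete, here is a small standalone sketch (my own example, not code from this thread). A freshly built KeyValue owns its whole backing array, so the two lengths agree; a KV that merely points into a larger shared buffer would not, and has to be read through getOffset() and getLength().

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

public class KeyValueLengthDemo {
    public static void main(String[] args) {
        KeyValue kv = new KeyValue(Bytes.toBytes("row1"), Bytes.toBytes("cf"),
                Bytes.toBytes("q"), System.currentTimeMillis(),
                Bytes.toBytes("value"));

        // The backing array holds exactly this one KeyValue, so these match.
        System.out.println("getLength()        = " + kv.getLength());
        System.out.println("getBuffer().length = " + kv.getBuffer().length);
        System.out.println("getOffset()        = " + kv.getOffset()); // 0 here
    }
}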
-- Lars
- Original Message -
From: Kim Chew kchew...@gmail.com
To: user@hbase.apache.org; lars hofhansl la...@apache.org
Hello,
I have a strange situation that I can't wrap my head around. Say, for
example, I have a KeyValue instance; shouldn't
myKV.getLength() == myKV.getBuffer().length ?
Given that, getLength() returns Length of bytes this KeyValue occupies
in
25, 2013 at 2:06 AM, Kim Chew kchew...@gmail.com wrote:
Could you show me where it is done?
Thanks a lot.
Kim
We use that buffer and pass it up the chain without making any further
copy of the KV.
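(A practical aside from me, not part of Lars's reply: when a KeyValue does share a larger RPC buffer, the way to get exactly its bytes is to honor the offset and length rather than taking the whole backing array.)

import java.util.Arrays;
import org.apache.hadoop.hbase.KeyValue;

public final class KvBytes {
    // Copy out exactly the bytes this KeyValue occupies, regardless of how
    // big the shared backing buffer happens to be.
    public static byte[] of(KeyValue kv) {
        return Arrays.copyOfRange(kv.getBuffer(), kv.getOffset(),
                kv.getOffset() + kv.getLength());
    }
}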
-- Lars
- Original Message -
From: Kim Chew kchew...@gmail.com
To: user@hbase.apache.org
Cc:
Sent: Wednesday, September 25, 2013 12:06 AM
20 10:44:44 PDT 2013 Stopping hbase (via master)
Thanks.
Kim
On Thu, Sep 19, 2013 at 9:46 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Hi Kim,
Oracle JDK? Or OpenJDK?
Anything on the hbase .out file?
JM
2013/9/19 Kim Chew kchew...@gmail.com
Hi Jean-Marc,
JDK 1.7
I googled it a bit, and it seems to be caused by a bad port; it probably has
nothing to do with HBase itself.
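For what it's worth, the pair of lines in the .out file ("transport error 202: bind failed: Address already in use" and "ERROR: JDWP") comes from the JVM's remote-debugging agent rather than from HBase: if hbase-env.sh enables debugging with something like -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=<port>, the short-lived JVM that stop-hbase.sh spawns inherits that option, fails to bind the port the running daemon already owns, and aborts. (The option string here is the generic JDWP syntax, not copied from this thread's configuration.)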
Kim
On Fri, Sep 20, 2013 at 11:44 AM, Kim Chew kchew...@gmail.com wrote:
Hi Jean-Marc,
From the .out file,
ERROR: transport error 202: bind failed: Address already in use
ERROR: JDWP
wrote:
Hi Kim,
You should have a .log file but also a .out file. It's the .out file I was
wondering about.
Also, are you facing this issue only when stopping HBase? Is HBase working
fine the rest of the time?
JM
2013/9/20 Kim Chew kchew...@gmail.com
Hi Jean-Marc,
java version
the
process from shutting down.
-- Lars
From: Kim Chew kchew...@gmail.com
To: user@hbase.apache.org
Sent: Friday, September 20, 2013 11:44 AM
Subject: Re: Stopping hbase results in core dump.
Hi Jean-Marc,
From the .out file,
ERROR: transport error 202
Hello there,
I use stop-hbase.sh to shut down HBase, but I always get a core dump:
stopping hbase./home/kchew/hbase-0.94.8/bin/stop-hbase.sh: line 58: 55477
Aborted (core dumped) nohup nice -n ${HBASE_NICENESS:-0}
$HBASE_HOME/bin/hbase --config ${HBASE_CONF_DIR} master stop $@
Hi Jean-Marc,
JDK 1.7 and hbase-0.94.8
Kim
On Thu, Sep 19, 2013 at 5:18 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Hi Kim,
Which java version are you using and which HBase version?
JM
2013/9/19 Kim Chew kchew...@gmail.com
Hello there,
I use stop-hbase.sh to shut
Hello there,
As titled. Also, I would like to know where the hbase-site.xml and
hbase-env.sh used by the RS are stored. I can see that the modified
hbase-env.sh is pushed to the RS, but not the hbase-site.xml. Thanks.
Kim
On Wed, Jul 24, 2013 at 5:01 PM, Kim Chew kchew...@gmail.com
Hello there,
I have an Endpoint CP which I deployed by changing the table schema using
the 'alter' command in hbase shell. I stored my CP in hdfs. My table has 4
regions, say,
region-1
region-2
region-3
region-4
Before I deployed a new CP, I used this command to remove the old CP (do I
have to
Sorry, I forgot to add that each region is located in a different RS.
On Wed, Jul 24, 2013 at 4:57 PM, Kim Chew kchew...@gmail.com wrote:
Suppose I have a table foo which spans four RS,
RS1
RS2
RS3
RS4
I have deployed my coprocessor to table foo and I invoke my coprocessor
using Batch.call and pass this object to HTable's coprocessorExec.
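The call pattern in question looks roughly like the sketch below; the protocol interface and the sum() method are made up for illustration and are not the actual classes from this thread.

import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.coprocessor.Batch;
import org.apache.hadoop.hbase.ipc.CoprocessorProtocol;

public class InvokeEndpoint {

    // Hypothetical 0.94-style endpoint protocol, standing in for whatever the
    // deployed endpoint class actually implements.
    public interface SummingProtocol extends CoprocessorProtocol {
        long sum(byte[] family, byte[] qualifier) throws IOException;
    }

    public static Map<byte[], Long> sumEverywhere(HTable table,
            final byte[] family, final byte[] qualifier) throws Throwable {
        // coprocessorExec fans the call out to every region in the key range
        // (null/null should cover the whole table) and returns one entry per region.
        return table.coprocessorExec(SummingProtocol.class, null, null,
                new Batch.Call<SummingProtocol, Long>() {
                    public Long call(SummingProtocol instance) throws IOException {
                        return instance.sum(family, qualifier);
                    }
                });
    }
}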
Somehow RS2 has died; it seems like I am at the mercy of the RPC after it
Another thing is,
2013-07-15 05:15:58,764 WARN org.apache.hadoop.ipc.HBaseServer:
Incorrect header or version mismatch from 127.0.0.1:46149 got version 47
expected version 3
It seems like you are using a different version of HBase when compiling your
code.
Kim
On Mon, Jul 15, 2013 at 5:40 AM,
No, an Endpoint coprocessor can be deployed via configuration only.
In hbase-site.xml, there should be an entry like this:
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>myEndpointImpl</value>
</property>
Also, you have to let HBase know where to find your class, so add your jar to
the classpath in hbase-env.sh.
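(Typically that means a line along the lines of export HBASE_CLASSPATH=/path/to/my-endpoint.jar in hbase-env.sh on every region server, where the jar path is only a placeholder, followed by a restart of the region servers so the class is picked up.)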
I think I have run into a situation similar to Pavel's.
My method returns Map<Long, ArrayList<Foo>>, where Foo is

public class Foo implements Writable {
    String something;
    long counter1;
    long counter2;
    // ... more fields
}
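For completeness, here is a minimal sketch of what such a Writable usually has to provide so HBase's RPC can serialize it: a public no-argument constructor plus matching write()/readFields(). The field handling below is my assumption, not Pavel's or Kim's actual code.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

public class Foo implements Writable {
    String something;
    long counter1;
    long counter2;

    // Required: the RPC layer instantiates the object reflectively and then
    // calls readFields(), so a public no-arg constructor must exist.
    public Foo() {
    }

    public void write(DataOutput out) throws IOException {
        // Field order here must mirror readFields() exactly.
        out.writeUTF(something == null ? "" : something);
        out.writeLong(counter1);
        out.writeLong(counter2);
    }

    public void readFields(DataInput in) throws IOException {
        something = in.readUTF();
        counter1 = in.readLong();
        counter2 = in.readLong();
    }
}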
And I got the following exception when I called my