Hi Guys
We are currently trying to upgrade our HBase from 0.94 to 0.98. Our original
Pig setup no longer seems to work with the new HBase; it throws the
following class-not-found exception:
java.lang.NoClassDefFoundError:
org/apache/hadoop/hbase/filter/WritableByteArrayComparable
at java.lang.Class
Hello,
I am trying to debug coprocessor code on HBase 0.94.24. It seems to work
well on 0.94.5, but I cannot make it work on 0.94.24.
Here is a copy of some of the coprocessor init code:
public class TestEndpoint implements TestIface, HTableWrapper {
…
@Override
public void start(Coprocess
Thanks, Soumitra Kumar.
I don't know why you put hbase-protocol.jar in SPARK_CLASSPATH while adding
hbase-protocol.jar, hbase-common.jar, hbase-client.jar, and htrace-core.jar
in --jars, but it did work.
Actually, I put all these four jars in SPARK_CLASSPATH along with HBase conf
directory.
2014-10-
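For later readers, a minimal sketch of the setting described above (the four jar names are from this thread; the install paths /opt/hbase/lib and /etc/hbase/conf are placeholders, not from the original mails):

```shell
# Hedged sketch: put the four HBase-side jars plus the HBase conf directory
# on SPARK_CLASSPATH before launching the Spark job. Paths are placeholders;
# adjust to wherever your HBase is installed.
export SPARK_CLASSPATH=/opt/hbase/lib/hbase-protocol.jar:\
/opt/hbase/lib/hbase-common.jar:\
/opt/hbase/lib/hbase-client.jar:\
/opt/hbase/lib/htrace-core.jar:\
/etc/hbase/conf
```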
See https://issues.apache.org/jira/browse/HBASE-6658
See this thread as well:
http://search-hadoop.com/m/T9Y7N1zRrGX1/Writablebytearraycomparable+hbase&subj=Re+Pig+cannot+load+data+using+hbasestorage
Cheers
On Oct 16, 2014, at 12:03 AM, Ramon Wang wrote:
> Hi Guys
>
> We are recently trying t
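For context (my reading of HBASE-6658, linked above, not something stated outright in this thread): WritableByteArrayComparable was renamed to ByteArrayComparable in the 0.95/0.96 line, so 0.94-era filter code fails on 0.98 with exactly this NoClassDefFoundError. Here is a small self-contained probe for checking which name a given classpath actually provides (the ComparatorProbe class itself is illustrative, not from the thread):

```java
// Illustrative sketch: probe which comparator class name is visible on the
// current classpath. Run it with your HBase jars on the classpath to see
// which of the two names (0.94 vs 0.96+) your runtime actually ships.
public class ComparatorProbe {
    static boolean present(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Old (0.94) name and new (0.96+, per HBASE-6658) name.
        String oldName = "org.apache.hadoop.hbase.filter.WritableByteArrayComparable";
        String newName = "org.apache.hadoop.hbase.filter.ByteArrayComparable";
        System.out.println(oldName + " -> " + present(oldName));
        System.out.println(newName + " -> " + present(newName));
    }
}
```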
How many rows are there in the test1 table?
Please consider approach #1 for efficiency.
Cheers
On Oct 15, 2014, at 10:24 PM, Vimal Jain wrote:
> Hi,
> I have an HBase table (say test1) with 3 column families (a, b, c) and a
> bunch of column qualifiers in each of these CFs.
> I also have one more table ( say test2) wit
There will be around 2 million rows in test1.
Any specific reason for approach #1 being more efficient?
I thought approach #2 would be efficient since the ruby script will run on the
same machine (I have only a one-node cluster), so there won't be any network
calls, whereas in approach #1, I was planning t
On Thu, Oct 16, 2014 at 2:18 AM, Liu, Ming (HPIT-GADSC)
wrote:
>
>
> By the way, I cannot find anywhere to download the 0.94.5 HBase source
> code; can anyone tell me where I can find it?
>
> I know the old version is obsolete, but this is not for production, it is
> for research, so ple
bq. ruby script will run on same machine ( I have only one node cluster )
,so there wont be any network calls
I don't think the above is true: the ruby script would still need to contact
ZooKeeper to find the location of -ROOT-.
There is no short-circuit read for the ruby script. Normal RPC is used to
com
Ming:
The tarball in the archive contains the source code. See the example below:
$ tar tzvf hbase-0.94.5.tar.gz | grep '\.java' | grep Assignment
-rw-r--r-- 0 jenkins jenkins 47982 Feb 7 2013
hbase-0.94.5/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManager.java
-rw-r--r-- 0 jenkins j
Hi,
I finally got a chance to apply the timeout changes, and it lasted longer
before it started failing, but it's now just throwing the same errors on
"procedure.Subprocedure: Subprocedure pool is full!"
Looking through the logs, it looks like the nodes may not be correctly
updating their completion
Great, it worked.
I don't have an answer as to what is special about SPARK_CLASSPATH vs --jars;
I just found the working setting through trial and error.
- Original Message -
From: "Fengyun RAO"
To: "Soumitra Kumar"
Cc: u...@spark.apache.org, user@hbase.apache.org
Sent: Thursday, October 16, 20
Has anyone had a problem using hbase-client on Hadoop 2? It seems the Put
class is missing a method (either the default public constructor or some init
method) and throws an exception when my MR job starts up.
I’m using:
HDP 2.1 with
hbase-client.0.98.0.2.1.4.0-632-hadoop2.jar
Stack trace:
hadoop
Do you have more information about the NoSuchMethodException w.r.t. the
Put() ctor?
The Put() ctor doesn't exist in 0.98.
Can you check GeoAnalyticFormatBulkLoader ?
The other exceptions are for hdfs.
Cheers
On Thu, Oct 16, 2014 at 11:35 AM, THORMAN, ROBERT D wrote:
> Anyone had a problem using hbase
Can you confirm that you're using the same version of hbase in your project
dependencies as with your runtime system? Seems like you might have some
0.94 mixed in somewhere.
On Thu, Oct 16, 2014 at 2:57 PM, Ted Yu wrote:
> Do you have more information about the NoSuchMethodException w.r.t. Put()
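To follow up on the version-mismatch suggestion above: one way to spot a stray 0.94 jar is to ask the JVM where it actually loaded Put from. A self-contained sketch (the WhichJar helper is hypothetical, written for this illustration):

```java
import java.security.CodeSource;

// Hypothetical helper: report which jar (or directory) a class was loaded
// from, to help spot an old HBase jar lurking on the classpath.
public class WhichJar {
    public static String locate(String className) {
        try {
            Class<?> c = Class.forName(className);
            CodeSource src = c.getProtectionDomain().getCodeSource();
            return src == null ? "(bootstrap classpath)" : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return "(not on classpath)";
        }
    }

    public static void main(String[] args) {
        // On a cluster node, run this with the job's classpath and check
        // whether the reported jar is the 0.98 one you expect.
        System.out.println(locate("org.apache.hadoop.hbase.client.Put"));
    }
}
```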
Hi, can anyone help with the above? It feels like I'm missing something obvious.
On Wednesday, October 15, 2014, Nishanth S wrote:
> Thanks Ted. I will take a look.
>
> -Nishanth
>
> On Wed, Oct 15, 2014 at 3:43 PM, Ted Yu wrote:
>
> > Nishanth:
> > Good question.
> >
> > As a general coding guide,
Thanks Ted and Sean,
It is great to find the archives; why did I not find them for such a long
time... :-)
The original coprocessor author replied to me yesterday; it is my fault. In
fact, that coprocessor is loaded after HBase startup, not during HBase
regionserver startup. In their client ap
0.94.5 to 0.94.24 are all releases in the 0.94 train.
There is no change w.r.t. the timing of 'root-region-server' znode creation.
Cheers
On Thu, Oct 16, 2014 at 8:19 PM, Liu, Ming (HPIT-GADSC)
wrote:
> Thanks Ted and Sean,
>
> It is great to find the archives, why I did not find them for so l
hi, maillist:
How can I migrate data from HBase 0.94 to 0.96? Is copying the files from
the old Hadoop cluster to the new Hadoop cluster OK? Can anyone help me?
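For what it's worth, a plain file copy alone is not enough between these versions: 0.96 changed the on-disk layout and ships an upgrade tool. A sketch of the documented path (hostnames and paths are placeholders, not from this thread):

```shell
# Hedged sketch of a 0.94 -> 0.96 migration; hostnames/paths are placeholders.
# 1. With HBase shut down, copy the HBase root directory to the new cluster.
hadoop distcp hdfs://old-nn:8020/hbase hdfs://new-nn:8020/hbase
# 2. On the new cluster, with 0.96 installed but HBase not yet started,
#    check the data layout and then execute the upgrade.
hbase upgrade -check
hbase upgrade -execute
```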
hi, maillist:
I installed CDH 4.4 with HBase version 0.94.6 (no Cloudera Manager
involved), but when I test the snapshot function, I get an error like this.
Actually, I added the following info into my /etc/hbase/conf/hbase-site.xml
(on each node) and restarted the HBase cluster; still the same error.
Anyone k
See http://stackoverflow.com/questions/21777018/big-data-hbase in case it helps.
Regards
Ashish
-Original Message-
From: ch huang [mailto:justlo...@gmail.com]
Sent: 17 October 2014 12:02
To: user@hbase.apache.org
Subject: can not enable snapshot function on hbase 0.94.6
hi,maillist :
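For reference (the poster's actual hbase-site.xml snippet was truncated above, so this is the standard property name rather than their exact config): snapshots are off by default in 0.94 and are switched on with hbase.snapshot.enabled, followed by a cluster restart:

```xml
<!-- Standard 0.94 snapshot switch in hbase-site.xml on every node;
     restart the HBase cluster after adding it. -->
<property>
  <name>hbase.snapshot.enabled</name>
  <value>true</value>
</property>
```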