Hi
I am trying to pass a POJO (say, RequestVO) having a single member variable,
i.e. "Text Name;", to the coprocessorExec method.
This VO implements Writable,
but I am getting the following exception:
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
attempts=10,
exceptions:
Tue May 27
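For reference, a parameter object sent to a coprocessor in HBase 0.94 must implement org.apache.hadoop.io.Writable: a symmetric write/readFields pair plus a no-arg constructor. The sketch below assumes the RequestVO shape described above, but substitutes a plain String for Hadoop's Text so it compiles without Hadoop on the classpath:

```java
import java.io.*;

// Sketch of a RequestVO-style parameter object. Hadoop's Writable
// interface declares exactly this write/readFields pair; the Text
// member from the thread is replaced by a plain String here so the
// example needs no Hadoop jars.
class RequestVO {
    private String name;

    RequestVO() {}                        // no-arg constructor is required
    RequestVO(String name) { this.name = name; }

    public void write(DataOutput out) throws IOException {
        out.writeUTF(name);               // serialize every field, in order
    }

    public void readFields(DataInput in) throws IOException {
        name = in.readUTF();              // deserialize in exactly the same order
    }

    public String getName() { return name; }
}
```

One possible cause of the RetriesExhaustedException seen here is an asymmetric write/readFields pair or a missing no-arg constructor, either of which makes deserialization fail on the server side.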
Hi ,
1) Where can we find the hbase 0.98 tar
with a directory structure similar to hbase-0.94.19?
i.e.
Directory structure of hbase 0.98
# ls
bin/ dev-support/ hbase-common/ hbase-hadoop2-compat/
hbase-prefix-tree/ hbase-shell/ LICENSE.txt README.txt
CHANGES.txt
Follow http://hbase.apache.org/book/quickstart.html
Choose a download site from the list of Apache Download Mirrors given in
the link.
As for RPMs, why do you need an RPM build? The tar installation is relatively easier.
Regards
KASHIF
On Tue, May 27, 2014 at 2:38 PM, oc tsdb oc.t...@gmail.com wrote:
I put massive records into HBase and found that one of the region servers
crashed. I checked the RS log and NameNode log and found them complaining
that some block does not exist.
For example:
In the RS's log:
java.io.IOException: Bad response ERROR for block
What hbase / hadoop release are you using ?
Cheers
On Tue, May 27, 2014 at 4:25 AM, Tao Xiao xiaotao.cs@gmail.com wrote:
Can you confirm the version of HBase ?
To my knowledge, cdh5 is based on 0.96
Cheers
On Tue, May 27, 2014 at 1:36 AM, Vikram Singh Chandel
vikramsinghchan...@gmail.com wrote:
+1
Downloaded, checked hash, checked doc.
Loaded data into standalone mode. Checked it made it. Checked UI. All
seems fine.
Put it up on my little test cluster and ran my blockcache loadings (I had
to copy in hadoop 2.4.x libs). It started fine over data written by trunk.
Seems fine. No
+1
Downloaded, checked signature, ran tests, checked doc. Ran ITs for
visibility labels, tags and encryption (HFile and WAL). All looks good.
-Anoop-
On Tue, May 27, 2014 at 10:20 PM, Stack st...@duboce.net wrote:
Can you check your server logs for a full stack trace? This sounds like it
could be similar to this:
On Tue, May 27, 2014 at 10:15 AM, Ted Yu yuzhih...@gmail.com wrote:
Sorry, accidentally hit send... I meant to suggest this:
http://stackoverflow.com/questions/20257356/hbase-client-scan-could-not-initialize-org-apache-hadoop-hbase-util-classes/
--Tom
On Tue, May 27, 2014 at 11:14 AM, Tom Brown tombrow...@gmail.com wrote:
Hi Ted
Yes, you were right.
The CDH version is 4.5 and the HBase version is 0.94.6.
Tom
The full stack trace (as displayed on the console) is:
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
    at java.util.concurrent.FutureTask.get(FutureTask.java:111)
    at
Make sure that your Writable implementation code is correct. Do you have any
unit tests for your Writable implementation?
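The round-trip test being suggested here is straightforward: serialize the object, deserialize into a fresh instance, and compare fields. A self-contained sketch (plain java.io stand-ins instead of Hadoop's Writable interface; the IdVO type and its single field are hypothetical):

```java
import java.io.*;

// Minimal stand-in with a Writable-style write/readFields pair.
// (Field and class names are hypothetical.)
class IdVO {
    long id;
    void write(DataOutput out) throws IOException { out.writeLong(id); }
    void readFields(DataInput in) throws IOException { id = in.readLong(); }
}

// The check being suggested: serialize, deserialize into a fresh
// instance, and compare field by field. If this fails locally, it
// will certainly fail inside the coprocessor RPC.
class WritableRoundTrip {
    static IdVO roundTrip(IdVO original) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        original.write(new DataOutputStream(buf));
        IdVO copy = new IdVO();
        copy.readFields(new DataInputStream(
                new ByteArrayInputStream(buf.toByteArray())));
        return copy;
    }
}
```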
Best regards,
Vladimir Rodionov
Principal Platform Engineer
Carrier IQ, www.carrieriq.com
e-mail: vrodio...@carrieriq.com
I have not done something like this myself, but I think making the VO
Writable is not enough.
You have to add an entry for the new VO class into HbaseObjectWritable (yes, you
have to touch the HBase code).
You can see that all the Writable classes which the client sends to the server are
added into this class.
In HbaseObjectWritable#readObject(), we have:
if (Writable.class.isAssignableFrom(instanceClass)) {
  Writable writable = WritableFactories.newInstance(instanceClass, conf);
  try {
    writable.readFields(in);
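Stripped of Hadoop types, the readObject branch quoted above amounts to: instantiate the class reflectively through its no-arg constructor, then let the instance deserialize itself. A minimal sketch of that dispatch (SimpleWritable, SimpleReader, and NameVO are illustrative names, not HBase classes):

```java
import java.io.*;

// Local stand-in for the Writable contract, so the sketch compiles
// without Hadoop on the classpath. (Illustrative, not an HBase class.)
interface SimpleWritable {
    void readFields(DataInput in) throws IOException;
}

class SimpleReader {
    // Mirrors the quoted branch: create the instance reflectively via
    // its no-arg constructor, then let it deserialize itself. This is
    // why the concrete class must be known to both client and server,
    // and why it must have a no-arg constructor.
    static SimpleWritable read(Class<? extends SimpleWritable> cls, DataInput in)
            throws Exception {
        SimpleWritable w = cls.getDeclaredConstructor().newInstance();
        w.readFields(in);
        return w;
    }
}

// Example payload class (hypothetical).
class NameVO implements SimpleWritable {
    String name;
    public void readFields(DataInput in) throws IOException {
        name = in.readUTF();
    }
}
```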
Vikram:
Please double check your VO class w.r.t. Writable
Hi Vladimir
No, as of now I don't have a JUnit test for it. I will write one first thing in the
morning.
Anoop
Thanks for the suggestions; I will look into that class and get back to
you.
Is passing params to a coprocessor really so complex? I thought that, of
everything, at least that would be the easy part :D
Downloaded, checked signature, poked around with a mini-cluster + shell, ui
looks good, built against phoenix.
+1
---
Jesse Yates
@jesse_yates
jyates.github.com
On Tue, May 27, 2014 at 10:09 AM, Anoop John anoop.hb...@gmail.com wrote:
Regardless of what version of HBase you use, updates are initially just as
expensive as new writes. Newer versions of HBase have become more
efficient at writes. Updates that require reading the value before the
update (like an append or increment) cost more because of the read operation.
Numbers
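The read-before-write cost can be illustrated with a toy model (a HashMap standing in for the store; this is a conceptual sketch, not HBase internals): a plain put never reads, while an append must fetch the old value first.

```java
import java.util.*;

// Toy store that counts reads. A blind put touches storage once,
// while an append must first read the old value; that extra read is
// the cost described above. (Conceptual model, not HBase internals.)
class ToyStore {
    private final Map<String, String> data = new HashMap<>();
    int reads = 0;

    void put(String key, String value) {          // blind write: no read
        data.put(key, value);
    }

    void append(String key, String suffix) {      // read-modify-write
        reads++;                                  // pays for a read first
        data.put(key, data.getOrDefault(key, "") + suffix);
    }

    String get(String key) {
        reads++;
        return data.get(key);
    }
}
```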
I'm using HDP 2.0.6
2014-05-28 0:03 GMT+08:00 Ted Yu yuzhih...@gmail.com:
Run an fsck on /hbase to check if there are any inconsistencies.
On Wed, May 28, 2014 at 6:23 AM, Tao Xiao xiaotao.cs@gmail.com wrote:
fsck on /apps/hbase says that it is healthy
2014-05-28 10:58 GMT+08:00 Bharath Vissapragada bhara...@cloudera.com: