I saw the following exception happen frequently: any hints?
Failed to scan rows on table : start=04, end=05, Failed
after retry of OutOfOrderScannerNextException: was there a rpc timeout?
org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:391)
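OutOfOrderScannerNextException typically means the client spent longer between next() calls than the scanner timeout allows, so the retried RPC arrived with a stale call sequence. A minimal sketch of the usual mitigation, lowering scanner caching and raising the client-side scanner timeout (the table name and the timeout value are illustrative assumptions, API as of the 0.96/0.98 client):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class ScanWithSmallerBatches {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Give the client more time between next() calls (milliseconds).
        conf.setLong("hbase.client.scanner.timeout.period", 120000L);

        HTable table = new HTable(conf, "mytable"); // hypothetical table name
        Scan scan = new Scan();
        // Fewer rows per RPC: each next() call returns sooner, so the
        // scanner is less likely to time out between calls.
        scan.setCaching(100);
        ResultScanner scanner = table.getScanner(scan);
        try {
            for (Result r : scanner) {
                // process r ...
            }
        } finally {
            scanner.close();
            table.close();
        }
    }
}
```

If the per-row processing is slow, lowering caching further is usually more effective than raising the timeout.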
Here is what I am trying to figure out: in the same table, if cell A is
updated after cell B, is it guaranteed that the timestamp of cell A is always
bigger than the timestamp of cell B, even if cell A and cell B are stored on
different machines (and therefore those two machines' clocks might be out of
sync)?
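There is no such guarantee by default: with the ordinary put() path each region server stamps the cell with its own clock, so skew between machines can reorder timestamps. If the ordering matters, one option is to assign timestamps on the client side. A sketch assuming a 0.94/0.96-era client (table, family, and qualifier names are made up):

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class ExplicitTimestampPut {
    public static void main(String[] args) throws Exception {
        HTable table = new HTable(HBaseConfiguration.create(), "mytable");
        // A timestamp the client controls (wall clock here, but it could
        // be a logical/monotonic counter shared by all writers).
        long ts = System.currentTimeMillis();
        Put put = new Put(Bytes.toBytes("row1"));
        // With an explicit timestamp, version ordering no longer depends
        // on the region server's clock.
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), ts,
                Bytes.toBytes("value"));
        table.put(put);
        table.close();
    }
}
```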
On Tue, Feb 25, 2014 at 4:30 PM, S. Zhou myx...@yahoo.com wrote:
I just downloaded HBase 0.98-hadoop2. After I run start-hbase.sh,
HBase does not actually start. I tried to search online but failed to
find a solution. Please help.
The message in master log is:
2014-02-25 15:24:04,533
hadoop 2.3.0 uses hadoop-common-2.3.0
hbase 0.98 uses hadoop-common-2.2.0
On Wednesday, February 26, 2014 9:14 AM, Ted Yu yuzhih...@gmail.com wrote:
Can you check the version of the hadoop-common jar in your classpath to see if
there is a conflict?
On Wed, Feb 26, 2014 at 9:06 AM, S. Zhou myx
I just downloaded HBase 0.98-hadoop2. After I run start-hbase.sh,
HBase does not actually start. I tried to search online but failed to find a
solution. Please help.
The message in master log is:
2014-02-25 15:24:04,533 INFO [master:localhost:6] master.ServerManager:
Waiting for
I checked the Javadoc on put(List&lt;Put&gt; puts) of HTableInterface and it does
not say how to get the failed rows when an exception happens (see below): can
I assume the failed rows are contained in the puts list?
Throws:
InterruptedIOException
RetriesExhaustedWithDetailsException
Compared to the
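For what it's worth, RetriesExhaustedWithDetailsException itself carries the failed operations, so you do not have to rely on inspecting the puts list. A hedged sketch (the table and the puts list are assumed to exist already):

```java
import java.util.List;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
import org.apache.hadoop.hbase.client.Row;

public class BatchPutWithFailureDetails {
    static void putAll(HTable table, List<Put> puts) throws Exception {
        try {
            table.put(puts);
        } catch (RetriesExhaustedWithDetailsException e) {
            // The exception pairs each failed operation with its cause
            // and the server it was sent to.
            for (int i = 0; i < e.getNumExceptions(); i++) {
                Row failed = e.getRow(i);        // the Put that failed
                Throwable cause = e.getCause(i); // why it failed
                System.err.println("Failed: " + failed + " on "
                        + e.getHostnamePort(i) + " because: " + cause);
            }
        }
    }
}
```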
I am trying to delete multiple columns for the same row in HBase. I checked
the API for the Delete class, method deleteColumns, and have some confusion.
Basically, I am not sure whether I should call deleteColumns on the same
Delete object multiple times (to delete multiple columns), or create multiple
Delete objects.
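Both work, but the idiomatic form is one Delete per row, with deleteColumns called once per column; everything gathered in a single Delete is applied to the row in one operation. A sketch (table, family, and column names are illustrative):

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteSeveralColumns {
    public static void main(String[] args) throws Exception {
        HTable table = new HTable(HBaseConfiguration.create(), "mytable");
        Delete d = new Delete(Bytes.toBytes("row1"));
        // One Delete object, one deleteColumns() call per column.
        // deleteColumns removes all versions of the given column.
        d.deleteColumns(Bytes.toBytes("cf"), Bytes.toBytes("col1"));
        d.deleteColumns(Bytes.toBytes("cf"), Bytes.toBytes("col2"));
        table.delete(d); // one RPC for the whole row
        table.close();
    }
}
```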
I am trying the new version and ran into some problems; details here:
https://groups.google.com/forum/#!topic/asynchbase/zsIsLOZgiVc
Could you please help? We are trying to migrate to Hadoop 2.2 with HBase
0.96, but this issue blocks the migration of one application.
On Monday, October 28, 2013
I need to copy data from Hadoop cluster A to cluster B. I know I can use the
distcp tool to do that. Now the problem is: cluster A runs version 1.2.1 and
cluster B runs version 0.20.x, so the distcp tool from either version does not
work on both versions. Is there a possible way to do that? So far I
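A commonly used workaround for copying between incompatible Hadoop versions is to run distcp on one cluster and read the other over HFTP, which is read-only and works across versions. A sketch; hostnames, ports, and paths are placeholders:

```shell
# Run this on destination cluster B, reading cluster A over HFTP.
# 50070 is the default NameNode HTTP port; adjust to your setup.
hadoop distcp \
  hftp://cluster-a-namenode:50070/src/path \
  hdfs://cluster-b-namenode/dst/path
```

Since HFTP is read-only, the job must always run on (or write to) the destination side.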
I am new to HBase. I am trying to find out whether a user can configure the
number of region servers (not the number of regions) in an HBase cluster. If
yes, are there any guidelines on the number of region servers?
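With the stock scripts, the region-server count is simply the number of hosts running the RegionServer daemon: start-hbase.sh starts one per host listed in conf/regionservers. A sketch of that file (hostnames are placeholders); a common starting guideline is one region server per DataNode:

```
# conf/regionservers -- one hostname per line; start-hbase.sh launches
# one RegionServer on each listed host.
node1.example.com
node2.example.com
node3.example.com
```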
Thanks
Senqiang
From: Stack st...@duboce.net
To: Hbase-User user@hbase.apache.org; S. Zhou myx...@yahoo.com
Sent: Saturday, August 17, 2013 4:42 PM
Subject: Re: where to download 0.95.1-hadoop1 jar file using Maven?
On Thu, Aug 15, 2013 at 7:50 PM, S. Zhou myx...@yahoo.com wrote:
Hi
I wrote a Java program which connects to HBase (I am using asynchbase). At
this time HBase/Hadoop are all running on my local dev box as a pseudo
cluster. I believe HBase/Hadoop are set up correctly, since I can create
HBase tables, update HBase tables, run MR jobs, etc.
The problem is:
Ted, do you have any public Maven repository for downloading the HBase client
with version 0.95.1-hadoop1? I am kind of surprised that this has become an
issue.
From: S. Zhou myx...@yahoo.com
To: user@hbase.apache.org user@hbase.apache.org
Sent: Thursday, August 15
Hi there,
I am writing a Java program to access HBase, and the version of HBase I use
is 0.95.1-hadoop1. The problem is: I failed to download the jar file for this
version from the Maven central repository, even though it is listed there.
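In case it helps later readers: the 0.95.x line was a developer preview and its artifacts did not stay on the usual mirrors, so the build may need an extra repository entry. A sketch of the pom fragment, assuming the Apache releases repository still hosts the artifact (the coordinates, especially the artifactId, may need adjusting for your version):

```xml
<repositories>
  <repository>
    <id>apache-releases</id>
    <url>https://repository.apache.org/content/repositories/releases</url>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>0.95.1-hadoop1</version>
  </dependency>
</dependencies>
```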
From HBase web site, I can only find the recent versions of HBase (e.g.
http://mirror.esocc.com/apache/hbase/). I like to download version 0.94.0, any
idea where can I find it? Thanks!
Thanks Ted. But it still does not work after I add that repository: the
error message is the same.
From: Ted Yu yuzhih...@gmail.com
To: user@hbase.apache.org user@hbase.apache.org; S. Zhou myx...@yahoo.com
Sent: Friday, August 16, 2013 10:54 AM
Subject
From: Ted Yu yuzhih...@gmail.com
To: user@hbase.apache.org user@hbase.apache.org; S. Zhou myx...@yahoo.com
Sent: Friday, August 16, 2013 11:04 AM
Subject: Re: where to download old versions of HBase?
I couldn't find the download either.
May I ask why you wanted such an old release
Thanks Ted. I will try that
From: Ted Yu yuzhih...@gmail.com
To: S. Zhou myx...@yahoo.com; user@hbase.apache.org user@hbase.apache.org
Sent: Friday, August 16, 2013 12:11 PM
Subject: Re: where to download old versions of HBase?
Putting back user mailing list
,
TableMap.class);
From: S. Zhou myx...@yahoo.com
To: Ted Yu yuzhih...@gmail.com
Cc: user@hbase.apache.org user@hbase.apache.org
Sent: Thursday, July 11, 2013 10:19 PM
Subject: Re: MapReduce job with mixed data sources: HBase table and HDFS files
I use
I am running a very simple MR HBase job (reading from a tiny HBase table and
outputting nothing). I run it on a pseudo-distributed HBase cluster on my
local machine, which uses a pseudo-distributed HDFS (on the local machine
again). When I run it, I get the following exception: Unable to find region
Yes, I can see the table through hbase shell and web ui (localhost:60010). hbck
reports ok
From: Jean-Marc Spaggiari jean-m...@spaggiari.org
To: user@hbase.apache.org; S. Zhou myx...@yahoo.com
Sent: Thursday, July 11, 2013 11:01 AM
Subject: Re: HBase
To: user@hbase.apache.org; S. Zhou myx...@yahoo.com
Sent: Thursday, July 11, 2013 3:51 PM
Subject: Re: MapReduce job with mixed data sources: HBase table and HDFS files
TextInputFormat wouldn't work:
public class TextInputFormat extends FileInputFormat&lt;LongWritable, Text&gt; {
Take a look
I use org.apache.hadoop.mapreduce.lib.input.MultipleInputs
I run on pseudo-distributed Hadoop (1.2.0) and pseudo-distributed HBase
(0.95.1-hadoop1).
From: Ted Yu yuzhih...@gmail.com
To: S. Zhou myx...@yahoo.com
Cc: user@hbase.apache.org user
in a separate map-only job and then use its output along with other HDFS
input.
There is a significant disparity between the reads from HDFS and from
HBase.
On Jul 3, 2013, at 10:34 AM, S. Zhou myx...@yahoo.com wrote:
Azuryy, I am looking at the MultipleInputs doc. But I could not figure
...@gmail.com
To: user@hbase.apache.org; S. Zhou myx...@yahoo.com
Sent: Wednesday, July 10, 2013 10:21 AM
Subject: Re: MapReduce job with mixed data sources: HBase table and HDFS files
Can you utilize initTableMapperJob() (which
calls TableMapReduceUtil.convertScanToString() underneath) ?
On Wed
Thanks Azuryy. Does it work across multiple clusters (e.g. HBase in cluster 1
and HDFS files in cluster 2)?
From: Azuryy Yu azury...@gmail.com
To: user@hbase.apache.org; S. Zhou myx...@yahoo.com
Sent: Tuesday, July 2, 2013 10:06 PM
Subject: Re
Azuryy, I am looking at the MultipleInputs doc, but I could not figure out how
to add an HBase table as a Path to the input. Do you have some sample code?
Thanks!
From: Azuryy Yu azury...@gmail.com
To: user@hbase.apache.org; S. Zhou myx...@yahoo.com
Sent
disparity between the reads from HDFS and from
HBase.
On Jul 3, 2013, at 10:34 AM, S. Zhou myx...@yahoo.com wrote:
Azuryy, I am looking at the MultipleInputs doc. But I could not figure
out how to add HBase table as a Path to the input? Do you have some sample
code? Thanks
Hi there,
I know how to create a MapReduce job with an HBase data source only or with
HDFS files as the data source. Now I need to create a MapReduce job with
mixed data sources, that is, the MR job needs to read data from both HBase
and HDFS files. Is it possible? If yes, could you share some sample code?
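The approach the thread converges on can be sketched like this: use MultipleInputs with TableInputFormat for the HBase side (TableInputFormat ignores its Path argument, so a dummy path works) and TextInputFormat for the files. The table name, paths, and mapper classes below are illustrative assumptions; also note that TableMapReduceUtil.convertScanToString() is not public in every release, in which case initTableMapperJob() can serialize the scan for you instead.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class MixedSourceJob {
    // Minimal stand-in mappers; real ones would emit something useful.
    static class MyTableMapper extends TableMapper<Text, Text> { }
    static class MyTextMapper
            extends Mapper<LongWritable, Text, Text, Text> { }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Scan scan = new Scan();
        scan.setCaching(200);
        scan.setCacheBlocks(false); // recommended for MR scans
        conf.set(TableInputFormat.INPUT_TABLE, "mytable");
        conf.set(TableInputFormat.SCAN,
                TableMapReduceUtil.convertScanToString(scan));

        Job job = new Job(conf, "hbase-plus-hdfs");
        job.setJarByClass(MixedSourceJob.class);
        // HBase side: the Path is ignored, so a dummy value is fine.
        MultipleInputs.addInputPath(job, new Path("/ignored"),
                TableInputFormat.class, MyTableMapper.class);
        // HDFS side: plain text files.
        MultipleInputs.addInputPath(job, new Path("/data/textfiles"),
                TextInputFormat.class, MyTextMapper.class);
        job.setOutputFormatClass(NullOutputFormat.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

If the two sources live in different clusters, the HDFS path can be given as a fully qualified URI (hdfs://other-nn/...), while the HBase side follows the quorum configured in the job's Configuration.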