Hi,
I'm using HBase with about 20 regionservers. One regionserver quickly failed
to write to most of the datanodes, which finally caused that regionserver to
die, while the other regionservers are fine.
The logs look like this: java.io.IOException: Bad response ERROR for block
Which Hadoop release are you using?
Have you run fsck?
Cheers
On Oct 14, 2014, at 2:31 AM, sunww spe...@outlook.com wrote:
I'm using Hadoop 2.0.0 and have not run fsck. Only one regionserver has these
dfs logs, which is strange.
Thanks
CC: user@hadoop.apache.org
From: yuzhih...@gmail.com
Subject: Re: write to most datanode fail quickly
Date: Tue, 14 Oct 2014 02:43:26 -0700
To: user@hadoop.apache.org
Which Hadoop release are you using?
Can you check the NameNode log for 132.228.48.20?
Have you turned on short-circuit read?
Cheers
On Oct 14, 2014, at 3:00 AM, sunww spe...@outlook.com wrote:
Hi
dfs.client.read.shortcircuit is true.
This is the namenode log at that moment: http://paste2.org/U0zDA9ms
There seems to be nothing special in the namenode log.
Thanks
CC: user@hadoop.apache.org
From: yuzhih...@gmail.com
Subject: Re: write to most datanode fail quickly
Date: Tue, 14 Oct 2014
Hi
I am playing with the hadoop-2 filesystem. I have two namenodes with HA and
six datanodes.
I tried different configurations, killed namenodes, and so on...
Now I have a situation where most of my data is there, but some corrupted
blocks exist.
hdfs fsck / - gives me loads of Under replicated
Hi Experts,
I'm going to do some computation-intensive operations under the Hadoop
framework. I'm wondering which is the best way to code in C++ under the
Hadoop framework? I'm aware of three options: Hadoop Streaming, Hadoop
Pipes, and Hadoop C++ Extension. I heard that Hadoop Pipes has/would be
I'm trying to test some MapReduce jobs with MRUnit 1.1.0, but I don't get
any results.
The code that I execute is:
mapTextDriver.withInput(new LongWritable(1), new Text(content));
List<Pair<NullWritable, Text>> outputs = mapTextDriver.run();
But I never get an output; the list always has
132.228.48.20 didn't show up in the snippet (spanning only 3 minutes) you
posted.
I don't see an error or exception either.
Perhaps search in a wider scope.
On Tue, Oct 14, 2014 at 5:36 AM, sunww spe...@outlook.com wrote:
Hi All,
Apologies if this has been addressed somewhere already, but I couldn't find
relevant information on building from source at [1]. I have downloaded
1.2.1 from [2]. Any pointers appreciated. (I am on OSX, but I could switch
to Linux if required, which I expect to be the case?)
Regards
Bud
Try the instructions for branch-1 at
https://wiki.apache.org/hadoop/QwertyManiac/BuildingHadoopTrunk
-Ray
On Tue, Oct 14, 2014 at 11:08 AM, buddhika chamith chamibuddh...@gmail.com
wrote:
Hi,
I am trying to find out whether the YARN unmanaged AM can be used in a secure
(Kerberized) cluster. I stumbled upon a ticket from 2012 which suggests that
there might not be a way yet for the tokens to be passed to an unmanaged
AM: https://issues.apache.org/jira/browse/YARN-937. However, I am
Hi,
The correct IP is 132.228.248.20. I checked the hdfs log on the dead
regionserver; it has some error messages, maybe they're useful:
http://paste2.org/NwpcaGVv
Thanks
Date: Tue, 14 Oct 2014 10:28:31 -0700
Subject: Re: write to most datanode fail quickly
From: yuzhih...@gmail.com
To:
Hadoop Streaming is the best option for you. It doesn't have high I/O
overhead unless you add heavy I/O in your C++ code.
Hadoop Streaming uses the built-in MapReduce framework; it just redirects the
input/output streams to and from your C++ application.
On Tue, Oct 14, 2014 at 10:33 PM, Y. Z. zhaoyansw...@gmail.com wrote:
Thanks, Azuryy!
I found some examples about Pipes. Is Hadoop Pipes still supported in
Hadoop 2.2?
Sincerely,
Yongan
On 10/14/2014 11:20 PM, Azuryy Yu wrote:
Yes, Hadoop Pipes is still supported in v2.
On Wed, Oct 15, 2014 at 11:33 AM, Y Z zhaoyansw...@gmail.com wrote:
Thanks! :)
Sincerely,
Yongan
On 10/14/2014 11:38 PM, Azuryy Yu wrote: