issues about hadoop-0.20.0

2015-07-18 Thread longfei li
Hello!
I built a Hadoop cluster of 12 ARM-based (Cubietruck) nodes. A simple 
WordCount job (counting word occurrences) runs perfectly. But when I run a 
more complex program such as pi, like this:
./hadoop jar hadoop-example-0.21.0.jar pi 100 10
I get the following output:
15/07/18 11:38:54 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in 
the classpath. Usage of hadoop-site.xml is deprecated. Instead use 
core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of 
core-default.xml, mapred-default.xml and hdfs-default.xml respectively
15/07/18 11:38:54 INFO security.Groups: Group mapping 
impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=30
15/07/18 11:38:55 WARN conf.Configuration: mapred.task.id is deprecated. 
Instead, use mapreduce.task.attempt.id
15/07/18 11:38:55 WARN mapreduce.JobSubmitter: Use GenericOptionsParser for 
parsing the arguments. Applications should implement Tool for the same.
15/07/18 11:38:55 INFO input.FileInputFormat: Total input paths to process : 1
15/07/18 11:38:58 WARN conf.Configuration: mapred.map.tasks is deprecated. 
Instead, use mapreduce.job.maps
15/07/18 11:38:58 INFO mapreduce.JobSubmitter: number of splits:1
15/07/18 11:38:58 INFO mapreduce.JobSubmitter: adding the following namenodes' 
delegation tokens:null
15/07/18 11:38:59 INFO mapreduce.Job: Running job: job_201507181137_0001
15/07/18 11:39:00 INFO mapreduce.Job:  map 0% reduce 0%
15/07/18 11:39:20 INFO mapreduce.Job:  map 100% reduce 0%
15/07/18 11:39:35 INFO mapreduce.Job:  map 100% reduce 10%
15/07/18 11:39:36 INFO mapreduce.Job:  map 100% reduce 20%
15/07/18 11:39:38 INFO mapreduce.Job:  map 100% reduce 90%
15/07/18 11:39:58 INFO mapreduce.Job:  map 100% reduce 100%
15/07/18 11:49:47 INFO mapreduce.Job:  map 100% reduce 89%
15/07/18 11:49:49 INFO mapreduce.Job:  map 100% reduce 19%
15/07/18 11:49:54 INFO mapreduce.Job: Task Id : 
attempt_201507181137_0001_r_00_0, Status : FAILED
Task attempt_201507181137_0001_r_00_0 failed to report status for 602 
seconds. Killing!
15/07/18 11:49:57 WARN mapreduce.Job: Error reading task outputhadoop-slave7
15/07/18 11:49:57 WARN mapreduce.Job: Error reading task outputhadoop-slave7
15/07/18 11:49:58 INFO mapreduce.Job: Task Id : 
attempt_201507181137_0001_r_02_0, Status : FAILED
Task attempt_201507181137_0001_r_02_0 failed to report status for 601 
seconds. Killing!
15/07/18 11:50:00 WARN mapreduce.Job: Error reading task outputhadoop-slave5
15/07/18 11:50:00 WARN mapreduce.Job: Error reading task outputhadoop-slave5
15/07/18 11:50:00 INFO mapreduce.Job: Task Id : 
attempt_201507181137_0001_r_03_0, Status : FAILED
Task attempt_201507181137_0001_r_03_0 failed to report status for 601 
seconds. Killing!
15/07/18 11:50:03 WARN mapreduce.Job: Error reading task outputhadoop-slave12
15/07/18 11:50:03 WARN mapreduce.Job: Error reading task outputhadoop-slave12
15/07/18 11:50:03 INFO mapreduce.Job: Task Id : 
attempt_201507181137_0001_r_04_0, Status : FAILED
Task attempt_201507181137_0001_r_04_0 failed to report status for 601 
seconds. Killing!
15/07/18 11:50:06 WARN mapreduce.Job: Error reading task outputhadoop-slave8
15/07/18 11:50:06 WARN mapreduce.Job: Error reading task outputhadoop-slave8
15/07/18 11:50:06 INFO mapreduce.Job: Task Id : 
attempt_201507181137_0001_r_07_0, Status : FAILED
Task attempt_201507181137_0001_r_07_0 failed to report status for 601 
seconds. Killing!
15/07/18 11:50:08 WARN mapreduce.Job: Error reading task outputhadoop-slave11
15/07/18 11:50:08 WARN mapreduce.Job: Error reading task outputhadoop-slave11
15/07/18 11:50:08 INFO mapreduce.Job: Task Id : 
attempt_201507181137_0001_r_08_0, Status : FAILED
Task attempt_201507181137_0001_r_08_0 failed to report status for 601 
seconds. Killing!
15/07/18 11:50:11 WARN mapreduce.Job: Error reading task outputhadoop-slave9
15/07/18 11:50:11 WARN mapreduce.Job: Error reading task outputhadoop-slave9
15/07/18 11:50:11 INFO mapreduce.Job: Task Id : 
attempt_201507181137_0001_r_09_0, Status : FAILED
Task attempt_201507181137_0001_r_09_0 failed to report status for 601 
seconds. Killing!
15/07/18 11:50:13 WARN mapreduce.Job: Error reading task outputhadoop-slave4
15/07/18 11:50:13 WARN mapreduce.Job: Error reading task outputhadoop-slave4
15/07/18 11:50:13 INFO mapreduce.Job: Task Id : 
attempt_201507181137_0001_r_06_0, Status : FAILED
Task attempt_201507181137_0001_r_06_0 failed to report status for 601 
seconds. Killing!
15/07/18 11:50:16 WARN mapreduce.Job: Error reading task outputhadoop-slave6
15/07/18 11:50:16 WARN mapreduce.Job: Error reading task outputhadoop-slave6
15/07/18 11:50:16 INFO mapreduce.Job: Task Id : 
attempt_201507181137_0001_r_05_0, Status : FAILED
Task attempt_201507181137_0001_r_05_0 failed to report status for 601 
seconds. Killing!
15/07/18 11:50:18 WARN mapreduce.Job: Error reading task outputhadoop-slave1
15/07/18 11:50:18 WARN 
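The repeated "failed to report status for 60x seconds. Killing!" messages above are the MapReduce task timeout killing reducers that stopped reporting progress. A hedged workaround on slow hardware (not a fix for the underlying stall) is to raise the timeout in mapred-site.xml; the property name below is the 2.x-style name, and older releases used `mapred.task.timeout`, so verify it against your release's defaults:

```xml
<!-- mapred-site.xml: raise the per-task progress-report timeout -->
<!-- sketch only; the default is 600000 ms (10 minutes) -->
<property>
  <name>mapreduce.task.timeout</name>
  <value>1200000</value> <!-- 20 minutes; illustrative value -->
</property>
```

A better long-term fix is to find why the reducers stall (shuffle, memory pressure, or DNS issues on the slave nodes), since raising the timeout only delays the kill.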

Hadoop 1.2.1 Source file

2015-07-18 Thread Vikram Bajaj
Hey,
I'm new to Hadoop, so please correct me if I'm wrong and bear with me :)

I want to build Hadoop 1.2.1 on my Ubuntu VM.

I'm not able to find the .src.tar.gz file for 1.2.1.

Can anyone help me out?

Thanks,
Vikram Bajaj.


Re: Hadoop 1.2.1 Source file

2015-07-18 Thread James Bond
Did you get to look at this?
https://wiki.apache.org/hadoop/HowToContribute

and this
https://git-wip-us.apache.org/repos/asf?p=hadoop.git;a=blob;f=BUILDING.txt

Question: What are you trying to do here? Are you trying to contribute or
are you trying to learn?

On Sat, Jul 18, 2015 at 5:29 PM, Vikram Bajaj vikrambajaj220...@gmail.com
wrote:

 So, now that I have the .tar.gz file, how do I build hadoop?

 P.S. : I don't know if this matters, but I've already installed java, ssh,
 and set up the namenode. All the nodes (namenode, secondarynamenode,
 datanode) start up successfully and I also ran a mapreduce example that
 worked.

 How do I proceed with building hadoop?
 On 18 Jul 2015 4:49 pm, Vikram Bajaj vikrambajaj220...@gmail.com
 wrote:

 I was just confused because versions like 2.7.1 have both the .tar.gz
 file as well as the .src.tar.gz file, while version 1.2.1 has only the
 .tar.gz file.
 On 18 Jul 2015 4:48 pm, Vikram Bajaj vikrambajaj220...@gmail.com
 wrote:

 Okay! Thanks :)
 On 18 Jul 2015 4:39 pm, Ted Yu yuzhih...@gmail.com wrote:

 http://apache.arvixe.com/hadoop/common/stable1/

 The .tar.gz has source code.




 On Jul 18, 2015, at 3:20 AM, Vikram Bajaj vikrambajaj220...@gmail.com
 wrote:

 Hey,
 I'm new to Hadoop, so please correct me if I'm wrong and bear with me :)

 I want to build Hadoop 1.2.1 on my Ubuntu VM.

 I'm not able to find the .src.tar.gz file for 1.2.1.

 Can anyone help me out?

 Thanks,
 Vikram Bajaj.




Re: Hadoop 1.2.1 Source file

2015-07-18 Thread Vikram Bajaj
I'm actually using Ubuntu as a VM on Windows 8.1.
On 18 Jul 2015 4:01 pm, Nitin Pawar nitinpawar...@gmail.com wrote:

 what operating system are you using ?

 read about git and source code management

 On Sat, Jul 18, 2015 at 3:57 PM, Vikram Bajaj vikrambajaj220...@gmail.com
  wrote:

 Could you please explain that in layman's terms? I'm pretty new to all
 this :)
 On 18 Jul 2015 3:52 pm, Nitin Pawar nitinpawar...@gmail.com wrote:

 try checking out git repo and switch to branch 1.2 ?
 https://github.com/apache/hadoop/tree/branch-1.2

 On Sat, Jul 18, 2015 at 3:50 PM, Vikram Bajaj 
 vikrambajaj220...@gmail.com wrote:

 Hey,
 I'm new to Hadoop, so please correct me if I'm wrong and bear with me :)

 I want to build Hadoop 1.2.1 on my Ubuntu VM.

 I'm not able to find the .src.tar.gz file for 1.2.1.

 Can anyone help me out?

 Thanks,
 Vikram Bajaj.




 --
 Nitin Pawar




 --
 Nitin Pawar



Re: Hadoop 1.2.1 Source file

2015-07-18 Thread Vikram Bajaj
Okay. Thank you!

But, is there any direct file to be downloaded? Versions like 2.7.1 have
the .src.tar.gz file... Has it been removed for 1.2.1?
On 18 Jul 2015 4:17 pm, Nitin Pawar nitinpawar...@gmail.com wrote:

 please read about how to clone git repository and switch branch.
 It would help you to get the code

 On Sat, Jul 18, 2015 at 4:05 PM, Vikram Bajaj vikrambajaj220...@gmail.com
  wrote:

 I'm actually using Ubuntu as a VM on Windows 8.1.
 On 18 Jul 2015 4:01 pm, Nitin Pawar nitinpawar...@gmail.com wrote:

 what operating system are you using ?

 read about git and source code management

 On Sat, Jul 18, 2015 at 3:57 PM, Vikram Bajaj 
 vikrambajaj220...@gmail.com wrote:

 Could you please explain that in layman's terms? I'm pretty new to all
 this :)
 On 18 Jul 2015 3:52 pm, Nitin Pawar nitinpawar...@gmail.com wrote:

 try checking out git repo and switch to branch 1.2 ?
 https://github.com/apache/hadoop/tree/branch-1.2

 On Sat, Jul 18, 2015 at 3:50 PM, Vikram Bajaj 
 vikrambajaj220...@gmail.com wrote:

 Hey,
 I'm new to Hadoop, so please correct me if I'm wrong and bear with me
 :)

 I want to build Hadoop 1.2.1 on my Ubuntu VM.

 I'm not able to find the .src.tar.gz file for 1.2.1.

 Can anyone help me out?

 Thanks,
 Vikram Bajaj.




 --
 Nitin Pawar




 --
 Nitin Pawar




 --
 Nitin Pawar



Re: total vcores per node containers in yarn

2015-07-18 Thread Harsh J
What version of Apache Hadoop are you running? Recent changes have made
YARN auto-compute this via hardware detection by default (rather than
using the fixed default of 8).

On Fri, Jul 17, 2015 at 11:31 PM Shushant Arora shushantaror...@gmail.com
wrote:

 In Yarn there is a setting to specify no of vcores that can be allocated
 to containers.

 yarn.nodemanager.resource.cpu-vcores

 In my cluster's nodes yarn-site.xml, this property is not specified. But
 total vcores displayed on RM's web page for the nodes are different than
 the default(8). Is there any other place to control this number ?
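For reference, a hedged yarn-site.xml sketch pinning the vcore count explicitly, so neither the old fixed default of 8 nor hardware auto-detection applies (the value is illustrative, not a recommendation):

```xml
<!-- yarn-site.xml: pin the vcores a NodeManager offers to containers -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value> <!-- illustrative; set to what this node should expose -->
</property>
```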



Re: Hadoop 1.2.1 Source file

2015-07-18 Thread Nitin Pawar
Try checking out the git repo and switching to branch 1.2:
https://github.com/apache/hadoop/tree/branch-1.2

On Sat, Jul 18, 2015 at 3:50 PM, Vikram Bajaj vikrambajaj220...@gmail.com
wrote:

 Hey,
 I'm new to Hadoop, so please correct me if I'm wrong and bear with me :)

 I want to build Hadoop 1.2.1 on my Ubuntu VM.

 I'm not able to find the .src.tar.gz file for 1.2.1.

 Can anyone help me out?

 Thanks,
 Vikram Bajaj.




-- 
Nitin Pawar


Re: Hadoop 1.2.1 Source file

2015-07-18 Thread Nitin Pawar
Please read about how to clone a git repository and switch branches.
That will help you get the code.
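The clone-and-switch steps can be sketched as follows (assumes git is installed; branch name taken from the GitHub branch-1.2 link mentioned earlier in the thread):

```shell
# Clone the Apache Hadoop repository and switch to the 1.2 maintenance branch
git clone https://github.com/apache/hadoop.git
cd hadoop
git checkout branch-1.2
```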

On Sat, Jul 18, 2015 at 4:05 PM, Vikram Bajaj vikrambajaj220...@gmail.com
wrote:

 I'm actually using Ubuntu as a VM on Windows 8.1.
 On 18 Jul 2015 4:01 pm, Nitin Pawar nitinpawar...@gmail.com wrote:

 what operating system are you using ?

 read about git and source code management

 On Sat, Jul 18, 2015 at 3:57 PM, Vikram Bajaj 
 vikrambajaj220...@gmail.com wrote:

 Could you please explain that in layman's terms? I'm pretty new to all
 this :)
 On 18 Jul 2015 3:52 pm, Nitin Pawar nitinpawar...@gmail.com wrote:

 try checking out git repo and switch to branch 1.2 ?
 https://github.com/apache/hadoop/tree/branch-1.2

 On Sat, Jul 18, 2015 at 3:50 PM, Vikram Bajaj 
 vikrambajaj220...@gmail.com wrote:

 Hey,
 I'm new to Hadoop, so please correct me if I'm wrong and bear with me
 :)

 I want to build Hadoop 1.2.1 on my Ubuntu VM.

 I'm not able to find the .src.tar.gz file for 1.2.1.

 Can anyone help me out?

 Thanks,
 Vikram Bajaj.




 --
 Nitin Pawar




 --
 Nitin Pawar




-- 
Nitin Pawar


Re: Hadoop 1.2.1 Source file

2015-07-18 Thread Ted Yu
I doubt there would be any new 1.x hadoop release.

Please take a look at the links James provided.

See also:
http://hadoop.apache.org/docs/r2.7.1/

There's a link to the trunk HDFS Jenkins job:
https://builds.apache.org/job/Hadoop-hdfs-trunk/

where you can find the command for building:

+ /home/jenkins/tools/maven/latest/bin/mvn clean install -DskipTests
-Drequire.test.libhadoop -Pnative


There are many components in Hadoop. Once you determine the component
you want to focus on, you can subscribe to the corresponding mailing list:


http://hadoop.apache.org/mailing_lists.html


Cheers


On Sat, Jul 18, 2015 at 6:50 AM, Vikram Bajaj vikrambajaj220...@gmail.com
wrote:

 I'm trying to contribute. But I'm not sure how to. I'm really a beginner
 and I clearly need to learn first.

 But I'd like to know how to continue what I started.
 On 18 Jul 2015 7:06 pm, James Bond bond.b...@gmail.com wrote:

 Did you get to look at this?
 https://wiki.apache.org/hadoop/HowToContribute

 and this
 https://git-wip-us.apache.org/repos/asf?p=hadoop.git;a=blob;f=BUILDING.txt

 Question: What are you trying to do here? Are you trying to contribute or
 are you trying to learn?

 On Sat, Jul 18, 2015 at 5:29 PM, Vikram Bajaj 
 vikrambajaj220...@gmail.com wrote:

 So, now that I have the .tar.gz file, how do I build hadoop?

 P.S. : I don't know if this matters, but I've already installed java,
 ssh, and set up the namenode. All the nodes (namenode, secondarynamenode,
 datanode) start up successfully and I also ran a mapreduce example that
 worked.

 How do I proceed with building hadoop?
 On 18 Jul 2015 4:49 pm, Vikram Bajaj vikrambajaj220...@gmail.com
 wrote:

 I was just confused because versions like 2.7.1 have both the .tar.gz
 file as well as the .src.tar.gz file, while version 1.2.1 has only the
 .tar.gz file.
 On 18 Jul 2015 4:48 pm, Vikram Bajaj vikrambajaj220...@gmail.com
 wrote:

 Okay! Thanks :)
 On 18 Jul 2015 4:39 pm, Ted Yu yuzhih...@gmail.com wrote:

 http://apache.arvixe.com/hadoop/common/stable1/

 The .tar.gz has source code.




 On Jul 18, 2015, at 3:20 AM, Vikram Bajaj 
 vikrambajaj220...@gmail.com wrote:

 Hey,
 I'm new to Hadoop, so please correct me if I'm wrong and bear with me
 :)

 I want to build Hadoop 1.2.1 on my Ubuntu VM.

 I'm not able to find the .src.tar.gz file for 1.2.1.

 Can anyone help me out?

 Thanks,
 Vikram Bajaj.





Re: Hadoop 1.2.1 Source file

2015-07-18 Thread Vikram Bajaj
Okay! Thanks :)
On 18 Jul 2015 4:39 pm, Ted Yu yuzhih...@gmail.com wrote:

 http://apache.arvixe.com/hadoop/common/stable1/

 The .tar.gz has source code.




 On Jul 18, 2015, at 3:20 AM, Vikram Bajaj vikrambajaj220...@gmail.com
 wrote:

 Hey,
 I'm new to Hadoop, so please correct me if I'm wrong and bear with me :)

 I want to build Hadoop 1.2.1 on my Ubuntu VM.

 I'm not able to find the .src.tar.gz file for 1.2.1.

 Can anyone help me out?

 Thanks,
 Vikram Bajaj.




Re: Hadoop 1.2.1 Source file

2015-07-18 Thread Vikram Bajaj
I was just confused because versions like 2.7.1 have both the .tar.gz file
as well as the .src.tar.gz file, while version 1.2.1 has only the .tar.gz
file.
On 18 Jul 2015 4:48 pm, Vikram Bajaj vikrambajaj220...@gmail.com wrote:

 Okay! Thanks :)
 On 18 Jul 2015 4:39 pm, Ted Yu yuzhih...@gmail.com wrote:

 http://apache.arvixe.com/hadoop/common/stable1/

 The .tar.gz has source code.




 On Jul 18, 2015, at 3:20 AM, Vikram Bajaj vikrambajaj220...@gmail.com
 wrote:

 Hey,
 I'm new to Hadoop, so please correct me if I'm wrong and bear with me :)

 I want to build Hadoop 1.2.1 on my Ubuntu VM.

 I'm not able to find the .src.tar.gz file for 1.2.1.

 Can anyone help me out?

 Thanks,
 Vikram Bajaj.




Re: Hadoop 1.2.1 Source file

2015-07-18 Thread Vikram Bajaj
I'm trying to contribute. But I'm not sure how to. I'm really a beginner
and I clearly need to learn first.

But I'd like to know how to continue what I started.
On 18 Jul 2015 7:06 pm, James Bond bond.b...@gmail.com wrote:

 Did you get to look at this?
 https://wiki.apache.org/hadoop/HowToContribute

 and this
 https://git-wip-us.apache.org/repos/asf?p=hadoop.git;a=blob;f=BUILDING.txt

 Question: What are you trying to do here? Are you trying to contribute or
 are you trying to learn?

 On Sat, Jul 18, 2015 at 5:29 PM, Vikram Bajaj vikrambajaj220...@gmail.com
  wrote:

 So, now that I have the .tar.gz file, how do I build hadoop?

 P.S. : I don't know if this matters, but I've already installed java,
 ssh, and set up the namenode. All the nodes (namenode, secondarynamenode,
 datanode) start up successfully and I also ran a mapreduce example that
 worked.

 How do I proceed with building hadoop?
 On 18 Jul 2015 4:49 pm, Vikram Bajaj vikrambajaj220...@gmail.com
 wrote:

 I was just confused because versions like 2.7.1 have both the .tar.gz
 file as well as the .src.tar.gz file, while version 1.2.1 has only the
 .tar.gz file.
 On 18 Jul 2015 4:48 pm, Vikram Bajaj vikrambajaj220...@gmail.com
 wrote:

 Okay! Thanks :)
 On 18 Jul 2015 4:39 pm, Ted Yu yuzhih...@gmail.com wrote:

 http://apache.arvixe.com/hadoop/common/stable1/

 The .tar.gz has source code.




 On Jul 18, 2015, at 3:20 AM, Vikram Bajaj vikrambajaj220...@gmail.com
 wrote:

 Hey,
 I'm new to Hadoop, so please correct me if I'm wrong and bear with me
 :)

 I want to build Hadoop 1.2.1 on my Ubuntu VM.

 I'm not able to find the .src.tar.gz file for 1.2.1.

 Can anyone help me out?

 Thanks,
 Vikram Bajaj.





Re: issues about hadoop-0.20.0

2015-07-18 Thread Harsh J
Apache Hadoop 0.20 and 0.21 are both very old, unmaintained releases at
this point and may carry issues that were fixed only in later releases.
Please consider using a newer release.

Is there a specific reason you intend to use 0.21.0, which came out of a
branch long since abandoned?

On Sat, Jul 18, 2015 at 1:27 PM longfei li hblong...@163.com wrote:

 Hello!
 I built a Hadoop cluster of 12 ARM-based (Cubietruck) nodes. A simple
 WordCount job (counting word occurrences) runs perfectly. But when I run a
 more complex program such as pi, like this:
 ./hadoop jar hadoop-example-0.21.0.jar pi 100 10
 I get the following output:
 15/07/18 11:38:54 WARN conf.Configuration: DEPRECATED: hadoop-site.xml
 found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use
 core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of
 core-default.xml, mapred-default.xml and hdfs-default.xml respectively
 15/07/18 11:38:54 INFO security.Groups: Group mapping
 impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping;
 cacheTimeout=30
 15/07/18 11:38:55 WARN conf.Configuration: mapred.task.id is deprecated.
 Instead, use mapreduce.task.attempt.id
 15/07/18 11:38:55 WARN mapreduce.JobSubmitter: Use GenericOptionsParser
 for parsing the arguments. Applications should implement Tool for the same.
 15/07/18 11:38:55 INFO input.FileInputFormat: Total input paths to process
 : 1
 15/07/18 11:38:58 WARN conf.Configuration: mapred.map.tasks is deprecated.
 Instead, use mapreduce.job.maps
 15/07/18 11:38:58 INFO mapreduce.JobSubmitter: number of splits:1
 15/07/18 11:38:58 INFO mapreduce.JobSubmitter: adding the following
 namenodes' delegation tokens:null
 15/07/18 11:38:59 INFO mapreduce.Job: Running job: job_201507181137_0001
 15/07/18 11:39:00 INFO mapreduce.Job:  map 0% reduce 0%
 15/07/18 11:39:20 INFO mapreduce.Job:  map 100% reduce 0%
 15/07/18 11:39:35 INFO mapreduce.Job:  map 100% reduce 10%
 15/07/18 11:39:36 INFO mapreduce.Job:  map 100% reduce 20%
 15/07/18 11:39:38 INFO mapreduce.Job:  map 100% reduce 90%
 15/07/18 11:39:58 INFO mapreduce.Job:  map 100% reduce 100%
 15/07/18 11:49:47 INFO mapreduce.Job:  map 100% reduce 89%
 15/07/18 11:49:49 INFO mapreduce.Job:  map 100% reduce 19%
 15/07/18 11:49:54 INFO mapreduce.Job: Task Id :
 attempt_201507181137_0001_r_00_0, Status : FAILED
 Task attempt_201507181137_0001_r_00_0 failed to report status for 602
 seconds. Killing!
 15/07/18 11:49:57 WARN mapreduce.Job: Error reading task
 outputhadoop-slave7
 15/07/18 11:49:57 WARN mapreduce.Job: Error reading task
 outputhadoop-slave7
 15/07/18 11:49:58 INFO mapreduce.Job: Task Id :
 attempt_201507181137_0001_r_02_0, Status : FAILED
 Task attempt_201507181137_0001_r_02_0 failed to report status for 601
 seconds. Killing!
 15/07/18 11:50:00 WARN mapreduce.Job: Error reading task
 outputhadoop-slave5
 15/07/18 11:50:00 WARN mapreduce.Job: Error reading task
 outputhadoop-slave5
 15/07/18 11:50:00 INFO mapreduce.Job: Task Id :
 attempt_201507181137_0001_r_03_0, Status : FAILED
 Task attempt_201507181137_0001_r_03_0 failed to report status for 601
 seconds. Killing!
 15/07/18 11:50:03 WARN mapreduce.Job: Error reading task
 outputhadoop-slave12
 15/07/18 11:50:03 WARN mapreduce.Job: Error reading task
 outputhadoop-slave12
 15/07/18 11:50:03 INFO mapreduce.Job: Task Id :
 attempt_201507181137_0001_r_04_0, Status : FAILED
 Task attempt_201507181137_0001_r_04_0 failed to report status for 601
 seconds. Killing!
 15/07/18 11:50:06 WARN mapreduce.Job: Error reading task
 outputhadoop-slave8
 15/07/18 11:50:06 WARN mapreduce.Job: Error reading task
 outputhadoop-slave8
 15/07/18 11:50:06 INFO mapreduce.Job: Task Id :
 attempt_201507181137_0001_r_07_0, Status : FAILED
 Task attempt_201507181137_0001_r_07_0 failed to report status for 601
 seconds. Killing!
 15/07/18 11:50:08 WARN mapreduce.Job: Error reading task
 outputhadoop-slave11
 15/07/18 11:50:08 WARN mapreduce.Job: Error reading task
 outputhadoop-slave11
 15/07/18 11:50:08 INFO mapreduce.Job: Task Id :
 attempt_201507181137_0001_r_08_0, Status : FAILED
 Task attempt_201507181137_0001_r_08_0 failed to report status for 601
 seconds. Killing!
 15/07/18 11:50:11 WARN mapreduce.Job: Error reading task
 outputhadoop-slave9
 15/07/18 11:50:11 WARN mapreduce.Job: Error reading task
 outputhadoop-slave9
 15/07/18 11:50:11 INFO mapreduce.Job: Task Id :
 attempt_201507181137_0001_r_09_0, Status : FAILED
 Task attempt_201507181137_0001_r_09_0 failed to report status for 601
 seconds. Killing!
 15/07/18 11:50:13 WARN mapreduce.Job: Error reading task
 outputhadoop-slave4
 15/07/18 11:50:13 WARN mapreduce.Job: Error reading task
 outputhadoop-slave4
 15/07/18 11:50:13 INFO mapreduce.Job: Task Id :
 attempt_201507181137_0001_r_06_0, Status : FAILED
 Task attempt_201507181137_0001_r_06_0 failed to report status for 601
 seconds. Killing!
 15/07/18 11:50:16 WARN 

Re: sendChunks error

2015-07-18 Thread marius

Sorry, forgot about that.

Here is the whole datanode log since the last startup:
http://pastebin.com/DAN6tQJY

The Hadoop version is 2.6.0; I installed it via the tarball.
It is a two-node cluster, with one node acting as both master and slave 
and one pure slave node. I have already tested this with dfs.replication 
set to 1 and to 3.

And your translation is correct.

Thanks


Am 17.07.2015 um 18:15 schrieb Ted Yu:
bq. IOException: Die Verbindung wurde vom Kommunikationspartner 
zurückgesetzt


Looks like the above means 'The connection was reset by the 
communication partner'


Which hadoop release do you use ?

Can you pastebin more of the datanode log ?

Thanks

On Fri, Jul 17, 2015 at 9:11 AM, marius m.die0...@googlemail.com 
mailto:m.die0...@googlemail.com wrote:


Hi,

when i tried to run some jobs on my Hadoop cluster, I found the
following error in my datanode logs
(the German means 'connection reset by peer'):

2015-07-17 16:33:45,671 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode:
BlockSender.sendChunks() exception:
java.io.IOException: Die Verbindung wurde vom
Kommunikationspartner zurückgesetzt
at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
at
sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:443)
at
sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:575)
at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:223)
at

org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:559)
at

org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:728)
at

org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:496)
at

org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
at

org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
at java.lang.Thread.run(Thread.java:745)

I already googled this but could not find anything.
The error appears several times, then vanishes and the job proceeds
normally; the job does not fail. This happens on various nodes. I
already reformatted my namenode, but that did not fix it.

Thanks and greetings

Marius






Re: Hadoop 1.2.1 Source file

2015-07-18 Thread Vikram Bajaj
Could you please explain that in layman's terms? I'm pretty new to all this
:)
On 18 Jul 2015 3:52 pm, Nitin Pawar nitinpawar...@gmail.com wrote:

 try checking out git repo and switch to branch 1.2 ?
 https://github.com/apache/hadoop/tree/branch-1.2

 On Sat, Jul 18, 2015 at 3:50 PM, Vikram Bajaj vikrambajaj220...@gmail.com
  wrote:

 Hey,
 I'm new to Hadoop, so please correct me if I'm wrong and bear with me :)

 I want to build Hadoop 1.2.1 on my Ubuntu VM.

 I'm not able to find the .src.tar.gz file for 1.2.1.

 Can anyone help me out?

 Thanks,
 Vikram Bajaj.




 --
 Nitin Pawar



Re: Hadoop 1.2.1 Source file

2015-07-18 Thread Ted Yu
http://apache.arvixe.com/hadoop/common/stable1/

The .tar.gz has source code. 
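To build from that tarball: Hadoop 1.x used Ant rather than Maven. A minimal sketch, assuming Ant and a JDK are installed; the target name is taken from the typical 1.x build.xml layout and should be verified against the bundled build.xml:

```shell
# Unpack the 1.2.1 release tarball (contains src/ and build.xml)
tar xzf hadoop-1.2.1.tar.gz
cd hadoop-1.2.1
# Compile; run 'ant -p' to list all available targets first
ant compile
```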




 On Jul 18, 2015, at 3:20 AM, Vikram Bajaj vikrambajaj220...@gmail.com wrote:
 
 Hey,
 I'm new to Hadoop, so please correct me if I'm wrong and bear with me :)
 
 I want to build Hadoop 1.2.1 on my Ubuntu VM.
 
 I'm not able to find the .src.tar.gz file for 1.2.1.
 
 Can anyone help me out?
 
 Thanks,
 Vikram Bajaj.


Re: Hadoop 1.2.1 Source file

2015-07-18 Thread Vikram Bajaj
So, now that I have the .tar.gz file, how do I build hadoop?

P.S. : I don't know if this matters, but I've already installed java, ssh,
and set up the namenode. All the nodes (namenode, secondarynamenode,
datanode) start up successfully and I also ran a mapreduce example that
worked.

How do I proceed with building hadoop?
On 18 Jul 2015 4:49 pm, Vikram Bajaj vikrambajaj220...@gmail.com wrote:

 I was just confused because versions like 2.7.1 have both the .tar.gz file
 as well as the .src.tar.gz file, while version 1.2.1 has only the .tar.gz
 file.
 On 18 Jul 2015 4:48 pm, Vikram Bajaj vikrambajaj220...@gmail.com
 wrote:

 Okay! Thanks :)
 On 18 Jul 2015 4:39 pm, Ted Yu yuzhih...@gmail.com wrote:

 http://apache.arvixe.com/hadoop/common/stable1/

 The .tar.gz has source code.




 On Jul 18, 2015, at 3:20 AM, Vikram Bajaj vikrambajaj220...@gmail.com
 wrote:

 Hey,
 I'm new to Hadoop, so please correct me if I'm wrong and bear with me :)

 I want to build Hadoop 1.2.1 on my Ubuntu VM.

 I'm not able to find the .src.tar.gz file for 1.2.1.

 Can anyone help me out?

 Thanks,
 Vikram Bajaj.




Re: total vcores per node containers in yarn

2015-07-18 Thread Shushant Arora
It's Hadoop 2.5.0.
What's the logic of the default hardware detection? My node has 8 physical
cores and 32 virtual cores, but the RM UI shows 26 as the vcores available
on this node.

On Sat, Jul 18, 2015 at 7:22 PM, Harsh J ha...@cloudera.com wrote:

 What version of Apache Hadoop are you running? Recent changes have made
 YARN to auto-compute this via hardware detection, by default (rather than
 the 8 default).

 On Fri, Jul 17, 2015 at 11:31 PM Shushant Arora shushantaror...@gmail.com
 wrote:

 In Yarn there is a setting to specify no of vcores that can be allocated
 to containers.

 yarn.nodemanager.resource.cpu-vcores

 In my cluster's nodes yarn-site.xml, this property is not specified. But
 total vcores displayed on RM's web page for the nodes are different than
 the default(8). Is there any other place to control this number ?




Re: Hadoop 1.2.1 Source file

2015-07-18 Thread Nitin Pawar
What operating system are you using?

Read about git and source code management.

On Sat, Jul 18, 2015 at 3:57 PM, Vikram Bajaj vikrambajaj220...@gmail.com
wrote:

 Could you please explain that in layman's terms? I'm pretty new to all
 this :)
 On 18 Jul 2015 3:52 pm, Nitin Pawar nitinpawar...@gmail.com wrote:

 try checking out git repo and switch to branch 1.2 ?
 https://github.com/apache/hadoop/tree/branch-1.2

 On Sat, Jul 18, 2015 at 3:50 PM, Vikram Bajaj 
 vikrambajaj220...@gmail.com wrote:

 Hey,
 I'm new to Hadoop, so please correct me if I'm wrong and bear with me :)

 I want to build Hadoop 1.2.1 on my Ubuntu VM.

 I'm not able to find the .src.tar.gz file for 1.2.1.

 Can anyone help me out?

 Thanks,
 Vikram Bajaj.




 --
 Nitin Pawar




-- 
Nitin Pawar


Re: Hadoop 1.2.1 Source file

2015-07-18 Thread Vikram Bajaj
Okay :)

I'm focusing on hdfs, by the way. Forgot to mention that. My bad!
On 18 Jul 2015 7:43 pm, Ted Yu yuzhih...@gmail.com wrote:

 I doubt there would be any new 1.x hadoop release.

 Please take a look at the links James provided.

 See also:
 http://hadoop.apache.org/docs/r2.7.1/

 There's link to trunk hdfs Jenkins:
 https://builds.apache.org/job/Hadoop-hdfs-trunk/

 where you can find command for building:

 + /home/jenkins/tools/maven/latest/bin/mvn clean install -DskipTests 
 -Drequire.test.libhadoop -Pnative


 There're many components of hadoop. Once you determine the component you want 
 to focus, you can subscribe to corresponding mailing list:


 http://hadoop.apache.org/mailing_lists.html


 Cheers


 On Sat, Jul 18, 2015 at 6:50 AM, Vikram Bajaj vikrambajaj220...@gmail.com
  wrote:

 I'm trying to contribute. But I'm not sure how to. I'm really a beginner
 and I clearly need to learn first.

 But I'd like to know how to continue what I started.
 On 18 Jul 2015 7:06 pm, James Bond bond.b...@gmail.com wrote:

 Did you get to look at this?
 https://wiki.apache.org/hadoop/HowToContribute

 and this
 https://git-wip-us.apache.org/repos/asf?p=hadoop.git;a=blob;f=BUILDING.txt

 Question: What are you trying to do here? Are you trying to contribute
 or are you trying to learn?

 On Sat, Jul 18, 2015 at 5:29 PM, Vikram Bajaj 
 vikrambajaj220...@gmail.com wrote:

 So, now that I have the .tar.gz file, how do I build hadoop?

 P.S. : I don't know if this matters, but I've already installed java,
 ssh, and set up the namenode. All the nodes (namenode, secondarynamenode,
 datanode) start up successfully and I also ran a mapreduce example that
 worked.

 How do I proceed with building hadoop?
 On 18 Jul 2015 4:49 pm, Vikram Bajaj vikrambajaj220...@gmail.com
 wrote:

 I was just confused because versions like 2.7.1 have both the .tar.gz
 file as well as the .src.tar.gz file, while version 1.2.1 has only the
 .tar.gz file.
 On 18 Jul 2015 4:48 pm, Vikram Bajaj vikrambajaj220...@gmail.com
 wrote:

 Okay! Thanks :)
 On 18 Jul 2015 4:39 pm, Ted Yu yuzhih...@gmail.com wrote:

 http://apache.arvixe.com/hadoop/common/stable1/

 The .tar.gz has source code.




 On Jul 18, 2015, at 3:20 AM, Vikram Bajaj 
 vikrambajaj220...@gmail.com wrote:

 Hey,
 I'm new to Hadoop, so please correct me if I'm wrong and bear with
 me :)

 I want to build Hadoop 1.2.1 on my Ubuntu VM.

 I'm not able to find the .src.tar.gz file for 1.2.1.

 Can anyone help me out?

 Thanks,
 Vikram Bajaj.






Re: total vcores per node containers in yarn

2015-07-18 Thread Neil Jonkers
Hi,

Also, what scheduler are you using?
The DefaultResourceCalculator considers only memory.
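If CPU should be taken into account as well, the Capacity Scheduler's resource calculator can be switched. A hedged sketch (property path per the usual capacity-scheduler.xml convention; verify for your release):

```xml
<!-- capacity-scheduler.xml: account for CPU as well as memory -->
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
```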

Regards

Original message
From: Shushant Arora shushantaror...@gmail.com
Date: 18/07/2015 4:18 PM (GMT+02:00)
To: user@hadoop.apache.org
Subject: Re: total vcores per node containers in yarn

Its hadoop 2.5.0.
Whats the logic of default using hardware detection. Say My node has 8 actual 
core and 32 virtual cores. Its taking 26 as value of vcores available of this 
node on RM UI.

On Sat, Jul 18, 2015 at 7:22 PM, Harsh J ha...@cloudera.com wrote:
What version of Apache Hadoop are you running? Recent changes have made YARN to 
auto-compute this via hardware detection, by default (rather than the 8 
default).

On Fri, Jul 17, 2015 at 11:31 PM Shushant Arora shushantaror...@gmail.com 
wrote:
In Yarn there is a setting to specify no of vcores that can be allocated to 
containers.

yarn.nodemanager.resource.cpu-vcores

In my cluster's nodes' yarn-site.xml, this property is not specified, but the 
total vcores displayed on the RM's web page for the nodes differ from the 
default (8). Is there any other place to control this number?
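
When the property is left unset and the value YARN computes is not what you want, it can be pinned explicitly in yarn-site.xml; a minimal sketch (the value 8 is just an example):

```xml
<!-- yarn-site.xml: fix the number of vcores this NodeManager
     advertises, overriding any hardware auto-detection -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>
</property>
```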



Re: Hadoop 1.2.1 Source file

2015-07-18 Thread Sean Busbey
Hi Vikram!

Please join the common-dev@hadoop mailing list for these kinds of questions.

Working directly with the source repository is meant for folks working on
contributions in the community, and we define that boundary by joining the
development lists rather than the user lists.

-- 
Sean
On Jul 18, 2015 9:16 AM, Vikram Bajaj vikrambajaj220...@gmail.com wrote:

 Okay :)

 I'm focusing on hdfs, by the way. Forgot to mention that. My bad!
 On 18 Jul 2015 7:43 pm, Ted Yu yuzhih...@gmail.com wrote:

 I doubt there would be any new 1.x hadoop release.

 Please take a look at the links James provided.

 See also:
 http://hadoop.apache.org/docs/r2.7.1/

 There's a link to the trunk hdfs Jenkins build:
 https://builds.apache.org/job/Hadoop-hdfs-trunk/

 where you can find the command for building:

 + /home/jenkins/tools/maven/latest/bin/mvn clean install -DskipTests 
 -Drequire.test.libhadoop -Pnative


 There are many components in Hadoop. Once you determine the component you 
 want to focus on, you can subscribe to the corresponding mailing list:


 http://hadoop.apache.org/mailing_lists.html


 Cheers


 On Sat, Jul 18, 2015 at 6:50 AM, Vikram Bajaj 
 vikrambajaj220...@gmail.com wrote:

 I'm trying to contribute. But I'm not sure how to. I'm really a beginner
 and I clearly need to learn first.

 But I'd like to know how to continue what I started.
 On 18 Jul 2015 7:06 pm, James Bond bond.b...@gmail.com wrote:

 Did you get to look at this?
 https://wiki.apache.org/hadoop/HowToContribute

 and this
 https://git-wip-us.apache.org/repos/asf?p=hadoop.git;a=blob;f=BUILDING.txt

 Question: What are you trying to do here? Are you trying to contribute
 or are you trying to learn?

 On Sat, Jul 18, 2015 at 5:29 PM, Vikram Bajaj 
 vikrambajaj220...@gmail.com wrote:

 So, now that I have the .tar.gz file, how do I build hadoop?

 P.S.: I don't know if this matters, but I've already installed java,
 ssh, and set up the namenode. All the nodes (namenode, secondarynamenode,
 datanode) start up successfully and I also ran a mapreduce example that
 worked.

 How do I proceed with building hadoop?
 On 18 Jul 2015 4:49 pm, Vikram Bajaj vikrambajaj220...@gmail.com
 wrote:

 I was just confused because versions like 2.7.1 have both the .tar.gz
 file as well as the .src.tar.gz file, while version 1.2.1 has only the
 .tar.gz file.
 On 18 Jul 2015 4:48 pm, Vikram Bajaj vikrambajaj220...@gmail.com
 wrote:

 Okay! Thanks :)
 On 18 Jul 2015 4:39 pm, Ted Yu yuzhih...@gmail.com wrote:

 http://apache.arvixe.com/hadoop/common/stable1/

 The .tar.gz has source code.




 On Jul 18, 2015, at 3:20 AM, Vikram Bajaj 
 vikrambajaj220...@gmail.com wrote:

 Hey,
 I'm new to Hadoop, so please correct me if I'm wrong and bear with
 me :)

 I want to build Hadoop 1.2.1 on my Ubuntu VM.

 I'm not able to find the .src.tar.gz file for 1.2.1.

 Can anyone help me out?

 Thanks,
 Vikram Bajaj.






Re: Hadoop 1.2.1 Source file

2015-07-18 Thread Vikram Bajaj
Sure.
On 18 Jul 2015 11:06 pm, Sean Busbey bus...@cloudera.com wrote:

 Hi Vikram!

 Please join the common-dev@hadoop mailing list for these kinds of
 questions.

 Working directly with the source repository is meant for folks working on
 contributions in the community, and we define that boundary by joining the
 development lists rather than the user lists.

 --
 Sean


Re: issues about hadoop-0.20.0

2015-07-18 Thread Ulul

Hi

I'd say that no matter what version is running, the parameters don't seem to 
fit the cluster: it can't handle 100 maps that each process a billion 
samples, so it's hitting the mapreduce timeout of 600 seconds.
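
For reference, each map in the pi example is doing nothing more than point sampling; a standalone sketch of that computation in plain Python (the real example uses a Halton sequence rather than pseudo-random draws, so this only approximates its behavior):

```python
import random

def estimate_pi(num_samples, seed=42):
    """Estimate pi by drawing points in the unit square and counting
    how many land inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / num_samples

print(estimate_pi(100_000))
```

With 100 maps each asked to do work on this scale on small ARM boards, a reduce-side stall past the task timeout is unsurprising.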


I'd try something like 20 10
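
The 600-second figure is the task liveness timeout, which is itself configurable if the hardware genuinely needs longer between progress reports; a sketch for mapred-site.xml (the property was named mapred.task.timeout in 0.20-era releases and mapreduce.task.timeout later; the value is an example in milliseconds):

```xml
<!-- mapred-site.xml: allow tasks 20 minutes (1200000 ms) without a
     progress report before they are killed; the default is 600000 -->
<property>
  <name>mapred.task.timeout</name>
  <value>1200000</value>
</property>
```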

Ulul

On 18/07/2015 12:17, Harsh J wrote:
Apache Hadoop 0.20 and 0.21 are both very old and unmaintained 
releases at this point, and may carry issues that later releases have 
fixed. Please consider using a newer release.


Is there a specific reason you intend to use 0.21.0, which came out of 
a branch long since abandoned?


On Sat, Jul 18, 2015 at 1:27 PM longfei li hblong...@163.com wrote:


Hello!
I built a hadoop cluster of 12 nodes based on ARM (cubietruck). I ran
the simple wordcount program to count the occurrences of h in
hello, and it ran perfectly. But when I run a heavier program like
pi, like this:
./hadoop jar hadoop-example-0.21.0.jar pi 100 10
I get this information:
15/07/18 11:38:54 WARN conf.Configuration: DEPRECATED:
hadoop-site.xml found in the classpath. Usage of hadoop-site.xml
is deprecated. Instead use core-site.xml, mapred-site.xml and
hdfs-site.xml to override properties of core-default.xml,
mapred-default.xml and hdfs-default.xml respectively
15/07/18 11:38:54 INFO security.Groups: Group mapping
impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping;
cacheTimeout=30
15/07/18 11:38:55 WARN conf.Configuration: mapred.task.id is
deprecated. Instead, use mapreduce.task.attempt.id
15/07/18 11:38:55 WARN mapreduce.JobSubmitter: Use
GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
15/07/18 11:38:55 INFO input.FileInputFormat: Total input paths to
process : 1
15/07/18 11:38:58 WARN conf.Configuration: mapred.map.tasks is
deprecated. Instead, use mapreduce.job.maps
15/07/18 11:38:58 INFO mapreduce.JobSubmitter: number of splits:1
15/07/18 11:38:58 INFO mapreduce.JobSubmitter: adding the
following namenodes' delegation tokens:null
15/07/18 11:38:59 INFO mapreduce.Job: Running job:
job_201507181137_0001
15/07/18 11:39:00 INFO mapreduce.Job:  map 0% reduce 0%
15/07/18 11:39:20 INFO mapreduce.Job:  map 100% reduce 0%
15/07/18 11:39:35 INFO mapreduce.Job:  map 100% reduce 10%
15/07/18 11:39:36 INFO mapreduce.Job:  map 100% reduce 20%
15/07/18 11:39:38 INFO mapreduce.Job:  map 100% reduce 90%
15/07/18 11:39:58 INFO mapreduce.Job:  map 100% reduce 100%
15/07/18 11:49:47 INFO mapreduce.Job:  map 100% reduce 89%
15/07/18 11:49:49 INFO mapreduce.Job:  map 100% reduce 19%
15/07/18 11:49:54 INFO mapreduce.Job: Task Id :
attempt_201507181137_0001_r_00_0, Status : FAILED
Task attempt_201507181137_0001_r_00_0 failed to report status
for 602 seconds. Killing!
15/07/18 11:49:57 WARN mapreduce.Job: Error reading task
outputhadoop-slave7
15/07/18 11:49:57 WARN mapreduce.Job: Error reading task
outputhadoop-slave7
15/07/18 11:49:58 INFO mapreduce.Job: Task Id :
attempt_201507181137_0001_r_02_0, Status : FAILED
Task attempt_201507181137_0001_r_02_0 failed to report status
for 601 seconds. Killing!
15/07/18 11:50:00 WARN mapreduce.Job: Error reading task
outputhadoop-slave5
15/07/18 11:50:00 WARN mapreduce.Job: Error reading task
outputhadoop-slave5
15/07/18 11:50:00 INFO mapreduce.Job: Task Id :
attempt_201507181137_0001_r_03_0, Status : FAILED
Task attempt_201507181137_0001_r_03_0 failed to report status
for 601 seconds. Killing!
15/07/18 11:50:03 WARN mapreduce.Job: Error reading task
outputhadoop-slave12
15/07/18 11:50:03 WARN mapreduce.Job: Error reading task
outputhadoop-slave12
15/07/18 11:50:03 INFO mapreduce.Job: Task Id :
attempt_201507181137_0001_r_04_0, Status : FAILED
Task attempt_201507181137_0001_r_04_0 failed to report status
for 601 seconds. Killing!
15/07/18 11:50:06 WARN mapreduce.Job: Error reading task
outputhadoop-slave8
15/07/18 11:50:06 WARN mapreduce.Job: Error reading task
outputhadoop-slave8
15/07/18 11:50:06 INFO mapreduce.Job: Task Id :
attempt_201507181137_0001_r_07_0, Status : FAILED
Task attempt_201507181137_0001_r_07_0 failed to report status
for 601 seconds. Killing!
15/07/18 11:50:08 WARN mapreduce.Job: Error reading task
outputhadoop-slave11
15/07/18 11:50:08 WARN mapreduce.Job: Error reading task
outputhadoop-slave11
15/07/18 11:50:08 INFO mapreduce.Job: Task Id :
attempt_201507181137_0001_r_08_0, Status : FAILED
Task attempt_201507181137_0001_r_08_0 failed to report status
for 601 seconds. Killing!
15/07/18 11:50:11 WARN mapreduce.Job: Error reading task
outputhadoop-slave9
15/07/18 11:50:11