From: Koji Noguchi
>Reply-To: "mapreduce-user@hadoop.apache.org"
>
>Date: Tue, 31 Jan 2012 10:59:35 -0800
>To: "mapreduce-user@hadoop.apache.org" ,
>"markus.jel...@openindex.io"
>Subject: Re: hadoop-1.0.0 and errors with log.index
>
On 31.01.2012 19:32, Arun C Murthy wrote:
Anything in TaskTracker logs ?
Actually, I've found something like this in the job's log. When running
wordcount I got it in this file:
${result_dir}/_logs/history/job_201201311241_0008_1328055139152_hdfs_word+count
MapAttempt TASK_TYPE="SETUP" TASKID="task_
On our cluster, it usually happens when the JVM crashes with invalid JVM params
or JNI crashes in the init phase.
The stderr/stdout files are created, but log.index does not exist when this
happens.
We should fix this.
Koji
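Koji's symptom can be checked for directly. A minimal sketch, assuming the stock userlogs layout where each task attempt gets its own directory (the helper name is made up; adjust the path to your hadoop.log.dir):

```shell
#!/bin/sh
# Hypothetical helper: list task-attempt log directories that have a
# stdout file but no log.index -- the symptom described above when the
# JVM dies during init. Pass the userlogs directory to scan.
find_missing_log_index() {
  logdir="$1"
  for d in "$logdir"/attempt_*; do
    [ -d "$d" ] || continue
    if [ -e "$d/stdout" ] && [ ! -e "$d/log.index" ]; then
      echo "$d"
    fi
  done
}
```

Running it against the userlogs directory on an affected TaskTracker should flag exactly the attempts whose logs the web UI cannot display.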
On 1/31/12 10:49 AM, "Markus Jelsma" wrote:
Yes, the stacktrace in my previous message is from the task tracker. It seems
to happen when there is no data locality for the mapper and it needs to get it
from some other datanode. The number of failures is the same as the number of
rack-local mappers.
> Anything in TaskTracker logs ?
Anything in TaskTracker logs ?
On Jan 31, 2012, at 10:18 AM, Markus Jelsma wrote:
In our case, which seems to be the same problem, the web UI does not show
anything useful except the first line of the stack trace:
2012-01-03 21:16:27,256 WARN org.apache.hadoop.mapred.TaskLog: Failed to
retrieve stdout log for task: attempt_201201031651_0008_m_000233_0
Only the task tracker logs show the full stack trace.
Actually, all that is telling you is that the task failed and the job-client
couldn't display the logs.
Can you check the JT web-ui and see why the task failed?
If you don't see anything there, you can check the TaskTracker logs on the
node on which the task ran.
Arun
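For the last step Arun describes, a quick way is to grep the TaskTracker log on that node for the attempt id. A sketch with the log path left as a parameter, since it varies by installation (the helper name is made up; the attempt id in the comment is the one Markus quoted earlier):

```shell
# Hypothetical helper: print every line a TaskTracker log file contains
# about a given task attempt. Pass the attempt id, then the log file(s).
show_attempt_log() {
  attempt="$1"; shift
  grep -h "$attempt" "$@"
}
# e.g. show_attempt_log attempt_201201031651_0008_m_000233_0 \
#        /var/log/hadoop/hadoop-*-tasktracker-*.log
```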
On 31/01/12 12:48, Markus Jelsma wrote:
> I've seen that the number of related failures is almost always the same as the
> number of rack-local mappers. Do you see this as well?
Yes, it seems that way.
Marcin
I've seen that the number of related failures is almost always the same as the
number of rack-local mappers. Do you see this as well?
On Tuesday 31 January 2012 12:21:44 Marcin Cylke wrote:
Hi
I've upgraded my hadoop cluster to version 1.0.0. The upgrade process
went relatively smoothly, but it rendered the cluster inoperable due to
errors in the jobtracker's operation:
# in job output
Error reading task outputhttp://hadoop4:50060/tasklog?plaintext=true&attemptid=attempt_201201311241_000
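The URL in that error is the TaskTracker's tasklog servlet, so the same logs can be fetched directly with curl once the full attempt id is known. A small sketch that just builds the URL (the function name is made up; host and port are taken from the error above):

```shell
# Hypothetical helper: build the URL a TaskTracker serves task logs
# from, for fetching with curl or a browser.
tasklog_url() {
  host="$1"; attempt="$2"
  echo "http://${host}:50060/tasklog?plaintext=true&attemptid=${attempt}"
}
# e.g. curl -s "$(tasklog_url hadoop4 <full-attempt-id>)"
```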