I have changed mapred-site.xml on my namenode and the datanodes to include:
mapred.userlog.retain.hours
2
And yet my job XML retains 24.
Am I doing anything wrong?
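For reference, a per-job override can be ruled out by putting the property in mapred-site.xml as a full property block. A minimal sketch (the value 2 comes from the question above; the default is 24 hours):

```xml
<!-- mapred-site.xml: keep task user logs for 2 hours instead of the default 24 -->
<configuration>
  <property>
    <name>mapred.userlog.retain.hours</name>
    <value>2</value>
  </property>
</configuration>
```

Note that the TaskTracker daemons typically need a restart to pick up mapred-site.xml changes, and a value baked into the submitted job.xml on the client side can still override the cluster default.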
--
View this message in context:
http://old.nabble.com/mapred.userlog.retain.hours-tp29298881p29298881.html
Please correct me if I am wrong.
vishalsant wrote:
>
> I ran a MapReduce job, created output using MultipleOutputs, and reduced to
> a bunch of files.
> I catted them, did a copyToLocal, and all the good stuff one does.
>
> I come back after a couple of days and those files are 0
. if folks need to see one way.
vishalsant wrote:
>
> I am a newbie to hadoop so please bear with me if this is naive.
>
> I have defined a Mapper/Reducer and I desire to run it on a hadoop cluster
> My question is
>
> * Do I need to specify the Mapper/Reduce
Not sure what is happening here, in the sense: is this critical?
I had read that the status of a task is passed on to the JobTracker over
HTTP.
Is that true?
I see tasks killed because of expiry, even though the DataNode seems to be
alive and kicking (except for the above exception).
eport pbly.
Will close the thread when this is resolved with the disk issue (which it
seems to be).
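On the expiry point: in classic MapReduce the knob that governs this is mapred.task.timeout, the number of milliseconds a task attempt may go without reading input, writing output, or reporting status before it is killed (default 600000, i.e. 10 minutes). A sketch of raising it, assuming the kills really are timeouts and not a disk problem:

```xml
<!-- mapred-site.xml: allow task attempts 20 minutes of silence
     before the framework kills them (default is 600000 ms) -->
<property>
  <name>mapred.task.timeout</name>
  <value>1200000</value>
</property>
```

If the tasks are doing legitimate long-running work, the cleaner fix is to have them call progress()/setStatus() on the Reporter or Context periodically rather than raising the timeout.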
vishalsant wrote:
>
> Hi guys,
>
> I see the exception below when I launch a job
>
>
> 10/04/27 10:54:16 INFO mapred.JobClient: map 0% reduce 0%
> 10/04/27 10
Hi guys,
I see the exception below when I launch a job
10/04/27 10:54:16 INFO mapred.JobClient: map 0% reduce 0%
10/04/27 10:54:22 INFO mapred.JobClient: Task Id :
attempt_201004271050_0001_m_005760_0, Status : FAILED
Error initializing attempt_201004271050_0001_m_005760_0:
java.lang.Numbe
JobConf.setJar(..) might be the way, but that class is deprecated, and the
Job class does not seem to have a corresponding method.
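For what it's worth, the new org.apache.hadoop.mapreduce API does have an equivalent: Job.setJarByClass(Class), which infers the job jar from a class contained in it. A minimal driver sketch; MyDriver, MyMapper, and MyReducer are placeholder names, and the key/value types are assumptions:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MyDriver {
    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "my job");
        // Tells the framework which jar to ship to the cluster:
        // the one that contains this class.
        job.setJarByClass(MyDriver.class);
        job.setMapperClass(MyMapper.class);   // placeholder Mapper
        job.setReducerClass(MyReducer.class); // placeholder Reducer
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

This also answers the classpath question below: when the job is launched with `hadoop jar myjob.jar MyDriver in out`, the jar is uploaded to the cluster and distributed to the task nodes automatically, so the Mapper/Reducer classes do not need to be pre-installed on each DataNode.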
vishalsant wrote:
>
> I am a newbie to hadoop so please bear with me if this is naive.
>
> I have defined a Mapper/Reducer and I desire to run it on a hadoop
I am a newbie to hadoop so please bear with me if this is naive.
I have defined a Mapper/Reducer and I want to run it on a Hadoop cluster.
My question is:
* Do I need to specify the Mapper/Reducer in the classpath of all my
DataNodes/JobTracker Node or can they be uploaded to the cluster as mo