Hi Uma,
I tried various settings, but it keeps giving me the memory error.
As far as I can tell there should be plenty of memory available, since
I'm running on a 4 GB node and trying to copy a 100 KB file.
Is this the correct place to adjust the memory setting for a child
process forked from a
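[In case it helps: on classic (pre-YARN) MapReduce the heap of the forked task JVM is normally controlled by the `mapred.child.java.opts` property in mapred-site.xml. A sketch of such an entry; the -Xmx value is an illustrative example, not a recommendation:

```xml
<!-- mapred-site.xml: JVM options passed to each forked task child.
     The -Xmx512m below is only an example value. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>
```
]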
As part of my Java mapper I have a command that executes some standalone
code on a local slave node. When I run the code it executes fine, unless
it tries to access some local files, in which case I get an error
saying it cannot locate those files.
Digging a little deeper it seems to be executing from
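[One quick way to check where a forked child actually executes is to log the working directory before launching it. A minimal sketch, not the poster's code; it uses only standard Java:

```java
// Sketch: print the directory a forked child will inherit. By default
// ProcessBuilder runs the child in the parent's working directory, which
// on a tasktracker is the task's work dir, not where your side files live.
import java.io.File;

public class CwdProbe {
    public static void main(String[] args) {
        // Directory the child process will inherit unless overridden.
        File cwd = new File(System.getProperty("user.dir"));
        System.out.println("task working dir: " + cwd.getAbsolutePath());

        // ProcessBuilder lets you set the child's directory explicitly,
        // e.g. to wherever the local files actually are.
        ProcessBuilder pb = new ProcessBuilder("ls");
        pb.directory(cwd);
        System.out.println("child will run in: " + pb.directory());
    }
}
```

Pointing `pb.directory(...)` at the real location of the files (or using absolute paths in the command) usually sidesteps the "cannot locate" error.]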
Hi Joris,
You cannot configure the work directory directly. You can configure the local
directory with the property 'mapred.local.dir', and it is then used to
create the work directory, like
'${mapred.local.dir}/taskTracker/jobcache/$jobid/$taskid/work'. Based on this,
you can relatively
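[For illustration, a sketch of how `mapred.local.dir` might be set in mapred-site.xml; the path below is a placeholder, not a recommended location:

```xml
<!-- mapred-site.xml: root under which the tasktracker creates
     taskTracker/jobcache/$jobid/$taskid/work. Path is a placeholder. -->
<property>
  <name>mapred.local.dir</name>
  <value>/data/hadoop/mapred/local</value>
</property>
```
]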
Hi,
I am writing some Map Reduce programs in pseudo-distributed mode.
I am getting an error in my program and would like to debug it.
For that I want to embed some print statements in my Map / Reduce code.
But when the mappers run, the prints don't seem to show up in the
terminal.
Print output goes to the task logs. You can see those in the log directory on the
tasktracker nodes or through the jobtracker web GUI.
-Joey
On Sep 26, 2011, at 19:47, Arko Provo Mukherjee arkoprovomukher...@gmail.com
wrote:
Hi,
I am writing some Map Reduce programs in pseudo-distributed
Hi Arko,
Request you to look into the userlogs folder of the corresponding task.
It will have three files: stdout, syslog and stderr. Your System.out.println()
output will be captured in stdout.
The usual location for the userlogs folder is ${HADOOP_LOG_DIR}/userlogs.
From the MapReduce tutorial at: