2/mapred_tutorial.html#Directory+Structure
>
>
> Thanks
> Devaraj
>
> From: Joris Poort [gpo...@gmail.com]
> Sent: Monday, September 26, 2011 11:20 PM
> To: mapreduce-user
> Subject: Execution directory for child process within mapper
As part of my Java mapper I have a command that executes some standalone
code on a local slave node. When I run the code it executes fine, unless
it is trying to access some local files, in which case I get an error
that it cannot locate those files.
Digging a little deeper, it seems to be executing from
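(The usual cause here is that the task does not run from the directory where the tool's files live, but from a per-attempt working directory under mapred.local.dir, as described in the Directory Structure link quoted at the top, so relative paths break. A minimal sketch of one way around it, assuming a new-API Java mapper; the script name and directory below are placeholders, not taken from this thread:)

import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ExternalToolMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Launch the standalone tool with an explicit working directory so
        // its relative file references resolve, instead of inheriting the
        // task's per-attempt directory under mapred.local.dir.
        ProcessBuilder pb = new ProcessBuilder("./run_tool.sh", value.toString());
        pb.directory(new File("/data/local/tool"));  // placeholder path on the slave node
        pb.redirectErrorStream(true);                // fold stderr into stdout

        Process p = pb.start();
        // Drain the tool's output so it cannot block on a full pipe.
        BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
        while (r.readLine() != null) {
            // discard, or log as needed
        }
        r.close();

        if (p.waitFor() != 0) {
            throw new IOException("external tool failed");
        }
        context.write(new Text(key.toString()), new Text("done"));
    }
}

Passing absolute paths to the tool (or shipping the files via the DistributedCache and referring to them by their symlinked names) works as well; the point is not to rely on the task's current directory.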
> current task is not getting enough memory to initialize. Or try to
> reduce mapred.map.child.java.opts to 256, if your map task can execute
> with that memory.
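(A minimal sketch of applying that suggestion from the job driver; the property name comes from the reply above and 256 MB is the value it suggests, while the driver class itself is just a placeholder:)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class JobDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Cap the map task child JVM heap at 256 MB, as suggested above.
        // On older releases the equivalent knob is mapred.child.java.opts.
        conf.set("mapred.map.child.java.opts", "-Xmx256m");

        Job job = new Job(conf, "external-tool-job");
        // ... set mapper class, input/output formats and paths here ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The same value can also be passed on the command line as -D mapred.map.child.java.opts=-Xmx256m if the job goes through ToolRunner/GenericOptionsParser.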
>
> Regards,
> Uma
>
> - Original Message -
> From: Joris Poort
> Date: Saturday, September 24, 2011
As part of my Java mapper I have a command that executes some code on the
local node and copies a local output file to HDFS.
Unfortunately I'm getting the following output:
"Error occurred during initialization of VM"
"Could not reserve enough space for object heap"
I've tried adjusting
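(If the copy step shells out to the hadoop fs command, that starts a second JVM inside the task, which is a common way to hit "Could not reserve enough space for object heap". A minimal sketch of doing the copy in-process with the FileSystem API instead; the paths below are placeholders, not taken from this thread:)

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyOutputToHdfs {
    // Copy the tool's local output file into HDFS without spawning a
    // separate "hadoop fs" process (and therefore without a second JVM
    // needing to reserve its own heap).
    public static void copyResult(Configuration conf) throws IOException {
        FileSystem fs = FileSystem.get(conf);
        Path localOutput = new Path("/tmp/tool_output.dat");         // placeholder local file
        Path hdfsTarget  = new Path("/user/hadoop/tool_output.dat"); // placeholder HDFS target
        fs.copyFromLocalFile(false /* keep the local copy */, localOutput, hdfsTarget);
    }
}

Inside a mapper, conf would normally be context.getConfiguration().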
> to "true" for your job
> and fail a task. Then head down to its specific TT's mapred.local.dir
> directory and see what the permissions of your distributed files look
> like?
>
> For the rest, can you check whether simple tests (for permissions, etc.) like:
>
Hi,
I'm trying to set permissions for the tasktracker and/or mapred user.
Basically I'm trying to execute and modify files from within the
mapper, but the code errors out stating that the mapred user on the
slave node doesn't have the right permissions to modify/execute files.
Any help or tips on this would be appreciated.
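(One thing worth checking, stated here as an assumption rather than a diagnosis: files staged onto the slave often lack the executable bit for the user the TaskTracker runs tasks as, e.g. mapred. A minimal sketch of marking a file executable before launching it; the class and path handling are placeholders:)

import java.io.File;
import java.io.IOException;

public class MakeExecutable {
    // Ensure the shipped binary/script can be executed by the task's user
    // (e.g. mapred), not only by the user who staged it onto the node.
    public static File prepare(String path) throws IOException {
        File tool = new File(path);
        if (!tool.exists()) {
            throw new IOException("tool not found: " + tool.getAbsolutePath());
        }
        // ownerOnly = false -> set the executable bit for everyone.
        // This only succeeds if the current user owns the file, so the
        // permissions may instead need fixing when the file is staged.
        if (!tool.setExecutable(true, false)) {
            throw new IOException("could not mark executable: " + tool.getAbsolutePath());
        }
        return tool;
    }
}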
Hi,
I'm using Hadoop Streaming with a Python mapper and am trying to
execute an external program that has been imaged onto the worker nodes.
What is the best way to accomplish this?
I've tried to use the same commands that I can run when I ssh into the
node, but unfortunately this doesn't work. I