the VM does not have enough memory to initialize. Or try reducing
mapred.map.child.java.opts to 256 MB, if your map task can execute
with that memory.
Regards,
Uma
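To illustrate the suggestion above, a minimal sketch of what that setting looks like in mapred-site.xml — the property name is the one Uma mentions (MRv1-era), and `-Xmx256m` is an example value, not a recommendation:

```xml
<!-- mapred-site.xml: JVM options for map-task child processes (MRv1 name).
     -Xmx256m caps the map task heap at 256 MB; tune to your workload. -->
<property>
  <name>mapred.map.child.java.opts</name>
  <value>-Xmx256m</value>
</property>
```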
----- Original Message -----
From: Joris Poort gpo...@gmail.com
Date: Saturday, September 24, 2011 5:50 am
Subject: Hadoop java mapper
As part of my Java mapper I have a command that executes some
standalone code on a local slave node. When I run the code it executes
fine, unless it is trying to access some local files, in which case I
get an error that it cannot locate those files.
Digging a little deeper, it seems to be executing from
As part of my Java mapper I have a command that executes some code on
the local node and copies a local output file to the Hadoop fs.
Unfortunately I'm getting the following output:
Error occurred during initialization of VM
Could not reserve enough space for object heap
I've tried adjusting
to its specific TT's mapred.local.dir
directory and see what the permissions of your distributed files look
like?
For the rest, can you check whether a simple test (for permissions, etc.) like:
$ sudo -u mapred <command you need to run via a task>
passes or not?
On Fri, Sep 9, 2011 at 5:28 AM, Joris Poort wrote:
Hi,
I'm trying to set permissions for the tasktracker and/or mapred user.
Basically I'm trying to execute and modify files from within the
mapper, but the code errors out, stating that the mapred user on the
slave node doesn't have the right permissions to modify/execute the files.
Any help or tips
Hi,
I'm using Hadoop streaming with a Python mapper and am trying to
execute external code that has been imaged onto the worker nodes.
What is the best way to accomplish this?
I've tried to use the same commands that I can run when I ssh into the
node, but unfortunately this doesn't work.
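Not an answer from the thread, but a sketch of the usual pattern for this situation: call the pre-installed tool by absolute path from the streaming mapper, since a streaming task runs in a per-task scratch directory under mapred.local.dir with a minimal environment, unlike an interactive ssh session. The `run_tool` helper and the `/bin/echo` default are illustrative assumptions, not anything from the original posts:

```python
#!/usr/bin/env python
# Hypothetical streaming-mapper helper (names are illustrative): invoke an
# external executable by absolute path, because the task's working directory
# and PATH are not the same as in your login shell on the node.
import subprocess
import sys

def run_tool(line, tool="/bin/echo"):
    # check=True turns a non-zero exit into an exception, so a broken tool
    # fails the task loudly instead of silently emitting empty records.
    result = subprocess.run([tool, line.strip()],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

# In the real mapper you would stream stdin through the tool, e.g.:
#   for line in sys.stdin:
#       print(run_tool(line, tool="/absolute/path/to/your/tool"))
```

If the tool is not baked into the node image, Hadoop streaming's `-files` option can ship it into the task's working directory instead.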