Some other options that affect the number of mappers and reducers and the
amount of memory they use:

mapred.child.java.opts  -Xmx1200M  (the heap for your mapper/reducer, plus
any other Java options) - this in turn decides the number of 512M slots
each mapper occupies
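
A minimal sketch of how this is usually set, in mapred-site.xml (the 1200M
value is only an illustration; tune it to your tasks):

  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1200M</value> <!-- max heap per mapper/reducer child JVM -->
  </property>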

The split size affects the number of splits (and in effect the number of
mappers), depending on your input files and InputFormat (in case you are
using FileInputFormat or deriving from it):
mapreduce.input.fileinputformat.split.maxsize   <max number of bytes>
mapreduce.input.fileinputformat.split.minsize   <min number of bytes>
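
For example, capping splits at 256MB and flooring them at 128MB (the sizes
are only an illustration) could look like:

  <property>
    <name>mapreduce.input.fileinputformat.split.maxsize</name>
    <value>268435456</value> <!-- 256MB: no split larger than this -->
  </property>
  <property>
    <name>mapreduce.input.fileinputformat.split.minsize</name>
    <value>134217728</value> <!-- 128MB: no split smaller than this -->
  </property>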

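And since HADOOP_HEAPSIZE keeps coming up below: note that it is not a
*-site.xml property but an environment variable read from
conf/hadoop-env.sh, and it sets the heap, in MB, for each Hadoop daemon
JVM (default 1000). A minimal sketch, the value being only an illustration:

  # conf/hadoop-env.sh
  # The maximum amount of heap to use, in MB.
  export HADOOP_HEAPSIZE=2000
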
-Adi



On Thu, Aug 11, 2011 at 2:11 AM, Harsh J <ha...@cloudera.com> wrote:

> It applies to all Hadoop daemon processes (JT, TT, NN, SNN, DN) and
> all direct commands executed via the 'hadoop' executable.
>
> On Thu, Aug 11, 2011 at 11:37 AM, Xiaobo Gu <guxiaobo1...@gmail.com> wrote:
> > Is HADOOP_HEAPSIZE set for all Hadoop related Java processes, or just
> > one Java process?
> >
> > Regards,
> >
> > Xiaobo Gu
> >
> > On Thu, Aug 11, 2011 at 1:07 PM, Lance Norskog <goks...@gmail.com> wrote:
> >> If the server is dedicated to this job, you might as well give it
> >> 10-15g. After that shakes out, try changing the number of mappers &
> >> reducers.
> >>
> >> On Tue, Aug 9, 2011 at 2:06 AM, Xiaobo Gu <guxiaobo1...@gmail.com> wrote:
> >>> Hi Adi,
> >>>
> >>> Thanks for your response. On an SMP server with 32G RAM and 8 cores,
> >>> what's your suggestion for setting HADOOP_HEAPSIZE? The server will be
> >>> dedicated to a single-node Hadoop setup with one data node instance,
> >>> and it will run 4 mapper and reducer tasks.
> >>>
> >>> Regards,
> >>>
> >>> Xiaobo Gu
> >>>
> >>>
> >>> On Sun, Aug 7, 2011 at 11:35 PM, Adi <adi.pan...@gmail.com> wrote:
> >>>>>> Caused by: java.io.IOException: error=12, Not enough space
> >>>>
> >>>> You either do not have enough memory allocated to your Hadoop
> >>>> daemons (via HADOOP_HEAPSIZE) or not enough swap space.
> >>>>
> >>>> -Adi
> >>>>
> >>>> On Sun, Aug 7, 2011 at 5:48 AM, Xiaobo Gu <guxiaobo1...@gmail.com> wrote:
> >>>>
> >>>>> Hi,
> >>>>>
> >>>>> I am trying to write a map-reduce job to convert CSV files to
> >>>>> SequenceFiles, but the job fails with the following error:
> >>>>> java.lang.RuntimeException: Error while running command to get file
> >>>>> permissions : java.io.IOException: Cannot run program "/bin/ls":
> >>>>> error=12, Not enough space
> >>>>>        at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
> >>>>>        at org.apache.hadoop.util.Shell.runCommand(Shell.java:200)
> >>>>>        at org.apache.hadoop.util.Shell.run(Shell.java:182)
> >>>>>        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
> >>>>>        at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
> >>>>>        at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
> >>>>>        at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:540)
> >>>>>        at org.apache.hadoop.fs.RawLocalFileSystem.access$100(RawLocalFileSystem.java:37)
> >>>>>        at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:417)
> >>>>>        at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getOwner(RawLocalFileSystem.java:400)
> >>>>>        at org.apache.hadoop.mapred.TaskLog.obtainLogDirOwner(TaskLog.java:176)
> >>>>>        at org.apache.hadoop.mapred.TaskLogsTruncater.truncateLogs(TaskLogsTruncater.java:124)
> >>>>>        at org.apache.hadoop.mapred.Child$4.run(Child.java:264)
> >>>>>        at java.security.AccessController.doPrivileged(Native Method)
> >>>>>        at javax.security.auth.Subject.doAs(Subject.java:396)
> >>>>>        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
> >>>>>        at org.apache.hadoop.mapred.Child.main(Child.java:253)
> >>>>> Caused by: java.io.IOException: error=12, Not enough space
> >>>>>        at java.lang.UNIXProcess.forkAndExec(Native Method)
> >>>>>        at java.lang.UNIXProcess.<init>(UNIXProcess.java:53)
> >>>>>        at java.lang.ProcessImpl.start(ProcessImpl.java:65)
> >>>>>        at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
> >>>>>        ... 16 more
> >>>>>
> >>>>>        at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:442)
> >>>>>        at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getOwner(RawLocalFileSystem.java:400)
> >>>>>        at org.apache.hadoop.mapred.TaskLog.obtainLogDirOwner(TaskLog.java:176)
> >>>>>        at org.apache.hadoop.mapred.TaskLogsTruncater.truncateLogs(TaskLogsTruncater.java:124)
> >>>>>        at org.apache.hadoop.mapred.Child$4.run(Child.java:264)
> >>>>>        at java.security.AccessController.doPrivileged(Native Method)
> >>>>>        at javax.security.auth.Subject.doAs(Subject.java:396)
> >>>>>        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
> >>>>>        at org.apache.hadoop.mapred.Child.main(Child.java:253)
> >>>>>
> >>>>
> >>>
> >>
> >>
> >>
> >> --
> >> Lance Norskog
> >> goks...@gmail.com
> >>
> >
>
>
>
> --
> Harsh J
>
