I haven't played a lot with it, but you may want to check whether setting HADOOP_NAMENODE_OPTS and HADOOP_TASKTRACKER_OPTS helps. Let me know if you find a way to do this :)
Cheers!
Amogh

-----Original Message-----
From: Fernando Padilla [mailto:f...@alum.mit.edu]
Sent: Wednesday, July 22, 2009 9:47 AM
To: common-user@hadoop.apache.org
Subject: Re: best way to set memory

I was thinking not for M/R, but for the actual daemons. When I go and start up a daemon (like below), they all read the same hadoop-env.sh, which only lets you set HADOOP_HEAPSIZE once -- not differently for each daemon type.

bin/hadoop-daemon.sh start namenode
bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start secondarynamenode
bin/hadoop-daemon.sh start jobtracker
bin/hadoop-daemon.sh start tasktracker

Amogh Vasekar wrote:
> If you need to set the java options for memory, you can do this via configure
> in your MR job.
>
> -----Original Message-----
> From: Fernando Padilla [mailto:f...@alum.mit.edu]
> Sent: Wednesday, July 22, 2009 9:11 AM
> To: common-user@hadoop.apache.org
> Subject: best way to set memory
>
> So.. I want to have different memory profiles for
> NameNode/DataNode/JobTracker/TaskTracker.
>
> But it looks like I only have one environment variable to modify,
> HADOOP_HEAPSIZE, and I might be running more than one daemon on a single
> box/deployment/conf directory.
>
> Is there a proper way to set the memory for each kind of server? Or has
> an issue been created to document this bug/deficiency?
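Amogh's suggestion above can be sketched in conf/hadoop-env.sh roughly as follows. This is a sketch, not a tested recipe: it assumes the 0.20-era launcher scripts, which append the matching HADOOP_*_OPTS variable after the -Xmx derived from HADOOP_HEAPSIZE (with HotSpot, the last -Xmx on the command line wins). The heap sizes shown are placeholder values, not recommendations.

```shell
# conf/hadoop-env.sh (sketch; values are illustrative placeholders)

# Global default heap in MB, used by any daemon not overridden below.
export HADOOP_HEAPSIZE=1000

# Per-daemon overrides: each daemon's OPTS are appended last on the JVM
# command line, so an -Xmx here should take precedence over the default.
export HADOOP_NAMENODE_OPTS="-Xmx2048m $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Xmx2048m $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Xmx512m $HADOOP_DATANODE_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Xmx1024m $HADOOP_JOBTRACKER_OPTS"
export HADOOP_TASKTRACKER_OPTS="-Xmx512m $HADOOP_TASKTRACKER_OPTS"
```

You can check which -Xmx each daemon actually got with something like `ps aux | grep namenode` after starting it with bin/hadoop-daemon.sh.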