thank you :)
I already created an issue as well:
https://issues.apache.org/jira/browse/HADOOP-6167
and have a patch against it. Maybe you can mark 6168 as a duplicate of 6167.
what do you think of the patch on 6167?
Allen Wittenauer wrote:
FWIW, we actually push a completely separate config to the name node, jt,
etc., because of some of the other settings (like slaves and
dfs.[in|ex]cludes). But if you wanted to do an all-in-one, well...
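(For what it's worth, hadoop-daemon.sh already takes a --config flag, so a
per-role layout can look something like this; the directory names here are
made up:)

# hypothetical per-role conf dirs, pushed separately as described above
bin/hadoop-daemon.sh --config /etc/hadoop/conf.namenode start namenode
bin/hadoop-daemon.sh --config /etc/hadoop/conf.datanode start datanode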
Hmm. Looking at the code, this worked differently than I always thought it
did (at least in 0.18). Like Amogh, I thought that HADOOP_NAMENODE_OPTS (or
at least HADOOP_NAMENODE_HEAPSIZE) would override, but that clearly isn't
the case.
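From memory, bin/hadoop builds the heap flag roughly like this (paraphrased,
not verbatim), and $JAVA_HEAP_MAX then always ends up on the java command
line:

# rough paraphrase of the heap logic in bin/hadoop
JAVA_HEAP_MAX=-Xmx1000m
if [ "$HADOOP_HEAPSIZE" != "" ]; then
  JAVA_HEAP_MAX="-Xmx""$HADOOP_HEAPSIZE""m"
fi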
I've filed HADOOP-6168 and appropriately bonked some of my local Hadoop
committers on the head. :)
On 7/22/09 7:50 AM, "Fernando Padilla" <f...@alum.mit.edu> wrote:
But right now the script forcefully adds an extra -Xmx1000m even if you
don't want it...
I guess I'll be submitting a patch for hadoop-daemon.sh later. :) :)
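Very rough sketch of the direction (untested; $COMMAND is the daemon name
inside bin/hadoop, and the per-daemon variable name is just a straw man):

# straw-man: let a per-daemon heap size override the global default
if [ "$COMMAND" = "namenode" ] && [ "$HADOOP_NAMENODE_HEAPSIZE" != "" ]; then
  JAVA_HEAP_MAX="-Xmx""$HADOOP_NAMENODE_HEAPSIZE""m"
fi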
thank you all
On 7/22/09 2:25 AM, Amogh Vasekar wrote:
I haven't played a lot with it, but you may want to check whether setting
HADOOP_NAMENODE_OPTS or HADOOP_TASKTRACKER_OPTS helps. Let me know if you
find a way to do this :)
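Something like this in conf/hadoop-env.sh might be a starting point
(untested sketch; the sizes are just examples):

# conf/hadoop-env.sh -- per-daemon JVM options (example values, not defaults)
export HADOOP_NAMENODE_OPTS="-Xmx2000m $HADOOP_NAMENODE_OPTS"
export HADOOP_TASKTRACKER_OPTS="-Xmx512m $HADOOP_TASKTRACKER_OPTS"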
Cheers!
Amogh
-----Original Message-----
From: Fernando Padilla [mailto:f...@alum.mit.edu]
Sent: Wednesday, July 22, 2009 9:47 AM
To: common-user@hadoop.apache.org
Subject: Re: best way to set memory
I was thinking not of M/R, but of the actual daemons. When I go and start
up a daemon (like below), they all use the same hadoop-env.sh, which only
lets you set HADOOP_HEAPSIZE once, not differently for each daemon type:
bin/hadoop-daemon.sh start namenode
bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start secondarynamenode
bin/hadoop-daemon.sh start jobtracker
bin/hadoop-daemon.sh start tasktracker
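The only heap knob they all share is the single global one, e.g.:

# conf/hadoop-env.sh -- one global setting (in MB), picked up by every daemon above
export HADOOP_HEAPSIZE=1000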
Amogh Vasekar wrote:
If you need to set the Java options for memory, you can do this via the
job configuration in your MR job.
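For example, assuming the job runs through ToolRunner/GenericOptionsParser
(the jar, class, and path names here are made up):

# per-job child JVM heap, via the old mapred API property
bin/hadoop jar myjob.jar MyJob -Dmapred.child.java.opts=-Xmx512m in out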
-----Original Message-----
From: Fernando Padilla [mailto:f...@alum.mit.edu]
Sent: Wednesday, July 22, 2009 9:11 AM
To: common-user@hadoop.apache.org
Subject: best way to set memory
So... I want to have different memory profiles for
NameNode/DataNode/JobTracker/TaskTracker.
But it looks like I only have one environment variable to modify,
HADOOP_HEAPSIZE, even though I might be running more than one daemon type
on a single box/deployment/conf directory.
Is there a proper way to set the memory for each kind of server? Or has
an issue been created to document this bug/deficiency?