[ https://issues.apache.org/jira/browse/HADOOP-466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Harsh J resolved HADOOP-466.
----------------------------
    Resolution: Fixed
 Fix Version/s: 0.20.0

This problem does exist if one doesn't use HADOOP_IDENT_STRING, but that variable is a better workaround than adding a dependency on md5sum and the like (or do we already use it?). I think this may be resolved as Fixed, with HADOOP_IDENT_STRING available as the workaround.

Workaround (tested to work on 0.20.2):

{code}
# To start a second DN on the same machine, with a separate config:
HADOOP_IDENT_STRING=$USER-DN2 hadoop-daemon.sh --config /conf/dn2 start datanode
HADOOP_IDENT_STRING=$USER-DN2 hadoop-daemon.sh --config /conf/dn2 stop datanode
# These manage the PIDs as well, and will not complain that the daemon is already running.
{code}

> Startup scripts will not start instances of Hadoop daemons w/different configs w/o setting separate PID directories
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-466
>                 URL: https://issues.apache.org/jira/browse/HADOOP-466
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: conf
>    Affects Versions: 0.5.0
>            Reporter: Vetle Roeim
>             Fix For: 0.20.0
>
>         Attachments: hadoop-466.diff
>
>
> Configuration directories can be specified either by setting HADOOP_CONF_DIR or by using the --config command-line option. However, the hadoop-daemon.sh script will not start the daemons unless the PID directory is separate for each configuration.
> The issue is that the code generating PID filenames does not depend on the configuration directory. While the PID directory can be changed in hadoop-env.sh, this restriction seems a little unnecessary.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
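For background on why the workaround above is effective: the 0.20-era hadoop-daemon.sh builds its PID file name from HADOOP_IDENT_STRING (defaulting to $USER) and the daemon command, never from the configuration directory, so two daemons started with different --config values collide on the same file unless the identity string differs. A minimal sketch of that naming (the hadoop-$IDENT-$command.pid pattern matches the shipped script; the surrounding snippet is illustrative, not the real script):

```shell
#!/bin/sh
# Illustrative simplification of PID-file naming in hadoop-daemon.sh.
# HADOOP_IDENT_STRING, HADOOP_PID_DIR, and the pid filename pattern
# come from the 0.20-era script; everything else here is a sketch.
HADOOP_IDENT_STRING="${HADOOP_IDENT_STRING:-$USER}"
HADOOP_PID_DIR="${HADOOP_PID_DIR:-/tmp}"
command="datanode"

# Note: --config / HADOOP_CONF_DIR never enters this name, which is why
# two daemons with different configs collide on the same PID file.
pid="$HADOOP_PID_DIR/hadoop-$HADOOP_IDENT_STRING-$command.pid"
echo "$pid"
```

Setting HADOOP_IDENT_STRING=$USER-DN2 therefore yields a distinct PID file (e.g. hadoop-$USER-DN2-datanode.pid) without touching HADOOP_PID_DIR.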