Hi all,
Can I run multiple HDFS instances, that is, n separate namenodes and n
datanodes, on a single machine?
I've modified core-site.xml and hdfs-site.xml to avoid port and file
conflicts between the HDFS instances, but when I started the second HDFS,
I got these errors:
Starting namenodes on [localhost]
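(For context, separating two instances usually means giving the second one
its own config directory with non-conflicting ports and storage paths. A
minimal sketch, assuming Hadoop 2.x property names; every port and path
below is a made-up example:

# create a separate config dir for the second instance
mkdir -p conf.hdfs2
cat > conf.hdfs2/core-site.xml <<'EOF'
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://localhost:9001</value></property>
</configuration>
EOF
cat > conf.hdfs2/hdfs-site.xml <<'EOF'
<configuration>
  <property><name>dfs.namenode.http-address</name><value>localhost:50071</value></property>
  <property><name>dfs.datanode.address</name><value>localhost:50011</value></property>
  <property><name>dfs.datanode.http.address</name><value>localhost:50076</value></property>
  <property><name>dfs.datanode.ipc.address</name><value>localhost:50021</value></property>
  <property><name>dfs.namenode.name.dir</name><value>/tmp/hdfs2/name</value></property>
  <property><name>dfs.datanode.data.dir</name><value>/tmp/hdfs2/data</value></property>
</configuration>
EOF
export HADOOP_CONF_DIR=$PWD/conf.hdfs2
)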
Yes you can, but if you want the scripts to work, you should have them
use a different PID directory (I think it's called HADOOP_PID_DIR)
every time you invoke them.
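(That might look like this when invoking the wrapper scripts; a sketch only,
assuming hadoop-env.sh doesn't override the variable, with an example path:

# PID dir for the second instance, so the scripts don't collide
HADOOP_PID_DIR=/var/run/hadoop-hdfs2 sbin/start-dfs.sh
)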
I instead prefer to start the daemons up via their direct commands, such
as hdfs namenode and so on, and move them to the background.
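(Concretely, something along these lines; the log file names and the nohup
usage are my own choices, not from the original mail:

export HADOOP_CONF_DIR=$PWD/conf.hdfs2   # config dir of the second instance
nohup bin/hdfs namenode > nn2.log 2>&1 &
nohup bin/hdfs datanode > dn2.log 2>&1 &
)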
I modified sbin/hadoop-daemon.sh, where HADOOP_PID_DIR is set. It works!
Everything looks fine now.
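(The change in question is roughly this; the exact lines vary by Hadoop
version, and the replacement path is an example:

# in sbin/hadoop-daemon.sh, near the top:
if [ "$HADOOP_PID_DIR" = "" ]; then
  HADOOP_PID_DIR=/var/run/hadoop-hdfs2   # default was /tmp
fi
)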
Seems the direct command hdfs namenode gives a better sense of control :)
Thanks a lot.
On Thursday, 18 April 2013, Harsh J wrote:
> Yes you can but if you want the scripts to work, you should have them
> use a ...
Glad you got this working... can you explain your use case a little? I'm
trying to understand why you might want to do that.
On Thu, Apr 18, 2013 at 11:29 AM, Lixiang Ao aolixi...@gmail.com wrote:
> I modified sbin/hadoop-daemon.sh, where HADOOP_PID_DIR is set. It works!
> Everything looks ...
Actually I'm trying to do something like combining multiple namenodes so
that they present themselves to clients as a single namespace, implementing
basic namenode functionalities.
On Thursday, 18 April 2013, Chris Embree wrote:
> Glad you got this working... can you explain your use case a little? I'm ...
Are you trying to implement something like namespace federation, which is
part of Hadoop 2.0?
http://hadoop.apache.org/docs/r2.0.3-alpha/hadoop-project-dist/hadoop-hdfs/Federation.html
On Thu, Apr 18, 2013 at 10:02 PM, Lixiang Ao aolixi...@gmail.com wrote:
> Actually I'm trying to do something ...
Not really, federation provides separate namespaces, but I want it to look
like one namespace. My basic idea is to maintain a map from files to
namenodes: a layer that receives RPC calls from clients and forwards each
one to the specific namenode in charge of the file. It's challenging for
me, but I'll figure it out.
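(A toy client-side analogue of that mapping, assuming two instances whose
namenodes listen on ports 9000 and 9001; the path prefixes are made up, and
a real implementation would sit in front of the namenode RPC interface
instead of wrapping the shell:

#!/bin/sh
# forward an ls to whichever namenode "owns" the path (assumed layout)
path=$1
case "$path" in
  /data/*) NN=hdfs://localhost:9000 ;;
  *)       NN=hdfs://localhost:9001 ;;
esac
bin/hdfs dfs -fs "$NN" -ls "$path"
)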