When I reformat the namenode, I also delete the data and name
directories to avoid this problem. Other times, I changed the version
number in the VERSION file to what the namenode was expecting. I did
the latter only during development, though; I would not attempt it on
a production cluster.
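The VERSION-file workaround above can be sketched roughly as follows. This is a hypothetical, dev-only illustration: the paths under /tmp are made up for the example (substitute your actual dfs.name.dir and dfs.data.dir), and the two VERSION files are fabricated here just so the snippet is self-contained.

```shell
# Dev-only workaround sketch: copy the reformatted namenode's namespaceID
# into a datanode's VERSION file instead of wiping the data directory.
# Example paths; substitute your real dfs.name.dir / dfs.data.dir.
NAME_DIR=/tmp/hadoop-demo/dfs/name
DATA_DIR=/tmp/hadoop-demo/dfs/data

# Simulate the two VERSION files purely for illustration.
mkdir -p "$NAME_DIR/current" "$DATA_DIR/current"
echo "namespaceID=123456789" > "$NAME_DIR/current/VERSION"
echo "namespaceID=987654321" > "$DATA_DIR/current/VERSION"

# Read the ID the (re)formatted namenode now expects...
NSID=$(grep '^namespaceID=' "$NAME_DIR/current/VERSION" | cut -d= -f2)

# ...and patch the datanode's VERSION file to match it.
sed -i "s/^namespaceID=.*/namespaceID=$NSID/" "$DATA_DIR/current/VERSION"

cat "$DATA_DIR/current/VERSION"
```

Again, this only papers over the mismatch for development; the supported route is cleaning the datanode storage directories.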
Yes this is a known bug.
http://issues.apache.org/jira/browse/HADOOP-1212
You should manually remove "current" directory from every data-node
after reformatting the name-node and start the cluster again.
I do not believe there is any other way.
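The manual cleanup Konstantin describes might look like this on each datanode. The directory path is an assumption for the example (a stale storage dir is fabricated under /tmp so the snippet runs standalone); on a real node you would use your configured dfs.data.dir and then restart the datanode.

```shell
# Sketch of the manual fix: remove the stale "current" directory on a
# datanode after the namenode has been reformatted. Example path only.
DATA_DIR=/tmp/hadoop-demo2/dfs/data

# Simulate a stale datanode storage directory for illustration.
mkdir -p "$DATA_DIR/current"
echo "namespaceID=111" > "$DATA_DIR/current/VERSION"

# Delete it; the datanode recreates "current" (with the new namespaceID)
# when it next registers with the reformatted namenode.
rm -rf "$DATA_DIR/current"

ls "$DATA_DIR"
```

Run on every datanode, then start the cluster again as described above.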
Thanks,
--Konstantin
Taeho Kang wrote:
No, I don't think it's a bug.
Your datanodes' data partition/directory was probably used in another HDFS
setup and thus had a different namespaceID.
Or you could have used a different partition/directory for your new HDFS setup
by setting a different value for "dfs.data.dir" on your datanodes. But in this
case, yo
I was following the quickstart guide to run pseudo-distributed operations with
Hadoop 0.16.4. I got it to work successfully the first time, but I failed to
repeat the steps (I tried to re-do everything, starting from re-formatting the
HDFS). Then, by looking at the daemons' log files, I found out the