Vinay, if the Hadoop docs are not clear in this regard, can you please
create a jira to add these details?

On Fri, Nov 16, 2012 at 12:31 AM, Vinayakumar B <vinayakuma...@huawei.com>wrote:

> Hi,
>
> If you are moving from non-HA (single master) to HA, then follow the
> steps below.
>
> 1. Configure the other NameNode's settings in the running NameNode's and
>    all DataNodes' configurations, and configure the logical *fs.defaultFS*.
> 2. Configure the shared-storage-related settings.
> 3. Stop the running NameNode and all DataNodes.
> 4. Execute 'hdfs namenode -initializeSharedEdits' from the existing
>    NameNode installation to transfer the edits to shared storage.
> 5. Format zkfc using 'hdfs zkfc -formatZK' and start it using
>    'hadoop-daemon.sh start zkfc'.
> 6. Restart the NameNode from the existing installation. If all the
>    configurations are fine, the NameNode should start successfully as
>    STANDBY, and zkfc will then make it ACTIVE.
> 7. Install the NameNode on another machine (master2) with the same
>    configuration, except 'dfs.ha.namenode.id'.
> 8. Instead of formatting, copy the name dir contents from the first
>    NameNode (master1) to master2's name dir. You have two options:
>    a. Execute 'hdfs namenode -bootstrapStandby' from the master2
>       installation.
>    b. Using 'scp', copy the entire contents of master1's name dir to
>       master2's name dir.
> 9. Start zkfc for the second NameNode (no need to format zkfc again),
>    and also start the NameNode (master2).
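Steps 1 and 2 above can be sketched as configuration fragments. This is only an illustration: the nameservice name ('mycluster'), host names, ports, and the shared-edits path are assumptions, not values from this thread.

```xml
<!-- hdfs-site.xml: illustrative HA settings; names and hosts are assumptions -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>master1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>master2:8020</value>
</property>
<!-- step 2: shared edits storage, e.g. an NFS mount -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>file:///mnt/shared/edits</value>
</property>

<!-- core-site.xml: logical fs.defaultFS pointing at the nameservice -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>
```

With a logical *fs.defaultFS* like this, clients address the nameservice rather than a single host, which is what allows failover between nn1 and nn2.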
>
>
> Regards,
>
> Vinay
>
> *From:* Uma Maheswara Rao G [mailto:mahesw...@huawei.com]
> *Sent:* Friday, November 16, 2012 1:26 PM
> *To:* user@hadoop.apache.org
> *Subject:* RE: High Availability - second namenode (master2) issue:
> Incompatible namespaceIDs
>
>
> If you format the NameNode, you also need to clean up the DataNodes'
> storage directories if they already hold data. The DN saves the
> namespace ID as well and compares it with the NN's namespaceID. If you
> format the NN, the namespaceID changes, while the DN may still have the
> older namespaceID. So just cleaning the data on the DN would be fine.
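The cleanup described here amounts to wiping the DataNode's storage directory after the NameNode format. A minimal sketch on a scratch directory (the real path comes from dfs.data.dir; /tmp/dn-demo is just a demo location):

```shell
# Scratch stand-in for a DataNode storage dir (real one comes from dfs.data.dir)
DN_DATA=/tmp/dn-demo/dfs/data
mkdir -p "$DN_DATA/current"
# The DN records the namespaceID it registered with in current/VERSION
echo "namespaceID=1151604993" > "$DN_DATA/current/VERSION"
# After reformatting the NN, clear the DN storage so the DN
# re-registers with the new namespaceID on its next start
rm -rf "$DN_DATA"/*
ls -A "$DN_DATA"   # prints nothing: the directory is empty again
```

On a real cluster the DataNode should be stopped before clearing the directory and started again afterwards.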
>
> Regards,
>
> Uma
> ------------------------------
>
> *From:* hadoop hive [hadooph...@gmail.com]
> *Sent:* Friday, November 16, 2012 1:15 PM
> *To:* user@hadoop.apache.org
> *Subject:* Re: High Availability - second namenode (master2) issue:
> Incompatible namespaceIDs
>
> Seems like you haven't formatted your cluster (if it has just been set
> up for the first time).
>
> On Fri, Nov 16, 2012 at 9:58 AM, a...@hsk.hk <a...@hsk.hk> wrote:
>
> Hi,
>
> Please help!
>
> I have installed a Hadoop cluster with a single master (master1) and
> have HBase running on the HDFS. Now I am setting up the second master
> (master2) in order to form HA. When I used jps to check the cluster, I
> found:
>
>
> 2782 Jps
> 2126 NameNode
> 2720 SecondaryNameNode
>
> i.e. the DataNode on this server could not be started.
>
> In the log file, I found:
>
> 2012-11-16 10:28:44,851 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
> Incompatible namespaceIDs in /app/hadoop/tmp/dfs/data: namenode namespaceID
> = 1356148070; datanode namespaceID = 1151604993
>
> One possible solution to fix this issue is to stop the cluster,
> reformat the NameNode, and restart the cluster.
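In Hadoop 1.x terms that sequence looks roughly like the following. This is only a command sketch for a cluster using the stock scripts; it assumes $HADOOP_HOME/bin is on PATH, and it destroys all HDFS data:

```shell
# Hadoop 1.x sketch; cluster-only commands, not runnable standalone
stop-all.sh                        # stop HDFS and MapReduce daemons
hadoop namenode -format            # reformat: wipes all HDFS metadata
# Each DataNode's storage dir must also be cleared, or the
# "Incompatible namespaceIDs" error returns on restart
rm -rf /app/hadoop/tmp/dfs/data/*  # path taken from the error log above
start-all.sh
```

The rm step has to run on every DataNode host, not just the master.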
>
> QUESTION: As I already have HBase running on the cluster, if I reformat
> the NameNode, do I need to reinstall HBase entirely? I don't mind losing
> all the data, as I don't have much data in HBase and HDFS, but I don't
> want to reinstall HBase.
>
>
> On the other hand, I have tried another solution: stop the DataNode,
> edit the namespaceID in current/VERSION (i.e. set
> namespaceID=1151604993), and restart the DataNode. It doesn't work:
>
> Warning: $HADOOP_HOME is deprecated.
>
> starting master2, logging to
> /usr/local/hadoop-1.0.4/libexec/../logs/hadoop-hduser-master2-master2.out
>
> Exception in thread "main" java.lang.NoClassDefFoundError: master2
> Caused by: java.lang.ClassNotFoundException: master2
>     at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
> Could not find the main class: master2.  Program will exit.
>
> QUESTION: Any other solutions?
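For reference, the VERSION-edit workaround described above is normally applied on the DataNode side, setting its namespaceID to match the NameNode's. A sketch on a scratch copy of a VERSION file; the two IDs are taken from the error log above, the other fields and the path are illustrative, and GNU sed is assumed for -i:

```shell
# Scratch copy of a DataNode VERSION file; the real file lives under
# <dfs.data.dir>/current/VERSION (fields below are illustrative)
mkdir -p /tmp/version-demo/current
cat > /tmp/version-demo/current/VERSION <<'EOF'
namespaceID=1151604993
storageType=DATA_NODE
layoutVersion=-32
EOF
# Point the DataNode at the NameNode's namespaceID from the log (GNU sed)
sed -i 's/^namespaceID=.*/namespaceID=1356148070/' /tmp/version-demo/current/VERSION
grep '^namespaceID=' /tmp/version-demo/current/VERSION   # namespaceID=1356148070
```

Note that the NoClassDefFoundError in the trace above is a separate problem: the startup script was handed 'master2' as if it were a daemon class name, so it never got as far as reading VERSION.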
>
>
> Thanks
>



-- 
http://hortonworks.com/download/
