Hi,

I'm new to Hadoop.

I'm having trouble formatting my HDFS NameNode for the first time.

This is a fresh install: HA HDFS with ZooKeeper, Hadoop, and Hive.

I'm following these instructions:
[https://hadoop.apache.org/docs/r2.6.4/hadoop-project-dist/hadoop-common/ClusterSetup.html]
.....
Once all the necessary configuration is complete, distribute the files to the 
HADOOP_CONF_DIR directory on all the machines.
This section also describes the various Unix users who should be starting the 
various components and uses the same Unix accounts and groups used previously:
Hadoop Startup
To start a Hadoop cluster you will need to start both the HDFS and YARN cluster.
Format a new distributed filesystem as hdfs:
[hdfs]$ $HADOOP_PREFIX/bin/hdfs namenode -format <cluster_name>
.....
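
The format step I'm attempting on hadoopnn is essentially that last line; the sketch below is roughly what it expands to here (the -clusterid value is just the nameservice name from my hdfs-site.xml further down, and I'm not sure it's even required):

# run as the hdfs user on hadoopnn; -clusterid is optional and the
# value shown ("ha-cluster", my nameservice) is only illustrative
[hdfs]$ $HADOOP_PREFIX/bin/hdfs namenode -format -clusterid ha-cluster
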
When I run the format, I get these errors:
16/06/22 00:53:17 WARN client.QuorumJournalManager: Waited 59051 ms (timeout=60000 ms) for a response for hasSomeData. Succeeded so far: [10.118.112.102:8485]
16/06/22 00:53:17 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Timed out waiting for response from loggers
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:228)
        at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:899)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:940)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1382)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1507)
16/06/22 00:53:18 INFO ipc.Client: Retrying connect to server: hadoopdn/10.118.112.101:8485. Already tried 2 time(s); maxRetries=45

How do I fix this, please?
At the moment I'm running only the JournalNode daemon on all five nodes and nothing else.
I've also tried starting the NameNode on hadoopnn, but it fails with
"java.io.IOException: NameNode is not formatted".

=======================================================================================================================
1. Versions
Hadoop 2.6.4
Hive 2.0.1
ZooKeeper 3.4.6
2. Planned layout:

Host       Role                        IP              ZooKeeper  Quorum  NameNode
hadoopnn   Hadoop Name Node            10.118.112.102  yes        yes     yes
hadoopnn2  Hadoop Name Node Secondary  10.118.112.99   yes        yes     yes
hadoopdn   Hadoop Data Node/Slave      10.118.112.101  yes        yes     no
hadoopdn2  Hadoop Data Node/Slave 2    10.118.112.100  yes        yes     no
hadoopdn3  Hadoop Data Node/Slave 3    10.118.112.103  yes        yes     no

Host       NameNode  NodeManager  ResourceManager  HistoryServer  WebAppProxy  JournalNode
hadoopnn   yes       no           no               no             no           yes
hadoopnn2  yes       no           yes              yes            yes          yes
hadoopdn   no        yes          no               no             no           yes
hadoopdn2  no        yes          no               no             no           yes
hadoopdn3  no        yes          no               no             no           yes

3. Configuration files
==> core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://hadoopnn:9000</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://ha-cluster</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/usr/local/hadoop/jn</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>hadoopnn:2181,hadoopnn2:2181,hadoopdn:2181,hadoopdn2:2181,hadoopdn3:2181</value>
</property>
</configuration>
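
A side note on core-site.xml above: it sets both the deprecated fs.default.name and fs.defaultFS. To double-check which filesystem URI the client side actually resolves, something like this should work on any node (plain hdfs getconf, nothing non-standard assumed):

# print the effective default filesystem URI and the NameNode hosts
# that the current configuration resolves to
$ hdfs getconf -confKey fs.defaultFS
$ hdfs getconf -namenodes
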
==> hdfs-site.xml
<configuration>
<property>
      <name>dfs.namenode.name.dir</name>
      <value>file:///usr/local/hadoop/nn_namespace</value>
</property>
<property>
      <name>dfs.datanode.data.dir</name>
      <value>file:///usr/local/hadoop/dn_namespace</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>ha-cluster</value>
</property>
<property>
<name>dfs.ha.namenodes.ha-cluster</name>
<value>hadoopnn,hadoopnn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.ha-cluster.hadoopnn</name>
<value>hadoopnn:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.ha-cluster.hadoopnn2</name>
<value>hadoopnn2:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.ha-cluster.hadoopnn</name>
<value>hadoopnn:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.ha-cluster.hadoopnn2</name>
<value>hadoopnn2:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://hadoopnn:8485;hadoopnn2:8485;hadoopdn:8485;hadoopdn2:8485;hadoopdn3:8485/ha-cluster</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.ha-cluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hduser/.ssh/id_rsa</value>
</property>
</configuration>
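
Since the shared edits URI above lists all five JournalNodes, I assume dfs.journalnode.edits.dir (/usr/local/hadoop/jn, set in core-site.xml) has to exist and be writable by the daemon account on each of them. The kind of check I'd run per host is below (hduser is only a guess based on the fencing key path; substitute whatever account actually starts the daemons):

# does the JournalNode edits dir exist, and who owns it?
ls -ld /usr/local/hadoop/jn
# can the daemon account write to it? ("hduser" is an assumption)
sudo -u hduser sh -c 'touch /usr/local/hadoop/jn/.probe && rm /usr/local/hadoop/jn/.probe && echo writable'
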
==> yarn-site.xml
<configuration>
<property>
      <name>yarn.resourcemanager.scheduler.class</name>
      <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
<property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
</property>
<property>
      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>hadoopnn2:8025</value>
</property>
<property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>hadoopnn2:8035</value>
</property>
<property>
        <name>yarn.resourcemanager.address</name>
        <value>hadoopnn2:8050</value>
</property>
</configuration>
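
The YARN settings are probably unrelated to the format failure, but once HDFS is sorted out and YARN is started, a quick way to confirm that the NodeManagers register with the ResourceManager on hadoopnn2 (per the addresses above) should be:

# list the NodeManagers known to the ResourceManager
$ yarn node -list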

