I am trying to start my Hadoop cluster manually, and I am having trouble figuring out what the following error means.
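To be specific, by "manually" I mean starting each daemon with the per-daemon scripts rather than start-dfs.sh, roughly in this order (a sketch of my procedure, assuming the Hadoop 3.x hdfs --daemon syntax):

    # start the JournalNodes first, on vmnode1, vmnode2 and vmnode3
    hdfs --daemon start journalnode

    # then the NameNodes, on each NameNode host
    hdfs --daemon start namenode

    # then the ZooKeeper failover controllers
    hdfs --daemon start zkfc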

I see this error repeatedly, and eventually the NameNode shuts down:
[2023-10-10 21:03:37,179] INFO Retrying connect to server: 
vmnode1/192.168.1.159:8485. Already tried 0 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
(org.apache.hadoop.ipc.Client)
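Since the NameNode keeps retrying the connection to vmnode1:8485, my first thought was a basic connectivity problem, so a quick sanity check from the NameNode host would be something like this (assuming netcat is installed; telnet vmnode1 8485 would show the same thing):

    # verify that the JournalNode RPC port is reachable from the NameNode host
    nc -vz vmnode1 8485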


Does this mean that the JournalNode is having trouble? Looking at the JournalNode log on vmnode1, I do not see anything that looks wrong to me:
[2023-10-10 21:11:24,583] INFO Using callQueue: class 
java.util.concurrent.LinkedBlockingQueue, queueCapacity: 500, scheduler: class 
org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false. 
(org.apache.hadoop.ipc.CallQueueManager)
[2023-10-10 21:11:24,603] INFO Listener at 0.0.0.0:8485 
(org.apache.hadoop.ipc.Server)
[2023-10-10 21:11:24,606] INFO Starting Socket Reader #1 for port 8485 
(org.apache.hadoop.ipc.Server)
[2023-10-10 21:11:24,914] INFO IPC Server listener on 8485: starting 
(org.apache.hadoop.ipc.Server)
[2023-10-10 21:11:24,917] INFO IPC Server Responder: starting 
(org.apache.hadoop.ipc.Server)
[2023-10-10 21:11:25,481] INFO Initializing journal in directory 
/hadoop/data/hdfs/journalnode/mycluster 
(org.apache.hadoop.hdfs.qjournal.server.JournalNode)
[2023-10-10 21:11:25,521] INFO Lock on 
/hadoop/data/hdfs/journalnode/mycluster/in_use.lock acquired by nodename 
10296@vmnode1 (org.apache.hadoop.hdfs.server.common.Storage)
[2023-10-10 21:11:25,562] INFO Scanning storage 
FileJournalManager(root=/hadoop/data/hdfs/journalnode/mycluster) 
(org.apache.hadoop.hdfs.qjournal.server.Journal)
[2023-10-10 21:11:25,643] INFO Latest log is 
EditLogFile(file=/hadoop/data/hdfs/journalnode/mycluster/current/edits_inprogress_0000000000000000017,first=0000000000000000017,last=0000000000000000017,inProgress=true,hasCorruptHeader=false)
 ; journal id: mycluster (org.apache.hadoop.hdfs.qjournal.server.Journal)
[2023-10-10 21:11:25,993] INFO Starting SyncJournal daemon for journal 
mycluster (org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer)
[2023-10-10 21:11:26,017] INFO 
/hadoop/data/hdfs/journalnode/mycluster/edits.sync directory already exists. 
(org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer)
[2023-10-10 21:11:26,018] INFO Syncing Journal /0:0:0:0:0:0:0:0:8485 with 
vmnode1/192.168.1.159:8485, journal id: mycluster 
(org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer)
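The "Listener at 0.0.0.0:8485" line suggests the JournalNode RPC server is up and bound. As a further check, its web endpoint should answer as well (assuming the default dfs.journalnode.http-address port 8480; adjust if that is overridden):

    # the JMX endpoint responds on any healthy Hadoop daemon's HTTP server
    curl http://vmnode1:8480/jmx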

This is my hdfs-site.xml:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>vmnode1:2181,vmnode2:2181,vmnode3:2181</value>
  </property>

  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>

  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>

  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2,nn3</value>
  </property>

  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>vmnode1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>vmnode2:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn3</name>
    <value>vmnode3:8020</value>
  </property>

  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>vmnode1:9870</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>vmnode2:9870</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn3</name>
    <value>vmnode3:9870</value>
  </property>

  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://vmnode1:8485;vmnode2:8485;vmnode3:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>

  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/hadoop/data/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/hadoop/data/hdfs/datanode</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/hadoop/data/hdfs/journalnode</value>
  </property>

  <property>
    <name>dfs.ha.nn.not-become-active-in-safemode</name>
    <value>false</value>
  </property>

</configuration>
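To confirm that the NameNode actually resolves the shared edits quorum from this configuration, hdfs getconf can echo the effective values (it reads the same configuration files the daemons do):

    hdfs getconf -confKey dfs.namenode.shared.edits.dir
    hdfs getconf -confKey dfs.journalnode.rpc-address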
