First of all, could you please explain how you installed Hadoop? You may
have already shared this in a previous thread, but I haven't gone through
all of them and don't have the details memorized.

I haven't actually tried it, but I believe that to change the log level for
processes started as daemons, such as the NameNode and DataNode, you need to
set the HADOOP_DAEMON_ROOT_LOGGER environment variable in
etc/hadoop/hadoop-env.sh. As the comments below note, HADOOP_ROOT_LOGGER
only applies to interactive commands, which would explain why setting it
had no effect on your daemons:


# Default log4j setting for interactive commands
# Java property: hadoop.root.logger
# export HADOOP_ROOT_LOGGER=INFO,console

# Default log4j setting for daemons spawned explicitly by
# --daemon option of hadoop, hdfs, mapred and yarn command.
# Java property: hadoop.root.logger
# export HADOOP_DAEMON_ROOT_LOGGER=INFO,RFA
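So, as a sketch (untested on my side), you would uncomment and change the
daemon variable rather than the interactive one. I'm assuming here that RFA
is the rolling file appender defined in the default log4j.properties, so
the extra output lands in the daemon's log file:

```shell
# In etc/hadoop/hadoop-env.sh: raise the log level for daemons started
# via the --daemon option of the hdfs/yarn/mapred commands.
export HADOOP_DAEMON_ROOT_LOGGER=TRACE,RFA

# The daemon only reads hadoop-env.sh at startup, so restart it afterwards:
# hdfs --daemon stop namenode
# hdfs --daemon start namenode
```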


On Wed, Oct 4, 2023 at 5:11 PM Harry Jamison
<harryjamiso...@yahoo.com.invalid> wrote:

> @Kiyoshi Mizumaru
>
> How would I do that?
> I tried changing
>
> /hadoop/etc/hadoop/hadoop-env.sh
>
> export HADOOP_ROOT_LOGGER=TRACE,console
>
> But that did not seem to work; I still only get INFO.
> On Tuesday, October 3, 2023 at 09:13:13 PM PDT, Harry Jamison
> <harryjamiso...@yahoo.com.invalid> wrote:
>
>
> I am not sure exactly what the problem is now.
>
> My namenode (and I think journal node) are getting shut down.
> Is there a way to tell why it is getting the shutdown signal?
>
> Also, the datanode seems to be getting this error:
> End of File Exception between local host is
>
>
> Here are the logs; I only see INFO logging, and then the shutdown.
>
> [2023-10-03 20:53:00,873] INFO Initializing quota with 12 thread(s)
> (org.apache.hadoop.hdfs.server.namenode.FSDirectory)
>
> [2023-10-03 20:53:00,876] INFO Quota initialization completed in 1
> milliseconds
>
> name space=2
>
> storage space=0
>
> storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
> (org.apache.hadoop.hdfs.server.namenode.FSDirectory)
>
> [2023-10-03 20:53:00,882] INFO Total number of blocks            = 0
> (org.apache.hadoop.hdfs.server.blockmanagement.BlockManager)
>
> [2023-10-03 20:53:00,884] INFO Starting CacheReplicationMonitor with
> interval 30000 milliseconds
> (org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor)
>
> [2023-10-03 20:53:00,884] INFO Number of invalid blocks          = 0
> (org.apache.hadoop.hdfs.server.blockmanagement.BlockManager)
>
> [2023-10-03 20:53:00,884] INFO Number of under-replicated blocks = 0
> (org.apache.hadoop.hdfs.server.blockmanagement.BlockManager)
>
> [2023-10-03 20:53:00,884] INFO Number of  over-replicated blocks = 0
> (org.apache.hadoop.hdfs.server.blockmanagement.BlockManager)
>
> [2023-10-03 20:53:00,884] INFO Number of blocks being written    = 0
> (org.apache.hadoop.hdfs.server.blockmanagement.BlockManager)
>
> [2023-10-03 20:53:00,884] INFO STATE* Replication Queue initialization
> scan for invalid, over- and under-replicated blocks completed in 67 msec
> (org.apache.hadoop.hdfs.StateChange)
>
> [2023-10-03 20:54:16,453] ERROR RECEIVED SIGNAL 15: SIGTERM
> (org.apache.hadoop.hdfs.server.namenode.NameNode)
>
> [2023-10-03 20:54:16,467] INFO SHUTDOWN_MSG:
>
> /************************************************************
>
> SHUTDOWN_MSG: Shutting down NameNode at vmnode1/192.168.1.159
>
> ************************************************************/
> (org.apache.hadoop.hdfs.server.namenode.NameNode)
>
>
>
>
> When I start the data node I see this
>
> [2023-10-03 20:53:00,882] INFO Namenode Block pool
> BP-1620264838-192.168.1.159-1696370857417 (Datanode Uuid
> 66068658-b08b-49cd-aba0-56ac1f29e7d5) service to vmnode1/
> 192.168.1.159:8020 trying to claim ACTIVE state with txid=15
> (org.apache.hadoop.hdfs.server.datanode.DataNode)
>
> [2023-10-03 20:53:00,882] INFO Acknowledging ACTIVE Namenode Block pool
> BP-1620264838-192.168.1.159-1696370857417 (Datanode Uuid
> 66068658-b08b-49cd-aba0-56ac1f29e7d5) service to vmnode1/
> 192.168.1.159:8020 (org.apache.hadoop.hdfs.server.datanode.DataNode)
>
> [2023-10-03 20:53:00,882] INFO After receiving heartbeat response,
> updating state of namenode vmnode1:8020 to active
> (org.apache.hadoop.hdfs.server.datanode.DataNode)
>
> [2023-10-03 20:54:18,771] WARN IOException in offerService
> (org.apache.hadoop.hdfs.server.datanode.DataNode)
>
> java.io.EOFException: End of File Exception between local host is:
> "vmnode1/192.168.1.159"; destination host is: "vmnode1":8020; :
> java.io.EOFException; For more details see:
> http://wiki.apache.org/hadoop/EOFException
>
> at
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
> at
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>
> at
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
> at
> java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
>
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:930)
>
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:879)
>
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1571)
>
> at org.apache.hadoop.ipc.Client.call(Client.java:1513)
>
> at org.apache.hadoop.ipc.Client.call(Client.java:1410)
>
> at
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:258)
>
> at
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:139)
>
> at com.sun.proxy.$Proxy19.sendHeartbeat(Unknown Source)
>
> at
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:168)
>
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:562)
>
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:710)
>
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:920)
>
> at java.base/java.lang.Thread.run(Thread.java:829)
>
> Caused by: java.io.EOFException
>
> at java.base/java.io.DataInputStream.readInt(DataInputStream.java:397)
>
> at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1906)
>
> at
> org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1187)
>
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1078)
>
>
