masatana commented on PR #1322:
URL: https://github.com/apache/bigtop/pull/1322#issuecomment-2579505184
RPM test result
<details>
Building RPMs on Rocky 8 (with Docker)
```
$ ./gradlew allclean hadoop-pkg-ind repo-ind -POS=rockylinux-8
```
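As an extra sanity check (not part of the run above), the built RPMs and the repo created by `repo-ind` can be listed; the `output/` path below is an assumption based on Bigtop's default build output directory, so adjust it if your tree differs.
```
# Hypothetical verification step: confirm the Hadoop RPMs and yum repo metadata exist.
# Paths assume Bigtop's default output/ directory.
find output -name 'hadoop*.rpm' | sort
find output -type d -name repodata
```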
Smoke tests on Rocky 8
```
$ cd provisioner/docker
$ ./docker-hadoop.sh --enable-local-repo --disable-gpg-check --docker-compose-plugin -C config_rockylinux-8.yaml -F docker-compose-cgroupv2.yml --stack hdfs,yarn,mapreduce --smoke-tests hdfs -c 3
(snip)
Gradle Test Executor 2 finished executing tests.
> Task :bigtop-tests:smoke-tests:hdfs:test
Finished generating test XML results (0.023 secs) into: /bigtop-home/bigtop-tests/smoke-tests/hdfs/build/test-results/test
Generating HTML test report...
Finished generating test html results (0.024 secs) into: /bigtop-home/bigtop-tests/smoke-tests/hdfs/build/reports/tests/test
Now testing...
:bigtop-tests:smoke-tests:hdfs:test (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 9 mins 7.834 secs.
BUILD SUCCESSFUL in 8m 45s
29 actionable tasks: 8 executed, 21 up-to-date
Stopped 1 worker daemon(s).
+ rm -rf buildSrc/build/test-results/binary
+ rm -rf /bigtop-home/.gradle
```
Install additional packages (journalnode, zkfc, dfsrouter, secondarynamenode)
```
$ ./docker-hadoop.sh -dcp --exec 1 /bin/bash
$ dnf install hadoop-hdfs-journalnode hadoop-hdfs-secondarynamenode hadoop-hdfs-zkfc hadoop-hdfs-dfsrouter -y
```
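Before the `systemctl` checks below, it is easy to confirm that each of these packages actually ships a systemd unit. A quick query against the installed RPMs (a sketch, not part of the original test run):
```
# List the systemd unit files shipped by each newly installed package.
for pkg in hadoop-hdfs-journalnode hadoop-hdfs-secondarynamenode \
           hadoop-hdfs-zkfc hadoop-hdfs-dfsrouter; do
  echo "== $pkg =="
  rpm -ql "$pkg" | grep '/usr/lib/systemd/system/' || echo "  no unit file found"
done
```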
Check if systemctl works (`systemctl start` & `systemctl status`)
```
$ for service_name in namenode datanode journalnode secondarynamenode zkfc dfsrouter; do systemctl start hadoop-hdfs-$service_name; done
Job for hadoop-hdfs-zkfc.service failed because the control process exited with error code.
See "systemctl status hadoop-hdfs-zkfc.service" and "journalctl -xe" for details.
```
```
$ for service_name in namenode datanode journalnode secondarynamenode zkfc dfsrouter; do systemctl status hadoop-hdfs-$service_name; done
● hadoop-hdfs-namenode.service - Hadoop NameNode
Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-namenode.service;
static; vendor preset: disabled)
Active: active (running) since Thu 2025-01-09 02:51:34 UTC; 2h 19min ago
Docs: https://hadoop.apache.org/
Main PID: 7549 (java)
Tasks: 73 (limit: 98358)
Memory: 454.4M
CGroup:
/docker/071de24523e57040eff639e76330f9e4fe9ddb5d3d956a660a52a655ddd7823f/system.slice/hadoop-hdfs-namenode.service
└─7549
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.432.b06-2.el8.x86_64/bin/java
-Dproc_namenode -Djava.net.preferIPv4Stack=true
-Dhdfs.audit.logger=INFO,NullAppender -Dcom.sun.management.jmxremote -Dyar…
Jan 09 05:07:31 071de24523e5 systemd[1]: hadoop-hdfs-namenode.service:
Unknown serialization key: ref-gid
Jan 09 05:07:31 071de24523e5 systemd[1]: hadoop-hdfs-namenode.service:
Changed dead -> running
Jan 09 05:09:03 071de24523e5 systemd[1]: hadoop-hdfs-namenode.service:
Trying to enqueue job hadoop-hdfs-namenode.service/start/replace
Jan 09 05:09:03 071de24523e5 systemd[1]: hadoop-hdfs-namenode.service:
Installed new job hadoop-hdfs-namenode.service/start as 425
Jan 09 05:09:03 071de24523e5 systemd[1]: hadoop-hdfs-namenode.service:
Enqueued job hadoop-hdfs-namenode.service/start as 425
Jan 09 05:09:03 071de24523e5 systemd[1]: hadoop-hdfs-namenode.service: Job
hadoop-hdfs-namenode.service/start finished, result=done
Warning: Journal has been rotated since unit was started. Log output is
incomplete or unavailable.
● hadoop-hdfs-datanode.service - Hadoop DataNode
Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-datanode.service;
static; vendor preset: disabled)
Active: active (running) since Thu 2025-01-09 02:51:39 UTC; 2h 19min ago
Docs: https://hadoop.apache.org/
Main PID: 7746 (java)
Tasks: 76 (limit: 98358)
Memory: 500.0M
CGroup:
/docker/071de24523e57040eff639e76330f9e4fe9ddb5d3d956a660a52a655ddd7823f/system.slice/hadoop-hdfs-datanode.service
└─7746
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.432.b06-2.el8.x86_64/bin/java
-Dproc_datanode -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote
-Dyarn.log.dir=/var/log/hadoop-hdfs -Dyarn.…
Jan 09 05:07:31 071de24523e5 systemd[1]: hadoop-hdfs-datanode.service:
Unknown serialization key: ref-gid
Jan 09 05:07:31 071de24523e5 systemd[1]: hadoop-hdfs-datanode.service:
Changed dead -> running
Jan 09 05:09:03 071de24523e5 systemd[1]: hadoop-hdfs-datanode.service:
Trying to enqueue job hadoop-hdfs-datanode.service/start/replace
Jan 09 05:09:03 071de24523e5 systemd[1]: hadoop-hdfs-datanode.service:
Installed new job hadoop-hdfs-datanode.service/start as 456
Jan 09 05:09:03 071de24523e5 systemd[1]: hadoop-hdfs-datanode.service:
Enqueued job hadoop-hdfs-datanode.service/start as 456
Jan 09 05:09:03 071de24523e5 systemd[1]: hadoop-hdfs-datanode.service: Job
hadoop-hdfs-datanode.service/start finished, result=done
Warning: Journal has been rotated since unit was started. Log output is
incomplete or unavailable.
● hadoop-hdfs-journalnode.service - Hadoop Journalnode
Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-journalnode.service;
static; vendor preset: disabled)
Active: active (running) since Thu 2025-01-09 05:09:05 UTC; 1min 52s ago
Docs: https://hadoop.apache.org/
Process: 31278 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
start journalnode (code=exited, status=0/SUCCESS)
Main PID: 31329 (java)
Tasks: 44 (limit: 98358)
Memory: 164.0M
CGroup:
/docker/071de24523e57040eff639e76330f9e4fe9ddb5d3d956a660a52a655ddd7823f/system.slice/hadoop-hdfs-journalnode.service
└─31329
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.432.b06-2.el8.x86_64/bin/java
-Dproc_journalnode -Djava.net.preferIPv4Stack=true
-Dyarn.log.dir=/var/log/hadoop-hdfs -Dyarn.log.file=hadoop-hdfs-journa…
Jan 09 05:09:03 071de24523e5 systemd[1]: hadoop-hdfs-journalnode.service:
User lookup succeeded: uid=996 gid=993
Jan 09 05:09:03 071de24523e5 systemd[31278]:
hadoop-hdfs-journalnode.service: Executing: /usr/bin/hdfs --config
/etc/hadoop/conf --daemon start journalnode
Jan 09 05:09:05 071de24523e5 systemd[1]: hadoop-hdfs-journalnode.service:
Child 31278 belongs to hadoop-hdfs-journalnode.service.
Jan 09 05:09:05 071de24523e5 systemd[1]: hadoop-hdfs-journalnode.service:
Control process exited, code=exited status=0
Jan 09 05:09:05 071de24523e5 systemd[1]: hadoop-hdfs-journalnode.service:
Got final SIGCHLD for state start.
Jan 09 05:09:05 071de24523e5 systemd[1]: hadoop-hdfs-journalnode.service:
Main PID guessed: 31329
Jan 09 05:09:05 071de24523e5 systemd[1]: hadoop-hdfs-journalnode.service:
Changed start -> running
Jan 09 05:09:05 071de24523e5 systemd[1]: hadoop-hdfs-journalnode.service:
Job hadoop-hdfs-journalnode.service/start finished, result=done
Jan 09 05:09:05 071de24523e5 systemd[1]: Started Hadoop Journalnode.
Jan 09 05:09:05 071de24523e5 systemd[1]: hadoop-hdfs-journalnode.service:
Failed to send unit change signal for hadoop-hdfs-journalnode.service:
Connection reset by peer
● hadoop-hdfs-secondarynamenode.service - Hadoop Secondary NameNode
Loaded: loaded
(/usr/lib/systemd/system/hadoop-hdfs-secondarynamenode.service; static; vendor
preset: disabled)
Active: active (running) since Thu 2025-01-09 05:09:07 UTC; 1min 50s ago
Docs: https://hadoop.apache.org/
Process: 31372 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
start secondarynamenode (code=exited, status=0/SUCCESS)
Main PID: 31423 (java)
Tasks: 37 (limit: 98358)
Memory: 340.5M
CGroup:
/docker/071de24523e57040eff639e76330f9e4fe9ddb5d3d956a660a52a655ddd7823f/system.slice/hadoop-hdfs-secondarynamenode.service
└─31423
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.432.b06-2.el8.x86_64/bin/java
-Dproc_secondarynamenode -Djava.net.preferIPv4Stack=true
-Dhdfs.audit.logger=INFO,NullAppender -Dcom.sun.management.jmxre…
Jan 09 05:09:05 071de24523e5 systemd[31372]:
hadoop-hdfs-secondarynamenode.service: Executing: /usr/bin/hdfs --config
/etc/hadoop/conf --daemon start secondarynamenode
Jan 09 05:09:05 071de24523e5 hdfs[31372]: WARNING:
HADOOP_SECONDARYNAMENODE_OPTS has been replaced by HDFS_SECONDARYNAMENODE_OPTS.
Using value of HADOOP_SECONDARYNAMENODE_OPTS.
Jan 09 05:09:07 071de24523e5 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Child 31372 belongs to
hadoop-hdfs-secondarynamenode.service.
Jan 09 05:09:07 071de24523e5 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Control process exited, code=exited
status=0
Jan 09 05:09:07 071de24523e5 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Got final SIGCHLD for state start.
Jan 09 05:09:07 071de24523e5 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Main PID guessed: 31423
Jan 09 05:09:07 071de24523e5 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Changed start -> running
Jan 09 05:09:07 071de24523e5 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Job
hadoop-hdfs-secondarynamenode.service/start finished, result=done
Jan 09 05:09:07 071de24523e5 systemd[1]: Started Hadoop Secondary NameNode.
Jan 09 05:09:07 071de24523e5 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Failed to send unit change signal for
hadoop-hdfs-secondarynamenode.service: Connection reset by peer
● hadoop-hdfs-zkfc.service - Hadoop ZKFC
Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-zkfc.service; static;
vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2025-01-09 05:09:09 UTC;
1min 48s ago
Docs: https://hadoop.apache.org/
Process: 31467 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
start zkfc (code=exited, status=1/FAILURE)
Jan 09 05:09:07 071de24523e5 systemd[1]: hadoop-hdfs-zkfc.service: User
lookup succeeded: uid=996 gid=993
Jan 09 05:09:07 071de24523e5 systemd[31467]: hadoop-hdfs-zkfc.service:
Executing: /usr/bin/hdfs --config /etc/hadoop/conf --daemon start zkfc
Jan 09 05:09:09 071de24523e5 systemd[1]: hadoop-hdfs-zkfc.service: Child
31467 belongs to hadoop-hdfs-zkfc.service.
Jan 09 05:09:09 071de24523e5 systemd[1]: hadoop-hdfs-zkfc.service: Control
process exited, code=exited status=1
Jan 09 05:09:09 071de24523e5 systemd[1]: hadoop-hdfs-zkfc.service: Got final
SIGCHLD for state start.
Jan 09 05:09:09 071de24523e5 systemd[1]: hadoop-hdfs-zkfc.service: Failed
with result 'exit-code'.
Jan 09 05:09:09 071de24523e5 systemd[1]: hadoop-hdfs-zkfc.service: Changed
start -> failed
Jan 09 05:09:09 071de24523e5 systemd[1]: hadoop-hdfs-zkfc.service: Job
hadoop-hdfs-zkfc.service/start finished, result=failed
Jan 09 05:09:09 071de24523e5 systemd[1]: Failed to start Hadoop ZKFC.
Jan 09 05:09:09 071de24523e5 systemd[1]: hadoop-hdfs-zkfc.service: Unit
entered failed state.
● hadoop-hdfs-dfsrouter.service - Hadoop dfsrouter
Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-dfsrouter.service;
static; vendor preset: disabled)
Active: active (running) since Thu 2025-01-09 05:09:11 UTC; 1min 46s ago
Docs: https://hadoop.apache.org/
Process: 31564 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
start dfsrouter (code=exited, status=0/SUCCESS)
Main PID: 31615 (java)
Tasks: 69 (limit: 98358)
Memory: 294.5M
CGroup:
/docker/071de24523e57040eff639e76330f9e4fe9ddb5d3d956a660a52a655ddd7823f/system.slice/hadoop-hdfs-dfsrouter.service
└─31615
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.432.b06-2.el8.x86_64/bin/java
-Dproc_dfsrouter -Djava.net.preferIPv4Stack=true
-Dyarn.log.dir=/var/log/hadoop-hdfs -Dyarn.log.file=hadoop-hdfs-dfsroute…
Jan 09 05:09:09 071de24523e5 systemd[1]: hadoop-hdfs-dfsrouter.service: User
lookup succeeded: uid=996 gid=993
Jan 09 05:09:09 071de24523e5 systemd[31564]: hadoop-hdfs-dfsrouter.service:
Executing: /usr/bin/hdfs --config /etc/hadoop/conf --daemon start dfsrouter
Jan 09 05:09:11 071de24523e5 systemd[1]: hadoop-hdfs-dfsrouter.service:
Child 31564 belongs to hadoop-hdfs-dfsrouter.service.
Jan 09 05:09:11 071de24523e5 systemd[1]: hadoop-hdfs-dfsrouter.service:
Control process exited, code=exited status=0
Jan 09 05:09:11 071de24523e5 systemd[1]: hadoop-hdfs-dfsrouter.service: Got
final SIGCHLD for state start.
Jan 09 05:09:11 071de24523e5 systemd[1]: hadoop-hdfs-dfsrouter.service: Main
PID guessed: 31615
Jan 09 05:09:11 071de24523e5 systemd[1]: hadoop-hdfs-dfsrouter.service:
Changed start -> running
Jan 09 05:09:11 071de24523e5 systemd[1]: hadoop-hdfs-dfsrouter.service: Job
hadoop-hdfs-dfsrouter.service/start finished, result=done
Jan 09 05:09:11 071de24523e5 systemd[1]: Started Hadoop dfsrouter.
Jan 09 05:09:11 071de24523e5 systemd[1]: hadoop-hdfs-dfsrouter.service:
Failed to send unit change signal for hadoop-hdfs-dfsrouter.service: Connection
reset by peer
```
While ZKFC failed to start, we can confirm that it is launched via systemd; it shuts down immediately because HA is not configured on this cluster (see the log below).
```
$ cat /var/log/hadoop-hdfs/hadoop-hdfs-zkfc-071de24523e5.log
(snip)
************************************************************/
2025-01-09 05:09:08,670 INFO org.apache.hadoop.hdfs.tools.DFSZKFailoverController: registered UNIX signal handlers for [TERM, HUP, INT]
2025-01-09 05:09:09,034 ERROR org.apache.hadoop.hdfs.tools.DFSZKFailoverController: DFSZKFailOverController exiting due to earlier exception
org.apache.hadoop.HadoopIllegalArgumentException: HA is not enabled for this namenode.
2025-01-09 05:09:09,039 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: org.apache.hadoop.HadoopIllegalArgumentException: HA is not enabled for this namenode.
2025-01-09 05:09:09,045 INFO org.apache.hadoop.hdfs.tools.DFSZKFailoverController: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DFSZKFailoverController at 071de24523e5.bigtop.apache.org/172.19.0.4
************************************************************/
```
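For reference, ZKFC only stays up when an HA nameservice and automatic failover are configured. A minimal sketch of the properties it expects is below; the nameservice id (`mycluster`), NameNode ids/hosts, and the ZooKeeper quorum are placeholders, not values from this test cluster.
```
# Sketch only: HA properties ZKFC requires, written to a scratch file to be
# merged by hand into /etc/hadoop/conf/hdfs-site.xml (ha.zookeeper.quorum
# usually lives in core-site.xml). All names and hosts are placeholders.
cat <<'EOF' > /tmp/hdfs-ha-fragment.xml
<property><name>dfs.nameservices</name><value>mycluster</value></property>
<property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>nn1.example.com:8020</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>nn2.example.com:8020</value></property>
<property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
<property><name>ha.zookeeper.quorum</name><value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value></property>
EOF
```
Even with those properties set, the failover controller also needs a fencing method and a one-time `hdfs zkfc -formatZK`, so the failure here is expected on a non-HA cluster.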
Check that the packaged unit files are the ones in use (`systemctl cat`)
```
$ for service_name in namenode datanode journalnode secondarynamenode zkfc dfsrouter; do systemctl cat hadoop-hdfs-$service_name; done
# /usr/lib/systemd/system/hadoop-hdfs-namenode.service
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[Unit]
Documentation=https://hadoop.apache.org/
Description=Hadoop NameNode
Before=multi-user.target
Before=graphical.target
After=remote-fs.target
[Service]
User=hdfs
Group=hdfs
Type=forking
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
RemainAfterExit=no
SuccessExitStatus=5 6
ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon start namenode
ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon stop namenode
# /usr/lib/systemd/system/hadoop-hdfs-datanode.service
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[Unit]
Documentation=https://hadoop.apache.org/
Description=Hadoop DataNode
Before=multi-user.target
Before=graphical.target
After=remote-fs.target
[Service]
User=hdfs
Group=hdfs
Type=forking
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
RemainAfterExit=no
SuccessExitStatus=5 6
ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon start datanode
ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon stop datanode
# /usr/lib/systemd/system/hadoop-hdfs-journalnode.service
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[Unit]
Documentation=https://hadoop.apache.org/
Description=Hadoop Journalnode
Before=multi-user.target
Before=graphical.target
After=remote-fs.target
[Service]
User=hdfs
Group=hdfs
Type=forking
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
RemainAfterExit=no
SuccessExitStatus=5 6
ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon start journalnode
ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon stop journalnode
# /usr/lib/systemd/system/hadoop-hdfs-secondarynamenode.service
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[Unit]
Documentation=https://hadoop.apache.org/
Description=Hadoop Secondary NameNode
Before=multi-user.target
Before=graphical.target
After=remote-fs.target
[Service]
User=hdfs
Group=hdfs
Type=forking
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
RemainAfterExit=no
SuccessExitStatus=5 6
ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon start secondarynamenode
ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon stop secondarynamenode
# /usr/lib/systemd/system/hadoop-hdfs-zkfc.service
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[Unit]
Documentation=https://hadoop.apache.org/
Description=Hadoop ZKFC
Before=multi-user.target
Before=graphical.target
After=remote-fs.target
[Service]
User=hdfs
Group=hdfs
Type=forking
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
RemainAfterExit=no
SuccessExitStatus=5 6
ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon start zkfc
ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon stop zkfc
# /usr/lib/systemd/system/hadoop-hdfs-dfsrouter.service
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[Unit]
Documentation=https://hadoop.apache.org/
Description=Hadoop dfsrouter
Before=multi-user.target
Before=graphical.target
After=remote-fs.target
[Service]
User=hdfs
Group=hdfs
Type=forking
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
RemainAfterExit=no
SuccessExitStatus=5 6
ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon start dfsrouter
ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon stop dfsrouter
```
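The units are installed under /usr/lib/systemd/system and marked `static`, so any local tuning is better done through a drop-in than by editing the packaged files. A hypothetical example (the `Restart=on-failure` setting is only an illustration, not something these packages configure):
```
# Sketch: override a packaged unit without touching /usr/lib/systemd/system.
mkdir -p /etc/systemd/system/hadoop-hdfs-namenode.service.d
cat <<'EOF' > /etc/systemd/system/hadoop-hdfs-namenode.service.d/override.conf
[Service]
Restart=on-failure
RestartSec=10
EOF
systemctl daemon-reload
systemctl restart hadoop-hdfs-namenode
```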
Check that the services work after a container restart (`docker restart`, then `systemctl start`)
```
$ docker restart (container id)
$ for service_name in namenode datanode journalnode secondarynamenode zkfc dfsrouter; do systemctl start hadoop-hdfs-$service_name; done
Job for hadoop-hdfs-zkfc.service failed because the control process exited with error code.
See "systemctl status hadoop-hdfs-zkfc.service" and "journalctl -xe" for details.
$ for service_name in namenode datanode journalnode secondarynamenode zkfc dfsrouter; do systemctl status hadoop-hdfs-$service_name; done
● hadoop-hdfs-namenode.service - Hadoop NameNode
Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-namenode.service;
static; vendor preset: disabled)
Active: active (running) since Thu 2025-01-09 08:40:09 UTC; 5min ago
Docs: https://hadoop.apache.org/
Process: 48 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
start namenode (code=exited, status=0/SUCCESS)
Main PID: 101 (java)
Tasks: 73 (limit: 98358)
Memory: 439.6M
CGroup:
/docker/8dc2c59a2af63935747266a2157e516fee3c344f22909d4a6733ae0247bce98b/system.slice/hadoop-hdfs-namenode.service
└─101
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.432.b06-2.el8.x86_64/bin/java
-Dproc_namenode -Djava.net.preferIPv4Stack=true
-Dhdfs.audit.logger=INFO,NullAppender -Dcom.sun.management.jmxremote -Dyarn>
Jan 09 08:40:06 8dc2c59a2af6 systemd[1]: hadoop-hdfs-namenode.service: User
lookup succeeded: uid=996 gid=993
Jan 09 08:40:06 8dc2c59a2af6 systemd[48]: hadoop-hdfs-namenode.service:
Executing: /usr/bin/hdfs --config /etc/hadoop/conf --daemon start namenode
Jan 09 08:40:09 8dc2c59a2af6 systemd[1]: hadoop-hdfs-namenode.service: Child
48 belongs to hadoop-hdfs-namenode.service.
Jan 09 08:40:09 8dc2c59a2af6 systemd[1]: hadoop-hdfs-namenode.service:
Control process exited, code=exited status=0
Jan 09 08:40:09 8dc2c59a2af6 systemd[1]: hadoop-hdfs-namenode.service: Got
final SIGCHLD for state start.
Jan 09 08:40:09 8dc2c59a2af6 systemd[1]: hadoop-hdfs-namenode.service: Main
PID guessed: 101
Jan 09 08:40:09 8dc2c59a2af6 systemd[1]: hadoop-hdfs-namenode.service:
Changed start -> running
Jan 09 08:40:09 8dc2c59a2af6 systemd[1]: hadoop-hdfs-namenode.service: Job
hadoop-hdfs-namenode.service/start finished, result=done
Jan 09 08:40:09 8dc2c59a2af6 systemd[1]: Started Hadoop NameNode.
Jan 09 08:40:09 8dc2c59a2af6 systemd[1]: hadoop-hdfs-namenode.service:
Failed to send unit change signal for hadoop-hdfs-namenode.service: Connection
reset by peer
● hadoop-hdfs-datanode.service - Hadoop DataNode
Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-datanode.service;
static; vendor preset: disabled)
Active: active (running) since Thu 2025-01-09 08:40:11 UTC; 5min ago
Docs: https://hadoop.apache.org/
Process: 151 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
start datanode (code=exited, status=0/SUCCESS)
Main PID: 206 (java)
Tasks: 65 (limit: 98358)
Memory: 238.3M
CGroup:
/docker/8dc2c59a2af63935747266a2157e516fee3c344f22909d4a6733ae0247bce98b/system.slice/hadoop-hdfs-datanode.service
└─206
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.432.b06-2.el8.x86_64/bin/java
-Dproc_datanode -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote
-Dyarn.log.dir=/var/log/hadoop-hdfs -Dyarn.l>
Jan 09 08:40:09 8dc2c59a2af6 systemd[1]: hadoop-hdfs-datanode.service: User
lookup succeeded: uid=996 gid=993
Jan 09 08:40:09 8dc2c59a2af6 systemd[151]: hadoop-hdfs-datanode.service:
Executing: /usr/bin/hdfs --config /etc/hadoop/conf --daemon start datanode
Jan 09 08:40:11 8dc2c59a2af6 systemd[1]: hadoop-hdfs-datanode.service: Child
151 belongs to hadoop-hdfs-datanode.service.
Jan 09 08:40:11 8dc2c59a2af6 systemd[1]: hadoop-hdfs-datanode.service:
Control process exited, code=exited status=0
Jan 09 08:40:11 8dc2c59a2af6 systemd[1]: hadoop-hdfs-datanode.service: Got
final SIGCHLD for state start.
Jan 09 08:40:11 8dc2c59a2af6 systemd[1]: hadoop-hdfs-datanode.service: Main
PID guessed: 206
Jan 09 08:40:11 8dc2c59a2af6 systemd[1]: hadoop-hdfs-datanode.service:
Changed start -> running
Jan 09 08:40:11 8dc2c59a2af6 systemd[1]: hadoop-hdfs-datanode.service: Job
hadoop-hdfs-datanode.service/start finished, result=done
Jan 09 08:40:11 8dc2c59a2af6 systemd[1]: Started Hadoop DataNode.
Jan 09 08:40:11 8dc2c59a2af6 systemd[1]: hadoop-hdfs-datanode.service:
Failed to send unit change signal for hadoop-hdfs-datanode.service: Connection
reset by peer
● hadoop-hdfs-journalnode.service - Hadoop Journalnode
Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-journalnode.service;
static; vendor preset: disabled)
Active: active (running) since Thu 2025-01-09 08:40:10 UTC; 5min ago
Docs: https://hadoop.apache.org/
Process: 322 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
start journalnode (code=exited, status=0/SUCCESS)
Main PID: 373 (java)
Tasks: 44 (limit: 98358)
Memory: 163.7M
CGroup:
/docker/8dc2c59a2af63935747266a2157e516fee3c344f22909d4a6733ae0247bce98b/system.slice/hadoop-hdfs-journalnode.service
└─373
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.432.b06-2.el8.x86_64/bin/java
-Dproc_journalnode -Djava.net.preferIPv4Stack=true
-Dyarn.log.dir=/var/log/hadoop-hdfs -Dyarn.log.file=hadoop-hdfs-journaln>
Jan 09 08:40:11 8dc2c59a2af6 systemd[1]: hadoop-hdfs-journalnode.service:
User lookup succeeded: uid=996 gid=993
Jan 09 08:40:11 8dc2c59a2af6 systemd[322]: hadoop-hdfs-journalnode.service:
Executing: /usr/bin/hdfs --config /etc/hadoop/conf --daemon start journalnode
Jan 09 08:40:10 8dc2c59a2af6 systemd[1]: hadoop-hdfs-journalnode.service:
Child 322 belongs to hadoop-hdfs-journalnode.service.
Jan 09 08:40:10 8dc2c59a2af6 systemd[1]: hadoop-hdfs-journalnode.service:
Control process exited, code=exited status=0
Jan 09 08:40:10 8dc2c59a2af6 systemd[1]: hadoop-hdfs-journalnode.service:
Got final SIGCHLD for state start.
Jan 09 08:40:10 8dc2c59a2af6 systemd[1]: hadoop-hdfs-journalnode.service:
Main PID guessed: 373
Jan 09 08:40:10 8dc2c59a2af6 systemd[1]: hadoop-hdfs-journalnode.service:
Changed start -> running
Jan 09 08:40:10 8dc2c59a2af6 systemd[1]: hadoop-hdfs-journalnode.service:
Job hadoop-hdfs-journalnode.service/start finished, result=done
Jan 09 08:40:10 8dc2c59a2af6 systemd[1]: Started Hadoop Journalnode.
Jan 09 08:40:10 8dc2c59a2af6 systemd[1]: hadoop-hdfs-journalnode.service:
Failed to send unit change signal for hadoop-hdfs-journalnode.service:
Connection reset by peer
● hadoop-hdfs-secondarynamenode.service - Hadoop Secondary NameNode
Loaded: loaded
(/usr/lib/systemd/system/hadoop-hdfs-secondarynamenode.service; static; vendor
preset: disabled)
Active: active (running) since Thu 2025-01-09 08:40:12 UTC; 5min ago
Docs: https://hadoop.apache.org/
Process: 454 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
start secondarynamenode (code=exited, status=0/SUCCESS)
Main PID: 505 (java)
Tasks: 37 (limit: 98358)
Memory: 340.7M
CGroup:
/docker/8dc2c59a2af63935747266a2157e516fee3c344f22909d4a6733ae0247bce98b/system.slice/hadoop-hdfs-secondarynamenode.service
└─505
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.432.b06-2.el8.x86_64/bin/java
-Dproc_secondarynamenode -Djava.net.preferIPv4Stack=true
-Dhdfs.audit.logger=INFO,NullAppender -Dcom.sun.management.jmxremo>
Jan 09 08:40:10 8dc2c59a2af6 systemd[454]:
hadoop-hdfs-secondarynamenode.service: Executing: /usr/bin/hdfs --config
/etc/hadoop/conf --daemon start secondarynamenode
Jan 09 08:40:10 8dc2c59a2af6 hdfs[454]: WARNING:
HADOOP_SECONDARYNAMENODE_OPTS has been replaced by HDFS_SECONDARYNAMENODE_OPTS.
Using value of HADOOP_SECONDARYNAMENODE_OPTS.
Jan 09 08:40:12 8dc2c59a2af6 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Child 454 belongs to
hadoop-hdfs-secondarynamenode.service.
Jan 09 08:40:12 8dc2c59a2af6 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Control process exited, code=exited
status=0
Jan 09 08:40:12 8dc2c59a2af6 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Got final SIGCHLD for state start.
Jan 09 08:40:12 8dc2c59a2af6 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Main PID guessed: 505
Jan 09 08:40:12 8dc2c59a2af6 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Changed start -> running
Jan 09 08:40:12 8dc2c59a2af6 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Job
hadoop-hdfs-secondarynamenode.service/start finished, result=done
Jan 09 08:40:12 8dc2c59a2af6 systemd[1]: Started Hadoop Secondary NameNode.
Jan 09 08:40:12 8dc2c59a2af6 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Failed to send unit change signal for
hadoop-hdfs-secondarynamenode.service: Connection reset by peer
● hadoop-hdfs-zkfc.service - Hadoop ZKFC
Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-zkfc.service; static;
vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2025-01-09 08:40:14 UTC;
5min ago
Docs: https://hadoop.apache.org/
Process: 551 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
start zkfc (code=exited, status=1/FAILURE)
Jan 09 08:40:12 8dc2c59a2af6 systemd[551]: hadoop-hdfs-zkfc.service:
Executing: /usr/bin/hdfs --config /etc/hadoop/conf --daemon start zkfc
Jan 09 08:40:13 8dc2c59a2af6 hdfs[551]: ERROR: Cannot set priority of zkfc
process 602
Jan 09 08:40:14 8dc2c59a2af6 systemd[1]: hadoop-hdfs-zkfc.service: Child 551
belongs to hadoop-hdfs-zkfc.service.
Jan 09 08:40:14 8dc2c59a2af6 systemd[1]: hadoop-hdfs-zkfc.service: Control
process exited, code=exited status=1
Jan 09 08:40:14 8dc2c59a2af6 systemd[1]: hadoop-hdfs-zkfc.service: Got final
SIGCHLD for state start.
Jan 09 08:40:14 8dc2c59a2af6 systemd[1]: hadoop-hdfs-zkfc.service: Failed
with result 'exit-code'.
Jan 09 08:40:14 8dc2c59a2af6 systemd[1]: hadoop-hdfs-zkfc.service: Changed
start -> failed
Jan 09 08:40:14 8dc2c59a2af6 systemd[1]: hadoop-hdfs-zkfc.service: Job
hadoop-hdfs-zkfc.service/start finished, result=failed
Jan 09 08:40:14 8dc2c59a2af6 systemd[1]: Failed to start Hadoop ZKFC.
Jan 09 08:40:14 8dc2c59a2af6 systemd[1]: hadoop-hdfs-zkfc.service: Unit
entered failed state.
● hadoop-hdfs-dfsrouter.service - Hadoop dfsrouter
Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-dfsrouter.service;
static; vendor preset: disabled)
Active: active (running) since Thu 2025-01-09 08:40:17 UTC; 4min 59s ago
Docs: https://hadoop.apache.org/
Process: 636 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
start dfsrouter (code=exited, status=0/SUCCESS)
Main PID: 687 (java)
Tasks: 69 (limit: 98358)
Memory: 296.8M
CGroup:
/docker/8dc2c59a2af63935747266a2157e516fee3c344f22909d4a6733ae0247bce98b/system.slice/hadoop-hdfs-dfsrouter.service
└─687
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.432.b06-2.el8.x86_64/bin/java
-Dproc_dfsrouter -Djava.net.preferIPv4Stack=true
-Dyarn.log.dir=/var/log/hadoop-hdfs -Dyarn.log.file=hadoop-hdfs-dfsrouter->
Jan 09 08:40:14 8dc2c59a2af6 systemd[1]: hadoop-hdfs-dfsrouter.service: User
lookup succeeded: uid=996 gid=993
Jan 09 08:40:14 8dc2c59a2af6 systemd[636]: hadoop-hdfs-dfsrouter.service:
Executing: /usr/bin/hdfs --config /etc/hadoop/conf --daemon start dfsrouter
Jan 09 08:40:17 8dc2c59a2af6 systemd[1]: hadoop-hdfs-dfsrouter.service:
Child 636 belongs to hadoop-hdfs-dfsrouter.service.
Jan 09 08:40:17 8dc2c59a2af6 systemd[1]: hadoop-hdfs-dfsrouter.service:
Control process exited, code=exited status=0
Jan 09 08:40:17 8dc2c59a2af6 systemd[1]: hadoop-hdfs-dfsrouter.service: Got
final SIGCHLD for state start.
Jan 09 08:40:17 8dc2c59a2af6 systemd[1]: hadoop-hdfs-dfsrouter.service: Main
PID guessed: 687
Jan 09 08:40:17 8dc2c59a2af6 systemd[1]: hadoop-hdfs-dfsrouter.service:
Changed start -> running
Jan 09 08:40:17 8dc2c59a2af6 systemd[1]: hadoop-hdfs-dfsrouter.service: Job
hadoop-hdfs-dfsrouter.service/start finished, result=done
Jan 09 08:40:17 8dc2c59a2af6 systemd[1]: Started Hadoop dfsrouter.
Jan 09 08:40:17 8dc2c59a2af6 systemd[1]: hadoop-hdfs-dfsrouter.service:
Failed to send unit change signal for hadoop-hdfs-dfsrouter.service: Connection
reset by peer
```
Check that the services can be stopped (`systemctl stop`)
```
[root@8dc2c59a2af6 /]# for service_name in namenode datanode journalnode secondarynamenode zkfc dfsrouter; do systemctl stop hadoop-hdfs-$service_name; done
[root@8dc2c59a2af6 /]# for service_name in namenode datanode journalnode secondarynamenode zkfc dfsrouter; do systemctl status hadoop-hdfs-$service_name; done
● hadoop-hdfs-namenode.service - Hadoop NameNode
Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-namenode.service;
static; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2025-01-09 08:46:57 UTC; 7s
ago
Docs: https://hadoop.apache.org/
Process: 841 ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
stop namenode (code=exited, status=0/SUCCESS)
Process: 48 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
start namenode (code=exited, status=0/SUCCESS)
Main PID: 101 (code=exited, status=143)
Jan 09 08:46:57 8dc2c59a2af6 systemd[1]: hadoop-hdfs-namenode.service: Child
101 belongs to hadoop-hdfs-namenode.service.
Jan 09 08:46:57 8dc2c59a2af6 systemd[1]: hadoop-hdfs-namenode.service: Main
process exited, code=exited, status=143/n/a
Jan 09 08:46:57 8dc2c59a2af6 systemd[1]: hadoop-hdfs-namenode.service: Child
841 belongs to hadoop-hdfs-namenode.service.
Jan 09 08:46:57 8dc2c59a2af6 systemd[1]: hadoop-hdfs-namenode.service:
Control process exited, code=exited status=0
Jan 09 08:46:57 8dc2c59a2af6 systemd[1]: hadoop-hdfs-namenode.service: Got
final SIGCHLD for state stop.
Jan 09 08:46:57 8dc2c59a2af6 systemd[1]: hadoop-hdfs-namenode.service:
Failed with result 'exit-code'.
Jan 09 08:46:57 8dc2c59a2af6 systemd[1]: hadoop-hdfs-namenode.service:
Changed stop -> failed
Jan 09 08:46:57 8dc2c59a2af6 systemd[1]: hadoop-hdfs-namenode.service: Job
hadoop-hdfs-namenode.service/stop finished, result=done
Jan 09 08:46:57 8dc2c59a2af6 systemd[1]: Stopped Hadoop NameNode.
Jan 09 08:46:57 8dc2c59a2af6 systemd[1]: hadoop-hdfs-namenode.service: Unit
entered failed state.
● hadoop-hdfs-datanode.service - Hadoop DataNode
Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-datanode.service;
static; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2025-01-09 08:46:59 UTC; 5s
ago
Docs: https://hadoop.apache.org/
Process: 900 ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
stop datanode (code=exited, status=0/SUCCESS)
Process: 151 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
start datanode (code=exited, status=0/SUCCESS)
Main PID: 206 (code=exited, status=143)
Jan 09 08:46:58 8dc2c59a2af6 systemd[1]: hadoop-hdfs-datanode.service: Child
206 belongs to hadoop-hdfs-datanode.service.
Jan 09 08:46:58 8dc2c59a2af6 systemd[1]: hadoop-hdfs-datanode.service: Main
process exited, code=exited, status=143/n/a
Jan 09 08:46:59 8dc2c59a2af6 systemd[1]: hadoop-hdfs-datanode.service: Child
900 belongs to hadoop-hdfs-datanode.service.
Jan 09 08:46:59 8dc2c59a2af6 systemd[1]: hadoop-hdfs-datanode.service:
Control process exited, code=exited status=0
Jan 09 08:46:59 8dc2c59a2af6 systemd[1]: hadoop-hdfs-datanode.service: Got
final SIGCHLD for state stop.
Jan 09 08:46:59 8dc2c59a2af6 systemd[1]: hadoop-hdfs-datanode.service:
Failed with result 'exit-code'.
Jan 09 08:46:59 8dc2c59a2af6 systemd[1]: hadoop-hdfs-datanode.service:
Changed stop -> failed
Jan 09 08:46:59 8dc2c59a2af6 systemd[1]: hadoop-hdfs-datanode.service: Job
hadoop-hdfs-datanode.service/stop finished, result=done
Jan 09 08:46:59 8dc2c59a2af6 systemd[1]: Stopped Hadoop DataNode.
Jan 09 08:46:59 8dc2c59a2af6 systemd[1]: hadoop-hdfs-datanode.service: Unit
entered failed state.
● hadoop-hdfs-journalnode.service - Hadoop Journalnode
Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-journalnode.service;
static; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2025-01-09 08:47:00 UTC; 4s
ago
Docs: https://hadoop.apache.org/
Process: 961 ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
stop journalnode (code=exited, status=0/SUCCESS)
Process: 322 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
start journalnode (code=exited, status=0/SUCCESS)
Main PID: 373 (code=exited, status=143)
Jan 09 08:46:59 8dc2c59a2af6 systemd[1]: hadoop-hdfs-journalnode.service:
Child 373 belongs to hadoop-hdfs-journalnode.service.
Jan 09 08:46:59 8dc2c59a2af6 systemd[1]: hadoop-hdfs-journalnode.service:
Main process exited, code=exited, status=143/n/a
Jan 09 08:47:00 8dc2c59a2af6 systemd[1]: hadoop-hdfs-journalnode.service:
Child 961 belongs to hadoop-hdfs-journalnode.service.
Jan 09 08:47:00 8dc2c59a2af6 systemd[1]: hadoop-hdfs-journalnode.service:
Control process exited, code=exited status=0
Jan 09 08:47:00 8dc2c59a2af6 systemd[1]: hadoop-hdfs-journalnode.service:
Got final SIGCHLD for state stop.
Jan 09 08:47:00 8dc2c59a2af6 systemd[1]: hadoop-hdfs-journalnode.service:
Failed with result 'exit-code'.
Jan 09 08:47:00 8dc2c59a2af6 systemd[1]: hadoop-hdfs-journalnode.service:
Changed stop -> failed
Jan 09 08:47:00 8dc2c59a2af6 systemd[1]: hadoop-hdfs-journalnode.service:
Job hadoop-hdfs-journalnode.service/stop finished, result=done
Jan 09 08:47:00 8dc2c59a2af6 systemd[1]: Stopped Hadoop Journalnode.
Jan 09 08:47:00 8dc2c59a2af6 systemd[1]: hadoop-hdfs-journalnode.service:
Unit entered failed state.
● hadoop-hdfs-secondarynamenode.service - Hadoop Secondary NameNode
Loaded: loaded
(/usr/lib/systemd/system/hadoop-hdfs-secondarynamenode.service; static; vendor
preset: disabled)
Active: failed (Result: exit-code) since Thu 2025-01-09 08:46:58 UTC; 6s
ago
Docs: https://hadoop.apache.org/
Process: 1020 ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
stop secondarynamenode (code=exited, status=0/SUCCESS)
Process: 454 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
start secondarynamenode (code=exited, status=0/SUCCESS)
Main PID: 505 (code=exited, status=143)
Jan 09 08:47:00 8dc2c59a2af6 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Child 505 belongs to
hadoop-hdfs-secondarynamenode.service.
Jan 09 08:47:00 8dc2c59a2af6 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Main process exited, code=exited,
status=143/n/a
Jan 09 08:46:58 8dc2c59a2af6 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Child 1020 belongs to
hadoop-hdfs-secondarynamenode.service.
Jan 09 08:46:58 8dc2c59a2af6 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Control process exited, code=exited
status=0
Jan 09 08:46:58 8dc2c59a2af6 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Got final SIGCHLD for state stop.
Jan 09 08:46:58 8dc2c59a2af6 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Failed with result 'exit-code'.
Jan 09 08:46:58 8dc2c59a2af6 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Changed stop -> failed
Jan 09 08:46:58 8dc2c59a2af6 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Job
hadoop-hdfs-secondarynamenode.service/stop finished, result=done
Jan 09 08:46:58 8dc2c59a2af6 systemd[1]: Stopped Hadoop Secondary NameNode.
Jan 09 08:46:58 8dc2c59a2af6 systemd[1]:
hadoop-hdfs-secondarynamenode.service: Unit entered failed state.
● hadoop-hdfs-zkfc.service - Hadoop ZKFC
Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-zkfc.service; static;
vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2025-01-09 08:40:14 UTC;
6min ago
Docs: https://hadoop.apache.org/
Process: 551 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
start zkfc (code=exited, status=1/FAILURE)
Jan 09 08:40:14 8dc2c59a2af6 systemd[1]: hadoop-hdfs-zkfc.service: Got final
SIGCHLD for state start.
Jan 09 08:40:14 8dc2c59a2af6 systemd[1]: hadoop-hdfs-zkfc.service: Failed
with result 'exit-code'.
Jan 09 08:40:14 8dc2c59a2af6 systemd[1]: hadoop-hdfs-zkfc.service: Changed
start -> failed
Jan 09 08:40:14 8dc2c59a2af6 systemd[1]: hadoop-hdfs-zkfc.service: Job
hadoop-hdfs-zkfc.service/start finished, result=failed
Jan 09 08:40:14 8dc2c59a2af6 systemd[1]: Failed to start Hadoop ZKFC.
Jan 09 08:40:14 8dc2c59a2af6 systemd[1]: hadoop-hdfs-zkfc.service: Unit
entered failed state.
Jan 09 08:46:58 8dc2c59a2af6 systemd[1]: hadoop-hdfs-zkfc.service: Trying to
enqueue job hadoop-hdfs-zkfc.service/stop/replace
Jan 09 08:46:58 8dc2c59a2af6 systemd[1]: hadoop-hdfs-zkfc.service: Installed
new job hadoop-hdfs-zkfc.service/stop as 279
Jan 09 08:46:58 8dc2c59a2af6 systemd[1]: hadoop-hdfs-zkfc.service: Enqueued
job hadoop-hdfs-zkfc.service/stop as 279
Jan 09 08:46:58 8dc2c59a2af6 systemd[1]: hadoop-hdfs-zkfc.service: Job
hadoop-hdfs-zkfc.service/stop finished, result=done
● hadoop-hdfs-dfsrouter.service - Hadoop dfsrouter
Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-dfsrouter.service;
static; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2025-01-09 08:46:59 UTC; 5s
ago
Docs: https://hadoop.apache.org/
Process: 1083 ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
stop dfsrouter (code=exited, status=0/SUCCESS)
Process: 636 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon
start dfsrouter (code=exited, status=0/SUCCESS)
Main PID: 687 (code=exited, status=143)
Jan 09 08:46:58 8dc2c59a2af6 systemd[1]: hadoop-hdfs-dfsrouter.service:
Child 687 belongs to hadoop-hdfs-dfsrouter.service.
Jan 09 08:46:58 8dc2c59a2af6 systemd[1]: hadoop-hdfs-dfsrouter.service: Main
process exited, code=exited, status=143/n/a
Jan 09 08:46:59 8dc2c59a2af6 systemd[1]: hadoop-hdfs-dfsrouter.service:
Child 1083 belongs to hadoop-hdfs-dfsrouter.service.
Jan 09 08:46:59 8dc2c59a2af6 systemd[1]: hadoop-hdfs-dfsrouter.service:
Control process exited, code=exited status=0
Jan 09 08:46:59 8dc2c59a2af6 systemd[1]: hadoop-hdfs-dfsrouter.service: Got
final SIGCHLD for state stop.
Jan 09 08:46:59 8dc2c59a2af6 systemd[1]: hadoop-hdfs-dfsrouter.service:
Failed with result 'exit-code'.
Jan 09 08:46:59 8dc2c59a2af6 systemd[1]: hadoop-hdfs-dfsrouter.service:
Changed stop -> failed
Jan 09 08:46:59 8dc2c59a2af6 systemd[1]: hadoop-hdfs-dfsrouter.service: Job
hadoop-hdfs-dfsrouter.service/stop finished, result=done
Jan 09 08:46:59 8dc2c59a2af6 systemd[1]: Stopped Hadoop dfsrouter.
Jan 09 08:46:59 8dc2c59a2af6 systemd[1]: hadoop-hdfs-dfsrouter.service: Unit
entered failed state.
```
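One side note on the output above: after `systemctl stop`, every unit ends up `failed (Result: exit-code)` because the JVM exits with status 143 (128 + SIGTERM), which is not covered by `SuccessExitStatus=5 6`. If that is considered noise, a drop-in like the following sketch would whitelist it (shown for the namenode only; the same would apply to the other units):
```
# Sketch: treat the JVM's SIGTERM exit code (143) as a clean shutdown so that
# "systemctl stop" does not leave the unit in a failed state.
mkdir -p /etc/systemd/system/hadoop-hdfs-namenode.service.d
cat <<'EOF' > /etc/systemd/system/hadoop-hdfs-namenode.service.d/exit-status.conf
[Service]
SuccessExitStatus=5 6 143
EOF
systemctl daemon-reload
```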
</details>