[jira] [Updated] (HDFS-13474) Unable to start Hadoop DataNodes

2018-04-19 Thread robbie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

robbie updated HDFS-13474:
--
Description: 
I am trying to follow the instructions in the Getting Started guide,

[http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html#YARN_on_Single_Node]

I have confirmed that I can `ssh localhost` without a password prompt. I have 
also run the following steps,
{quote}1. $ bin/hdfs namenode -format
 2. $ sbin/start-dfs.sh
{quote}
But I can't complete step 3, browsing to [http://localhost:9870/]. 
When I run `jps` from the terminal prompt, all I get back is,
{quote}14900 Jps
{quote}
I was expecting a list of my nodes.

In the logs I see two error messages towards the end,
{quote}2018-04-18 14:15:42,516 ERROR 
org.apache.hadoop.hdfs.server.datanode.DataNode: RECEIVED SIGNAL 15: SIGTERM
{quote}
{quote}2018-04-18 14:15:42,516 ERROR 
org.apache.hadoop.hdfs.server.datanode.DataNode: RECEIVED SIGNAL 1: SIGHUP
{quote}
{quote}2018-04-18 14:15:42,517 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
 /************************************************************
 SHUTDOWN_MSG: Shutting down DataNode at c0315/127.0.1.1
 ************************************************************/
{quote}
I will attach the full logs with this bug report.
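To triage the logs myself I have been scanning for the signal lines; a rough sketch of that (my own illustration — the sample lines are copied from my DataNode log, not read from a file):

```python
import re

# Match Hadoop daemon log lines such as
# "... DataNode: RECEIVED SIGNAL 15: SIGTERM".
SIGNAL_RE = re.compile(r"RECEIVED SIGNAL (\d+): (\w+)")

def find_signals(lines):
    """Return (signal_number, signal_name) for every signal line seen."""
    hits = []
    for line in lines:
        m = SIGNAL_RE.search(line)
        if m:
            hits.append((int(m.group(1)), m.group(2)))
    return hits

# Sample lines taken from the DataNode log quoted above.
sample = [
    "2018-04-18 14:15:42,516 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: RECEIVED SIGNAL 15: SIGTERM",
    "2018-04-18 14:15:42,516 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: RECEIVED SIGNAL 1: SIGHUP",
    "2018-04-18 14:15:42,517 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:",
]
print(find_signals(sample))  # [(15, 'SIGTERM'), (1, 'SIGHUP')]
```

So something external is sending the DataNode a SIGTERM and a SIGHUP before it shuts down.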

Can anyone help, even with ways to debug this, please?

Java version,
{quote}rcoll...@steelydan.com@c0315:~/temp/logs/hadoop$ java --version 
 java 9.0.4 
 Java(TM) SE Runtime Environment (build 9.0.4+11) 
 Java HotSpot(TM) 64-Bit Server VM (build 9.0.4+11, mixed mode)
{quote}
EDIT 1: following Brahma Reddy's suggestion I have repeated the steps with 
Java 8 as well and get the same error message.

Ubuntu version,
{quote}$ lsb_release -a
 No LSB modules are available.
 Distributor ID: neon
 Description: KDE neon User Edition 5.12
 Release: 16.04
 Codename: xenial
{quote}
I have tried running the command `bin/hdfs version`,
{quote}Hadoop 3.1.0 
 Source code repository [https://github.com/apache/hadoop] -r 
16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d 
 Compiled by centos on 2018-03-30T00:00Z 
 Compiled with protoc 2.5.0 
 From source with checksum 14182d20c972b3e2105580a1ad6990 
 This command was run using 
/home/steelydan.com/roycecollige/Apps/hadoop-3.1.0/share/hadoop/common/hadoop-common-3.1.0.jar
{quote}
When I try `bin/hdfs groups` it doesn't return, but gives me,
{quote}2018-04-18 15:33:34,590 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
{quote}
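For context, the retry policy named in that log line seems to mean "retry a fixed number of times, sleeping a fixed interval between attempts". A rough sketch of that behaviour (my own illustration, not Hadoop's actual code):

```python
import time

def retry_with_fixed_sleep(op, max_retries=10, sleep_ms=1000):
    """Call op() until it succeeds or max_retries attempts have failed,
    sleeping a fixed interval between attempts -- a rough model of the
    RetryUpToMaximumCountWithFixedSleep policy named in the log."""
    for attempt in range(max_retries):
        try:
            return op()
        except ConnectionError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the failure
            time.sleep(sleep_ms / 1000.0)

# Example: an operation that fails twice, then succeeds on attempt 3.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("Connection refused")
    return "connected"

print(retry_with_fixed_sleep(flaky, max_retries=10, sleep_ms=1))  # connected
```

With maxRetries=10 and sleepTime=1000 ms, the client keeps printing that line for about ten seconds before giving up, which matches what I see.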
When I try `$ bin/hdfs lsSnapshottableDir`,
{quote}lsSnapshottableDir: Call From c0315/127.0.1.1 to localhost:9000 failed 
on connection exception: java.net.ConnectException: Connection refused; For 
more details see: http://wiki.apache.org/hadoop/ConnectionRefused
{quote}
 
When I try `$ bin/hdfs classpath`,
{quote}/home/steelydan.com/roycecollige/Apps/hadoop-3.1.0/etc/hadoop:/home/steelydan.com/roycecollige/Apps/hadoop-3.1.0/share/hadoop/common/lib/*:/home/steelydan.com/roycecollige/Apps/hadoop-3.1.0/share/hadoop/common/*:/home/steelydan.com/roycecollige/Apps/hadoop-3.1.0/share/hadoop/hdfs:/home/steelydan.com/roycecollige/Apps/hadoop-3.1.0/share/hadoop/hdfs/lib/*:/home/steelydan.com/roycecollige/Apps/hadoop-3.1.0/share/hadoop/hdfs/*:/home/steelydan.com/roycecollige/Apps/hadoop-3.1.0/share/hadoop/mapreduce/*:/home/steelydan.com/roycecollige/Apps/hadoop-3.1.0/share/hadoop/yarn:/home/steelydan.com/roycecollige/Apps/hadoop-3.1.0/share/hadoop/yarn/lib/*:/home/steelydan.com/roycecollige/Apps/hadoop-3.1.0/share/hadoop/yarn/*
{quote}
core-site.xml
{quote}
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
{quote}
 
hdfs-site.xml
{quote}
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
{quote}
mapred-site.xml
{quote}
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
{quote}
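For reference, the `fs.defaultFS` value in core-site.xml is the endpoint every `bin/hdfs` client command above resolves to, so the "Connection refused" errors mean nothing is listening there. A small sketch of how the URI breaks down (my own illustration):

```python
from urllib.parse import urlparse

# fs.defaultFS from core-site.xml: the NameNode RPC endpoint that
# `bin/hdfs groups`, `lsSnapshottableDir`, etc. are trying to reach.
default_fs = "hdfs://localhost:9000"
uri = urlparse(default_fs)
print(uri.scheme, uri.hostname, uri.port)  # hdfs localhost 9000
```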
