Dear Wellington:

Many thanks for your help. Deeply appreciate it. It seems to work. I have tried shutting down and starting up twice and tested hdfs dfs -ls /, and it connects to HDFS. Once again, many thanks.

Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)-28474593 / 43526162 (voicemail)
On Monday, April 27, 2015 4:24 PM, Wellington Chevreuil <wellington.chevre...@gmail.com> wrote:

Because you are probably not defining "dfs.namenode.name.dir", the NN metadata directory is being created under /tmp and getting deleted once the process is restarted.

On 27 Apr 2015, at 11:50, Anand Murali <anand_vi...@yahoo.com> wrote:

Wellington:

I have done it at installation time. I shall try once again. However, I request you to look at this URL and maybe let me know your views/suggestions. BTW, if I uninstall and re-install, this error goes away for that session. Thanks.

Anand Murali

On Monday, April 27, 2015 4:16 PM, Wellington Chevreuil <wellington.chevre...@gmail.com> wrote:

Hello Anand,

This error means the NN could not find its metadata directory. You probably need to run the "hadoop namenode -format" command before trying to start HDFS.

…
2015-04-27 15:21:42,696 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anand_vihar/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
…

Running the mentioned command will create this directory. You may also want to define a different location for the NN metadata directory, by setting the "dfs.namenode.name.dir" property in hdfs-site.xml.

On 27 Apr 2015, at 11:02, Anand Murali <anand_vi...@yahoo.com> wrote:

Dear Wellington:

You were right. There is an error with respect to temp files. Find attached log file. Appreciate your help.

Thanks,
Anand Murali

On Monday, April 27, 2015 2:46 PM, Wellington Chevreuil <wellington.chevre...@gmail.com> wrote:

There might be some FATAL/ERROR/WARN or Exception messages in this log file that can explain why the NN process is dying.
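[For reference, Wellington's suggestion can be sketched in hdfs-site.xml like this. The /home/anand_vihar/hdfs/name path is only an illustrative choice, not something from the thread; any directory that persists across reboots and is writable by the Hadoop user works.]

```xml
<!-- hdfs-site.xml: keep NameNode metadata out of /tmp so it survives restarts. -->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <!-- Illustrative path; replace with any persistent local directory. -->
    <value>file:///home/anand_vihar/hdfs/name</value>
  </property>
</configuration>
```

[After setting this, the directory still has to be initialized once with "hadoop namenode -format" (which erases any existing HDFS metadata) before running start-dfs.sh again.]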
Can you paste some of the last lines on the log file?

On 27 Apr 2015, at 09:37, Susheel Kumar Gadalay <skgada...@gmail.com> wrote:
> jps listing is not showing the namenode daemon.
>
> Verify why the namenode is not up from the logs.
>
> On 4/27/15, Anand Murali <anand_vi...@yahoo.com> wrote:
>> Dear All:
>>
>> Please find below.
>>
>> anand_vihar@Latitude-E5540:~/hadoop-2.6.0/sbin$ start-dfs.sh
>> Starting namenodes on [localhost]
>> localhost: starting namenode, logging to
>> /home/anand_vihar/hadoop-2.6.0/logs/hadoop-anand_vihar-namenode-Latitude-E5540.out
>> localhost: starting datanode, logging to
>> /home/anand_vihar/hadoop-2.6.0/logs/hadoop-anand_vihar-datanode-Latitude-E5540.out
>> Starting secondary namenodes [0.0.0.0]
>> 0.0.0.0: starting secondarynamenode, logging to
>> /home/anand_vihar/hadoop-2.6.0/logs/hadoop-anand_vihar-secondarynamenode-Latitude-E5540.out
>> anand_vihar@Latitude-E5540:~/hadoop-2.6.0/sbin$ start-yarn.sh
>> starting yarn daemons
>> starting resourcemanager, logging to
>> /home/anand_vihar/hadoop-2.6.0/logs/yarn-anand_vihar-resourcemanager-Latitude-E5540.out
>> localhost: starting nodemanager, logging to
>> /home/anand_vihar/hadoop-2.6.0/logs/yarn-anand_vihar-nodemanager-Latitude-E5540.out
>> anand_vihar@Latitude-E5540:~/hadoop-2.6.0/sbin$ jps
>> 7464 Jps
>> 7147 NodeManager
>> 6863 SecondaryNameNode
>> 7017 ResourceManager
>> 6686 DataNode
>> anand_vihar@Latitude-E5540:~/hadoop-2.6.0/sbin$ hdfs dfs -ls
>> ls: Call From Latitude-E5540/127.0.1.1 to localhost:9000 failed on
>> connection exception: java.net.ConnectException: Connection refused; For
>> more details see: http://wiki.apache.org/hadoop/ConnectionRefused
>>
>> Has anybody encountered this error and fixed it? I have checked
>> http://wiki.apache.org/hadoop/ConnectionRefused but there is no information
>> on how to fix it. It is obviously a network error, but I am interested in
>> knowing if anybody has faced this error and fixed it.
>> Reply and help most appreciated.
>>
>> Thanks
>>
>> Regards,
>> Anand Murali

<hadoop-anand_vihar-namenode-Latitude-E5540.out>
<hadoop-anand_vihar-namenode-Latitude-E5540.log>
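[For anyone landing on this thread with the same "Connection refused" on localhost:9000: as the jps output above shows, it usually just means nothing is listening on the NameNode RPC port (the port number comes from fs.defaultFS in core-site.xml). A minimal bash sketch for checking the port, assuming bash's /dev/tcp support; port 9000 is taken from the error message in this thread:]

```shell
#!/usr/bin/env bash
# port_open HOST PORT -> succeeds only if a TCP connection can be opened.
port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

if port_open localhost 9000; then
  echo "Something is listening on 9000; the NameNode RPC port is reachable."
else
  echo "Connection refused on 9000: the NameNode is probably not running;"
  echo "check the hadoop-*-namenode-*.log file for FATAL/ERROR messages."
fi
```

[If the port is closed, jps will also be missing the NameNode entry, which matches what Susheel pointed out above.]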