Any update on the HTTP error? The issue still remains, although Hadoop itself is functioning properly.

Thanks


Adarsh Sharma wrote:
Thanks Joey, I solved the safe mode problem by manually deleting some files.

bin/hadoop dfsadmin -report now shows both nodes, and safe mode goes OFF after some time.
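In case it helps anyone else: fsck can list the files with missing or corrupt blocks, which is roughly how I found what to delete (the -delete option should remove the corrupt files outright, if I read the help correctly):

bin/hadoop fsck /
bin/hadoop fsck / -delete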

But I still have no idea how to solve the error below:

Why does my web UI show:

HTTP ERROR: 404

/dfshealth.jsp

RequestURI=/dfshealth.jsp

Powered by Jetty:// <http://jetty.mortbay.org/>



Any views on it? Please help.

Thanks




Joey Echeverria wrote:
It looks like both datanodes are trying to serve data out of the same directory. Is there any chance that both datanodes are using the same NFS mount for dfs.data.dir?
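For example, on each datanode conf/hdfs-site.xml should point dfs.data.dir at storage local to that node, not a shared mount; the path here is just an illustration:

<property>
  <name>dfs.data.dir</name>
  <value>/local/disk/hadoop/dfs/data</value>
</property>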

If not, what I would do is delete the data from ${dfs.data.dir} and then re-format the namenode. You'll lose all of your data; hopefully that's not a problem at this time.
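Roughly something like this, where the paths are placeholders for whatever your dfs.data.dir actually points to:

bin/stop-all.sh                    # stop the cluster first
rm -rf /path/to/dfs.data.dir/*     # on each datanode
bin/hadoop namenode -format        # on the namenode only
bin/start-all.sh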
-Joey


On Jul 8, 2011, at 0:40, Adarsh Sharma <adarsh.sha...@orkash.com> wrote:

Thanks, but I still don't understand the issue.

My namenode repeatedly shows these logs:

2011-07-08 09:36:31,365 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop,hadoop ip=/MAster-IP cmd=listStatus src=/home/hadoop/system dst=null perm=null
2011-07-08 09:36:31,367 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000, call delete(/home/hadoop/system, true) from Master-IP:53593: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /home/hadoop/system. Name node is in safe mode. The ratio of reported blocks 0.8293 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /home/hadoop/system. Name node is in safe mode. The ratio of reported blocks 0.8293 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1700)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1680)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:517)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:396)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
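I can watch the safe mode state from the command line with:

bin/hadoop dfsadmin -safemode get

If I read the message right, the 0.9990 threshold comes from the dfs.safemode.threshold.pct setting.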


And one of my datanodes shows the logs below:

2011-07-08 09:49:56,967 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand action: DNA_REGISTER
2011-07-08 09:49:59,962 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is shutting down: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.UnregisteredDatanodeException: Data node 192.168.0.209:50010 is attempting to report storage ID DS-218695497-SLave_IP-50010-1303978807280. Node SLave_IP:50010 is expected to serve this storage.
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDatanode(FSNamesystem.java:3920)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.processReport(FSNamesystem.java:2891)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.blockReport(NameNode.java:715)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      at java.lang.reflect.Method.invoke(Method.java:597)
      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:396)
      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

      at org.apache.hadoop.ipc.Client.call(Client.java:740)
      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
      at $Proxy4.blockReport(Unknown Source)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:756)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1186)
      at java.lang.Thread.run(Thread.java:619)

2011-07-08 09:50:00,072 INFO org.apache.hadoop.ipc.Server: Stopping server on 50020
2011-07-08 09:50:00,072 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: exiting
2011-07-08 09:50:00,074 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: exiting
2011-07-08 09:50:00,074 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: exiting
2011-07-08 09:50:00,076 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 50020
2011-07-08 09:50:00,077 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2011-07-08 09:50:00,077 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
2011-07-08 09:50:00,078 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(SLave_IP:50010, storageID=DS-218695497-192.168.0.209-50010-1303978807280, infoPort=50075, ipcPort=50020):DataXceiveServer: java.nio.channels.AsynchronousCloseException
      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
      at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
      at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
      at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
      at java.lang.Thread.run(Thread.java:619)

2011-07-08 09:50:00,394 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Exiting DataBlockScanner thread.
2011-07-08 09:50:01,079 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
2011-07-08 09:50:01,183 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.0.209:50010, storageID=DS-218695497-192.168.0.209-50010-1303978807280, infoPort=50075, ipcPort=50020):Finishing DataNode in: FSDataset{dirpath='/hdd1-1/data/current'}
2011-07-08 09:50:01,183 INFO org.apache.hadoop.ipc.Server: Stopping server on 50020
2011-07-08 09:50:01,183 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
2011-07-08 09:50:01,185 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at ws14-suru-lin/

Also my dfsadmin report shows:

bash-3.2$ bin/hadoop dfsadmin -report
Safe mode is ON
Configured Capacity: 59069984768 (55.01 GB)
Present Capacity: 46471880704 (43.28 GB)
DFS Remaining: 45169745920 (42.07 GB)
DFS Used: 1302134784 (1.21 GB)
DFS Used%: 2.8%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Name: IP:50010
Decommission Status : Normal
Configured Capacity: 59069984768 (55.01 GB)
DFS Used: 1302134784 (1.21 GB)
Non DFS Used: 12598104064 (11.73 GB)
DFS Remaining: 45169745920(42.07 GB)
DFS Used%: 2.2%
DFS Remaining%: 76.47%
Last contact: Fri Jul 08 10:03:40 IST 2011

But I have 2 datanodes. Safe mode has been on for the last hour. I know the command to leave it manually, but I think the problem arises because one of my datanodes does not start up. How can I solve this problem?

Also, for the

HTTP ERROR: 404

/dfshealth.jsp

RequestURI=/dfshealth.jsp

Powered by Jetty:// <http://jetty.mortbay.org/>

error,

I manually checked with the command below on all nodes. On the master:

bash-3.2$ /usr/java/jdk1.6.0_18/bin/jps
7548 SecondaryNameNode
7395 NameNode
7628 JobTracker
7713 Jps

And also on the slaves:

[root@ws33-shiv-lin ~]# /usr/java/jdk1.6.0_20/bin/jps
5696 DataNode
5941 Jps
5818 TaskTracker




Thanks



jeff.schm...@shell.com wrote:
Adarsh,

You could also run this from the command line:

[root@xxxxxxx bin]# ./hadoop dfsadmin -report
Configured Capacity: 1151948095488 (1.05 TB)
Present Capacity: 1059350446080 (986.6 GB)
DFS Remaining: 1056175992832 (983.64 GB)
DFS Used: 3174453248 (2.96 GB)
DFS Used%: 0.3%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 5 (5 total, 0 dead)
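If the report looks wrong, you can also get a block-level health summary with fsck:

[root@xxxxxxx bin]# ./hadoop fsck /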




-----Original Message-----
From: dhru...@gmail.com [mailto:dhru...@gmail.com] On Behalf Of Dhruv Kumar
Sent: Thursday, July 07, 2011 10:01 AM
To: common-user@hadoop.apache.org
Subject: Re: HTTP Error

1) Check with jps to see if all services are functioning.

2) Have you tried appending dfshealth.jsp at the end of the URL, as the 404 says?

Try using this:
http://localhost:50070/dfshealth.jsp
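You can also probe it from a shell on the master, assuming the default web port of 50070:

curl -i http://localhost:50070/dfshealth.jsp

If the daemons are up but this still returns a 404 from Jetty, that would suggest the namenode's web application itself didn't deploy (for example, a missing or misplaced webapps directory on the classpath).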



On Thu, Jul 7, 2011 at 7:13 AM, Adarsh Sharma <adarsh.sha...@orkash.com> wrote:

Dear all,

Today I am stuck with a strange problem in the running Hadoop cluster.
After starting Hadoop with bin/start-all.sh, all nodes are started. But when I check through the web UI (Master-IP:50070), it shows:


HTTP ERROR: 404

/dfshealth.jsp

RequestURI=/dfshealth.jsp

Powered by Jetty:// <http://jetty.mortbay.org/>

I checked from the command line that Hadoop is not able to get out of safe mode.

I know the command to leave safe mode manually:

bin/hadoop dfsadmin -safemode leave
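and I believe there is also a blocking variant that waits until safe mode clears on its own:

bin/hadoop dfsadmin -safemode wait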

But how can I make Hadoop run properly, and what are the reasons for this error?

Thanks





