Re: Failed to repeat the Quickstart guide for Pseudo-distributed operation

2008-07-08 Thread Shengkai Zhu
After formatting the namenode a second time, your datanodes and namenode
may be left in an inconsistent state, namely with incompatible namespace IDs.
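
One way to confirm this is to compare the namespaceID recorded on each side. This is a minimal sketch, assuming the default hadoop.tmp.dir layout under /tmp/hadoop-root; the paths are guesses, so adjust them to your configuration:

```shell
# Hypothetical default paths: a format gives the namenode a fresh
# namespaceID, while the datanode keeps the one from the old format.
NAME_VERSION=/tmp/hadoop-root/dfs/name/current/VERSION
DATA_VERSION=/tmp/hadoop-root/dfs/data/current/VERSION

nn_id=$(grep '^namespaceID=' "$NAME_VERSION" 2>/dev/null || true)
dn_id=$(grep '^namespaceID=' "$DATA_VERSION" 2>/dev/null || true)

# If both files exist and the IDs differ, the datanode will refuse
# to register with the newly formatted namenode.
if [ -n "$nn_id" ] && [ -n "$dn_id" ] && [ "$nn_id" != "$dn_id" ]; then
  echo "Mismatch: namenode has $nn_id, datanode has $dn_id"
fi
```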

On 7/2/08, Xuan Dzung Doan [EMAIL PROTECTED] wrote:

 I was following the Hadoop 0.16.4 quickstart guide exactly to run a
 pseudo-distributed operation on my Fedora 8 machine. The first time I did
 it, everything ran successfully (formatted a new HDFS, started the Hadoop
 daemons, then ran the grep example). A moment later, I decided to redo
 everything. Reformatting the HDFS and starting the daemons seemed to go
 fine; but on the homepage of the namenode's web interface (
 http://localhost:50070/), when I clicked "Browse the filesystem", it said
 the following:


 HTTP ERROR: 404
 /browseDirectory.jsp
 RequestURI=/browseDirectory.jsp
 Then when I tried to copy files to the HDFS to re-run the grep example, I
 couldn't; I got the following long list of exceptions (it looks like some
 replication or block-allocation issue):

 # bin/hadoop dfs -put conf input

 [long stack trace snipped; it is identical to the trace in the original
 message quoted in full below]

Re: Failed to repeat the Quickstart guide for Pseudo-distributed operation

2008-07-08 Thread Arun C Murthy

# bin/hadoop dfs -put conf input

08/06/29 09:38:42 INFO dfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/input/hadoop-env.sh could only be replicated to 0 nodes, instead of 1



Looks like your datanode didn't come up, anything in the logs?
http://wiki.apache.org/hadoop/Help
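
To check, a hedged sketch (run from the Hadoop install directory; the log file names include your user and host, so the wildcard below is a guess, adjust it to your setup):

```shell
# Show the tail of the datanode log; file names vary by user and host.
tail -n 50 logs/hadoop-*-datanode-*.log 2>/dev/null || true

# After a second namenode format, a datanode that fails to start
# typically logs a line mentioning incompatible namespace IDs.
grep -h "Incompatible namespaceIDs" logs/hadoop-*-datanode-*.log 2>/dev/null || true
```

If that grep matches, the datanode's stored namespace ID no longer matches the freshly formatted namenode, which would explain both the 404 and the replication failure.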

Arun




Re: Failed to repeat the Quickstart guide for Pseudo-distributed operation

2008-07-08 Thread Deepak Diwakar
You need to delete the hadoop-root directory that DFS created. Hadoop
usually creates this directory under /tmp/. After deleting it, just follow
the instructions once again and it should work.
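
The steps above might look like the following. This is only a sketch, assuming the daemons run as root from the Hadoop install directory and that hadoop.tmp.dir is the default /tmp/hadoop-root; note that the format step erases all HDFS data:

```shell
bin/stop-all.sh              # stop the HDFS and MapReduce daemons first
rm -rf /tmp/hadoop-root      # wipe the stale DFS state (default hadoop.tmp.dir for root)
bin/hadoop namenode -format  # re-create an empty filesystem
bin/start-all.sh             # restart the daemons against the clean state
```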

2008/7/9 Arun C Murthy [EMAIL PROTECTED]:

 # bin/hadoop dfs -put conf input

 08/06/29 09:38:42 INFO dfs.DFSClient:
 org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
 /user/root/input/hadoop-env.sh could only be replicated to 0 nodes, instead
 of 1



 Looks like your datanode didn't come up, anything in the logs?
 http://wiki.apache.org/hadoop/Help

 Arun





-- 
- Deepak Diwakar,
Associate Software Eng.,
Pubmatic, pune
Contact: +919960930405


Failed to repeat the Quickstart guide for Pseudo-distributed operation

2008-07-01 Thread Xuan Dzung Doan
I was following the Hadoop 0.16.4 quickstart guide exactly to run a
pseudo-distributed operation on my Fedora 8 machine. The first time I did it,
everything ran successfully (formatted a new HDFS, started the Hadoop daemons,
then ran the grep example). A moment later, I decided to redo everything.
Reformatting the HDFS and starting the daemons seemed to go fine; but on the
homepage of the namenode's web interface (http://localhost:50070/), when I
clicked "Browse the filesystem", it said the following:


HTTP ERROR: 404
/browseDirectory.jsp
RequestURI=/browseDirectory.jsp
Then when I tried to copy files to the HDFS to re-run the grep example, I
couldn't; I got the following long list of exceptions (it looks like some
replication or block-allocation issue):

# bin/hadoop dfs -put conf input

08/06/29 09:38:42 INFO dfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/input/hadoop-env.sh could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1127)
        at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:312)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:901)

        at org.apache.hadoop.ipc.Client.call(Client.java:512)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
        at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2074)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:1967)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1500(DFSClient.java:1487)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1601)

08/06/29 09:38:42 WARN dfs.DFSClient: NotReplicatedYetException sleeping /user/root/input/hadoop-env.sh retries left 4
08/06/29 09:38:42 INFO dfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/input/hadoop-env.sh could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1127)
        at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:312)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:901)

        at org.apache.hadoop.ipc.Client.call(Client.java:512)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
        at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2074)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:1967)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1500(DFSClient.java:1487)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1601)

08/06/29 09:38:42 WARN dfs.DFSClient: NotReplicatedYetException sleeping