Hi,

I just restarted the machine and it's working now.
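
(For anyone finding this thread later: "could only be replicated to 0
nodes, instead of 1" generally means no DataNode had registered with the
NameNode yet, which is why a clean restart can fix it. Assuming the
hadoop-0.20.1 layout from the quickstart, two quick checks are:)

```shell
# List the running Hadoop daemons; a healthy pseudo-distributed
# setup shows NameNode, DataNode, SecondaryNameNode, JobTracker
# and TaskTracker in addition to Jps itself.
jps

# Ask the NameNode how many DataNodes are live; a report showing
# "Datanodes available: 0" lines up with the replication error in
# the quoted logs.
bin/hadoop dfsadmin -report
```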

Cheers,

Donglai

Quoting Dong Zhang <[email protected]>:

>> Hi,
>>
>> I am new to Hadoop. I am following the tutorial on
>> http://hadoop.apache.org/common/docs/current/quickstart.html
>>
>> I have downloaded the hadoop-0.20.1.tar.gz package and
>> unpacked it.
>>
>> First, I tried the standalone operation and got the
>> right
>> result:
>>
>> [dlzh...@red hadoop-0.20.1]$ cat output/*
>> 1       dfsadmin
>>
>> After that, I tried the Pseudo-Distributed Operation,
>> following the tutorial exactly: I changed the
>> conf/core-site.xml, conf/hdfs-site.xml and
>> conf/mapred-site.xml files.
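>>
>> (For reference, the values from that quickstart page: conf/core-site.xml
>> carries a single fs.default.name property, conf/hdfs-site.xml sets
>> dfs.replication to 1, and conf/mapred-site.xml sets mapred.job.tracker
>> to localhost:9001. The core-site.xml fragment looks like:
>>
>> <configuration>
>>   <property>
>>     <name>fs.default.name</name>
>>     <value>hdfs://localhost:9000</value>
>>   </property>
>> </configuration>
>> )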
>> Then I formatted the distributed filesystem:
>>
>> [dlzh...@red hadoop-0.20.1]$ bin/hadoop namenode -format
>> 09/10/26 15:52:21 INFO namenode.NameNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting NameNode
>> STARTUP_MSG:   host = red/127.0.0.1
>> STARTUP_MSG:   args = [-format]
>> STARTUP_MSG:   version = 0.20.1
>> STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1-rc1
>> -r 810220; compiled by 'oom' on Tue Sep  1 20:55:56 UTC 2009
>> ************************************************************/
>> Re-format filesystem in /tmp/hadoop-dlzhang/dfs/name ?
>> (Y or N) Y
>> 09/10/26 15:52:34 INFO namenode.FSNamesystem:
>> fsOwner=dlzhang,dlzhang
>> 09/10/26 15:52:34 INFO namenode.FSNamesystem:
>> supergroup=supergroup
>> 09/10/26 15:52:34 INFO namenode.FSNamesystem:
>> isPermissionEnabled=true
>> 09/10/26 15:52:34 INFO common.Storage: Image file of
>> size 97 saved in 0 seconds.
>> 09/10/26 15:52:34 INFO common.Storage: Storage directory
>> /tmp/hadoop-dlzhang/dfs/name has been successfully formatted.
>> 09/10/26 15:52:34 INFO namenode.NameNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down NameNode at red/127.0.0.1
>> ************************************************************/
>>
>>
>> Start the hadoop daemons:
>> [dlzh...@red hadoop-0.20.1]$ bin/start-all.sh
>> starting namenode, logging to
>> /home/dlzhang/downloads/cloud/hadoop-0.20.1/bin/../logs/hadoop-dlzhang-namenode-red.out
>> localhost: starting datanode, logging to
>> /home/dlzhang/downloads/cloud/hadoop-0.20.1/bin/../logs/hadoop-dlzhang-datanode-red.out
>> localhost: starting secondarynamenode, logging to
>> /home/dlzhang/downloads/cloud/hadoop-0.20.1/bin/../logs/hadoop-dlzhang-secondarynamenode-red.out
>> starting jobtracker, logging to
>> /home/dlzhang/downloads/cloud/hadoop-0.20.1/bin/../logs/hadoop-dlzhang-jobtracker-red.out
>> localhost: starting tasktracker, logging to
>> /home/dlzhang/downloads/cloud/hadoop-0.20.1/bin/../logs/hadoop-dlzhang-tasktracker-red.out
>>
>> This looks fine; however, the log file
>> hadoop-dlzhang-namenode-red.log shows an exception:
>>
>> 2009-10-26 16:12:24,370 INFO org.apache.hadoop.ipc.Server:
>> IPC Server handler 0 on 9000, call
>> addBlock(/tmp/hadoop-dlzhang/mapred/system/jobtracker.info,
>> DFSClient_-1746251417) from 127.0.0.1:49434: error:
>> java.io.IOException: File
>> /tmp/hadoop-dlzhang/mapred/system/jobtracker.info could
>> only be replicated to 0 nodes, instead of 1
>> java.io.IOException: File
>> /tmp/hadoop-dlzhang/mapred/system/jobtracker.info could
>> only be replicated to 0 nodes, instead of 1
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
>>     at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:396)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>>
>> If I continue to run:
>>
>> [dlzh...@red hadoop-0.20.1]$ bin/hadoop fs -put conf input
>> 09/10/26 16:15:11 WARN hdfs.DFSClient: DataStreamer
>> Exception: org.apache.hadoop.ipc.RemoteException:
>> java.io.IOException: File
>> /user/dlzhang/input/mapred-site.xml could only be
>> replicated to 0 nodes, instead of 1
>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>>         at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>>
>>         at org.apache.hadoop.ipc.Client.call(Client.java:739)
>>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>         at $Proxy0.addBlock(Unknown Source)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>         at $Proxy0.addBlock(Unknown Source)
>>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
>>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
>>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
>>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)
>>
>> 09/10/26 16:15:11 WARN hdfs.DFSClient: Error Recovery for
>> block null bad datanode[0] nodes == null
>> 09/10/26 16:15:11 WARN hdfs.DFSClient: Could not get block
>> locations. Source file "/user/dlzhang/input/mapred-site.xml"
>> - Aborting...
>> put: java.io.IOException: File
>> /user/dlzhang/input/mapred-site.xml could only be
>> replicated to 0 nodes, instead of 1
>>
>>
>> Can anyone give some hints about what's wrong there?
>>
>> Thanks in advance,
>>
>> Donglai Zhang
>>
>>

