[
https://issues.apache.org/jira/browse/HADOOP-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12472858
]
Tom White commented on HADOOP-1013:
-----------------------------------
I've just run the EC2 AMI again and I get different output and *no*
ArithmeticException (it's actually an updated one using Hadoop 0.11.1, but I
don't believe this accounts for the difference since I didn't get the exception
when using 0.11.0):
2007-02-13 15:11:08,026 INFO org.apache.hadoop.dfs.StateChange: STATE* Network
topology has 0 racks and 0 datanodes
2007-02-13 15:11:08,301 INFO org.mortbay.util.Credential: Checking Resource
aliases
2007-02-13 15:11:08,420 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4
2007-02-13 15:11:09,305 INFO org.mortbay.util.Container: Started [EMAIL PROTECTED]8b8
2007-02-13 15:11:09,407 INFO org.mortbay.util.Container: Started
WebApplicationContext[/,/]
2007-02-13 15:11:09,418 INFO org.mortbay.util.Container: Started
HttpContext[/logs,/logs]
2007-02-13 15:11:09,418 INFO org.mortbay.util.Container: Started
HttpContext[/static,/static]
2007-02-13 15:11:09,422 INFO org.mortbay.http.SocketListener: Started
SocketListener on 0.0.0.0:50070
2007-02-13 15:11:09,422 INFO org.mortbay.util.Container: Started [EMAIL
PROTECTED]
2007-02-13 15:11:09,446 INFO org.apache.hadoop.ipc.Server: IPC Server listener
on 50001: starting
2007-02-13 15:11:09,448 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0
on 50001: starting
2007-02-13 15:11:09,448 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1
on 50001: starting
2007-02-13 15:11:09,461 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3
on 50001: starting
2007-02-13 15:11:09,461 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4
on 50001: starting
2007-02-13 15:11:09,461 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5
on 50001: starting
2007-02-13 15:11:09,462 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6
on 50001: starting
2007-02-13 15:11:09,462 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2
on 50001: starting
2007-02-13 15:11:09,462 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7
on 50001: starting
2007-02-13 15:11:09,462 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8
on 50001: starting
2007-02-13 15:11:09,462 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9
on 50001: starting
2007-02-13 15:11:15,251 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
NameSystem.registerDatanode: node registration
from domU-12-31-34-00-02-20.usma2.compute.amazonaws.com:50010 storage
2007-02-13 15:11:15,253 INFO org.apache.hadoop.net.NetworkTopology: Adding a
new node: /default-rack/domU-12-31-34-00
-02-20.usma2.compute.amazonaws.com:50010
2007-02-13 15:11:18,491 WARN org.apache.hadoop.dfs.StateChange: DIR*
FSDirectory.unprotectedDelete: failed to remove
/mnt/hadoop/mapred/.system.crc because it does not exist
2007-02-13 15:11:18,492 WARN org.apache.hadoop.dfs.StateChange: DIR*
FSDirectory.unprotectedDelete: failed to remove
/mnt/hadoop/mapred/system because it does not exist
2007-02-13 15:16:16,841 INFO org.apache.hadoop.fs.FSNamesystem: Roll Edit Log
2007-02-13 15:16:17,501 INFO org.apache.hadoop.fs.FSNamesystem: Roll FSImage
Notice that a datanode registers (at 15:11:15,251), unlike on Jim's setup. I
think there are two bugs here: the namenode shouldn't throw an exception when
there are no registered datanodes, and that bug is being exposed by a problem in
the EC2 scripts (why isn't the datanode registering for Jim?). We should take
the latter bug back to HADOOP-952. Jim, are you able to dig out the logs on the
datanode to see why it didn't connect?
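For anyone reading along: the stack trace below points at
FSNamesystem$Replicator.chooseTarget, which evidently divides by the rack count
when spreading replicas across racks, so an empty topology produces "/ by
zero". Here is a minimal sketch of the kind of guard the namenode needs. This
is hypothetical illustration code, not Hadoop's actual chooseTarget; the method
and class names are invented, and in real Hadoop the rack count would come from
NetworkTopology rather than a parameter:

```java
// Hypothetical sketch (not Hadoop's actual code): guard the per-rack
// division in replica target selection so an empty cluster returns no
// targets instead of throwing ArithmeticException.
import java.util.Collections;
import java.util.List;

public class ChooseTargetSketch {
    // In Hadoop the rack count would come from the network topology;
    // here it is a plain parameter for illustration.
    static List<String> chooseTarget(int numReplicas, int numRacks,
                                     List<String> datanodes) {
        if (numRacks == 0 || datanodes.isEmpty()) {
            // Empty topology: fail gracefully rather than dividing by zero
            // below and surfacing "/ by zero" to the client.
            return Collections.emptyList();
        }
        // The kind of division that blows up with zero racks:
        // replicas allowed per rack, rounded up.
        int maxPerRack = (numReplicas + numRacks - 1) / numRacks;
        int limit = Math.min(numReplicas,
                Math.min(datanodes.size(), maxPerRack * numRacks));
        return datanodes.subList(0, limit);
    }
}
```

With the guard in place a client asking for targets on an empty cluster would
get an ordinary "no nodes available" style failure instead of a
RemoteException wrapping ArithmeticException.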
> ArithmeticException when number of racks is zero
> ------------------------------------------------
>
> Key: HADOOP-1013
> URL: https://issues.apache.org/jira/browse/HADOOP-1013
> Project: Hadoop
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.11.0
> Environment: EC2
> Reporter: James P. White
> Assigned To: Hairong Kuang
>
> It seems that if a configuration problem results in the number of racks
> being zero, the symptom is a divide-by-zero exception.
> [EMAIL PROTECTED] ~]# cd /usr/local/hadoop-0.11.0/
> [EMAIL PROTECTED] hadoop-0.11.0]# bin/hadoop jar hadoop-0.11.0-examples.jar
> pi 10 10000000
> Number of Maps = 10 Samples per Map = 10000000
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> java.lang.ArithmeticException: / by zero
> at
> org.apache.hadoop.dfs.FSNamesystem$Replicator.chooseTarget(FSNamesystem.java:2593)
> at
> org.apache.hadoop.dfs.FSNamesystem$Replicator.chooseTarget(FSNamesystem.java:2555)
> at org.apache.hadoop.dfs.FSNamesystem.startFile(FSNamesystem.java:684)
> at org.apache.hadoop.dfs.NameNode.create(NameNode.java:248)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:585)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:337)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:538)
> at org.apache.hadoop.ipc.Client.call(Client.java:467)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:164)
> at org.apache.hadoop.dfs.$Proxy0.create(Unknown Source)
> at
> org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateNewBlock(DFSClient.java:1091)
> at
> org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:1031)
> at
> org.apache.hadoop.dfs.DFSClient$DFSOutputStream.endBlock(DFSClient.java:1255)
> at
> org.apache.hadoop.dfs.DFSClient$DFSOutputStream.close(DFSClient.java:1345)
> at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
> at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
> at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
> at
> org.apache.hadoop.fs.FSDataOutputStream$Summer.close(FSDataOutputStream.java:98)
> at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
> at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
> at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
> at
> org.apache.hadoop.io.SequenceFile$Writer.close(SequenceFile.java:724)
> at org.apache.hadoop.examples.PiEstimator.launch(PiEstimator.java:185)
> at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:226)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:585)
> at
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:143)
> at
> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:40)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:585)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
> [EMAIL PROTECTED] hadoop-0.11.0]#
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.