See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/807/changes
Changes:
[shv] HADOOP-5509. PendingReplicationBlocks does not start monitor in the
constructor. Contributed by Konstantin Shvachko.
[hairong] HADOOP-5644. Namenode is stuck in safe mode. Contributed by Suresh
Srinivas.
[hairong] HADOOP-5654. TestReplicationPolicy.<init> fails on
java.net.BindException. Contributed by Hairong Kuang.
[rangadi] HADOOP-5581. HDFS should throw FileNotFoundException while opening
a file that does not exist. (Brian Bockelman via rangadi)
------------------------------------------
[...truncated 466133 lines...]
[junit] 2009-04-15 08:48:42,937 INFO datanode.DataNode
(DataNode.java:startDataNode(317)) - Opened info server at 58571
[junit] 2009-04-15 08:48:42,938 INFO datanode.DataNode
(DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
[junit] 2009-04-15 08:48:42,938 INFO datanode.DirectoryScanner
(DirectoryScanner.java:<init>(133)) - scan starts at 1239793416938 with
interval 21600000
[junit] 2009-04-15 08:48:42,940 INFO http.HttpServer
(HttpServer.java:start(454)) - Jetty bound to port 48047
[junit] 2009-04-15 08:48:42,940 INFO mortbay.log (?:invoke0(?)) -
jetty-6.1.14
[junit] 2009-04-15 08:48:43,003 INFO mortbay.log (?:invoke0(?)) - Started
SelectChannelConnector@localhost:48047
[junit] 2009-04-15 08:48:43,004 INFO jvm.JvmMetrics
(JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with
processName=DataNode, sessionId=null - already initialized
[junit] 2009-04-15 08:48:43,005 INFO metrics.RpcMetrics
(RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode,
port=55779
[junit] 2009-04-15 08:48:43,006 INFO ipc.Server (Server.java:run(471)) -
IPC Server Responder: starting
[junit] 2009-04-15 08:48:43,006 INFO ipc.Server (Server.java:run(934)) -
IPC Server handler 1 on 55779: starting
[junit] 2009-04-15 08:48:43,006 INFO ipc.Server (Server.java:run(934)) -
IPC Server handler 0 on 55779: starting
[junit] 2009-04-15 08:48:43,006 INFO ipc.Server (Server.java:run(313)) -
IPC Server listener on 55779: starting
[junit] 2009-04-15 08:48:43,006 INFO datanode.DataNode
(DataNode.java:startDataNode(396)) - dnRegistration =
DatanodeRegistration(vesta.apache.org:58571, storageID=, infoPort=48047,
ipcPort=55779)
[junit] 2009-04-15 08:48:43,006 INFO ipc.Server (Server.java:run(934)) -
IPC Server handler 2 on 55779: starting
[junit] 2009-04-15 08:48:43,009 INFO hdfs.StateChange
(FSNamesystem.java:registerDatanode(2084)) - BLOCK*
NameSystem.registerDatanode: node registration from 127.0.0.1:58571 storage
DS-1227352605-67.195.138.9-58571-1239785323008
[junit] 2009-04-15 08:48:43,009 INFO net.NetworkTopology
(NetworkTopology.java:add(328)) - Adding a new node:
/default-rack/127.0.0.1:58571
[junit] 2009-04-15 08:48:43,018 INFO datanode.DataNode
(DataNode.java:register(554)) - New storage id
DS-1227352605-67.195.138.9-58571-1239785323008 is assigned to data-node
127.0.0.1:58571
[junit] 2009-04-15 08:48:43,018 INFO datanode.DataNode
(DataNode.java:run(1214)) - DatanodeRegistration(127.0.0.1:58571,
storageID=DS-1227352605-67.195.138.9-58571-1239785323008, infoPort=48047,
ipcPort=55779)In DataNode.run, data =
FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'}
[junit] Starting DataNode 1 with dfs.data.dir:
/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4
[junit] 2009-04-15 08:48:43,019 INFO datanode.DataNode
(DataNode.java:offerService(696)) - using BLOCKREPORT_INTERVAL of 3600000msec
Initial delay: 0msec
[junit] 2009-04-15 08:48:43,029 INFO common.Storage
(DataStorage.java:recoverTransitionRead(123)) - Storage directory
/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data3
is not formatted.
[junit] 2009-04-15 08:48:43,029 INFO common.Storage
(DataStorage.java:recoverTransitionRead(124)) - Formatting ...
[junit] 2009-04-15 08:48:43,033 INFO common.Storage
(DataStorage.java:recoverTransitionRead(123)) - Storage directory
/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4
is not formatted.
[junit] 2009-04-15 08:48:43,034 INFO common.Storage
(DataStorage.java:recoverTransitionRead(124)) - Formatting ...
[junit] 2009-04-15 08:48:43,074 INFO datanode.DataNode
(DataNode.java:blockReport(925)) - BlockReport of 0 blocks got processed in 2
msecs
[junit] 2009-04-15 08:48:43,074 INFO datanode.DataNode
(DataNode.java:offerService(739)) - Starting Periodic block scanner.
[junit] 2009-04-15 08:48:43,078 INFO datanode.DataNode
(FSDataset.java:registerMBean(1417)) - Registered FSDatasetStatusMBean
[junit] 2009-04-15 08:48:43,079 INFO datanode.DataNode
(DataNode.java:startDataNode(317)) - Opened info server at 36698
[junit] 2009-04-15 08:48:43,079 INFO datanode.DataNode
(DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
[junit] 2009-04-15 08:48:43,079 INFO datanode.DirectoryScanner
(DirectoryScanner.java:<init>(133)) - scan starts at 1239803740079 with
interval 21600000
[junit] 2009-04-15 08:48:43,081 INFO http.HttpServer
(HttpServer.java:start(454)) - Jetty bound to port 54532
[junit] 2009-04-15 08:48:43,081 INFO mortbay.log (?:invoke0(?)) -
jetty-6.1.14
[junit] 2009-04-15 08:48:43,143 INFO mortbay.log (?:invoke0(?)) - Started
SelectChannelConnector@localhost:54532
[junit] 2009-04-15 08:48:43,145 INFO jvm.JvmMetrics
(JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with
processName=DataNode, sessionId=null - already initialized
[junit] 2009-04-15 08:48:43,146 INFO metrics.RpcMetrics
(RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode,
port=45581
[junit] 2009-04-15 08:48:43,147 INFO ipc.Server (Server.java:run(471)) -
IPC Server Responder: starting
[junit] 2009-04-15 08:48:43,147 INFO ipc.Server (Server.java:run(934)) -
IPC Server handler 0 on 45581: starting
[junit] 2009-04-15 08:48:43,147 INFO ipc.Server (Server.java:run(313)) -
IPC Server listener on 45581: starting
[junit] 2009-04-15 08:48:43,148 INFO ipc.Server (Server.java:run(934)) -
IPC Server handler 2 on 45581: starting
[junit] 2009-04-15 08:48:43,147 INFO ipc.Server (Server.java:run(934)) -
IPC Server handler 1 on 45581: starting
[junit] 2009-04-15 08:48:43,148 INFO datanode.DataNode
(DataNode.java:startDataNode(396)) - dnRegistration =
DatanodeRegistration(vesta.apache.org:36698, storageID=, infoPort=54532,
ipcPort=45581)
[junit] 2009-04-15 08:48:43,150 INFO hdfs.StateChange
(FSNamesystem.java:registerDatanode(2084)) - BLOCK*
NameSystem.registerDatanode: node registration from 127.0.0.1:36698 storage
DS-291892191-67.195.138.9-36698-1239785323149
[junit] 2009-04-15 08:48:43,150 INFO net.NetworkTopology
(NetworkTopology.java:add(328)) - Adding a new node:
/default-rack/127.0.0.1:36698
[junit] 2009-04-15 08:48:43,153 INFO datanode.DataNode
(DataNode.java:register(554)) - New storage id
DS-291892191-67.195.138.9-36698-1239785323149 is assigned to data-node
127.0.0.1:36698
[junit] 2009-04-15 08:48:43,153 INFO datanode.DataNode
(DataNode.java:run(1214)) - DatanodeRegistration(127.0.0.1:36698,
storageID=DS-291892191-67.195.138.9-36698-1239785323149, infoPort=54532,
ipcPort=45581)In DataNode.run, data =
FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'}
[junit] 2009-04-15 08:48:43,160 INFO datanode.DataNode
(DataNode.java:offerService(696)) - using BLOCKREPORT_INTERVAL of 3600000msec
Initial delay: 0msec
[junit] 2009-04-15 08:48:43,201 INFO namenode.FSNamesystem
(FSEditLog.java:processIOError(471)) - current list of storage
dirs:/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-04-15 08:48:43,202 INFO namenode.FSNamesystem
(FSEditLog.java:processIOError(471)) - current list of storage
dirs:/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-04-15 08:48:43,203 INFO datanode.DataNode
(DataNode.java:blockReport(925)) - BlockReport of 0 blocks got processed in 3
msecs
[junit] 2009-04-15 08:48:43,204 INFO datanode.DataNode
(DataNode.java:offerService(739)) - Starting Periodic block scanner.
[junit] 2009-04-15 08:48:43,219 INFO namenode.FSNamesystem
(FSEditLog.java:processIOError(471)) - current list of storage
dirs:/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-04-15 08:48:43,220 INFO FSNamesystem.audit
(FSNamesystem.java:logAuditEvent(110)) - ugi=hudson,hudson ip=/127.0.0.1
cmd=create src=/test dst=null perm=hudson:supergroup:rw-r--r--
[junit] 2009-04-15 08:48:43,225 INFO hdfs.StateChange
(FSNamesystem.java:allocateBlock(1479)) - BLOCK* NameSystem.allocateBlock:
/test. blk_-923968447045708337_1001
[junit] 2009-04-15 08:48:43,227 INFO datanode.DataNode
(DataXceiver.java:writeBlock(228)) - Receiving block
blk_-923968447045708337_1001 src: /127.0.0.1:49733 dest: /127.0.0.1:58571
[junit] 2009-04-15 08:48:43,228 INFO datanode.DataNode
(DataXceiver.java:writeBlock(228)) - Receiving block
blk_-923968447045708337_1001 src: /127.0.0.1:58253 dest: /127.0.0.1:36698
[junit] 2009-04-15 08:48:43,231 INFO DataNode.clienttrace
(BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:58253, dest:
/127.0.0.1:36698, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_83436844,
offset: 0, srvID: DS-291892191-67.195.138.9-36698-1239785323149, blockid:
blk_-923968447045708337_1001
[junit] 2009-04-15 08:48:43,231 INFO datanode.DataNode
(BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block
blk_-923968447045708337_1001 terminating
[junit] 2009-04-15 08:48:43,272 INFO hdfs.StateChange
(FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock:
blockMap updated: 127.0.0.1:36698 is added to blk_-923968447045708337_1001 size
4096
[junit] 2009-04-15 08:48:43,273 INFO DataNode.clienttrace
(BlockReceiver.java:run(929)) - src: /127.0.0.1:49733, dest: /127.0.0.1:58571,
bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_83436844, offset: 0, srvID:
DS-1227352605-67.195.138.9-58571-1239785323008, blockid:
blk_-923968447045708337_1001
[junit] 2009-04-15 08:48:43,274 INFO datanode.DataNode
(BlockReceiver.java:run(993)) - PacketResponder 1 for block
blk_-923968447045708337_1001 terminating
[junit] 2009-04-15 08:48:43,274 INFO hdfs.StateChange
(FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock:
blockMap updated: 127.0.0.1:58571 is added to blk_-923968447045708337_1001 size
4096
[junit] 2009-04-15 08:48:43,276 INFO hdfs.StateChange
(FSNamesystem.java:allocateBlock(1479)) - BLOCK* NameSystem.allocateBlock:
/test. blk_-3625756261105283729_1001
[junit] 2009-04-15 08:48:43,278 INFO datanode.DataNode
(DataXceiver.java:writeBlock(228)) - Receiving block
blk_-3625756261105283729_1001 src: /127.0.0.1:49735 dest: /127.0.0.1:58571
[junit] 2009-04-15 08:48:43,279 INFO datanode.DataNode
(DataXceiver.java:writeBlock(228)) - Receiving block
blk_-3625756261105283729_1001 src: /127.0.0.1:58255 dest: /127.0.0.1:36698
[junit] 2009-04-15 08:48:43,283 INFO DataNode.clienttrace
(BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:58255, dest:
/127.0.0.1:36698, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_83436844,
offset: 0, srvID: DS-291892191-67.195.138.9-36698-1239785323149, blockid:
blk_-3625756261105283729_1001
[junit] 2009-04-15 08:48:43,283 INFO hdfs.StateChange
(FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock:
blockMap updated: 127.0.0.1:36698 is added to blk_-3625756261105283729_1001
size 4096
[junit] 2009-04-15 08:48:43,283 INFO datanode.DataNode
(BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block
blk_-3625756261105283729_1001 terminating
[junit] 2009-04-15 08:48:43,285 INFO hdfs.StateChange
(FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock:
blockMap updated: 127.0.0.1:58571 is added to blk_-3625756261105283729_1001
size 4096
[junit] 2009-04-15 08:48:43,285 INFO DataNode.clienttrace
(BlockReceiver.java:run(929)) - src: /127.0.0.1:49735, dest: /127.0.0.1:58571,
bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_83436844, offset: 0, srvID:
DS-1227352605-67.195.138.9-58571-1239785323008, blockid:
blk_-3625756261105283729_1001
[junit] 2009-04-15 08:48:43,286 INFO datanode.DataNode
(BlockReceiver.java:run(993)) - PacketResponder 1 for block
blk_-3625756261105283729_1001 terminating
[junit] 2009-04-15 08:48:43,288 INFO namenode.FSNamesystem
(FSEditLog.java:processIOError(471)) - current list of storage
dirs:/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-04-15 08:48:43,289 INFO namenode.FSNamesystem
(FSEditLog.java:processIOError(471)) - current list of storage
dirs:/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] init: server=localhost;port=;service=DataNode;localVMUrl=null
[junit]
[junit] Domains:
[junit] Domain = JMImplementation
[junit] Domain = com.sun.management
[junit] Domain = hadoop
[junit] Domain = java.lang
[junit] Domain = java.util.logging
[junit]
[junit] MBeanServer default domain = DefaultDomain
[junit]
[junit] MBean count = 26
[junit]
[junit] Query MBeanServer MBeans:
[junit] hadoop services:
hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId-46751665
[junit] hadoop services:
hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId1676075030
[junit] hadoop services:
hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-1536477668
[junit] hadoop services:
hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId1168987622
[junit] hadoop services:
hadoop:service=DataNode,name=RpcActivityForPort45581
[junit] hadoop services:
hadoop:service=DataNode,name=RpcActivityForPort55779
[junit] Info: key = bytes_written; val = 0
[junit] Shutting down the Mini HDFS Cluster
[junit] Shutting down DataNode 1
[junit] 2009-04-15 08:48:43,392 INFO ipc.Server (Server.java:stop(1098)) -
Stopping server on 45581
[junit] 2009-04-15 08:48:43,393 INFO ipc.Server (Server.java:run(352)) -
Stopping IPC Server listener on 45581
[junit] 2009-04-15 08:48:43,393 INFO ipc.Server (Server.java:run(992)) -
IPC Server handler 0 on 45581: exiting
[junit] 2009-04-15 08:48:43,393 INFO ipc.Server (Server.java:run(536)) -
Stopping IPC Server Responder
[junit] 2009-04-15 08:48:43,393 INFO ipc.Server (Server.java:run(992)) -
IPC Server handler 1 on 45581: exiting
[junit] 2009-04-15 08:48:43,393 INFO ipc.Server (Server.java:run(992)) -
IPC Server handler 2 on 45581: exiting
[junit] 2009-04-15 08:48:43,394 WARN datanode.DataNode
(DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:36698,
storageID=DS-291892191-67.195.138.9-36698-1239785323149, infoPort=54532,
ipcPort=45581):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2009-04-15 08:48:43,394 INFO datanode.DataNode
(DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads
is 1
[junit] 2009-04-15 08:48:43,395 INFO datanode.DataBlockScanner
(DataBlockScanner.java:run(618)) - Exiting DataBlockScanner thread.
[junit] 2009-04-15 08:48:43,395 INFO datanode.DataNode
(DataNode.java:run(1234)) - DatanodeRegistration(127.0.0.1:36698,
storageID=DS-291892191-67.195.138.9-36698-1239785323149, infoPort=54532,
ipcPort=45581):Finishing DataNode in:
FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'}
[junit] 2009-04-15 08:48:43,396 INFO ipc.Server (Server.java:stop(1098)) -
Stopping server on 45581
[junit] 2009-04-15 08:48:43,396 INFO datanode.DataNode
(DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads
is 0
[junit] Shutting down DataNode 0
[junit] 2009-04-15 08:48:43,498 INFO ipc.Server (Server.java:stop(1098)) -
Stopping server on 55779
[junit] 2009-04-15 08:48:43,499 INFO ipc.Server (Server.java:run(992)) -
IPC Server handler 2 on 55779: exiting
[junit] 2009-04-15 08:48:43,499 INFO ipc.Server (Server.java:run(992)) -
IPC Server handler 1 on 55779: exiting
[junit] 2009-04-15 08:48:43,500 WARN datanode.DataNode
(DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:58571,
storageID=DS-1227352605-67.195.138.9-58571-1239785323008, infoPort=48047,
ipcPort=55779):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2009-04-15 08:48:43,499 INFO datanode.DataNode
(DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads
is 1
[junit] 2009-04-15 08:48:43,499 INFO ipc.Server (Server.java:run(992)) -
IPC Server handler 0 on 55779: exiting
[junit] 2009-04-15 08:48:43,499 INFO ipc.Server (Server.java:run(536)) -
Stopping IPC Server Responder
[junit] 2009-04-15 08:48:43,499 INFO ipc.Server (Server.java:run(352)) -
Stopping IPC Server listener on 55779
[junit] 2009-04-15 08:48:43,501 INFO datanode.DataBlockScanner
(DataBlockScanner.java:run(618)) - Exiting DataBlockScanner thread.
[junit] 2009-04-15 08:48:43,502 INFO datanode.DataNode
(DataNode.java:run(1234)) - DatanodeRegistration(127.0.0.1:58571,
storageID=DS-1227352605-67.195.138.9-58571-1239785323008, infoPort=48047,
ipcPort=55779):Finishing DataNode in:
FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'}
[junit] 2009-04-15 08:48:43,502 INFO ipc.Server (Server.java:stop(1098)) -
Stopping server on 55779
[junit] 2009-04-15 08:48:43,502 INFO datanode.DataNode
(DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads
is 0
[junit] 2009-04-15 08:48:43,505 WARN namenode.FSNamesystem
(FSNamesystem.java:run(2359)) - ReplicationMonitor thread received
InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2009-04-15 08:48:43,505 WARN namenode.DecommissionManager
(DecommissionManager.java:run(67)) - Monitor interrupted:
java.lang.InterruptedException: sleep interrupted
[junit] 2009-04-15 08:48:43,505 INFO namenode.FSNamesystem
(FSEditLog.java:printStatistics(1082)) - Number of transactions: 3 Total time
for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of
syncs: 2 SyncTimes(ms): 16 2
[junit] 2009-04-15 08:48:43,506 INFO namenode.FSNamesystem
(FSEditLog.java:processIOError(471)) - current list of storage
dirs:/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-04-15 08:48:43,507 INFO ipc.Server (Server.java:stop(1098)) -
Stopping server on 45270
[junit] 2009-04-15 08:48:43,507 INFO ipc.Server (Server.java:run(352)) -
Stopping IPC Server listener on 45270
[junit] 2009-04-15 08:48:43,508 INFO ipc.Server (Server.java:run(536)) -
Stopping IPC Server Responder
[junit] 2009-04-15 08:48:43,508 INFO ipc.Server (Server.java:run(992)) -
IPC Server handler 0 on 45270: exiting
[junit] 2009-04-15 08:48:43,508 INFO ipc.Server (Server.java:run(992)) -
IPC Server handler 1 on 45270: exiting
[junit] 2009-04-15 08:48:43,509 INFO ipc.Server (Server.java:run(992)) -
IPC Server handler 5 on 45270: exiting
[junit] 2009-04-15 08:48:43,509 INFO ipc.Server (Server.java:run(992)) -
IPC Server handler 6 on 45270: exiting
[junit] 2009-04-15 08:48:43,510 INFO ipc.Server (Server.java:run(992)) -
IPC Server handler 7 on 45270: exiting
[junit] 2009-04-15 08:48:43,509 INFO ipc.Server (Server.java:run(992)) -
IPC Server handler 4 on 45270: exiting
[junit] 2009-04-15 08:48:43,510 INFO ipc.Server (Server.java:run(992)) -
IPC Server handler 8 on 45270: exiting
[junit] 2009-04-15 08:48:43,510 INFO ipc.Server (Server.java:run(992)) -
IPC Server handler 2 on 45270: exiting
[junit] 2009-04-15 08:48:43,510 INFO ipc.Server (Server.java:run(992)) -
IPC Server handler 9 on 45270: exiting
[junit] 2009-04-15 08:48:43,510 INFO ipc.Server (Server.java:run(992)) -
IPC Server handler 3 on 45270: exiting
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 4.996 sec
[junit] Running org.apache.hadoop.util.TestCyclicIteration
[junit]
[junit]
[junit] integers=[]
[junit] map={}
[junit] start=-1, iteration=[]
[junit]
[junit]
[junit] integers=[0]
[junit] map={0=0}
[junit] start=-1, iteration=[0]
[junit] start=0, iteration=[0]
[junit] start=1, iteration=[0]
[junit]
[junit]
[junit] integers=[0, 2]
[junit] map={0=0, 2=2}
[junit] start=-1, iteration=[0, 2]
[junit] start=0, iteration=[2, 0]
[junit] start=1, iteration=[2, 0]
[junit] start=2, iteration=[0, 2]
[junit] start=3, iteration=[0, 2]
[junit]
[junit]
[junit] integers=[0, 2, 4]
[junit] map={0=0, 2=2, 4=4}
[junit] start=-1, iteration=[0, 2, 4]
[junit] start=0, iteration=[2, 4, 0]
[junit] start=1, iteration=[2, 4, 0]
[junit] start=2, iteration=[4, 0, 2]
[junit] start=3, iteration=[4, 0, 2]
[junit] start=4, iteration=[0, 2, 4]
[junit] start=5, iteration=[0, 2, 4]
[junit]
[junit]
[junit] integers=[0, 2, 4, 6]
[junit] map={0=0, 2=2, 4=4, 6=6}
[junit] start=-1, iteration=[0, 2, 4, 6]
[junit] start=0, iteration=[2, 4, 6, 0]
[junit] start=1, iteration=[2, 4, 6, 0]
[junit] start=2, iteration=[4, 6, 0, 2]
[junit] start=3, iteration=[4, 6, 0, 2]
[junit] start=4, iteration=[6, 0, 2, 4]
[junit] start=5, iteration=[6, 0, 2, 4]
[junit] start=6, iteration=[0, 2, 4, 6]
[junit] start=7, iteration=[0, 2, 4, 6]
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.105 sec
[junit] Running org.apache.hadoop.util.TestGenericsUtil
[junit] 2009-04-15 08:48:44,411 WARN conf.Configuration
(Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the
classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml,
mapred-site.xml and hdfs-site.xml to override properties of core-default.xml,
mapred-default.xml and hdfs-default.xml respectively
[junit] 2009-04-15 08:48:44,424 WARN util.GenericOptionsParser
(GenericOptionsParser.java:parseGeneralOptions(377)) - options parsing failed:
Missing argument for option:jt
[junit] usage: general options are:
[junit] -archives <paths> comma separated archives to be
unarchived
[junit] on the compute machines.
[junit] -conf <configuration file> specify an application configuration
file
[junit] -D <property=value> use value for given property
[junit] -files <paths> comma separated files to be copied
to the
[junit] map reduce cluster
[junit] -fs <local|namenode:port> specify a namenode
[junit] -jt <local|jobtracker:port> specify a job tracker
[junit] -libjars <paths> comma separated jar files to include
in the
[junit] classpath.
[junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.185 sec
[junit] Running org.apache.hadoop.util.TestIndexedSort
[junit] sortRandom seed:
4570181145371200710(org.apache.hadoop.util.QuickSort)
[junit] testSorted seed:
3936155686533720704(org.apache.hadoop.util.QuickSort)
[junit] testAllEqual setting min/max at
421/12(org.apache.hadoop.util.QuickSort)
[junit] sortWritable seed:
-4256366235203113534(org.apache.hadoop.util.QuickSort)
[junit] QuickSort degen cmp/swp:
23252/3713(org.apache.hadoop.util.QuickSort)
[junit] sortRandom seed:
-2158977114618471186(org.apache.hadoop.util.HeapSort)
[junit] testSorted seed:
-4540180234436261788(org.apache.hadoop.util.HeapSort)
[junit] testAllEqual setting min/max at
327/197(org.apache.hadoop.util.HeapSort)
[junit] sortWritable seed:
-5127121557757729690(org.apache.hadoop.util.HeapSort)
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.061 sec
[junit] Running org.apache.hadoop.util.TestProcfsBasedProcessTree
[junit] 2009-04-15 08:48:46,245 INFO util.ProcessTree
(ProcessTree.java:isSetsidSupported(54)) - setsid exited with exit code 0
[junit] 2009-04-15 08:48:46,750 INFO util.TestProcfsBasedProcessTree
(TestProcfsBasedProcessTree.java:testProcessTree(141)) - Root process pid: 28711
[junit] 2009-04-15 08:48:46,801 INFO util.TestProcfsBasedProcessTree
(TestProcfsBasedProcessTree.java:testProcessTree(146)) - ProcessTree: [ 28711
28713 28714 ]
[junit] 2009-04-15 08:48:53,331 INFO util.TestProcfsBasedProcessTree
(TestProcfsBasedProcessTree.java:testProcessTree(159)) - ProcessTree: [ 28725
28711 28727 28721 28723 28717 28719 28713 28715 ]
[junit] 2009-04-15 08:48:53,345 INFO util.TestProcfsBasedProcessTree
(TestProcfsBasedProcessTree.java:run(64)) - Shell Command exit with a non-zero
exit code. This is expected as we are killing the subprocesses of the task
intentionally. org.apache.hadoop.util.Shell$ExitCodeException:
[junit] 2009-04-15 08:48:53,346 INFO util.TestProcfsBasedProcessTree
(TestProcfsBasedProcessTree.java:run(70)) - Exit code: 143
[junit] 2009-04-15 08:48:53,346 INFO util.ProcessTree
(ProcessTree.java:destroyProcessGroup(160)) - Killing all processes in the
process group 28711 with SIGTERM. Exit code 0
[junit] 2009-04-15 08:48:53,428 INFO util.TestProcfsBasedProcessTree
(TestProcfsBasedProcessTree.java:testProcessTree(173)) - RogueTaskThread
successfully joined.
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.277 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] 2009-04-15 08:48:54,409 WARN conf.Configuration
(Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the
classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml,
mapred-site.xml and hdfs-site.xml to override properties of core-default.xml,
mapred-default.xml and hdfs-default.xml respectively
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.629 sec
[junit] Running org.apache.hadoop.util.TestShell
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.184 sec
[junit] Running org.apache.hadoop.util.TestStringUtils
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.092 sec
BUILD FAILED
/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build.xml:770:
Tests failed!
Total time: 179 minutes 9 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...