See <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/491/changes>
Changes:
[eli] HDFS-1487. FSDirectory.removeBlock() should update diskspace count of the
block owner node. Contributed by Zhong Wang.
[eli] HDFS-1507. TestAbandonBlock should abandon a block. Contributed by Eli
Collins
[eli] HDFS-259. Remove intentionally corrupt 0.13 directory layout creation.
Contributed by Todd Lipcon
[omalley] Branching for 0.22
------------------------------------------
[...truncated 756916 lines...]
[junit] at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[junit] at java.lang.reflect.Method.invoke(Method.java:597)
[junit] at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
[junit] at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
[junit] at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
[junit] at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
[junit] at
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
[junit] at
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
[junit] at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
[junit] at
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
[junit] at
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
[junit] at
org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
[junit] at
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
[junit] at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
[junit] at
junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
[junit] at
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
[junit] at
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
[junit] at
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
[junit] 2010-11-18 16:32:45,327 INFO datanode.DataNode
(DataNode.java:initDataXceiver(467)) - Opened info server at 41536
[junit] 2010-11-18 16:32:45,327 INFO datanode.DataNode
(DataXceiverServer.java:<init>(77)) - Balancing bandwidth is 1048576 bytes/s
[junit] 2010-11-18 16:32:45,329 INFO common.Storage
(DataStorage.java:recoverTransitionRead(127)) - Storage directory
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data3>
is not formatted.
[junit] 2010-11-18 16:32:45,329 INFO common.Storage
(DataStorage.java:recoverTransitionRead(128)) - Formatting ...
[junit] 2010-11-18 16:32:45,332 INFO common.Storage
(DataStorage.java:recoverTransitionRead(127)) - Storage directory
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data4>
is not formatted.
[junit] 2010-11-18 16:32:45,332 INFO common.Storage
(DataStorage.java:recoverTransitionRead(128)) - Formatting ...
[junit] 2010-11-18 16:32:45,381 INFO datanode.DataNode
(FSDataset.java:registerMBean(1772)) - Registered FSDatasetStatusMBean
[junit] 2010-11-18 16:32:45,382 INFO datanode.DirectoryScanner
(DirectoryScanner.java:<init>(149)) - scan starts at 1290109618382 with
interval 21600000
[junit] 2010-11-18 16:32:45,383 INFO http.HttpServer
(HttpServer.java:addGlobalFilter(409)) - Added global filter safety
(class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
[junit] 2010-11-18 16:32:45,384 DEBUG datanode.DataNode
(DataNode.java:startInfoServer(336)) - Datanode listening on localhost:0
[junit] 2010-11-18 16:32:45,384 INFO http.HttpServer
(HttpServer.java:start(579)) - Port returned by
webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the
listener on 0
[junit] 2010-11-18 16:32:45,384 INFO http.HttpServer
(HttpServer.java:start(584)) - listener.getLocalPort() returned 50189
webServer.getConnectors()[0].getLocalPort() returned 50189
[junit] 2010-11-18 16:32:45,385 INFO http.HttpServer
(HttpServer.java:start(617)) - Jetty bound to port 50189
[junit] 2010-11-18 16:32:45,385 INFO mortbay.log (?:invoke(?)) -
jetty-6.1.14
[junit] 2010-11-18 16:32:45,461 INFO mortbay.log (?:invoke(?)) - Started
SelectChannelConnector@localhost:50189
[junit] 2010-11-18 16:32:45,461 INFO jvm.JvmMetrics
(JvmMetrics.java:init(71)) - Cannot initialize JVM Metrics with
processName=DataNode, sessionId=null - already initialized
[junit] 2010-11-18 16:32:45,462 INFO ipc.Server (Server.java:run(338)) -
Starting SocketReader
[junit] 2010-11-18 16:32:45,462 INFO metrics.RpcMetrics
(RpcMetrics.java:<init>(63)) - Initializing RPC Metrics with hostName=DataNode,
port=43815
[junit] 2010-11-18 16:32:45,464 INFO metrics.RpcDetailedMetrics
(RpcDetailedMetrics.java:<init>(57)) - Initializing RPC Metrics with
hostName=DataNode, port=43815
[junit] 2010-11-18 16:32:45,464 INFO datanode.DataNode
(DataNode.java:initIpcServer(427)) - dnRegistration =
DatanodeRegistration(h8.grid.sp2.yahoo.net:41536, storageID=, infoPort=50189,
ipcPort=43815)
[junit] 2010-11-18 16:32:45,466 INFO hdfs.StateChange
(FSNamesystem.java:registerDatanode(2508)) - BLOCK*
NameSystem.registerDatanode: node registration from 127.0.0.1:41536 storage
DS-820573405-127.0.1.1-41536-1290097965465
[junit] 2010-11-18 16:32:45,466 INFO net.NetworkTopology
(NetworkTopology.java:add(331)) - Adding a new node:
/default-rack/127.0.0.1:41536
[junit] 2010-11-18 16:32:45,471 INFO datanode.DataNode
(DataNode.java:register(697)) - New storage id
DS-820573405-127.0.1.1-41536-1290097965465 is assigned to data-node
127.0.0.1:41536
[junit] 2010-11-18 16:32:45,472 INFO datanode.DataNode
(DataNode.java:run(1419)) - DatanodeRegistration(127.0.0.1:41536,
storageID=DS-820573405-127.0.1.1-41536-1290097965465, infoPort=50189,
ipcPort=43815)In DataNode.run, data =
FSDataset{dirpath='<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}>
[junit] Starting DataNode 2 with dfs.datanode.data.dir:
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data5/,file>:<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data6/>
[junit] 2010-11-18 16:32:45,479 INFO ipc.Server (Server.java:run(608)) -
IPC Server Responder: starting
[junit] 2010-11-18 16:32:45,484 INFO datanode.DataNode
(DataNode.java:offerService(887)) - using BLOCKREPORT_INTERVAL of 21600000msec
Initial delay: 0msec
[junit] 2010-11-18 16:32:45,483 INFO ipc.Server (Server.java:run(443)) -
IPC Server listener on 43815: starting
[junit] 2010-11-18 16:32:45,484 INFO ipc.Server (Server.java:run(1369)) -
IPC Server handler 0 on 43815: starting
[junit] 2010-11-18 16:32:45,512 INFO datanode.DataNode
(DataNode.java:blockReport(1126)) - BlockReport of 0 blocks got processed in 16
msecs
[junit] 2010-11-18 16:32:45,513 INFO datanode.DataNode
(DataNode.java:offerService(929)) - Starting Periodic block scanner.
[junit] 2010-11-18 16:32:45,536 WARN datanode.DataNode
(DataNode.java:registerMXBean(530)) - Failed to register NameNode MXBean
[junit] javax.management.InstanceAlreadyExistsException:
HadoopInfo:type=DataNodeInfo
[junit] at
com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)
[junit] at
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
[junit] at
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
[junit] at
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
[junit] at
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
[junit] at
com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)
[junit] at
org.apache.hadoop.hdfs.server.datanode.DataNode.registerMXBean(DataNode.java:528)
[junit] at
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:498)
[junit] at
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:281)
[junit] at
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:266)
[junit] at
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1556)
[junit] at
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1499)
[junit] at
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1466)
[junit] at
org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:614)
[junit] at
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:448)
[junit] at
org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:176)
[junit] at
org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:71)
[junit] at
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:168)
[junit] at
org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.writeSeveralPackets(TestFiDataTransferProtocol2.java:91)
[junit] at
org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.runTest17_19(TestFiDataTransferProtocol2.java:138)
[junit] at
org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.pipeline_Fi_19(TestFiDataTransferProtocol2.java:198)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit] at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[junit] at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[junit] at java.lang.reflect.Method.invoke(Method.java:597)
[junit] at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
[junit] at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
[junit] at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
[junit] at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
[junit] at
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
[junit] at
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
[junit] at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
[junit] at
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
[junit] at
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
[junit] at
org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
[junit] at
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
[junit] at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
[junit] at
junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
[junit] at
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
[junit] at
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
[junit] at
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
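The WARN and stack trace above are benign in this run: MiniDFSCluster starts several DataNodes inside one JVM, and each tries to register an MBean under the same fixed name `HadoopInfo:type=DataNodeInfo`, so every registration after the first throws `InstanceAlreadyExistsException`. A minimal sketch of the collision and a tolerant registration follows; the no-op MBean and the helper name are illustrative, not Hadoop's actual code:

```java
import javax.management.*;
import java.lang.management.ManagementFactory;

public class MBeanCollision {
    // A trivial standard MBean: JMX requires an interface named <Impl>MBean.
    public interface DataNodeInfoMBean { int getPort(); }
    public static class DataNodeInfo implements DataNodeInfoMBean {
        private final int port;
        public DataNodeInfo(int port) { this.port = port; }
        public int getPort() { return port; }
    }

    /** Register, returning false instead of failing when the name is taken. */
    public static boolean registerTolerant(Object bean, ObjectName name) {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        try {
            mbs.registerMBean(bean, name);
            return true;
        } catch (InstanceAlreadyExistsException e) {
            return false;  // a second in-process DataNode hit the same name
        } catch (JMException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        ObjectName name = new ObjectName("HadoopInfo:type=DataNodeInfo");
        // First registration succeeds; the second collides, as in the log.
        System.out.println(registerTolerant(new DataNodeInfo(41536), name));
        System.out.println(registerTolerant(new DataNodeInfo(43306), name));
    }
}
```

Registering each DataNode under a unique name (e.g. keyed by port) would avoid the warning entirely.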
[junit] 2010-11-18 16:32:45,538 INFO datanode.DataNode
(DataNode.java:initDataXceiver(467)) - Opened info server at 43306
[junit] 2010-11-18 16:32:45,539 INFO datanode.DataNode
(DataXceiverServer.java:<init>(77)) - Balancing bandwidth is 1048576 bytes/s
[junit] 2010-11-18 16:32:45,541 INFO common.Storage
(DataStorage.java:recoverTransitionRead(127)) - Storage directory
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data5>
is not formatted.
[junit] 2010-11-18 16:32:45,541 INFO common.Storage
(DataStorage.java:recoverTransitionRead(128)) - Formatting ...
[junit] 2010-11-18 16:32:45,543 INFO common.Storage
(DataStorage.java:recoverTransitionRead(127)) - Storage directory
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data6>
is not formatted.
[junit] 2010-11-18 16:32:45,544 INFO common.Storage
(DataStorage.java:recoverTransitionRead(128)) - Formatting ...
[junit] 2010-11-18 16:32:45,582 INFO datanode.DataNode
(FSDataset.java:registerMBean(1772)) - Registered FSDatasetStatusMBean
[junit] 2010-11-18 16:32:45,582 INFO datanode.DirectoryScanner
(DirectoryScanner.java:<init>(149)) - scan starts at 1290115083582 with
interval 21600000
[junit] 2010-11-18 16:32:45,584 INFO http.HttpServer
(HttpServer.java:addGlobalFilter(409)) - Added global filter safety
(class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
[junit] 2010-11-18 16:32:45,584 DEBUG datanode.DataNode
(DataNode.java:startInfoServer(336)) - Datanode listening on localhost:0
[junit] 2010-11-18 16:32:45,585 INFO http.HttpServer
(HttpServer.java:start(579)) - Port returned by
webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the
listener on 0
[junit] 2010-11-18 16:32:45,585 INFO http.HttpServer
(HttpServer.java:start(584)) - listener.getLocalPort() returned 60767
webServer.getConnectors()[0].getLocalPort() returned 60767
[junit] 2010-11-18 16:32:45,586 INFO http.HttpServer
(HttpServer.java:start(617)) - Jetty bound to port 60767
[junit] 2010-11-18 16:32:45,586 INFO mortbay.log (?:invoke(?)) -
jetty-6.1.14
[junit] 2010-11-18 16:32:45,675 INFO mortbay.log (?:invoke(?)) - Started
SelectChannelConnector@localhost:60767
[junit] 2010-11-18 16:32:45,676 INFO jvm.JvmMetrics
(JvmMetrics.java:init(71)) - Cannot initialize JVM Metrics with
processName=DataNode, sessionId=null - already initialized
[junit] 2010-11-18 16:32:45,677 INFO ipc.Server (Server.java:run(338)) -
Starting SocketReader
[junit] 2010-11-18 16:32:45,677 INFO metrics.RpcMetrics
(RpcMetrics.java:<init>(63)) - Initializing RPC Metrics with hostName=DataNode,
port=52910
[junit] 2010-11-18 16:32:45,678 INFO metrics.RpcDetailedMetrics
(RpcDetailedMetrics.java:<init>(57)) - Initializing RPC Metrics with
hostName=DataNode, port=52910
[junit] 2010-11-18 16:32:45,678 INFO datanode.DataNode
(DataNode.java:initIpcServer(427)) - dnRegistration =
DatanodeRegistration(h8.grid.sp2.yahoo.net:43306, storageID=, infoPort=60767,
ipcPort=52910)
[junit] 2010-11-18 16:32:45,680 INFO hdfs.StateChange
(FSNamesystem.java:registerDatanode(2508)) - BLOCK*
NameSystem.registerDatanode: node registration from 127.0.0.1:43306 storage
DS-835262486-127.0.1.1-43306-1290097965679
[junit] 2010-11-18 16:32:45,680 INFO net.NetworkTopology
(NetworkTopology.java:add(331)) - Adding a new node:
/default-rack/127.0.0.1:43306
[junit] 2010-11-18 16:32:45,684 INFO datanode.DataNode
(DataNode.java:register(697)) - New storage id
DS-835262486-127.0.1.1-43306-1290097965679 is assigned to data-node
127.0.0.1:43306
[junit] 2010-11-18 16:32:45,685 INFO datanode.DataNode
(DataNode.java:run(1419)) - DatanodeRegistration(127.0.0.1:43306,
storageID=DS-835262486-127.0.1.1-43306-1290097965679, infoPort=60767,
ipcPort=52910)In DataNode.run, data =
FSDataset{dirpath='<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data5/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current/finalized'}>
[junit] 2010-11-18 16:32:45,685 INFO ipc.Server (Server.java:run(608)) -
IPC Server Responder: starting
[junit] 2010-11-18 16:32:45,686 INFO ipc.Server (Server.java:run(443)) -
IPC Server listener on 52910: starting
[junit] 2010-11-18 16:32:45,686 INFO ipc.Server (Server.java:run(1369)) -
IPC Server handler 0 on 52910: starting
[junit] 2010-11-18 16:32:45,687 INFO datanode.DataNode
(DataNode.java:offerService(887)) - using BLOCKREPORT_INTERVAL of 21600000msec
Initial delay: 0msec
[junit] 2010-11-18 16:32:45,692 INFO datanode.DataNode
(DataNode.java:blockReport(1126)) - BlockReport of 0 blocks got processed in 2
msecs
[junit] 2010-11-18 16:32:45,692 INFO datanode.DataNode
(DataNode.java:offerService(929)) - Starting Periodic block scanner.
[junit] 2010-11-18 16:32:45,695 DEBUG hdfs.DFSClient
(DFSClient.java:create(629)) - /pipeline_Fi_19/foo: masked=rwxr-xr-x
[junit] 2010-11-18 16:32:45,695 DEBUG hdfs.DFSClient
(DFSOutputStream.java:computePacketChunkSize(1144)) - computePacketChunkSize:
src=/pipeline_Fi_19/foo, chunkSize=516, chunksPerPacket=2, packetSize=1057
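The numbers in the computePacketChunkSize lines fit together: each 516-byte chunk is 512 data bytes plus a 4-byte checksum, and a packet is a fixed header followed by chunksPerPacket chunks, so 2 × 516 plus a 25-byte header gives packetSize=1057. A back-of-the-envelope check (the 25-byte header length is inferred from these log values, not taken from Hadoop's source):

```java
public class PacketMath {
    /** Reproduce the DFSClient packet arithmetic shown in the debug lines. */
    public static int packetSize(int bytesPerChecksum, int checksumSize,
                                 int chunksPerPacket, int headerLen) {
        int chunkSize = bytesPerChecksum + checksumSize; // 512 + 4 = 516
        return headerLen + chunksPerPacket * chunkSize;
    }

    public static void main(String[] args) {
        // 25 + 2 * 516 = 1057, matching packetSize=1057 in the log; each
        // full packet advances bytesCurBlock by 2 * 512 = 1024 data bytes.
        System.out.println(packetSize(512, 4, 2, 25));
    }
}
```

This also explains the later "packet full" lines: bytesCurBlock climbs in steps of 1024, one step per queued packet.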
[junit] 2010-11-18 16:32:45,708 INFO FSNamesystem.audit
(FSNamesystem.java:logAuditEvent(148)) - ugi=hudson ip=/127.0.0.1
cmd=create src=/pipeline_Fi_19/foo dst=null
perm=hudson:supergroup:rw-r--r--
[junit] 2010-11-18 16:32:45,710 DEBUG hdfs.DFSClient
(DFSOutputStream.java:writeChunk(1202)) - DFSClient writeChunk allocating new
packet seqno=0, src=/pipeline_Fi_19/foo, packetSize=1057, chunksPerPacket=2,
bytesCurBlock=0
[junit] 2010-11-18 16:32:45,711 DEBUG hdfs.DFSClient
(DFSOutputStream.java:writeChunk(1221)) - DFSClient writeChunk packet full
seqno=0, src=/pipeline_Fi_19/foo, bytesCurBlock=1024, blockSize=1048576,
appendChunk=false
[junit] 2010-11-18 16:32:45,711 DEBUG hdfs.DFSClient
(DFSOutputStream.java:queueCurrentPacket(1157)) - Queued packet 0
[junit] 2010-11-18 16:32:45,712 DEBUG hdfs.DFSClient
(DFSOutputStream.java:run(444)) - Allocating new block
[junit] 2010-11-18 16:32:45,712 DEBUG hdfs.DFSClient
(DFSOutputStream.java:computePacketChunkSize(1144)) - computePacketChunkSize:
src=/pipeline_Fi_19/foo, chunkSize=516, chunksPerPacket=2, packetSize=1057
[junit] 2010-11-18 16:32:45,712 DEBUG hdfs.DFSClient
(DFSOutputStream.java:writeChunk(1202)) - DFSClient writeChunk allocating new
packet seqno=1, src=/pipeline_Fi_19/foo, packetSize=1057, chunksPerPacket=2,
bytesCurBlock=1024
[junit] 2010-11-18 16:32:45,713 DEBUG hdfs.DFSClient
(DFSOutputStream.java:writeChunk(1221)) - DFSClient writeChunk packet full
seqno=1, src=/pipeline_Fi_19/foo, bytesCurBlock=2048, blockSize=1048576,
appendChunk=false
[junit] 2010-11-18 16:32:45,713 DEBUG hdfs.DFSClient
(DFSOutputStream.java:queueCurrentPacket(1157)) - Queued packet 1
[junit] 2010-11-18 16:32:45,713 DEBUG hdfs.DFSClient
(DFSOutputStream.java:computePacketChunkSize(1144)) - computePacketChunkSize:
src=/pipeline_Fi_19/foo, chunkSize=516, chunksPerPacket=2, packetSize=1057
[junit] 2010-11-18 16:32:45,713 DEBUG hdfs.DFSClient
(DFSOutputStream.java:writeChunk(1202)) - DFSClient writeChunk allocating new
packet seqno=2, src=/pipeline_Fi_19/foo, packetSize=1057, chunksPerPacket=2,
bytesCurBlock=2048
[junit] 2010-11-18 16:32:45,713 INFO hdfs.StateChange
(FSNamesystem.java:allocateBlock(1753)) - BLOCK* NameSystem.allocateBlock:
/pipeline_Fi_19/foo.
blk_-1087155876419230760_1001{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:41536|RBW],
ReplicaUnderConstruction[127.0.0.1:43306|RBW],
ReplicaUnderConstruction[127.0.0.1:36464|RBW]]}
[junit] 2010-11-18 16:32:45,714 DEBUG hdfs.DFSClient
(DFSOutputStream.java:writeChunk(1221)) - DFSClient writeChunk packet full
seqno=2, src=/pipeline_Fi_19/foo, bytesCurBlock=3072, blockSize=1048576,
appendChunk=false
[junit] 2010-11-18 16:32:45,714 DEBUG hdfs.DFSClient
(DFSOutputStream.java:queueCurrentPacket(1157)) - Queued packet 2
[junit] 2010-11-18 16:32:45,715 DEBUG hdfs.DFSClient
(DFSOutputStream.java:computePacketChunkSize(1144)) - computePacketChunkSize:
src=/pipeline_Fi_19/foo, chunkSize=516, chunksPerPacket=2, packetSize=1057
[junit] 2010-11-18 16:32:45,715 INFO protocol.ClientProtocolAspects
(ClientProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_protocol_ClientProtocolAspects$1$7076326d(35))
- FI: addBlock Pipeline[127.0.0.1:41536, 127.0.0.1:43306, 127.0.0.1:36464]
[junit] 2010-11-18 16:32:45,715 DEBUG hdfs.DFSClient
(DFSOutputStream.java:createBlockOutputStream(881)) - pipeline = 127.0.0.1:41536
[junit] 2010-11-18 16:32:45,715 DEBUG hdfs.DFSClient
(DFSOutputStream.java:createBlockOutputStream(881)) - pipeline = 127.0.0.1:43306
[junit] 2010-11-18 16:32:45,715 DEBUG hdfs.DFSClient
(DFSOutputStream.java:createBlockOutputStream(881)) - pipeline = 127.0.0.1:36464
[junit] 2010-11-18 16:32:45,716 DEBUG hdfs.DFSClient
(DFSOutputStream.java:createBlockOutputStream(891)) - Connecting to
127.0.0.1:41536
[junit] 2010-11-18 16:32:45,716 DEBUG datanode.DataNode
(DataXceiver.java:<init>(86)) - Number of active connections is: 1
[junit] 2010-11-18 16:32:45,716 DEBUG hdfs.DFSClient
(DFSOutputStream.java:createBlockOutputStream(900)) - Send buf size 131071
[junit] 2010-11-18 16:32:45,717 INFO datanode.DataTransferProtocolAspects
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51))
- FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:41536
[junit] 2010-11-18 16:32:45,717 DEBUG hdfs.DFSClient
(DFSOutputStream.java:writeChunk(1202)) - DFSClient writeChunk allocating new
packet seqno=3, src=/pipeline_Fi_19/foo, packetSize=1057, chunksPerPacket=2,
bytesCurBlock=3072
[junit] 2010-11-18 16:32:45,717 INFO datanode.DataTransferProtocolAspects
(DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(73))
- FI: receiverOpWriteBlock
[junit] 2010-11-18 16:32:45,717 DEBUG hdfs.DFSClient
(DFSOutputStream.java:writeChunk(1221)) - DFSClient writeChunk packet full
seqno=3, src=/pipeline_Fi_19/foo, bytesCurBlock=4096, blockSize=1048576,
appendChunk=false
[junit] 2010-11-18 16:32:45,718 INFO fi.FiTestUtil
(DataTransferTestUtil.java:run(344)) - FI: SleepAction:pipeline_Fi_19, index=0,
duration=[0, 3000), datanode=127.0.0.1:41536
[junit] 2010-11-18 16:32:45,718 INFO fi.FiTestUtil
(FiTestUtil.java:initialValue(37)) - Thread[DataXceiver for client
/127.0.0.1:44424 [Waiting for operation],5,dataXceiverServer]:
seed=-5326246243198308466
[junit] 2010-11-18 16:32:45,718 INFO fi.FiTestUtil
(FiTestUtil.java:sleep(92)) - DataXceiver for client /127.0.0.1:44424 [Waiting
for operation] sleeps for 1556ms
[junit] 2010-11-18 16:32:45,718 DEBUG hdfs.DFSClient
(DFSOutputStream.java:queueCurrentPacket(1157)) - Queued packet 3
[junit] 2010-11-18 16:32:45,719 DEBUG hdfs.DFSClient
(DFSOutputStream.java:computePacketChunkSize(1144)) - computePacketChunkSize:
src=/pipeline_Fi_19/foo, chunkSize=516, chunksPerPacket=2, packetSize=1057
[junit] 2010-11-18 16:32:45,719 DEBUG hdfs.DFSClient
(DFSOutputStream.java:writeChunk(1202)) - DFSClient writeChunk allocating new
packet seqno=4, src=/pipeline_Fi_19/foo, packetSize=1057, chunksPerPacket=2,
bytesCurBlock=4096
[junit] 2010-11-18 16:32:45,719 DEBUG hdfs.DFSClient
(DFSOutputStream.java:writeChunk(1221)) - DFSClient writeChunk packet full
seqno=4, src=/pipeline_Fi_19/foo, bytesCurBlock=5120, blockSize=1048576,
appendChunk=false
[junit] 2010-11-18 16:32:45,720 DEBUG hdfs.DFSClient
(DFSOutputStream.java:queueCurrentPacket(1157)) - Queued packet 4
[junit] 2010-11-18 16:32:45,720 DEBUG hdfs.DFSClient
(DFSOutputStream.java:computePacketChunkSize(1144)) - computePacketChunkSize:
src=/pipeline_Fi_19/foo, chunkSize=516, chunksPerPacket=2, packetSize=1057
[junit] 2010-11-18 16:32:45,720 DEBUG hdfs.DFSClient
(DFSOutputStream.java:writeChunk(1202)) - DFSClient writeChunk allocating new
packet seqno=5, src=/pipeline_Fi_19/foo, packetSize=1057, chunksPerPacket=2,
bytesCurBlock=5120
[junit] 2010-11-18 16:32:45,721 DEBUG hdfs.DFSClient
(DFSOutputStream.java:writeChunk(1221)) - DFSClient writeChunk packet full
seqno=5, src=/pipeline_Fi_19/foo, bytesCurBlock=6144, blockSize=1048576,
appendChunk=false
[junit] 2010-11-18 16:32:45,721 DEBUG hdfs.DFSClient
(DFSOutputStream.java:queueCurrentPacket(1157)) - Queued packet 5
[junit] 2010-11-18 16:32:45,721 DEBUG hdfs.DFSClient
(DFSOutputStream.java:computePacketChunkSize(1144)) - computePacketChunkSize:
src=/pipeline_Fi_19/foo, chunkSize=516, chunksPerPacket=2, packetSize=1057
[junit] 2010-11-18 16:32:45,721 DEBUG hdfs.DFSClient
(DFSOutputStream.java:writeChunk(1202)) - DFSClient writeChunk allocating new
packet seqno=6, src=/pipeline_Fi_19/foo, packetSize=1057, chunksPerPacket=2,
bytesCurBlock=6144
[junit] 2010-11-18 16:32:45,722 DEBUG hdfs.DFSClient
(DFSOutputStream.java:queueCurrentPacket(1157)) - Queued packet 6
[junit] 2010-11-18 16:32:45,722 INFO hdfs.DFSClientAspects
(DFSClientAspects.aj:ajc$before$org_apache_hadoop_hdfs_DFSClientAspects$5$5ba7280d(86))
- FI: before pipelineClose:
[junit] 2010-11-18 16:32:45,722 DEBUG hdfs.DFSClient
(DFSOutputStream.java:queueCurrentPacket(1157)) - Queued packet 7
[junit] 2010-11-18 16:32:45,723 DEBUG hdfs.DFSClient
(DFSOutputStream.java:waitForAckedSeqno(1408)) - Waiting for ack for: 7
[junit] 2010-11-18 16:32:47,275 DEBUG datanode.DataNode
(DataXceiver.java:opWriteBlock(246)) - writeBlock receive buf size 131071 tcp
no delay true
[junit] 2010-11-18 16:32:47,275 INFO datanode.DataNode
(DataXceiver.java:opWriteBlock(251)) - Receiving block
blk_-1087155876419230760_1001 src: /127.0.0.1:44424 dest: /127.0.0.1:41536
[junit] 2010-11-18 16:32:47,276 DEBUG datanode.DataNode
(ReplicaInPipeline.java:createStreams(176)) - writeTo blockfile is
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data3/current/rbw/blk_-1087155876419230760>
of size 0
[junit] 2010-11-18 16:32:47,276 DEBUG datanode.DataNode
(ReplicaInPipeline.java:createStreams(178)) - writeTo metafile is
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data3/current/rbw/blk_-1087155876419230760_1001.meta>
of size 0
[junit] 2010-11-18 16:32:47,277 DEBUG datanode.DataNode
(DataXceiver.java:<init>(86)) - Number of active connections is: 1
[junit] 2010-11-18 16:32:47,287 INFO datanode.DataTransferProtocolAspects
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51))
- FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:43306
[junit] 2010-11-18 16:32:47,287 INFO datanode.DataTransferProtocolAspects
(DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(73))
- FI: receiverOpWriteBlock
[junit] 2010-11-18 16:32:47,287 INFO fi.FiTestUtil
(DataTransferTestUtil.java:run(344)) - FI: SleepAction:pipeline_Fi_19, index=1,
duration=[0, 3000), datanode=127.0.0.1:43306
[junit] 2010-11-18 16:32:47,287 INFO fi.FiTestUtil
(FiTestUtil.java:initialValue(37)) - Thread[DataXceiver for client
/127.0.0.1:60263 [Waiting for operation],5,dataXceiverServer]:
seed=7365113689811878303
[junit] 2010-11-18 16:32:47,287 INFO fi.FiTestUtil
(FiTestUtil.java:sleep(92)) - DataXceiver for client /127.0.0.1:60263 [Waiting
for operation] sleeps for 506ms
[junit] 2010-11-18 16:32:47,794 DEBUG datanode.DataNode
(DataXceiver.java:opWriteBlock(246)) - writeBlock receive buf size 131071 tcp
no delay true
[junit] 2010-11-18 16:32:47,794 INFO datanode.DataNode
(DataXceiver.java:opWriteBlock(251)) - Receiving block
blk_-1087155876419230760_1001 src: /127.0.0.1:60263 dest: /127.0.0.1:43306
[junit] 2010-11-18 16:32:47,795 DEBUG datanode.DataNode
(ReplicaInPipeline.java:createStreams(176)) - writeTo blockfile is
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data5/current/rbw/blk_-1087155876419230760>
of size 0
[junit] 2010-11-18 16:32:47,795 DEBUG datanode.DataNode
(ReplicaInPipeline.java:createStreams(178)) - writeTo metafile is
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data5/current/rbw/blk_-1087155876419230760_1001.meta>
of size 0
[junit] 2010-11-18 16:32:47,796 DEBUG datanode.DataNode
(DataXceiver.java:<init>(86)) - Number of active connections is: 1
[junit] 2010-11-18 16:32:47,796 INFO datanode.DataTransferProtocolAspects
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51))
- FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:36464
[junit] 2010-11-18 16:32:47,796 INFO datanode.DataTransferProtocolAspects
(DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(73))
- FI: receiverOpWriteBlock
[junit] 2010-11-18 16:32:47,796 INFO fi.FiTestUtil
(DataTransferTestUtil.java:run(344)) - FI: SleepAction:pipeline_Fi_19, index=2,
duration=[0, 3000), datanode=127.0.0.1:36464
[junit] 2010-11-18 16:32:47,797 INFO fi.FiTestUtil
(FiTestUtil.java:initialValue(37)) - Thread[DataXceiver for client
/127.0.0.1:59520 [Waiting for operation],5,dataXceiverServer]:
seed=-3002906034623524893
[junit] 2010-11-18 16:32:47,797 INFO fi.FiTestUtil
(FiTestUtil.java:sleep(92)) - DataXceiver for client /127.0.0.1:59520 [Waiting
for operation] sleeps for 1956ms
[junit] 2010-11-18 16:32:49,753 DEBUG datanode.DataNode
(DataXceiver.java:opWriteBlock(246)) - writeBlock receive buf size 131071 tcp
no delay true
[junit] 2010-11-18 16:32:49,753 INFO datanode.DataNode
(DataXceiver.java:opWriteBlock(251)) - Receiving block
blk_-1087155876419230760_1001 src: /127.0.0.1:59520 dest: /127.0.0.1:36464
[junit] 2010-11-18 16:32:49,754 DEBUG datanode.DataNode
(ReplicaInPipeline.java:createStreams(176)) - writeTo blockfile is
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data1/current/rbw/blk_-1087155876419230760>
of size 0
[junit] 2010-11-18 16:32:49,754 DEBUG datanode.DataNode
(ReplicaInPipeline.java:createStreams(178)) - writeTo metafile is
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build-fi/test/data/dfs/data/data1/current/rbw/blk_-1087155876419230760_1001.meta>
of size 0
[junit] 2010-11-18 16:32:49,755 INFO datanode.DataNode
(DataXceiver.java:opWriteBlock(371)) - Datanode 0 forwarding connect ack to
upstream firstbadlink is
[junit] 2010-11-18 16:32:49,755 INFO datanode.DataTransferProtocolAspects
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61))
- FI: statusRead SUCCESS, datanode=127.0.0.1:43306
[junit] 2010-11-18 16:32:49,755 INFO datanode.BlockReceiverAspects
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(53))
- FI: callReceivePacket, datanode=127.0.0.1:36464
[junit] 2010-11-18 16:32:49,755 INFO fi.FiTestUtil
(DataTransferTestUtil.java:run(344)) - FI: SleepAction:pipeline_Fi_19, index=1,
duration=[0, 3000), datanode=127.0.0.1:43306
[junit] 2010-11-18 16:32:49,756 INFO fi.FiTestUtil
(FiTestUtil.java:sleep(92)) - DataXceiver for client /127.0.0.1:60263
[Receiving block blk_-1087155876419230760_1001 client=DFSClient_-244463401]
sleeps for 1646ms
[junit] 2010-11-18 16:32:49,755 DEBUG datanode.DataNode
(BlockReceiver.java:run(843)) - PacketResponder 0 seqno = -2 for block
blk_-1087155876419230760_1001 waiting for local datanode to finish write.
[junit] 2010-11-18 16:32:49,755 INFO fi.FiTestUtil
(DataTransferTestUtil.java:run(344)) - FI: SleepAction:pipeline_Fi_19, index=2,
duration=[0, 3000), datanode=127.0.0.1:36464
[junit] 2010-11-18 16:32:49,756 INFO fi.FiTestUtil
(FiTestUtil.java:sleep(92)) - DataXceiver for client /127.0.0.1:59520
[Receiving block blk_-1087155876419230760_1001 client=DFSClient_-244463401]
sleeps for 1012ms
[junit] 2010-11-18 16:32:51,402 INFO datanode.DataNode
(DataXceiver.java:opWriteBlock(338)) - Datanode 1 got response for connect ack
from downstream datanode with firstbadlink as
[junit] 2010-11-18 16:32:51,402 INFO datanode.DataNode
(DataXceiver.java:opWriteBlock(371)) - Datanode 1 forwarding connect ack to
upstream firstbadlink is
[junit] 2010-11-18 16:32:51,403 INFO datanode.DataTransferProtocolAspects
(DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61))
- FI: statusRead SUCCESS, datanode=127.0.0.1:41536
[junit] 2010-11-18 16:32:51,403 INFO datanode.BlockReceiverAspects
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(53))
- FI: callReceivePacket, datanode=127.0.0.1:43306
[junit] 2010-11-18 16:32:51,403 INFO fi.FiTestUtil
(DataTransferTestUtil.java:run(344)) - FI: SleepAction:pipeline_Fi_19, index=0,
duration=[0, 3000), datanode=127.0.0.1:41536
[junit] 2010-11-18 16:32:51,403 INFO fi.FiTestUtil
(DataTransferTestUtil.java:run(344)) - FI: SleepAction:pipeline_Fi_19, index=1,
duration=[0, 3000), datanode=127.0.0.1:43306
[junit] 2010-11-18 16:32:51,403 INFO fi.FiTestUtil
(FiTestUtil.java:sleep(92)) - DataXceiver for client /127.0.0.1:44424
[Receiving block blk_-1087155876419230760_1001 client=DFSClient_-244463401]
sleeps for 1803ms
[junit] 2010-11-18 16:32:51,403 INFO fi.FiTestUtil
(FiTestUtil.java:sleep(92)) - DataXceiver for client /127.0.0.1:60263
[Receiving block blk_-1087155876419230760_1001 client=DFSClient_-244463401]
sleeps for 2647ms
[junit] 2010-11-18 16:32:53,207 INFO datanode.DataNode
(DataXceiver.java:opWriteBlock(338)) - Datanode 2 got response for connect ack
from downstream datanode with firstbadlink as
[junit] 2010-11-18 16:32:53,207 INFO datanode.DataNode
(DataXceiver.java:opWriteBlock(371)) - Datanode 2 forwarding connect ack to
upstream firstbadlink is
[junit] 2010-11-18 16:32:53,207 INFO datanode.BlockReceiverAspects
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(53))
- FI: callReceivePacket, datanode=127.0.0.1:41536
[junit] 2010-11-18 16:32:53,207 INFO hdfs.DFSClientAspects
(DFSClientAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_DFSClientAspects$2$9396d2df(48))
- FI: after pipelineInitNonAppend: hasError=false errorIndex=-1
[junit] 2010-11-18 16:32:53,207 INFO fi.FiTestUtil
(DataTransferTestUtil.java:run(344)) - FI: SleepAction:pipeline_Fi_19, index=0,
duration=[0, 3000), datanode=127.0.0.1:41536
[junit] 2010-11-18 16:32:53,208 INFO fi.FiTestUtil
(FiTestUtil.java:sleep(92)) - DataXceiver for client /127.0.0.1:44424
[Receiving block blk_-1087155876419230760_1001 client=DFSClient_-244463401]
sleeps for 2638ms
[junit] 2010-11-18 16:32:53,208 DEBUG hdfs.DFSClient
(DFSOutputStream.java:run(496)) - DataStreamer block
blk_-1087155876419230760_1001 sending packet packet seqno:0 offsetInBlock:0
lastPacketInBlock:false lastByteOffsetInBlock: 1024
[junit] 2010-11-18 16:32:53,208 DEBUG hdfs.DFSClient
(DFSOutputStream.java:run(496)) - DataStreamer block
blk_-1087155876419230760_1001 sending packet packet seqno:1 offsetInBlock:1024
lastPacketInBlock:false lastByteOffsetInBlock: 2048
[junit] 2010-11-18 16:32:53,209 DEBUG hdfs.DFSClient
(DFSOutputStream.java:run(496)) - DataStreamer block
blk_-1087155876419230760_1001 sending packet packet seqno:2 offsetInBlock:2048
lastPacketInBlock:false lastByteOffsetInBlock: 3072
[junit] 2010-11-18 16:32:53,209 DEBUG hdfs.DFSClient
(DFSOutputStream.java:run(496)) - DataStreamer block
blk_-1087155876419230760_1001 sending packet packet seqno:3 offsetInBlock:3072
lastPacketInBlock:false lastByteOffsetInBlock: 4096
[junit] 2010-11-18 16:32:53,209 DEBUG hdfs.DFSClient
(DFSOutputStream.java:run(496)) - DataStreamer block
blk_-1087155876419230760_1001 sending packet packet seqno:4 offsetInBlock:4096
lastPacketInBlock:false lastByteOffsetInBlock: 5120
[junit] 2010-11-18 16:32:53,209 DEBUG hdfs.DFSClient
(DFSOutputStream.java:run(496)) - DataStreamer block
blk_-1087155876419230760_1001 sending packet packet seqno:5 offsetInBlock:5120
lastPacketInBlock:false lastByteOffsetInBlock: 6144
[junit] 2010-11-18 16:32:53,209 DEBUG hdfs.DFSClient
(DFSOutputStream.java:run(496)) - DataStreamer block
blk_-1087155876419230760_1001 sending packet packet seqno:6 offsetInBlock:6144
lastPacketInBlock:false lastByteOffsetInBlock: 6170
[junit] 2010-11-18 16:32:55,846 INFO datanode.BlockReceiverAspects
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(53))
- FI: callReceivePacket, datanode=127.0.0.1:41536
[junit] 2010-11-18 16:32:55,847 INFO fi.FiTestUtil
(DataTransferTestUtil.java:run(344)) - FI: SleepAction:pipeline_Fi_19, index=0,
duration=[0, 3000), datanode=127.0.0.1:41536
[junit] 2010-11-18 16:32:55,847 INFO fi.FiTestUtil
(FiTestUtil.java:sleep(92)) - DataXceiver for client /127.0.0.1:44424
[Receiving block blk_-1087155876419230760_1001 client=DFSClient_-244463401]
sleeps for 863ms
[junit] 2010-11-18 16:32:56,710 DEBUG datanode.DataNode
(BlockReceiver.java:receivePacket(456)) - Receiving one packet for block
blk_-1087155876419230760_1001 of length 1024 seqno 0 offsetInBlock 0
lastPacketInBlock false
[junit] 2010-11-18 16:32:56,710 DEBUG datanode.DataNode
(BlockReceiver.java:enqueue(788)) - PacketResponder 2 adding seqno 0 to ack
queue.
[junit] 2010-11-18 16:32:56,710 INFO datanode.BlockReceiverAspects
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$2$56c32214(71))
- FI: callWritePacketToDisk
[junit] 2010-11-18 16:32:56,710 INFO datanode.BlockReceiverAspects
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(53))
- FI: callReceivePacket, datanode=127.0.0.1:43306
[junit] 2010-11-18 16:32:56,710 INFO datanode.BlockReceiverAspects
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(53))
- FI: callReceivePacket, datanode=127.0.0.1:41536
[junit] 2010-11-18 16:32:56,711 INFO fi.FiTestUtil
(DataTransferTestUtil.java:run(344)) - FI: SleepAction:pipeline_Fi_19, index=1,
duration=[0, 3000), datanode=127.0.0.1:43306
[junit] 2010-11-18 16:32:56,711 INFO fi.FiTestUtil
(DataTransferTestUtil.java:run(344)) - FI: SleepAction:pipeline_Fi_19, index=0,
duration=[0, 3000), datanode=127.0.0.1:41536
[junit] 2010-11-18 16:32:56,711 INFO fi.FiTestUtil
(FiTestUtil.java:sleep(92)) - DataXceiver for client /127.0.0.1:60263
[Receiving block blk_-1087155876419230760_1001 client=DFSClient_-244463401]
sleeps for 2454ms
[junit] 2010-11-18 16:32:56,711 INFO fi.FiTestUtil
(FiTestUtil.java:sleep(92)) - DataXceiver for client /127.0.0.1:44424
[Receiving block blk_-1087155876419230760_1001 client=DFSClient_-244463401]
sleeps for 353ms
[junit] 2010-11-18 16:32:57,065 INFO datanode.BlockReceiverAspects
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(53))
- FI: callReceivePacket, datanode=127.0.0.1:41536
[junit] 2010-11-18 16:32:57,065 INFO fi.FiTestUtil
(DataTransferTestUtil.java:run(344)) - FI: SleepAction:pipeline_Fi_19, index=0,
duration=[0, 3000), datanode=127.0.0.1:41536
[junit] 2010-11-18 16:32:57,065 INFO fi.FiTestUtil
(FiTestUtil.java:sleep(92)) - DataXceiver for client /127.0.0.1:44424
[Receiving block blk_-1087155876419230760_1001 client=DFSClient_-244463401]
sleeps for 377ms
[junit] 2010-11-18 16:32:57,442 DEBUG datanode.DataNode
(BlockReceiver.java:receivePacket(456)) - Receiving one packet for block
blk_-1087155876419230760_1001 of length 1024 seqno 1 offsetInBlock 1024
lastPacketInBlock false
[junit] 2010-11-18 16:32:57,442 DEBUG datanode.DataNode
(BlockReceiver.java:enqueue(788)) - PacketResponder 2 adding seqno 1 to ack
queue.
[junit] 2010-11-18 16:32:57,443 INFO datanode.BlockReceiverAspects
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$2$56c32214(71))
- FI: callWritePacketToDisk
[junit] 2010-11-18 16:32:57,443 INFO datanode.BlockReceiverAspects
(BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(53))
- FI: callReceivePacket, datanode=127.0.0.1:41536
[junit] 2010-11-18 16:32:57,443 INFO fi.FiTestUtil
(DataTransferTestUtil.java:run(344)) - FI: SleepAction:pipeline_Fi_19, index=0,
duration=[0, 3000), datanode=127.0.0.1:41536
[junit] 2010-11-18 16:32:57,443 INFO fi.FiTestUtil
(FiTestUtil.java:sleep(92)) - DataXceiver for client /127.0.0.1:44424
[Receiving block blk_-1087155876419230760_1001 client=DFSClient_-244463401]
sleeps for 1915ms
Build timed out. Aborting
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure