See https://builds.apache.org/job/Hadoop-Mapreduce-22-branch/87/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 500297 lines...]
[junit] 11/11/03 00:32:26 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
[junit] 11/11/03 00:32:26 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:48938, storageID=DS-475485639-67.195.138.25-48938-1320280345227, infoPort=36358, ipcPort=59430):Finishing DataNode in: FSDataset{dirpath='/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data3/current/finalized,/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data4/current/finalized'}
[junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping server on 59430
[junit] 11/11/03 00:32:26 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
[junit] 11/11/03 00:32:26 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
[junit] 11/11/03 00:32:26 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
[junit] 11/11/03 00:32:26 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
[junit] 11/11/03 00:32:26 INFO hdfs.MiniDFSCluster: Shutting down DataNode 0
[junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping server on 42200
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 0 on 42200: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 2 on 42200: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 1 on 42200: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping IPC Server listener on 42200
[junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping IPC Server Responder
[junit] 11/11/03 00:32:26 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 25
[junit] 11/11/03 00:32:26 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
[junit] 11/11/03 00:32:26 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
[junit] 11/11/03 00:32:26 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:44434, storageID=DS-908436179-67.195.138.25-44434-1320280345099, infoPort=55557, ipcPort=42200):Finishing DataNode in: FSDataset{dirpath='/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data1/current/finalized,/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data2/current/finalized'}
[junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping server on 42200
[junit] 11/11/03 00:32:26 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
[junit] 11/11/03 00:32:26 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
[junit] 11/11/03 00:32:26 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
[junit] 11/11/03 00:32:26 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
[junit] 11/11/03 00:32:26 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 11/11/03 00:32:26 WARN namenode.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 11/11/03 00:32:26 INFO namenode.FSEditLog: Number of transactions: 14 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 0 Number of syncs: 7 SyncTimes(ms): 5 2
[junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping server on 58221
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 0 on 58221: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 2 on 58221: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 5 on 58221: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 8 on 58221: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 9 on 58221: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 1 on 58221: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping IPC Server listener on 58221
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 4 on 58221: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 7 on 58221: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 6 on 58221: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 3 on 58221: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping IPC Server Responder
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.89 sec
BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:817: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:796: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/contrib/build.xml:87: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/contrib/raid/build.xml:60: Tests failed!
Total time: 193 minutes 53 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-3139
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
FAILED: junit.framework.TestSuite.org.apache.hadoop.mapred.TestFairSchedulerSystem
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
    at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
    at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
    at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
    at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
    at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:418)
    at org.apache.hadoop.mapred.TestFairSchedulerSystem.setUp(TestFairSchedulerSystem.java:74)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1055)
    at org.apache.hadoop.ipc.Client.call(Client.java:1031)
    at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
    at $Proxy6.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:235)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:275)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:249)
    at org.apache.hadoop.mapreduce.Cluster.createRPCProxy(Cluster.java:86)
    at org.apache.hadoop.mapreduce.Cluster.createClient(Cluster.java:98)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:74)
    at org.apache.hadoop.mapred.JobClient.init(JobClient.java:456)
    at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:435)
    at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:322)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:416)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:504)
    at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:206)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1164)
    at org.apache.hadoop.ipc.Client.call(Client.java:1008)
FAILED: org.apache.hadoop.raid.TestRaidNode.testPathFilter
Error Message:
Too many open files
    at sun.nio.ch.IOUtil.initPipe(Native Method)
    at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:49)
    at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.get(SocketIOWithTimeout.java:407)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:322)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:159)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:132)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:122)
    at org.apache.hadoop.hdfs.BlockReader.readChunk(BlockReader.java:297)
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:273)
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:225)
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:193)
    at org.apache.hadoop.hdfs.BlockReader.read(BlockReader.java:136)
    at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:466)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:517)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.raid.ParityInputStream.readExact(ParityInputStream.java:138)
    at org.apache.hadoop.raid.ParityInputStream.makeAvailable(ParityInputStream.java:117)
    at org.apache.hadoop.raid.ParityInputStream.drain(ParityInputStream.java:95)
    at org.apache.hadoop.raid.XORDecoder.fixErasedBlock(XORDecoder.java:74)
    at org.apache.hadoop.raid.Decoder.decodeFile(Decoder.java:147)
    at org.apache.hadoop.raid.RaidNode.unRaid(RaidNode.java:867)
    at org.apache.hadoop.raid.RaidNode.recoverFile(RaidNode.java:333)
    at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:349)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1482)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1478)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1476)
Stack Trace:
java.io.IOException: Too many open files
    at sun.nio.ch.IOUtil.initPipe(Native Method)
    at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:49)
    at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.get(SocketIOWithTimeout.java:407)
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:322)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:159)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:132)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:122)
    at org.apache.hadoop.hdfs.BlockReader.readChunk(BlockReader.java:297)
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:273)
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:225)
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:193)
    at org.apache.hadoop.hdfs.BlockReader.read(BlockReader.java:136)
    at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:466)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:517)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.raid.ParityInputStream.readExact(ParityInputStream.java:138)
    at org.apache.hadoop.raid.ParityInputStream.makeAvailable(ParityInputStream.java:117)
    at org.apache.hadoop.raid.ParityInputStream.drain(ParityInputStream.java:95)
    at org.apache.hadoop.raid.XORDecoder.fixErasedBlock(XORDecoder.java:74)
    at org.apache.hadoop.raid.Decoder.decodeFile(Decoder.java:147)
    at org.apache.hadoop.raid.RaidNode.unRaid(RaidNode.java:867)
    at org.apache.hadoop.raid.RaidNode.recoverFile(RaidNode.java:333)
    at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:349)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1482)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1478)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1476)
    at org.apache.hadoop.ipc.Client.call(Client.java:1028)
    at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
    at $Proxy11.recoverFile(Unknown Source)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:84)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy11.recoverFile(Unknown Source)
    at org.apache.hadoop.raid.RaidShell.recover(RaidShell.java:272)
    at org.apache.hadoop.raid.TestRaidNode.simulateError(TestRaidNode.java:576)
    at org.apache.hadoop.raid.TestRaidNode.doTestPathFilter(TestRaidNode.java:331)
    at org.apache.hadoop.raid.TestRaidNode.testPathFilter(TestRaidNode.java:257)
FAILED: org.apache.hadoop.streaming.TestDumpTypedBytes.testDumping
Error Message:
port out of range:-1
Stack Trace:
java.lang.IllegalArgumentException: port out of range:-1
    at java.net.InetSocketAddress.<init>(InetSocketAddress.java:118)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:519)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:459)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:459)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.activate(NameNode.java:403)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:387)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:576)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1538)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:445)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:378)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:259)
    at org.apache.hadoop.streaming.TestDumpTypedBytes.testDumping(TestDumpTypedBytes.java:42)
FAILED: org.apache.hadoop.streaming.TestLoadTypedBytes.testLoading
Error Message:
port out of range:-1
Stack Trace:
java.lang.IllegalArgumentException: port out of range:-1
    at java.net.InetSocketAddress.<init>(InetSocketAddress.java:118)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:519)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:459)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:459)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.activate(NameNode.java:403)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:387)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:576)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1538)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:445)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:378)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:259)
    at org.apache.hadoop.streaming.TestLoadTypedBytes.testLoading(TestLoadTypedBytes.java:42)