See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/267/changes>
Changes:
[cmccabe] HDFS-8844. TestHDFSCLI does not cleanup the test directory (Masatake Iwasaki via Colin P. McCabe)
[lei] HADOOP-12269. Update aws-sdk dependency to 1.10.6 (Thomas Demoor via Lei (Eddy) Xu)
[aajisaka] HADOOP-12274. Remove direct download link from BUILDING.txt. Contributed by Caleb Severn.
------------------------------------------
[...truncated 6498 lines...]
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1823)
at org.apache.hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant.teardown(TestBlockPlacementPolicyRackFaultTolerant.java:85)
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 196.378 sec - in org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeHttpServer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.192 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeHttpServer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFSImage
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.738 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSImage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.754 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.806 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.482 sec - in org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFileLimit
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.177 sec - in org.apache.hadoop.hdfs.server.namenode.TestFileLimit
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFileContextXAttr
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.432 sec - in org.apache.hadoop.hdfs.server.namenode.TestFileContextXAttr
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeRecovery
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.303 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeRecovery
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestAddBlockRetry
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.471 sec - in org.apache.hadoop.hdfs.server.namenode.TestAddBlockRetry
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 7.219 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
testExcludeDataNodes(org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality) Time elapsed: 0.703 sec <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:713)
at io.netty.util.concurrent.ThreadPerTaskExecutor.execute(ThreadPerTaskExecutor.java:33)
at io.netty.util.concurrent.SingleThreadEventExecutor.doStartThread(SingleThreadEventExecutor.java:692)
at io.netty.util.concurrent.SingleThreadEventExecutor.shutdownGracefully(SingleThreadEventExecutor.java:499)
at io.netty.util.concurrent.MultithreadEventExecutorGroup.shutdownGracefully(MultithreadEventExecutorGroup.java:160)
at io.netty.util.concurrent.AbstractEventExecutorGroup.shutdownGracefully(AbstractEventExecutorGroup.java:70)
at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.close(DatanodeHttpServer.java:216)
at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:1722)
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:1884)
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1856)
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1830)
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1823)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:845)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:473)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:432)
at org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality.testExcludeDataNodes(TestWebHdfsDataLocality.java:154)
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs
Tests run: 4, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 10.707 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs
testNameEditsConfigs(org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs) Time elapsed: 4.305 sec <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:713)
at io.netty.util.concurrent.ThreadPerTaskExecutor.execute(ThreadPerTaskExecutor.java:33)
at io.netty.util.concurrent.SingleThreadEventExecutor.doStartThread(SingleThreadEventExecutor.java:692)
at io.netty.util.concurrent.SingleThreadEventExecutor.shutdownGracefully(SingleThreadEventExecutor.java:499)
at io.netty.util.concurrent.MultithreadEventExecutorGroup.shutdownGracefully(MultithreadEventExecutorGroup.java:160)
at io.netty.util.concurrent.AbstractEventExecutorGroup.shutdownGracefully(AbstractEventExecutorGroup.java:70)
at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.close(DatanodeHttpServer.java:217)
at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:1722)
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:1884)
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1856)
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1830)
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1823)
at org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs.testNameEditsConfigs(TestNameEditsConfigs.java:221)
testNameEditsRequiredConfigs(org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs) Time elapsed: 0.035 sec <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:713)
at org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1194)
at org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1158)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:163)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:989)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:342)
at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:219)
at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:961)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:882)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:814)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:473)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:432)
at org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs.testNameEditsRequiredConfigs(TestNameEditsConfigs.java:344)
testNameEditsConfigsFailure(org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs) Time elapsed: 1.121 sec <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:713)
at io.netty.util.concurrent.ThreadPerTaskExecutor.execute(ThreadPerTaskExecutor.java:33)
at io.netty.util.concurrent.SingleThreadEventExecutor.doStartThread(SingleThreadEventExecutor.java:692)
at io.netty.util.concurrent.SingleThreadEventExecutor.shutdownGracefully(SingleThreadEventExecutor.java:499)
at io.netty.util.concurrent.MultithreadEventExecutorGroup.shutdownGracefully(MultithreadEventExecutorGroup.java:160)
at io.netty.util.concurrent.AbstractEventExecutorGroup.shutdownGracefully(AbstractEventExecutorGroup.java:70)
at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.close(DatanodeHttpServer.java:217)
at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:1722)
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:1884)
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1856)
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1830)
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1823)
at org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs.testNameEditsConfigsFailure(TestNameEditsConfigs.java:451)
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 6.815 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes
testFsck(org.apache.hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes) Time elapsed: 6.496 sec <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:713)
at io.netty.util.concurrent.ThreadPerTaskExecutor.execute(ThreadPerTaskExecutor.java:33)
at io.netty.util.concurrent.SingleThreadEventExecutor.doStartThread(SingleThreadEventExecutor.java:692)
at io.netty.util.concurrent.SingleThreadEventExecutor.shutdownGracefully(SingleThreadEventExecutor.java:499)
at io.netty.util.concurrent.MultithreadEventExecutorGroup.shutdownGracefully(MultithreadEventExecutorGroup.java:160)
at io.netty.util.concurrent.AbstractEventExecutorGroup.shutdownGracefully(AbstractEventExecutorGroup.java:70)
at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.close(DatanodeHttpServer.java:216)
at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:1722)
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:1884)
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1856)
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1830)
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1823)
at org.apache.hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes.runTest(TestFsckWithMultipleNameNodes.java:143)
at org.apache.hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes.testFsck(TestFsckWithMultipleNameNodes.java:154)
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.542 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd
org.apache.hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd Time elapsed: 2.542 sec <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:713)
at org.apache.hadoop.util.JvmPauseMonitor.start(JvmPauseMonitor.java:81)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:661)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:809)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:793)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1482)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1208)
at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:971)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:882)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:814)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:473)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:432)
at org.apache.hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd.setUpBeforeClass(TestFavoredNodesEndToEnd.java:70)
Results :
Tests in error:
TestBlockPlacementPolicyRackFaultTolerant.teardown:85 » OutOfMemory unable to ...
TestWebHdfsDataLocality.testExcludeDataNodes:154 » OutOfMemory unable to creat...
TestNameEditsConfigs.testNameEditsConfigs:221 » OutOfMemory unable to create n...
TestNameEditsConfigs.testNameEditsRequiredConfigs:344 » OutOfMemory unable to ...
TestNameEditsConfigs.testNameEditsConfigsFailure:451 » OutOfMemory unable to c...
TestFsckWithMultipleNameNodes.testFsck:154->runTest:143 » OutOfMemory unable t...
TestFavoredNodesEndToEnd.setUpBeforeClass:70 » OutOfMemory unable to create ne...
Tests run: 1049, Failures: 0, Errors: 7, Skipped: 9
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks
main:
[mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO]
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:18 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [ 01:37 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [ 0.062 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:40 h
[INFO] Finished at: 2015-08-05T13:16:09+00:00
[INFO] Final Memory: 61M/1145M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.OutOfMemoryError: unable to create new native thread -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 4319956 bytes
Compression is 0.0%
Took 3.6 sec
Recording test results
Updating HADOOP-12269
Updating HDFS-8844
Updating HADOOP-12274