Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #35
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/35/changes

Changes:

[wheat9] HADOOP-10482. Fix various findbugs warnings in hadoop-common. Contributed by Haohui Mai.
[wheat9] HADOOP-11388. Remove deprecated o.a.h.metrics.file.FileContext. Contributed by Li Lu.
[aw] HADOOP-10950. rework heap management vars (John Smith via aw)
[aw] HADOOP-6590. Add a username check for hadoop sub-commands (John Smith via aw)
[aw] YARN-2437. start-yarn.sh/stop-yarn should give info (Varun Saxena via aw)
[wheat9] HADOOP-11386. Replace \n by %n in format hadoop-common format strings. Contributed by Li Lu.
[wheat9] HDFS-5578. [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments. Contributed by Andrew Purtell.
[arp] HDFS-7475. Make TestLazyPersistFiles#testLazyPersistBlocksAreSaved deterministic. (Contributed by Xiaoyu Yao)
[harsh] MAPREDUCE-5420. Remove mapreduce.task.tmp.dir from mapred-default.xml. Contributed by James Carman. (harsh)
[wheat9] HDFS-7463. Simplify FSNamesystem#getBlockLocationsUpdateTimes. Contributed by Haohui Mai.
[arp] HDFS-7503. Namenode restart after large deletions can cause slow processReport (Arpit Agarwal)

------------------------------------------
[...truncated 6478 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLeaseRenewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.149 sec - in org.apache.hadoop.hdfs.TestLeaseRenewer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 176.102 sec - in org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.806 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileAppend4
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.077 sec - in org.apache.hadoop.hdfs.TestFileAppend4
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.966 sec - in org.apache.hadoop.hdfs.TestParallelRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestClose
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.11 sec - in org.apache.hadoop.hdfs.TestClose
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.253 sec - in org.apache.hadoop.hdfs.TestDFSAddressConfig
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.249 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.428 sec - in org.apache.hadoop.hdfs.TestLargeBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.523 sec - in org.apache.hadoop.hdfs.TestHDFSTrash
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.392 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestWriteRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.329 sec - in org.apache.hadoop.hdfs.TestWriteRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.727 sec - in org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Hadoop-Hdfs-trunk-Java8 - Build # 35 - Still Failing
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/35/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
###################################################################################
[...truncated 6671 lines...]
[INFO]
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO]
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop HDFS ................................ FAILURE [ 02:56 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  1.602 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:56 h
[INFO] Finished at: 2014-12-11T14:30:38+00:00
[INFO] Final Memory: 46M/238M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR]
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating MAPREDUCE-5420
Updating HDFS-7503
Updating HADOOP-10950
Updating HADOOP-11388
Updating HADOOP-11386
Updating HDFS-7463
Updating HDFS-5578
Updating HADOOP-10482
Updating YARN-2437
Updating HADOOP-6590
Updating HDFS-7475
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure

###################################################################################
############################## FAILED TESTS (if any) ##############################
###################################################################################
2 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect

Error Message:
The map of version counts returned by DatanodeManager was not what it was expected to be on iteration 370 expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: The map of version counts returned by DatanodeManager was not what it was expected to be on iteration 370 expected:<0> but was:<1>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect(TestDatanodeManager.java:150)


REGRESSION:  org.apache.hadoop.hdfs.server.blockmanagement.TestPendingReplication.testPendingAndInvalidate

Error Message:
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<1>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestPendingReplication.testPendingAndInvalidate(TestPendingReplication.java:293)
Hadoop-Hdfs-trunk - Build # 1969 - Still Failing
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1969/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
###################################################################################
[...truncated 7193 lines...]
[INFO]
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO]
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop HDFS ................................ FAILURE [ 02:57 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.158 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:57 h
[INFO] Finished at: 2014-12-11T14:31:36+00:00
[INFO] Final Memory: 48M/822M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR]
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating MAPREDUCE-5420
Updating HDFS-7503
Updating HADOOP-10950
Updating HADOOP-11388
Updating HADOOP-11386
Updating HDFS-7463
Updating HDFS-5578
Updating HADOOP-10482
Updating YARN-2437
Updating HADOOP-6590
Updating HDFS-7475
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure

###################################################################################
############################## FAILED TESTS (if any) ##############################
###################################################################################
3 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect

Error Message:
The map of version counts returned by DatanodeManager was not what it was expected to be on iteration 292 expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: The map of version counts returned by DatanodeManager was not what it was expected to be on iteration 292 expected:<0> but was:<1>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect(TestDatanodeManager.java:150)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization

Error Message:
test timed out after 3 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 3 milliseconds
	at sun.misc.Unsafe.park(Native Method)
	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:994)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)
	at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:236)
Build failed in Jenkins: Hadoop-Hdfs-trunk #1969
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1969/changes

Changes:

[wheat9] HADOOP-10482. Fix various findbugs warnings in hadoop-common. Contributed by Haohui Mai.
[wheat9] HADOOP-11388. Remove deprecated o.a.h.metrics.file.FileContext. Contributed by Li Lu.
[aw] HADOOP-10950. rework heap management vars (John Smith via aw)
[aw] HADOOP-6590. Add a username check for hadoop sub-commands (John Smith via aw)
[aw] YARN-2437. start-yarn.sh/stop-yarn should give info (Varun Saxena via aw)
[wheat9] HADOOP-11386. Replace \n by %n in format hadoop-common format strings. Contributed by Li Lu.
[wheat9] HDFS-5578. [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments. Contributed by Andrew Purtell.
[arp] HDFS-7475. Make TestLazyPersistFiles#testLazyPersistBlocksAreSaved deterministic. (Contributed by Xiaoyu Yao)
[harsh] MAPREDUCE-5420. Remove mapreduce.task.tmp.dir from mapred-default.xml. Contributed by James Carman. (harsh)
[wheat9] HDFS-7463. Simplify FSNamesystem#getBlockLocationsUpdateTimes. Contributed by Haohui Mai.
[arp] HDFS-7503. Namenode restart after large deletions can cause slow processReport (Arpit Agarwal)

------------------------------------------
[...truncated 7000 lines...]
Running org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.307 sec - in org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Running org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.142 sec - in org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Running org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.558 sec - in org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Running org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 150.973 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.265 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.772 sec - in org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.235 sec - in org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.867 sec - in org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.082 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.59 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.106 sec - in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.983 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.459 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 380.797 sec - in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.248 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.838 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.24 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 158.591 sec - in org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.035 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.759 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running
[jira] [Created] (HDFS-7511) Fix Boxing/unboxing to parse a primitive in hadoop-hdfs
Xiaoyu Yao created HDFS-7511:
--------------------------------

             Summary: Fix Boxing/unboxing to parse a primitive in hadoop-hdfs
                 Key: HDFS-7511
                 URL: https://issues.apache.org/jira/browse/HDFS-7511
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Xiaoyu Yao


Performance Warnings

Code    Warning
Bx      Boxing/unboxing to parse a primitive org.apache.hadoop.hdfs.server.namenode.FileJournalManager.matchEditLogs(File[], boolean)
Bx      Boxing/unboxing to parse a primitive org.apache.hadoop.hdfs.tools.offlineImageViewer.DelimitedImageVisitor.visit(ImageVisitor$ImageElement, String)
Bx      Boxing/unboxing to parse a primitive org.apache.hadoop.hdfs.tools.offlineImageViewer.FileDistributionVisitor.visit(ImageVisitor$ImageElement, String)
Bx      Boxing/unboxing to parse a primitive org.apache.hadoop.hdfs.tools.offlineImageViewer.FileDistributionVisitor.visit(ImageVisitor$ImageElement, String)
Bx      Boxing/unboxing to parse a primitive org.apache.hadoop.hdfs.tools.offlineImageViewer.LsImageVisitor.visit(ImageVisitor$ImageElement, String)
Bx      Boxing/unboxing to parse a primitive org.apache.hadoop.hdfs.tools.offlineImageViewer.LsImageVisitor.visit(ImageVisitor$ImageElement, String)
Bx      Boxing/unboxing to parse a primitive org.apache.hadoop.hdfs.tools.offlineImageViewer.LsImageVisitor.visitEnclosingElement(ImageVisitor$ImageElement, ImageVisitor$ImageElement, String)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
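For context, findbugs' Bx pattern fires when code parses a number through a boxed wrapper (Integer.valueOf and friends) and immediately unboxes it, paying an object allocation for nothing. A minimal sketch of the warning and its usual fix; the method names below are illustrative, not from the flagged HDFS classes:

{code}
// A minimal illustration of the Bx pattern; parseIteration is a hypothetical
// helper, not code from hadoop-hdfs.
public class BoxingExample {
    // Flagged: Integer.valueOf(...) allocates an Integer that is immediately
    // unboxed to fill the primitive variable.
    static int parseIterationBoxed(String s) {
        int n = Integer.valueOf(s); // Bx: boxing/unboxing to parse a primitive
        return n;
    }

    // Fix: Integer.parseInt(...) returns the primitive directly, no allocation.
    static int parseIteration(String s) {
        return Integer.parseInt(s);
    }

    public static void main(String[] args) {
        System.out.println(parseIterationBoxed("370")); // 370
        System.out.println(parseIteration("370"));      // 370
    }
}
{code}

The same substitution applies to Long.parseLong, Double.parseDouble, and the other primitive parse methods.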
[jira] [Created] (HDFS-7512) Fix byte to string encoding issues in hadoop-hdfs
Xiaoyu Yao created HDFS-7512:
--------------------------------

             Summary: Fix byte to string encoding issues in hadoop-hdfs
                 Key: HDFS-7512
                 URL: https://issues.apache.org/jira/browse/HDFS-7512
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Xiaoyu Yao


In hadoop-hdfs, several byte-to-string conversions rely on the default charset, which findbugs 3.0 flags because the result of the conversion depends on the platform's encoding settings. This JIRA proposes to fix the findbugs warnings below:

Internationalization Warnings

Code    Warning
Dm      Found reliance on default encoding in org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(DataOutputStream, DataInputStream, DataOutputStream, String, DataTransferThrottler, DatanodeInfo[], boolean): new java.io.FileWriter(File)
Dm      Found reliance on default encoding in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(ReplicaMap, File, RamDiskReplicaTracker, boolean): new java.util.Scanner(File)
Dm      Found reliance on default encoding in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.loadDfsUsed(): new java.util.Scanner(File)
Dm      Found reliance on default encoding in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.saveDfsUsed(): new java.io.FileWriter(File)
Dm      Found reliance on default encoding in org.apache.hadoop.hdfs.server.datanode.web.webhdfs.ExceptionHandler.exceptionCaught(Throwable): String.getBytes()
Dm      Found reliance on default encoding in org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.onGetFileChecksum(ChannelHandlerContext): String.getBytes()
Dm      Found reliance on default encoding in org.apache.hadoop.hdfs.server.mover.Mover$Cli.readPathFile(String): new java.io.FileReader(String)
Dm      Found reliance on default encoding in org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getListingInt(FSDirectory, String, byte[], boolean): new String(byte[])
Dm      Found reliance on default encoding in org.apache.hadoop.hdfs.server.namenode.FSImageUtil.static initializer for FSImageUtil(): String.getBytes()
Dm      Found reliance on default encoding in org.apache.hadoop.hdfs.server.namenode.INode.dumpTreeRecursively(PrintStream): new java.io.PrintWriter(OutputStream, boolean)
Dm      Found reliance on default encoding in org.apache.hadoop.hdfs.tools.offlineImageViewer.FSImageHandler.channelRead0(ChannelHandlerContext, HttpRequest): String.getBytes()
Dm      Found reliance on default encoding in org.apache.hadoop.hdfs.tools.offlineImageViewer.FSImageHandler.exceptionCaught(ChannelHandlerContext, Throwable): String.getBytes()
Dm      Found reliance on default encoding in org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]): new java.io.PrintWriter(File)
Dm      Found reliance on default encoding in org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]): new java.io.PrintWriter(OutputStream)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
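For context, the Dm pattern above flags any byte/char conversion that silently uses the JVM's platform default charset. A minimal sketch of the flagged calls next to their explicit-charset replacements; the file name and contents are illustrative only:

{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class CharsetExample {
    public static void main(String[] args) throws IOException {
        byte[] raw = {104, 101, 108, 108, 111}; // "hello" in ASCII/UTF-8

        // Flagged (Dm): both calls depend on the platform default charset,
        // so the result can differ between machines.
        String platformDependent = new String(raw);
        byte[] platformBytes = "hello".getBytes();

        // Fix: name the charset explicitly so every platform behaves the same.
        String portable = new String(raw, StandardCharsets.UTF_8);
        byte[] utf8Bytes = "hello".getBytes(StandardCharsets.UTF_8);

        // new FileWriter(File) is flagged for the same reason; wrapping a
        // stream in an OutputStreamWriter with an explicit charset replaces it.
        File f = File.createTempFile("charset-demo", ".txt");
        f.deleteOnExit();
        try (Writer w = new OutputStreamWriter(new FileOutputStream(f), StandardCharsets.UTF_8)) {
            w.write(portable);
        }
        System.out.println(platformDependent.equals(portable)); // true only on UTF-8-default platforms
        System.out.println(platformBytes.length == utf8Bytes.length);
    }
}
{code}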
[jira] [Created] (HDFS-7513) HDFS inotify: add defaultBlockSize to CreateEvent
Colin Patrick McCabe created HDFS-7513:
------------------------------------------

             Summary: HDFS inotify: add defaultBlockSize to CreateEvent
                 Key: HDFS-7513
                 URL: https://issues.apache.org/jira/browse/HDFS-7513
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: namenode
    Affects Versions: 2.6.0
            Reporter: Colin Patrick McCabe


HDFS inotify: add defaultBlockSize to CreateEvent

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
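For readers who have not used the feature: HDFS inotify streams namenode edit-log events to clients, and this issue asks for the default block size to be carried on CreateEvent. A hedged consumer sketch follows; it is written against the EventBatch-shaped API of later releases (in 2.6.0, take() returned a single Event), the namenode URI is a placeholder, and getDefaultBlockSize() is the accessor this JIRA proposes, not one that existed when it was filed:

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSInotifyEventInputStream;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.inotify.Event;
import org.apache.hadoop.hdfs.inotify.EventBatch;

public class InotifyTail {
    public static void main(String[] args) throws Exception {
        // Connect to the active namenode; the URI is a placeholder.
        HdfsAdmin admin = new HdfsAdmin(URI.create("hdfs://namenode:8020"), new Configuration());
        DFSInotifyEventInputStream stream = admin.getInotifyEventStream();

        while (true) {
            EventBatch batch = stream.take(); // blocks until events arrive
            for (Event event : batch.getEvents()) {
                if (event.getEventType() == Event.EventType.CREATE) {
                    Event.CreateEvent create = (Event.CreateEvent) event;
                    // getDefaultBlockSize() is the field proposed by HDFS-7513.
                    System.out.println(create.getPath() + " blockSize=" + create.getDefaultBlockSize());
                }
            }
        }
    }
}
{code}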
[jira] [Created] (HDFS-7514) TestTextCommand fails on Windows
Arpit Agarwal created HDFS-7514:
-----------------------------------

             Summary: TestTextCommand fails on Windows
                 Key: HDFS-7514
                 URL: https://issues.apache.org/jira/browse/HDFS-7514
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: test
    Affects Versions: 2.6.0
            Reporter: Arpit Agarwal
            Assignee: Arpit Agarwal


TestTextCommand fails on Windows

*Error Message*
{code}
Pathname /D:/w/hbk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/testText/weather.avro from D:/w/hbk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/testText/weather.avro is not a valid DFS filename.
{code}

*Stacktrace*
{code}
java.lang.IllegalArgumentException: Pathname /D:/w/hbk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/testText/weather.avro from D:/w/hbk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/testText/weather.avro is not a valid DFS filename.
	at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:196)
	at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:105)
	at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:397)
	at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775)
	at org.apache.hadoop.fs.shell.TestTextCommand.createAvroFile(TestTextCommand.java:113)
	at org.apache.hadoop.fs.shell.TestTextCommand.testDisplayForAvroFiles(TestTextCommand.java:76)
{code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
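The failure mode: on Windows the test's working directory carries a drive letter, so the path component becomes /D:/..., and the DFS filename validation rejects the colon. A simplified stand-in for that check, not the exact hadoop code (the real validation sits around DFSUtil.isValidName and DistributedFileSystem.getPathName):

{code}
public class DfsPathCheck {
    // Simplified model of the rule that trips Windows drive letters: no path
    // component of a DFS filename may contain a colon.
    static boolean isValidDfsName(String src) {
        if (!src.startsWith("/")) {
            return false;
        }
        for (String component : src.split("/")) {
            if (component.contains(":")) {
                return false; // "D:" fails here
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Hypothetical paths for illustration.
        System.out.println(isValidDfsName("/D:/w/test/data/weather.avro")); // false
        System.out.println(isValidDfsName("/user/test/weather.avro"));      // true
    }
}
{code}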
[jira] [Resolved] (HDFS-7212) Huge number of BLOCKED threads rendering DataNodes useless
     [ https://issues.apache.org/jira/browse/HDFS-7212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Nauroth resolved HDFS-7212.
---------------------------------
    Resolution: Duplicate

Huge number of BLOCKED threads rendering DataNodes useless
----------------------------------------------------------

                 Key: HDFS-7212
                 URL: https://issues.apache.org/jira/browse/HDFS-7212
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: datanode
    Affects Versions: 2.4.0
         Environment: PROD
            Reporter: Istvan Szukacs

There are 3000 - 8000 threads in each datanode JVM, blocking the entire VM and rendering the service unusable, missing heartbeats and stopping data access. The threads look like this:

{code}
3415 (state = BLOCKED)
- sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information may be imprecise)
- java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, line=186 (Compiled frame)
- java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt() @bci=1, line=834 (Interpreted frame)
- java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(java.util.concurrent.locks.AbstractQueuedSynchronizer$Node, int) @bci=67, line=867 (Interpreted frame)
- java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(int) @bci=17, line=1197 (Interpreted frame)
- java.util.concurrent.locks.ReentrantLock$NonfairSync.lock() @bci=21, line=214 (Compiled frame)
- java.util.concurrent.locks.ReentrantLock.lock() @bci=4, line=290 (Compiled frame)
- org.apache.hadoop.net.unix.DomainSocketWatcher.add(org.apache.hadoop.net.unix.DomainSocket, org.apache.hadoop.net.unix.DomainSocketWatcher$Handler) @bci=4, line=286 (Interpreted frame)
- org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.createNewMemorySegment(java.lang.String, org.apache.hadoop.net.unix.DomainSocket) @bci=169, line=283 (Interpreted frame)
- org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(java.lang.String) @bci=212, line=413 (Interpreted frame)
- org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(java.io.DataInputStream) @bci=13, line=172 (Interpreted frame)
- org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(org.apache.hadoop.hdfs.protocol.datatransfer.Op) @bci=149, line=92 (Compiled frame)
- org.apache.hadoop.hdfs.server.datanode.DataXceiver.run() @bci=510, line=232 (Compiled frame)
- java.lang.Thread.run() @bci=11, line=744 (Interpreted frame)
{code}

Has anybody seen this before?

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
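The thread dump above shows every DataXceiver serialized on the single ReentrantLock inside DomainSocketWatcher.add, which is why thousands of threads pile up. A toy reproduction of that contention shape, with illustrative names rather than the hadoop implementation:

{code}
import java.util.concurrent.locks.ReentrantLock;

public class LockContentionDemo {
    // One shared lock guarding a watcher-style registry; every worker thread
    // must take it, mirroring the DomainSocketWatcher.add frame above.
    private static final ReentrantLock lock = new ReentrantLock();

    static void add() {
        lock.lock();
        try {
            Thread.sleep(10); // simulate slow work done while holding the lock
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 200; i++) {
            new Thread(LockContentionDemo::add, "xceiver-" + i).start();
        }
        Thread.sleep(100); // let the queue build up
        // Most threads are now parked waiting for the lock, the same pile-up
        // the report describes.
        System.out.println("threads queued on the lock: " + lock.getQueueLength());
    }
}
{code}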
[jira] [Created] (HDFS-7516) Fix findbugs warnings in hdfs-nfs project
Brandon Li created HDFS-7516:
--------------------------------

             Summary: Fix findbugs warnings in hdfs-nfs project
                 Key: HDFS-7516
                 URL: https://issues.apache.org/jira/browse/HDFS-7516
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: nfs
            Reporter: Brandon Li
            Assignee: Brandon Li

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)