[jira] [Commented] (HDFS-9238) Update TestFileCreation#testLeaseExpireHardLimit() to avoid using DataNodeTestUtils#getFile()
[ https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957068#comment-14957068 ]

Tony Wu commented on HDFS-9238:
-------------------------------

The failed tests are not related to this patch as the patch itself only touches a particular test which did not fail.

> Update TestFileCreation#testLeaseExpireHardLimit() to avoid using
> DataNodeTestUtils#getFile()
> -----------------------------------------------------------------
>
>                 Key: HDFS-9238
>                 URL: https://issues.apache.org/jira/browse/HDFS-9238
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: HDFS, test
>    Affects Versions: 2.7.1
>            Reporter: Tony Wu
>            Assignee: Tony Wu
>            Priority: Trivial
>         Attachments: HDFS-9238.001.patch
>
>
> TestFileCreation#testLeaseExpireHardLimit uses DataNodeTestUtils#getFile() to
> open, read and verify blocks written on the DN. It’s better to use
> getBlockInputStream() which does exactly the same thing but hides the detail
> of getting the block file on disk.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
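The refactoring the description proposes (reading block contents through an accessor that returns an InputStream instead of reaching for the on-disk File) can be sketched in plain Java. Note that BlockStore and BlockReadSketch below are hypothetical stand-ins, not Hadoop's actual DataNodeTestUtils or FsDataset API; they only illustrate why hiding the file behind a getBlockInputStream()-style accessor decouples a test from the on-disk block layout.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical stand-in for a DataNode's block storage (NOT Hadoop's real
// API). Instead of handing callers the on-disk File, as
// DataNodeTestUtils#getFile() does, it exposes only an InputStream, so
// callers never depend on where or how the block bytes are stored.
class BlockStore {
    private final byte[] blockBytes;

    BlockStore(byte[] blockBytes) {
        this.blockBytes = blockBytes;
    }

    // Analogue of getBlockInputStream(): read block contents without
    // knowing the on-disk layout. Here the "block" is just an in-memory
    // byte array; a real store could back this with any file layout.
    InputStream getBlockInputStream() {
        return new ByteArrayInputStream(blockBytes);
    }
}

public class BlockReadSketch {
    public static void main(String[] args) throws IOException {
        BlockStore store =
            new BlockStore("hello block".getBytes(StandardCharsets.UTF_8));
        // The test-side read: open the stream, read the bytes, and verify
        // the contents, which is the pattern the patch moves the test to.
        try (InputStream in = store.getBlockInputStream()) {
            byte[] buf = new byte[64];
            int n = in.read(buf);
            System.out.println(new String(buf, 0, n, StandardCharsets.UTF_8));
        }
    }
}
```

If the block storage later changes how it lays files out on disk, a test written against the stream accessor keeps working unchanged, while a test that opened the File directly would break.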
[ https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957646#comment-14957646 ]

Lei (Eddy) Xu commented on HDFS-9238:
-------------------------------------

+1. Thanks for working on this, [~twu]. Will commit soon.
[ https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957735#comment-14957735 ]

Hudson commented on HDFS-9238:
------------------------------

FAILURE: Integrated in Hadoop-trunk-Commit #8636 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/8636/])
HDFS-9238. Update TestFileCreation.testLeaseExpireHardLimit() to avoid (lei: rev ba3c19787849a9cb9f805e2b6ef0f8485aa68f06)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java

> Update TestFileCreation#testLeaseExpireHardLimit() to avoid using
> DataNodeTestUtils#getFile()
> -----------------------------------------------------------------
>
>                 Key: HDFS-9238
>                 URL: https://issues.apache.org/jira/browse/HDFS-9238
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: HDFS, test
>    Affects Versions: 2.7.1
>            Reporter: Tony Wu
>            Assignee: Tony Wu
>            Priority: Trivial
>             Fix For: 3.0.0, 2.8.0
>
>         Attachments: HDFS-9238.001.patch
>
>
> TestFileCreation#testLeaseExpireHardLimit uses DataNodeTestUtils#getFile() to
> open, read and verify blocks written on the DN. It’s better to use
> getBlockInputStream() which does exactly the same thing but hides the detail
> of getting the block file on disk.
[ https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957920#comment-14957920 ]

Hudson commented on HDFS-9238:
------------------------------

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #544 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/544/])
HDFS-9238. Update TestFileCreation.testLeaseExpireHardLimit() to avoid (lei: rev ba3c19787849a9cb9f805e2b6ef0f8485aa68f06)
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
[ https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958003#comment-14958003 ]

Hudson commented on HDFS-9238:
------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #531 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/531/])
HDFS-9238. Update TestFileCreation.testLeaseExpireHardLimit() to avoid (lei: rev ba3c19787849a9cb9f805e2b6ef0f8485aa68f06)
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
[ https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957978#comment-14957978 ]

Hudson commented on HDFS-9238:
------------------------------

SUCCESS: Integrated in Hadoop-Yarn-trunk #1267 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/1267/])
HDFS-9238. Update TestFileCreation.testLeaseExpireHardLimit() to avoid (lei: rev ba3c19787849a9cb9f805e2b6ef0f8485aa68f06)
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
[ https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958039#comment-14958039 ]

Hudson commented on HDFS-9238:
------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2480 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2480/])
HDFS-9238. Update TestFileCreation.testLeaseExpireHardLimit() to avoid (lei: rev ba3c19787849a9cb9f805e2b6ef0f8485aa68f06)
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
[ https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957986#comment-14957986 ]

Hudson commented on HDFS-9238:
------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk #2434 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2434/])
HDFS-9238. Update TestFileCreation.testLeaseExpireHardLimit() to avoid (lei: rev ba3c19787849a9cb9f805e2b6ef0f8485aa68f06)
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
[ https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958203#comment-14958203 ]

Hudson commented on HDFS-9238:
------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #497 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/497/])
HDFS-9238. Update TestFileCreation.testLeaseExpireHardLimit() to avoid (lei: rev ba3c19787849a9cb9f805e2b6ef0f8485aa68f06)
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
[ https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956057#comment-14956057 ]

Hadoop QA commented on HDFS-9238:
---------------------------------

(x) *{color:red}-1 overall{color}*

|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch | 7m 51s | Pre-patch trunk has 1 extant Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test files. |
| {color:green}+1{color} | javac | 7m 57s | There were no new javac warning messages. |
| {color:green}+1{color} | release audit | 0m 20s | The applied patch does not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle | 1m 24s | There were no new checkstyle issues. |
| {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install | 1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 32s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 2m 29s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native | 1m 4s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 50m 2s | Tests failed in hadoop-hdfs. |
| | | | 73m 10s | |

|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestDFSClientRetries |
| | hadoop.hdfs.server.namenode.TestFSImage |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
| | hadoop.hdfs.TestDFSUpgradeFromImage |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 |
| Timed out tests | org.apache.hadoop.hdfs.TestFileConcurrentReader |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12766420/HDFS-9238.001.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / 40cac59 |
| Pre-patch Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/12963/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12963/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/12963/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12963/console |

This message was automatically generated.