[jira] [Commented] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.
[ https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902464#comment-16902464 ]

Erik Krogen commented on HDFS-14631:
------------------------------------

{{TestDirectoryStructure.testScanDirectoryStructureWarn}} and {{TestSafeMode.testInitializeReplQueuesEarly}} are both failing for me with or without this patch. All of the other tests pass locally. I just committed v001 to branch-2 and branch-2.9. Thanks [~LiJinglun]!

> The DirectoryScanner doesn't fix the wrongly placed replica.
> ------------------------------------------------------------
>
>                 Key: HDFS-14631
>                 URL: https://issues.apache.org/jira/browse/HDFS-14631
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Jinglun
>            Assignee: Jinglun
>            Priority: Major
>             Fix For: 3.3.0, 3.2.1, 3.1.3
>
>         Attachments: HDFS-14631-branch-2.9.001.patch, HDFS-14631.001.patch,
>                      HDFS-14631.002.patch, HDFS-14631.003.patch, HDFS-14631.004.patch
>
>
> When DirectoryScanner scans block files, if a block refers to a block file
> that does not exist, the DirectoryScanner updates the block based on the
> replica file found on the disk. See FsDatasetImpl#checkAndUpdate.
>
> {code:java}
> /*
>  * Block exists in volumeMap and the block file exists on the disk
>  */
> // Compare block files
> if (memBlockInfo.blockDataExists()) {
>   ...
> } else {
>   // Block refers to a block file that does not exist.
>   // Update the block with the file found on the disk. Since the block
>   // file and metadata file are found as a pair on the disk, update
>   // the block based on the metadata file found on the disk
>   LOG.warn("Block file in replica "
>       + memBlockInfo.getBlockURI()
>       + " does not exist. Updating it to the file found during scan "
>       + diskFile.getAbsolutePath());
>   memBlockInfo.updateWithReplica(
>       StorageLocation.parse(diskFile.toString()));
>   LOG.warn("Updating generation stamp for block " + blockId
>       + " from " + memBlockInfo.getGenerationStamp() + " to " + diskGS);
>   memBlockInfo.setGenerationStamp(diskGS);
> }
> {code}
> But the DirectoryScanner doesn't really fix it, because
> LocalReplica#parseBaseDir() ignores the 'subdir' components.

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
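For context on what "wrongly placed" means here: finalized replicas live under a two-level subdir tree whose names are derived from bits of the block id (see DatanodeUtil#idToBlockDir in Hadoop). The sketch below is illustrative only, not the HDFS source; the class and method names are hypothetical, and only the bit-slicing layout mirrors the real scheme.

```java
import java.io.File;

public class BlockDirDemo {
    // Two 5-bit slices of the block id pick the two subdir levels,
    // mirroring the finalized-replica layout of DatanodeUtil#idToBlockDir.
    static File idToBlockDir(File finalizedRoot, long blockId) {
        int d1 = (int) ((blockId >> 16) & 0x1F);
        int d2 = (int) ((blockId >> 8) & 0x1F);
        return new File(finalizedRoot, "subdir" + d1 + "/subdir" + d2);
    }

    // True when the block file sits outside the directory its id maps to --
    // the "wrongly placed replica" case this issue is about.
    static boolean isWronglyPlaced(File finalizedRoot, File blockFile, long blockId) {
        return !blockFile.getParentFile().equals(idToBlockDir(finalizedRoot, blockId));
    }

    public static void main(String[] args) {
        File root = new File("/data/current/finalized");
        long blockId = 7600037L;  // the sample id quoted later in the review thread
        System.out.println(idToBlockDir(root, blockId));
        File misplaced = new File(root, "subdir0/subdir0/blk_" + blockId);
        System.out.println(isWronglyPlaced(root, misplaced, blockId));
    }
}
```

When the scanner finds such a misplaced file, checkAndUpdate calls updateWithReplica to repoint the in-memory replica at it; the bug is that the repointing loses the subdir part of the path, so the fix never sticks.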
[jira] [Commented] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.
[ https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902178#comment-16902178 ]

Hadoop QA commented on HDFS-14631:
----------------------------------

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 44s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || branch-2.9 Compile Tests ||
| +1 | mvninstall | 9m 14s | branch-2.9 passed |
| +1 | compile | 0m 57s | branch-2.9 passed with JDK v1.7.0_95 |
| +1 | compile | 0m 48s | branch-2.9 passed with JDK v1.8.0_212 |
| +1 | checkstyle | 0m 32s | branch-2.9 passed |
| +1 | mvnsite | 0m 57s | branch-2.9 passed |
| +1 | findbugs | 1m 59s | branch-2.9 passed |
| +1 | javadoc | 1m 10s | branch-2.9 passed with JDK v1.7.0_95 |
| +1 | javadoc | 0m 46s | branch-2.9 passed with JDK v1.8.0_212 |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 53s | the patch passed |
| +1 | compile | 0m 50s | the patch passed with JDK v1.7.0_95 |
| +1 | javac | 0m 50s | the patch passed |
| +1 | compile | 0m 47s | the patch passed with JDK v1.8.0_212 |
| +1 | javac | 0m 47s | the patch passed |
| +1 | checkstyle | 0m 26s | the patch passed |
| +1 | mvnsite | 0m 53s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 2m 7s | the patch passed |
| +1 | javadoc | 1m 5s | the patch passed with JDK v1.7.0_95 |
| +1 | javadoc | 0m 42s | the patch passed with JDK v1.8.0_212 |
|| || || || Other Tests ||
| -1 | unit | 100m 9s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 35s | The patch does not generate ASF License warnings. |
|    |            | 128m 6s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestFileCreation |
| | hadoop.hdfs.TestBlockStoragePolicy |
| | hadoop.hdfs.TestFileCorruption |
| | hadoop.hdfs.TestSafeMode |
| | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
| | hadoop.hdfs.TestLeaseRecovery2 |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.hdfs.TestDatanodeLayoutUpgrade |
| | hadoop.hdfs.TestGetBlocks |
| | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
| | hadoop.hdfs.TestEncryptedTransfer |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation |
| | hadoop.hdfs.TestFileCreationDelete |

|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:c3439fff6be |
| JIRA Issue | HDFS-14631 |
| JIRA Patch URL | https://issues.apache.org/jira/secu
[jira] [Commented] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.
[ https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902063#comment-16902063 ]

Jinglun commented on HDFS-14631:
--------------------------------

Hi [~xkrogen], thanks for the reminder. Yes, it's relevant to branch-2 as well. Uploaded branch-2.9.001.patch; pending jenkins.
[jira] [Commented] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.
[ https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901463#comment-16901463 ]

Erik Krogen commented on HDFS-14631:
------------------------------------

[~LiJinglun] Do you know if this is also relevant to the 2.x line? If so, should we put together a branch-2 backport also?
[jira] [Commented] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.
[ https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898584#comment-16898584 ]

Hudson commented on HDFS-14631:
-------------------------------

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17022 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17022/])

HDFS-14631. The DirectoryScanner doesn't fix the wrongly placed replica. (weichiu: rev 32607dbd98a7ab70741a2efc98eff548c1e431c1)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/LocalReplica.java
[jira] [Commented] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.
[ https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16897226#comment-16897226 ]

Hadoop QA commented on HDFS-14631:
----------------------------------

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 15s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 26m 13s | trunk passed |
| +1 | compile | 1m 4s | trunk passed |
| +1 | checkstyle | 0m 44s | trunk passed |
| +1 | mvnsite | 1m 10s | trunk passed |
| +1 | shadedclient | 13m 21s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 8s | trunk passed |
| +1 | javadoc | 0m 51s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 0s | the patch passed |
| +1 | compile | 0m 58s | the patch passed |
| +1 | javac | 0m 58s | the patch passed |
| +1 | checkstyle | 0m 37s | the patch passed |
| +1 | mvnsite | 1m 3s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 35s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 11s | the patch passed |
| +1 | javadoc | 0m 48s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 92m 1s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 37s | The patch does not generate ASF License warnings. |
|    |            | 157m 18s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
| | hadoop.hdfs.server.datanode.TestLargeBlockReport |
| | hadoop.hdfs.tools.TestDFSZKFailoverController |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14631 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12976297/HDFS-14631.004.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux cb0347694c9d 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d4ab9ae |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/27352/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/27352/testReport/ |
| Max. process+thread count | 3217 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output |
[jira] [Commented] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.
[ https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896464#comment-16896464 ]

Wei-Chiu Chuang commented on HDFS-14631:
----------------------------------------

Looks really good. Thanks for finding the issue and offering the patch. I like the test. The only nit I caught is that the tests should be added to TestDirectoryScanner rather than TestBlockScanner; DirectoryScanner and BlockScanner are quite different things in HDFS. Additionally, the tests should have timeout values set.
[jira] [Commented] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.
[ https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888068#comment-16888068 ]

Hadoop QA commented on HDFS-14631:
----------------------------------

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 20s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 17m 29s | trunk passed |
| +1 | compile | 1m 6s | trunk passed |
| +1 | checkstyle | 0m 38s | trunk passed |
| +1 | mvnsite | 1m 4s | trunk passed |
| +1 | shadedclient | 12m 3s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 3s | trunk passed |
| +1 | javadoc | 0m 55s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 58s | the patch passed |
| +1 | compile | 0m 55s | the patch passed |
| +1 | javac | 0m 55s | the patch passed |
| +1 | checkstyle | 0m 37s | the patch passed |
| +1 | mvnsite | 1m 2s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 8s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 9s | the patch passed |
| +1 | javadoc | 0m 46s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 81m 8s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 37s | The patch does not generate ASF License warnings. |
|    |            | 136m 57s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |

|| Subsystem || Report/Notes ||
| Docker | Client=18.09.8 Server=18.09.8 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14631 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12975154/HDFS-14631.003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 66ed2714a074 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 79f6118 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/27254/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/27254/testReport/ |
| Max. process+thread count | 5239 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/27254/console |
| Powered by
[jira] [Commented] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.
[ https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887946#comment-16887946 ]

Jinglun commented on HDFS-14631:
--------------------------------

Thanks [~hexiaoqiao] for your nice comments. In the new patch-003 I changed to a random blkId, used a local variable baseDir, and added more comments. Looking forward to another review and comments from [~ayushtkn]. :)
[jira] [Commented] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.
[ https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887634#comment-16887634 ]

He Xiaoqiao commented on HDFS-14631:
------------------------------------

[~LiJinglun], [^HDFS-14631.002.patch] looks almost good to me. Some minor comments.

1. The constant block id number is not clean at all. Please check #testLocalReplicaUpdateWithReplica.
{code:java}
long blkId = 7600037L;
{code}
2. Maybe it is not necessary to set {{BASE_PATH}}; we could replace it with the following test {{basedir}}.
{code:java}
File basedir = new File(GenericTestUtils.getRandomizedTempPath());
{code}
3. I don't understand the following {{assert}}; some comments may be helpful.
{code:java}
assertEquals(BASE_PATH + SEP + subdir1 + SEP + "subdir15", LocalReplica
    .parseBaseDir(new File(BASE_PATH + SEP + subdir1 + SEP + "subdir15"),
    blkId).baseDirPath);
{code}
I will give my +1 after the update. Ping [~ayushtkn], would you mind taking another review? Thanks again.
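Since the review above turns on how LocalReplica#parseBaseDir should treat 'subdir' components, here is a plain-Java sketch of the splitting it needs to do. The class and method names are hypothetical and this is not the committed patch; it only illustrates that the subdir tail of a replica's path must be preserved, since dropping it (the reported bug) leaves the updated location pointing at the wrong directory.

```java
import java.io.File;

public class BaseDirDemo {
    // Walks up from the block file past any "subdirNN" components, returning
    // {volume base dir, subdir tail}. A parse that discards the tail loses
    // the replica's actual on-disk location.
    static String[] splitBaseDir(File blockFile) {
        File dir = blockFile.getParentFile();
        StringBuilder sub = new StringBuilder();
        while (dir != null && dir.getName().startsWith("subdir")) {
            sub.insert(0, "/" + dir.getName());
            dir = dir.getParentFile();
        }
        return new String[] { dir == null ? "" : dir.getPath(), sub.toString() };
    }

    public static void main(String[] args) {
        File f = new File("/data/finalized/subdir19/subdir23/blk_7600037");
        String[] parts = splitBaseDir(f);
        System.out.println(parts[0]);  // the finalized root: /data/finalized
        System.out.println(parts[1]);  // the subdir tail: /subdir19/subdir23
    }
}
```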
[jira] [Commented] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.
[ https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884862#comment-16884862 ] Hadoop QA commented on HDFS-14631:
--
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 17m 55s | trunk passed |
| +1 | compile | 1m 37s | trunk passed |
| +1 | checkstyle | 0m 37s | trunk passed |
| +1 | mvnsite | 1m 1s | trunk passed |
| +1 | shadedclient | 12m 3s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 57s | trunk passed |
| +1 | javadoc | 0m 55s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 0m 58s | the patch passed |
| +1 | compile | 0m 56s | the patch passed |
| +1 | javac | 0m 56s | the patch passed |
| +1 | checkstyle | 0m 38s | the patch passed |
| +1 | mvnsite | 0m 58s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 17s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 5s | the patch passed |
| +1 | javadoc | 0m 52s | the patch passed |
|| Other Tests ||
| -1 | unit | 83m 46s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 39s | The patch does not generate ASF License warnings. |
| | | 139m 50s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.hdfs.tools.TestDFSZKFailoverController |
| | hadoop.hdfs.TestReconstructStripedFile |

|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14631 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12974656/HDFS-14631.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 1b4c98204932 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0976f6f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/27225/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/27225/testReport/ |
| Max. process+thread count | 4502 (vs. ulimit of 1) |
| modules | C: hadoop-
[jira] [Commented] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.
[ https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884820#comment-16884820 ] Jinglun commented on HDFS-14631: Hi [~hexiaoqiao], thanks for your great suggestions. They all make sense to me and I have followed them. Uploaded patch-002; pending Jenkins.
[jira] [Commented] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.
[ https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884709#comment-16884709 ] He Xiaoqiao commented on HDFS-14631: [~LiJinglun], thanks for reporting and patching this issue. Some review comments from me:
1. We should replace {{hasSubdirs}} with {{true}}, since it is always 'true' at this point.
{code:java}
+    if (idToBlockDir.equals(dir)) {
+      return new ReplicaDirInfo(currentDir.getAbsolutePath(), hasSubdirs);
+    }
{code}
2. It would be better to use a random block id to verify the logic, rather than a hard-coded one.
{code:java}
+    long blkId = 7600037L;
{code}
3. It seems {{Exception}} is never thrown in the following method, so the throws clause can be dropped.
{code:java}
public void testLocalReplicaParsing() throws Exception {
  ...
}
{code}
4. It would be better to add an annotation for the new unit test.
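Review comment 2 above (use a random block id) can be sketched as a round-trip check. Everything below is a stand-in, not the actual TestDirectoryScanner code: `idToBlockDir` mimics my understanding of the DataNode's two-level finalized layout (two `subdir` levels derived from bits of the block id), so treat the exact bit arithmetic as an assumption of this sketch, and `parseBaseDir` is a simplified base-dir recovery.

```java
// Sketch only -- stand-ins for DatanodeUtil.idToBlockDir and
// LocalReplica#parseBaseDir, used to show why a random block id
// exercises the subdir-handling logic for arbitrary layouts.
import java.io.File;
import java.util.Random;

public class LocalReplicaParsingSketch {

    // Stand-in id -> directory mapping: two nested "subdir" levels
    // derived from the block id (assumed 32x32 layout).
    public static File idToBlockDir(File baseDir, long blockId) {
        int d1 = (int) ((blockId >> 16) & 0x1F);
        int d2 = (int) ((blockId >> 8) & 0x1F);
        return new File(baseDir, "subdir" + d1 + File.separator + "subdir" + d2);
    }

    // Stand-in base-dir recovery: strip every trailing "subdirN" component.
    public static String parseBaseDir(File dir) {
        File cur = dir;
        while (cur != null && cur.getName().startsWith("subdir")) {
            cur = cur.getParentFile();
        }
        return (cur == null ? dir : cur).getAbsolutePath();
    }

    public static void main(String[] args) {
        File base = new File("/data/current/finalized");
        // Random id instead of a hard-coded 7600037L, per the review comment.
        long blkId = new Random().nextLong();
        File blockDir = idToBlockDir(base, blkId);
        // Parsing the block dir back must recover the original base dir.
        if (!parseBaseDir(blockDir).equals(base.getAbsolutePath())) {
            throw new AssertionError("base dir not recovered for id " + blkId);
        }
        System.out.println("ok");
    }
}
```

The point of randomizing the id is that the `subdir` components change from run to run, so the round trip `idToBlockDir` → `parseBaseDir` is verified for arbitrary placements instead of one fixed path.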
[jira] [Commented] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.
[ https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16879791#comment-16879791 ] He Xiaoqiao commented on HDFS-14631: Thanks for reporting this and for the patch. I would like to review the patch early next week. Thanks again.
[jira] [Commented] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.
[ https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16879641#comment-16879641 ] Jinglun commented on HDFS-14631: Hi [~hexiaoqiao], do you have time for this? Or do you know who the right person is to ask about issues related to the DirectoryScanner? Looking forward to your comments.
[jira] [Commented] (HDFS-14631) The DirectoryScanner doesn't fix the wrongly placed replica.
[ https://issues.apache.org/jira/browse/HDFS-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878665#comment-16878665 ] Hadoop QA commented on HDFS-14631:
--
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 4s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 20m 16s | trunk passed |
| +1 | compile | 1m 6s | trunk passed |
| +1 | checkstyle | 0m 44s | trunk passed |
| +1 | mvnsite | 1m 8s | trunk passed |
| +1 | shadedclient | 13m 59s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 9s | trunk passed |
| +1 | javadoc | 0m 55s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 1m 7s | the patch passed |
| +1 | compile | 1m 0s | the patch passed |
| +1 | javac | 1m 0s | the patch passed |
| +1 | checkstyle | 0m 39s | the patch passed |
| +1 | mvnsite | 1m 5s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 37s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 8s | the patch passed |
| +1 | javadoc | 0m 46s | the patch passed |
|| Other Tests ||
| -1 | unit | 102m 52s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 31s | The patch does not generate ASF License warnings. |
| | | 163m 52s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.hdfs.tools.TestDFSZKFailoverController |
| | hadoop.fs.viewfs.TestViewFileSystemLinkMergeSlash |

|| Subsystem || Report/Notes ||
| Docker | Client=18.09.5 Server=18.09.5 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14631 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12973665/HDFS-14631.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 7adabd8e8c8e 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 729cb3a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/27145/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/27145/testReport/ |
| Max. process+thread count | 2969 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://bui