[jira] [Comment Edited] (HDFS-9411) HDFS NodeLabel support
[ https://issues.apache.org/jira/browse/HDFS-9411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120730#comment-17120730 ]

maobaolong edited comment on HDFS-9411 at 6/1/20, 2:52 AM:
---
+1. Feature is pretty good. Is there any update on this? Maybe Huawei has already built this feature in their product; how about the open-source version?

was (Author: maobaolong): +1. Feature is pretty good. Is there any update on this?

> HDFS NodeLabel support
> --
>
> Key: HDFS-9411
> URL: https://issues.apache.org/jira/browse/HDFS-9411
> Project: Hadoop HDFS
> Issue Type: New Feature
> Reporter: Vinayakumar B
> Assignee: Vinayakumar B
> Priority: Major
> Attachments: HDFS Node Labels-21-08-2017.pdf, HDFSNodeLabels-15-09-2016.pdf, HDFSNodeLabels-20-06-2016.pdf, HDFS_ZoneLabels-16112015.pdf
>
> HDFS currently stores data blocks on datanodes chosen by the BlockPlacementPolicy. These datanodes are random within the scope (local-rack/different-rack/nodegroup) of the network topology.
> In a multi-tenant scenario (a tenant can be a user or a service), any tenant's blocks can end up on any datanode.
> Depending on the workloads of different tenants, a datanode might get busy and slow down another tenant's applications. It would be better if admins had a provision to logically divide the cluster among tenants.
> NodeLabels gives users more options to specify constraints that select specific nodes with specific requirements.
> High-level design doc to follow soon.

--
This message was sent by Atlassian Jira (v8.3.4#803005)

To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
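The label-constrained placement described above can be sketched in a few lines. This is an illustrative sketch only, assuming a hypothetical Node type with an admin-assigned label set; it is not the API proposed in the attached design docs.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

/**
 * Illustrative sketch only: how a node-label constraint could narrow the
 * candidate set a block-placement policy chooses from. The Node type and
 * method names are hypothetical, not the API proposed in the design docs.
 */
public class LabelPlacementSketch {

    static final class Node {
        final String host;
        final Set<String> labels; // admin-assigned labels, e.g. "tenantA"

        Node(String host, Set<String> labels) {
            this.host = host;
            this.labels = labels;
        }
    }

    /** Keep only datanodes that carry every label the tenant requires. */
    static List<Node> chooseCandidates(List<Node> all, Set<String> required) {
        return all.stream()
                .filter(n -> n.labels.containsAll(required))
                .collect(Collectors.toList());
    }
}
```

With a filter like this in front of the existing random selection, a tenant constrained to label "tenantA" would only ever be placed on nodes carrying that label, which is the logical partitioning the issue asks for.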
[jira] [Issue Comment Deleted] (HDFS-9411) HDFS NodeLabel support
[ https://issues.apache.org/jira/browse/HDFS-9411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

maobaolong updated HDFS-9411:
-
Comment: was deleted (was: +1. Feature is pretty good. Is there any update on this? )
[jira] [Commented] (HDFS-9411) HDFS NodeLabel support
[ https://issues.apache.org/jira/browse/HDFS-9411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120730#comment-17120730 ]

maobaolong commented on HDFS-9411:
--
+1. Feature is pretty good. Is there any update on this?
[jira] [Commented] (HDFS-9411) HDFS NodeLabel support
[ https://issues.apache.org/jira/browse/HDFS-9411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120729#comment-17120729 ]

maobaolong commented on HDFS-9411:
--
+1. Feature is pretty good. Is there any update on this?
[jira] [Commented] (HDFS-15381) Fix typos corrputBlocksFiles to corruptBlocksFiles
[ https://issues.apache.org/jira/browse/HDFS-15381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120618#comment-17120618 ]

Hadoop QA commented on HDFS-15381:
--
-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 18s | Docker mode activated. |
|| Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| trunk Compile Tests ||
| +1 | mvninstall | 21m 34s | trunk passed |
| +1 | compile | 1m 10s | trunk passed |
| +1 | checkstyle | 0m 47s | trunk passed |
| +1 | mvnsite | 1m 15s | trunk passed |
| +1 | shadedclient | 17m 33s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 41s | trunk passed |
| 0 | spotbugs | 3m 5s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 3m 3s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 1m 8s | the patch passed |
| +1 | compile | 1m 3s | the patch passed |
| +1 | javac | 1m 3s | the patch passed |
| +1 | checkstyle | 0m 43s | the patch passed |
| +1 | mvnsite | 1m 10s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 47s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 39s | the patch passed |
| +1 | findbugs | 3m 8s | the patch passed |
|| Other Tests ||
| -1 | unit | 108m 57s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 35s | The patch does not generate ASF License warnings. |
| | | 180m 41s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
| | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
| | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
| | hadoop.hdfs.tools.TestDFSAdminWithHA |
| | hadoop.hdfs.TestReconstructStripedFile |

|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HDFS-Build/29389/artifact/out/Dockerfile |
| JIRA Issue | HDFS-15381 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13004445/HDFS-15381.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux e19dd589beb4 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
[jira] [Commented] (HDFS-15381) Fix typos corrputBlocksFiles to corruptBlocksFiles
[ https://issues.apache.org/jira/browse/HDFS-15381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120583#comment-17120583 ]

Ayush Saxena commented on HDFS-15381:
-
+1

> Fix typos corrputBlocksFiles to corruptBlocksFiles
> --
>
> Key: HDFS-15381
> URL: https://issues.apache.org/jira/browse/HDFS-15381
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs
> Affects Versions: 3.2.1
> Reporter: bianqi
> Assignee: bianqi
> Priority: Trivial
> Attachments: HDFS-15381.001.patch
>
> Fix typos corrputBlocksFiles to corruptBlocksFiles
[jira] [Updated] (HDFS-15381) Fix typos corrputBlocksFiles to corruptBlocksFiles
[ https://issues.apache.org/jira/browse/HDFS-15381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

bianqi updated HDFS-15381:
--
Status: Patch Available (was: Open)
[jira] [Updated] (HDFS-15381) Fix typos corrputBlocksFiles to corruptBlocksFiles
[ https://issues.apache.org/jira/browse/HDFS-15381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

bianqi updated HDFS-15381:
--
Attachment: HDFS-15381.001.patch
[jira] [Created] (HDFS-15381) Fix typos corrputBlocksFiles to corruptBlocksFiles
bianqi created HDFS-15381:
-
Summary: Fix typos corrputBlocksFiles to corruptBlocksFiles
Key: HDFS-15381
URL: https://issues.apache.org/jira/browse/HDFS-15381
Project: Hadoop HDFS
Issue Type: Improvement
Components: hdfs
Affects Versions: 3.2.1
Reporter: bianqi
Assignee: bianqi
[jira] [Commented] (HDFS-15359) EC: Allow closing a file with committed blocks
[ https://issues.apache.org/jira/browse/HDFS-15359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120559#comment-17120559 ]

Hadoop QA commented on HDFS-15359:
--
-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 2m 41s | Docker mode activated. |
|| Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 25m 59s | trunk passed |
| +1 | compile | 1m 31s | trunk passed |
| +1 | checkstyle | 1m 1s | trunk passed |
| +1 | mvnsite | 1m 36s | trunk passed |
| +1 | shadedclient | 21m 14s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 56s | trunk passed |
| 0 | spotbugs | 3m 54s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 3m 53s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 1m 26s | the patch passed |
| +1 | compile | 1m 22s | the patch passed |
| +1 | javac | 1m 22s | the patch passed |
| +1 | checkstyle | 0m 55s | the patch passed |
| +1 | mvnsite | 1m 31s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 17m 1s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 39s | the patch passed |
| +1 | findbugs | 3m 4s | the patch passed |
|| Other Tests ||
| -1 | unit | 106m 48s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 34s | The patch does not generate ASF License warnings. |
| | | 191m 49s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
| | hadoop.hdfs.TestReconstructStripedFile |
| | hadoop.hdfs.TestStripedFileAppend |
| | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |

|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HDFS-Build/29387/artifact/out/Dockerfile |
| JIRA Issue | HDFS-15359 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13004440/HDFS-15359-05.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux 3b613e349bfe 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | pe
[jira] [Commented] (HDFS-15246) ArrayIndexOfboundsException in BlockManager CreateLocatedBlock
[ https://issues.apache.org/jira/browse/HDFS-15246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120525#comment-17120525 ]

Hadoop QA commented on HDFS-15246:
--
-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 2m 3s | Docker mode activated. |
|| Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 26m 46s | trunk passed |
| +1 | compile | 1m 14s | trunk passed |
| +1 | checkstyle | 0m 49s | trunk passed |
| +1 | mvnsite | 1m 16s | trunk passed |
| +1 | shadedclient | 17m 34s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 42s | trunk passed |
| 0 | spotbugs | 3m 3s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 3m 1s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 1m 10s | the patch passed |
| +1 | compile | 1m 2s | the patch passed |
| +1 | javac | 1m 2s | the patch passed |
| -0 | checkstyle | 0m 42s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 29 unchanged - 0 fixed = 31 total (was 29) |
| +1 | mvnsite | 1m 9s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 3s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 39s | the patch passed |
| +1 | findbugs | 3m 5s | the patch passed |
|| Other Tests ||
| -1 | unit | 116m 43s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 35s | The patch does not generate ASF License warnings. |
| | | 193m 40s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
| | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
| | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
| | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
| | hadoop.hdfs.TestReconstructStripedFile |
| | hadoop.hdfs.TestGetFileChecksum |

|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HDFS-Build/29386/artifact/out/Dockerfile |
| JIRA Issue | HDFS-15246 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13004438/HDFS-15246.003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux c8d31c1bc783 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_
[jira] [Commented] (HDFS-10792) RedundantEditLogInputStream should log caught exceptions
[ https://issues.apache.org/jira/browse/HDFS-10792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120519#comment-17120519 ]

Hudson commented on HDFS-10792:
---
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18311 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18311/])
HDFS-10792. RedundantEditLogInputStream should log caught exceptions. (ayushsaxena: rev ae13a5ccbea10fe86481adbbff574c528e03c7f6)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RedundantEditLogInputStream.java

> RedundantEditLogInputStream should log caught exceptions
>
> Key: HDFS-10792
> URL: https://issues.apache.org/jira/browse/HDFS-10792
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Reporter: Wei-Chiu Chuang
> Assignee: Wei-Chiu Chuang
> Priority: Minor
> Labels: supportability
> Fix For: 3.4.0
>
> Attachments: HDFS-10792.01.patch
>
> There are a few places in {{RedundantEditLogInputStream}} where an IOException is caught but never logged. We should improve the logging of these exceptions to help debugging.
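The supportability fix amounts to logging the exception at the point where a redundant stream is abandoned, rather than swallowing it. A minimal sketch of the pattern, using a hypothetical EditStream interface and java.util.logging — this is not the actual RedundantEditLogInputStream code:

```java
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

/**
 * Sketch of the logging fix, assuming a hypothetical EditStream interface;
 * not the actual RedundantEditLogInputStream implementation.
 */
public class FailoverReadSketch {
    private static final Logger LOG =
            Logger.getLogger(FailoverReadSketch.class.getName());

    public interface EditStream {
        long readOp() throws IOException;
    }

    /** Try each redundant stream in order, logging every failure we recover from. */
    public static long readWithFailover(EditStream[] streams) throws IOException {
        IOException last = null;
        for (EditStream s : streams) {
            try {
                return s.readOp();
            } catch (IOException e) {
                // Previously the exception was caught and dropped; logging it
                // preserves the evidence needed to debug a bad edit-log copy.
                LOG.log(Level.WARNING, "Stream failed, trying the next one", e);
                last = e;
            }
        }
        throw last != null ? last : new IOException("no usable streams");
    }
}
```

The key point is that the catch block now records the stack trace before failing over, so an operator can tell which edit-log copy was bad and why.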
[jira] [Updated] (HDFS-11041) Unable to unregister FsDatasetState MBean if DataNode is shutdown twice
[ https://issues.apache.org/jira/browse/HDFS-11041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ayush Saxena updated HDFS-11041:
-
Summary: Unable to unregister FsDatasetState MBean if DataNode is shutdown twice (was: Unable to unregister FsDatasetState MBean if if DataNode is shutdown twice)

> Unable to unregister FsDatasetState MBean if DataNode is shutdown twice
> ---
>
> Key: HDFS-11041
> URL: https://issues.apache.org/jira/browse/HDFS-11041
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Reporter: Wei-Chiu Chuang
> Assignee: Wei-Chiu Chuang
> Priority: Trivial
> Attachments: HDFS-11041.01.patch, HDFS-11041.02.patch, HDFS-11041.03.patch
>
> I saw an error message like the following in some tests:
> {noformat}
> 2016-10-21 04:09:03,900 [main] WARN util.MBeans (MBeans.java:unregister(114)) - Error unregistering Hadoop:service=DataNode,name=FSDatasetState-33cd714c-0b1a-471f-8efe-f431d7d874bc
> javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=FSDatasetState-33cd714c-0b1a-471f-8efe-f431d7d874bc
>   at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
>   at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:427)
>   at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415)
>   at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546)
>   at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:112)
>   at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.shutdown(FsDatasetImpl.java:2127)
>   at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:2016)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:1985)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1962)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1936)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1929)
>   at org.apache.hadoop.hdfs.TestDatanodeReport.testDatanodeReport(TestDatanodeReport.java:144)
> {noformat}
> The test shuts down a datanode and then shuts down the cluster, which shuts down that datanode twice. Resetting the FsDatasetSpi reference in DataNode to null resolves the issue.
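The guard described in the issue — drop the reference after the first unregister so a second shutdown() becomes a no-op — can be sketched as follows. Field and method names here are illustrative, not the actual DataNode/FsDatasetImpl code:

```java
/**
 * Sketch of the double-shutdown guard from the discussion above; field and
 * method names are illustrative, not the actual DataNode/FsDatasetImpl code.
 */
public class IdempotentShutdownSketch {
    private Object mbeanName;        // stands in for the registered MBean's ObjectName
    private int unregisterCalls = 0; // counts real unregister attempts

    public IdempotentShutdownSketch(Object mbeanName) {
        this.mbeanName = mbeanName;
    }

    /** Safe to call any number of times; only the first call unregisters. */
    public void shutdown() {
        if (mbeanName != null) {
            unregister(mbeanName);
            mbeanName = null; // the fix: drop the reference after the first pass
        }
    }

    private void unregister(Object name) {
        unregisterCalls++; // a real implementation would call MBeans.unregister(name)
    }

    public int calls() {
        return unregisterCalls;
    }
}
```

Without the null-out, the second shutdown() would attempt to unregister an already-removed MBean and trigger exactly the InstanceNotFoundException shown in the stack trace.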
[jira] [Commented] (HDFS-10792) RedundantEditLogInputStream should log caught exceptions
[ https://issues.apache.org/jira/browse/HDFS-10792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120516#comment-17120516 ]

Ayush Saxena commented on HDFS-10792:
-
Committed to trunk. Thanx [~weichiu] for the contribution!!!
[jira] [Updated] (HDFS-10792) RedundantEditLogInputStream should log caught exceptions
[ https://issues.apache.org/jira/browse/HDFS-10792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ayush Saxena updated HDFS-10792:
-
Fix Version/s: 3.4.0
Hadoop Flags: Reviewed
Resolution: Fixed
Status: Resolved (was: Patch Available)
[jira] [Commented] (HDFS-10792) RedundantEditLogInputStream should log caught exceptions
[ https://issues.apache.org/jira/browse/HDFS-10792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120515#comment-17120515 ]

Hadoop QA commented on HDFS-10792:
--
-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 35s | Docker mode activated. |
| -1 | patch | 0m 8s | HDFS-10792 does not apply to trunk. Rebase required? Wrong branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HDFS-Build/29388/artifact/out/Dockerfile |
| JIRA Issue | HDFS-10792 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12825354/HDFS-10792.01.patch |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/29388/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Resolved] (HDFS-15365) [RBF] findMatching method return wrong result
[ https://issues.apache.org/jira/browse/HDFS-15365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena resolved HDFS-15365. - Resolution: Not A Problem > [RBF] findMatching method return wrong result > - > > Key: HDFS-15365 > URL: https://issues.apache.org/jira/browse/HDFS-15365 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf >Affects Versions: 3.1.1 >Reporter: liuyanyu >Assignee: liuyanyu >Priority: Major > Attachments: image-2020-05-20-11-42-12-763.png, > image-2020-05-20-11-55-34-115.png > > > A mount table /hacluster_root -> hdfs://hacluster/ is set on the cluster, as > follows: > !image-2020-05-20-11-42-12-763.png! > When I used > org.apache.hadoop.hdfs.server.federation.router.FederationUtil#findMatching > to resolve the path hdfs://hacluster/yz0516/abc, the result of > FederationUtil.findMatching(mountTableEntries.iterator(), /yz0516/abc, > hacluster) was /hacluster_root/yz0516/hacluster_root/abc, which is wrong. The > correct result should be /hacluster_root/yz0516/abc -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10792) RedundantEditLogInputStream should log caught exceptions
[ https://issues.apache.org/jira/browse/HDFS-10792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120513#comment-17120513 ] Ayush Saxena commented on HDFS-10792: - +1 > RedundantEditLogInputStream should log caught exceptions > > > Key: HDFS-10792 > URL: https://issues.apache.org/jira/browse/HDFS-10792 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > Labels: supportability > Attachments: HDFS-10792.01.patch > > > There are a few places in {{RedundantEditLogInputStream}} where an > IOException is caught but never logged. We should improve the logging of > these exceptions to help debugging. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11041) Unable to unregister FsDatasetState MBean if DataNode is shutdown twice
[ https://issues.apache.org/jira/browse/HDFS-11041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120511#comment-17120511 ] Ayush Saxena commented on HDFS-11041: - I saw the same trace in my UT of HDFS-15359. The fix seems reasonable and correct. The failed tests seem unrelated. v003 LGTM, +1. Will commit by tomorrow EOD if no further comments. > Unable to unregister FsDatasetState MBean if DataNode is shutdown twice > -- > > Key: HDFS-11041 > URL: https://issues.apache.org/jira/browse/HDFS-11041 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Trivial > Attachments: HDFS-11041.01.patch, HDFS-11041.02.patch, > HDFS-11041.03.patch > > > I saw an error message like the following in some tests: > {noformat} > 2016-10-21 04:09:03,900 [main] WARN util.MBeans > (MBeans.java:unregister(114)) - Error unregistering > Hadoop:service=DataNode,name=FSDatasetState-33cd714c-0b1a-471f-8efe-f431d7d874bc > javax.management.InstanceNotFoundException: > Hadoop:service=DataNode,name=FSDatasetState-33cd714c-0b1a-471f-8efe-f431d7d874bc > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095) > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:427) > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415) > at > com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546) > at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:112) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.shutdown(FsDatasetImpl.java:2127) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:2016) > at > org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:1985) > at > 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1962) > at > org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1936) > at > org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1929) > at > org.apache.hadoop.hdfs.TestDatanodeReport.testDatanodeReport(TestDatanodeReport.java:144) > {noformat} > The test shuts down a datanode and then shuts down the cluster, which shuts > down the same datanode twice. Resetting the FsDatasetSpi reference in > DataNode to null resolves the issue. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
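The fix described above — resetting the FsDatasetSpi reference so a second shutdown becomes a no-op — can be sketched in a dependency-free way. All names below (IdempotentShutdownSketch, unregisterCalls) are hypothetical stand-ins for illustration, not the actual DataNode/FsDatasetImpl code.

```java
// Sketch of the idempotent-shutdown pattern described in the comment
// above: null out the reference on the first shutdown so a second call
// skips the MBean unregistration instead of hitting
// InstanceNotFoundException. Names here are illustrative only.
public class IdempotentShutdownSketch {
    // Stands in for the FsDatasetSpi reference that owns the MBean.
    private Object dataset = new Object();
    public int unregisterCalls = 0;

    public void shutdown() {
        if (dataset == null) {
            return; // already shut down; nothing left to unregister
        }
        unregisterCalls++; // the real code calls MBeans.unregister(...) here
        dataset = null;    // reset so repeated shutdowns become no-ops
    }
}
```

Calling shutdown() twice then performs only one unregistration, mirroring how MiniDFSCluster.shutdown() after an explicit DataNode.shutdown() should not attempt to unregister the MBean a second time.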
[jira] [Updated] (HDFS-15359) EC: Allow closing a file with committed blocks
[ https://issues.apache.org/jira/browse/HDFS-15359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-15359: Attachment: HDFS-15359-05.patch > EC: Allow closing a file with committed blocks > -- > > Key: HDFS-15359 > URL: https://issues.apache.org/jira/browse/HDFS-15359 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-15359-01.patch, HDFS-15359-02.patch, > HDFS-15359-03.patch, HDFS-15359-04.patch, HDFS-15359-05.patch > > > Presently, {{dfs.namenode.file.close.num-committed-allowed}} is ignored in > case of EC blocks. But under heavy load, IBRs from Datanodes may get > delayed and cause the file write to fail. So, EC files can be allowed to > close with blocks in the committed state, as replicated files already can. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15359) EC: Allow closing a file with committed blocks
[ https://issues.apache.org/jira/browse/HDFS-15359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120484#comment-17120484 ] Vinayakumar B commented on HDFS-15359: -- Thanks [~ayushtkn] for the patch. I think the approach of allowing a committed block only when the write happened to all nodes is very reasonable, to prevent unexpected data loss. Two minor comments: {code} if (b.isStriped()) { BlockInfoStriped blkStriped = (BlockInfoStriped) b; if (b.getUnderConstructionFeature().getExpectedStorageLocations().length != blkStriped.getRealTotalBlockNum()) { return b + " is a striped block in " + state + " with less than " + "required number of blocks."; } } {code} Move this check after the `if (state != BlockUCState.COMMITTED)` check. It makes more sense there. In the test, {code} // Check if the blockgroup isn't complete then file close shouldn't be // success with block in committed state. cluster.getDataNodes().get(0).shutdown(); FSDataOutputStream str = dfs.create(new Path("/dir/file1")); for (int i = 0; i < 1024 * 1024 * 4; i++) { str.write(i); } DataNodeTestUtils.pauseIBR(cluster.getDataNodes().get(0)); DataNodeTestUtils.pauseIBR(cluster.getDataNodes().get(1)); LambdaTestUtils.intercept(IOException.class, "", () -> str.close()); {code} You should `pauseIBR` datanodes 1 and 2; datanode 0 is already shut down. +1 once addressed. > EC: Allow closing a file with committed blocks > -- > > Key: HDFS-15359 > URL: https://issues.apache.org/jira/browse/HDFS-15359 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-15359-01.patch, HDFS-15359-02.patch, > HDFS-15359-03.patch, HDFS-15359-04.patch > > > Presently, {{dfs.namenode.file.close.num-committed-allowed}} is ignored in > case of EC blocks. But under heavy load, IBRs from Datanodes may get > delayed and cause the file write to fail. 
So, EC files can be allowed to close > with blocks in the committed state, as replicated files already can. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
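The check ordering suggested in the review above — reject non-COMMITTED blocks first, then apply the striped-specific location count — can be illustrated with a simplified, self-contained sketch. The types below are stand-ins for illustration, not the real BlockInfo/BlockManager classes.

```java
// Illustrates the check ordering suggested in the review: blocks that
// are not COMMITTED are rejected first, and only then is the
// striped-only "all expected storage locations reported" check applied.
// Simplified stand-in types; not the actual HDFS block classes.
public class CommitCheckSketch {
    enum BlockUCState { COMMITTED, UNDER_CONSTRUCTION }

    // Returns null when the block may close, else a diagnostic message.
    static String checkCommitted(BlockUCState state, boolean striped,
                                 int expectedLocations, int realTotalBlockNum) {
        if (state != BlockUCState.COMMITTED) {
            return "block is in state " + state + ", not COMMITTED";
        }
        if (striped && expectedLocations != realTotalBlockNum) {
            return "striped block in COMMITTED state with less than "
                + "the required number of blocks";
        }
        return null;
    }
}
```

With this ordering, the striped-specific test only runs for blocks that have already reached the COMMITTED state, which is the placement the reviewer asks for.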
[jira] [Commented] (HDFS-15246) ArrayIndexOfboundsException in BlockManager CreateLocatedBlock
[ https://issues.apache.org/jira/browse/HDFS-15246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120463#comment-17120463 ] hemanthboyina commented on HDFS-15246: -- Thanks for the review [~elgoiri]. I have updated the patch, please review. > ArrayIndexOfboundsException in BlockManager CreateLocatedBlock > -- > > Key: HDFS-15246 > URL: https://issues.apache.org/jira/browse/HDFS-15246 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: hemanthboyina >Assignee: hemanthboyina >Priority: Major > Attachments: HDFS-15246-testrepro.patch, HDFS-15246.001.patch, > HDFS-15246.002.patch, HDFS-15246.003.patch > > > java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1 > > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:1362) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlocks(BlockManager.java:1501) > at > org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:179) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:2047) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:770) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15246) ArrayIndexOfboundsException in BlockManager CreateLocatedBlock
[ https://issues.apache.org/jira/browse/HDFS-15246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hemanthboyina updated HDFS-15246: - Attachment: HDFS-15246.003.patch > ArrayIndexOfboundsException in BlockManager CreateLocatedBlock > -- > > Key: HDFS-15246 > URL: https://issues.apache.org/jira/browse/HDFS-15246 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: hemanthboyina >Assignee: hemanthboyina >Priority: Major > Attachments: HDFS-15246-testrepro.patch, HDFS-15246.001.patch, > HDFS-15246.002.patch, HDFS-15246.003.patch > > > java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1 > > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:1362) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlocks(BlockManager.java:1501) > at > org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:179) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:2047) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:770) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-15160) ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock
[ https://issues.apache.org/jira/browse/HDFS-15160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120455#comment-17120455 ] Jiang Xin edited comment on HDFS-15160 at 5/31/20, 9:03 AM: Hi [~sodonnell], thanks for your quick reply. I have one more question: is it safe to change FsDatasetImpl#getBlockLocalPathInfo and DataNode#transferReplicaForPipelineRecovery to the read lock? As you mentioned above, both of them would change the generationStamp, and the `synchronized(replica)` in FsDatasetImpl#getBlockLocalPathInfo does not seem to protect the generationStamp from being changed by other methods. Would you like to review it? Thanks. was (Author: jiang xin): Hi [~sodonnell], thanks for your quick reply. I have one more question: is it safe to change FsDatasetImpl#getBlockLocalPathInfo and DataNode#transferReplicaForPipelineRecovery to the read lock? As you mentioned above, both of them would change the generationStamp, and the `synchronized(replica)` in FsDatasetImpl#getBlockLocalPathInfo does not seem to protect the generationStamp from being changed by other methods. Would you like to review it again? Thanks. > ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl > methods should use datanode readlock > --- > > Key: HDFS-15160 > URL: https://issues.apache.org/jira/browse/HDFS-15160 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 3.3.0 >Reporter: Stephen O'Donnell >Assignee: Stephen O'Donnell >Priority: Major > Attachments: HDFS-15160.001.patch, HDFS-15160.002.patch, > HDFS-15160.003.patch, HDFS-15160.004.patch, HDFS-15160.005.patch, > HDFS-15160.006.patch, image-2020-04-10-17-18-08-128.png, > image-2020-04-10-17-18-55-938.png > > > Now we have HDFS-15150, we can start to move some DN operations to use the > read lock rather than the write lock to improve concurrency. The first step > is to make the changes to ReplicaMap, as many other methods make calls to it. 
> This Jira switches read operations against the volume map to use the readLock > rather than the write lock. > Additionally, some methods make a call to replicaMap.replicas() (eg > getBlockReports, getFinalizedBlocks, deepCopyReplica) and only use the result > in a read only fashion, so they can also be switched to using a readLock. > Next is the directory scanner and disk balancer, which only require a read > lock. > Finally (for this Jira) are various "low hanging fruit" items in BlockSender > and FsDatasetImpl where it is fairly obvious they only need a read lock. > For now, I have avoided changing anything which looks too risky, as I think > it's better to do any larger refactoring or risky changes each in their own > Jira. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15160) ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock
[ https://issues.apache.org/jira/browse/HDFS-15160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120455#comment-17120455 ] Jiang Xin commented on HDFS-15160: -- Hi [~sodonnell], thanks for your quick reply. I have one more question: is it safe to change FsDatasetImpl#getBlockLocalPathInfo and DataNode#transferReplicaForPipelineRecovery to the read lock? As you mentioned above, both of them would change the generationStamp, and the `synchronized(replica)` in FsDatasetImpl#getBlockLocalPathInfo does not seem to protect the generationStamp from being changed by other methods. Would you like to review it again? Thanks. > ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl > methods should use datanode readlock > --- > > Key: HDFS-15160 > URL: https://issues.apache.org/jira/browse/HDFS-15160 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 3.3.0 >Reporter: Stephen O'Donnell >Assignee: Stephen O'Donnell >Priority: Major > Attachments: HDFS-15160.001.patch, HDFS-15160.002.patch, > HDFS-15160.003.patch, HDFS-15160.004.patch, HDFS-15160.005.patch, > HDFS-15160.006.patch, image-2020-04-10-17-18-08-128.png, > image-2020-04-10-17-18-55-938.png > > > Now we have HDFS-15150, we can start to move some DN operations to use the > read lock rather than the write lock to improve concurrency. The first step > is to make the changes to ReplicaMap, as many other methods make calls to it. > This Jira switches read operations against the volume map to use the readLock > rather than the write lock. > Additionally, some methods make a call to replicaMap.replicas() (eg > getBlockReports, getFinalizedBlocks, deepCopyReplica) and only use the result > in a read only fashion, so they can also be switched to using a readLock. > Next is the directory scanner and disk balancer, which only require a read > lock. 
> Finally (for this Jira) are various "low hanging fruit" items in BlockSender > and FsDatasetImpl where it is fairly obvious they only need a read lock. > For now, I have avoided changing anything which looks too risky, as I think > it's better to do any larger refactoring or risky changes each in their own > Jira. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
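The read-lock change discussed in this issue follows the standard java.util.concurrent ReentrantReadWriteLock pattern: read-only lookups share the read lock so they can run concurrently, while mutations keep the exclusive write lock. A minimal sketch under that assumption — this is not the actual ReplicaMap code, and the names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal sketch of the locking pattern discussed above: reads share
// a read lock, writes take the exclusive write lock. Not the real
// ReplicaMap; names are illustrative only.
public class ReplicaMapSketch {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final Map<Long, String> replicas = new HashMap<>();

    // Read-only lookup: many threads may hold the read lock at once.
    public String get(long blockId) {
        lock.readLock().lock();
        try {
            return replicas.get(blockId);
        } finally {
            lock.readLock().unlock();
        }
    }

    // Mutation: the write lock excludes all readers and other writers.
    public void add(long blockId, String replica) {
        lock.writeLock().lock();
        try {
            replicas.put(blockId, replica);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

Operations like getBlockReports or deepCopyReplica, which only read the map, correspond to the get() path here; that is why switching them from the write lock to the read lock improves concurrency without changing behavior.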
[jira] [Updated] (HDFS-15321) Make DFSAdmin tool to work with ViewFSOverloadScheme
[ https://issues.apache.org/jira/browse/HDFS-15321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uma Maheswara Rao G updated HDFS-15321: --- Description: When we enable ViewFSOverloadScheme and use the hdfs scheme as the overloaded scheme, users work with hdfs URIs. But DFSAdmin expects the impl class to be DistributedFileSystem; if the impl class is ViewFSOverloadScheme, it will fail. So, when the impl is ViewFSOverloadScheme, we should get the corresponding child hdfs filesystem to make DFSAdmin work. This Jira makes DFSAdmin work with ViewFSOverloadScheme. > Make DFSAdmin tool to work with ViewFSOverloadScheme > > > Key: HDFS-15321 > URL: https://issues.apache.org/jira/browse/HDFS-15321 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: dfsadmin, fs, viewfs >Affects Versions: 3.2.1 >Reporter: Uma Maheswara Rao G >Assignee: Uma Maheswara Rao G >Priority: Major > > When we enable ViewFSOverloadScheme and use the hdfs scheme as the overloaded > scheme, users work with hdfs URIs. But DFSAdmin expects the impl class to be > DistributedFileSystem; if the impl class is ViewFSOverloadScheme, it will > fail. > So, when the impl is ViewFSOverloadScheme, we should get the corresponding > child hdfs filesystem to make DFSAdmin work. > This Jira makes DFSAdmin work with ViewFSOverloadScheme. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
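The idea in the description — when DFSAdmin gets the overloaded scheme instead of a concrete DistributedFileSystem, resolve the matching child filesystem rather than failing — can be sketched without Hadoop dependencies. Every type below is a hypothetical stand-in; the real patch works against Hadoop's FileSystem/ViewFSOverloadScheme API, which is not reproduced here.

```java
// Dependency-free sketch of the unwrap idea above: a tool that needs a
// concrete filesystem type resolves the matching child from a wrapping
// scheme instead of failing. All types are hypothetical stand-ins.
public class UnwrapSketch {
    interface Fs { String scheme(); }

    static class HdfsFs implements Fs {
        public String scheme() { return "hdfs"; }
    }

    // Stands in for a filesystem that overloads the hdfs scheme and
    // delegates to one or more child filesystems.
    static class OverloadFs implements Fs {
        final Fs[] children;
        OverloadFs(Fs... children) { this.children = children; }
        public String scheme() { return "hdfs"; } // overloaded scheme
    }

    // Return an HdfsFs the admin tool can use, unwrapping if needed.
    static HdfsFs resolve(Fs fs) {
        if (fs instanceof HdfsFs) {
            return (HdfsFs) fs;
        }
        if (fs instanceof OverloadFs) {
            for (Fs child : ((OverloadFs) fs).children) {
                if (child instanceof HdfsFs) {
                    return (HdfsFs) child; // the corresponding child hdfs
                }
            }
        }
        throw new IllegalArgumentException("no hdfs child filesystem found");
    }
}
```

The design choice mirrors the description: the tool keeps working with hdfs URIs, and only the resolution step learns to look through the overloading wrapper.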
[jira] [Updated] (HDFS-15321) Make DFSAdmin tool to work with ViewFSOverloadScheme
[ https://issues.apache.org/jira/browse/HDFS-15321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uma Maheswara Rao G updated HDFS-15321: --- Status: Patch Available (was: Open) > Make DFSAdmin tool to work with ViewFSOverloadScheme > > > Key: HDFS-15321 > URL: https://issues.apache.org/jira/browse/HDFS-15321 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: dfsadmin, fs, viewfs >Affects Versions: 3.2.1 >Reporter: Uma Maheswara Rao G >Assignee: Uma Maheswara Rao G >Priority: Major > > When we enable ViewFSOverloadScheme and use the hdfs scheme as the overloaded > scheme, users work with hdfs URIs. But DFSAdmin expects the impl class to be > DistributedFileSystem; if the impl class is ViewFSOverloadScheme, it will > fail. > So, when the impl is ViewFSOverloadScheme, we should get the corresponding > child hdfs filesystem to make DFSAdmin work. > This Jira makes DFSAdmin work with ViewFSOverloadScheme. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org