[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16708567#comment-16708567 ]

Kitti Nanasi commented on HDFS-12946:
-------------------------------------

Thanks [~jojochuang] for reviewing and committing! I created HDFS-14125 to change the logs to use the parameterized log format.

> Add a tool to check rack configuration against EC policies
> ----------------------------------------------------------
>
>                 Key: HDFS-12946
>                 URL: https://issues.apache.org/jira/browse/HDFS-12946
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: erasure-coding
>            Reporter: Xiao Chen
>            Assignee: Kitti Nanasi
>            Priority: Major
>             Fix For: 3.3.0
>
>         Attachments: HDFS-12946.01.patch, HDFS-12946.02.patch, HDFS-12946.03.patch, HDFS-12946.04.fsck.patch, HDFS-12946.05.patch, HDFS-12946.06.patch, HDFS-12946.07.patch, HDFS-12946.08.patch, HDFS-12946.09.patch, HDFS-12946.10.patch, HDFS-12946.11.patch, HDFS-12946.12.patch
>
>
> From testing we have seen setups with racks / datanodes so problematic that they would not suffice for basic EC usage. These are usually found out only after the tests have failed.
> We should provide a way to check this beforehand.
> Some scenarios:
> - not enough datanodes compared to the EC policy's highest data+parity number
> - not enough racks to satisfy BPPRackFaultTolerant
> - racks too uneven to satisfy BPPRackFaultTolerant
> - highly uneven racks (so that BPP's considerLoad logic may exclude some busy nodes on the rack, resulting in #2)

--
This message was sent by Atlassian JIRA (v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
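The scenarios listed in the issue description can be illustrated with a minimal sufficiency check. This is only a hedged sketch, not the actual ECTopologyVerifier logic: the class and method names are hypothetical, and the rack rule is a simplification of what BPPRackFaultTolerant's even spread implies (at most ceil(totalBlocks / numRacks) blocks land on one rack, so losing a single rack must not cost more than `parity` blocks).

```java
// Hypothetical sketch of a rack/datanode sufficiency check for an EC policy.
// Not the real ECTopologyVerifier; names and the exact rule are illustrative.
public class EcTopologyCheckSketch {

    /**
     * @param numDataNodes live datanodes in the cluster
     * @param numRacks     distinct racks
     * @param data         data units of the EC policy (e.g. 6 for RS-6-3)
     * @param parity       parity units of the EC policy (e.g. 3 for RS-6-3)
     */
    static boolean isTopologySufficient(int numDataNodes, int numRacks,
                                        int data, int parity) {
        int totalBlocks = data + parity;
        // scenario 1: not enough datanodes to place each block on its own node
        if (numDataNodes < totalBlocks) {
            return false;
        }
        // scenario 2: an even spread puts ceil(totalBlocks / numRacks) blocks
        // on some rack; losing that rack must not lose more than parity blocks
        int maxBlocksPerRack = (totalBlocks + numRacks - 1) / numRacks;
        return maxBlocksPerRack <= parity;
    }
}
```

Under this simplified rule, RS-6-3 needs at least 9 datanodes and at least 3 racks: with only 2 racks, 5 blocks would share one rack, exceeding the 3-block parity budget on a rack failure.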
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16707619#comment-16707619 ]

Hudson commented on HDFS-12946:
-------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15547 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15547/])
HDFS-12946. Add a tool to check rack configuration against EC policies. (weichiu: rev dd5e7c6b7239a93f2391beaa11181e442a387db4)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/ECTopologyVerifier.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ECTopologyVerifierResult.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeMXBean.java
* (add) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestECAdmin.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/ECAdmin.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingMultipleRacks.java
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16707598#comment-16707598 ]

Wei-Chiu Chuang commented on HDFS-12946:
----------------------------------------

Note, since you instantiated an slf4j logger, it is recommended to use the parameterized log format for readability and performance reasons. Okay to do in a separate jira.
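Wei-Chiu's suggestion refers to slf4j's `{}` placeholder style, where the final message is only assembled if the log level is enabled, instead of paying for string concatenation on every call. The sketch below shows the two styles and a stand-in for the placeholder substitution; the `format` helper is illustrative only (slf4j's real implementation lives in org.slf4j.helpers.MessageFormatter, and the `LOG` calls in the comments are examples, not code from this patch):

```java
// Recommended slf4j style (message built lazily, only if INFO is enabled):
//   LOG.info("Verified EC policy {} against {} racks", policyName, numRacks);
// Discouraged (concatenates eagerly even when INFO is disabled):
//   LOG.info("Verified EC policy " + policyName + " against " + numRacks + " racks");
public class ParamLogSketch {

    // Minimal illustration of {} substitution, in slf4j's placeholder spirit.
    static String format(String pattern, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIdx = 0;
        int i = 0;
        while (i < pattern.length()) {
            int j = pattern.indexOf("{}", i);
            if (j < 0 || argIdx >= args.length) {
                sb.append(pattern.substring(i));  // no more placeholders/args
                break;
            }
            sb.append(pattern, i, j).append(args[argIdx++]);
            i = j + 2;  // skip past the "{}" placeholder
        }
        return sb.toString();
    }
}
```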
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705412#comment-16705412 ]

Wei-Chiu Chuang commented on HDFS-12946:
----------------------------------------

+1 -- I didn't review it in depth because Xiao already gave a +1 and I have good faith in it. I also verified that the checkstyle/apache license warnings are false positives.
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701820#comment-16701820 ]

Hadoop QA commented on HDFS-12946:
----------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 16s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 29s | Maven dependency ordering for branch |
| +1 | mvninstall | 23m 36s | trunk passed |
| +1 | compile | 3m 6s | trunk passed |
| +1 | checkstyle | 1m 11s | trunk passed |
| +1 | mvnsite | 1m 36s | trunk passed |
| +1 | shadedclient | 15m 9s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 5s | trunk passed |
| +1 | javadoc | 1m 24s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 10s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 38s | the patch passed |
| +1 | compile | 3m 16s | the patch passed |
| +1 | javac | 3m 16s | the patch passed |
| +1 | checkstyle | 1m 3s | the patch passed |
| +1 | mvnsite | 1m 34s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | shadedclient | 12m 50s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 14s | the patch passed |
| +1 | javadoc | 1m 18s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 86m 11s | hadoop-hdfs in the patch failed. |
| +1 | unit | 17m 57s | hadoop-hdfs-rbf in the patch passed. |
| +1 | asflicense | 0m 30s | The patch does not generate ASF License warnings. |
| | | 178m 53s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFile |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
| | hadoop.hdfs.web.TestWebHdfsTimeouts |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-12946 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12949811/HDFS-12946.11.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux b4fe0d11aaf0 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16700874#comment-16700874 ]

Hadoop QA commented on HDFS-12946:
----------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 14s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 20s | Maven dependency ordering for branch |
| +1 | mvninstall | 20m 2s | trunk passed |
| +1 | compile | 2m 54s | trunk passed |
| +1 | checkstyle | 1m 8s | trunk passed |
| +1 | mvnsite | 1m 40s | trunk passed |
| +1 | shadedclient | 14m 30s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 50s | trunk passed |
| +1 | javadoc | 1m 20s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 10s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 25s | the patch passed |
| +1 | compile | 2m 52s | the patch passed |
| +1 | javac | 2m 52s | the patch passed |
| -0 | checkstyle | 1m 0s | hadoop-hdfs-project: The patch generated 1 new + 362 unchanged - 0 fixed = 363 total (was 362) |
| +1 | mvnsite | 1m 25s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | shadedclient | 12m 5s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 55s | the patch passed |
| +1 | javadoc | 1m 11s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 77m 17s | hadoop-hdfs in the patch failed. |
| +1 | unit | 16m 12s | hadoop-hdfs-rbf in the patch passed. |
| -1 | asflicense | 0m 28s | The patch generated 1 ASF License warnings. |
| | | 160m 55s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
| | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-12946 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12949712/HDFS-12946.10.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 683a5dd6d93a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16700630#comment-16700630 ]

Hadoop QA commented on HDFS-12946:
----------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 14s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 19s | Maven dependency ordering for branch |
| +1 | mvninstall | 21m 49s | trunk passed |
| +1 | compile | 3m 14s | trunk passed |
| +1 | checkstyle | 1m 10s | trunk passed |
| +1 | mvnsite | 1m 39s | trunk passed |
| +1 | shadedclient | 14m 53s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 2s | trunk passed |
| +1 | javadoc | 1m 26s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 9s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 30s | the patch passed |
| +1 | compile | 3m 10s | the patch passed |
| -1 | javac | 3m 10s | hadoop-hdfs-project generated 1 new + 536 unchanged - 0 fixed = 537 total (was 536) |
| -0 | checkstyle | 1m 5s | hadoop-hdfs-project: The patch generated 2 new + 362 unchanged - 0 fixed = 364 total (was 362) |
| +1 | mvnsite | 1m 30s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | shadedclient | 13m 1s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 10s | the patch passed |
| +1 | javadoc | 1m 17s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 78m 11s | hadoop-hdfs in the patch failed. |
| +1 | unit | 16m 15s | hadoop-hdfs-rbf in the patch passed. |
| -1 | asflicense | 0m 29s | The patch generated 1 ASF License warnings. |
| | | 166m 43s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| | hadoop.hdfs.web.TestWebHdfsTimeouts |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-12946 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12949678/HDFS-12946.09.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux cf7997e75470 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality |
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16700402#comment-16700402 ]

Kitti Nanasi commented on HDFS-12946:
-------------------------------------

Thanks [~xiaochen] for the comments! In patch v009 I addressed the comments and changed FSNamesystem#getVerifyECWithTopologyResult's return type to String to match the format of the other entries in the name node jmx. I created HDFS-14061 for running the topology check in FSN#enableErasureCodingPolicy.
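The change Kitti describes (a String-typed jmx getter instead of a composite bean type) can be sketched as follows. The class and field names below are hypothetical stand-ins loosely mirroring the ECTopologyVerifierResult idea, and the JSON-ish shape is only an illustration of matching the other String entries in the NameNode jmx, not the actual serialization used by the patch:

```java
// Hypothetical sketch: serialize a verification result to a String so a
// NameNodeMXBean getter can return it like the other String jmx entries.
public class EcVerifierResultSketch {
    private final boolean supported;
    private final String resultMessage;

    EcVerifierResultSketch(boolean supported, String resultMessage) {
        this.supported = supported;
        this.resultMessage = resultMessage;
    }

    // what a String-typed getter could hand back instead of a composite
    // MBean type that jmx clients would have to introspect
    String toJmxString() {
        return "{\"isSupported\":\"" + supported + "\","
            + "\"resultMessage\":\"" + resultMessage + "\"}";
    }
}
```

Returning a pre-serialized String keeps the MXBean surface to simple open types, which is why it fits alongside the existing NameNode jmx entries.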
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16678547#comment-16678547 ]

Xiao Chen commented on HDFS-12946:
----------------------------------

Thanks Kitti for revving. The latest patch didn't apply to trunk due to DFSTestUtil conflicts, but I reviewed it based on an older trunk, and it looks great! +1 pending:
- the new method is missing from {{hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java}}, which failed compilation for me
- do we really need {{ECTopologyVerifierResultMBean}}?
- we can add the topology check to {{FSN#enableErasureCodingPolicy}} as discussed earlier; this can be done in a separate jira if you'd like.
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16678331#comment-16678331 ]

Hadoop QA commented on HDFS-12946:
----------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| -1 | patch | 0m 7s | HDFS-12946 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12946 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12947232/HDFS-12946.08.patch |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/25458/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677069#comment-16677069 ]

Hadoop QA commented on HDFS-12946:
----------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| -1 | patch | 0m 6s | HDFS-12946 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12946 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12947107/HDFS-12946.07.patch |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/25447/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677067#comment-16677067 ]

Kitti Nanasi commented on HDFS-12946:
-------------------------------------

Thanks for the comments, [~xiaochen]! I addressed them in the latest patch.
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675923#comment-16675923 ] Xiao Chen commented on HDFS-12946: -- Thanks for the update Kitti! Looking closer, I think we can get away without a read lock if we: - protect {{allPolicies}} in ECPM with the class object monitor - instead of getting a pointer to the {{allPolicies}} array, add a new method that returns a collection containing the policies, also protected by the class object monitor (e.g. similar to how FSN gets dead/live datanodes and how DatanodeManager synchronizes). Other comments: - I think {{getVerifyTopologySupportsEnabledEcPoliciesResult}} is still very long. What do you think about {{getVerifyECWithTopologyResult}}? - FSN: instead of {code} int numOfDataNodes = getBlockManager().getDatanodeManager() .getDatanodes().size(); {code}, how about we add a new {{getNumOfDatanodes}} method to {{DatanodeManager}} and use that?
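For readers following the locking discussion above, the defensive-copy idea can be sketched as follows. This is a minimal illustration, not the actual ErasureCodingPolicyManager API: the class and method names are hypothetical, and the policy type is simplified to a String.

```java
// Sketch of the suggestion above: guard the internal array with the class
// object monitor and hand out a defensive, unmodifiable snapshot, so JMX
// readers never observe a partially updated array and callers cannot
// mutate internal state. Names here are hypothetical, not Hadoop's API.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;

public class PolicyManagerSketch {
  private String[] allPolicies = new String[0]; // stand-in for the policy array

  // Mutation and read both synchronize on the same monitor (the instance).
  public synchronized void addPolicy(String policy) {
    String[] next = Arrays.copyOf(allPolicies, allPolicies.length + 1);
    next[next.length - 1] = policy;
    allPolicies = next;
  }

  // Instead of returning the array pointer, return an immutable snapshot.
  public synchronized Collection<String> getPolicies() {
    return Collections.unmodifiableList(
        new ArrayList<>(Arrays.asList(allPolicies)));
  }

  public static void main(String[] args) {
    PolicyManagerSketch m = new PolicyManagerSketch();
    m.addPolicy("RS-6-3-1024k");
    Collection<String> snapshot = m.getPolicies();
    m.addPolicy("XOR-2-1-1024k"); // later mutation does not affect the snapshot
    System.out.println(snapshot.size());        // prints 1
    System.out.println(m.getPolicies().size()); // prints 2
  }
}
```

The snapshot costs a copy per call, but these reads come from infrequent JMX/admin paths, so correctness is worth more than the allocation.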
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675790#comment-16675790 ] Hadoop QA commented on HDFS-12946: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 58s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 5s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 481 unchanged - 0 fixed = 482 total (was 481) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 56s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 13 new + 322 unchanged - 0 fixed = 335 total (was 322) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 57s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}113m 11s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}175m 36s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks | | | hadoop.hdfs.TestReconstructStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDFS-12946 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12946950/HDFS-12946.06.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux c4057b8ffafa 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 5ddefdd | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | javac | https://builds.apache.org/job/PreCommit-HDFS-Build/25436/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt | | checkstyle |
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675588#comment-16675588 ] Kitti Nanasi commented on HDFS-12946: - Thanks [~xiaochen] for the comments! I fixed all of them except the first one: if I add a read lock there, it would undo the improvement introduced in https://issues.apache.org/jira/browse/HDFS-5693, so I need some more time to figure out how to be thread safe without locking the jmx calls.
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675519#comment-16675519 ] Hadoop QA commented on HDFS-12946: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 50s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 52s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 481 unchanged - 0 fixed = 482 total (was 481) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 51s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 13 new + 322 unchanged - 0 fixed = 335 total (was 322) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 2s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 7s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}152m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDFS-12946 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12946926/HDFS-12946.06.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ce1b7a768bb8 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 15df2e7 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | javac | https://builds.apache.org/job/PreCommit-HDFS-Build/25434/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/25434/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | whitespace |
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671118#comment-16671118 ] Xiao Chen commented on HDFS-12946: -- Thanks for the patch Kitti! I think we're getting close. Some review comments: - FSNamesystem: We generally need fsn/fsd locks when accessing internal state. In this case, I think DNManager is fine, but ECPManager should be protected with a readlock. - ErasureCodingClusterSetupVerifier: I think we should extract the logic at a finer granularity. In NN, we don't need to loop through the datanodes to get the number of racks - we can get it directly from {{NetworkTopology}} (e.g. via DNManager). IMO the 'highly uneven rack' check feels like something we can do as a future improvement. It's more subjective, and the problem will be visible whether the data is EC'ed or not. - Following the above, there would be no need for {{reportSet.toArray}} in NN. With thousands of DNs in a cluster, this could be perf-heavy. - EcClusterSetupVerifyResult: A private class doesn't have to define an {{InterfaceStability}}; they're Unstable by default. - Naming: {{ErasureCodingClusterSetupVerifier}} feels a bit long. How about {{ECTopologyVerifier}}? We can assume EC is a known concept since this is an HDFS-private class; if it confuses future developers, the class javadoc should make it fairly clear. Similar for the method names: for {{getVerifyClusterSetupSupportsEnabledEcPoliciesResult}}, I think {{getECTopologyVerifierResult}} should be ok, or even {{verifyECWithTopology}}. 
- There's an unnecessary change in {{ECBlockGroupsMBean}}
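For context on what the verifier under review actually computes, the first two scenarios from the issue description can be sketched as below. The thresholds and the result type here are illustrative simplifications, loosely modeled on the {{ECTopologyVerifierResult}} class the patch adds; the committed rack criterion is stricter than what is shown.

```java
// Hedged sketch of the EC topology check: verify the cluster has at least
// data+parity datanodes for the policy, plus a placeholder rack check.
// The result type loosely mirrors ECTopologyVerifierResult; its real
// fields and the real rack logic may differ.
public class TopologyCheckSketch {
  static final class Result {
    final boolean supported;
    final String message;
    Result(boolean supported, String message) {
      this.supported = supported;
      this.message = message;
    }
  }

  // In the NameNode, numDataNodes/numRacks would come from DatanodeManager /
  // NetworkTopology; client-side (ECAdmin), from existing DN report APIs.
  static Result verify(int numDataNodes, int numRacks, int data, int parity) {
    if (numDataNodes < data + parity) {
      return new Result(false, (data + parity)
          + " datanodes are required but only " + numDataNodes + " found");
    }
    if (numRacks < 2) { // illustrative only; BPPRackFaultTolerant needs more
      return new Result(false, "more racks are required for rack fault tolerance");
    }
    return new Result(true, "the topology supports the policy");
  }

  public static void main(String[] args) {
    // RS(6,3) on 5 datanodes: 5 < 9, so unsupported.
    System.out.println(verify(5, 3, 6, 3).supported);  // prints false
    // RS(6,3) on 10 datanodes across 4 racks: supported by this sketch.
    System.out.println(verify(10, 4, 6, 3).supported); // prints true
  }
}
```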
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670983#comment-16670983 ] Hadoop QA commented on HDFS-12946: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 48s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 0s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 481 unchanged - 0 fixed = 482 total (was 481) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 54s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 17 new + 322 unchanged - 0 fixed = 339 total (was 322) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 23s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 23s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}174m 4s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.qjournal.server.TestJournalNodeSync | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDFS-12946 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12946414/HDFS-12946.05.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 4bcdf14b233b 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 6668c19 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | javac | https://builds.apache.org/job/PreCommit-HDFS-Build/25404/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt | | checkstyle |
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667388#comment-16667388 ] Kitti Nanasi commented on HDFS-12946: - Thanks for the suggestion [~xiaochen], I think that is a very good idea; I will provide a patch for that tomorrow.
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662643#comment-16662643 ] Xiao Chen commented on HDFS-12946: -- I had more thoughts on this. How about we extract the logic of {{ErasureCodingClusterSetupVerifier}} into a util, then use it on both the hdfs-client and hdfs sides? This way we don't need to add any RPCs, and we'd be able to run the same calculation in the NN (from internal state) and at the client (using existing APIs to get policies and DN stats). JMX would still be great to have.
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16653540#comment-16653540 ] Kitti Nanasi commented on HDFS-12946: - Thanks for the discussion [~xiaochen] and [~andrew.wang]! I like #2 more as well, for the same reason [~xiaochen] mentioned: it is easy to use from other downstream applications, and it can be easily reused in the enable and set policy logic. But even in that case we need to figure out whether it will be implemented in fsck, in ECAdmin, or somewhere else. About the return type: I believe that if the jmx function is exposed via an MXBean rather than an MBean, it can return complex types. I will modify that in the next patch.
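The MXBean point above is worth a concrete illustration: unlike a plain Standard MBean, an MXBean interface may return a custom type, which JMX converts to a {{CompositeData}} view with one item per getter. A minimal, self-contained sketch follows; the bean and attribute names are hypothetical, not the NameNodeMXBean API.

```java
// Sketch: an MXBean attribute with a complex return type. JMX maps
// VerifierResult to CompositeData via its getters ("supported",
// "resultMessage"). Names are illustrative, not Hadoop's actual beans.
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;

public class MxBeanSketch {
  // Complex type exposed over JMX; getters define the CompositeData items.
  public static class VerifierResult {
    private final boolean supported;
    private final String resultMessage;
    public VerifierResult(boolean supported, String resultMessage) {
      this.supported = supported;
      this.resultMessage = resultMessage;
    }
    public boolean isSupported() { return supported; }
    public String getResultMessage() { return resultMessage; }
  }

  // An interface name ending in "MXBean" opts into the open-type mapping.
  public interface DemoMXBean {
    VerifierResult getVerifyECWithTopologyResult();
  }

  public static class Demo implements DemoMXBean {
    @Override
    public VerifierResult getVerifyECWithTopologyResult() {
      return new VerifierResult(true, "OK");
    }
  }

  public static void main(String[] args) throws Exception {
    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    ObjectName name = new ObjectName("sketch:type=Demo");
    server.registerMBean(new Demo(), name);
    // Clients (e.g. jconsole, monitoring agents) see CompositeData, not the class.
    CompositeData cd =
        (CompositeData) server.getAttribute(name, "VerifyECWithTopologyResult");
    System.out.println(cd.get("supported") + " " + cd.get("resultMessage"));
  }
}
```

The key design win is that JMX clients need no access to the custom class on their classpath; they read the open-type view.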
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652145#comment-16652145 ] Xiao Chen commented on HDFS-12946: -- Thanks [~andrew.wang] for the comment! My version of the recap: there's no existing way to check this. Patch 1 was doing it entirely client-side, but as you said this can't easily be used during the EC policy enabling call. Moving the logic to the NN side seems like good reuse; otherwise, even if we extract the logic into util functions, ecadmin would still need to call all these RPCs. Seems like we're left with 2 options here: # Do this client-side, extract the logic, and accept the fact that enablePolicy may call other RPCs for validation (if I understand Andrew's "this would be a more generally useful admin interface" comment correctly). # Do it via this new RPC. We can work on details to make the return value more reasonable (e.g. enum-up the int return value; on the MXBean, return a String built from the int/enum value). I'm voting for #2 because I think exposing this via metrics is more flexible and usable by various types of downstream. Thoughts?
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652115#comment-16652115 ] Andrew Wang commented on HDFS-12946: Hi folks, thanks for working on this! Catching up on the discussion: this is a nice change, and it's something I've hit before too (though hopefully not something we see too often in production). What I'd ask (as with most "monitoring"-type applications) is about the use case. Cluster admins want to automate their alerting and reporting. If they've gotten to the point that they need to take some manual action (e.g. use fsck, {{hdfs debug}}, call this new RPC), it's because something external has told them there is an issue. I go to interactive debugging tools for the next level of detail, for alerts that can't be easily automated. In this case, it seems like most users would want to automate an alert based on the metric, similar to mis-replication. The RPC isn't as useful IMO since it doesn't tell you anything extra, though I would suggest logging a WARN/ERROR when an EC policy is enabled and this condition is true. Are there any existing ways of querying the cluster topology and the enabled EC policies, and then computing this client-side? If not, I think that would be a more generally useful admin interface than the very lightweight new RPC. One code comment: I would prefer some booleans for the MXBean rather than the integer, for additional clarity, since a bare int return type is a bit opaque. In code I'd recommend using an enum or named static constants, but that doesn't work for the MXBean. 
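One way to keep the in-code clarity of an enum while exposing something readable over JMX, per the discussion above, is to name the states in an enum and publish a derived String from the MXBean getter. A hedged sketch with hypothetical names:

```java
// Sketch of the suggestion above: use an enum internally for named states
// instead of a bare int, and expose a readable String (or booleans) over
// the MXBean. Names are illustrative, not the actual HDFS metric names.
public class EnumResultSketch {
  enum TopologyState {
    SUPPORTED,
    NOT_ENOUGH_DATANODES,
    NOT_ENOUGH_RACKS
  }

  // What an MXBean getter could return: a self-describing string that
  // monitoring systems can alert on, rather than an opaque integer code.
  static String toMetricValue(TopologyState state) {
    return state.name();
  }

  public static void main(String[] args) {
    TopologyState s = TopologyState.NOT_ENOUGH_RACKS;
    System.out.println(toMetricValue(s));             // prints NOT_ENOUGH_RACKS
    System.out.println(s == TopologyState.SUPPORTED); // prints false
  }
}
```

A boolean attribute per condition (as suggested above) composes the same way: each would simply test membership in the enum.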
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651092#comment-16651092 ] Xiao Chen commented on HDFS-12946: -- Thanks [~knanasi] for moving this forward and providing patches for demonstration. Good discussion and work. :) I think the intuitive and most common way is via RPC; we have many similar things querying the NN. The difference in this case is that this isn't a direct NN status, but a calculated result based on the NN's topology and the directory's EC policy. Not exposing it on DFSClient is a good idea; I think ECAdmin is enough for this call. fsck looks tidy as a code change but, as you said, could cause usability confusion. It is in a sense closer to the JMX idea, because the work is done via a servlet, hence bypassing the regular RPC. It's a hard call: I feel this EC-related command should live under ECAdmin, but the fsck implementation would be cleaner. It'd be nice if we could keep this in ECAdmin but call fsck to do the work, though that's definitely hackier; I don't think we have done that before. [~andrew.wang], any advice / preference?
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650639#comment-16650639 ] Hadoop QA commented on HDFS-12946: -- -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 34s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 18m 9s | trunk passed |
| +1 | compile | 0m 57s | trunk passed |
| +1 | checkstyle | 0m 59s | trunk passed |
| +1 | mvnsite | 1m 5s | trunk passed |
| +1 | shadedclient | 12m 40s | branch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 1m 57s | hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warning. |
| +1 | javadoc | 0m 52s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 0m 59s | the patch passed |
| +1 | compile | 0m 53s | the patch passed |
| +1 | javac | 0m 53s | the patch passed |
| -0 | checkstyle | 0m 50s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 402 unchanged - 0 fixed = 404 total (was 402) |
| +1 | mvnsite | 0m 58s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 41s | patch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 2m 5s | hadoop-hdfs-project/hadoop-hdfs generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) |
| +1 | javadoc | 0m 48s | the patch passed |
|| Other Tests ||
| -1 | unit | 109m 19s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 27s | The patch does not generate ASF License warnings. |
| | | 164m 47s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
| | Unread field: NamenodeFsck.java:[line 1204] |
| Failed junit tests | hadoop.hdfs.TestEncryptionZones |
| | hadoop.hdfs.web.TestWebHdfsTimeouts |
| | hadoop.hdfs.TestLeaseRecovery2 |
| | hadoop.hdfs.TestEncryptionZonesWithKMS |
| | hadoop.hdfs.protocol.TestLayoutVersion |
| | hadoop.fs.TestHdfsNativeCodeLoader |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-12946 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943967/HDFS-12946.04.fsck.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 1ff503e2cddc 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / fa94d37 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs |
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16650426#comment-16650426 ] Kitti Nanasi commented on HDFS-12946: - Thanks [~xiaochen] for the comments and the summary! I agree that ClientProtocol might not be a good place for this RPC call, because ClientProtocol is responsible for far more important RPC calls than this one, and that holds even more for DFSClient. For the time being I created a patch (v003) which removes the DFSClient changes but keeps the new RPC in ClientProtocol. I also fixed the related test failures. Having this command in fsck is a good idea; my only concern is that the new verify command would be more general than the usual fsck checks (it can't be calculated per directory, but is general across the whole cluster), which could cause some confusion. There are some other solutions in my mind which could work as well:
* It could work like the reconfig command in DFSAdmin, which implements a custom ReconfigurationProtocol. That is good because it doesn't use the existing ClientProtocol, but I don't like it much, because the command requires the address of the namenode as a parameter.
* A JMX call from ECAdmin when the new command is executed. The problem with this is that we have to get the namenode's address (I'm not sure how to do that in case of HA) and read the verify result via JMX.
* It could be a metric inside ECBlockGroupStats (already exposed on the ClientProtocol). The problem here is that the new metric shouldn't be recalculated at every invocation; it should be stored on the namenode, like the other metrics, and recalculated when a policy is enabled or disabled, or when a datanode dies or is added. The last event would be the more difficult one to react to.
Overall I think the fsck way is the best and easiest solution, so I also uploaded an initial patch for it (I will add tests later). Note that I think the return value of the verify method should contain the result message; I plan to change that in a later patch.
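The "return value contains the result message" idea above could be shaped roughly like this (a minimal sketch only; per the commit log the class that eventually landed is ECTopologyVerifierResult, whose exact shape may differ):

```java
// Minimal sketch of a verification result carrying both the boolean
// outcome and a human-readable message for the admin tool to print.
// The class and method names here are illustrative.
public class VerifyResult {
  private final boolean supported;
  private final String resultMessage;

  public VerifyResult(boolean supported, String resultMessage) {
    this.supported = supported;
    this.resultMessage = resultMessage;
  }

  public boolean isSupported() {
    return supported;
  }

  public String getResultMessage() {
    return resultMessage;
  }
}
```

A caller such as ECAdmin could then print the message for the operator and derive its exit code from the boolean.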
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647401#comment-16647401 ] Xiao Chen commented on HDFS-12946: -- Thanks [~knanasi] for the work here! It'd be nice to have a quick summary of offline discussions, for context. I'll try to do it below this time. :)
{quote}[~zvenczel] and Kitti wondered if it would have made sense to do this check in the NN (instead of on the client side via multiple RPCs). This way, enableECPolicy could also be injected with the check, and the NN can expose this via JMX. I think this is a good idea, and I appreciate Kitti's quick turnaround on implementing it.
{quote} Looking at the patch though, I'm a little worried that this new RPC seems very 'light' compared to other RPCs. We should investigate whether there are other possibilities so that we do not 'pollute' ClientNamenodeProtocol. One way I found is that if we do it in fsck, we could call directly into FSN. There may be better alternatives, but I'd need more time to investigate.
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647105#comment-16647105 ] Hadoop QA commented on HDFS-12946: -- -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 15s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 0m 34s | Maven dependency ordering for branch |
| +1 | mvninstall | 20m 40s | trunk passed |
| +1 | compile | 3m 55s | trunk passed |
| +1 | checkstyle | 1m 10s | trunk passed |
| +1 | mvnsite | 2m 29s | trunk passed |
| +1 | shadedclient | 14m 56s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 23s | trunk passed |
| +1 | javadoc | 1m 47s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 12s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 9s | the patch passed |
| +1 | compile | 3m 4s | the patch passed |
| +1 | cc | 3m 4s | the patch passed |
| +1 | javac | 3m 4s | the patch passed |
| -0 | checkstyle | 1m 7s | hadoop-hdfs-project: The patch generated 11 new + 501 unchanged - 0 fixed = 512 total (was 501) |
| +1 | mvnsite | 2m 30s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 43s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 5m 2s | the patch passed |
| +1 | javadoc | 1m 45s | the patch passed |
|| Other Tests ||
| +1 | unit | 1m 33s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 85m 39s | hadoop-hdfs in the patch failed. |
| +1 | unit | 16m 4s | hadoop-hdfs-rbf in the patch passed. |
| +1 | asflicense | 0m 31s | The patch does not generate ASF License warnings. |
| | | 180m 40s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestHdfsNativeCodeLoader |
| | hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier |
| | hadoop.hdfs.TestDFSClientRetries |
| | hadoop.hdfs.TestLeaseRecovery2 |
| | hadoop.hdfs.server.namenode.TestFSNamesystemMBean |
| | hadoop.hdfs.client.impl.TestBlockReaderLocal |
| | hadoop.hdfs.server.namenode.TestNameNodeMXBean |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-12946
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645072#comment-16645072 ] Kitti Nanasi commented on HDFS-12946: - I discussed with [~xiaochen] that I will take this task on. [~ayushtkn], it's a good idea; I agree that it would be useful to link this with the enable and set policy commands. I also want to expose the result of this verification via JMX. I will upload a patch soon.
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16605367#comment-16605367 ] Ayush Saxena commented on HDFS-12946: - Thanx [~xiaochen] for working on this. The new feature looks good and would be very helpful. Just one thing: can we somehow link it with the enable command as well, so that when a policy is enabled we can get to know its status against the present network topology. :)
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594922#comment-16594922 ] genericqa commented on HDFS-12946: -- -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| -1 | patch | 0m 6s | HDFS-12946 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12946 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12902979/HDFS-12946.01.patch |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/24899/console |
| Powered by | Apache Yetus 0.9.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594913#comment-16594913 ] Kitti Nanasi commented on HDFS-12946: - Thanks [~xiaochen] for reporting this and providing the patch! I think this feature will be very useful. The patch looks good to me; I have only one minor comment: the failure scenario tests could check for the actual error code instead of just checking for non-zero.
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407226#comment-16407226 ] genericqa commented on HDFS-12946: -- -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 33s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 15m 4s | trunk passed |
| +1 | compile | 0m 46s | trunk passed |
| +1 | checkstyle | 0m 39s | trunk passed |
| +1 | mvnsite | 0m 50s | trunk passed |
| +1 | shadedclient | 9m 41s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 34s | trunk passed |
| +1 | javadoc | 0m 39s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 0m 50s | the patch passed |
| +1 | compile | 0m 47s | the patch passed |
| +1 | javac | 0m 47s | the patch passed |
| -0 | checkstyle | 0m 37s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 10 new + 95 unchanged - 0 fixed = 105 total (was 95) |
| +1 | mvnsite | 0m 53s | the patch passed |
| +1 | whitespace | 0m 1s | The patch has no whitespace issues. |
| +1 | shadedclient | 9m 29s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 45s | the patch passed |
| +1 | javadoc | 0m 37s | the patch passed |
|| Other Tests ||
| -1 | unit | 127m 16s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 1m 0s | The patch does not generate ASF License warnings. |
| | | 172m 47s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
| | hadoop.hdfs.web.TestWebHdfsTimeouts |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-12946 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12902979/HDFS-12946.01.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 46f46770a004 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / fe224ff |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/23580/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit |
[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies
[ https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298176#comment-16298176 ] genericqa commented on HDFS-12946: -- -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 16s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 15m 23s | trunk passed |
| +1 | compile | 0m 53s | trunk passed |
| +1 | checkstyle | 0m 37s | trunk passed |
| +1 | mvnsite | 0m 57s | trunk passed |
| +1 | shadedclient | 10m 48s | branch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 1m 47s | hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warning. |
| +1 | javadoc | 0m 46s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 0m 53s | the patch passed |
| +1 | compile | 0m 49s | the patch passed |
| +1 | javac | 0m 49s | the patch passed |
| -0 | checkstyle | 0m 33s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 10 new + 92 unchanged - 0 fixed = 102 total (was 92) |
| +1 | mvnsite | 0m 52s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 9m 16s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 47s | the patch passed |
| +1 | javadoc | 0m 44s | the patch passed |
|| Other Tests ||
| -1 | unit | 133m 44s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 19s | The patch does not generate ASF License warnings. |
| | | 180m 6s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
| | hadoop.hdfs.TestDFSRollback |
| | hadoop.hdfs.TestDecommissionWithStriped |
| | hadoop.hdfs.qjournal.server.TestJournalNode |
| | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
| | hadoop.hdfs.TestReplication |
| | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 |
| | hadoop.hdfs.web.TestWebHdfsTimeouts |
| | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
| | hadoop.hdfs.TestDatanodeReport |
| | hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks |
| | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
| | hadoop.hdfs.tools.TestDFSAdminWithHA |
| | hadoop.hdfs.server.namenode.TestLargeDirectoryDelete |
| | hadoop.hdfs.tools.TestDFSAdmin |
| | hadoop.hdfs.TestDatanodeDeath |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
| | hadoop.hdfs.TestDecommission |
| |