[jira] [Commented] (HDFS-15280) Datanode delay random time to report block if BlockManager is busy
[ https://issues.apache.org/jira/browse/HDFS-15280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084556#comment-17084556 ] Hadoop QA commented on HDFS-15280: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 41s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 21s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 44s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 252 unchanged - 2 fixed = 253 total (was 254) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 26s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}123m 58s{color} | {color:red} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}194m 26s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HDFS-15280 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/1370/HDFS-15280.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 56fd789f475b 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / cc5c1da | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/29168/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/29168/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/29168/testReport/ | | Max. process+thread count | 2835 (vs. ul
[jira] [Commented] (HDFS-15278) After execute ‘-setrep 1’, make sure that blocks of the file are dispersed across different datanodes
[ https://issues.apache.org/jira/browse/HDFS-15278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084549#comment-17084549 ] Hadoop QA commented on HDFS-15278: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 44s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 7s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 6s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}126m 36s{color} | {color:red} hadoop-hdfs in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 47s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}194m 47s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestBPOfferService | | | hadoop.hdfs.TestDecommissionWithStriped | | | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes | | | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HDFS-15278 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/1366/HDFS-15278.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux ade8d1bfb84c 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / cc5c1da | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/29167/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/
[jira] [Commented] (HDFS-15280) Datanode delay random time to report block if BlockManager is busy
[ https://issues.apache.org/jira/browse/HDFS-15280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084489#comment-17084489 ] Yang Yun commented on HDFS-15280: - Updated to HDFS-15280.002.patch to address the findbugs warning and the checkstyle issues. > Datanode delay random time to report block if BlockManager is busy > -- > > Key: HDFS-15280 > URL: https://issues.apache.org/jira/browse/HDFS-15280 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15280.001.diff, HDFS-15280.002.patch > > > When many Datanodes are reporting at the same time, the cluster may respond > slowly. Limit the number of concurrent reports. If the BlockManager is busy, it > rejects the new request and the Datanode delays a random amount of time before reporting again. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
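To make the proposed mechanism concrete, a minimal Java sketch of the datanode-side behavior follows: when the NameNode rejects a block report because the BlockManager is saturated, the datanode waits a bounded random time before retrying instead of retrying immediately. This is not the attached patch; the class name, the maxDelayMs bound, and the backoff() entry point are all illustrative.

{code:java}
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

/**
 * Sketch: bounded random backoff before re-sending a block report that the
 * NameNode rejected because its BlockManager was busy.
 */
public class BlockReportBackoff {
  private final long maxDelayMs;

  public BlockReportBackoff(long maxDelayMs) {
    this.maxDelayMs = maxDelayMs;
  }

  /** Uniformly random delay in [0, maxDelayMs] milliseconds. */
  public long nextDelayMs() {
    return ThreadLocalRandom.current().nextLong(maxDelayMs + 1);
  }

  /** Call when the report was rejected; sleeps for a random delay before the retry. */
  public void backoff() throws InterruptedException {
    TimeUnit.MILLISECONDS.sleep(nextDelayMs());
  }
}
{code}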
[jira] [Updated] (HDFS-15280) Datanode delay random time to report block if BlockManager is busy
[ https://issues.apache.org/jira/browse/HDFS-15280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15280: Attachment: HDFS-15280.002.patch Status: Patch Available (was: Open) > Datanode delay random time to report block if BlockManager is busy > -- > > Key: HDFS-15280 > URL: https://issues.apache.org/jira/browse/HDFS-15280 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15280.001.diff, HDFS-15280.002.patch > > > When many Datanodes are reporting at the same time, the cluster may respond > slowly. Limit the concurrent reporting number. If BlockManager is busy, it > rejects new request and the Datanode delay a few random time to report. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15280) Datanode delay random time to report block if BlockManager is busy
[ https://issues.apache.org/jira/browse/HDFS-15280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15280: Status: Open (was: Patch Available) > Datanode delay random time to report block if BlockManager is busy > -- > > Key: HDFS-15280 > URL: https://issues.apache.org/jira/browse/HDFS-15280 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15280.001.diff > > > When many Datanodes are reporting at the same time, the cluster may respond > slowly. Limit the concurrent reporting number. If BlockManager is busy, it > rejects new request and the Datanode delay a few random time to report. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15278) After execute ‘-setrep 1’, make sure that blocks of the file are dispersed across different datanodes
[ https://issues.apache.org/jira/browse/HDFS-15278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084478#comment-17084478 ] Yang Yun commented on HDFS-15278: - Updated to HDFS-15278.002.patch for checkstyle errors. > After execute ‘-setrep 1’, make sure that blocks of the file are dispersed > across different datanodes > - > > Key: HDFS-15278 > URL: https://issues.apache.org/jira/browse/HDFS-15278 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15278.001.patch, HDFS-15278.002.patch > > > After execute ‘-setrep 1’, many of blocks of the file may locate on same > machine. Especially the file is written on one datanode machine. That causes > data hot spots and is hard to fix if this machine is down. > Add a chosen history to make sure that blocks of the file are dispersed > across different datanodes. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
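A rough sketch of the "chosen history" idea from this issue follows, assuming the history is simply a set of datanode ids that already hold the kept replica of earlier blocks of the same file; when reducing replication to 1, a node not yet in the history is preferred so the surviving replicas spread out. The ChosenHistory class and its method names are hypothetical and do not reflect the attached patch.

{code:java}
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/**
 * Sketch: remember which datanodes already hold the kept replica of earlier
 * blocks of a file, so the single replica kept per block after "-setrep 1"
 * lands on a different node whenever possible.
 */
public class ChosenHistory {
  private final Set<String> history = new HashSet<>();

  /** Picks the replica to keep from a non-empty list of candidate datanode ids. */
  public String chooseReplicaToKeep(List<String> replicaNodes) {
    for (String node : replicaNodes) {
      if (!history.contains(node)) {
        history.add(node);          // a node not used for earlier blocks
        return node;
      }
    }
    String fallback = replicaNodes.get(0);  // all candidates were already chosen before
    history.add(fallback);
    return fallback;
  }
}
{code}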
[jira] [Updated] (HDFS-15278) After execute ‘-setrep 1’, make sure that blocks of the file are dispersed across different datanodes
[ https://issues.apache.org/jira/browse/HDFS-15278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15278: Status: Open (was: Patch Available) > After execute ‘-setrep 1’, make sure that blocks of the file are dispersed > across different datanodes > - > > Key: HDFS-15278 > URL: https://issues.apache.org/jira/browse/HDFS-15278 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15278.001.patch, HDFS-15278.002.patch > > > After execute ‘-setrep 1’, many of blocks of the file may locate on same > machine. Especially the file is written on one datanode machine. That causes > data hot spots and is hard to fix if this machine is down. > Add a chosen history to make sure that blocks of the file are dispersed > across different datanodes. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15278) After execute ‘-setrep 1’, make sure that blocks of the file are dispersed across different datanodes
[ https://issues.apache.org/jira/browse/HDFS-15278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15278: Attachment: HDFS-15278.002.patch Status: Patch Available (was: Open) > After execute ‘-setrep 1’, make sure that blocks of the file are dispersed > across different datanodes > - > > Key: HDFS-15278 > URL: https://issues.apache.org/jira/browse/HDFS-15278 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15278.001.patch, HDFS-15278.002.patch > > > After execute ‘-setrep 1’, many of blocks of the file may locate on same > machine. Especially the file is written on one datanode machine. That causes > data hot spots and is hard to fix if this machine is down. > Add a chosen history to make sure that blocks of the file are dispersed > across different datanodes. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15277) Parent directory in the explorer does not support all path formats
[ https://issues.apache.org/jira/browse/HDFS-15277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jianfei Jiang updated HDFS-15277: - Description: In HDFS-15239, a button was added for navigating to the parent folder. However, if the path is not perfectly formatted, it will not resolve the real parent. Paths like the following are accepted for listing files in explorer.html, but they do not produce the correct parent: for /a/b/c/ the parent is currently computed as /a/b/c but should be /a/b; for /a/b//c the parent's parent is currently computed as /a/b but should be /a. Also, if the current path is / or // or ///, the parent button should be disabled as there is no parent. was: In HDFS-15239, a button was added for navigating to the parent folder. However, if the path is not perfectly formatted, it will not resolve the real parent. Paths like the following are accepted for listing files in explorer.html, but they do not produce the correct parent: for /a/b/c/ the parent is currently computed as /a/b/c but should be /a/b; for /a/b//c the parent's parent is currently computed as /a/b/ but should be /a. Also, if the current path is / or // or ///, the parent button should be disabled as there is no parent. > Parent directory in the explorer does not support all path formats > -- > > Key: HDFS-15277 > URL: https://issues.apache.org/jira/browse/HDFS-15277 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Jianfei Jiang >Assignee: Jianfei Jiang >Priority: Minor > Fix For: 3.4.0 > > Attachments: HDFS-15277_001.patch > > > In HDFS-15239, a button was added for navigating to the parent folder. However, if > the path is not perfectly formatted, it will not resolve the real parent. > Paths like the following are accepted for listing files in explorer.html, but > they do not produce the correct parent: > for /a/b/c/ the parent is currently computed as /a/b/c but should be /a/b > for /a/b//c the parent's parent is currently computed as /a/b but should be /a > > Also, if the current path is / or // or ///, the parent button should be > disabled as there is no parent. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
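The path handling described in this issue is summarized by the sketch below: collapse duplicate slashes, drop a trailing slash, and treat the root as having no parent. The real change is in explorer.js (JavaScript); this standalone Java version only illustrates the normalization rules, and the class and method names are invented for the example.

{code:java}
/**
 * Sketch: compute the parent directory of an HDFS browser path, tolerating
 * trailing and duplicate slashes, and report "no parent" for the root.
 */
public final class ParentPath {
  private ParentPath() {}

  /** Returns the parent of the given path, or null when the path is the root. */
  public static String parentOf(String path) {
    // Collapse duplicate slashes and drop a trailing one: "/a/b//c/" -> "/a/b/c".
    String normalized = path.replaceAll("/+", "/");
    if (normalized.length() > 1 && normalized.endsWith("/")) {
      normalized = normalized.substring(0, normalized.length() - 1);
    }
    if (normalized.equals("/")) {
      return null;  // "/", "//", "///": the parent button should be disabled
    }
    int idx = normalized.lastIndexOf('/');
    return idx == 0 ? "/" : normalized.substring(0, idx);
  }
}
{code}

With this, parentOf("/a/b/c/") yields /a/b, and applying it twice to /a/b//c yields /a, matching the expected results listed above.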
[jira] [Created] (HDFS-15281) ZKFC ignores dfs.namenode.rpc-bind-host and uses dfs.namenode.rpc-address to bind to host address
Dhiraj Hegde created HDFS-15281: --- Summary: ZKFC ignores dfs.namenode.rpc-bind-host and uses dfs.namenode.rpc-address to bind to host address Key: HDFS-15281 URL: https://issues.apache.org/jira/browse/HDFS-15281 Project: Hadoop HDFS Issue Type: Bug Components: namenode Affects Versions: 3.1.3, 3.2.1, 2.9.2, 2.10.0 Reporter: Dhiraj Hegde Fix For: 2.9.3, 3.1.4, 3.2.2, 2.10.1 When ZKFC binds its RPC server to a hostname for listening, it uses the host value specified by dfs.namenode.rpc-address. It should instead use dfs.namenode.rpc-bind-host if that value has been set. If the value has not been set, then it can fall back on dfs.namenode.rpc-address as it currently does. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
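A minimal sketch of the intended fallback is shown below, using plain strings rather than Hadoop's Configuration class so it stands alone: the bind-host value wins when set, otherwise the host part of the rpc-address value is used. The class and method names are illustrative, and IPv6 literals are not handled.

{code:java}
/**
 * Sketch: choose the host the ZKFC RPC server should listen on, preferring the
 * bind-host setting and falling back to the host part of the rpc-address.
 */
public final class ZkfcBindAddress {
  private ZkfcBindAddress() {}

  /**
   * @param bindHostValue   value of dfs.namenode.rpc-bind-host (may be null or empty)
   * @param rpcAddressValue value of dfs.namenode.rpc-address, as "host:port"
   */
  public static String chooseBindHost(String bindHostValue, String rpcAddressValue) {
    if (bindHostValue != null && !bindHostValue.trim().isEmpty()) {
      return bindHostValue.trim();   // explicit bind host wins
    }
    int colon = rpcAddressValue.lastIndexOf(':');
    return colon < 0 ? rpcAddressValue : rpcAddressValue.substring(0, colon);
  }
}
{code}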
[jira] [Commented] (HDFS-15266) Add missing DFSOps Statistics in WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-15266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084373#comment-17084373 ] Íñigo Goiri commented on HDFS-15266: +1 on [^HDFS-15266-02.patch]. > Add missing DFSOps Statistics in WebHDFS > > > Key: HDFS-15266 > URL: https://issues.apache.org/jira/browse/HDFS-15266 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-15266-01.patch, HDFS-15266-02.patch > > > Couple of operations doesn't increment the count of number of read/write ops > and DFSOpsCountStatistics > like : getStoragePolicy -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
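For readers unfamiliar with the statistics being discussed, the sketch below shows the general per-operation counter pattern that DFSOpsCountStatistics implements; a call path such as getStoragePolicy over WebHDFS that never invokes the increment shows up as a counter stuck at zero. This is a generic illustration, not the actual Hadoop class or its OpType enum.

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.LongAdder;

/**
 * Sketch: per-operation counters in the spirit of DFSOpsCountStatistics.
 * Every filesystem call is expected to bump its counter; a call path that
 * forgets to do so (the bug described above) leaves its counter at zero.
 */
public class OpCountStatistics {
  private final ConcurrentMap<String, LongAdder> counters = new ConcurrentHashMap<>();

  public void incrementOpCounter(String op) {
    counters.computeIfAbsent(op, k -> new LongAdder()).increment();
  }

  public long getOpCount(String op) {
    LongAdder c = counters.get(op);
    return c == null ? 0L : c.sum();
  }
}
{code}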
[jira] [Commented] (HDFS-15266) Add missing DFSOps Statistics in WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-15266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084336#comment-17084336 ] Ayush Saxena commented on HDFS-15266: - Thanx [~elgoiri] for the review. I checked {{TestConfiguredFailoverProxyProvider}}; it seems unrelated and is failing in a couple of builds. Will raise a separate Jira to track it. > Add missing DFSOps Statistics in WebHDFS > > > Key: HDFS-15266 > URL: https://issues.apache.org/jira/browse/HDFS-15266 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-15266-01.patch, HDFS-15266-02.patch > > > Couple of operations doesn't increment the count of number of read/write ops > and DFSOpsCountStatistics > like : getStoragePolicy -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14646) Standby NameNode should not upload fsimage to an inappropriate NameNode.
[ https://issues.apache.org/jira/browse/HDFS-14646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084321#comment-17084321 ] Hadoop QA commented on HDFS-14646: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 43s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 56s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 73 unchanged - 12 fixed = 73 total (was 85) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 51s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 55s{color} | {color:red} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}173m 4s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.TestDisableConnCache | | | hadoop.hdfs.server.namenode.TestNamenodeRetryCache | | | hadoop.hdfs.TestRollingUpgrade | | | hadoop.hdfs.TestPread | | | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy | | | hadoop.hdfs.TestDecommission | | | hadoop.hdfs.TestFileChecksumCompositeCrc | | | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes | | | hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData | | | hadoop.hdfs.TestReadStripedFileWithDNFailure | | | hadoop.hdfs.TestFileChecksum | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HDFS-14646 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/1331/HDFS-14646.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbug
[jira] [Commented] (HDFS-15255) Consider StorageType when DatanodeManager#sortLocatedBlock()
[ https://issues.apache.org/jira/browse/HDFS-15255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084317#comment-17084317 ] Hadoop QA commented on HDFS-15255: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 36s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 21m 10s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 22s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 22s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 37s{color} | {color:orange} root: The patch generated 1 new + 590 unchanged - 0 fixed = 591 total (was 590) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 19s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 12s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 16s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 12s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 16s{color} | {color:red} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 29s{color} | {color:red} hadoop-hdfs in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 43s{color} | {color:red} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 3s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}259m 1s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Call to org.apache.hadoop.hdfs.protocol.DatanodeInfoWithStorage.equals(org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor) in org.apache.hadoop.hdfs.server.namenode.CacheManager.setCachedLocations(LocatedBlock) At CacheM
[jira] [Commented] (HDFS-15266) Add missing DFSOps Statistics in WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-15266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084285#comment-17084285 ] Íñigo Goiri commented on HDFS-15266: I'm guessing TestConfiguredFailoverProxyProvider is not related but do you mind checking? > Add missing DFSOps Statistics in WebHDFS > > > Key: HDFS-15266 > URL: https://issues.apache.org/jira/browse/HDFS-15266 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-15266-01.patch, HDFS-15266-02.patch > > > Couple of operations doesn't increment the count of number of read/write ops > and DFSOpsCountStatistics > like : getStoragePolicy -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15275) HttpFS: Response of Create was not correct with noredirect and data are true
[ https://issues.apache.org/jira/browse/HDFS-15275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084284#comment-17084284 ] Íñigo Goiri commented on HDFS-15275: At some point we should fix all this indentation mess but for now let's leave it as is. +1 on [^HDFS-15275.002.patch]. > HttpFS: Response of Create was not correct with noredirect and data are true > > > Key: HDFS-15275 > URL: https://issues.apache.org/jira/browse/HDFS-15275 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: hemanthboyina >Assignee: hemanthboyina >Priority: Major > Attachments: HDFS-15275.001.patch, HDFS-15275.002.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15277) Parent directory in the explorer does not support all path formats
[ https://issues.apache.org/jira/browse/HDFS-15277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084283#comment-17084283 ] Hudson commented on HDFS-15277: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18149 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18149/]) HDFS-15277. Parent directory in the explorer does not support all path (ayushsaxena: rev cc5c1da7c1c29618b5df785d9f1d7a0b737eced1) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/explorer.js > Parent directory in the explorer does not support all path formats > -- > > Key: HDFS-15277 > URL: https://issues.apache.org/jira/browse/HDFS-15277 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Jianfei Jiang >Assignee: Jianfei Jiang >Priority: Minor > Fix For: 3.4.0 > > Attachments: HDFS-15277_001.patch > > > In HDFS-15239, a new button can click to parent folder is add. However, if > the path is not formatted perfectly, it will not get the real parent. > Path like follows are supported to get the file list in explorer.html, but > they do not get the correct parent: > /a/b/c/ parent will get /a/b/c now which should be /a/b > /a/b//c parent's parent will get /a/b/ now which should be /a > > Otherwise, if current path is / or // or ///. The parent button should be > unabled as it has no parent. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15277) Parent directory in the explorer does not support all path formats
[ https://issues.apache.org/jira/browse/HDFS-15277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-15277: Fix Version/s: 3.4.0 Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) Committed to trunk. Thanx [~jiangjianfei] for the contribution and [~elgoiri] for the review!!! > Parent directory in the explorer does not support all path formats > -- > > Key: HDFS-15277 > URL: https://issues.apache.org/jira/browse/HDFS-15277 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Jianfei Jiang >Assignee: Jianfei Jiang >Priority: Minor > Fix For: 3.4.0 > > Attachments: HDFS-15277_001.patch > > > In HDFS-15239, a new button can click to parent folder is add. However, if > the path is not formatted perfectly, it will not get the real parent. > Path like follows are supported to get the file list in explorer.html, but > they do not get the correct parent: > /a/b/c/ parent will get /a/b/c now which should be /a/b > /a/b//c parent's parent will get /a/b/ now which should be /a > > Otherwise, if current path is / or // or ///. The parent button should be > unabled as it has no parent. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15277) Parent directory in the explorer does not support all path formats
[ https://issues.apache.org/jira/browse/HDFS-15277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084219#comment-17084219 ] Íñigo Goiri commented on HDFS-15277: Thanks for the fix! +1 on [^HDFS-15277_001.patch]. > Parent directory in the explorer does not support all path formats > -- > > Key: HDFS-15277 > URL: https://issues.apache.org/jira/browse/HDFS-15277 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Jianfei Jiang >Assignee: Jianfei Jiang >Priority: Minor > Attachments: HDFS-15277_001.patch > > > In HDFS-15239, a new button can click to parent folder is add. However, if > the path is not formatted perfectly, it will not get the real parent. > Path like follows are supported to get the file list in explorer.html, but > they do not get the correct parent: > /a/b/c/ parent will get /a/b/c now which should be /a/b > /a/b//c parent's parent will get /a/b/ now which should be /a > > Otherwise, if current path is / or // or ///. The parent button should be > unabled as it has no parent. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14646) Standby NameNode should not upload fsimage to an inappropriate NameNode.
[ https://issues.apache.org/jira/browse/HDFS-14646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084209#comment-17084209 ] hemanthboyina commented on HDFS-14646: -- we have faced the same issue in our large cluster , updated the patch please review [~csun] [~xkrogen] > Standby NameNode should not upload fsimage to an inappropriate NameNode. > > > Key: HDFS-14646 > URL: https://issues.apache.org/jira/browse/HDFS-14646 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.1.2 >Reporter: Xudong Cao >Assignee: Xudong Cao >Priority: Major > Labels: multi-sbnn > Attachments: HDFS-14646.000.patch, HDFS-14646.001.patch, > HDFS-14646.002.patch > > > *Problem Description:* > In the multi-NameNode scenario, when a SNN uploads a FsImage, it will put > the image to all other NNs (whether the peer NN is an ANN or not), and even > if the peer NN immediately replies an error (such as > TransferResult.NOT_ACTIVE_NAMENODE_FAILURE, TransferResult > .OLD_TRANSACTION_ID_FAILURE, etc.), the local SNN will not terminate the put > process immediately, but will put the FsImage completely to the peer NN, and > will not read the peer NN's reply until the put is completed. > Depending on the version of Jetty, this behavior can lead to different > consequences : > *1.Under Hadoop 2.7.2 (with Jetty 6.1.26)* > After peer NN called HttpServletResponse.sendError(), the underlying TCP > connection will still be established, and the data SNN sent will be read by > Jetty framework itself in the peer NN side, so the SNN will insignificantly > send the FsImage to the peer NN continuously, causing a waste of time and > bandwidth. In a relatively large HDFS cluster, the size of FsImage can often > reach about 30GB, This is indeed a big waste. > *2.Under newest release-3.2.0-RC1 (with Jetty 9.3.24) and trunk (with Jetty > 9.3.27)* > After peer NN called HttpServletResponse.sendError(), the underlying TCP > connection will be auto closed, and then SNN will directly get an "Error > writing request body to server" exception, as below, note this test needs a > relatively big FSImage (e.g. 10MB level): > {code:java} > 2019-08-17 03:59:25,413 INFO namenode.TransferFsImage: Sending fileName: > /tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: > 9864721. Sent total: 524288 bytes. Size of last segment intended to send: > 4096 bytes. 
> java.io.IOException: Error writing request body to server > at > sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587) > at > sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570) > at > org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396) > at > org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340) > at > org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImage(TransferFsImage.java:314) > at > org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImageFromStorage(TransferFsImage.java:249) > at > org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:277) > at > org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:272) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > 2019-08-17 03:59:25,422 INFO namenode.TransferFsImage: Sending fileName: > /tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: > 9864721. Sent total: 851968 bytes. Size of last segment intended to send: > 4096 bytes. > java.io.IOException: Error writing request body to server > at > sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587) > at > sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570) > at > org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396) > at > org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340) > {code} > > *Solution:* > A standby NameNode should not upload fsimage to an inappropriate NameNode, > when he plans to put a FsImage to the peer NN, he need to check whether he > really need to put it at this time. > In detail, local SNN should establish an HTTP connection with the peer NN, > send the put request,
[jira] [Updated] (HDFS-14646) Standby NameNode should not upload fsimage to an inappropriate NameNode.
[ https://issues.apache.org/jira/browse/HDFS-14646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hemanthboyina updated HDFS-14646: - Attachment: HDFS-14646.002.patch > Standby NameNode should not upload fsimage to an inappropriate NameNode. > > > Key: HDFS-14646 > URL: https://issues.apache.org/jira/browse/HDFS-14646 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.1.2 >Reporter: Xudong Cao >Assignee: Xudong Cao >Priority: Major > Labels: multi-sbnn > Attachments: HDFS-14646.000.patch, HDFS-14646.001.patch, > HDFS-14646.002.patch > > > *Problem Description:* > In the multi-NameNode scenario, when a SNN uploads a FsImage, it will put > the image to all other NNs (whether the peer NN is an ANN or not), and even > if the peer NN immediately replies an error (such as > TransferResult.NOT_ACTIVE_NAMENODE_FAILURE, TransferResult > .OLD_TRANSACTION_ID_FAILURE, etc.), the local SNN will not terminate the put > process immediately, but will put the FsImage completely to the peer NN, and > will not read the peer NN's reply until the put is completed. > Depending on the version of Jetty, this behavior can lead to different > consequences : > *1.Under Hadoop 2.7.2 (with Jetty 6.1.26)* > After peer NN called HttpServletResponse.sendError(), the underlying TCP > connection will still be established, and the data SNN sent will be read by > Jetty framework itself in the peer NN side, so the SNN will insignificantly > send the FsImage to the peer NN continuously, causing a waste of time and > bandwidth. In a relatively large HDFS cluster, the size of FsImage can often > reach about 30GB, This is indeed a big waste. > *2.Under newest release-3.2.0-RC1 (with Jetty 9.3.24) and trunk (with Jetty > 9.3.27)* > After peer NN called HttpServletResponse.sendError(), the underlying TCP > connection will be auto closed, and then SNN will directly get an "Error > writing request body to server" exception, as below, note this test needs a > relatively big FSImage (e.g. 10MB level): > {code:java} > 2019-08-17 03:59:25,413 INFO namenode.TransferFsImage: Sending fileName: > /tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: > 9864721. Sent total: 524288 bytes. Size of last segment intended to send: > 4096 bytes. 
> java.io.IOException: Error writing request body to server > at > sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587) > at > sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570) > at > org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396) > at > org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340) > at > org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImage(TransferFsImage.java:314) > at > org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImageFromStorage(TransferFsImage.java:249) > at > org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:277) > at > org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:272) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > 2019-08-17 03:59:25,422 INFO namenode.TransferFsImage: Sending fileName: > /tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: > 9864721. Sent total: 851968 bytes. Size of last segment intended to send: > 4096 bytes. > java.io.IOException: Error writing request body to server > at > sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587) > at > sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570) > at > org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396) > at > org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340) > {code} > > *Solution:* > A standby NameNode should not upload fsimage to an inappropriate NameNode, > when he plans to put a FsImage to the peer NN, he need to check whether he > really need to put it at this time. > In detail, local SNN should establish an HTTP connection with the peer NN, > send the put request, and then immediately read the response (this is the key > point). If the peer NN does not reply an HTTP_OK, it means the lo
[jira] [Commented] (HDFS-15280) Datanode delay random time to report block if BlockManager is busy
[ https://issues.apache.org/jira/browse/HDFS-15280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084169#comment-17084169 ] Hadoop QA commented on HDFS-15280: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 56s{color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 6s{color} | {color:blue} The patch file was not named according to hadoop's naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute for instructions. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 1s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 40s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 252 unchanged - 2 fixed = 255 total (was 254) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 19s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 56s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}113m 27s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}178m 41s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Useless condition:success == true at this point At BPServiceActor.java:[line 446] | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HDFS-15280 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/1303/HDFS-15280.001.diff | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 513d5621f9ca 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 4db598e | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | checks
[jira] [Commented] (HDFS-15255) Consider StorageType when DatanodeManager#sortLocatedBlock()
[ https://issues.apache.org/jira/browse/HDFS-15255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084138#comment-17084138 ] Lisheng Sun commented on HDFS-15255: Thank [~sodonnell] for your suggestion. According to your comment, I updated the 005 patch. Let's see this result in Jenkins:D > Consider StorageType when DatanodeManager#sortLocatedBlock() > > > Key: HDFS-15255 > URL: https://issues.apache.org/jira/browse/HDFS-15255 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lisheng Sun >Assignee: Lisheng Sun >Priority: Major > Fix For: 3.4.0 > > Attachments: HDFS-15255.001.patch, HDFS-15255.002.patch, > HDFS-15255.003.patch, HDFS-15255.004.patch, HDFS-15255.005.patch > > > When only one replica of a block is SDD, the others are HDD. > When the client reads the data, the current logic is that it considers the > distance between the client and the dn. I think it should also consider the > StorageType of the replica. Priority to return fast StorageType node when the > distance is same. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
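The sorting idea in this issue can be illustrated with the small comparator below: replicas are ordered primarily by network distance and, when distances tie, by a storage-speed rank so an SSD replica is returned ahead of an HDD one. The Replica record, the local StorageType enum, and the ordinal-based ranking are assumptions made for the example; they are not the DatanodeManager implementation.

{code:java}
import java.util.Arrays;
import java.util.Comparator;

/**
 * Sketch: sort block replicas by network distance first and, on ties,
 * by a storage-speed rank so faster media (e.g. SSD) come before HDD.
 */
public class StorageTypeAwareSort {
  enum StorageType { RAM_DISK, SSD, DISK, ARCHIVE }  // ordered fastest to slowest

  static class Replica {
    final String node;
    final int distance;       // network distance from the reading client
    final StorageType type;
    Replica(String node, int distance, StorageType type) {
      this.node = node; this.distance = distance; this.type = type;
    }
  }

  static final Comparator<Replica> BY_DISTANCE_THEN_STORAGE =
      Comparator.<Replica>comparingInt(r -> r.distance)
          .thenComparingInt(r -> r.type.ordinal());

  public static void main(String[] args) {
    Replica[] block = {
        new Replica("dn1", 2, StorageType.DISK),
        new Replica("dn2", 2, StorageType.SSD),
        new Replica("dn3", 4, StorageType.SSD),
    };
    Arrays.sort(block, BY_DISTANCE_THEN_STORAGE);
    System.out.println(block[0].node);  // dn2: same distance as dn1, but SSD
  }
}
{code}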
[jira] [Updated] (HDFS-15255) Consider StorageType when DatanodeManager#sortLocatedBlock()
[ https://issues.apache.org/jira/browse/HDFS-15255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lisheng Sun updated HDFS-15255: --- Attachment: HDFS-15255.005.patch > Consider StorageType when DatanodeManager#sortLocatedBlock() > > > Key: HDFS-15255 > URL: https://issues.apache.org/jira/browse/HDFS-15255 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lisheng Sun >Assignee: Lisheng Sun >Priority: Major > Fix For: 3.4.0 > > Attachments: HDFS-15255.001.patch, HDFS-15255.002.patch, > HDFS-15255.003.patch, HDFS-15255.004.patch, HDFS-15255.005.patch > > > When only one replica of a block is SDD, the others are HDD. > When the client reads the data, the current logic is that it considers the > distance between the client and the dn. I think it should also consider the > StorageType of the replica. Priority to return fast StorageType node when the > distance is same. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15255) Consider StorageType when DatanodeManager#sortLocatedBlock()
[ https://issues.apache.org/jira/browse/HDFS-15255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084043#comment-17084043 ] Stephen O'Donnell commented on HDFS-15255: -- We still need to address the Find Bugs warning somehow. I think we should revert the earlier change I suggested, as that did not help: {code} for (DatanodeInfo loc : block.getLocations()) { to for (DatanodeInfoWithStorage loc : block.getLocations()) { {code} Then try casting the datanode variable to a DatanodeInfo in this line: {code} if (loc.equals(datanode)) { {code} For the log message added, we should ideally mention the actual parameter names rather than the local variable names as that would give the user a better idea of what is wrong. This log message will be logged for every getBlockLocations call I think. While it highlights a problem, and would only be logged if there is a problem, it would create a lot of noise in the logs. In saying that, if we only log it once, then it may be missed. I think I would prefer to log it just once to avoid the log noise. We could do that in the DatanodeManager constructor with a message like: {code} LOG.warn("{} and {} are incompatible and only one can be enabled. Both are currently enabled.", DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_KEY, DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERLOAD_KEY {code} There is also one checkstyle warning for an unused import. > Consider StorageType when DatanodeManager#sortLocatedBlock() > > > Key: HDFS-15255 > URL: https://issues.apache.org/jira/browse/HDFS-15255 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lisheng Sun >Assignee: Lisheng Sun >Priority: Major > Fix For: 3.4.0 > > Attachments: HDFS-15255.001.patch, HDFS-15255.002.patch, > HDFS-15255.003.patch, HDFS-15255.004.patch > > > When only one replica of a block is SDD, the others are HDD. > When the client reads the data, the current logic is that it considers the > distance between the client and the dn. I think it should also consider the > StorageType of the replica. Priority to return fast StorageType node when the > distance is same. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15280) Datanode delay random time to report block if BlockManager is busy
[ https://issues.apache.org/jira/browse/HDFS-15280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15280: Attachment: HDFS-15280.001.diff Status: Patch Available (was: Open) > Datanode delay random time to report block if BlockManager is busy > -- > > Key: HDFS-15280 > URL: https://issues.apache.org/jira/browse/HDFS-15280 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode > Reporter: Yang Yun > Assignee: Yang Yun > Priority: Minor > Attachments: HDFS-15280.001.diff > > > When many Datanodes are reporting at the same time, the cluster may respond > slowly. Limit the number of concurrent block reports: if the BlockManager is > busy, it rejects the new request and the Datanode waits for a random delay > before reporting again. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-15280) Datanode delay random time to report block if BlockManager is busy
Yang Yun created HDFS-15280: --- Summary: Datanode delay random time to report block if BlockManager is busy Key: HDFS-15280 URL: https://issues.apache.org/jira/browse/HDFS-15280 Project: Hadoop HDFS Issue Type: Improvement Components: namenode Reporter: Yang Yun Assignee: Yang Yun When many Datanodes are reporting at the same time, the cluster may respond slowly. Limit the number of concurrent block reports: if the BlockManager is busy, it rejects the new request and the Datanode waits for a random delay before reporting again. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
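A self-contained sketch of the Datanode-side behaviour described above: if the report is rejected because the BlockManager is busy, wait a random delay and retry. The trySendReport() helper and the 5-second upper bound are assumptions for illustration, not part of the actual patch.
{code}
import java.util.concurrent.ThreadLocalRandom;

// Sketch only: retry the block report after a random back-off while the
// NameNode side signals that the BlockManager is busy.
public class DelayedReportSketch {
  static final long MAX_DELAY_MS = 5_000;   // assumed upper bound

  public static void main(String[] args) throws InterruptedException {
    while (!trySendReport()) {
      long delay = ThreadLocalRandom.current().nextLong(MAX_DELAY_MS);
      System.out.println("BlockManager busy, retrying in " + delay + " ms");
      Thread.sleep(delay);
    }
    System.out.println("block report accepted");
  }

  // Placeholder for the real RPC; here it succeeds with 50% probability.
  static boolean trySendReport() {
    return ThreadLocalRandom.current().nextBoolean();
  }
}
{code}
The random, rather than fixed, delay spreads the retries of many Datanodes over time so they do not all hit the NameNode again at the same moment.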
[jira] [Commented] (HDFS-15140) Replace FoldedTreeSet in Datanode with SortedSet or TreeMap
[ https://issues.apache.org/jira/browse/HDFS-15140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083996#comment-17083996 ] HuangTao commented on HDFS-15140: - About the patch, I found two points that need to change {code} ReplicaInfo orig = b.getOriginalReplica(); - builders.get(volStorageID).add(orig); + unsortedBlocks.get(volStorageID).add(b); # replace b with orig {code} {code} +unsortedBlocks.put( +v.getStorageID(), new ArrayList(131072)); # should use a named constant {code} > Replace FoldedTreeSet in Datanode with SortedSet or TreeMap > --- > > Key: HDFS-15140 > URL: https://issues.apache.org/jira/browse/HDFS-15140 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode > Affects Versions: 3.3.0 > Reporter: Stephen O'Donnell > Assignee: Stephen O'Donnell > Priority: Major > Attachments: HDFS-15140.001.patch, HDFS-15140.002.patch > > > Based on the problems discussed in HDFS-15131, I would like to explore > replacing the FoldedTreeSet structure in the datanode with a built-in Java > equivalent - either SortedSet or TreeMap. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
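The second review point can be illustrated with a small sketch that replaces the magic number with a named constant; the class, constant, and map names here are placeholders rather than the actual code in the patch.
{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch only: pull the initial-capacity literal 131072 into a named constant
// so its intent is documented in one place.
public class NamedCapacitySketch {
  // Expected upper bound on replicas per volume; value taken from the patch.
  private static final int BLOCKS_PER_VOLUME_ESTIMATE = 131072;

  public static void main(String[] args) {
    Map<String, List<Object>> unsortedBlocks = new HashMap<>();
    String storageId = "DS-volume-1";   // placeholder storage ID
    unsortedBlocks.put(storageId,
        new ArrayList<>(BLOCKS_PER_VOLUME_ESTIMATE));
    System.out.println("initialized block list for " + storageId);
  }
}
{code}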
[jira] [Commented] (HDFS-15243) Child directory should not be deleted or renamed if parent directory is a protected directory
[ https://issues.apache.org/jira/browse/HDFS-15243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083991#comment-17083991 ] Hadoop QA commented on HDFS-15243: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 37s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 49s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 19s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 25s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 10s{color} | {color:orange} root: The patch generated 1 new + 672 unchanged - 0 fixed = 673 total (was 672) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 17s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 22s{color} | {color:red} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 52s{color} | {color:red} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 59s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}213m 23s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.metrics2.source.TestJvmMetrics | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeUUID | | | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HDFS-15243 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/1271/HDFS-15243.004.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 0e9f1c1
[jira] [Commented] (HDFS-15243) Child directory should not be deleted or renamed if parent directory is a protected directory
[ https://issues.apache.org/jira/browse/HDFS-15243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083986#comment-17083986 ] Ayush Saxena commented on HDFS-15243: - If I remember correctly, there was a discussion that {{fs.protected.directories}} should not be in Hadoop-Common since it is HDFS specific, but it couldn't be moved to HDFS due to compatibility issues. Since we are now introducing a new configuration that we already know is HDFS specific, there is no point putting it in Common; the object stores and other filesystems won't respect it. I don't think there would be any problem if this is kept in HDFS; we can document it properly so it doesn't go unnoticed. > Child directory should not be deleted or renamed if parent directory is a > protected directory > - > > Key: HDFS-15243 > URL: https://issues.apache.org/jira/browse/HDFS-15243 > Project: Hadoop HDFS > Issue Type: Bug > Components: 3.1.1 > Affects Versions: 3.1.1 > Reporter: liuyanyu > Assignee: liuyanyu > Priority: Major > Attachments: HDFS-15243.001.patch, HDFS-15243.002.patch, > HDFS-15243.003.patch, HDFS-15243.004.patch, image-2020-03-28-09-23-31-335.png > > > HDFS-8983 added fs.protected.directories to support protected directories on > the NameNode. But as I tested, when a parent directory (eg /testA) is set as a > protected directory, the child directory (eg /testA/testB) can still be > deleted or renamed. We protect a directory mainly to protect the data under > it, so I think the child directory should not be deleted or renamed if the > parent directory is a protected directory. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15243) Child directory should not be deleted or renamed if parent directory is a protected directory
[ https://issues.apache.org/jira/browse/HDFS-15243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083976#comment-17083976 ] liuyanyu commented on HDFS-15243: - Thanks [~ayushtkn] for reviewing, I overlooked the compatibility issue. But fs.protected.directories and fs.protected.subdirectories.enable are configurations for the same feature, so I think those two configurations should be kept together in the same file. Maybe we should put them both into Common rather than HDFS. > Child directory should not be deleted or renamed if parent directory is a > protected directory > - > > Key: HDFS-15243 > URL: https://issues.apache.org/jira/browse/HDFS-15243 > Project: Hadoop HDFS > Issue Type: Bug > Components: 3.1.1 > Affects Versions: 3.1.1 > Reporter: liuyanyu > Assignee: liuyanyu > Priority: Major > Attachments: HDFS-15243.001.patch, HDFS-15243.002.patch, > HDFS-15243.003.patch, HDFS-15243.004.patch, image-2020-03-28-09-23-31-335.png > > > HDFS-8983 added fs.protected.directories to support protected directories on > the NameNode. But as I tested, when a parent directory (eg /testA) is set as a > protected directory, the child directory (eg /testA/testB) can still be > deleted or renamed. We protect a directory mainly to protect the data under > it, so I think the child directory should not be deleted or renamed if the > parent directory is a protected directory. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15278) After execute ‘-setrep 1’, make sure that blocks of the file are dispersed across different datanodes
[ https://issues.apache.org/jira/browse/HDFS-15278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083969#comment-17083969 ] Hadoop QA commented on HDFS-15278: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 25m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 59s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 52s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 705 unchanged - 0 fixed = 711 total (was 705) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 55s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 91m 1s{color} | {color:green} hadoop-hdfs in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}176m 43s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HDFS-15278 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/1268/HDFS-15278.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 2d4bff616b57 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 55fcbcb | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/29162/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/29162/testReport/ | | Max. process+thread count | 3981 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-h
[jira] [Commented] (HDFS-15243) Child directory should not be deleted or renamed if parent directory is a protected directory
[ https://issues.apache.org/jira/browse/HDFS-15243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083942#comment-17083942 ] Ayush Saxena commented on HDFS-15243: - Thanx [~rain_lyy] for the patch. It seems you have moved {{fs.protected.directories}} from Common to HDFS; that isn't required. Just add your new configuration in HDFS, since moving a configuration from Common to HDFS will create compatibility issues. The new configuration name should start with dfs, not fs. > Child directory should not be deleted or renamed if parent directory is a > protected directory > - > > Key: HDFS-15243 > URL: https://issues.apache.org/jira/browse/HDFS-15243 > Project: Hadoop HDFS > Issue Type: Bug > Components: 3.1.1 > Affects Versions: 3.1.1 > Reporter: liuyanyu > Assignee: liuyanyu > Priority: Major > Attachments: HDFS-15243.001.patch, HDFS-15243.002.patch, > HDFS-15243.003.patch, HDFS-15243.004.patch, image-2020-03-28-09-23-31-335.png > > > HDFS-8983 added fs.protected.directories to support protected directories on > the NameNode. But as I tested, when a parent directory (eg /testA) is set as a > protected directory, the child directory (eg /testA/testB) can still be > deleted or renamed. We protect a directory mainly to protect the data under > it, so I think the child directory should not be deleted or renamed if the > parent directory is a protected directory. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
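For reference, the behaviour requested in this issue can be sketched with a simple ancestor walk over path strings. The real NameNode code resolves paths into inodes rather than comparing strings, so this is only an illustration of the idea, with placeholder names throughout.
{code}
import java.util.Set;

// Sketch only: a delete or rename of /a/b/c is rejected if any ancestor
// (/a or /a/b) is in the protected-directories set.
public class ProtectedAncestorSketch {
  static boolean hasProtectedAncestor(String path, Set<String> protectedDirs) {
    String current = path;
    while (current.lastIndexOf('/') > 0) {
      current = current.substring(0, current.lastIndexOf('/'));
      if (protectedDirs.contains(current)) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    Set<String> protectedDirs = Set.of("/testA");
    System.out.println(hasProtectedAncestor("/testA/testB", protectedDirs)); // true
    System.out.println(hasProtectedAncestor("/other/dir", protectedDirs));   // false
  }
}
{code}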
[jira] [Commented] (HDFS-15277) Parent directory in the explorer does not support all path formats
[ https://issues.apache.org/jira/browse/HDFS-15277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083939#comment-17083939 ] Ayush Saxena commented on HDFS-15277: - Thanx [~jiangjianfei] for the report. I tried this on both the NN and Router UI. Both places seem to be working as expected. +1, will push by EOD if no further comments!!! > Parent directory in the explorer does not support all path formats > -- > > Key: HDFS-15277 > URL: https://issues.apache.org/jira/browse/HDFS-15277 > Project: Hadoop HDFS > Issue Type: Improvement > Reporter: Jianfei Jiang > Assignee: Jianfei Jiang > Priority: Minor > Attachments: HDFS-15277_001.patch > > > In HDFS-15239, a new button was added to navigate to the parent folder. > However, if the path is not perfectly formatted, it will not resolve the real > parent. Paths like the following are accepted when listing files in > explorer.html, but they do not yield the correct parent: > for /a/b/c/ the parent is currently /a/b/c, but it should be /a/b > for /a/b//c the parent's parent is currently /a/b/, but it should be /a > > Also, if the current path is / or // or ///, the parent button should be > disabled as it has no parent. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
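The parent computation the report describes can be sketched as follows. This is a self-contained Java illustration of the intended behaviour (collapse duplicate slashes, drop the trailing slash, treat the root as having no parent), not the actual JavaScript in explorer.html.
{code}
// Sketch only: normalize the path before taking its parent, and signal "no
// parent" for the root so the UI can disable the button.
public class ParentPathSketch {
  static String parent(String path) {
    String normalized = path.replaceAll("/+", "/");           // /a/b//c -> /a/b/c
    if (normalized.length() > 1 && normalized.endsWith("/")) {
      normalized = normalized.substring(0, normalized.length() - 1);
    }
    if (normalized.equals("/")) {
      return null;                                             // root: no parent
    }
    int idx = normalized.lastIndexOf('/');
    return idx == 0 ? "/" : normalized.substring(0, idx);
  }

  public static void main(String[] args) {
    System.out.println(parent("/a/b/c/"));  // /a/b
    System.out.println(parent("/a/b//c"));  // /a/b
    System.out.println(parent("///"));      // null
  }
}
{code}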
[jira] [Created] (HDFS-15279) Add new datanode causes balancer to slow down
liying created HDFS-15279: - Summary: Add new datanode causes balancer to slow down Key: HDFS-15279 URL: https://issues.apache.org/jira/browse/HDFS-15279 Project: Hadoop HDFS Issue Type: Improvement Components: balancer & mover Affects Versions: 2.7.2 Reporter: liying Assignee: liying -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org