[jira] [Updated] (HDFS-13087) Fix: Snapshots On encryption zones get incorrect EZ settings when encryption zone changes
[ https://issues.apache.org/jira/browse/HDFS-13087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] LiXin Ge updated HDFS-13087: Status: Open (was: Patch Available) > Fix: Snapshots On encryption zones get incorrect EZ settings when encryption > zone changes > - > > Key: HDFS-13087 > URL: https://issues.apache.org/jira/browse/HDFS-13087 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption >Affects Versions: 3.1.0 >Reporter: LiXin Ge >Assignee: LiXin Ge >Priority: Major > Attachments: HDFS-13087.001.patch, HDFS-13087.002.patch, > HDFS-13087.003.patch, HDFS-13087.004.patch > > > Snapshots are supposed to be immutable and read-only, so the EZ settings > under a snapshot path shouldn't change when the original encryption zone > changes. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
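For context, the scenario under test can be reproduced from the command line; a minimal sketch (the key name, zone path, snapshot name, and file name below are illustrative):
{code:bash}
# Create an encryption zone, snapshot it, then inspect the snapshot view.
hadoop key create k1
hdfs dfs -mkdir /zone
hdfs crypto -createZone -keyName k1 -path /zone
hdfs dfsadmin -allowSnapshot /zone
hdfs dfs -put somefile /zone/
hdfs dfs -createSnapshot /zone s1
# After the origin zone's settings change, the snapshot path should still
# report the settings captured at snapshot time:
hdfs crypto -getFileEncryptionInfo -path /zone/.snapshot/s1/somefile
{code}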
[jira] [Comment Edited] (HDFS-13248) RBF: Namenode need to choose block location for the client
[ https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416873#comment-16416873 ] Yiqun Lin edited comment on HDFS-13248 at 3/28/18 6:26 AM: --- Hi, [~elgoiri], {quote}Currently, hadoop-hdfs-rbf depends on hadoop-hdfs, not the other way around. If we do this, we would have a cyclic dependency. {quote} How about moving the {{setCallerContext}} logic into hadoop-hdfs? It would be good to keep the logic for constructing/parsing the CallerContext in one place and supply a corresponding method. Otherwise, the logic looks a little tricky. In addition, would you mind adding a simple test for this (maybe in the test, the client and the Router share the same address)? was (Author: linyiqun): Hi, [~elgoiri], {quote}Currently, hadoop-hdfs-rbf depends on hadoop-hdfs, not the other way around. If we do this, we would have a cyclic dependency. {quote} How about move the logic {{setCallerContext}} in hadoop-hdfs. It will be good to make the logic of constructing/parsing CallerContext in the same place and supply corresponding method. Otherwise, it's hard to understand. In addition, would you mind adding a simple test for this (maybe in test, the client and the Router are the same address)? > RBF: Namenode need to choose block location for the client > -- > > Key: HDFS-13248 > URL: https://issues.apache.org/jira/browse/HDFS-13248 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Weiwei Wu >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, > clientMachine-call-path.jpeg, debug-info-1.jpeg, debug-info-2.jpeg > > > When executing a put operation via the router, the NameNode will choose the block > location for the router, not for the real client. This will affect the file's > locality. > I think on both the NameNode and the Router, we should add a new addBlock method, or > add a parameter to the current addBlock method, to pass the real client > information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
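For readers following the thread, the CallerContext approach being debated looks roughly like this on the Router side; the "clientIp:" prefix and the "," delimiter are illustrative, not a committed design:
{code:java}
import org.apache.hadoop.ipc.CallerContext;
import org.apache.hadoop.ipc.Server;

// Sketch: before forwarding addBlock to the NameNode, the Router appends
// the real client's address to the caller context so the NameNode can
// place replicas near the client instead of near the Router.
private static void appendClientIpToCallerContext() {
  String clientIp = Server.getRemoteAddress();  // real client, as seen by the Router
  CallerContext current = CallerContext.getCurrent();
  String context = (current == null ? "" : current.getContext() + ",")
      + "clientIp:" + clientIp;
  CallerContext.setCurrent(new CallerContext.Builder(context).build());
}
{code}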
[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication
[ https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416890#comment-16416890 ] Sherwood Zheng commented on HDFS-12284: --- [~elgoiri] I've created the ticket for DT: HDFS-13358. For the unit test, I was thinking of starting a mini KDC and a MiniDFS cluster and doing similar things to what I tested on the command line. I will take a look at the failed unit test and add a check around checkTGTAndReloginFromKeytab. > RBF: Support for Kerberos authentication > > > Key: HDFS-12284 > URL: https://issues.apache.org/jira/browse/HDFS-12284 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: security >Reporter: Zhe Zhang >Assignee: Sherwood Zheng >Priority: Major > Fix For: HDFS-10467 > > Attachments: HDFS-12284.000.patch > > > HDFS Router should support Kerberos authentication and issuing / managing > HDFS delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12284) RBF: Support for Kerberos authentication
[ https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416890#comment-16416890 ] Sherwood Zheng edited comment on HDFS-12284 at 3/28/18 6:25 AM: [~elgoiri] I've created the ticket for DT: HDFS-13358. For the unit test, I was thinking of starting a mini KDC and a MiniDFS cluster and doing similar things to what I tested on the command line. I will take a look at the failed unit test and add a check around checkTGTAndReloginFromKeytab. was (Author: zhengxg3): [~elgoiri] I've created the ticket for DT: HDFS-13358. For the unit test, I was thinking to start a mini KDC and miniDFS cluster and did similar things as I tested on command line. Will take a look at the failed unit test and add checking condition for checkTGTAndReloginFromKeytab > RBF: Support for Kerberos authentication > > > Key: HDFS-12284 > URL: https://issues.apache.org/jira/browse/HDFS-12284 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: security >Reporter: Zhe Zhang >Assignee: Sherwood Zheng >Priority: Major > Fix For: HDFS-10467 > > Attachments: HDFS-12284.000.patch > > > HDFS Router should support Kerberos authentication and issuing / managing > HDFS delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12284) RBF: Support for Kerberos authentication
[ https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416890#comment-16416890 ] Sherwood Zheng edited comment on HDFS-12284 at 3/28/18 6:25 AM: [~elgoiri] I've created the ticket for DT: HDFS-13358. For the unit test, I was thinking of starting a mini KDC and a MiniDFS cluster and doing similar things to what I tested on the command line (mkdir, rm, etc.). I will take a look at the failed unit test and add a check around checkTGTAndReloginFromKeytab. was (Author: zhengxg3): [~elgoiri] I've created the ticket for DT: HDFS-13358. For the unit test, I was thinking to start a mini KDC and miniDFS cluster and do similar things as I tested on command line. Will take a look at the failed unit test and add checking condition for checkTGTAndReloginFromKeytab > RBF: Support for Kerberos authentication > > > Key: HDFS-12284 > URL: https://issues.apache.org/jira/browse/HDFS-12284 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: security >Reporter: Zhe Zhang >Assignee: Sherwood Zheng >Priority: Major > Fix For: HDFS-10467 > > Attachments: HDFS-12284.000.patch > > > HDFS Router should support Kerberos authentication and issuing / managing > HDFS delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
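A bare-bones sketch of the test setup described above, assuming the hadoop-minikdc test artifact; the work directory, principal, and keytab names are illustrative:
{code:java}
import java.io.File;
import java.util.Properties;
import org.apache.hadoop.minikdc.MiniKdc;

public class RouterKerberosTestSketch {
  public static void main(String[] args) throws Exception {
    // Start a mini KDC and create a principal/keytab for the Router.
    Properties kdcConf = MiniKdc.createConf();
    MiniKdc kdc = new MiniKdc(kdcConf, new File("target/kdc"));
    kdc.start();
    File keytab = new File("target/kdc", "router.keytab");
    kdc.createPrincipal(keytab, "router/localhost");
    try {
      // A MiniDFSCluster and the Router would be configured with this
      // keytab/principal here, then mkdir/rm/etc. exercised through the
      // Router, mirroring the command-line test described above.
    } finally {
      kdc.stop();
    }
  }
}
{code}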
[jira] [Commented] (HDFS-13248) RBF: Namenode need to choose block location for the client
[ https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416873#comment-16416873 ] Yiqun Lin commented on HDFS-13248: -- Hi, [~elgoiri], {quote}Currently, hadoop-hdfs-rbf depends on hadoop-hdfs, not the other way around. If we do this, we would have a cyclic dependency. {quote} How about moving the {{setCallerContext}} logic into hadoop-hdfs? It would be good to keep the logic for constructing/parsing the CallerContext in one place and supply a corresponding method. Otherwise, it's hard to understand. In addition, would you mind adding a simple test for this (maybe in the test, the client and the Router share the same address)? > RBF: Namenode need to choose block location for the client > -- > > Key: HDFS-13248 > URL: https://issues.apache.org/jira/browse/HDFS-13248 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Weiwei Wu >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, > clientMachine-call-path.jpeg, debug-info-1.jpeg, debug-info-2.jpeg > > > When executing a put operation via the router, the NameNode will choose the block > location for the router, not for the real client. This will affect the file's > locality. > I think on both the NameNode and the Router, we should add a new addBlock method, or > add a parameter to the current addBlock method, to pass the real client > information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13358) RBF: Support for Delegation Token
Sherwood Zheng created HDFS-13358: - Summary: RBF: Support for Delegation Token Key: HDFS-13358 URL: https://issues.apache.org/jira/browse/HDFS-13358 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Sherwood Zheng Assignee: Sherwood Zheng -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13349) Unresolved merge conflict in ViewFs.md
[ https://issues.apache.org/jira/browse/HDFS-13349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416817#comment-16416817 ] genericqa commented on HDFS-13349: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} branch-3.0 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 2s{color} | {color:green} branch-3.0 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} branch-3.0 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 28m 4s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 37s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 39m 36s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5aaf88d | | JIRA Issue | HDFS-13349 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916527/HDFS-13349-branch-3.0.001.patch | | Optional Tests | asflicense mvnsite | | uname | Linux 6bcfc2c4cffd 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-3.0 / c903efe | | maven | version: Apache Maven 3.3.9 | | Max. process+thread count | 448 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23694/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Unresolved merge conflict in ViewFs.md > --- > > Key: HDFS-13349 > URL: https://issues.apache.org/jira/browse/HDFS-13349 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Affects Versions: 3.0.1 >Reporter: Gera Shegalov >Assignee: Yiqun Lin >Priority: Blocker > Attachments: HDFS-13349-branch-3.0.001.patch > > > A backport to 3.0.1 has an unresolved conflict in ViewFs.md change > {code} > commit 9264f10bb35dbe30c75c648bf759e8aeb715834a > Author: Anu Engineer > Date: Tue Feb 6 13:43:45 2018 -0800 > HDFS-12990. Change default NameNode RPC port back to 8020. Contributed by > Xiao Chen. 
> (cherry picked from commit 4304fcd5bdf9fb7aa9181e866eea383f89bf171f) > Conflicts: > hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md > > hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestGetConf.java > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
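As a quick sanity check for this kind of backport, the affected files can be grepped for leftover conflict markers before committing (a sketch, run from the source tree root; note that '=======' can also legitimately appear as a Markdown heading underline, so any hit needs a manual look):
{code:bash}
grep -n '^<<<<<<<\|^=======\|^>>>>>>>' \
  hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md \
  hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestGetConf.java
{code}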
[jira] [Updated] (HDFS-13349) Unresolved merge conflict in ViewFs.md
[ https://issues.apache.org/jira/browse/HDFS-13349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-13349: - Status: Patch Available (was: Open) > Unresolved merge conflict in ViewFs.md > --- > > Key: HDFS-13349 > URL: https://issues.apache.org/jira/browse/HDFS-13349 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Affects Versions: 3.0.1 >Reporter: Gera Shegalov >Assignee: Yiqun Lin >Priority: Blocker > Attachments: HDFS-13349-branch-3.0.001.patch > > > A backport to 3.0.1 has an unresolved conflict in ViewFs.md change > {code} > commit 9264f10bb35dbe30c75c648bf759e8aeb715834a > Author: Anu Engineer > Date: Tue Feb 6 13:43:45 2018 -0800 > HDFS-12990. Change default NameNode RPC port back to 8020. Contributed by > Xiao Chen. > (cherry picked from commit 4304fcd5bdf9fb7aa9181e866eea383f89bf171f) > Conflicts: > hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md > > hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestGetConf.java > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-13349) Unresolved merge conflict in ViewFs.md
[ https://issues.apache.org/jira/browse/HDFS-13349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin reassigned HDFS-13349: Assignee: Yiqun Lin > Unresolved merge conflict in ViewFs.md > --- > > Key: HDFS-13349 > URL: https://issues.apache.org/jira/browse/HDFS-13349 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Affects Versions: 3.0.1 >Reporter: Gera Shegalov >Assignee: Yiqun Lin >Priority: Blocker > Attachments: HDFS-13349-branch-3.0.001.patch > > > A backport to 3.0.1 has an unresolved conflict in ViewFs.md change > {code} > commit 9264f10bb35dbe30c75c648bf759e8aeb715834a > Author: Anu Engineer > Date: Tue Feb 6 13:43:45 2018 -0800 > HDFS-12990. Change default NameNode RPC port back to 8020. Contributed by > Xiao Chen. > (cherry picked from commit 4304fcd5bdf9fb7aa9181e866eea383f89bf171f) > Conflicts: > hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md > > hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestGetConf.java > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13349) Unresolved merge conflict in ViewFs.md
[ https://issues.apache.org/jira/browse/HDFS-13349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-13349: - Attachment: HDFS-13349-branch-3.0.001.patch > Unresolved merge conflict in ViewFs.md > --- > > Key: HDFS-13349 > URL: https://issues.apache.org/jira/browse/HDFS-13349 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Affects Versions: 3.0.1 >Reporter: Gera Shegalov >Priority: Blocker > Attachments: HDFS-13349-branch-3.0.001.patch > > > A backport to 3.0.1 has an unresolved conflict in ViewFs.md change > {code} > commit 9264f10bb35dbe30c75c648bf759e8aeb715834a > Author: Anu Engineer > Date: Tue Feb 6 13:43:45 2018 -0800 > HDFS-12990. Change default NameNode RPC port back to 8020. Contributed by > Xiao Chen. > (cherry picked from commit 4304fcd5bdf9fb7aa9181e866eea383f89bf171f) > Conflicts: > hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md > > hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestGetConf.java > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13349) Unresolved merge conflict in ViewFs.md
[ https://issues.apache.org/jira/browse/HDFS-13349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416780#comment-16416780 ] Yiqun Lin commented on HDFS-13349: -- Attach the patch to fix the conflicts. > Unresolved merge conflict in ViewFs.md > --- > > Key: HDFS-13349 > URL: https://issues.apache.org/jira/browse/HDFS-13349 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Affects Versions: 3.0.1 >Reporter: Gera Shegalov >Priority: Blocker > Attachments: HDFS-13349-branch-3.0.001.patch > > > A backport to 3.0.1 has an unresolved conflict in ViewFs.md change > {code} > commit 9264f10bb35dbe30c75c648bf759e8aeb715834a > Author: Anu Engineer > Date: Tue Feb 6 13:43:45 2018 -0800 > HDFS-12990. Change default NameNode RPC port back to 8020. Contributed by > Xiao Chen. > (cherry picked from commit 4304fcd5bdf9fb7aa9181e866eea383f89bf171f) > Conflicts: > hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md > > hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestGetConf.java > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-13357) Improve AclException message "Invalid ACL: only directories may have a default ACL."
[ https://issues.apache.org/jira/browse/HDFS-13357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota reassigned HDFS-13357: - Assignee: Gabor Bota > Improve AclException message "Invalid ACL: only directories may have a > default ACL." > > > Key: HDFS-13357 > URL: https://issues.apache.org/jira/browse/HDFS-13357 > Project: Hadoop HDFS > Issue Type: Improvement > Environment: CDH 5.10.1, Kerberos, KMS, encryption at rest, Sentry, > Hive >Reporter: Wei-Chiu Chuang >Assignee: Gabor Bota >Priority: Minor > Labels: newbie > > I found this warning message in an HDFS cluster > {noformat} > 2018-03-27 19:15:28,841 INFO org.apache.hadoop.ipc.Server: IPC Server handler > 90 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setAcl from > 10.0.0.1:39508 Call#79376996 > Retry#0: org.apache.hadoop.hdfs.protocol.AclException: Invalid ACL: only > directories may have a default ACL. > 2018-03-27 19:15:28,841 WARN org.apache.hadoop.security.UserGroupInformation: > PriviledgedActionException as:hive/host1.example@example.com (auth:KERBE > ROS) cause:org.apache.hadoop.hdfs.protocol.AclException: Invalid ACL: only > directories may have a default ACL. > {noformat} > However it doesn't tell me which file had this invalid ACL. > This cluster has Sentry enabled, so it is possible this invalid ACL doesn't > come from HDFS, but from Sentry. > Filing this Jira to improve the message by including the file name. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13357) Improve AclException message "Invalid ACL: only directories may have a default ACL."
Wei-Chiu Chuang created HDFS-13357: -- Summary: Improve AclException message "Invalid ACL: only directories may have a default ACL." Key: HDFS-13357 URL: https://issues.apache.org/jira/browse/HDFS-13357 Project: Hadoop HDFS Issue Type: Improvement Environment: CDH 5.10.1, Kerberos, KMS, encryption at rest, Sentry, Hive Reporter: Wei-Chiu Chuang I found this warning message in an HDFS cluster {noformat} 2018-03-27 19:15:28,841 INFO org.apache.hadoop.ipc.Server: IPC Server handler 90 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setAcl from 10.0.0.1:39508 Call#79376996 Retry#0: org.apache.hadoop.hdfs.protocol.AclException: Invalid ACL: only directories may have a default ACL. 2018-03-27 19:15:28,841 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hive/host1.example@example.com (auth:KERBE ROS) cause:org.apache.hadoop.hdfs.protocol.AclException: Invalid ACL: only directories may have a default ACL. {noformat} However it doesn't tell me which file had this invalid ACL. This cluster has Sentry enabled, so it is possible this invalid ACL doesn't come from HDFS, but from Sentry. Filing this Jira to improve the message by including the file name. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
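A minimal sketch of the requested change; where the validation actually lives and the final wording are up to the patch, so the helper below is hypothetical:
{code:java}
import org.apache.hadoop.hdfs.protocol.AclException;

// Hypothetical helper: fail default-ACL validation with the offending path
// in the message, so operators can tell which file triggered it.
final class AclChecks {
  static void checkDefaultAcl(String src, boolean isDirectory,
      boolean hasDefaultAcl) throws AclException {
    if (hasDefaultAcl && !isDirectory) {
      throw new AclException("Invalid ACL: only directories may have"
          + " a default ACL. Path: " + src);
    }
  }
}
{code}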
[jira] [Commented] (HDFS-12971) DfsClient hang on hedged getFirstToComplete
[ https://issues.apache.org/jira/browse/HDFS-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416764#comment-16416764 ] maobaolong commented on HDFS-12971: --- [~muyannian] Thank you for this research. Can you attach a better patch file? You can use {code:bash} git diff > HDFS-12971.001.patch {code} > DfsClient hang on hedged getFirstToComplete > > > Key: HDFS-12971 > URL: https://issues.apache.org/jira/browse/HDFS-12971 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, hdfs-client >Affects Versions: 2.6.0, 2.7.0 > Environment: hadoop 2.6.0 (dfs.client.hedged.read.threadpool.size>0) >Reporter: muyannian >Priority: Major > Attachments: 1.jpg, 2.jpg, 3.jpg, 4.jpg, > DFSInputStream-afterpatch.java, DFSInputStream-beforepatch.java, > DFSInputStream.java.patch > > Original Estimate: 96h > Remaining Estimate: 96h > > When I used HDFS hedged reads, I found DFSInputStream hanging in the > getFirstToComplete method. > The reason is that when something throws an exception on the datanode or namenode, for > example a FileNotFoundException, the read may hang forever: a future has > finished, but the code still calls "future = hedgedService.take()", which causes the > hang. > The attached files contain my jstack output and Java patch. > <property> > <name>dfs.client.hedged.read.threadpool.size</name> > <value>512</value> > </property> > <property> > <name>dfs.client.hedged.read.threshold.millis</name> > <value>300</value> > </property> -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
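Independent of the DFSInputStream internals, the failure mode is easy to demonstrate with a plain CompletionService: a blocking take() waits forever once no further completion can arrive, while poll() with a timeout does not. A self-contained illustration:
{code:java}
import java.util.concurrent.*;

public class HedgedTakeDemo {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    CompletionService<Integer> hedgedService =
        new ExecutorCompletionService<>(pool);
    hedgedService.submit(() -> 42);
    System.out.println("first: " + hedgedService.take().get());
    // A second blocking take() here would hang forever, since no other
    // task was submitted; poll() with a timeout returns null instead.
    Future<Integer> next = hedgedService.poll(300, TimeUnit.MILLISECONDS);
    System.out.println(next == null ? "no further result" : "got " + next.get());
    pool.shutdown();
  }
}
{code}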
[jira] [Commented] (HDFS-13346) RBF: Fix synchronization of router quota and ns quota
[ https://issues.apache.org/jira/browse/HDFS-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416762#comment-16416762 ] Yiqun Lin commented on HDFS-13346: -- Thanks [~elgoiri] for sharing the comments. {quote} You would have the same issue if A sets it to 200 and B later sets it to 300. You have two admins doing conflicting operations. {quote} Agreed. These are indeed conflicting operations: even if one admin sets the quota successfully, the other admin can update it afterwards, and the first admin will then see a stale quota value. So I think the following way should still be okay: {code} One better way is that we invoke the update operation this.rpcServer.setQuota in RouterAdminServer, not in RouterQuotaUpdateService. {code} [~liuhongtong], I think you can go ahead with this approach. :) A sketch of what that amounts to follows this message. > RBF: Fix synchronization of router quota and ns quota > - > > Key: HDFS-13346 > URL: https://issues.apache.org/jira/browse/HDFS-13346 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: liuhongtong >Assignee: liuhongtong >Priority: Major > Attachments: HDFS-13346.001.patch, HDFS-13346.002.patch > > > Check Router Quota and ns Quota: > {code} > $ hdfs dfsrouteradmin -ls /ns10t > Mount Table Entries: > SourceDestinations Owner > Group Mode Quota/Usage > /ns10tns10->/ns10t hadp > hadp rwxr-xr-x [NsQuota: 150/319, > SsQuota: -/-] > /ns10t/ns1mountpoint ns1->/a/tthadp > hadp rwxr-xr-x [NsQuota: -/-, SsQuota: > -/-] > $ hdfs dfs -count -q hdfs://ns10/ns10t > 150-155none inf3 > 302 0 hdfs://ns10/ns10t > {code} > Update Router Quota: > {code:java} > $ hdfs dfsrouteradmin -setQuota /ns10t -nsQuota 400 > Successfully set quota for mount point /ns10t > {code} > Check Router Quota and ns Quota: > {code:java} > $ hdfs dfsrouteradmin -ls /ns10t > Mount Table Entries: > SourceDestinations Owner > Group Mode Quota/Usage > /ns10tns10->/ns10t hadp > hadp rwxr-xr-x [NsQuota: 400/319, > SsQuota: -/-] > /ns10t/ns1mountpoint ns1->/a/tthadp > hadp rwxr-xr-x [NsQuota: -/-, SsQuota: > -/-] > $ hdfs dfs -count -q hdfs://ns10/ns10t > 150-155none inf3 > 302 0 hdfs://ns10/ns10t > {code} > Now the Router quota has been updated successfully, but the ns quota has not. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
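A rough sketch of the suggested flow, mirroring the quoted suggestion; the signature, field names, and surrounding class are illustrative, not the actual RouterAdminServer code:
{code:java}
// Sketch: when the admin sets a mount-table quota, push it to the
// destination namespaces immediately instead of waiting for the periodic
// RouterQuotaUpdateService pass.
public void setQuota(String path, long nsQuota, long ssQuota)
    throws java.io.IOException {
  // 1) persist the new quota on the mount table entry (elided)
  // 2) synchronously propagate it so 'hdfs dfs -count -q' agrees right away
  this.rpcServer.setQuota(path, nsQuota, ssQuota, null);
}
{code}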
[jira] [Commented] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416755#comment-16416755 ] Virajith Jalaparti commented on HDFS-13347: --- Looks like [~linyiqun] committed this while I was looking over the patch :) > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/browse/HDFS-13347 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Fix For: 3.0.2, 3.2.0, 3.1.1 > > Attachments: HDFS-13347.000.patch, HDFS-13347.001.patch, > HDFS-13347.002.patch, HDFS-13347.003.patch, HDFS-13347.004.patch, > HDFS-13347.005.patch, HDFS-13347.006.patch > > > Getting the datanode reports is an expensive operation and can be executed > very frequently by the UI and watchdogs. We should cache this information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416751#comment-16416751 ] Virajith Jalaparti commented on HDFS-13347: --- [~elgoiri], a couple of comments: 1) Mark the added {{getNamenodeMetrics()}} as {{@VisibleForTesting}}? Also, these do not need to be public. 2) It would be good to add more documentation for the new {{requireResponse}} parameter, something along the lines that if this is set to false, any failure to get the reports will go unnoticed. > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/browse/HDFS-13347 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Fix For: 3.0.2, 3.2.0, 3.1.1 > > Attachments: HDFS-13347.000.patch, HDFS-13347.001.patch, > HDFS-13347.002.patch, HDFS-13347.003.patch, HDFS-13347.004.patch, > HDFS-13347.005.patch, HDFS-13347.006.patch > > > Getting the datanode reports is an expensive operation and can be executed > very frequently by the UI and watchdogs. We should cache this information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
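In code, the two review comments amount to roughly the following; the enclosing class is elided and the signature shown is an assumption about the patch, not a quote from it:
{code:java}
import com.google.common.annotations.VisibleForTesting;

// 1) Annotate the accessor; package-private visibility is enough for tests.
@VisibleForTesting
NamenodeBeanMetrics getNamenodeMetrics() {
  return this.metrics.getNamenodeMetrics();
}

/**
 * 2) Document the semantics of the new flag explicitly.
 *
 * @param requireResponse if false, a subcluster's failure to return its
 *        datanode report goes unnoticed and the merged result may be
 *        incomplete.
 */
DatanodeInfo[] getDatanodeReport(
    HdfsConstants.DatanodeReportType type, boolean requireResponse,
    long timeOutMs) throws IOException {
  // ... existing implementation ...
}
{code}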
[jira] [Commented] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416747#comment-16416747 ] Hudson commented on HDFS-13347: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13890 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13890/]) HDFS-13347. RBF: Cache datanode reports. Contributed by Inigo Goiri. (yqlin: rev a71656c1c1bf6c680f1382a76ddcac870061f320) * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterMetricsService.java * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/browse/HDFS-13347 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Fix For: 3.0.2, 3.2.0, 3.1.1 > > Attachments: HDFS-13347.000.patch, HDFS-13347.001.patch, > HDFS-13347.002.patch, HDFS-13347.003.patch, HDFS-13347.004.patch, > HDFS-13347.005.patch, HDFS-13347.006.patch > > > Getting the datanode reports is an expensive operation and can be executed > very frequently by the UI and watchdogs. We should cache this information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416744#comment-16416744 ] Yiqun Lin commented on HDFS-13347: -- I have committed this to trunk, branch-3.1, and branch-3.0. There are some conflicts when merging to branch-2 and branch-2.9. [~elgoiri], would you take a look and commit to those two branches? > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/browse/HDFS-13347 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Fix For: 3.0.2, 3.2.0, 3.1.1 > > Attachments: HDFS-13347.000.patch, HDFS-13347.001.patch, > HDFS-13347.002.patch, HDFS-13347.003.patch, HDFS-13347.004.patch, > HDFS-13347.005.patch, HDFS-13347.006.patch > > > Getting the datanode reports is an expensive operation and can be executed > very frequently by the UI and watchdogs. We should cache this information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-13347: - Affects Version/s: 3.0.0 Fix Version/s: 3.1.1 3.2.0 3.0.2 > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/browse/HDFS-13347 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Fix For: 3.0.2, 3.2.0, 3.1.1 > > Attachments: HDFS-13347.000.patch, HDFS-13347.001.patch, > HDFS-13347.002.patch, HDFS-13347.003.patch, HDFS-13347.004.patch, > HDFS-13347.005.patch, HDFS-13347.006.patch > > > Getting the datanode reports is an expensive operation and can be executed > very frequently by the UI and watchdogs. We should cache this information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts
[ https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416732#comment-16416732 ] Dennis Huo commented on HDFS-13056: --- Thanks for taking another look [~xiaochen]! Applied fixes in [^HDFS-13056.013.patch]. I misinterpreted some of your previous comments (didn't notice the mention of LimitedPrivate vs Private) which was why I removed the InterfaceStability annotations; fixed now with InterfaceStability.Unstable in the ones with LimitedPrivate, and left the InterfaceAudience.Private ones with no InterfaceStability annotation to let those default to Unstable. I went ahead and removed the isDebugEnabled checks in FileChecksumHelper. In BlockChecksumHelper, I had kept the isDebugEnabled() checks whenever the format string included calling something like CrcUtil.toMultiCrcString which is potentially moderately expensive (not too much, but in keeping with where other places in the code avoid "slightly" expensive things with isDebugEnabled()); prior to [this set of edits|https://github.com/apache/hadoop/pull/344/commits/391230bf2932a05da76c4a11eadea97b04de4bad#diff-c8fa735a74e1474b5d455843f850b168] I had used wrapper "Object" that lazy-evaluates toString() to avoid isDebugEnabled() in favor of just always passing it in as the arg to the log formatter. I guess the options are: # Keep the calls to CrcUtil.to*CrcString wrapped inside isDebugEnabled() as-is # Remove the check and just always perform that string-creation even if it won't be used by the logger # Split out a "String crcDebugString = null;" and then assign it to CrcUtil.toMultiCrcString inside an isDebugEnabled() block, and then unconditionally pass the crcDebugString into the log format args (this was done in FileChecksumHelper just because it was convenient for sharing a debug string between MD5 and CRC codepaths). I have no strong preference on any of those options. Added global timeouts of 10 seconds to TestCrcUtil and TestCrcComposer. > Expose file-level composite CRCs in HDFS which are comparable across > different instances/layouts > > > Key: HDFS-13056 > URL: https://issues.apache.org/jira/browse/HDFS-13056 > Project: Hadoop HDFS > Issue Type: New Feature > Components: datanode, distcp, erasure-coding, federation, hdfs >Affects Versions: 3.0.0 >Reporter: Dennis Huo >Assignee: Dennis Huo >Priority: Major > Attachments: HDFS-13056-branch-2.8.001.patch, > HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.003.patch, > HDFS-13056-branch-2.8.004.patch, HDFS-13056-branch-2.8.005.patch, > HDFS-13056-branch-2.8.poc1.patch, HDFS-13056.001.patch, HDFS-13056.002.patch, > HDFS-13056.003.patch, HDFS-13056.003.patch, HDFS-13056.004.patch, > HDFS-13056.005.patch, HDFS-13056.006.patch, HDFS-13056.007.patch, > HDFS-13056.008.patch, HDFS-13056.009.patch, HDFS-13056.010.patch, > HDFS-13056.011.patch, HDFS-13056.012.patch, HDFS-13056.013.patch, > Reference_only_zhen_PPOC_hadoop2.6.X.diff, hdfs-file-composite-crc32-v1.pdf, > hdfs-file-composite-crc32-v2.pdf, hdfs-file-composite-crc32-v3.pdf > > > FileChecksum was first introduced in > [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then > has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are > already stored as part of datanode metadata, and the MD5 approach is used to > compute an aggregate value in a distributed manner, with individual datanodes > computing the MD5-of-CRCs per-block in parallel, and the HDFS client > computing the second-level MD5. 
> > A shortcoming of this approach which is often brought up is the fact that > this FileChecksum is sensitive to the internal block-size and chunk-size > configuration, and thus different HDFS files with different block/chunk > settings cannot be compared. More commonly, one might have different HDFS > clusters which use different block sizes, in which case any data migration > won't be able to use the FileChecksum for distcp's rsync functionality or for > verifying end-to-end data integrity (on top of low-level data integrity > checks applied at data transfer time). > > This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 > during the addition of checksum support for striped erasure-coded files; > while there was some discussion of using CRC composability, it still > ultimately settled on hierarchical MD5 approach, which also adds the problem > that checksums of basic replicated files are not comparable to striped files. > > This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses > CRC composition to remain completely chunk/block agnostic, and allows > comparison between striped vs replica
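For reference, the "lazy toString wrapper" mentioned in the options above looks roughly like this as a standalone pattern; the formatter below is a stand-in for CrcUtil.toMultiCrcString, not the patch itself:
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LazyCrcLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(LazyCrcLogging.class);

  // Stand-in for a moderately expensive formatter such as
  // CrcUtil.toMultiCrcString.
  static String multiCrcString(int[] crcs) {
    StringBuilder sb = new StringBuilder();
    for (int crc : crcs) {
      sb.append(String.format("0x%08x ", crc));
    }
    return sb.toString().trim();
  }

  public static void main(String[] args) {
    final int[] crcs = {0xdeadbeef, 0xcafebabe};
    // toString() runs only if the logger actually formats the message,
    // so no isDebugEnabled() guard is needed at the call site.
    Object lazy = new Object() {
      @Override public String toString() { return multiCrcString(crcs); }
    };
    LOG.debug("Composed block CRCs: {}", lazy);
  }
}
{code}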
[jira] [Updated] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts
[ https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dennis Huo updated HDFS-13056: -- Attachment: HDFS-13056.013.patch > Expose file-level composite CRCs in HDFS which are comparable across > different instances/layouts > > > Key: HDFS-13056 > URL: https://issues.apache.org/jira/browse/HDFS-13056 > Project: Hadoop HDFS > Issue Type: New Feature > Components: datanode, distcp, erasure-coding, federation, hdfs >Affects Versions: 3.0.0 >Reporter: Dennis Huo >Assignee: Dennis Huo >Priority: Major > Attachments: HDFS-13056-branch-2.8.001.patch, > HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.003.patch, > HDFS-13056-branch-2.8.004.patch, HDFS-13056-branch-2.8.005.patch, > HDFS-13056-branch-2.8.poc1.patch, HDFS-13056.001.patch, HDFS-13056.002.patch, > HDFS-13056.003.patch, HDFS-13056.003.patch, HDFS-13056.004.patch, > HDFS-13056.005.patch, HDFS-13056.006.patch, HDFS-13056.007.patch, > HDFS-13056.008.patch, HDFS-13056.009.patch, HDFS-13056.010.patch, > HDFS-13056.011.patch, HDFS-13056.012.patch, HDFS-13056.013.patch, > Reference_only_zhen_PPOC_hadoop2.6.X.diff, hdfs-file-composite-crc32-v1.pdf, > hdfs-file-composite-crc32-v2.pdf, hdfs-file-composite-crc32-v3.pdf > > > FileChecksum was first introduced in > [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then > has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are > already stored as part of datanode metadata, and the MD5 approach is used to > compute an aggregate value in a distributed manner, with individual datanodes > computing the MD5-of-CRCs per-block in parallel, and the HDFS client > computing the second-level MD5. > > A shortcoming of this approach which is often brought up is the fact that > this FileChecksum is sensitive to the internal block-size and chunk-size > configuration, and thus different HDFS files with different block/chunk > settings cannot be compared. More commonly, one might have different HDFS > clusters which use different block sizes, in which case any data migration > won't be able to use the FileChecksum for distcp's rsync functionality or for > verifying end-to-end data integrity (on top of low-level data integrity > checks applied at data transfer time). > > This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 > during the addition of checksum support for striped erasure-coded files; > while there was some discussion of using CRC composability, it still > ultimately settled on hierarchical MD5 approach, which also adds the problem > that checksums of basic replicated files are not comparable to striped files. > > This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses > CRC composition to remain completely chunk/block agnostic, and allows > comparison between striped vs replicated files, between different HDFS > instances, and possible even between HDFS and other external storage systems. > This feature can also be added in-place to be compatible with existing block > metadata, and doesn't need to change the normal path of chunk verification, > so is minimally invasive. This also means even large preexisting HDFS > deployments could adopt this feature to retroactively sync data. 
A detailed > design document can be found here: > https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416731#comment-16416731 ] Yiqun Lin commented on HDFS-13347: -- LGTM, +1. I'd like to help commit this, :). > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/browse/HDFS-13347 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13347.000.patch, HDFS-13347.001.patch, > HDFS-13347.002.patch, HDFS-13347.003.patch, HDFS-13347.004.patch, > HDFS-13347.005.patch, HDFS-13347.006.patch > > > Getting the datanode reports is an expensive operation and can be executed > very frequently by the UI and watchdogs. We should cache this information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb
[ https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416562#comment-16416562 ] genericqa commented on HDFS-13356: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 39m 34s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 21s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}114m 14s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}169m 51s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13356 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916483/HDFS-13356.00.patch | | Optional Tests | asflicense compile cc mvnsite javac unit | | uname | Linux b5144822c8e3 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 3fe41c6 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23690/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23690/testReport/ | | Max. process+thread count | 3101 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23690/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Balancer:Set default value of minBlockSize to 10mb > --- > > Key: HDFS-13356 > URL: https://issues.apache.org/jira/browse/HDFS-13356 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions:
[jira] [Commented] (HDFS-13248) RBF: Namenode need to choose block location for the client
[ https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416571#comment-16416571 ] Ajay Kumar commented on HDFS-13248: --- [~elgoiri], thanks for working on this. Using CallerContext to append the client IP with some delimiter is a bit hacky. Personally, I think using UGI tokens would be cleaner. Even if we decide to go with CallerContext, we should make the delimiter configurable. > RBF: Namenode need to choose block location for the client > -- > > Key: HDFS-13248 > URL: https://issues.apache.org/jira/browse/HDFS-13248 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Weiwei Wu >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, > clientMachine-call-path.jpeg, debug-info-1.jpeg, debug-info-2.jpeg > > > When executing a put operation via the router, the NameNode will choose the block > location for the router, not for the real client. This will affect the file's > locality. > I think on both the NameNode and the Router, we should add a new addBlock method, or > add a parameter to the current addBlock method, to pass the real client > information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb
[ https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416560#comment-16416560 ] genericqa commented on HDFS-13356: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 39m 6s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 51s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}158m 19s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13356 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916486/HDFS-13356.01.patch | | Optional Tests | asflicense compile cc mvnsite javac unit | | uname | Linux ec628f263327 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 3fe41c6 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23691/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23691/testReport/ | | Max. process+thread count | 2574 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23691/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Balancer:Set default value of minBlockSize to 10mb > --- > > Key: HDFS-13356 > URL: https://issues.apache.org/jira/browse/HDFS-13356 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.7.5 >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Labels:
[jira] [Commented] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb
[ https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416554#comment-16416554 ] genericqa commented on HDFS-13356: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 59s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 37m 40s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 55s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 75m 46s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}139m 31s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13356 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916489/HDFS-13356.02.patch | | Optional Tests | asflicense compile cc mvnsite javac unit | | uname | Linux 96eef6c6c531 3.13.0-141-generic #190-Ubuntu SMP Fri Jan 19 12:52:38 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 3fe41c6 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23692/testReport/ | | Max. process+thread count | 3766 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23692/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Balancer:Set default value of minBlockSize to 10mb > --- > > Key: HDFS-13356 > URL: https://issues.apache.org/jira/browse/HDFS-13356 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.7.5 >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Labels: balancer, upgrades > Attachments: HDFS-13356.00.patch, HDFS-13356.01.patch, > HDFS-13356.02.patch > > > It seems we can run into a problem while a rolling upgrade with this. > The Balancer is upgraded after NameNodes, so once NN is upgraded it will > expect {{minBlockSize}}parameter in
[jira] [Commented] (HDFS-13352) RBF: Add xsl stylesheet for hdfs-rbf-default.xml
[ https://issues.apache.org/jira/browse/HDFS-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416528#comment-16416528 ] Takanobu Asanuma commented on HDFS-13352: - Thanks for reviewing and committing it, [~elgoiri]! > RBF: Add xsl stylesheet for hdfs-rbf-default.xml > > > Key: HDFS-13352 > URL: https://issues.apache.org/jira/browse/HDFS-13352 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: documentation >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Fix For: 2.10.0, 2.9.1, 3.0.2, 3.2.0, 3.1.1 > > Attachments: HDFS-13352.1.patch > > > {{configuration.xsl}} is required for browsing {{hdfs-rbf-default.xml}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13338) Update BUILDING.txt for building native libraries
[ https://issues.apache.org/jira/browse/HDFS-13338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416525#comment-16416525 ] Takanobu Asanuma commented on HDFS-13338: - Thanks for committing it, [~James C]! > Update BUILDING.txt for building native libraries > - > > Key: HDFS-13338 > URL: https://issues.apache.org/jira/browse/HDFS-13338 > Project: Hadoop HDFS > Issue Type: Task > Components: build, documentation, native >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Critical > Attachments: HDFS-13338.1.patch > > > mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package > -Pdist,native -DskipTests -Dtar > {noformat} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project > hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 1 > [ERROR] around Ant part ... dir="/.../hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target" > executable="cmake">... @ 5:119 in > /.../hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/antrun/build-main.xml > [ERROR] -> [Help 1] > org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute > goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project > hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 1 > around Ant part ... dir="/root/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target" > executable="cmake">... @ 5:119 in > /root/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/antrun/build-main.xml > at org.apache.maven.lifecycle.internal.MojoExecutor.execute > (MojoExecutor.java:213) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416523#comment-16416523 ] Subru Krishnan commented on HDFS-13347: --- I am not familiar with this part of the code. [~virajith], can you take a quick look & commit? Thanks. > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/browse/HDFS-13347 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13347.000.patch, HDFS-13347.001.patch, > HDFS-13347.002.patch, HDFS-13347.003.patch, HDFS-13347.004.patch, > HDFS-13347.005.patch, HDFS-13347.006.patch > > > Getting the datanode reports is an expensive operation and can be executed > very frequently by the UI and watchdogs. We should cache this information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
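For illustration, a minimal sketch of the caching approach described in HDFS-13347, assuming Guava's {{CacheBuilder}} with a short expiry; the class name, method names, and fan-out placeholder are invented for the example and are not taken from the attached patches.

{code:java}
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;

/**
 * Illustration of the caching idea: serve repeated report requests from a
 * short-lived cache instead of fanning out to the subclusters every time.
 */
public class CachedDatanodeReports {
  private final LoadingCache<DatanodeReportType, DatanodeInfo[]> cache;

  public CachedDatanodeReports(long expiryMs) {
    this.cache = CacheBuilder.newBuilder()
        .expireAfterWrite(expiryMs, TimeUnit.MILLISECONDS)
        .build(new CacheLoader<DatanodeReportType, DatanodeInfo[]>() {
          @Override
          public DatanodeInfo[] load(DatanodeReportType type) throws Exception {
            // The expensive call the description refers to.
            return fetchReportFromSubclusters(type);
          }
        });
  }

  /** UI/watchdog entry point; hits the cache within the expiry window. */
  public DatanodeInfo[] getDatanodeReport(DatanodeReportType type)
      throws Exception {
    return cache.get(type);
  }

  /** Hypothetical placeholder for the real RPC fan-out. */
  protected DatanodeInfo[] fetchReportFromSubclusters(DatanodeReportType type)
      throws Exception {
    return new DatanodeInfo[0];
  }
}
{code}

Within the expiry window, UI and watchdog calls are served from memory; consult the attached patches for how the Router actually wires this in.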
[jira] [Comment Edited] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416511#comment-16416511 ] Giovanni Matteo Fumarola edited comment on HDFS-13347 at 3/28/18 12:30 AM: --- LGTM +1. [~elgoiri] can you please commit it? was (Author: giovanni.fumarola): LGTM +1. [~subru] can you please commit it? > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/browse/HDFS-13347 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13347.000.patch, HDFS-13347.001.patch, > HDFS-13347.002.patch, HDFS-13347.003.patch, HDFS-13347.004.patch, > HDFS-13347.005.patch, HDFS-13347.006.patch > > > Getting the datanode reports is an expensive operation and can be executed > very frequently by the UI and watchdogs. We should cache this information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13272) DataNodeHttpServer to have configurable HttpServer2 threads
[ https://issues.apache.org/jira/browse/HDFS-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416513#comment-16416513 ] Chris Douglas commented on HDFS-13272: -- bq. Rather than making it configurable, it might be better to simply reduce it. This makes sense to branch-3.x. For older branches, there still might be hftp users Reducing the default makes sense for both 3.x and branch-2. [~kihwal], to be clear, should it be configurable on branch-2 so it can be raised for hftp users, but dropped to the minimum for 3.x? > DataNodeHttpServer to have configurable HttpServer2 threads > --- > > Key: HDFS-13272 > URL: https://issues.apache.org/jira/browse/HDFS-13272 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Erik Krogen >Assignee: Erik Krogen >Priority: Major > > In HDFS-7279, the Jetty server on the DataNode was hard-coded to use 10 > threads. In addition to the possibility of this being too few threads, it is > much higher than necessary in resource constrained environments such as > MiniDFSCluster. To avoid compatibility issues, rather than using > {{HttpServer2#HTTP_MAX_THREADS}} directly, we can introduce a new > configuration for the DataNode's thread pool size. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
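As a rough sketch of the HDFS-13272 proposal (not the committed patch), the DataNode could read its own key and map it onto the generic {{HttpServer2}} setting before building the internal Jetty server; the DataNode-scoped key name and default below are assumptions.

{code:java}
import org.apache.hadoop.conf.Configuration;

/**
 * Illustration: read a DataNode-scoped thread-count key and map it onto the
 * generic HttpServer2 setting before the internal Jetty server is built.
 * The DataNode key name and default here are assumptions.
 */
public class DataNodeHttpThreads {
  /** Hypothetical DataNode-specific key. */
  static final String DFS_DATANODE_HTTP_MAX_THREADS_KEY =
      "dfs.datanode.http.internal-proxy.max-threads";
  /** Generic key behind HttpServer2#HTTP_MAX_THREADS. */
  static final String HTTP_MAX_THREADS_KEY = "hadoop.http.max.threads";
  /** The value hard-coded by HDFS-7279. */
  static final int DEFAULT_HTTP_THREADS = 10;

  static Configuration withThreadConfig(Configuration conf) {
    // Copy so the DataNode-wide conf is not mutated for other consumers.
    Configuration serverConf = new Configuration(conf);
    int threads =
        conf.getInt(DFS_DATANODE_HTTP_MAX_THREADS_KEY, DEFAULT_HTTP_THREADS);
    serverConf.setInt(HTTP_MAX_THREADS_KEY, threads);
    return serverConf;
  }
}
{code}

Keeping the new key separate from {{HttpServer2#HTTP_MAX_THREADS}} avoids the compatibility issue the description mentions: resource-constrained callers such as MiniDFSCluster can lower the DataNode's pool without touching the setting shared by other daemons.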
[jira] [Commented] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416511#comment-16416511 ] Giovanni Matteo Fumarola commented on HDFS-13347: - LGTM +1. [~subru] can you please commit it? > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/browse/HDFS-13347 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13347.000.patch, HDFS-13347.001.patch, > HDFS-13347.002.patch, HDFS-13347.003.patch, HDFS-13347.004.patch, > HDFS-13347.005.patch, HDFS-13347.006.patch > > > Getting the datanode reports is an expensive operation and can be executed > very frequently by the UI and watchdogs. We should cache this information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416451#comment-16416451 ] genericqa commented on HDFS-13347: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 58s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 40s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 53s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 78m 7s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13347 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916476/HDFS-13347.006.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 409dfc8a98d5 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 3fe41c6 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23689/testReport/ | | Max. process+thread count | 930 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23689/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/bro
[jira] [Commented] (HDFS-13336) Test cases of TestWriteToReplica failed in windows
[ https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416439#comment-16416439 ] Íñigo Goiri commented on HDFS-13336: {quote} This sounds more sustainable to me, not just for Windows compatibility, but to avoid tests that assume a fixed, default base dir. {quote} OK, let's test this locally and open a JIRA for the general MiniDFSCluster change. > Test cases of TestWriteToReplica failed in windows > -- > > Key: HDFS-13336 > URL: https://issues.apache.org/jira/browse/HDFS-13336 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiao Liang >Assignee: Xiao Liang >Priority: Major > Labels: windows > Attachments: HDFS-13336.000.patch, HDFS-13336.001.patch, > HDFS-13336.002.patch > > > Test cases of TestWriteToReplica failed in windows with errors like: > h4. > !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png! > Error Details > Could not fully delete > F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1 > h4. > !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png! > Stack Trace > java.io.IOException: Could not fully delete > F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1 > at > org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1011) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:932) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:864) > at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:497) at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:456) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica.testAppend(TestWriteToReplica.java:89) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at > org.junit.runners.ParentRunner.run(ParentRunner.java:309) at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160) > at > 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) > at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb
[ https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416376#comment-16416376 ] Bharat Viswanadham commented on HDFS-13356: --- Attached patch v02. Moved the comments to before the field definitions. > Balancer:Set default value of minBlockSize to 10mb > --- > > Key: HDFS-13356 > URL: https://issues.apache.org/jira/browse/HDFS-13356 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.7.5 >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Labels: balancer, upgrades > Attachments: HDFS-13356.00.patch, HDFS-13356.01.patch, > HDFS-13356.02.patch > > > It seems we can run into a problem while a rolling upgrade with this. > The Balancer is upgraded after NameNodes, so once NN is upgraded it will > expect {{minBlockSize}}parameter in {{getBlocks()}}. The Balancer cannot send > it yet, so NN will use the default, which you set to 0. So NN will start > unexpectedly sending small blocks to the Balancer. So we should > # either change the default in protobuf to 10 MB > # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use > the configuration variable > {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}. > If you agree, we should create a follow up jira. I wanted to backport this > down the chain of branches, but this upgrade scenario is stopping me. > [~barnaul] commented this in HDFS-13222 jira. > https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb
[ https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13356: -- Attachment: HDFS-13356.02.patch > Balancer:Set default value of minBlockSize to 10mb > --- > > Key: HDFS-13356 > URL: https://issues.apache.org/jira/browse/HDFS-13356 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.7.5 >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Labels: balancer, upgrades > Attachments: HDFS-13356.00.patch, HDFS-13356.01.patch, > HDFS-13356.02.patch > > > It seems we can run into a problem while a rolling upgrade with this. > The Balancer is upgraded after NameNodes, so once NN is upgraded it will > expect {{minBlockSize}}parameter in {{getBlocks()}}. The Balancer cannot send > it yet, so NN will use the default, which you set to 0. So NN will start > unexpectedly sending small blocks to the Balancer. So we should > # either change the default in protobuf to 10 MB > # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use > the configuration variable > {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}. > If you agree, we should create a follow up jira. I wanted to backport this > down the chain of branches, but this upgrade scenario is stopping me. > [~barnaul] commented this in HDFS-13222 jira. > https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13336) Test cases of TestWriteToReplica failed in windows
[ https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416372#comment-16416372 ] Chris Douglas commented on HDFS-13336: -- bq. a generic fix could be made for org.apache.hadoop.hdfs.MiniDFSCluster#determineDfsBaseDir, if by default a randomized temp path is returned This sounds more sustainable to me, not just for Windows compatibility, but to avoid tests that assume a fixed, default base dir. > Test cases of TestWriteToReplica failed in windows > -- > > Key: HDFS-13336 > URL: https://issues.apache.org/jira/browse/HDFS-13336 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiao Liang >Assignee: Xiao Liang >Priority: Major > Labels: windows > Attachments: HDFS-13336.000.patch, HDFS-13336.001.patch, > HDFS-13336.002.patch > > > Test cases of TestWriteToReplica failed in windows with errors like: > h4. > !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png! > Error Details > Could not fully delete > F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1 > h4. > !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png! > Stack Trace > java.io.IOException: Could not fully delete > F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1 > at > org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1011) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:932) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:864) > at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:497) at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:456) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica.testAppend(TestWriteToReplica.java:89) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at > org.junit.runners.ParentRunner.run(ParentRunner.java:309) at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239) > at > 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) > at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
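A minimal sketch of the randomized-default idea for {{org.apache.hadoop.hdfs.MiniDFSCluster#determineDfsBaseDir}} discussed above, assuming the existing {{hdfs.minidfs.basedir}} property; this only illustrates the direction, not the eventual fix.

{code:java}
import java.io.File;
import java.util.UUID;

import org.apache.hadoop.conf.Configuration;

/**
 * Illustration of the randomized-default idea: keep honoring an explicit
 * hdfs.minidfs.basedir, otherwise pick a unique temp path so leftover or
 * concurrent test runs (e.g. on Windows) cannot collide on a fixed dir.
 */
class RandomizedBaseDir {
  static final String HDFS_MINIDFS_BASEDIR = "hdfs.minidfs.basedir";

  static String determineDfsBaseDir(Configuration conf) {
    String configured = conf.get(HDFS_MINIDFS_BASEDIR);
    if (configured != null) {
      return configured; // an explicit choice still wins
    }
    // Randomized default instead of the fixed target/test/data path.
    return new File(System.getProperty("java.io.tmpdir"),
        "minidfs-" + UUID.randomUUID()).getAbsolutePath();
  }
}
{code}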
[jira] [Commented] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb
[ https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416358#comment-16416358 ] Bharat Viswanadham commented on HDFS-13356: --- Fixed minor formatting issue in the comments. > Balancer:Set default value of minBlockSize to 10mb > --- > > Key: HDFS-13356 > URL: https://issues.apache.org/jira/browse/HDFS-13356 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.7.5 >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Labels: balancer, upgrades > Attachments: HDFS-13356.00.patch, HDFS-13356.01.patch > > > It seems we can run into a problem while a rolling upgrade with this. > The Balancer is upgraded after NameNodes, so once NN is upgraded it will > expect {{minBlockSize}}parameter in {{getBlocks()}}. The Balancer cannot send > it yet, so NN will use the default, which you set to 0. So NN will start > unexpectedly sending small blocks to the Balancer. So we should > # either change the default in protobuf to 10 MB > # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use > the configuration variable > {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}. > If you agree, we should create a follow up jira. I wanted to backport this > down the chain of branches, but this upgrade scenario is stopping me. > [~barnaul] commented this in HDFS-13222 jira. > https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb
[ https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13356: -- Attachment: HDFS-13356.01.patch > Balancer:Set default value of minBlockSize to 10mb > --- > > Key: HDFS-13356 > URL: https://issues.apache.org/jira/browse/HDFS-13356 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.7.5 >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Labels: balancer, upgrades > Attachments: HDFS-13356.00.patch, HDFS-13356.01.patch > > > It seems we can run into a problem while a rolling upgrade with this. > The Balancer is upgraded after NameNodes, so once NN is upgraded it will > expect {{minBlockSize}}parameter in {{getBlocks()}}. The Balancer cannot send > it yet, so NN will use the default, which you set to 0. So NN will start > unexpectedly sending small blocks to the Balancer. So we should > # either change the default in protobuf to 10 MB > # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use > the configuration variable > {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}. > If you agree, we should create a follow up jira. I wanted to backport this > down the chain of branches, but this upgrade scenario is stopping me. > [~barnaul] commented this in HDFS-13222 jira. > https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb
[ https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13356: -- Attachment: HDFS-13356.00.patch > Balancer:Set default value of minBlockSize to 10mb > --- > > Key: HDFS-13356 > URL: https://issues.apache.org/jira/browse/HDFS-13356 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.7.5 >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Labels: balancer, upgrades > Attachments: HDFS-13356.00.patch > > > It seems we can run into a problem while a rolling upgrade with this. > The Balancer is upgraded after NameNodes, so once NN is upgraded it will > expect {{minBlockSize}}parameter in {{getBlocks()}}. The Balancer cannot send > it yet, so NN will use the default, which you set to 0. So NN will start > unexpectedly sending small blocks to the Balancer. So we should > # either change the default in protobuf to 10 MB > # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use > the configuration variable > {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}. > If you agree, we should create a follow up jira. I wanted to backport this > down the chain of branches, but this upgrade scenario is stopping me. > [~barnaul] commented this in HDFS-13222 jira. > https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb
[ https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13356: -- Status: Patch Available (was: Open) > Balancer:Set default value of minBlockSize to 10mb > --- > > Key: HDFS-13356 > URL: https://issues.apache.org/jira/browse/HDFS-13356 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.7.5 >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Labels: balancer, upgrades > Attachments: HDFS-13356.00.patch > > > It seems we can run into a problem while a rolling upgrade with this. > The Balancer is upgraded after NameNodes, so once NN is upgraded it will > expect {{minBlockSize}}parameter in {{getBlocks()}}. The Balancer cannot send > it yet, so NN will use the default, which you set to 0. So NN will start > unexpectedly sending small blocks to the Balancer. So we should > # either change the default in protobuf to 10 MB > # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use > the configuration variable > {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}. > If you agree, we should create a follow up jira. I wanted to backport this > down the chain of branches, but this upgrade scenario is stopping me. > [~barnaul] commented this in HDFS-13222 jira. > https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb
[ https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-13356: - Component/s: balancer & mover > Balancer:Set default value of minBlockSize to 10mb > --- > > Key: HDFS-13356 > URL: https://issues.apache.org/jira/browse/HDFS-13356 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.7.5 >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Labels: balancer, upgrades > > It seems we can run into a problem while a rolling upgrade with this. > The Balancer is upgraded after NameNodes, so once NN is upgraded it will > expect {{minBlockSize}}parameter in {{getBlocks()}}. The Balancer cannot send > it yet, so NN will use the default, which you set to 0. So NN will start > unexpectedly sending small blocks to the Balancer. So we should > # either change the default in protobuf to 10 MB > # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use > the configuration variable > {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}. > If you agree, we should create a follow up jira. I wanted to backport this > down the chain of branches, but this upgrade scenario is stopping me. > [~barnaul] commented this in HDFS-13222 jira. > https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb
[ https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-13356: - Labels: balancer upgrades (was: upgrades) > Balancer:Set default value of minBlockSize to 10mb > --- > > Key: HDFS-13356 > URL: https://issues.apache.org/jira/browse/HDFS-13356 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.7.5 >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Labels: balancer, upgrades > > It seems we can run into a problem while a rolling upgrade with this. > The Balancer is upgraded after NameNodes, so once NN is upgraded it will > expect {{minBlockSize}}parameter in {{getBlocks()}}. The Balancer cannot send > it yet, so NN will use the default, which you set to 0. So NN will start > unexpectedly sending small blocks to the Balancer. So we should > # either change the default in protobuf to 10 MB > # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use > the configuration variable > {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}. > If you agree, we should create a follow up jira. I wanted to backport this > down the chain of branches, but this upgrade scenario is stopping me. > [~barnaul] commented this in HDFS-13222 jira. > https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb
[ https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-13356: - Target Version/s: 3.0.2, 3.1.1 > Balancer:Set default value of minBlockSize to 10mb > --- > > Key: HDFS-13356 > URL: https://issues.apache.org/jira/browse/HDFS-13356 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.7.5 >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Labels: balancer, upgrades > > It seems we can run into a problem while a rolling upgrade with this. > The Balancer is upgraded after NameNodes, so once NN is upgraded it will > expect {{minBlockSize}}parameter in {{getBlocks()}}. The Balancer cannot send > it yet, so NN will use the default, which you set to 0. So NN will start > unexpectedly sending small blocks to the Balancer. So we should > # either change the default in protobuf to 10 MB > # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use > the configuration variable > {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}. > If you agree, we should create a follow up jira. I wanted to backport this > down the chain of branches, but this upgrade scenario is stopping me. > [~barnaul] commented this in HDFS-13222 jira. > https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb
[ https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-13356: - Labels: upgrades (was: ) > Balancer:Set default value of minBlockSize to 10mb > --- > > Key: HDFS-13356 > URL: https://issues.apache.org/jira/browse/HDFS-13356 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.5 >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Labels: upgrades > > It seems we can run into a problem while a rolling upgrade with this. > The Balancer is upgraded after NameNodes, so once NN is upgraded it will > expect {{minBlockSize}}parameter in {{getBlocks()}}. The Balancer cannot send > it yet, so NN will use the default, which you set to 0. So NN will start > unexpectedly sending small blocks to the Balancer. So we should > # either change the default in protobuf to 10 MB > # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use > the configuration variable > {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}. > If you agree, we should create a follow up jira. I wanted to backport this > down the chain of branches, but this upgrade scenario is stopping me. > [~barnaul] commented this in HDFS-13222 jira. > https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb
[ https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13356: -- Affects Version/s: 2.7.5 > Balancer:Set default value of minBlockSize to 10mb > --- > > Key: HDFS-13356 > URL: https://issues.apache.org/jira/browse/HDFS-13356 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.5 >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > It seems we can run into a problem while a rolling upgrade with this. > The Balancer is upgraded after NameNodes, so once NN is upgraded it will > expect {{minBlockSize}}parameter in {{getBlocks()}}. The Balancer cannot send > it yet, so NN will use the default, which you set to 0. So NN will start > unexpectedly sending small blocks to the Balancer. So we should > # either change the default in protobuf to 10 MB > # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use > the configuration variable > {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}. > If you agree, we should create a follow up jira. I wanted to backport this > down the chain of branches, but this upgrade scenario is stopping me. > [~barnaul] commented this in HDFS-13222 jira. > https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb
[ https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416338#comment-16416338 ] Bharat Viswanadham commented on HDFS-13356: --- This affects upgrades from 2.x to 3.x, so I set the affects version to 2.7.5. > Balancer:Set default value of minBlockSize to 10mb > --- > > Key: HDFS-13356 > URL: https://issues.apache.org/jira/browse/HDFS-13356 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.5 >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > It seems we can run into a problem while a rolling upgrade with this. > The Balancer is upgraded after NameNodes, so once NN is upgraded it will > expect {{minBlockSize}}parameter in {{getBlocks()}}. The Balancer cannot send > it yet, so NN will use the default, which you set to 0. So NN will start > unexpectedly sending small blocks to the Balancer. So we should > # either change the default in protobuf to 10 MB > # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use > the configuration variable > {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}. > If you agree, we should create a follow up jira. I wanted to backport this > down the chain of branches, but this upgrade scenario is stopping me. > [~barnaul] commented this in HDFS-13222 jira. > https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13354) Add config for min number of data nodes to come out of chill mode in SCM
[ https://issues.apache.org/jira/browse/HDFS-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416328#comment-16416328 ] genericqa commented on HDFS-13354: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 1s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 25s{color} | {color:red} server-scm in HDFS-7240 failed. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 17s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 48s{color} | {color:red} hadoop-hdsl/common in HDFS-7240 has 2 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 21s{color} | {color:red} server-scm in HDFS-7240 failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 17s{color} | {color:red} server-scm in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 42s{color} | {color:red} hadoop-hdsl in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 42s{color} | {color:red} hadoop-hdsl in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 18s{color} | {color:red} server-scm in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 42s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 19s{color} | {color:red} server-scm in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} common in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} hadoop-hdsl_server-scm generated 0 new + 0 unchanged - 83 fixed = 0 total (was 83) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 16s{color} | {color:green} common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 21s{color} | {color:red} server-scm in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s{color} | {color:red} The patch generated 70 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 54m 58s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:7a542fb | | JIRA Issue | HDFS-13354 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916467/HDFS-13354-HDFS-7240.00.patch | | Optional Tests | asflicense compile javac javado
[jira] [Comment Edited] (HDFS-13314) NameNode should optionally exit if it detects FsImage corruption
[ https://issues.apache.org/jira/browse/HDFS-13314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416260#comment-16416260 ] Arpit Agarwal edited comment on HDFS-13314 at 3/27/18 10:10 PM: bq. if numErrors == 0 then namenode should not exit. We test this code path since we have many unit tests that exercise saveNamespace. e.g. multiple test cases in TestFSImage would fail if NN exits when numErrors == 0. bq. if numErrors > 0 then namenode should exit. No easy way to do so without refactoring existing classes or inserting some test hooks. I don't think it is worth the effort. Do you feel it is a blocker to committing this validation? was (Author: arpitagarwal): bq. if numErrors == 0 then namenode should not exit. We test this code path since we have many unit tests that exercise saveNamespace. e.g. multiple test cases in TestFSImage would fail if NN exits when numErrors == 0. bq. if numErrors > 0 then namenode should exit. No easy way to do so without refactoring or inserting test hooks. I don't think it is worth the effort. Do you feel it is a blocker to committing this validation? > NameNode should optionally exit if it detects FsImage corruption > > > Key: HDFS-13314 > URL: https://issues.apache.org/jira/browse/HDFS-13314 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Major > Attachments: HDFS-13314.01.patch, HDFS-13314.02.patch, > HDFS-13314.03.patch, HDFS-13314.04.patch, HDFS-13314.05.patch > > > The NameNode should optionally exit after writing an FsImage if it detects > the following kinds of corruptions: > # INodeReference pointing to non-existent INode > # Duplicate entries in snapshot deleted diff list. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
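To make the two cases under discussion in HDFS-13314 concrete, a hedged sketch of the post-save check; the flag and method names are invented, and the real patch goes through FSImage/ExitUtil rather than a standalone class.

{code:java}
/**
 * Illustration only: after an FsImage is written, exit only when corruption
 * was detected AND the operator opted in.
 */
class FsImageCorruptionCheck {
  private final boolean exitOnCorruption; // hypothetical opt-in setting

  FsImageCorruptionCheck(boolean exitOnCorruption) {
    this.exitOnCorruption = exitOnCorruption;
  }

  void checkAfterSave(int numErrors) {
    if (numErrors == 0) {
      // Normal saveNamespace path; exercised by the existing unit tests.
      return;
    }
    if (exitOnCorruption) {
      // Stand-in for ExitUtil.terminate(...) in the real code base.
      throw new IllegalStateException("Aborting NameNode: " + numErrors
          + " corruption(s) detected in the saved FsImage");
    }
    System.err.println("WARN: saved FsImage has " + numErrors
        + " corruption(s); continuing because exit is not enabled");
  }
}
{code}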
[jira] [Commented] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb
[ https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416322#comment-16416322 ] Wei-Chiu Chuang commented on HDFS-13356: Thanks for creating an issue, [~bharatviswa]. Could you also add the affects version? It would be nice to know your version before & after the upgrade. Thank you > Balancer:Set default value of minBlockSize to 10mb > --- > > Key: HDFS-13356 > URL: https://issues.apache.org/jira/browse/HDFS-13356 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > It seems we can run into a problem while a rolling upgrade with this. > The Balancer is upgraded after NameNodes, so once NN is upgraded it will > expect {{minBlockSize}}parameter in {{getBlocks()}}. The Balancer cannot send > it yet, so NN will use the default, which you set to 0. So NN will start > unexpectedly sending small blocks to the Balancer. So we should > # either change the default in protobuf to 10 MB > # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use > the configuration variable > {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}. > If you agree, we should create a follow up jira. I wanted to backport this > down the chain of branches, but this upgrade scenario is stopping me. > [~barnaul] commented this in HDFS-13222 jira. > https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb
Bharat Viswanadham created HDFS-13356: - Summary: Balancer:Set default value of minBlockSize to 10mb Key: HDFS-13356 URL: https://issues.apache.org/jira/browse/HDFS-13356 Project: Hadoop HDFS Issue Type: Bug Reporter: Bharat Viswanadham Assignee: Bharat Viswanadham It seems we can run into a problem while a rolling upgrade with this. The Balancer is upgraded after NameNodes, so once NN is upgraded it will expect {{minBlockSize}}parameter in {{getBlocks()}}. The Balancer cannot send it yet, so NN will use the default, which you set to 0. So NN will start unexpectedly sending small blocks to the Balancer. So we should # either change the default in protobuf to 10 MB # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use the configuration variable {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}. If you agree, we should create a follow up jira. I wanted to backport this down the chain of branches, but this upgrade scenario is stopping me. [~barnaul] commented this in HDFS-13222 jira. https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb
[ https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416314#comment-16416314 ] Bharat Viswanadham commented on HDFS-13356: --- I will go with option 1, which is a simple code change. > Balancer:Set default value of minBlockSize to 10mb > --- > > Key: HDFS-13356 > URL: https://issues.apache.org/jira/browse/HDFS-13356 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > It seems we can run into a problem while a rolling upgrade with this. > The Balancer is upgraded after NameNodes, so once NN is upgraded it will > expect {{minBlockSize}}parameter in {{getBlocks()}}. The Balancer cannot send > it yet, so NN will use the default, which you set to 0. So NN will start > unexpectedly sending small blocks to the Balancer. So we should > # either change the default in protobuf to 10 MB > # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use > the configuration variable > {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}. > If you agree, we should create a follow up jira. I wanted to backport this > down the chain of branches, but this upgrade scenario is stopping me. > [~barnaul] commented this in HDFS-13222 jira. > https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
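For reference, a sketch of what option 2 from the description would amount to on the NameNode side (option 1 is the one-line alternative: default the protobuf field itself to 10 MB instead of 0); the key spelling and method name below are assumptions, not the committed change.

{code:java}
import org.apache.hadoop.conf.Configuration;

/**
 * Illustration of option 2: treat minBlockSize == 0 arriving from a
 * not-yet-upgraded Balancer as "unset" and fall back to the server-side
 * configuration instead of returning small blocks.
 */
class GetBlocksCompat {
  static final String MIN_BLOCK_SIZE_KEY =
      "dfs.balancer.getBlocks.min-block-size"; // assumed spelling of the key
  static final long MIN_BLOCK_SIZE_DEFAULT = 10L * 1024 * 1024; // 10 MB

  static long effectiveMinBlockSize(long requested, Configuration conf) {
    if (requested > 0) {
      return requested; // an upgraded Balancer sent an explicit value
    }
    // Old Balancer during a rolling upgrade: the protobuf default of 0
    // lands here, so use the NameNode-side configuration instead.
    return conf.getLong(MIN_BLOCK_SIZE_KEY, MIN_BLOCK_SIZE_DEFAULT);
  }
}
{code}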
[jira] [Commented] (HDFS-13222) Update getBlocks method to take minBlockSize in RPC calls
[ https://issues.apache.org/jira/browse/HDFS-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416310#comment-16416310 ] Bharat Viswanadham commented on HDFS-13222: --- [~shv] agreed. Will open a Jira to fix this. > Update getBlocks method to take minBlockSize in RPC calls > - > > Key: HDFS-13222 > URL: https://issues.apache.org/jira/browse/HDFS-13222 > Project: Hadoop HDFS > Issue Type: Improvement > Components: balancer & mover >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Fix For: 3.1.0 > > Attachments: HDFS-13222.00.patch, HDFS-13222.01.patch, > HDFS-13222.02.patch > > > > getBlocks Using balancer parameter is done in this Jira HDFS-9412 > > Pass the Balancer conf value from Balancer to NN via getBlocks in each RPC. > as [~szetszwo] suggested > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13347: --- Attachment: HDFS-13347.006.patch > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/browse/HDFS-13347 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13347.000.patch, HDFS-13347.001.patch, > HDFS-13347.002.patch, HDFS-13347.003.patch, HDFS-13347.004.patch, > HDFS-13347.005.patch, HDFS-13347.006.patch > > > Getting the datanode reports is an expensive operation and can be executed > very frequently by the UI and watchdogs. We should cache this information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416300#comment-16416300 ] genericqa commented on HDFS-13347: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 37s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 56s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 21s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 22s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 22s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 23s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 49s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 19s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 24s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 48m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13347 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916464/HDFS-13347.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux bd5c5f1c0bdf 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 3fe41c6 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/23687/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt | | compile | https://builds.apache.org/job/PreCommit-HDFS-Build/23687/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-rbf.txt | | javac | https://builds.apache.org/job/PreCommit-HDFS-Build/23687/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-rbf.txt | | mvnsite | https://builds.apache.org/job/PreCommit-HDFS-Build/23687/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs
[jira] [Commented] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416295#comment-16416295 ] genericqa commented on HDFS-13347: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 57s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 19s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 19s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 19s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 20s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 44s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 21s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 21s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 55m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13347 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916462/HDFS-13347.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a54e96c58d3f 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 3fe41c6 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/23686/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt | | compile | https://builds.apache.org/job/PreCommit-HDFS-Build/23686/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-rbf.txt | | javac | https://builds.apache.org/job/PreCommit-HDFS-Build/23686/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-rbf.txt | | mvnsite | https://builds.apache.org/job/PreCommit-HDFS-Build/23686/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-h
[jira] [Commented] (HDFS-13336) Test cases of TestWriteToReplica failed in windows
[ https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416280#comment-16416280 ] Xiao Liang commented on HDFS-13336: --- As [~elgoiri] suggested, a generic fix could be made in org.apache.hadoop.hdfs.MiniDFSCluster#determineDfsBaseDir: if a randomized temp path is returned by default (via GenericTestUtils.getRandomizedTempPath()) when HDFS_MINIDFS_BASEDIR is not specified, many similar test failures on Windows could be fixed. [~chris.douglas], what do you think? > Test cases of TestWriteToReplica failed in windows > -- > > Key: HDFS-13336 > URL: https://issues.apache.org/jira/browse/HDFS-13336 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiao Liang >Assignee: Xiao Liang >Priority: Major > Labels: windows > Attachments: HDFS-13336.000.patch, HDFS-13336.001.patch, > HDFS-13336.002.patch > > > Test cases of TestWriteToReplica failed in windows with errors like: > h4. Error Details > Could not fully delete > F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1 > h4. Stack Trace > java.io.IOException: Could not fully delete > F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1 > at > org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1011) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:932) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:864) > at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:497) at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:456) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica.testAppend(TestWriteToReplica.java:89) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at > org.junit.runners.ParentRunner.run(ParentRunner.java:309) at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239) 
> at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) > at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
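The generic fix suggested above would look roughly like the following in {{MiniDFSCluster}}. This is a sketch under the stated assumptions ({{HDFS_MINIDFS_BASEDIR}} and {{GenericTestUtils.getRandomizedTempPath()}} already exist; the real method body may differ):
{code}
// Sketch: fall back to a randomized temp path when the test does not set
// hdfs.minidfs.basedir explicitly. Fragment of MiniDFSCluster.
protected String determineDfsBaseDir() {
  if (conf != null) {
    final String dfsDir = conf.get(HDFS_MINIDFS_BASEDIR, null);
    if (dfsDir != null) {
      return dfsDir;
    }
  }
  // A per-run randomized directory avoids the "Could not fully delete ..."
  // failures seen on Windows when consecutive runs reuse the same path.
  return GenericTestUtils.getRandomizedTempPath();
}
{code}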
[jira] [Commented] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416261#comment-16416261 ] genericqa commented on HDFS-13347: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 5s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 56s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 67m 2s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13347 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916457/HDFS-13347.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ed84bbab91a7 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 285bbaa | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23685/testReport/ | | Max. process+thread count | 928 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23685/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/brow
[jira] [Commented] (HDFS-13314) NameNode should optionally exit if it detects FsImage corruption
[ https://issues.apache.org/jira/browse/HDFS-13314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416260#comment-16416260 ] Arpit Agarwal commented on HDFS-13314: -- bq. if numErrors == 0 then namenode should not exit. We test this code path since we have many unit tests that exercise saveNamespace; e.g., multiple test cases in TestFSImage would fail if the NN exits when numErrors == 0. bq. if numErrors > 0 then namenode should exit. There is no easy way to do so without refactoring or inserting test hooks. I don't think it is worth the effort. Do you feel it is a blocker to committing this validation? > NameNode should optionally exit if it detects FsImage corruption > > > Key: HDFS-13314 > URL: https://issues.apache.org/jira/browse/HDFS-13314 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Major > Attachments: HDFS-13314.01.patch, HDFS-13314.02.patch, > HDFS-13314.03.patch, HDFS-13314.04.patch, HDFS-13314.05.patch > > > The NameNode should optionally exit after writing an FsImage if it detects > the following kinds of corruptions: > # INodeReference pointing to non-existent INode > # Duplicate entries in snapshot deleted diff list. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
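For readers following the thread, the validation under discussion boils down to a guarded exit after the image is written. A minimal sketch, with hypothetical names for the flag and the counter; only {{ExitUtil.terminate}} is an existing Hadoop utility:
{code}
// Illustrative only: optionally terminate the NameNode if corruption was
// detected while saving the FsImage.
void maybeExitOnFsImageCorruption(long numErrors, boolean exitOnCorruption) {
  // The numErrors == 0 path is exercised by every test calling
  // saveNamespace; the numErrors > 0 path is hard to unit-test without
  // refactoring or test hooks, as noted above.
  if (numErrors > 0 && exitOnCorruption) {
    ExitUtil.terminate(1, "Detected " + numErrors
        + " corruption(s) while saving the FsImage.");
  }
}
{code}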
[jira] [Updated] (HDFS-13354) Add config for min number of data nodes to come out of chill mode in SCM
[ https://issues.apache.org/jira/browse/HDFS-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13354: -- Attachment: HDFS-13354-HDFS-7240.00.patch > Add config for min number of data nodes to come out of chill mode in SCM > > > Key: HDFS-13354 > URL: https://issues.apache.org/jira/browse/HDFS-13354 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13354-HDFS-7240.00.patch > > > SCM currently comes out of chill mode as soon as one datanode reports in. We > should require a configurable minimum number of known datanodes before SCM > comes out of chill mode. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13354) Add config for min number of data nodes to come out of chill mode in SCM
[ https://issues.apache.org/jira/browse/HDFS-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13354: -- Status: Patch Available (was: In Progress) > Add config for min number of data nodes to come out of chill mode in SCM > > > Key: HDFS-13354 > URL: https://issues.apache.org/jira/browse/HDFS-13354 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13354-HDFS-7240.00.patch > > > SCM currently comes out of chill mode as soon as one datanode reports in. We > should require a configurable minimum number of known datanodes before SCM > comes out of chill mode. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13355) Create IO provider for hdsl
[ https://issues.apache.org/jira/browse/HDFS-13355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-13355: -- Summary: Create IO provider for hdsl (was: Create IO provider abstraction for hdsl) > Create IO provider for hdsl > --- > > Key: HDFS-13355 > URL: https://issues.apache.org/jira/browse/HDFS-13355 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: HDFS-7240 >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Fix For: HDFS-7240 > > > Create an abstraction like FileIoProvider for hdsl to handle disk failure and > other issues. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13355) Create IO provider abstraction for hdsl
Ajay Kumar created HDFS-13355: - Summary: Create IO provider abstraction for hdsl Key: HDFS-13355 URL: https://issues.apache.org/jira/browse/HDFS-13355 Project: Hadoop HDFS Issue Type: Improvement Affects Versions: HDFS-7240 Reporter: Ajay Kumar Assignee: Ajay Kumar Fix For: HDFS-7240 Create an abstraction like FileIoProvider for hdsl to handle disk failure and other issues. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
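As a rough illustration of what "an abstraction like FileIoProvider" could mean for hdsl, consider the hypothetical interface below. Every name here is made up for illustration; only {{FileIoProvider}} itself is an existing HDFS class.
{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

// Hypothetical: route hdsl IO through one provider so disk failures can be
// observed, counted, and acted upon centrally.
public interface HdslIoProvider {
  /** Wrapped read, so failures are seen by the provider. */
  int read(FileInputStream in, byte[] buf, int off, int len)
      throws IOException;

  /** Wrapped write, same idea. */
  void write(FileOutputStream out, byte[] buf, int off, int len)
      throws IOException;

  /** Invoked on IO failure, e.g. to mark the containing volume bad. */
  void onFailure(File volume, IOException cause);
}
{code}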
[jira] [Commented] (HDFS-13331) Add lastSeenStateId to RpcRequestHeader.
[ https://issues.apache.org/jira/browse/HDFS-13331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416248#comment-16416248 ] genericqa commented on HDFS-13331: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 41s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-12943 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 5m 9s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 16s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 46s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 11s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 29s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 9s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 53s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 26s{color} | {color:green} HDFS-12943 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 12s{color} | {color:green} root: The patch generated 0 new + 363 unchanged - 1 fixed = 363 total (was 364) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 2s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 36s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 42s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}111m 24s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}254m 27s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.
[jira] [Created] (HDFS-13354) Add config for min number of data nodes to come out of chill mode in SCM
Bharat Viswanadham created HDFS-13354: - Summary: Add config for min number of data nodes to come out of chill mode in SCM Key: HDFS-13354 URL: https://issues.apache.org/jira/browse/HDFS-13354 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Bharat Viswanadham Assignee: Bharat Viswanadham SCM currently comes out of chill mode as soon as one datanode reports in. We should require a configurable minimum number of known datanodes before SCM comes out of chill mode. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
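A minimal sketch of such a check follows; the config key name and its default are made up for illustration, and the attached patch may choose different ones:
{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical: SCM stays in chill mode until a configured minimum number
// of datanodes have reported in.
public final class ChillModeCheck {
  // Both the key and the default of 1 (today's behavior) are illustrative.
  public static final String OZONE_SCM_CHILLMODE_MIN_DATANODES =
      "ozone.scm.chillmode.min.datanodes";
  public static final int OZONE_SCM_CHILLMODE_MIN_DATANODES_DEFAULT = 1;

  public static boolean canExitChillMode(Configuration conf,
      int reportedDatanodes) {
    int minDatanodes = conf.getInt(OZONE_SCM_CHILLMODE_MIN_DATANODES,
        OZONE_SCM_CHILLMODE_MIN_DATANODES_DEFAULT);
    return reportedDatanodes >= minDatanodes;
  }
}
{code}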
[jira] [Work started] (HDFS-13354) Add config for min number of data nodes to come out of chill mode in SCM
[ https://issues.apache.org/jira/browse/HDFS-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-13354 started by Bharat Viswanadham. - > Add config for min number of data nodes to come out of chill mode in SCM > > > Key: HDFS-13354 > URL: https://issues.apache.org/jira/browse/HDFS-13354 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > SCM currently comes out of chill mode as soon as one datanode reports in. We > should require a configurable minimum number of known datanodes before SCM > comes out of chill mode. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13347: --- Attachment: HDFS-13347.005.patch > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/browse/HDFS-13347 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13347.000.patch, HDFS-13347.001.patch, > HDFS-13347.002.patch, HDFS-13347.003.patch, HDFS-13347.004.patch, > HDFS-13347.005.patch > > > Getting the datanode reports is an expensive operation and can be executed > very frequently by the UI and watchdogs. We should cache this information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13281) Namenode#createFile should be /.reserved/raw/ aware.
[ https://issues.apache.org/jira/browse/HDFS-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416212#comment-16416212 ] Xiao Chen commented on HDFS-13281: -- I still don't understand the use cases; please help me understand. I played with distcp a little bit; IIUC this is an optimization for that scenario so that the extra EDEK doesn't get generated unnecessarily. How does HDFS-12597 use it? From [this comment|https://issues.apache.org/jira/browse/HDFS-12574?focusedCommentId=16329149&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16329149], the client will still create the file in NN; only the streaming part will write encrypted data to the DN, right? bq. /.reserved/raw is a special path prefix and you should be very careful to use it and not use irresponsibly. Exactly. Instead of educating every user when they raise an issue, can we proactively prevent this? How are we planning to set the EDEKs after a /.reserved/raw file is created? One atomic RPC to startFile with a user-provided EDEK feels safer, but is messier to do. > Namenode#createFile should be /.reserved/raw/ aware. > > > Key: HDFS-13281 > URL: https://issues.apache.org/jira/browse/HDFS-13281 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption >Affects Versions: 2.8.3 >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Critical > Attachments: HDFS-13281.001.patch > > > If I want to write to /.reserved/raw/ and if that directory happens to > be in an EZ, then the namenode *should not* create an edek but should just copy > the raw bytes from the source. > Namenode#startFileInt should be /.reserved/raw/ aware. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
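For context, the behavior being requested amounts to skipping EDEK generation in {{startFile}} for raw-prefixed paths. The fragment below is a sketch only: the helper it calls is hypothetical, and it deliberately glosses over the open question above of how the EDEK would then be set.
{code}
// Sketch, not actual FSNamesystem internals: do not pre-generate an EDEK
// when the create arrives through the /.reserved/raw prefix.
boolean isRawPath = src.startsWith("/.reserved/raw/");
if (!isRawPath) {
  // Normal create inside an EZ: resolve the zone key and generate an EDEK.
  edek = generateEncryptedDataEncryptionKey(ezKeyName); // hypothetical helper
} else {
  // Raw create: the caller copies raw encrypted bytes; the EDEK/crypto
  // xattrs must then be supplied separately, or atomically as discussed.
  edek = null;
}
{code}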
[jira] [Commented] (HDFS-13314) NameNode should optionally exit if it detects FsImage corruption
[ https://issues.apache.org/jira/browse/HDFS-13314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416209#comment-16416209 ] Tsz Wo Nicholas Sze commented on HDFS-13314: More details: the suggested unit test sounds like we would be testing whether the if-statement in Java works properly. > NameNode should optionally exit if it detects FsImage corruption > > > Key: HDFS-13314 > URL: https://issues.apache.org/jira/browse/HDFS-13314 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Major > Attachments: HDFS-13314.01.patch, HDFS-13314.02.patch, > HDFS-13314.03.patch, HDFS-13314.04.patch, HDFS-13314.05.patch > > > The NameNode should optionally exit after writing an FsImage if it detects > the following kinds of corruptions: > # INodeReference pointing to non-existent INode > # Duplicate entries in snapshot deleted diff list. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416206#comment-16416206 ] Íñigo Goiri commented on HDFS-13347: Thanks [~giovanni.fumarola] for the comments. For {{RouterRpcServer}}, the idea is that even if a subcluster is down, we want to see the report for the rest; I made it a parameter and added a javadoc to clarify. I also made TIME_OUT a config parameter. > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/browse/HDFS-13347 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13347.000.patch, HDFS-13347.001.patch, > HDFS-13347.002.patch, HDFS-13347.003.patch, HDFS-13347.004.patch > > > Getting the datanode reports is an expensive operation and can be executed > very frequently by the UI and watchdogs. We should cache this information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
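The caching approach can be pictured as a small expiring cache keyed by report type. This is a hedged sketch using Guava, not necessarily what the attached patch does; the expiration would be wired to the now-configurable timeout.
{code}
import java.io.IOException;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;

// Sketch: memoize the expensive datanode report and let entries expire
// after a configurable period, so the UI and watchdogs hit the cache.
class DatanodeReportCache {
  private final Cache<DatanodeReportType, DatanodeInfo[]> cache;

  DatanodeReportCache(long expirationMs) {
    this.cache = CacheBuilder.newBuilder()
        .expireAfterWrite(expirationMs, TimeUnit.MILLISECONDS)
        .build();
  }

  DatanodeInfo[] get(DatanodeReportType type, Callable<DatanodeInfo[]> loader)
      throws IOException {
    try {
      return cache.get(type, loader); // loads on miss, cached otherwise
    } catch (ExecutionException e) {
      throw new IOException(e.getCause());
    }
  }
}
{code}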
[jira] [Updated] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13347: --- Attachment: HDFS-13347.004.patch > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/browse/HDFS-13347 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13347.000.patch, HDFS-13347.001.patch, > HDFS-13347.002.patch, HDFS-13347.003.patch, HDFS-13347.004.patch > > > Getting the datanode reports is an expensive operation and can be executed > very frequently by the UI and watchdogs. We should cache this information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13314) NameNode should optionally exit if it detects FsImage corruption
[ https://issues.apache.org/jira/browse/HDFS-13314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416199#comment-16416199 ] Tsz Wo Nicholas Sze commented on HDFS-13314: [~shahrs87], imho, the unit test you suggested does not sound useful. Thanks. > NameNode should optionally exit if it detects FsImage corruption > > > Key: HDFS-13314 > URL: https://issues.apache.org/jira/browse/HDFS-13314 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Major > Attachments: HDFS-13314.01.patch, HDFS-13314.02.patch, > HDFS-13314.03.patch, HDFS-13314.04.patch, HDFS-13314.05.patch > > > The NameNode should optionally exit after writing an FsImage if it detects > the following kinds of corruptions: > # INodeReference pointing to non-existent INode > # Duplicate entries in snapshot deleted diff list. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13296) GenericTestUtils generates paths with drive letter in Windows and fail webhdfs related test cases
[ https://issues.apache.org/jira/browse/HDFS-13296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-13296: - Fix Version/s: 3.1.0 > GenericTestUtils generates paths with drive letter in Windows and fail > webhdfs related test cases > - > > Key: HDFS-13296 > URL: https://issues.apache.org/jira/browse/HDFS-13296 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiao Liang >Assignee: Xiao Liang >Priority: Major > Labels: windows > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0 > > Attachments: HDFS-13296.000.patch > > > In GenericTestUtils#getRandomizedTestDir, getAbsoluteFile is called and will > add a drive letter to the path on Windows; some test cases use the generated > path to send webhdfs requests, which fail due to the drive letter in the > URI, e.g. "webhdfs://127.0.0.1:18334/D:/target/test/data/vUqZkOrBZa/test". > GenericTestUtils#getTempPath has a similar issue on Windows. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13145) SBN crash when transition to ANN with in-progress edit tailing enabled
[ https://issues.apache.org/jira/browse/HDFS-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-13145: - Fix Version/s: 3.1.0 > SBN crash when transition to ANN with in-progress edit tailing enabled > -- > > Key: HDFS-13145 > URL: https://issues.apache.org/jira/browse/HDFS-13145 > Project: Hadoop HDFS > Issue Type: Bug > Components: ha, namenode >Affects Versions: 3.0.0 >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Major > Fix For: 3.1.0, 3.0.2 > > Attachments: HDFS-13145.000.patch, HDFS-13145.001.patch > > > With in-progress edit log tailing enabled, {{QuorumOutputStream}} > will send two batches to JNs, one normal edit batch followed by a dummy batch > to update the commit ID on JNs. > {code} > QuorumCall qcall = loggers.sendEdits( > segmentTxId, firstTxToFlush, > numReadyTxns, data); > loggers.waitForWriteQuorum(qcall, writeTimeoutMs, "sendEdits"); > > // Since we successfully wrote this batch, let the loggers know. Any > future > // RPCs will thus let the loggers know of the most recent transaction, > even > // if a logger has fallen behind. > loggers.setCommittedTxId(firstTxToFlush + numReadyTxns - 1); > // If we don't have this dummy send, committed TxId might be one-batch > // stale on the Journal Nodes > if (updateCommittedTxId) { > QuorumCall fakeCall = loggers.sendEdits( > segmentTxId, firstTxToFlush, > 0, new byte[0]); > loggers.waitForWriteQuorum(fakeCall, writeTimeoutMs, "sendEdits"); > } > {code} > Between each batch, it will wait for the JNs to reach a quorum. However, if > the ANN crashes in between, then the SBN will crash while transitioning to ANN: > {code} > java.lang.IllegalStateException: Cannot start writing at txid 24312595802 > when there is a stream available for read: .. 
> at > org.apache.hadoop.hdfs.server.namenode.FSEditLog.openForWrite(FSEditLog.java:329) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:1196) > at > org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1839) > at > org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61) > at > org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:64) > at > org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1707) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:1622) > at > org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107) > at > org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:4460) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:851) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:794) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2490) > 2018-02-13 00:43:20,728 INFO org.apache.hadoop.util.ExitUtil: Exiting with > status 1 > {code} > This is because without the dummy batch, the {{commitTxnId}} will lag behind > the {{endTxId}}, which caused the check in {{openForWrite}} to fail: > {code} > List streams = new ArrayList(); > journalSet.selectInputStreams(streams, segmentTxId, true, false); > if (!streams.isEmpty()) { > String error = String.format("Cannot start writing at txid %s " + > "when there is a stream available for read: %s", > segmentTxId, streams.get(0)); > IOUtils.cleanupWithLogger(LOG, > streams.toArray(new EditLogInputStream[0])); > throw new IllegalStateException(error); > } > {code} > In our environment, this can be reproduced pretty consistently, which will > leave the cluster with no running namenodes. Even though we are using a 2.8.2 > backport, I believe the same issue also exist in 3.0.x. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HDFS-13300) Ozone: Remove DatanodeID dependency from HDSL and Ozone
[ https://issues.apache.org/jira/browse/HDFS-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-13300: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-7240 Status: Resolved (was: Patch Available) [~nandakumar131] Thanks for getting this done. I really appreciate how much work has gone into this. [~elek] and [~xyao] Thanks for reviewing and testing this patch. I have committed this to the feature branch. > Ozone: Remove DatanodeID dependency from HDSL and Ozone > > > Key: HDFS-13300 > URL: https://issues.apache.org/jira/browse/HDFS-13300 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Nanda kumar >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13300-HDFS-7240.000.patch, > HDFS-13300-HDFS-7240.001.patch, HDFS-13300-HDFS-7240.002.patch, > HDFS-13300-HDFS-7240.003.patch, HDFS-13300-HDFS-7240.004.patch > > > DatanodeID has been modified to add HDSL/Ozone related information > previously. This jira is to remove DatanodeID dependency from HDSL/Ozone to > make it truly pluggable without having the need to modify DatanodeID. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416173#comment-16416173 ] Giovanni Matteo Fumarola edited comment on HDFS-13347 at 3/27/18 8:21 PM: -- Thanks [~elgoiri] for the patch. Few comments: *RouterRpcServer* Why did you change to false? *NamenodeBeanMetrics* TIME_OUT should be configurable. was (Author: giovanni.fumarola): Thanks [~elgoiri] for the path. Few comments: *RouterRpcServer* Why did you change to false? *NamenodeBeanMetrics* TIME_OUT should be configurable. > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/browse/HDFS-13347 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13347.000.patch, HDFS-13347.001.patch, > HDFS-13347.002.patch, HDFS-13347.003.patch > > > Getting the datanode reports is an expensive operation and can be executed > very frequently by the UI and watchdogs. We should cache this information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416173#comment-16416173 ] Giovanni Matteo Fumarola commented on HDFS-13347: - Thanks [~elgoiri] for the path. Few comments: *RouterRpcServer* Why did you change to false? *NamenodeBeanMetrics* TIME_OUT should be configurable. > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/browse/HDFS-13347 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13347.000.patch, HDFS-13347.001.patch, > HDFS-13347.002.patch, HDFS-13347.003.patch > > > Getting the datanode reports is an expensive operation and can be executed > very frequently by the UI and watchdogs. We should cache this information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13341) Ozone: Move ozone specific ServiceRuntimeInfo utility to the framework
[ https://issues.apache.org/jira/browse/HDFS-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416169#comment-16416169 ] genericqa commented on HDFS-13341: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 27s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 33s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 4s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 15s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 28s{color} | {color:red} server in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 23s{color} | {color:red} container-service in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 24s{color} | {color:red} framework in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 25s{color} | {color:red} server-scm in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 23s{color} | {color:red} client in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 23s{color} | {color:red} common in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 24s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 25s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 16s{color} | {color:red} server in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 15s{color} | {color:red} container-service in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 15s{color} | {color:red} framework in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 15s{color} | {color:red} server-scm in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 20s{color} | {color:red} client in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 20s{color} | {color:red} common in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 20s{color} | {color:red} ozone-manager in HDFS-7240 failed. 
{color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 23s{color} | {color:red} server in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s{color} | {color:red} container-service in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 23s{color} | {color:red} framework in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s{color} | {color:red} server-scm in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 19s{color} | {color:red} client in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 16s{color} | {color:red} common in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 16s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 9s{color} | {color:red} server in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 8s{color} | {
[jira] [Updated] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13347: --- Attachment: HDFS-13347.002.patch > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/browse/HDFS-13347 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13347.000.patch, HDFS-13347.001.patch, > HDFS-13347.002.patch, HDFS-13347.003.patch > > > Getting the datanode reports is an expensive operation and can be executed > very frequently by the UI and watchdogs. We should cache this information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13347: --- Attachment: (was: HDFS-13347.002.patch) > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/browse/HDFS-13347 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13347.000.patch, HDFS-13347.001.patch, > HDFS-13347.002.patch, HDFS-13347.003.patch > > > Getting the datanode reports is an expensive operation and can be executed > very frequently by the UI and watchdogs. We should cache this information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13347: --- Attachment: HDFS-13347.003.patch > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/browse/HDFS-13347 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13347.000.patch, HDFS-13347.001.patch, > HDFS-13347.002.patch, HDFS-13347.003.patch > > > Getting the datanode reports is an expensive operation and can be executed > very frequently by the UI and watchdogs. We should cache this information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10419) Building HDFS on top of new storage layer (HDSL)
[ https://issues.apache.org/jira/browse/HDFS-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416156#comment-16416156 ] Arpit Agarwal commented on HDFS-10419: -- HDDS risks confusion with Hard Disk Drives: https://www.google.com/search?q=hdds (I like it personally though). > Building HDFS on top of new storage layer (HDSL) > > > Key: HDFS-10419 > URL: https://issues.apache.org/jira/browse/HDFS-10419 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jing Zhao >Assignee: Jing Zhao >Priority: Major > Attachments: Evolving NN using new block-container layer.pdf > > > In HDFS-7240, Ozone defines storage containers to store both the data and the > metadata. The storage container layer provides an object storage interface > and aims to manage data/metadata in a distributed manner. More details about > storage containers can be found in the design doc in HDFS-7240. > HDFS can adopt the storage containers to store and manage blocks. The general > idea is: > # Each block can be treated as an object and the block ID is the object's key. > # Blocks will still be stored in DataNodes but as objects in storage > containers. > # The block management work can be separated out of the NameNode and will be > handled by the storage container layer in a more distributed way. The > NameNode will only manage the namespace (i.e., files and directories). > # For each file, the NameNode only needs to record a list of block IDs which > are used as keys to obtain real data from storage containers. > # A new DFSClient implementation talks to both NameNode and the storage > container layer to read/write. > HDFS, especially the NameNode, can get much better scalability from this > design. Currently the NameNode's heaviest workload comes from the block > management, which includes maintaining the block-DataNode mapping, receiving > full/incremental block reports, tracking block states (under/over/mis- > replicated), and taking part in every write pipeline protocol to guarantee > data consistency. These tasks bring a high memory footprint and make the > NameNode suffer from GC. HDFS-5477 already proposes to convert the > BlockManager into a service. If we can build HDFS on top of the storage > container layer, we not only separate out the BlockManager from the NameNode, > but also replace it with a new distributed management scheme. > The storage container work is currently in progress in HDFS-7240, and the > work proposed here is still in an experimental/exploratory stage. We can do > this experiment in a feature branch so that people with interest can be > involved. > A design doc will be uploaded later explaining more details. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
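To make the proposed split concrete: the namespace-only NameNode would keep little more than a list of block IDs per file, and all data lookups would go through the container layer. The types below are purely conceptual; nothing here is actual HDFS or HDSL code.
{code}
import java.io.IOException;
import java.util.List;

// Conceptual sketch: NameNode keeps only path -> block IDs; the
// storage-container layer resolves a block ID to its data.
class FileRecord {
  final String path;
  final List<Long> blockIds; // keys into the storage-container layer

  FileRecord(String path, List<Long> blockIds) {
    this.path = path;
    this.blockIds = blockIds;
  }
}

interface StorageContainerLayer {
  /** Locate and read the object stored under the given block ID. */
  byte[] readBlock(long blockId) throws IOException;
}
{code}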
[jira] [Commented] (HDFS-13297) Add config validation util
[ https://issues.apache.org/jira/browse/HDFS-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416153#comment-16416153 ] Ajay Kumar commented on HDFS-13297: --- Attaching a first pass at the patch. Will submit once we have [HADOOP-15295] in the HDFS-7240 branch. cc: [~anu], [~xyao] > Add config validation util > -- > > Key: HDFS-13297 > URL: https://issues.apache.org/jira/browse/HDFS-13297 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: HDFS-7240 >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13297-HDFS-7240.000.patch > > > Add a generic util to validate configuration based on TAGS. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
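As a rough illustration of what a tag-based validator could look like (the actual HDFS-13297 patch may be structured quite differently; the rule shape and tag names below are made up):
{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TagConfigValidator {
  /** One rule: a property key, the tag it belongs to, and a regex its value must match. */
  private static final class Rule {
    final String key; final String tag; final String valuePattern;
    Rule(String key, String tag, String valuePattern) {
      this.key = key; this.tag = tag; this.valuePattern = valuePattern;
    }
  }

  private final List<Rule> rules = new ArrayList<>();

  public void addRule(String key, String tag, String valuePattern) {
    rules.add(new Rule(key, tag, valuePattern));
  }

  /** Check every property carrying the given tag; return human-readable violations. */
  public List<String> validate(Map<String, String> conf, String tag) {
    List<String> errors = new ArrayList<>();
    for (Rule r : rules) {
      if (!r.tag.equals(tag)) {
        continue;
      }
      String value = conf.get(r.key);
      if (value == null || !value.matches(r.valuePattern)) {
        errors.add(r.key + "=" + value + " violates tag '" + tag + "'");
      }
    }
    return errors;
  }

  public static void main(String[] args) {
    TagConfigValidator v = new TagConfigValidator();
    v.addRule("dfs.replication", "REQUIRED", "\\d+");   // hypothetical tag name
    Map<String, String> conf = new HashMap<>();
    conf.put("dfs.replication", "three");               // invalid: not numeric
    System.out.println(v.validate(conf, "REQUIRED"));
  }
}
{code}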
[jira] [Commented] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416152#comment-16416152 ] genericqa commented on HDFS-13347: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 56s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 27s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 2s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 68m 12s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13347 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916440/HDFS-13347.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux fee661de471f 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 285bbaa | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23684/testReport/ | | Max. process+thread count | 940 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23684/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/brow
[jira] [Updated] (HDFS-13297) Add config validation util
[ https://issues.apache.org/jira/browse/HDFS-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-13297: -- Attachment: HDFS-13297-HDFS-7240.000.patch > Add config validation util > -- > > Key: HDFS-13297 > URL: https://issues.apache.org/jira/browse/HDFS-13297 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: HDFS-7240 >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13297-HDFS-7240.000.patch > > > Add a generic util to validate configuration based on TAGS. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13222) Update getBlocks method to take minBlockSize in RPC calls
[ https://issues.apache.org/jira/browse/HDFS-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416088#comment-16416088 ] Konstantin Shvachko edited comment on HDFS-13222 at 3/27/18 6:51 PM: - Yes, you are right, the Balancer will be discarding blocks smaller than {{minBlockSize}}, but that could be a lot of unnecessary blocks on a real cluster. So the Balancer will have to send more {{getBlocks()}} requests to the NameNode, unnecessarily increasing the load on it. This will look very confusing. I think we should fix it. was (Author: shv): Yes you are right, the Balancer will be discarding blocks less than {{minBlockSize}}, but it could be a lot of unnecessary blocks on a real cluster. So the Balancer will have to send more {{getBlocks())) requests to the NameNode increasing unnecessarily the load on it. This will look very confusing. I think we should fix it. > Update getBlocks method to take minBlockSize in RPC calls > - > > Key: HDFS-13222 > URL: https://issues.apache.org/jira/browse/HDFS-13222 > Project: Hadoop HDFS > Issue Type: Improvement > Components: balancer & mover >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Fix For: 3.1.0 > > Attachments: HDFS-13222.00.patch, HDFS-13222.01.patch, > HDFS-13222.02.patch > > > > Making getBlocks use the balancer parameter was done in HDFS-9412. > > Pass the Balancer conf value from the Balancer to the NN via getBlocks in each RPC, > as [~szetszwo] suggested. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13222) Update getBlocks method to take minBlockSize in RPC calls
[ https://issues.apache.org/jira/browse/HDFS-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416088#comment-16416088 ] Konstantin Shvachko commented on HDFS-13222: Yes, you are right, the Balancer will be discarding blocks smaller than {{minBlockSize}}, but that could be a lot of unnecessary blocks on a real cluster. So the Balancer will have to send more {{getBlocks()}} requests to the NameNode, unnecessarily increasing the load on it. This will look very confusing. I think we should fix it. > Update getBlocks method to take minBlockSize in RPC calls > - > > Key: HDFS-13222 > URL: https://issues.apache.org/jira/browse/HDFS-13222 > Project: Hadoop HDFS > Issue Type: Improvement > Components: balancer & mover >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Fix For: 3.1.0 > > Attachments: HDFS-13222.00.patch, HDFS-13222.01.patch, > HDFS-13222.02.patch > > > > Making getBlocks use the balancer parameter was done in HDFS-9412. > > Pass the Balancer conf value from the Balancer to the NN via getBlocks in each RPC, > as [~szetszwo] suggested. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
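The fix being argued for is simple to sketch: filter by size on the NameNode side so undersized blocks never cross the wire. The types below are simplified stand-ins, not the real getBlocks implementation (which works on BlocksWithLocations):
{code:java}
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for illustration; not the actual NameNode code.
class GetBlocksSketch {
  static class Block {
    final long id;
    final long numBytes;
    Block(long id, long numBytes) { this.id = id; this.numBytes = numBytes; }
  }

  /** Return blocks of at least minBlockSize, so the Balancer never sees the rest. */
  static List<Block> getBlocks(List<Block> blocksOnDatanode, int maxBlocks,
                               long minBlockSize) {
    List<Block> result = new ArrayList<>();
    for (Block b : blocksOnDatanode) {
      if (b.numBytes < minBlockSize) {
        continue;                  // filtered at the NN, not discarded by the client
      }
      result.add(b);
      if (result.size() >= maxBlocks) {
        break;
      }
    }
    return result;
  }
}
{code}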
[jira] [Commented] (HDFS-13338) Update BUILDING.txt for building native libraries
[ https://issues.apache.org/jira/browse/HDFS-13338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416065#comment-16416065 ] Hudson commented on HDFS-13338: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13886 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13886/]) HDFS-13338. Update BUILDING.txt for building native libraries. (james.clampffer: rev 285bbaa4329f7586b9c404ce2e557d36041099c0) * (edit) BUILDING.txt > Update BUILDING.txt for building native libraries > - > > Key: HDFS-13338 > URL: https://issues.apache.org/jira/browse/HDFS-13338 > Project: Hadoop HDFS > Issue Type: Task > Components: build, documentation, native >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Critical > Attachments: HDFS-13338.1.patch > > > mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package > -Pdist,native -DskipTests -Dtar > {noformat} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project > hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 1 > [ERROR] around Ant part ...<exec dir="/.../hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target" > executable="cmake">... @ 5:119 in > /.../hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/antrun/build-main.xml > [ERROR] -> [Help 1] > org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute > goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project > hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 1 > around Ant part ...<exec dir="/root/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target" > executable="cmake">... @ 5:119 in > /root/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/antrun/build-main.xml > at org.apache.maven.lifecycle.internal.MojoExecutor.execute > (MojoExecutor.java:213) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13347: --- Attachment: HDFS-13347.002.patch > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/browse/HDFS-13347 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13347.000.patch, HDFS-13347.001.patch, > HDFS-13347.002.patch > > > Getting the datanode reports is an expensive operation and can be executed > very frequently by the UI and watchdogs. We should cache this information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13341) Ozone: Move ozone specific ServiceRuntimeInfo utility to the framework
[ https://issues.apache.org/jira/browse/HDFS-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16415977#comment-16415977 ] Xiaoyu Yao edited comment on HDFS-13341 at 3/27/18 6:30 PM: Thanks [~elek] for working on this. Patch looks good to me. I just have two minor questions: 1. OzoneHttpServer.java NIT: Can we rename the class to HttpServer as it is under the hdsl package? 2. Confirm whether the Jenkins failures are related or not. I triggered another Jenkins run [here|https://builds.apache.org/job/PreCommit-HDFS-Build/23683/] to see whether the build failures are an infra issue. The patch builds successfully in my local environment. was (Author: xyao): Thanks [~elek] for working on this. Patch looks good to me. I just have two minor questions: 1. OzoneHttpServer.java NIT: Can we rename the class to HttpServer as it is under the hdsl package? 2. Confirm if Jenkins failures are related or not. I will trigger another Jenkins run to see if this is infra issue or not. > Ozone: Move ozone specific ServiceRuntimeInfo utility to the framework > -- > > Key: HDFS-13341 > URL: https://issues.apache.org/jira/browse/HDFS-13341 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Attachments: HDFS-13341-HDFS-7240.001.patch > > > ServiceRuntimeInfo is a generic interface to provide common information via > JMX beans (such as build version, compile info, started time). > Currently it is used only by KSM/SCM; I suggest moving it to the > hadoop-hdsl/framework project from hadoop-commons. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
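For readers unfamiliar with the interface, a bean along these lines is what is being moved. The method set here is an assumption based only on the description above (build version, compile info, started time), not the actual interface on the branch:
{code:java}
// Assumed method set for illustration; see the HDFS-7240 branch for the real interface.
public interface ServiceRuntimeInfo {
  String getVersion();
  String getCompileInfo();
  long getStartedTimeInMillis();
}

class ServiceRuntimeInfoImpl implements ServiceRuntimeInfo {
  private final long startedTime = System.currentTimeMillis();

  @Override
  public String getVersion() {
    return "3.2.0-SNAPSHOT";                    // placeholder value
  }

  @Override
  public String getCompileInfo() {
    return "compiled by example on 2018-03-27"; // placeholder value
  }

  @Override
  public long getStartedTimeInMillis() {
    return startedTime;
  }
}
{code}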
[jira] [Commented] (HDFS-10419) Building HDFS on top of new storage layer (HDSL)
[ https://issues.apache.org/jira/browse/HDFS-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416047#comment-16416047 ] Jitendra Nath Pandey commented on HDFS-10419: - I am fine with either of (3) and (4). bq. Do you plan to keep name Ozone? For object store? Yes, we intend to call the object store Ozone, and we plan to rename KSM to Ozone-Master or ozone-manager. Please let us know your thoughts on that. > Building HDFS on top of new storage layer (HDSL) > > > Key: HDFS-10419 > URL: https://issues.apache.org/jira/browse/HDFS-10419 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jing Zhao >Assignee: Jing Zhao >Priority: Major > Attachments: Evolving NN using new block-container layer.pdf > > > In HDFS-7240, Ozone defines storage containers to store both the data and the > metadata. The storage container layer provides an object storage interface > and aims to manage data/metadata in a distributed manner. More details about > storage containers can be found in the design doc in HDFS-7240. > HDFS can adopt the storage containers to store and manage blocks. The general > idea is: > # Each block can be treated as an object, and the block ID is the object's key. > # Blocks will still be stored in DataNodes, but as objects in storage > containers. > # The block management work can be separated out of the NameNode and will be > handled by the storage container layer in a more distributed way. The > NameNode will only manage the namespace (i.e., files and directories). > # For each file, the NameNode only needs to record a list of block IDs, which > are used as keys to obtain the real data from storage containers. > # A new DFSClient implementation talks to both the NameNode and the storage > container layer to read/write. > HDFS, especially the NameNode, can get much better scalability from this > design. Currently the NameNode's heaviest workload comes from block > management, which includes maintaining the block-DataNode mapping, receiving > full/incremental block reports, tracking block states (under/over/mis-replicated), and joining every write pipeline protocol to guarantee > data consistency. These tasks bring a high memory footprint and make the NameNode > suffer from GC. HDFS-5477 already proposes converting the BlockManager into a > service. If we can build HDFS on top of the storage container layer, we not > only separate the BlockManager out of the NameNode, but also replace it > with a new distributed management scheme. > The storage container work is currently in progress in HDFS-7240, and the > work proposed here is still in an experimental/exploratory stage. We can do > this experiment in a feature branch so that interested people can get > involved. > A design doc will be uploaded later explaining more details. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12749) DN may not send block report to NN after NN restart
[ https://issues.apache.org/jira/browse/HDFS-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416048#comment-16416048 ] Kihwal Lee commented on HDFS-12749: --- The latest patch is a step in the right direction. Some of the exceptions thrown by the namenode to the datanode are fatal and terminal, i.e. retrying will never work. The catch block of {{offerService()}} takes this into account. Simply throwing one in {{register()}} will get ignored in {{BPServiceActor#processCommand()}}. {{shouldServiceRun}} needs to be set to false in order to stop the actor thread. {code:java} } catch(RemoteException re) { String reClass = re.getClassName(); if (UnregisteredNodeException.class.getName().equals(reClass) || DisallowedDatanodeException.class.getName().equals(reClass) || IncorrectVersionException.class.getName().equals(reClass)) { LOG.warn(this + " is shutting down", re); shouldServiceRun = false; return; } LOG.warn("RemoteException in offerService", re); sleepAfterException(); } catch (IOException e) { LOG.warn("IOException in offerService", e); sleepAfterException(); } {code} You can keep your change in {{register()}} and simply add the same logic to {{processCommand()}}'s catch block, i.e. crack open the {{RemoteException}} and stop the actor thread if it is one of the terminal exceptions. I know it's hard, but it would be nice if you could add a test case. > DN may not send block report to NN after NN restart > --- > > Key: HDFS-12749 > URL: https://issues.apache.org/jira/browse/HDFS-12749 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.1, 2.8.3, 2.7.5, 3.0.0, 2.9.1 >Reporter: TanYuxin >Assignee: He Xiaoqiao >Priority: Major > Attachments: HDFS-12749-branch-2.7.002.patch, > HDFS-12749-trunk.003.patch, HDFS-12749.001.patch > > > Our cluster now has thousands of DNs and millions of files and blocks. When the NN > restarts, its load is very high. > After the NN restarts, the DN will call the BPServiceActor#reRegister method to register. > But the register RPC will get an IOException since the NN is busy dealing with block > reports. The exception is caught at BPServiceActor#processCommand. > The caught IOException is: > {code:java} > WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Error processing > datanode Command > java.io.IOException: Failed on local exception: java.io.IOException: > java.net.SocketTimeoutException: 6 millis timeout while waiting for > channel to be ready for read. 
ch : java.nio.channels.SocketChannel[connected > local=/DataNode_IP:Port remote=NameNode_Host/IP:Port]; Host Details : local > host is: "DataNode_Host/Datanode_IP"; destination host is: > "NameNode_Host":Port; > at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:773) > at org.apache.hadoop.ipc.Client.call(Client.java:1474) > at org.apache.hadoop.ipc.Client.call(Client.java:1407) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) > at com.sun.proxy.$Proxy13.registerDatanode(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.registerDatanode(DatanodeProtocolClientSideTranslatorPB.java:126) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.register(BPServiceActor.java:793) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.reRegister(BPServiceActor.java:926) > at > org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:604) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:898) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:711) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:864) > at java.lang.Thread.run(Thread.java:745) > {code} > The uncaught IOException breaks BPServiceActor#register, and the block > report cannot be sent immediately. > {code} > /** >* Register one bp with the corresponding NameNode >* >* The bpDatanode needs to register with the namenode on startup in order >* 1) to report which storage it is serving now and >* 2) to receive a registrationID >* >* issued by the namenode to recognize registered datanodes. >* >* @param nsInfo current NamespaceInfo >* @see FSNamesystem#registerDatanode(DatanodeRegistration) >* @throws IOException >*/ > void register(NamespaceInfo nsInfo) throws IOException { > // The handshake() phase loaded the block pool storage > // of
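Kihwal's suggestion boils down to the following shape. This is a self-contained sketch, not the committed fix, and the fully-qualified class names in it are assumptions:
{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch of the suggested processCommand() handling; not the committed patch.
class TerminalExceptionSketch {
  private volatile boolean shouldServiceRun = true;

  // Exceptions for which retry will never work; FQCNs assumed for illustration.
  private final Set<String> terminal = new HashSet<>(Arrays.asList(
      "org.apache.hadoop.hdfs.protocol.UnregisteredNodeException",
      "org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException",
      "org.apache.hadoop.hdfs.server.common.IncorrectVersionException"));

  /** Returns false when the actor thread should stop instead of retrying. */
  boolean onRemoteException(String remoteExceptionClassName) {
    if (terminal.contains(remoteExceptionClassName)) {
      shouldServiceRun = false;   // terminal: stop the actor thread
      return false;
    }
    return true;                  // transient: log, sleep, and retry
  }

  boolean isRunning() {
    return shouldServiceRun;
  }
}
{code}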
[jira] [Commented] (HDFS-13314) NameNode should optionally exit if it detects FsImage corruption
[ https://issues.apache.org/jira/browse/HDFS-13314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416046#comment-16416046 ] Rushabh S Shah commented on HDFS-13314: --- bq. perhaps we could do some ugly fault injection to create dangling references and corrupted diff lists in the image. I am not that concerned with the test case showing how the image gets corrupted. I would like to see a test case verifying the following two scenarios: * if numErrors == 0, then the namenode should not exit. * if numErrors > 0, then the namenode should exit. Quickly going through the ExitUtil class, I see you can use {{ExitUtil#disableSystemExit}}. This will save the exception somewhere in the ExitUtil class. {{ExitUtil#terminate}} will throw an {{ExitException}}. Hope this helps. > NameNode should optionally exit if it detects FsImage corruption > > > Key: HDFS-13314 > URL: https://issues.apache.org/jira/browse/HDFS-13314 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Major > Attachments: HDFS-13314.01.patch, HDFS-13314.02.patch, > HDFS-13314.03.patch, HDFS-13314.04.patch, HDFS-13314.05.patch > > > The NameNode should optionally exit after writing an FsImage if it detects > the following kinds of corruption: > # INodeReference pointing to a non-existent INode > # Duplicate entries in a snapshot's deleted diff list. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
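Following that pointer, a test for the exit path could look roughly like this. {{ExitUtil#disableSystemExit}}, {{ExitUtil#terminate}}, and {{ExitException}} are real Hadoop utilities; the corruption setup itself is elided since that is the hard part discussed above:
{code:java}
import org.apache.hadoop.util.ExitUtil;
import org.junit.Assert;
import org.junit.Test;

public class TestExitOnImageCorruption {
  @Test
  public void testTerminateIsCapturedWhenExitDisabled() {
    ExitUtil.disableSystemExit();        // terminate() now throws instead of exiting
    try {
      // Stand-in for the NameNode's exit call when numErrors > 0.
      ExitUtil.terminate(1, "simulated fsimage corruption");
      Assert.fail("terminate() should have thrown ExitException");
    } catch (ExitUtil.ExitException ee) {
      Assert.assertEquals(1, ee.status);  // the exit path was taken with status 1
    }
  }
}
{code}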
[jira] [Updated] (HDFS-13340) Ozone: Fix false positive RAT warning when project built without hds/cblock
[ https://issues.apache.org/jira/browse/HDFS-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-13340: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-7240 Status: Resolved (was: Patch Available) Thanks [~elek] for the contribution. I've committed the patch to the feature branch. > Ozone: Fix false positive RAT warning when project built without hds/cblock > --- > > Key: HDFS-13340 > URL: https://issues.apache.org/jira/browse/HDFS-13340 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13340-HDFS-7240.001.patch > > > First of all: all the licence headers are handled well on this branch. > Unfortunately, Maven doesn't know it. If the project is built *without* -P > hdsl, the RAT exclude rules in the hdsl/cblock/ozone projects are not applied, as > these projects are not processed as Maven projects; they are handled as static files. > The solution is: > 1. Instead of a proper exclude, I added the licence headers to some test files > 2. I added an additional exclude to the root pom.xml -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13338) Update BUILDING.txt for building native libraries
[ https://issues.apache.org/jira/browse/HDFS-13338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Clampffer updated HDFS-13338: --- Resolution: Fixed Status: Resolved (was: Patch Available) > Update BUILDING.txt for building native libraries > - > > Key: HDFS-13338 > URL: https://issues.apache.org/jira/browse/HDFS-13338 > Project: Hadoop HDFS > Issue Type: Task > Components: build, documentation, native >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Critical > Attachments: HDFS-13338.1.patch > > > mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package > -Pdist,native -DskipTests -Dtar > {noformat} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project > hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 1 > [ERROR] around Ant part ...<exec dir="/.../hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target" > executable="cmake">... @ 5:119 in > /.../hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/antrun/build-main.xml > [ERROR] -> [Help 1] > org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute > goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project > hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 1 > around Ant part ...<exec dir="/root/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target" > executable="cmake">... @ 5:119 in > /root/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/antrun/build-main.xml > at org.apache.maven.lifecycle.internal.MojoExecutor.execute > (MojoExecutor.java:213) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13338) Update BUILDING.txt for building native libraries
[ https://issues.apache.org/jira/browse/HDFS-13338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416043#comment-16416043 ] James Clampffer commented on HDFS-13338: [~tasanuma0829] I committed this to trunk. Thanks for your contribution! > Update BUILDING.txt for building native libraries > - > > Key: HDFS-13338 > URL: https://issues.apache.org/jira/browse/HDFS-13338 > Project: Hadoop HDFS > Issue Type: Task > Components: build, documentation, native >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Critical > Attachments: HDFS-13338.1.patch > > > mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package > -Pdist,native -DskipTests -Dtar > {noformat} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project > hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 1 > [ERROR] around Ant part ...<exec dir="/.../hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target" > executable="cmake">... @ 5:119 in > /.../hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/antrun/build-main.xml > [ERROR] -> [Help 1] > org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute > goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project > hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 1 > around Ant part ...<exec dir="/root/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target" > executable="cmake">... @ 5:119 in > /root/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/antrun/build-main.xml > at org.apache.maven.lifecycle.internal.MojoExecutor.execute > (MojoExecutor.java:213) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-13348) Ozone: Update IP and hostname in Datanode from SCM's response to the register call
[ https://issues.apache.org/jira/browse/HDFS-13348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nanda kumar reassigned HDFS-13348: -- Assignee: Shashikant Banerjee (was: Nanda kumar) > Ozone: Update IP and hostname in Datanode from SCM's response to the register > call > -- > > Key: HDFS-13348 > URL: https://issues.apache.org/jira/browse/HDFS-13348 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Shashikant Banerjee >Priority: Major > > Whenever a Datanode registers with SCM, the SCM resolves the IP address and > hostname of the Datanode from the RPC call. This IP address and hostname > should be sent back to the Datanode in the response to the register call, and the > Datanode has to update the values from the response into its > {{DatanodeDetails}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
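A minimal sketch of the proposed flow, with simplified stand-in types (Ozone's real DatanodeDetails and register-response messages differ):
{code:java}
// Simplified stand-ins; Ozone's actual protobuf messages differ.
class RegisterResponse {
  final String ipAddress;   // resolved by SCM from the RPC connection
  final String hostName;

  RegisterResponse(String ipAddress, String hostName) {
    this.ipAddress = ipAddress;
    this.hostName = hostName;
  }
}

class DatanodeDetailsSketch {
  private String ipAddress;
  private String hostName;

  /** Apply what SCM resolved back onto this datanode's view of itself. */
  void updateFrom(RegisterResponse response) {
    this.ipAddress = response.ipAddress;
    this.hostName = response.hostName;
  }
}
{code}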
[jira] [Commented] (HDFS-13351) Revert HDFS-11156 from branch-2/branch-2.8
[ https://issues.apache.org/jira/browse/HDFS-13351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416028#comment-16416028 ] Andrew Wang commented on HDFS-13351: +1 pending > Revert HDFS-11156 from branch-2/branch-2.8 > -- > > Key: HDFS-13351 > URL: https://issues.apache.org/jira/browse/HDFS-13351 > Project: Hadoop HDFS > Issue Type: Task > Components: webhdfs >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Major > Attachments: HDFS-13351-branch-2.001.patch > > > Per the discussion in HDFS-11156, let's revert the change from branch-2 and > branch-2.8. The new patch can be tracked in HDFS-12459. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org