[ https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943281#comment-16943281 ]
Hadoop QA commented on HDFS-14216:
----------------------------------

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 37s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test file. |
|| || || || branch-3.1 Compile Tests ||
| +1 | mvninstall | 22m 25s | branch-3.1 passed |
| +1 | compile | 0m 57s | branch-3.1 passed |
| +1 | checkstyle | 0m 45s | branch-3.1 passed |
| +1 | mvnsite | 1m 5s | branch-3.1 passed |
| +1 | shadedclient | 12m 32s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 52s | branch-3.1 passed |
| +1 | javadoc | 0m 54s | branch-3.1 passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 59s | the patch passed |
| +1 | compile | 0m 51s | the patch passed |
| +1 | javac | 0m 51s | the patch passed |
| +1 | checkstyle | 0m 37s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 155 unchanged - 1 fixed = 155 total (was 156) |
| +1 | mvnsite | 0m 56s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 42s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 3s | the patch passed |
| +1 | javadoc | 0m 50s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 88m 0s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 34s | The patch does not generate ASF License warnings. |
| | | 147m 45s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
| | hadoop.hdfs.TestMultipleNNPortQOP |
| | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
| | hadoop.hdfs.TestLeaseRecovery2 |
| | hadoop.hdfs.server.diskbalancer.TestDiskBalancer |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:080e9d0f9b3 |
| JIRA Issue | HDFS-14216 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12982031/HDFS-14216.branch-3.1.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 35488e4a2808 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.1 / ab7ecd6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/28008/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/28008/testReport/ |
| Max. process+thread count | 3848 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/28008/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |

This message was automatically generated.

> NullPointerException happens in NamenodeWebHdfs
> -----------------------------------------------
>
>                 Key: HDFS-14216
>                 URL: https://issues.apache.org/jira/browse/HDFS-14216
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: lujie
>            Assignee: lujie
>            Priority: Critical
>             Fix For: 3.3.0, 3.2.1
>
>         Attachments: HDFS-14216.branch-3.1.patch, HDFS-14216_1.patch,
>         HDFS-14216_2.patch, HDFS-14216_3.patch, HDFS-14216_4.patch,
>         HDFS-14216_5.patch, HDFS-14216_6.patch, hadoop-hires-namenode-hadoop11.log
>
>
> Workload:
> {code:java}
> curl -i -X PUT -T $HOMEPARH/test.txt "http://hadoop1:9870/webhdfs/v1/input?op=CREATE&excludedatanodes=hadoop2"
> {code}
> The method:
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(String excludeDatanodes) {
>   HashSet<Node> excludes = new HashSet<Node>();
>   if (excludeDatanodes != null) {
>     for (String host : StringUtils
>         .getTrimmedStringCollection(excludeDatanodes)) {
>       int idx = host.indexOf(":");
>       if (idx != -1) {
>         excludes.add(bm.getDatanodeManager().getDatanodeByXferAddr(
>             host.substring(0, idx), Integer.parseInt(host.substring(idx + 1))));
>       } else {
>         excludes.add(bm.getDatanodeManager().getDatanodeByHost(host)); // line 280
>       }
>     }
>   }
> }
> {code}
> When the datanode (e.g. hadoop2) is wiped just before line 280, or a wrong DN name is given, bm.getDatanodeManager().getDatanodeByHost(host) returns null, so *_excludes_* contains a null entry.
> When *_excludes_* is used later, an NPE occurs:
> {code:java}
> java.lang.NullPointerException
>     at org.apache.hadoop.net.NodeBase.getPath(NodeBase.java:113)
>     at org.apache.hadoop.net.NetworkTopology.countNumOfAvailableNodes(NetworkTopology.java:672)
>     at org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:533)
>     at org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:491)
>     at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(NamenodeWebHdfsMethods.java:323)
>     at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.redirectURI(NamenodeWebHdfsMethods.java:384)
>     at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.put(NamenodeWebHdfsMethods.java:652)
>     at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:600)
>     at org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:597)
>     at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:73)
>     at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:30)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2830)
> {code}
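For context, here is a minimal sketch of the kind of null guard that would keep a null entry out of *_excludes_*. It reuses the names from the snippet above (bm, excludeDatanodes, getDatanodeByXferAddr, getDatanodeByHost) and is only an illustration of the idea, not necessarily what the attached patches do:

{code:java}
// Sketch only: skip hosts that do not resolve to a registered datanode
// instead of adding null to the exclude set.
HashSet<Node> excludes = new HashSet<Node>();
if (excludeDatanodes != null) {
  for (String host : StringUtils.getTrimmedStringCollection(excludeDatanodes)) {
    int idx = host.indexOf(":");
    final Node excludedNode;
    if (idx != -1) {
      // host:port form, e.g. "hadoop2:9866"
      excludedNode = bm.getDatanodeManager().getDatanodeByXferAddr(
          host.substring(0, idx), Integer.parseInt(host.substring(idx + 1)));
    } else {
      // hostname-only form, e.g. "hadoop2"
      excludedNode = bm.getDatanodeManager().getDatanodeByHost(host);
    }
    if (excludedNode != null) {
      excludes.add(excludedNode);
    }
    // else: the datanode was just removed or the name is wrong; ignore it
    // rather than let a null reach NetworkTopology.chooseRandom().
  }
}
{code}

An equivalent alternative is to leave the loop as-is and filter nulls out of *_excludes_* before it is handed to NetworkTopology.chooseRandom(), since NodeBase.getPath() dereferences every excluded node.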