[ 
https://issues.apache.org/jira/browse/HDFS-17110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17765301#comment-17765301
 ] 

ASF GitHub Bot commented on HDFS-17110:
---------------------------------------

teamconfx opened a new pull request, #6077:
URL: https://github.com/apache/hadoop/pull/6077

   ### Description of PR
   https://issues.apache.org/jira/browse/HDFS-17110
   This PR adds a null check on `cluster` so that a `NullPointerException` thrown from the cleanup path no longer hides the actual exception.
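   A minimal sketch of the shape of the fix in the test's cleanup path (the actual change is in the diff; `conf` here stands in for whatever `Configuration` the test builds):
   ```java
   MiniDFSCluster cluster = null;
   try {
     // Hypothetical test body: MiniDFSCluster construction can throw before
     // 'cluster' is ever assigned (e.g. on invalid replication settings).
     cluster = new MiniDFSCluster.Builder(conf).build();
     // ... assertions against the running cluster ...
   } finally {
     // The guard: only shut the cluster down if it actually started, so a
     // startup failure is reported instead of a NullPointerException.
     if (cluster != null) {
       cluster.shutdown();
     }
   }
   ```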
   
   ### How was this patch tested?
   1. Set dfs.namenode.replication.min=12396 (a programmatic sketch of this step follows the list).
   2. Run org.apache.hadoop.hdfs.server.namenode.ha.TestHarFileSystemWithHA#testHarUriWithHaUriWithNoPort.
   
   With the null check in place, the test reports the underlying configuration error "Unexpected configuration parameters: dfs.namenode.replication.min = 12396 > dfs.replication.max = 512" instead of a NullPointerException.
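   A sketch of step 1 done programmatically (assuming the standard `HdfsConfiguration`/`DFSConfigKeys` classes; the reproduce.sh attached to the JIRA is the authoritative reproduction):
   ```java
   // Inject the misconfiguration: DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY
   // resolves to "dfs.namenode.replication.min", and 12396 exceeds the default
   // dfs.replication.max of 512.
   Configuration conf = new HdfsConfiguration();
   conf.setInt(DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY, 12396);
   // Starting a MiniDFSCluster with this conf fails with "Unexpected configuration
   // parameters: dfs.namenode.replication.min = 12396 > dfs.replication.max = 512",
   // which the unguarded cluster.shutdown() previously turned into an NPE.
   ```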
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




>  Null Pointer Exception when running 
> TestHarFileSystemWithHA#testHarUriWithHaUriWithNoPort
> ------------------------------------------------------------------------------------------
>
>                 Key: HDFS-17110
>                 URL: https://issues.apache.org/jira/browse/HDFS-17110
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: ConfX
>            Priority: Critical
>         Attachments: reproduce.sh
>
>
> h2. What happened
> After setting {{dfs.namenode.replication.min=12396}}, running the test
> {{org.apache.hadoop.hdfs.server.namenode.ha.TestHarFileSystemWithHA#testHarUriWithHaUriWithNoPort}}
> results in a {{NullPointerException}}.
> h2. Where's the bug
> In the test
> {{org.apache.hadoop.hdfs.server.namenode.ha.TestHarFileSystemWithHA#testHarUriWithHaUriWithNoPort}}:
> {noformat}
>     } finally {
>       cluster.shutdown();
>     }{noformat}
> the test tries to shut down the cluster during cleanup. However, if the
> cluster was never created and {{cluster}} is null, the resulting NPE conceals
> the failure that actually prevented the cluster from starting.
> h2. How to reproduce
>  # Set {{dfs.namenode.replication.min=12396}}
>  # Run 
> {{org.apache.hadoop.hdfs.server.namenode.ha.TestHarFileSystemWithHA#testHarUriWithHaUriWithNoPort}}
> and the following exception should be observed:
> {noformat}
> java.lang.NullPointerException
>     at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestHarFileSystemWithHA.testHarUriWithHaUriWithNoPort(TestHarFileSystemWithHA.java:60){noformat}
> For an easy reproduction, run the attached reproduce.sh.
> We are happy to provide a patch if this issue is confirmed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
