[jira] [Commented] (HDFS-17110) Null Pointer Exception when running TestHarFileSystemWithHA#testHarUriWithHaUriWithNoPort
[ https://issues.apache.org/jira/browse/HDFS-17110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17765397#comment-17765397 ]

ASF GitHub Bot commented on HDFS-17110:
---------------------------------------

hadoop-yetus commented on PR #6077:
URL: https://github.com/apache/hadoop/pull/6077#issuecomment-1720325352

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 54s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 49m 5s | | trunk passed |
| +1 :green_heart: | compile | 1m 24s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | compile | 1m 14s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | checkstyle | 1m 10s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 26s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 8s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 38s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 3m 26s | | trunk passed |
| +1 :green_heart: | shadedclient | 42m 19s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 17s | | the patch passed |
| +1 :green_heart: | compile | 1m 23s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javac | 1m 23s | | the patch passed |
| +1 :green_heart: | compile | 1m 13s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | javac | 1m 13s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 2s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 19s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 1s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 26s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 3m 37s | | the patch passed |
| +1 :green_heart: | shadedclient | 42m 24s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| -1 :x: | unit | 247m 35s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6077/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 51s | | The patch does not generate ASF License warnings. |
| | | | 407m 4s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6077/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6077 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux b83492b953c9 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 4dca062b176fb3dc290729ea1ab643c174a118c2 |
| Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6077/1/testReport/ |
| Max. process+thread count | 2400 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | |
[jira] [Commented] (HDFS-17110) Null Pointer Exception when running TestHarFileSystemWithHA#testHarUriWithHaUriWithNoPort
[ https://issues.apache.org/jira/browse/HDFS-17110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17765301#comment-17765301 ]

ASF GitHub Bot commented on HDFS-17110:
---------------------------------------

teamconfx opened a new pull request, #6077:
URL: https://github.com/apache/hadoop/pull/6077

### Description of PR

https://issues.apache.org/jira/browse/HDFS-17110

This PR adds a null check for `cluster` so that a `NullPointerException` in the cleanup path no longer hides the actual exception.

### How was this patch tested?

1. Set dfs.namenode.replication.min=12396
2. Run org.apache.hadoop.hdfs.server.namenode.ha.TestHarFileSystemWithHA#testHarUriWithHaUriWithNoPort

The test throws "Unexpected configuration parameters: dfs.namenode.replication.min = 12396 > dfs.replication.max = 512".

### For code changes:

- [x] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

> Null Pointer Exception when running TestHarFileSystemWithHA#testHarUriWithHaUriWithNoPort
> -----------------------------------------------------------------------------------------
>
>                 Key: HDFS-17110
>                 URL: https://issues.apache.org/jira/browse/HDFS-17110
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: ConfX
>            Priority: Critical
>         Attachments: reproduce.sh
>
>
> h2. What happened
> After setting {{dfs.namenode.replication.min=12396}}, running the test
> {{org.apache.hadoop.hdfs.server.namenode.ha.TestHarFileSystemWithHA#testHarUriWithHaUriWithNoPort}}
> results in a {{NullPointerException}}.
> h2. Where's the bug
> In the test {{org.apache.hadoop.hdfs.server.namenode.ha.TestHarFileSystemWithHA#testHarUriWithHaUriWithNoPort}}:
> {noformat}
> } finally {
>   cluster.shutdown();
> }{noformat}
> the test tries to shut down the cluster during cleanup. However, if cluster construction fails and {{cluster}} is still null, the resulting NPE conceals the original failure.
> h2. How to reproduce
> # Set {{dfs.namenode.replication.min=12396}}
> # Run
> {{org.apache.hadoop.hdfs.server.namenode.ha.TestHarFileSystemWithHA#testHarUriWithHaUriWithNoPort}}
> and the following exception should be observed:
> {noformat}
> java.lang.NullPointerException
>     at org.apache.hadoop.hdfs.server.namenode.ha.TestHarFileSystemWithHA.testHarUriWithHaUriWithNoPort(TestHarFileSystemWithHA.java:60){noformat}
> For an easy reproduction, run the reproduce.sh in the attachment.
> We are happy to provide a patch if this issue is confirmed.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
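The finally-block pitfall described in the issue can be sketched in plain Java. This is a minimal, self-contained demo, not the actual Hadoop test: `Cluster` is a hypothetical stand-in for `MiniDFSCluster`, and the thrown `IllegalStateException` simulates the "Unexpected configuration parameters" failure during cluster construction.

```java
public class FinallyNpeDemo {
    // Hypothetical stand-in for MiniDFSCluster; only shutdown() matters here.
    static class Cluster {
        void shutdown() { /* no-op */ }
    }

    /**
     * Mirrors the shape of the original test body. When guardNull is false,
     * the unguarded cluster.shutdown() in the finally block throws an NPE
     * that supersedes the real construction failure; when guardNull is true,
     * the original exception propagates intact.
     */
    static Exception observed(boolean guardNull) {
        Cluster cluster = null;
        try {
            try {
                // Simulates MiniDFSCluster.Builder#build() failing before
                // 'cluster' is ever assigned.
                throw new IllegalStateException("Unexpected configuration parameters");
            } finally {
                if (!guardNull || cluster != null) {
                    cluster.shutdown();  // NPE here when cluster == null
                }
            }
        } catch (Exception e) {
            return e;  // what the test runner would actually report
        }
        return null;
    }

    public static void main(String[] args) {
        // Unguarded: the NPE masks the real failure.
        System.out.println(observed(false).getClass().getSimpleName()); // NullPointerException
        // Guarded (the fix in PR #6077): the real failure surfaces.
        System.out.println(observed(true).getClass().getSimpleName());  // IllegalStateException
    }
}
```

The same effect is why JUnit's `@After`/`@AfterEach` cleanup methods, or a null-guarded finally block, are preferred over calling `shutdown()` unconditionally: an exception thrown in a finally block discards any exception already propagating out of the try block.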