[ https://issues.apache.org/jira/browse/HDFS-17772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17945106#comment-17945106 ]
ASF GitHub Bot commented on HDFS-17772:
---------------------------------------

hadoop-yetus commented on PR #7617:
URL: https://github.com/apache/hadoop/pull/7617#issuecomment-2809906854

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 3m 46s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 40m 14s | | trunk passed |
| +1 :green_heart: | compile | 1m 25s | | trunk passed with JDK Ubuntu-11.0.26+4-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | compile | 1m 18s | | trunk passed with JDK Private Build-1.8.0_442-8u442-b06~us1-0ubuntu1~20.04-b06 |
| +1 :green_heart: | checkstyle | 1m 17s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 25s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 11s | | trunk passed with JDK Ubuntu-11.0.26+4-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 41s | | trunk passed with JDK Private Build-1.8.0_442-8u442-b06~us1-0ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 3m 30s | | trunk passed |
| +1 :green_heart: | shadedclient | 41m 10s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 11s | | the patch passed |
| +1 :green_heart: | compile | 1m 14s | | the patch passed with JDK Ubuntu-11.0.26+4-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javac | 1m 14s | | the patch passed |
| +1 :green_heart: | compile | 1m 7s | | the patch passed with JDK Private Build-1.8.0_442-8u442-b06~us1-0ubuntu1~20.04-b06 |
| +1 :green_heart: | javac | 1m 7s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 3s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 14s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 59s | | the patch passed with JDK Ubuntu-11.0.26+4-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 43s | | the patch passed with JDK Private Build-1.8.0_442-8u442-b06~us1-0ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 3m 19s | | the patch passed |
| +1 :green_heart: | shadedclient | 42m 41s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 5m 14s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. |
| | | 155m 31s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.48 ServerAPI=1.48 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7617/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/7617 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 8c7a26b12cf6 5.15.0-136-generic #147-Ubuntu SMP Sat Mar 15 15:53:30 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / b97c8980e4ea1ba6e556d198e2be9c939e929219 |
| Default Java | Private Build-1.8.0_442-8u442-b06~us1-0ubuntu1~20.04-b06 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.26+4-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_442-8u442-b06~us1-0ubuntu1~20.04-b06 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7617/2/testReport/ |
| Max. process+thread count | 862 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7617/2/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.


> The JournaledEditsCache has an int overflow issue, causing the maximum capacity to always be Integer.MAX_VALUE
> ---------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-17772
>                 URL: https://issues.apache.org/jira/browse/HDFS-17772
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 3.4.2
>            Reporter: Guo Wei
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.4.2
>
>
> When using RBF SBN READ in our production environment, we found the following issue.
> HDFS-16550 provides the `dfs.journalnode.edit-cache-size.bytes` parameter to control the cache size as a ratio of JournalNode memory, but it has an int overflow:
> when this parameter is used to control cache capacity, the initialization of `capacity` in `org.apache.hadoop.hdfs.qjournal.server.JournaledEditsCache#JournaledEditsCache` suffers a long-to-int overflow. For instance, when the heap is configured as 32 GB (`Runtime.getRuntime().maxMemory()` returns 30,542,397,440 bytes), the overflow truncates `capacity` to `Integer.MAX_VALUE` (2,147,483,647). This renders the parameter setting ineffective, as the intended proportional cache capacity cannot be achieved.
> To resolve this, `capacity` should be declared as a `long`, and the `totalSize` variable should also be converted to `long`, so that neither overflows when `capacity` exceeds 2,147,483,647.
> The error situation is as follows:
> {code:java}
> The dfs.journalnode.edit-cache-size.fraction parameter uses its default value of 0.5f.
> I configured the JournalNode heap to 30,542,397,440 bytes and expected a capacity of 15,271,198,720 bytes, but the capacity is always Integer.MAX_VALUE = 2,147,483,647 bytes:
> 2025-04-15 14:14:03,970 INFO server.Journal (JournaledEditsCache.java:<init>(144)) - Enabling the journaled edits cache with a capacity of bytes: 2147483647
> After the fix, the result meets expectations:
> 2025-04-15 16:04:44,840 INFO server.Journal (JournaledEditsCache.java:<init>(144)) - Enabling the journaled edits cache with a capacity of bytes: 15271198720
> {code}
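To make the failure mode concrete: in Java, narrowing a floating-point value larger than `Integer.MAX_VALUE` to `int` saturates at exactly `Integer.MAX_VALUE` (JLS 5.1.3), which matches the constant 2,147,483,647 in the log above. Below is a minimal, self-contained sketch of that truncation and of the long-typed fix; the class and variable names are illustrative, not the actual Hadoop source.

{code:java}
// Illustrative sketch only; not the actual JournaledEditsCache source.
public class CapacityOverflowDemo {
  public static void main(String[] args) {
    // Numbers taken from the report above: a ~32 GB heap and the 0.5f
    // default of dfs.journalnode.edit-cache-size.fraction.
    long maxMemory = 30_542_397_440L;  // what Runtime.getRuntime().maxMemory() returned
    float fraction = 0.5f;

    // Buggy shape: the float product exceeds Integer.MAX_VALUE, so the
    // narrowing cast to int saturates and the configured fraction is ignored.
    int intCapacity = (int) (fraction * maxMemory);
    System.out.println("int capacity:  " + intCapacity);   // 2147483647

    // Fixed shape: keep capacity (and the running totalSize) as long.
    long longCapacity = (long) (fraction * maxMemory);
    System.out.println("long capacity: " + longCapacity);  // 15271198720
  }
}
{code}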
--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org