[ 
https://issues.apache.org/jira/browse/HDFS-17189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ConfX updated HDFS-17189:
-------------------------
    Description: 
The garbage collection data buffer size equals 
`dfs.namenode.gc.time.monitor.observation.window.ms` divided by 
`dfs.namenode.gc.time.monitor.sleep.interval.ms`, plus 2. When the observation 
window is a large value, the calculation overflows, resulting in a negative 
buffer size.

To reproduce:
1. set `dfs.namenode.gc.time.monitor.observation.window.ms` to 945099495m
2. run `mvn surefire:test 
-Dtest=org.apache.hadoop.hdfs.TestHDFSFileSystemContract#testWriteReadAndDeleteHalfABlock`

We created a PR that fixes this by checking that the computed buffer size is 
not only within `128 * 1024` but also greater than 0.
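The overflow can be sketched as follows. This is a minimal illustration, not the actual GcTimeMonitor code: the method names and the 5000 ms sleep interval are assumptions made for the example.

```java
// Minimal sketch of the reported overflow; names and the 5000 ms sleep
// interval are assumptions, not the actual GcTimeMonitor implementation.
public class GcBufferSizeOverflow {

    // window / interval + 2, narrowed to int as described in the report.
    // The long quotient can exceed Integer.MAX_VALUE, so the (int) cast
    // can wrap around to a negative value.
    static int computeBufSize(long observationWindowMs, long sleepIntervalMs) {
        return (int) (observationWindowMs / sleepIntervalMs + 2);
    }

    // The proposed check: size must be positive AND within 128 * 1024.
    static boolean isValidBufSize(int bufSize) {
        return bufSize > 0 && bufSize <= 128 * 1024;
    }

    public static void main(String[] args) {
        long windowMs = 945099495L * 60_000L; // 945099495 minutes in ms
        long intervalMs = 5000L;              // assumed sleep interval
        int bufSize = computeBufSize(windowMs, intervalMs);
        System.out.println(bufSize);              // negative after the int cast
        System.out.println(isValidBufSize(bufSize)); // rejected by the new check
    }
}
```

Checking only the upper bound (`bufSize <= 128 * 1024`) silently passes the wrapped-around negative value, which then reaches the array allocation and throws NegativeArraySizeException; adding the `> 0` check rejects it at configuration-validation time instead.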

  was:
The garbage collection data buffer size equals 
`dfs.namenode.gc.time.monitor.observation.window.ms` divided by 
`dfs.namenode.gc.time.monitor.sleep.interval.ms`, plus 2. When the observation 
window is a large value, the calculation overflows, resulting in a negative 
buffer size.

To reproduce:
1. set `dfs.namenode.gc.time.monitor.observation.window.ms` to 945099495m
2. run `mvn surefire:test 
-Dtest=org.apache.hadoop.hdfs.TestHDFSFileSystemContract#testWriteReadAndDeleteHalfABlock`

This PR provides a fix by checking that the computed buffer size is not only 
within `128 * 1024` but also greater than 0.


> GcTimeMonitor crashes with NegativeArraySizeException during initialization
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-17189
>                 URL: https://issues.apache.org/jira/browse/HDFS-17189
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.3.6
>            Reporter: ConfX
>            Priority: Major
>              Labels: pull-request-available
>
> The garbage collection data buffer size equals 
> `dfs.namenode.gc.time.monitor.observation.window.ms` divided by 
> `dfs.namenode.gc.time.monitor.sleep.interval.ms`, plus 2. When the observation 
> window is a large value, the calculation overflows, resulting in a negative 
> buffer size.
> To reproduce:
> 1. set `dfs.namenode.gc.time.monitor.observation.window.ms` to 945099495m
> 2. run `mvn surefire:test 
> -Dtest=org.apache.hadoop.hdfs.TestHDFSFileSystemContract#testWriteReadAndDeleteHalfABlock`
> We created a PR that fixes this by checking that the computed buffer size is 
> not only within `128 * 1024` but also greater than 0.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
