[ https://issues.apache.org/jira/browse/HADOOP-18896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
ASF GitHub Bot updated HADOOP-18896:
------------------------------------
    Labels: pull-request-available  (was: )

> NegativeArraySizeException thrown in FSOutputSummer.java given large file.bytes-per-checksum
> --------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-18896
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18896
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 3.3.6
>            Reporter: ConfX
>            Priority: Critical
>              Labels: pull-request-available
>
> The buffer size in FSOutputSummer equals `file.bytes-per-checksum` times `BUFFER_NUM_CHUNKS`. A large `file.bytes-per-checksum` causes this int computation to overflow to a negative value, and the subsequent buffer allocation crashes with a NegativeArraySizeException.
>
> To reproduce:
> 1. Set `file.bytes-per-checksum` to 238609295
> 2. Run `mvn surefire:test -Dtest=org.apache.hadoop.hdfs.TestDecommissionWithStriped#testFileSmallerThanOneStripe`
>
> We linked this issue to a PR that provides a fix: it checks that the buffer size is still positive after multiplying `file.bytes-per-checksum` by `BUFFER_NUM_CHUNKS`.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
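To make the failure mode concrete: `BUFFER_NUM_CHUNKS` is 9 in FSOutputSummer, so the reproduction value gives 238609295 × 9 = 2147483655, which is 8 more than `Integer.MAX_VALUE` and wraps negative in 32-bit arithmetic. The sketch below is illustrative only — the class and method names are hypothetical, not the actual patch — but it shows the unchecked computation and the kind of positivity check the linked PR describes:

```java
public class ChecksumBufferOverflowDemo {
    // BUFFER_NUM_CHUNKS is 9 in FSOutputSummer.
    static final int BUFFER_NUM_CHUNKS = 9;

    // Unchecked computation, as in the buggy code path:
    // int multiplication silently wraps on overflow.
    static int bufferSizeUnchecked(int bytesPerChecksum) {
        return bytesPerChecksum * BUFFER_NUM_CHUNKS;
    }

    // Guarded variant in the spirit of the fix: compute in long,
    // then reject any value that does not fit in a positive int.
    static int bufferSizeChecked(int bytesPerChecksum) {
        long size = (long) bytesPerChecksum * BUFFER_NUM_CHUNKS;
        if (size <= 0 || size > Integer.MAX_VALUE) {
            throw new IllegalArgumentException(
                "file.bytes-per-checksum " + bytesPerChecksum
                + " yields buffer size " + size + ", which overflows an int");
        }
        return (int) size;
    }

    public static void main(String[] args) {
        int bpc = 238609295; // value from the reproduction steps above
        // Wraps to a negative int; new byte[...] with this value
        // throws NegativeArraySizeException.
        System.out.println(bufferSizeUnchecked(bpc)); // -2147483641
        try {
            bufferSizeChecked(bpc);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Computing the product in `long` before the range check avoids relying on the wrapped value itself, which is why the guarded variant can report the true (un-wrapped) size in its error message.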