[
https://issues.apache.org/jira/browse/MAPREDUCE-7447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18029370#comment-18029370
]
ASF GitHub Bot commented on MAPREDUCE-7447:
-------------------------------------------
github-actions[bot] commented on PR #6048:
URL: https://github.com/apache/hadoop/pull/6048#issuecomment-3395512731
We're closing this stale PR because it has been open for 100 days with no
activity. This isn't a judgement on the merit of the PR in any way. It's just a
way of keeping the PR queue manageable.
If you feel like this was a mistake, or you would like to continue working
on it, please feel free to re-open it and ask for a committer to remove the
stale tag and review again.
Thanks all for your contribution.
> Unnecessary NPE encountered when starting CryptoOutputStream with
> encrypted-intermediate-data
> ---------------------------------------------------------------------------------------------
>
> Key: MAPREDUCE-7447
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-7447
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Reporter: ConfX
> Priority: Critical
> Labels: pull-request-available
> Attachments: reproduce.sh
>
>
> h2. What happened?
> Got a NullPointerException when initializing a {{CryptoOutputStream}}.
> h2. Where's the bug?
> In line 106 of {{CryptoOutputStream}}, the code lacks a check that the
> {{key}} parameter is non-null before using it.
> {noformat}
> public CryptoOutputStream(OutputStream out, CryptoCodec codec,
>     int bufferSize, byte[] key, byte[] iv, long streamOffset,
>     boolean closeOutputStream) throws IOException {
>   ...
>   this.key = key.clone();{noformat}
> As a result, when the configuration supplies a null key, the
> {{key.clone()}} call throws a NullPointerException. A null check on the
> {{key}} parameter is needed before it is used.
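A minimal sketch of the guard the report suggests: validate the key before cloning it, so a null key fails fast with a descriptive exception instead of a bare NPE from {{key.clone()}}. The {{checkKey}} helper name is hypothetical and only mirrors the shape of the {{CryptoOutputStream}} constructor; it is not the actual Hadoop patch.

```java
import java.io.IOException;

public class KeyCheckSketch {

  // Hypothetical helper: reject a null key with a clear message rather
  // than letting key.clone() throw a NullPointerException.
  static byte[] checkKey(byte[] key) throws IOException {
    if (key == null) {
      throw new IOException("Encryption key must not be null");
    }
    // Defensive copy, as the real constructor does with this.key = key.clone()
    return key.clone();
  }

  public static void main(String[] args) throws IOException {
    // A valid key is cloned as before.
    byte[] copy = checkKey(new byte[] {1, 2, 3});
    System.out.println(copy.length);

    // A null key now raises a descriptive IOException instead of an NPE.
    try {
      checkKey(null);
    } catch (IOException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```

With this guard, a misconfigured (null) key surfaces as a checked, descriptive error at the point of construction rather than an unexplained NullPointerException deep in the reduce path.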
> h2. How to reproduce?
> (1) set {{mapreduce.job.encrypted-intermediate-data}} to {{true}}
> (2) run
> {{org.apache.hadoop.mapreduce.task.reduce.TestMergeManager#testLargeMemoryLimits}}
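For reference, step (1) corresponds to a fragment like the following in the job configuration (standard Hadoop XML property format; shown here only to make the reproduction concrete):

```
<property>
  <name>mapreduce.job.encrypted-intermediate-data</name>
  <value>true</value>
</property>
```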
> h2. Stacktrace
> {noformat}
> java.lang.NullPointerException
>     at org.apache.hadoop.crypto.CryptoOutputStream.<init>(CryptoOutputStream.java:106)
>     at org.apache.hadoop.fs.crypto.CryptoFSDataOutputStream.<init>(CryptoFSDataOutputStream.java:38)
>     at org.apache.hadoop.mapreduce.CryptoUtils.wrapIfNecessary(CryptoUtils.java:141)
>     at org.apache.hadoop.mapreduce.security.IntermediateEncryptedStream.wrapIfNecessary(IntermediateEncryptedStream.java:46)
>     at org.apache.hadoop.mapreduce.task.reduce.OnDiskMapOutput.<init>(OnDiskMapOutput.java:87)
>     at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.reserve(MergeManagerImpl.java:274)
>     at org.apache.hadoop.mapreduce.task.reduce.TestMergeManager.verifyReservedMapOutputType(TestMergeManager.java:309)
>     at org.apache.hadoop.mapreduce.task.reduce.TestMergeManager.testLargeMemoryLimits(TestMergeManager.java:303){noformat}
> For an easy reproduction, run the reproduce.sh in the attachment.
> We are happy to provide a patch if this issue is confirmed.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]