[ https://issues.apache.org/jira/browse/HADOOP-18695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17712371#comment-17712371 ]
ASF GitHub Bot commented on HADOOP-18695:
-----------------------------------------

steveloughran commented on code in PR #5548:
URL: https://github.com/apache/hadoop/pull/5548#discussion_r1166794079


##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/WriteOperationHelper.java:
##########
@@ -590,10 +593,9 @@
   public void uploadObject(PutObjectRequest putObjectRequest,
       PutObjectOptions putOptions)
       throws IOException {
-    retry("Writing Object",
-        putObjectRequest.getKey(), true,
-        withinAuditSpan(getAuditSpan(), () ->
-            owner.putObjectDirect(putObjectRequest, putOptions)));
+    // the transfer manager is not involved; instead it is directly
+    // PUT.
+    putObject(putObjectRequest, putOptions, null);

Review Comment:
   ...cut it.


> S3A: reject multipart copy requests when disabled
> -------------------------------------------------
>
>                 Key: HADOOP-18695
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18695
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>              Labels: pull-request-available
>
> Follow-on to HADOOP-18637 and support for huge file uploads to stores which
> don't support MPU:
> * prevent use of the multipart API against any S3 store when multipart is
>   disabled, using the logging auditor to reject it
> * tests to verify that rename of huge files still works (by setting a large
>   part size)

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
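The issue description above proposes rejecting multipart operations up front, via the logging auditor, when the store has multipart disabled. The following is a minimal illustrative sketch of that guard pattern in plain Java; the `LoggingAuditor` class and `beforeMultipartOperation` method here are hypothetical stand-ins, not the actual Hadoop S3A auditor API.

```java
// Hypothetical sketch of the "reject when disabled" pattern from the issue:
// an auditing hook fails multipart requests fast, client-side, instead of
// letting them fail later against a store that cannot serve them.
public class MultipartGuardSketch {

    /** Illustrative stand-in for an operation-auditing hook. */
    static class LoggingAuditor {
        private final boolean multipartEnabled;

        LoggingAuditor(boolean multipartEnabled) {
            this.multipartEnabled = multipartEnabled;
        }

        /**
         * Reject a multipart operation when the feature is disabled.
         * @param operation name of the operation, e.g. "uploadPartCopy"
         * @param key object key the operation targets
         * @throws UnsupportedOperationException if multipart is disabled
         */
        void beforeMultipartOperation(String operation, String key) {
            if (!multipartEnabled) {
                throw new UnsupportedOperationException(
                    "Multipart operations are disabled: "
                        + operation + " on " + key);
            }
            // when enabled, the operation proceeds normally
        }
    }

    public static void main(String[] args) {
        LoggingAuditor auditor = new LoggingAuditor(false);
        try {
            auditor.beforeMultipartOperation("uploadPartCopy", "bucket/huge-file");
            System.out.println("allowed");
        } catch (UnsupportedOperationException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Failing in the auditor keeps the check in one place for every multipart entry point (upload, copy, abort), which matches the issue's goal of blocking the API against any store while leaving single-PUT paths, like the `uploadObject` change in the diff above, untouched.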