Syed Shameerur Rahman created HADOOP-19576:
----------------------------------------------
Summary: Insert Overwrite Jobs With Magic Committer Fail On S3
Express Storage
Key: HADOOP-19576
URL: https://issues.apache.org/jira/browse/HADOOP-19576
Project: Hadoop Common
Issue Type: Bug
Reporter: Syed Shameerur Rahman
Query engines that use the Magic Committer to overwrite a directory typically
initiate the multipart uploads (MPUs) without completing them, then delete the
contents of the destination directory before completing the MPUs.
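The failure sequence can be sketched with a minimal in-memory simulation (all class and method names below are hypothetical stand-ins, not the actual S3A code): a pending MPU is initiated, a directory delete with upload purging enabled aborts it, and the final commit then fails.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy simulation of the reported failure mode. ToyStore stands in for the
// object store; names and behavior are illustrative assumptions only.
public class MagicCommitPurgeDemo {

    static class ToyStore {
        final Map<String, String> objects = new HashMap<>();
        final Set<String> pendingUploads = new HashSet<>();

        // Start a multipart upload; it stays pending until completed.
        String initiateMultipartUpload(String key) {
            String uploadId = "upload-" + key;
            pendingUploads.add(uploadId);
            return uploadId;
        }

        // Directory delete; when purgeUploads is true (as reported for
        // S3 Express), pending MPUs under the prefix are aborted too.
        void deleteDirectory(String prefix, boolean purgeUploads) {
            objects.keySet().removeIf(k -> k.startsWith(prefix));
            if (purgeUploads) {
                pendingUploads.removeIf(id -> id.contains(prefix));
            }
        }

        // Completing an aborted upload fails, mirroring NoSuchUpload.
        void completeMultipartUpload(String key, String uploadId) {
            if (!pendingUploads.remove(uploadId)) {
                throw new IllegalStateException(
                    "NoSuchUpload: the specified multipart upload does not exist: "
                        + uploadId);
            }
            objects.put(key, "data");
        }
    }

    public static void main(String[] args) {
        ToyStore store = new ToyStore();
        // 1. Task writes via a "magic" path: MPU initiated, not completed.
        String uploadId = store.initiateMultipartUpload("table/part-0000");
        // 2. INSERT OVERWRITE deletes the destination directory first;
        //    with purging enabled, the pending upload is aborted as well.
        store.deleteDirectory("table/", true);
        // 3. Job commit tries to complete the now-aborted upload and fails.
        try {
            store.completeMultipartUpload("table/part-0000", uploadId);
            System.out.println("commit succeeded");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With `purgeUploads` set to false in step 2, the same sequence completes the upload successfully, which is why the overwrite pattern works on standard S3.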
For S3 Express storage, the directory purge operation is enabled by default; see
[the S3AFileSystem code|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L688]
for the relevant code pointer.
As a result, the pending multipart uploads are purged and the query fails with:
{{NoSuchUpload: The specified multipart upload does not exist. The upload ID
might be invalid, or the multipart upload might have been aborted or completed.
}}
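One possible mitigation, assuming the purge is controlled by the {{fs.s3a.directory.operations.purge.uploads}} option (the property name is an assumption based on trunk, not confirmed by this report), would be to disable upload purging for affected jobs:

```xml
<!-- Hypothetical mitigation sketch: disable purging of pending multipart
     uploads during directory delete, so magic-committer MPUs survive the
     INSERT OVERWRITE delete step. Property name is an assumption. -->
<property>
  <name>fs.s3a.directory.operations.purge.uploads</name>
  <value>false</value>
</property>
```

Note that disabling the purge leaves genuinely abandoned MPUs pending on S3 Express, so a proper fix would need the committer and the delete path to coordinate instead.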
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]